The Observer

SMC hosts conversation on the ethics of AI

On Thursday afternoon, the Digital and Public Humanities (DPH) program hosted “Ethics of AI,” the first in a series of three roundtable events meant to foreground discussions about artificial intelligence (AI).

Sarah Noonan, associate professor of English, is the coordinator for the Digital and Public Humanities program.

“It is a 15-credit minor, supported by a National Endowment for the Humanities grant,” Noonan said. “It provides students with the opportunity to engage in project-based learning opportunities that blend humanities-style research with digital skills, often with an awareness of how the writing we do as humanities scholars has a broader public audience outside the classroom.”

DPH is hosting this series of roundtable discussions to advance the conversation around how the humanities connect with other disciplines.

“We think it’s really important for the program to integrate how digital technologies are influencing what we think it means to be human today,” Noonan said. “And we want to be starting that conversation here on campus in order to recognize that just because you can do a thing, doesn’t always mean you should do a thing. We need humanities scholars as part of these conversations.”

Christopher Wedrychowicz, professor of math and computer science, began the session by giving examples of technical problems and explaining how humans might tackle them compared with how AI would.

“What I want to get across here is to demystify these things a little bit. Why you would use machine learning, why is it appropriate for certain problems?” he said.

Megan Zwart, professor of philosophy, said that AI use in writing presents a unique challenge.

“AI is kind of a form of plagiarism, but this in itself is brand new and interesting because it’s not exactly like cutting and pasting from Wikipedia or borrowing someone’s paper,” she said. “Something is being generated, but it’s clearly not your intellectual property.”

Zwart went on to discuss other issues with AI, such as the unseen patterns its systems may be relying on to produce results.

“This is sometimes called the black box problem. You can get something to produce results, it can do what you want it to do, but you don’t know what patterns it’s finding. And some of those patterns might be really discriminatory,” she said.

Zwart highlighted a recent example in which Amazon scrapped an AI recruiting tool that demonstrated bias against women.

“The algorithm looked at the history of Amazon employees who had been successful and decided that you were more likely to be successful if you were a man. So people who had obviously feminine names or had gone to women’s colleges were automatically downgraded,” Zwart said. “No one had chosen that, but it appeared there because of how the data was being interpreted by the algorithm.”

Zwart also displayed a Washington Post article titled “This is how AI image generators see the world,” which shows AI-generated images that reinforce numerous stereotypes.

In one demonstration, the AI was asked to show attractive people and generated multiple images of light-skinned, young, thin people. When prompted to show someone cleaning, the AI generated multiple images of women.

“If we outsource our use of images and that kind of thing to AI, we’re going to get an even more biased version of these things,” Zwart said. 

Additionally, AI’s ability to generate fake trailers, scripts and other creative products presents issues for the film industry, Zwart said. She prefaced her comments by showing a mock-up Star Wars trailer in the style of Wes Anderson. The trailer featured AI-generated likenesses of stars including Timothée Chalamet and Scarlett Johansson.

“I loved Wes Anderson and that makes me smile, but none of those actors consented to being in there,” Zwart said. “This is part of what the actors are trying to negotiate in their strike, [it’s] these kinds of protections.” 

The technology raises questions about actors’ rights after they’ve signed with a studio.

“Do people have a right not to be artificially generated? Or if a studio owns your likeness can it use AI to do what it wants with it?” Zwart asked.

During the question-and-answer portion, several audience members expressed concern about who gets to decide the laws surrounding artificial intelligence.

“Should governments be responsible for deciding what we can and can’t do? But then you bring in problems of governments controlling our information sharing,” Zwart said in her response. “Do you want to leave it up to individuals? Well, we’re not really educated to tell the differences.”

Wedrychowicz added that similar disputes are already reaching the courts, especially around AI image generation.

“They train AI on images created by artists and photographers. From what I understand there have already been some lawsuits, so it’ll be interesting to see how those make their way through the courts,” he said.

Further sessions in the roundtable series will be held in the spring semester and will focus on chatbots and the creation of art by AI.