As the use of artificial intelligence in higher education increases, university professors are split on its impact on education and the future of their fields. Some embrace AI as a tool to enhance learning; others view it with skepticism over its ethical implications.
Northwestern State University of Louisiana faculty share their takes on the stigma surrounding AI.
Damien Tristant, assistant professor of physics, explained that there is always a spectrum of opinions when it comes to new technology.
“Every time there is something new, some people like it, some people are afraid about it,” Tristant said. “I understand that some people can be afraid about it. However, if you’re afraid about something that you don’t know, the best way to make sure that you’re not afraid anymore is to learn about it.”
He supports the use of AI as a useful tool in physics and embraces its adaptability.
“I think the beauty of AI or machine learning is you can apply it to anything — someone who teaches English, someone who teaches science or — it’s easy to apply to anything,” Tristant said.
This principle extends even to humanities disciplines such as the creative arts. Marisol Villela Balderrama, assistant professor of art history, addressed the fear of an AI takeover in art and pushed back against this stigma by comparing AI to photography.
“Something similar happened with photography; when photography started to be more available for people, people thought there may not be a need for art,” Villela said. “But AI is not going to go away, the same way photography didn’t disappear. But also photography didn’t kill art, it just took it into another direction.”
She noted that this phenomenon is not unique to photography and AI.
“I’m an art historian, so I look at art history and its development, and I think art is always related to technology, so AI is just like a new technology,” Villela said.
Though she feels technology will never kill art, she fears that a lack of diversity in art may arise with the evolution of AI.
“It is kind of like what happens with plants, with GMO and the modified plants that are starting to kill the originals. They say corn or the original tomatoes don’t exist anymore because they have been modified so many times and everything looks the same,” Villela said. “Do I wanna live in a world where everything looks the same?”
Villela explained that this homogenization is reinforced by the way AI systems gather information, pulling from the most popular and widely available sources. She pointed to art history searches, where AI tends to reference “popular” forms of art, such as European art, while excluding art forms from other countries. This builds biases into AI systems and passes them on to the people who use them. In effect, the expansion of knowledge is limited when information is pulled based only on popularity.
With this concern in mind, she wrote an article titled “Incorporating AI into an Asian Art History Course,” in which she explains how she plans to weigh AI against tradition in her classes: she will present an AI-generated image alongside the actual artwork or subject, and students will be tasked with identifying the differences.
Overall, Villela’s proposal brings AI into art history only under human oversight. Similarly, James Mischler, director of the Institutional Review Board (IRB), stresses the need for human oversight when using AI in research because of its tendency to provide inaccurate information. NSU’s IRB approves research by students and ensures it follows ethical standards.
Mischler explained that AI systems are not reliable sources of accurate data, which is why relying on AI alone is a poor foundation for research. “So the human being must always be the one making the final decision,” he said.
Mischler also addressed the stigma around AI’s decision-making capabilities and the fear of a “takeover” as depicted in movies such as “The Terminator.”
“There’s no evidence that AI can actually reason at all,” Mischler said. “What it’s really doing is calculating what the next word will be based on the words that came before, and I’m a linguist, so that’s something I already know about; that’s called collocation: what word comes next based on the word that came before it.”
He explained that AI’s ability to collocate is often mistaken for genuine intelligence or reasoning, when the system is really only calculating.
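For readers curious what that word-by-word calculation looks like, the sketch below is a toy illustration of collocation-based prediction, written in Python. It counts which words follow which in a small made-up text and then guesses the next word from those counts; the corpus and code are hypothetical examples for illustration, not anything drawn from Mischler’s work or NSU’s systems.

from collections import Counter, defaultdict

# Toy corpus; real language models are trained on billions of words.
corpus = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# Count how often each word follows each other word (collocation).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the word most often seen right after `word`.
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))  # prints "cat": it followed "the" most often
print(predict_next("sat"))  # prints "on"

A large language model does essentially this at vastly greater scale, weighing probabilities over long stretches of preceding text rather than a single word; none of it involves reasoning about meaning, which is Mischler’s point.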
Megan Lowe, director of university libraries, supports Mischler’s point that generative AI cannot create anything without pre-existing data.
“Generative AI is not truly creative. Gen. AI is not truly like a great problem solver itself. It just regurgitates from its database, so it’s never truly going to be the innovator of something,” Lowe said.
She added that, because of this principle, AI will never be able to take over jobs that require human experience such as teaching.
“Gen. AI is never going to have the capacity to recognize human beings, like students, as holistic people who have their own lived experiences, their own strengths and weaknesses, their own traumas, their own fears, all of these things that make us very human,” Lowe said. “Those things that make us human. Gen. AI can’t replicate that.”
Though worries persist that AI will replace human work, Lowe reiterated the importance of faculty communicating their expectations around AI use.
“I think it’s important for students to have access to those tools, and I think we need to be, as faculty members and instructors and professors, transparent about our concerns around AI,” Lowe said.
In her experience, Lowe has found that a majority of faculty have concerns regarding AI’s role in cheating culture.
“Most people in higher education, their immediate concern around generative AI had to do with its potential for plagiarism and cheating, which is easy to understand,” Lowe said. “However, I think that that is the wrong attitude to have because plagiarism is very contextual. One man’s plagiarism is another man’s innovation.”
Lowe believes this subjective nature of plagiarism makes communication from faculty to students about AI all the more necessary.
“We have to impress upon our students that academic integrity has value,” Lowe said. “If faculty aren’t talking about why academic integrity matters, then why should students intrinsically, inherently know what it is and why it matters?”
Lowe feels that faculty should teach students information literacy skills: what plagiarism is and how to avoid it, how to cite sources, and the basics of intellectual property and copyright. That way, students would be properly educated on how to avoid plagiarism when using AI.
She believes proper education and policies are needed regarding the use of AI.
“At the end of the day, ethical use of gen. AI is possible, but we need to look at the individual needs of students, the individual needs of departments, the individual needs of assignments,” Lowe said.
Lowe, like many university faculty, recognizes the ethical stigma surrounding AI but believes that open communication and education about it will shape an informed perspective among the student body.