Professors speak on AI

Liliana Marinkovich says she doesn't use AI for her school work. Photo by Shianna Gibbons / The Jambar

By Jillian McIntosh

The Youngstown State University board of trustees modified the Student Code of Conduct at its June meeting to address the use of artificial intelligence.

According to Article III, the unauthorized use of AI by a student is a violation of academic integrity:

— In taking quizzes, tests, assignments, or examinations.

— When completing assignments, solving problems, or carrying out other assignments as detailed in the course syllabus or in other instructions by the instructor.

Rachel Faerber-Ovaska is the faculty instructional design consultant of YSU’s Department of Cyberlearning. She said early forms of AI were narrow, using algorithms to analyze and predict patterns. 

“For example, like Spotify or your Netflix account, how is it that it was able to suggest things that you would probably like,” Faerber-Ovaska said. “It used a simple form of artificial intelligence to analyze your previous choices and then to predict what other choices would likely fit.”

Faerber-Ovaska said OpenAI researchers and scientists created a new form of software that used generative AI.

“The researchers have fed the entire accessible internet to these large language models,” Faerber-Ovaska said. “People’s discussion boards, Facebook marketplace ads, all of the libraries that have publicly accessible works — has been ingested.”

Faerber-Ovaska said she supports the authorized use of AI by students.

“Make sure you understand what your professor wants to permit or prohibit about using a chat bot. If they say you can use it, cite anything you receive and indicate what sort of assistance you got from it,” Faerber-Ovaska said. 

Mark Vopat, a philosophy professor at YSU, has used both AI software, such as ChatGPT, and AI detectors, such as GPTZero. 

“There have been times when in grading I wanted to see if I gave the prompt … I would sometimes run it through ChatGPT just to see what kind of answer it would give,” Vopat said. “I’m using GPTZero and it’s been good. It has detected people who have just openly [admitted to using AI].” 

Vopat said he has also implemented specific learning methods as a direct response to unauthorized student use of AI.

“Now, I have in one of my classes where all the students, every few weeks, are going to turn in a reading notebook,” Vopat said. “Presumably, it’s going to be a lot more difficult to go in and say to an AI, and then try and copy that.”

Vopat also said AI detection tools are not entirely reliable. 

“Even if it’s 90% effective, that still means 10% of a class of 100 students, that could still be 10 students who are actually accused falsely,” Vopat said.

Jay Gordon, an English professor, said he has concerns about faculty use of AI detectors. 

“This is kind of the dangers you run into, and I worry with detectors, people will become dependent,” Gordon said. “If you have strong suspicions that the student uses ChatGPT, there is a difference between asking and accusing.”

Hillary Fuhrman, assistant provost for teaching and learning, oversees faculty development and the assessment of student learning.

The Institute for Teaching and Learning offers grants of $300 per year to professors to purchase software or equipment, according to Fuhrman. 

“We’ve done a mini-grant with a couple of faculty who wanted to get a ChatGPT subscription, so they could explore it more, in order to integrate it into their coursework in effective ways,” Fuhrman said.

Each professor agreed that maintaining academic integrity is the main priority for higher education institutions as AI becomes more prevalent in everyday and educational use.