Almost a year after the launch of ChatGPT, an AI workshop at TMU revealed that instructors are divided on the benefits of generative AI in the classroom.
The workshop, titled ‘AI 101: Understanding the Technology that is Transforming Teaching and Learning,’ was hosted by TMU’s Centre for Excellence in Learning and Teaching on October 12th. It brought together a handful of instructors from across TMU to learn about the impact and limits of generative AI as well as to discuss concerns around the technology—and there were many.
“Students have definitely been indicating [that] they’re using it more,” says Prabh Sidhu, a TA in TMU’s faculty of Global Management Studies and one of the workshop attendees. “We’re trying to figure out how to combat it.”
“I recently overheard one student say that students are crushing their assignments and failing their exams,” said Allyson Miller, director of TMU’s academic integrity office and the organizer of the workshop.
But some attendees were excited about the technology. Two instructors cited their desire to learn more about effective ways of using generative AI in the classroom as a reason for attending.
Miller is seeing a growing number of faculty approach her with concerns, as well as misconceptions, about AI—it’s why she set up the event. “Instructors say, ‘My students used ChatGPT. How do I prove it?’” The answer is not simple, she said.
“We can’t trust detectors,” said Miller, referring to digital tools designed to identify the likelihood that a piece of content was generated by AI. She reminded attendees that when the United States Constitution was put through detection software, it was deemed likely to have been written by AI.
Instead, Miller said she believes AI might be challenging instructors to rethink their approach to monitoring student learning. “You can sit down with students and start talking to them and see based on that conversation what their knowledge is,” she said.
But from Sidhu’s perspective, the only viable way to mitigate the negative impacts of AI in his classroom is to have more in-class assessments.
“[AI] is definitely a problem but not in all disciplines,” said Sidhu. “I think it’s the structure of those courses where there are a lot of proctored tests—it’s less common there.”
In his own faculty, he said, over-reliance on AI is more problematic because of the “report-based and essay-based” structure of the courses.
Maura Grossman, an AI ethics researcher at the University of Waterloo, believes that concerns around AI have ballooned to the point of splitting faculty and instructors into two camps.
“There are two camps—one is saying this is sort of an integrity issue or cheating,” said Grossman. “The other camp thinks we’re moving into an AI-driven world—these tools are going to be in the workplace, so it’s our responsibility as educators to introduce students to these tools.”
Miller appears to straddle both camps. On one hand, she said she believes professors should be checking whether students are using AI, going so far as to recommend that professors keep a ChatGPT account to cross-reference student answers.
But she also thinks we shouldn’t be so quick to ban the tools. “I don’t know if our end goal should be to thwart students from using this technology,” said Miller. “We want them to be critical consumers of this technology, but how that happens is still to be determined.”
I'm a second-year master of journalism student at Toronto Metropolitan University and an associate producer with CBC's The National. During the COVID-19 lockdown, I started freelance writing and have since reported on everything from immigration and education to technology and finance. You can find me venting about Gen Z money struggles on a bi-weekly basis for the Globe and Mail.