ChatGPT’s quick climb to the spotlight roughly a year ago left academic institutions with little time to brace for impact. Now, TMU is preparing to launch a new set of recommendations on generative AI use in the classroom.
“By the beginning of October we’re going to come out with new guidelines,” said Allyson Miller, director of the Academic Integrity Office. “The community update will suggest what instructors can include in their own [classroom] policies, provide syllabus options, and offer resources.”
But, Miller says, the guidelines still won’t consist of hard-and-fast rules on AI. “Most universities are reluctant to be too prescriptive,” said Miller, citing the novelty of the technology.
Current guidelines around the use of generative AI – which take the form of an FAQ sheet with recommendations rather than a firm policy – have not been updated since last January. This has left some instructors and students in search of clarity.
Sameh Al Natour, an associate professor in TMU’s Information Technology Management program, said the lack of a concrete policy around ChatGPT had presented some challenges during past exams.
“There was a synchronized, timed exam and students clearly used ChatGPT—there were absolutely irrelevant answers,” said Al Natour, citing exam responses that were unrelated to the case studies he used in the test. “But because we have nothing in [terms of] policy around ChatGPT, how can we report it?”
Miller says the new guidelines will provide policy templates, advice, and resources for instructors. But professors will still be the ones calling the shots and determining what’s best on a case-by-case basis.
“Some instructors will be keen for students to use it, others might not be—but it’s not up to anyone outside those courses to say what they want,” said Miller.
Professors like Adrian Ma at TMU’s School of Journalism appreciate the autonomy. “I enjoy the ability to control how AI is used and how it’s prohibited,” said Ma, who has a positive outlook on tools like ChatGPT overall.
Not all universities give instructors the reins on AI. In France, top universities have adopted firm, restrictive policies on tools like ChatGPT, going so far as to warn that anyone found using the chatbot can be expelled.
At TMU, instructors can rely on existing academic integrity policies to discipline students whom they suspect of cheating with the help of AI. “Policy 60 already covers academic misconduct and unauthorized use of generative AI through its descriptions of cheating,” said Miller.
Al Natour ultimately did turn to Policy 60 when raising concerns about students’ use of AI. “There were grounds to turn to the policy because the content was from outside and unreferenced,” he said.
But the rules are less clear when AI is used in more subtle ways, for instance, as a study tool. That’s where students often feel in the dark.
“The school’s policy is not clear,” says Sam Jabri-Pickett, a second-year master’s degree student in the School of Journalism who used ChatGPT as a study tool to break down and explain complex concepts from class.
Jabri-Pickett said he felt like the guidelines were “buried in emails that are more likely to be deleted by a student than anything else.”
As students and faculty await updated guidelines, Miller has some advice: always ask your instructor if you’re unsure about what’s allowed. “They’re the ones who determine the penalty or recommend a penalty, one way or another.”
I'm a second-year master of journalism student at Toronto Metropolitan University and an associate producer with CBC's The National. During the COVID-19 lockdown, I started freelance writing and have since reported on everything from immigration and education to technology and finance. You can find me venting about Gen Z money struggles on a bi-weekly basis for the Globe and Mail.