
Prior to the Ontario provincial election in February, Daria Ghairat, a final-year fashion management student at George Brown College, showed her mom how to spot artificial intelligence (AI) content online.
Her family members sent her social media posts using AI to spread disinformation, such as TikTok videos claiming former Prime Minister Justin Trudeau and the federal government were in charge of provincial responsibilities such as housing and the Ontario Student Assistance Program (OSAP).
“With their age, they don’t tend to go online to do the quick research, because they don’t know where to start,” Ghairat said.
Ghairat’s anecdotal experiences are supported by a report released by Canada’s cyber intelligence agency, the Communications Security Establishment (CSE), in early March. The report warned that adversarial countries such as China and Russia “are most likely to use generative AI as a means of creating and spreading disinformation, designed to sow division among Canadians and push narratives conducive to the interests of foreign states.”
While these threats are real, the agency stressed that it is unlikely these attempts would “fundamentally undermine the integrity” of our next federal election.
The rise of generative AI-based tools makes fake content harder to detect and contributes to the “rising threat of foreign interference,” according to a December report by policy analyst Tiffany Kwok and policy and research assistant Mahtab Laghaei of The Dais, a public policy and leadership think tank at Toronto Metropolitan University (TMU).
The language barrier leaves immigrants more vulnerable to misinformation and disinformation. With most mainstream news sources in Canada publishing in English and French, those seeking content in their native language tend to look online.
Grace Chen, a final-year sociology and urban studies student at the University of British Columbia, said some of her family members watch a YouTube channel called Mr. and Mrs. Gao, with over six million subscribers. The account uses a mix of AI-generated images and stock videos as B-roll to discuss conspiracy theories.
“I feel like my main worry is that the AI content they’re consuming is less obvious in the sense that it’s not like some horribly AI-generated photorealistic video. In the political videos they watch, it used to [include] stock photos or stock videos, but now I’m pretty sure a lot of them just do AI content or AI editing,” Chen said.
“It leads them down this very suspicious pipeline of conspiracy theorists that also use AI-generated stuff in their videos, but present their argument in a way that’s super logical and focused and rational, so that it just feels like what they’re saying is inherently true.”
The Institute for Strategic Dialogue (ISD) released a 2021 report, “The Resilience of Online Right-Wing Extremism in Canada,” that analyzed over three million messages across social media platforms, including Facebook, YouTube, X (formerly known as Twitter), 4Chan and Telegram.
They identified right-wing extremists in Canada as “key drivers of disinformation,” including creating and spreading a viral disinformation campaign that suggested Canada was preparing to invade the U.S. if Donald Trump was elected in the 2020 presidential election.

The report also found that within Canadian right-wing extremist discussions, Trudeau was the most mentioned Canadian politician in 2020, with opinions being “overwhelmingly negative.” The Liberal Party of Canada was only the sixth most mentioned Canadian political party, which the ISD report said suggests “right-wing extremist actors are more focused on Trudeau as an individual than on his party.”
“I feel like the videos and the AI content do a really good job of taking an original, mundane or just general dislike for Trudeau, because the economy is not so great and people’s lives are not so great… and using AI images, using clickbait-y headlines, using different narratives, to make that dislike very personal,” Chen said.
“It’s effective for the people who were already on that sort of sway between the Liberal [Party of Canada] and the Conservative [Party of Canada],” she said.
Both misinformation and disinformation can be fueled through the use of AI, and Laghaei said that understanding the difference is crucial for the general public and especially policymakers.
“A very innocuous example of misinformation is, let’s say my grandparents are watching a cat video that was seemingly generated through AI. They believe that the cat video is real, but the creators may have thought that people… could have guessed that it was generative AI,” she said.
“And then there’s disinformation, which is actually synthetic material or text or any kind of content that is actually meant to fool and disinform you.”
According to Laghaei, political polarization can also be purposefully stoked with AI-fueled disinformation. “It makes people move more to the extreme, and there is little room left for working together on issues and to build consensus,” she said.
“When people move more to the extreme, it’s very likely that you will then vote for more extreme leaning politicians, and that can sometimes be of benefit to these adversarial countries,” Laghaei said.
The Dais report said that another factor in immigrants believing AI content is that they may have come from countries without free and fair elections, leading to a lower trust in authorities. “You [have] the option… but you also have the duty of figuring out who is the right candidate for you, reading about them, researching,” said Ghairat, who is from Afghanistan.
“All those options can become a little bit too much for a lot of these families,” she said.
Laghaei has also encountered many online disinformation campaigns in which immigrants are scapegoated for issues such as housing precarity and the rising cost of groceries.
“In order to achieve that viral sensation, creators of this content will try to build on something that the public is already concerned about or feels some strong feelings towards,” she said. “So what ends up happening is things like racism and discrimination become really augmented against vulnerable groups.”
According to Laghaei, the solution is digital literacy, and ensuring that it is as accessible as possible in multiple languages and to different groups. “I think especially in intergenerational immigrant groups, it’s really important that the young people are trained on these tools, because then they can pass it along to other family members,” she said.
A few weeks after Ghairat showed her mom how AI platforms such as ChatGPT, Midjourney, and Runway worked, she said her mom was able to correct her sisters when an AI video was sent into their group chat.
Laghaei recommends those concerned about digital literacy for themselves or family members to look into the programs offered at their local library. A 2024 survey of online harms in Canada from The Dais found that “Canadian residents overwhelmingly trust public libraries (89 percent) and schools (81 percent) as resources to learn about digital literacy and misinformation, a significantly higher trust level than news media and the federal government.” Another resource she recommended is Connected Canadians, a charity based in Ottawa that offers virtual and in-person one-on-one and group workshops, with services free for seniors.
These kinds of programs focus on digital literacy and how to better navigate the online world, rather than “correcting one specific narrative, which may kind of make them feel self conscious, or create more distrust in those like interpersonal relationships,” Laghaei said.
Older generations are not the only targets of online misinformation and disinformation campaigns. The survey found that Instagram is the number one source of news and current events for people under 30.
While these resources are helpful to tackle the issue on a personal level, Laghaei said that the onus to prevent AI-fueled and general disinformation shouldn’t rest solely on individuals.
“What I really want to stress is it’s really hard as an individual to grapple with these things… Think tanks or governments or social media platforms [need] to ensure that they are relaying information in an accessible and transparent way,” she said.
The survey from The Dais also found that approximately two in three Canadians are in favour of government intervention in regulating social media platforms, with the most notable change since 2022 being an increase in support for platforms to label deep fakes.
“In the upcoming election, make sure that you’re talking to your federal candidates about this topic and see how knowledgeable and in tune they are, and what they say they will do about it,” Laghaei said. “If we don’t have political officials who want to counter this, then we’re not going to see the changes that we want to see.”
Reporter, On The Record, Winter 2025.