
Taking aim at online hate and fake news

Dangerous disinformation campaigns on social media are on the rise. So are targeted personal attacks. The Digital Society Lab is using AI-powered countermeasures to push back.
Institution: McMaster University
Province: Ontario
Topics: Social sciences, Political science

Social media can be a minefield of misinformation, filled with unvetted rumours, exaggerated half-truths and claims that should be taken with a generous pinch of salt. But increasingly, it’s also a tactical weapon for deliberate disinformation aimed at advancing malicious agendas.

Some foreign actors flood social media with twisted facts to interfere with elections. Others amplify fake news to promote distrust of vaccines. Meanwhile, AI-generated images and videos coming out of conflict zones make it difficult to tease out the truth.

According to political scientist Clifton van der Linden, director of McMaster University’s Digital Society Lab, that propaganda erodes our conception of a shared reality — and the result is hugely damaging to democratic resilience. “The foundations of a democratic society include a respect for difference of opinions, of course,” he says. “But there has to be some agreement on certain facts.”

The World Economic Forum considers misinformation and disinformation a serious global risk. And it’s only going to get worse, van der Linden predicts, as tools like generative AI and algorithmic microtargeting become more robust and accessible.

That’s why the Digital Society Lab is taking aim at the problem — with the help of a global multidisciplinary team and some high-performance computing power funded by the CFI.

Ferreting out fake news

Human fact checkers and community reporting tools can’t keep up with the scale of misinformation. So in the last few years, researchers and social media companies have turned to AI, training it to detect the language patterns that characterize fake news.
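The article doesn't name the models researchers and platforms actually use, but the general idea can be sketched simply: turn each post's wording into numerical features and train a classifier on labelled examples. The texts, labels and scikit-learn pipeline below are assumptions for illustration, not anyone's production system; note that the model learns word patterns, which is exactly what ties it to specific topics:

```python
# A minimal, illustrative sketch of a content-based fake-news classifier.
# Toy placeholder data; real systems train on large labelled corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: 1 = flagged as misinformation, 0 = not flagged.
texts = [
    "MIRACLE cure the government doesn't want you to know about!!!",
    "Health officials release updated vaccination schedule for fall.",
    "SHOCKING proof the election results were secretly reversed",
    "Election officials certify results after routine recount.",
]
labels = [1, 0, 1, 0]

# TF-IDF turns each post into word-frequency features; the classifier
# then learns which word patterns correlate with the labels. Because the
# features are the words themselves, the model is tied to the topics and
# phrasing it was trained on.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["SHOCKING miracle cure officials are hiding"]))
```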

But that method has some major drawbacks. Because it's content-specific, its fact-checking powers are limited to the topics it was trained on. The models can also be reverse-engineered, allowing propaganda-pushers to figure out which keywords or patterns to avoid. And it often fails to catch misinformation or disinformation campaigns before they have spread widely.

The Digital Society Lab is taking a different approach. Their algorithms will examine not only the information being distributed but also the attributes of the people sharing it and how the content spreads through networks. “It’s the first end-to-end scalable automated detection algorithm,” says van der Linden.
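What such an end-to-end model looks like internally isn't described in the article, so the sketch below is purely illustrative: a hypothetical feature vector fusing the three signal types named above (content, account attributes and network spread) into one score. Every feature name, threshold and weight here is invented for the example.

```python
# Illustrative only: hypothetical features spanning the three signal
# types the article describes. Not the Digital Society Lab's model.
from dataclasses import dataclass

@dataclass
class PostFeatures:
    # Content signal: what is being said (e.g. a text classifier's 0..1 score)
    content_score: float
    # Account signals: who is sharing it
    account_age_days: int
    posts_per_day: float
    # Diffusion signals: how it moves through the network
    reshare_depth: int   # longest observed resharing chain
    burstiness: float    # how clustered reshares are in time, 0..1

def risk_score(f: PostFeatures) -> float:
    """Hand-weighted for the sketch; a real system would learn these
    weights from labelled diffusion data rather than hard-code them."""
    account_suspicion = min(f.posts_per_day / 50.0, 1.0) * (
        1.0 if f.account_age_days < 30 else 0.3
    )
    spread_signal = min(f.reshare_depth / 10.0, 1.0) * f.burstiness
    return 0.4 * f.content_score + 0.3 * account_suspicion + 0.3 * spread_signal

# A young, hyperactive account pushing high-risk content that spreads in bursts:
print(risk_score(PostFeatures(0.9, 12, 80.0, 8, 0.95)))
```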

To develop those algorithms, the Digital Society Lab is initially focussing on modelling the spread of COVID-19 disinformation over Twitter (now X) and measuring its effects on public opinion. Ultimately, their goal is to produce a topic-agnostic set of algorithms that can detect the early proliferation of disinformation on any social media platform, allowing platform managers to take action before it can reach a wide audience.

Van der Linden anticipates they could even determine where it comes from and who it is influencing. “Hypothetically, we will see a difference between coordinated disinformation campaigns by foreign actors or malicious groups and conscientious objections to certain narratives by individual citizens from a genuine point of disagreement,” he says.
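One hypothetical signal behind that distinction is timing: coordinated campaigns tend to push near-identical text from many accounts within a narrow window, while genuine individual dissent arrives in varied wording over longer spans. The toy check below (fabricated data, not the lab's algorithm) flags the former pattern:

```python
# Toy coordination check: many distinct accounts posting the same text
# within a short time window. Real detection uses far richer signals.
from collections import defaultdict

# Fabricated example posts: (account, unix_time_seconds, text)
posts = [
    ("acct_a", 1000, "The vote was rigged! Share before they delete this!"),
    ("acct_b", 1004, "The vote was rigged! Share before they delete this!"),
    ("acct_c", 1009, "The vote was rigged! Share before they delete this!"),
    ("acct_d", 98000, "I disagree with this policy, and here is my reasoning."),
]

WINDOW_SECONDS = 60   # how tightly clustered in time the posts must be
MIN_ACCOUNTS = 3      # how many distinct accounts must repeat the text

by_text = defaultdict(list)
for account, timestamp, text in posts:
    by_text[text].append((timestamp, account))

for text, hits in by_text.items():
    times = sorted(t for t, _ in hits)
    accounts = {a for _, a in hits}
    if len(accounts) >= MIN_ACCOUNTS and times[-1] - times[0] <= WINDOW_SECONDS:
        print(f"possible coordination ({len(accounts)} accounts): {text[:45]!r}")
```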

Tackling toxic messages

Disinformation isn’t the only seedy side of social media the Digital Society Lab is tackling. They’re also keen to address the prevalence of online hate directed at high-profile figures — particularly women and people of colour — that discourages many people from running for office or going into journalism.

To do that, they’re using advanced natural language processing models and other machine learning strategies to develop a so-called “hate filter,” akin to an email spam folder. “You can train your own filter using our backend in such a way as to screen out hateful emails or hateful messages on social media,” van der Linden explains.
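The lab's backend isn't documented here, so to make the spam-folder analogy concrete, this sketch trains a simple Naive Bayes filter on messages a user has labelled themselves; the messages, labels and model choice are illustrative assumptions:

```python
# A toy "hate filter" in the spirit of the spam-folder analogy.
# Not the Digital Society Lab's backend or models.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# A user labels a handful of messages: 1 = screen out, 0 = let through.
messages = [
    "You should quit, nobody wants to hear from someone like you",
    "Great interview last night, thanks for covering this",
    "Go back to where you came from",
    "Can you share a link to the full report?",
]
labels = [1, 0, 1, 0]

filter_model = make_pipeline(CountVectorizer(), MultinomialNB())
filter_model.fit(messages, labels)

# New messages are routed like spam: flagged ones go to a separate folder.
incoming = ["Nobody wants you here, quit", "Thanks for the report link!"]
for msg, flagged in zip(incoming, filter_model.predict(incoming)):
    print(("screened" if flagged else "inbox"), "->", msg)
```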

Developing those solutions requires serious computational horsepower. According to van der Linden, that’s where the CFI played a key role, funding the advanced computers and research infrastructure they needed to get the lab up and running.

“The contribution from the CFI has really allowed for a massive acceleration of a really important research program,” he says. And because the lab’s computing power is available to other researchers through SHARCNET (a consortium of 19 colleges, universities and research institutes that operates a network of high-performance computing clusters across Ontario), its impact extends even further.

Training a new generation of computational social scientists

[Headshot of Clifton van der Linden]

Although the Digital Society Lab only launched a few years ago — just before the COVID-19 lockdowns — it has grown to include a team of linguists, statisticians, engineers, political scientists and computational social scientists from around the world.

And while research is the primary focus, its role doesn’t end there. “We take our contributions to the university’s teaching mission seriously as well,” says the lab’s managing director, John McAndrews. He cites the diverse cohort of undergraduate research assistants, graduate students and postdocs who benefit from cutting-edge training in computational social science methods.

Social scientists often have great ideas for social innovations, but they may lack the technical know-how or digital tools to implement them, he explains. “The lab stands at that intersection, bringing interdisciplinary expertise and high-performance computing to make those innovations possible.”

“I feel, as an early career researcher, that the contribution from CFI has really allowed for a massive acceleration of a really important research program.”

— Clifton van der Linden, McMaster University

Melding social science and digital tech to create a social enterprise

During his doctoral studies, Clifton van der Linden developed Vote Compass, an online tool that surveys users’ political views and models their alignment with candidates’ policy platforms. He subsequently transitioned this work into Vox Pop Labs, a full-fledged social enterprise.
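Vote Compass's actual methodology is more sophisticated than this, but the core idea of position-matching can be illustrated with a toy example: score how closely a user's survey answers sit to each party's stated positions. The parties, issues and distance measure below are all invented:

```python
# A toy sketch of position-matching in the spirit of tools like Vote
# Compass. Invented data and metric, not Vote Compass's methodology.
parties = {
    "Party A": [1, 5, 2],   # stances on three issues, 1-5 Likert scale
    "Party B": [4, 2, 5],
    "Party C": [3, 3, 3],
}
user = [2, 4, 2]  # the user's survey answers on the same three issues

def closeness(a: list[int], b: list[int]) -> float:
    """Mean absolute agreement, scaled so 1.0 means identical positions."""
    max_gap = 4  # largest possible gap on a 1-5 scale
    gaps = [abs(x - y) / max_gap for x, y in zip(a, b)]
    return 1.0 - sum(gaps) / len(gaps)

# Rank parties by alignment with the user's answers.
for name, positions in sorted(parties.items(),
                              key=lambda kv: -closeness(user, kv[1])):
    print(f"{name}: {closeness(user, positions):.2f}")
```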

First launched during the 2011 Canadian federal election, Vote Compass has now run in more than 50 elections and referenda worldwide and has been used by more than 30 million people.

“Vote Compass is a prime example of how we can take methods developed in the social sciences and use digital technology to generate public benefits,” says van der Linden. 

This sidebar is extracted from a story that appeared in Excellence in Research and Innovation, a sponsored-content insert in The Globe and Mail.