We often talk about how to stop the spread of “misinformation” online, and by “we” I mean “me.” Me often talk about how to stop the spread of “misinformation” online. But the word “misinformation” encompasses a lot of different things: saying that ivermectin cures COVID-19, that 9/11 was an inside job, that Jews secretly run the world, or even a “clickbait” headline reading “You won’t read this life-changing study about clickbait headlines,” which by the way is a real headline published on Fast Company about a study, which I DID read, that found that clickbait headlines aren’t necessarily any more effective than honest headlines and in some cases may perform worse.
But I’m not going to talk about THAT study. I’m going to talk about THIS study: “The fingerprints of misinformation: how deceptive content differs from reliable sources in terms of cognitive effort and appeal to emotions.” Carlos Carrasco-Farré, a researcher studying machine learning and social media at ESADE in Barcelona, points out that most research focuses on the type of misinformation we call “fake news,” so when we discuss how misinformation spreads, why it goes viral, and how we can stop it, we’re maybe only talking about one subset of misinformation. In fact, he points out, there are SIX distinct types of misinformation as described in a 2018 study that categorized 194 websites:
Clickbait. Sources that provide generally credible content but use exaggerated, misleading, or questionable headlines, social media descriptions, and/or images.
Conspiracy Theory. Sources that are well-known promoters of kooky conspiracy theories.
Fake News. Sources that entirely fabricate information, disseminate deceptive content, or grossly distort actual news reports.
Hate News. Sources that actively promote racism, misogyny, homophobia, and other forms of discrimination.
Junk Science. Sources that promote pseudoscience, metaphysics, naturalistic fallacies, and other scientifically dubious claims.
Rumor. Sources that traffic in rumors, gossip, innuendo, and unverified claims.
Instead of examining individual misinformative articles or other forms of media, Carrasco-Farré chose outlets that are known for distributing each of those types of misinformation. And I’m really not a fan of this tactic, because I looked at the sources he chose and for a start, they’re pretty obscure. Like, “dcgazette.com” is a blog that hasn’t been updated since 2020, and judging by the content I assume the owner died of COVID.
The other problem with this tactic is that outlets can put out a variety of misinformation. “JihadWatch.org” caught my eye because it’s filed under “conspiracy,” but come on, that’s gotta dip into the “hate” category too, right? (Let me save you a click: it absolutely does!)
This was probably the easiest way to go about it, because there is an existing database of these sources compiled in 2018 and pre-sorted into those six subsets of misinformation. But, yeah, honestly I think if you really want to distinguish between these categories, you’d want to examine specific articles that fall into them, not just articles that come from sources in those categories.
But there are still interesting results to check out that might have implications for how social media sites, for example, might be able to sniff out various types of misinformation. Carrasco-Farré found that, for instance, “fake news” is 18 times more negative than legitimate news. Hate speech, as you might predict, is even more so: 30 times more negative. He also found that in general, all misinformation is less lexically diverse than legitimate information, meaning it uses a smaller variety of words. This is part of what makes fake news in particular 13% easier to read than legitimate information, which may be why other studies show that people who believe and share misinformation tend to have lower media literacy.
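If you’re curious what “lexical diversity” and “ease of reading” actually mean as measurements, here’s a minimal sketch, not the paper’s actual method, that computes a type-token ratio and a Flesch reading-ease style score for a snippet of text (the syllable counter is a crude heuristic):

```python
import re

def lexical_diversity(text):
    """Type-token ratio: unique words divided by total words.
    Lower values mean the text leans on the same words over and over."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return len(set(words)) / len(words)

def count_syllables(word):
    """Rough syllable count: runs of consecutive vowels, minimum one."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch reading ease; higher scores mean easier text.
    Formula: 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

simple = "The dog ran. The dog ran fast. The dog was fast."
varied = "Researchers scrutinized lexical diversity across deceptive publications."
print(lexical_diversity(simple) < lexical_diversity(varied))   # repetitive text scores lower
print(flesch_reading_ease(simple) > flesch_reading_ease(varied))  # short words read easier
```

The actual study uses more sophisticated measures, but the intuition is the same: repetitive vocabulary and short, simple sentences both leave numerical traces you can compute automatically.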
He also found that misinformation tends to appeal more to viewers’ sense of morality, with fake news focusing on morality 37% more than legitimate news, and with hate speech being 50% more focused on morality.
This is critical information for websites like Twitter, Reddit, and Facebook, which might want to stop the spread of misinformation – and yes, I’m being generous by assuming that’s what they want. With so much content being posted 24 hours a day, algorithms may be the best defense. And if you can fine-tune those algorithms to identify the different kinds of misinformation, they’ll be way more effective: target hate speech not just by looking for slurs (which people can easily misspell or use memes to hide), but by looking for emotionally charged negative language. Look for “fake news” not just by looking for hot topics, like YouTube demonetizing a video just because it’s about ivermectin, for example, but also by examining the lexical diversity and ease of reading. Seriously YouTube please do this! I can use more words! Big words! The biggest words! Antidisestablishmentarianism! Pulchritudinous! Triskaidekaphobia!
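A moderation pipeline along those lines might score each post on several of these “fingerprints” at once. Here’s a toy sketch; the word lists and thresholds are made-up placeholders for illustration, not values from the study, which used far richer features:

```python
# Toy misinformation "fingerprint" scorer. The word lists and the
# flagging thresholds below are illustrative placeholders only.
NEGATIVE_WORDS = {"disaster", "corrupt", "evil", "destroy", "lie", "scam"}
MORAL_WORDS = {"justice", "betray", "duty", "purity", "loyal", "fair"}

def fingerprint(text):
    """Return per-word rates for negativity, moral appeal, and
    lexical diversity (unique words / total words)."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    total = max(1, len(words))
    return {
        "negativity": sum(w in NEGATIVE_WORDS for w in words) / total,
        "moral_appeal": sum(w in MORAL_WORDS for w in words) / total,
        "lexical_diversity": len(set(words)) / total,
    }

post = "They lie and lie to destroy justice. Corrupt evil scam!"
scores = fingerprint(post)
# Flag posts that are unusually negative AND lexically repetitive --
# two of the fingerprints the study found in deceptive content.
flagged = scores["negativity"] > 0.2 and scores["lexical_diversity"] < 0.95
print(scores, flagged)
```

A real system would learn these thresholds from labeled data rather than hard-coding them, but the point stands: each category of misinformation leaves a different combination of measurable traces.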
Anyway. You could also use this info in your own life, even if you don’t build algorithms for a living. Is a nugget of info a little TOO easy to parse? Does it stoke your emotions? Does it make you annoyed or angry? Maybe use that as a cue to step back and reevaluate whether or not it’s really true. It’s always a good idea to keep your bullshit detector up to date!