Will YouTube’s New Community Notes Stop Misinformation?
Like all social media platforms right now, YouTube has a misinformation problem. There are anti-vaxxers, flat-earthers, election deniers–the gang’s all here! Personally, I don’t see a lot of it unless I seek it out, which I do when I’m not logged in, so I always thought the algorithm, and the humans at Google, did a pretty good job of diminishing that misinformation.
But experts on the topic are a bit less optimistic than my anecdotal experience would suggest. For instance, a 2020 study published in BMJ Global Health found that “Over one-quarter of the most viewed YouTube videos on COVID-19 contained misleading information, reaching millions of viewers worldwide.”
And Kate Starbird, co-founder of the University of Washington’s Center for an Informed Public, told NPR back in 2021 that YouTube was actually the worst offender when it comes to mis- and disinformation, which, as longtime viewers of this channel already know, are false information spread unintentionally or intentionally, respectively.
Now that Elon has turned Twitter into a MAGA/Nazi/pornbot haven, I’m not sure if YouTube is still number one, but it makes sense: Twitter and its clones, and even Facebook, are primarily text-based platforms. Written text is way easier to scan for misinformation, compared to video, which can include misinformation in the audio, in the visuals, in the captions, tucked away in the description box where none of you bother to look before commenting “WHERE ARE YOUR SOURCES” even though I specifically say that every one of my videos links to a full transcript with all sources in the description box, you know what, I’m getting distracted, let’s move on.
Anyway, misinformation researchers have long been wrestling with how to even study its proliferation on YouTube–not only because of the added complexity of video over text, but also because until Elon ruined Twitter, YouTube was the least transparent social network when it came to APIs allowing researchers to scrape massive amounts of data to then analyze.
Despite those difficulties, experts knew YouTube had a serious problem. Not only does misinformation flourish here, but it then gets posted to other platforms like Facebook and Xitter, which makes it especially important to stop the bullshit at the source. In 2022, more than 80 fact-checking organizations from more than 40 different countries signed a letter calling out YouTube’s dangerous lack of moderation, and demanding that they take specific actions to begin to rectify it. They suggested four key changes: first, to actually allow independent researchers to more easily study YouTube’s data; second, to include links to rebuttals in videos with misinformation; third, to punish spreaders of misinformation by no longer recommending them to viewers; and fourth, to start caring about misinformation found in non-English videos.
The idea of including links to rebuttals was fleshed out in a study presented recently by a group of misinformation researchers from the University of Washington and elsewhere. They created a browser plugin called Viblio, which lets users add citations to YouTube videos; those citations then appear along the video’s timeline for viewers.
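Based on how the researchers describe it, the core data model is pretty simple: a citation is just a source pinned to a stretch of the video’s timeline. Here’s a minimal sketch of what that might look like in Python; to be clear, the names and fields are my own guesses for illustration, not Viblio’s actual code.

```python
from dataclasses import dataclass, field

@dataclass
class Citation:
    start_s: float     # where on the timeline the claim starts (seconds)
    end_s: float       # where it ends
    url: str           # the supporting (or rebutting) source
    note: str = ""     # optional summary of why the source matters

@dataclass
class AnnotatedVideo:
    video_id: str                                   # the video being annotated
    citations: list[Citation] = field(default_factory=list)

    def citations_at(self, t: float) -> list[Citation]:
        """Return the citations relevant to the current playback position,
        so a plugin could surface them on the timeline as you watch."""
        return [c for c in self.citations if c.start_s <= t <= c.end_s]
```

The interesting design question isn’t the data structure, though; it’s who gets to add citations and whether viewers trust them, which is exactly where Community Notes runs into trouble, as we’ll see.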
They did a very small test of the system, with a dozen users finding it useful and saying that it helped them “take control of their credibility assessments.” The researchers compared Viblio to Community Notes on Xitter, where users can debunk misinformation and have the correction appear alongside the original post.
Clearly YouTube has been considering all of this data and these calls for action, because just last week they announced that they would, in fact, be introducing a form of Community Notes here on YouTube.
They’re just starting, and so have chosen a handful of creators in good standing to try it out. They show an example of what it all might look like, in which this disgusting fake news video about “15 extinct animals you should know” is righteously debunked with the note explaining that actually the Giant Tortoise is still alive and kicking, you Russian disinformation agent.
When I first heard about this, I was of two minds, to be honest. At first I thought, “Hey, great, I love when platforms launch new initiatives to curb misinformation!” And then I thought, “I still occasionally get demonetized for talking about stuff like vaccines, even though I’m talking about the actual science, because the system sometimes skews too heavily towards assuming something is misinformation. And I talk about a lot of things that get people really riled up, so is it possible that I’m going to get mass ‘fact checked’ by conspiracy theorists? Could QAnon nutters game this system to get their fantasies platformed on real science channels? And could notes be used to do what those experts asked YouTube to do: stop the algorithm from promoting repeat offenders who spread ‘misinformation’?”
YouTube hasn’t told us much about the new system, but we can get an idea of how it might work, and how well it might work, by looking at the system it’s clearly based on: Xitter’s Community Notes.
Back in 2021, Twitter introduced Birdwatch, “a community-driven approach to help address misleading information” on the platform. Anyone could apply to be a Birdwatcher, and users could add new notes with better context for certain tweets, as well as rate existing notes on how helpful they were.
I was very optimistic about this initiative, because a lot of research had shown by then that users are actually pretty good at calling out and stopping misinformation when they’re empowered to do so. I also liked that Twitter was being transparent about everything from the jump, allowing researchers and experts to access all their data and promising to publish the code for any algorithm built to help the program.
There were some early issues, though, as researchers noted in this study at the end of 2021. They found that users tended to only bother fact-checking posts they were politically opposed to.
“Specifically,” the authors wrote, “users are more likely to write negative evaluations of tweets from counter-partisans; and are more likely to rate evaluations from counter-partisans as unhelpful. Our findings provide clear evidence that Birdwatch users preferentially challenge content from those with whom they disagree politically. While not necessarily indicating that Birdwatch is ineffective for identifying misleading content, these results demonstrate the important role that partisanship can play in content evaluation. Platform designers must consider the ramifications of partisanship when implementing crowdsourcing programs.”
It’s worth noting a few things about this study. First, it didn’t find that Birdwatch was useless; just that new approaches would have to be invented in order to balance things out and be less partisan.
And second, this important finding was the direct result of Twitter making their data freely available to experts to help refine the program.
And so then Elon Musk took over, and he made everything worse, because that’s just what he does. It’s all he knows how to do. He’s like the scorpion in that “scorpion and the frog” parable, except instead of stinging the frog and drowning them both, he empowers neo-Nazis who might eventually kill him for being an immigrant from Africa.
First, Elon renamed Birdwatch “Community Notes,” which is objectively boring and dumb. And then he removed all that transparency stuff, and instead severely limited API access to make research even harder.
Finally, he basically dissolved the rest of the Trust and Safety team, insisting that ALL moderation could be done via Community Notes. Which, to be clear, is a way worse name than “Birdwatch.” I cannot stress that enough.
By 2023, Xitter employees and Community Notes users were reporting that things were so bad on the back end that it was possible Community Notes were making misinformation on the platform even worse. Wired talked with contributors who admitted that not only could the Community Notes system be gamed, but that they were actively gaming it. In order to combat Russian disinformation, they formed a group of about 25 people who had several accounts with access to Community Notes, and they coordinated their notes and their votes to push their debunks into the average user’s feed.
Because you see, not every Community Note ends up on the post it’s trying to contextualize. That would be chaos, so instead Xitter uses an algorithm to figure out which ones go public. They would probably use humans to help with that, but they fired all of them, so it’ll have to be the ol’ algorithm’s sole responsibility.
They are, unsurprisingly, not forthcoming on how this algorithm works. But we do know that notes need to get plenty of upvotes and labels saying they’re helpful in order to go public. We also know that the algorithm is “bridging-based,” which means that the “helpful” votes need to come from across the political spectrum.
That is, of course, one way to stop a group like QAnon from note-bombing truthful posts. They can say that the 2020 election was stolen from Trump all they want, and they can rate the correction as “unhelpful” all they want, but unless they start new accounts and build up a history of liking Joe Biden and Social Security and normal person things, the algorithm will discount their contributions.
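If you want a feel for what “bridging-based” means in practice, here’s a toy sketch in Python. To be absolutely clear, this is not Xitter’s actual ranking code: the “viewpoint” score below is a hand-wavy stand-in for something the real system has to infer from each rater’s rating history, and every name and threshold is mine, made up purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Rating:
    rater_id: str
    helpful: bool     # did this rater mark the note "helpful"?
    viewpoint: float  # hypothetical score in [-1, 1]; in reality this would be
                      # inferred from the rater's history, not self-reported

def note_goes_public(ratings: list[Rating],
                     min_ratings: int = 5,
                     min_helpful_share: float = 0.8) -> bool:
    """Toy bridging check: the note only ships if raters from *both* ends of
    the viewpoint axis found it helpful, not just one enthusiastic faction."""
    if len(ratings) < min_ratings:
        return False  # not enough signal yet

    left = [r for r in ratings if r.viewpoint < 0]
    right = [r for r in ratings if r.viewpoint > 0]
    if not left or not right:
        return False  # every rating came from one "side"; no bridge

    def helpful_share(group: list[Rating]) -> float:
        return sum(r.helpful for r in group) / len(group)

    # Require broad agreement on each side, so a coordinated bloc of
    # same-viewpoint accounts can't push a note through (or bury one) alone.
    return (helpful_share(left) >= min_helpful_share
            and helpful_share(right) >= min_helpful_share)
```

And you can already see the weak spot I’m about to complain about: if one “side” simply refuses to call an accurate note helpful, that note never clears the bar, and the bullshit stands.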
But here’s the problem: twenty years ago, ten years ago, hell, maybe even five years ago you may have been able to argue that most people can agree on the nature of reality whether they’re on the political left or right. I would have argued with you back then, but now? That’s a damn tough argument to even make, because of the large number of far-right extremists who have been happily living in their own reality, completely unfazed by the one the rest of us are in. These people have successfully taken over one of the two political parties here in the US, meaning that while there are still lots of conservatives who acknowledge objective reality, they are no longer in charge and their numbers are dwindling. And they are far less likely to be active fact-checkers on platforms like Xitter. They aren’t motivated in the same way that deluded far-right conspiracy theorists are.
And so, how does the algorithm determine if a “politicized” truth is true, when only one political “side” is saying so? Climate change is real. Vaccines are safe and effective. Abortion is safe and important healthcare. But are you going to get the majority of rightwing Community Notes to say so? Doubtful.
That’s why experts suspect that Community Notes fall apart on important issues, where bullshit ends up going unchallenged because users can’t agree on whether a corrective note is “helpful,” but actually work well for non-political topics like celebrities, dropshipping scams, and AI. The director of the Poynter Institute’s MediaWise said, “These Community Notes people are very, extremely online, so they’re actually seeing a lot of the AI stuff on Reddit or other areas, so they’re able to pick up the AI-generated stuff,” adding that they frequently do so more quickly than fact-checkers.
See? The terminally online are still good for something other than making memes I have to look up to fully understand. Sometimes I just find a random teenager and ask them to explain it to me.
The experts seem to agree that Birdwatch was great as one tool that Twitter could use alongside a more top-down corporate approach to stopping misinformation and the people who spread it, which is how Birdwatch was originally envisioned.
That brings me back to YouTube’s new community notes feature. They, too, say they’re planning to use a bridging-based algorithm that will require notes to have buy-in from “a broad audience across perspectives,” but the relatively short announcement doesn’t go into how they’ll handle the problem I just described: balancing against an extremist far-right population.
That’s a major red flag for me, as is the continued lack of transparency on YouTube’s part. Experts have already pointed out that this opacity is a problem: without independent researchers digging into the data and finding problems like political bias, the system just won’t ever be as good as it could be, and change will likely come much more slowly.
All in all, I’m cautiously optimistic about this effort. If YouTube is smart, they’ve read all the studies on what went wrong with Community Notes and will implement a better system that can act as a helpful aid to the humans who institute top-down moderation. I just hope that in a year, YouTube doesn’t pull an Elon, putting out an internal review suggesting their program is a miraculous debunker that will now replace their entire trust and safety team. Please don’t do that. Please.