New Study: Is it Worth it to Argue on Twitter?

This post contains a video, which you can also view here. To support more videos like this, head to patreon.com/rebecca!

For literal decades now there has been a robust industry of researchers studying debunking. How do we correct misinformation? Do we have to be nice? Does sarcasm backfire? Should we use facts? Should we rely on emotional appeals? Is it impossible?

It’s kind of overwhelming for people like me, whose entire life’s work revolves around using science and rationality to try to stop the spread of false information, because obviously I value science and rationality, so if a bunch of psychologists and sociologists find that one tactic is better than another, then I’m very persuaded to try it. But for a start, I have trouble not using humor, sarcasm, and the occasional insult when I talk about this stuff, so while I can be a little flexible, I can’t change my entire personality based on what some scientists think may be slightly more persuasive. And for another thing, there are so many papers, many of them contradictory, few of them replicated, and so few of them conducted in real-world conditions as opposed to in surveys and labs, that it’s tough to say what the best way to influence people is.

I talked about this issue a few years back when researchers published a meta-analysis of 20 different studies, finding that in general there is no one silver bullet method that will convince 100% of wrong people that they’re wrong. 

They also found that debunking misinformation can backfire when you counter it by just saying “yeah that’s wrong” or by providing a quick explanation. That can make people more likely to dig their heels in and get even more embedded in their wrongness. The better method, unfortunately, was engaging with wrong people and having them participate in a discussion, leading them away from their misinformation and introducing them to the actual facts. Ugh. Who even has the energy.

Anyway I’m talking about all this again because there’s yet another study in this field and it’s pretty interesting. It even has a great title: Perverse Downstream Consequences of Debunking: Being Corrected by Another User for Posting False Political News Increases Subsequent Sharing of Low Quality, Partisan, and Toxic Content in a Twitter Field Experiment. As you may guess from that title, it’s another one of those downer studies that just makes me look in a mirror for a long time whispering “what have I done with my life? What even matters? What is truth? What is dignity?” Though that may also be because I just read The Remains of the Day, which I was expecting to be really boring because it’s literally the extended internal monologue of an English butler, but I gotta tell you, it got me. It got me. Anyway.

In a lot of previous studies, researchers recruit people on Mechanical Turk or somewhere and say “hey, that thing you think is wrong” and then “do you feel more wrong or more right now?” And if the person says they actually feel more right, then the conclusion is “telling people they’re wrong doesn’t work.” Okay, I’m way oversimplifying things, but my point is that these studies aren’t necessarily “the real world,” and they don’t necessarily take into account how complicated it can be for a person to change their mind on a deeply held belief.

This new study is not that, and I gotta give them props for that. Because what they did was go on Twitter and find 2,000 people who were sharing links to political misinformation that Snopes had rated as false — things like “Hillary Clinton paid a woman to make false rape claims about Donald Trump,” and “Virginia Gov. Ralph Northam said the National Guard would cut power and communications before killing anyone who didn’t comply with new gun legislation.” Wow. I missed that one. That’s…yeah, that’s something. 

They wanted to include some political balance, but a ridiculous amount of bullshit was being shared by conservatives, so to throw in some lib stuff they also included “Donald Trump once evicted a disabled combat veteran for owning a small therapy dog,” which I also missed. And it’s classic lib shit to make up something like that when we know for a fact that Donald Trump systematically refused to rent apartments to black people, to the point that the Justice Department sued him in 1973 and forced him to promise not to do that anymore. Do you really need to make up a thing about a therapy dog? Do you? Do you really? Anyway.

So they found all these people sharing bullshit and then used human-looking bots (all white men with various political affiliations and at least 1,000 followers) to reply to the misinformation with very nice tweets like “I’m uncertain about this article — it might not be true. I found a link on Snopes that says this headline is false.” And then they linked to the Snopes debunking.

Then they evaluated what their subjects tweeted for the next 24 hours following the correction. They found “across all specifications that being corrected significantly decreased the quality of content retweeted by the user in the 24 hours following the correction.” And they also found that subjects increased their rate of retweeting toxic content. So. That’s fun.

The researchers wonder if this is because the correction occurred in public, leading the subject to feel humiliated and thus activating a response meant to overcome that humiliation. “Oh you think THAT was false? Well wait til you see THIS shit.”

This also sort of backs up the finding of that meta-analysis: a brief correction with no additional conversation immediately backfired on people spreading misinformation. Had the bots in this study engaged in more of a conversational back and forth, maybe the results would have been different.

It’s also worth noting that this study only captures the immediate reaction to correction. While this real-world experiment is more accurate and way more entertaining than surveys, it still doesn’t account for the possibility that these bots planted a seed in those subjects’ minds. I’ve been in arguments with people where it’s important for me to “win,” but once the argument is over and I have time to reflect, I may come around to their way of thinking. I’ll never admit it, and I may even tell myself that I arrived at my new opinion all on my own, but it does happen. I’d love to see more research that looks at the longer-term persuasiveness of misinformation corrections.

So there is, as always, more work to be done, which should cheer up sociologists everywhere trying to publish and not perish. And in the meantime I’m just going to keep trying my best to stop the spread of misinformation in whatever way I can, whether that’s through kind and thoughtful one-on-one discussion or extraordinarily profane sarcasm and meanness. Let’s be honest, mostly the latter.

Rebecca Watson

Rebecca is a writer, speaker, YouTube personality, and unrepentant science nerd. In addition to founding and continuing to run Skepchick, she hosts Quiz-o-Tron, a monthly science-themed quiz show and podcast that pits comedians against nerds. There is an asteroid named in her honor. Twitter @rebeccawatson Mastodon mstdn.social/@rebeccawatson Instagram @actuallyrebeccawatson TikTok @actuallyrebeccawatson YouTube @rebeccawatson BlueSky @rebeccawatson.bsky.social
