The end is in sight for this pandemic — it’s still likely many months away, but we now have several effective and safe vaccines, and with Trump out of office the United States federal government is finally developing a plan to distribute them, so we might be sort of back to normal by the fall. So maybe this is a good time to look back on what happened here and see what lessons we can learn and apply to future events.
For instance, when the pandemic hit and it quickly became clear that thousands of people would die from it, at the very least, a significant number of people just straight up did not give a shit. Yes, it was a big problem that a lot of people thought the whole thing was fake, that the deaths weren’t real, and they were fooled by powerful and wealthy people who got richer by exploiting that credulity. But a significant percentage of humans understood that COVID-19 was real, knew that it was killing people, but decided that those people’s lives were not worth the trouble of staying home or wearing a mask.
If humans are going to move forward as a species, we’re going to have to reckon with that. Because sure, eventually science swoops in to save the day: so long as enough people get vaccinated, eventually it may not matter if people wear masks or socially distance themselves. But that’s too late for the 2.14 million people who have already died, and the hundreds of thousands more who will die before we reach herd immunity.
And if we keep this same lack of empathy, it’s going to be too late for the human race as a whole when we face the one big public health threat that scientists have already told us we are undoubtedly hurtling toward: climate change.
In my previous video, I talked a bit about Twitter and the trouble with bots. I mentioned that Indiana University has been on this beat for a decade, first setting up Truthy as a way to identify bots spreading misinformation, which then transformed into BotOrNot. Well, BotOrNot is now Botometer. It’s like a 90s rapper, I can’t keep up with the name changes. Remember when we called him Puff Daddy? Are we back to calling him that? I tended to prefer P-Diddy but whatever.
Anyway, the Botometer has become something of the go-to tool for many social science researchers who are studying social media, despite the fact that I don’t think any social media company has ever really taken their results or their suggestions seriously. Most recently, researchers funded by Brown University’s Data Science Initiative used Botometer to examine Tweets about climate change, specifically examining the discourse around Trump pulling the United States out of the Paris agreement, which Biden reversed within moments of taking office.
Despite the fact that my expectations for social media companies are quite low, I was still surprised by what the researchers found: in a random sample of nearly 185,000 Twitter users, nearly 10% were bots. And that 10% was responsible for about 25% of all the discourse about climate change on Twitter.
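To put that in perspective, here’s the back-of-the-envelope math on what those two percentages imply (a toy calculation using just the study’s headline figures, nothing more):

```python
# If 10% of accounts produce 25% of the tweets, how much louder is the
# average bot than the average human? Figures from the study's summary;
# the per-account math here is my own rough illustration.
bot_share_users = 0.10
bot_share_tweets = 0.25

per_bot = bot_share_tweets / bot_share_users              # 2.5x the average account's output
per_human = (1 - bot_share_tweets) / (1 - bot_share_users)  # ~0.83x the average account's output

print(round(per_bot / per_human, 1))  # → 3.0: each bot out-tweets the average human ~3x
```

In other words, the suspected bots weren’t just numerous — each one was roughly three times as loud as the average human in the conversation.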
What were those bots tweeting? I’ll give you one guess. If you guessed “complete fucking bullshit” then congratulations, you win! You win a ravaged planet unsuitable for human life. The bots were spreading lies about how climate change isn’t real, and boosting quacks who downplayed the severity of the situation.
Pretty alarming, no? At this point I want to pause and point out something that I very nearly missed while creating this video. True story: I only found this study because, after recording an early version of this video and uploading it but before making it live, I happened to see a random YouTube video about something completely unrelated that referenced this study from last year, in which some other researchers looked into how accurate Botometer is. And they found the answer was, um, not super accurate!
They had Botometer examine several different sets of Twitter users: one set of English-speaking politicians (verified as human, though of course there’s no word on how they verified the politicians themselves weren’t lizard people), one set of German-speaking politicians, one set of very obvious English-speaking bots they got from a wiki about Twitter bots, one set of German-speaking bots, and finally one set of bots and humans that Botometer provided themselves to test the accuracy of the tool.
Here are the results, and to simplify things let me just tell you that the more area underneath the curve (the closer the top of the curve is to the upper left corner of the chart), the better Botometer did at differentiating between humans and bots. Botometer’s own dataset is the purple line in the middle. So you can see that for German-language Twitter users, Botometer was pretty bad at its job, resulting in a lot of bots labeled as humans and humans labeled as bots.
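For anyone who wants to see how that “area under the curve” metric actually gets computed, here’s a minimal sketch. The labels and scores below are made up for illustration — they are not Botometer’s data — but the mechanics are the standard ROC construction: sweep the decision threshold from strict to loose, record the false-positive and true-positive rates at each step, and take the area under the resulting curve.

```python
# Toy ROC/AUC illustration. Label 1 = bot, 0 = human; score = the tool's
# "botness" estimate for that account. These values are invented.
labels = [1, 1, 1, 0, 1, 0, 0, 0]
scores = [0.95, 0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.1]

def roc_points(labels, scores):
    """Sweep the threshold from strict to loose, recording (FPR, TPR) pairs."""
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _score, label in pairs:
        if label == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

def auc(points):
    """Trapezoidal area under the curve; 1.0 = perfect, 0.5 = coin flip."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area

print(auc(roc_points(labels, scores)))  # → 0.9375
```

A perfect detector hugs the upper-left corner (area 1.0); a coin flip runs along the diagonal (area 0.5). That’s why the shape of those curves is the whole story.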
But for English-language users (the blue line), it actually performed better than the researchers who built it claimed. This was a bit of a relief to me because this very video, which again I had already written, performed, edited, and uploaded, was predicated on the idea that Botometer is pretty good at detecting bots. But the study’s conclusions are “the Botometer scores are imprecise when it comes to estimating bots; especially in a different language. We further show in an analysis of Botometer scores over time that Botometer’s thresholds, even when used very conservatively, are prone to variance, which, in turn, will lead to false negatives (i.e., bots being classified as humans) and false positives (i.e., humans being classified as bots). This has immediate consequences for academic research as most studies in social science using the tool will unknowingly count a high number of human users as bots and vice versa.”
That’s a bit overstated when it comes to English — Botometer isn’t perfect, but it still does a pretty damned good job. Still, I decided to rewrite and record and edit and upload this video because I still think it’s worth keeping this study in mind. The researchers point out that even the creators of Botometer caution other scientists about using the tool with a defined threshold — like, if Botometer is 80% sure this is a bot, we will consider it a bot.
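To see why a hard cutoff like that is risky, here’s a toy sketch. The scores and ground-truth labels are invented for illustration, but they show the trade-off the researchers are warning about: move the threshold and you trade false negatives (bots counted as humans) for false positives (humans counted as bots).

```python
# Toy illustration of why a fixed threshold is fragile.
# Each pair is (score the tool assigned, ground truth: 1 = bot, 0 = human).
# Values are invented for illustration.
scores_and_truth = [(0.95, 1), (0.85, 1), (0.78, 0), (0.75, 1),
                    (0.55, 0), (0.45, 1), (0.20, 0)]

def classify(threshold):
    """Return (false positives, false negatives) at a given cutoff."""
    fp = sum(1 for s, truth in scores_and_truth if s >= threshold and truth == 0)
    fn = sum(1 for s, truth in scores_and_truth if s < threshold and truth == 1)
    return fp, fn

print(classify(0.8))  # → (0, 2): strict cutoff, no humans flagged but two bots missed
print(classify(0.5))  # → (2, 1): looser cutoff, fewer bots missed but two humans flagged
```

Neither threshold is “right” — and since Botometer’s scores for the same account drift over time, a cutoff that worked when a study ran may misfire when someone tries to replicate it.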
They also point out that because Botometer uses current information, the scores will change over time, meaning that research using it may be difficult to replicate.
Finally, there’s a problem of bot type. Not all bots are the same — some are simple, some are complex, and some aren’t really “bots” at all! Many “bots” these days have actual humans behind them, being paid pennies per Tweet but working with the same goal as many bots — to alter the course of discussion in a way that’s favourable to the person paying for the “bot.”
So when you see research that talks about the proliferation of bots on social media, it’s worth bearing in mind whether they’re using a premade tool like Botometer or whether they created their own bespoke tool, and determining what sort of bots their tool was trained on, which will inform what kind of bots it can then detect.
In other words, I want to tell you that this climate change study is not a slam dunk case, and future research is needed to drill down on whether or not the “10%” figure really holds true when actual humans follow up on Botometer to see if those were really bots or not.
The fact remains, though, that bots of all types continue to be a very real problem on social media. Despite many studies showing that they are extremely common and extremely efficient at spreading misinformation, companies like Twitter don’t really do much to stop them. Possibly that’s because those companies still don’t actually make any money. If they ban all the bots spreading dangerous misinformation, it will look like there are fewer people to advertise to and less activity happening on the site in general, and the investors and advertisers propping them up will bail. So they just do the bare minimum to occasionally look as though they care about the spread of fake news.
And regardless of whether or not all the bots detected in this climate change study really are bots, it’s suspect that the presumed bots all seem to be focused on one side of the climate change “debate.” Previous research has suggested bots tend to play both sides, which didn’t surprise me when that news came out, because Russian bots, for instance, don’t necessarily just push one side — they tend to elevate both sides of controversial subjects just to sow discontent. But in this case the bots’ one-sidedness may actually support their “bot” designation: Botometer doesn’t look at the content of a Tweet to make its determination, so it’s unlikely the tool flagged a bunch of human accounts as bots and all of those humans just happened to be particularly concerned with calling climate change a hoax.
The researchers in this climate change study point out that they have no way to figure out who set up these suspected bot accounts, because unlike Jack Dorsey they don’t have easy access to the accounts’ IP addresses or other information. However, based on what the bots tweet about, one can make an educated guess as to their motives, and then one can look around to see whose motives match those of the bots. You know, people like the oil and gas industry, who already spend about a billion dollars per year lobbying to “control, delay, or block binding climate-motivated policy” in the US and other world governments. Because the people running those companies just want to keep making more and more money and don’t give a flying shit what they’re doing to the less fortunate, like those in low-lying coastal cities, and they know that by the time climate change affects their equally wealthy progeny, science may be ready to swoop in and save the day (for the rich). Perhaps an underwater supervillain lair. Maybe a nice terraformed bubble on Mars. Or maybe they just don’t give a shit about their progeny at all.
I know that if you watch my channel you probably already knew climate change is real, it is serious, and humanity needs to take action to mitigate it. So I guess my point here is just that we should prepare ourselves, because what we saw with COVID-19 is a microcosm of what is happening and what will happen with climate change: misinformation is spread by powerful and wealthy people hoping to exploit the credulous, sometimes using bots, sometimes using humans; the people most responsible for the situation will not change their behavior to help and will instead simply continue to benefit; blame for the situation will be placed on mostly marginalized individuals who are also operating in their own self-interests but without the benefit of money, government leadership, social safety nets, or even decent healthcare; and there will still be a subset of people who, at the end of the day, simply will not care about the large loss of life, even if they are the ones in the at-risk pool.
If we want a habitable planet (for humans at least) in 100 years, we need to fight the worst aspects of capitalism, increase education, encourage critical thinking, stop social media platforms from allowing the proliferation of misinformation, and raise future generations to give a shit about others. Easy, right? Or we just wait for rich people to fund scientific innovations that will help them save themselves. Six of one….