We’ve been pretty down on “the algorithm” here on this channel, and no, we don’t know why we’re suddenly referring to ourselves with the royal “we” but it makes us feel fancy so please just get off our back.
Seriously though, I DO worry a lot about how algorithms affect our lives, about how social media is designed to create an echo chamber that pushes our happy buttons enough to keep us scrolling and scrolling, to the detriment of our own mental health as well as the greater health of society. A lot of research over the past decade has shown how the way these sites are built and the way we interact with them lead to increasing isolation and radicalization.
But a new series of four papers, just published in Nature and Science, purports to show evidence that where we are today isn’t totally due to algorithms, but to individual choices. I need to say up front, though, that this research should NOT let social media companies off the hook, which I’ll get into more after we talk about what exactly these studies are.
Back in 2019, aka the Beforetimes, Facebook (aka Meta) committed to giving seventeen social scientists across many different institutions “unprecedented” access to user data from Facebook and Instagram, which is a nice change of pace considering the institutions they usually give user data to. Supposedly, the idea behind the “Facebook and Instagram Election Study (FIES)” was to allow scientists to examine how social media can “influence elections and alter democracies.” These are the first four papers to be published from this collaboration, with one appearing in Nature and the other three in a special edition of Science.
One paper just looked at the breakdown of misinformation on the platforms. “Asymmetric ideological segregation in exposure to political news on Facebook” found that conservatives interacted with the most misinformation, which isn’t surprising, but what I DID find surprising is that the misinformation was almost ENTIRELY found among conservatives, with “no equivalent on the liberal side.” Don’t get me wrong, liberals definitely fall for and pass along misinformation online, but apparently when you look at “the nearly full set of US-based users that are active on the platform, which was 231 million users during (the) study period,” the number of conservatives spreading misinformation is so large that it makes the liberal dumbasses into a statistical blip.
The other three papers all dealt with the question of whether or not social media companies can stop this problem by tweaking the algorithm. In “How do social media feed algorithms affect attitudes and behavior in an election campaign?” researchers divided about 50,000 users into two groups: one got the regular Meta algorithm and the other got a chronological feed. You know, seeing posts from your friends in the order they’re posted. Imagine it. Chaotic.
In “Reshares on social media amplify political news but do not detectably affect beliefs or opinions,” researchers prevented a group of users from seeing reshared content: posts from people they don’t directly follow that someone they DO follow shared.
And in “Like-minded sources on Facebook are prevalent but not polarizing,” the one Nature article, researchers cut back on how much content users saw from “like-minded” individuals and groups, to kind of combat the idea of an echo chamber.
In all three of those studies, they found that the interventions did NOT end up decreasing political polarization or changing political activities like signing petitions.
So, that’s that, right? It’s not the algorithm, it’s the people!
Well, not so fast. As always, it’s a bit more complicated than that.
First of all, this research was done at a VERY heated time in American politics. This was the end of the 2020 elections, by which point the damage may already have been done, with conservatives firmly entrenched in their attitudes.
Additionally, even though the studies didn’t show a change in political polarization, they DID all show what, in my eyes, are some substantial quality of life improvements:
“Moving users out of algorithmic feeds substantially decreased the time they spent on the platforms and their activity. The chronological feed also affected exposure to content: (…) the amount of content classified as uncivil or containing slur words they saw decreased on Facebook, and the amount of content from moderate friends and sources with ideologically mixed audiences they saw increased on Facebook.”
Meanwhile, “removing reshared content substantially decreases the amount of political news, including content from untrustworthy sources, to which users are exposed; decreases overall clicks and reactions; and reduces partisan news clicks.”
And lessening the echo chamber “increased their exposure to content from cross-cutting sources and decreased exposure to uncivil language.”
The fact that these changes didn’t have a short-term effect on political polarization during one of the most politically polarized periods in recent American history doesn’t mean that they wouldn’t work long term, and it doesn’t mean that there wouldn’t be other benefits. But because interventions like chronological feeds made people decrease the amount of time they spent on the platform, Meta has no real incentive to institute them except for, you know, concern for making the world a better place. Which is no incentive at all under capitalism, because those $12,000 remote control hydrofoil surfboards aren’t going to buy themselves. Seriously, asshole, go get a $100 foamie and learn to paddle for your next arm day, it’s not going to kill you.
Finally, let’s talk about the biggest red flag of all, or I should say a blue flag with a white “f” on it. All of this research comes from the benevolence of Facebook itself, which, for privacy reasons, did not allow the researchers to access any of the raw data themselves. As Joe Bak-Coleman, a social scientist at the Columbia School of Journalism, told Science, the partnership is “restrictive” and not the best way to study these technologies. Even though Meta states that they didn’t pay the scientists, didn’t tell them what to analyze, and didn’t interfere in the writing of the final papers, they DID ultimately control the data that, in the end, suggested that there’s not much they can do to stop all this dang polarization, and we should take that with a grain of salt.
Also worth noting is that back in 2021, an anonymous whistleblower released tens of thousands of pages of internal Facebook research. The documents revealed that Facebook had “evidence from a variety of sources that hate speech, divisive political speech and misinformation on Facebook and the family of apps are affecting societies around the world.” They showed that the leadership was aware that the site was becoming “angrier” but that Mark Zuckerberg refused to make changes because, YEP, they would prevent users from staying on the site longer.
That whistleblower eventually revealed her identity as Frances Haugen, a former data analyst at Facebook who for two years studied how the site’s algorithm spreads misinformation. She testified before Congress that “Facebook consistently chose to maximize its growth rather than implement safeguards on its platforms, just as it hid from the public and government officials internal research that illuminated the harms of Facebook products.”
She told CBS News last month that the company has actually prevented “researchers from studying its operations,” even resorting to legal action against those who exposed the truth.
“They’ve sued researchers who caught them with egg on their face. Companies that are opaque can cut corners at the public expense and there’s no consequences,” she said.
She has also pointed out that the company did make changes to the algorithm in the lead-up to the 2020 election in the hope of preventing violence, but reverted those changes once the election was over. In the month that followed, we now know, extremists used Facebook to plan the January 6th insurrection.
So yeah, this is certainly an interesting group of studies, and if you like this kind of thing I encourage you to check them out as they’re all open access. As always, links are in the transcript on my Patreon, linked in the dooblydoo below. But while they’re interesting, they’re also very, very flawed, and they’re clearly being used by Facebook as a way to redirect attention away from what their own internal documents prove: their algorithm actually can exacerbate or ameliorate problems like political violence, but their decision making will always be influenced by money.
So please don’t take these four studies as evidence that there’s simply nothing social media can do to fix the problems that we see on social media. Yes, the internet in general has allowed weird people to willingly seek out and find each other, which has been great for marginalized communities and even for me who was able to connect with other skeptics and atheists at a time in my life when I didn’t have many real-world friends because I’d just moved to a new city and was working all the time. And yes, that also means that extremists can find each other–people who form communities based on hatred, where they radicalize one another. Yes, individuals continue to seek out news, opinions, and other voices who already agree with them, as they’ve done since the beginning of humanity.
But there is NO question that the design and implementation of social media affects our decisions on every level, in the same way that we’ve been affected by the design and implementation of advertisements and shopping malls and supermarkets and everything else since before social media existed. We can’t stop demanding that these corporations that have taken over our lives show some responsibility with their own decision making.
And I didn’t think at this point I’d have to keep saying these things but comments on my previous video make it clear I do, so: tax the rich, legislate nefarious corporations, disrupt a billionaire’s life, and put down your phone and go touch grass at least once a day.