How Reasonable Philosophies Led to FTX’s Crypto Scam
This post contains a video, which you can also view here. To support more videos like this, head to patreon.com/rebecca!
Transcript:
If you’re one of my patrons, you may already know that I do a monthly livestream where I answer any questions about anything. This month, Thomas asked me, “What beliefs did (I) hold a year ago that (I’m) not so sure about anymore?” And it just so happens that I’d spent the past two months wrestling with a particular belief that I thought required a bit more examination. In fact, it wasn’t just one belief but a set of interconnected philosophies that I knew had some limits but didn’t quite realize were potentially dangerous to all of humanity. And no, I’m not being extra: these philosophical implications really can be quite bad. That’s the fun of philosophy, isn’t it?
Here are the beliefs in question: effective altruism, utilitarianism, and longtermism.
To be fair, I didn’t hold any of these ideas fanatically: all of them seemed like pretty good ideas to me, with some obvious drawbacks. Let’s go through them: “effective altruism” is about critically evaluating our charity and finding the best way to do the most good with our resources. For instance, last year I talked about the research that suggests the best way to help people up and out of poverty is to simply give them money. Sounds reasonable, right?
Utilitarianism is the philosophy of maximizing happiness. I don’t ardently commit myself to any one philosophy but in general I think that actions are “moral” if they increase happiness and wellbeing in myself and others.
“Longtermism” is actually a term I only learned recently but aspects of it have been part of my own personal outlook for a long time now: generally it’s the idea that we should prioritize humanity’s future. Different people have different understandings of what that means but to me, I always kind of looked forward to a future where, for instance, humans would populate other planets so we don’t go extinct because of one stupid meteor or something, and also I imagined a future where we can upload human consciousness into the cloud, because, well, I guess because I don’t want to die.
So all of those things sound pretty reasonable to me, but recently I realized that unlike me, there are people who take all three of those ideas very, very seriously and it has some pretty terrifying results. I can thank one con artist for this realization: Sam Bankman-Fried, the now former billionaire CEO of crypto pyramid scheme FTX, which finally collapsed last month. Turns out, Bankman-Fried is also into effective altruism, utilitarianism, and longtermism and it resulted in him scamming billions of dollars from people, and honestly that’s like, the least awful possible result of his philosophy.
Let’s go through those reasonable beliefs again but from the perspective of someone who really gets into them: “effective altruism” is not just about trying to do the most good with our resources, but committing 100% of our resources to only the ABSOLUTE BEST way to increase total happiness. Because we’re also utilitarian, right? So the MOST effective kind of altruism will be the thing that increases total happiness the most. And what is that? Well, we can find that in longtermism: if our future utopia involves humanity populating the universe with GAZILLIONS of people and uploading their consciousnesses to the cloud so that they theoretically never die, then THAT population of future humans contains a nearly infinite amount of happiness that will tip the scales in their favor regardless of the happiness you put on the other side of that scale. So, the most effective altruism is whatever ensures those future generations’ existence: space travel, artificial intelligence, quantum computing, body augmentation, etc.
It’s worth noting that some proponents of effective altruism, like Peter Singer, argue that it should be impartial about who receives our help. Singer wrote in 1971 that “It makes no moral difference whether the person I can help is a neighbor’s child ten yards away from me or a Bengali whose name I shall never know, ten thousand miles away … The moral point of view requires us to look beyond the interests of our own society.”
But when you combine effective altruism with utilitarianism and longtermism, that impartiality falls apart: the most effective giving would go to whoever is most likely to bring about that future utopia, meaning that the life of, say, a billionaire white guy who is actively trying to put humans on Mars is worth more than the life of, say, a poor, elderly woman in a Mumbai slum.
All of this also suggests that suddenly the ends really do justify the means: in the actual real-world example of Sam Bankman-Fried, who cares if a few million people lost billions of dollars in the FTX crash if that money was funneled into pandemic preparedness? Sure, a lot of that money was also funneled into political donations to ensure the scam could continue as long as possible, but that would just mean more money to prevent future pandemics that could literally drive the human race to extinction before we have a chance to get off the planet, so…eh?
You can maybe see now how, yeah, a few million people getting scammed out of a few billion dollars is really not the worst possible thing that this philosophy could justify. It could literally justify genocide: what’s the entire population of Africa compared to the future happiness of 10^45 posthumans spread across the galaxy? And yes, that’s a real number cited by longtermist evangelists William MacAskill (who was Sam Bankman-Fried’s “mentor”) and Hilary Greaves in a paper published last June. That’s 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 people who might live forever. They win every time.
And my reference to genocide isn’t all that farfetched: if there’s been a little itch in the back of your brain wondering whether longtermists have dabbled in eugenics, the answer is a resounding “yes”! People like Elon Musk have come to realize that they probably won’t be alive for the big cancer cures, the body mods, and the consciousness uploads to allow them to live forever, so they’ve fallen back on the next best thing: having loads and loads of children. Because a lot of them also believe that their wealth and success and apparent intelligence are coded in their genes, and so they’re pumping out babies at a remarkable rate. Jeffrey Epstein, for instance, had plans to impregnate 20 women (and then eventually have his head and dick cryogenically frozen). Musk has been tweeting constantly about population collapse, and currently has ten children (that we know of). When a reporter asked two “pronatalists” whether or not they were dabbling in eugenics by genetically testing and choosing embryos they felt were most likely to be smart, they laughed and said they were adding “hipster eugenicist” to their business cards.
Learning all of this hasn’t DRASTICALLY changed my own beliefs: I still believe in critically evaluating the effectiveness of our charity; I still believe that, to a moderate degree and in real-world situations, we should strive to increase happiness in the world when we can; and I would still love to see humanity spread out across the stars. But it’s that last one that I’ve reconsidered the most: maybe my selfish hope to live forever isn’t healthy. Maybe the future doesn’t need me, or my very special genetics. Maybe figuring out how to keep people alive forever will just lead to future gazillionaires taking advantage while letting everyone else die, and maybe I wouldn’t even want to live forever in a world of Elon Musks and Sam Bankman-Frieds and other proud “hipster eugenicists”. It’s a lot to think about, but in the meantime I’m just going to keep caring about the current population of Earth and the next few generations who will be dealing with a severely damaged planet. Let’s call it “midtermism.” Moderation in all things, and especially in philosophy and religion.