Back at the University of Waterloo, the executive team of the local Effective Altruism Club included some of the truest believers in AI Doom: people whose probability of doom, or "p(doom)", was above 90%. Today, two of them work at YC companies building AI, and one is an accepted YC founder. I can't take credit for all of their intellectual journeys, but I do think I played a part.
I know many of my readers can relate to having Doomers in their life. Most of the time, they're not fundamentally bad people. Maybe they took a wrong turn down a bad choice road, and you're wondering how to get them off that road. Here is how.
A Brief History of the Blogosphere
The majority of you will be reading this on Substack. Substack is the intellectual descendant of the blogosphere, a collection of blogs on the early internet. You’ll recognize many of the early blogosphere writers participating in the same debates, whether it’s Scott Alexander, Robin Hanson, Eliezer Yudkowsky, or Curtis Yarvin.
To understand the narrative structure of how people get into AI doom, you have to understand the norms and the culture around this early blogosphere. The message of the early blogosphere was one of meta-debate, or as they would prefer to call it, “rationality.” This isn’t the rationality of the French Revolution, nor the rationality of the Enlightenment.
Blogosphere Rationality is a set of habits and rhetorical techniques that promise to guide you to a better way of thinking, and a better way of thinking about thinking, and so on. This tended towards a style of argument that was elaborate and protracted.
More importantly, it pushed back against many of the legitimate political problems of the day. Within the context of arguments over social science, genetics, politics, and social structures more broadly, there is a legitimate problem of censorship and scientism. Journals have immense publication bias on those issues, much of which remains today. There are cases where "scientists" fabricated evidence. In short, skepticism towards social science was valid.
Selectively ignoring much of this evidence based on intuitions or habits offered genuine improvements. To attack this problem, the blogosphere types essentially constructed long “sequences” of habits and practices ranging from how to structure an argument using elementary mathematical models to further-out practices like meditation or experimental drugs.
The output of these frameworks was a collection of large theories, with varying degrees of supporting evidence. What they all shared was that they were internally consistent, charismatic narratives.
Focus on Evidence, not Fictional Stories
This brings us to the first problem that I think has led many people down the bad choice road of AI Doom. Computer science is not political science. When people report data from a machine learning experiment, almost none of it is fabricated or p-hacked. These are legitimate results. This means that the habit of selectively choosing data based on large, internally consistent narratives has much less validity.
When I ask an AI doomer to explain their worldview, their first point is often to bring up a pure hypothetical, something like the “paperclip maximizer”. The narrative precedes any evidence.
They’ll almost always use the word “thought experiment”, which is not exclusive to them, but is a term that I think is highly misleading. Here’s why: An experiment in science collects data from the real world and uses that data to shape your theories. It can confirm or disconfirm your hypothesis. There is no such possibility in a thought experiment. It is a fictional scenario which provides zero evidence from the real world and has no chance of disconfirming your hypothesis.
A problem with this style of argumentation is that its psychological effect is much stronger than its empirical support. Focusing on hypothetical doomsday scenarios powerfully redirects a listener's attention, regardless of what factual evidence, or lack thereof, supports them. Some of the people I've since deprogrammed were kept up at night by these hypotheticals, even though they had almost no understanding of machine learning. This is the power of a narrative's psychological effect, even when nothing has been done to establish whether it is credible.
If a politician tried this trick, you would probably notice. It's why politicians focus on their best issues, whether it's Trump on immigration or Harris on abortion. Rather than trying to persuade voters on issues where they poll weakly, they win elections by focusing on the issues where voters already agree with them the most.
Ask Well-defined Fact Questions
Before I start this section, I want to re-emphasize that it is for engaging not with an AI doomer leader, but with the random guy in the movement, probably a college student or an early-career software engineer. Among long-time AI doomers there is an organized aversion to any attempt to empirically test claims. They will go out of their way to motte-and-bailey or throw up a new hypothetical like a squid gushing out a cloud of squid ink. If you're engaging with someone who isn't necessarily acting in bad faith but is simply lost in his own squid ink, this approach will not be very effective.
The typical thing I'll bring up is that many of the techniques that have driven machine learning performance in the past, such as CUDA optimization, data type changes, quantization, and even semiconductor scaling through Moore's law, are reaching the end of their development timelines. The things that made machine learning grow rapidly in the past are not going to do so in the future. I wrote an entire article on this called Diminishing Returns in Machine Learning. Another important point is that the trajectory of research, and of economic growth within industries or scientific fields, almost always reaches diminishing returns. Not only do existing ideas cease to generate further improvements, but new ideas become harder to find.
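To make the data-type point concrete, here is a minimal sketch, in plain NumPy with an illustrative (made-up) weight matrix size, of symmetric int8 weight quantization. It shows the kind of one-time win these techniques deliver: memory per parameter drops 4x relative to float32, at the cost of a small rounding error.

```python
import numpy as np

# Hypothetical float32 weight matrix; the size here is illustrative only.
rng = np.random.default_rng(0)
weights_fp32 = rng.standard_normal((4096, 4096)).astype(np.float32)

# Symmetric int8 quantization: map [-max|w|, +max|w|] onto [-127, 127].
scale = np.abs(weights_fp32).max() / 127.0
weights_int8 = np.round(weights_fp32 / scale).astype(np.int8)

# Dequantize to approximate the original weights at inference time.
weights_dequant = weights_int8.astype(np.float32) * scale

print(f"memory: {weights_fp32.nbytes / 1e6:.0f} MB -> {weights_int8.nbytes / 1e6:.0f} MB")
print(f"max rounding error: {np.abs(weights_fp32 - weights_dequant).max():.4f}")
```

The saving from float32 to int8 can only be claimed once; it is not a lever you can keep pulling.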
The thing I like about these questions is that they’re more like the force of gravity than the result of a future election. You’re not engaging in hypotheticals; you’re looking at well-defined facts. Obviously you should support any claim with facts, but maybe more importantly, you’re engaging them in an engineering mindset. You’re taking them out of a state of mind where they’re purely thinking about hypotheticals and letting their imagination take the reins.
Take Marginal Victories
In many cases, when you give someone new evidence, even if they believe the evidence is real, they aren't going to completely unwind their narrative. Instead, they'll fit the narrative to the new evidence. A common response when I bring up these facts is to push the day of doom further into the future. When he interviewed me for his podcast, Nathan Labenz argued that as time progressed, technological developments became increasingly uncertain. There's a grain of truth here: if you go far enough out, centuries or even millennia, technological developments do become unpredictable.
This could be called moving the goalposts. But in many areas of life, adapting an existing narrative in response to new evidence makes more sense than reversing it.
In these cases, I think it's good to let them fit their narrative and push back their dates, but to be very clear about the implications of what they just did. If the concern is AI in 100 years, then wait until you have 80 years of new information before passing destructive laws. Often, their impulse will be to return to the hypotheticals. If you allow that to happen, the imagination will take over again.
Instead, ask them to reflect on how these changes, which I think are actually quite big, should affect their actions, their priorities, their life goals, and their future. Think about it: if you thought the end of the world was 10 years away, and now you think it's 20 years away, that's actually a huge difference. And by asking them what they'll do with their life, you're once again putting them in that more reasonable engineering mindset.
Get Them Involved in Something
When you’re outside of the arena, progress can feel like a magical force that just happens to you. Once you start working in machine learning, even at an entry level, it becomes obvious that people are struggling to come up with new innovations just as you yourself are. Plenty of things that created improvements in the past no longer push the boundaries. There’s nothing like experience to debunk the ridiculous narrative of indefinite exponential growth in machine learning, or any other scientific field or industry.
That leads me to where we are today, where Waterloo ex-EAs now live healthy, happy lives contributing towards machine learning progress. What a happy ending!