Many people misinterpret why I write. They expect me to arrive at a single fixed ideological position, as if to hand them an instruction manual for how to act or tweet. No.
First, I write for truth. If not for truth, there is no point in writing. But I’m not afraid of prescriptive writing either: I think people should change their beliefs and actions based on factual arguments. Still, there’s a use people seem to want from my articles that I despise: wielding them to dunk on ideological enemies in support of their oversimplified position on the current thing. An example is my recent article Diminishing Returns on Machine Learning.
Some people took this as an occasion to tribally dunk on EAs, which was nowhere near the point of the article. Of course the article clearly disagrees with some of the more extreme AI risk takes, like Eliezer Yudkowsky’s. But the original EA argument – that there is a small chance of superhuman AGI and that the possible costs are worth worrying about – is still consistent with some degree of decline in progress. Even if you have counterarguments against that position, it isn’t completely invalidated by my piece. The point of the article is not to say “GPUs are over”; it’s to provide some grounding for predictions about the trajectory of hardware development.
Some editors asked me to tie various parts of the piece to a central narrative. I completely understand if the goal is to make the piece more viral, or even more fun to read for normal people. But in my view there’s a certain fakeness to that; it isn’t how I think, and I’d worry that the quality of my thinking would suffer if I wrote that way. Starting with a narrative and trying to fit data points in like puzzle pieces is a recipe for confirmation bias. But as Curtis Yarvin put it, “very few people have the capacity for unmotivated thinking”.
Part of interacting with many people with different interests and priorities is that I identify generalized flaws in their understanding of the world. How should people change their minds after reading my articles? It varies from person to person. That being said, I tend to advocate for axis shifts, where people focus on the problems that actually matter, instead of on things that are wrong or impossible. In the machine learning example, the core variable to focus on is the amount of research being done, which is what’s necessary to drastically expand the performance – and consequently the abilities – of AI. That’s the core question for Doomers, Accelerationists, and everyone in between. Instead, most practical policy attempts a totalitarian crackdown on access – normal people’s ability to use models no smarter than ChatGPT is right now. This is economically disastrous, with almost no upside in terms of restricting research – the thing which actually matters. It is, however, the default approach of regulators (a job which selects for people who are paranoid and despotic). The core axis shift is to move the issue away from access entirely: something which the aforementioned despots will fight. In my view, there aren’t enough smart people in the world for us to be squabbling amongst ourselves while far dumber and far more evil people control the levers of power, so the priority for me is always the axis shift. As long as the debate is over something sensible and reality-based in the first place, the outcome will be better than the status quo.
This is my view in almost every policy area. There are productive debates and unproductive debates, and people should aim for the former. This doesn’t mean that I don’t have moral preferences – it does mean that I’m happy with half measures. Part of the intellectual fervor of the internet is that there will be intense debates between some of the best people I know. Many of my friends want to prove themselves the smartest of them all in these debates. I just look at something like that and think to myself that a random interpolation of their positions would be far better than the status quo. That’s the point of aligning (pun intended) the debate to a better axis. On an axis where compromises can be made while keeping 90% of the value for both ‘sides’, most outcomes should be reasonably satisfactory. An axis where people squabble over minute details that wouldn’t ultimately affect any practical solution anyway is the opposite.
If I were to summarize all of this in one sentence, it would be this: I believe that modern policy is not a contest between intellectuals or ideologies, but rather a continual siege in which cognitive elites have to stave off midwit social climbers and the continual entropic flow of political economy. For any problem X, if X doesn’t get solved, I doubt it’ll be because we couldn’t persuade a fellow informed, intelligent Substack writer to adopt a sensible compromise; instead it’ll be because some bureaucrat or legislator, too motivated by status and risk aversion, ended up missing the forest for the trees. I believe this to be true for AI risk, pandemic prevention, trade, immigration, postliberalism, demographic differences, and whatever else. Any area where cognitive elites have power, I’m optimistic about, and I see my primary political mission as shifting the balance of power further in that direction.
“I believe that modern policy is not a contest between intellectuals or ideologies, but rather a continual siege in which cognitive elites have to stave off midwit social climbers and the continual entropic flow of political economy.” Brilliant sentence. It’s basically the NPC meme where the mids can only say ESG DEI beep boop. Other than tech, where are cognitive elites in power?
Hey, SOME of us bureaucrats do try to do things for reasons other than political economy. (I think your point is broadly correct, though.)
Looking forward to the next posts in the series. I'm hopeful you address the counterargument of "weak AI increases researcher productivity which increases chances of a breakthrough that could bend the flattening curve back upwards."