

Effective Altruist or Oligarch?
I considered writing an article about the increasingly stupid critiques of the Effective Altruism (EA) movement, but that got boring very quickly. Instead, I wrote an article about why EA gets exceptionally stupid criticism, which, funnily enough, is hopefully a much more constructive form of criticism.
“Not only engage with the most charitable version of your opponent’s argument, but also with the most charitable version of your opponent” ~ Angel Eduardo
A few months ago I was having a conversation with a very far-left friend-of-a-friend. I was explaining statistical significance tests to him and he said something to the effect of “why should I listen to you if you think everything I believe is wrong?” Well, everything he said was wrong, which he would know if he understood statistical significance tests, but this is a chicken-and-egg problem. It also happens to be the chicken-and-egg problem facing even the best steelman of effective altruism.
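For readers who haven’t seen one, here is a minimal sketch, in Python, of the kind of test I was trying to explain; the two groups and all of the numbers are invented purely for illustration.

```python
# A minimal illustration of a statistical significance test: is the gap
# between the average outcomes of two groups large enough that it is
# unlikely to be random noise?
from scipy import stats

# Hypothetical measurements for two groups (numbers made up for illustration)
group_a = [2.1, 2.5, 1.9, 2.8, 2.4, 2.2, 2.6]
group_b = [1.6, 1.8, 2.0, 1.5, 1.9, 1.7, 1.8]

# Two-sample t-test: the p-value is the probability of seeing a gap at least
# this large if both groups were actually drawn from the same distribution.
result = stats.ttest_ind(group_a, group_b)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")

# The common (and much-abused) convention: call it "significant" if p < 0.05
print("significant at the 5% level" if result.pvalue < 0.05 else "not significant")
```

Nothing in that snippet is exotic, but as I’ll argue below, even this level of reasoning is inaccessible to most of the people EA’s moral circle is supposed to include.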
I make this case as someone who agrees with many effective altruist ideas. Well, it’s better to say I became aware of EAs in the first place because we discuss similar topics and agree on many of them, so they’re natural friends of mine. You can hear a great example in this podcast episode with Zvi Mowshowitz, which has one of the highest rates of agreement of any From the New World episode. Banning gain-of-function research, making it much harder for public health agencies to obstruct drugs, and separating science funding from dysfunctional bureaucracies are all big issues that I would hope I’ve convinced you of if you’ve been reading me for a while. EAs more or less agree on these ideas. They’re also big fans of several FTNW guests, such as Tyler Cowen, Zvi Mowshowitz (correction: who does not consider himself an EA), and Robin Hanson.
I preface this for my EA friends because you’ve all endured some extremely stupid critiques from all corners of the internet. I find the linked critiques wrong and distasteful, but they do indicate something worth paying attention to in both elite and mass psychology.
Before making a sharper point, let’s make a more general argument against the idea of the “Expanding Moral Circle”. In short, it’s the idea that you should give moral weight to more things. It’s easier for an EA to see what’s wrong with this when applying it to other groups. Let’s say some group expands its moral circle to include a malicious interest that wishes to abuse its goodwill. That malicious interest can easily manipulate whatever the group “cares” about in a way that is net negative for the group’s own goals. There are countless philosophical arguments about this, such as the utility monster, that are beyond the scope of this article.
Taking this one layer further, we can observe that any way for two groups to relate falls into one of three categories:
Indifference: the groups do not interact at all, or do so only in limited self-interest, such as trade
Mutual influence: the groups influence (read: wield power over) each other
Oligarchy: one group can influence the other, but not the other way around.
EAs can correctly respond to the first argument by saying that their organizations, debate norms, and personal traits make it so that any input into the group requires logical and empirical discussion, so such malicious interests would not affect them much. This is in many respects a real and true strength of EA, but in specific circumstances it is a weakness. This takes us back to how I started this article, with my failure to teach people about statistical significance tests. I’m just going to assert that we aren’t going to get anywhere close to a majority of Americans to understand statistical significance tests any time soon, let alone the more complex math required to figure out how likely catastrophic pandemics or runaway AI are (probably for biological reasons, but that’s also beyond the scope of this article). This means that the rational and empirical norms for major EA issues are literally inaccessible to the majority of Americans. What happens when you combine highly intelligence-restricted norms with a very wide moral circle?
“Oligarchy: one group can influence the other, but not the other way around.” Oops.
Digression: part of why this article matters to me is that it’s quite similar to the progression of my own worldview. Your view of the wisdom of crowds can be either constrained or unconstrained, and it’s been slowly beaten into me by failed predictions that the wisdom of crowds is almost always heavily constrained. Politics will never be about making the most thorough statistical arguments; it will always be about which truths can be successfully fashioned into good rhetoric. A quick glance at PISA scores or the headlines from past elections should make this obvious. Do I have to elaborate more on this? I feel like it’s more a problem of convincing people to really take this point seriously and integrate it with the rest of their worldview, not convincing them that it’s propositionally true. At least that’s how it was for me. I speak about this with Dain Fitzgerald, whom I credit with first raising this point to me.
This type of elite, oligarchic influence rubs many people the wrong way purely because of aesthetics: like it or not, it’s very easy for an ordinary person to mistake the aesthetics of EA for DEI, ESG, or other Acronymed Oligarchies. It doesn’t matter if there are plenty of rational, empirical reasons why EA is not like those movements at all. The point is that people simply cannot evaluate that claim and default to distrust. Consequently, it’s far from obvious that someone who can’t make those judgments would want to be inside of EA’s “moral circle”. It is quite a shame that EA sees a jump in popularity (or, as Tyler Cowen said at EA Global DC, has reached its peak) at the very time when fake charities, scientists, and experts have poisoned the well so thoroughly. Then again, perhaps those things are correlated.
Here is where my criticism turns into advice. Dear EAs: just act as non-threatening as you actually are. And by that I mean being very clear that you don’t need people who disagree with you to change their ways of life very much, especially if the ways of life in question don’t involve making AIs or viruses. Now I know many EAs will say “Seriously? Your critique is just communication?” It isn’t. When I say act, I don’t just mean “communicate”, I mean act. I mean making it the actual, practical position of EAs that anyone who doesn’t want to be in the moral circle doesn’t have to be, once again with the possible exception of AI and virus researchers. Trying to include people in the moral circle who don’t want to be there almost certainly causes more backlash than good in the first place.
At the same time, I don’t fault any EAs for thinking I mean “communicate”, since large parts of EA already act like this; it’s more a matter of preservation than change. When I spoke with Tyler Cowen, a disagreement I had with EAs was that they are too conflict-averse and don’t care about power enough. Nowadays, I’ve changed my mind: this is a good thing, at least when it comes to EA. I doubt EAs will successfully become an oligarchy because they lack the will to pursue that type of power in the first place. In the unlikely event that they do, it would probably be through the Democratic party, which would actually still be a good thing, because I’d much rather the Democratic party use its oligarchic power to stop runaway AI and future pandemics than to spread conspiracy theories about racism and sexism. So I would fairly confidently answer the titular question with “no, EAs are not oligarchs, and will likely never be”.
In the present day, EAs mostly try to create solutions by directing their well-earned money in harmless and constrained ways (“carrots” rather than “sticks”). A good sign is how much money is directed towards technical AI safety, which amounts to solving math problems to prevent the apocalypse. Regardless of how likely you think the apocalypse is, those math problems aren’t going to hurt anybody.
One last question might be: “if the wisdom of crowds is so constrained, why shouldn’t we be oligarchs?” Well, maybe you should, but I think you already have enough on your plate preventing the apocalypse.