Besides the Bostrom and Hanson criticisms, this seems low on examples. I think the AI one is not a good one. Lots of old EAs are in favor of regulating AI.

I'd be curious to read some more examples.

We old-school, GiveWell-supporting EAs may just need a new name. As far as I can tell, GiveWell is still going strong!

Let's carve out our own movement, and let the longtermist AI doomers do their own thing far away from us.

Tyler Cowen has another line on this: "The demographics of the EA movement are essentially the US Democratic Party. And that's what the EA movement over time will evolve into."

I think that's accurate. Unless you're interested as a specialist in one of the popular cause areas, EA has become less intellectually interesting. But it may make for a marginally better Democratic Party. Compare AI safety to AI ethics, for example.

Nov 20 · Liked by Brian Chau

"Better to have loved and lost, than never to have loved at all."

The critique here seems to be, basically, that as EA becomes more popular, more people will claim the EA label while deviating in undesirable ways from the ideals of the group's founding members.

But as a good EA I don't evaluate the movement by how ideologically pure it remains or even how much I want to hang out with people who call themselves EAs but by how much good it does. The most successful path for EA *always* meant influencing the public more broadly to care more about effectiveness.

So that's not failure! That's what success looks like. That's the nature of successful movements, they grow and attract people who have related but somewhat different views. It's no different than the fact that democratic activists have to accept people who don't support their full agenda into the tent to win elections. If EA had remained a pure community of contrarians that means it would have failed to influence the vast majority of charitable giving to care more about effectiveness.

Those of us who liked the old EA community and ideals can easily enough find a new word to describe ourselves. Hell, that group was always better described by the term rationalists or the like anyway.

Data? How has GiveWell's portfolio changed?

This might be a "blind men and the elephant" problem, but I don't think old-style EA is how you characterized it. (I agree with the other commenters that this post is low on evidence.)

My introduction to EA was via GiveWell, which I still think of as the heart of old-style EA. Bed nets for the win! The broad attack on non-profits is weird, given that GiveWell is a non-profit that directs funds to other non-profits. It seemed like conventional wisdom was that charitable giving is harder than investing (lack of good reporting standards and metrics, along with some uncertainty about goals) but that's not a reason to avoid it altogether, because charities do things that for-profit companies won't. It's a reason to be picky, and also to avoid areas that for-profit businesses cover well.

"For-profits are bad" is a corruption of "for-profit companies don't do everything, and they don't invest (enough) in some causes we care about." Uncorrupted, it's a way of avoiding taking sides on capitalism and allowing for organizations with goals other than making money.

It seems like taking sides on capitalism is the opposite of that? It's engaging in culture war rather than avoiding it. Being reflexively anti-regulation is a libertarian thing, not an EA thing. When was EA about wanting to repeal the FDA?

You seem to be arguing that old-style EA was a consequence of libertarianism, rather than being libertarian-friendly, which is different.

When I first heard of EA, I thought it was an excellent idea. However, I was observing up close a similar decay in my area of environmental sciences.

I have watched the analytical thinking of 1970s environmental science, which dealt with real issues like burning rivers, NOx formation, and acid rain, evolve into sloppy activist thinking where every activist has their own "truth." The deterioration of the results became apparent as more money flowed in, activists became regulators, and effort achieved worse outcomes, to the detriment of humanity. Because real quantitative science often gives results that don't conform to people's feelings, emotions, desires, and greed, and because real decision-making power is totally arbitrary, the evolutionary institutional pressure is to force compliance with feelings and non-quantitative thinking until the point where reality intervenes. Reality intervenes rapidly in private-sector profit-making institutions, but not in monopoly government and NGO non-profit institutions.

For a specific example: aquaculture (growing aquatic plants and animals) is effectively dead in the US, with zero growth for almost half a century, versus double-digit growth in the rest of the world. Aquatic animals convert feed into meat more efficiently than land animals (a factor of 2 to 4 less feed energy per kg of meat on the table); fish and shrimp don't waste energy standing up or keeping warm. The activists were much better propaganda creators, and most of their so-called science was pure garbage and p-hacking. "Truth" is irrelevant.

Sorry to see your EA go to hell the same way. Some parts of the world didn't buy the emotional line on aquaculture, and China is now the dominant player. Perhaps China is using EA-style quantitative methods of analysis to achieve its social goals, but its "goals" may be a bit different from ours.

Nov 19 · edited Nov 19

At a mood level, it feels weird to associate AI doomers with irrational feminized college students. Yud & co., the original AI doomers, along with Bostrom, another early doomer, and early EA have led the charge on AI doomerism. They also seem like the opposite of "irrational feminized college students"; the archetype is an (over?)rational, male-coded autodidact. Through their own methodology they've arrived at AI Safetyism, even while remaining neutral to negative on Safetyism more generally. So the over-rational male-coded nerds end up bedfellows with the irrational feminized college students. As a historical account, it feels wrong to say the _reason_ for this is regression to coastal college norms, given that many of the OG EA leaders have been leading this charge, and many of the other leaders are OG over-rationalists rather than irrational.

And... over-confidence in one's own rationality is a huge problem, so by describing this group as over-rationalists, I don't mean to give them more credit, particularly. Maybe they _have_ talked themselves into a form of Safetyism that's flawed and dangerous for the same reasons as all the other kinds of Safetyism. It just doesn't seem like "regression to coastal college norms" is an accurate account of what's happened.

> And thus EA went from caring about proving its points with empirical evidence to socially desirable narratives. It went from wanting to repeal the FDA to wanting to make a new one for AI.

I'm not sure you want to explain this solely by reference to different people joining the identity. You can read Scott Alexander repeatedly decrying the FDA at the same time that he's posting a series of hysterical freakouts about AI.

There is nothing special about giving mosquito nets to Africa. It's not groundbreaking. It probably even makes the world a worse place (Africa isn't getting any better, and they will probably mass-immigrate and destroy the first world). The Gates Foundation was using metrics long before EA, and you can look up the limitations.

All of these EA people would have been better off working for-profit and funding other smart people to do for-profit stuff. PayPal mafia forever. Would have made the world richer and better.

"Flew in world class autists" aaaand done. Bai

Does anyone believe in values any more? That is actually the reason to give anything to anyone.

> This was the lie that donating based on your feelings was the best way to help people. This lie wasn’t just false, it was obviously the opposite of the truth. Every year, there are plenty of news reports of how fraudulent charities would steal money, or somehow even make the problem worse.

The logical consequence of the nature of this lie is not EA, but rather Ayn Rand's Objectivism.
