Besides the Bostrom and Hanson criticisms, this seems low on examples. I think the AI one is not a good one. Lots of old EAs are in favor of regulating AI.
I'd be curious to read some more examples.
Another example given is that OpenAI was set up as a non-profit, which is a very weird example because it happened eight years ago and was the choice of Sam Altman and co. rather than of EAs.
Even the Bostrom and Hanson criticisms seem weak. Current EA consensus on Bostrom is something like "his work on AI x-risk was extremely important, even if he's said some stuff I disagree with". Qualified praise isn't some new phenomenon; it's how EAs have felt about Singer since the beginning. Bostrom is still director of the Future of Humanity Institute. Hanson has always been more EA-adjacent, but in his areas of expertise (e.g. prediction markets) he's still as involved as ever, and was recently a keynote speaker at Manifest, an EA-adjacent conference. I fail to see how "some people were upset at Bostrom and Hanson for comments they made, but ultimately they remained in good standing among EAs" is a criticism of EA, much less enough to support the much stronger thesis of this post.
100% agree. A stronger case would use quantitative data - especially survey data about EA opinions, career choices, and/or donation flows over time.
Yeah, the old vs. new EA distinction seems vague here... AI safety/alignment has been valued by EA for years, e.g. Scott Aaronson. It's also a topic that has been in their website intro.
Can you expand on the "feminization" of EA?
Yeah, this completely misunderstands why old EAs were against regulating AI.
It's not that people were intrinsically opposed to pursuing AI regulation; it's that they doubted the possibility of good regulation and weren't really sure what to ask for anyway. Now that the situation has changed, I suspect that most of the older EAs have updated as well.
We old-school, GiveWell-supporting EAs may just need a new name. As far as I can tell, GiveWell is still going strong!
Let's carve out our own movement, and let the longtermist AI doomers do their own thing far away from us.
Tyler Cowen has another line on this: "The demographics of the EA movement are essentially the US Democratic Party. And that's what the EA movement over time will evolve into."
I think that's accurate. Unless you're interested as a specialist in one of the popular cause areas, EA has become less intellectually interesting. But it may make for a marginally better Democratic Party. Compare AI safety to AI ethics, for example.
"Better to have loved and lost, than never to have loved at all."
The critique here seems to be basically that, to the extent EA becomes more popular, more people will claim the EA label while deviating in undesirable ways from the ideals of the group's founding members.
But as a good EA, I don't evaluate the movement by how ideologically pure it remains, or even by how much I want to hang out with people who call themselves EAs, but by how much good it does. The most successful path for EA *always* meant influencing the public more broadly to care more about effectiveness.
So that's not failure! That's what success looks like. That's the nature of successful movements: they grow and attract people who have related but somewhat different views. It's no different from the fact that democratic activists have to accept people who don't support their full agenda into the tent to win elections. If EA had remained a pure community of contrarians, that would mean it had failed to influence the vast majority of charitable giving to care more about effectiveness.
Those of us who liked the old EA community and ideals can easily enough find a new word to describe ourselves. Hell, that group was always better described by the term rationalists or the like anyway.
Having said all that, yeah, I think Yudkowsky et al. are dead wrong on AI doomerism and regulating AI, but they aren't particularly mainstream anyway.
Data? How has GiveWell's portfolio changed?
This might be a "blind men and the elephant" problem, but I don't think old-style EA is how you characterized it. (I agree with the other commenters that this post is low on evidence.)
My introduction to EA was via GiveWell, which I still think of as the heart of old-style EA. Bed nets for the win! The broad attack on non-profits is weird, given that GiveWell is a non-profit that directs funds to other non-profits. It seemed like conventional wisdom was that charitable giving is harder than investing (lack of good reporting standards and metrics, along with some uncertainty about goals) but that's not a reason to avoid it altogether, because charities do things that for-profit companies won't. It's a reason to be picky, and also to avoid areas that for-profit businesses cover well.
"For-profits are bad" is a corruption of "for-profit companies don't do everything, and they don't invest (enough) in some causes we care about." Uncorrupted, it's a way of avoiding taking sides on capitalism and allowing for organizations with alternative goals than making money.
It seems like taking sides on capitalism is the opposite of that? It's engaging in culture war rather than avoiding it. Being reflexively anti-regulation is a libertarian thing, not an EA thing. When was EA about wanting to repeal the FDA?
You seem to be arguing that old-style EA was a consequence of libertarianism, rather than being libertarian-friendly, which is different.
When I first heard of EA, I thought it was an excellent idea. However, I was observing up close a similar decay in my area of environmental sciences.
I have watched the initial analytical thinking of 1970s environmental science, which dealt with real issues like burning rivers, NOx formation, and acid rain, evolve into sloppy activist thinking where every activist had their own "truth". The deterioration of results became apparent as more money flowed in, activists became regulators, and their efforts achieved worse outcomes, to the detriment of humanity. Because real science with quantitative thinking often gives results that don't conform with people's "feelings", emotions, desires, and greed, and because real decision-making power is totally arbitrary, the evolutionary institutional pressure is to force compliance with feelings and non-quantitative thinking until the point where reality intervenes. Reality intervenes rapidly in private-sector profit-making institutions, but not in monopoly government and NGO non-profit type institutions.
For a specific example, aquaculture (growing aquatic plants and animals) is effectively dead in the US, with zero growth for almost half a century, while the rest of the world has seen double-digit growth. Aquatic animals are more efficient than land animals at converting feed into meat on the table (a factor of 2 to 4 less feed energy per kg of meat); fish and shrimp don't waste energy standing up or keeping warm. The activists were much better propaganda creators, and most of their so-called science was pure garbage and p-hacking. "Truth" is irrelevant.
Sorry to see your EA go to hell the same way. Some areas of the world didn't buy the emotional line on aquaculture, and China is now the dominant player. Perhaps China is using EA's quantitative methods of analysis to achieve its social goals, but its "goals" may be a bit different from ours.
At a mood level, it feels weird to associate AI doomers with irrational feminized college students. Yud & co., the original AI doomers, alongside Bostrom, also an early AI doomer and early EA, have led the charge on AI doomerism. They also seem like the opposite of 'irrational feminized college students'; the archetype is an (over?)-rational, male-coded autodidact. Through their own methodology, they've arrived at AI Safetyism, even as they've been neutral to negative on Safetyism more generally. So the over-rational, male-coded nerds end up bedfellows with irrational feminized college students. As a historical account, it feels wrong to say the _reason_ for this is regression to coastal college norms, given that many of the OG EA leaders have been leading this charge, and many of the other leaders are OG over-rationalists rather than irrational.
And... over-confidence in one's own rationality is a huge problem, so by describing this group as over-rationalists, I don't mean to give them more credit, particularly. Maybe they _have_ talked themselves into a form of Safetyism that's flawed and dangerous for the same reasons as all the other kinds of Safetyism. It just doesn't seem like "regression to the coastal college norms" is an accurate account of what's happened.
This is basically exactly what I'm trying to establish in the article. The median EA is not like Yud but in fact significantly worse than Yud, believe it or not. There's no substitute for going to the EA conferences and orgs to demonstrate this, but it's evidently true.
Hmmm, okay, I am not arguing with you that the median EA is worse than Yud, but even Yud has been pretty critical of OpenAI, right? He probably hasn't specifically complained about them being too capitalist, but he's leading the charge on safetyist critiques of OpenAI, and he's on record (on the Bankless podcast) saying he'd prefer to just shut it all down. I suppose he doesn't make the anti-capitalist critique that you're criticizing in the post, but he goes much further, rather than being more restrained, when he says he wants to shut it down.
"The old school EAs are alright" and "AI safetyism is dangerous" seem like contradictory takes to me. The best synthesis is "old school EAs are alright but they're wrong on this", I suppose.
> And thus EA went from caring about proving its points with empirical evidence to socially desirable narratives. It went from wanting to repeal the FDA to wanting to make a new one for AI.
I'm not sure you want to explain this solely by reference to different people joining the identity. You can read Scott Alexander repeatedly decrying the FDA at the same time that he's posting a series of hysterical freakouts about AI.
There is nothing special about giving mosquito nets to Africa. It's not groundbreaking. It probably even makes the world a worse place (Africa isn't getting any better, and they will probably mass-immigrate and destroy the first world). The Gates Foundation was using METRICS long before EA, and you can look up the limitations.
All of these EA people would have been better off working for-profit and funding other smart people to do for-profit stuff. PayPal mafia forever. It would have made the world richer and better.
"Flew in world class autists" aaaand done. Bai
Does anyone believe in values any more? That is actually the reason to give anything to anyone.
I would argue that the statement of EA that Brian gives:
> This was the lie that donating based on your feelings was the best way to help people. This lie wasn’t just false, it was obviously the opposite of the truth. Every year, there are plenty of news reports of how fraudulent charities would steal money, or somehow even make the problem worse.
is confusing two issues.
There is whether the charity's cause is good, and there is whether the charity is effective at advancing its nominal cause.
There's nothing wrong with rating charities according to how effective they are at achieving their own stated goals. But EA is very loud and proud about its mission to rate charities according to how good or bad their goals are. This is a mistake; if you have decided that you're going to make some kind of charitable donation, then the correct way to choose "what does the charity that I'm donating to do?" is by reference to your feelings. (Where "feelings" and "values" are synonymous.)
> This was the lie that donating based on your feelings was the best way to help people. This lie wasn’t just false, it was obviously the opposite of the truth. Every year, there are plenty of news reports of how fraudulent charities would steal money, or somehow even make the problem worse.
The logical consequence of this lie is not EA, but rather Ayn Rand's Objectivism.
And subject to the same devolution that Objectivism fell into.
I actually don't know that much about this. Whatever happened to Objectivism?
I'd say that Objectivism really fell apart after the Nathaniel Branden affair. In short, Ayn Rand argued herself into having a "rational" affair with Nathaniel Branden, who was her protege and intellectual heir. When Branden developed a relationship with Patrecia Scott, Rand was enraged, condemned Branden, and threw him out of the Objectivist movement.
Her subsequent intellectual heir, Leonard Peikoff, was dogmatic and uninspiring; after Rand's death, Objectivism rather quickly dwindled away.
AFAIK, Objectivism is still a thing, but has nowhere near the influence it did during the mid 1960s.
I've met Objectivists, and unstable family lives and relationships seem very common. Objectivist moral values are very much at odds with traditional family values. It is not a surprise that most of the characters in Ayn Rand's novels are single and childless.
"I'm going to love this person for better or worse."
"I'm going to sacrifice for my child."
Those aren't Objectivist sentiments. When Objectivists do marry, there seems to be a lot of divorce.
I’ve been using the presence of children as a metric for the usefulness of a philosophy for a while now. If the philosophy doesn’t mention children, it’s a bad philosophy.
So that gets rid of Objectivism, Libertarianism, and neo-Marxism.
Combine that with the mountains of skulls test, and off you go!