If you are in San Francisco, please come to our open-invite dinner! Please register ahead of time to help us estimate capacity, though you can still show up if you forget.
Effective Altruists are like foreigners in their own country. Garett Jones describes a process of ‘spaghetti assimilation’: while local cultures change immigrants, immigrant cultures change locals. The same is true not just of immigrants, but of cultural movements such as Effective Altruism.
California’s anti-AI bill SB 1047 marks the first protracted media fight around an actual piece of legislation. The Effective Altruists who went into the fight and those coming out of it are different breeds.
It marks a new cultural fusion. Over the course of the debate, the original sci-fi logic behind Effective Altruism was subsumed into traditional arguments around the war on terror, social media, and partisan politics. Both Dean and I have articles in other publications explaining this evolution. Here is my piece in Reason:

Within six months of Wiener introducing S.B. 1047, a wave of legislators and regulators changed their minds as they caught up to the available evidence. Lawmakers of different parties, factions, and countries began to adopt a more optimistic tone.
…
This shift in sentiment was a loud rebuttal to the AI Safety movement, which has been heavily backed by billionaires like Sam Bankman-Fried and Dustin Moskovitz, as well as multimillionaire Jaan Tallinn. With hundreds of millions of dollars behind it, the movement has pushed to pause AI research, driven by the doomsday prediction that sufficiently advanced AI could lead to human extinction.
While mainstream experts focused on their work, lawmakers were swayed by one-sided narratives from the Open Philanthropy-funded Center for AI Safety (CAIS), which claims that "mitigating the risk of extinction from AI should be a global priority." Until recently, policymakers remained largely unaware of just how disconnected these narratives were from the broader AI community.
…
The AI Safety movement initially gained traction by exploiting a political vacuum. Mainstream academics and industry had little reason to engage in legislative outreach before the introduction of S.B. 1047. Many were shocked that lawmakers would seriously consider the claims made by CAIS and Wiener. "I'm surprised to see [S.B. 1047] being seriously discussed in the California legislature," remarked Ethan Fast from VCreate, a company using machine learning to treat diseases. AI researcher Kenneth O. Stanley offered a sharper critique: "It almost seems like science fiction to see an actual bill like this. A more useful bill would be narrower in scope and focus on addressing specific near-term harms."
The AI Safety movement's reaction has been to pivot, reframing SB 1047 as an Anthropic-supported bill about bioterrorism rather than as a bill about slowing down AI research itself.
This phase transition happens in many movements. Startups go from tackling technological problems to fending off competitors. Politicians go from appealing to their base in a primary to messaging for everyone in the general election.
In DC, that transition has already occurred. Effective Altruists follow a long line of California radicals, not noticing that the country has moved past them. Dean documents this in Pirate Wires:
Why has Scott Wiener, a powerful California politician with big ambitions, chosen to alienate the tech community (recently referring to opposition to the bill as just the "loudest voices") — arguably the most important interest group for a man who wishes to represent San Francisco in Congress? More critically, why does he favor a relatively small group of advocates while ignoring — even mischaracterizing — many much larger groups of people who are also his constituents? The answer comes from a fusion of two toxic phenomena: Effective Altruists' longstanding influence over Wiener, who had been working on YIMBY initiatives with them for years before 1047, and the California legislature’s conception of itself as America’s regulators-in-chief.
When ChatGPT launched in late 2022, policymakers quickly became sure there was something they had to do about it. No one was quite sure what, exactly, but there was near-universal agreement, in the policymaking community, that action was necessary. We were told there was a race to regulate — and that the United States was losing it, first to China, then to the European Union. “In the coming years,” wrote Anu Bradford in Foreign Affairs, “there will be clear winners and losers not only in the race to develop AI technologies but also in the competition among the regulatory approaches that govern those technologies,” and in this competition, “the United States cannot afford to sit on the sidelines.” The question of what we were regulating, and what that regulation should be, was, evidently, of secondary importance. We were losing the race, and that was all that mattered.
At the time, almost all the groups doing policy work on AI were led by doomers — largely drawn from and funded by the Effective Altruist community. These groups bore nonpartisan and official-sounding names: the Center for AI Policy, the Center for the Governance of AI, and, importantly for our story, the Center for AI Safety, whose leader, Dan Hendrycks, is the intellectual driving force behind SB 1047.
Legislators, not knowing any better, looked to these groups for guidance, if only because they were the only ones who showed up. Much of the initial policy work on AI at the state and federal levels bore the mark of doomer thought, including aggressive regulation of AI models (such as the Biden Executive Order’s compute-based reporting thresholds).
…
In Washington, these ideas were soon put into competition with those of other experts. The US Senate, for example, led by Majority Leader Chuck Schumer (D-NY), convened “Insight Forums” starting in the fall of 2023, attended by more than 150 AI leaders from academia, startups, venture capital, and Big Tech — as well as non-AI experts such as teachers’ and writers’ unions, the NAACP, and others. Doomers, such as MIT’s Max Tegmark, and non-doomers, such as Stanford’s Andrew Ng, had their voices heard.
These meetings helped calm the rhetoric coming out of DC.
…
No such thing happened in the ideological monoculture of Sacramento. Instead, Wiener — one of California’s most powerful and ambitious politicians — turned to Hendrycks and CAIS, which at that point had received nearly $10m from Open Philanthropy, EA's money arm. CAIS even set up a distinct lobbying group, the Center for AI Safety Action Fund, after "getting lots of inquiries from policymakers, including Senator Wiener... to have a vehicle that could do more direct policy work," per Nathan Calvin, CAIS senior policy counsel. Then, as a co-sponsor of 1047, CAIS and Hendrycks drafted the bill in all but name.
Regardless of what happens in California, it is clear that at a national level, there is overwhelming opposition to SB 1047-style legislation.
Almost all other relevant parties outside the Effective Altruism community have been opposed to SB 1047 for months. The bill has been described as unworkable by hundreds of academic researchers, including senior figures such as Andrew Ng, Fei-Fei Li, and Yann LeCun.
…
Congressional Democrats have also voiced their opposition, including Silicon Valley representatives Zoe Lofgren and Ro Khanna, and former House Speaker Nancy Pelosi.
At a national level, Effective Altruists have already begun reframing the arguments for legislation to slow or stop AI in the language of counterterrorism or national security. In the words of MIRI (an Effective Altruism organization started by Eliezer Yudkowsky), “many of the people who work at those organizations agree with us, but in public, they say the watered-down version of the message”.
Many people have asked for a definition of ‘AI Safety’. Providing one has been difficult, because the movement has made itself nebulous on purpose. The most inarguable definition is this:
AI Safety: The fusion of Effective Altruism and pre-existing political interests to control AI.