Need to Know
California Governor Newsom vetoed SB 1047 this Sunday.
SB 1047 was written and co-sponsored by the Center for AI Safety, a group linked to Effective Altruism donors Sam Bankman-Fried, Dustin Moskovitz, and Jaan Tallinn.
It was later supported by the Screen Actors Guild, SAG-AFTRA.
The bill faced bipartisan opposition, including Speaker Emerita Nancy Pelosi, House Science Democrats, and California Republican Jay Obernolte.
In open letters with hundreds of signatories, academics and startup founders voiced their opposition.
Governor Newsom signed 17 bills regulating AI this month.
My organization, Alliance for the Future, has its official statement here. “Even in our divided country, it is possible to put aside partisan politics to grow the pie for everyone.”
Key quotes
“A California-only approach may well be warranted - especially absent federal action by Congress - but it must be based on empirical evidence and science.”
~Governor Newsom
“SB 1047 is skewed towards addressing extreme misuse scenarios and hypothetical existential risks while largely ignoring demonstrable AI risks.”
~House Science Democrats, also referenced by Nancy Pelosi
“Some worried that AI models would achieve superhuman intelligence, develop agendas of their own, and disempower or kill all human beings. Notably, this seems to have been the principal concern for Dan Hendrycks, who as head of the Center for AI Safety was an official sponsor of Wiener’s legislation. He has stated that his P(Doom) is 80 percent, meaning that he is very concerned about existential risk from AI.”
~Independent Journalist Tim Lee
What’s Next for AI Regulation?
What comes after SB 1047, by Dean Ball
California Rejects AI Regulatory Extremism, by Adam Thierer
Deep Dive: Why Derivative Liability Hurts Open Source
SB 1047 moves us from our current liability regime to one where companies must police and censor their own customers. Every company already has pre-existing liability, which negates any need for new liability provisions around genuine safety concerns. If a car malfunctions and kills a driver, that opens the company to liability. What car companies aren't liable for are customers who use their cars for vehicular homicide: the person who commits the crime is the person responsible.
SB 1047 tries to change that, an unprecedented step for regulating software. Software is modular: people build on it, modify it, and combine it with other software all the time. That freedom created the soapbox we're using to speak now, and it would not exist if the people who make basic tools (email, operating systems, programming languages, and so on) could be sued for the actions of every cybercriminal.
But SB 1047 goes in an even more radical direction. It doesn't just punish publishers for uses of their AI models, but for modifications of them. These modifications can introduce sensitive or even classified data that were never present in the original model, at almost zero cost. This is the equivalent of saying that a car manufacturer should be punished when someone mounts a machine gun on their car and drives around shooting people with it.
A better alternative: punish those who actually commit the crime.
Crisis averted: https://x.com/psychosort/status/1840111979958022412