If you’re in the DC area, please consider coming to AI Bloomers 4. It will be the last one for some time, since I will be off in California.
In a sentence
If passed, SB 1047 will permanently entrench large AI companies over startups, unfairly target open source, devastate California’s lead in AI research, and cede control to a regulatory agency of unelected decels.
In a paragraph
SB 1047 burdens developers with a mountain of compliance that will prevent startups from competing with legacy companies. It creates a new regulatory agency, the Frontier Model Division, which will impose fees on AI developers while obstructing their research. These unelected bureaucrats will have broad powers to direct criminal liability and to change regulatory standards at will. The bill’s co-sponsor, the Center for AI Safety, is an extremist organization that believes AI research is likely to lead to human extinction. Consequently, the bill is designed to harm AI research itself rather than focus on malicious use, all while going out of its way to target open source through its derivative model standard. California owes its AI research lead to a vibrant startup economy. If we wish to keep it, California must block SB 1047.
In an essay
A bill that threatens the future of startups, open source, and AI research is on its way to becoming law. Introduced by State Senator Scott Wiener and co-sponsored by the SBF-funded doomsayer non-profit Center for AI Safety, SB 1047 passed the California Senate on May 21st and is headed to the State Assembly for a vote this August. If passed, the bill would severely restrict AI research, place asymmetric burdens on open source, and incarcerate developers who fail to predict how their AI models will be used.
The bill creates the Frontier Model Division (FMD), a new regulatory agency within the California Department of Technology funded by fees charged to AI developers. The FMD puts developers of ‘covered models’ between a rock and a hard place: they must either put themselves at risk of felony perjury by applying for a limited duty exemption or shoulder months to years of compliance applications. The California Senate Appropriations Committee estimated costs of “hundreds of thousands of dollars to counties for increased incarceration costs relating to the expansion of felony perjury in this bill.”
In a move against open source, SB 1047 also applies the same requirements and liabilities to developers whose models are used in a ‘derivative model’, defined as:
“(A) A modified or unmodified copy of an artificial intelligence model.
(B) A combination of an artificial intelligence model with other software.”
This means developers of perfectly legitimate AI tools can be held liable simply because their models are used in combination with malicious software. For example, consider an AI that writes simple introduction emails. Alone, that AI harms no one. But if it is used to send emails that include a link to malware, it could facilitate crimes covered by this bill. The model is being used for its intended purpose, writing emails, yet because it is combined with malicious software, the developer could still be held liable under this definition of a derivative model.
Defenders of the bill point to the covered models standard, which includes all AI models trained with 10^26 or more floating-point operations of compute, or those with “similar or greater performance” on any of multiple unspecified benchmarks. In theory, this compute threshold is supposed to limit the bill to covering only large companies. In an open letter, State Senator Wiener argues, “Our intention from the start has been for SB 1047 to allow startups to continue innovating unimpeded while imposing safety requirements only on the large and well-resourced developers building highly capable models at the frontier of AI development.”
However, since model sizes are rapidly increasing, this is a moving target that will affect more companies every year. If current growth trends continue, this bill will likely apply to at least one model released in the next year and to a wide range of models within roughly four years. A major factor behind this trend is the decreasing cost of compute, which means that developers will be able to train larger models for the same dollar amount.
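As a back-of-the-envelope illustration, the sketch below projects when training runs would cross the 10^26 FLOP threshold. The inputs are assumptions, not figures from the bill: roughly 2×10^25 FLOP for today’s largest training run (a common public estimate for GPT-4) and a doubling time of about six months for frontier training compute.

```python
import math

# Illustrative assumptions (not from the bill's text):
THRESHOLD_FLOP = 1e26          # SB 1047's covered-model compute threshold
CURRENT_FRONTIER_FLOP = 2e25   # assumed largest training run today (~GPT-4 estimate)
DOUBLING_TIME_YEARS = 0.5      # assumed doubling time of frontier training compute

# Solve CURRENT * 2^(t / doubling) = THRESHOLD for t:
years_to_threshold = DOUBLING_TIME_YEARS * math.log2(THRESHOLD_FLOP / CURRENT_FRONTIER_FLOP)
print(f"Frontier crosses 1e26 FLOP in ~{years_to_threshold:.1f} years")

# Project the frontier four years out under the same growth assumption:
frontier_in_4y = CURRENT_FRONTIER_FLOP * 2 ** (4 / DOUBLING_TIME_YEARS)
print(f"Frontier in 4 years: {frontier_in_4y:.1e} FLOP "
      f"({frontier_in_4y / THRESHOLD_FLOP:.0f}x the threshold)")
```

Under these assumptions, the frontier crosses the threshold in a bit over a year, and four years out it sits more than 50x above it, which is why models well behind the frontier would also end up covered.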
Moreover, the similar-performance standard means that even startups that develop more efficient models, competing with industry leaders while staying under the compute limit, will still be subject to the same regulation.
The regulations on derivative models also mean that AI startups that build on existing models rather than training their own will see the number of models available for them to modify greatly limited.
There is still time to move in a better direction. In the US Congress, the bipartisan roadmap led by Senator Chuck Schumer (D-N.Y.) offers a model that prioritizes funding AI research over restricting it. Meanwhile, California risks being captured by an extreme ideology: existential AI safety, funded by big tech donors such as convicted FTX founder Sam Bankman-Fried and Facebook co-founder Dustin Moskovitz, whose adherents believe that AI research is likely to cause human extinction.
Time after time, California has been the playground of radical ideologies that both national parties reject. We cannot let that happen with AI.
In a reading list
https://x.com/psychosort/status/1784663936513524074
https://x.com/psychosort/status/1792674391475802461
https://www.context.fund/policy/sb_1047_analysis.html
https://www.answer.ai/posts/2024-04-29-sb1047.html
https://x.com/JvNixon/status/1793693913984999804
https://1a3orn.com/sub/essays-ca-1047-one.html
https://www.rstreet.org/commentary/california-and-other-states-threaten-to-derail-the-ai-revolution/
https://www.thefai.org/posts/california-s-push-to-regulate-ai-goes-too-far