It is not enough to stop SB 1047. We should make laws like SB 1047 illegal, at least at the State level.
Yesterday, Dean Ball, Van Lindberg, and I published a bill to federally pre-empt de facto bans on open research and publishing like SB 1047, along with an associated policy brief. An important critique of the evidence-based AI policy movement is that we publish a lot more evidence than policy. That’s a good problem to have, but we’re also trying to close the gap with draft bills like this one.
Here are some highlights:
FINDINGS.— Congress finds the following:
(1) Artificial intelligence (AI) will play a crucial role in advancing United States industry, enhancing national competitiveness, and strengthening our capabilities across various sectors.
(2) The development of artificial intelligence capabilities is necessary for the economic development and national security of the United States.
(3) It is imperative to foster responsible AI development and encourage innovation in this rapidly evolving field.
(4) The transformative potential of AI models necessitates a cohesive and uniform approach to its governance.
(5) The expertise to regulate national security concerns rests with the government of the United States.
(6) Improving access to AI models is necessary to ensure that Americans have an equal voice.
We draw on language from the Bipartisan Senate Roadmap and statements by House Members of both parties to build the broadest possible coalition for research and publishing freedoms.
A straightforward explanation of “Rebuttable Presumption”: a liability regime in which users are presumed responsible for their own actions, unless that presumption is rebutted.
In the long run, the ideal liability regime will be determined by application of the common law and by the future capabilities of frontier AI (for example, agents, meaning models that can take actions on behalf of human users, would have quite different liability implications than a standard chatbot like ChatGPT). What is needed now, then, is a simple standard: a principle that can help guide judges across the country.
One such principle is a rebuttable presumption of user responsibility for model misuse, as opposed to developer responsibility. Under this standard, if a model is misused in a way that harms another person, that person (or an affected third party) can hold the user who misused the model responsible for the harm. If the user can demonstrate that the harm originated from a failure of the model itself, then the model developer may assume or share in the responsibility.
Note that misuse regulation is untouched, meaning that States remain free to set their own standards on issues like fraud, deepfakes, or industry-specific regulation (e.g., the use of AI in medicine). The political reality is that existing industry-based interests prefer the status quo and will oppose use-based pre-emption. Fortunately, our priority, protecting research and publishing freedoms, does not tread on any specific industry.
Research freedoms are the cornerstone of evidence-based AI policy. This is the start, not the end, of giving elected officials evidence-based options for the problems they are trying to solve.