The United States of America has a political tradition of personal accountability for crimes. We owe our ability to use the internet, cars, and other tools freely to this legal and political precedent. Effective Altruism, a large non-profit network motivated by doomsday scenarios, is trying to subvert this tradition through bills such as SB 1047. The bill's co-sponsors include the EA-funded Center for AI Safety, which believes that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Fortunately, saner voices may yet prevail. The bipartisan Senate roadmap led by Senate Majority Leader Chuck Schumer (D-NY) focuses on funding basic AI research and promoting AI innovation while addressing more reasonable safety concerns on a case-by-case basis, leading economist Tyler Cowen to proclaim that “The AI ‘Safety Movement’ Is Dead”. We can seize the moment to promote better federal guidelines for AI policy.
Academics, engineers, and founders have criticized SB 1047’s approach to liability. Andrew Ng illustrates the costs of SB 1047, and of this type of regulation more broadly:
For example, an electric motor is a technology. When we put it in a blender, an electric vehicle, dialysis machine, or guided bomb, it becomes an application. Imagine if we passed laws saying, if anyone uses a motor in a harmful way, the motor manufacturer is liable. Motor makers would either shut down or make motors so tiny as to be useless for most applications. If we pass such a law, sure, we might stop people from building guided bombs, but we’d also lose blenders, electric vehicles, and dialysis machines. In contrast, if we look at specific applications, like blenders, we can more rationally assess risks and figure out how to make sure they’re safe, and even ban classes of applications, like certain types of munitions.
SB 1047 demonstrates that extremists concentrated in one state can impose destructive costs on scientific research, economic growth, and national competitiveness nationwide. This motivates a proactive federal approach to preemption: one that creates reasonable guidelines for state regulation against misuse while protecting researchers and startups.
The first order of business for federal preemption is to clearly separate liability for users from liability for researchers. Bipartisan precedents on internet regulation should guide future policy.
As a first step, look toward Good Samaritan protections for LLM developers. While there are imperfections in how these protections should apply to political speech specifically, the general precedent on liability is essential to ensuring a prosperous and competitive AI ecosystem.
Collaborate on extending these Good Samaritan protections here.