No One Disagrees That AI Safety Requires Totalitarianism
Reading A Paper By EAs, For EAs
A few months ago, I went to EA Global London and had the above interaction with an attendee.
Now, onto the meat of today's post: https://arxiv.org/abs/2307.03718
This is a beautiful paper. It is beautiful because it is a bunch of EA-affiliated people independently coming to the conclusion that total authoritarianism is necessary to prevent what in their eyes is the threat AI poses to all of humanity. Since this sounds like an exaggeration, I will quote verbatim, or rather post screenshots verbatim, for most of this article.
The crux of the argument: AI creates so much innovation that it can’t be controlled top-down.
They predictably call for exactly the kind of regulatory capture most convenient to OpenAI, Deepmind, and other large players. They list some costs and benefits here. All of the citations about positive examples of AI are demonstrations of AI in the real world, while all of the negative citations are just their own hypothetical scenarios, constructed with no basis in reality.
Remember how they said earlier that it’s hard to define or prove harm, only the “possibility” of harm? That’s the key to the coming crackdown. At this point everyone in the audience should be aware that these are not just powerless academics, these are members of think tanks, government agencies, companies, and corporate boards with real power.
Re 1: the limiting factors in designing new biological weapons are equipment, safety, and not killing yourself with them.
Re 2: Not just wrong but the complete opposite of the truth. Based on an incorrect understanding of the legacy press.
Re 3: Cyberattacks resulting from machine learning adoption are real, but far from catastrophic. Think the argument through: people's ML algorithms will be hacked and become worse than having no ML algorithm at all? Does anyone believe this circular argument about any other technology?
Re 4: The footnote is just linking to their hypotheticals again, no real examples.
At What Cost?
I’ll start this section off with praise: they have put together an actual plan that would indeed put AI companies under government control. This is a competent crackdown strategy. It would work if not stopped by legislators, voters, or courts.
Additionally, targeting AI companies and hardware companies at least doesn't include normal users. I say this to contrast with the "AI Ethics" crowd, who want to crack down on AI output, which is indistinguishable from the writing and art of normal people. Consequently, their preferred crackdown is completely indistinguishable from a total crackdown on normal people's freedoms of speech and association. I say this to give EAs a bit of context, since they tend to ally themselves with people who would be far, far worse than themselves.
That being said, this will nonetheless be extremely economically destructive. Additionally, as we've seen with social media censorship, drug regulation, or civil rights law, this can easily end in the regulatory state capturing the companies themselves and deputizing them to censor or otherwise attack individuals.
They lay out the three key problems:
If you pause for a moment and read the lines carefully, you will realize they are all synonyms for freedom. An equivalent reading:
The Unexpected Capabilities Problem: ML is easily used to create innovation.
The Deployment Safety Problem: ML is easy to update and build upon.
The Proliferation Problem: People have a First Amendment right to share ML.
This diagram is amazing because it's just directly advocating for regulatory capture. This looks like a diagram set up by Theodore Roosevelt-style trust busters to criticize concentrated monopolies and their influence on politics. This is what the authors of the paper are directly advocating for. In case this wasn't clear, they also directly say they want regulatory capture in the next paragraph:
I said earlier that their plans are less totalitarian than those of the "AI Ethics" scam artists, but they are apparently not opposed to working with them. Other than that, this is classic entrenchment of political constituencies – subsidizing people who are ideologically loyal with taxpayer dollars.
Any endorsement of the EU crackdown strategy should be immediately rejected by any remotely sane American politician. I have an article in Pirate Wires criticizing the broader framework of crackdowns as China-lite and in some cases China-mega, if you want article-length treatment of this topic.
My normal style is to do more legislative forecasting and give a broader understanding of what measures are likely to be worsened by political coalitions, but with this paper I don’t even feel that is necessary. I guess we should thank EAs for saying the quiet part out loud: “Totalitarian crackdowns are necessary, we want to unify all companies through regulatory capture, freedom is the enemy and must be eliminated” all in one paper.
I expect one category of reply because I've already encountered it in real life: "But Brian, the article is correct, if we don't do totalitarianism we'll all die!"
If you want the direct arguments, start here:
Brian, what would it take to talk to you over the phone? I and lots of other people are really trying to understand what you see in this article. As it stands I think you have gone a little off your rocker here. I think people would appreciate it if you were explicit about which enforcement mechanism or proposal inside the paper you find totalitarian, because you seem to find some implication obvious that others do not.
First, to state the obvious: neither AI safety research programs like the stuff ARC does, nor simple licensing & red-teaming requirements for AGI research companies based in the U.S., require totalitarianism. Yes, if you propose almost any law, without draconian measures some people will break the law. But that doesn't mean those laws "require" a totalitarian regime any more than a law against stealing requires a totalitarian regime. So the title here is just sort of wrong.
The article suggests monitoring and restrictions over AGI companies, in the form of monitoring their hardware and research activities. The governance proposal for this may or may not lead to "regulatory capture", but A. that's a projection, and B. we already have regulatory capture of several industries and that hasn't led to totalitarianism. We also have BSL restrictions over e.g. dangerous gain-of-function research, so it's not like this sort of intense regulation hasn't been applied to sectors of technology before.
The paper further suggests limiting proliferation of AGI models. You say this is also against free speech, but we already limit "proliferation" of e.g. government secrets, military hardware, nuclear weapon designs, etc. Adding foundational ML research to that list of things the government doesn't want you to share with China is not, in and of itself, a notable encroachment on civil liberties, at least not in the sense of limiting your ability to debate policy or criticize your government, which is the feature of "totalitarian" speech restrictions that people care about. You can say these things will *lead* to totalitarianism or are a slippery slope, but then your article is again a projection, not a statement about what EAs are directly asking for.
When I read this I just think of debates around security/freedom/free speech etc.
Freedom sows the seeds of its own destruction, in that people are free to choose to become unfree or less free. Some think we must limit it because too much freedom gives some people too much power; naturally, it's the job of the government to do so.
I also like how you opt to use "ML" instead of "AI" in many cases; ML feels mundane whereas AI can seem scary.