Discussion about this post

Jakey

Brian, what would it take to talk to you over the phone? I and lots of other people are really trying to understand what you see in this article. As it stands I think you have gone a little off your rocker here. I think people would appreciate it if you were explicit about which enforcement mechanism or proposal inside the paper you find totalitarian, because you seem to find some implication obvious that others do not.

First, to state the obvious: neither AI safety research programs like the stuff ARC does, nor simple licensing & red-teaming requirements for AGI research companies based in the U.S., require totalitarianism. Yes, if you propose almost any law, some people will break it without draconian enforcement. But that doesn't mean those laws "require" a totalitarian regime any more than a law against stealing requires a totalitarian regime. So the title here is just sort of wrong.

The article suggests monitoring and restrictions over AGI companies, in the form of monitoring their hardware and research activities. The governance proposal for this may or may not lead to "regulatory capture", but A. that's a projection, and B. we already have regulatory capture of several industries and that hasn't led to totalitarianism. We also have BSL restrictions over e.g. dangerous gain-of-function research, so it's not like this sort of intense regulation hasn't been applied to sectors of technology before.

The paper further suggests limiting proliferation of AGI models. You say this is also against free speech, but we already limit "proliferation" of e.g. government secrets, military hardware, nuclear weapon designs, etc. Adding foundational ML research to the list of things the government doesn't want you to share with China is not, in and of itself, a notable encroachment on civil liberties, at least not in the sense that it limits your ability to debate policy or criticize your government, which is the feature of "totalitarian" speech restrictions that people care about. You can say these things will *lead* to totalitarianism or are a slippery slope, but then your article is again a projection, not a statement about what EAs are directly asking for.

Philip Skogsberg

When I read this I just think of the debates around security, freedom, free speech, etc.

Freedom sows the seeds of its own destruction, in that people are free to choose to become unfree or less free. Some think we must limit it because too much freedom gives some people too much power, and that it's naturally the job of the government to do so.

I also like how you opt to use "ML" instead of "AI" in many cases; ML feels mundane whereas AI can seem scary.
