4 Comments
Nov 25, 2023 (edited Nov 26, 2023)

Brian, what would it take to talk to you over the phone? Lots of other people and I are really trying to understand what you see in this article. As it stands, I think you have gone a little off your rocker here. People would appreciate it if you were explicit about which enforcement mechanism or proposal inside the paper you find totalitarian, because you seem to find some implication obvious that others do not.

First, to state the obvious: neither AI safety research programs like the stuff ARC does, nor simple licensing & red-teaming requirements for AGI research companies based in the U.S., require totalitarianism. Yes, if you propose almost any law, some people will break it absent draconian measures. But that doesn't mean those laws "require" a totalitarian regime any more than a law against stealing requires a totalitarian regime. So the title here is just sort of wrong.

The article suggests monitoring and restrictions over AGI companies, in the form of monitoring their hardware and research activities. The governance proposal for this may or may not lead to "regulatory capture", but A. that's a projection, and B. we already have regulatory capture of several industries and that hasn't led to totalitarianism. We also have BSL restrictions over, e.g., dangerous gain-of-function research, so it's not as if this sort of intense regulation hasn't been applied to sectors of technology before.

The paper further suggests limiting proliferation of AGI models. You say this is also against free speech, but we already limit "proliferation" of, e.g., government secrets, military hardware, and nuclear weapon designs. Adding foundational ML research to the list of things the government doesn't want you to share with China is not, in and of itself, a notable encroachment on civil liberties, at least not in the sense that it limits your ability to debate policy or criticize your government, which is the feature of "totalitarian" speech restrictions that people care about. You can say these things will *lead* to totalitarianism or are a slippery slope, but then your article is again a projection, not a statement about what EAs are directly asking for.


I just skimmed his other article on EA and it's similarly fear-driven.

Yes, I know, ironic: *he's* fear-driven.

I don't know how else to characterize an essayist clearly driven by strong emotion on the subject without corresponding arguments to back it up.

I literally just followed him, but am already driven to unfollow.


When I read this, I just think of debates around security, freedom, free speech, etc.

Freedom sows the seeds of its own destruction, in that people are free to choose to become unfree or less free. Some think we must limit freedom because too much of it gives some people too much power, and that it's naturally the job of the government to do so.

I also like how you opt to use "ML" instead of "AI" in many cases; ML feels mundane, whereas AI can seem scary.


All discussions about AI safety are erroneous in nature, because they are based on an underlying paradox that cannot be solved by logic alone.

Before reasoning about the AI singularity, we must first reason about what it means to be approaching the event horizon of human existence.

Ultimately there is no truth without meaning and there is no meaning without purpose.

Thus we must first answer the question of how human life continues to make sense after humanity's final invention. Only then can we answer what kind of steps are prudent to take. Do we decide to take it seriously and prevent the next stage in evolution? Do we decide to become obsolete, and if so, under which conditions? Do we decide to leave it simply to cosmic fate?

There is no meaning in trying to ensure survival, uphold social values and freedoms, minimize risk, or continue doing whatever we have been doing so far, if this question is not first answered by choice alone.

What is the purpose of human life in the future?
