10 Comments

“I believe that modern policy is not a contest between intellectuals or ideologies, but rather a continual siege in which cognitive elites have to stave off midwit social climbers and the continual entropic flow of political economy.” Brilliant sentence. It’s basically the NPC meme where the mids can only say ESG DEI beep boop. Other than tech, where are cognitive elites in power?


Hey, SOME of us bureaucrats do try to do things for reasons other than political economy. (I think your point is broadly correct, though.)

Looking forward to the next posts in the series. I'm hopeful you address the counterargument of "weak AI increases researcher productivity which increases chances of a breakthrough that could bend the flattening curve back upwards."

Jun 2, 2023 · Liked by Brian Chau

From Reason.com, Of interest to those concerned with AI regulation:

https://reason.com/2023/06/01/josh-hawley-ai-chatgpt-tech-regulation-democrats/

"Josh Hawley Wants the Government To Silence A.I.

The Missouri senator is once again pursuing misguided tech regulation.

...His solution is for the government to increase the liability incurred by companies that use A.I., such that they can be sued by users for engaging in misinformation.

...Hawley's overall anti-tech agenda overlaps neatly with Democratic regulatory priorities. Both Republicans and Democrats have joined together to demand the repeal of Section 230, which would subject social media platforms to increased liability for user-generated speech. "

That does seem the most likely regulatory push, whereas what's needed is something like Section 230 for AI, not a repeal of the one for the internet.

Jun 2, 2023 · Liked by Brian Chau

re: "Instead, most practical policy attempts a totalitarian crackdown on access – normal people’s ability to use models no smarter than ChatGPT is right now."

Yup: though fortunately I think the masses want ChatGPT, and ideally see that it'd be great to have even better models that address some of its flaws. While it's true that there are folks, like the group that filed an FTC complaint, who seem to want these models shut down, it seems unclear that's a likely outcome. The biggest danger may be those trying to hold AI vendors liable for the consequences of people believing false statements from AI. AI may need the equivalent of a Section 230 to prevent that; this page goes into it:

https://PreventBigBrother.com

As that notes, a more likely outcome unfortunately is either regulatory capture, or the use of the regulatory process to let government, or OpenAI and Anthropic's private "democracy" (perhaps even more prone to tyranny), steer things. The American founders knew well the dangers of the emotional mob. Today some idealists like pure direct democracy and want the net to enable it, without seeming to fully grasp the need for protections from it.

Although those who advocate for free speech and a free market of ideas may be losing ground these days, it still seems useful to tie the idea of AI regulation to speech regulation, where the founders were particularly concerned about emotional mobs silencing others. That kills two birds with one stone as people try to spread respect for free speech and open debate. AI is a tool to aid in the production of speech (as that site I linked notes), and that shouldn't be impeded; nor should people's ability to hear the AI speech they want be infringed, though the First Amendment likely doesn't protect that. Perhaps the same arguments can be used anyway. Or not; it's just one approach. I suspect many need to be explored and thrown into the public debate.


Punching the air in celebration of this wisdom.

Oct 23, 2023 · edited Oct 23, 2023

"Any area where cognitive elites have power I’m optimistic for and I see my primary political mission as shifting the balance of power more in that direction."

I just started reading you, and I've found a good chunk of what you say interesting. However...

- eugenics (and forced sterilization)

- Twitter (before Musk) and most other social media platforms gave us censorship of all sorts of actually true things (COVID origins, COVID vaccine efficacy opinions from legit experts, Hunter Biden laptop, etc., etc.)

- ethanol [admittedly, that one is more complex than purely from cognitive elites, but you cannot deny it originated and got *major* support from there]

- San Francisco [again, complex, but surely there is a much higher proportion of cognitive elites in S.F. getting their way than in any other big city]

- elite university speech codes

- elite universities this month unwilling to denounce the side decapitating babies.

Feel free to ignore my first 4 examples and focus only on the last 2.

Surely you are aware of William F Buckley's quote: "I'd rather entrust the government of the United States to the first 400 people listed in the Boston telephone directory than to the faculty of Harvard University."

Without trying to argue that this view is "always" right, how do you justify the diametrically opposite view? Do you have evidence on your side for this? There is plenty on the other side. Or are you suggesting - as die-hard socialists/Marxists do - that it's simply never been done "properly" before?

[Disclosure: I am a conservative-leaning libertarian, or a libertarian-leaning conservative. By any reasonable definition, I'm a member of the cognitive elite, and I generally think cognitive-elite authoritarianism is one of the biggest problems in our society.]


Brian, I read your posts and listen to your podcasts because I believe that you're willing to form an opinion based on the facts, as you interpret them. I'm an old guy, a fully formed conservative. Yet I am willing to change my beliefs/opinions if "the facts" warrant a change. Moreover, I believe that everyone operates from a set of beliefs. Reasonable people adjust their perspectives when confronted by often unwelcome new realities.


re: "cognitive elites have to stave off midwit social climbers and the continual entropic flow of political economy. "

Sometimes that attempt to "stave off" is to steer them toward lesser evils. It'd be great if AI were entirely rational and didn't cave in to woke worldviews, etc., but that seems an unlikely outcome at the moment in the real world. Just as we want AIs that aren't woke, others are going to want "harmless" ones. Anthropic talks about wanting democratic input, as does OpenAI. That risks a mixture of tyranny of the majority and/or Taleb's dictatorship of the most intolerant minority.

While many might be tempted by the idea of a "cognitive elite" AI that helps educate the masses, pragmatically those concerned about "harm" won't let that happen if its output would be offensive to the woke who want to be shielded. The idea of AI pluralism allows coexistence between the different factions. I'd suggest considering, even if you disagree with the details, whether to spread word of this approach to pluralism, which uses leftist language about diversity and their concerns over "democracy" to argue not for a one-size-fits-all AI, but for a rainbow of AIs:

https://RainbowOfAI.com

While OpenAI and Anthropic are debating "democracy," maybe some folks can be nudged toward democracy as guidance on defaults, rather than their likely intention to ensure it's "harmless," with their definition of "harm" likely being woke. Perhaps that approach is flawed, but it seems useful to get those ideas into the dialogue to inspire others. Maybe it takes different ways of talking about ideas related to AI pluralism to find one that'll register, so it seems worth trying to boost ideas that aren't yet getting spread; it seems that author has no following.

Jun 2, 2023 · edited Jun 3, 2023

This kind of god complex is very dangerous and precisely the reason for all the disinformation and its detrimental outcomes during the COVID pandemic. The goal of journalism must always be truth, not outcome, or else you will just end up deceiving people. Think of the pandemic: how high an IQ did it require to avoid vaccination? 140 to 150, maybe? Highly intelligent and highly educated people read this stuff, and they will be misled into acting just as stupidly as (or worse than) people in the low IQ range. This is a very serious topic. It is time to get realistic and to sidestep any fringe perspectives. If you think more for yourself, just don't talk about it. But this? Believe me, I know everyone from university professors to engineers and students. They are not that smart. They still read the newspaper and just believe it out of comfort or laziness. Maybe you found some kind of social niche echelon where that is less so, but this is not your reader base. Statistics, man. Smart people do the stupidest things if miseducated.
