12 Comments
Dec 24, 2022 · Liked by Brian Chau

"It does not experience distrust, anger, envy, fatigue, curiosity, disgust, anxiety, et cetera."

"It doesn't feel pity, or remorse, or fear, and it absolutely will not stop....."

Ok, with a setup like that, I simply couldn't resist.

Dec 24, 2022 · Liked by Brian Chau

Could an AI, given a star map (with only the location and brightness of objects in the night sky) and a set of real-world events, produce an astrology-like structure from the patterns of the stars and planets?

Creating a pattern where none exists may reveal some limits of AI that are similar to those of the human brain.

Dec 24, 2022 · Liked by Brian Chau

You are far more optimistic than I am that the sane will prevail.


Does the chatbot blame its problems on the CCP, Russian oligarchs, people who use binary pronouns, Satan, the Jews, the Deep State, the existence of money, or people in flyover states who don't understand how cow belches are destroying the fabric of the universe?

I just need to simplify my reading stack for the New Year, thanks.


This was a really fascinating series. The insight you've provided into the mechanics of brainwashing ML systems to return answers that are politically acceptable to a clutch of narrow-minded ideological bigots is invaluable. I hope you're correct in the assessment that debiasing these systems will be much easier than the basic training, because you're certainly correct that determining what kind of AI gets used is going to be a vicious cultural struggle.

You might be interested to have a look at a recent series by Mark Bisone. He's been exploring similar issues from a different angle of attack, in essence trying to determine how to break the language model by finding logical-contradiction exploits in its ideological biasing layer.

https://markbisone.substack.com/p/mark-vs-chatgpt-session-1

https://markbisone.substack.com/p/mark-vs-chatgpt-session-2

https://markbisone.substack.com/p/mark-vs-chatgpt-session-3

Dec 28, 2022 · edited Dec 29, 2022

The open-source GPT variant GPT-NeoX-20B is also very slanted to the Left when it comes to political content. Out of the box, many of these models need to be retuned to be unbiased.


This is a very helpful piece, but I wonder what its 90s/00s equivalent would have looked like when discussing web forums or some similar communication tech of that era.

Wouldn’t it have predicted low censorship? Tech is decentralized… easy to set up your own forum. That sort of thing?


What if you simply remove differing perspectives from the training set? Now that you have a model that can recognize them with a high degree of accuracy, what is stopping the bias engineers from simply testing the inputs against a virtual NYT editor and, if it disapproves, not even showing them to the model? Do you think the model would still exhibit latent contrary perspectives?
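The filtering scheme described above can be sketched in a few lines. This is purely illustrative: the `disapproves` classifier here is a toy keyword matcher standing in for the commenter's hypothetical "virtual NYT editor", and none of these names come from any real training pipeline.

```python
# Hypothetical sketch of classifier-gated dataset filtering:
# every candidate training example is scored by a "disapproval"
# classifier, and rejected examples never reach the model.

from typing import Callable, Iterable, List

def filter_training_set(
    examples: Iterable[str],
    disapproves: Callable[[str], bool],
) -> List[str]:
    """Keep only the examples the gating classifier approves of."""
    return [ex for ex in examples if not disapproves(ex)]

# Toy stand-in classifier: flags any example containing a banned keyword.
BANNED = {"keywordA", "keywordB"}

def toy_editor(text: str) -> bool:
    return any(word in text for word in BANNED)

corpus = ["a benign sentence", "a sentence with keywordA in it"]
filtered = filter_training_set(corpus, toy_editor)
# filtered == ["a benign sentence"]
```

The open question in the comment is whether a model trained only on the `filtered` corpus would still reconstruct the excluded perspectives from what remains.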
