12 Comments
Yancey Ward:

"It does not experience distrust, anger, envy, fatigue, curiosity, disgust, anxiety, et cetera."

"It doesn't feel pity, or remorse, or fear, and it absolutely will not stop....."

OK, with a setup like that, I simply couldn't resist.

Dallas E Weaver:

Could an AI, given a star map (with only location and brightness in the night sky) and a set of real-world events, produce an astrology-like structure from the pattern of the stars/planets?

Creating a pattern where none exists may reveal some of the limits of AI that are similar to those of the human brain.

Brian Chau:

I think yes, but it would largely imitate the patterns and tropes of existing astrology systems in human cultures.

Dallas E Weaver:

I was thinking of giving it no information on existing astrology systems, just location data and timelines of events, and letting it determine the "correlations" from random patterns. Do AI systems have built-in statistical analysis capacity, or do they just see "correlations" and, like people, assume they are valid and search for more random fits?
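
A minimal sketch of the trap in question: with a thousand random "stars" and only twenty events, a purely mechanical correlation search will surface strong-looking fits by chance alone (illustrative Python, all data synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)

n_stars, n_events = 1000, 20                            # many stars, few events
star_brightness = rng.normal(size=(n_stars, n_events))  # random "brightness over time"
event_magnitude = rng.normal(size=n_events)             # random "real-world events"

# Correlate every star's series with the event series: 1000 independent tests.
corrs = np.array([
    np.corrcoef(star_brightness[i], event_magnitude)[0, 1]
    for i in range(n_stars)
])

# With this many tests on only 20 data points, extreme correlations
# appear by chance despite zero real signal.
print(f"strongest apparent correlation: |r| = {np.abs(corrs).max():.2f}")
```

The largest |r| here typically lands above 0.6 even though the data is pure noise, which is exactly the kind of "valid-looking" fit the comment worries about; without an explicit multiple-comparisons correction, a pattern-matcher (human or machine) will happily report it.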

Hervé Eulacia:

ML is just a tool. It will only find the things you want it to look for, within the constraints and data that you give it. If there is any creativity in the output, it’s due to the human doing the setup.

Yancey Ward:

You are far more optimistic than I am that the sane will prevail.

Mathew Crawford:

Does the chatbot blame its problems on the CCP, Russian oligarchs, people who use binary pronouns, Satan, the Jews, the Deep State, the existence of money, or people in flyover states who don't understand how cow belches are destroying the fabric of the universe?

I just need to simplify my reading stack for the New Year, thanks.

John Carter:

This was a really fascinating series. The insight you've provided into the mechanics of brainwashing ML systems to return answers that are politically acceptable to a clutch of narrow-minded ideological bigots is invaluable. I hope you're correct in the assessment that debiasing these systems will be much easier than the basic training, because you're certainly correct that determining what kind of AI gets used is going to be a vicious cultural struggle.

You might be interested to have a look at a recent series by Mark Bisone. He's been exploring similar issues from a different angle of attack, in essence trying to determine how to break the language model by finding logical-contradiction exploits in its ideological biasing layer.

https://markbisone.substack.com/p/mark-vs-chatgpt-session-1

https://markbisone.substack.com/p/mark-vs-chatgpt-session-2

https://markbisone.substack.com/p/mark-vs-chatgpt-session-3

Egfow:

The open-source GPT variant, GPT-NeoX-20B, is also heavily slanted to the Left when it comes to political content. Out of the box, many of these inference engines need to be retuned to be unbiased.

Alex W:

This is a very helpful piece, but I wonder what its '90s/'00s equivalent would have looked like, discussing web forums or some similar communication tech of that era.

Wouldn’t it have predicted low censorship? Tech is decentralized… easy to set up your own forum. That sort of thing?

AM:

What if you simply remove differing perspectives from the training set? Now that you have a model that can recognize them with a high degree of accuracy, what is stopping the bias engineers from simply testing the inputs against a virtual NYT editor and, if it disapproves, never showing them to the model at all? Do you think the model would still exhibit latent contrary perspectives?
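
A minimal sketch of the pipeline being proposed, with approves() as a hypothetical stand-in for the "virtual NYT editor" classifier (all names here are illustrative, not any lab's actual API):

```python
from typing import Callable, Iterable, Iterator

def filter_training_set(
    documents: Iterable[str],
    approves: Callable[[str], bool],  # hypothetical gatekeeper classifier
) -> Iterator[str]:
    """Yield only the documents the gatekeeper classifier approves.

    Rejected documents never reach training, so the model is never
    exposed to the filtered perspectives in the first place.
    """
    for doc in documents:
        if approves(doc):
            yield doc

# Usage sketch (names hypothetical):
# clean_corpus = filter_training_set(raw_corpus, approves=virtual_nyt_editor)
# model.train(clean_corpus)
```

Whether latent contrary perspectives would survive this depends on how much of the disapproved material is merely paraphrased, implied, or quoted inside documents the classifier lets through.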
