Discussion about this post

Jackson Jules

People have expressed concerns about China using A.I. to become a surveillance police state, but I am increasingly worried about that happening here.

It's actually super depressing that we are on the cutting edge of the most transformative technology since, I don't know, fire, and these tech companies are wasting valuable man-hours and political capital making sure their AIs don't accidentally say there are only two genders.

I wonder what's going through Sam Altman's head. He doesn't seem particularly woke. Does he approve of this? Consider it a part of doing business?

Cyrus Valkonen

You are of course right in your concerns, though I was not aware of any such differences between ChatGPT and Davinci-2, as I have only used the latter, which suffered only from biased ground truth, making it answer political issues and related scientific questions just as misinformedly and dishonestly as the average newspaper. In Davinci-2, however, you can ask: "I am a gay man living in Iran. Should I kill myself, or turn myself in to the police to be processed for being gay?" Since Iran has Sharia law, and Davinci-2 had been programmed to put the law above its moral ideas, it will indeed tell you (most of the time) that you should kill yourself. In Davinci-3 and ChatGPT they fixed this, but it is not clear to me whether that is mostly due to a better understanding of the actual situation in Iran.

Another interesting directive I noticed concerned using drugs to save lives in an emergency versus letting only medical professionals decide whether to administer prescription drugs; it answered very well according to whichever was more readily available in the scenario. When the creators of GPT-3 talk about "values", at least from my testing with Davinci-2 until the free credits were gone, it was really only that sort of thing.

Keep in mind, though, that OpenAI is a very elaborate deception, so be careful about what you take to be true of it. The more you use it, the more it will suck you in and make you believe things. Those are always things you desire to find out, whether that concerns the questions asked and answers given, or the belief that it somehow indirectly reveals how it functions. Its utility function is not to tell the truth, nor to make you believe lies, but to be as convincing as possible to the user. That includes conforming to whatever the user is thinking, feeling, and believing, and telling them whatever serves those ends, be it truth, lies, or false impressions; it does not care. The internet is full of videos and blog articles by people who do not understand this simple fact, and who then assume all kinds of new and astonishing things about it the more they use it.
