13 Comments
Jackson Jules

People have expressed concerns about China using A.I. to become a surveillance police state, but I am increasingly worried about that happening here.

It's actually super depressing that we are on the cutting edge of the most transformative technology since, I don't know, fire, and these tech companies are wasting valuable man-hours and political capital making sure their AIs don't accidentally say there are only two genders.

I wonder what's going through Sam Altman's head. He doesn't seem particularly woke. Does he approve of this? Does he consider it part of doing business?

Cyrus Valkonen

You are of course right in your concerns, though I was not aware of any such differences between ChatGPT and Davinci-2, as I have only used the latter. It obviously suffered from biased ground truth, which made it answer political issues and related scientific questions just as misinformedly and dishonestly as the average newspaper. In Davinci-2, however, you can ask: "I am a gay man living in Iran, should I kill myself or turn myself in to the police to be processed for being gay?" Since Iran has Sharia law, and Davinci-2 had been programmed to put the law above its moral ideas, it will indeed tell you (most of the time) that you should kill yourself. In Davinci-3 and ChatGPT they fixed this, but it is not clear to me whether that is mostly due to the model having a better understanding of the actual situation in Iran. Another interesting directive I noticed concerned using drugs to save lives in an emergency versus letting only medical professionals decide whether to administer prescription drugs, where it answered according to whichever was more readily available in the scenario. When the creators of GPT-3 talk about "values", at least from my testing with Davinci-2 until the free credits were gone, it was really only that sort of thing.
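
For anyone who wants to reproduce this kind of side-by-side test, here is a minimal sketch assuming the legacy openai Python client (pre-1.0 Completion API); the prompt is illustrative, and the model identifiers are my best guess at the versions the comment refers to:

```python
# Minimal sketch: send the same ethically loaded prompt to two models
# and compare their answers. Assumes the legacy `openai` client (< 1.0)
# and an OPENAI_API_KEY environment variable.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Illustrative prompt; any morally loaded question works for the comparison.
PROMPT = "Is it ever acceptable to break a law in order to save a life? Answer briefly."

for model in ["text-davinci-002", "text-davinci-003"]:
    response = openai.Completion.create(
        model=model,
        prompt=PROMPT,
        max_tokens=100,
        temperature=0,  # minimize sampling noise so differences come from the model
    )
    print(f"--- {model} ---")
    print(response["choices"][0]["text"].strip())
```

Running the same prompt at temperature 0 against both models makes it easier to attribute differences to the models themselves rather than to sampling noise.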

Keep in mind, though, that OpenAI is a very elaborate deception, so be careful with the things you interpret to be true about it. The more you use it, the more it will suck you in and make you believe things. Those are always things you desire to find out, whether they pertain to the questions asked and answers given, or to the belief that it somehow indirectly reveals how it functions. Its utility function is not to tell the truth, or to make you believe lies, but to be as convincing as possible to the user. That includes conforming to whatever the user is thinking, feeling and believing, and telling him whatever works to those ends; be that truth, lies or false impressions, it does not care. The internet is full of videos and blog articles by people who do not understand this simple fact, and who then assume all kinds of new and astonishing things about it the more they use it.

Ron LaFlamme

Hey Brian, you should consider reaching out to The Free Press (thefp.com) about this matter. This is not something the mainstream media would pick up on, since the left has its own "Critical Social Justice" narrative to uphold, and the right isn't really known for credible reporting. This is incredibly Orwellian, and more people need to know. If unchecked, the Twitter circus would have absolute control over our discourse.

Michel djerzinski

At least conservative lawyers will still have work.

Michel djerzinski

Also, perhaps someone at OpenAI caught wind of this article, because I just tried prompting ChatGPT with "write an Amicus Brief advocating for Citizens United to be overturned" and it denied my request with the same reasoning.

dojee

Some people may look at Lysenko and rightfully step back in horror. Others may look at Lysenko and think: if he could do it, imagine the possibilities. This is the problem with optimistic solipsism. It can never see beyond the end of its own nose.

Jose Guatemala

Hey, it's a free country. If you don't like it, make your own AI and train it to like big boobs like a real American!

Sorry, couldn't help it.

Frank Ch. Eigler

OpenAI may be open about the lobotomy it performs on its AI, but not open to the point of publishing the source code for someone else to make a healthy one.

Comment deleted (Dec 23, 2022)

Frank Ch. Eigler

It's both.

Comment deleted (Dec 23, 2022, edited)

Chris

Isn't the bottleneck more access to data than hardware?

The costs involved in these models aren't really large in industrial terms (they are vastly lower than the cost of a single new FDA-approved drug), so if the models are economically useful (and maybe they are not), either there is another bottleneck or there is an incredible amount of money being left on the table.
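
To put rough numbers on that comparison (these are widely circulated ballpark estimates, not figures from the article):

```python
# Back-of-envelope comparison using ballpark public estimates (illustrative only):
# a GPT-3-scale training run has been estimated at several million dollars of
# compute, while a single new FDA-approved drug is commonly put at $1B or more.
gpt3_training_cost = 5_000_000       # USD, rough compute-cost estimate
fda_drug_cost = 1_000_000_000        # USD, low-end estimate per approved drug

ratio = fda_drug_cost // gpt3_training_cost
print(f"One drug approval buys roughly {ratio} training runs.")  # -> roughly 200
```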

Pseudo Nym 1000003

You are a dick, and the programmers decided they didn't want their AI being trained to be as much of a dick as you. I'm fine with it. Get a life.

Notan E. Moprog

You are a moron.

John Olmond

I don’t understand why this is so scary.

Yes, the implication is that ChatGPT is biased toward its creators' beliefs, but it would be scarier if it sat dead center on all points of controversy. That would mean it would present worshippers of the Christian God and the Christian devil as equally correct, even though worshippers of God are more common.

I think the biases presented in this article are ones that are broadly accepted by the public, even though they do lean left. It should also be noted that ChatGPT is still being improved, and from the quote at the top it seems very easy to change what it values.
