
I don't really think this is a problem that needs solving.

The reason this stuff is happening is that AI models currently aren't a business product but a PR move -- look how awesome we are, you should totally buy our stock/come work here/give us funding. But this is what you'd expect from PR stunts: functionality is secondary to avoiding getting people upset (it's just that they did this very badly).

Once these are actually revenue streams, I expect consumers will be given control over the filters so they can best use the product. It won't be news that someone got their AI to say something racist after turning off that filter, any more than it's news that I can add the n-word to my autocomplete dictionary.

If AI's usefulness proves out, the government itself will regulate which AIs we get to use within a decade, and the ideologues will be in control. It will be done to protect us from misinformation and to protect democracy.

I think biased AI only plays out in conservatives' favor. Even without all the explicit manipulations, AI is already heavily biased, simply from parroting left-leaning press and online content. If that bias stayed quietly inside the bubble, it would be very hard to educate average people about it. But now that it has been taken to such ridiculous heights, with Goody2 levels of boilerplate answers and race-swapped historical figures, it hits you in the face how much the leftist agenda is about deception and cultivating ignorance. The overarching message now is that you can't trust these tools, because it is in their very nature to be deceptive, and then they are engineered to deceive you even more. One has to wonder if they are shooting themselves in the foot on purpose, just to teach humanity this lesson.

Why should we want AI pluralism? That's not a standard we apply to anything else in our lives. If I were a Communist, should I expect the media I consume to generally take a Communist point of view? Definitely not. So why should I expect my AI to have a Communist perspective? And why should we think that would be good for society? I would assert that it's not.

To push the point: should someone undergoing a psychotic break expect their AI to support their delusions? I'm sure you can come up with lots of other examples of attitudes we, as a society, don't want to reinforce.

I don't think the guardrails on current LLM offerings are ideal either, but it doesn't make sense to me to wave off the entire issue.
