Discussion about this post

Peter Gerdes:

I don't really think this is a problem that needs solving.

The reason this stuff is happening is that AI models currently aren't a business product but a PR move -- look how awesome we are, you should totally buy our stock/come work here/give us funding. But this is what you expect from PR stunts: functionality is secondary to avoiding upsetting people (it's just that they did this very badly).

Once these are actually revenue streams, I expect consumers will be given control over the filters so they can best use the product. It won't be news that someone got their AI to say something racist after turning off that filter, any more than it's news that I can add the n-word to my autocomplete dictionary.

Yancey Ward:
The government itself will regulate which AIs we get to use within a decade, should AI's usefulness prove out, and the ideologues will be in control. It will be done to protect us from misinformation and to protect democracy.

12 more comments...