Discussion about this post

DeepLeftAnalysis🔸

If Altman could use AI to generate a feed that excluded toxicity (as defined by the user), I could see it being successful.

I see it working as follows:

Imagine you're using FaceStack, where people post videos, essays, podcasts, and so on. You publish a post, and in the comments section someone responds, "you suck!"

You block the person, fine, but that doesn't stop 100 more people from doing the same thing over the coming weeks. It's aggravating to keep receiving this kind of feedback; it's not even criticism, just a personal attack.

What if there were an "AI block" button? AI block would go further than a simple block: it would remember what you blocked and try to figure out why you blocked it. Was it a particular slur? A particular worldview? Was it content slop?

Eventually, the AI would learn the stuff you really hate to see, and curate your feed accordingly, so you would never have to block anyone ever again.
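
A minimal sketch of how that learning step might work, assuming a toy similarity model. The AIBlock class, its Jaccard threshold, and the sample feed are all hypothetical; a real platform would presumably train a proper classifier:

```python
import re

def tokenize(text):
    """Lowercase word tokens, good enough for a toy demo."""
    return set(re.findall(r"[a-z']+", text.lower()))

class AIBlock:
    """Remembers what the user blocked and hides similar items."""

    def __init__(self, threshold=0.3):
        self.blocked = []            # token sets from blocked items
        self.threshold = threshold   # similarity above this is hidden

    def block(self, text):
        # Remember the blocked item so we can generalize from it.
        self.blocked.append(tokenize(text))

    @staticmethod
    def similarity(a, b):
        # Jaccard overlap between two token sets.
        return len(a & b) / len(a | b) if a | b else 0.0

    def allows(self, text):
        # Show an item only if it resembles nothing the user blocked.
        tokens = tokenize(text)
        return all(self.similarity(tokens, ex) < self.threshold
                   for ex in self.blocked)

feed = ["you suck!", "wow, you really suck", "great essay, thanks"]
ai = AIBlock()
ai.block("you suck!")  # the user hits "AI block" once

print([c for c in feed if ai.allows(c)])
# -> ['great essay, thanks']  (both personal attacks filtered)
```

A real system would swap the Jaccard heuristic for an embedding or a trained classifier, but the loop is the same: each block becomes a labeled example, and every new feed item is scored against what has been learned so far.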

This would reward creators who avoid slurs and polarizing opinions. It would encourage moderate centrists over polarizing extremists. It would recreate the Tom Brokaw consensus.

Whether that's a good or bad thing, I think it could appeal to moderates and independents, who are frustrated with both Bluesky and Twitter.

To take this further, if you were far right or far left, you could also exclude content so you see only what you like. It would be ghettoizing, which would have bad social consequences for civilization as a whole, but it might be attractive to users.
