I have some thoughts in Unherd on the prospective OpenAI social media platform, as well as AI as both a medium of communication and a control mechanism.
Musk’s vision is that AI becomes integrated with X to form an “everything app”, analogous to the Chinese company WeChat. Altman’s prospective platform may indicate similar plans. Several of his competitors, including xAI, Meta, and Google, have long benefitted from their platforms, which serve two prominent purposes in AI development. First, they accelerate growth by making AI apps available to existing user bases. Second, AI companies can use the data generated by users, or by user-AI interactions, to improve their models.
It’s clear that Altman would benefit from having a social media platform of his own, but creating one is easier said than done.
More on the relationship between AI, social media mechanisms, and the Trump administration’s understanding of AI markets.
The heated fight between Musk and Altman stems from their winner-take-all vision of AI, where the first company to reach god-like AI wins a de facto monopoly. AI differs from traditional media technologies by being both the medium and the control system. Becoming the sole provider of AI doesn’t just mean charging monopoly prices; it could mean control of the entire media ecosystem or commercial infrastructure of the internet. An AI-first social media platform not only communicates its users’ views but also shapes them. The combination of autonomous bots and traditional social media algorithms provides a powerful mechanism for controlling public opinion on the platform.
If Altman could use AI to generate a feed that excluded toxicity (as defined by the user), I could see it being successful.
I see it working as follows:
Imagine you're using FaceStack, where people post videos, essays, podcasts, and so on. You post something, and in the comments section someone responds, "you suck!"
You block the person, fine, but that doesn't stop 100 more people from doing the same thing over the coming weeks. It's aggravating to keep receiving this negative feedback -- it's not even criticism, just a personal attack.
What if there were an "AI block" button? AI block would go further than a simple block -- it would remember what you blocked, and try to figure out why you blocked it. Was it a particular slur? A particular worldview? Was it content slop?
Eventually, the AI would learn the stuff you really hate to see, and curate your feed accordingly, so you would never have to block anyone ever again.
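To make the idea concrete, here's a minimal sketch of what an "AI block" loop might look like under the hood. A real platform would presumably use a language model to infer *why* you blocked something; this toy version just remembers the vocabulary of blocked comments and hides future comments that overlap with it. The class name `AIBlockFilter` and all method names are hypothetical, invented for illustration.

```python
from collections import Counter

class AIBlockFilter:
    """Toy stand-in for an 'AI block' feature: remembers blocked
    comments and hides similar ones later. A real system would use a
    model; here we use simple token-overlap scoring."""

    def __init__(self, threshold=0.5):
        self.blocked_tokens = Counter()  # vocabulary seen in blocked comments
        self.threshold = threshold       # fraction of overlap that triggers hiding

    @staticmethod
    def _tokens(text):
        return [t.strip('.,!?"\'').lower() for t in text.split()]

    def ai_block(self, comment):
        """User hit 'AI block' on this comment: remember its vocabulary."""
        self.blocked_tokens.update(self._tokens(comment))

    def score(self, comment):
        """Fraction of this comment's tokens previously seen in blocked content."""
        toks = self._tokens(comment)
        if not toks:
            return 0.0
        hits = sum(1 for t in toks if self.blocked_tokens[t] > 0)
        return hits / len(toks)

    def curate(self, feed):
        """Return only the comments that don't resemble blocked content."""
        return [c for c in feed if self.score(c) < self.threshold]

f = AIBlockFilter()
f.ai_block("you suck!")                      # one block teaches the filter
feed = ["you really suck!", "great essay, thanks"]
print(f.curate(feed))                        # the personal attack is filtered out
```

The design point is the feedback loop: each block is a training signal, so the user curates by example instead of maintaining a blocklist by hand.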
This would reward creators who don't use slurs or have polarizing opinions. It would encourage moderate centrists over polarizing extremists. It would recreate the Tom Brokaw consensus.
Whether that's a good or bad thing, I think it could be appealing to moderates and independents, who are frustrated both with Bluesky and with Twitter.
To take this further, if you were far right or far left, you could also exclude content you dislike and see only what you like. It would be ghettoizing, which would have bad social consequences for civilization as a whole, but it might be attractive to users.