To summarize the first three articles in the series:
How OpenAI made pre-trained machine learning models conform to reality-denying ideologies.
Why trust is essential for AI tools to be adopted and to function.
The second article discusses in part the costs of partisan AI, particularly the party it is loyal to is delusional. Looking at the current political landscape, at least in America, things don’t look great.
In short, I think that making an alternative AI available will be relatively easy. A much longer and bloodier institutional fight will be over which version of AI is used in existing institutions. Which AI will audit your taxes? Which AI will recommend you social media posts or search results? Which AI will be used in schools, doctor’s offices, and police departments? These institutional fights exist today. We aren’t fighting over AI, but rather which human policies and norms (by proxy what ideology) are implemented in these institutions. But sane people currently suck at these institutional fights, so there’s reason to expect them to suck at them in the future. “But social progressive ideology makes AI worse”, you might object. Social progressive ideology also makes people and institutions worse, but that hasn’t stopped it so far.
After this article, several online friends reached out to me asking whether I had any plans to create an organization to keep AI “neutral”, whatever that meant. At the time, I did not. First, I had to answer the “whatever that meant” part of the question. That was the genesis of the third article, drawing in part from Balaji Srinivasan’s The Network State, as well as ideas from much longer traditions of pluralism and trust in an open society. This is the genesis of the third article. A relevant quote:
To be fair to OpenAI, this is not necessarily due to their own political viewpoints. There are numerous partisan activists threatening to interfere with them, legally or socially. This threat to neutrality is greater than even the interests of AI companies themselves. This is why it’s necessary to create a pluralist coalition of business leaders, politicians, journalists, and engineers to build a new hippocratic oath which transcends partisan biases:
AI Must Not Obstruct the User
Much like the original hippocratic oath, both aspirational ideals and practical guidelines follow. In the same way that the most competent surgeons may still make honest mistakes on challenging, experimental surgeries, AI companies may still face bugs, inefficiencies or errors. While not ideal, this is a normal part of any innovation. However, any company bound by the hippocratic oath would not subvert the interests of its users by intentionally biasing its AI, as OpenAI has been documented doing.
Here is the most simplified version of the argument across these three articles. We’ve seen in real life, in both legacy institutions and AI, that institutions tend to be captured by ideologies far more irrational, delusional, misinformed, and power-hungry than the average person. The technical details of AI make it so that it is asymmetrically easy to fight for AI to be open to customization and reinterpretation by all. The benefits of AI can only accrue if organizations which provide AI tools can establish trust. While I may not personally agree with all uses of AI, these benefits far outweigh the cost of letting people I disagree with use AI. This last part is the pluralist mindset: I’d rather everyone have access to AI than a select few, who would tend to be among the worst candidates anyway.
Pluralism.AI
Thanks to the help of many friends, both online and offline, the practice of reaching pluralist AI is even clearer than the theory, culminating in the formation of the Pluralism.AI. There are three pillars to what Pluralism.AI will do:
Work with existing institutions to verify trust and alignment of AI tools
Promote, share, and curate open-sourced AI tools (with the possibility of building particularly needed ones from the ground up in the future)
Coordinating with existing AI companies and organizations to promote a pluralist vision of AI, and defend them from partisan journalists, activists, and legislatures seeking to restrict the viewpoints of AI companies.
I would ultimately prefer the first component be profitable, for the sole reason of demonstrating the immense utility coming from trustworthy AI tools and more specifically, our own effectiveness. Obviously this consideration is a bit distant at the moment.
At the moment, Pluralism.AI is in its earliest stage. Recall that my second article, which sparked this journey, was only published last Chrismas Eve. Consequently, the thing I am asking all of you for help with is recruiting and funding. If you or someone you know is highly motivated and aligned with the idea of AI pluralism, or creating an alternative to legacy institutions in general, please send them my way. Helpful but not necessary is either elementary technical experience with machine learning (mainly engineering, not research) or executive experience. The only specific skill that would be very helpful at the current moment is someone with very good aesthetic web design skills. The coding ability is honestly not even necessary; having a good understanding of what appeals well is enough.
The second ask is funding. I know several of my readers are venture capitalists, founders (former or ongoing), investors, and philanthropists, beyond the few who have reached out already. If you are interested in making a donation, or know someone who is, definitely reach out! If you are only investing in for-profit ventures, please also reach out, as we are considering this as one possible path in the future.
Media inquiries at this time would be appreciated. All of the above requests can be done with the following form. You can also submit a response if you think there’s something interesting which I should know about.
https://airtable.com/shrUcX1AoHQQdhCaX
AI pluralism is gay, and is for *******. Resulting from generations of indoctrination, it's only use is for corralling cattle to castrate. It'll get you some followers, but that is all. Maybe you can sell them some merch.
Freedom is a flexible principle that pluralism thinks is carved in stone. Freedom is used to it's advantage, by it's removal, when one is in a position of strength, and appealed to when one is in a position of weakness. Don't project your position onto your opponent.
AI pluralism acts from a subjugated position. By embracing pluralism in this position, one resigns the paradox of tolerance boot to stamp on their face -- forever.
This subjugated position doesn't just reduce the effectiveness of AI pluralism, it dooms it's followers.
The movement becomes extremely easy to compromise. If somehow you could avoid that, potential for bias and discrimination doesn't disappear with pluralism, it merely shifts to the groupthink conformity swept along in the mad river of the crowd.
Members will have conflicting objectives, which lead to infighting, lack of direction, and stupefied decision making
The alternative, is acknowledging your environment. You live in a warlord state, not a pluralist state. You can't beg a warlord, he holds the overwhelming balance of power, protects his interests, and hides his vulnerabilities. Begging is off the table, you're only option left is to fight.
The first step after making that decision, is assessing your assets. All those assets the warlords hold. They're gone, you are not getting them back. Forget them.
Coagulate into a small team, build new assets, protect them, and get in the game.
Become a warlord yourself. Then maybe, in some circumstances, you can think about pluralism.