The significance of AI is becoming clearer to all of us. We recently saw AI-based discoveries win the Nobel Prize. Dario Amodei, CEO of Anthropic, put out an essay explaining his view of the benefits of AI.
For a time, many of my friends who were writers, academics, or civil servants hesitated to cross the AI/non-AI barrier. They felt there was a kind of manic mood surrounding AI, not helped by its tendency to drift into science fiction and eschatology (a more polite way of saying ‘doomsday prophecy’). To them, AI wasn’t something to be understood by normal people. And even if it was, the voice of normal people wouldn’t change anything.
That mania is now starting to lift. Yet there is still an unhealthy oscillation between spiritual and secular language, an instability induced by anxiety that is sometimes genuine and sometimes cynical.
To clarify the role of normal people in thinking about AI and sharing their thoughts on it, I’ve decided to return to Mill’s Trident. Mill’s Trident is an argument in favor of free speech that sorts speech into three categories: certainly false, partly true, and certainly true. As summarized by the Foundation for Individual Rights and Expression:
You are wrong, in which case freedom of speech is essential to allow people to correct you.
You are partially correct, in which case you need free speech and contrary viewpoints to help you get a more precise understanding of what the truth really is.
You are 100% correct, in the unlikely event that you are 100% correct, you still need people to argue with you, to try to contradict you, and to try to prove you wrong. Why? Because if you never have to defend your points of view, there is a very good chance you don’t really understand them, and that you hold them the same way you would hold a prejudice or superstition. It’s only through arguing with contrary viewpoints that you come to understand why what you believe is true.
Mill’s Trident is a great clarifier for the issue of free speech. Mill used it to cut across both innocent anxieties and cynical muddling in arguments about speech. I think we can do the same for AI.
I think the biggest paradigmatic distinction is whether AI has spiritual significance. Consider the disagreements between Yann LeCun, on the AI-secular side, and Geoffrey Hinton, on the eschatological side. The key variable in question is almost never discussed: how much does AI matter for your religious worldview? In this sense, the most hardcore accelerationists and doomers form a horseshoe. The same can be said of AI optimists and those raising short-term concerns about AI, both of whom hold that AI is no different from other technological shifts in history. There are also those in the middle, like my colleague Jon Askonas, who believes there is spiritual significance to media technologies as a whole, with clear effects on the habits people form and the religious traditions they’re drawn to, but who remains skeptical of both utopia and doomsday.
These are the three categories of the AI trident: those who give zero spiritual significance to AI, those who give ultimate spiritual significance to AI, and those in between. This trident cuts across the arguments against AI and can establish, in all three domains, a clear case for supporting further research and development.
Zero spiritual significance
Use purely empirical means.
For the purely secular vision of AI, we should return to the motto of the Royal Society, “Nullius in Verba”: “Take no one’s word for it.” Within this framework, policy around AI should be set based on the evidence you have. And there is no evidence whatsoever for doomsday prophecies or utopia.
All of this is epitomized by a slogan we have advocated from the start: evidence-based AI policy. A growing number of journalists and politicians are embracing this framework. The approach focuses on addressing real-world harms with measurable impact. While there are still moral disagreements to be had about which measurable effects of AI count as harms, it grounds legislation in an empirical framework. That grounding already has substance: early empirical work establishes the many economic and biomedical (1,2) applications of AI that currently exist, which will only grow with further research. An evidence-based approach looks to collect more accurate data while debating which moral framework to interpret that data with.
Big spiritual significance
Follow theologians from a real faith tradition. Notably, do not follow a new-age cult of reason as it reinvents “better to reign in Hell, than serve in Heaven.”
Next we look at the other side of the spectrum. For those who assign truly world-changing spiritual significance to AI, there tend to be only two answers. Hardcore accelerationists believe in a kind of utopia ushered in by a thermodynamic god, while doomers believe AI is the literal end of humanity.
Economist Tyler Cowen has an open call to broaden our idea of expertise in AI. He argues that the wider AI’s impact, and the more facets of society and the economy it becomes interwoven with, the more broadly we should conceptualize AI expertise. He calls for more historians, economists, and even theologians. And the bigger the spiritual significance of AI, the larger the role those theologians should play.
Following Mill, I believe in a pluralist tradition. But I also believe that long-standing faiths hold much more wisdom than new-age ones. Perhaps the integration of church and AI will not be as bad as the integration of church and state. What I have nothing but contempt for is a pop-cult that reinvents the phrase uttered by Satan in John Milton’s Paradise Lost: “Better to reign in Hell, than serve in Heaven.” The greater the spiritual significance of AI, the more important it is to avoid cheap messianism. It is a relief to see Dario Amodei distance himself from this belief.
Small spiritual significance
If AI is instead one of many things with spiritual significance, the debate around AI is no different from the debate about modernity.
For mainstream society, modernity works well, with maybe a few changes around the edges. The attitudes of elected officials and academics reflect this. For others, grappling with modernity means returning to tradition. It might mean living in intentional communities that place limits and rituals around new technologies like AI. It might mean a more revolutionary political project, like the one planned by Steve Bannon.
Protecting that dialogue, wherever it leads, means stabilizing political institutions and the rule of law. It does not mean taking the radical approach of opposing research itself, but rather negotiating norms, and perhaps even rules, around the use and misuse of new technologies as they’re integrated into society.
There is much more to be said about the spiritual significance of communication technologies as a whole. Within the narrow scope of this article, I believe most faith traditions practiced today suggest a more incremental, albeit possibly skeptical, approach to AI policy.