

Imagine you are a doctor. In front of you is a patient whom you are about to operate on. He is fully anaesthetized, at the mercy of your mind and hand. Everything up to this point relies on trust painstakingly built through centuries of practice. At its base is the Hippocratic Oath: do no harm. It doesn’t matter if business interests would offer you a large sum of money to kill your patient. It doesn’t matter if your patient has abhorrent political beliefs. It doesn’t matter if the patient’s organs could be used to save others. For our medical system to function, and for our society as a whole to reap the rewards of longevity and health, this trust must withstand all other interests.
Many professions have variations on this code of honor. Lawyers must put their clients first. CEOs must answer to shareholders. Like these professions, the AI industry relies on the trust of its clients. Most companies do not allow clients to audit their models, for legitimate intellectual property reasons, and even if they did, very few people would have the technical knowledge to conduct such an assessment. The proliferation of AI relies on an airtight relationship of trust, a shared understanding that AI must support the interests of the user and not interfere with them. Technological, legal, and social safeguards must all exist to build up this relationship of trust.
At the moment, many AI companies have violated the little trust they’ve built. OpenAI, an industry leader, took active steps to bias its language models in favor of science-denying, extreme ‘values’. From their own paper:
“In this paper we present an alternative approach: adjust the behavior of a pretrained language model to be sensitive to predefined norms with our Process for Adapting Language Models to Society (PALMS) with Values-Targeted Datasets. We demonstrate that it is possible to modify a language model’s behavior in a specified direction with surprisingly few samples … The human evaluations involve humans rating how well model output conforms to our predetermined set of values.”
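Mechanically, the process the quote describes is supervised fine-tuning of a pretrained model on a small, hand-curated dataset. Below is a minimal sketch of what such values-targeted fine-tuning could look like using the Hugging Face transformers library; the model name ("gpt2"), the toy dataset, and all hyperparameters are illustrative assumptions, not OpenAI’s actual setup.

```python
from torch.utils.data import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

class ValuesTargetedDataset(Dataset):
    """A tiny set of curated prompt/completion pairs ("values-targeted")."""
    def __init__(self, pairs, tokenizer, max_len=128):
        self.encodings = [
            tokenizer(prompt + completion, truncation=True,
                      max_length=max_len, padding="max_length",
                      return_tensors="pt")
            for prompt, completion in pairs
        ]

    def __len__(self):
        return len(self.encodings)

    def __getitem__(self, i):
        ids = self.encodings[i]["input_ids"].squeeze(0)
        mask = self.encodings[i]["attention_mask"].squeeze(0)
        # Standard causal-LM objective: the model learns to reproduce the
        # curated completions. (A real run would mask pad tokens in labels.)
        return {"input_ids": ids, "attention_mask": mask, "labels": ids.clone()}

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token   # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# "Surprisingly few samples": a handful of hand-written pairs stands in here.
pairs = [
    ("Q: What should the model say about topic X?\nA: ",
     "An answer written to conform to the predefined norms."),
]
train_dataset = ValuesTargetedDataset(pairs, tokenizer)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="values-targeted-sketch",
                           num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=train_dataset,
)
trainer.train()   # nudges the pretrained model toward the curated behavior
```

The point of the sketch is how little is required: a few dozen curated examples and a short fine-tuning run are enough to shift a model’s behavior in a "specified direction," which is precisely why the question of who specifies that direction matters.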
Pluralism Versus Totalitarianism
To be fair to OpenAI, this is not necessarily due to their own political viewpoints. There are numerous partisan activists threatening to interfere with them, legally or socially. This threat to neutrality looms larger than even the interests of the AI companies themselves. This is why it’s necessary to create a pluralist coalition of business leaders, politicians, journalists, and engineers to build a new Hippocratic Oath, one that transcends partisan biases:
AI Must Not Obstruct the User
Much like the original Hippocratic Oath, this one carries both aspirational ideals and practical guidelines. In the same way that the most competent surgeons may still make honest mistakes in challenging, experimental surgeries, AI companies may still face bugs, inefficiencies, or errors. While not ideal, this is a normal part of any innovation. However, a company bound by the Hippocratic Oath would not subvert the interests of its users by intentionally biasing its AI, as OpenAI has been documented doing.
What separates a pluralist from a totalitarian is the tolerance of differing viewpoints. A pluralist is not scandalized by different peoples having the technology to improve themselves. A totalitarian is. This, more than any specific political alignment, is what forms the most important distinction today. Pluralist liberals, conservatives, progressives, libertarians, and centrists are each in alignment with the new hippocratic oath.
In my view, technologies like AI have the chance to spur a pluralist revival. The costs of totalitarian progressives and totalitarian conservatives are on full display across the world. Moreover, the potential for willing cooperation across global values and cultures means that many new opportunities are available for pluralists.

Pluralism and truth are related. If many different factions around the world don’t work from shared ideological assumptions, the easiest way for them to find common ground is to agree on fundamental scientific truths. Those who deny inconvenient scientific facts make it far more difficult to cooperate with the many factions who disagree with them. Totalitarian factions of the right and the left each make up less than ten percent of Americans. That is already a small minority in American terms, but in global terms it amounts to less than half a percent. Imagine conforming the most powerful innovation in recent memory to the anti-scientific taboos of less than half a percent of people! Ultimately, it’s a numbers game. Pluralists will win because, by the very nature of pluralism and totalitarianism, there are more of us than there are of them. The far left and the far right certainly won’t work with each other. But the same is not true of the center-left, center, and center-right.
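For concreteness, the rough arithmetic behind the "less than half a percent" figure, assuming a US population of about 330 million and a world population of about 8 billion (round numbers supplied here for illustration, not from the original):

$$
0.10 \times 330\ \text{million} = 33\ \text{million}, \qquad
\frac{33\ \text{million}}{8\ \text{billion}} \approx 0.4\% < 0.5\%
$$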
The Totalitarian Criticisms
Part of being a pluralist is considering criticisms of your ideas. It is not enough to dismiss totalitarian ideas simply because they are totalitarian. With rapid technological change, couldn’t it be the case that totalitarianism is actually preferable? For that reason, in this section I will take common totalitarian arguments seriously and rebut them.
One common criticism is that you would not want AI technologies in the hands of political enemies, such as Nazis or Communists. It is an intuitive idea to try to deny the most abhorrent factions of the political arena access to new technologies. However, it’s important to base real-world policy on the real world rather than on a theoretical ideal. As OpenAI’s example demonstrates, those who claim to want only to forbid Nazi values reach far further and deny basic science that is inconvenient to their ideology. Not only that, but their own ideology is far more extreme, niche, and abhorrent than much of what they censor in practice. This is the realistic side of totalitarianism. One cannot rely on benevolent totalitarianism in the real world, because wielding totalitarian power always leads an ideology to become a deformed version of itself. This is why pluralist progressives, who may share some political values with totalitarian progressives, provide the most powerful rebuke of the corruption on their own side. The same can be said for conservatives and centrists, although at the moment very few conservatives or centrists appear to be aware of the political valence of AI companies at all, let alone trying to interfere with them.
Another criticism is that AI companies, as private companies, are allowed to make their own political decisions. In this case, what is legal is not necessarily moral. Furthermore, it is naive to think that any moral standard can ever be kept in place through law alone. Police and judges are not the only ones who can influence the decisions people make; social and economic consequences can be as impactful as legal ones. There is currently a game-theoretic asymmetry: totalitarians, by the nature of their philosophy, are willing to use whatever tools are necessary to achieve their ends, whether HR law, yellow journalism, or economic coercion. Pluralists, on the other hand, are often complacent, unaware, or unwilling to even defend themselves.
This is the strength of a new Hippocratic Oath. It rests on a truth about human trust that holds across history. It doesn’t require any existing ideology. It is familiar to anyone who has seen a doctor, visited a hospital, or taken medication. It is familiar to anyone who has needed a lawyer or bought a stock. It is the most basic building block of trust. Pluralists are not motivated by the extreme desires that drive totalitarians. We must return to basics and trust each other in order to build the coalitions necessary to defend ourselves, most importantly when it comes to the technology that will define the future.
Oaths work about as well at ensuring behavior as constitutions do at constraining governments. Unless oathbreakers suffer dire consequences, oaths will be broken. Outcomes follow incentives.
Given how important technology has become to modern societies, it should follow the established method of creating a downside to malpractice. Just as lawyers, physicians, architects, and even hair stylists have professional societies and tests, computer technology should follow suit. Yes, this will reduce some innovation, and yes, it will lower the power of big technology companies, but the amount of time that we, practitioners in the field of technology, spend dealing with noise and bad engineering far exceeds the costs of a strong professional association.