Economists vs. EAs 2
Market Coordination: Human Esoteria or Universal Optimum?
Why do we have solutions? This is a fundamental question in both economics and computer science. A few weeks ago, I wrote that the key distinction between economists and EAs is as follows:
EAs believe “software eats the world”, that once a digital solution is introduced to an industry, it will dominate that industry.
Economists believe “equilibria respond to imbalance”, that market systems adapt in response to new technologies.
A valid critique I got was that both of these obscure the underlying question. Why does software eat the world? Why do equilibria respond to imbalance? A prerequisite to answering these questions is “Why do we have prosperity in the first place?” I’m sure that will be an easy question to answer.
Economists on Complexity
In Hayekian terms, markets are more than venues for trade; they are intricate networks of human relationships, underpinned by local knowledge, values, and the shared experience of billions. This complex dynamic yields an 'extended order', a systemic web of interactions that is more sophisticated than any one individual's intelligence. Progress, therefore, stems not from the ability to compute or calculate, but from the capacity to participate in this extensive network, fostering economic growth, social innovation, and cultural advancement. Artificial intelligence, while impressive in its ability to aggregate written information, exists outside this extended order. AI's ability to learn and evolve is contingent on the quality and quantity of data, which is a combination of empirical data and a large amount of explicit human commentary.
It's important to consider the cultural and institutional dimensions in which progress occurs. Human institutions are attempted solutions to the problem of self-deception: humans make socially desirable public statements while holding implicit, hidden motives, which are communicated in part through revealed preferences and actions.
While AI can model scenarios and make predictions, it lacks the capacity to understand and navigate the cultural nuances and historical contexts that underpin human institutions. Laws, cultural norms, and political frameworks are steeped in centuries of human history, influenced by philosophical ideas, historical events, and social movements. Through centuries of cultural evolution, social institutions developed which balance explicit statements and implicit, hidden sentiments.
This is one reason why AI is particularly bad at counterintuitive questions, such as Steve Landsburg’s economics exam. Human institutions, such as markets in this case, solve these problems despite the majority of people holding predictably false beliefs. Moreover, there may be situations in which human institutions not only succeed despite false beliefs, but specifically because of them.
Now, there’s an unrealistic version of this which argues that AI could never solve these underlying problems, no matter how much compute or algorithmic improvement it gets. There are apparently people who actually believe this. But that isn’t the argument most economists make. The argument is simply that EAs strongly underestimate the difficulty of such a task.
EAs Strike Back
However, the principle of self-deception is also an argument in favor of EAs. EAs often point to cognitive biases, emotionality, and self-interest as causes of dysfunction. Scott Alexander describes this view as “mistake theory”:
Mistake theorists view debate as essential. We all bring different forms of expertise to the table, and once we all understand the whole situation, we can use wisdom-of-crowds to converge on the treatment plan that best fits the need of our mutual patient, the State. Who wins on any particular issue is less important than creating an environment where truth can generally prevail over the long term.
Conflict theorists view debate as having a minor clarifying role at best. You can “debate” with your boss over whether or not you get a raise, but only with the shared understanding that you’re naturally on opposite sides, and the “winner” will be based less on objective moral principles than on how much power each of you has. If your boss appeals too many times to objective moral principles, he’s probably offering you a crappy deal.
The economist view synthesizes both of these by arguing that institutions solve both mistake and conflict simultaneously through voluntary association and revealed preferences. EAs might argue that human decision-making fails precisely because it is human: it is riddled with cognitive biases such as overconfidence, loss aversion, and confirmation bias. These biases distort our perceptions of risk, value, and likelihood, and in turn shape economic behavior.
AI, on the other hand, is arguably less susceptible to these cognitive distortions, in part because of its capacity to aggregate large amounts of data. It can assess vast quantities of data more accurately and swiftly than any human, spotting trends, anomalies, and opportunities that a human analyst might miss. This enables AI to make well-informed predictions about the economy, market trends, and consumer behavior. Moreover, AI can adjust its strategies in real time, responding to changes in the market instantaneously, a feat beyond human capability. There is no inherent reason that statistical prediction would emulate cognitive biases (though when trained on human-generated data, current models do display this behavior).
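To make that last claim concrete, here is a minimal sketch (my own illustration; the payoffs, probabilities, and loss-aversion coefficient are assumed values in the spirit of Kahneman and Tversky, not figures from this post). A plain expected-value rule treats a fair 50/50 gamble as neutral, while a loss-averse valuation rejects it:

```python
# Toy comparison: a symmetric expected-value rule vs. a loss-averse valuation.
# All numbers here are illustrative assumptions.

gamble = [(+100, 0.5), (-100, 0.5)]  # win or lose $100 with equal probability

def expected_value(outcomes):
    """Symmetric statistical valuation: gains and losses weigh equally."""
    return sum(payoff * prob for payoff, prob in outcomes)

def loss_averse_value(outcomes, lam=2.25):
    """Prospect-theory-style valuation: losses scaled up by a factor lam."""
    return sum((payoff if payoff >= 0 else lam * payoff) * prob
               for payoff, prob in outcomes)

print(expected_value(gamble))     # 0.0   -> indifferent to the fair gamble
print(loss_averse_value(gamble))  # -62.5 -> rejects the fair gamble
```

The point is only that nothing in a purely statistical objective forces the asymmetry; when current models display it, it is something learned from human-generated training data.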
Hayek might argue that human intuition and creativity in a varied economy can surpass AI's computational abilities. However, these qualities, though valuable, often operate within the sphere of uncertainty, ambiguity, and the unknown. AI, equipped with advanced machine learning algorithms, can systematically navigate these realms, turning uncertainty into quantifiable risks and rewards. It thrives in environments where information is vast, multifaceted, and continuously evolving, and in such environments compute is valued even more highly.
What Do I Believe?
I would say I believe in a mix of these two models. I also believe that the rate of machine learning progress is overestimated and that people overwhelmingly underestimate individual differences. At the end of the day, I’m skeptical of the ability of theory to explain things. In a way, the economists’ generalized skepticism of rationality should apply to the economic model itself. If we were to get superhuman AI, we wouldn’t be able to predict it from rational principles; it would most likely emerge from something unexpected in the market. This is also the best critique of my diminishing returns article: the market is a better judge of whether there is low-hanging fruit than any assessment or article I could assemble. That being said, it isn’t as if I have a prophecy of the future market.
One version of the EA argument that is particularly baseless is “foom”, the idea that AI will suddenly and rapidly self-improve into superhuman AI. If AI does improve, it will be due to the accumulated work of many future engineers and founders. There are many stages of individual improvement (top 1%, top 0.1%, top 0.001%, et cetera) in any area before AI becomes superior to the best human (top 1 in 8 billion). Scaling machine learning algorithms requires an enormous variety of approaches, each of which reaches diminishing returns and must be substituted with others. The only arguments for “foom” are basically fanfiction, incompatible with both the economic history/institutional view and the practical, empirical reality of ML development. It may still be the case that superhuman AI is created eventually, through ordinary market and engineering methods. It may be the case that there is more remaining low-hanging fruit than I predicted. But there will certainly be many opportunities to revise your beliefs along the way.
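As a rough back-of-the-envelope illustration of those stages (the population figure and cutoff levels are my own assumptions for the sake of the sketch), roughly eight factor-of-ten jumps in rarity separate “top 1%” from the single best human:

```python
import math

POPULATION = 8_000_000_000  # world population, rounded; an assumed figure

# Rarity of each level, expressed as "1 in N people".
levels = {
    "top 1%": 100,
    "top 0.1%": 1_000,
    "top 0.001%": 100_000,
    "best human": POPULATION,
}

for name, rarity in levels.items():
    # Factor-of-ten jumps in rarity beyond the "top 1%" baseline.
    jumps = math.log10(rarity / 100)
    print(f"{name:>12}: 1 in {rarity:,}  (~{jumps:.1f} orders of magnitude past top 1%)")
```

Each jump is a distinct stage at which progress can be observed, which is what gives those many opportunities to revise beliefs along the way.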
"a combination of empirical data and a large amount of explicit human commentary"
The problem is that the larger part of human commentary is done by those Joel Spolsky calls 'smart people who write white papers.' Smart people who write lots of white papers are vastly different from the people Joel calls 'smart people who get things done.'
On the basic issue of market coordination, i.e. central vs. decentralized planning: of course AI isn't going to undermine the idea that decentralization works better than centralization, because the knowledge required for society to coordinate is distributed, constantly being added to, and not all of it is public.
It's true that technocratic idealists keep hoping better computing power will enable central planning, but better computing power is also being used to aid all the participants in the markets. Even if somehow you did have a centralized super-AGI that could model the thinking of individual humans, presumably by that point those humans have AGI to aid them.
In addition, all the interactions between those decentralized players add even more complexity and possibilities than a centralized entity can deal with. It can't predict all the things humans+AGI can invent, or how their values or priorities may change.
No single person or small group knows as much as the cumulative knowledge of a large group of people. If AI and then AGI tools aid individuals, they will change how they plan and make decisions. While the centralized computing power of government supercomputers may be greater than that of most corporations (these days some corporations may outdo them), it is no match for the decentralized, combined computing power of humans aided by AI.