Research Progress in AI is Continuous and Slowly Diminishing
Against Non-Empirical Models
A common anti-AI argument holds that AI is inherently dangerous because the ability to improve AI research using AI products creates a scenario in which “recursive self-improvement” leads to a near-infinite amount of improvement within weeks, if not faster. Some versions of the argument claim that even if there is only a 0.01% chance of this happening, the consequences would be so disastrous that it is worth worrying about despite the low probability. I will address that version of the argument.
The simplest counterargument is to look at real-world evidence. Every research field in human history has reached diminishing returns, a point at which useful discoveries become increasingly difficult to find. Given the length of human history and the vast diversity of scientific fields, this is enough to establish, on a purely empirical basis, that the probability of recursive self-improvement is far less than 0.01%.
Instead, the recursive self-improvement argument relies on a kind of AI monotheism that sees AI as radically distinct from every other research field, including nearby fields such as algorithms, cryptography, and hardware, in its ability to make itself more efficient. There is soft AI monotheism, which emphasizes the ability of AI to generate useful tools for human AI researchers, and hard AI monotheism, which emphasizes the ability of AI to conduct research entirely on its own. Both narratives emphasize exponential improvement and assert that it is unique to AI. This is far from reality.
AI monotheism is a suspicious belief simply because it asserts an exception to a widely observed rule. There is also direct evidence against the assertion. Almost all scientific fields have the ability to invent tools that enable more efficient research, such as microscopes or particle accelerators, and the ability to invent commercially successful products that generate money to reinvest into research. This is a rebuke to soft AI monotheism.
When it comes to hard AI monotheism, there is no evidence whatsoever that AI is remotely close to self-directed research; the nearest contenders currently struggle to automate menial cosmetic software changes. Moreover, there do exist real-life examples of entities that can generate resources and improve themselves with those resources: companies and other human organizations. While companies are certainly influential and contribute useful research, they are far from growing indefinitely. Like scientific fields, they tend to improve until a point of diminishing returns, at which their core competency is exhausted. The closest example of an institution with long-term exponential growth is the United States economy, which continues to grow at roughly 2% annually through vast diversification and competition. Even so, the rate of US economic growth has diminished over time.
The error of both the soft and hard AI monotheists is that they identify an isolated example of exponential growth but have not yet identified the pattern of exponential decay. They measure “self-improvement” solely by input, de facto assuming that a constant function converts input into output. The empirical evidence suggests that fields or firms with the ability to self-improve run into superexponentially diminishing returns to self-improvement. Because returns diminish superexponentially, science slows down despite the theoretical ability for self-improvement to grow exponentially.
As economist Robin Hanson articulates: “The degree of computing power in hardware and software for each task is distributed in a log normal way ... As computing power increases exponentially, you're basically moving through that log normal distribution in a linear manner.”
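Hanson's picture can be sketched numerically. The toy model below is my own construction, not Hanson's: each task is assigned a hypothetical computing-power requirement drawn from a log-normal distribution (the parameters are arbitrary illustrative choices), while available compute doubles every year. Because the logarithm of compute grows only linearly, the fraction of tasks that become automatable moves through the distribution smoothly rather than exploding:

```python
import random

random.seed(0)

# Toy model: each task needs some amount of computing power, with
# requirements distributed log-normally (log-requirements are normal).
# The parameters mu=10, sigma=3 are arbitrary illustrative choices.
tasks = [random.lognormvariate(10, 3) for _ in range(100_000)]

fractions = []
for year in range(0, 60, 10):
    power = 2.0 ** year  # compute doubles every year: exponential growth
    solved = sum(1 for t in tasks if t <= power) / len(tasks)
    fractions.append(solved)
    print(f"year {year:2d}: fraction of tasks automatable = {solved:.3f}")
```

Despite exponentially growing input, the output (the share of tasks covered) advances at a bounded pace: quickly through the bulk of the distribution, then flattening as the remaining tail of tasks grows ever harder. That flattening is the diminishing-returns pattern described above.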
I would ultimately caution against accepting this model of AI progress as ground truth, instead deferring to current and future empirical evidence. However, this model is undoubtedly more accurate than the model of indefinite self-improvement.