Progressive “Groups”, AI, and Diminishing Returns
Don’t Forgive and Don’t Forget
I’m old enough to have lived through a few preference cascades — times in which opinion shifts so sharply that the staunchest ideologues switch, in a matter of weeks, to pretending they always believed the opposite. On Covid, lockdowns and mandates were righteous and prosocial until they definitely weren’t. On DEI, it was racist to hire on merit, until suddenly hiring on merit was what every serious company did.
The first few times this happens, it’s natural to be cowed by fear. Even if you see the eventually-obvious truth, the temptation is to keep it to yourself, or to offer it only as the lightest of suggestions. Only with experience did I realize it’s both virtuous and advantageous to take a clear, definite stance.
I’ve learned another lesson recently. The progressive “groups”, the special interest groups for radical race and environmental advocacy, have persisted for decades. Many argue that they cost the Democrats the election by forcing the party to take unpopular or reality-denying positions over several election cycles. The persistence of the groups is a puzzle: how can nominally Democratic interest groups cost the Democrats severely across many elections, yet continue to exist? The answer is that they are unaccountable. And while they can stir up social fervor to attack those on their own side, there is no equal and opposite reaction when the preference cascade happens and everyone realizes those people are insane.
For years I’ve argued that Diminishing Returns in Machine Learning were inevitable, particularly in scaling. In just one week, the rush of public opinion shifted completely. Sparked by a Bloomberg article on top AI companies’ scaling failures, numerous AI Doomers now declare diminishing returns ‘obvious’ and immediately move the goalposts, even though the claim that scaling would never hit diminishing returns was the exact premise of their failed attempt to ban open source AI and severely harm closed source AI.
The preference cascade is coming for “AI Safety”. In hindsight (and for many like myself, in foresight too), the AI Existential Safety movement was entirely unscientific. The real questions we should have been asking ourselves were whether we were innovating enough, whether we were focusing on the wrong areas of research, and whether American law was getting in the way. The right question was never “are we going too fast?” It was always “are we too slow?”, or at the very least “will we be going too slow soon?” More specifically, the healthy scientific debate around AI, which still occurs in many conferences and research papers, is “what breakthroughs do we need to continue the pace of AI research?”
Fortunately, I’ve been able to ride the preference cascade upwards. Recently, even Nancy Pelosi cited a letter noting there is “little scientific evidence” for the beliefs behind one of the Doomers’ proposed laws. In the fullness of time, I’m confident we will win the argument.
We’re due for a reset of the AI “Overton window”, the range of mainstream opinion. This is true whether you care about the speed of AI research or about your academic and political freedoms.
AI Doomers may have already done irreparable damage to AI research. By wasting the field’s time, attention, and resources, they have already succeeded in slowing down AI research simply through opportunity cost. Of the hundreds of millions of dollars spent on astroturfed media campaigns and on lobbying for authoritarian laws, the former has done more to slow AI research than the latter, at least so far.
Before the release of ChatGPT, some of us treated ‘AI Safety’ as a media circus, a distraction from *real* debates such as whether FP8 or FP4 precision would perform better. Progress on these engineering and algorithmic choices, and on executing them, is what actually accelerates AI research.
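To give a flavor of what such a debate is even about: lower-precision number formats pack each weight or activation into fewer bits, buying speed and memory at the cost of rounding error. The sketch below is a toy illustration only; the bit layouts are simplified stand-ins for the real E4M3 and E2M1 formats, the single per-tensor scale factor is a crude stand-in for the block scaling real recipes use, and none of it reflects any particular lab’s setup.

```python
# Toy illustration only (assumption: simplified formats, not any lab's recipe).
# Compare round-trip error of an FP8-like format vs an FP4-like format on
# random, weight-scale values.
import numpy as np

def toy_float_quantize(x, mantissa_bits, exponent_bits):
    """Round each value to a toy float grid with the given mantissa/exponent
    widths (no subnormals, symmetric exponent range -- a simplification)."""
    x = np.asarray(x, dtype=np.float64)
    max_exp = 2 ** (exponent_bits - 1) - 1
    mag = np.abs(x)
    safe = np.where(mag > 0, mag, 1.0)                 # avoid log2(0)
    exp = np.clip(np.floor(np.log2(safe)), -max_exp, max_exp)
    step = 2.0 ** (exp - mantissa_bits)                # gap between neighbors in that binade
    return np.where(mag > 0, np.sign(x) * np.round(mag / step) * step, 0.0)

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=100_000)                # weight-like values
scale = np.max(np.abs(w))                              # crude per-tensor scale factor

for name, m_bits, e_bits in [("FP8-like (E4M3)", 3, 4), ("FP4-like (E2M1)", 1, 2)]:
    w_hat = toy_float_quantize(w / scale, m_bits, e_bits) * scale
    rel_err = np.mean(np.abs(w_hat - w)) / np.mean(np.abs(w))
    print(f"{name}: mean round-trip error ~ {rel_err:.1%} of mean magnitude")
```

Running it shows the FP4-like format introducing far more round-trip error than the FP8-like one, which is exactly the tradeoff, throughput and memory against fidelity, that these engineering debates weigh.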
On the other hand, when AI Doomers spend millions of dollars trying to pass laws like SB 1047 to intrude on the research and publishing of AI models, they threaten to make that work illegal. So someone has to engage them. Some, like my pro-AI compatriot, nonetheless want to give a sympathetic steelman for these people, being generous to them when they make assertions and predictions that stray from the evidence. There was a time, circa 2022, when I took a similar approach. The EAs invited me to their conferences and paid for my travel just to disagree with them, after all, so I would at least be generous. At these conferences, two things convinced me this approach would not work.
First, I encountered worldviews remarkably untouched by evidence. Many of these conversations followed an identical pattern of debate. They would lay out their belief in indefinite AI scaling, proceeding at the same rate or culminating in an ‘intelligence explosion’. I would point them to slowdowns in progress that were widely known and published scientific results. And they would come back with countless hypotheticals, none of which had any observed basis in reality. “What if we get recursive self-improvement?” “What if it becomes a new species?” “What if we get breakaway evolution?” It was clear that for these people, AI progress was a nominalist word game in which catchphrases were grounded not in measurements of reality but in fantasies written before modern AI algorithms were invented.
Second, I encountered overt lying, particularly from members of philanthropic foundations such as the Center for Effective Altruism, the FTX Foundation, Open Philanthropy, and Lightspeed Grants. I raised concerns about the totalitarian nature of “AI Safety” solutions with members of each, all of whom promised in the strongest terms that they would not fund attempts to restrict freedom of research or speech. I later found out that they fund exactly that. Moreover, I learned that their scripted lying was a concerted effort to misrepresent themselves to the public.
Oftentimes, after the preference cascade shifts back to reality, we forgive the temporary tyrants. We forgave COVID lockdown advocates after we were free. There have now been at least four cycles of “political correctness” going too far and being superficially rolled back with zero consequences for the administrators and government employees responsible. That lack of consequences has greatly contributed to the interest groups that continue to damage the Democratic party and, in my view more importantly, the US government and administrative state.
With the recent AI Doomer legislation, the fear-mongering was so obvious and so absurd that a broad bipartisan coalition, including Gavin Newsom and Nancy Pelosi, came together to oppose it. We stopped them before the worst harm was done. But the Doomers are rapidly becoming structurally identical to the many entrenched special interest groups which Democrats now blame for their election loss. They have unaccountable funding from billionaire Dustin Moskovitz and near-billionaire Jaan Tallinn for programs to infiltrate and manipulate bureaucracies.
Organizational politics does not favor freedom. Even if we win the argument, we may not win the bureaucratic struggle. As long as AI Doomers are in the business of attempting to impose their cultish beliefs on others by force of law, forgiving their errors and false prophecies is a mistake. Instead, the public should be kept aware of their previous attempts to legislate their incorrect and sensationalist beliefs, so that it is ready the next time they try to do the same.
Politically and administratively, freedom can only be defended if we banish the circus.
No recent issue has framed the difference between reality and hype more than AI. For those in the know, we are in for a very long slog before AI can act autonomously enough to eliminate any but the most repetitive and uncreative jobs, and even in those, the failures will come over very simple things and will dash many a company’s stock price and ability to raise further financing.
I often hear the classic example of the doorman from software engineers who understand the actual scaling issues involved: eliminating the doorman led to the lobby being destroyed and people sleeping in the building literally overnight, and yet that’s one of the simplest, most “automatable” jobs in the entire economy!
AI in the form of LLMs cannot react to information outside their training sets, so by definition they can only regurgitate prior solutions to existing problems. Yes, it looks novel on its face, but when you actually reach the limits of the training set, it breaks down immediately. My company is inundated with multimillion-dollar requests to license the totality of our private data to give them an advantage in their next model. This is a zero-sum game. A fundamentally different type of model will need to be invented to produce any reasonably capable AGI, and that’s not where the money is being invested right now. It costs an order of magnitude more compute to train a model that is only incrementally more capable.
There is an absolute ceiling to the current technology, but to your point, lay people only see a trajectory and assume it’s going to continue. They see geometric growth where it’s actually logarithmic. They see us at the bottom of the curve when we’re actually very far along it.
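To make the shape of that curve concrete, here is a minimal sketch under an assumed power law of loss against training compute. The constants are purely illustrative, loosely in the spirit of published scaling-law fits rather than anyone’s actual numbers; the point is only that each further order of magnitude of compute buys a smaller absolute improvement as the curve approaches a floor.

```python
# Minimal sketch with made-up constants (assumption: a power law of the form
# L(C) = L_inf + A * C^(-alpha); the specific numbers are illustrative only,
# not fitted values from any real model family).
L_INF = 1.7    # assumed irreducible loss floor
A = 30.0       # assumed scale coefficient
ALPHA = 0.05   # assumed power-law exponent

def loss(compute_flops: float) -> float:
    """Toy loss-vs-compute curve."""
    return L_INF + A * compute_flops ** (-ALPHA)

prev = None
for exponent in range(21, 27):              # 1e21 .. 1e26 FLOPs of training compute
    c = 10.0 ** exponent
    l = loss(c)
    note = "" if prev is None else f"   (gain from the last 10x of compute: {prev - l:.3f})"
    print(f"C = 1e{exponent} FLOPs -> loss {l:.3f}{note}")
    prev = l
```

Each extra decade of compute yields a shrinking gain, which is the “order of magnitude more compute for an incrementally more capable model” pattern described above.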
As a technical question: is the petering-out of scaling for text independent of the potential gains from scaling on different or multiple kinds of input, which would likely be more computationally intensive? For example, many good and varied kinds of electronic sensors observing humans doing complex work together and gleaning subtle patterns from it, and so forth.