There have always been two types of people who claim to want to reduce disinformation. There are some who are simply partisans. There are others who have proposed and researched truly neutral technological and social solutions. I have long considered myself a member of the latter community and what we did as fundamentally good. Now I think our efforts are in vain, our language is poisoned, our ideas have been twisted, and our community has not caught up.
The Good Fight
Some readers, particularly those who haven’t followed my earlier writing and podcasting, may be skeptical that the neutral, good-hearted group exists at all. Here is just a brief summary of some of their proposals, all of which I still support to some degree.
Interruption: Social networks tend to trap people in routine habits and impulses. Adding occasional interruptions, such as a “you have now been on Facebook for one hour” popup, can break these habits. These changes may decrease spiraling arguments, political addictions, and self-destructive behavior in general.
Network friction: Similar to interruption, simply adding a few seconds of lag to social media sites can decrease addictive behaviors and their political side-effects. This can also be applied to posts rather than to individual users. For example, if posts take multiple clicks to share, or require copy-pasting, then impulse-driven sensationalism will be reduced (a minimal sketch of both of these ideas follows this list).
Data portability: Allowing users to easily move their data, including friends, follows, posts, and images to other sites will make it easier for new social networks to form and increase competition.
Publishing perception gap data: While who is right about a political issue is often difficult to judge, what can always be reasonably measured is the proportion of the public, or of a particular political faction, that actually believes a political position. Publishing this data and dispelling narratives about “the enemy” is a useful and relatively neutral way to defuse political conflict.
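To make the first two proposals concrete, here is a minimal sketch of what interruption and network friction might look like in code. Everything in it, from the class name to the one-hour threshold and the three-second share delay, is a hypothetical illustration rather than any platform’s actual implementation.

```python
import time

SESSION_REMINDER_SECONDS = 60 * 60  # hypothetical threshold: one hour of browsing
SHARE_DELAY_SECONDS = 3             # hypothetical friction: a few seconds of lag per share

class FrictionedSession:
    """Toy model of a social media session with interruption and share friction."""

    def __init__(self) -> None:
        self.started_at = time.monotonic()
        self.reminded = False

    def maybe_interrupt(self) -> str | None:
        """Return a one-time reminder once the session passes the time threshold."""
        elapsed = time.monotonic() - self.started_at
        if elapsed >= SESSION_REMINDER_SECONDS and not self.reminded:
            self.reminded = True
            return "You have now been on this site for one hour."
        return None

    def share(self, post_id: str, confirmed: bool = False) -> bool:
        """Sharing requires an explicit second click and tolerates a short delay."""
        if not confirmed:
            # The first click only opens a confirmation prompt; nothing is shared yet.
            return False
        time.sleep(SHARE_DELAY_SECONDS)  # deliberate lag before the share goes out
        print(f"shared {post_id}")
        return True
```

Neither mechanism depends on any real platform API; the point is only that both are a handful of lines of logic rather than deep technical problems.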
The poster boy for these types of solutions is Tristan Harris and the Center for Humane Technology. I think it is fairly clear that these solutions are neither partisan nor artificial support for legacy media. I won’t extend this section further than necessary, because the rest of this article is about how this well-meaning half of the anti-disinformation community has been misled by the partisan half.
The Bad Fight
It is simply not true that the entire anti-disinformation community operates in good faith. Unfortunately, it is one interconnected community, despite being split, in my estimation, roughly half and half between neutrals and partisans. It is incredibly difficult to disentangle the academics, non-profits, organizations, and communicators in the neutral half from those in the partisan half. Funding sources, such as universities and charities, almost always fund both simultaneously. The ideas, studies, and proposals of the neutral half are frequently cited and warped by the partisan half. Individuals in both groups regularly meet and collaborate with each other on common goals. In short, I cannot blame anyone who initially conflates me, Tristan Harris, or anyone else who speaks about fighting disinformation with antagonists who desire partisan deplatforming, speech codes, de facto subsidies for legacy media, and election interference from tech companies.
The term “disinformation” has often been weaponized in service of some of the most consequential misinformation and disinformation itself, whether it is now-debunked zoonotic origins theories, early skepticism of masks, lockdown measures that fail basic cost-benefit analyses, conspiratorial “RussiaGate” narratives, etc. Even when I think disinformation is labeled correctly, such as with QAnon, the approaches taken to combat it are far from what I support. Given the ubiquity with which the terms disinformation and misinformation are misused, I understand why trying to restore the terms to their original usage is seen as covering for their abuse.
Actually, It Did Have to End This Way
Much of my initial podcasting and writing was related to the idea of institutionalized conspiracy theories. In this thesis, social media disinformation gains most of its notoriety when it is propagated by an existing institution, such as a legacy media company or political party. In my then-naive view, there was shockingly little research on this, considering that almost all major conspiracy theories, from QAnon to RussiaGate to “the Great Replacement” to Mass Racism Conspiracies to the truly insane anti-vaccine conspiracy theories (i.e. microchips), had major surges that exactly correlated with legacy media activity. The little research that existed was actually done by the partisans, targeting Fox News and talk radio. Now I realize this is because of a fundamental legacy-institution bias among even some of the most well-intentioned researchers. It was simply an article of faith that legacy institutions would not be as susceptible to, or even act as the primary vectors of, disinformation. It isn’t as though most of these scientists are cynically protecting legacy institutions or their own self-interest, either. They are genuinely attached, as an article of faith, to the idea that legacy institutions are good, and no amount of evidence so far has been able to convince them otherwise. Of course, most of this is anecdotal, and I don’t expect anyone to take my experiences as universal. Perhaps there is a secret group of researchers who are very passionate about addressing institutionalized conspiracy theories whom I have simply never met. However, this is one of my strongest motivations for deserting the anti-disinformation war.
One of the most popular ideas from my early podcast appearances was that the greatest conspiracy theory in American history was that Saddam had nuclear weapons. I expanded on this idea in an article last year:
Prior to the Iraq war, over ninety percent of Americans believed Saddam Hussein’s regime possessed weapons of mass destruction. This was not based on any substantive evidence and in hindsight was unequivocally false. The population of the third-largest nation on earth did not come to believe such an absurd, baseless idea from social media influencers, Russian agents or unfettered internet conversations. Instead, they were the last link in a network of unthinking trust, which passed falsehoods down from intelligence operatives to politicians to established media to each and every one of us.
In my view, this is the greatest threat of disinformation. It can act as a motivating and legitimizing force towards truly deranged and baseless ideas, ultimately leading to state action causing hundreds of thousands of deaths. Currently, the sum total of the anti-disinformation community would be completely useless in the face of a second WMD-style conspiracy theory. Parts of it would support the conspiracy theory.
The Unfortunate Truth
Rejecting false narratives takes not just knowledge and resources, but also skill, charm, and courage. Experts and fact-checkers, even the truly well-meaning ones, are no more able to control false information than a professor of projectile motion can make a half-court shot. The “war on disinformation” was bound to fail. It misunderstood the problem, did not choose people with the skills to combat the problem, and was itself riddled with the problems it claimed to be against. Most importantly, it had no autopoietic solutions: solutions that could perpetuate themselves without constant outside support.
There is no silver bullet, but there are plenty that will backfire. Academic fields in particular often become static and dysfunctional, as Richard Hanania has documented. All of this is to say that any autopoietic or otherwise notable solution will almost certainly come from an outsider. It may come in the form of a new type of institution, a new social norm, or a new startup. It may come from a reader of this newsletter. Only time will tell.
Crowdsourcing error detection, with incentives for error discovery and costs for the existence of errors, hits both the autopoietic requirement for a solution and the skin-in-the-game aspect of preventing fraud.
A tiny example of this: while teaching computer programming to high school students, I paid them school money (tradable for food, sweatshirts, etc.) for any mistakes they found in my assignments or directions. They got a small amount for typos and a medium amount for basic errors, while students who uncovered conceptual problems, weaknesses, discrepancies, or inconsistencies got serious rewards as well as a public shout-out.
To be honest, I had minimal skin in the game; the school money was photocopied. But the scheme did require, and allow, me to frequently admit that I made mistakes as a teacher and to publicly declare, again and again, that achieving the highest-quality product (in this case, clear assignments with maximally supportive directions and resources) was the goal.
I love the thought of government agencies having to publish their budgets online and unleashing a country full of accountant error-checkers who get paid some amount of money for each mistake they find, with the agencies losing money for each mistake they make.
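As a toy illustration of how such a bounty scheme could be scored, here is a minimal sketch of a ledger in which reviewers are credited, and the publishing agency debited, according to the severity of each confirmed mistake. The tier names and amounts are invented for illustration, echoing the classroom scheme above; nothing here describes a real program.

```python
from dataclasses import dataclass, field

# Hypothetical severity tiers, mirroring the classroom scheme:
# typos pay the least, conceptual problems pay the most.
BOUNTY = {"typo": 1, "basic_error": 5, "conceptual": 25}

@dataclass
class ErrorBountyLedger:
    """Toy ledger: each confirmed mistake credits the reviewer and debits the publisher."""
    reviewer_credit: dict[str, int] = field(default_factory=dict)
    publisher_penalty: int = 0

    def report(self, reviewer: str, severity: str) -> int:
        """Record a confirmed mistake and return the payout for it."""
        payout = BOUNTY[severity]
        self.reviewer_credit[reviewer] = self.reviewer_credit.get(reviewer, 0) + payout
        self.publisher_penalty += payout
        return payout

ledger = ErrorBountyLedger()
ledger.report("alice", "typo")        # small reward for a typo
ledger.report("bob", "conceptual")    # serious reward for a conceptual problem
print(ledger.reviewer_credit)         # {'alice': 1, 'bob': 25}
print(ledger.publisher_penalty)       # 26
```

The interesting design question, which this sketch deliberately ignores, is who confirms that a reported mistake is real: in the classroom that was me, and for a government budget it would have to be some adjudicating body with its own incentives.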
https://phys.org/news/2022-01-rationality-declined-decades.html