The NTIA Sets Sail for Digital Censorship
Drawing a Red Line on Internal AI Tools
The implementation of the Biden Executive Order on artificial intelligence threatens to extend censorship and government control into realms they have never covered and should never cover.
AI is not just a tool for interpersonal communication; it is also a tool for independent self-organization, research, and reflection. Language models such as ChatGPT are as likely to be used in daily personal routines as in interpersonal communication. Legitimate concerns about speech, such as threats, impersonation, fraud, or disclosure of classified information, have historically been directed at interpersonal communications. No one cares if you utter a threat or impersonate someone else in an empty room. The Biden EO is set to change that. The internal/external distinction is crucial because of attempts to censor AI in the name of “safety”. Rather than keeping to this narrow tradition of government intervention, the NTIA report on AI Accountability, directed by the Biden EO, takes aim “across the AI lifecycle and value chain”.
“Both upstream developers and downstream deployers of AI systems should be accountable; existing laws and regulations may already specify accountability mechanisms for different actors.”
“Recognizing the fluidity of AI system knowledge and control, many commenters argued that accountability should run with the AI system through its entire lifecycle and across the AI value chain,43 lodging responsibility with AI system actors in accordance with their roles.44 This value chain of course includes actors who may be neither developers nor deployers, such as users, and many others including vendors, buyers, evaluators, testers, managers, and fiduciaries.”
In practice, the NTIA directive would interfere with the process of developing, understanding, and organizing your own ideas, a crucial right in a free, democratic society. It injects political speech controls into the twenty-first-century equivalent of your text editor, file manager, or calendar. These technologies will be ubiquitous in our lifetimes; in some industries, such as software, they already are. Gatekeeping them behind ideological loyalty will be no different than gatekeeping electricity or internet access today.
What speech does the Biden administration aim to control? The NTIA report takes inspiration from ESG, or “Environmental, Social, and Governance”, an attempt by ideological activists to manipulate publicly traded companies.
“While ESG disclosure models are not currently designed to evaluate AI’s impact, commenters suggested incorporating AI and data practices more generally into the evaluation.”
In practice, this looks like forcing all AI companies to be as ideological as Google’s Gemini: refusing to depict white people in historical images, injecting anti-child arguments, and equating Elon Musk to Hitler. Ideological conformity isn’t cheap. As Pirate Wires quotes from a Google insider: “we spend probably half of our engineering hours on [diversity]”. The NTIA directives would compound these costs, forcing developers at every stage of the complex and multilayered AI development ecosystem to pay the same compliance costs that Google pays for its foundation model.
The way to curtail thought censorship without obstructing any legitimate concerns about AI-generated material is to limit government interference to a single endpoint: interpersonal communications. Of course, the limits on what AI-generated content may or may not be shared between people should also not be ideologically slanted. Nonetheless, it is crucial to draw this unequivocal red line guarding the internal lives of every American.
> In practice, the NTIA directive would interfere with the process of developing, understanding, and organizing your own ideas, a crucial right in a free, democratic society. It injects political speech controls into the twenty-first century equivalent of your text editor, file manager or calendar.
Though you preface this with "in practice", the sections you quote don't say any of this. Can you find examples of it being used to prosecute people this way?
I tend to have a very different take on these rules. People have always been deeply afraid of major changes to how the world works. We see it in every major advancement in information technology -- and this fear is felt on both sides of the political aisle.
The Biden EO is essentially entirely voluntary and is just a fancy way of saying: be responsible with AI. True, I wish it hadn't done so in a way that strikes people as lefty-coded, but if you get a Democratic administration to try to explain what they think being responsible looks like, of course it's going to be based on left-wing models.
Most importantly, this EO heads off any real hard regulation for the time being. If the specifics cause issues, the next Republican administration can change it.
Ultimately, we should be happy about this because it heads off the demand for real regulation with bite.