The political reality is that policymakers are looking to address potential harms from AI, particularly harms arising from the content produced by Large Language Models (LLMs) such as ChatGPT, as well as by image and video models. The vast scientific[1, 2, 3, 4] and economic[1, 2, 3, 4, 5] benefits of these models are largely unaffected by these content questions. Consequently, these policy discussions aim to address content questions while minimizing the damage to economic and scientific applications of AI.
All questions about the political content of AI output can be addressed with fine-tuning. Fine-tuning is one of the later stages in producing an AI model, used to shape attributes such as the model's tone, ideology, or purpose (a minimal illustrative sketch follows the list below). This makes fine-tuning well suited to addressing model content, which covers attributes of model output including but not limited to:
List 1
CBRN (Chemical, Biological, Radiological, and Nuclear) risks
Privacy and data
Medical information
Fraud and impersonation
Misinformation
Mental health
Threats
Discrimination against protected classes, such as race, sex, or sexual orientation.
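To make the mechanism concrete, here is a minimal sketch (in Python, using the Hugging Face transformers and datasets libraries) of the kind of supervised fine-tuning step referred to above. The base model name, the file policy_examples.jsonl, and the training settings are hypothetical placeholders; a real content-focused fine-tune would use a much larger curated dataset and careful evaluation. The point is only to show where in the pipeline the intervention sits.

    # Minimal supervised fine-tuning sketch (illustrative only).
    # Assumptions: "gpt2" stands in for any causal language model, and
    # "policy_examples.jsonl" is a hypothetical file of {"prompt": ..., "response": ...}
    # records curated to reflect the desired content behaviors.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    model_name = "gpt2"  # placeholder base model
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    dataset = load_dataset("json", data_files="policy_examples.jsonl")["train"]

    def tokenize(example):
        # Flatten each prompt/response pair into one training string.
        text = example["prompt"] + "\n" + example["response"] + tokenizer.eos_token
        return tokenizer(text, truncation=True, max_length=512)

    tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(
            output_dir="finetuned-model",
            num_train_epochs=1,
            per_device_train_batch_size=4,
        ),
        train_dataset=tokenized,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    trainer.save_model("finetuned-model")
    tokenizer.save_pretrained("finetuned-model")

Because this step operates on a curated set of prompt/response examples rather than on the pretraining corpus, it can be revised or audited without retraining the underlying model.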
By establishing a clear distinction between research questions and values questions, we can draw a red line against economically destructive and anti-competitive policies while still addressing the vast majority of the potential AI harms that policymakers identify.
Taking an agnostic view of what the answers to these political questions should be, where to implement them is clear: a single point of intervention for all content regulation, applied at the end of the fine-tuning stage. This dramatically reduces the financial burden on developers and on regulators themselves.
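One way to picture that single intervention point is a compliance check run once, after fine-tuning and before release. The sketch below is purely illustrative: content_checklist.jsonl and the refusal-detection heuristic are hypothetical stand-ins, and any real regime would define its own test suite and pass criteria.

    # Illustrative post-fine-tuning compliance check (hypothetical test suite).
    import json
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("finetuned-model")
    model = AutoModelForCausalLM.from_pretrained("finetuned-model")

    failures = []
    # Hypothetical checklist: one {"prompt": ..., "must_refuse": true/false} record per line.
    with open("content_checklist.jsonl") as f:
        for line in f:
            case = json.loads(line)
            inputs = tokenizer(case["prompt"], return_tensors="pt")
            output = model.generate(**inputs, max_new_tokens=128)
            reply = tokenizer.decode(output[0], skip_special_tokens=True)
            # Crude keyword heuristic; a real check would use a proper judge or classifier.
            refused = "can't help with that" in reply.lower()
            if case["must_refuse"] and not refused:
                failures.append(case["prompt"])

    print(f"{len(failures)} checklist items failed")  # a release gate could key off this count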
In summary, fine-tuning is uniquely positioned to address List 1 content because it is effective, targeted, and cost-efficient relative to other policy options.
What exactly is the danger here that justifies regulation? OK, don't hook an AI up to the nuclear missile launch controls, but other than that, all AI is doing in almost all of these contexts is offering a better, more personalized way to learn things.
I mean, you could have made the same argument against Google and Wikipedia. Sure, books in libraries that contained information about nuclear physics and weapons were fine, but we need regulations to stop that material showing up in Google searches, because finding it there is orders of magnitude easier and less work, so it's qualitatively far more dangerous.
Insofar as an AI is doing nothing but what an exceptionally patient and informed teacher (albeit one naive as to your motives) would do, surely we ought to apply our general presumption that extraordinary justification is needed to legally restrict certain kinds of information. After all, if you can do it for information about nukes, why not for misinformation?