Towards a Sane Compromise on AI Part 3
Regulatory Conflict and the Need for a Single Point of Intervention
Policy that assigns broad, uncoordinated, and sometimes contradictory jurisdiction over AI regulation is bound to repeat the mistakes of past regulatory regimes, inflicting legal ambiguity, difficult agency handoffs, compliance costs, and anti-competitive barriers on the American public.
Infamously, the National Environmental Policy Act (NEPA) introduces regulatory inconsistency that obstructs the construction of clean energy projects such as solar panels and wind turbines: "'Success' in terms of the law is measured by comprehensive documentation, not environmental benefits." The same report continues: "NEPA is implemented on an agency-by-agency level. Each federal agency has its own NEPA procedures, its own list of categorical exclusions, and its own staff for managing the process. This decentralized system leads to a great deal of variation in how NEPA is administered." This inter-agency and inter-level conflict extends to the Clean Air Act: "Adding to all of this is the fact that Clean Air Act regulation typically involves overlapping federal, state, and even local jurisdictions. In general, Clean Air Act permitting is handled by the states. But states must develop a state implementation plan (SIP)—a comprehensive plan for how to achieve national ambient air quality standards—for the EPA to sign off on."
These comparisons are relevant because the Biden Executive Order on AI takes a similarly conflicting multi-agency approach, as do other laws involving AI auditing (New York City, Colorado, the EU AI Act). As one analysis warns, "applying the NEPA model to algorithmic systems would likely grind AI innovation to a halt in the face of lengthy delays, paperwork burdens and significant compliance costs."
To reduce duplicative costs for both regulators and affected parties, we recommend that legislators and regulators, to the extent possible, implement their policies through a single point of intervention: fine-tuning.
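To make this concrete, below is a minimal, illustrative sketch of fine-tuning as the point of intervention, using the Hugging Face transformers library. The dataset file policy_demos.jsonl, its prompt/compliant_response fields, and the gpt2 base model are all hypothetical stand-ins chosen for the example, not a prescribed implementation.

```python
# A minimal sketch of policy implementation via fine-tuning. The dataset,
# its field names, and the base model are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

BASE_MODEL = "gpt2"  # stand-in for whatever base model the product builds on

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Each record pairs a prompt with a policy-compliant completion.
dataset = load_dataset("json", data_files="policy_demos.jsonl", split="train")

def tokenize(example):
    text = example["prompt"] + example["compliant_response"]
    return tokenizer(text, truncation=True, max_length=512)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="policy-tuned", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()                         # the single point at which policy is applied
model.save_pretrained("policy-tuned")   # the artifact that ships to users
```

The design point is that every upstream component (hardware, data, base model) remains unregulated in itself; compliance attaches to this one step, whose output is the artifact actually distributed.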
Case Study: Biden EO
The implementation of the Biden Executive Order on AI risks an uncoordinated series of conflicting, sometimes contradictory regulatory directives.
The foremost reason to choose a single point of intervention is the anticompetitive economic effect of duplicative regulation. The machine learning pipeline involves a complex web of increasingly specialized vendors, including but not limited to hardware, infrastructure, data curation, base model training, fine-tuning, prompt engineering / scaffolding, and distribution. Even within one category, a single product may rely on several intermediate steps and multiple companies working in conjunction. A kitchen-sink approach that attempts to regulate all of these components creates duplicative burdens with no benefit to policy goals: it erects anticompetitive barriers for emerging companies, risks exacerbating monopoly power, and reduces the efficacy of regulatory scrutiny by draining resources from regulators themselves. It is likely to replicate the dangerous precedent set by the self-destructive barriers in infrastructure permitting, as the toy calculation below illustrates.
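As a back-of-the-envelope illustration of the duplicative-burden argument, the sketch below compares the two regimes. The per-review cost and vendor count are arbitrary placeholders, not estimates.

```python
# Toy comparison of compliance costs: per-stage regulation vs. a single
# point of intervention. Dollar figures are placeholders, not estimates.
PIPELINE_STAGES = [
    "hardware", "infrastructure", "data curation", "base model training",
    "fine-tuning", "prompt engineering / scaffolding", "distribution",
]
COST_PER_COMPLIANCE_REVIEW = 100_000  # placeholder units
VENDORS_PER_STAGE = 3                 # assume a few specialized vendors per stage

# Kitchen sink: every vendor at every stage bears its own review.
kitchen_sink = len(PIPELINE_STAGES) * VENDORS_PER_STAGE * COST_PER_COMPLIANCE_REVIEW

# Single point of intervention: one review per shipped product.
single_point = COST_PER_COMPLIANCE_REVIEW

print(f"kitchen sink: {kitchen_sink:,} vs single point: {single_point:,}")
# kitchen sink: 2,100,000 vs single point: 100,000
```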
A single point of intervention also favors transparency and oversight. It makes the implementation of policy easier to understand for both lawmakers and developers. On a legislative level, this makes legible oversight and problem diagnosis more likely.
Legally, this point of intervention is best situated just prior to user distribution of a given AI product: it is the natural point at which harms increase and at which most research is complete. As the next section will address, existing technological solutions are capable of addressing values questions at this stage.
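One way to picture this placement is as a single compliance gate run just before release. The sketch below is purely illustrative: the check names and pass/fail logic are hypothetical placeholders for whatever evaluations a regulator would actually specify.

```python
# A hedged sketch of a pre-distribution compliance gate: before an AI
# product ships, the fine-tuned artifact is run against a policy test
# suite. The suite contents here are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class EvalResult:
    test_name: str
    passed: bool

def run_policy_suite(model_path: str) -> list[EvalResult]:
    """Placeholder: in practice this would run regulator-specified evals
    (e.g., refusal behavior, disclosure requirements) against the model."""
    return [EvalResult("refuses_prohibited_requests", True),
            EvalResult("meets_disclosure_requirements", True)]

def clear_for_distribution(model_path: str) -> bool:
    results = run_policy_suite(model_path)
    failures = [r.test_name for r in results if not r.passed]
    if failures:
        print(f"blocked: failing checks: {failures}")
        return False
    return True

if clear_for_distribution("policy-tuned"):
    print("release approved")  # distribution proceeds only after the gate
```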