Towards a Sane Compromise on AI Part 2
Content-Based Harms and Universal Benefits
As of June 17th, there were 648 AI-related state and federal bills, a majority of which address content issues in whole or in part. These are in addition to directives issued by federal agencies under the Biden Executive Order.
Several bills related to AI content have already passed. Twenty-eight states have bills targeting deepfakes. Colorado passed a law targeting “algorithmic discrimination” in hiring, legal decisions, and insurance. NTIA issued a broad directive with an expansive list of harms, including “Lowered information integrity”, “Discriminatory treatment, impact, or bias”, “Impacts on privacy”, and “Infringement of intellectual property rights”. Fine-tuning is well suited to addressing List 1 content harms like these.
Basic AI research underpins applications across a wide variety of scientific fields and industries, as the examples in List 2 illustrate. Regulation that targets research rather than use cases would limit the efficacy and availability of all of these applications.
List 2
Monitoring grasslands and improving agricultural data.
Improving government services at over one thousand points of application.
Addressing individuals’ concerns to reduce belief in conspiracy theories.
Developing cancer vaccines.
Automating medical paperwork.
Lawmakers have recognized the importance of protecting and incentivizing the economic, scientific, and social benefits of fundamental AI research. In May 2024, a bipartisan Senate working group led by Senate Majority Leader Chuck Schumer (D-NY) released a roadmap titled “Driving U.S. Innovation in Artificial Intelligence”. This marked a turn away from restricting fundamental AI research, prompting economist Tyler Cowen to declare that “The AI ‘Safety Movement’ Is Dead”. Instead, a growing bipartisan coalition[1, 2] supports addressing realistic harms from the misuse of AI rather than preventing or pausing AI research.
We draw upon the MIT framework for key objectives:
Maintaining U.S. AI leadership – which is vital to economic advancement and national security – while recognizing that AI, if not properly overseen, could have substantial detrimental effects on society (including compromising economic and national security interests).
Achieving broadly beneficial deployment of AI across a wide variety of domains. Beneficial AI requires prioritizing: security (against dangers such as deepfakes); individual privacy and autonomy (preventing abuses such as excessive surveillance and manipulation); safety (minimizing risks created by the deployment of AI, particularly in already regulated areas such as health, law, and finance); shared prosperity (deploying AI in ways that create broadly accessible opportunities and gains); and democratic and civic values (deploying AI in ways that are in keeping with societal norms).
While individuals may differ on which of these issues are worth addressing, lawmakers are actively considering enacting many of these bills, so a framework for implementing values-based policies efficiently is needed.