Discussion about this post

W. James:

Sorry, I should have said first that this is great and useful to do; thanks for doing it. If there is a way to combat AGI fears, it does help. My immediate reaction, from decades of debating true believers on various topics (from nonsense like homeopathy to socialism), was that even if this is fully logical and accurate, long arguments leave room for true believers to find some way to rationalize a loophole.

There are so many uncertainties in predicting the future that I suspect it's not going to be easy to get those driven by fear to think clearly enough to set it aside, unless and until there are simple, short arguments. Of course, sometimes the long arguments are needed first, before people can figure out how to compress them, à la the famous line "if I had more time, this letter would have been shorter."

It's just that there aren't enough people concerned with combating the threat of regulation, and it's a shame if the AGI debate distracts attention from the difficult near-term job of finding a way to head it off.

W. James:

I agree AGI isn't imminent. Yann LeCun, Meta's head of AI, seems to suggest it's not worth his time debating the more extreme doomsayers since there are more realistic immediate concerns to focus on. To those concerned about regulation (unfortunately he is a pro-regulation type), it's an interesting theoretical debate that distracts from the near term: the FTC complaint calling for heavy regulation of AI, OpenAI pushing for near-term regulations, and so on.

Perhaps AGI fears heighten the concerns that underlie some of that, but it seems focusing on the near term may be more useful than getting distracted by debates about something that is very unlikely to be imminent. Those who believe AGI isn't imminent should believe those debates can be postponed (or left to those who have no interest in fighting regulations), even if it's a shame that some sharp folks waste time on AGI fears in the meantime.

While it's unfortunate that some sharp people in doomsayer mode regarding AGI get media coverage, that seems a distraction from the near-term threat of regulation of non-AGI systems, and from other more timely AI issues like pluralism. OpenAI's request for help in "democratically" deciding AI steering is essentially pushing the idea of regulation via that process, voluntary at first but perhaps later with a push for "democratic" governments to adopt it.

Then perhaps there will be a push to use those democratic processes, designed to limit AI speech, as an excuse to find a way to apply them to human speech (at least where there is no First Amendment, while they figure out ways around it here and hope that big tech falls in line to comply with this "democratic" process).

Or the OpenAI democracy push may indicate an implicit hope that they can head toward a monopoly that is run in a "good" fashion: pushing Microsoft to prevent add-ons that let other AIs plug into its office suite, search engine, etc., so that people use the "good" AI, or at least requiring all AIs that Microsoft allows (and that Google allows in its office suite and its AIs) to comply with this "democratic" process.

I suspect many were taken by surprise by the level of emergent behavior in LLMs. All projections of the future are problematic, à la Alan Kay's "The best way to predict the future is to invent it." Those concerned with AGI will simply fall back on the non-falsifiable prior that someone may invent something leading to equally unexpected advances. Unfortunately, prior trends are no guarantee of the future, which leaves true believers loopholes that long arguments full of data about trends that may change cannot easily squash.

There are other near-term issues. The article at https://FixJournalism.com, on using AI to nudge mainstream news toward neutrality so it can regain trust, mentions the issue of AIs that summarize the news, which undermines news outlets' advertising revenue. That article was drawing attention to how AI can help the news, but AI can also harm it in ways that may lead outlets to call for regulation. All news would need to be behind paywalls, or there is no revenue to generate the "free news" that AI draws on; of course, some US news can come from the free NPR, or even from foreign outlets like the BBC. The news media has already been pushing for regulations to hand it a piece of Google's and Facebook's revenue, and for bailouts and government funding.

The real concern is human alignment in the near term, in terms of mindset regarding regulations. The site https://PreventBigBrother.com, which points out the analogies between regulating speech and regulating AI, is an attempt to get those in the public who still value free speech to consider protecting AI speech like human speech.

Perhaps that isn't the approach that will work, but there need to be varied attempts to figure out what might get traction with the general public or with politicians.
