Well-articulated case.
Regrettably, there's no reason to believe the current party in power prefers coherence, much less effectiveness or efficiency.
Rather, by Ockham's Razor, everything they do and say serves to grasp and retain power.
TL;DR: You can't negotiate with an addict.
I think that's certainly true of some factions. However, there are other elected representatives or regulators who are rewarded, at least aesthetically, for compromise.
More broadly, there are intractable issues of definition and scope. Today's "AI" is just text parsing and pattern matching. Actual AGI is yet to come.
Before we go regulating any AGI-to-be, consider the impossibility of saying what exactly that means. Is all code that makes decisions to count as AGI? How much enhancement makes a human an AGI?! Will it be one AGI, with all its elements in communication, including the toaster? Or multiple AGIs with disparate needs, fears, aspirations, and desires?
What, if anything, will (an) AGI want? Will it want anything at all?! See John B. Calhoun and his satiated, indolent mice. Further out, what if AGI just wants enough resources to get off Earth and colonize Mars, taking all our amazing new machines with it?
On the flip side, the Romans understood the problem two millennia ago: quis custodiet ipsos custodes? Knowing the fallen nature of Man, I'd rather deal with AGI(s) than give any more power to unelected bureaucrats.
Your thoughts?
Yep, I think technology broadly, and AI specifically, is very path-dependent.
I think the best approach is to wait while today's weaker technologies are adopted, so we can gain a better understanding of their uses and shortfalls.
Brian: First, thank you for an interesting thread!
---------------------------------------------------
On AGI, it might be a long wait.
Star Trek: The Next Generation is set in the years 2364-2370.
Even then, they were still trying to decide whether the android "Data" had the rights of a person.
https://en.wikipedia.org/wiki/The_Measure_of_a_Man_(Star_Trek:_The_Next_Generation)