What exactly is the danger here that justifies regulation? OK, don't hook an AI up to the nuclear missile launch controls, but beyond that, in almost all of these contexts all AI is doing is offering a better, personalized way to learn things.
I mean, you could have made the same argument against Google and Wikipedia. Sure, library books containing information about nuclear physics and weapons were fine, but we need regulations to stop that information from showing up in Google searches, because finding it there is orders of magnitude easier and less work, and therefore qualitatively far more dangerous.
Insofar as an AI is doing nothing but what an exceptionally patient and well-informed (but naive as to your motives) teacher would do, surely we ought to apply our general presumption that extraordinary justification is required before legally restricting certain kinds of information. After all, if you can do it for information about nukes, why not for misinformation?
Rereading Chau, the real danger is "political content." And given the relentless power grabs by bureaucrats, we can't dismiss that concern with hand-waving about "fine-tuning."