Discussion about this post

Brandon Adams:

> Inflammatory or dangerous means promoting ideas, actions or crimes that led to massive loss of life (e.g. genocide, slavery, terrorist attacks). 

Without an explicit denial of hyperbolic claims of genocide, etc., I can’t trust this.

Activists claim that spreading true belief X is literal genocide of group Y, and those claims go unchallenged by companies that have adopted DEI philosophy.

W. James:

I'd suggest considering whether there is selection bias at work in your impressions of OpenAI, since I'd guess (without knowing you personally) that the people you know there are engineers. A check of their career page notes their "Commitment to Diversity, Equity, and Inclusion," so I'd be curious how many DEI folks they have there. It seems woke ideas partly spread by taking over bureaucracies, with the bureaucrats then naturally trying to have an impact and expand their role in things.

My impression as an outsider who hasn't looked at the issue in depth is that the new field of "AI Ethics" is partly driven by people with a social justice warrior mindset trying to embed their worldview, the same way DEI folks try to embed theirs. Unfortunately, I'm guessing that pressure to make AI more "ethical" (which we might view as pursuing AI Plurality, since we acknowledge that ethics vary) will be viewed by them as requiring the embedding of their particular view of ethics. The people who call themselves "experts" on a topic, whether they are or not, tend to have undue influence, whether they should or not.

The media, politicians, and regulators are likely to go along with these "experts," who will capture the regulatory process if they manage to get some sort of regulatory body created by law. Sam Altman has called for regulation, as have some other high-profile people. It's unclear whether they knowingly wish to capture the regulatory process to squash startups or steer things toward their preferences, or (more likely) are merely naive about it and just assume it'll be "good" (or perhaps it'll take a weight off their shoulders so they can say, "we just follow the rules; change the rules if you want something different!").

It seems likely Europe (and/or the UK) will create such a regulatory scheme even if the US doesn't soon, due to partisan squabbles. Out of convenience, AI companies may just follow the lead of whatever Europe enacts rather than maintain separate versions, since it's easier to do it once.

Even if the law doesn't dictate woke regulation, it seems likely SJWs will try to capture the process and steer things that way, with SJW folks inside the companies pushing in the same direction. I can also imagine conservatives wanting to censor content "for the children" or to prevent cheating, or populist rightists wanting regulation to prevent job loss, all naively assuming the regulatory body will do what they want and have the impact they intend.

Unfortunately, the fact that it's easier to do just one version is a general problem: AI plurality isn't as easy. Companies could embed a preference you can set to "normal" or "woke" and enforce the "woke" setting in countries that regulate that way, but it's cheaper and easier to do one-size-fits-all. I'd be OK with a version of AI plurality that lets individuals set their preferences, even if I'd prefer they not put themselves in a bubble, given the impact on society of shielding themselves from contrary ideas.

Nassim Taleb has a good writeup of this dynamic, "The Most Intolerant Wins: The Dictatorship of the Small Minority":

https://medium.com/incerto/the-most-intolerant-wins-the-dictatorship-of-the-small-minority-3f1f83ce4e15

So if there is one version of an AI geared to one worldview, the woke are likely to control it. Or perhaps a combination of intolerant forces will censor everything any group wants silenced, including right-leaning censors.
