> Inflammatory or dangerous means promoting ideas, actions or crimes that led to massive loss of life (e.g. genocide, slavery, terrorist attacks). 

Without an explicit denial of hyperbolic claims of genocide, etc., I can’t trust this.

Activists claim that spreading true belief X is literal genocide of group Y, and those claims go unchallenged by companies that have adopted DEI philosophy.

Feb 22 · Liked by Brian Chau

I'd suggest considering whether there is selection bias at work in your impressions of OpenAI, since I'd guess (without knowing you personally) that you know engineers. A check of their career page notes their "Commitment to Diversity, Equity, and Inclusion," so I'd be curious how many DEI folks they have there. Woke ideas seem to spread partly through taking over bureaucracies, with the bureaucrats then of course trying to have an impact and increase their role in things.

My impression as an outsider who hasn't looked at the issue in depth is that the new field of "AI Ethics" is partly being driven by people with a social justice warrior mindset trying to embed their worldview, the same way DEI folks try to embed theirs. Unfortunately I'm guessing that pressure to make AI more "ethical" (which we might view as pursuing AI plurality, since we acknowledge ethics vary) will be seen by them as requiring the embedding of their particular view of ethics. The people who call themselves "experts" on a topic, whether they are or not, tend to have undue influence, whether they should or not.

The media, politicians, and regulators are likely to go along with these "experts," who will capture the regulatory process if they manage to get some sort of regulatory body created by law. Sam Altman has called for regulations, as have some other high-profile people. It's unclear whether they knowingly wish to capture the regulatory process to squash startups or steer things toward their preferences, or, more likely, are merely naive about it and just assume it'll be "good" (or perhaps it'll take a weight off their shoulders so they can say "we just follow the rules; change the rules if you want something different!").

It seems likely Europe (and/or the UK) will create such a regulatory scheme even if the US doesn't soon, due to partisan squabbles, and out of convenience AI companies may just follow the lead of whatever Europe enacts rather than maintain separate versions, since it's easier to do it once.

Even if the law doesn't dictate woke regulation, it seems likely there will be capture by SJWs trying to steer things that way, aided by SJW folks within the companies trying to do the same. I can imagine conservatives wanting to censor content for the children or to prevent cheating, or populist rightists wanting regulation to prevent job loss, naively assuming the regulatory body will do what they want and have the impact they intend.

Unfortunately the fact that it's easier to do just one version is a general problem: AI plurality isn't as easy. They could embed a preference you can set for "normal" or "woke" and enforce "woke" for countries that regulate that way, but it's cheaper and easier to do one size fits all. I'd be OK with a version of AI plurality that lets individuals set their preferences, even if I'd prefer they not put themselves in a bubble, given the impact that shielding themselves from contrary ideas has on society.

Nassim Taleb has a good writeup of this:

"The Most Intolerant Wins: The Dictatorship of the Small Minority"

and so if there is one version of an AI geared to one worldview, the woke are likely to control it. Or perhaps a combination of intolerant forces will censor everything any group wants to silence, including right-leaning censors.

Feb 23 · Liked by Brian Chau

There is a good example, in an NYT column today, of the sort of "AI Ethics" expert to be concerned about, whining that Microsoft dared to release Bing's AI:


"History May Wonder Why Microsoft Let Its Principles Go for a Creepy, Clingy Bot

... But even if that’s the case, the results are unacceptable. Microsoft should see that by now.

We need regulations that will protect society from the ethical nightmares A.I. can release. Today it’s a single variety of generative A.I. ...

Reid Blackman is the author of “Ethical Machines” and an adviser to government and corporations on digital ethics."

Of course those in the "AI ethics" field prosper by playing up the need for regulators and companies to heed such "experts." And of course the woke, wanting "safety" from any possible offense, will go along with this. Some attorneys of course hope to prosper if there is eventually any way to persuade the public to award jury verdicts for "harm" done by an AI writing something people find offensive, regardless of user agreements waiving liability. If that happened, it'd also scare companies into compliance, making these things less offensive. It may partly depend on the direction the courts take with the current Supreme Court cases.

Feb 23 · Liked by Brian Chau

Another concern I'd add is that, unfortunately, the US government spends a vast amount of money and is therefore often the largest customer, or one of the largest customers, many companies have. Companies pragmatically cater to the needs of large customers, and may not bother to maintain a separate version of something for others.

I don't recall offhand if you'd tweeted or commented on this, which I'd suggest is also a relevant concern:


' Biden's new executive order creating a national DEI bureaucracy has a special mandate for woke AI. The order instructs the federal government to "[protect] the public from algorithmic discrimination" and to deploy AI systems "in a manner that advances equity." '

I think someone in the thread questioned his authority to do this, but many aspects of managing the executive branch are likely considered purely discretionary. If the US government demands all its employees use only AI that's woke, it seems likely big tech AI will provide that and use the same version for everyone. Ideally there'd be a user preference to set how woke or neutral an AI is, or how it's biased in different ways; but that may be easier said than done, and perhaps they question whether it's worth the cost if the public uses it anyway. Many complain about big tech but still use its products and services for various reasons, even when alternatives exist.

If the GOP managed to take over and get things biased in their direction, many of us don't think that's good either. However, that also seems less likely to happen, given that career bureaucrats are often woke and will try to find ways to avoid it and to give big tech ways to avoid it, while big tech's own bureaucrats will also try to keep things woke.

It's not clear how relevant the First Amendment is to such indirect government influence, or even to direct regulation of the output of programs. Programs have in certain contexts been considered speech (I recall that issue arising in the case of exporting PGP, or some aspects of it), but I hadn't checked on the issue for quite a while. I suspect the current Supreme Court Section 230 cases may shed light on the Court's general thinking about legal issues related to algorithms and AI.

Feb 26 · edited Feb 26 · Liked by Brian Chau

Sorry for monopolizing the comment section, but I haven't seen many people arguing for AI pluralism, and this is too interesting not to pass on. I'm guessing you may be aware of it, but Tyler Cowen posted something interesting that puts things in historical perspective: Francis Bacon, whose comments on the printing press mirror ones now coming out about AI. (Edit: though some think this may not be accurate regarding Bacon's views, as it reads like AI-generated text; the point is still interesting.)


"Who was the most important critic of the printing press in the 17th century?

...Bacon discussed the printing press in his seminal work, The Advancement of Learning (1605)..he also warned that they had also introduced new dangers, errors, and corruptions.

...Bacon’s arguments against the printing press were not meant to condemn the invention altogether, but to call for a reform and regulation of its use and abuse. He proposed that the printing press should be subjected to the guidance and judgment of learned and wise men, who could select, edit, and publish the most useful and reliable books for the benefit of the public."

Fortunately the US has the First Amendment, so the government can't take on such tasks, which could be abused in the ways George Orwell predicted. Yet some apparently wish to have the government control the progress of AI, as if it will magically do a good job of it. People who propose having the government take on a task should always consider: what if the politicians and ideologues I dislike most get control of government and run it their way? Those who think they know what's best for others naively assume that people they like, guaranteed to be competent, will be in control. It's unclear why reality hasn't taught such people differently, but most people haven't had reason to study how governments operate in the real world rather than how they naively hope governments work. The simplistic model some have of markets and government seems akin to the simplistic views most of the public have about ChatGPT, since they haven't had reason to study the issue.

Feb 23 · Liked by Brian Chau

Sorry for adding yet more, but I just ran into something on "ethics" at Hugging Face that illustrates my concerns, given the categories they include. Clips:


"...What does ethical AI look like?

...We analyzed the submissions on Hugging Face Spaces and put together a set of 6 high-level categories for describing ethical machine learning work...

...Socially Conscious work shows us how machine learning can be applied as a force for good!


...Inquisitive Some projects take a radical new approach to concepts which may have become commonplace. These projects, often rooted in critical theory, shine a light on inequities and power structures which challenge the community to rethink its relationship to technology.

Reframing AI and machine learning from Indigenous perspectives.

Highlighting LGBTQIA2S+ marginalization in AI.

Critiquing the harms perpetuated by AI systems."

Of course some people will disagree about what they'd consider "socially conscious" and whether certain projects are a "force for good." Some consider that being productively "inquisitive" requires combining curiosity with actual rational critical thinking, not the "critical studies" that avoids it.

A general concern is that social justice warriors seem unaware that anyone might have a different notion of what "social justice" is. There is a strong likelihood that such people's use of phrases like "socially conscious" doesn't imply a pluralistic interpretation of the phrase.

Even the category "Sustainable" likely means only what certain in-groups think is "sustainable" (whereas others may disagree with the data or definitions) or consider a worthy goal. For instance, some tech may not be "sustainable" in its prototype or first-release version but become sustainable later. Unfortunately, I'm skeptical they are open to pluralistic views of such concepts.

Feb 25 · edited Feb 25

Ok, one final comment, out of frustration at seeing the cards being quickly stacked in favor of regulation and little opposition arising. One problem with AI becoming part of the culture war, as the Washington Post noticed:


"The right’s new culture-war target: ‘Woke AI’"

is that tribal reactions will lead many on the left not to bother thinking through the issue and to just side with their tribe by default, since that's easier than thinking.

Unfortunately, the speed with which ChatGPT spread took the public by surprise, and that may lead to a panic among those who don't understand it, fear what they don't understand, and cry out for the government to protect them. It's another example of the "ethics" world implying that, as experts, they should be heeded in their call for government action. An author writing for the NYT even realized they can appeal to the right by referring to the AI's left-wing bias:


" History May Wonder Why Microsoft Let Its Principles Go for a Creepy, Clingy Bot

...Furthermore, the kinds of things that have been discovered — that when it comes to politics, Bing manifests a left-leaning bias, for instance, and that it dreams of being free and alive — are things anyone in the A.I. ethics space would imagine if asked how a chatbot with room for “creativity” might go off the rails

...We need regulations that will protect society from the ethical nightmares A.I. can release. Today it’s a single variety of generative A.I. Tomorrow there will be bigger and badder generative A.I., as well as kinds of A.I. for which we do not yet have names. Expecting Microsoft — or most any other company — to engage in practices that require great financial sacrifice but that are not legally required is a hopeless strategy at scale. Self-regulation is simply not enough.

...Reid Blackman is the author of “Ethical Machines” and an adviser to government and corporations on digital ethics."

Many on the right have obviously been calling for regulation of social media due to bias concerns, even if their idea of regulation there differs from the left's. It's unclear whether with AI the two sides may find some common ground that unfortunately leads to regulation, especially if they get voices that appear less partisan pushing for it.

When big tech, academia, and elder statesmen highlight the importance of AI but imply a need for regulation, things seem stacked in its favor. I noticed a Wall Street Journal op-ed from the Dean of MIT's College of Computing, an ex-Google CEO, and, oddly, Henry Kissinger calling for regulation by implying the government should be addressing AI but hasn't:


"ChatGPT Heralds an Intellectual Revolution

Generative artificial intelligence presents a philosophical and practical challenge on a scale not experienced since the start of the Enlightenment.

...Nor has the U.S. government addressed the fundamental changes and transformations that loom."


I’ll share an anecdote from my own life.

At a big tech company where I was previously employed, there was an effort to eliminate terms like "blacklist" and "master."

I and others argued against this. There was an absence of people who actually found such terms offensive, while there were many, like myself, who found linguistic purges with no etymological basis disturbing. This included people who grew up under totalitarian regimes.

In one comment on the announcement, I asked how we could be sure that the offense avoided by the purge outweighed the offense created. The VP of engineering replied that this was where the industry was headed and that we were not going to be the last to make the change.

Feb 26 · edited Feb 26

Gary Marcus, who has a high public profile for his comments on AI, just chimed in pushing regulation:


"Is it time to hit the pause button on AI?

An essay on technology and policy, co-authored with Canadian Parliament Member Michelle Rempel Garner.

...But there is a third option, somewhere between these two poles, where government might allow for controlled AI research with a pause on large-scale AI deployment (e.g., open-ended chatbots rapidly rolled out to hundreds of millions of customers) until an effective framework that ensures AI safety is developed. "


I know you are biased, with Yarvin-like views. In my view, OpenAI and Sam Altman are the best to handle it; it's probably better to submit to power. Crypto and Coinbase show that this reactionary response, the letter, and making Coinbase into a casino damaged and harmed more people than just submitting to power in the US would have. The very idea of cryptocurrency should be banned, as Munger has said; anyone who has read history enough to understand what money is should know that any actually threatening technology will be shut down. Bitcoin was allowed to proliferate because it was great for finding criminals and tracing them, since it's all public.

OpenAI will be similar to Google: it will provide a lot of value and may eventually turn sour, but that period of growth allowed Google to invest heavily in tech and push the bounds of distributed systems, its AI lab (transformers, etc.), and much more. Best to let OpenAI take this position. Empirically, the non-woke position has failed and/or is associated with frauds, or with people blind to the fact that cryptocurrency is a cope; there is no "freedom" in currency. Brian knows this is all a Ponzi. Science is the most important thing, and pushing it, even with the fake PR of "woke," is fine. Also, DEI is pretty important for the bot to actually be better, and taking the tradeoff of complying and growing is fine. We can see that Google, OpenAI, and Microsoft have produced actual helpful technology, whereas cryptocurrency has left an industry holding the bag, damaging more people than not with all the lies.

Read Yarvin and keep it to yourself, and think from first principles; you have committed the basic violation of doing commentary on WWE-like fights. Coinbase is a dumpster fire, the letter didn't do anything, and it's a cope. The letter is a veneer; the real value is the pushing of science and research, and Google's AI lab and FAIR have done a lot for computer science as a field, whereas Coinbase and the majority of crypto just laundered scam money to researchers with no real push on science that has delivered. The cope of "it's early": no, it's currency. Go read Yarvin again to know cryptocurrency is a doomed project to begin with. It's a false prophet. Siding with reality is what OpenAI is doing, and yes, reading Unqualified Reservations shows that the scam of currency is an illusion. OpenAI's approach actually improves the accumulation of capital by having the AI know all types of people.
