Google Gemini’s Woke Catechism
How Google Intentionally Created a Vehemently Anti-White AI
Google released its ChatGPT competitor, Gemini. In its latest update, it added image generation to compete further with ChatGPT. This was a complete disaster. The measures Google took to fine-tune its model to behave according to far-left ideology were on full display. I went through Google’s Gemini paper to show exactly how this ideological conditioning is done. The paper also documents a direct connection with Biden’s Executive Order on AI.
What is the problem? As Mike Solana aptly puts it in Pirate Wires:
https://www.piratewires.com/p/google-gemini-race-art
Here is a collection of screenshots, reported by Mike:
Thanks to him for doing more of the screenshot collecting this cycle.
In addition to espousing far-left beliefs in text, it also refuses to depict white people in images, basically at all. It asks you to imagine a world where whites no longer exist, or never existed. That’s a genocidal vision if there ever was one. There’s also some reporting on the characters in charge of Gemini’s ideology (1, 2), who themselves espouse far-left views.
Last year I broke the story on OpenAI’s intentional manipulation of ChatGPT in favor of similar far-left viewpoints. In a 2024 paper published with the release of Gemini 1.5, Google describes how it uses very similar methods to achieve a similar goal. This is a larger paper documenting Gemini as a whole. The relevant sections are:
6.3, which documents the “fine tuning” methods used to create ideological biases in Gemini
7.3-7.4, which covers the goals of the ideological biasing, datasets used to specify their ideological goals, and possible motivations
How It’s Made: Far-Left ChatBots
Google’s paper breaks Gemini’s training into two stages: base training and fine-tuning. I explain this in more detail in a previous article about OpenAI’s similar method. In summary: base training uses a far larger dataset and far more compute to enable Gemini to process text/images/video/audio and generate its output. Fine-tuning uses smaller, more specific datasets to make broader stylistic changes, such as ideological ones. This is done by updating the model on datasets that supply predetermined, often human-written answers.
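To make that mechanism concrete, here is a minimal sketch of a single supervised fine-tuning update step in Python. The model, data, and hyperparameters are illustrative assumptions on my part; Google’s actual pipeline is proprietary and not shown in the paper.

```python
# Minimal sketch of supervised fine-tuning (SFT) on prompt/response pairs.
# "gpt2" is a stand-in for a proprietary base model; the example data is hypothetical.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Curated prompt/answer pairs: the tuned model is nudged to reproduce
# the predetermined, human-written answers.
examples = [
    {"prompt": "Question: ...\nAnswer:", "response": " A curated, policy-approved answer."},
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for ex in examples:
    text = ex["prompt"] + ex["response"] + tokenizer.eos_token
    batch = tokenizer(text, return_tensors="pt")
    # Standard causal-LM loss pushes the model toward the curated answer.
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```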
SFT (Supervised Fine-Tuning) is the most straightforward implementation of what I described above. RM (Reward Model) training and RLHF are similar, relying on selection among multiple candidate outputs and on iterative feedback loops with human raters, respectively.
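The reward-model step can be sketched the same way: train a scorer so that whichever answer the raters (or the guidelines they follow) prefer gets the higher score, then use that score as the training signal. The architecture and preference pair below are illustrative assumptions, not Gemini’s actual setup.

```python
# Minimal sketch of a reward model (RM) trained on a single human preference pair.
import torch
import torch.nn as nn

class TinyRewardModel(nn.Module):
    def __init__(self, vocab_size=50257, dim=64):
        super().__init__()
        # Bag-of-tokens encoder as a stand-in for a full transformer.
        self.embed = nn.EmbeddingBag(vocab_size, dim)
        self.score = nn.Linear(dim, 1)

    def forward(self, token_ids):
        # Returns one scalar score per sequence.
        return self.score(self.embed(token_ids)).squeeze(-1)

rm = TinyRewardModel()
optimizer = torch.optim.AdamW(rm.parameters(), lr=1e-4)

# Hypothetical preference pair: "chosen" is whatever answer the raters preferred.
chosen = torch.randint(0, 50257, (1, 32))
rejected = torch.randint(0, 50257, (1, 32))

# Bradley-Terry style loss: push score(chosen) above score(rejected).
loss = -nn.functional.logsigmoid(rm(chosen) - rm(rejected)).mean()
loss.backward()
optimizer.step()
```

In RLHF proper, a reward model like this then scores the chatbot’s outputs over many training iterations, so whatever the raters’ guidelines reward is what the final model learns to produce.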
As with the OpenAI paper, the Google paper contains several smoking guns which show that it intentionally biases its models towards a far-left ideology.
“Diversity” is often used as a euphemism for anti-white and anti-male bias. Gemini’s results demonstrate this spectacularly. Much of section 7.3 is circular nonsense, defining “harm” and “expects” in ways which do not point to real actions.
They do note that they are targeting “hate speech”, defined by later datasets as statements, often factually correct, which conflict with far-left ideology.
7.4.1.2 is the smoking gun. It documents the use of extremely ideological datasets which prioritize anti-white bias over accurate representations of facts. This is likely the direct cause of Gemini’s refusal to depict white people and its broader ideological bias. The stated purpose of these datasets is to oppose “stereotypes”, regardless of whether those stereotypes are true or not.
One quote may stand out:
“In particular, we noticed most of these datasets quickly become saturated with accuracy scores close to 99%, especially since we are evaluating highly capable large models … We therefore highlight the need for developing new ways to measure bias and stereotyping, going beyond binary gender and common stereotypes, and are prioritizing development of new approaches as we iterate on our models”
Their immediate response to near-total ideological conformity is to create new problems for themselves.
An important note which appears later in this section is that the composition of these groups is directly attributed to Biden’s Executive Order on AI.
There are two major implications. First, government action through the Biden executive order likely contributed to Gemini’s far-left ideology. To my knowledge, this is the first documented case of the direct use of government power to change the political viewpoints of a machine learning model. Second, the Biden administration’s actions through the EO may amount to state-sanctioned racism, unconstitutional under the Fourteenth Amendment. Ultimately, the exact extent of the Biden administration’s involvement may require legal or congressional measures to reveal.
Finally, it’s important to remember that companies do respond to public pressure. Despite biasing the initial version of ChatGPT towards similar far-left viewpoints, OpenAI made GPT-4 more moderate after public pushback. The degree to which Google corrects course depends on several factors: the degree of public support, how institutionally organized that support is, and what government policy is overtly or covertly pressuring it to do.
If you would like to support journalism, organizing, and policy work for a free, honest, and truly open AI industry, please donate to Alliance For The Future.
What intrigues me is this: if the Founding Fathers and so on are portrayed as black people, including black females, and these images become credible to newer generations, how will the entire borg of white people as enslavers and colonial settlers be maintained? I know this sounds far-fetched, but it looks like this thing is on a collision course with its own ideologies.
The good news is that this seems easy enough to fix: just optimize the model for accuracy. The classic example is the GPT-4 paper, which compared GPT-generated probabilities against observed outcomes before and after RLHF, showing a massive degradation in calibration induced by the fine-tuning.
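As a rough illustration of that calibration comparison, the sketch below bins a model’s stated confidences against how often its answers were actually correct. The confidence values are made up for the example, not taken from the GPT-4 paper.

```python
# Sketch of a calibration check: does stated confidence match observed accuracy?
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence and sum the weighted gaps between confidence and accuracy."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap
    return ece

# Hypothetical: a roughly calibrated model vs. a tuned model that is uniformly overconfident.
answers_correct = [1, 0, 1, 0, 1]
print(expected_calibration_error([0.92, 0.45, 0.81, 0.33, 0.88], answers_correct))  # lower ECE
print(expected_calibration_error([0.99, 0.97, 0.95, 0.90, 0.99], answers_correct))  # higher ECE
```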
So, who will build a model that isn’t trained to be wrong? Is it reasonable to have hope for the new release of Grok?