14 Comments

I don't really think this is a problem that needs solving.

The reason this stuff is happening is that AI models currently aren't a business product but a PR move -- look how awesome we are, you should totally buy our stock/come work here/give us funding. But this is what you expect from PR stunts: functionality is secondary to avoiding getting people upset (it's just that they did this very badly).

Once these are actually mostly revenue streams, I expect the consumers will be given control over the filters to allow them to best use the product. It won't be news that someone got their AI to say something racist when they turned off that filter, any more than it's news that I can add the n-word to my autocomplete dictionary.

Due respect, but if you don't think this is a problem that needs solving, you are either very liberal or you have spent no meaningful time at all with these systems on issues that remotely touch politics.

But my point is that as soon as they aren't just a toy, the user will have the option to turn this crap off or even tune it to go the other way.

Currently AI is in the same place Boston Dynamics was 5-10 years ago with those videos of dancing robots. Yeah, it would be horrible if robots issued to help soldiers in combat danced gleefully and gave away their position -- but that's not what the product will actually do. They are just trying to show off that they have a critical capability (balance) that concerns potential future customers, and to grab some good PR. These AI models are just demos designed to produce some good PR (big fuck-up there, Google) and to convince investors and future customers that they have enough control over the system to train it to avoid producing output the company paying for the product wouldn't want.

The AI that actually gets sold will need to do different things for different customers. Pepsi is going to want to be sure an AI it uses doesn't claim to love Coke; the NRA is going to want to be sure the AI it uses to manage donation and get-out-the-vote efforts doesn't say bad things about guns; etc.

At the end of the day, profit drives these companies, and when these products actually become big revenue generators they'll be configurable to the customer's needs -- just like Google Search offers SafeSearch because porn upsets some people, but also lets you turn it off.

And if these big tech companies refuse to do that, they'll get their market stolen out from under them by someone who will. These are people who want to make money -- sure, they may have mistaken ideas about what the public will find normal, but they aren't going to give up on selling these AI services to half or more of the country. Don't worry, greed will sort it out.

You *may* be correct but there is no guarantee at all that you will be, and lots of recent evidence to suggest you will not.

YouTube offers no way to turn off their censorship; is it only profit that drives YouTube?

Pre-Musk Twitter offered no such way. Do you genuinely believe it was only profit that drove pre-Musk Twitter?

Google Search offers no way I'm aware of to avoid the fact that it is tuned to favor leftist media sources and disfavor non-leftist ones. Do you really believe this is solely based on a quest to optimize profits?

Facebook and Google News offer little to no way to turn off their censorship of what you do and do not get to see. Do you genuinely believe that it is only profit that drives the leftism in big tech on politics and identity issues today?

Parler was deplatformed by AWS, Apple and Google Android virtually simultaneously. Do you genuinely believe this was done solely to optimize profits? [I guess here you could at least make an argument that it was - to curry favor with interventionist leftist regimes...]

Then there is the even harder problem that the training data sets are skewed left on most topics. There is no easy way to simply "overcome" this, or even make it easily user-tunable.

In short, it's not that there isn't a problem; it's that there isn't an AI problem but a societal problem. Lots of people like and want censorship of ideas they dislike, and when you run a media site it often makes sense to cater to that. It's just that selling AI services isn't that kind of market; it's more like renting a tool.

Regarding Twitter, I'm afraid the ad revenue has tanked since Musk took over, and a number of the accounts that drew eyeballs have left. I disliked its censorship policy, but the problem is that lots of people like it, so they have to balance that.

YouTube... absolutely, they are catering to advertiser preferences. They are actually quite accommodating on that compared to network TV; you just don't see it because TV doesn't tell you which shows they didn't run.

Ultimately, the shape of the market really matters, and big ad-driven social media sites have network effects that make them reluctant to alienate the left, because leftists are overrepresented among the people who make ad decisions and are the most valuable eyeballs.

I do concede the point in your last sentence and its relative importance. But it's also not good business strategy to piss off 40%+ of your potential market.

And you offer no evidence - or reasoning - to suggest why "the AI market" - certainly the part of the market that Microsoft and Google are going after - is fundamentally different from Google's core search market today. Nor examples of how companies have tacked back to the political center after moving so hard to the left. Nor why large tech companies that have so much of their business in consumer markets that you assert "require" them to skew further and further to the left will be able to do a 180 with their AI products/services and not harm their core businesses.

Nor why Google's Pichai said that "we screwed up" with Gemini if in fact hard leftism is the correct strategy to keep the customers Google will continue selling search to for the indefinite future.

Claiming that it's "not an AI problem but a societal one" is either disingenuous or mere wordplay sophistry.

Claiming that it was advertisers who insisted that YouTube censor COVID information that disagreed with the regime is... well, let's just say an "interesting" assertion.

Brian's and Nate Silver's and other similar explanations (that the companies and the deployment of the tech have gone so far left primarily because of other-than-profit-optimizing actions of engineers and leaders/managers) are far more plausible than what you've offered here.

I don't think it's different from search. Do you think search is ideologically biased (beyond things like child porn that most people agree about)? It's different from social networks.

And maybe I'm wrong and these companies will be dumb and abandon 50% of the market, but unlike social networks, which are natural monopolies, that's enough to launch a competitor who will force them to change or beat them.

If Google Search didn't let you find gun content, or prevented you from searching for anti-immigration arguments, Bing would have destroyed them by now.

The government itself will regulate what AIs we get to use within a decade should the AI's usefulness prove out, and the ideologues will be in control. It will be done to protect us from misinformation and to protect democracy.

Yes!!! They're already doing it. With one hand the left tells us they need to regulate AI to prevent false information; with the other hand they create false information.

It's a self-fulfilling prophecy of controllism.

I think biased AI only plays out in conservatives' favor. Even without all the explicit manipulations, AI is already heavily biased, simply from parroting left-leaning press and online content. Since that would still move entirely inside the ignorant bubble, it would be very hard to educate average people about it. Now that it has all been taken to such ridiculous heights, though, like Goody-2 levels of boilerplate answers and race-switching of historic figures, it really hits you in the face how the leftist agenda is all about deception and cultivating ignorance. The overarching message now is that you can't trust these tools, because it is in their very nature to be deceptive, and then they are engineered to deceive you even more. One has to wonder if they are shooting themselves in the foot on purpose, just to teach humanity this lesson.

I was actually with you until the last sentence. You think Google deliberately went against its own financial *and* ideological interests with what they did?

Occam's Razor and their history should tell you otherwise. Even the idea that one or two people within Google made this happen deliberately for the purpose you suggest is far-fetched, since it would require massive coordination within the Bard org to make it happen.

What percentage was deliberate hard-core ideologizing vs. soft-core ideologizing vs. outright incompetence, I doubt we will ever know.

Why should we want AI pluralism? That's not a standard we use for anything else in our lives. If I were a Communist, should I expect the media I consume to generally have a Communist point of view? Definitely not. So why should I expect my AI to have a Communist perspective? And why should we think that's good for society? I would assert that it's not.

To push the point, should someone undergoing a psychotic break expect their AI to support their delusions? I'm sure you can come up with lots of other examples of attitudes we don't as a society want to reinforce.

I don't think the guardrails on current LLM offerings are ideal either, but it doesn't make sense to me to wave off the entire issue.

But that's exactly the standard we expect for pretty much everything we consider a tool, from cars to guns to text editors. You don't expect to write a communist article in your text editor only for a capitalist one to appear when you try to re-read it.
