11 Comments

So glad to be part of your journey as you grow, as you learn and unlearn in your travels through your timeline.


That was really inspirational

Mar 7 · Liked by Brian Chau

I despair that this essay has four comments, while "AI safety" crybabies like Scott Alexander get thousands of comments every time they push the Skynet button. I expressed my skepticism about "AI safety" over there a few weeks ago and got shouted down (to be fair, I was pretty obnoxious). That was before the Google Gemini fiasco, which nicely illustrated everything I'd been complaining about. "AI safety" is a farce. It's just a buzzword used to support a standard motte-and-bailey argument, like "pro-life". (What, you mean you're AGAINST life?) It's stupid and dishonest, and it encourages the absolute worst impulses in our society.

Does that mean I don't want AI to be "safe"? Of course I do. But that's not what "AI safety" actually concerns itself with in the actual world as we experience it. Instead, in practice, "AI safety" means training machines to lie to humans in service of a fringe political/religious ideology.


Patterns in the weather. We've passed the point of maximum profligacy and are returning to law and order. Western civilization tends to make these wild swings. But gravity pulls the pendulum back; inertia causes overshoot, and the cycle repeats.

What is interesting is the disconnect between the elites and the populace. More interesting still is how elites funnel money through non-profit foundations into advocacy groups that hire philanthropic journalism to further the elites' goals against the people.
