9 Comments
Jazzme:

So glad to be part of your journey as you grow, as you learn and unlearn in your travels through your timeline.

Brian Chau:

Thank you!

[insert here] delenda est:

That was really inspirational.

Brian Chau:

Thank you!

Ruxandra Teslo:

I think I felt something similar:

https://www.writingruxandrabio.com/p/ideas-matter-how-i-stopped-being

Michael Kelly:

Patterns in the weather. We've passed the point of maximum profligacy and are returning to law and order. Western civilization tends to make these wild swings, but gravity pulls the pendulum back, inertia causes overshoot, and the cycle repeats.

What is interesting is the disconnect between the elites and the populace. More interesting still is how elites funnel money through non-profit foundations into advocacy groups, hiring philanthropic journalism to further the goals of the elites against the people.

Comment deleted (Mar 7, 2024)
Jacob Woessner:

While I share your concern about the politically motivated RLHF that led to models like Google Gemini, I think it's important to differentiate between "AI Safety" and "AI Alignment." Scott and other people in the EA space are largely talking about AI Alignment, which is concerned with preventing AI from acting on its own without regard for the desires of its human creators.

Brian Chau:

The problem is that in practice OpenPhil, GovAI, CSET, etc. are extremely willing to ally with malign actors and even endorse their policies.

https://www.fromthenew.world/p/hardware-is-centralized-software

Jacob Woessner:

I am largely in favor of some of the restrictions your post mentions. Are you opposed to these restrictions only for AI, or for any potentially dangerous technology?

In this post: https://www.fromthenew.world/p/no-one-disagrees-that-ai-safety-requires

You mention that AI does not increase the threat of biological weaponry because "the limiting factor of designing new biological weapons is equipment, safety, and not killing yourself with them." So, does that mean you are for or against regulating lab equipment that could potentially be used to cultivate dangerous strains of bacteria or viruses?
