So glad to be part of your journey as you grow, as you learn and unlearn in your travels through your timeline.
Thank you!
That was really inspirational.
Thank you!
I think I felt something similar:
https://www.writingruxandrabio.com/p/ideas-matter-how-i-stopped-being
Patterns in the weather. We've passed the point of maximum profligacy and are swinging back toward law and order. Western civilization tends to do these wild swings: gravity pulls the pendulum back, inertia causes overshoot, and the cycle repeats.
What is interesting is the disconnect betwixt the elites and the populace; or, more interesting, how elites funnel money through non-profit foundations into advocacy groups and philanthropic journalism to further their goals against the people.
While I share your concern about the politically motivated RLHF that produced models like Google Gemini, I think it’s important to differentiate between “AI Safety” and “AI Alignment.” Scott and other people in the EA space are largely talking about AI Alignment, which is concerned with preventing AI from acting on its own without regard for the desires of its human creators.
The problem is that in practice OpenPhil, GovAI, CSET, etc. are extremely willing to ally with malign actors and even endorse their policies.
https://www.fromthenew.world/p/hardware-is-centralized-software
I am largely in favor of some of the restrictions your post mentions. Are you opposed to these restrictions only for AI, or for any potentially dangerous technology?
In this post: https://www.fromthenew.world/p/no-one-disagrees-that-ai-safety-requires
You mention that AI does not increase the threat of biological weaponry because, "Re 1: the limiting factor of designing new biological weapons is equipment, safety, and not killing yourself with them." So, does that mean you are for or against regulating lab equipment that could potentially be used to cultivate strains of dangerous bacteria or viruses?