Subscriber Threads
In monthly threads, subscribers can ask questions. I’ll leave short answers in the replies and much longer answers in a subscriber-only longpost at the end of the month. Hopefully my short answers will encourage more people to subscribe.
Small Changes to the Newsletter Structure
The AI Pluralism newsletter now has its own heading. Prepare to see many more articles there.
https://cactus.substack.com/s/ai-pluralism-newsletter
More Subscriber Bonuses
In the last subscriber Q&A, no one asked any questions. There was one guest recommendation, whom I've added to my season 5 booking schedule. That being said, I do want to provide more subscriber benefits, especially for the people who have already signed up. I'd like to hear what you (the subscribers) think would be the most helpful benefits to add.
One option is a weekly podcast breakdown: a short written summary of some of my major takeaways from that week's audio podcast. (The podcast will continue to be available to everyone; the breakdown will be for subscribers.)
Please leave suggestions and questions below!
First: Really love the podcast. Keep doing what you're doing!
Question about ChatGPT's impact on the world:
I think you were first to point out that ChatGPT doesn't necessarily tell you the truth; it tells you what masses of people on the internet *think* is the truth. Sometimes this is correct, and the information can be really useful. Other times those answers can be off in subtle or even not-so-subtle ways. This is clearly a problem if ChatGPT-style tools do become the main way that people get answers to their questions.
Do you know if this is something that is measured or can even be measured? Like can we measure how accurate ChatGPT is in a scalable / systematic way?
Is this a known problem in the AI/ML world? What is to be done to stop this from happening?
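(For what it's worth, the kind of systematic measurement I'm imagining is something like scoring the model's answers against a reference set of questions with known answers. A minimal sketch below, assuming a hypothetical `ask_model` function standing in for whatever call returns the model's answer; the tiny hardcoded question set and the fake model are illustrations only.)

```python
# Rough sketch: measure answer accuracy against a small reference QA set.
# Real benchmarks work on the same principle with thousands of items
# and more careful answer matching.

def normalize(text):
    # Lowercase and drop punctuation so "Paris." matches "paris".
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace()).strip()

def accuracy(qa_pairs, ask_model):
    """Fraction of questions where the model's answer contains the reference answer."""
    hits = 0
    for question, reference in qa_pairs:
        answer = normalize(ask_model(question))
        if normalize(reference) in answer:
            hits += 1
    return hits / len(qa_pairs)

# Tiny illustrative reference set (hypothetical, not a real benchmark).
qa_pairs = [
    ("What is the capital of France?", "Paris"),
    ("How many legs does a spider have?", "8"),
]

# A fake "model" for demonstration: gets one right, one wrong.
def fake_model(question):
    return {"What is the capital of France?": "Paris."}.get(question, "six")

print(accuracy(qa_pairs, fake_model))  # 0.5
```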
Thanks!
I appreciated your comparison of the way AI image generation works by adding noise and the way the programmers worked to have it generate language. I feel like I have done this with Photoshop occasionally, where I have a flat color and I need to bring it to some kind of recognizable texture, but simply running filters on it won't produce a believable result. Either they just shift colors around or they plaster some prefab picture/pattern onto my color field. So I have to add noise and then push things around a bit, and then the texturizing filters have something to grab onto.
The process isn't drawing, exactly; it has the feel of "this is what the machine is good at," so they made filters to do it. I just have to use it to approach making the image I want from a sort of angle.
When Richard Hanania talked to his philosophers about AI, they mentioned image-generated ocean waves that are not built the way physical waves are. They used a word like "agonic" and also "non-agonic." What do those mean? Will language generation have the same machine-centered feel when we are working with it?