Discussion about this post


First: Really love the podcast. Keep doing what you’re doing!

Question about ChatGPT’s impact on the world:

I think you were the first to point out that ChatGPT doesn’t necessarily tell you the truth; it tells you what masses of people on the internet *think* is the truth. Sometimes this is correct, and the information can be really useful. Other times those answers can be off in subtle, or even not-so-subtle, ways. This is clearly a problem if ChatGPT-style tools do become the main way that people get answers to their questions.

Do you know if this is something that is measured or can even be measured? Like can we measure how accurate ChatGPT is in a scalable / systematic way?

Is this a known problem in the AI/ML world? What is being done to stop it from happening?

Thanks!

Eric Mauro:

I appreciated your comparison of the way AI image generation works by adding noise and the way the programmers worked to have it generate language. I feel like I have done this with Photoshop occasionally, where I have a flat color and I need to bring it to some kind of recognizable texture, but simply running filters on it won’t produce a believable result. Either they just shift colors around or they plaster some prefab picture/pattern onto my color field. So I have to add noise and then push things around a bit, and then the texturizing filters have something to grab onto.

The process isn’t drawing, exactly; it has the feel of “this is what the machine is good at,” so they made filters to do it. I just have to use it to approach the image I want from a sort of angle.

When Richard Hanania talked to his philosophers about AI, they mentioned image-generated ocean waves that are not built the way physical waves are. They used a word like “agonic” and also “non-agonic.” What do those mean? Will language generation have the same machine-centered feel when we are working with it?
