Pithy line
The normal distribution picture doesn’t really work because the moron wouldn’t know what “transcends” means. It should be “thinking is more than writing”.
Fair enough
Disagree. With respect to universal truths, I think they do exist, but understanding them is like viewing a 3D object from a set of different viewpoints. Depending on where you stand, you see a different part of that object, and parts will be hidden from you. Then, to extend the analogy, scale the dimensions from 3 to BIG_NUM, and it's apparent that no two people will have the exact same set of perspectives on the object.
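For what it's worth, the geometry here can be made concrete. A toy sketch, under my own framing and not anything claimed above: model the "object" as a point cloud in R^N and each reader's "perspective" as a random 2-D linear projection of it. All the names here are hypothetical illustration.

    # Toy model of the analogy: the "object" is a point cloud in R^N,
    # and a reader's "perspective" is a random 2-D projection of it.
    import numpy as np

    N = 1000                                  # the BIG_NUM above
    rng = np.random.default_rng(0)
    obj = rng.normal(size=(500, N))           # the object: 500 points in R^N

    def perspective(seed):
        # One viewpoint: project the object onto a random orthonormal 2-D plane.
        plane, _ = np.linalg.qr(np.random.default_rng(seed).normal(size=(N, 2)))
        return obj @ plane                    # the 2-D "shadow" this reader sees

    a, b = perspective(1), perspective(2)
    # Two independently chosen views of the same object share almost nothing:
    print(np.corrcoef(a.ravel(), b.ravel())[0, 1])   # ~0.0

In high dimensions, two independently drawn viewpoints are essentially uncorrelated, which is the "no two people" claim in concrete form.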
An author's writing style is the set of perspectives you use to understand their subject (the object from the analogy above), so to me, the understanding imparted by reading is heavily contingent on these stylistic differences, and they lead to very different understandings even of the same thing.
It's like "chatGPT rewrite storm of steel in the words of a zoomer egirl" will nowhere convey the original meaning of the text, even if its subject matter is preserved.
"It may be an imperfect representation, but surely that’s better than a reader never having read it at all."
Not necessarily. An imperfect representation is necessarily, in some sense, misleading. I get misunderstood too often as it is; I would not be thrilled to find my thoughts repackaged elsewhere in an even more easily misunderstood form. Simplification carries similar risks. There's little wrong with offering a suitable summary to those actively looking for such a brief, who understand that that is what they are getting. But I would be offended to have a long, deep, nuanced argument of mine reduced to something that loses much of that depth, breadth, and nuance, and then have that paltry abstraction presented to an audience as if the oversimplification wholly represented my thoughts on the subject, as if a student who had read only the CliffsNotes of a great work of literature mistakenly believed he had read the full text of the work itself.
Maybe these issues will vanish as AI continues to improve. Maybe not. There's a meaningful distinction between translation and interpretation, and I don't think authors are unreasonable in being concerned about having their work mistranslated, or, especially, about having it misinterpreted by a mechanism that was only ever intended to provide translation.
I also believe that content creators in general have a very valid argument that overreliance on AI for content creation is something of a creative dead end. It's not simply style that's at stake; thinking requires practice and training. Outsourcing thinking to AI seems less likely to produce fully capable humans augmented by AI to exceed their natural limits, and more likely to trend toward humans with disuse-atrophied thinking capability, leaning on AI as a crutch to maintain the pretense that they retain average thinking capacity. A tool may supplement the natural, but it ought not replace it wholly.