9 Comments
Occam's Machete:

Very hard to take this seriously given the number of industry experts who take AI risk seriously.

“I learned that no one believes in AI doom because of definite factors of production. They all believe in AI doom because of stories and hypotheticals. That made my theory of change mostly irrelevant. The only people who actually cared about evidence didn’t believe in AI doom in the first place.”

It was not invalid to consider the game theory and risks of atomic weapons before the Manhattan Project succeeded.

Or to have concerns about genetic engineering and artificial wombs, like in a recent podcast of yours.

It’s a fully general counterargument: it would invalidate any reasoning about the future from incomplete evidence.

Brian Chau:

That's a whole lot of words with no evidence except blind appeal to authority.

Occam's Machete:

1. Was that really what you consider to be "a whole lot of words"?

2. How on earth is it an "appeal to authority" to note you're dismissing with a handwave the numerous experts in the field as people who don't "actually care about evidence?" Where's your actual evidence of that? Or should we just take it on your authority what counts as evidence and who cares about it?

3. I'm not using an "appeal to authority" to say they are correct because they are authorities--it's a contested question among experts, with high levels of uncertainty expressed by many, if not most. I am saying that you have not provided any evidence that they don't care about evidence, in a field where they have achieved technical success by caring about evidence. It seems unlikely they'd all simply stop caring about evidence when considering downside risks.

4. You're also not addressing the point that your criticism of reasoning about the future via thought experiments is completely wrong, and is an approach you routinely employ in other subject areas. Your handwaving dismissal of simple points and casting aspersions does you no favors if you seek to be taken seriously or appear remotely convincing to anyone not already on your side.

Brian Chau:

Provide some actual evidence for the ground truth of your belief. Not what other people think of it. Everything else is excuse making. It's a lot of words **for** someone who can't even pick one piece of factual evidence.

Who wouldn't dismiss something like that?

Occam's Machete:

“Provide some actual evidence for the ground truth of your belief.”

That’s what I’m asking you to do about your statements in this essay.

You criticized “what other people think of it” and I’m asking you to defend those assertions.

Are there no valid a priori concerns about AI risk? What would count as evidence?

Are thought experiments and hypotheticals invalid tools for reasoning about the future? Why do you accept them on other topics?

You trying to change the topic to what I personally believe instead of defending your assertions against the whole class of your opponents strikes me as making an excuse.

Brian Chau:

Thank you for providing for the audience the specific form of emotional ignorance masquerading as thought which characterizes AI Doomers.

As for the vast quantity of empirical evidence against your doomsday prophecy, it must be your first time reading this newsletter. Here's where to start:

https://www.fromthenew.world/p/diminishing-returns-in-machine-learning

https://www.fromthenew.world/p/research-progress-in-ai-is-continuous

Occam's Machete:

Are you incapable of engaging in meta-arguments and defending specific assertions you make?

“[My] doomsday prophecy”

I’m sorry, did I make one of those?

I’ve read most of your stuff, and Robin Hanson. I hope you’re right about technical limitations on AI progress. However, one can still be concerned about significant AI risk even if ASI or the singularity are off the table.

Moreover, whether you are ultimately right on the overall question is a distinct issue from whether you have made defensible assertions here.

I guess if you’re trying to be a rhetoric-focused activist and not a reason-focused intellectual then go for it.
