Discussion about this post

Olli Järviniemi

I fully agree that artificial intelligence poses an existential risk to humanity, and for this reason, I've pivoted to work on the problem full-time.

On the topic of short, simple, standalone documents that lay out why someone might conclude that: My introduction can be found at https://ollij.fi/AI/

Duncan's introduction focuses on the high-level arguments; in contrast, my text gives more detailed technical arguments - including plenty of discussion of present-day AI systems - while aiming to be understandable to lay people. So if you read Duncan's intro and felt you still wanted to hear "the whole story from start to finish", step by step and without glossing over the details, consider checking it out!

Max Harms

I like this! It feels like an overall good primer. But I also want to nitpick the section titled "Smarter opponents simply do not lose to dumber opponents—not when the gap between them is big enough." In particular, notice how much weight the "enough" is lifting there. Contrast it with a claim like "You can get rich if you make a rock that's smooth enough." That claim is... true? But its implication ("Making rocks smooth is a good way to get rich.") is very different from what makes it literally true (at some level of smoothness you've surpassed modern materials science and therefore have new technology and/or new physics).

Like, more on the object level, I've heard a wishful thought that goes: "Information processing is not the sort of thing that scales infinitely. The human brain is, by volume at least, almost entirely long-distance wires. Two humans thinking together are almost always less than 200% as efficient as one human. We see diminishing returns all over the place, and there's probably something like a general exponential cost to increasing intellect, even in the best systems. What if AI runs out of steam at an intelligence stratum where it's not smart enough to beat us in the way you describe?"

These people haven't grokked how much room there is above us. But they're worth engaging with.

I don't see anywhere in your post where you argue that superintelligence will be super *enough* to make the "simply do not lose" section apply.

Relatedly, I think there's a more outside-view perspective that says "Sure, maybe the AI won't directly beat us. Maybe it'll trade with us and cooperate for the same reason that most psychopaths cooperate with social norms. But I don't see any reason why we'd expect humans to stay in control of the future in the long term, there, and in the absence of being able to compete we may go extinct." (See: https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like) I think it might be good to mix more of this kind of argument in, though you do represent it in some places.
