How would you characterize the case of poker? As a reasonably experienced player, my worst game is against a complete novice, because they barely even know the rules and therefore act essentially randomly or at least unpredictably. Is this part of a class of edge cases where superior knowledge and intelligence don't help? Or does your theory predict that a sufficiently large capability advantage would handle even this case?
My sense is that there are actually a lot of places like this: places where the *environment* is such that it interferes with the usual dynamic of intelligence conferring an advantage.
I think that what's going on with poker (or with novices sometimes baffling experienced chess players, etc.) is that there's sufficient randomness that the-ability-to-spot-patterns and the-ability-to-model-and-predict-your-opponent are *already* shaky, and if you bring in a noisy chaotic non-agent, then it all falls apart.
Like, I suspect that even a literal superintelligence would struggle in poker against a chaotic opponent. I bet they'd still win overall, and with a wider margin than you, but that the margin would still be fairly slim in an absolute sense.
It feels sort of analogous to being down on the ground in a chaotic battlefield. Yes, intelligence still allows you to be better at perceiving and planning such that if you could run the same scenario 10000 times, you'd come out on top *more often* than a less intelligent agent. But also you only *actually* get to run the scenario once, and with bullets flying around and bombs going off everywhere, sometimes you just get pasted and all the extra intelligence was for naught.
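To put a rough number on the "run it 10000 times vs. run it once" point, here's a toy Monte Carlo sketch (my own illustration, not anything from the original post; the 5% per-round edge and 50 rounds per scenario are made-up parameters): an agent with a modest edge comes out ahead in most repeated scenarios, yet still loses a sizable fraction of individual runs outright.

```python
import random

def run_scenario(edge=0.05, rounds=50):
    """Net score of the slightly-smarter agent over one noisy scenario."""
    score = 0
    for _ in range(rounds):
        # Each round is mostly luck; the edge only nudges the win probability.
        score += 1 if random.random() < 0.5 + edge else -1
    return score

def simulate(trials=10_000):
    wins = sum(run_scenario() > 0 for _ in range(trials))
    print(f"Smarter agent ends up ahead in {wins}/{trials} scenarios "
          f"(~{wins / trials:.0%}); in the rest, the edge didn't save it.")

if __name__ == "__main__":
    simulate()
```

With these made-up numbers the smarter agent comes out ahead in roughly three quarters of the runs: better "more often," exactly as claimed, but nowhere near reliably on any single run.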
This seems like a good insight, but it seems like the word "antagonist" is pulling a lot of weight in your story. Like, I suspect that most of why superintelligence is dangerous routes through pathways other than antagonism. For instance, intelligence yields more ability to acquire power/etc per unit time, independent of any interaction/deception/manipulation. Which doesn't invalidate your thesis, but seems worth keeping in mind when there's an intelligence difference between unaligned entities.
I mean, yes, absolutely, as far as SUPERintelligence is concerned.
I am not sure if you intend to refer to e.g. the difference in intelligence between humans and cows as "super," as well (or maybe between humans and smallpox; that was more of a directly antagonistic relationship). Like, whether "super" is an absolute threshold or a relative property.
> intelligence yields more ability to acquire power/etc per unit time, independent of any interaction/deception/manipulation
Yeah, I think this was weakly alluded to in the post; this is one of the things that the more intelligent antagonist *does* with its greater time horizons and expanded physical boundaries. Humans disengage from their conflict with the lion, go off and build clubs and slings, and then come back and take the lion out in a process that was almost completely invisible to the lion. I think that a sequel essay, or the actual formal paper version, would definitely need to flesh that out.
I think I meant "super" as a relative property about general intelligence. Like, I think humans should be considered superintelligent compared to basically all other life-forms (but ants are not superintelligent compared to trees, because even though ants have more intelligence, it's basically not general intelligence).
I think we basically agree, so I don't feel much need to belabor the point, but most of the relationship between humans and other animals (domesticated animals are a bit of a special case, I think) isn't really "antagonistic" in the way you describe, but is more incidental. I don't really think about polar bears on a day-to-day basis, and they certainly don't think about me. Are we opponents? Eh. But thanks to optimizing the world for my interests (by burning a bunch of hydrocarbons), I'm making life harder for them.

I think it makes sense to describe some agents as opponents of each other, and in these kinds of relationships there are specific ways in which intelligence comes to bear, but I also think agents that aren't really well described as being "opponents" still end up incidentally threatened by one another, thanks to the interconnectedness of reality and the fact that optimization tends to wipe out everything except that which is being aimed for.
(Side note for bystanders since we're in a public forum. I, in fact, have offset my lifetime personal carbon emissions through donations to effective charities, and have the well-being of polar bears as part of what I am fighting for in my work. I don't endorse an uncaring attitude towards nature.)