Reflections on one (1) abuser
I.
There are two main ways that I … digest? Metabolize? … let’s say “process and integrate,” when I have an Unpleasant Encounter in the social space.
The lesser of the two strategies is something like “just say out loud what happened.”
e.g. “Caleb Ditchfield has relentlessly harassed me and several other people, so I began documenting it as a warning to others.” Or “Bruno Parga publicly stated that he thinks Palestinian children are morally culpable and deserve whatever happens to them, because in expectation (according to him) they’re going to grow up radicalized as terrorists anyway.” Or “David Pearce said that he would be, exact quote, overjoyed, if literally all life were to end due to an artificial intelligence takeover, and did not rescind or replace that statement when asked about it multiple times.”
Usually these statements go hand-in-hand with me setting some kind of social boundary (blocking the person, for instance), and sometimes with a clarification of what exactly it was that bothered me (if it wasn’t already clear).
More often, though, what I do is sort of back out of the experience. Zoom out. Try to extrapolate, and generalize. I’ll try to see the unpleasant thing that just happened as a single instance of a class or category of bad thing, and use the leftover energy it gave me to help me work through my models and policies.
I will try to figure out why I didn’t like what happened. What about it, precisely, I think is bad. What it’s made of, what its parts are, how they interconnect. I ask myself “okay, how would I like to think and feel and react to this sort of thing, from now on? What would be a good way to orient to things-like-this?”
Et cetera. I try not to get stuck in the nitty-gritty specific detail of that single event, that individual person. I try not to let myself get yanked around by the tip of my nose, reacting and reacting and reacting. I try not to have a tunnel-visioned, knee-jerk, reflexive response, but instead to coax my brain to weigh up all of the relevant pieces, and put everything into perspective.
(I don’t always succeed at all of these things, but they’re what I’m aiming for.)
These posts are rarely “social” in the sense that they’re almost never about changing the social landscape around me, the way the first kind of post is. When I’m doing this sort of … philosophical vaguebooking? … I’m specifically not trying to call a person out or put them on the spot.
In fact, half the time I’m not even making a claim about what actually happened! When I say "if someone Xs, I will respond with Y," it is usually because I think somebody just did something at least a little bit Xish at me—
(Hopefully obviously? I mean, otherwise I wouldn't be writing the post. I wouldn't need to go through a processing process that involved thinking about my response to X.)
—but like. I'm writing the "if X, I'll Y" statement as a sort of detached contextless heuristic because that's what I'm writing down in my book of how to do life. It's not that I'm supremely confident that "X" is the correct interpretation of the thing that just happened, in real life. The thing that just happened inspired me to think about X, in general terms.
It could be that X is a very tight and true adaptation of the other person’s behavior, à la the movie version of Fight Club, or it could be that X bears almost no resemblance whatsoever to the actual events that took place, à la the movie version of World War Z.
My processing post is not me trying to stealth signal "pssst those of you who know what this post is REALLY about get that what I'm saying is that Steve just X'd at me." That’s monkey nonsense, and I'm exceptionally uninterested in it. If I want people to think that Steve did X and should be judged for it, I'm quite comfortable writing a post that says "Yo, everybody, Steve just did X and I think you should judge him for it."
The processing posts, in contrast, are me thinking out loud as I create and lay down my “if X, I’ll Y” policy.
If you, dear reader, ever suspect that you were upstream of me needing to do this processing, and you're like "hey, wtf, I didn't even X!"
...well, okay? Fine? I'm not presently evaluating "X, or not X?" That's a whole separate conversation. If you want to sit down and hash it out with me and be like "bro I definitely did not do X, you have hella misunderstood this whole situation" I will (probably) hear you out, and guess what?
No Y!
Because Y was my reasoned response to X. If you are, in fact, demonstrably not Xing, then that whole structure I just laid out doesn’t apply to you. Hurrah!
II.
A couple of years ago, I noticed that I was making something like the eighth or ninth processing post inspired by the same person. Like, I was making my third processing post in a single week and realized that all three of them were me trying to work through my experiences of Person X, and upon noticing that, I realized that I had also made at least half a dozen posts inspired by my interactions with them over the years prior.
I’m not entirely sure why, but I decided to start tagging those posts, in a quiet, unobtrusive way, so that I could refer back to them all together. It felt like it mattered, that they all sprang from the same source. It felt like each of them was relevant context for all of the others, and I wanted to be able to point at the gestalt if necessary.
(It was shortly after this time that I concluded something like “ah, yeah, I think that the word ‘abuser’ is actually an appropriate label for the way this person has consistently behaved toward me.”)
Previously, all of these words lived on Facebook. I’ve decided to repost them here, in a slightly more permanent and accessible way. The remainder of this piece is basically just a collection of mini-essays—some quite short, some fairly long.
I think the value of the collection is two-fold:
1. Most of the individual essays themselves sparked a lot of conversation, and a lot of people found my thinking-out-loud to be elucidating, and often applicable to their own struggles and confusions. I think there’s gold scattered throughout all of these musings, and I think that people currently dealing with certain kinds of low-key abuse or quasi-abuse can get a lot out of seeing me work through my own aftermath(s). I’ve offered a lot of this wisdom to other people since scraping it together myself, and it’s made a difference more than once.
2. I think there’s a thing that happens, where we learn that so-and-so raped someone, or so-and-so committed fraud, or whatever, and it’s sort of just this … floating node? It doesn’t really connect to or make sense with anything else; the “story” of the bad thing is a tumbleweed rather than a tree with roots and branches. It’s hard to make it fit with the rest of what you know about the person. I think there’s something really interesting about getting to look at the relationship between me and Person X from fifteen or twenty different places, at fifteen or twenty different moments in time, and seeing how all of the threads weave together.
(re: 2: it’s unfortunate that all of the below is filtered through me and my perspectives and my biases; it would be cooler if you could hear both sides of the story and weigh things more dispassionately. But I think the benefits of keeping the other person anonymous outweigh the loss of clarity and impartiality. Remember that these are stories. Remember that they are my stories. Remember that, even where I am in fact making claims about the actual historical record below, the more important thing is something like “Duncan believes this is what happened, and given that belief, here’s what Duncan thinks is good and bad and how one should respond.” You can evaluate “if X, then Y” even if you are skeptical that X is what really happened.)
It may be that you’re in the mood to sit down for 45 minutes and read all of these little mini-essays at once. If so, go for it!
But also keep in mind that you may want to just keep this tab open, and dip in and out of it from time to time. i.e. don’t let the fact that it’s quite long stop you from ever reading it at all—feel free to break it up into chunks or whatever.
Written November 16, 2022
It is not quite the abusiversary of this incident, but I noticed this story was taking up like 5% of my shower thoughts so I decided to go ahead and get it out of my head.
One time, I was depressed.
I do pretty well when depressed! Like, I often am quite productive, I do a decent job of exercising daily and calling old friends and making sure to set aside time to do activities that keep the smoldering ember of joy alive.
But I'm also pretty visibly depressed, when I am.
I had started a new job. I was less than four months into it. I was very clearly the juniormost person on the team, and furthermore was starting on explicitly shaky ground (had been considered and rejected for one position, had my start date delayed).
I was doing well, and was also visibly depressed.
At some point in the past, some of my colleagues had gone off and done [retreat/seminar] and found it useful. They recommended it to me. They thought it would help.
I was interested. I asked a few questions.
It turned out, one important linchpin of [retreat/seminar] was a multi-hour, explicitly psychedelic experience, which one of my colleagues repeatedly and unnervingly described as being “exactly like dying.”
I did not want to die. I was (in my depression) engaged in an active ongoing battle against death. I was doing all I could not to die.
I expressed this, as my primary reason for being wary about the recommendation that I go do [retreat/seminar].
My colleagues scoffed, a little. In language I did not have, at the time: they made no attempt whatsoever to pass my ITT (Ideological Turing Test).
Most notably, my superior, and one of the most powerful people in the org, continued to press me about it, off and on for weeks (without ever trying to understand or engage with my hesitations).
Eventually, they cornered me in the hallway outside of the office, alone, after hours. The conversation went on for a while. There was quite a bit of pressure.
“I don't want to die, though,” I said. “Like, I really don't.”
They were dismissive/impatient. Me being depressed (despite the fact that I was doing all my work, quite well, and also not, like, moping around) was problematic in some way that could not be explained in simple words, but it was made clear to me that I was in trouble, on thin ice, expected to Be Different Than I Was.
Eventually, a voice was raised (not mine). I won't pretend that I was screamed-and-yelled at, but the words were shouted, with deep impatience.
“Then will you GO to [retreat/seminar]? WILL YOU!?”
And I
depressed
alone
the juniormost member of my team
worried I was in danger of losing my job
facing one of the seniormost members of my team in a hallway in an empty office late at night all by myself after hours of being leaned on, after weeks of poking and prodding
gave up, and said “Fine, I'll go,” in a spirit of “I guess if nobody wants me around when I'm alive then I might as well die this way.”
I actually thought that. It was an explicitly suicidal decision.
And you know what? In the end, [retreat/seminar] actually went well and was in fact helpful and my experience of it was not at all like dying. There is a meaningful sense in which my superior was right, and I was wrong. That matters.
But still. Bad bets pay off sometimes; the fact that a bet pays off does not make it not-a-bad-bet. It took me a while to really grok how not-okay all of that was, as a method of coercing a depressed and vulnerable subordinate to go expose themselves to a psychedelic experience. It's the sort of thing no one should EVER do.
Written September 16, 2021
One of the most valuable pieces of wisdom that my mother ever managed to successfully convey to me was “if people tell you that they are bad, believe them.”
This advice is often hard to put into practice, though. For every rare person who will just outright admit that they have some damning character flaw, there are a dozen more whose admissions come in the form of e.g. amusing and charming stories about how they repeatedly defected on others, but then apologized profusely in one case, and bravely admitted fault in another, and just kept talking until the problem went away in a third, and basically got away with it every time.
And in the process of laughing at these relatable and self-deprecating stories, it's often hard to blur your vision enough to notice what this person is actually telling you about themselves, and about what they will likely do to you.
Written October 5, 2022
I saw a take today that included, approximately:
“As part of resolving the whole current Russia situation, we should legitimize the territory they stole in 2014, i.e. just acknowledge that Crimea belongs to Russia.”
Like, Russia caused a problem in 2014, and now they're causing a bigger problem, but we're all used to the problematic state from eight years ago so that falls off the radar and can be water under the bridge.
And it just strikes me that this is a pretty classic abuse tactic. Like, it rhymes pretty closely with a time that an organization outright lied to maneuver me into a certain position and also cause me to not understand what had happened for several months...
...and then when I finally figured out what had happened and started to raise objections, certain members of that org TOTALLY strategically leveraged a sense of, like, “ugh, why are you bringing up OLD stuff, let's just stick with how it's been for the past few months, we're all used to the way it is, now,” ignoring the fact that that very state of affairs had been achieved illegitimately.
Doing something bad and then just ... letting some time pass, is a devastatingly effective tactic, when you're dealing with humans. Humans en masse have a really hard time not just ... adjusting to the new equilibrium, and being like, eh, justice sounds like too much of a headache at this point.
(cf. indigenous people living in America today)
Clever abusers, whether individuals or nation-states, know all too well how to take advantage of this dynamic.
Written January 5, 2023
There are some people who
when they see that Doing The Right Thing and Honoring Their Agreements and Following Reasonable Moral Precepts and Being Generically Prosocial (Or At Least Not Actively Tearing At The Social Fabric)
means that they won't get what they want, in some given situation,
go "alas, that sucks, guess I can't have what I want."
And then there are people who do the other thing.
Written April 2, 2022
The more I stare at the world, the more I think the biggest point of disagreement between me and most people is whether it's ever a good idea to say a false thing that you think will cause someone to reach the correct conclusion in the moment.
There are a LOT of people on Team Tell Them Something False If It Results In Them Getting The Right Answer. Which, from my perspective, is also Team Condescension And Paternalism, and Team Rob People Of Their Agency, and Team Perpetuating The Problems Under The Surface, and Team Rather Than Have A Slightly Harder Conversation Now I Will Just Set Them Up For More Trouble Later When These False Beliefs I Am Handing Them Inevitably Clash With Reality.
(There’s some other, more complicated thing that happens when language and culture mismatches and double illusions of transparency mean that you literally can’t say the true thing, like cannot actually convey the true thing to them in words, but that’s for another post.)
Written August 30, 2022
Emotionally Tall™
One of the things that has been frustrating to me is people's weirdly callous inability to consider the possibility that someone is emotionally/psychologically abusive.
(Like, in a way that is persistent/consistent/going to keep happening, and sufficient to rise to the level of moral culpability.)
By “weirdly callous inability to consider,” what I mean is that there will be, like, FOUR credible and widely spaced victims all being like “yes, [person] repeatedly emotionally and psychologically abused me,” and other people will be like “ehhhh, I don't really think that happened” or “ehhhh, I don't know if anything really needs to be done about that.”
I think that a major piece of this puzzle is that some people are emotionally tall.
--------------------------------
Consider physical abuse—intimidation, violence, sexual assault.
Some people are quite large, and physically imposing. Large and physically imposing people are often surprised when they discover just how frequently their fellow smaller, weaker humans are in fear for their safety, or actively in danger.
It's easy for large people to not “get,” on a visceral level, just how dangerous the world is for e.g. 40th-percentile sized women. It takes a conscious effort to pause, reflect, and deliberately update, for such a person to realize that the world is a fundamentally different place for people who do not have the same physical properties.
We understand, as a culture, that some people are just drastically less likely to be physically abused. When we see Person A (hulking giant) and Person B (5'2") each discussing Person C like this:
Person A: I've never felt threatened by Person C.
Person B: Person C has actively threatened me in a way that would've gotten them thrown in jail if there had been any witnesses.
... we know why Persons A and B have different experiences, and different priors, and it's easy to tell a coherent story—Person C just didn't mess with Person A (or they tried and it just didn't work).
Usually even Person A knows to discount their own experiential evidence, at least a little—at least once someone draws their attention to the dynamic. We kind of all understand how largeness works, relative to abusability. You might have to be like “bro,” but if you are like “bro,” Person A will usually go “ohhhhhhh, right right right.”
--------------------------------
I think there is an exactly analogous thing going on with regards to emotional and psychological abuse, and I think it is not recognized by people the way tallness is recognized, and I think this is a huge part of the problem.
Just as physically abusive people have a huge impact on the most vulnerable people, sporadic (but still often large) impacts on people in the middle, and zero impact on the large and physically imposing (including because they just misbehave less where large people can see),
so too do emotionally and psychologically abusive people have a huge impact on the most emotionally and psychologically vulnerable, sporadic (but still often large) impacts on people in the middle, and zero impact on the “emotionally tall.”
(In part because they tend not to misbehave around the emotionally tall, and in part because their misbehavior doesn't threaten or damage the emotionally tall anyway and thus isn't categorized as misbehavior by the emotionally tall even when witnessed.)
And the problem (according to me) is that the emotionally tall DO NOT ACKNOWLEDGE THIS ABOUT THEMSELVES, NOR FACTOR IT INTO THEIR ASSESSMENTS AND RESPONSES.
I've seen people shrug off and basically ignore claims of emotional and psychological abuse on the grounds that they've spent tons of time with Person X, and it's never been a problem, and they know several other people who've spent time with Person X without a problem, so ...
And if it were the analogous situation, and the claims of abuse were physical, they would immediately recognize, oh, right, maybe what's going on here is that I'm typical minding, and assuming that since I'm large and imposing and physically safe from Person X, everyone else is, too, and of course, that's ridiculous, okay, sorry, took me a second to recapture my sanity, but I'm awake and aware and ready to properly listen to you now, please start over from the beginning.
But since emotional tallness is invisible, and not a property that people are primed to think about and account for, they do not make the connection. They do not viscerally feel, and do not deeply comprehend, what's happening to the emotionally short around them, and they discount it and dismiss it, not only to the point of not taking action, but to the point of not even really believing that anything morally objectionable has happened. To not believing that anything objectionable could have happened, because of an implicit (and erroneous) belief that if it had, they would surely have noticed it themselves.
(“I mean, obviously you had a rough time, clearly this wasn't good for you, you should probably avoid Person X in the future. But it doesn't seem to me like Person X did anything wrong.”)
A person who only rapes small and vulnerable people is an obvious trope, and we know to look out for it, and we know how to top-down account for our own blindspots if we ourselves are not small and vulnerable.
A person who only mindrapes some people, and whose impact on the minds of others is neutral or positive, often gets away with it literally forever, in no small part thanks to the fact that the emotionally tall do not recognize their role in the dynamic, and explain away other people's experiences as probably being due to some mistake those other people made, or some flaw in those other people's orientation to the situation.
(“Have you considered your own role in contributing to this situation where you're claiming victim status?”)
Because if it's fine for me, it must surely be fine for everybody else, right? My experiences generalize perfectly, right?
--------------------------------
Edit: A friend was like, wait, do you mean X? and I was like, no, I mean something else. Here's part of the exchange, to clarify:
Friend: ...and the dumb thing I hear you saying is that... tall people shouldn't tall..? That people who go to the gym and body build should probably by default shrink down so as to be less imposing? That when my twiggy friend tells me they're afraid of my meathead friend, I should default to assume that the fear is because the meathead did an actual thing vs the consistent phenomenon that most people are immediately frightened by meathead's bulging muscles?
Me: No, the thing I was trying to say is “tall people should recognize that people who are not as tall as them have a different experience, including sometimes being vulnerable to attacks that the tall people literally never even considered or thought about because it was just too far removed from their experience.”
Friend: ah, “Not every bad experience a short person has is an attack, but it's important to acknowledge that short people are particularly vulnerable to attacks in a way that's hard to even fathom for a tall person, and it's important to keep that in our hypothesis space”?
Me: yep
Written June 1, 2022
I think one of the important social lessons of the past half-century is something like:
“Just because a person is kind to, or good for, persons A, B, C, D, and E, doesn't mean that they're not unkind to, or bad for, persons F and G.”
Like, we see this with people who have been unambiguously abusive to some people, and who also have many, many friends who are like “they would never!” or “this is inconsistent with everything I've seen of them.” We're slowly learning, as a society, to not immediately fall prey to the halo/horns effect, and to not privilege our own limited observations over the observations of others.
I think an important ... corollary? ... of this, is something like “if they've been robustly good to/for persons A through E, they will probably continue to be robustly good to and for persons like those people. And similarly, if they were bad to and for F and G, they will probably be bad to and for future people who are like F and G.”
I think that if both people in the A-E category and people in the F-G category could recognize and converge on this general truism, a lot of conversations would make a lot more sense, and a lot of attempts-to-help-and-protect-people would be a lot more efficient/effective.
Written March 10, 2022
Just remembering that time one of my colleagues proposed a process
and I was like “okay but we should time box it to, like, four hours of debate and a week of people thinking and then arrive at an answer”
and they were like “why”
and I was like “We've both been a part of too many interminable processes at this point; I think the thing we're investigating is pretty straightforward and it's costly to take up people's time”
and they were like “nevermind, let's not do it then” but then later on we initiated it anyway except without my proposed time box and it took up something like 500 person-hours over six months and ended in “okay we're abandoning this process unfinished because it's too meandering and costly”
...
some real “this meeting could've been an email” energy, only, like, a lot more so.
Written January 5, 2023
Just remembering that time that I had an intractable disagreement with a colleague
and the colleague was like “okay, weird pitch: what if we outsourced this to [council of three specific people] and pre-agreed to abide by their decision?”
and I was like, hmmm, [council of three specific people] seems super rad, actually, like those three people in combination are definitely wiser than me by myself, and I feel pretty confident they will see and understand and take seriously every consideration that I think is important
so I was like “Heck yeah, I enthusiastically consent”
and then my colleague went off and chatted privately with one of those three specific people for like six hours
and then came back and was like, “nevermind, I don't agree to submit to that trio's judgment, we have to find a new plan.”
🙄
Written December 8, 2022
It really does take a minute to unpack and reupdate.
I've posted several times about reorienting to conceive of a small number of my past relationships as various kinds of toxic and bad, and it's only
just now
occurring to me how much of a red flag it should have been, when a former work superior saw me being generically friendly with a work colleague I'd never previously interacted with much
and out of the blue and apropos of nothing, said (near-exact quote) “When I saw you chatting with so-and-so, I got this feeling like ‘you'd better watch out, Duncan's building a coalition against you, there's an ingroup forming and you'd better be a part of it or else.’”
Like, that was where their mind just … happened to go. That was their default interpretation of “people around me are having casual conversations.”
Yeesh.
Written December 23, 2021
One time, a colleague of mine was giving a presentation on attachment theory, which is one of those wrong-but-useful frameworks that divide people into four quadrants, based on how they approach emotional connection and interdependence.
There's secure attachment, anxious attachment, dismissive-avoidant attachment, and fearful-avoidant attachment. Roughly corresponding to:
“I'm secure in this relationship,”
“I'm anxious/insecure in this relationship,”
“I don't really need this closeness, so I'm going to keep my distance,” and
“I want closeness but I'm afraid of being hurt, so I both reach for it and pull away.”
(Take the summary with a grain of salt; I am not an expert in this theory.)
During the ensuing discussion, two of my other colleagues were absolutely insistent on finding a way to pathologize secure attachment—were not at all comfortable with the idea that healthy attachment exists, and is a category people should be able to claim to be in. They spent upwards of ten minutes arguing that it should be viewed as basically every bit as toxic and broken as the other three.
I did not, at the time, realize just how much of a giant fucking red flag that is, and how diagnostic of many future interactions with those people it would be. If you had started with nothing but a cardboard cutout of “the sort of person who would feel somewhat threatened by the idea that maybe some relationships are actually just straightforwardly healthy,” and made bets about their actions based purely on that stereotype, you would have made money.
Looking back, the obvious advice is “Ah—yeah, if someone doesn't think that any of the quadrants on the graph should be considered healthy, you should pause and consider what that implies about how they are orienting to their relationship to you.”
Written by a non-Duncan person, circa summer of 2021
███████ talked to ████ for a long time with the clear intention of getting ████ (at least) using concepts that are shaped in such a way that it’s easier to come to ███████’s conclusions when you use them.
Additionally—and this is where it gets dangerous—it seems to me that ███████ was desperate for the people around them to use those concepts, and also, in order to cause those people to use those concepts, they seemed to try to enforce a deliberate confusion, artificially, so that other concepts would be obliterated, and room would be made for theirs.
It seems to be part of their arsenal or something to go “no listen, this is actually really confusing, for humans in general, and you shouldn’t trust yourself and you should join me in my void of chaos where all concepts can be rewritten, but here let me give you some of mine, for example.”
Confusion inductions as a subset of hypnotic inductions (anything that causes a person to, for whatever reason, be more inclined to adopt your suggestions as their experience). A confusion induction: you cause the other person to feel dissonance; they want to understand, they want things to be clear and make sense, and so you feed them experiences that cause them to feel uncomfortable and like they don’t know what to do and like things don’t make sense, and you do it until they’re sort of desperate for an answer, and then you hand them an easy answer that feels good, and then they just kind of let go and follow your lead.
Written May 26, 2022
A FB friend recently snarkily summarized what was being described in a particular post as:
“I know you think that’s yours to decide about, but I don’t like that, and I have a conscious plan to subvert your sense of ownership over that thing (and ideally all things, so you submit to my judgments without noticing).”
and BOY HOWDY did that ever RESONATE with a multi-year traumatic experience that I am STILL UNPACKING.
Having someone artfully do [that] at you and the people around you for years is Not Great.
Written October 29, 2023
Thanks to the delightfully loose physics of dreams ...
… I got to say, directly to [person]'s face, that I have absolutely no intention of ever speaking to [person] again. Like, to explain how deadly seriously I meant it, and how little I expected the intention to change, and so forth.
[Person]'s response was to promptly recommend that I read a specific book, since (in their estimation) reading that book would undermine my intention and cause me to change my mind and thereby be willing to let them talk to me again.
I think my brain nailed it, actually. Like, my mental model of [person's brand of manipulation] while dreaming = spot on.
Written December 22, 2020
I've been tinkering with an essay titled "A Theory of Legitimate Influence" for well over two years now (that phrase offered by Harry Altman). Basically, the idea is that some kinds of influence are clearly immoral, but some kinds of influence must surely be okay (e.g. presenting someone with relevant facts?), and our society doesn't have clear lines or a clear principled stance on this.
The essay is far from done, but one piece of it (admittedly only half-baked) feels particularly important to me, and worth getting out there on its own:
If you notice that you have extraordinary success at some kind of persuasion (e.g. despite being only twice-as-right as the next guy, you're ten-times-as-effective at convincing other people to update to your position), then you have some moral obligation to take that seriously, and correct for it.
This is in part because the set of all possible claims described by the phrase:
“I can get them to understand if I say it JUUUUUUST RIGHT”
...contains far more falsehoods and manipulations and gaslightings than it does straightforward truths. For every one actually-hard-to-explain true thing, there will be dozens or hundreds of sufficiently-plausible or sufficiently-convincing lies, especially once you add in the fact that humans “believe” things for all kinds of reasons unrelated to truth (such as “I care about the facial expression of the person in front of me” or “it would be inconvenient or dangerous or costly to disagree”).
Like, yes, in fact: there are some true things that are surrounded by all sorts of nearby falsehoods and misunderstandings. There are some things that you really do have to say very carefully, and slowly, and with an uninterrupted, high-bandwidth back-and-forth.
But if you notice that you are going to that well over and over again, disproportionately often—
If you notice that you are consistently not persuasive when speaking in plain, Huck Finn words, but that you consistently are persuasive when you do some extremely complicated social tinkering—
If you notice that people frequently update to your position if you can get them alone in a room, but only if you can get them alone in a room (and not so much if someone from the other side is there, actively representing that side)—
Then I think it's important to view your own superpower with suspicion, and treat it with caution, and be very, very careful about when you pull it out and use it. There’s a reason “confidence man” has a negative connotation—specifically because you can't count on the usual safeguards to catch the con. If your persuasiveness is significantly closer to being a fully generalizable weapon than most people's persuasiveness, then you can no longer use "well, if it wasn't a good argument, then people wouldn't buy it" as a discriminator. You can no longer in good conscience say "all I'm doing is laying out facts, and letting people choose."
(I've had to do some drawing back myself, in this fashion; for instance, I have instituted a waiting period of at least a day whenever asking anyone to make an important call, even if they say they’re ready to decide on the spot.)
Bill Clinton is a historical example of this-kind-of-persuasive; I hear probably-apocryphal but not entirely ridiculous claims that Republicans developed a buddy system to counteract his extraordinary ability to coax an agreement out of somebody he’d cornered alone.
Darker examples include things like abusers, who can consistently soothe and reassure their victims (and their victims' defenders, and sometimes even cops and juries) if given enough time to cast the spell.
As with all things, it's not always what it looks like—the claim here is emphatically NOT “someone who's extra persuasive is obviously doing a bad thing,” nor is it “any argument from a successful superpersuader is bad.”
But I do think that the prior on [any given thing the persuader thinks they can convey by arranging the context to their advantage] is not good, and in my view, that negative prior at least puts the burden of proof onto the superpersuaded point. “We can’t assume that our normal filters and detectors will work like we’re accustomed to, here, so we have to treat it as an unremarkable member of the set, and the set contains a LOT of bullshit.”
Like, a seemingly-persuasive argument from a superpersuader should be treated as likely-bogus on principle, unless and until it can also be made in blunt and straightforward fashion. And superpersuaders who aren’t intending to just straightforwardly indulge in/make full and unrestrained use of their power ought to recognize this, and willingly accept the handicap.
(Blunt and straightforward fashion specifically, because the whole problem of the superpersuader is that if you let them explain delicately and at length, it WILL turn out to be persuasive! Perhaps include a stubborn ten-year-old as judge, or institute a waiting period of multiple days between concluding arguments and the listener's actual decision, or ask that the superpersuader present their argument through a proxy who is known to lack this particular mixed blessing, or simply have the debate out in the open with both sides arguing their point on a level playing field.)
I reiterate that the above is half-baked, but it feels important. A theory of legitimate influence will almost certainly contain a fully-baked version of this.
Written May 21, 2022
There's a character trait or property that I see in people, which I experience as something like “watching them give themselves permission to believe whatever they want.”
(It feels that way, in my ontology—it feels like an indulgence.)
e.g. religious faith, e.g. overconfidence, e.g. leaning into the representativeness heuristic or the typical mind fallacy. Any place where someone has much stronger belief than the available observations support.
I'm really allergic to this. One of the reasons I like Circling, despite its crunchy hippie woo nature, is that Circling goes out of its way to hammer home, REALLY HARD, that there's a difference between what you're actually observing, and the story that you have about those observations.
And the reason I have a very strong allergy to this character trait has two parts.
Part one is that it seems to me that people who exhibit this character trait—this slipperiness of thought—in one domain, also tend to exhibit it in others. Overconfidence, or a-willingness-to-draw-strong-conclusions-with-a-cavalier-disregard-for-fact, doesn't seem to be a thing that people do ONLY in their interpersonal relationships but not in their politics, or ONLY in their sports debates but not in their philosophical musings.
Part two is that it seems to me that almost every moral atrocity we feel ashamed about today, as a species, is downstream of people just … deciding what they were going to believe, and then having no cruxes and not being updateable by evidence. At the heart of every witchhunt is a bunch of people who would rather kill or imprison someone else than take responsibility for their own anxiety.
(Like, people who are more concerned with feeling like something is being done than with whether or not the thing that's being done is actually right or just or even effective.)
So when I see somebody exhibiting this trait, it hits me in a really quite visceral way. I think about the sort of people who join witchhunts and inquisitions and pogroms and hate groups, or the sort of people who just handwave away concerns over the toxic byproducts of some industrial process, or the sort of people who go along with the worst parts of racism and sexism, versus the sort of people who do not do any of those things, and it seems to me that Having This Trait makes you much, much less likely to be in the latter group.
Written July 4, 2021
It is a sad truth that the vast majority of bigots will never have to admit (or even notice) that they are bigots, since popular bigotries always come with a slate of justifications and rationalizations to choose from that the people around them will parrot and endorse and reward them for holding.
(This post brought to you by I Really Didn't Think You Were A Bigot, I've Known You For Like Six Years, And Also Wow, It's Going To Be Impossible To Talk You Out Of It, Huh.)
Written April 11, 2021
People who allow themselves to draw global conclusions about other people's character or actions, without reference to facts and sometimes in direct contradiction to those facts, are dangerous.
I think I see this as a much brighter red flag than most people do. Like, if someone says “so-and-so is always complaining about their coworkers” and, on further investigation, it turns out that this is simply not true—
(like, maybe they complained twice, ever, or something)
—I think that most people just brush it off, like “eh, I get it, we all exaggerate stuff when we're frustrated.”
I do not brush it off. I also do not do that exaggeration by default—I deliberately work very hard, in fact, to track things concretely and countably, specifically so that I don't slide into that kind of exaggeration. I still waver sometimes, and I strongly value people who check me on strong claims and help me notice when I'm slipping, and I try to explicitly acknowledge oversteps.
It's a very important character note, in my mind, if someone indulges in that behavior. Being willing to cherry-pick or outright distort reality just so you can hold onto a conclusion that's emotionally valuable to you is an EXTREMELY bad sign, in my experience, and diagnostic of a lot of other bad things.
I've never, for instance, met an abuser who didn't do this, though admittedly people who do this who are not abusers still outnumber abusers by a lot.
But like, this is literally one of the skills the bad guys in 1984 were trying to build up in the populace, as part of the recipe for making everything terrible forever.
(To be clear, someone who does this in the heat of the moment and walks it back of their own accord later untrips the flag for me, at least mostly. Ditto someone who does this because they're just looser with language, but walks it back if you raise a skeptical eyebrow/ask them "really, though?")
Written July 7, 2022
I recently wrote a post on how a lack of resources caused me to screw up a romantic entanglement. I particularly liked the way the post came out because I believe it made the experience comprehensible to people who've never had it. Like, I flatter myself that I a-little-bit made it so some people could go "ooohhhhh, I get it."
So here's another post in a similar vein, about the difference between explicit verbal consent and ... something like “sufficient” consent? How securing explicit verbal consent can be “not enough” if your goal is to actually avoid coercion.
(I mean, we already have trivial examples of this, like pointing a gun at someone and getting them to verbally consent to something. But this is a slightly-less-trivial example, where the people involved were straightforwardly being responsible according to explicit consent norms. Like, they were actually following the rules, and yet.)
Background: I once had a disagreement with a colleague at work. My version of the story:
We had difficulty with morale and esprit-de-corps. As an experiment to try to fix it, I pushed for us to all share working hours one day per week. (It was the kind of workplace where people came and went as was convenient for them, which I think was in fact awesome, but it meant that people were often lonesome and that lots of things stalled for days and days when they could have been cleared up in 60sec of face-to-face conversation.)
There was an agreement, recorded in email in common knowledge. IIRC the special day was Thursday.
One of my colleagues didn't show up on the first Thursday. I said nothing.
That colleague didn't show up on the second Thursday. I reached out to them to say that we missed them and needed them and that we were, in fact, bottlenecked on being able to sync up with them.
That colleague didn't show up on the third Thursday, and I reached out to my superiors for help. Their solution was "let's make this a subject of conversation at our upcoming three-day authentic relating retreat."
At that retreat, I found the conversation extremely disorienting. It took (and this is not an exaggeration) three full hours of argument for me to extract, from the group, admission that yes, there was an explicit agreement in common knowledge and yes, it had been violated.
To be clear, I think this disorientation was probably symmetrical? Like, the group kept wanting me to engage on the level of but-what-about-my-colleague's-feelings, and probably found it distressingly difficult to get me to do so. I was kind of waiting on something like convergence on ground truth before wanting to move into the world of subjective experience, and there's a reasonable position to take that ground truth was sort of irrelevant to the kind of empathetic understanding and team-connecting that the retreat was designed to foster.
But in any event, the point is: emotional state: fragile! Exhausted! Eight-plus hours of high-tension, adversarial circling-type conversation, in a context where I felt somewhat bewildered and surprised-to-be-largely-alone-in-caring-about-the-objective-record. I believe I cried at least once. I believe I shouted at least twice.
Which is relevant background for the following:
Either late that night or late on the second night (which is not particularly better, since iirc the second day was no less taxing than the first), one of my superiors asked me if I would be willing to try something weird. They had been having a hard time relating to me, having a hard time navigating their own needs and emotional reactions around me, and they wondered if I might be up for [a thing which they explicitly acknowledged was kind of unfair, and which I would certainly lose no face for saying ‘no’ to].
This would have been around 11PM, possibly even already midnight. The thing which they wondered whether I might be up for involved a sort of weird semipsychotic embodied Focusing. They had this mental image of me as sort of ... split into a ghost and a zombie? And the zombie as terrorizing the ghost? And they wondered if I might roleplay with them to help them tease out their stories and intuitions. Like, help do a performance art piece that might contain some interpretable information that would be legible after the fact (ish; this is my projection; they would probably characterize things differently).
And so I said “sure.” Another staff member was present for most of the ensuing five hours (!); I can't remember if they were present from the get-go or wandered in later.
And in those ensuing five hours, my work superior did things like ask me to hold extremely still, and please not talk, and sometimes visibly shook/trembled, and sometimes screamed, and sometimes cried a little, and asked questions like “why are you trying to kill me?” and said things like “let me out let me out let me out” (all from the “perspective” of the “ghost”). Often there would be long, long periods of total silence (as long as five or ten minutes, iirc). Sometimes my superior and my other work colleague would talk (but I was mostly not “allowed” to).
I put “allowed” in quotes because, as noted, my superior asked for my consent before beginning the activity. They also checked in roughly once an hour throughout the ensuing five hours; each time asking me if this was okay and if I wanted to stop. They also “broke character” at the end, and chatted with me for another ten minutes, and gave sincere thanks for my willingness to “dive in” with them.
In other words, this person clearly, unambiguously discharged their moral duty according to the rules of explicit verbal consent. It is my firm belief that they did nothing wrong according to those norms.
-----------------------------------------------------
In my sixth grade classroom, while I was teaching, my students would often find loopholes or clever hacks—ways to get arbitrarily large numbers of points in an extra credit activity, or ways to avoid 95% of the effort involved in fulfilling some set of criteria.
I was always careful to reward this cleverness. To give them the points, or let them off the hook for the activity, or whatever, and only subsequently close the loophole. I believe that one of the most important properties of rules is that they mean what they say, and that you shouldn't fiddle around with them mid-stream.
So I want to reiterate that, according to the rules of explicit verbal consent, this person clearly and unambiguously discharged their moral duty, and is in an important sense not-culpable.
-----------------------------------------------------
And yet.
(Remember, the whole point of this vignette is to highlight the inadequacy of explicit verbal consent.)
I suspect that many, many people would be outraged and horrified, consent-be-damned, to find that a work superior even asked me to stay up until four in the morning, unable to speak, while they acted out a ghost story around me.
I think those people would generally be wrong, though. I think that it's important to be able to opt into [weird shit], and that the kind of it-doesn't-look-normal-therefore-it-is-bad attitude is a really really bad one. I would feel sad if normalcy norms were strong enough that this sort of thing was strictly out-of-bounds.
But also, I still don't think this thing that happened was okay, on balance?
And the reason that I don't think this was okay is something I've previously referred to as a “Hufflepuff trap.”
There's a dynamic that crops up, sometimes, whereby Person A will offer Person B a free, unfettered choice between two options:
Option 1: Do the thing Person A wants to do
Option 2: Sacrifice your self-image of being a good or cooperative person
It's a free choice! You can do whichever!
The problem with this should be immediately apparent. If you frame the choice such that one option is clearly Objectively More Awesome than the other, then you have not quite actually offered someone a free choice.
Another example of this came from the same colleague when—
(after first being asked to change the way that they were scheduling events, and stop springing things on the team last minute, because you’re running us ragged, this isn’t sustainable, people need actual off-the-clock time to rest and recharge)
—they were like “okay, so, officially next week is our vacation week, and that’s sacred, I definitely want to be clear that I’m on board with the new policy of not interfering with vacations and not expecting the team to be tappable unless there’s sufficient notice, but I’ve decided to do a thing next week myself, and while nobody is obligated to help me with it, I do think that it will go much, much better if anybody is free and interested and also I do genuinely think this has the potential to be something like an additional 0.1% boost in our chance of preventing the end of the world, so if anyone can help that would be really quite amazing…”
To be clear: one is not always obligated to offer people free choices, in my culture. You can offer people pressured choices! You're allowed to have preferences between options!
But contexts like the workplace (or a church, or a sports team, or other frameworks that involve care and responsibility) change this in a meaningful way. You lose some freedom of movement, when you step into a role that confers power upon you.
When you have an exhausted junior member of a team who's currently in the middle of desperately trying to get a dispute fairly adjudicated—when that junior member of the team has had to fight tooth-and-nail to even get acknowledgement that explicit written agreements were violated, let alone to get those agreements shored up or supported-by-management—when that junior member of the team has screamed and wept and you've been in an emotional marathon for a full day (possibly two?)—
When you then say “hey, uh, there's this thing that would really help me, your superior, and like, I will acknowledge that it's a big ask, and kind of unfair, but also I'd find it super helpful,” it's just … really really really hard to make this sufficiently non-coercive, according to my moral instincts.
By its very nature, the request carries a bunch of overtones. There are power dynamics at play. There are regular social monkey dynamics at play. There's an implicit sense that the junior member is being judged on some weighted sum of their cooperativeness and their ability to hang with the cool kids, and that's hard to dispel even if you try.
Add to that that it's the middle of the night, and that the activity will go on for another five hours (which was not heralded up front), and that it will involve sitting motionless and silent while someone with power over you screams at you, and treats you as if you are a horrible creature that is harming a fragile innocent, and that it's just straightforwardly harder for humans to withdraw consent mid-stream—
I think that my superior did everything they knew to do, according to the Standard Rules, and also, I think the standard rules are (clearly) inadequate. Like, I think this is an existence proof of their inadequacy.
(As in my sixth grade classroom: I think this means my superior should be ... held harmless, or something? But I also think the rules should be patched after the fact.)
I think it takes more than merely securing explicit consent, to be sure that you have actually arranged a free and unencumbered choice, and I think for some things
(such as really intense psychoactive experiences in the middle of the night in a work context)
free and unencumbered choice is actually important.
If I imagine being in the room while Neville Longbottom's professor offers Neville Longbottom this choice, I absolutely feel a need to intervene. I would specifically want to intervene along the axes of “hey, Neville, are you doing this because you feel it is somehow related to your standing within this company?” and “are you doing this because you feel you have to prove something or make up for something or some other such thing?” I might also want to intervene for other reasons, but those two at least.
More than most people, I feel that Neville Longbottom should be free to make this mistake, if he wants to. But I think something went importantly wrong, that he was even offered the choice.
Written February 7, 2022
The type of abuser who doesn’t ever violate a safe word or cross an explicit boundary because they exert incredible and unrelenting pressure to make sure that people feel like saying “no” is equivalent to giving up or admitting defeat or proving oneself hopeless or uninteresting.
Written October 31, 2021
“I'm just very concerned that an explosion is inevitable,” Says Man Actively Stacking Full Gas Cans Near Furnace. “We really need to start strategizing about what we're going to do when this blows up, which at this point seems unavoidable.”
Written December 13, 2023
Still sort of ... admiring? Impressed? ... by the absolutely brazen DARVO'ing when, like, years ago,
an employer who had lied to me, and manipulated me, and gone behind my back to arrange for agreements with me to be broken, and then covered up the brokenness to delay me noticing so as to extract more value out of my default cooperativeness in the short term, and then worked extremely hard to prevent common knowledge formation among my colleagues around all of it, and then run an actual for-real gaslighting campaign to rewrite the history of what had happened and why, and who at that point had completely won and had basically 100% of the power,
was like “I feel like your anger and resentment over all of this is abusive; I am feeling pretty abused by you these days.” Like, looked me straight in the eye and said a sentence that 95+% resembles that sentence, seeking some combination of [validation] and [sympathy].
(The directly employment-related stuff was eventually made right but at the time of that statement the right-making was like three full years in the future.)
Written November 13, 2022
People can be spottily sociopathic.
(In fact, I claim this is the case for the majority of individual instances of sociopathic behavior.)
There's this thing where most people seem to implicitly believe something like “because I have seen so-and-so behave non-sociopathically, they Aren't Sociopathic (in some fundamental sense).”
Like, “but I've seen them be deeply moved by X!” or “I've seen them cry about Y!” or “I've seen them express deep care for Z!”
(“And it wasn't crocodile tears; I genuinely buy their emotional display as reflecting their internal state, and you should too; it's highly unlikely that it was all an act.”)
But in fact sociopathy is an accessible state for virtually all humans; it's standard for humans to feel sociopathic-by-default toward their outgroups.
“A Sociopath” with a capital S could be thought of (roughly) as somebody who outgroups approximately everyone.
But there are lots and lots of humans who have the capacity to very rapidly shift someone else from ingroup to outgroup, and become suddenly cold and callous toward them, and there are lots and lots of humans who outgroup a lot of people, including large numbers of people very very close to them (such as neighbors, colleagues, housemates, sexual partners, family members, friends-of-friends or partners-of-friends, etc.).
Thinking of sociopathy as an all-or-nothing bucket causes people to miss, or to unfairly discount, reports from other people that “whoa, I saw so-and-so be WAY sociopathic in an interaction yesterday.”
People miss or discount those reports in precisely the same way that people are like “What? How could Bob be creepy/rapey? I've never seen Bob be creepy or rapey!”
(This isn't a hypothetical; I've tried to tell people about Bob's sociopathy for various instances of Bob and seen those reports meet with zero sympathy and zero update, because the person I was talking to was someone toward whom Bob is not sociopathic.)
The average member of our society is (slowly, painstakingly) learning to be suspicious of their own knee-jerk extrapolations when it comes to questions of whether Bob is creepy/rapey. The past decade has seen particular progress in that domain, with a lot of people realizing that “I've seen Bob be good” doesn't mean “Bob is never bad.”
But I think most people aren't similarly learning to take-very-seriously claims that Bob can switch into sociopath mode with shocking swiftness or shocking frequency. People (in general, in my experience) have not yet cottoned on to the partial sociopath, the way they've cottoned on to the partial misogynist or the partial sexual abuser.
Which is rough, because, like, MOST of the actually dangerous people are going to be in that set. Pure sociopaths are going to represent a relatively small fraction of the total number of individual instances of sociopathic behavior, just like serial killers represent a relatively small fraction of total murders.
When there are a lot of mostly normal people being sociopaths when it suits them, that's going to be the bulk of the sociopathic behavior.
The takeaway here isn't “believe that Bob is a full-blown sociopath, if someone tells you they witnessed sociopathy from Bob.” It's “stop thinking that sociopathy is an all-or-nothing thing.” Believe that, if the report is accurate, Bob is capable of sociopathy in contexts where you wouldn't normally have predicted they would be.
Plenty of people out there who are “normal” 22 or 23 hours of the day, but complete sociopaths when it benefits them to be. Including plenty of people who are well-liked and high-status, and who in practice (whether intentionally or subconsciously) rely upon their high status and likeability to ensure that people will under-update on reports of their sociopathy.
(I've taken substantial damage from at least two such people still in good standing in the Bay Area rationalist community, for instance. These are hard problems, not least because those people's good standing is genuinely earned. Like, just as the existence of good behavior doesn't guarantee the absence of bad behavior, so too does the existence of bad behavior not erase good behavior!)
Written December 27, 2023
I don’t really have a clear view/clear handle on this thing that makes me mad, so I’ll just describe my observations:
I know a person who finds themselves in the position of needing to apologize a lot, and every apology seems to … grrr …
Like one time they were apologizing for creating a culture that was sort of lowkey corrupt/disingenuous, but their apology went out of its way to be like “btw everyone around me was colluding in this,” simultaneously deflecting responsibility and dragging others through the mud.
(Also, that wasn’t true; I was not colluding, and this was a major part of why I ended up being incompatible with this person and the culture of the group overall, a fact which imo deserved a caveat in the “but everybody was doing it” deflection.)
This other time, they were apologizing for having oversold the stuff they had to offer, and were like “we taught people how to mimic being PCs instead of teaching them how to be PCs,” sort of simultaneously sneering at/insulting the very people they’d misled, dismissing them as not being player characters and their subsequent accomplishments as p-zombie cargo-culting.
It’s a kind of knife-twisting, a kind of sleight-of-hand: every time they could take a lump, they sort of use someone else as a human shield or bring in some other distracting tangent. There’s never the sort of motion I’d expect from someone who feels genuine remorse, or who, like, actually believes they were actually wrong and responsible.
(As opposed to someone who’s treating the people around them like robots and trying to figure out which sequence of buttons to press in order to no longer be in trouble, without changing any of their underlying behavior, which is a thing they explicitly copped to having done repeatedly, in contexts like e.g. not doing their homework during their university education.)
(One time they were trying to mollify me and said, and this isn't an exact quote but it's really really close, “will you just tell me what I have to do for you to be mollified?”)
It really makes it hard to believe any moral progress has been made, not just in each individual case but ever.
Written February 14, 2022
In Duncan-culture, there are more mistakes you're allowed to make, up-front, with something like “no fault.”
e.g. the punch bug thing—if you're in a context where lots of people play punch bug, then you're not MORALLY CULPABLE if you slug somebody on the shoulder and then they say “Ouch, I don't like that, do not do that.”
(You're morally culpable if you do it again, after their clear boundary, but Duncan-culture has more wiggle room for first-trespasses.)
However, Duncan-culture is MORE strict about something like ...
“I hurt people! But it's okay, I patched the dynamic that led to the hurt. But then I hurt other people! But it's okay, because I isolated and fixed that set of mistakes, too. But then I hurt other people! But it's okay, because I isolated and fixed that set of mistakes, too. But then I hurt other people! But it's okay...”
In Duncan-culture, you can get away with about two rounds of that. On the third screwup, pretty much everybody joins in to say “no. Stop. You are clearly just capable of inventing new mistakes every time. Cease this iterative process.”
And if you don't—if you keep going, making a different error with a similar result every time—
In Duncan-culture, the resulting harm on rounds three and beyond is treated as, essentially, deliberate/intentional. Because the result was utterly predictable, and this fact failed to move you.
This is not, as far as I can tell, robustly/reliably true in the broader culture I'm currently a part of.
EDIT: More disambiguation:
We give people protection, socially speaking, when we consider them to have had good intentions, but to have made a mistake with tragic results.
In Duncan-culture, you can't really get that protection three times in a row for three similar results.
If you do A and it leads to X, that's just a mistake and we treat you sympathetically/generously.
If you then do B and it leads to X, well … plausibly your first patch wasn't good enough, but like, okay, things are hard, your good intentions shine through, fair game.
But if you then do C and it leads to X, all future X's are considered “your fault” in the not-excusable-as-a-mistake way. Good intentions cease to matter after three different Xings; your job now is to do whatever it takes to avoid more X, or to accept full responsibility for all future X, approximately as if you caused X on purpose.
Written November 25, 2021
Here's an interesting (and terrifying) failure of the typical-mind variety.
You know those times when you hear about, like, people who've literally never had an inner monologue, or pictured an actual mental picture, and they always just assumed that everyone else was being metaphorical?
Or e.g. that recent story making the rounds, of a man who had a self-image of being straight (because duh) and who assumed that all men must struggle with occasional sinful thoughts and impulses around other men (obviously, that's why we have so many sermons and so forth about temptation).
(The upshot being that he didn't realize some men were just genuinely disinterested; he assumed (reasonably, based on all the info in his surroundings) that his struggle with homosexual desire was common and typical.)
Anyway:
Imagine Person A, who struggles somewhat (privately) with violent thoughts and impulses, and has never acted on them but has had to put forth real effort over the years, especially in a couple of critical incidents where they almost lost control.
And furthermore Person A assumes that this is basically true of everyone, because they compare their external, observable behavior with others and it makes sense/matches. It seems like the world would look pretty much the way it looks, if most people felt the same way inside, except some fraction of people have insufficient self-control (and become criminals or domestic abusers or bullies or what have you).
And imagine Person B, who has zero such struggles, and zero such impulses, and has approximately never been tempted to haul off and punch somebody in the mouth, and has flares of temper, sure, but has never even considered being afraid of “what if I lose control, though?” because it's just never come even remotely close to happening.
And furthermore Person B assumes that this is basically true of everyone, as well, and thinks of the 10% of the population that becomes criminals or domestic abusers or whatever as having a separate property that is some kind of extreme emotional dysregulation.
So Person A thinks that good and moral behavior is all about willpower or self-control, gating back some sea of negative impulses that are constantly threatening to burst forth.
And Person B thinks that willpower and self-control are approximately irrelevant for most people, and that it's all about whether you even have negative impulses that most of us don't have to deal with (and so, sure, self-control matters for that subset of people, but not so much for Average Joe).
---------------------------------------------------------
Now imagine the following exchange:
Person A: [says something about anger]
Person B: [says something sympathetic, chiming in with an “oh yeah, me, too” sort of example from their own experience, not realizing that the thing Person A means by “anger” is something very, very different]
Person A: [becoming sort of alarmed, because Person B isn't signaling the appropriate kind of seriousness or wariness, and seems real casual and blasé about the whole thing]
Person B: “What? Come on—relax, I'm just saying—I mean, look. It's fine. It's not like I'm constantly fighting the urge to murder any more than you are.”
Person A: [who is, in fact, constantly fighting the urge to murder, is now MUCH MORE ALARMED, because that sounds like an obvious lie, and like someone who is so blind to their own anger problem that they haven't even bothered to put in any checks or preventive systems, and is clearly at risk of just blowing up any time someone hits the right trigger]
---------------------------------------------------------
Person A is looking for reassurance that Person B is taking the situation seriously (as Person A is, as they assume all responsible people do).
Person B is looking to communicate that they're not one of those people who need to take things seriously (which, in their model of things, is a pretty small subset, so the default expectation is that they aren't in the subset, so they shouldn't have to put forth all that much effort to rule themselves out of it).
And since the two people are talking past each other and don't really know it, they're going to get increasingly confused and increasingly alarmed, and the interaction is going to turn increasingly adversarial.
Written July 11, 2021
One downside of rationalist culture, which is very up-front and literal and explicit in a way that's really good—
—is that you lose the sort of obvious boundary between acceptable speech and insulting/hurtful/unfair speech, such that people (in my experience) become worse at recognizing the latter.
Like, in the broader surrounding culture, there are things you simply don't say at all, because they're viewed as fundamentally rude. Rationalists do it better—more things are expressible in cooperative, prosocial ways, which allows more of the important conversations to happen, and more useful feedback to flow.
But sometimes it's not the surface expression but the core thing-being-expressed that is itself bad. There are always gentler or less-gentle ways to say anything, but there are some things where even their gentlest possible expression is fundamentally outrageous.
And so you get rationalists who say things like “it does in fact seem to me that you are likely to commit a terrorist act,” and are then ... surprised? ... that the other person takes offense?
Turns out that, yeah, no matter how gentle you are or how careful you are with your hedges, if you point at someone with no history of violence (and in fact an active history in nonviolent conflict resolution) and express a bigoted view of their personality based on broad stereotypes, this will tend to be upsetting.
It's the belief itself that is upsetting—the unjustified leap to a very negative conclusion.
The broader culture sidesteps this by never letting you say awkward stuff at all, basically. The whole category is largely outlawed due to its surface rudeness, regardless of defensibility.
But just because your subculture doesn't want to outlaw stuff because of its awkwardness doesn't mean that there aren't other, deeper reasons why The Thing You Said Is Not Okay.
(Such as, for instance, how it's bigotry with zero supporting evidence.)
Making [stuff you can't say] sayable is good, but it also means you need new markers/triggers for recognizing what kinds of things are beyond the pale, and actually harmful (without doing any compensatory larger good).
Followup:
In yesterday's post, I described a situation in which Person A shared a belief that Person B was likely to commit a terrorist act, based solely on Person B's membership in a large group (not even, like, a specific religion, let alone a particular extremist sect or specific church; just the broadest possible demographic category), and was then sort of surprised/taken aback by the fact that Person B did not view this utterance as a cooperative or okay act.
A friend asked, in response:
“Would it still be rude if it was not wrong?”
And I wanted the response to get more visibility as a top-level post, rather than just a comment reply.
In my culture, it greatly depends on what you mean by “it” not being wrong.
If by “it” you mean “the person IS in fact substantially more likely to commit a terrorist act than baseline,” then in my culture, no, it's not rude to (responsibly, cautiously) express that.
But in that case, the burden is squarely on the accuser to demonstrate that it's reasonable to hold this position—to present a preponderance of clear, concrete, legible evidence.
If by “it” you mean “they genuinely believe it,” then no, that's no saving grace whatsoever. Because then all you have to do is cultivate a general anxiety disorder—as long as you sufficiently fear whatever you sufficiently hate, you can spew whatever you want out into the broader culture and no one can stop you. My culture rejects this vehemently. It’s a major cause of the brokenness in American politics today (for instance).
Beliefs—especially dangerous ones, especially ones that threaten to steal someone's livelihood or rouse a mob—require justification. They require foundation. You cannot simply believe whatever you want, or whatever you feel like—or rather, I suppose you can, but as soon as you go about expressing those beliefs, you are accountable for the impact that expression has on the social fabric around you.
We saw what happened to Carthage. We saw what happened with the anti-vax movement. We saw what happened to the United States in 2020. In my view, the so-called rationalist referred to as Person A was doing E X A C T L Y the same thing.
(Note also that a belief being legible and defensible requires more than that it be popular—you could've found plenty of people who would have nodded along with claims that black people were fundamentally inferior to white ones in every way, back in the 1700s, and can still find a depressingly high number of such folk today. If you’re making incendiary claims about another person, you have to be able to distinguish your belief from bigotry; you have to be able to demonstrate that it doesn't just look like a duck, it is a duck, because it also walks and quacks and so forth.)
When Donald Trump says that he believes massive election fraud took place, he's also strongly implying “... and I say this as the President, with access to tremendous amounts of information that the public is not privy to, and all sorts of power.”
A high-status member of the rationalist community sharing a belief that someone might be a terrorist, geez, is not just someone airing an innocuous private thought. They're implicitly throwing their weight behind that claim, as well.
Truth matters. But even among the rationalists, let alone among the population at large, there are too many people who can't tell the difference between what their brain is telling them and what's actually supported by reality. That's why the actual evidence MUST come along with the claim, or else the speaker should be held to account for what is unambiguously an unwarranted attack.
Written March 17, 2022
One time way back in the day I was like “I think X is not an okay thing to do.”
And a person who Xs reached out to me to say something along the lines of “hey, other people seeing your ‘X is not an okay thing to do’ post are turning cold toward you/freezing you out/wanting to associate less with you.”
And I felt like this was ... fine?
Like, I think my “X is not okay” commentary was at least in part a bid to make separate islands for Xers and non-Xers, and if Xers respond by distancing themselves from me, that seems ... good for everybody?
Like, even in the world where I'm wrong, and X is great, actually, that just means that the people rightfully Xing can get on with it without an annoying naysayer around?
I don't fully endorse the High Levels of Snark implied by the analogy, but I was reminded of the time that Costco said that shoppers without masks wouldn't be allowed in the store, and in response some anti-maskers were like, well, we're not shopping at Costco, then, and Costco was like … yes … exactly … thanks.
Like, that whole story seemed like successful coordination, more or less.
Written May 2, 2021
Not long ago, I had a trusted advisor tell me that—more so than anyone else they know—they feel like I insist that other people's models of me match my own model of myself.
[Author’s note from 2024: I now view this statement from them as a part of their manipulative toolkit, whether intentional/nefarious or reflexive. This was me figuring out what I didn’t like about it while still, at the time, viewing the person mostly-favorably.]
I think this is close to the truth, but missing an absolutely crucial piece.
It's quite true that I'm unusually uncomfortable with people who indulge in something like “free-floating models” of other people around them.
There's a sort of leaning into a mindless, 1984ish, we-get-to-believe-whatever-we-want-to-believe-about-people-and-are-not-constrained-by-observables stance that both unnerves and disgusts me. A convenient discounting of available evidence, a convenient overweighting of uncertainty (and interpreting that uncertainty with uniform bias). A writing-of-the-bottom-line, followed by whatever dirty mental accounting is necessary to justify the conclusion one had already decided to hold.
It's exactly the kind of thing that allows for witchhunts and pogroms, and I viscerally feel the fear of pogroms when I see it happening, even if the specific question at hand is trivial and innocuous (I do a decent-though-not-perfect job of taking that fear as object, rather than being blindly subject to it). Because that pattern of mental slippage—the habit of indulging in ungrounded thought—it's a motion people get used to, and don't notice in themselves, and they won't notice when it does matter, if they've practiced not-noticing when it doesn't.
And I am fully aware that I'm 99th-plus-percentile alert to this thing. That I take it far more seriously than most, and view it as a red flag where others might think it's yellow, or not even a flag at all. And that I'm most alert to it when it comes to discovering mismatches between [my self-model] and [your model of me].
But “insists” is a weird construction.
I do not have the internal experience of “insisting” that other people update their mental models of me to match my own, in any given situation.
Rather, what it feels like from the inside is:
Look, here's the evidence, here's the verifiable record, here's what actually unambiguously happened. That record flatly contradicts what you've been saying. What's going to happen next is I'm going to categorize you, in my own head, based on how you respond. Will you acknowledge, retract, and update? Will you double down despite the absence of supporting data? Will you do something more complex and reasonable, like ‘okay, I grant that my case is weaker than I thought, but also I don't abandon my hypothesis and I'd like some more time to bring in other lines of evidence and do some more thinking and reasoning away from pressure’?
In my head, “insist” feels like ... like leaning on the other person, chanting “believe me. Just believe me! I need you to SAY YOU BELIEVE ME!”
My stance is more one of “hey, you do you—your thoughts are your own. But I'd like you to let me know where you stand, please. What your policy is, on this question.”
(So that I can model you! So that I can use this bit of evidence to inform my predictions about how you will behave in the future, because the past is where I get my predictions from!)
I get that, if someone wants my model of them to be a certain way, this can create a feeling of pressure.
Like, if I'm saying “you get to choose whether I slot you into the bucket of
[someone I trust to think these things through sanely and reasonably]
or
[someone I basically am going to write off and distance myself from],”
then sure, there are stakes. That's not a meaningless distinction, and there's clearly a side I want you to be on, and if you value our relationship, then there's going to be pressure for you to go a certain way.
But like...
Do what you want?
I can't do anything about pressure that you experience because of your own sense of what's at stake. I can't promise not to judge something just because you fear that judgment, and feel like the fear of that judgment is putting pressure on you. The best I can do is be clear and upfront about my own needs and boundaries and cultural norms.
I don't think it's healthy or sustainable to try to force people to think straight. I don't think it's reasonable to try to thought-police. I have zero invested in trying to make people see reason, or care about available facts.
(It's generally unwise to give someone that much power—to put all of your eggs in a basket someone else can smash just by saying ‘no.’)
But I am absolutely going to include “do you think straight?” in my calculations of whether I want you near me or anything I care about.
The only way forward that I can see is:
Draw closer to, and weight more heavily, the people who are already visibly and consistently doing the responsible thing.
Pull away from, and discount, the people who are already visibly and consistently doing the (wildly) irresponsible thing.
On the margin, try to be cooperative and prosocial, and flag for people when I think I see them leaning one way or the other—share my impressions, so that if those people actually share my values, they don't miss an opportunity to express those values. Try to avoid simple mistakes of miscommunication.
And that's it.
Feels very different from “insists,” to me. In my culture, “noticing whether someone does X or not” is not at all the same as “insisting someone do X.” [And from my position of greater perspective in 2024 I think trying to narratively reframe the noticing as insisting is a psy-op designed to discourage the noticing.]