I've recently been reading some of Scott Alexander's older posts, like "What Developmental Milestones Are You Missing?" and "What Universal Human Experiences Are You Missing Without Realizing It?"
And these posts got me thinking about a rather embarrassing gap in my own mental landscape, something I somehow managed to miss until halfway through my PhD. It wasn’t an emotion, and it wasn’t quite a developmental milestone either. It was more like an insight—an idea that, once learned, makes certain parts of reality easier to grasp.
Lately, I’ve been increasingly aware that there are these things, let’s call them “useful memeplexes”, that function a bit like mental software updates. Install one, and suddenly the world makes a bit more sense or becomes simpler to navigate. This is all pretty obvious; I’m not talking about anything transcendent here. To give a concrete example, consider numbers.
We take the numbers memeplex for granted, but not every culture has it. The Pirahã people of Brazil, for example, apparently operate with only the concepts of “one,” “two,” and “many,” seemingly getting along just fine. (Though I wouldn’t want to be their accountant.)
Of course, lacking the numbers memeplex does come with some serious disadvantages. It makes information sharing... imprecise. If an enemy is attacking, their scouts might report, “many are coming,” leaving everyone guessing whether that means 100 soldiers or 100,000. Not great for strategic planning.
In fact, the numbers memeplex is so overwhelmingly useful that multiple successful civilizations independently reinvented it. And today, we make sure it gets forcibly installed during childhood via the mandatory software update known as formal education.
But not all useful memeplexes are so widespread. Some slip through the cracks. Here’s the one I managed to miss:
1. The Universality of Argument Maps
I vaguely remember being taught about syllogisms in high school during a brief detour into Aristotle, but their significance was lost on me at the time. The big thing I didn’t grasp was that they weren’t just about showcasing logical implications regarding how mortal Socrates was; they were archetypal representations of every argument you’ll ever hear.
And once you realize this, you should be able to take almost any argument (short of the illegible) and reconstruct it into this skeletal form—a structure known as an argument map.
Say your significant other comes home, sees the trash still sitting by the door, and says, “You forgot to take out the trash again,” before disappearing into the bedroom with the kind of silence that hums ominously.
That’s a (compressed) argument. And if you unpack it, the logic looks something like this:
Premises:
You didn’t take the trash out.
It was your duty to take out the trash.
Not doing your duty is bad.
Therefore: You did something bad.
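To make the structure concrete, the argument above can be sketched as a tiny data structure: a list of premises supporting one conclusion, where objecting to any premise undercuts the conclusion. This is a minimal illustrative model, not a real argument-mapping library; the names (`Argument`, `object_to`, `conclusion_stands`) are made up for this sketch.

```python
from dataclasses import dataclass, field


@dataclass
class Argument:
    """A minimal argument map: premises jointly supporting a conclusion."""
    premises: list[str]
    conclusion: str
    objections: set[int] = field(default_factory=set)  # indices of rejected premises

    def object_to(self, index: int) -> None:
        """Register an objection to one premise."""
        self.objections.add(index)

    def conclusion_stands(self) -> bool:
        """In this simple model, the conclusion holds only while every premise does."""
        return not self.objections


trash = Argument(
    premises=[
        "You didn't take the trash out.",
        "It was your duty to take out the trash.",
        "Not doing your duty is bad.",
    ],
    conclusion="You did something bad.",
)

print(trash.conclusion_stands())  # True: no premise has been disputed yet
trash.object_to(1)                # dispute that it was actually your duty
print(trash.conclusion_stands())  # False: the argument no longer goes through
```

The point of the exercise is exactly what the prose says: once the argument is laid out this explicitly, your moves are visible—attack a premise, or accept the conclusion.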
At this point, your options are pretty limited: you can either object to one of the premises or apologize.
Argument maps are tools to help us engage in cognitive reflection. They allow us to reflect more deeply (engage in System 2 thinking for dual-process theory enthusiasts) by laying out the structure of an argument clearly and explicitly.
When I first discovered this, I began trying to form argument maps in my head during disagreements. At first, it was awkward and clunky, but over time, it became second nature, almost like adding numbers. It’s been incredibly useful for keeping my objections focused and avoiding common logical fallacies. Just as importantly, it’s helped me recognize when I don’t actually have a real objection—when I’m simply uncomfortable with the conclusion and might just be embarrassingly wrong about something.
I see reconstructing arguments into maps as analogous to formalizing a problem using mathematics: it takes more cognitive effort upfront, but in return, you get clarity, precision, and a better chance of actually being right.
2. What Are the Rationality Memeplexes?
I often wonder what other useful memeplexes I might be missing, and I can’t help but wish there were a giant, well-curated list somewhere, ideally sorted by lifetime utility.
This feels like part of what the rationalist community might be trying to do. Rationalists have spent a fair amount of time trying to define what rationality actually is, and while I think I understand what they’re gesturing toward, I’ve never found their descriptions entirely satisfying at an applied level. How, in practice, does one become more rational?
It seems to me that at least part of rationality is about acquiring a set of useful memeplexes, mastering them, and learning when and where to deploy them. In a way, this should resemble the process of becoming a good statistician or a good plumber. The hope is that this framing offers a more pragmatic picture of what rationality actually looks like in practice.
Perhaps in a future post, I’ll try to sketch out a list of useful memeplexes, just to see how far (if anywhere at all) this approach can take us.
What separates a memeplex from a mental model or a conceptual framework? Is there a reason why we should look at the concepts of the rationalists more than, say, the concepts of philosophers? If I were to compile a list of philosophy ones, I might include:
Occam's Razor, Map vs. Territory, Empiricism, Deontic Logic, Strawmen, Counterfactuals, Reference Class Problems, Heuristics, Reductio ad Absurdum, Is–Ought Problem, Epistemic Injustice, Veil of Ignorance, Language Games, Truth Tables, Theory of Mind, Thought Experiments, Regress Problems, Performative Utterances, Referential Opacity, Capital vs. Labor, Tacit Consent, Moral Luck, Recognition Theory, Pragmatics vs. Semantics, Reparative Justice, Instrumentalism... I could go on.