A Theory of Moral Intuitions
Featuring Sidgwick
There has been some talk of moral intuitions on Substack lately after Bryan Caplan posted this image:

It so happens that, in this meme, I find myself sympathizing with the crying nerd at the top of the bell curve. Let me try to explain why.
1. Types of Moral Intuitions
A conceptual taxonomy of moral intuitions was attempted over a century ago by Henry Sidgwick in The Methods of Ethics.
Sidgwick proposed three categories: perceptual, dogmatic, and philosophical intuitions.
Perceptual intuitions are immediate, emotional reactions to particular actions. If you witness a murder and feel a sick twist in your stomach, that's a perceptual intuition at work: a kind of moral feeling, direct and unmediated.
Dogmatic intuitions are common-sense moral rules that most people treat as self-evident: statements like “stealing is wrong” or “we should keep our promises.” They apply to types of actions rather than specific cases and form the backbone of what people usually call common-sense morality.
Philosophical intuitions are more abstract in nature. They don’t refer to particular actions or categories of actions, but instead express principles that are meant to be universalizable — applying to all agents, in all situations. Also, they appear true or important upon self-reflection. These are things like “suffering is bad”, or as Sidgwick might exemplify:
“It cannot be right for A to treat B in a manner in which it would be wrong for B to treat A, merely on the ground that they are two different individuals, and without there being any difference between the natures or circumstances of the two which can be stated as a reasonable ground for difference of treatment”.
Sidgwick regarded philosophical intuitions as the foundation of moral reasoning. These were supposed to be principles that, upon careful reflection, no reasonable person would reject. So, he believed, they should serve as the starting point for building a systematic moral theory. And Sidgwick thought that, when pursued to its logical conclusion, this method of ethical reflection culminated in utilitarianism.
2. What to Make of This?
As a rule of thumb, I never trust philosophers. Which naturally raises the question: what are we to make of this seemingly arbitrary taxonomy?
Well, for starters, it's intelligible. Which is already pretty good by the standards of 19th-century philosophy. It does seem possible to coherently categorize moral thoughts in the way Sidgwick proposes.
Still, even if the classification makes sense, it’s not immediately clear why we should consider it especially valuable. Why, for instance, would we have different types of moral intuitions? Why should we treat philosophical intuitions as more foundational than others? And why would some moral principles be “self-evident” upon reflection?
Also, Sidgwick offers little in the way of argument for why we should all have the same philosophical intuitions; he mostly takes it for granted. Perhaps because, to him, it was self-evident (ha). But what about the rest of us mere mortals?
3. The Latest on Our Brain
Contemporary research in evolutionary psychology and cognitive neuroscience suggests that the human mind is not a single, unified block, but rather a collection of specialized cognitive systems—semi-independent, domain-specific networks shaped by natural selection to solve distinct adaptive problems. These systems govern different aspects of behavior, such as threat detection, social reasoning, language, mating, and foraging. Each system operates according to its own functional logic, processing specific types of information and producing outputs tailored to its evolutionary role. While these systems often work in coordination, they can also compete for our limited cognitive resources such as attention, working memory, and executive control.
Importantly, these cognitive systems are not monolithic; rather, they are composed of smaller, specialized subcomponents that carry out specific functions in service of the broader system. For example, the threat detection system—which monitors and responds to potential danger—relies on lower-level mechanisms that rapidly evaluate sensory input for signs of imminent harm. One such mechanism is the fear response, which automatically mobilizes attention, physiological arousal, and motor readiness when a threat is detected. This fear response operates largely outside conscious awareness, prioritizing immediate survival by preparing the organism to freeze, flee, or fight. In this way, subcomponents act as functional building blocks within larger cognitive architectures, allowing complex behavior to emerge. They are biological heuristics embedded in us by millions of years of evolution.
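(For readers who think more easily in code, here is a purely illustrative toy model of this modular picture, sketched in Python. Every name in it is invented for the example; it encodes nothing beyond the two paragraphs above.)

```python
from dataclasses import dataclass, field

@dataclass
class Subcomponent:
    # A low-level mechanism serving its parent system,
    # e.g. the fear response inside threat detection.
    name: str
    trigger: str      # the kind of input that activates it
    conscious: bool   # whether it operates within awareness

@dataclass
class CognitiveSystem:
    # A domain-specific system shaped by selection to
    # solve one adaptive problem.
    name: str
    adaptive_problem: str
    subcomponents: list[Subcomponent] = field(default_factory=list)

threat_detection = CognitiveSystem(
    name="threat detection",
    adaptive_problem="avoid imminent harm",
    subcomponents=[
        Subcomponent(
            name="fear response",
            trigger="signs of imminent danger",
            conscious=False,  # operates largely outside awareness
        )
    ],
)
```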
4. A Modern Reappraisal of Sidgwick
One possibility that would begin to shed light on some of the questions Sidgwick leaves us with is that moral intuitions (intuitions regarding how we ought to behave towards others) emerge from a cognitive system that governs our normative standards of good and bad cooperation. Rooted in our social nature, this moral system would have the goal of informing and guiding our conduct towards other beings.
Working under this assumption, philosophical intuitions would be natural candidates for some of the most ancient cognitive dispositions within our moral system, reflecting its core function. This helps clarify why such intuitions would serve as fitting foundations for a normative ethical theory: they would be universally shared among humans.
When comparing different moral intuitions, this idea doesn't seem that far-fetched. For instance, consider the contrast between the intuition that “causing unnecessary harm to others is wrong” and the intuition that “not bowing when you meet your elders is wrong.” The former appears more deeply rooted, a candidate for an innate philosophical intuition, something likely to be recognized as wrong across cultures—while the latter seems more culturally specific. But I think we can push the analysis even further.
If you’ve been following some of my other articles on morality, you’ll know that Harvard psychologist Joshua Greene places significant emphasis on what he calls deontological intuitions: those related to duty, rules, and rights. He argues that these intuitions function as a kind of System 1 morality: fast, efficient, and often unconscious heuristics that help us navigate the complexities of social life.
Sidgwick’s dogmatic intuitions appear to closely resemble the deontological intuitions Greene describes, leading one to suspect they might serve a similar functional role. If that's the case, it would offer a plausible explanation for their origin: they are cognitively efficient heuristics (or Schelling points) designed to help us act in line with our deeper, foundational philosophical intuitions, while accounting for our bounded cognitive capacities. Without such shortcuts, we might be paralyzed by moral deliberation—forced, for instance, to evaluate in every situation whether there are “reasonable grounds to differentiate treatment” between person A and person B, as in Sidgwick’s example of a plausible philosophical intuition quoted earlier.
We might even go further and suggest that many dogmatic intuitions such as 'failing to bow when meeting one's elders is wrong' seem like man-made solutions to practical social problems, and therefore appear more learned than innate.
And what to say about perceptual intuitions? Since perceptual intuitions, as defined by Sidgwick, arise as automatic emotional responses, they map naturally onto subcomponents of our moral cognitive system: functional components of our multi-layered moral architecture.
Take, for instance, the perceptual intuition of feeling guilty. Guilt likely evolved as a mechanism to promote prosocial behavior and maintain social cohesion. By causing individuals to feel bad when they violate social norms or harm others, guilt motivates reparative actions—such as apologizing or making amends—which in turn strengthen group bonds and cooperation. In this sense, guilt operates as a moral mechanism, a functional subcomponent of our broader moral system.
To recap, standing on the shoulders of giants, we have just reconceptualized Sidgwick in the following way:
Perceptual Intuitions
• Origin: Innate
• Role: Subcomponents of the moral system (biological heuristics)
Dogmatic Intuitions (Greene’s deontological intuitions)
• Origin: Learned
• Role: Memetic heuristics due to bounded cognition
Philosophical Intuitions
• Origin: Innate
• Role: First principles of the moral system
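For anyone who prefers that notation, the same recap can be written as a small data structure. This is just a restatement of the list above, nothing more:

```python
from dataclasses import dataclass
from enum import Enum

class Origin(Enum):
    INNATE = "innate"
    LEARNED = "learned"

@dataclass(frozen=True)
class IntuitionType:
    name: str
    origin: Origin
    role: str

SIDGWICK_RECONCEPTUALIZED = [
    IntuitionType("perceptual", Origin.INNATE,
                  "subcomponent of the moral system (biological heuristic)"),
    IntuitionType("dogmatic", Origin.LEARNED,
                  "memetic heuristic due to bounded cognition"),
    IntuitionType("philosophical", Origin.INNATE,
                  "first principle of the moral system"),
]
```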
We have thus attempted to outline a neuroscientific account of why we appear to experience different tiers of moral intuitions, an account under which Sidgwick’s categorization begins to make more sense.
At this point, we should ask ourselves an important question: Are we in over our heads? The answer, of course, is yes. Nonetheless, sometimes we must have the courage to hypothesize, and to share those hypotheses with the world. Undoubtedly, our model will be flawed, but it might help or inspire others to get closer to a more accurate understanding of what’s really going on. For what it’s worth, our explanation seems reasonable to me. And importantly, it appears falsifiable through cross-cultural studies, which is a definite plus. So, I propose we press on.
5. Self-evident Upon Reflection
Armed with our new reconceptualization of Sidgwick, we can begin to sketch what the process of identifying a claim as “self-evident upon reflection” might actually involve. This schematic method should serve as a way of evaluating moral intuitions to determine how they should be categorized, whether they are the foundational philosophical intuitions or merely dogmatic/perceptual heuristics.
We begin by selecting a specific moral intuition—a normative statement that comes to mind concerning how people ought to act, or what is good or bad to do.
Next, we assess its universalizability: we ask ourselves whether the principle appears to hold consistently across cases, or whether counterexamples arise in certain scenarios. Is it merely a context-dependent heuristic, or does it reflect a guiding feature of our moral psychology?
We then consider whether it plausibly reflects a philosophical intuition tied to our innate understanding of morality, or whether it is instead a self-serving rationalization, shaped by competing impulses or a desire to prosper without much concern for others. Is it a thought provided to us by our ethical system, evolved to promote social cooperation and fairness, or by our self-interested system, aimed at maximizing personal survival, status, and reproductive success?
Finally, we evaluate how well it fits with other candidate philosophical intuitions. Does it cohere with them, or is there tension?
Given the assumption we are working under — that healthy humans share an evolved moral module whose purpose is to provide a normative understanding of how to behave towards other beings — it appears plausible that this method could bring us closer to uncovering its governing principles.
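To make the procedure concrete, here is a toy sketch of it in Python. The boolean inputs are stand-ins for the reflective judgments only a human inquirer can actually make; nothing in the code computes them, and the names are mine, not Sidgwick’s:

```python
def classify_intuition(statement: str,
                       universalizable: bool,
                       from_moral_system: bool,
                       coheres_with_candidates: bool) -> str:
    # Step 2: counterexamples in edge cases suggest a mere heuristic.
    if not universalizable:
        return "dogmatic or perceptual heuristic"
    # Step 3: output of a competing, self-interested system?
    if not from_moral_system:
        return "self-serving rationalization"
    # Step 4: tension with other candidate philosophical intuitions?
    if not coheres_with_candidates:
        return "in tension with other candidates; withhold judgment"
    return "candidate philosophical intuition"

# Step 1: select a normative statement, then supply your judgments.
print(classify_intuition(
    "Causing unnecessary harm to others is wrong",
    universalizable=True,
    from_moral_system=True,
    coheres_with_candidates=True,
))  # -> candidate philosophical intuition
```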
(The method of identifying what is self-evident upon reflection represents the armchair approach to uncovering our philosophical intuitions. This is possible because we are in the rare case where we are both the inquirers and the object of inquiry. Of course, there is another route to investigate whether philosophical intuitions exist: scientific, cross-cultural studies—though this, of course, requires getting up from the armchair (boo). As more of a scientist than a philosopher, I tend to prefer this second approach, though it is undeniably more effortful and costly.)
6. Some Examples
To see whether a methodology makes sense, it is always useful to look at some examples.
a) Thou Shalt Not Kill
Let’s take the sixth commandment in the Bible: “Thou shalt not kill.” Is this a philosophical intuition, or merely a perceptual/dogmatic one?
Well, is it universalizable in the sense we have discussed before? How does it hold up in edge cases? Consider, for example, an extreme case of self-defense: suppose that during a school shooting you manage to seize a weapon and kill the shooter before he harms others. Is this to be considered immoral because it violates the intuition that killing is bad? This appears quite dubious to me, and I’m willing to bet it does to many other people too.
What seems more plausible is that “thou shalt not kill” was introduced as a moral norm in early civilizations because it functions as a very precious heuristic — a simple rule that generally promotes moral behavior. It reliably guides people toward good actions in the vast majority of situations, since it is very rarely a good idea to kill someone. So, it appears appropriate to categorize the moral intuition that killing is wrong as a dogmatic intuition.
b) Caring About What’s Physically Proximal
It seems that we have more ethical concern for beings in our physical proximity than for those far away, even when we don’t know them. Is caring about people on the basis of how near they are to us in space a philosophical intuition?
Up to a point this behavior makes sense. Our brains don’t have the cognitive capacity to keep in mind every person on the planet, so it’s reasonable that we prioritize what we know better and can more directly influence. However, we also recognize that if a close friend travels to the other side of the world, we wouldn’t suddenly believe it doesn’t matter if something bad happens to them just because they're far away.
This suggests we’re dealing with a perceptual intuition, a subcomponent of our moral system that downregulates concern for distant events. It likely evolved as a way to help us manage cognitive complexity, nudging us to focus on what’s known and actionable. Here, our self-interested systems are likely also at play, competing for our attention, since it pays to cooperate with those who can reciprocate.
In general, I suppose that caring more about what’s physically close is a candidate for a perceptual intuition. It appears to be a biologically shared, functional response rather than a core moral principle governing our moral system. Indeed, there appear to be counterexamples: humans often view helping those who can’t reciprocate as deeply moral (consider, for example, someone sacrificing their life to save a group of strangers).
(Caring more for people who are close would also clash with other plausible philosophical intuitions, like the one from Sidgwick quoted earlier.)
c) Partiality Towards Family
What about the near-universal human impulse to prefer family members over strangers?
Again, this makes some sense. In moral decision-making, prioritizing people we know to be good and loving over complete unknowns appears justifiable. But it seems possible to take this too far. For example, if we absolutize the principle to something like 'one should always prioritize family above all else', we can conjure up scenarios where such a principle seems highly dubious. Consider, for instance, a choice between saving the life of a child you don’t know and preventing a small bruise on your own child’s foot: I would bet that practically everybody would recognize that the moral choice is to save the unknown child.
As before, this is another case where we would expect tension between our evolved moral system, which governs how we should behave towards others, and other modules driven to propagate our genes at any cost. This makes the ‘partiality towards family’ intuition a candidate for subcomponent debunking.
In general, within this framework, there appear to be three ways of debunking intuitions: the intuition may be a learned heuristic, a biologically evolved heuristic, or a product of a cognitive system that competes with the moral one.
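As a compact restatement (and nothing more), these three debunking routes could be encoded like so:

```python
from enum import Enum

class DebunkingRoute(Enum):
    # e.g. bowing to one's elders
    LEARNED_HEURISTIC = "learned heuristic"
    # e.g. downregulated concern for the distant
    BIOLOGICAL_HEURISTIC = "biologically evolved heuristic"
    # e.g. partiality towards family
    COMPETING_SYSTEM = "output of a competing cognitive system"
```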
7. Some Objections
a) Our Moral System Is Itself a Heuristic for Gene Propagation
We did a lot of talking regarding our moral system and its core principles, but isn’t our moral system, under this framework, simply an evolved way to enhance gene propagation? In effect, it seems like the whole moral system is a “heuristic” for helping us enhance replication success as social creatures. If we carry the logic we've used so far to its ultimate conclusion, it seems we’re led to debunk morality itself.
I don’t think that’s quite right. We are, in a very real sense, physically bound to care about morality in a way that we are not bound to care about gene propagation. Our morality operates according to certain laws that govern its functioning, and we are simply trying to uncover them. Philosophical intuitions serve as a way to axiomatize a model of this morality we intrinsically care about.
b) This Still Seems Arbitrary
Yes, the armchair approach carries a degree of arbitrariness, as it involves me talking to myself and betting on what healthy humans might think about various scenarios. It’s easy to fall into post-hoc rationalizations. What we’ve done here is simply propose a sketch of a model, one that aligns with Sidgwick’s categorization and that explains why we experience different and often conflicting moral intuitions.
8. Back to Common-Sense Morality
But why are we even trying so hard to obtain a coherent model of human morality? We have our common-sense morality; let’s just use that and be done with it.
First of all, if we were to adopt the common-sense morality of ancient societies, we can be fairly certain that our present-day common sense would be horrified. Indeed, what counts as common sense shifts across time and cultures (by the way, our model offers an explanation for this, suggesting that different environments give rise to different learned heuristics). This already exposes the shallowness of relying on common sense—one is left wondering: which one, exactly?
Secondly, if we could explain how common-sense morality evolves across time and cultures by uncovering its underlying principles, we may be better equipped to avoid ethical missteps and promote more meaningful cross-cultural dialogue.
Thirdly, the lack of curiosity about the structure that governs our morality smacks of scientific surrender. Humanity didn’t come this far by shying away from complex problems. We progressed by striving to understand, and I suggest we continue to do just that.