Some of you will know that I’ve argued for Utilitarian Naturalism: the view that moral facts are stance-independent, natural facts that are best captured by something akin to the Utilitarian normative theory.1
This metaethical belief of mine is probably one of my weirdest—I don’t know of a single philosopher who agrees with it—which means it’s very likely to be false. But, I haven’t quite awoken from my stupor yet.
In this article, I’ll offer a just-so story suggesting why we might all be Utilitarians at heart, followed by a brief defense of Utilitarian Naturalism against some common objections. Think of it as a sort of appetizer for the perspective—designed to give a grasp of the idea without incurring a significant opportunity cost in terms of time.
1. Pain and Pleasure
Pain and pleasure are likely highly adaptive traits from an evolutionary perspective. They encourage behaviors that promote replication—such as eating, mating, and social bonding—while also serving as signals against potential harm or injury, prompting withdrawal from dangerous stimuli.
But their role goes beyond immediate survival. These traits likely help us perform reinforcement learning throughout our lives. Evolution may have endowed us with these sensations not just to guide immediate responses, but also to support ongoing learning and adaptation in a constantly changing environment—a capacity notably limited in simpler organisms like bacteria, protozoa, or plants.
As a brilliant article by Gwern explains, evolution operates as a slow, sample-inefficient "outer" optimization process, shaping behaviors and learning mechanisms over generations. In contrast, reinforcement learning serves as a fast, sample-efficient "inner" process, allowing individuals to adapt within their lifetimes. In simpler terms, the optimization process of evolution created yet another optimization process inside of us that helps us learn.
Pain and pleasure, within this framework, emerge as evolutionary adaptations that provide us with an objective function. They also serve as the reward system of our consciousness, guiding our learning and decision-making.
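The inner-optimization picture above can be made concrete with a toy reinforcement-learning loop, where a scalar reward stands in for pleasure (positive) and pain (negative). Everything here—the action names, reward values, and learning rule—is an illustrative assumption, not something claimed by the evolutionary story itself:

```python
import random

# Toy "inner" optimization: an agent updates value estimates from a
# scalar reward signal, the computational analogue of pleasure (+) and
# pain (-). Action names and reward magnitudes are made up.
ACTIONS = ["eat_berries", "touch_fire", "share_food"]
REWARD = {"eat_berries": 1.0, "touch_fire": -2.0, "share_food": 0.5}

def learn(episodes=2000, lr=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}  # learned value estimates
    for _ in range(episodes):
        # explore occasionally; otherwise exploit the best-known action
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)
        else:
            a = max(q, key=q.get)
        # pain/pleasure acts as the objective function shaping behavior
        q[a] += lr * (REWARD[a] - q[a])
    return q

q = learn()
print(max(q, key=q.get))  # the agent comes to prefer the rewarding action
```

The slow "outer" process (evolution) would correspond to tuning the `REWARD` table itself across generations, while the fast "inner" loop is what each individual runs within a lifetime.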
2. Normativity in Cooperation
Beyond being capable of experiencing pain and pleasure, humans are social animals with innate instincts for cooperation, engaging in activities like hunting, defense, caring for offspring, or maintaining social structures. Indeed, belonging to a community and cooperating with its members—being a well-established participant—brings humans a sense of pleasure or eudaimonia.
Because we are social creatures, it is hardly a stretch to suppose we must also have evolved mechanisms for discerning whether others are genuinely cooperating with us or attempting to exploit us. Is that individual I picked berries with yesterday a good cooperator or a bad one? Such discernment seems essential for any social species navigating complex interactions. Judging good and bad cooperation, then, appears to require some form of innate normative perception.
But cooperation isn’t a black-and-white concept; it’s inherently fuzzy. Someone might cooperate halfway, up to a certain point, or with varying levels of effort, from minimal to extraordinary. So how do we gauge this fuzzy category of reciprocation? Perhaps the most natural development is that, many years ago, we began to see good cooperators as those living beings who demonstrated sustained care for our reward function—for our well-being. After all, cooperation is inherently tied to mutual benefit, and recognizing those who through their actions increase our happiness would be a straightforward evolutionary advantage.
And because we also have to interact with others, it would be advantageous for us to have some sense of how to be good cooperators ourselves—to have something like a conscience. This conscience would serve as an internal guide, helping us evaluate our actions and intentions against normative standards of pro-sociality that would increase our fitness.
3. A Just-So Story
Following this reasoning, it seems natural that a primordial conception of morality might emerge—a basic sense of how we ought to behave toward others, and how others expect us to behave in order to be seen as a good member of the group.
What would this conception look like? It’s hard to say, but a simple and natural hypothesis is that an individual would be regarded as a good cooperator by the group if they showed care for the objective function of others. If they demonstrated concern for the well-being of others in a manner similar to how they cared for their own.
If this is true, what would lie at the upper end of this category of goodness? Who would be seen as the most virtuous—as a truly unimpeachable cooperator?
Following this line of reasoning, to be beyond reproach in the eyes of everyone, it seems like an individual would need to value everyone’s happiness equally, and their actions would need to consistently reflect that. So, the perfect member of the tribe—the ideal in the eyes of both the community and one’s own conscience—would be the individual who prioritizes the well-being of others just as much as their own.2
Of course, no one fully lives up to this ideal. Yet it seems to arise quite naturally from two fundamental facts: that 'Nature has placed mankind under the governance of two sovereign masters, pain and pleasure,' and that we are a social species—one that depends on cooperation for survival. In such a context, having some internal conception of good and bad cooperation would offer a clear evolutionary advantage. Pain and pleasure, then, become the natural yardstick by which cooperation is measured.
As a result, when thinking about how we ought to behave pro-socially to be unimpeachable in the eyes of society (when consulting our conscience), we might be predisposed to formalize an innate first principle similar to the Principle of Rational Benevolence articulated by Henry Sidgwick in The Methods of Ethics:
The good of any one individual is of no more importance than the good of any other
This idea of equal consideration, which recurs throughout human history, is a cornerstone of utilitarianism. It underpins the concept of fairness that monkeys appear to possess too. Sidgwick regarded it as self-evident, asserting that morality inherently demands impartiality.3 Of course, according to Utilitarian Naturalism, it’s no accident that this idea would echo through the ages—it's inscribed in our being.
As an aside, it's worth noting a potential point of misinterpretation at this point. The Principle of Rational Benevolence does not imply that everyone should always be treated the same, but rather that the well-being of all should be given equal consideration when making decisions. This doesn’t always lead to feel-good outcomes.
For example, if a mother lacks the resources to feed all her children, it may be the right—albeit tragic—choice to sacrifice the weakest child, as that child is less capable of caring for their own well-being and that of others. According to the Principle of Rational Benevolence, such a harsh decision is a moral one because it serves the interests of all individuals, evaluated impartially. Indeed, many ancient human cultures accepted these kinds of trade-offs quite readily.
In any case, since being good is by definition something we ought to aim for, it appears quite straightforward that, once we are endowed with the principle of rational benevolence, we are led to the conclusion that we ought to promote overall happiness throughout society. And if the happiness of all individuals is to be considered equally, we are in Utilitarian territory: the greatest good for the greatest number.
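The "greatest good for the greatest number" reading can be sketched as an optimization problem in which equal consideration means every individual's utility enters the sum with the same weight, and the right action is the argmax. The action names and numbers below are invented purely for illustration:

```python
# A minimal sketch of the optimization reading of utilitarianism:
# each action maps to a list of per-individual utilities, all weighted
# equally, and the morally preferred action maximizes the total.
def best_action(outcomes):
    """outcomes maps each action to the per-individual utilities it produces."""
    return max(outcomes, key=lambda a: sum(outcomes[a]))

outcomes = {
    "share_harvest": [2, 2, 2],    # modest gain for everyone (total 6)
    "hoard_harvest": [5, -1, -1],  # big gain for one, losses for others (total 3)
}
print(best_action(outcomes))  # share_harvest
```

Note that equal consideration lives in the unweighted `sum`: no individual's utility counts for more than any other's, which is exactly the Principle of Rational Benevolence in miniature.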
a) Who Is Worthy of Moral Consideration?
An interesting question—one we will only very briefly touch on—is: Who are the individuals that the principle of rational benevolence refers to? There’s a potential ambiguity here. Does our sense of normativity, shaped by evolutionary pressures, apply only to members of 'our group'? To all humans? Or perhaps to all sentient beings?
Debate is certainly possible. Descriptively, I tend to think we regard actions as morally good even when they aim to minimize the suffering of beings who cannot cooperate with us—or who may never be able to. We seem to possess a deep, foundational aversion to suffering; perhaps this should be considered the initial building block of our morality. In such a case, principles like the one of rational benevolence are best understood as extensions of our aversion to suffering—projected outward toward other potential sufferers. In this sense, all sentient beings would fall within our moral circle, albeit with varying degrees of practical concern. This would explain why our conscience is capable of triggering when we encounter any sentient being that appears capable of experiencing pain or pleasure. After all, it would be adaptive for us to understand that we would be perceived as hostile and uncooperative by members of other species if we contributed directly to their suffering.
So, it’s possible that even if our sense of morality originally evolved to support mutual cooperation, its foundational principles were never intrinsically constrained to a specific group of individuals. It's true that, in practice, we tend to favor those we’re familiar with. But under this account, that bias wouldn’t stem from our innate moral sense; rather, it would arise from other facts about our evolved psychology, such as our tendency to maximize our own happiness—our tendency toward egoism.
Don’t get me wrong—of course, even from a utilitarian standpoint, it’s moral to show some special concern for those we know and love in applied settings. After all, we have more information about them: we understand their character, and we reasonably assume they’re sources of significant happiness (which not all individuals are). But that preference has its limits. If we imagine a father who sacrifices his son’s pinky finger to save a billion strangers, we don’t envision a moral monster; rather, we tend to see moral virtue.
In any case, modern utilitarians are well known for embracing the broader moral outlook that extends concern to all sentient life, a view I’ve briefly defended here from a naturalist perspective. But even within this framework, there’s room for potential misunderstanding, and we should proceed with caution.
Just because the moral circle includes all sentient beings doesn’t mean that, in practice, it would be moral to sacrifice a human to save thirty shrimps, simply because their suffering upon dying is comparable. That would be a strawman of utilitarianism. Moral reasoning must account not only for immediate suffering but also for the second-order effects of our actions.
Humans possess a uniquely important quality that sets them apart from all other animals: the capacity to drastically reduce suffering through technological innovation. In fact, one could argue that humanity has already greatly lessened overall suffering thanks to this ingenuity—and from a utilitarian perspective, our goal should be to continue doing so until sentient suffering is eliminated altogether. Humans are, so far, the only species to have made meaningful progress toward this end. As a result, their average value in eudaimonic calculations is to be set at extraordinarily high levels.
4. Objections
a) What About Moral Disagreement?
Notoriously there is widespread moral disagreement. If Utilitarian Naturalism were correct, shouldn’t we expect widespread agreement instead? And how can this theory account for the variation in moral norms across different cultures and historical periods? After all, every culture seems to have its own values and moral codes—Utilitarian Naturalism can’t be right.
Answer 1:
Famously, applied utilitarianism is extremely difficult—so much so that one of the classic objections to the theory is that, if it were true, moral agents would be trapped in an impossibly complex task of calculating all possible future outcomes of their actions.
Utilitarians acknowledge this problem but have proposed a clever solution: rather than attempting exhaustive calculations in every situation, one can rely on simplified heuristics—rules of thumb that, when followed, tend to promote overall happiness. These may take the form of norms, values, or virtues that guide a utilitarian’s decision-making without falling into computational paralysis.
Our ancestors intuitively understood this as well. They developed moral rules and social norms to guide individuals toward behavior that benefited the broader community. These norms—or laws—were often introduced by 'norm entrepreneurs' who saw them as useful: practical tools for maximizing collective utility. Because moral rules function as heuristics that drastically reduce the cognitive resources needed for moral reasoning, they’re often internalized by our System 1 cognition; they become second nature. Moreover, we are generally taught cultural norms and values, not the utilitarian principles that underpin them. These facts have led to widespread moral confusion.
Over time, values, virtues, and norms have come to be mistaken for morality itself, rather than being seen as pragmatic expressions of our underlying moral instincts shaped by human ingenuity.
Crucially, societies shaped by distinct historical experiences and influenced by different norm entrepreneurs tend to develop unique sets of norms and values. As a result, moral disagreements often stem not from divergent moral first principles, but from the varied cultural heuristics that have evolved to express those principles. This culturally conditioned learning is a key driver of moral disagreement.
Answer 2:
Utilitarianism naturally invites debate. Even two ideal utilitarians, working from different epistemological assumptions, may reach opposing conclusions. In fact, it's often possible to construct strong utilitarian arguments for both sides of a certain issue. Given the wide range of human experiences and cognitive resources, such disagreements are to be expected.
In fact, Utilitarianism may be the moral theory that best accounts for the prevalence of moral disagreement because it leads to applied moral debate. From this perspective, the existence of consequentialist-based moral disagreement can be seen as evidence in favor of Utilitarian Naturalism.
b) What About Alternative Moral Theories?
Why would humanity come up with all these different moral theories to describe morality? What about religion? Or Deontology and Virtue Ethics? If Utilitarian Naturalism is true, shouldn’t we have simply converged toward Utilitarianism when reflecting on morality?
Answer:
Religion is a human universal, suggesting that it functions as a highly adaptive memetic structure. It incentivizes adherence to moral norms—even among those lacking innate moral impulses, such as egotistical sociopaths—through mechanisms like divine surveillance and promises of an afterlife. Religion may have been the only viable framework for establishing order in early civilizations.
A close reading of sacred texts often suggests that religions were constructed by humans with the goal of improving their ancient societies. Tailored to specific historical contexts, the moral norms embedded in religious doctrines seem deliberately crafted to promote collective well-being and social cohesion. These systems were shaped with the intent to do good—but, naturally, not all of their rules, heuristics, or values succeeded in achieving that aim, largely due to the limited knowledge and worldview of their creators.
Historical versions of Deontology and Virtue Ethics, meanwhile, fall into an understandable confusion: they mistake a specific set of applied rules or virtues for the moral theory itself. They conflate the applied heuristics used to guide moral behavior (the rules, values, and virtues) with the underlying principle that animates them. This is understandable because, from a pragmatic standpoint, rules and values are more important than first principles. It’s the heuristics we follow that guide most of our behavior.
Nonetheless, from a naturalized perspective, I believe Utilitarianism is the best candidate for a foundational descriptive theory of morality. It not only explains the emergence of religious moral systems, deontological rules, and virtue-based ethics, but does so in a way that unifies them under a common principle: the promotion of collective well-being. Trying to reverse this—deriving utilitarianism from the starting points of religious morality, deontology, or virtue ethics—appears like a far more difficult task.
We can climb the moral mountain from different sides, but one route offers a better explanation for the existence of the others.
c) There Is Clearly a Natural Bias Toward Kinship
It's not natural for humans to think in terms of maximizing well-being for all individuals equally—our instincts are heavily biased toward kin and close social circles, and everyone accepts this.
Answer:
As mentioned earlier, having a pro-kin bias can make a certain amount of utilitarian sense. However, humans don’t always behave morally or pro-socially—often, they act egoistically. The claim we are making isn’t that humans consistently act morally, or that they resemble perfect utilitarians in their behavior. Rather, it’s that Utilitarianism offers a good physical model for the underlying laws that govern our conception of morality.
It’s true that people generally don’t take issue with favoritism toward kin—though we do have normatively loaded terms like nepotism to criticize excessive forms of it. This tolerance largely exists because there’s no strong social expectation for anyone to behave like a perfect utilitarian, since it's impossibly hard to do. So, falling short in a wide range of cases is broadly accepted.
d) Why Do We See Moral Development?
If we are naturally inclined towards Utilitarianism, and we accept that norms are heuristics designed to guide us toward collective well-being, then why do norms within the same culture change over time?
Answer:
Although our utilitarian first principles may remain constant, the surrounding environment does not. One of the primary drivers of norm change is technological progress.
For example, early societies may have believed that strict gender roles maximized overall happiness, especially in environments where physical labor was central to survival and economic productivity. However, as technological innovation reduced the importance of physical strength in the workplace, those same norms were reevaluated. New conditions called for new heuristics.
Utilitarian Naturalism offers a straightforward explanation for moral development: as the world changes, so too must the rules that best promote collective well-being.
e) What About the Existence of Immoral People?
Immoral or amoral people exist. How would this be possible under Utilitarian Naturalism?
Answer 1:
Often, immoral behavior stems not from malicious intent, but from mistaken epistemological assumptions. People may genuinely believe they are doing good, even as they cause immense suffering. Many brutal dictators, for instance, have described their actions in these terms.
Answer 2:
People aren't motivated solely by moral considerations—they can, and often do, prioritize their own happiness over the well-being of others.
Answer 3:
Some people seem to lack a functioning "moral chip," or have one that’s been diminished due to still poorly understood sociobiological factors, such as in cases of sociopathy or psychopathy.
f) What About the Is-Ought Gap?
Answer 1:
It is the case that humans create new language for reasons of pragmatic, pro-social utility. It is the case that all healthy humans believe in some universal principles concerning the moral sphere. Define ‘goodness’ as the normative theory these moral principles spell out. By definition, good things are those we ought to do. You ought not torture babies.
Answer 2:
Demanding that one bridge the is–ought gap often presupposes a specific definition of "ought"—typically one involving strong, categorical, or non-natural normativity—that one need not accept. There's no problem failing to bridge a gap to a concept that doesn’t exist.
g) What about Moral Obligation?
Answer:
Within this framework, there are two main incentives to act morally: one internal, and one external.
The internal incentive is the psychological discomfort one feels when acting against what one fundamentally values—similar to the experience of cognitive dissonance in the epistemic realm. Our conscience, shaped by deeply held moral intuitions, will punish us for behavior it deems immoral. Underestimating this internal punishment is a mistake made at one’s own peril. It’s also worth noting that our own happiness is often closely tied to the well-being of others, making moral behavior beneficial even from a purely self-interested standpoint.
The external incentive comes from other human beings who share our moral module. As social creatures, we are highly attuned to recognizing and responding to behavior we perceive as immoral. This disapproval can manifest in subtle social sanctions—or, in some cases, escalate to outright violence.
5. Should the Truth of Utilitarian Naturalism Be Hidden?
Utilitarians sometimes ask themselves whether the truth of Utilitarianism should be hidden. The worry is that if too many individuals adopt the moral theory openly, it could lead to negative utility outcomes—people might reject simplified heuristics like values and norms, and in doing so, severely miscalculate the future utility of their actions.
Since Utilitarian Naturalism is a metaethical position that rejects the idea of a moral tally in the afterlife—or any form of supernatural moral judgment—there’s a further concern: without these external incentives, might people feel less compelled to act morally?
That may well have been a risk in ancient times, when belief in divine oversight was a central organizing force. But today, I’m not so sure. Wider recognition of utilitarian principles could enhance moral behavior and help dispel the harm caused by cross-cultural moral confusion and religious or ideological fundamentalism.
And just to offer the reader a case study with n = 1: I still deeply want to act morally, even within a purely naturalistic framework. The aspiration to annihilate sentient suffering has given my life a sense of meaning more profound than anything I’ve experienced before.
That said, utilitarians probably worry too much about these kinds of meta-level concerns. Only a tiny fraction of people are genuinely engaged with metaethics, and widespread normative disagreement is likely to persist regardless of what gets written. And who knows—there’s always the off-chance that a god exists, too.
6. On the Explanatory Power
Utilitarianism is quite parsimonious as a theory, and it has the feel of a physical model: the morality of actions is determined as the solution to an optimization problem. This is similar to how other physical models work.
But in the natural sciences, one of the best tests of a model is its predictive power. If it is a good model, Utilitarianism should help predict the evolution of moral norms. Some have argued that Utilitarianism does a fairly good job in this regard.
Indeed, utilitarians often claim that the theory has been ahead of the curve on key moral questions—that it has a solid track record. This is precisely what one might expect from a useful, if approximate, model of descriptive morality.
The title of this section makes me smile, though. As the attentive reader will have noticed, we’re not aiming for rigorous formal analysis here—this is a blog post, after all. There’s much more that could be explored in terms of evidence (like the dyadic completion literature) and many other objections worth considering. But one has to draw the line somewhere when writing an introduction.
For more on Utilitarian Naturalism, feel free to check out some of my videos on the matter, as well as a few earlier notes (which one day I will surely get around to rewriting properly).
Utilitarian Naturalism is, at heart, an optimistic theory. It suggests that if we relieve the suffering of others, our conscience—and those around us—will reward us. It offers a meaningful orientation to life, pointing us toward the maximization of sentient well-being.
I hope you derived some well-being from my admittedly weird thoughts.
1. Like some naturalists, I suspect that, to make the theory coherent with the current nomenclature in philosophy, we would need a reforming definition of morality, but this is beside the point of the article.
2. Here one might suspect that the best cooperator is one who cares for the objective function of everyone else more than his own. I believe this leads to misleading conclusions, in the sense that, in practice, this ideal doesn't always align well with what we typically consider good or moral—but explaining these discrepancies is beyond the scope of this introductory piece.
3. A similar and related concept—more geared toward practical application—is the Golden Rule: "Do unto others as you would have them do unto you." This principle embodies reciprocity and fairness, encouraging us to consider others' well-being as if it were our own.