7 Comments
Silas Abrahamsen

Is your theory that we simply define the word "right" as whatever action maximizes utility?

If so, then the view seems sort of vacuous. After all, someone could just define "right" in a deontological or virtue-ethical sense--or even as whatever action maximizes redness in the world.

If not--that is, if you think that rightness and goodness are real properties "out there" that are identical to pain and pleasure--then I wonder how you would get any knowledge of this? After all, if rightness had been identical to not using people as mere means, or good had been identical to redness, the evolutionary story would have played out exactly the same way you describe. It would still have been adaptive to be disposed to value pleasure, and we would have formed all the same beliefs, but we would just have been wrong. So you can't point to the evolutionary story as evidence that pain and pleasure are good, and that rightness is maximizing the good.

Either way seems problematic for your view, though you may have addressed this in a way that I missed.

Mon0

Thanks for providing a critique Silas!

In short, I believe that humans have evolved an innate sense of goodness, and this sense is fundamentally grounded in utilitarian first principles. We can see this if we carefully reflect on our moral intuitions, discarding those that are inconsistent or contradictory, and retaining only those that feel more self-evident, fundamental, and mutually coherent.

However, in practice, we have to rely on norms and virtues to guide our behavior because explicit utilitarian reasoning is prohibitively cognitively demanding.

In my view, this explains much of the historical development of descriptive morality (even before utilitarianism was formalized by smart humans).

Moreover, since different cultures adopt different heuristics to simplify moral decision-making, we observe cross-cultural moral disagreements. But these aren't truly disagreements about morality itself—they’re about the mental shortcuts used to conserve cognitive resources.

Not sure if I'm answering your doubts though.

Silas Abrahamsen

Of course!

I think I agree with a lot of what you say, but I suppose I'm then wondering about the relevance of the evolutionary story you tell. As I read it, you explain how certain behaviors (being altruistic and all that) make sense to have evolved because they were adaptive. But if your story about how we know morality is that it's some innate sense, then it shouldn't matter whether it's adaptive. I mean, when you tell the story, even if the good behaviors are also adaptive, the reason we evolved to prefer those behaviors has nothing to do with their being good, and everything to do with their being adaptive. But that sort of seems to undermine the evolutionary explanation for our beliefs being truth-tracking.

Mon0
Apr 5 (edited)

That’s a great objection—thank you for bringing it up. It actually reminds me that I should have included evolutionary debunking arguments in the objections section.

Before I dive into that, just a quick aside: our innate moral sense may no longer be adaptive (though I personally think it still is), and it's entirely possible that it is now maladaptive under current conditions. Evolution can produce traits that become counterproductive when environments change. I mention this because I find it interesting to think about, not because I think I'm telling you something new. Anyway, back to the debunking.

The metaethical core of my thesis is this: there is no “ulterior” or objective goodness that morality is supposed to track. Evolution gave rise to a functional moral system, and I believe it broadly operates in the way I’ve described—hence, naturalized utilitarianism.

In this framework, it doesn't really make sense to ask, "But does utilitarian naturalism track the real goodness?" Or rather, the question presupposes a deeper, more metaphysically robust kind of goodness, and I don't see a good reason to think such a "real" goodness exists in the first place. Under utilitarian naturalism, we identify goodness with the coherent organization of our evolved moral first principles, and call it a day. Hence the naturalism part.

Maybe this is vacuous, as you suggest, but I see it as both a valid metaethical stance and, more importantly given how I view philosophy, a useful one.

Indeed, on my account, if someone tried to define goodness in a strictly deontological way, or as "whatever action maximizes redness in the world," we would keep running into great confusion and headaches, with the human race continuing to kill each other over heuristic disagreements.

I think part of your skepticism might come from the fact that I take a more pragmatic approach to philosophy. I see philosophy not as the search for eternal truths, but as the development and refinement of useful concepts (memeplexes) that help us live better and reason better.

That's what I've tried to do here. But I realize you may be more interested in whether the view is truth-tracking in some way. That's a canonical demand, and you are right to raise it, although I am a bit skeptical that we can actually do such a thing for morality. Perhaps under some definitions of truth it is possible.

Silas Abrahamsen

Whoops, completely forgot to reply, lol! I think that's an interesting approach! I guess I still worry that it might be sort of trivial, in that I could also just decide to think of rightness as denoting, say, some set of deontic principles, or even just the law of the country I live in. But I doubt this would be a problem for you, seeing as you don't think there's some "deeper" property.

Perhaps another worry (though I'm not sure how big it is) is that simply having goodness denote "the coherent organization of our evolved moral first principles" may not lead to a single correct system of morality.

Firstly there is a problem of individuating who "we" are. It might be my family, tribe, society, humanity, all sentient life, etc., and I'm not sure there is a correct boundary to draw. I suspect you might opt for the latter, but in that case I'm not sure that there are any moral first principles to be found. I'm not sure that dogs or ants have moral first principles, and if they do I'm not sure the best way of making them coherent would be utilitarianism.

And that leads to the second part of my worry, which is that I'm not sure that the best coherent organization of these first principles is utilitarianism, even if we can find a precise set of who we include. For example, even if we could plausibly say that humans have as a first principle in our nature, given by evolution, that we should treat all humans equally, I'm not sure that any plausible evolutionary story would lead us to having the first principle "treat all sentient beings equally." I mean, a person (or community of people) weighing the interests of insects or rats as highly as those of other humans will have much lower fitness (I suspect) than a community that treats only humans equally. Thus I think we at best get something like weighing of interests in proportion to capability for being a useful cooperator (or something like that).

Though I may be misunderstanding your position, and I'm interested to hear what you think!

Mon0

No, Silas, that's a great objection. Indeed, I'm open to the possibility that utilitarianism might not be exactly right, but I think, in practice, it seems to "predict" things decently. I put "predict" in quotes because it's very hard to measure these things. For some soft evidence, one can look at the work of Joshua Greene.

I also agree that it seems kind of weird to get utilitarian first principles from evolutionary forces, but I provide a just-so story for how it could happen. I suppose it might be adaptive for the foundations of our moral sense to be coherent and universalisable to potential cooperators.

Of course I understand how vague this all is; it's just a rough hypothesis I'm throwing out there. Nonetheless, thinking about a physical model to explain the historical dynamics of human morality is underappreciated, so I'm happy to get people thinking in this direction. It would seem quite peculiar to me if we didn't have some "laws" that govern our behavior in the moral domain.

If we assume that my very fringe hypothesis is true, I suppose the only thing I could say to someone who decided to "think of rightness as denoting, say, some set of deontic principles, or even just the law of the country I live in" is that rightness doesn't work that way: we are physically bound to another sense of rightness (and eventually this will manifest in real-world implications).

Incidentally, this would be why there are no known human civilizations that want to maximize redness: you can't create your own morality. And even if it were possible to discover the TRUE morality, we wouldn't care about it if it went against the morality we are bound to value. If we somehow discovered that "it is good to torture babies" is TRUE, we wouldn't care a lick.

Noah Birnbaum

This is an interesting hypothesis. I buy some of it and really don’t buy other parts. I’d be happy to discuss it more over a zoom call, if you’d like: feel free to email me at dnbirnbaum@uchicago.edu.
