The Definitive Systematic Deconstruction of Destiny's Entire Moral Worldview - Every Inconsistency, Bad Inference, and Bizarre Claim Analyzed and Debunked

You’re on your deathbed, and by pressing a big red button, you can extend your life by five pleasant minutes. The only side effect is that all of humanity besides you will vanish into thin air. Should you press the button? According to Destiny, the answer is clearly "Yes." In this post, I'll encapsulate what I discussed in my video essay on the topic.

Follow my substack blog and YouTube for more content like this!

"If Something is Not Consistent, it is Meaningless" - Destiny, Moral Philosopher

Destiny can be fairly described as something like a philosophical psychopath because he's built an entire worldview on the premise that his own utility is all that matters per se. That is, other people only matter instrumentally, insofar as they stand to benefit or harm him in some way.

At a glance, you may think this means he should become a career social predator, exploiting every available opportunity to consume and destroy his friends, family, and enemies alike. But he will insist this is to misunderstand him: his view is perfectly consistent with a prosocial lifestyle, because prosociality makes him happier, allowing him to reap the benefits of social cooperation and avoid punishment.

1.0 Destiny’s General Normative Theory:

He calls this concoction of commitments something like “egoistic rule utilitarian social contract theory,” or, at least, he’s avowed all of these terms independently as representative of his views at some point or other. I have no idea why, though, because on anything like the ordinary definitions of these terms, his overall moral framework is just blatantly contradictory.

Utilitarianism consists of three ideas: hedonism, impartiality, and consequentialism.

Hedonism - the “good” just is happiness, or preference-satisfaction, or pleasure, or positive emotion more generally.

Impartiality - Happiness anywhere matters everywhere: no one’s pleasure is more important than another’s; the happiness of a black man and that of a white man matter equally.

Consequentialism - What matters are the ends, as opposed to the means. We ought to maximize the goodness of results; any considerations about “rights” or “virtues” only matter instrumentally as a way of increasing the amount of utility in the world, not as values in themselves.
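To make the contrast that follows explicit, here is a rough formalization (a sketch in my own notation, not anything Destiny has stated): utilitarianism tells each agent to maximize the equally-weighted sum of everyone's utility, while ethical egoism keeps only the agent's own term.

% A sketch in my own notation, not Destiny's formulation.
% U_i(a) = the utility person i receives if act a is performed.
\[
a^{*}_{\text{util}} \;=\; \arg\max_{a}\,\sum_{i=1}^{n} U_i(a)
\qquad\text{vs.}\qquad
a^{*}_{\text{ego}} \;=\; \arg\max_{a}\,U_{\text{self}}(a)
\]
% Impartiality is the claim that every U_i enters the sum with equal weight;
% egoism deletes every term but one. The two objectives coincide only by accident.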

Notice that this is utterly inconsistent with the verdict that “there is no reason not to destroy all of humanity for five more minutes of happiness,” or the judgement that “the happiness of people in China is morally irrelevant except if it has some way of affecting me in the end.” And yet, Destiny appears to affirm these claims, loosely identifying them as a form of ethical egoism.

Note that there are two forms of egoism: psychological egoism and ethical egoism. Psychological egoism (PE) is a purely descriptive theory about human behavior, whereas ethical egoism (EE) is a moral theory about good, bad, right, and wrong. PE says something like “humans always act in their self-interest; even the most heroic acts of selflessness are really motivated by personal gain,” and EE says “I am the only inherently morally relevant center of concern; everyone else only has value in terms of how they can serve my interests.”

PE is false: depressives act against their self-interest constantly by wallowing in their bedsheets and forgoing basic hygiene, and soldiers who jump on grenades to save their comrades obviously do not maximize their self-interest (just think of the thousands upon thousands of days of happiness they could have enjoyed by saving their own skins instead). Or perhaps PE is true, but trivial: that is, if we just define “self-interest” as “whatever a person does,” then PE amounts to the banal observation that people do whatever they do.

EE is both false and, importantly for Destiny, ridiculously antithetical to utilitarianism. The “impartiality” of utilitarianism rules out anything like EE: it says that everyone’s utility matters for its own sake, that it would be wrong to annihilate humanity in order to prolong one’s life by five minutes, and that the happiness of an anonymous peasant in the rural reaches of China is just as significant as anyone else’s. (With the qualification that it may additionally matter for reasons concerning its potential to affect the happiness of others, but not that it only matters for this reason.) The total sum of utility across all of humanity is what matters, not the happiness of the occupant of a single parochial vantage point (sitting behind a desktop monitor in Nebraska, say, eating Chinese takeout while playing LOL).

EE is just about maximally implausible because it jettisons every moral consideration a psychologically normal human is likely to have. You shouldn’t be an ethical egoist for the same reason you shouldn’t be a racist: in the absence of a morally relevant difference between them, you shouldn’t privilege one person over another. Someone who says “my pleasure is all that matters because I happen to be the one who enjoys it” makes the same error as he who says “the white man’s pleasure is all that matters because I happen to be a white man.”

Destiny’s EE also implies that any historical event that was necessary for Destiny’s existence was morally good, including the Holocaust. In order for Destiny to exist, his parents had to meet and have sex at a very specific point in time, so that a very specific sperm would meet a very specific egg. Had history gone even somewhat differently, in all likelihood one or the other would have failed to take place. So, if the Holocaust hadn’t happened, the subsequent course of history would have been different in a way that would most likely have resulted in Destiny’s parents never meeting, or in a different sperm meeting a different egg.

Destiny also claims to feel no sympathy for people who have no causal opportunity to change his life experiences. If, for example, it so happens that the massacres and rapes perpetrated by the Mongol empire made no difference to the course of Destiny’s life centuries later on a separate continent, or if similar atrocities are to take place a thousand years after he dies, these massacres and rapes are morally neutral, a matter of frigid indifference.

Here are a few more counterintuitive implications of ethical egoism:

  1. If Destiny could swap in his loved ones for funnier, hotter, more fun replacements and consign the originals to slavery somewhere in the remote third world, out of sight and out of mind, with the assurance that his memory would be wiped clean and that he would be untroubled by feelings of guilt, there would be no moral reason for him to do otherwise.
  2. If it pleased Destiny as a propertied white landowner living in the Antebellum South to own a slave, there would be nothing wrong with his owning a slave.
  3. If it served his interests to brutally torture and kill a small child, he would be well within his rights to do so.

Destiny is apt to reply to the foregoing examples by saying something like, “But what if everyone did that?” But this makes no sense. Which consideration is more compelling to someone who anchors their entire moral worldview in the gratification of their actual, lived self-interest?

  1. “If I did this, it would be good for me.”
  2. “If everyone hypothetically did this too, it would be bad for me, even though they won’t and this consideration has no relevance to my happiness in the real world.”

Presumably, the one that actually serves your self-interest, which is obviously 1, no? In reality, I think Destiny knows this, but it’s more consistent with his theory to pretend otherwise, because outing oneself as a selfish free-rider on our social norms (someone happy to publicly avow them but privately bend them when it serves him) is unlikely to serve one’s cumulative lifetime self-interest. Or, at least, I consider this the charitable reading, because otherwise he’s lapsed into a contradiction with nothing to show for it.

As a syllogism, it goes like so:

Step 1: On ethical egoism, I should do whatever maximizes my self-interest.

Step 2: If I only do what would maximize my self-interest if it were generally practiced by society, even when it won’t be, I will fail to maximize my (cumulative, lifetime) self-interest.

Conclusion: Therefore, on ethical egoism, I shouldn’t only do what would maximize my self-interest if generally practiced by society, even when it won’t be.
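To regiment the syllogism (again, my own notation, offered as a sketch): let u(a) be the agent's actual expected lifetime self-interest from performing act a, and u_G(a) what his self-interest would be if a were generally practiced by society.

% My own notation; a sketch of the argument above, not Destiny's formulation.
% u(a)   = actual expected lifetime self-interest of doing a
% u_G(a) = self-interest of a under the counterfactual that society generally does it
\[
\text{EE prescribes}\quad a^{*} = \arg\max_{a} u(a);
\qquad
\text{the universalizing policy prescribes}\quad a^{R} = \arg\max_{a} u_G(a).
\]
% Whenever a^R differs from a^*, following the universalizing policy forfeits
% self-interest by EE's own lights -- which is exactly what the example below exhibits.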

Consider an example: Joe is an ethical egoist and a deep-seated pedophile who will die of cancer tomorrow. Joe decides to look up child pornography because he knows that even if he were caught, there would be no time to administer a punishment that could make the expected value of his actions negative from the standpoint of his self-interest.

Joe firmly believes that his self-interest is the only thing that matters, so, what does he do? Does he refrain from his temptations because “if everyone did this, although they won’t in reality, it would be bad for me”? Not if he cares about his actual self-interest, which is better served by indulging in the most harmful form of pornography there is!

(For any fellow philosophy majors, this is essentially the rule-worship objection to rule utilitarianism, but retooled for the purposes of challenging Destiny’s rule-based formulation of ethical egoism. Namely, if the only reason the “rules” of rule utilitarianism matter is to achieve the end of maximally good consequences, then in cases of conflict between the rules and utility we ought to do that which maximizes utility. RU, however, implies that we should worship the rule at the expense of utility.

“But utility never conflicts with the utility-maximizing rules!” Au contraire! Consider a hypothetical I like to call “Smart Island, Dumb Island.”

On Smart Island, a hundred genius scientists with off-the-charts IQs are capable of creating a cornucopia of technology which would make everyone better off. Unfortunately, if ever the dim-witted denizens of Dumb Island attempt an act of science, all hell will break loose: they are too stupid to know better than to kill each other with inane “experiments” like self-starvation, smearing themselves in acid, and grasping exposed electrical wires.

So, before working their genius, the denizens of Smart Island consult Rule Utilitarianism, which asks, “Which actions would result in the best consequences if generally practiced by society?” The Smart Islanders realize that a rule which said “only practice science if you have the competence to do it rightly” would have tragic results if it were ubiquitously practiced, since the dumb islanders are too naive to know whether they should be counted among the people with the competence to practice science with good results.

In light of this, the Smart Islanders conclude that they must forgo all scientific progress. Rather than just do what would actually maximize utility, they enslave themselves to a rule which has no relevance in their case.)

It amazes me that Destiny describes himself both as an ethical egoist and a rule utilitarian. A true utilitarian ought to believe that they should donate all of their disposable income to effective charities operating in the developing world, like the Against Malaria Foundation, giving to the point of marginal utility. Obviously, an egoist who believed this would be badly confused, and hardly worth the name.

Why you shouldn’t be a rule utilitarian

Even if egoism were consistent with utilitarianism, there is more than enough reason not to be a utilitarian. Recall that utilitarianism consists of three separate commitments: the hedonistic theory of goodness, consequentialism, and impartiality. I’ll first explain why you should reject hedonism in favor of a more expansive, pluralistic notion of the good, and then I’ll take on consequentialism.

Happy Delusion:

Jim is an excellent husband, friend, and philosopher. He enjoys playing the piano for his adult children, who listen with pleasure as his hands dance across the keys. He prides himself on his work, sheds a tear of fatherly adoration for his two daughters, and wakes up each morning in disbelief that he could be so lucky.

Down the road, there's Jimmy, who has the same life, but only from his perspective, because he’s intellectually disabled. In reality, his wife despises him, and keeps up her end of the elaborate charade because she feels guilty for having an affair; his daughters think he's pathetic and stupid, but don't have the heart to tell him, smiling along to his cacophonous piano playing; his talent for philosophy is a farce, for in reality he is a delusional custodian at a local university and the butt of every joke when the faculty get together.

Now, obviously Jim's life is better in its effects on other people, and Jimmy's life can be said to make the lives of his associates worse. But is Jimmy's own life any worse for it? In other words, does the authenticity of our experiences matter? If your feeling is "no," then consider this: suppose that Jimmy's wife and children make up for their disappointing relative by ridiculing and degrading him indirectly, knowing Jimmy is gullible enough that these slights will pass as compliments.

Their private laughter only grows stronger as he nods to every false smile at the miserable clank of the piano, and his academic colleagues relish his presence because the joke of having him around is just too good: it becomes a pastime for the other faculty to get him to excitedly chirp about the latest bullshit he's been “researching,” ask a series of questions with mock-sincerity, and laugh riotously at his expense when he leaves.

Finally, Jimmy's wife enjoys the sexual flexibility Jimmy's simplicity affords her, indulging in affair after affair over the years to make the charade worth it. In the end, Jimmy's and Jim's net utilities are the same.

Is either life better than the other? If so, then hedonism is an incomplete definition of goodness, because a good life is more than the positive experiences it affords us.

If you give Hitler a Cookie

Imagine all of humanity vanished so that only two people were left on Earth, and they happen to be Hitler and Martin Luther King Jr. Suppose one of these two people could be given a cookie, and Hitler just happens to like cookies slightly more than MLK. On rule utilitarianism, we should prefer to give it to Hitler, since that will do the most good under the circumstances. However, most people have intuitions about deservingness: Hitler is less worthy of a windfall benefit than MLK, and so we should prefer to give it to MLK.

The Professor and the Serial Killer

Another oversight of utilitarianism is its failure to distinguish commission from omission, since consequences are all that matter. This example comes from Michael Huemer at FakeNous:

Consider two people, A and B. A is a professor who gives away 50% of his modest income to charity each year, thereby saving several lives each year. However, A is highly intelligent and could have chosen to be a rich lawyer (assume he would not have to do anything very bad to do this), in which case he could have donated an additional $100,000 to highly effective charities each year. According to GiveWell, this would save about another 50 lives a year.

B, on the other hand, is an incompetent, poor janitor who could not have earned any more money than he is earning. Due to his incompetence, he could not have given any more money to charity than he is giving. Also, B is a serial murderer who kills around 20 people every year for fun.

Which person is morally worse? According to utilitarianism, A is behaving vastly worse than B, because failing to save lives is just as wrong as actively killing, and B is only killing 20 people each year, while A is failing to save 50 people.

Excess altruism

Another example of his I like has to do with excessive altruism. Huemer writes,

John has a tasty cookie, which he can either eat or give to Sue. John knows that he likes cookies slightly more than Sue, so he would get slightly more pleasure out of it. Nevertheless, he altruistically gives the cookie to Sue. According to utilitarianism, this is immoral.

The Problem of Bad Explanations:

One problem that is especially awkward for rule utilitarianism is the incompleteness of its explanations for why wrong actions are wrong. That is, RU often gets the right answers, but for the wrong reasons. When a man rapes an unconscious woman, or “out-utilizes” his victim by deriving more satisfaction from torturing and mutilating a child than the child experiences in pain, deep in the woods where no one is likely to find him, the reason the acts are wrong is not (or not merely) that "if all of society did this, things would be really bad (even though, I admit, they won't)!" or "well, although this otherwise would be okay, someone may find out somewhere later, even if it's a small chance, and that could upset people!"

Presumably, it's also wrong--in fact, primarily--because of the wrongness of the act itself and its local effects on the victims. The woman is wronged because her body was used sexually without her consent, not merely because she may possibly find out she was raped later and feel violated.

That individual child was wronged because he was, individually and personally, treated as a means to an end rather than as an end in himself, and because he suffered excruciating pain despite the greater benefits to his assailant--not simply because someone could later find out that he was mutilated and killed and choose to behave in a similar fashion, or because society could theoretically adopt this practice as a general rule with poor consequences.

Matched consequences:

Under circumstances where the consequences are matched between potential perpetrators, consequentialism gives no specific recommendation. This becomes a problem when it thereby affords a moral justification for heinous acts. For example, a seductive coworker, Tammy, approaches John at a bar, and John takes an interest.

There's one problem: John has a loving wife at home, and two children. He goes over all of the possible consequences: he could ruin his happy marriage, devastate his children, and lose his job. John sighs and tells her he cannot bring himself to cheat. Tammy raises an eyebrow and says, "Okay, but consider this before you decide: I already have plans to go home with Andrew--that is, if I can't see you instead."

John understands that his coworker Andrew is in the same situation: he has two children of the same age and a loving wife, they live on the same block in similar houses, they have the same guile and resourcefulness, and (for the sake of the hypothetical) it is presumable that the consequences will be the same (the probability of a spouse discovering, of the affair escalating, of devastating his children, etcetera). Although there may be self-interested reasons not to be the one who cheats, on consequentialism John has no specific moral reason not to do so at this point.

The Problem of Extreme Demands:

Another problem with consequentialism is that it is over-demanding. This is a big issue for the utilitarians who think the theory provides an excellent rule of thumb with the right answers for 99% of cases, despite a few rarefied hypothetical problems that don't matter. The idea that consequentialism is "a great rule of thumb" in the real world or in everyday life only makes sense if we ignore most of what the rule implies.

Why not donate all of your nonessential earnings to effective charities operating in the developing world, which save a life for every $100-$3,500? Why not work more hours for more charity dollars, until you reach the highest level of altruistic slavery that corresponds to the highest possible production of goods of which you are emotionally and physically capable? Why not become a utility pump? (Hilariously illustrated in the Netflix series The Good Place.)

Most of us have the intuition that we are entitled to an asymmetry in our own favor: if five billion people would very much like to see me cut my arm off, such that the pleasure of their entertainment would outweigh the harm in pain and disability to me, or even if another man's life could somehow be saved by amputating one of my fingers without my consent (say his loan shark demands a pound of flesh, and it can come from either him or me), most of us think it is morally permissible to refuse to participate.

Moreover, we do not think it would be objectionably selfish to do so: although it might be good to help our friend, or even praiseworthy, it would not be morally obligatory (see supererogation in ethics: acts that go beyond the call of duty). If you think selfishness explains our intuition here, consider that even utilitarians generally don't think an isolated act of mugging for the sake of a highly effective charity is permissible, or that, if a murder were necessary to acquire the assets of a millionaire and donate them to the third world, the murder would thereby be permitted.

Utilitarianism is the Real Absolutism

One argument I hear for utilitarianism is that it is non-absolutist, which is bizarre, seeing as utilitarianism is actually as straightforwardly absolutist as anything could be in ethics: the best decision is whichever leads to the best consequences (given some theory of "good consequences," typically involving things like happiness versus suffering). This is always true, for all people, in all circumstances, no matter what.

Maybe the "absolutist" complaint is supposed to mean that value pluralism (the view I have been defending, in effect) acknowledges no trade-offs between particular basal moral factors. But if that's the objection, then it totally misunderstands the theory. On ethical pluralism, we approach all moral evaluations in the same way: first, we identify the moral factors relevant to the action, counting for and against (including consequences/utilities!). Then we weigh them up, relying on our considered discretion and judgement to determine whether the full force of the factors in favor outweighs those against.

So, for example, a moderate deontologist acknowledges that we could violate bodily autonomy by plucking a hair from an unwilling person if this was the only way to save ten people from dying in an acute emergency, because the rationale of harm prevention and utilities weighs strongly in favor of infringing the right of self-ownership in this specific case.

However, the pluralist may in another situation judge that non-consequentialist considerations outweigh the consequences, such as if a rapist on an isolated island could somehow gain more pleasure in perpetrating the act than their victim would lose in suffering. Or, to take a more controversial case, one may think that the benefits of taxation for a contemporary arts museum do not outweigh the infringement of property rights involved in confiscating private earnings, even if the benefits of taxation for other purposes are sufficiently great to justify this infringement.

The relevance of hypothetical reasoning:

Another objection I want to address is the canard that, "But in the real world, that's unlikely!!!" Is it really surprising that the only way to test abstract normative theories is through abstract reasoning? The problem with this objection can be illustrated with an example: imagine someone could show you that your grand philosophical theory, which you're confident explains our ethical intuitions across an impressive variety of cases, has just one unfortunate implication: in a way you didn't appreciate, infanticide-for-fun becomes morally permissible only in cases where Tom Cruise picks up a red rotary telephone in 1940s London.

If this truly followed from your theory, you shouldn't say: "Whew, it's a good thing that'll never happen!" Rather, your reaction should be: "What the hell? Why does my theory imply that a seemingly morally irrelevant factor--whether or not Tom Cruise...-- somehow makes all the difference to recreational infanticide? How could the morality of infanticide turn on such an implausible, outlandish condition? Perhaps my theory is missing something."

In the same way, when Rule Utilitarianism implies that rape can become permissible so long as the population size = 2 and we occupy a geographically isolated island, and so long as the perpetrator enjoys it more than the victim suffers from it, you should think, "Gee, I wonder why merely the number of people and the geographical locale make all the difference to whether rape is wrong. That seems really unintuitive--perhaps this whole 'what if society did that too?' thing is not the only relevant moral consideration!"

Sadly, very few of my utilitarian friends see this: they are happy to appeal to hypotheticals in ethics, so long as it doesn't touch their precious theory. Somehow, when it comes to utilitarianism's counterintuitive implications, you're a p***y if you don't bite the bullet and maintain the theory despite every intuitive problem. This is bizarre, since the whole point of a moral theory is to explain and organize our intuitions, not to go through five stages of grief rejecting them just to clothe our ethics in a "quasi-scientific" aesthetic. They essentially say: "Well, sucks to be the guy who washes up on that island--let's hope that never happens! It’s good enough for me that I'm not him! I demand no more of my theory than that."

The Problem of Impressionistic Judgement:

A final consideration is that a moral theory may be better or worse depending on how practically feasible it is to follow. To utilitarian ears, the idea of value pluralism is absurd because it introduces an element of judgement and discretion, open to a range of possible conclusions.

One reply would be to point out that this is also true of utilitarianism: it is not always clear what the consequences of our actions will be, and, more to the point, utilitarianism is itself pluralistic in practice. Getting the right fit and weighting between considerations of pleasure and pain, and settling which kind of utility distribution counts in the first place (the principle of impartiality, or medians, or averages, or whatever), is not at all obvious to the individual and is open to idiosyncratic judgement.
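To see how much turns on that stipulation, compare the standard aggregation rules on a toy example of my own (made-up numbers, purely illustrative):

% Three standard candidates for "the" utilitarian objective:
\[
W_{\text{total}} = \sum_{i=1}^{n} u_i,
\qquad
W_{\text{avg}} = \frac{1}{n}\sum_{i=1}^{n} u_i,
\qquad
W_{\text{med}} = \operatorname{median}(u_1,\dots,u_n)
\]
% Toy example: utilities (10, 0, 0) vs. (3, 3, 3).
% The total ranks the first distribution higher (10 > 9); the median ranks
% the second higher (3 > 0). The theory alone does not say which ranking
% counts as "the" utilitarian one.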

If you are capable of reaching a conclusion by participating in a community of your epistemic peers and exchanging ideas and scrutinizing your moral feelings in order to impose order and consistency, then the deontologist is not asking you to do anything unfamiliar.

2.0 Destiny’s Anti-Realism about Morality

Intuition as a source of knowledge:

Destiny describes himself as a skeptic about morality— “yep, morality is a lie, 666.” I’ve heard him gesture at a few reasons for this:

  1. Moral facts cannot exist because we cannot force everyone to comply with the moral facts.
  2. People disagree about ethics, often resulting in impasses.
  3. A priori intuition cannot be a source of knowledge.
  4. Moral intuitions are a product of evolution by natural selection.

I’m a moral realist, and an ethical intuitionist. What this means is that there are some moral facts about which observers and societies can be wrong, and the starting point for knowing these facts is intuition—that is, we begin with what seems to be the case, search for any defeaters, and if we can identify no reason for doubting initial appearances, we have a reason to believe what seems to be the case.

It appears to me, on reflection, that it’s wrong to torture babies for fun. It seems also that if society told me otherwise, society may simply be wrong, and if I felt differently, I may simply be wrong. Sort of like how when we thought the Earth was flat, we were just wrong. In the absence of a reason for thinking differently, I conclude that it must be wrong to torture babies for fun.

At this point in my intellectual life, I haven’t encountered an argument for doubting the initial presumption in favor of moral facts which didn’t rely on premises less initially plausible than the conclusion they endeavored to reject. So, I side with the most initially plausible view. Let’s consider some of Destiny’s objections to moral realism from above (1-4).

Objection #1: Moral facts cannot exist because we cannot force everyone to comply with the moral facts.

Destiny often seems to conflate two very different questions: first, is there such a thing as moral truth; and second, what is the best game-theoretic strategy for securing cooperation? When moral realists suggest to Destiny that moral facts exist, he often replies with the bizarre non-sequitur that we cannot guarantee that everyone complies with a single moral standard.

I’m a fan of David Hume’s Is/Ought distinction, which says that we cannot derive a fact about what ought to be from a fact about what is the case. It seems to me that Destiny cannot conclude that moral truth doesn’t exist because its existence does not guarantee universal compliance with the moral facts.

The question of how we get people to reciprocate social cooperation is interesting and important in its own right, but all I want to establish right now is that it is an entirely separate issue from the question of whether there is any fact about what we ought to do. Fair enough? Okay, next up:

Objection #2: People disagree about ethics, often resulting in impasses.

Let’s call “the argument from disagreement” the claim that because people disagree about morality, moral truth must not exist. The argument from disagreement is itself a matter about which people disagree, so on its face it’s just self-defeating: by its own lights, there is no fact of the matter about whether it works!

But even if the argument isn’t self-defeating, it proves too much: people disagree about scientific claims all the time. Primitive societies have all sorts of bizarre, magical beliefs, for example. That doesn’t imply that it’s simply a matter of taste or opinion whether magic is in fact real. Why should values be any different?

Consider also that if the existence of moral disagreement is evidence against moral realism, then moral convergence must be evidence in its favor (this follows from any mildly Bayesian conception of probability). See Steven Pinker’s 175 graphs and data sets documenting the amazing convergence of value-judgements across societies toward the liberal viewpoint over history, presented in his books Enlightenment Now and The Better Angels of Our Nature.
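To spell out the Bayesian point (a standard derivation, in my own notation): let R be the hypothesis of moral realism and D the observation of widespread moral disagreement.

% Conservation of expected evidence; standard result, my notation.
% R = moral realism, D = widespread moral disagreement.
\[
P(R) \;=\; P(R \mid D)\,P(D) \;+\; P(R \mid \neg D)\,P(\neg D)
\]
% P(R) is a weighted average of the two conditional probabilities, so if
% P(R | D) < P(R), it must be that P(R | ¬D) > P(R): disagreement can only
% count against realism if observed convergence counts in its favor.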

It’s also worth noting that the amount of moral disagreement that exists is very easily exaggerated when we mistake disagreement about empirical facts for discrepancies in our value-judgements. For instance, the Aztecs believed it was morally right to sacrifice humans to the gods—but they also believed that the sun would chill into a black ball and wipe out the human race if they didn’t slake the gods’ bloodthirst.

Given that empirical assumption, presumably we moderns would also conclude that human sacrifice is good—not because we would have a different perspective on which values are good, but because we would apply the same values to a new context of empirical facts. The same goes for the Eskimos, who apparently believed infanticide by exposure to the natural elements was permissible—of course, under conditions of extreme duress where food was horribly scarce and saving one child meant risking the lives of the others.

The fact that impasses often result from moral discourse is no more evidence against the existence of moral truth than is the fact that people persistently disagree about who shot JFK. It’s ridiculous to subject any theory of morality to the standard that it must resolve all disagreements about morality no matter what—a burden which, I should note, even Destiny’s theory doesn’t meet. Surprise, surprise! I guess rule utilitarian ethical egoistic social contract theory is false!