chris de ray

Utilitarianism - the problem of calculating consequences, revisited


Utilitarian theories hold that the rightness of an action depends wholly on its consequences. For example, the most basic (act) utilitarian theory states that an action is right just as long as it is the one that maximises pleasure and minimises pain. More complex utilitarian theories present variations on this theme - for instance, rule utilitarianism states that an action is right if it conforms to a rule which, if generally followed, brings about the best consequences. So, while a single act of murder might bring about more pleasure than pain (say, painlessly killing a healthy patient and harvesting his organs to save five other patients), the world would be happier overall if the rule 'do not murder' were generally obeyed than if it weren't (because otherwise people would constantly fear for their lives, etc.).


We can set these nuances aside for our purposes. Let's focus on a problem that faces all brands of utilitarianism, which we'll call the calculation problem. On any utilitarian theory, knowing whether a given action is right depends on knowing future consequences - whether of the action itself, of the rule that the action instantiates, or whatever else. Critics point out that it can be very difficult to know the future consequences of anything. Our ability to predict the future is limited, especially regarding long-term consequences. For all we know, saving a drowning child might lead to that child growing up to become a ruthless, murderous dictator. And the 'algorithms' put forward by utilitarian thinkers, like Jeremy Bentham's 'felicific calculus', are quite complicated, involving many variables: the intensity and duration of the pleasures (and pains) produced, how soon these will occur, the number of people affected, and the 'quality' of the pleasures (in J.S. Mill's version). Not only is this impractical in situations where we need to act fast (do I save the drowning child or not?), it also implies a kind of moral scepticism: since we can never really know what the consequences of our actions will be, we can't know whether our actions are right or wrong.
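To get a concrete feel for how demanding this is, here is a minimal sketch of the sort of computation the felicific calculus asks for. To be clear, the scoring formula, field names and numbers below are my own illustrative assumptions - Bentham lists the variables but gives no precise equation:

```python
from dataclasses import dataclass

@dataclass
class Effect:
    """One anticipated pleasure or pain for one affected person.
    Every field is an estimate the agent must supply in advance."""
    intensity: float    # how strong (positive = pleasure, negative = pain)
    duration: float     # how long it lasts
    certainty: float    # probability it occurs at all (0 to 1)
    propinquity: float  # discount for remoteness in time (0 to 1)

def felicific_score(effects):
    """Crude additive scoring of Bentham's variables. He gives no exact
    formula; this multiplicative weighting is one illustrative guess."""
    return sum(e.intensity * e.duration * e.certainty * e.propinquity
               for e in effects)

# The drowning-child case: even a split-second decision would demand
# estimates for every affected party, stretching indefinitely ahead.
save_child = [
    Effect(intensity=10.0, duration=50.0, certainty=0.9, propinquity=1.0),  # the child's future life
    Effect(intensity=-2.0, duration=0.1, certainty=1.0, propinquity=1.0),   # my cold, risky swim
    # ...plus all long-term effects (who does the child become?),
    # which no one is in a position to fill in.
]
print(felicific_score(save_child))  # 449.8 on these made-up numbers
```

Even this toy version needs numbers that nobody actually has, especially for remote consequences - which is precisely the critic's complaint.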


The objection goes something like this:

(1) If utilitarianism is true, then for any action, we can't know whether that action is morally right or wrong.

(2) But we can know that some actions are morally right or wrong (e.g. we can know that torturing people for fun is morally wrong).

(3) Therefore, utilitarianism is false.


Here, the utilitarian has an obvious reply to (1): we might not be able to be absolutely certain of the future consequences of something, but we can be reasonably confident about them, and thus reasonably confident that such-and-such action is morally right or wrong. And maybe that's enough for knowledge (unless you're an infallibilist, but that's another topic). Sure, it's strictly possible that torturing someone for fun will somehow lead to the greatest happiness somewhere down the line, but let's face it, how plausible is this?


Not so fast. Let's tweak the above argument a bit:

(1') If utilitarianism is true, I cannot be absolutely certain that torturing someone for fun is not morally right.


(2') I can be absolutely certain that torturing someone for fun is not morally right.


(3') Therefore, utilitarianism is false.
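It is worth noting that both versions of the objection share the same logically valid shape, modus tollens: if utilitarianism implies that we lack some piece of moral knowledge (or certainty), and we in fact have it, then utilitarianism is false. As a sketch, the shared form can be checked mechanically - here in Lean, with U standing for 'utilitarianism is true' and K for the relevant knowledge or certainty claim:

```lean
-- Modus tollens shape shared by (1)-(3) and (1')-(3'):
-- from U → ¬K and K, conclude ¬U.
example (U K : Prop) (h1 : U → ¬K) (h2 : K) : ¬U :=
  fun hU => h1 hU h2
```

Since the form is valid, the utilitarian's only option is to reject a premise - as the reply to (1) above did, and as the move to expected consequences below will try to do for (1').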



(1') is motivated by the fact that we can never be 100% certain of any future event, even with a mountain of evidence to the effect that it will occur. No matter how many times you have seen the sun rise in the morning, there is always some chance (however slim) that it won't rise tomorrow morning. What about (2')? Try to conceive of a scenario where torturing someone for fun is not only 'not-wrong' (i.e. morally permissible) but in fact the right thing to do. Are you able to? For my part, I cannot - moral truths like 'torturing someone for fun can never be morally right' strike me as self-evident in the same way that logical truths like 'X and not-X cannot both be true' are. But if you somehow can, consider the most morally abominable act (or rule) that you can think of. Try to think of a far-fetched scenario in which it brings about the greatest good for the greatest number (if you think hard enough, you will be able to). Next, try to imagine that this act is somehow morally right. Can you? I suspect not.



The best way out for the utilitarian would be to deny that it is actual consequences that make an action right or wrong, holding instead that it is reasonably expected consequences (Bentham seems to have believed this). This is sometimes referred to as 'reasonable utilitarianism'. I may not be able to know that consequence C will actually occur, but perhaps I can be certain that my expectation that C will occur is reasonable. Can I really be certain of this, though? I am not so sure. Whether an expectation is 'reasonable' depends on whether it is sufficiently likely given my evidence. Unfortunately, we can never fully rule out that we are mistaken about the evidence that we have. Knowing what evidence we have almost always requires us to rely on memory - for instance, to know that I have seen the sun rise hundreds of times, I need to remember this. But relying on memory to know past events, much like relying on our predictive powers to know future events, can never give us certainty. For any memory, no matter how strong or seemingly accurate, we can never rule out that it is distorted or perhaps even entirely false. But if I can't be certain of what evidence I have, I can't be certain that my expectation of C is likely given my evidence, and thus can't know for certain that my expectation is reasonable.
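The shift from actual to expected consequences amounts to a standard probability-weighted sum. A minimal sketch, with invented numbers for the drowning-child case, shows where the worry bites: the probabilities are only as trustworthy as the (memory-dependent) evidence behind them:

```python
def expected_utility(outcomes):
    """Standard expected value: sum of probability * utility over the
    outcomes the agent foresees. Purely illustrative."""
    return sum(p * u for p, u in outcomes)

# Invented estimates for saving the drowning child, conditional on the
# evidence I (seem to) remember having:
outcomes = [
    (0.999, 100.0),    # the child goes on to live a decent life
    (0.001, -5000.0),  # the child becomes a murderous dictator
]
print(expected_utility(outcomes))  # 94.9: saving looks clearly right

# But these probabilities rest on my evidence, and my grip on that
# evidence rests on fallible memory - so the 'reasonableness' of the
# expectation inherits the same uncertainty.
```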


We are thus back where we started: utilitarianism (in the above modified form) entails that I can never know for certain that a given action is morally wrong, since I cannot know for certain that my expectations of the relevant consequences are reasonable. But if we can know for certain that some actions are morally wrong - which we surely can - then it follows that utilitarianism has a false implication, and therefore is false.

