I often write rather harshly about utilitarianism here, so it seems like a good idea to mention one place where it definitely belongs: weighing the side effects of different drugs in an effort to determine which ought to be prescribed. This is the kind of decision that will necessarily involve quantitative reasoning. When we’re faced with the question the linked article poses:
Which is worse – ruining ten million people’s sex lives for one year, or making one hundred people’s livers explode?
The thought that “life is sacred” should not oblige us to go with the former. We make trade-offs involving risks like this all the time; for example, we all get into cars, even though doing so carries a non-negligible risk of injury and death, because we value convenience more than safety. But if cars were sufficiently dangerous, we might not.
It’s senseless to approach this question except quantitatively, because the specific degree of risk matters. Take cars. The average American has a 1-in-500 chance of dying in a car accident over the course of his lifetime, which means, on a quick back-of-the-envelope calculation, something like a 1-in-40,000 chance of dying in a car accident in any given year. Convenience, then, is worth a 1-in-40,000 chance of death. Now take the drug example: is healthy sexual functioning worth a 1-in-100,000 chance of death? If so, then the trade-off should be made. Given how lopsided 1-in-100,000 is, the answer is probably “yes,” even if we don’t assign healthy sexual functioning as high a value as many today would give it.
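The back-of-the-envelope arithmetic can be made explicit. A minimal sketch, assuming (my assumption, since the text doesn’t specify) an 80-year average lifespan and risk spread evenly across years:

```python
# Annualizing the car-accident risk: a 1-in-500 lifetime chance,
# spread evenly over an assumed 80-year lifespan.
LIFETIME_RISK = 1 / 500
LIFESPAN_YEARS = 80  # assumed figure, chosen to match the 1-in-40,000 result

annual_risk = LIFETIME_RISK / LIFESPAN_YEARS
print(f"annual car-death risk: about 1 in {round(1 / annual_risk):,}")
# → annual car-death risk: about 1 in 40,000

# The drug case: 100 liver failures among 10 million treated patients.
treated = 10_000_000
liver_failures = 100
drug_risk = liver_failures / treated
print(f"drug-death risk: 1 in {round(1 / drug_risk):,}")
# → drug-death risk: 1 in 100,000
```

The point of laying it out this way is just that the comparison turns entirely on the two denominators: 40,000 versus 100,000.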
Despite what certain political commentators sometimes say (*cough*”death panels”*cough*), these kinds of trade-offs are not unethical, and in fact are an ethical imperative.
This line of reasoning is only valid, however, because the side effects of cars and drugs are just that: side effects. Strictly speaking, the choice we face is not one of “ruining ten million people’s sex lives for one year, or making one hundred people’s livers explode”; rather, we have a choice of allowing the ruination, or allowing the explosions. Such quantitative considerations only come into play once we ensure that the intrinsic effects of our actions pass muster.
The utilitarian responds to this way of thinking by rejecting the very distinction between substantial and accidental. He implicitly substitutes for it a distinction between known and unknown effects: I am to take everything I know about my possible courses of action, crunch the numbers, and do whatever has the maximally good outcome all-things-considered. If things then end up poorly because of factors of which I knew nothing, it’s unfortunate but not my fault (unless I could easily have known better); if they end up poorly because I chose incorrectly given the knowledge I had, I am to blame. This “to blame,” of course, just means “is the component that should be improved in order to maximize outputs.”
This might be a helpful way to think about maximizing output, but it is not a very good way to think about ethics. It assumes that we begin with a determinate concept of what we want, one so well defined that we can always evaluate a course of action in terms of how well it will achieve it, and then simply add up the known effects of our options and pick the one with the highest value. This seems to me quite false. Rather, I want to say, we begin with a vague concept of what we want (perhaps no more than the word “happiness”) and give it determinate content in the very process of discerning what to do. This kind of reasoning seems difficult, if not impossible, to perform without relying on the intrinsic/extrinsic distinction. But my reason for saying this is, admittedly, still vague.