Moral Weight and Prima Facie Duties

Earlier I introduced the concept of moral gravity and illustrated it with some examples of how moral stature interacts with moral gravity to determine the weight of our moral obligations to protect the vulnerable. I want now to develop this idea further into a general schema for thinking about the weight of different moral obligations.

Intuitively we all think that certain moral obligations are more compelling, that is, weightier, than others. This kind of intuition is particularly important when we are called upon to resolve conflicts among our prima facie duties. The notion of a prima facie duty was introduced by W. D. Ross as a way of reinterpreting the notion of categorical imperatives in Kantian ethics. Although Ross gave various interpretations of this notion, it is now generally understood among moral philosophers that one has a prima facie duty to do something just in case one has some reason to think that one is morally obligated to act in a certain way, where that reason does not appeal to personal inclination or self-interest, or to the total consequences of performing the action. He writes:
When a plain man fulfils a promise because he thinks he ought to do so, it seems clear that he does so with no thought of its total consequences, still less with any opinion that these are likely to be the best possible. He thinks in fact much more of the past than of the future. What makes him think it right to act in a certain way is the fact that he has promised to do so – that and, usually, nothing more.
Ross provided examples of six prima facie duties: fidelity, gratitude, justice, beneficence, nonmaleficence, and self-improvement, but suggested that there might be many more. So, for instance, in addition to such standard prima facie duties as "One ought to keep one's promises" or "One ought to refrain from injuring others," there might be prima facie duties such as "One ought not to discriminate against persons on account of their race," "One ought not to believe things which are not true," or "One ought to ensure that those accused of crimes get fair trials."

In each case of a proposed prima facie duty, moral agents are thought to have a pro tanto reason to act in the way the duty describes, just because that reason is the sort of reason that counts as a moral reason for action. As I will use this expression, one has a pro tanto moral reason to do A when one is aware of a state of affairs S that provides a moral reason for A-ing, even if there are other states of affairs of which one might become aware that would give one a moral (or other) reason not to A.

Prima facie duties are usually contrasted with "all things considered" moral duties, that is, ones in which all of the morally relevant factors have been taken into consideration in a process of moral deliberation before deciding whether the agent does in fact have an actual or operative duty to do something. As Shelly Kagan explains it: "To say that something is your duty all things considered is to say that you are required to do it given all of the factors that are relevant to the case at hand. In contrast, to say of something that it is a prima facie duty is only to note the presence of one or more of the factors that would generate an all things considered duty—in the absence of conflicting factors."

Ross himself expressed some dissatisfaction with the term "prima facie duty" and suggested that perhaps "conditional duty" or "claim" might better capture his meaning, but ultimately rejected these alternatives. The term "prima facie" is unfortunate because it suggests that the duty is only apparent, and the phrase "all things considered" is rather cumbersome. In my discussion I will use the term "standing moral responsibility" in place of "prima facie duty", and the term "actual" (or sometimes "operative") moral obligation for the notion of an all things considered duty.

So then, if moral agents have in general a standing moral responsibility to protect vulnerable moral patients, it will often be the case that different special responsibilities generated by this general standing responsibility come into conflict with one another. How does one resolve these kinds of conflicts? Ross fell back on moral intuition, but perhaps we can do better than that. The idea that there might be a formula of some kind that we could use to actually calculate the weight of our moral responsibilities has appealed to many philosophers, though none has come up with a satisfactory account. I don't think my account is ultimately satisfactory either: it is presented as a heuristic device to aid understanding of the set of normative factors that might influence judgments of these kinds.

The basic schema is as follows:

W = A x O x P

where W stands for the weight of a particular prima facie duty or standing moral responsibility, A stands for agent-relative factors, O stands for the kind of obligation involved, that is, the gravity of the interest or value that is being protected by having such an obligation, and P stands for patient-relative moral factors, such as the moral stature of the patients to whom the agent's obligation is directed.

We can illustrate how this schema works using simple thought experiments in which we hold two of the three variables constant and vary the third. For example, if we specify that A is a competent moral agent with no special relationship to the patient involved, and that P is a normal healthy human child, then we can vary the O factor as follows. Suppose O is the obligation not to murder. This is a weighty moral obligation. It is weightier than the obligation not to injure, which is in turn weightier than the obligation to provide some benefit to a moral patient. So, if killing is given the value 100, injuring the value 75, and benefiting the value 25, and we set A and P to 1 each, then the schema predicts that the prima facie responsibility not to kill the child is weightier than the similar obligation not to injure it, which is weightier than the obligation to provide the child with a benefit. This is in turn explained by the fact that the underlying interests involved have greater gravity in relation to the patient's well-being or good. So, O is going to be an independent variable that will affect our judgment about the weights of various moral obligations we might have. Other things being equal, the graver the interest involved, the weightier the corresponding moral obligation.
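To make the arithmetic explicit, here is a minimal sketch of the basic schema with the illustrative values just assigned; the numbers are placeholders for purposes of comparison, not measurements of anything.

```python
# A minimal sketch of the basic schema W = A x O x P, using the
# illustrative values assigned in the text (placeholders, not data).

def weight(A, O, P):
    """Weight of a standing moral responsibility under the basic schema."""
    return A * O * P

O_KILL, O_INJURE, O_BENEFIT = 100, 75, 25   # gravity of the obligation
A, P = 1, 1                                  # agent and patient held constant

print(weight(A, O_KILL, P))     # 100: not killing the child
print(weight(A, O_INJURE, P))   # 75:  not injuring the child
print(weight(A, O_BENEFIT, P))  # 25:  benefiting the child
```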

But let's hold the O factor the same and vary the P factor, that is, the moral stature of the patient involved. Suppose that one patient is the normal healthy child as above and another is a sentient non-human animal, say a rabbit. Let's put the child and the rabbit on the trolley tracks so that the moral agent involved has to make a forced choice between injuring the child and injuring the rabbit. In this case, we would assign the child a high P value, say 100, and the rabbit a smaller but not insignificant value, say 50. Since the A and O factors are held constant, the schema predicts that our choice should be to injure the rabbit, since our obligation to refrain from injuring sentient nonhuman animals is less weighty than our corresponding obligation not to injure human beings, because the former have lower moral stature. And this is (roughly) what the schema predicts: in a forced choice between these two prima facie responsibilities, we should choose to spare the child.
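With the same illustrative convention, holding A and the injuring obligation constant and plugging in the stipulated stature values gives the following comparison; again, nothing hangs on the particular numbers.

```python
# Holding A and O fixed and varying P (moral stature), with the
# illustrative stature values from the text.

def weight(A, O, P):
    return A * O * P

A = 1
O_INJURE = 75              # the same obligation in both cases
P_CHILD, P_RABBIT = 100, 50

print(weight(A, O_INJURE, P_CHILD))   # 7500: do not injure the child
print(weight(A, O_INJURE, P_RABBIT))  # 3750: do not injure the rabbit
# The weightier responsibility is owed to the child, so the forced
# choice falls on injuring the rabbit.
```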

But this is still far too simple to account for our moral intuitions. We can see this by introducing quantities of moral patients.

W = A x O x (nP)

where n is the number of patients. Suppose that instead of one rabbit on the tracks we have 50 rabbits, and one is still forced to choose between injuring one child and injuring 50 rabbits. If we use the values for moral stature I suggested earlier, the obvious conclusion would be that the obligation not to injure 50 rabbits is 25 times weightier than the obligation not to injure one child. But is it? We can fudge this by assigning an arbitrarily low value to the nonhuman patients involved, say 1 as opposed to 50, but this seems pretty arbitrary and just an ad hoc way of saving the intuition that human interests matter more than nonhuman interests. As I said before, if your view is that human interests, no matter how trivial, will always outweigh animal interests, then my theory is not for you.
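A quick calculation with the quantified schema shows where the 25-to-1 result comes from; the stature values are the same illustrative ones as before.

```python
# The quantified schema W = A x O x (n * P): fifty rabbits at stature 50
# swamp one child at stature 100 (2500 vs 100, a factor of 25).

def weight(A, O, n, P):
    return A * O * (n * P)

A, O_INJURE = 1, 1   # held constant; only the ratio matters here

print(weight(A, O_INJURE, n=1,  P=100))  # 100:  one child
print(weight(A, O_INJURE, n=50, P=50))   # 2500: fifty rabbits
```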

But clearly quantities do matter. We can see this using standard cases in which there is a forced choice between steering the trolley so it collides with one innocent helpless person who is tied to the tracks or steering it so it will collide with five innocent helpless persons who are tied to the other track. Most everyone who is asked about such cases comes to the conclusion that it is preferable to injure or kill one person than it is to injure or kill five, other things being equal.

But what if that one person is your own mother? Here is where the notions of derived or observer-relative moral status and moral partiality come into play so as to raise the stakes involved and change the moral calculation. Suppose that on one track lies your own mother, and on the other a complete stranger. The additional moral stature your own mother has in your eyes would give her life greater value than the life of a mere stranger, and so, if forced to choose, most of us would (perhaps reluctantly) steer the trolley into the stranger. Unless, of course, your own mother was an evil-doer who abused you as a child. This kind of consideration lowers your mother's moral stature and makes it more likely that you would choose to sacrifice her for the stranger, who, presumably, is innocent. These kinds of considerations have nothing to do with the intrinsic moral standing of the patient, but rather are due to the patient's derived moral status. In order to account for these kinds of factors, we shall have to make the proposed schema even more complex:

W = A x O x n(P +/- D)

where D represents the increment or decrement of moral stature in a moral patient due to their relational or derived moral status.
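As a rough sketch of how the D term would work, one can add a relational increment or decrement to the patient's stature before multiplying. The particular increments below (+50 for one's own mother, -60 for the abusive mother) are my own illustrative guesses, not values proposed in the text.

```python
# Sketch of W = A x O x n(P +/- D): the D term adjusts a patient's
# stature up or down for relational (derived) moral status.
# The increments used are illustrative guesses only.

def weight(A, O, n, P, D=0):
    return A * O * n * (P + D)

A, O_KILL, P_HUMAN = 1, 100, 100

print(weight(A, O_KILL, 1, P_HUMAN, D=+50))  # 15000: your own mother
print(weight(A, O_KILL, 1, P_HUMAN, D=0))    # 10000: a complete stranger
print(weight(A, O_KILL, 1, P_HUMAN, D=-60))  # 4000:  the abusive mother
```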

We probably also have to complicate the A factor, which we have so far taken to represent a normal adult human moral agent. In addition to their standing moral responsibilities, moral agents can also have various kinds of special responsibilities due to the particular roles they occupy, their own past actions, the promises they have made, and so forth. These additional sorts of agent-relative factors can also affect the weight of an obligation, sometimes in crucial ways. So, for instance, if a physician and a non-physician both happen to be on the scene when another person suffers a heart attack, most people would, I think, judge that the physician's obligation to attempt to help that person is weightier than the obligation of the non-medical bystander. The obligation would be weightier still if the medical professional is a trained EMT who has in fact been summoned to the scene by a 911 call. Such an individual has a strict duty to provide aid to the patient in distress, which ordinary bystanders do not have. So, then, we need to make the schema look something like this:

W = (A +/- R) x O x n(P +/- D)

where R stands for agent-relative factors, such as social roles, that function to increase or decrease the agent's level of responsibility.
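Here is a sketch of the full schema applied to the bystander, the physician, and the summoned EMT. The R increments used (0, +1, and +3) and the gravity value for the obligation to render aid are, again, my own illustrative assumptions.

```python
# The full schema W = (A +/- R) x O x n(P +/- D), applied to the
# bystander / physician / summoned-EMT example.  The R increments and
# the gravity of the aid obligation are illustrative guesses only.

def weight(A, O, n, P, R=0, D=0):
    return (A + R) * O * n * (P + D)

A, O_AID, P_PATIENT = 1, 50, 100   # one patient in distress

print(weight(A, O_AID, 1, P_PATIENT, R=0))  # 5000:  ordinary bystander
print(weight(A, O_AID, 1, P_PATIENT, R=1))  # 10000: physician on the scene
print(weight(A, O_AID, 1, P_PATIENT, R=3))  # 20000: EMT summoned by a 911 call
```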

Again, this formula is just illustrative. It is far from clear what the relationship is among these different sorts of normative factors. I have represented them in terms of multiplication and addition, but I could equally well have applied powers or logarithms to the variables and to the ways they interact. In fact, it is not possible to determine the precise formula for thinking about these kinds of issues by doing armchair philosophical thought experiments like the ones I have been offering. One would need to develop a range of cases, present them to subjects, measure their responses, plot the data points, and then choose which sort of formula (if any) best fits the curve. That is, it might be doable through empirical moral psychological research, but even then, it is not going to be easy.
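For what it is worth, here is a toy sketch of the kind of model comparison such empirical research might involve: given judged weights for a handful of cases, one can ask whether a multiplicative or an additive functional form fits them better. The "judgments" below are invented placeholders just to make the sketch run; real values would have to come from subjects.

```python
# Toy sketch of fitting candidate functional forms to judged weights.
# The judgment values are invented placeholders, not empirical data.

import numpy as np

# Each row gives (A, O, P) for one hypothetical case.
cases = np.array([(1, 100, 100), (1, 75, 100), (1, 75, 50), (1, 25, 100)], float)
judgments = np.array([95.0, 70.0, 40.0, 30.0])   # placeholder ratings

def multiplicative(A, O, P):
    return A * O * P

def additive(A, O, P):
    return A + O + P

def fit_error(model):
    preds = np.array([model(A, O, P) for A, O, P in cases])
    scale = judgments.mean() / preds.mean()       # crude normalization
    return np.sum((judgments - scale * preds) ** 2)

print(fit_error(multiplicative), fit_error(additive))
# Whichever functional form yields the smaller error "fits the curve" better.
```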

Yet as moral agents we make these kinds of judgments all the time. Assuming that our moral intuitions are not wholly arbitrary, it must then be the case that our brains are following some kinds of patterns or using some kinds of algorithms to arrive at judgments about the relative weights of our moral duties. But we are obviously a long way from understanding how we make these kinds of complex moral judgments. This is why moral philosophers such as Ross, and I, fall back on using the intuitions of expert moral judges as the basis for deciding among conflicting moral responsibilities. Moral intuitions may be slippery and uncertain, but they are still better than simplistic formulas as a guide to moral judgment.

However, even simplistic moral formulas have some value in that they can be used to help us disentangle the different sorts of normative factors that can affect our intuitive judgments about the weight of different duties and responsibilities. The schema presented here is offered only as a first rough approximation, a heuristic, that tells us that there are at least three basic kinds of normative factors that can affect the moral weight of an obligation: those related to the agent, those related to the type of obligation or interest involved, and those related to the patient's moral stature. This already gives us a lot of complexity, but I doubt it is adequate as it stands, since it leaves out what are called contextual factors, for instance, the knock-on consequences of certain actions for other moral patients, as well as "threshold effects", that is, cases in which the harm done by fulfilling a prima facie obligation, say, to refrain from restricting the free movement of a person, would be so great as to tip the balance in favor of doing so, as for instance when the person involved is the carrier of a serious infectious disease who might spread it to others. Such thresholds can alter our judgments even when duties derived from basic human rights are involved. So I am not claiming a lot for my schema; only that it is a way of starting to get a handle on these very complex matters.
