Thought Experiments and the Method of Ethics



Roll Call Question:

  • Situation A: You have just found a shiny new dime on the sidewalk and pocketed it. Then you see someone who has just accidentally dropped a pile of textbooks on the ground.

  • Situation B: You have not just found a shiny new dime on the sidewalk and pocketed it. You see someone who has just accidentally dropped a pile of textbooks on the ground.

Are you more, less, or equally likely to help the person in Situation A vs. Situation B?

Roll Call Question: Does empathy hurt?

Moral Psychology

by Stephen Stich and John Doris


What it Is
Moral psychology is based on the view that moral theories are grounded in assumptions about human psychology. Specifically, to claim that humans ought to behave in a certain way is to be committed to the empirical claim that they are generally capable of behaving in this way, and to the (perhaps somewhat less empirical) claim that it is reasonable to expect them to do so. So, the two fundamental questions of moral psychology are:


  1. What empirical claims about human psychology do advocates of competing perspectives on ethical theory assert or presuppose?

  2. How empirically well supported are these claims?


Thought Experiments and the Method of Ethics
Much philosophical inquiry proceeds by way of thought experiment. In ethics, the standard means of evaluating an ethical principle is simply to consider a hypothetical case in which the principle is applied. For example, the following principle:


  • It is worse to cause harm than to allow it to occur.

has been subjected to the following sort of thought experiment:




  • Imagine that you come across someone who is badly injured, and the failure to call for help will probably result in her death. You have a cell phone and it costs you nothing to place the call to 911. Is the failure to call any less wrong than killing her would be?

Traditional moral philosophy assumes that it is possible to make progress by attempting to come to some consensus on the answer to this question.


One significant problem with this approach is that logically identical problems can produce very different answers. This is the “framing effect” first discussed by the psychologists Tversky and Kahneman. Here is a framing example from the article:
Imagine that the U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. One group of subjects was asked to choose between the following two programs:

  • If Program A is adopted, 200 people will be saved.

  • If Program B is adopted, there is a 1/3 probability that 600 people will be saved, and a 2/3 probability that no people will be saved.

A second group of subjects was given an identical problem, except that the programs were described as follows:

  • If Program C is adopted, 400 people will die.

  • If Program D is adopted, there is a 1/3 probability that nobody will die and a 2/3 probability that 600 people will die.

These are just different ways of describing exactly the same results, but subjects given the first version of the choice overwhelmingly preferred A, whereas subjects given the second version of the choice overwhelmingly preferred D.
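Since the argument turns on the claim that the two framings describe exactly the same outcomes, here is a minimal sketch (in Python, not part of the article) that computes the expected result of each program using only the probabilities and numbers stated above.

    # Expected outcomes for the "Asian disease" framing problem.
    # Each program is a list of (probability, people_saved) pairs, out of 600 at risk.
    TOTAL_AT_RISK = 600

    programs = {
        "A": [(1.0, 200)],             # 200 saved for certain
        "B": [(1/3, 600), (2/3, 0)],   # 1/3 chance all saved, 2/3 chance none saved
        "C": [(1.0, 600 - 400)],       # 400 die for certain = 200 saved
        "D": [(1/3, 600), (2/3, 0)],   # 1/3 chance nobody dies, 2/3 chance all 600 die
    }

    for name, outcomes in programs.items():
        expected_saved = sum(p * saved for p, saved in outcomes)
        print(f"Program {name}: expected saved = {expected_saved:.0f}, "
              f"expected dead = {TOTAL_AT_RISK - expected_saved:.0f}")

    # Every program has an expected 200 saved and 400 dead; A and C (and B and D)
    # describe literally the same gambles, yet subjects' preferences flip with the wording.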


Since the answers we get to thought experiments can be dramatically influenced simply by the way they are formulated, it becomes extremely important in moral philosophy to understand how people’s minds actually work and, as the authors say, to “subject philosophical thought experiments to the critical methods of experimental social psychology.”
Moral Responsibility and Free Will
The fundamental question with respect to responsibility and freedom is whether moral responsibility is compatible with a scientific picture of human action as causally determined. Incompatibilists hold that human freedom is predicated on the denial of determinism; hence, if humans are to be deemed free in a sense implying moral responsibility for their actions, they must be able to, at least occasionally, exert their will to break free of the causal order. Compatibilists assert that humans can be morally responsible even if determinism is true.
There are plenty of thought experiments designed to discover whether people are essentially compatibilist or essentially incompatibilist. For example:


  • Ramon is a young man who grew up in a loving home and an affluent family and has no known mental illnesses. One day he found a wallet with identification and several hundred dollars in it. He took the money and threw the wallet in a dumpster.




  • Raymond is a young man who grew up in a poor and abusive home; both of his parents were drug addicts. He is schizophrenic. One day he found a wallet with identification and several hundred dollars in it. He took the money and threw the wallet in a dumpster.

Are Ramon and Raymond both (equally) morally responsible for their actions? Why or why not? It seems that those who hold both men equally morally responsible are appealing to compatibilist intuitions, whereas those who think Raymond is less responsible than Ramon are holding incompatibilist intuitions.


A fair amount of research has indicated that people tend to answer questions like these in a way that suggests that they are incompatibilists. However, it is easy to produce conflicting results by slightly changing the thought experiments. Again, from the article:

In the Woolfolk studies, subjects read a story about two married couples vacationing together. According to the story, one of the vacationers has discovered that his wife is having an affair with his opposite number in the foursome; on the flight home, the vacationers' plane is hijacked, and armed hijackers, in order to intimidate the other passengers, force the cuckold to shoot the man who has been having an affair with his wife. One variation of the story continues as follows:



  • Bill was horrified. At that moment Bill was certain about his feelings. He did not want to kill Frank, even though Frank was his wife's lover. But although he was appalled by the situation and beside himself with distress, he reluctantly placed the pistol at Frank's temple and proceeded to blow his friend's brains out.

A different variation goes:




  • Despite the desperate circumstances, Bill understood the situation. He had been presented with the opportunity to kill his wife's lover and get away with it. And at that moment Bill was certain about his feelings. He wanted to kill Frank. Feeling no reluctance, he placed the pistol at Frank's temple and proceeded to blow his friend's brains out.

People reading these stories tended to judge Bill more responsible in the latter case, even though he is equally causally compelled. Indeed, in some ways he is even more causally compelled to do so, since he is a causal victim of his own anger and jealousy. What would explain the apparently compatibilist intuitions in this case?

One proposed answer is that we tend to hold people more responsible if they “identify” with their behavior; i.e., if it’s what they really want to do, regardless of whether they are being forced to do it.
The important point here is that our intuitions about moral responsibility can be messed with by framing the problems differently, even though they are equivalent with respect to the relevant concepts. Hence, if thought experiments are going to be any kind of aid in philosophy, we need to be aware of the psychological conditions that tend to influence these judgments.
Virtue Ethics
The relevance of moral psychology to virtue ethics is easily demonstrated. Virtues are aspects of character believed to influence our behavior. People who have the virtue of kindness are expected to consistently manifest kind behavior. The point of instilling virtues is to cultivate that behavior.
Social psychologists have been studying virtuous behavior for years. For example:


  • Isen and Levin discovered that subjects who had just found a dime were 22 times more likely to help a woman who had dropped some papers than subjects who did not find a dime (88% v. 4%).

  • Darley and Batson report that passersby not in a hurry were 6 times more likely to help an unfortunate who appeared to be in significant distress than were passersby in a hurry (63% v. 10%).

  • Mathews and Canon found subjects were 5 times more likely to help an apparently injured man who had dropped some books when ambient noise was at normal levels than when a power lawnmower was running nearby (80% v. 15%).

  • Haney et al. describe how college students role-playing as “guards” in a simulated prison subjected student “prisoners” to intense verbal and emotional abuse.

  • Milgram found that subjects would repeatedly “punish” a screaming “victim” with realistic (but simulated) electric shocks at the polite request of an experimenter.

These findings tend to show that moral behavior is highly “situational,” which provides at least some basis for skepticism about the causal efficacy of virtues. Some philosophers have responded that the virtues themselves are not expected to produce consistent behavior on their own, but only in concert with a properly functioning capacity for practical rationality.


The problem is that many other studies have shown that our practical rationality appears to be highly situational as well. Simple reasoning tasks are easily undermined by presenting them in slightly unusual ways, as in the following examples (worked solutions are sketched after them):

If you are running in a race and you overtake the person who is in second place, what place are you now in?

If a bottle and a cork together cost $1.10, and the bottle costs a dollar more than the cork, how much is the bottle and how much is the cork?
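For reference, here is a quick worked sketch (not from the article) of the two puzzles; the snap answers people tend to give (“first place,” and “ten cents for the cork”) are wrong.

    # Puzzle 1: overtaking the runner in place k means you take over place k
    # (the other runner drops to k + 1), so overtaking second place leaves you
    # in second place, not first.
    place_overtaken = 2
    my_new_place = place_overtaken
    print("You are now in place", my_new_place)             # -> 2

    # Puzzle 2: bottle + cork = 1.10 and bottle = cork + 1.00.
    # Substituting: (cork + 1.00) + cork = 1.10  =>  2 * cork = 0.10  =>  cork = 0.05.
    cork = (1.10 - 1.00) / 2
    bottle = cork + 1.00
    print(f"cork = ${cork:.2f}, bottle = ${bottle:.2f}")    # $0.05 and $1.05, not $0.10 and $1.00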

Virtue ethics, at any rate, needs to take this kind of empirical data seriously. One possible response is to simply assert that virtue is actually rare, and that the only people who really possess it are those whose virtuous behavior survives situational changes. (Similarly, perhaps with the rational virtues.)


Another possibility is to acknowledge that virtue is, in some sense, socially sustained. This, of course, runs counter to the traditional idea that virtues by their very nature help us to resist social pressures. (Though there is nothing incoherent about the suggestion that both are true; i.e., that virtues are both socially sustained and help us to resist certain social pressures.)
Altruism vs. Egoism
The longstanding debate between altruists and egoists concerns the question of whether and under what conditions people behave altruistically, i.e., for the benefit of others in the absence of a corresponding gain for oneself.
In Rosenberg’s article we discussed altruism in evolutionary contexts, and it’s important to see that there are at least two distinct ways of using this term.
Biological Altruism

  • In biological terms, an organism acts altruistically if and only if its behavior reduces its own fitness while increasing the fitness of one or more other organisms. (Recall that an organism increases its fitness by ensuring the survival and reproduction of its genes, not just of itself.)

Psychological Altruism



  • In psychological terms, an organism acts altruistically if it is motivated by an ultimate desire for the well-being of another person.

Since biological altruism is defined in terms of consequences and psychological altruism is defined in terms of motivation, there is no logical connection between these two concepts.




  • Specifically, an individual might be a psychological altruist with respect to kin, though not a biological altruist, since helping kin is egoistic from a biological point of view.

  • Also, an individual could be a biological altruist, engaging in actions that reduce his fitness relative to those whom his actions help, though not a psychological altruist, since he is not ultimately motivated by the desire to benefit others. (Examples?)

The conventional view in biology, which Rosenberg’s paper captured, is that there is only one known plausible route to biological altruism: a pattern of helping behavior first emerges among kin (which is not altruism in the biological sense, though it may be in the psychological sense) and then extends into a form of “reciprocal altruism” between non-kin, which is modeled by the tit-for-tat strategy.


Tit for tat is true biological altruism, because it results in an organism sacrificing its own interests for the benefit of another who is not kin. But the organism will not continue to do this if the behavior is not reciprocated. (A rough sketch of the strategy appears below.)
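As a rough illustration (not from the article) of how tit-for-tat models reciprocal altruism, the sketch below plays an iterated helping game: a tit-for-tat organism pays a fitness cost to help on the first encounter and thereafter simply copies its partner's last move, so it keeps helping reciprocators and stops helping defectors. The cost and benefit numbers are illustrative assumptions, not values from the article.

    # Minimal tit-for-tat sketch: helping costs the helper 1 unit of fitness and
    # gives the recipient 3 units; the payoffs are illustrative only.
    COST, BENEFIT = 1, 3

    def tit_for_tat(my_history, partner_history):
        # Help on the first encounter, then mirror the partner's previous move.
        return True if not partner_history else partner_history[-1]

    def always_defect(my_history, partner_history):
        return False

    def play(strategy_a, strategy_b, rounds=10):
        """Return the cumulative fitness change for each player over repeated encounters."""
        history_a, history_b = [], []
        fitness_a = fitness_b = 0
        for _ in range(rounds):
            a_helps = strategy_a(history_a, history_b)
            b_helps = strategy_b(history_b, history_a)
            if a_helps:
                fitness_a -= COST
                fitness_b += BENEFIT
            if b_helps:
                fitness_b -= COST
                fitness_a += BENEFIT
            history_a.append(a_helps)
            history_b.append(b_helps)
        return fitness_a, fitness_b

    print(play(tit_for_tat, tit_for_tat))    # mutual helping: both end up ahead (20, 20)
    print(play(tit_for_tat, always_defect))  # tit-for-tat loses only the first round (-1, 3)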
Psychology
The empathy-altruism hypothesis
Psychological studies of egoism are attempts to determine whether people are motivated by an ultimate desire to benefit others. It is generally assumed that helping behavior can arise from an emotional response of empathy, and experimental techniques have been devised for producing and suppressing empathy in subjects.
Generally, fairly compelling experimental evidence has been collected in support of the hypothesis that those who feel empathy for another are more likely to engage in helping behavior. However, this, by itself, is only weak evidence that people act out of a desire to benefit others.
The Social Punishment Hypothesis
According to the social punishment hypothesis, helping behavior is best explained egoistically as the desire to avoid social punishment or censure. Hence, an alternative way of explaining the correlation between empathy and helping behavior is the hypothesis that subjects will fear punishment for the failure to help under conditions that generate empathy for the unfortunate. (In other words, if you feel empathy, this is an indication that others will feel empathy, which means that we are more likely to be punished by others for the failure to help when we ourselves feel empathy for the victim.)
To test a hypothesis like this, you need to create experimental conditions under which individuals are encouraged to develop empathy for someone and are then offered an opportunity to help under one of two conditions: (1) the subject is informed that others will know whether they decided to help; (2) the subject is informed that others will not know whether they decided to help.
The socially administered empathy-specific punishment hypothesis, as Batson calls it, predicts that helping behavior will occur at a greater rate under conditions of high empathy and high potential for negative social evaluation than under conditions of high empathy and low potential for negative social evaluation.
Alternatively, the straight empathy-altruism hypothesis predicts the same rate of helping behavior for both. In tabular form:
Table 1: Predictions About the Amount of Helping on the Socially Administered Empathy-Specific Punishment Hypothesis

                                                    Empathy
    Potential for Negative Social Evaluation    Low        High
    High                                         Low        High
    Low                                          Low        Low

Table 2: Predictions About the Amount of Helping on the Empathy-Altruism Hypothesis

                                                    Empathy
    Potential for Negative Social Evaluation    Low        High
    High                                         Low        High
    Low                                          Low        High
Interestingly, when these experiments have been done, the punishment hypothesis has not been borne out. Actual behavior aligns more closely with the empathy-altruism hypothesis.


Aversive-Arousal Reduction Hypothesis
Perhaps people experience empathy as a kind of pain or discomfort, and helping others is mainly a way of gaining relief. If this were the case, then we would expect people to seek a more effective means of relieving this pain when one is available, namely escape from the empathy-inducing social situation.
Experiments to test this idea compare rates of helping behavior when there is no escape to rates of helping behavior when escape is facilitated. (Many different versions of this experiment have been run. The basic design of one is as follows: subjects are given the task of monitoring someone who is attempting to perform a task under adverse conditions, namely electric shocks. The subject witnesses the performer registering a great deal of discomfort at being shocked and is then offered the opportunity to take the performer’s place. Half the subjects are given the choice of replacing the performer or leaving; the other half are given the choice of replacing the performer or continuing to observe.)
The aversive-arousal hypothesis predicts that those given an opportunity to help will do so less frequently when there is an escape route. The standard empathy-altruism hypothesis predicts no difference between these two conditions.
Interestingly, the results confirm the empathy-altruism hypothesis: no statistically significant difference in helping behavior is observed between situations where escape is provided and situations where it is not. (A sketch of the kind of statistical test involved follows.)
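To make “no statistically significant difference” concrete, here is a minimal sketch of the kind of test that would be run on such data. The helping counts below are entirely hypothetical, and the availability of scipy is assumed.

    # Hypothetical counts of high-empathy subjects who helped vs. did not help
    # under easy-escape and difficult-escape conditions (numbers made up for illustration).
    from scipy.stats import chi2_contingency

    counts = [
        [18, 12],   # easy escape:      18 helped, 12 did not help
        [20, 10],   # difficult escape: 20 helped, 10 did not help
    ]

    chi2, p_value, dof, expected = chi2_contingency(counts)
    print(f"chi-square = {chi2:.2f}, p = {p_value:.2f}")
    # A large p-value (no significant difference between conditions) is what the
    # empathy-altruism hypothesis predicts; the aversive-arousal hypothesis predicts
    # noticeably less helping when escape is easy.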
As Stich and Doris point out, however, there is a crucial untested assumption behind the experimental design: that subjects will pursue escape as a means of reducing the discomfort of empathy. Fleeing a situation in which one is offered a clear opportunity to engage in helping behavior is often followed by another sort of discomfort, viz., guilt. Hence, the experiment seems to depend on the assumption that subjects will predict that the pain of guilt is less than both the pain of empathy and the pain of helping.
The authors are dubious here, but it’s not clear their skepticism is warranted. There is ample evidence that people often engage in behaviors that very reliably produce guilt far exceeding the pleasure gained thereby (e.g., cheating on one’s spouse). A properly designed experiment on a large enough group of subjects should produce some observable difference here.
Regardless, the point here is that egoism and altruism are both empirical theories which have testable consequences. Currently, and surprisingly from an egoist perspective, the evidence favors the theory that altruistic behavior stems directly from an empathic response to the suffering of others.
If these results hold up over time, then we should expect the capacity to be modeled neurologically, and we should expect reciprocal altruism or some other evolutionary model to provide a plausible account of its evolutionary history.
Summary
Many people who believe that altruism does not exist fancy themselves as having a scientific perspective, but in fact current science supports the claim that human brains have evolved to produce psychologically altruistic behavior.
Many people think that, since any behavior that becomes habitual in a species must be rewarded, this alone is sufficient to undermine the claim that the behavior is truly altruistic. But this view is actually highly unscientific for two reasons:


  • It is not quantitative. The existence of a reward does not undermine the claim that the behavior is altruistic; only the existence of a reward in excess of the benefit given to others would do this.

  • The empathic response is itself not a pleasure response. The minimal psychological reward of altruistic behavior is to relieve the feeling of empathy.

    • Though, for more on the neurology of altruism see this recent article from the Economist: The Joy of Giving



Moral Disagreement
Philosophical novices sometimes think that the mere existence of moral disagreement is evidence against moral realism, i.e., the claim that moral claims have a truth value. But disagreement is, by definition, disagreement about the truth value of a statement, so the bare fact of disagreement cannot by itself show that there is no truth of the matter.
However, the point can be put differently. People sometimes take themselves to be disagreeing, and they may engage in disagreement behavior, without any convergence at all in their views. There are many explanations of this failure to converge and one of them is that the disagreement is illusory; the assumption that the statement in question has a truth value is itself false.
Moral disagreement specifically concerns whether a particular moral value has greater or lesser moral weight than another. In order for moral disagreement to be real, there must be some shared set of assumptions about moral values.
For example, if you say that knowledge is more important than personal happiness, and I assert the opposite, then if this is a real disagreement we must agree on something more basic. For example, we might agree that what matters most is maximizing happiness in the world, and you might try to show me that pursuing knowledge rather than personal happiness is more likely to have that consequence.
The important point here is that whether or not two individuals or cultures share values at some basic level is itself an empirical question.
However, this is an extraordinarily difficult thing to study scientifically. The authors give the following example:
[Hopi children] sometimes catch birds and make “pets” of them. They may be tied to a string, to be taken out and “played” with. This play is rough, and birds seldom survive long. [According to one informant:] “Sometimes they get tired and die. Nobody objects to this.” (Brandt 1954: 213)
This is the sort of cultural observation given as evidence of a disagreement in values. However, varying empirical beliefs could just as easily explain the data. For example, it may be that the Hopi do not believe that the animals actually suffer. Or it may be that they think that animals are ultimately rewarded for this practice in an afterlife. Apparently, however, Brandt was able to find no evidence of relevantly different empirical assumptions here.
It’s interesting, actually, that someone would think this is evidence of a varying value structure with respect to animal cruelty. Presumably the moral disagreement concerns beliefs about the relative moral importance of animal suffering to other human goals. But it’s not at all clear that there is a disagreement. We permit scientists and meat producers to treat animals in astounding ways for the benefit of humanity. It seems likely that using animals as playthings would have a similar justification (or even just indicate a similar lack of reflectiveness).
In this essay the authors review evidence suggesting that people who live in the southern United States have a fundamentally different value structure from people who live in other parts of the U.S. Specifically, it is suggested that more of an “honor culture” exists in the South, perhaps as a result of its history of herding, which requires one to respond forcefully to any intimation that one is weak and easily separated from one’s property.
Here is an example of the kind of research that has been done to determine whether this is true.
In the laboratory study (Nisbett and Cohen 1996: 45-8) subjects, white males from both northern and southern states attending the University of Michigan, were told that saliva samples would be collected to measure blood sugar as they performed various tasks. After an initial sample was collected, the unsuspecting subject walked down a narrow corridor where an experimental confederate was pretending to work on some filing. The confederate bumped the subject and, feigning annoyance, called him an “asshole.” A few minutes after the incident, saliva samples were collected and analyzed to determine the level of cortisol (a hormone associated with high levels of stress, anxiety, and arousal) and of testosterone (a hormone associated with aggression and dominance behavior). Southern subjects showed dramatic increases in cortisol and testosterone levels, while northerners exhibited much smaller changes.

Report of recent research on group selection:


Why Altruism Paid Off For Our Ancestors


