What Makes a Good Decision? Robust Satisficing as a Normative Standard of Rational Decision Making




And one can’t do a conventional utility analysis without attaching probabilities to various outcomes. Inventing probabilities in the face of serious information gaps, because you have learned that that is the normatively correct way to make decisions, can lead you astray. Info-gap robust satisficing actually provides a rational alternative to “the world is an uncertain place. Just close your eyes and pick.”

Bayesian decision theory attempts to deal with a decision maker’s objective and subjective (personal) uncertainty about contingencies and outcomes. Bayesian tools are suitable when the decision maker feels confident that a probability distribution reliably or realistically represents likelihoods or degrees of belief. We are concerned, however, with situations in which uncertainties are not confidently represented by probabilities. For instance, an individual may have no personal experience with the outcomes of prostate therapy, and yet still have to choose a therapy. In such a situation the individual may reasonably be unable to make probability statements about utilities or disutilities resulting from the outcomes of therapy. The individual may have some anticipations about utilities, but have no idea how wrong those anticipations are, and even less understanding about how likely different subjective feelings will be. A Bayesian analysis is difficult to operationalize in such a situation.


Does Robust Satisficing Avoid Probability Estimates?

It might be argued that robust satisficing does not really offer an alternative to utility maximizing that avoids estimating probabilities in a radically uncertain decision space. After all, if we want the alternative that is the most robust to uncertainty, doesn't that require attaching probabilities to various future states of the world? To say, for example, that Brown is more robust to uncertainty than Swarthmore, both when it comes to field biology and when it comes to estimating the prospective student's future interest in field biology, don't we need to estimate how likely it is that current assessments of both program quality and student interest will be wrong? Or is it enough to say that Brown has three relevant biologists and Swarthmore only has one, so that if one of Brown's biologists leaves, the student will have recourse? Is it enough to say that Brown also has wonderful programs in music and molecular biology, so that if the student's interests change, there will be recourse? The short answer is yes, as we can understand from the meaning of robustness. The robustness of a decision (e.g., choose Brown) is the greatest amount by which Brown and the student can change while the choice remains acceptable. Choosing Brown is more robust than choosing Swarthmore if more professors can leave Brown than Swarthmore, and if the student's interests can change more widely, with the student still satisfied within Brown's biology department but not Swarthmore's. These assessments involve no probability judgments, yet they handle uncertainty in both the student's interests and the schools' characteristics.
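In the notation of info-gap theory (Ben-Haim, 2006), this verbal definition has a compact formal counterpart. The following rendering is a schematic sketch of that formalism, with symbols chosen here for exposition:

\[
\hat{h}(q, r_c) \;=\; \max\Bigl\{\, h \ge 0 \;:\; \min_{u \in U(h)} R(q, u) \ge r_c \,\Bigr\}
\]

Here $q$ is the decision (e.g., choose Brown), $U(h)$ is the set of contingencies $u$ (departing professors, shifting interests) that deviate from current best estimates by no more than the horizon $h$, $R(q, u)$ is the outcome of decision $q$ under contingency $u$, and $r_c$ is the lowest acceptable outcome. The robustness $\hat{h}$ is the greatest horizon of uncertainty at which the outcome is still guaranteed to be good enough. No probability distribution appears anywhere in the expression.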

But there is more to be said about the relation between utility maximizing and robust satisficing. Utility assessment entails the judgment of value: what is useful or valuable to the decision maker as an outcome of the decision. These values are personal or organizational or social values of the “goods” and “bads” that may result from a decision. The confidence (e.g. robustness) with which we anticipate the value of the outcome of our decision is not itself an outcome; it is an assessment made before the outcome. For instance, the financial return that we need from an investment is different from the confidence with which we make the investment; you can deposit cash in the bank, but not confidence.
Of course, sometimes we do include measures of risk (or confidence) in our utility functions. So, a critic might claim that by incorporating robustness in the utility function, and then optimizing this extended utility function, we have reformulated the robust satisficing procedure as a utility maximization procedure, in effect attaching probabilities to the various possible outcomes. Such incorporation of robustness or related quantities (such as variance) into a utility function is common. But in any such case, one could (and should) still apply the info-gap critique and propose a robust satisficing response. The critique is that the augmented utility function is based on best-model estimates (e.g. of the variance), and these best models are probably wrong in ways that we do not know. The response is to satisfice the augmented utility and maximize the robustness against error in the best models.
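To make the two-step procedure concrete, here is a minimal computational sketch of "satisfice the utility, maximize the robustness." Everything in it, including the sampling approximation, the structure of the decision records, and the numbers, is an illustrative assumption rather than a method prescribed by info-gap theory itself:

import numpy as np

def worst_case_utility(decision, h, n_samples=2000, seed=0):
    """Worst utility of `decision` over realizations within horizon h.

    Purely for illustration, the uncertainty set U(h) is modeled as the
    best-estimate parameters perturbed by at most +/- h, and the worst
    case is approximated by sampling that set densely.
    """
    rng = np.random.default_rng(seed)
    base = decision["estimated_params"]                # best-model estimate
    perturbations = rng.uniform(-h, h, size=(n_samples, len(base)))
    realizations = base + perturbations                # members of U(h)
    return min(decision["utility"](u) for u in realizations)

def robustness(decision, r_c, h_grid):
    """Largest h on the grid at which worst-case utility still meets r_c."""
    h_hat = 0.0
    for h in h_grid:
        if worst_case_utility(decision, h) >= r_c:
            h_hat = h
        else:
            break
    return h_hat

def robust_satisfice(decisions, r_c, h_grid=np.linspace(0, 1, 101)):
    """Satisfice the utility at aspiration r_c; maximize robustness."""
    return max(decisions, key=lambda d: robustness(d, r_c, h_grid))

# Two hypothetical decisions with made-up utility functions: A looks better
# under the best estimate but degrades quickly; B is more forgiving of error.
decisions = [
    {"name": "A", "estimated_params": np.array([1.0]),
     "utility": lambda u: 10 - 4 * abs(u[0] - 1.0)},
    {"name": "B", "estimated_params": np.array([1.0]),
     "utility": lambda u: 8 - 1 * abs(u[0] - 1.0)},
]
print(robust_satisfice(decisions, r_c=7)["name"])      # prints "B"

Note that the estimated-best option A is not chosen: B meets the aspiration of 7 over a wider range of model error, which is exactly the trade the robust satisficer accepts.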

Does this cause an infinite regress? No, with one caveat. One has "best models," whatever they might be (e.g., models of statistical variance), and one has unknown info-gaps on those best models. One builds the best utility function available, assessing variance or other risks if desired. One then evaluates the robustness to info-gaps. End of process; no regress. The caveat is this: we must be willing to make judgments about what we know and what we don't know. The philosopher John Locke (1706/1997, I.i.5, p. 57) says it nicely: "If we will disbelieve everything, because we cannot certainly know all things; we shall do muchwhat as wisely as he, who would not use his legs, but sit still and perish, because he had no wings to fly."

What is challenging about the above account is that for many, there is no way to think about uncertainty aside from using probability. So to say that the field biology teacher might leave Swarthmore is just to say that there is some probability of departure, even if we don't, and can't, know what that probability is. But in fact, info-gap models are non-probabilistic (for technical details see Ben-Haim, 2006). They entail no assumptions about or choice of a probability distribution. They do not even entail the presumption that a probability distribution exists. For instance, one might say that the best information indicates a future sea-level rise of 1 cm per decade, and that we don't know how wrong this estimate is. The sea level might rise more, or it might fall. We just don't know how to evaluate errors in the underlying data and models. We are not asserting anything about probabilities (maybe we could, but we aren't). A robust satisficing decision (perhaps about pollution abatement) is one whose outcome is acceptable for the widest range of possible errors in the best estimate. No probability is presumed or employed.
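For instance, a simple info-gap model for the sea-level example is a family of nested intervals around the best estimate, written here as an illustrative sketch:

\[
U(h) = \{\, u : |u - \tilde{u}| \le h \,\}, \qquad h \ge 0,
\]

where $\tilde{u}$ is the best estimate (1 cm per decade) and the horizon of uncertainty $h$ is unknown and unbounded. Each set $U(h)$ contains rises and falls alike, and no probability distribution is defined, or assumed to exist, on any of these sets.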


Robust Satisficing: Normative or Prescriptive?

The fact that frequentist approaches to probability bleed into personal approaches, and that well-justified personal approaches bleed into what we are calling radical uncertainty, raises another issue for discussion—one that has been central to the field of judgment and decision making. There are three kinds of accounts one can offer of decision making: descriptive, normative, and prescriptive. Descriptive accounts are strictly empirical: they answer the question "how do people decide?" Normative accounts, in contrast, provide standards. They answer the question "how should people decide?" In between are prescriptive accounts. They compare the processes by which people do decide to the normative standards, and ask whether, given human limitations of time, information, and information-processing capacity, there are procedures people can follow that, while not up to the normative standard, do a better job than what people currently do. Are there things people can do, in other words, to diminish some of the unfortunate consequences of the heuristics and biases that decision-making researchers have been documenting for years? (See Baron, 2004, 2008, and Over, 2004, for discussions of the distinction between normative and prescriptive theories.) Much of Gigerenzer's work on "fast and frugal heuristics" (e.g., 2004, 2007) is intended to spell out what some prescriptive decision-making procedures might be.

Robust satisficing is certainly not a description of what decision makers typically do—at least not yet. But is it normative or prescriptive? We believe it is normative. When Simon (1955, 1956, 1957) first introduced the term “satisficing,” he was making a prescriptive argument. The alternative to satisficing—utility maximizing—was not feasible, given the limits of human cognition and the complexity of the environment. An “ideal” human, with unlimited capacity, should maximize, but for an actual human, it would usually be a foolhardy undertaking. It is important to emphasize here that whereas Simon’s formulations were focused on the processing limitations of organisms, our discussion is focused on epistemic uncertainties inherent in the environment in which decisions get made. No amount of information-processing capacity will overcome a decision space in which probabilities—whether of outcomes, or of people’s subjective responses to outcomes—cannot be specified.

If what we are calling radical uncertainty is not an epistemic problem but a psychological one, then robust satisficing becomes a prescriptive alternative to utility maximizing. Satisficing is the thing to do if collecting and analyzing all the data isn’t worth the time and trouble, especially if getting a “good-enough” outcome is critical. But what if the problem is epistemic? What if no amount of time and trouble can enable a high school senior to pick the best college? Under these conditions, it is our view that maximizing robustness to uncertainty of a good enough outcome is the appropriate norm. Maximizing expected utility is not, not least because one can’t really compute expected utilities.

Suppose you face a decision about how to invest your retirement contributions. You can try to answer one of these questions:

1. Which investment strategy maximizes expected value?

2. What are the risk/reward ratios of different strategies?

3. What is the trade-off between risk and value?

4. I want $1 million when I retire. What investment strategy will get me that million under the widest range of conditions?

Robust satisficing is what provides the answer to Question 4. And that might be the right question to be answering, even when you know more about the “urn” than that it has 100 balls. This is not to suggest that robust satisficing makes the investment decision simple. By no means. Nor is it to suggest that it guarantees success. But an investment strategy that aims to get you a million dollars under the widest set of circumstances is likely to be very different from one that aims to maximize the current estimate of future return on investment. Managers of major financial institutions were often accused of bad risk management in the events leading to the financial collapse of the last few years. No doubt, their risk management was bad. But this may have been less the result of underestimating the likelihood of very low probability events, as Taleb (2007) has argued, and more the result of pretending that certain consequential events could even have probabilities meaningfully attached to them. Furthermore, many sub-prime collateralized debt obligations were sold as high quality assets by assuming that defaults were uncorrelated, when in fact the correlations were simply unknown (Ben-Haim, 2010). A business operating with an eye toward robust satisficing asks not, “How can we maximize return on investment in the coming year?” It asks, instead, “What kind of return do we want in the coming year, say, in order to compare favorably with the competition? And what strategy will get us that return under the widest array of circumstances?”
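A small sketch makes the contrast concrete. Every number below, including the contribution level, the horizon, the estimated returns, and each strategy's exposure to estimation error, is an invented assumption for illustration, not financial advice or a calibrated model:

def final_wealth(contribution, annual_return, years):
    """Future value of a fixed annual contribution at a constant return."""
    wealth = 0.0
    for _ in range(years):
        wealth = (wealth + contribution) * (1 + annual_return)
    return wealth

def robustness(strategy, target, contribution=15_000, years=30, dh=0.01):
    """Largest uncertainty horizon h at which the target is still reached.

    At horizon h the realized return may fall as low as
    est_return - h * scale, where `scale` is a stand-in for how badly the
    strategy's return estimate might err (an illustrative assumption).
    """
    h = 0.0
    while final_wealth(contribution,
                       strategy["est_return"] - h * strategy["scale"],
                       years) >= target:
        h += dh
    return h - dh

strategies = [
    {"name": "aggressive", "est_return": 0.09,  "scale": 0.06},
    {"name": "balanced",   "est_return": 0.065, "scale": 0.02},
]
for s in strategies:
    print(s["name"], round(robustness(s, target=1_000_000), 2))
# aggressive ~0.70, balanced ~0.87: the balanced strategy has the lower
# estimated return but reaches the $1M target over the wider range of
# error in that estimate.

The point of the sketch is the one made in the text: the answer to Question 4 can differ from the answer to Question 1, because the strategy with the best estimated return is not necessarily the one that hits the target under the widest range of circumstances.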

The same obviously applies to choosing a college. If you want an acceptably good college experience, you are asking Question 4. And if the uncertainties you face are not meaningfully quantifiable, Question 4 is the question you should be asking, as a normative matter.

Robust satisficing may even be the right normative strategy in at least some situations in which probabilities can be specified. Consider what von Winterfeldt and Edwards (1986) called the “principle of the flat maximum.” The principle asserts that in many situations involving uncertainty (and college choice is certainly such a situation), the likely outcomes of many choices are effectively equivalent, or perhaps more accurately, the degree of uncertainty surrounding the decision makes it impossible to know which excellent school will be better than which other excellent school. Said another way, there are many “right” choices. Uncertainty of outcomes makes the hair-splitting to distinguish among excellent schools a waste of time and effort. There is more uncertainty about the quality of the student/school match than there is variation among schools—at least within the set of excellent, selective schools (this qualification is important; it is the principle of the flat maximum, after all). So once a set of “good enough schools” has been identified, it probably doesn’t matter very much which one is chosen; or if it does matter, there is no way to know in advance (because of the inherent uncertainty) what the right choice is. On the other hand, schools that are all at the “flat maximum,” and thus essentially indistinguishable, may be substantially different in their robustness to uncertainty—in how good they will be if things go wrong. Schwartz (2005) has made this argument, and suggested that it applies as well to schools deciding which excellent students to admit as it does to students deciding which excellent school to attend.

Because there is room for disagreement about whether a given domain is properly characterized as radically uncertain or not, there will also be disagreement about whether robust satisficing is a normative or a prescriptive alternative to utility maximization (too much time and trouble to find out makes it prescriptive; not possible to find out makes it normative). The norm of expected utility maximization is so entrenched that it might seem to behoove us to collect more and more information in an effort to eliminate radical uncertainty. But we should be wary. As Gigerenzer (2004; 2007; Todd & Gigerenzer, 2003) points out, one can account for increasing amounts of the variance in a data set by adding variables to a regression model. A point may be reached at which, with many variables in the model, one captures almost all the variance. The regression model now provides an excellent description of what came before. However, it is quite possible that this model will be less good as a predictor of future events than a model with fewer variables, because at least some of the variables that have been added to the model are essentially capturing randomness. As a general matter, Gigerenzer’s argument is that the best descriptive model will often not be the best predictive model. Not all efforts to reduce uncertainty will make for better predictions.
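A small simulation in the spirit of Gigerenzer's point (the data-generating process here is invented: only two of thirty candidate variables actually matter):

import numpy as np

rng = np.random.default_rng(42)
n_train, n_test, n_vars = 40, 1000, 30

X_train = rng.normal(size=(n_train, n_vars))
X_test = rng.normal(size=(n_test, n_vars))
true_coefs = np.zeros(n_vars)
true_coefs[:2] = [3.0, -2.0]                 # only 2 variables matter
y_train = X_train @ true_coefs + rng.normal(scale=2.0, size=n_train)
y_test = X_test @ true_coefs + rng.normal(scale=2.0, size=n_test)

for k in (2, 10, 30):                        # regressions with k predictors
    beta, *_ = np.linalg.lstsq(X_train[:, :k], y_train, rcond=None)
    fit = np.mean((y_train - X_train[:, :k] @ beta) ** 2)
    pred = np.mean((y_test - X_test[:, :k] @ beta) ** 2)
    print(f"{k:2d} variables: train MSE {fit:6.2f}, test MSE {pred:6.2f}")

# Training error falls monotonically as variables are added, but the
# out-of-sample error is lowest near the true model size and rises as the
# spurious variables absorb randomness: a better description, a worse
# predictor.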
Robust Satisficing and Strategic Decisions

There is a common class of decisions people face in life to which utility maximizing as a norm arguably does not apply: decisions that involve the simultaneous decisions of others—what might be called "strategic" decisions. Strategic decisions, often modeled by formalisms from game theory, involve two or more participants with differing, often competing objectives. There is radical uncertainty in that the right thing for Player 1 to do will depend on what the other players decide to do, and attaching probabilities to the moves of other players is often difficult, and sometimes impossible. Will the manager of the other team change pitchers if I put in a pinch hitter? Will Walmart come to the community if I build a big-box store on the outskirts of town? Will Amazon start a price war if I lower what I charge for best sellers? Will China or Russia become more aggressive internationally if I reduce the U.S.'s nuclear arsenal? Sometimes it is possible to assign educated guesses about probabilities to the various moves open to the other players, especially in situations that have had similar occurrences in the past (e.g., "Walmart almost always comes to town when competition appears"; "the manager almost never lets the upcoming left-handed batter bat against a left-handed pitcher"). In cases like these, probability estimates (based on past frequencies) may be helpful. But in many such strategic interactions, there is no past history that is obviously relevant, so that probability estimates are as likely to be inventions as they are to be informed assessments. What, then, is one to do in such situations? And does robust satisficing apply?

Various suggestions have been proposed for making decisions in competitive strategic games. A common one is what is called the "minimax" strategy: choose the option with which you do as well as possible if the worst happens. Though you can't specify how likely it is that the worst will happen, adopting this strategy is a kind of insurance policy against total disaster. Minimax is a kind of cousin to robust satisficing, but it is not the same. First, at least sometimes, you can't even specify what the worst possible outcome might be. In such situations, a minimax strategy is unhelpful. Second, and more important, robust satisficing is a way to manage uncertainty, not a way to manage bad outcomes. In choosing Brown over Swarthmore, you are not insuring a tolerable outcome if the worst happens. You are acting to produce a good-enough outcome if any of a large number of things happen. There are certainly situations in which minimax strategies make sense. But there are also strategic situations in which robust satisficing makes sense (see Ben-Haim & Hipel, 2002, for a discussion of the Cuban missile crisis; and Davidovitch & Ben-Haim, in press, for a discussion of the strategic decisions of voters).
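The contrast can be made concrete with a toy payoff table. All payoffs, option names, and the ordering of the opponent's moves are invented for illustration:

# Minimax (strictly, maximin) versus robust satisficing on a toy game.
# Rows are our options; columns are the other player's possible moves,
# ordered by how far they depart from our best estimate of that player
# (column 0 is the best-estimate move).

payoffs = {
    "option_A": [9, 8, 7, 2],   # excellent near the estimate, awful far away
    "option_B": [6, 6, 4, 4],   # mediocre everywhere, but never terrible
}

def maximin_choice(payoffs):
    """Insure against the worst case: maximize the minimum payoff."""
    return max(payoffs, key=lambda opt: min(payoffs[opt]))

def robust_satisficing_choice(payoffs, aspiration):
    """Meet the aspiration over the widest range of deviation from the
    best-estimate move (i.e., over the longest prefix of columns)."""
    def robustness(opt):
        h = 0
        for p in payoffs[opt]:
            if p < aspiration:
                break
            h += 1
        return h
    return max(payoffs, key=robustness)

print(maximin_choice(payoffs))                # -> option_B
print(robust_satisficing_choice(payoffs, 5))  # -> option_A

The two rules disagree: maximin buys insurance against the single worst column, while robust satisficing asks how far the opponent can stray from our best estimate before the outcome stops being good enough.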
Epistemic Satisficing and Psychological Satisficing

The foregoing has been an argument that robust satisficing is the normatively appropriate goal when people are operating within the epistemic limits of a radically uncertain world. Schwartz (2004; Schwartz et al., 2002; Iyengar, Wells & Schwartz, 2006) has argued that satisficing also has psychological benefits, even in decision spaces that might permit maximizing. Satisficers may obtain objectively worse outcomes than maximizers, but satisficers tend to be more satisfied with their decisions, and happier in general. Psychological satisficing is encouraged by mechanisms such as regret, disappointment, missed opportunities, social comparison, and raised expectations, all of which are more pronounced in maximizers than in satisficers, and all of which contribute to reduced satisfaction with decisions (Iyengar, Wells, & Schwartz, 2006; Schwartz, 2004).



Psychological and epistemic satisficing are different concepts, the latter applying to humans as well as to organisms very different from us (e.g., armadillos, sunfish, and fruit flies; Carmel & Ben-Haim, 2005) for whom no "psychology" in the human sense applies. One might speculate that the propensity of Homo sapiens to psychologically satisfice is a behavioral trait with evolutionary selective advantage. Psychological satisficing might be a mechanism by which the individual protects against failure, analogous to the epistemic satisficing that animals seem to display in seeking essential sustenance. One might view the psychological "well-being" function in the same way as the myriad other "objective functions" that are robust-satisficed in epistemic satisficing. Indeed, the very well documented phenomenon of loss aversion in human decision making (e.g., Kahneman & Tversky, 1984) has never really had a compelling functional explanation. What makes loss aversion better than symmetric assessments of gains and losses? Here is a possible answer: loss aversion pushes people in the direction of robust satisficing, which is their best chance to end up with a satisfactory outcome in a very uncertain world.

It is also worth pointing out that though there are many alternatives to multi-attribute utility as decision-making strategies (see Baron, 2008), what distinguishes them from one another is the amount of information processing and other cognitive work the decision maker has to do. The implicit aim in virtually all cases is utility maximization, and the various simplified strategies are all compromises of that implicit aim so that people can make some decision and also continue to live their lives. Even satisficing, as initially formulated by Simon (1955, 1956, 1957) and as discussed subsequently, is viewed as a compromise with utility maximization: give up the best, and settle for less, because it's the best you can do.

In contrast, there is growing recognition that satisficing can itself be beneficial. Info-gap theory provides a quantitative framework for understanding why and when satisficing is advantageous over maximizing, and when it is not. The central concept is that there can be a trade-off between robustness and quality. We distinguish between two attributes of a decision: the estimated quality of its outcome, and the sensitivity of the decision to uncertainty. Under specifiable conditions, enhancing robustness is equivalent to enhancing the probability of satisfaction, which suggests the evolutionary advantage of satisficing, as hypothesized by Todd & Gigerenzer (2003, p. 161). Whatever the merit of an evolutionary argument, the fact remains that epistemic satisficing explains the usefulness of psychological satisficing. For an individual who recognizes the costliness of decision making, and who identifies adequate (as opposed to extreme) gains that must be attained, a satisficing approach will achieve those gains for the widest range of contingencies. In addition, there is some empirical evidence that satisficers may frequently make objectively better decisions than maximizers (see Bruine de Bruin, Parker, & Fischhoff, 2007).
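The trade-off between robustness and quality can be stated compactly in the notation sketched earlier (again, a schematic rendering rather than the full formalism): robustness is non-increasing in the aspiration,

\[
r_c' > r_c \;\implies\; \hat{h}(q, r_c') \le \hat{h}(q, r_c),
\]

and in the standard info-gap formulation the robustness of demanding the full best-estimate optimum is zero. Aspiring to less than the estimated best buys immunity to error; aspiring to the estimated best buys none.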
Future Research

The foregoing has attempted to make the argument that as a normative matter, robust satisficing is a better strategy for decision making than utility maximizing under conditions of radical uncertainty, and that this is true whether or not the decision space overwhelms the information-processing capacities of the decision maker. Though we doubt that very many decision makers deliberately and consciously switch from maximizing to robust satisficing when they face conditions of extreme uncertainty, it would be quite interesting to know whether changes in decision strategy in fact occur when, for example, the degree of uncertainty is made salient. If so, it would be interesting to know what cues to uncertainty decision makers are responding to, and what their own understanding of their strategy shift is. It would also be of interest to know whether decision makers who seem to be pursuing a satisficing strategy interpret decision spaces as radically uncertain even when they are not. Finally, it would be of interest to know whether people we have identified as “psychological satisficers” (e.g., Schwartz, et al., 2002) are more sensitive to radical uncertainty than maximizers are.

It would also be worthwhile to explore the psychological consequences of a normative argument like the one offered here. It is possible that with utility maximizing as the norm, decision makers are reluctant to satisfice. Satisficing is just “settling” for good enough, and the decision maker can easily imagine that others are smarter, or harder working, and thus able to maximize. In other words, satisficing reflects a defect in the decision maker, compared to imagined others and accepted norms. In contrast, if the arguments in this paper came to be commonly articulated and accepted, then satisficing would become the “smart thing” to do. It would reflect thoughtfulness and analytical subtlety. People might be much more inclined to adopt a strategy that is normatively correct than one that is merely psychologically beneficial.

Finally, it would be of interest to know whether there are cultural differences in people’s receptiveness to robust satisficing as a normative strategy. Radical uncertainty may be much more salient and tolerable in some cultures than in others. In such cultures, utility maximizing may not be entrenched as a norm, and people may engage in robust satisficing whether or not they know it and can articulate it (see Markus & Schwartz, 2010, for a discussion of profound cultural differences in decision making).
