## Arbitrage, incomplete models, and interactive rationality


The example of an empirical violation of the axioms of expected utility first concocted by Allais (1951) has inspired many of the theories of non-expected utility developed over the last two decades. An analysis of this example from the perspective of arbitrage choice theory will help to illustrate how ACT departs from both EU and non-EU theory, in which preference is a behavioral primitive. In a simpler version of the paradox, introduced by Kahneman and Tversky (1979), a subject is presented with the following two pairs of alternatives.

A: 100% chance of \$3000

B: 80% chance of \$4000

A′: 25% chance of \$3000

B′: 20% chance of \$4000

(In all cases, the remaining probability mass leads to a payoff of \$0.) The typical response pattern is that A is preferred to B but B′ is preferred to A′, in violation of the independence axiom: most persons would rather have a sure gain than a risky gamble with a slightly higher expected value, but they would maximize expected value when faced with two “long shots.” This pattern of behavior does not necessarily violate the assumptions of arbitrage choice theory, because from the perspective of ACT, the decision problem is ill-specified:

• The choices are completely hypothetical: none of the alternatives is actually available.

• They occur in different hypothetical worlds: in the world where you choose between A and B a sure gain is possible, while in the world where you choose between A′ and B′ it is not.

• The relations between the events are ambiguous: it is not clear how the winning event in alternative B′ is related to the winning events in alternatives A′ or B.
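The common-ratio structure of the example can be checked with a few lines of arithmetic (a sketch in Python; the payoffs and probabilities are those listed above, with the primed labels written as A' and B'):

```python
# Expected values of the four alternatives described above.
alternatives = {
    "A":  (1.00, 3000),   # sure gain
    "B":  (0.80, 4000),
    "A'": (0.25, 3000),
    "B'": (0.20, 4000),
}

for name, (prob, payoff) in alternatives.items():
    print(f"EV({name}) = {prob * payoff:.0f}")
# EV(A) = 3000, EV(B) = 3200, EV(A') = 750, EV(B') = 800

# The "long shot" pair scales both winning probabilities by a common
# factor of 1/4, so the independence axiom demands that A be ranked
# over B if and only if A' is ranked over B'.
assert alternatives["A'"][0] == 0.25 * alternatives["A"][0]
assert alternatives["B'"][0] == 0.25 * alternatives["B"][0]
```

The typical response pattern (A over B, B′ over A′) thus reverses rank within a pair of gambles that differ only by a common scaling of the winning probabilities.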

In order to recast this example in the framework of ACT (or otherwise turn the independence-axiom-violator into a bona fide money pump), the choices must be real, they must be forced into the same world, and the relations between events must be made explicit. The following decision tree shows the usual way in which such details are added to the problem description (e.g., Seidenfeld 1988, Machina 1989) in order to materially exploit preference patterns that violate the independence axiom:

• Node 1 (chance): with probability 75% the payoff is \$0; with probability 25% node 2 is reached.

• Node 2 (decision): “safe” yields \$3000; “risky” leads to node 3.

• Node 3 (chance): with probability 80% the payoff is \$4000; with probability 20% it is \$0.
Here, the choice between A and B is the choice between “safe” and “risky” if a decision is to be made only upon reaching node 2, whereas the choice between A′ and B′ is the choice between “safe” and “risky” if an irrevocable commitment must be made at node 1. In this refinement of the problem, the winning event in alternative B′ is a proper subset of the winning event in alternative A′, and the winning event in B′ is also conditionally the same as the winning event in B, given that node 2 is reached. Savage (1954) pointed out that when the relations among the events are made explicit in this fashion, the paradox vanishes for many persons, including himself (and myself): the commitment that should be made at node 1 is determined by imagining the action that would be taken at node 2 if it should be reached, thereby ensuring consistency. Or, alternatively, the agent may “resolutely” choose to honor at node 2 whatever commitment was made at node 1 (McClennen 1990).
But suppose that even when presented with this picture, an agent insists on preferring A to B and B′ to A′. At this point it is necessary to be explicit about which “world” the agent is in and at which point in time she is situated in it. At most one of these choices is real at any given moment, and it is the only one that counts. For, suppose that the agent is at node 1 in possession of alternative A′ in the form of a lottery ticket entitling her to the proceeds of the “safe” branch in the event that node 2 is reached. Then for her to say that she prefers B′ to A′ means that she would be willing to pay some \$ε > 0 to immediately exchange her “safe” ticket for a “risky” ticket, if permitted to do so. But what does it mean, at node 1, for her to also say that she prefers A to B? Evidently this assertion must be interpreted merely as a prediction of what she would do at some point in the future that may or may not be reached, namely, she predicts that upon reaching node 2 she would be willing to pay \$ε to switch from a “risky” to a “safe” ticket, if permitted to do so at that point. If this entire sequence of exchanges and events indeed materializes, then at node 2 she will be back in her original position except poorer by \$2ε. Meanwhile, if she does not reach node 2, both tickets will be worthless, and she will still be poorer by \$ε. Does this pattern of behavior constitute ex post arbitrage? Not necessarily! An agent’s prediction of her future behavior realistically should allow for some uncertainty, which could be quantified in terms of bets in the fashion of the example in the preceding section. But in order for a clear arbitrage opportunity to be created, the agent must not merely predict at node 1 that she will switch back from “risky” to “safe” upon reaching node 2, she must say at node 1 that she is 100% certain she will do so at node 2, which is equivalent to an irrevocable commitment to do so.
(It means she is willing to suffer an arbitrary financial penalty for doing otherwise.) But an irrevocable commitment at node 1 to exchange “risky” for “safe” at node 2 is then equivalent to immediately switching from B′ back to A′. It is doubtful that anyone would pay \$ε to exchange A′ for B′ and, in the next breath, pay another \$ε to undo the exchange, and so the paradox collapses.
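The bookkeeping behind this collapse is easy to verify directly. The following sketch uses a hypothetical ε of \$10; the two events correspond to reaching or not reaching node 2:

```python
eps = 10.0   # hypothetical epsilon, in dollars

# At node 1 the agent pays eps to trade the "safe" ticket (A') for
# the "risky" one (B'), while committing with certainty to pay eps
# again at node 2 to trade back, should node 2 be reached.
loss_if_node2_reached = eps + eps   # back where she started, minus 2*eps
loss_if_node2_missed = eps          # both tickets pay $0 anyway

# She loses money in every state of the world: ex post arbitrage
# against her, unless the node-1 "preference" was only an uncertain
# prediction rather than a 100%-certain commitment.
assert loss_if_node2_reached > 0 and loss_if_node2_missed > 0
```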
The key issues are (a) whether an irrevocable commitment to “safe” or “risky” is or is not required at node 1, and if it is not, then (b) what is the meaning of a “preference” held at node 1 between alternatives that are to be faced at node 2. In the world she really inhabits, either the agent will have the opportunity to make her choice at node 2 after some interval of time (measured by the realization of foreseen and unforeseen events) has passed, or else she must choose immediately. If she doesn’t have to choose immediately, then she might as well wait and see what happens. In that case, the only choice that matters is between A and B, and it will be made, if necessary, at node 2. If she is currently at node 1, anything that she says now concerning her behavior at node 2 is merely a prediction with some attendant uncertainty. (In some situations it is necessary for the agent to predict her own future choices in order to resolve other choices that must be made immediately. But even in such situations, as we saw in the example of the preceding section, it is permissible for an agent to be somewhat uncertain about her future behavior and to change her mind with the passage of time.) Meanwhile, if the agent does have to choose immediately, the only choice that matters is between A′ and B′. If she chooses B′ over A′ believing that those are her only options, but later an opportunity to exchange B for A unexpectedly materializes at node 2—well, that is another day!
In summary:

• Standard choice theory requires agents to hold preferences among objects in hypothetical (and often counterfactual) worlds that may be mutually exclusive, or only ambiguously connected, or frozen at different moments in time. The independence axiom then forces consistency across such preferences.

• Allais’ paradox exploits the fact that preferences may refer to different worlds. (The objective-probability version also exploits the ambiguity concerning the relations of events between worlds.) But most agents construct their preferences by focusing on salient features of the world they imagine they are in, and a world in which a sure gain is possible is perceived differently from a world in which winning is a long shot in any case (Shafer 1986). Hence, their preferences in different worlds may violate the independence axiom.

• In ACT, by comparison, there are no hypothetical or counterfactual choices: there is only one world, and every act of verbal or physical behavior has material implications in that world. Moreover, consistency is forced only on behavior that occurs at the present moment in the one world. When agents are called upon to make predictions about their future behavior, they are allowed some uncertainty, and such uncertainty leaves room for them to change their minds over time without violating the no-ex-post-arbitrage axiom.

### 3.3 On the realism and generality of the modeling assumptions
The modeling assumptions of arbitrage choice theory are simpler and more operational than those of the standard theory, but in some ways they may also appear to be less realistic and less general in their scope. For one thing, money plays a prominent role, whereas rational choice theorizing is often aimed at nonmonetary behavior and noneconomic institutions (e.g., voting). And not only do interactions among agents in our models typically involve money, but they take place in a market where it is possible to bet on practically anything, including one’s own behavior. Outside of stock exchanges, insurance contracts, casinos, office pools, and state lotteries, most individuals do not think of themselves as bettors, and bets that explicitly depend on one’s own choice behavior are uncommon (though not unheard of). The standard theory, by comparison, builds on a foundation of preferences (which everyone has to some degree) and consequences (which can be anything whatever). At first blush, the latter approach seems flexible, broadly applicable, and perhaps even uncontroversial. However, it later must pile on assumptions about imaginary acts, common knowledge, common belief, and equilibrium selection that go far beyond what has been assumed in the examples of this section. If such additional assumptions were indeed realistic, the market transactions in the arbitrage models would not seem at all fanciful: everyone would already know the gambles that were acceptable to everyone else, and more besides.
The alternative theory of choice presented here is idealized in its own way, but nevertheless it is grounded in physical processes of measurement and communication that actually enable us to quantify the choice behavior of human agents with some degree of numerical precision. In the familiar institutions that surround us, money usually changes hands when subjective beliefs and values must be articulated in quantitative terms that are both precise and credible. Moreover, any exchange of money for goods or services whose reliability or satisfactoriness or future value is uncertain is effectively a gamble, and in this sense everyone gambles all the time. The markets in which such “gambles” take place not only serve to allocate resources efficiently between buyers and sellers, but also to disseminate information and define the numerical parameters of common knowledge. Indeed, the common knowledge of a public market is the intuitive ideal against which other, more abstract, definitions of common knowledge are compared. Last but not least, contracts involving monetary payments are quite often used to modify the “rules” of games against nature or games of strategy: agents use contingent contracts to hedge their risks or attach incentives or penalties to the actions of other agents. Monetary transactions and market institutions are therefore important objects of study, even if they are not the only such objects. (Indeed, the line between market and non-market institutions is rather blurry nowadays.) Insofar as money and markets play such a fundamental role in quantifying beliefs and values, defining what is common knowledge, and writing the rules of the games we play, it is reasonable to include them as features of the environment in a quantitative theory of choice, at least as a point of departure.
Even where real money does not change hands or a real market does not exist in the problem under investigation, they may still serve as useful metaphors for other media of communication and exchange.
It might be expected that, by erecting the theory on a foundation of money and highly idealized markets, strong results would be obtained that, alas, would lose precision when extrapolated to imperfect markets or non-economic settings. It is interesting, then, that the results we have obtained are more modest in some ways than the results claimed under the banners of the standard theory, and in other ways orthogonal to them.
### 3.4 Summary
The preceding examples illustrate that the assumptions of arbitrage choice theory lead to conclusions about rational behavior that resemble those of the standard theory in many respects. First, it is sufficient (but not necessary) for every agent to behave individually in a manner that maximizes expected utility with respect to some probability distribution and utility function. (Non-expected-utility preferences are allowed as long as they imply that more wealth is always preferred to less—see Nau 1999 for an example.) Second, the agents jointly must behave as if implementing a competitive equilibrium in a market or a strategic equilibrium in a game. Third, where uncertainty is involved, there must exist a “common prior” probability distribution. And fourth, where agents anticipate the receipt of information, they must expect their beliefs to be updated according to Bayes’ Theorem. However, these results also differ from those of the standard theory in some important respects, namely:

• Probability distributions and utility functions need not be uniquely determined, i.e., belief and preference orderings of agents need not be complete, nor do they need to be separable across mutually exclusive events.

• Equilibria need not be uniquely determined by initial conditions.

• “True” probabilities and utilities, to the extent that they exist, need not be publicly observed or commonly known: they are generally inseparable.

• Utility functions need not be state-independent.

• The uncertain moves of different agents need not be regarded as probabilistically independent.

• Bayes’ Theorem need not describe the actual evolution of beliefs over time.

• Common prior probabilities are risk neutral probabilities (products of probabilities and relative marginal utilities for money) rather than true probabilities.
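To make the last point concrete, here is a small numerical sketch (all values hypothetical) of how risk-neutral probabilities confound beliefs with marginal utilities for money: two agents with different true probabilities can post identical betting rates, and so share a “common prior” only in the risk-neutral sense.

```python
def risk_neutral(p, m):
    """Risk-neutral probabilities: true probabilities weighted by
    marginal utilities for money, renormalized to sum to one."""
    prod = [pi * mi for pi, mi in zip(p, m)]
    total = sum(prod)
    return [x / total for x in prod]

# Agent 1: even odds, but a dollar is worth twice as much in state 1.
# Agent 2: thinks state 1 is twice as likely, values money equally.
agent1 = risk_neutral([0.5, 0.5], [2.0, 1.0])   # -> [2/3, 1/3]
agent2 = risk_neutral([2/3, 1/3], [1.0, 1.0])   # -> [2/3, 1/3]

# Identical betting rates, different true beliefs: an observer of
# market prices cannot separate probability from marginal utility.
assert all(abs(a - b) < 1e-12 for a, b in zip(agent1, agent2))
```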

Among these departures from the standard theory, the reinterpretation of the Common Prior Assumption is the most radical: it is literally at odds with the most popular solution concepts in game theory and information economics. It calls attention to a potentially serious flaw in the foundations of game theory and justifies much of the skepticism that has been directed at the Common Prior Assumption over the years.

The other departures are all in the direction of weakening the standard theory: personal probabilities and utilities are not uniquely determined, beliefs and values are confounded in the eye of the beholder, equilibria are coarsened rather than refined, and the evolution of beliefs is stochastic rather than deterministic. Much of this ground has been traveled before in models of partially ordered preferences, state-dependent utility, correlated strategies, and temporally coherent beliefs. Assumptions of the standard theory are weakened and, not surprisingly, weaker conclusions follow. The arbitrage formulation scores points for parsimony and internal cohesiveness—especially in its tight unification of decision analysis, game theory, and market theory—but otherwise it appears to settle for rather low predictive power.
But there is one other very important respect in which arbitrage choice theory departs from the standard theory, which was alluded to earlier: the arbitrage theory does not rest on assumptions about the rationality of individuals. Its axioms of rational behavior (additivity and no ex post arbitrage) apply to agents as a group: they carry no subscripts referring to specific individuals. It is the group which ultimately behaves rationally or irrationally, and the agents as individuals merely suffer guilt-by-association. The fact that the group is the unit of analysis for which rationality is defined admits the possibility that the group is more than just an uneasy alliance of individuals who are certain of and wholly absorbed in their own interests. We have already seen that the beliefs and values of the group (its betting rates, prices, etc.) are typically more sharply defined than those of any of its members, but that is only the tip of a larger iceberg, as the next section will explore.

### 4.1 Incomplete models and other people’s brains
Standard choice theory assumes that a decision model is complete in every respect: the available alternatives of agents and the possible states of nature are completely enumerated, the consequences for every agent in every event are completely specified in terms of their relevant attributes, and the preferences of the agents with respect to those consequences are (among other things) completely ordered. As a result, it is possible to draw a diagram such as the following for a choice problem faced by a typical agent:
Figure 1:

• Alternative a1: in state 1 (probability p), the consequence has attribute utilities u11 and v11; in state 2 (probability 1-p), u12 and v12.

• Alternative a2: in state 1 (probability p), u21 and v21; in state 2 (probability 1-p), u22 and v22.
In this example, the agent chooses one of two possible alternatives, then one of two possible states of nature obtains, and the agent receives a consequence having two attributes. Because the agent is also assumed to have completely ordered preferences over a much larger set of acts, we can infer the existence of unique numerical probabilities of events (here indicated by p and 1-p) and unique (up to affine transformations) utilities of consequences. Under suitable additional axioms on preferences (Keeney and Raiffa 1976), the utilities of consequences can be further decomposed into functions of the utilities of their attributes. For example, it might be the case here that the utility of a consequence is an additive function of the utilities of its attributes, so that, after suitable scaling and weighting, the utility of consequence 1 on the upper branch is u11 + v11, where u11 is the utility of the level of attribute 1 and v11 is the utility of the level of attribute 2, and so on. Thus, we can compute the agent’s expected utilities for alternatives a1 and a2:

EU(a1) = p(u11 + v11) + (1-p)(u12 + v12)
EU(a2) = p(u21 + v21) + (1-p)(u22 + v22)
The optimal alternative is the one with the higher expected utility, and it is uniquely determined unless a tie occurs, in which case there is a set of optimal alternatives among which the agent is precisely indifferent. This is a satisfactory state of affairs for the agent, who knows exactly what she ought to do and why she ought to do it, and it is also satisfactory for the theorist, who can predict exactly what the agent will do and why she will do it. And if there are many agents, the omniscience of the theorist is merely scaled up: she can predict what everyone will do and why they will do it. A complicated economic or social problem has been reduced to a numerical model with a small number of parameters that easily fits inside one person’s head (perhaps aided by a computer).
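The computation is mechanical once the parameters are pinned down, as this sketch shows (all numerical values are hypothetical):

```python
# Hypothetical parameter values for the Figure 1 model.
p = 0.6                                                    # prob. of state 1
u = {(1, 1): 0.9, (1, 2): 0.2, (2, 1): 0.5, (2, 2): 0.7}   # attribute-1 utils
v = {(1, 1): 0.3, (1, 2): 0.8, (2, 1): 0.6, (2, 2): 0.1}   # attribute-2 utils

def expected_utility(i):
    # EU(ai) = p*(ui1 + vi1) + (1-p)*(ui2 + vi2), additive attributes
    return p * (u[i, 1] + v[i, 1]) + (1 - p) * (u[i, 2] + v[i, 2])

eu = {i: expected_utility(i) for i in (1, 2)}
best = max(eu, key=eu.get)   # the unique optimum, barring ties
```

With these numbers EU(a1) = 1.12 and EU(a2) = 0.98, so a1 is uniquely optimal; the theorist, knowing the parameters, can predict the choice exactly.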
Of course, a “small world” model such as the one shown above is often merely an idealization of a more complicated “grand world” situation. More realistically, the agent’s beliefs and values may be only incompletely ordered, in which case probabilities and utilities are typically represented by intervals rather than point values:
Figure 2:

• Alternative a1: in state 1 (probability [p, P]), the consequence has attribute utilities [u11, U11] and [v11, V11]; in state 2 (probability [1-P, 1-p]), [u12, U12] and [v12, V12].

• Alternative a2: in state 1 (probability [p, P]), [u21, U21] and [v21, V21]; in state 2 (probability [1-P, 1-p]), [u22, U22] and [v22, V22].

Here, p and P denote lower and upper bounds, respectively, on the probability of the event, and uij and Uij denote lower and upper bounds, respectively, on the utility of attribute 1 on the ijth branch, and so on. We can no longer compute an exact expected utility for each alternative, but we can compute lower and upper expected utilities (denoted eu and EU, respectively):

eu(a1) = min {x(y11 + z11) + (1-x)(y12 + z12)}
EU(a1) = max {x(y11 + z11) + (1-x)(y12 + z12)}
where the minimum and maximum are taken over all x, y11, z11, etc., satisfying x ∈ [p, P], y11 ∈ [u11, U11], z11 ∈ [v11, V11], etc. If it turns out that EU(ai) < eu(aj)—i.e., if the intervals of expected utilities are disjoint—then we may still conclude that there is a unique optimal solution. But, in general, it is possible to have multiple “potentially optimal” solutions when there is overlap among the expected utility intervals for different alternatives. The optimal solution is now set-valued, as are the parameters of the model. (Models of this kind have been studied by Smith 1961, Aumann 1962, Giron and Rios 1979, Rios Insua 1990, Walley 1991, Nau 1989 & 1992a, Seidenfeld et al. 1998, among others.) In the worst case, we may not be able to pin the optimal solution down very precisely, but we can often at least narrow the range of alternatives that ought to be considered.
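Because the expected utility is linear in each parameter separately, its extremes over the intervals are attained at interval endpoints, so the bounds can be computed by enumerating corner combinations. A sketch, with hypothetical interval values:

```python
from itertools import product

def eu_bounds(p_iv, u1_iv, v1_iv, u2_iv, v2_iv):
    """Lower and upper expected utility of an alternative whose
    probability and attribute utilities are known only as intervals.
    The objective is multilinear, so its extremes over the box of
    parameter values occur at the interval endpoints (corners)."""
    corners = [x * (y1 + z1) + (1 - x) * (y2 + z2)
               for x, y1, z1, y2, z2
               in product(p_iv, u1_iv, v1_iv, u2_iv, v2_iv)]
    return min(corners), max(corners)

# Hypothetical intervals for alternative a1:
lo, hi = eu_bounds(p_iv=(0.5, 0.7),
                   u1_iv=(0.8, 0.9), v1_iv=(0.2, 0.4),
                   u2_iv=(0.1, 0.3), v2_iv=(0.6, 0.8))
# lo corresponds to eu(a1) and hi to EU(a1); another alternative can
# still be ranked against a1 unambiguously if its interval is disjoint.
```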
The model of Figure 2 (with incomplete preferences) is undoubtedly more realistic than the model of Figure 1 (with complete preferences), but it is somewhat unsatisfying for the agent and perhaps even more unsatisfying for the theorist. The agent merely fails to get a unique recommended solution from the model, whereas the theorist’s entire house of cards begins to wobble: the infinite regress, the common knowledge assumptions, the refinements of equilibria, all become hard to sustain in the face of indeterminacy at the agent level.
But the true situation is even worse: realistically, most models are not only preferentially incomplete, they are also dynamically and/or structurally incomplete:

Figure 3: the same tree of Alternative → State → Consequence → Attribute, but with the branches themselves left unspecified.

The attributes of consequences, states of nature, and available alternatives may not be fully enumerated, and the timing of choices—even the necessity of making choices—may be indeterminate, as bounded-rationality theorists emphasize. This is the situation normally encountered at the first stage of decision analysis: the tree has not yet been drawn and the decision maker has not yet thought of everything she might do, everything that might happen, and all the consequences that might befall her—let alone quantified her beliefs and values. In such a situation, the set of alternatives that are even potentially optimal may be indeterminate. This is not to say that analysis might not be helpful—indeed, the very lack of clarity that a decision maker typically experiences upon facing an unfamiliar problem is the raison d’être for professional decision analysts. Structured techniques of “value-focused thinking” (Keeney 1992) and “strategy generation” (Howard 1988) can be used to articulate values and create strategic alternatives, the views of other stakeholders can be solicited to provide different perspectives and new ideas, prototype models can be tested to identify the most important tradeoffs or uncertainties, and finally the relevant probabilities and utilities can be assessed. Eventually a model like the one in Figure 1 may be constructed, and it is hoped that it will be a “requisite” model of the real-world situation.

If the decision problem arises in an organizational setting, standard administrative procedures may be followed to assign personnel, gather information, construct and evaluate alternatives, build consensus, and generate action. For example, a strategy team may be formed; committee meetings, focus groups, and decision conferences may be held; consultants may be brought in; and dialogues among different levels of management may be conducted. Higher levels of management are often occupied by individuals who scaled the corporate ladder precisely by virtue of their knack for harnessing the abilities of subordinates and providing leadership in ambiguous situations (Isenberg 1984, March and Simon 1993, March 1994). Eventually a decision is made and an individual may assume responsibility, but by then many individuals have contributed to the process, and it is possible that no one has a clear view of the entire problem—or even the same view as anyone else. As on a professional sports team, there are different views from the playing field, the coaching box, and the front office, and all must contribute in order for the team to succeed.

Another aspect of model incompleteness, which was illustrated in Example 5, is that the mere passage of time often lends detail to a decision model because unforeseen things happen, as the Austrian economists and complexity theorists emphasize. Nature or a human competitor makes an unexpected move, objectives wax or wane in perceived importance, opportunities that were not previously envisioned suddenly present themselves, while alternatives that were believed to be available sometimes turn out to be illusory. Indeed, the subjective experience of time is measured by the accumulation of small surprises: if nothing that you did not precisely foresee has happened lately, time has stood still for you. For this reason, obtaining the most “perfect” information that can be envisioned today may not be equivalent to obtaining an option to put off a decision until tomorrow, contrary to textbook decision analysis.
The preceding description of the decision making process suggests that all decisions are to some extent group decisions, and they are acts of creation and discovery, not merely acts of selection. This view is compatible with arbitrage choice theory—and incompatible with the standard theory. The standard theory assumes that every agent begins with a complete mental model of the problem, and such models are often required to be mutually consistent or even objectively correct. Every agent is an island of rationality with no reason to visit the islands of other agents except to get more of what she already knows she wants through trade or contests of strategy. In the arbitrage theory, by comparison, the rational actor is naturally a group. An individual typically perceives only part of the problem: different agents know the relative values of different objects, the relative likelihoods of different events, the ranges of alternatives that are appropriate in different situations—and no one can anticipate everything that might happen tomorrow. Like the proverbial blind man holding one limb of an elephant, an individual has opinions about some things but not necessarily everything, and interactions with others may provide a more complete picture. In the end, “rationality” gets enforced at the point where inconsistencies within or between agents become exploitable, and exploitation usually occurs against groups rather than individuals.

### 4.2 The limits of theory
Of course, to be applied quantitatively, the arbitrage theory of rational choice requires a certain degree of completeness: events must be well-defined and contingent trades must be possible, or at least imaginable, with respect to those events. As such, the theory recognizes its own limits. Where events are not well-defined and where contingent trades are hard even to imagine, we should not expect to be able to predict or prescribe choice behavior in exact, quantitative terms. Even there, the theory is not altogether silent: it suggests that completing the model may be the most important task at hand and that interactive decision processes (or “networks of generative relationships” to use the phrase of Lane et al. 1996) may facilitate that task. But a sobering implication emerges, namely that if the agents typically do not see the whole picture, then neither does the theorist. If the purpose of interactions among agents is to construct, between them, a more complete model of a situation that is too complex for any individual to fully comprehend, then the theorist cannot hope to do that job for them single-handedly. No single mode of analysis or theoretical paradigm can expect to yield the last word in explaining a social or economic phenomenon. This is not an argument for relativism in science, but merely a reflection on the reason why universalist theories and quantitative models have thus far proved less useful in the social sciences than in the natural sciences. Models yield accurate predictions only if their complexity is commensurate with that of the phenomena they are intended to represent, which is easier to achieve with physical systems (where we look into the eyepiece of the microscope) than with social and economic systems (where we look up from the other end). 
If the complexity of the system is very much greater than that of the observer and her model, she cannot expect to understand every detail by herself—but the scientific establishment to which she contributes will gradually assemble a more complete picture through its network of institutions, its competition among paradigms, and its accumulation of technology and literature.
From this perspective, let us now return briefly to some of the questions raised earlier. First, recall Elster’s (1986) assertion that a group such as a family does not have ‘its’ own goals and beliefs. The concepts of goals and beliefs are operational only to the extent that they are reflected in decisions or other observable behavior. The family often ‘decides’ as a unit, and in that case, whose goals and beliefs are on view? The family may be composed of discrete individuals—man, woman, child, and dog—but others contribute as well to its decisions: financial and spiritual advisers, insurance and travel agents, architects, interior decorators, friends and relatives, plus all the forces of commercialism, conformity, and historical contingency. The family’s decision is therefore more than just a compromise or a strategic equilibrium among goals and beliefs that were already present and well-formed in the heads of its nominal members. Rather, it is an act of construction to which many contribute, whose precise outline none may fully anticipate, and whose final causes perhaps none can fully explain. Of course, the family does not possess the same unitary consciousness of its actions that we would attribute to an individual, but that is beside the point. We could ask the family to explain ‘its’ reasons for taking a decision, and we would get an answer as the result of another act of construction. The reasons offered might leave some things unexplained or ambiguous, but the same would be true if the subject were an individual. This view of the family is not entirely incompatible with the operational spirit of methodological individualism as defined by Watkins (1957), but it suggests that the decision-making units that are the focus of the analysis need not always be individuals. It does, however, imply that the companion concept of purposive action is too strong. 
The decision-making units, whether or not they are individuals, generally will not be able to completely and correctly articulate the causes and effects of their own behavior: that is why they must rely on others.
Next, does a firm have a utility function? Insofar as it is a rational actor—which is to say, insofar as it is a decision-making unit whose behavior does not give rise to either intra-firm or inter-firm arbitrage—then probably it does act as if maximizing expected utility, subject to the caveats noted above. But does this observation have practical implications for decision modeling? Probably not—because no individual in the firm is the keeper of the utility function, and the most important strategic decisions of the firm involve creating new alternatives and responding to environmental change, not merely choosing consistently among fixed alternatives in a stable environment. Hence, a decision analyst armed with a mythical corporate utility function cannot, even in principle, do the work of the firm’s executives, managers, and administrative procedures. Analysis and formal models may sharpen the abilities of corporate decision makers, but cannot fully represent their function. There is no mathematical substitute for executive intuition and leadership.
Why do individuals behave badly in many behavioral experiments? The tasks and the surroundings are often unfamiliar and only superficially simple, and the subjects at their computer terminals lack the standards of appropriate behavior, the decision aids, the expert advice, the market signals, and other forms of social support they would have if they met the same types of problems regularly outside the laboratory. By depriving them of access to other people’s brains, we make them appear clumsy, capricious, and less than fully rational. (Of course, many individuals behave badly outside the laboratory—especially when they are on unfamiliar turf or otherwise unsupported by a social network—and this qualifies as “irrational” by our definition if they allow themselves to be easily exploited.)
Why are frequentist methods of statistics still so widely used, despite the fact that they are demonstrably incoherent as a basis for decision making under uncertainty? In the last decade or so, Bayesian methods have at last risen to prominence on the frontiers of statistics and artificial intelligence as new computational tools such as Markov chain Monte Carlo algorithms and Bayesian networks have improved their tractability. But behind the front lines, introductory textbooks and even scientific journal articles still speak of accepting or rejecting hypotheses on the basis of their p-values. The persistence of such illogical methods can be understood by observing that the results of statistical analysis are rarely plugged into well-formed decision models where the effects of incoherence would be immediately felt. Rather, the analysis is embedded in a social process that tends to compensate for its theoretical defects (or else obscure them). The jargon of frequentist statistics serves as a lingua franca for communicating about uncertainty in situations where the decision model is ambiguous: everyone misunderstands the meaning of p-values, but at least they misunderstand them in the same way, and experienced statisticians temper their use with rules of thumb that help to avoid embarrassing errors. Bayesian methods place more of a focus—and more of a burden—on the individual who is doing the modeling, and perhaps for that reason they have not yet fit as well into the social process of social-scientific inquiry. (Of course, the counterargument can also be made that a lot of bad social science has managed to cloak itself in bad statistics.)
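The shared misunderstanding of p-values can be made concrete with a toy simulation (all numbers here, including the 10% prior on real effects, the effect size, and the study count, are illustrative assumptions, not from the text): even when every test is run at the 5% level, the fraction of “significant” results that are actually null can be far above 5% if real effects are rare among the hypotheses tested.

```python
import random
from statistics import NormalDist

random.seed(0)
std_normal = NormalDist()

PRIOR_REAL = 0.10   # assumed share of tested hypotheses with a real effect
EFFECT_Z   = 2.8    # assumed expected z-score when the effect is real
ALPHA      = 0.05   # conventional significance threshold
N_STUDIES  = 100_000

sig_total = 0   # studies declared "significant"
sig_null  = 0   # significant studies where the null was actually true
for _ in range(N_STUDIES):
    real = random.random() < PRIOR_REAL
    z = random.gauss(EFFECT_Z if real else 0.0, 1.0)
    p = 1.0 - std_normal.cdf(z)          # one-sided p-value
    if p < ALPHA:
        sig_total += 1
        sig_null += (not real)

false_discovery_rate = sig_null / sig_total
print(f"share of significant results that are false: {false_discovery_rate:.2f}")
```

Under these assumed numbers, roughly a third of the significant results are false positives, which is precisely the sense in which a p-value below 0.05 is not a 5% chance that the hypothesis is wrong.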
Why do individuals bother to vote, if their expected impact on the outcome of the election is less than the trouble it takes? To frame the voting problem in this fashion is to overlook the entire group decision process of which the casting of votes is only the culmination. The election process not only aggregates but also serves to construct the preferences of the voters and to manufacture candidates that will claim to satisfy those preferences, spurred on by interest groups and news media. What happens on election day itself is often a mere formality, a ratification of an outcome already determined by opinion polls. Candidate x or candidate y is “chosen” by the final tally of votes, but what is more important is the campaign process through which the political landscape has been shaped and the alternatives x and y have been conjured up. The difference in expected utility that a given voter perceives between voting-for-x or voting-for-y or not-voting-at-all on election day cannot be exactly quantified, nor is her voting strategy an equilibrium of a game whose rules could have been written down in advance. Her own vote may be a largely symbolic expression of her identity as a citizen. What matters is that by engaging in a political process that unfolds gradually and somewhat unpredictably over time, the voters, lobbyists, politicians, and reporters as a group cause action to be taken (for good or ill) on social problems whose full dimensions they cannot comprehend by themselves. Of course this does not absolve individuals of responsibility for trying to understand their economic and political environment—because the aggregation of their incomplete mental models will eventually determine the collective solution—but it does suggest the need for both humility and tolerance.

4.3 Implications for modeling
The theory of choice outlined here is broadly consistent with the normative ideals of optimization and equilibrium that are central to standard rational choice theory, notwithstanding some indeterminacies and unobservables. But it also reinforces many of the principal arguments of rational choice critics, and its implications for modeling are distinctly different from those of the standard theory in a number of important respects.
First, decision analysis should embrace the idea that decisions are inherently interactive rather than celebrate the heroically rational individual. The aim of decision analysis should be to use multiple brains to find creative solutions to complex problems, not merely to satisfy the latent preferences of a designated decision maker. Some of the brains may be multiple selves of the decision maker, evoked by exercises in reframing, role-playing, and value-focused thinking. But in most cases the individual’s colleagues, expert advisers, superiors, subordinates, role models, fellow consumers or investors—and decision analyst, if any—will also play a role as participants in the decision, not mere catalysts or passive data sources. Waiting may sometimes help: it may be advantageous to purchase an option to put off a decision until after some of its complexities are resolved by the passage of time, and the value of the option may reside partly in the opportunity to let unforeseen things happen. Of course, decision analysis practitioners already know these things, but they consider them part of their art rather than their science. Indeed, paradoxically, there is no role for a decision analyst in the standard rational choice theory—the decision maker already implicitly knows what she ought to do. The view of choice developed here suggests, to the contrary, that a second brain ought to be helpful, particularly one that has broad experience with similar kinds of decisions or expert knowledge of the problem domain. This may explain why software packages for decision-tree analysis and expected-utility maximization have not proved particularly helpful to individuals acting alone—even decision theorists do not often use them in their private decisions.
Second, a novel approach to the modeling of games of strategy is suggested: let the common knowledge assumptions be translated into gambles or trades that the players are willing to publicly accept. Then impose a no-arbitrage condition on the resulting (observable) market, rather than imposing an equilibrium concept on the players’ (unobservable) probabilities and utilities. Of course, by Theorem 3, the two approaches are dual to each other, except that the arbitrage-free equilibria will be subjective correlated equilibria rather than Nash equilibria and the common prior distribution will be a risk-neutral distribution rather than anyone’s true distribution. But more importantly, by operationalizing the common knowledge assumptions in terms of material gambles or trades, the players are given the ability to tweak the rules of the game. The question then becomes not only “how should this game be played?” but also “should another game be played instead?” For example, two risk-averse players would never implement a mixed strategy Nash equilibrium: if side bets were permitted, they would bet with each other (or with an entrepreneurial observer) in such a way as to equalize their rankings of the outcomes, thereby eliminating the need for randomization and perhaps even dissolving the game (Nau 1995c). The theory of games that emerges here is thus a theory of how to make the rules as well as how to play once the rules have been fixed.
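The mixed-strategy equilibrium that risk-averse players would contract their way out of can be computed from the standard indifference conditions; the following sketch (the function name and the matching-pennies payoff matrix are illustrative, not from the text) solves the 2x2 zero-sum case:

```python
def mixed_equilibrium_2x2(A):
    """Interior mixed-strategy equilibrium of a 2x2 zero-sum game with
    row-player payoff matrix A. Assumes the game has no saddle point in
    pure strategies, so the denominator below is nonzero and both
    probabilities fall strictly between 0 and 1."""
    (a, b), (c, d) = A
    denom = a - b - c + d
    # Row player's probability on row 0, chosen to make the column
    # player indifferent between her two columns:
    p = (d - c) / denom
    # Column player's probability on column 0, chosen to make the row
    # player indifferent between his two rows:
    q = (d - b) / denom
    return p, q

# Matching pennies: in equilibrium each player must randomize 50/50.
p, q = mixed_equilibrium_2x2([[1, -1], [-1, 1]])
```

Each player’s mixture exists only to keep the opponent indifferent; this is the randomization that, as noted above, side bets can render unnecessary by equalizing the players’ rankings of the outcomes directly.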
Concerning the study of economic and social institutions in which many individuals interact with each other—households, firms, markets, and governments—the implication is even more fundamental. To assume that it is “as if” every individual has a complete model of her environment and is behaving optimally in it is to deny what is perhaps the most important reason for them to interact, namely that their models are incomplete, and usually profoundly so. No wonder this leads to paradoxes and puzzles. As Arrow (1987) observes: “If every agent has a complete model of the economy, the hand running the economy is very visible indeed.” This is not to suggest that mathematical models are unhelpful, but merely that plural analysis and high-bandwidth data are likely to be needed to fully illuminate complex social or economic phenomena. Painstaking field studies of institutions, rich data from real markets and polities, in vivo social experimentation, lessons drawn from history and literature—all of which harness the power of many people’s brains—may shed more light on human affairs than the exploration of a system of equations with a small number of parameters. Certainly there is an important unifying mathematical principle that underlies rational behavior in decisions, games, and markets: it turns out to be the principle of no-arbitrage, once the knowledge assumptions are operationalized and the dual side of the model is examined. But that principle has little predictive ability by itself: the devil is still in the details.