Arbitrage, incomplete models, and interactive rationality


Robert F. Nau

Fuqua School of Business

Duke University

Durham, NC 27708-0120 USA

(919) 660-7763

Version 3.5

October 13, 1999

ABSTRACT: Rational choice theory rests on the assumption that decision makers have complete mental models of the events, consequences and acts that constitute their environment. They are considered to be individually rational if they hold preferences among acts that satisfy axioms such as ordering and independence, and they are collectively rational if they satisfy additional postulates of inter-agent consistency such as common knowledge and common prior beliefs. It follows that rational decision makers are expected-utility maximizers who dwell in conditions of equilibrium. But real decision makers are only boundedly rational, they must cope with disequilibrium and environmental change, and their decision models are incomplete. As such, they are often unable or unwilling to behave in accordance with the rationality axioms, they find the theory hard to apply to most personal and organizational decisions, and they regard the theory’s explanations of many economic and social phenomena as unsatisfying. Models and experimental studies of bounded rationality, meanwhile, often focus on the behavior of unaided decision makers who employ strategies such as satisficing or adaptive learning that can be implemented with finite attention, memory, and computational ability.
This essay proposes a new foundation for choice theory that does not rely on consequences, acts, and preferences as primitive concepts. Rather, agents articulate their beliefs and values through the acceptance of small gambles or trades in a stylized market. These primitive measurements are intersubjective in nature, eliminating the need for separate common knowledge assumptions, and they partly endogenize the writing of the “rules of the game.” In place of the assorted preference axioms and inter-agent consistency conditions of the standard theory, only a single axiom of substantive rationality is needed, namely the principle of no arbitrage. No-arbitrage is shown to be the primal characterization of rationality with respect to which solution concepts such as expected-utility maximization and strategic and competitive equilibria are merely dual characterizations. The traditional distinctions among individual, strategic, and competitive rationality are thereby dissolved.
Arbitrage choice theory (ACT) does not require the decision models of individuals to be complete, so it is compatible with the notion that individual rationality is bounded. It is also inherently a theory of group rationality rather than individual rationality, so it can be applied at any level of activity from personal decisions to games of strategy to competitive markets. This group-centered view of rationality admits the possibility that individuals do more than merely satisfice when making decisions in complex environments for which they lack complete models. Rather, they use “other people’s brains” by seeking advice from colleagues and experts, forming teams or management hierarchies, consulting the relevant literature, relying on market prices, invoking social norms, and so on. The most important result of such interactive decision processes usually is not the identification of an existing alternative that optimizes the latent beliefs and values of the putative decision maker, but rather the synthesis of a more complete model of the problem than she (or perhaps anyone) initially possesses, the construction of sharper beliefs and values, and the discovery or creation of new (and perhaps dominant) alternatives. On this view, traditional models of rational choice that attempt to explain the behavior of households, firms, markets, and polities purely in terms of the satisfaction of individual preferences are perhaps overlooking the most important purpose of socioeconomic interactions, namely that they harness many people’s brains to the solution of complex problems.

1.1 Introduction
2.1 Outline of standard rational choice theory

1. Environment

2. Behavior

3. Rationality

2.2 Trouble in paradise

1. Consequences and acts

2. Preferences

3. The axioms

4. Equilibrium

5. Impossibility

6. Simple questions, equivocal answers

7. The fossil record

2.3 Alternative paradigms

1. Bounded rationality, behavioral economics, and organization theory

2. Behavioral decision theory and experimental economics

3. Austrian and subjectivist economics

4. Evolutionary & complexity theory
3.1 Outline of arbitrage choice theory

1. Environment

2. Behavior

3. Rationality

3.2 Fundamental theorem and examples

1. Pure exchange

2. Elicitation and aggregation of belief

3. Decisions under uncertainty

4. Games of strategy

  1. Learning from experience

  2. Allais’ paradox

3.3 On the realism and generality of the modeling assumptions

3.4 Summary
4.1 Incomplete models and other people’s brains

4.2 The limits of theory

4.3 Implications for Modeling

Arbitrage, Incomplete Models, and Interactive Rationality
Robert F. Nau

1.1 Introduction
Over the last 50 years, the theory of rational choice has emerged as the dominant paradigm of quantitative research in the social and economic sciences. The idea that individuals make choices by rationally weighing values and uncertainties (or that they ought to, or at least act as if they do) is central to Bayesian methods of statistical inference and decision analysis; the theory of games of strategy; theories of competitive markets, industrial organization, and asset pricing; the theory of social choice; and a host of rational actor models in political science, sociology, law, philosophy, and management science.
Rational choice theory is founded on the principles of methodological individualism and purposive action. Methodological individualism means that social and economic phenomena are explained in terms of “a particular configuration of individuals, their dispositions, situations, beliefs, and physical resources and environment” rather than in terms of holistic or emergent properties of groups. (Watkins 1957; cf. Homans 1967, Brodbeck 1968, Ordeshook 1986, Coleman 1990, Arrow 1994) Purposive action means that those individuals have clear and consistent objectives and that they employ reason to find the best means of achieving those objectives. They intend their behavior to cause effects that they desire, and their behavior does cause those effects for the reasons they intend. (Ordeshook 1986, Elster 1986) In most versions of the theory, the objectives of the individuals are expressed in terms of preferences: they choose what they most prefer from among the alternatives available, and the (only) role of social and economic institutions is to enable them to satisfy their preferences through exchange or strategic contests with other individuals.
This essay sketches the outline of arbitrage choice theory (ACT), a new synthesis of rational choice that weakens the emphasis on methodological individualism and purposive action and departs from the traditional use of preference as a behavioral primitive, building on earlier work by Nau and McCardle (1990, 1991) and Nau (1992abc, 1995). The main axiom of rationality in this framework is the requirement of no arbitrage, and it leads to a strong unification of the theories of personal decisions, games of strategy, and competitive markets. It also yields a very different perspective on the purpose of social and economic interactions between agents, suggesting that group behavior is to some extent emergent and suprarational, not merely an aggregation or collision of latent individual interests. Thus, we will take issue with the following statement of Elster (1986): “A family may, after some discussion, decide on a way of spending its income, but the decision is not based on ‘its’ goals and ‘its’ beliefs, since there are no such things.” We will argue that there are indeed “such things” and that they may be better defined for the group than for the constituent individuals.
The organization is as follows. Section 2 presents an outline and brief critique of the standard rational choice theory. Section 3 presents a contrasting outline of arbitrage choice theory, illustrated by a few simple examples. Section 4 focuses on the phenomenon of model incompleteness and its implications for interactions between agents and the emergence of group-level rationality.
2.1 Outline of the standard rational choice theory
Standard rational choice theory begins with the assumption that the infinitely detailed “grand world” in which real choices are made can be adequately approximated by a “small world” model with a manageable number of numerical parameters. The formal theory is then expressed in terms of assumptions about the structure of the small-world environment, the modes of behavior that take place in that environment, and the conditions that behavior must satisfy in order to qualify as rational.

Elements of standard rational choice theory

1. Environment:


Events (states of nature and alternatives for agents)


Mappings of events to consequences for all agents

Acts (hypothetical mappings of events not under an agent’s control to consequences)

2. Behavior:

Physical behavior: choices among alternatives

Mental behavior: preferences among acts

3. Rationality:

(a) Individual rationality

Axioms of preference (completeness, transitivity, independence, etc.)

Axiom of rational choice (choices agree with preferences)

(b) Strategic rationality

Common knowledge of individual rationality

Common knowledge of utilities

Common prior probabilities

Probabilistic independence

(c) Competitive rationality


Market clearing

(d) Social rationality

Pareto efficiency

(e) Rational asset pricing

No arbitrage

(f) Rational learning

Bayes’ theorem

(g) Rational expectations

Self-fulfilling beliefs

1. Environment: The environment is inhabited by one or more human agents (also called actors or players), and in the environment events may happen, under the control of the agents and/or nature. An event that is under an agent’s control is an alternative, and an event that is under nature’s control is a state of nature. States of nature just happen, while alternatives are chosen. A realization of events is called an outcome. For each agent there is a set of material or immaterial consequences (wealth, health, pleasure, pain, etc.) that she may enjoy or suffer, and there is a known mapping from outcomes to consequences.1 For example, an agent may face a choice between the alternatives “walk to work with an umbrella” or “walk to work without an umbrella,” the states of nature might be “rain” or “no rain,” and the consequences might be “get slightly wet” or “get drenched” or “stay dry,” with or without the “hassle” of carrying an umbrella, as summarized in the following contingency table:
Table 1

                                          State of nature:
                                          Rain (s1)                        No rain (s2)

  Walk to work with umbrella (a1)         Get slightly wet, hassle (c1)    Stay dry, hassle (c2)

  Walk to work without umbrella (a2)      Get drenched, no hassle (c3)     Stay dry, no hassle (c4)

From the perspective of an observer of the situation, every cell in the table corresponds to an outcome: nature will or won’t rain and the agent will or won’t carry her umbrella. Every outcome, in turn, yields a known consequence. From the perspective of the agent, every row in the table corresponds to an alternative, and her problem is to choose among the alternatives. Here the alternatives have been labeled as a1, a2; the states of nature have been labeled as s1, s2; and the consequences have been labeled as c1 through c4.

Every alternative for an agent is a feasible mapping of events not under that agent’s control to consequences. An act for an agent is an arbitrary mapping of events not under that agent’s control to consequences. Thus, an act is a (usually) hypothetical alternative, while an alternative is a feasible act. For example, the act “take a taxicab to work at a cost of $5” might yield the consequence “ride in comfort minus $5” (henceforth labeled c5) whether it rains or not, and an agent can contemplate such an act regardless of whether it is feasible. (The cab drivers may be on strike today, but the agent still can imagine the ride.) The set of acts is typically much richer than the set of alternatives for each agent. For example, the set of acts for our protagonist with the umbrella might be represented by the following table:
Table 2

                                          State of nature:
                                          Rain (s1)                        No rain (s2)

  Walk to work with umbrella (a1)         Get slightly wet, hassle (c1)    Stay dry, hassle (c2)

  Walk to work without umbrella (a2)      Get drenched, no hassle (c3)     Stay dry, no hassle (c4)

  Take a taxicab to work (a3)             Ride in comfort minus $5 (c5)    Ride in comfort minus $5 (c5)

  ...                                     ...                              ...

  Arbitrary act (ai)                      Some consequence (cj)            Some consequence (ck)
Here, acts a1 and a2 happen to correspond to alternatives a1 and a2 (walking with or without the umbrella) while act a3 (taking the cab) might be purely hypothetical. Another act (ai) might be composed of an arbitrary assignment of consequences (say cj and ck) to weather states—even an oxymoron such as “stay dry, no hassle” if it rains, “get drenched, hassle” if it doesn’t rain. The small world is therefore rather “big,” since it contains a great number of infeasible alternatives in addition to the feasible ones.
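The small-world objects described above can be made concrete in code. The following is a minimal sketch (all names and labels are illustrative, not from the original) in which acts are modeled as mappings from states of nature to consequences, and only some acts are feasible alternatives:

```python
# States of nature (events not under the agent's control).
states = ["rain", "no_rain"]

# Acts: each maps every state to a consequence. Acts a1 and a2 correspond
# to the feasible alternatives in Table 1; a3 (the cab ride) is a purely
# hypothetical act that yields the same consequence in either state.
acts = {
    "a1": {"rain": "slightly wet, hassle", "no_rain": "dry, hassle"},
    "a2": {"rain": "drenched, no hassle", "no_rain": "dry, no hassle"},
    "a3": {"rain": "ride in comfort minus $5", "no_rain": "ride in comfort minus $5"},
}

# Only some acts are feasible (the cab drivers may be on strike today).
feasible = {"a1", "a2"}

def is_constant(act):
    """A constant act, like a3, yields the same consequence in every state."""
    return len(set(act.values())) == 1

print(is_constant(acts["a3"]))  # True: the cab ride is a constant act
```

The set of acts is far richer than the set of feasible alternatives: any assignment of consequences to states, however oxymoronic, defines an act.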

2. Behavior: Within the environment, several kinds of behavior occur. First and most importantly, there is physical behavior, which affects the outcomes of events. Physical behavior by agents2 consists of choices among feasible alternatives, corresponding to the selection of a single row out of a table similar to Table 1. However, it does not suffice to model only physical behavior, because the set of feasible alternatives usually is not rich enough to support a tight mathematical representation and because it is of interest to predict choices from other kinds of antecedent behavior. The other kind of behavior most often modeled in rational choice theory is preference behavior. Preferences are hypothetical choices between hypothetical alternatives (acts), corresponding to the selection of one row (or perhaps an equivalence class of several rows) out of a table similar to Table 2. An agent prefers act x to act y if she imagines that she would choose x rather than y if given the choice between them. (Actually, we are getting ahead of ourselves: at this point preference is merely an undefined primitive, but it will be linked to choices in the axioms which follow.) Preference behavior may be interpreted as a kind of mental behavior that precedes and ultimately causes physical behavior. The agent is assumed to have preferences with respect to all possible acts, even those that involve counterfactual assumptions, such as: “If I had a choice between riding in a cab (which is not running today) or walking to work with an umbrella (which I don’t have), I would take the cab.” The domain of preferences is therefore very rich, providing the quantitative detail needed to support a fine-grained mathematical representation of behavior.
3. Rationality: Assumptions about what it means to behave rationally are typically imposed at several levels: the level of the agent (individual rationality), the level of the small group (strategic rationality), and the level of the large group (competitive or social rationality). The lowest (agent) level assumptions apply in all cases, while the higher (group) level assumptions are applied in different combinations depending on the situation.
a. Individual rationality. The principal assumptions of individual rationality are axioms imposed on mental behavior—that is, on preferences. Preferences are assumed to establish an ordering of acts from least-preferred to most-preferred, and they are usually taken to satisfy the axioms most often imposed on ordering relations so as to ensure the existence of a convenient numerical representation. For example, preferences are usually assumed to be complete, so that for any acts x and y an agent always knows whether she prefers x to y or y to x or is precisely indifferent between the two. Preferences are also usually assumed to be transitive, so that if x is preferred to y and y is preferred to z, then x is also preferred to z. Where uncertainty is involved, preferences are usually assumed to satisfy an “independence” or “cancellation” or “sure-thing” condition (e.g., Savage’s axiom P2), which ensures that comparisons among acts depend only on the events where they lead to different consequences.3 For example, if x and y are two acts that yield different consequences if event E occurs but yield the same consequence if E does not occur, and if two other acts x′ and y′ agree with x and y, respectively, if E occurs but yield a different, though still common, consequence if E does not occur, then x is preferred to y if and only if x′ is preferred to y′. This implies that numerical representations of preference can be additively decomposed across events, setting the stage for expected-utility calculations.
Preferences are also assumed to satisfy conditions ensuring that the effects of beliefs about events can be separated from the effects of values for consequences. One such condition (Savage’s axiom P3) requires that “value can be purged of belief.”4 Suppose that acts x and y yield identical consequences everywhere except in event E, where x yields consequence c1 and y yields c2. Then a preference for x over y suggests that consequence c1 is more highly valued than consequence c2. Now consider two other acts x′ and y′ that yield identical consequences everywhere except in event E′, where they lead to c1 and c2 respectively. (There is an additional technical requirement that the events E and E′ should be “non-null”—i.e., not regarded as impossible.) Then x is assumed to be preferred to y if and only if x′ is preferred to y′, which means the ordering of preference between two consequences cannot depend on the event in which they are received. Another such condition is that “belief may be discovered from preference” (Savage’s axiom P4). Suppose that consequence c1 is preferred (as a sure thing) to consequence c2, and suppose that for two events A and B, the lottery in which c1 is received conditional on A and c2 is received conditional on not-A is preferred to the lottery in which c1 is received conditional on B and c2 is received conditional on not-B. (This suggests that A is regarded as more probable than B.) Then the same direction of preference must hold when c1 and c2 are replaced by any two other consequences c1′ and c2′, respectively, such that c1′ is preferred to c2′. A further condition that is needed to uniquely separate belief from value, but which is often left implicit, is that the strength of preference between any two given consequences—e.g., the perceived difference between “best” and “worst”—must have the same magnitude and not merely the same sign in every state of nature (Schervish et al. 1990).
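The ordering axioms above (completeness and transitivity) are mechanical properties that can be checked on any finite preference relation. The following is a minimal sketch with a hypothetical three-act relation (the acts and preferences are invented for illustration):

```python
from itertools import combinations, permutations

items = ["x", "y", "z"]

# weakly_prefers[(a, b)] == True means "a is at least as good as b".
# This hypothetical relation encodes the strict ordering x > y > z.
weakly_prefers = {
    ("x", "y"): True,  ("y", "x"): False,
    ("y", "z"): True,  ("z", "y"): False,
    ("x", "z"): True,  ("z", "x"): False,
    ("x", "x"): True,  ("y", "y"): True,  ("z", "z"): True,
}

def is_complete(rel, items):
    # Completeness: for every pair, at least one direction of weak
    # preference must hold (no incomparable acts).
    return all(rel[(a, b)] or rel[(b, a)] for a, b in combinations(items, 2))

def is_transitive(rel, items):
    # Transitivity: if a >= b and b >= c, then a >= c must also hold.
    return all(not (rel[(a, b)] and rel[(b, c)]) or rel[(a, c)]
               for a, b, c in permutations(items, 3))

print(is_complete(weakly_prefers, items), is_transitive(weakly_prefers, items))
```

Over the infinite domain of acts assumed by the standard theory, of course, these properties cannot be verified by enumeration; they are imposed as axioms precisely because they guarantee the numerical representation described next.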
The axioms imposed on preferences yield a representation theorem stating that to every event not under her control the agent can assign a numerical degree of belief called a probability, and to every consequence she can assign a numerical degree of value called a utility, according to which the more preferred of two acts is the one that yields the higher expected utility. (A particularly elegant axiomatization of subjective expected utility is given by Wakker 1989.) To continue our earlier example, this means that descriptions of states can be summarized by their corresponding probabilities, and descriptions of consequences can be summarized by their corresponding utilities in tables of acts and alternatives, as follows.
Table 3

Probabilities and utilities:

                                          Rain (p1)        No rain (p2)

  Walk to work with umbrella (a1)         u11              u12

  Walk to work without umbrella (a2)      u21              u22
Here p1 and p2 denote the agent’s probabilities of “rain” and “no rain,” respectively, u11 denotes the utility level of the consequence “get slightly wet, hassle”, and so on. The expected utility of alternative i is then given by:

EU(ai) = p1ui1 + p2ui2,
and she strictly prefers a1 over a2 if and only if EU(a1) > EU(a2).
Thus, the rational individual is a maximizer, and the parameters of the objective function she maximizes are her beliefs and values, represented by numerical probabilities and utilities. Beliefs and values are imagined to have different subjective sources—the former depending on information and the latter on tastes—and their effects are imagined to be separable from each other. Only beliefs (probabilities) matter in problems of reasoning from evidence, while only values (utilities) matter in problems where there is no uncertainty.
One thing remains to complete the description of individual rationality: a link between mind and body, or between preference and choice. This is the axiom of rational choice, which states that the event that occurs shall be one in which every agent chooses her most-preferred alternative.5 To continue the example above, if the agent’s preferences are such that EU(a1) > EU(a2), then the theory predicts that event a1 will occur: she will carry the umbrella. The axiom of rational choice thereby gives operational meaning to the concept of preference. For, a preference for aj over ak is meaningless by itself if aj and ak are merely hypothetical acts rather than feasible objects of choice. But through the axioms imposed on individual preferences, such a preference constrains the preferences that may exist with respect to other acts, say a1 and a2, that are feasible, and through the axiom of rational choice it then exerts an indirect effect on choice. In the standard theory, most preferences have only this tenuous connection with materially significant behavior.
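The expected-utility representation and the axiom of rational choice together can be illustrated numerically. The following is a minimal sketch of the umbrella example; the probabilities and utilities are assumed values chosen for illustration, not taken from the original:

```python
# Subjective probabilities of the states of nature (assumed).
p = {"rain": 0.3, "no_rain": 0.7}

# Utilities u_ij of each alternative in each state (assumed). Walking
# without the umbrella is best if it stays dry, worst if it rains.
u = {
    "a1": {"rain": 0.6, "no_rain": 0.8},  # umbrella: slightly wet / dry, hassle
    "a2": {"rain": 0.0, "no_rain": 1.0},  # no umbrella: drenched / dry, no hassle
}

def expected_utility(alt):
    # EU(ai) = p1*ui1 + p2*ui2
    return sum(p[s] * u[alt][s] for s in p)

# Axiom of rational choice: the agent chooses the EU-maximizing alternative.
choice = max(u, key=expected_utility)
print(choice)  # a1, since EU(a1) = 0.74 exceeds EU(a2) = 0.70
```

With these assumed numbers, the hassle of the umbrella is outweighed by the 30% risk of getting drenched; raising the probability of rain or the disutility of the hassle would tip the choice the other way.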