THINKING AND REASONING, 1998, 4 (4), 289–317

Individual Differences in Framing and Conjunction Effects

Keith E. Stanovich
University of Toronto, Canada

Richard F. West
James Madison University, USA
Individual differences on a variety of framing and conjunction problems were
examined in light of Slovic and Tversky’s (1974) understanding/acceptance
principle—that more reflective and skilled reasoners are more likely to affirm the axioms that define normative reasoning and to endorse the task construals of informed experts. The predictions derived from the principle were confirmed for the much discussed framing effect in the Disease Problem and for the conjunction fallacy on the Linda Problem. Subjects of higher cognitive ability were disproportionately likely to avoid each fallacy. Other framing problems produced much more modest levels of empirical support. It is conjectured that the varying patterns of individual differences are best explained by two-process theories of reasoning (e.g. Evans, 1984, 1996; Sloman, 1996) conjoined with the assumption that the two processes differentially reflect interactional and analytic intelligence.
(Baron, 1994; Cohen, 1981, 1983; Evans & Over, 1996; Gigerenzer, 1996;
Kahneman, 1981; Kahneman & Tversky, 1983, 1996; Koehler, 1996; Stanovich, in press; Stein, 1996). For example, the gap between the normative and the descriptive can be interpreted as indicating systematic irrationalities in human cognition. Alternatively, it can be argued that the gap is due to the application of an inappropriate normative model or due to an alternative construal of the task on the part of the subject (see Cohen, 1981 and Stein, 1996 for extensive discussions of these possibilities).
Even the simplest principles of normative rationality have been the subject of intense dispute. Take, for example, the basic principle of descriptive invariance (Kahneman & Tversky, 1984, p.343) “that the preference order between prospects should not depend on the manner in which they are described.” There is now a large literature on whether people do display framing effects that can be unambiguously interpreted as violations of this principle. For example, the Disease Problem of Tversky and Kahneman (1981, p.453) has been the subject of much discussion:
Problem 1. Imagine that the U.S. is preparing for the outbreak of an unusual disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows: If Program A is adopted, 200 people will be saved. If Program B is adopted, there is a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved.
Which of the two programs would you favor, Program A or Program B?
Problem 2. Imagine that the U.S. is preparing for the outbreak of an unusual disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows: If Program C is adopted, 400 people will die. If Program D is adopted, there is a one-third probability that nobody will die and a two-thirds probability that 600 people will die. Which of the two programs would you favor, Program C or Program D?
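(A brief illustrative sketch, not part of the original article: the two problems are redescriptions of each other in expected-value terms. The short Python fragment below, with program names taken from the problems above, makes that equivalence explicit.)

```python
# Expected lives saved under each program, assuming the stated
# probabilities are exact. Programs A/B come from Problem 1;
# Programs C/D restate them in "will die" terms in Problem 2.
TOTAL = 600  # people expected to die if nothing is done

def expected_saved(outcomes):
    """outcomes: list of (probability, lives_saved) pairs."""
    return sum(p * saved for p, saved in outcomes)

program_a = expected_saved([(1.0, 200)])                    # "200 will be saved"
program_b = expected_saved([(1/3, 600), (2/3, 0)])
program_c = expected_saved([(1.0, TOTAL - 400)])            # "400 will die"
program_d = expected_saved([(1/3, TOTAL - 0), (2/3, TOTAL - 600)])

# All four programs share the same expected value (200 lives saved),
# which is why choosing A in Problem 1 but D in Problem 2 is the
# pattern taken to violate descriptive invariance.
print(program_a, program_b, program_c, program_d)
```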
Many subjects select alternatives A and D in these two problems, despite the fact that the two problems are redescriptions of each other and that Program A maps to Program C rather than D. This response pattern seemingly violates descriptive invariance. However, Berkeley and Humphreys (1982) argue that Programs A and C might not be descriptively invariant in subjects’ interpretations. They argue that the wording of the outcome of Program A (“will be saved”), combined with the fact that its outcome is seemingly not described in the same exhaustive way as the consequences for Program B, suggests the possibility of human agency in the future which might enable the saving of more lives (see also Kühberger, 1995). The wording of the outcome of Program C (“will die”) does not suggest the possibility of future human agency working to save more lives (indeed, the possibility of losing a few more might be inferred by some people). Under such a construal of the problem, it is no longer non-normative to choose Programs A and D. Likewise, Macdonald (1986, p.24) argues that, regarding the “200 people will be saved” phrasing, “it is unnatural to predict an exact number of cases” and that “ordinary language reads ‘or more’ into the interpretation of the statement.” Similarly, Jou, Shanteau, and Harris (1996) have argued that the Disease Problem’s assumed underlying formula (Total Expected Loss - Number Saved = Resulting Loss) is without rationale and may be pragmatically odd for various reasons. For example, they argue (p.3) that:
the deaths could be construed as occurring immediately after the decision to save 200 lives, or at some indefinite time in the future. If the deaths were construed as occurring at some unknown future time, they would not likely be seen as a consequence of saving 200 lives. Hence saving the lives will not be conceived as entailing the death of 400 people.
Similar debates have been spawned by claims that people violate the independence axiom of utility theory (Allais, 1953; Bell, 1982; Loomes & Sugden, 1982; Schick, 1987; Slovic & Tversky, 1974; Tversky, 1975). Whether or not subjects display the so-called conjunction fallacy in probabilistic reasoning has likewise proven controversial (Adler, 1991; Bar-Hillel, 1991; Dulany & Hilton, 1991; Fiedler, 1988; Politzer & Noveck, 1991; Tversky & Kahneman, 1983; Wolford, Taylor, & Beck, 1990). Analogous controversies surround the use of base rates (e.g. Koehler, 1996), confirmation bias (Klayman & Ha, 1987, 1989), belief bias (e.g. Evans, Over, & Manktelow, 1993), probability calibration (e.g. Keren, 1997), selection task choices (e.g. Oaksford & Chater, 1994, 1996), and many other tasks in the literature in which human performance seems to depart from normative models (for summaries of the large literature, see Baron, 1994; Evans, 1989; Evans & Over, 1996; Newstead & Evans, 1995; Osherson, 1995; Piattelli-Palmarini, 1994; Plous, 1993; Shafir & Tversky, 1995).
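(An illustrative aside, not part of the original article: the normative rule behind the conjunction fallacy is simply that P(A and B) can never exceed P(A). The sketch below checks this on randomly generated joint distributions over two events.)

```python
import itertools
import random

# For any probability distribution over two events A and B,
# P(A and B) <= P(A), since the conjunction cell is one component
# of the marginal for A. We verify this over random distributions.
random.seed(0)  # for reproducibility
for _ in range(1000):
    weights = [random.random() for _ in range(4)]
    total = sum(weights)
    cells = list(itertools.product([True, False], repeat=2))
    # Joint distribution over (A, B) truth-value combinations.
    p = {cell: w / total for cell, w in zip(cells, weights)}
    p_a = p[(True, True)] + p[(True, False)]
    p_a_and_b = p[(True, True)]
    assert p_a_and_b <= p_a  # the conjunction rule, never violated
print("conjunction rule held in all 1000 random distributions")
```

A subject who rates the conjunction (e.g. “bank teller and feminist” in the Linda Problem) as more probable than one of its conjuncts is thus violating a constraint that holds for every possible probability distribution.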
What most of the disputants in these controversies seem to have ignored is that—although the modal person in these experiments might well display an overconfidence effect, underutilise base rates, choose P and Q in the selection task, commit the conjunction fallacy, etc.—on each of these tasks, some people give the standard normative response. For example, in knowledge calibration studies, although the mean performance level of the entire sample may be represented by a calibration curve that indicates overconfidence, a few people do display near perfect calibration (Stanovich & West, 1998). As another example, consider the problems that the Nisbett group (e.g. Fong, Krantz, & Nisbett, 1986) have used to assess statistical thinking in everyday situations. Although the majority of people often ignore the more diagnostic but pallid statistical evidence, some actually do rely on the statistical evidence rather than the vivid case evidence (Stanovich & West, 1998). A few people even respond with P and not-Q on the notoriously difficult selection task (Evans, Newstead, & Byrne, 1993).
The debates about how to interpret the descriptive/normative gap usually ignore these individual differences. Contending arguments are framed in terms of changes in modal or mean performance in response to the manipulation of variables that are purported to differentiate between explanations of the gap. For example, arguments about whether overconfidence in knowledge calibration is eliminated when a representative sampling of items is used have focused on changes in the mean overconfidence bias score (Gigerenzer, Hoffrage, & Kleinbolting, 1991; Griffin & Tversky, 1992; Juslin, Olsson, & Bjorkman, 1997). It will be argued here that such analyses need to be supplemented with a concern for individual differences¹, because the nature of individual differences and their patterns of covariance might have implications for the debates about how to interpret discrepancies between normative models and descriptive models of human behaviour².
THE UNDERSTANDING/ACCEPTANCE PRINCIPLE

In a 1974 article, Slovic and Tversky presented a “mock” debate between Allais and Savage about the independence axiom of utility theory. This axiom states that “if the option chosen does not affect the outcome in some states of the world, then we can ignore the … outcomes in those states” (Baron, 1993, p.50; see Allais, 1953; Luce & Krantz, 1971; Savage, 1954). Slovic and Tversky (1974) speculated that the more the independence axiom of utility theory was understood, the more it would be accepted (“the deeper the understanding of the axiom, the greater the readiness to accept it”, pp.372–373). Their argument was essentially that descriptive facts about argument endorsement should condition our inductive inferences about why human performance deviates from normative models³. Slovic and Tversky (1974) argued that understanding/acceptance

1. See Stankov and Crawford (1996) and West and Stanovich (1997) for indications of how analyses of individual differences might have implications for interpretations of the psychological mechanism underlying the overconfidence effect.
2. For exceptions to the general neglect of individual differences in the literature, see Jepson, Krantz, and Nisbett (1983), Roberts (1993), Slugoski, Shields, and Dawson (1993), Slugoski and Wilson (in press), and Yates, Lee, and Shinotsuka (1996).
3. Their argument is one in a long tradition that allows descriptive facts to affect judgements of normative appropriateness. For example, Slovic (1995, p.370) refers to the “deep interplay between descriptive phenomena and normative principles.” As Larrick, Nisbett, and Morgan (1993, p.332) have argued, “There is also a tradition of justifying, and amending, normative models in response to empirical considerations.” Thagard and Nisbett (1983, p.265) refer to this tradition when arguing that “discovery of discrepancies between inferential behavior and normative standards may in some cases signal a need for revision of the normative standards, and the descriptions of behavior may be directly relevant to what revisions are made” (see also Kyburg, 1983, 1991; March, 1988; Shafer, 1988). The assumptions underlying the naturalistic project in epistemology (e.g. Kornblith, 1985, 1993) have the same implication—that findings about how humans form and alter beliefs should have a bearing on normative theories of belief acquisition.