Is Military Incompetence Adaptive?

Richard Wrangham
Department of Anthropology, Peabody Museum, Harvard University, Cambridge, Massachusetts

Military engagements are categorized as raids or battles, according to whether one or both sides have the opportunity to assess the other. In raids, assessment appears to be accurate. This means that aggressors experience low costs, which allows violence to be adaptive. A commonly reported reason for battles, by contrast, is a failure of assessment: both opponents hold positive illusions and believe they will win. This article asks why this form of battle incompetence occurs. Explanations in terms of individual anomaly or cognitive constraints appear unsatisfactory. Here, I propose two mechanisms by which positive illusions tend to promote victory. First, according to the Performance Enhancement Hypothesis, they suppress negative thoughts or feelings. This applies to both raids and battles. Second, the Opponent-Deception Hypothesis suggests that positive illusions increase the probability of a successful bluff. This applies only to battles. Military incompetence is proposed to be the result of adaptive strategies of self-deception, which unfortunately promote an increased intensity of violence. © 1999 Elsevier Science Inc.

KEY WORDS: Violence; Self-deception; Assessment; Raids; Battles; Military incompetence.

Human warfare consists of two major types of conflict, raids and battles (Keeley 1996). Raids are typical of “primitive war” and are characterized by the victims being unwilling participants. In a battle, on the other hand, both sides willingly engage. In some battles the willingness of both opponents to fight is understandable. For instance, the two opponents may be evenly matched, the weaker opponent may be cornered, or a lost battle may be a cost-effective component of a long-term war strategy. Yet, in many battles, one of the opponents is patently weaker than the other, but still chooses to fight despite
nonviolent options. Such battles illustrate “military incompetence” by the weaker opponent (Dixon 1976). These cases, in which two opponents deliberately fight even though they clearly differ in strength, are puzzling because, according to animal conflict theory, actors should accurately assess their probability of victory. Weaker opponents are expected to predict that they will lose and should, therefore, retreat without fighting, especially if the probable costs of loss are high (Kim 1995). The principal explanations to date for these failures of military assessment are that the commanders are cognitively or emotionally incompetent. War leaders, it has been suggested, tend to be deviant because such characteristics help them to gain promotion within military institutions (Dixon 1976). Existing explanations, therefore, treat military incompetence as maladaptive. No adaptive explanations have been proposed for systematic military incompetence in battles. In this article I propose two adaptive hypotheses for the self-deception that underlies military incompetence. Specifically, I suggest that positive illusions give players an advantage either by suppression of disadvantageous thoughts or feelings, or through an arms race of bluffing, which reduces the opponents’ ability to assess each other accurately. These proposals do not apply to raids, because raiders are hypothesized to assess accurately the competitive strength of victims. Accordingly, it is necessary to reconcile the evolutionary explanation for raids, which assumes accurate assessment, with the explanation for militarily incompetent battles, which assumes inaccurate assessment. I therefore begin by briefly reviewing adaptive theories of raiding. I then address the problem of battles and their accompanying military incompetence.

RAIDS

Characteristics

Lethal raids are interactions in which parties of allied individuals collectively seek vulnerable neighbors, assess the probability of making a successful attack, and conduct a surprise raid that leaves one or more victims dead or dying. They are rare in nature. The only mammals in which they have been reported are chimpanzees Pan troglodytes (Goodall 1986; Manson and Wrangham 1991) and humans (“the commonest form of combat employed in primitive warfare . . . small raids and ambushes,” Keeley 1996: 85). In both cases, they are conducted almost entirely by males. Raids appear demographically significant in both chimpanzees and humans. For instance, for the 21 primitive societies for which Keeley (1996) found estimates, the death rate from raiding was 0.5% of the population per year. This would be equivalent to more than 1 billion war deaths if raiding occurred at similar rates in the 20th century. Among 13 raiding cultures, the median percentage of male deaths caused by violence was 20–30% (D. Jones and R. Wrangham, in preparation), similar to an estimate for chimpanzees in Gombe National Park, Tanzania (Goodall 1986). If such figures are representative of prehistoric human cultures, they suggest that raiding has been a major selective factor in human evolution.
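
The extrapolation from a 0.5%-per-year raiding death rate to “more than 1 billion” 20th-century war deaths can be checked with back-of-envelope arithmetic. The sketch below is only an illustration of that calculation: the death rate is taken from the text, whereas the average 20th-century world population (about 3 billion) is a rounded assumption introduced here, not a figure from the article.

```python
# Back-of-envelope check of the extrapolation summarized above.
# Assumption: an average 20th-century world population of ~3 billion
# (a rounded illustrative figure, not taken from the article).

raiding_death_rate = 0.005      # deaths per person per year (0.5%, from the text)
avg_world_population = 3.0e9    # assumed mean 20th-century population
years = 100                     # duration of the 20th century

equivalent_deaths = raiding_death_rate * avg_world_population * years
print(f"Equivalent 20th-century deaths: {equivalent_deaths:.1e}")
# -> 1.5e+09, i.e., on the order of 1.5 billion, consistent with
#    "more than 1 billion war deaths" in the text.
```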

Accurate assessment by the aggressors is an important component of lethal raiding. In both chimpanzees and humans, raiders act with considerable care (Goodall 1986; Keeley 1996). For instance, anecdotes show that raiders stop their attack if they learn that victims are less vulnerable than they initially appeared to be. M. Wilson (personal communication) has recently carried out playback experiments in the wild to test the hypothesis that chimpanzees respond to potential victims in accordance with assessment theory, i.e., that they are more likely to approach victims with increasing numerical advantage. This prediction has been robustly supported by 26 experiments to date (M. Wilson et al., in preparation).

Why Does Lethal Raiding Occur?

Only one hypothesis has been proposed for the evolution of lethal raiding in chimpanzees and humans. The Imbalance-of-Power Hypothesis explains raids by the co-occurrence of two features (Manson and Wrangham 1991; Wrangham and Peterson 1996). There must be persistent rivalry among coalitions (groups or communities), such that individuals within a given coalition benefit by reducing the coalitionary power of their neighbors. Various kinds of benefit are possible, such as inspiring fear in the neighbors, protection from territorial incursions, expansion of a safe border zone, improved access to land, or attracting females into the aggressors’ community. There is no theoretical need for raiders to identify which of such benefits is likely to occur, however. Instead, the behavior can, in principle, be maintained merely by the tendency for coalitions of successfully raiding males to benefit reproductively, even in unpredictable ways, by weakening the group of neighbors. In a similar way, although individuals within a dominance hierarchy tend to benefit by rising in rank, the specific benefits of a rise in rank are unpredictable: high-ranking males, for example, do not always obtain more matings. The second requisite is that there are such large imbalances of power between coalitionary parties (fighting units) and their victims that aggressors can expect to incur little risk of being damaged during a raid (Manson and Wrangham 1991). In the case of nonhuman mammals, power imbalances are essentially restricted to numeric differences between parties. Among chimpanzees, the power imbalances reported to date consist of a lone victim fighting against three or more aggressors. In these circumstances the victim is held down by one or more, while others inflict damage. Among humans, by contrast, power imbalances can occur as a result not only of differences in number, but also in planning and technology. For example, fire can be used to burn entire communities in their houses and is, therefore, an efficient use of military resources (Keeley 1996). It could alternatively be argued that raiding in chimpanzees and humans is the consequence of a certain level of cognitive ingenuity, because a substantial degree of collaboration and planning is required (cf. Alexander 1987, 1989). This type of “Cognitive Hypothesis” would be challenged, however, by evidence from bonobos (Pan paniscus). Bonobos are as cognitively sophisticated as chimpanzees, but they do not raid. Bonobos’ failure to raid is easily explicable by the imbalance-of-power hypothesis, because the ecology of bonobos allows them to live in larger, more
stable, and less solitary groupings than chimpanzees, so much so that large imbalances of power are rare (Wrangham and Peterson 1996). The Cognitive Hypothesis is thus not supported by this comparison, nor by a wider look at cognitively advanced species such as other apes and cetaceans. Nevertheless, cognitive abilities could prove limiting. Thus, both hypotheses may apply.
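
A toy calculation can make the cost side of the Imbalance-of-Power Hypothesis concrete. The sketch below is a hypothetical illustration, not a model from the article: the victim’s retaliatory damage and the per-attacker benefit are invented numbers, chosen only to show how the expected cost to each aggressor shrinks as the numerical imbalance grows.

```python
# Illustrative-only sketch of the imbalance-of-power logic described above.
# Assumed toy model: a lone victim can inflict a fixed amount of damage before
# being overwhelmed, and that damage is shared among the attackers, so each
# aggressor's expected cost falls as the raiding party grows. All numbers are
# hypothetical.

def payoff_per_attacker(n_attackers, victim_damage=2.5, benefit=1.0):
    """Per-attacker benefit of a successful raid minus the expected share of
    the victim's retaliatory damage (toy assumption: shared evenly)."""
    return benefit - victim_damage / n_attackers

for n in (1, 2, 3, 5, 8):
    print(f"{n} attacker(s): expected payoff {payoff_per_attacker(n):+.2f}")
# With these invented parameters the payoff becomes positive only once the
# attackers sufficiently outnumber the victim; the exact threshold depends
# entirely on the assumed damage and benefit values.
```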

The Evolution of Lethal Raiding

The co-occurrence of raids among chimpanzees and humans could be coincidental (raiding as a homoplasy, or convergence), or it could be an evolutionary continuity (raids as a human–chimpanzee synapomorphy). We need an improved reconstruction of hominid grouping patterns to decide which of these is correct. A key issue is whether or not hominids foraged in small fission–fusion parties (like chimpanzees and some modern hunter-gatherers), or in stabler groups or larger fission–fusion parties: only the small fission–fusion parties would be expected to favor a continuous history of raiding, because only this grouping pattern would create significant imbalances of power. No models are yet available to decide this issue. However, most analyses implicitly assume that fission–fusion parties occurred throughout much of hominid evolution, as they do among foraging (and other) humans today. This observation, together with the rarity of raiding in general and its similarity in form and function among chimpanzees and humans, means that the most parsimonious hypothesis is that lethal raiding arose in an ape prior to the chimpanzee–human split and has continued subsequently in the chimpanzee and hominid lines. According to this hypothesis, raiding has been subject to continuous selection for 5–6 million years (Wrangham and Peterson 1996). The consequences of this conclusion are significant because a selective régime lasting several million years, affecting a behavior responsible for a major source of mortality and reproductive success, is, of course, likely to have had substantial effects on psychological evolution. Specifically, this hypothesis proposes that selection has favored, in chimpanzees and humans, a brain that, in appropriate circumstances, seeks out opportunities to impose violence on neighbors. In this sense, the hypothesis is that we have evolved a violent brain, expected to assess accurately the costs of premeditated and unprovoked conflict. It also means that selection should have favored accurate assessment abilities in violent conflict. Hence, incompetent assessment is not expected.

MILITARILY INCOMPETENT BATTLES

Characteristics

In very recent human history, intergroup aggression has often consisted of escalated conflicts in which a significant proportion of combatants are killed. These lethal battles, whether among trench-bound lines of European soldiers under military orders or among voluntary warriors from small villages, have no known analog in chimpanzees or other primates (Boehm 1992). Unlike raids, therefore, they appear to be
evolutionarily novel in the hominid line. Possibly their only analog is among ants. As with raiding, however, further data may show that some other species, such as social carnivores, show parallel behavior. In this article, military incompetence refers to protagonists losing even when they expect to win, which is a common feature of battles (David 1997; Dixon 1976; Gabriel 1986; Messenger 1991; Perry 1996; Regan 1987, 1993). Four symptoms of incompetence are particularly common: overconfidence, underestimation of the enemy, the ignoring of intelligence reports, and wastage of manpower (Dixon 1976: 400). Group thinking exacerbates the problem by contributing six additional symptoms: a shared illusion of invulnerability, collective attempts to maintain shaky but cherished assumptions, an unquestioned belief in the group’s inherent morality, stereotyping the enemy as too evil for negotiation (or too weak to be a threat), a collective illusion of unanimity in a majority viewpoint (based on the false assumption that silence means consent), and self-appointed mind guards to protect the group from information that might weaken resolve (Dixon 1976: 399). Thus, whether decisions are taken by individuals or groups, they are based on assessments that commonly overestimate the actor’s military strength and underestimate the strength of the opponent. Misperceptions of this type are not confined to leaders. Morgenthau (1973) concluded: “Heroes, not horsetraders, are the idols of public opinion. Public opinion, while dreading war, demands that its diplomats act as heroes who do not yield in the face of the enemy, even at the risk of war, and condemns as weaklings and traitors those who yield, albeit only halfway, for the sake of peace.” Such sentiments turn potentially peaceful interactions into violent conflicts (de Mesquita and Lalman 1992). These tendencies occur not only in military interactions but also in intergovernmental relations, i.e., in wars as opposed to battles. For example, Tuchman (1984: 5) concluded that regardless of place or period, governments routinely pursue policies contrary to their own interest even when they are decided by a group, and even though feasible alternatives are available and openly discussed. Such tendencies, she found, were universal across 3000 years, unrelated to history or the type of political regime, nation, or class. They reflect “a rejection of reason” in the face of “ambition, anxiety, status-seeking, face-saving, illusions, self-delusions, fixed prejudices” (Tuchman 1984: 380). Hinde (1993: 41) also concluded from a review of war leadership that decisions by political leaders differ routinely from those of a theoretical “rational actor.” Prominent problems included “misperception or wishful thinking about the opponent’s probable actions or (on) irrational assessments of probable gains or losses. In part because each side tries to conceal its true intentions, they may underestimate the opponent” (Hinde 1993). A similar phenomenon occurs in nonmilitary contexts. In a wide variety of circumstances, and particularly with intergroup hostility, people tend to have illusions about their in-group, i.e., they make positively biased judgements (Hinde 1993; Klar and Giladi 1997; Rabbie 1989; Tajfel 1981). This tendency reflects a ubiquitous preference for in-groups, supported by stereotyping of outsiders.

Hypotheses for Inaccurate Assessment

Individual incompetence. Past explanations for positive illusions in military contexts have been that leaders are stupid or psychologically deviant. However, commanders are clearly generally intelligent, even if they commonly make bad decisions on the battlefield (Dixon 1976: 168; Regan 1987). The idea that stupidity accounts for military incompetence has therefore been abandoned. In favor of commanders being psychologically deviant, Dixon (1976, 1988) noted that the mistakes made by war leaders are essentially errors of emotion—summarized as cognitive dissonance, pontification, denial, risk taking, and anti-intellectualism. He argued that commanders are susceptible to such errors because military promotion tends to favor men with excessive psychological defenses, including need for approval, fear of failure, and resistance to accepting unpalatable information (Dixon 1976: 394). Dixon thus claimed that a distinctive personality type (the “authoritarian personality”) is favored by military hierarchies and that, under the stressful conditions of war, such people tend to make bad decisions. In summary, as an inadvertent effect of military life, leaders tend to be men with poor battle judgment. Some cases of military incompetence may well result from poor performers acquiring a position of leadership (e.g., U.S. General George Custer; see David 1997). However, deviance in commanders seems unlikely to be a complete or even very significant account, because, as Dixon (1976: 397) himself notes, incompetent military decisions are often taken by those (such as politicians or kings) who did not rise to the top of the military profession. Finally, even when decisions are made or strongly influenced by groups, they tend to suffer from the same errors as when made by individuals (de Mesquita and Lalman 1992; Dixon 1976; Hinde 1993; Tuchman 1984). The pervasiveness of poor military assessment thus challenges the hypothesis of individual psychological deviance and makes alternative hypotheses desirable.

Cognitive constraints. Inaccurate assessment in military contests could represent cognitive failure, i.e., a maladaptive illusion. This is theoretically possible in light of experimental data showing cognitive illusions to be widespread, apparently a result of constraints on the design of decision-making mechanisms (Kahneman and Tversky 1972; Tversky and Kahneman 1974, 1981). Particularly relevant to military incompetence is the “overconfidence bias.” In typical demonstrations of the “overconfidence bias,” people are asked how confident they are that they have correctly answered a certain question or have correctly assessed the probability of a particular outcome. On average, their answers are wrong more often than predicted by their level of confidence. In this sense, the subjects are said to have an “overconfidence bias.” The bias has been claimed to be a reliable, reproducible finding and a general phenomenon (Dunning et al. 1990; Lichtenstein et al. 1982; von Winterfeldt and Edwards 1986). It might be argued, therefore, that military assessment failure is a specific instance of a more general overconfidence bias.

Despite the claims for generality, however, it is now known that the overconfidence bias occurs only under certain conditions. For example, when people are asked about the number of questions they expect to get right, rather than about the probability that they are right about any specific question, the overconfidence bias disappears (Gigerenzer 1993). Therefore, the notion of a general overconfidence bias “in the sense that it relates to deficient processes of cognition or motivation” is no longer legitimate (Gigerenzer 1993). Accordingly, the hypothesis that assessment failure in military contexts represents a design flaw, or maladaptive constraint, cannot be supported merely by the occurrence of biases in other contexts or of closely related types of bias. It is discussed later in relation to adaptive hypotheses.

Adaptive self-deception. The claim that military decisions tend to incorporate positive illusions is paralleled by abundant evidence from nonmilitary contexts, where illusions include over-rating of personal or team prowess (e.g., health, professional competence, sporting ability, and ethics), exaggeration of personal contributions to joint tasks, attribution of past successes to personal contributions (but of past failures to bad luck), and a selfish evaluation of a fair outcome in disputes (reviewed in Babcock and Loewenstein 1997; Rue 1994: 156; Taylor 1989; Taylor and Armor 1996). Such biases were once thought to result from assessment failure (Miller and Ross 1975). They often appear useful, however. Furthermore, they tend to appear or disappear in appropriate circumstances. In such cases, therefore, they are now widely regarded as examples of adaptive self-deception (Sedikides et al. 1998; Zuckerman 1979). For example, although self-serving biases are typical within dyads, they are absent when the partners have a close relationship (Sedikides et al. 1998). Similarly, people are more realistic when setting goals, whereas when implementing them their thinking shows positive illusions, which are beneficial (Taylor and Gollwitzer 1995). In a culture of self-effacement, the self-serving bias is enormously reduced (Heine and Lehman 1997). Depressed people, especially those with low self-esteem, are less biased than those who are nondepressed or have high self-esteem (Abramson and Alloy 1981; Freud 1950; Raps et al. 1982; Tennen and Herzberger 1987). Because such studies indicate that true information is accessible but hidden in the unconscious until needed, they imply that self-serving bias represents self-deception rather than an assessment error (cf. Gur and Sackheim 1979; Rue 1994: 88; Trivers 1991: 177). How could self-deception be adaptive in the context of military engagements, where perfect information might be expected to be critical in evaluating the likely success of fighting? In nonmilitary contexts, positive illusions appear to favor success in competition, both among groups and individuals. According to Hinde (1993: 33), for example, in-group bias is associated with increased cohesiveness and cooperation and, thus, more effective social action. Among individuals in competition, self-deception may enhance performance by deflecting attention from anxiety, pain, and fatigue (Starek and Keating 1991). These phenomena suggest that self-deception may underlie success in military contests also. I consider two hypotheses for
how it might do so. They are concerned with performance enhancement and opponent deception, relating to intrapersonal and interpersonal deception, respectively (Mele 1997).

Performance enhancement hypothesis. Positive illusions are widely viewed as a central component of mental health, associated with self-confidence, self-esteem, and self-respect. They are “the fuel that drives creativity, motivation, and high aspirations,” according to Taylor (1989). Positive illusions can thus be viewed as strategically beneficial in suppressing thoughts or feelings that would interrupt progress towards a goal. Such suppression can occur consciously (Fraley and Shaver 1997; Kelly and Kahn 1994; Wegner 1994), in which case there is no need to invoke illusions. However, where suppression is unconscious (such as suppression of negative thoughts or negative feelings), we can say that an individual or group experiences illusions, or self-deception. In sports terminology, positive illusions enable “championship thinking” or “psyching up,” and thereby enhance performance. Starek and Keating (1991) suggest that the performance-enhancing effect of self-deception comes from suppressing stress. The Performance Enhancement Hypothesis, therefore, is that, by suppressing conflicting thoughts or feelings, positive illusions enable individuals or groups to be more effective in achieving a goal (cf. Zuckerman 1979).

Opponent-deception hypothesis. The Performance Enhancement Hypothesis suggests that the way in which positive illusions in conflicts are adaptive is no different from other (noncompetitive) contexts. In contrast, the Opponent-Deception Hypothesis applies specifically to contests. It suggests that in conflicts involving mutual assessment, an exaggerated assessment of the probability of winning increases the probability of winning. Selection therefore favors this form of overconfidence. The Opponent-Deception Hypothesis is derived as follows.

1. Aggression theory predicts that, in a conflict between two parties, each opponent should assess the other. Selection should favor accurate assessment of fighting ability and motivational intensity, allowing the opponent that perceives itself to be weaker or less motivated to yield at minimal cost to itself. The reason for yielding is that, if it would lose a fight anyway, it is better to lose without suffering the costs of fighting (Dawkins and Krebs 1978; Elwood et al. 1998; Kim 1995; Parker 1974; Popp and DeVore 1979).

2. Under the conditions of mutual accurate assessment between evenly matched opponents, it pays Ego to deceive its opponent by presenting false evidence of Ego’s high competitive ability and/or the costs that Ego is willing to pay to win. Bluffing is a stable strategy if the costs of discovering whether or not an opponent is bluffing are high (Adams and Mesterton-Gibbons 1995). An important merit of bluffing is that a successful bluff will win a cheap victory. This creates an arms race in which each opponent attempts to bluff the other and, in turn, attempts to detect the other’s bluff (Dawkins and Krebs 1978; Hasson 1994).
3. In such an arms race, self-deception is a potential mechanism to hide ongoing deception from others, because self-deception reduces the probability of behavioral leakage, i.e., inadvertently revealing the truth through an inappropriate behavior (Trivers 1985, 1991). Alexander (1987: 123) applied this logic to altruism, arguing that the most effective way for individuals to advance their selfish motives within a system of reciprocal altruism is to deceive themselves (and therefore others) into perceiving their own motives as altruistic. A comparable argument has been proposed for cooperation (Surbey and McNally 1997). Similarly, the Opponent-Deception Hypothesis proposes that, in aggressive contexts, humans tend to deceive themselves as a way to bluff successfully. Self-deception is not the only way to conceal evidence of false signaling (Rue 1994: 146), but it is likely to be the most effective. Self-deception may have other, closely related, benefits in contests involving bluff. For example, positive illusions may provide a mechanism by which Ego can devalue information coming from a bluffing opponent. A minimal numerical sketch of this bluffing logic is given below.
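
The expected-payoff logic behind points 1 to 3 can be made concrete with a small numerical sketch. The values below (the prize V, the fighting cost C, the win probabilities, and the “leakage” probabilities) are hypothetical illustrations introduced here, not estimates from the article; the sketch only shows why an accurate weaker opponent should yield, why a successful bluff wins cheaply, and why reduced behavioral leakage raises the expected value of bluffing.

```python
# Minimal sketch of the assessment-and-bluff logic in points 1-3 above.
# All numbers are hypothetical illustrations.

V = 10.0   # value of winning the contest
C = 6.0    # cost of an escalated fight

def ev_fight(p_win):
    """Expected value of fighting; yielding is taken to pay 0."""
    return p_win * V - C

# Point 1: a weaker opponent that assesses accurately should yield.
print(ev_fight(0.3))   # 0.3*10 - 6 = -3.0 -> yielding (payoff 0) is better

# Point 2: a successful bluff makes the opponent perceive poor odds and yield,
# so the bluffer takes V without paying the cost of fighting.
if ev_fight(0.3) < 0:
    print(f"Opponent yields; bluffer gains {V} at no fighting cost")

# Point 3: model self-deception only as lowering the chance the bluff "leaks".
# A leaked bluff is called and the contest escalates at the bluffer's true odds.
true_p_win = 0.5

def ev_bluff(p_leak):
    return (1 - p_leak) * V + p_leak * ev_fight(true_p_win)

print(ev_bluff(0.5))   # knowing bluffer, high leakage: 0.5*10 + 0.5*(-1) = 4.5
print(ev_bluff(0.1))   # self-deceived, low leakage:    0.9*10 + 0.1*(-1) = 8.9
# Lower leakage raises the expected value of bluffing, which is the advantage
# the Opponent-Deception Hypothesis attributes to self-deception.
```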

Consequences of Inaccurate Assessment

For at least two reasons, the harboring of positive illusions increases the risk that two opponents in a conflict will fight. First, it means that they have higher expectations of the outcome of fighting than are realistic. Second, they tend to perceive each other’s signals cynically. As a result, the “contract zone” within which conflicts can be resolved to mutual satisfaction is reduced (as shown experimentally in studies of the effect of self-serving bias in bargaining impasse [Babcock and Loewenstein 1997]). Thus, because both opponents are selected to overestimate their own abilities and underestimate the abilities and/or motivation of the opposition, fighting occurs more often than it would if their mutual assessment were accurate. Both opponents end up escalating conflicts that only one can win and suffering higher costs than they would if assessment were accurate. This result is “globally maladaptive” (in the sense that self-deception reduces the fitness of the average individual), in the same way as most investment in aggressive anatomy or behavior. As a parallel example, the canine teeth of male baboons Papio anubis have evolved to be long and sharp, due to an evolutionary arms race among male baboons to possess the most effective weapons. One result is that their canine teeth regularly cause wounds in females and other males. In this sense, the evolution of canine teeth is disadvantageous for the average individual, compared to a hypothetical baboon species in which males have short, blunt canine teeth. In a similar way, I suggest that self-deception has been positively selected in military contests, because without it a player would be less effective (e.g., hesitant or outbluffed). Nevertheless, the unfortunate result is that conflicts are more frequent and severe than they would be without it. Thus, self-deception in conflicts is disadvantageous for the species as a whole.
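
The shrinking “contract zone” can also be illustrated with a short worked example. The stakes, fighting costs, and beliefs below are hypothetical numbers chosen only to show the mechanism: when the two sides’ subjective probabilities of winning sum to more than 1, the set of settlements that both prefer to fighting can disappear.

```python
# Worked example of how mutual positive illusions shrink the contract zone
# described above. All numbers are hypothetical.

V = 100.0   # value of the disputed resource
C = 30.0    # cost each side pays if the conflict escalates

def min_acceptable_share(p_win):
    """Smallest negotiated share a side prefers to fighting (the expected
    value of fighting is p_win * V - C; any share at least this large
    beats fighting)."""
    return p_win * V - C

def contract_zone_exists(p_a, p_b):
    return min_acceptable_share(p_a) + min_acceptable_share(p_b) <= V

print(contract_zone_exists(0.6, 0.4))   # accurate beliefs (sum to 1): True, wide zone
print(contract_zone_exists(0.8, 0.8))   # mutual overconfidence: True, zone collapses to a point
print(contract_zone_exists(0.9, 0.9))   # stronger illusions: False, no peaceful split exists
```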

DISCUSSION

Comparison of Hypotheses

Military incompetence in the most general sense is clearly due to a wide range of factors, such as difficulties of communication, the cynical willingness of some commanders to sacrifice their own men’s lives, nepotistic appointments of genuinely incompetent commanders, problems of assessing the reliability of allies, unwillingness to adapt to changed circumstances, or conflicts between foreign policy and domestic objectives (de Mesquita and Lalman 1992). In this article, however, I follow Dixon (1976) in focusing on a specific and apparently widespread form of incompetence by which individuals or groups, whether soldiers or politicians, systematically overestimate their own strength and underestimate the strength of the opposition. This tendency for positive illusion plays an important role in diverse examples of military and political failure (Tuchman 1984). Previous explanations for military incompetence have assumed that it is nonadaptive or maladaptive, making it an interesting evolutionary puzzle. The most viable nonadaptive explanation is that assessment failure results from cognitive constraints, i.e., humans are inherently incompetent at assessing relevant strengths and weaknesses of themselves and their opponents. This is supported both by the occurrence of frequent military errors and by the evidence that, in many contexts, human judgments are systematically poor or irrational. However, the occurrence of military failure is consistent with both adaptive and nonadaptive explanations of biased assessment. According to the adaptive hypotheses, military failure occurs because, although self-deception benefits each player, it also leads to a higher probability of violence and a more severe contest. Thus, the occurrence of military disasters does not help per se to decide whether overconfidence is generally adaptive. Similarly, the evidence that humans are often irrational in other contexts has little relevance to assessment failures in military contexts. Assessment failures, although widespread in laboratory tasks, are found less often in evolutionarily relevant problems (Cosmides 1989; Cosmides and Tooby 1992; Gigerenzer and Hug 1992). Because violent conflicts are clearly a prime example of a problem with evolutionary salience, it would be a surprise if the widespread occurrence of positive illusions in battles were maladaptive. Understanding whether biased assessment in military contexts is adaptive or maladaptive, therefore, depends not on theory but on analysis of opponents’ strengths, assessment strategies, and successes in particular contests. The adaptive explanations predict that players with positive illusions tend to succeed sufficiently often that it pays to have such illusions. Miscalculations then result from a tradeoff between successful and failed bluffs, rather than from an inherent inability to assess correctly. Two adaptive explanations are proposed, the Performance Enhancement and Opponent-Deception Hypotheses. Both conform to the theoretical expectation that cognitive processes are adapted to evolutionarily relevant problems. Both invoke self-deception.
According to the Performance Enhancement Hypothesis, self-deception promotes effective action by reducing internal conflicts, as has previously been suggested in other contexts (Hinde 1993; Starek and Keating 1991). The Opponent-Deception Hypothesis suggests that self-deception reduces inadvertent signaling of weakness and is proposed here for the first time. The two hypotheses can, in theory, be distinguished by the extent to which positive illusions are associated with the ability to communicate to an opponent. Thus, the Opponent-Deception Hypothesis predicts that the strength of positive illusions covaries with the intensity of communication between opponents. For instance, in raids the aggressors do not communicate to the victims, so the Opponent-Deception Hypothesis expects no self-deception (unless it is needed as soon as fighting starts). However, communication in battles is two-way (both before and after the start of fighting), favoring self-deception. According to the Performance Enhancement Hypothesis, by contrast, self-deception will occur in both raids and battles (although if battles are more dangerous, the pressure for self-deception may be greater). The relative power of the Opponent-Deception and Performance Enhancement Hypotheses can therefore be tested by comparing the strength of positive illusions in raids as opposed to battles. The role of communication in promoting self-deception can be examined in lower-level conflicts also. For example, a counterintuitive result from industrial confrontations suggests a relationship between self-deception and communication: increased communication between the parties made it harder for the party with the stronger case to win (Stephenson 1984). The blurring of negotiation zones imposed by the dishonesty of self-deception may be contributing to this effect. The level of self-deception can be expected to be influenced by conditions such as the familiarity of the opponents, relationships with allies, and moral ideologies. If similar effects are found in a wide variety of contexts, the Opponent-Deception Hypothesis will be supported. The Performance Enhancement Hypothesis will be supported if self-serving biases show no relation to the exchange of signals between opponents. Neither of the adaptive hypotheses is intended to suggest that positive illusions arose evolutionarily specifically because of their role in resolving escalated conflicts. Positive (and negative) illusions are exhibited in a range of social interactions, such as altruism, conflict, and cooperation (Surbey and McNally 1997; Taylor 1989), none of which have any obviously primary significance from an evolutionary perspective. I assume, therefore, that self-deception evolved independently of particular types of conflict and was co-opted into its current role. Indeed, it seems likely that self-deception in conflicts between coalitions is an elaboration of self-deceptive strategies that individuals adopt in dyadic interactions. The adaptive hypotheses do propose, however, that positive illusions are prominent in contests specifically because of the advantages they tend to give.

Evolutionary Significance of Self-Deception in Contests

The potential effects of self-deception have not been widely considered in animal conflict theory, because selection is expected to favor accurate assessments (Enquist and Leimar 1983). Analysis of its effects may enrich our understanding of the
evolution of conflicts. For example, because self-deception reduces perceived opponent asymmetry, it should cause opponents to fight more intensely (e.g., longer and riskier bouts, Enquist et al. 1990). This means that, other things being equal, species with a greater capacity for self-deception are expected to have more intense conflicts. Similarly, variation between opponents in the success of self-deception provides a source of variance in the outcome of a contest that is additional to factors such as fighting ability and motivational strength. Accordingly, in species with greater self-deception abilities, contest outcomes are expected to be less predictable. It is therefore of interest to know how species vary in their capacity for self-deception. It seems likely that the human capacity for self-deception is particularly strong, given a partitioned consciousness in which an individual can simultaneously hold two contradictory beliefs (Sackeim and Gur 1997). If so, the intensity and unpredictability of human battles can be expected to be enhanced relative to a less self-deceptive species, both in dyadic and coalitional contexts. The strength of positive illusions is correlated, at least sometimes, with circulating testosterone, both in men and in women. For example, Cashdan (1995) found that androgens (and estradiol) were positively correlated with self-regard in women, as measured by the degree to which subjects over-ranked themselves in a peer-ranking test. This suggests that one mechanism by which testosterone promotes aggressive behavior is by enhancing the self-serving bias, suggesting opportunities for experimental investigation. For example, men appear more susceptible than women to self-deception, both for positive and for negative illusions (Hartung 1988: 171). Accordingly, one route by which a high level of testosterone may lead to an increased probability of fighting could be through promotion of positive illusions about competitive ability. Some authors object to an adaptive analysis of violence (Gould 1996), because the word “adaptive” conjures up visions of warfare also being “good,” or “biological,” or “hard-wired.” But, of course, even if warfare is adaptive, that does not make it right, or genetically determined, or inevitable. This analysis does suggest, however, that selection has favored two kinds of immoral tendency. On the one hand, our history of raiding has given us the tendency to attack whenever the costs appear sufficiently low. On the other hand, in battles and political contests we deceive ourselves into thinking that our own stance is more moral than our opponent’s (Hartung 1995; Loewenstein 1996). Although favored by natural selection, these immoral biases appear to foster violence. Thus, it is all the more important to understand them.

For discussion and comments on an earlier draft I thank C. Bergstrom, T. Burnham, R. Connor, M. Daly, I. DeVore, P. Ellison, D. Haig, R. Hinde, C. Knott, A. McGuire, R. Nesse, D. Nott, L. Rue, R. Trivers, B. Sillen-Tullberg, M. Wilson, W. Zimmerman, and two anonymous reviewers.

REFERENCES

Abramson, L.Y., and Alloy, L.B. Depression, non-depression, and cognitive illusions: a reply to Schwarz. Journal of Experimental Psychology 110: 436–437, 1981.
Adams, E.S., and Mesterton-Gibbons, M. The cost of threat displays and the stability of deceptive communication. Journal of Theoretical Biology 175: 405–421, 1995. Alexander, R.D. The Biology of Moral Systems. Hawthorne, NY: Aldine de Gruyter, 1987. Alexander, R.D. Evolution of the human psyche. In The Human Revolution: Behavioral and Biological Perspectives on the Origins of Modern Humans, P. Mellars and C. Stringer (Eds). Princeton: Princeton University Press, 1989, pp. 455–513. Babcock, L., and Loewenstein, G. Explaining bargaining impasse: the role of self-serving biases. Journal of Economic Perspectives 11: 109–126, 1997. Boehm, C. Segmentary “warfare” and the management of conflict: comparison of East African chimpanzees and patrilineal-patrilocal humans. In Coalitions and Alliances in Humans and Other Animals, A.H. Harcourt and F.B.M. de Waal (Eds.). Oxford: Oxford University Press, 1992, pp. 137–173. Cashdan, E. Hormones, sex, and status in women. Hormones and Behavior 29: 354–366, 1995. Cosmides, L. The logic of social exchange: has natural selection shaped how humans reason? Studies with the Wason selection task. Cognition 31: 187–206, 1989. Cosmides, L., and Tooby, J. Cognitive adaptations for social exchange. In The Adapted Mind: Evolutionary Psychology and the Generation of Culture, J.H. Barkow, L. Cosmides, and J. Tooby (Eds.). New York: Oxford University Press, 1992, pp. 163–228. David, S. Military Blunders: The How and Why of Military Failure. New York: Carroll and Graf, 1997. Dawkins, R., and Krebs, J.R. Animal signals: information or manipulation? In Behavioral Ecology: An Evolutionary Approach, J.R. Krebs and N.B. Davies (Eds.). Oxford: Blackwell, 1978, pp. 282– 309. de Mesquita, B.B., and Lalman, D. War and Reason: Domestic and International Imperatives. New Haven: Yale University Press, 1992. Dixon, N. On the Psychology of Military Incompetence. London: Jonathan Cape, 1976. Dixon, N. Our Own Worst Enemy. London: Futura, 1988. Dunning, D., Griffin, D., Milojkovic, J., and Ross, L. The overconfidence effect in social prediction. Journal of Personality and Social Psychology 58: 568–581, 1990. Elwood, R.W., Wood, K.E., Gallagher, M.B., and Dick, J.T.A. Probing motivational state during agonistic encounters in animals. Nature 393: 66, 1998. Enquist, M., and Leimar, O. Evolution of fighting behavior; decision rules and assessment of relative strength. Journal of Theoretical Biology 102: 387–410, 1983. Enquist, M., Leimar, O., Ljungberg, T., Mallner, Y., and Segerdahl, N. A test of the sequential assessment game: fighting in the cichlid fish Nannacara anomala. Animal Behavior 40: 1–14, 1990. Fraley, R.C., and Shaver, P.R. Adult attachment and the suppression of unwanted thoughts. Journal of Personality and Social Psychology 73: 1080–1091, 1997. Freud, S. Collected Papers (Vol. 4). (J. Riviere, Trans.). London: Hogarth Press, 1950. Gabriel, R. Military Incompetence: Why the American Military Doesn’t Win. New York: Noonday Press, 1986. Gigerenzer, G. The bounded rationality of probabilistic mental modules. In Rationality, K.I. Manktelow and D.E. Over (Eds.). London: Routledge, 1993, pp. 244–313. Gigerenzer, G., and Hug, K. Domain-specific reasoning: social contracts, cheating, and perspective change. Cognition 43: 127–171, 1992. Goodall, J. The Chimpanzees of Gombe: Patterns of Behavior. Cambridge: Harvard University Press, 1986. Gould, S.J. The Diet of Worms and the defenestration of Prague. Natural History Sept: 18–67, 1996. Gur, C.R., and Sackheim, H.A. 
Self-deception: a concept in search of a phenomenon. Journal of Personality and Social Psychology 37: 147–169, 1979. Hartung, J. Deceiving down: conjectures on the management of subordinate status. In Self-Deceit: An Adaptive Mechanism, J. Lockard and D. Paulus (Eds.). Englewood Cliffs, NJ: Prentice-Hall, 1988, pp. 170–185. Hartung, J. Love thy neighbor: the evolution of in-group morality. Skeptic 3: 86–99, 1995. Hasson, O. Cheating signals. Journal of Theoretical Biology 167: 223–238, 1994. Heine, S.J., and Lehman, D.R. The cultural construction of self-enhancement: an examination of groupserving biases. Journal of Personality and Social Psychology 72: 1268–1283, 1997. Hinde, R.A. Aggression and war: individuals, groups, and states. In Behavior, Society and International Conflict, P.E. Tetlock, J.L. Husbands and R. Jervis (Eds.). Oxford: Oxford University Press, 1993, pp. 8–70.
Kahneman, D., and Tversky, A. Subjective probability: a judgement of representativeness. Cognitive Psychology 3: 430–454, 1972. Keeley, L.H. War Before Civilization. New York: Oxford University Press, 1996. Kelly, A.E., and Kahn, J.H. Effects of suppression on personal intrusive thoughts. Journal of Personality and Social Psychology 66: 998–1006, 1994. Kim, Y-G. Status signalling games in animal contests. Journal of Theoretical Biology 176: 221–231, 1995. Klar, Y., and Giladi, E.E. No one in my group can be below the group’s average: a robust positivity bias in favor of anonymous peers. Journal of Personality and Social Psychology 73: 885–901, 1997. Lichtenstein, S., Fischhoff, B., and Phillips, L.D. Calibration of probabilities: the state of the art to 1980. In Judgment Under Uncertainty: Heuristics and Biases, D. Kahneman, P. Slovic, and A. Tversky (Eds.). New York: Cambridge University Press, 1982, pp. 306–334. Loewenstein, G. Behavioral decision theory and business ethics: skewed tradeoffs between self and others. In Codes of Conduct: Behavioral Research into Business Ethics, D. Messick and A. Tenbrunsel (Eds.). New York: Russell Sage Foundation, 1996, pp. 214–227. Manson, J.H., and Wrangham, R.W. Intergroup aggression in chimpanzees and humans. Current Anthropology 32: 369–390, 1991. Mele, A.R. Real self-deception. Behavioral Brain Sciences 20: 91–136, 1997. Messenger, C. Great Military Disasters. New York: Gallery Books, 1991. Miller, D.T., and Ross, M. Self-serving biases in attribution of causality: fact or fiction? Psychological Bulletin 82: 213–225, 1975. Morgenthau, H. Politics Among Nations. New York: Alfred A. Knopf, 1973. Parker, G.A. Assessment strategy and the evolution of fighting behaviour. Journal of Theoretical Biology 47: 223–243, 1974. Perry, J. Arrogant Armies: Great Military Disasters and the Generals Behind Them. New York: John Wiley, 1996. Popp, J., and DeVore, I. Aggressive competition and social dominance theory. In The Great Apes, D.A. Hamburg and E.R. McCown (Eds.). Menlo Park, CA: Benjamin/Cummings, 1979, pp. 317–338. Rabbie, J.M. Group processes as stimulants of aggression. In Aggression and War, J Groebel and R.A. Hinde (Eds.). Cambridge: Cambridge University Press, 1989, pp. 141–155. Raps, C.S., Peterson, C., Reinhard, K.E., Abramson, L.Y., and Seligman, M.E.P. Attributional style among depressed patients. Journal of Abnormal Psychology 91: 102–108, 1982. Regan, G. Someone Had Blundered: A Historical Survey of Military Incompetence. London: B.T. Batsford, 1987. Regan, G. Snafu: Great American Military Disasters. New York: Avon, 1993. Rue, L. By the Grace of Guile: The Role of Deception in Natural History and Human Affairs. New York: Oxford University Press, 1994. Sackeim, H.A., and Gur, R.C. Flavors of self-deception: ontology and epidemiology. Behavioral Brain Sciences 20: 125–126, 1997. Sedikides, C., Campbell, W.K., Reeder, G., and Elliot, A.J. The self-serving bias in relational context. Journal of Personality and Social Psychology 74: 378–386, 1998. Starek, J.E., and Keating, C.F. Self-deception and its relationship to success in competition. Basic and Applied Social Psychology 12: 145–155, 1991. Stephenson, G.M. Intergroup and interpersonal dimensions of bargaining and negotiation. In The Social Dimension: Europeans Developments in Social Psychology (Vol. 2), H. Tajfel (Ed.). Cambridge: Cambridge University Press, 1984, pp. 646–667. Surbey, M.K., and McNally, J.J. 
Self-deception as a mediator of cooperation and defection in varying social contexts described in the iterated Prisoner’s Dilemma. Evolution and Human Behavior 18: 417–435, 1997. Tajfel, H. Human Groups and Social Categories. Cambridge: Cambridge University Press, 1981. Taylor, S.E. Positive Illusions: Creative Self-Deception and the Healthy Mind. New York: Basic Books, 1989. Taylor, S.E., and Armor, D.A. Positive illusions and coping with adversity. Journal of Personality 64: 873–898, 1996. Taylor, S.E., and Gollwitzer, P.M. Effect of mindset on positive illusions. Journal of Personality and Social Psychology 69: 213–226, 1995. Tennen, H., and Herzberger, S. Depression, self-esteem, and the absence of self-protective attributional biases. Journal of Personality and Social Psychology 52: 72–80, 1987.
Trivers, R.L. Social Evolution. Menlo Park, California: Benjamin/Cummings, 1985. Trivers, R.L. Deceit and self-deception: the relationship between communication and consciousness. In Man and Beast Revisited, M.H. Robinson and L. Tiger (Eds.). Washington, DC: Smithsonian Institution Press, 1991, pp. 175–192. Tversky, A., and Kahneman, D. Judgement under uncertainty: heuristics and biases. Science 185: 1124– 1131, 1974. Tversky, A., and Kahneman, D. The framing of decisions and the psychology of choice. Science 211: 453–458, 1981. Tuchman, B. The March of Folly: From Troy to Vietnam. New York: Alfred A. Knopf, 1984. von Winterfeldt, D., and Edwards, W. Decision Analysis and Behavioral Research. Cambridge: Cambridge University Press, 1986. Wegner, D.M. Ironic processes and mental control. Psychological Review 101: 34–52, 1994. Wrangham, R.W., and Peterson, D. Demonic Males: Apes and the Origins of Human Violence. Boston: Houghton Mifflin, 1996. Zuckerman, M. Attribution of success and failure revisited, or the motivational bias is alive and well in attribution theory. Journal of Personality 47: 245–287, 1979.
