Individual Values and Social Goals in Environmental Decision Making

David H. Krantz¹, Nicole Peterson¹, Poonam Arora¹, Kerry Milch¹, and Ben Orlove²

¹ Columbia University, Department of Psychology, Center for Research on Environmental Decisions (CRED), MC 5501, New York, NY 10027, USA
² University of California, Davis, Department of Environmental Science and Policy, One Shields Way, Davis, CA 95616, USA

Summary. Environmental problems can be viewed as a failure of cooperation: individual choices are seemingly made based on individual benefits, rather than benefits for all. The game-theoretic structure of these problems thus resembles commons dilemmas or similar multiplayer strategic choices, in which the incentives to eschew cooperation can lead to unfavorable outcomes for all the players. Such problems can sometimes be restructured by punishing noncooperators. However, cooperation can also be enhanced when individuals adopt intrinsic values, such as cooperation, as a result of social goals to affiliate with a group. We define a social goal as any goal to affiliate with a group, and also any goal that derives from any group affiliation. We suggest that individual decision making in group contexts depends on a variety of different types of social goals, including conformity to group norms, sharing success, and carrying out group-role obligations. We present a classification of social goals and suggest how these goals might be used to understand individual and group behavior. In several lab- and field-based studies, we show how our classification of social goals can be used to structure observation and coding of group interactions. We present brief accounts of two laboratory experiments, one demonstrating the relationship between group affiliation and cooperation, the second exploring differences in goals that arise when a decision problem is considered by groups rather than individuals. We also sketch three field projects in which these ideas help to clarify processes involved in decisions based on seasonal climate information. Identifying social goals as elements of rational decision making leads in turn to a wide variety of questions about tradeoffs involving social goals, and about the effects of uncertainty and delay in goal attainment on tradeoffs involving social goals. Recognizing a variety of context-dependent goals leads naturally to consideration of decision rules other than utility maximization, both in descriptive and in prescriptive analysis. Although these considerations about social goals and decision rules apply in most decision-making domains, they are particularly important for environmental decision making, because environmental problems typically involve many players, each with multiple economic, environmental, and social goals, and because examples

abound where the players fail to attain the widespread cooperation that would benefit everyone (compared to widespread noncooperation).

1 Introduction

About 25 years ago, the New York City Council strengthened its “scoop-the-poop” ordinance. Dog walkers were required to scoop and properly discard their dogs’ feces; violators would receive a summons and might have to pay a substantial fine. To some (including one of the authors, who is not a dog owner), this ordinance appeared to be a model of poor legislation: it would be costly to enforce, and if not enforced, it would merely add to the many ways in which New Yorkers already disrespected the legal system. However, this negative prejudgment turned out to be off the mark: there was widespread compliance. Costly enforcement was not necessary. Compliance was not universal, of course, but it did not have to be. The sidewalks of New York City became cleaner. New Yorkers walk much, and walking became more pleasant. This improvement has been largely sustained over the years.

Why did it turn out thus, when so many other New York City ordinances are ignored and remain seemingly unenforceable? Was the threat of punishment more credible than for other minor offenses such as littering or jaywalking? Are dog owners, as a group, more afraid of punishment than others? Or are they more civic-minded and law-abiding than wrong-way bicyclists or horn-blowing motorists? Indeed, civic-mindedness and concern about punishment may each have influenced the dog owners. Anticipated social reward—perhaps a feeling of group accomplishment—may also have affected their behavior.

In this chapter, we suggest a general framework for analyzing problems of cooperation or compliance. We include both social rewards and social sanctions, and we discuss the relation of social goals to group affiliations. We suggest that individual differences in cooperation relate more to differences in group affiliation than to personality traits such as altruism, fear of punishment, or willingness to punish others. As applied to the scoop-the-poop puzzle, our analysis suggests that even weak social identities (such as belonging to the group of neighborhood dog owners, or to the group of civic-minded New Yorkers) could have given rise to social goals that played a crucial role for compliance with the ordinance. Many environmentally related problems are partially analogous to the dog feces problem. For example, efforts by individuals, groups, or countries to reduce greenhouse gas emissions, hoping to minimize global climate change, are partly analogous to the scooping actions undertaken by dog owners to minimize unpleasant sidewalks. Decisions by groups or countries are different, of course, from those by individuals, but the values and goals of individuals strongly influence decisions by groups or countries. A deeper understanding of the values and goals that determine compliance and cooperation is important
for the design of solutions to a broad range of environmental problems. We suggest here that such understanding is closely tied to understanding the multiple group identities [7] and the resulting social goals of the people whose cooperation is in question for any given problem. For present purposes, we define a social goal as either a goal to affiliate with a group or a goal that is a consequence of an affiliation. We include both temporary and long-term affiliations with groups of any size (from a relationship with one other person to an abstract identity as “good citizen” or the like). Consideration of social goals is important not only for the understanding of behavior but also for prescriptive economic analysis. From the standpoint of prescriptive models, social goals present an array of novel questions. What are the rules describing the tradeoff of one social goal against another, or against an economic goal? How are social goals affected by uncertainty and by delay? These central questions about intergoal tradeoffs, intertemporal tradeoffs, and uncertainty are often handled by adopting sweeping, simplified behavioral assumptions (discounted expected multiattribute utility) [20,35]. The question of when such simplification gives a good approximation for a prescriptive problem can only be raised once the social goals involved have been described, which brings one back again to understanding the multiple group identities of the decision makers.

Section 2 presents a brief discussion of commons dilemma problems, such as the scoop-the-poop problem. We cannot do justice to the massive literature on this topic. We merely present and discuss a game-theoretic framework in which payoffs are affected by social institutions and social goals. In particular, we use this framework to discuss the importance of intrinsic social rewards for mutual cooperation. (The huge literature on intrinsic rewards, from Bruner [10] to Deci and Moller [15], is also part of the background but beyond the scope of our discussion.) In Section 3 we discuss group affiliation and goals that derive from it. The discussion leads to a tentative classification of social goals. Such classification can serve several purposes. First, it helps one to keep in mind the wide variety of different goals that may be in play for different decision makers. Relatedly, the classification can guide qualitative research on the process of decision making: specifically, the development of coding categories for records of group interactions in decision settings. (Later sections offer some examples in which such group interactions are observed and coded.) Lastly, the classification serves to organize a set of detailed questions about decision mechanisms, that is, about the behavioral mechanisms that govern tradeoffs among different goals and the effects of time delay and uncertainty. Section 4 contains a brief discussion of such decision mechanisms. We emphasize the novel research questions posed by this framework, although answers are scarce. Relating social goals to environmental decisions is one of the major themes of the Center for Research on Environmental Decisions (CRED), and the last two sections discuss some current CRED projects. In Section 5 we describe
two lines of laboratory investigation with small groups. The first looks at group identity as a precursor of cooperation, and the second at group decision processes related to reframing decision problems. Section 6 describes field observations of group decision making and additional laboratory studies suggested by them. Finally, Section 7 considers the implications of our work for prescriptive economic analysis. We conclude with a summary.

2 Coercion and Intrinsic Reward in Commons Dilemmas

2.1 Commons Dilemmas and Coercion

The scoop-the-poop problem is a commons dilemma [24]: if many cooperate, the gain for each person is large, but the portion of that gain that stems directly from any one person’s cooperation is too small to repay his or her effort or cost; therefore each person has an incentive not to cooperate, regardless of whether many or only few others cooperate. Many decision situations have a similar structure. Discharge of chemical wastes into waterways, or discharge of polluting or greenhouse gases or aerosols into the atmosphere, provides a host of examples. As Hardin [24] put it: “The rational man finds that his share of the cost of the wastes he discharges into the commons is less than the cost of purifying his wastes before releasing them.” The collapse of fisheries due to overfishing provides yet another family of examples. Widespread self-restraint would make most users of a fishery better off, and most recognize this to be true, but there is no incentive for self-restraint by any one user, because the improvement in fish stocks produced by any one user’s restraint would be too small to repay that user’s sacrifice.

Hardin’s prescribed solution for commons dilemmas was “mutual coercion mutually agreed upon.” Dog owners helped to elect the New York City Council, which enacted the coercive scoop-the-poop ordinance. Similarly, in times of water shortage, city or regional officials impose restrictions on the water usage of the voters to whom they are directly or indirectly responsible. International attempts at mutually agreed coercion are seen in many treaties, including those intended to curb overfishing, to protect endangered species, to curb chlorofluorocarbon emissions (damaging to the protective ozone layer in the stratosphere), and to curb greenhouse gas emissions (involved in global warming).

Since Hardin’s work in 1968, there has been much research on commons dilemmas. Ostrom [39] provides a valuable review and synthesis, covering laboratory studies, field studies, and theory (see also [40]). A major conclusion is that mutual coercion does not always or even usually require formalized institutions stemming from treaties, legally enforced contracts, legislation, or
administrative law. Many groups succeed in creating norms for cooperation that command wide compliance. Ostrom finds that trust and reciprocity are central to cooperation, and that norm development and group affiliation enhance cooperation. Ostrom’s [39] design principles for the commons, particularly those concerning group membership or boundaries and participation, signal the importance of group affiliation. Both the empirical findings and the theory reviewed by Ostrom emphasize the importance of sanctions (negative consequences) for violations of the relevant norms. Also noteworthy are the laboratory studies of Fehr and his collaborators [11,18], which emphasize the important role of individuals who, despite cost to themselves and lack of gain, are ready to impose sanctions on noncooperators. Evolutionary theory for repeated play in commons dilemmas [22] emphasizes the importance of conditional cooperation: a disposition to cooperate initially and to continue only if cooperation is soon reciprocated. Here, the discontinuation of cooperation can be viewed as a sanction.

2.2 Intrinsic Reward for Cooperation

The threat of sanctions is undoubtedly important, but one should also consider the rewards stemming from cooperation. We discuss this first in the context of the prisoner’s dilemma, which can be characterized abstractly by the two-person payoff matrix shown in Table 1. Player #1 has a choice of two strategies, shown as the two main rows of Table 1, labeled Cooperate and Defect. Player #2 has a similar choice, shown as the two main columns. The outcomes for each player are shown in the cell of the table determined by their two strategy choices, with Player #1’s payoff listed first and Player #2’s second.

Table 1. Prisoner’s dilemma outcome ranks. In each cell, the first entry is Player #1’s outcome ranking; the second is Player #2’s.

                            Player #2: Cooperate    Player #2: Defect
    Player #1: Cooperate           3 / 3                  1 / 4
    Player #1: Defect              4 / 1                  2 / 2

The outcome ranking for each player goes from 1 (low) to 4 (high). For Player #1, row 2 (Defect) dominates, because the outcome for row 1 ranks lower within each column: 3 < 4 and 1 < 2. Similarly, for Player #2, column 2 (Defect) dominates: in row 1, 3 < 4, whereas in row 2, 1 < 2. In a situation where the game will be played only once, each player should Defect to maximize the rank of her payoff. Empirically, a majority of people do defect on a single isolated play [5], but many do cooperate, and several different situational changes have been shown to induce more people to cooperate [41].

Why do they cooperate? There seem to be two broad categories of explanation: (i) confusion and (ii) group goals. The first type of explanation asserts that people fail to perceive that noncooperation is dominant. In real-life examples, outcomes are usually uncertain, which may contribute to failure of perceived dominance. The ranking from 1 up to 4 may not be perceived as such if people deviate widely from expected utility (for reasons rational or otherwise) in processing the uncertainties. Most laboratory experiments on the prisoner’s dilemma, however, avoid complication due to uncertainty: they present outcomes as certain or they precalculate expected values and display them in the payoff matrix, which should make dominance easier to recognize [6]. Even under certainty, people may not represent the payoffs as a conditional table as in Table 1, and may therefore not recognize dominance [49].

Some people who do recognize the dominance of Defect in the payoffs nevertheless choose Cooperate. Why? This brings us to explanation category (ii): very simply, they want to cooperate. More precisely, some people view mutual cooperation as an additional goal. Mutual cooperation is an intrinsic social reward. It is as though Player 1 implicitly adds a nonzero value C to the entry in the (Cooperate, Cooperate) cell of the payoff matrix, and Player 2 adds C′. In Table 2, we denote the increased rewards for mutual cooperation by 3 + C or 3 + C′. This is not a rigorous notation, because 3 was just the rank of an outcome; we use this notation to indicate that the outcome is improved for each player. The result is shown in Table 2.

Table 2. Prisoner’s dilemma with rewards for mutual cooperation. In each cell, the first entry is for Player #1; the second is for Player #2.

                            Player #2: Cooperate    Player #2: Defect
    Player #1: Cooperate      3 + C / 3 + C′               1 / 4
    Player #1: Defect              4 / 1                   2 / 2

If the C and C′ improvements are sufficiently large, then the outcomes in the upper left cell for each player are highest, and Defect no longer dominates.
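To make the dominance argument concrete, here is a minimal Python sketch (ours, not part of the chapter) that treats the ranks in Tables 1 and 2 as if they were cardinal payoffs and enumerates pure-strategy Nash equilibria; the particular values chosen for C and C′ are purely illustrative.

```python
from itertools import product

def best_responses(payoffs, player, opponent_choice):
    """Strategies maximizing `player`'s payoff, holding the opponent's choice fixed."""
    options = {s: payoffs[(s, opponent_choice) if player == 0 else (opponent_choice, s)][player]
               for s in ("C", "D")}
    best = max(options.values())
    return {s for s, v in options.items() if v == best}

def pure_nash(payoffs):
    """Enumerate pure-strategy Nash equilibria of a 2x2 game."""
    return [(s1, s2) for s1, s2 in product("CD", repeat=2)
            if s1 in best_responses(payoffs, 0, s2) and s2 in best_responses(payoffs, 1, s1)]

# Table 1: outcome ranks used as payoffs (first entry Player #1, second Player #2).
table1 = {("C", "C"): (3, 3), ("C", "D"): (1, 4),
          ("D", "C"): (4, 1), ("D", "D"): (2, 2)}

# Table 2: add illustrative intrinsic rewards C and C' to the mutual-cooperation cell.
C, C_prime = 2, 2            # hypothetical values, large enough to matter
table2 = dict(table1)
table2[("C", "C")] = (3 + C, 3 + C_prime)

print(pure_nash(table1))     # [('D', 'D')] -- Defect dominates for both players
print(pure_nash(table2))     # [('C', 'C'), ('D', 'D')] -- mutual cooperation is now stable
```

With C = C′ = 0, only mutual defection is stable; once the intrinsic rewards are large enough, mutual cooperation becomes a second stable outcome, which is exactly the change in structure described above.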

If Player #1 thinks that C′ is high, then he may expect Player #2 to be tempted to maximize his outcome by cooperating. If C is also high, and if Player #2 knows this, then the two players each recognize that both will be best off through cooperation, and they will be likely to cooperate. With this altered payoff structure, the situation is no longer a prisoner’s dilemma. High values of C and C′ can be thought of as representing valuation of a group goal for the players. That is, the addend is not associated with the action of cooperation per se, because it does not show up when the other player defects. Rather, it depends on a group outcome, namely, that both cooperate. We keep open the possibility that C ≠ C′, that is, that the intrinsic reward value of mutual cooperation is different for the two players. In particular, Defect might dominate for just one of the players. If the other player knows this, she will assume that her counterpart will defect, and thus, despite having a high intrinsic value for cooperation, she will also likely defect.

We defer to Section 3 the consideration of how such an intrinsic reward for mutual cooperation might arise from group affiliation, and to Section 4 a discussion of how such a reward interacts with other goals (represented in a preliminary fashion by the “+” signs in Table 2). To make clear why such intrinsic rewards could play a key role in commons dilemmas and other similar decision situations, we consider a four-person symmetric game with both intrinsic rewards and sanctions. The symmetry assumption greatly reduces the generality, but it allows the payoffs for the multiperson game to be represented in Table 3 by a matrix (instead of a multiway array).

Table 3. Reward and punishment in a symmetric four-person game.

    Number of defectors    Payoff to each cooperator    Payoff to each defector
            0                   7 + C4 − q0                       —
            1                   5 + C3 − q1                     8 − D1
            2                   3 + C2 − q2                     6 − D2
            3                   1 + C1 − q3                     4 − D3
            4                        —                             2

Here, as in Tables 1 and 2, the extrinsic rewards are ranked from 1 (low) up to 8 (high). If all the intrinsic rewards (the Cs), the costs of punishment (the qs), and the magnitudes of punishment (the Ds) are zero, then the payoff matrix represents a classic commons dilemma or public goods game. No matter how many cooperate or defect, there is an incentive for any one cooperator to defect, to get a better outcome (7 < 8, 5 < 6, etc.). Thus, the only Nash equilibrium is all-Defect.
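As a check on this claim, the following sketch (ours, not the chapter’s) enumerates the pure-strategy equilibria of the Table 3 game when every C, q, and D term is zero.

```python
from itertools import product

# Extrinsic payoffs from Table 3 with all C, q, and D terms set to zero.
COOP_PAY = {0: 7, 1: 5, 2: 3, 3: 1}     # payoff to each cooperator, by number of defectors
DEFECT_PAY = {1: 8, 2: 6, 3: 4, 4: 2}   # payoff to each defector, by number of defectors

def payoff(choice, n_defectors):
    return DEFECT_PAY[n_defectors] if choice == "D" else COOP_PAY[n_defectors]

def is_pure_nash(profile):
    """No single player can gain by unilaterally switching strategies."""
    n_def = profile.count("D")
    for choice in profile:
        switched = "C" if choice == "D" else "D"
        n_def_switched = n_def - 1 if choice == "D" else n_def + 1
        if payoff(switched, n_def_switched) > payoff(choice, n_def):
            return False
    return True

print([p for p in product("CD", repeat=4) if is_pure_nash(p)])
# [('D', 'D', 'D', 'D')] -- every cooperator gains one unit by defecting,
# so all-Defect is the only pure-strategy Nash equilibrium.
```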

The standard way to create cooperative equilibria is to punish defectors. Assume that it costs each cooperator qj if there are j defectors to be punished, and that each defector suffers punishment Dj. If the punishments are severe relative to the costs, defection is deterred. Thus (continuing the symbolic addition/subtraction), if 7 − q0 > 8 − D1, then when all cooperate, nobody has any incentive to defect. This, or variants of the same idea, may very well succeed. But even when all do cooperate, the cooperators lose something, because establishing sanctions for defectors is costly. As the number of defectors rises, so typically does the cost of sanctions, and the sanctions in turn typically become less effective. Norms that many people violate become difficult or impossible to enforce.

The situation might be much different if the Cs add positive value to cooperation. An intrinsic reward value for mutual cooperation can be viewed as a windfall: the players are promised only 7, the next-to-best individual outcome, if all four cooperate, but they experience something better: the promised outcome plus the intrinsic reward C4. This windfall may be expected by people who have experienced similar intrinsic rewards, and thus it may motivate cooperation.

Many factors complicate this simple picture. The intrinsic reward value of cooperation may depend on how many cooperate: C4 may be different from C3, and so on. Symmetry may not hold; the extrinsic outcomes and intrinsic rewards may vary across group members, and the latter may depend as well on which particular people are among the cooperators. Intrinsic rewards associated with group goals may also be accompanied by additional costs: for example, a person’s pursuit of other, unrelated goals may be constrained by group norms, such as when individual financial goals conflict with norms of altruism or generosity. Even when many people are motivated by intrinsic reward, some may not be; thus, punishment of defectors may still be necessary for maintenance of cooperation.

Despite these caveats, the additive formulation in Table 3 shows the possible importance of intrinsic reward. All-cooperate is a Nash equilibrium in Table 3 provided that 7 + C4 − q0 > 8 − D1. The larger C4, the more easily this condition is met. The sanctions D threatened for a potential defector can be smaller, and perhaps less costly (smaller q), when C values are large for that individual. In practice, the designers of a formal or informal structure for “mutual coercion” may understand the social goals and intrinsic rewards from cooperation for the people who will be subject to sanctions, and may (consciously or unconsciously) take such social rewards into account in deciding (correctly) what levels of sanction and monitoring will be sufficient. A structure that appears to work because of effective enforcement may in fact work only through a mixture of sanctions and intrinsic social rewards. This would be our reading of Ostrom’s design principles for the commons.

Moreover, sanctions themselves create a second-level commons dilemma: who will cooperate to impose them, and why? Consider the scoop-the-poop problem. In the high population density of New York City, violations are generally noticed, even at off hours. Will someone who observes a violation take the trouble to call the police? Or the trouble and risk of criticizing the offender directly? Insofar as there is some small probability of this, it probably stems from anger or other negative emotion elicited by the violation: usually, the observer cannot in turn be sanctioned for failure to report or to criticize the violator. Such negative emotion does not, however, come merely from one additional bit of dog feces on the sidewalk; rather, it comes from observing a violation of a norm that one considers important. A large C (for achieving a cooperative goal) gives rise to a large disappointment when the goal is not achieved, and consequent negative emotion directed at a person violating the norm. Because of this second-level commons dilemma, a first-level account based solely on punishment must nevertheless take into account the social goals of the enforcers.
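The all-cooperate condition stated above is easy to explore numerically; the following lines (a sketch of ours, with invented numbers) show how either a sanction D1 or an intrinsic reward C4 can remove the one-unit temptation to defect.

```python
def all_cooperate_stable(C4, q0, D1):
    """All-cooperate is a Nash equilibrium in Table 3 when 7 + C4 - q0 > 8 - D1,
    i.e., when a lone defection does not pay."""
    return 7 + C4 - q0 > 8 - D1

print(all_cooperate_stable(C4=0, q0=0, D1=0))      # False: the classic dilemma, defection pays
print(all_cooperate_stable(C4=0, q0=0.5, D1=2))    # True:  sanctions alone, net of monitoring cost
print(all_cooperate_stable(C4=1.5, q0=0, D1=0))    # True:  intrinsic reward alone, no sanction needed
```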

3 Toward a Classification of Social Goals

Humans are social animals: associations with human groups strongly affect the survival, health, and reproductive fitness of individuals. Unlike other social animals, people affiliate simultaneously with multiple groups, including family, work groups, neighbors, and often many others. Fitness depends not only on the particular groups with which one is affiliated, but on one’s role and one’s status within each such group. The preceding claims seem obviously true, but determining the effects of affiliation, role, and status in detail is complicated. For example, a study by [17] exhibits a simple relationship between social role and reproductive fitness, but in horses, not people. Complicated relationships of social status and social roles to human longevity are shown by Samuelsson and Dehlin [46].

Affiliation with groups can be viewed as a fundamental social goal, which in turn is related to many other goals [21]. In some cases, affiliation is merely a subgoal directed toward some purely individual goal. An example would be a hungry person who approaches others chiefly in the hope of being invited for a meal. Often affiliation is an end in itself or a means to other social goals: for example, a person might approach others just in order to be near them (end in itself) or in hope of conversation, sex, or other social goals. Indeed, a particular affiliation goal may be directed toward a variety of other goals. A partial classification is given in part (A) of Table 4.

Schachter [47] studied factors that increase the desire to affiliate. For example, inducing anxiety (about a supposedly forthcoming electric shock) increases a subject’s desire to spend waiting time with another (unknown)
person. The monograph directly demonstrates increase in well-being through shared presence: actually spending time with another person reduced anxiety. Turner et al. [53] demonstrated that very slight differentiation among a set of individuals can induce separate group identities, sufficient to motivate discrimination against the “out-group” (or in favor of the “in-group”) with respect to allocation of rewards. Here, affiliation seems to be an end in itself, arising automatically as a consequence of the slight differentiation. We view the nepotism toward in-group members or discrimination against the out-group as intrinsic social goals (Category E in Table 4).

We treat affiliation as a central concept not only because it arises so readily as a goal but because other social goals can easily be classified relative to specific group affiliations. Table 4 gives an overview of such a classification scheme. Under (A) we list a number of consequences of affiliation. A person affiliated with a particular group thinks of himself as part of that group (group identity) and is perceived as having somewhat altered status (by outsiders, by other members of the group, and by himself). Depending on the nature of the group, the person may also feel (and may be) more secure, physically, economically, or socially, and may feel (and may be) more able to pursue other goals. We use the term “efficacy” for the latter state. Each of these points (identity, status, security, and efficacy) can be discussed at book length, but elaboration is beyond the scope of this brief classificatory venture. Suffice it to say that each of these consequences can be anticipated to some extent and can thus motivate affiliation with the group. Another important consequence of affiliation is increased awareness of group norms: the expectations of one’s behavior held by other group members [19]. Various forms of sharing with other group members also emerge as goals. Sharing of “mere presence” becomes an important goal in many groups [47,60]. Many specific activities are also shared, depending on the type of group, and usually governed partly by group norms. Table 4 mentions a few examples. Finally, because a member’s status depends in part on that of the group, protection and/or enhancement of group status is a goal that arises as a consequence of affiliation.

Included in categories B, C, and D are goals that arise from within-group differentiation of roles and/or status, rather than from affiliation per se. A group may comprise many different roles, designated formally or agreed on tacitly. President or secretary may be roles that are described formally and assigned by election in a large group; strawberry-shortcake-maker might be a role assumed informally by one member of a couple. Category B is role aspiration: a person desires a particular role within the group. Typically, role aspirations are new goals that arise as a consequence of group affiliations, because a person not affiliated is likely to have at most vague awareness of the within-group roles, especially the informal roles. Once such a goal is
Table 4. Classification of social goals relative to a particular group.

(A) Goals related to affiliation per se
    Taking on group identity
    Taking on group status
    Safety: feeling and being secure
    Efficacy: feeling and being capable or powerful
    Adherence to group norms
    Sharing with other group members
        • Mere presence
        • Activities or experiences (vary with group: e.g., meals, conversation, sex, prayer)
    Enhancing group status

(B) Role or status aspirations within the group
    Specific roles (formally or informally designated)
    Within-group status aspirations
    Within-group social comparison

(C) Role-derived obligations (prevention goals)
    Within-group
        • Memory operations (encoding, storage, retrieval)
        • Coordination (scheduling, etc.)
        • Adding or elaborating norms
        • Sanctioning norm violators
    External
        • Protecting group status
        • Intergroup relations and coordination
        • Affiliation with related or umbrella groups

(D) Role-derived opportunities (group promotion goals)

(E) Goals for other group members and nonmembers
    Sharing goals (in-group nepotism)
    Opposing goals (out-group prejudice, schadenfreude)

attained, however, additional goals specific to the particular role are activated and adopted. Category C consists of role obligations: goals that a person fulfilling a particular role ought to adopt. The norms in question may be imposed within the group or from other groups. For example, the role of parent within a family entails numerous obligations, some of them not well anticipated: some imposed by within-family norms, others by outside groups (e.g., extended family,

other parents in one’s cohort, child-care workers, local school system, or state). Just about every role, however, carries obligations. Under (C) we give a partial classification of such role obligations, making use of some functions that most groups require in order to exist as such. For example, individuals do not necessarily require memory or coordination beyond the brain mechanisms acquired in the course of normal human development. However, groups of two or more individuals do not share a brain; thus they require explicit mechanisms to remember the past and to coordinate current efforts. There are often differentiated roles within groups devoted to these functions. The other categories of role obligation listed under (C) also arise naturally from considering group function.

The theory of self-regulation [25] emphasizes the distinction between prevention goals (ought goals, or obligations) and promotion goals (ideal goals, or opportunities). Although the theory has mainly been applied to individual goals, it is interesting to note the large number of potential prevention goals generated by group-role obligations. By contrast, role-derived promotion goals (D) form a peculiar category. A promotion goal involves an opportunity that one is not obligated to act upon; but if an opportunity is role-derived, and pertains to the group in question, there may always be some degree of obligation to act on it. In this connection, of course, one can consider role-derived opportunities that do not pertain to the group in question, and so are not obligations. The examples that come to mind are ugly: seeking a bribe, taking sexual advantage of a dependent, and so on. Possibly there are good examples of role-derived group-pertinent opportunities that do not entail obligations; we leave (D) as a placeholder for further consideration.

The final category, (E), consists of goals that one holds on behalf of other group members. These are numerous and include goals such as wanting one’s child to do well in school or to gain a job promotion, wanting a co-religionist to win a competition, or wanting members of a temporarily associated group to get extra rewards [53]. We also include goals that are specifically held for nonmembers: often, negative ones, as in out-group prejudice.

3.1 Mutual Cooperation

The discussion of social dilemmas in the preceding section emphasized the possible importance of a mutual cooperation goal, represented by the additive parameter C or C′ in Table 2, or the Cs in Table 3. How does such a goal fit into the preceding classification? We take it to be affiliation-related (A), but within that category, there are several different forms to be considered. First, mutual cooperation can be thought of as shared activity, and shared success. It is more abstract than sharing a meal or sex, but nonetheless falls into this subcategory. Second, it can sometimes enhance group status (or prevent a lowering of group status). The poop-scooping dog owners may feel that unpleasantness in the streets lowers the regard in which they are held, as dog owners, by others in their neighborhood. Note that this is different from
the case of a dog owner who scoops despite the fact that others do not. For such a dog owner, personal status may be at issue; that is different from the person who scoops in the hope that others will do so. Third, the individual may feel that failure to cooperate violates a subgroup norm. Identity as a neighborhood dog owner is strengthened by following what may turn out to be an important norm for that group. Finally, in some cases, where there is no pre-existing group, cooperation may simply be a means of affiliating, even if only temporarily, with others. That may be the driving force in laboratory groups, where the individuals are brought together for the first time and do not expect any continuing relationship. In everyday life, one sees examples of exchange of small kindnesses between strangers who are only momentarily together.

3.2 Observing Social Goals

The preceding classification of social goals is intended to be useful in identifying social goals from observed behavior. Many sorts of behavior can be observed in the study of decisions. One can observe verbal and nonverbal aspects of people’s interactions with one another, as they discuss a problem or decision together; one can record thoughts reported by individuals while dealing with a decision problem (under think-aloud or “write your thoughts” instructions); and one can administer pre- and/or post-task interviews, as well as pre- and/or post-task questionnaires. Such observational methods can generate much data, which must be categorized efficiently for analysis. The classification in Table 4 provides a structure for categorizing these observations.

Because Table 4 categorizes social goals relative to some particular group affiliation, it suggests an initial question about any particular behavioral observation: which group affiliations might plausibly have influenced that person’s behavior? For example, at a community meeting to hear a seasonal climate forecast, a woman said to the group, “We should work hard.” This looks like an example of “adding or elaborating norms” (under (C) in Table 4). Is she setting a norm for the entire village? Or for members of her own immediate family? Or some other subgroup? In which subgroup(s) does she have a role that obligates her to contribute toward norms? These are questions that might be answered by interviews with the speaker herself or with other local people who are in a position to know.

Often several different affiliations will be relevant to a decision problem. For example, a person selecting a gift may consider his relationship with the intended recipient, with a partner and/or friends (who share the expense), with the larger social group connecting gift-giver and intended recipient, and even the (possibly temporary) dyadic relationship with a merchant. Social goals pertaining to each of these relationships could be probed using the above classification to structure questions or coding categories.
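As an illustration of how Table 4 might be turned into a coding instrument, the sketch below defines one possible data structure for coded observations; the category names follow Table 4, but the fields and the Python representation are our own and are only one of many reasonable designs.

```python
from dataclasses import dataclass
from enum import Enum

class GoalCategory(Enum):
    """Top-level categories of Table 4."""
    AFFILIATION = "A: goals related to affiliation per se"
    ROLE_ASPIRATION = "B: role or status aspirations within the group"
    ROLE_OBLIGATION = "C: role-derived obligations"
    ROLE_OPPORTUNITY = "D: role-derived opportunities"
    GOALS_FOR_OTHERS = "E: goals for other group members and nonmembers"

@dataclass
class CodedUtterance:
    speaker: str
    text: str
    group: str              # the affiliation relative to which the goal is coded
    category: GoalCategory
    subcategory: str = ""   # e.g., "adding or elaborating norms"

# The villager's remark discussed above, coded twice because the relevant
# affiliation (village vs. immediate family) is still an open question.
codes = [
    CodedUtterance("villager", "We should work hard.", "village",
                   GoalCategory.ROLE_OBLIGATION, "adding or elaborating norms"),
    CodedUtterance("villager", "We should work hard.", "immediate family",
                   GoalCategory.ROLE_OBLIGATION, "adding or elaborating norms"),
]
for c in codes:
    print(f"{c.group}: {c.category.value} / {c.subcategory}")
```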

3.3 An Example: Reciprocity in the Framework of Social Goals

As explored above, social goals can be associated with a variety of group affiliations, roles, obligations, or opportunities, often simultaneously. The classification system of the previous section provides a guide to observing and analyzing social goals in context. Here, we use gift-giving as an example. Gift-giving is a topic of much interest theoretically. Mauss’s seminal work in anthropology [33] explores how gifts enhance solidarity: “like the market, [gift-giving] supplies each individual with personal incentives for collaborating in the pattern of exchange” [16]. The debate over what a gift is, and whether there are “free gifts,” has been long and involved; here, we are most interested in the idea of how gifts fulfill social goals for individuals.

Some gifts, viewed in a longer time framework, may be instrumental toward later returns. Such gifts may be viewed as maximizing the giver’s expected utility over time. For example, one may give a large tip in a restaurant in the hope of receiving excellent service on future visits. There is some risk in this (the hope may not be realized), but the expected return may be high enough to compensate for the risk. Social goals are not necessarily involved: the waiter may return excellent service for no reason other than the expectation of more large tips, and large tips may in turn be continued for no reason other than to assure excellent future service. At the opposite extreme, giving a gift may be a social goal in category (E): the recipient’s happiness may be intrinsically rewarding to the giver.

Between these two extremes one can identify at least four other social goals that sometimes underlie gifts: fulfillment of role obligations, adherence to group norms (including reciprocation norms), status aspirations, and, perhaps most important, intrinsic reciprocation goals. Roles that lead to gift obligations are ubiquitous. The existence of strong norms for reciprocation, and the role of gift-giving in seeking high status, are the two themes of Mauss’s study of Pacific Island and Northwest Pacific cultures [33]. Intrinsically motivated reciprocation is a goal that is induced by a gift or favor from someone else: the reciprocation goal is called intrinsic if the recipient desires to give a gift for a gift, independent of a group or cultural norm to reciprocate, and independent of any expectation of future favors. An example might be leaving a notably large tip in return for excellent restaurant service in a circumstance where one is reasonably sure that there will be no opportunity to return to that restaurant. Moreover, reciprocation may typically involve multiple simultaneous goals: one goal may be intrinsic, but it may coexist with norm-satisfying, status-seeking, role-based, or instrumental goals. A large tip, for example, may be motivated by an instrumental as well as an intrinsic goal. Nonetheless, we think that intrinsic reciprocation is an important motivational phenomenon, possibly the basis for other sorts of reciprocation. Similarly, the intrinsic mechanism can give rise to norms for reciprocation: the original gift-giver believes (on the basis of her own reciprocation goals)
that she has induced a goal of reciprocation, and therefore she also expects reciprocation and thereby helps define a norm.

4 Tradeoffs Involving Social Goals

Tables 1–3 implicitly used a utility-maximization framework, because it was assumed that outcomes can be ordered. To break the dominance of Defect, the outcome rank, or utility, was modified by punishment and/or intrinsic reward. We do not believe that a utility-maximization framework is adequate (even for decisions without social goals); however, it is a useful starting point for many questions concerning social goals. In this section, we first discuss four questions about social goals in the utility framework. We then briefly sketch the reasons for considering decision principles other than utility maximization, and point out how some questions about social goals look from the standpoint of alternative decision principles.

4.1 Four Questions About Social Goals

(i) Tradeoff Between an Economic and a Social Goal

One of the first questions about social goals is how much they are actually valued. People want to be fair, but how much will they pay toward that end? One takes the measure of a social goal in terms of some economic equivalent, the maximum that a person will pay in order to attain it. Or better yet, because even a fairly large monetary payment may represent only a small sacrifice for a wealthy person, the economic equivalent is the marginal utility from money that will be sacrificed in order to attain the goal. People do sometimes make large sacrifices in order to cooperate with others, to share with others, or to benefit others. Some social goals are described as “priceless.” We can view this description with some cynicism, yet it does at least convey the idea that tradeoff questions are not lightly answered. Cooperation in commons dilemmas is a complicated example: relevant factors are not merely the tradeoff of economic and social goals, but also fear of sanctions, concern for reputation (especially in repeated play), expectations about others’ choices, and recognition of others’ expectations and intelligent reasoning (leading sometimes to Nash equilibrium). Nonetheless, the fact that some social goals weigh heavily is an important element. Although we have little confidence that social goals can be valued accurately by studying their tradeoffs with money, we do think that the question of tradeoff between economic and social goals is extremely important. The broad classification of social goals, given above, is intended to place such tradeoff questions in a general context, by recognizing that a wide variety of social goals may be in play simultaneously.
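One simple way to formalize the “economic equivalent” idea within a utility framework is to ask for the largest payment x such that u(w − x) + v ≥ u(w), where v is the utility bonus from attaining the social goal. The sketch below is ours; the log utility and the numbers are purely illustrative, and it is included only to show why the answer depends strongly on wealth.

```python
import math

def money_equivalent(v, wealth, u=math.log, tol=1e-9):
    """Largest payment x such that attaining the social goal (utility bonus v)
    still leaves the person at least as well off: u(wealth - x) + v >= u(wealth).
    With u = log this has a closed form, wealth * (1 - exp(-v)); we solve it
    numerically so that any increasing u can be plugged in."""
    lo, hi = 0.0, wealth
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if u(wealth - mid) + v >= u(wealth):
            lo = mid
        else:
            hi = mid
    return lo

# The same utility bonus v translates into very different dollar amounts at
# different wealth levels -- one reason raw willingness-to-pay is a poor
# measure of how much a social goal actually matters to a person.
print(round(money_equivalent(v=0.1, wealth=1_000), 2))     # ~95.16
print(round(money_equivalent(v=0.1, wealth=100_000), 2))   # ~9516.26
```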

(ii) Tradeoffs Among Different Social Goals

This topic may be as important as, and more complex than, tradeoffs between an economic and a social goal. Part of the complexity arises from possible internal conflict among social goals arising from different group identities. Books have been written about the conflict between work roles and parenting roles, between romantic love and religious affiliation, and so on. Again, we hope that a broad classification of social goals may facilitate thinking about the ways in which such conflicts are resolved in decision making. The utility perspective suggests a complex multiattribute utility function, describing the contribution of each such goal to overall happiness. Because particular actions are typically directed toward achieving several goals at once, such a multiattribute utility function would have to encompass many combinations of simultaneously achievable social (and economic) goals. Only in the case of additive utility, where each goal contributes in a fashion separable from all the others, can one hope for a manageable utility description.

(iii) Uncertainty and Social Goals

Risk and uncertainty have been studied most with outcomes defined by money value. There is a narrow but important class of decisions with high uncertainty and with gains and losses having clear money values. Gambling is the clearest example; financial investments and insurance purchases are more complicated, because some of the gains and losses in question lie in the future, so risk becomes mixed with intertemporal tradeoff. For insurance, moreover, the financial gains and losses are sometimes hard to separate mentally from accompanying nonmonetary outcomes: physical suffering, social loss, or destruction of loved property [26]. It seems unfortunate that research has focused on financial goals, because the attainment of health goals, social goals, research goals, or environmental goals is often uncertain. Utility has been the underlying theoretical concept used to justify neglect of different goal types: according to the expected-utility principle, the effect of uncertainty for any type of outcome is captured by the formula p(E) · U(y), multiplying the utility U(y) of the outcome in question by the subjective probability, that is, the degree of belief p(E) that the event E (defining the circumstances under which the outcome is obtained) actually occurs. According to this idea, the type of outcome y does not matter; only its utility U(y) enters into the evaluation that takes uncertainty into account. On its face, this is a very strong and scarcely plausible assumption. One might, on the contrary, think that people would show higher tolerance for uncertainty for outcomes that are typically highly uncertain, and less for outcomes that can often be nearly assured. The assumption is so pervasive, however, that it scarcely seems to have been tested empirically, even in the literature on health decisions, where there has been the most systematic use of the utility concept apart from utility of money. Brewer et al. [8] demonstrated that patients who believe that blood cholesterol level is related to coronary
consequences adhere more closely to LDL cholesterol-lowering regimens than those who do not hold this belief. This sort of evidence at least shows a connection between beliefs and decisions, outside the monetary domain, although it is far from a test of the strong utility model enunciated above. We note that the most important alternative to expected utility is prospect theory [28,56], which was developed and tested solely in the domain of money gambles. Although it is far from obvious how this theory should be generalized to other types of goals (see [29] for one suggestion), the loss-aversion principle derived from the theory has in fact been applied extensively, albeit metaphorically, to nonfinancial goals. In a more general theory, loss aversion may also vary with the goal domain. We hope that a focus on social goals will lead to more thorough investigation of the effects of uncertainty on different sorts of goals. Indeed, it is hard to know how to approach the earlier questions about tradeoffs between social and economic goals, or tradeoffs among social goals, when most of the goals have some associated degree of uncertainty, without also knowing how the valuation of different sorts of goals is affected by uncertainty.

(iv) Intertemporal Tradeoffs Involving Social Goals

The effects of delay, like those of uncertainty, have been studied extensively in the domain of money. Often decision theory treats delay, like uncertainty, by use of a utility discount factor, that is, through the formula k(t) · U(y), where 1 − k(t) is viewed as the discount associated with delaying utility U(y) by duration t. Chapman [12] reviews some limitations on this model in the domain of health decisions: in particular, discount rates appear different for monetary and health goals. Redelmeier and Heller [45] report differences in discount rates for different health goals, including cases of negative discount rates. Social goals could be an excellent domain for studying effects of delay, because they differ greatly in the time perspectives attached to different goals. Sharing, whether conversation or sex, often seems urgent, whereas role aspirations (becoming a parent, presidency of an organization) are often long-term goals. Group environmental goals (restoring wetlands, amelioration of greenhouse-gas emissions) can carry a very long time perspective. As with uncertainty effects, it may be necessary to understand effects of delay prior to thoroughly understanding tradeoffs between social and economic goals, or among different sorts of social goals. Finally, we note that delay often produces additional uncertainty, so there is an important interaction between these two types of effects.

4.2 Limitations of a Utility Approach

Utility maximization is most useful when the tradeoffs among different sorts of goals are stable. Even within the domain of individual goals, however, there
is strong evidence that tradeoffs among goals are far from stable: the adoption and weighting of particular goals can be context-dependent [50], as can the decision rule governing tradeoffs [29]. Three findings are particularly relevant to these conclusions about tradeoffs and decision rules: intransitivity of pairwise choice, contingent weighting of multiple attributes, and paradoxical effects from adding a dominated alternative. We briefly review these findings, and some of the possible tradeoff mechanisms that underlie them, because similar phenomena and choice mechanisms might play an important role in decision processes involving social goals.

(i) Intransitivity

Systematic intransitivity of individual choice was first identified by Tversky [54]. He predicted this phenomenon from a choice model in which individuals first make comparisons between two alternative options, along each of several relevant attributes, then choose between the options by combining the positive and negative differences obtained from the various comparisons. This model can lead to intransitivities. For example, B may be chosen over C on the basis of an improvement in quality, even though C is a little cheaper. Similarly, A may be chosen over B, on the basis of still higher quality, although B is a little cheaper. However, when A is compared with C, the combined price difference may seem important, and one may choose C, sacrificing quality to price.

A recent beautiful study of foraging choices by Canadian gray jays [58] illustrates this type of intransitivity. Each bird tended to choose A, which consisted of a single raisin that was obtainable by going 28 cm down a wire mesh tube, rather than B, a cache of two raisins at the more dangerous-seeming distance of 42 cm. Likewise, each bird mostly chose B, the two raisins at 42 cm, rather than C, three raisins at the still deeper distance of 56 cm. However, in a choice between a single raisin at 28 cm versus three at 56 cm, the difference in raisin cache size tended to match or outweigh the appearance of danger. Four of twelve jays chose C, the three-raisin cache, on a clear majority of trials; the other eight hovered around 50–50 for choices between C and A. A similar intransitivity must occur whenever the perceptual magnitudes of differences along different attributes grow according to distinctly different laws. For the jays, the perceived advantage of 3 rather than 1 raisin is perhaps nearly the sum of the perceived advantages of 3 raisins versus 2 and of 2 versus 1, whereas the perceived growth in danger from 28 to 56 cm is perhaps much less than the sum of the magnitudes for 28 versus 42 and for 42 versus 56 cm. Tversky showed that when three attributes are involved, this choice mechanism, via attribute-differencing perceptions, can lead to perfectly transitive choice only when all the difference-growth laws are linear.

Intransitivity is of course inconsistent with utility maximization as the mechanism underlying choice, inasmuch as utilities are ordered transitively. This is true whether the quantity to be maximized is expected utility or that
of any other numerical index (including the value function postulated by prospect theory). An alternative choice process, which would lead to transitive choice, involves comparing some “overall evaluation” or index for each option (integrating across all its relevant attributes) rather than combining within-attribute comparisons. We know, however, that initial within-attribute comparison, of the kind considered by Tversky, is ubiquitous in human multiattribute choice [32,42]. The nonintuitive character of choice by numerical index is well illustrated by the regular controversies over college football polls in the United States. Anyone whose intuitive ranking disagrees with the index feels free to criticize the index as arbitrary or biased.

Social goals may be particularly difficult to integrate with economic goals. Consider a choice between plan A, which offers a large financial benefit to the decision maker, versus plan B, which provides a smaller benefit, adheres more closely to the norms of a group with which the decision maker is strongly affiliated, yields some benefits for other group members, and perhaps thereby increases the chance that the decision maker will gain a leadership role in the group. The within-attribute comparisons (greater financial benefit versus better adherence to group norms) are salient. To suppress these comparisons and instead to judge the “overall integrated utility” of the financial benefit, the deviation from group norms, and so on, may be difficult. These salient within-attribute comparisons will play a major role in preference construction, contrary to utility maximization.

(ii) Contingent Weighting

Even when integration across attributes occurs, the relative importance of a given attribute may depend on details of the choice situation. For example, Tversky et al. [57] showed that a tradeoff between probability of winning and amount to be won can depend strongly on whether people simply state which lottery they would rather play or state a price for each lottery (with the understanding that the higher-priced lottery will be chosen to be played). When asked to state prices, people place higher weight on the amount to be won. They found similar context-dependence for tradeoff between the amount of a prize and the delay before it can be obtained.

The possibilities for contingent weighting when individual and social goals are both in play seem obvious, and potentially enormous. In the field of disaster policy (natural hazard, accident, or crime) the tradeoffs among lives, injuries of various sorts, damage to property, and public and private expenditure on prevention and preparedness are notoriously difficult. The weights applied to money expenditure and to lives obviously depend on whose lives are at stake, but also on whether the choices are made as part of a budget process where alternative expenditures are at stake or in some other context. In this arena it seems important to sort out the effects of contingent weighting of goals versus the effects of goals that are influential but not fully articulated. Similar issues arise with respect to health policy and environmental policy choices.
(iii) Asymmetric Dominance

The fact that the alternative plans or options are an important determinant of the context, and affect the construction of a preference, is demonstrated most directly when a dominated alternative is inserted as a third option in a choice situation [1,27]. The situation is depicted abstractly in Table 5.

Table 5. Price/quality tradeoff and asymmetric dominance.

    Plan/Option    Price    Quality
        A′          $22      Fair
        A           $20      Fair
        B           $35      Good+
        B′          $35      Good

If the first three options, A′, A, and B, are the only ones available, then A tends to be chosen rather than B (and A′, which is dominated by A, is rarely or never chosen). However, if only the last three, A, B, and B′, are available, then B tends to be chosen (and B′, dominated by B, is rarely or never chosen). In other words, whether A or B tends to be chosen depends on whether a third option is similar to but dominated by A or is similar to but dominated by B. Once again, this phenomenon has not been explored with a mixture of social and individual goals, but it is a robust finding for all sorts of tradeoffs among individual goals. For example, it is seen in female college students’ selection among descriptions of prospective male blind dates, where the dimensions varied are handsomeness and articulateness [48].

Consider Table 6, in which the basic choice is between good adherence to a group norm with a payment of $250 (plan A) and poor adherence with a payment of $550 (plan B). Assume that there are no extrinsic rewards or punishments for adhering to the norm or not.

Table 6. Adherence to norm versus payment received.

    Plan/Option    Payment Received    Adherence to Norm
        A′              $200                Good
        A               $250                Good
        B               $550                Poor
        B′              $550                Poor−

By analogy with Table 5, adding the option B′, with a slightly more egregious violation of the norm, for no more gain, might increase the chances that people feel virtuous in choosing B, whereas adding A′, which allows A the appearance of a bonus for adhering to the norm, might increase the chances that people choose A. Alternatively, tradeoffs between payment received and adherence to a norm might follow quite different rules, and could even show the opposite of the asymmetric dominance phenomenon. This is just one example of the vast terrae incognitae in the domain of social goals.

4.3 Social Goals in a Broader Decision Theory

Krantz and Kunreuther [29] discuss several alternatives to utility maximization. One is within-attribute comparison, extensively documented by Payne et al. [42] and incorporated into Tversky’s [54] additive-difference model. A second is voting by goals: choosing a plan that accomplishes two different goals, rather than a different plan that accomplishes only one, or the like. This idea was derived from a basis for intransitivity quite different from Tversky’s, the intransitivity of majority vote [4,13]. The third is threshold-setting with respect to important goals: only those plans are considered that meet a threshold (with respect to probability, or ease of achievement) for one or more “essential” goals. In our view, one of the tasks in understanding decision making is to diagnose, if possible, the rule or rules actually used to select one plan rather than another. The coding of behavior should therefore include not only the group affiliations and social goals involved in a decision, but also the decision rules that are in play. The examples of CRED research presented in Sections 5 and 6 represent a bare beginning toward a program of study aimed at understanding social goals in environmental decision making and simultaneously aimed at developing decision theory in ways appropriate to multiple goals.
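To indicate how such rules differ from maximizing a single index, the sketch below (ours, with invented plans and 0/1 goal attainments) implements two of them: voting by goals and threshold-setting.

```python
# Hypothetical plans scored 0/1 on whether each goal is achieved; the plans,
# goals, and scores are invented for illustration only.
plans = {
    "A": {"income": 1, "group norm": 0, "benefit others": 0, "leadership": 0},
    "B": {"income": 0, "group norm": 1, "benefit others": 1, "leadership": 1},
}

def vote_by_goals(plans):
    """Voting by goals: pick the plan that achieves the most goals."""
    return max(plans, key=lambda p: sum(plans[p].values()))

def threshold_screen(plans, essential):
    """Threshold-setting: only plans meeting every 'essential' goal are considered."""
    return [p for p, goals in plans.items() if all(goals[g] for g in essential)]

print(vote_by_goals(plans))                       # 'B' -- three goals beat one
print(threshold_screen(plans, ["group norm"]))    # ['B']
print(threshold_screen(plans, ["income"]))        # ['A']
```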

5 Some Laboratory Studies of Group Decision Making

In this section we briefly describe two studies of group interaction and coordination. The first demonstrates the importance of group affiliation for subsequent cooperation in a social dilemma, and the second strongly suggests that group interactions can alter framing effects by introducing group concerns into consideration. Both of these studies support our argument that social goals can themselves motivate behaviors that reflect group affiliation, identity, or belonging, and can lead to additional reward value from mutual cooperation or reciprocation to others.

5.1 Group Identity in a Cooperative Game

In the first study [2,3], groups of four Columbia undergraduates were initially asked to complete a letter-writing task (which varied in its social setting, as detailed below).
They were paid for this task, then asked whether they wished to place half of their earnings in an "investment cooperative" where the return on investment increased with the total number of investors (0 to 4). In terms of monetary payoffs, the investment cooperative was a game with two types of pure-strategy Nash equilibria: a noncooperative equilibrium, in which everyone refuses to invest, and the cooperative equilibria, in which three invest and one does not. Thus, if one believes that all three others will invest, one has an incentive to refuse, keeping one's pay and reaping the benefits of the others' investment; and if one believes that at most one other will invest, one should also refuse to invest. Investment produces a net gain over refusal only if exactly two of the three others choose to invest. This investment decision was always made individually; no interaction was permitted. Groups differed only in the presence and collaboration of others on the unrelated letter-writing task.

The four randomly assigned conditions were hypothesized to have varying impact on group affiliation: anonymous (no knowledge of group members); symbol (abstract representation of the group, for example, by a blue star); co-present (group members present in the same room, but no interaction); and collaborative (group members engaged in a prior unrelated collaborative task). In the collaborative condition, the experimenters observed the group interaction during the letter-writing task and coded each individual with respect to his or her observed level of group affiliation, as judged by eye contact, tone, and other indicators.

Cooperation rates varied as predicted with condition and with group affiliation: 43% for anonymous groups, 63% in the symbol condition, 78% in the co-present condition, and 75% for individuals in the collaborative condition. These results suggest that the level of group contact (either real or abstract) influences the willingness of participants to cooperate in a later task. Within the collaborative condition, 94% of individuals with high-rated group affiliation chose to invest, compared with 52% of those with low-rated group affiliation. Figure 1 shows the fit of a logistic-regression model relating rated group affiliation to cooperation within the collaborative condition.

These results are consistent with our view that stronger affiliation leads to stronger social motives and thus greater cooperation. We suggest that social goals, derived from affiliation, can be (but are not always) activated by group activity or awareness of the group, and are present in varying degrees. Varying strengths of affiliation thus lead to different outcomes (in this case, cooperation in a later task) through what we posit is the activation of social goals that lead to cooperation. In this specific case, we suggest, and our data support, that group activity leads to developing group affiliation (reflected, for example, in references to "my group"), which then encourages a norm of cooperation, even in the absence of further reinforcement of this identity or norm. Interestingly, similar effects can occur without actual group co-presence or interaction (symbol condition). This of course is seen in real life through in-group effects involving people who share the same flag, same religion, and so on.
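The study's actual payoff schedule is not reproduced here; the sketch below uses invented payoff numbers, chosen only to satisfy the incentive structure described at the beginning of this subsection, and enumerates the pure-strategy Nash equilibria of the resulting four-player game.

```python
from itertools import product

# Invented payoffs (not the study's actual values), chosen so that investing alone never pays,
# defecting against three investors pays, and investing pays only when exactly two of the
# other three players invest.
PAY_INVEST = {1: 8, 2: 13, 3: 19, 4: 21}    # payoff to an investor, by total number of investors
PAY_REFUSE = {0: 10, 1: 14, 2: 18, 3: 22}   # payoff to a non-investor, by number of investors

def payoff(my_choice, others):
    n_others = sum(others)
    return PAY_INVEST[n_others + 1] if my_choice else PAY_REFUSE[n_others]

def is_nash(profile):
    """No player can gain by unilaterally switching between invest (1) and refuse (0)."""
    for i, choice in enumerate(profile):
        others = profile[:i] + profile[i + 1:]
        if payoff(1 - choice, others) > payoff(choice, others):
            return False
    return True

equilibria = [p for p in product([0, 1], repeat=4) if is_nash(p)]
print(equilibria)
# Prints (0, 0, 0, 0) plus the four profiles with exactly three investors,
# matching the two types of pure-strategy equilibria described above.
```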

Fig. 1. Predicted and observed cooperation.
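For readers who want to see the form of the model behind Figure 1, here is a minimal sketch; the affiliation ratings and invest/refuse decisions below are invented for illustration and are not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented data: one affiliation rating (here on a 1-7 scale) and one invest (1) or
# refuse (0) decision per participant; purely illustrative, not the study's data.
affiliation = np.array([1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7]).reshape(-1, 1)
invested    = np.array([0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1])

model = LogisticRegression().fit(affiliation, invested)

# Predicted probability of cooperating (investing) at each level of rated affiliation.
grid = np.arange(1, 8).reshape(-1, 1)
for rating, p in zip(grid.ravel(), model.predict_proba(grid)[:, 1]):
    print(f"affiliation {rating}: P(invest) = {p:.2f}")
```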

In addition, subjects who were the lone defector in a group with three investors experienced significantly lower satisfaction with their outcome than their fellow group members or other defectors (despite their higher monetary payoffs), and were highly likely to cooperate on a second cooperative task, even without further interaction with group members. This suggests that there is a psychological cost to breaking norms of behavior, even if these norms are only implicit. Anecdotal evidence from other studies also indicates that defectors often experience remorse, even with optimal economic gains.

Finally, we also found that condition, and thus level of affiliation with a group, influenced how the dilemma presented in the first decision task was framed. Subjects in the co-present and collaborative conditions were more likely to spontaneously frame the decision as leading to a gain, whereas subjects in a more individual-based setting were more likely to frame the decision as one leading to a potential loss. This hints at the underlying process by which affiliation might affect the final decision to cooperate or defect in the dilemma. But does the difference still reveal that process when a subject expresses both views? To answer that question, we controlled for mention of both the loss and gain frames and found that subjects in the group-based situations were more likely to mention only the gain frame, whereas subjects in a more individual-based situation were more likely to mention only the loss frame.
Spontaneously framing the decision as a gain led to greater cooperation, whereas spontaneously framing the decision as a loss led to greater defection by the subjects. It bears pointing out that cooperation yields greater gains for all, whereas defection results in greater gains for the defecting individual at the cost of the group.

Other laboratory studies have also shown the importance of group affiliation for cooperation [9,14]. These studies suggest that there is a connection between affiliation as a group member and behavior in a task that includes cooperation as an option. Here, we explain our results by hypothesizing that activation of group affiliation leads to activation of social goals, in this case of cooperation. This model is not binary, but depends on the degree of perceived affiliation, which can be measured through survey or observational data.

5.2 Framing Effects in a Group Setting

In the second study [36], three-person groups completed several decision tasks similar to ones in which strong framing effects have previously been found for individuals working alone. We focus here on two of these decision problems. In one, modeled on the classic demonstration of framing [55], the choice lay between risky and riskless plans to mitigate the effects of a likely disease outbreak, with the consequences of each plan framed either as severe infection or as protection against infection. The second problem probed the intertemporal discount rate, involving either the delay or acceleration of receipt of prize money. For each decision problem, half of the groups first considered the problem as individuals (all with the same framing), making their own decisions, and later discussed the problem (still with the same frame) to reach group consensus. The other half first encountered the problem as a group. In this study each set of three subjects was recruited from an existing group (student groups and office groups were solicited), so that the participants had ongoing relationships. The data include the decisions reached, the justifications reported, background data about the individuals' relationships to their group, and videotapes of the group interactions.

The disease problem used a scenario concerning West Nile Virus (a real threat in the region). The individual decisions showed the standard framing effect: the risky mitigation plan was more popular among participants in the loss frame, but selection of the risky option was markedly reduced with gain framing. The choices in this problem were closely related to references to certainty made in justifications. For individual decisions, the correlation between choice and certainty justification was very strong: risky choice in the loss frame was justified by avoiding the sure loss, and risk-averse choice in the gain frame was justified by the certain gain. For groups, reference to societal goals (as well as frame) was associated with selection of the risky option.

To set up the other decision problem, prize money was made available to one of the participating groups (randomly preselected before the experiment began). Each group was asked to make intertemporal tradeoff decisions: they either decided how much smaller a prize they would accept if its receipt was accelerated (three months earlier), or how much larger a prize they would demand if its receipt were delayed (three months later).
All participants were told that the group decision would be binding, should their group turn out to be the one randomly preselected to have the decision count. The discount factor was calculated using the formula d = (x1/x2)^{1/(t2−t1)}, where x1 denotes the amount received today (at time t1) and x2 is the amount seen as equivalent three months from now (at time t2) [44]. Smaller discount factors signify greater discounting; a discount factor of 1 means no discounting. A worked numerical example appears at the end of this subsection. Previous work on asymmetric discounting has shown that individuals tend to have lower discount factors in delay conditions than individuals contemplating acceleration of consumption [30].

Contrary to previous findings, this study did not find a significant framing effect for individual or group discount rates. However, there was a difference between groups as a function of previous exposure to the decision as individuals. For groups without previous exposure, there was a significant effect of frame, although in the direction opposite to that predicted by prior research [59]: in the delay frame, these groups showed less discounting than groups with prior exposure who were in the accelerate frame. Participants' justifications were coded as to whether they favored delayed receipt of the prize. The delay frame yielded marginally more patient reasons than the accelerate frame. These findings suggest that a different process may be at work when people consider an intertemporal choice with a collective outcome (the decision that counts will be made as a group and will affect the entire group) than when people consider only their own outcome. We are in the process of analyzing the group discussion transcripts to determine whether there is other evidence supporting this hypothesis.

Obviously, the results reported here barely scratch the surface of what can be done in understanding collective decisions. The observed connection between social goals and selection of the risky option (affording an opportunity to protect everyone) in the disease mitigation problem, and the observed reversal of the usual framing effect for temporal discounting with a collective outcome, each suggest the importance of group processes; and the analysis of justifications offers a picture in which societal goals or group-outcome goals have a strong influence on the decision process.
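As the worked example promised above (the dollar amounts and the monthly time unit are invented, not values from the study), the per-period discount factor implied by an indifference judgment can be computed directly from the formula:

```python
def discount_factor(x1, x2, t1, t2):
    """Per-period discount factor d from indifference between x1 at time t1 and x2 at time t2,
    using d = (x1 / x2) ** (1 / (t2 - t1)). Values below 1 indicate discounting."""
    return (x1 / x2) ** (1.0 / (t2 - t1))

# Invented example: indifferent between $400 today (t = 0) and $500 in three months (t = 3).
d = discount_factor(400, 500, t1=0, t2=3)
print(round(d, 3))   # about 0.928 per month; a smaller d means greater discounting
```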

6 Coding Field Observations of Group Decision Making

For many observers of environmental decision making, the importance of group processes and social goals is obvious and has been taken for granted. Yet there has been a disciplinary gulf that has profoundly affected scientific analysis of group processes.

For anthropologists and political scientists, social norms and group processes are fundamental to their disciplines, and have been emphasized in the analysis of cooperation [34,37,38].
For psychologists and microeconomic theorists, on the other hand, individuals have been the focus of theory. The most important recent work in social psychology has fallen under the heading of social cognition: its focus has been the individual's perception of the social world, rather than social motives. In applied physical science and engineering, technical information has been the focus; the fact that such information often is not attended to or not understood is a source of frustration, but is not seen as a reason for modifying prescriptive recommendations. From the perspective of mathematical analysis, much of what people do seems irrational, "myopic" or "politically motivated." Such behavior leads to questions about how to improve rationality, or how to communicate better, but not questions about the prescriptive analysis itself. Of course, there have been important exceptions: in agricultural economics, especially, bridges have been built across the disciplinary gulf. We return to the issue of prescriptive analysis in Section 7.

Our own thinking has been strongly influenced by individually oriented behavioral decision theory (especially the work of Daniel Kahneman, Paul Slovic, and Amos Tversky) but also by field observations of group decision making. Suarez et al. [51] strongly emphasize the importance of communicating uncertain information (in probabilistic format) in community settings. A discussion of integrated-systems modeling [23] concludes with a remarkable statement:

    However, while these models have a considerable degree of utility in their own right, it is argued that the dialogue and learning they generate among the disparate players is equally important to effective applications of climate forecasting. The modeling provides the means to build the trust and effective relationships needed among the disparate players.

Phillips and Orlove [43] found that Ugandan farmers place high importance on collective discussion in shaping the use of forecasts. They gather spontaneously in public places to converse about forecasts and their possible uses, and sometimes form "listening groups" that assemble to hear and discuss radio programs that present forecasts. Tronstad et al. [52], working in a different culture on quite different issues, noted a similar phenomenon:

    We attribute most of this improvement in comfort level [for their mobile computer lab outreach] to individuals of a more familiar and homogeneous background being in the same room.

Such field observations make clear that group processes are critical in the communication and use of scientific information, but also suggest how little we understand why this is so. Are the groups that come together to understand scientific information reaching some consensus decision that sets a norm for the individuals? Are they helping one another understand technical material? Are they creating a counterweight, out of numbers, to the prestige of the scientific messengers?
Do they simply feel better due to the presence of others [47]? Or is learning facilitated by the mere presence of others [60]?

Most of the authors responsible for these field observations come from backgrounds in mathematical modeling (of climate, agriculture, or economic behavior). They observed and reported the importance of social settings and relationships, but did not turn their observations into questions for scientific investigation. It is natural, however, for collaborators with backgrounds in anthropology or psychology to recognize that these phenomena could be explained in many ways and deserve closer investigation. Several of our projects at CRED are devoted to collecting systematic data in field settings in order to understand such phenomena better. More generally, we use field settings both as test sites for theories about social processes and (especially insofar as our theories prove inadequate) as sources of new theoretical ideas, which in turn might also be tested in laboratory experiments. We briefly describe three examples here.

(i) Listening Groups in Ugandan Agricultural Communities

This project stemmed from the observations of Phillips and Orlove, noted above. The listening groups are an interesting example of the role of social goals in individual decision making. Decisions about what sort of seed to plant, how much of each, and when to plant each crop can all be influenced by a seasonal climate forecast, because the best choices depend on the onset, continuity, and strength of rain in the rainy seasons. However, each household has its own land, and farming decisions are at least partly made at the household level. Many people have radios and, in principle, could listen within the household to broadcasts of climate forecasts and make whatever decisions seem appropriate. In any case one would not expect uniformity, because amount of land, available labor, and other factors vary from one household to another. It is not surprising that people look to one another for confirmation of their thinking, but the semiformal structure in which people gather to listen to and discuss the forecast seems to require explanation.

In current work by Ben Orlove and Carla Roncoli, such meetings are observed. The observations include videotape of the meeting itself, sociolinguistic analysis by national collaborators of verbatim transcripts of the discussion, and interviews with some participants after the meeting. We are not yet able to give detailed results and conclusions, but several preliminary observations can be mentioned.

First, the participants have multiple group identities: a person is associated with a particular household, with the farmer group itself (often this is a group that meets from time to time for various kinds of discussion), and with subgroups and superordinate groups. The presence of the visitors leads particularly to superordinate group identities. The village head may decide to attend, just because there are outsiders in the village, and thereby the participants' village identity is invoked. The visitors themselves have relationships with the group, thereby constituting transient but important superordinate identities.
For example, most of the population in the research area is Christian, with some Muslims. In one community, the team included the local extension agent, who was known to many of the participants and who happened to be Muslim. The meeting was opened with the Lord's Prayer, but then the group leader wanted to be sure that the Muslim extension agent felt included in the group.

Second, much of the discussion is directed toward establishing a consensus or shared understanding of the farming situation, including an agreed interpretation of the forecast, agreed strategies about when to plant various crops, and sometimes norms concerning the amount of effort to be devoted to agricultural work. The latter norms are specifically directed against free-riding: villagers are concerned about members of a household (mostly men) who do not contribute enough labor to the household and thus free-ride on other members' efforts. Other concerns are related to these. One is the positive or promotion focus on opportunities for cooperation: if everyone plants the right crop at the right time, the village might attract larger-scale traders, or more traders, and everyone would get a better price for their crops (especially important for villages farther from paved roads). Another is the cooperative effect of everyone working with more effort (the key concept of amaanyi) when they are more certain of the forecast interpretation and when there is agreement within the village.

A third observation about the influence of social goals on discussion is that these meetings serve as an opportunity to review collective goals about nonagricultural issues as well. Village residents can discuss government programs, market conditions, and regional politics. They can integrate agricultural decision making for the coming season with planning at longer time scales.

(ii) Water Allocation in Northeast Brazil

The Brazilian state of Ceará is subject to frequent droughts, and has a system of water reservoirs that are managed to provide irrigation for local agriculture and, in the future, may provide transfers of water to the large urban area of Fortaleza. Water release decisions are made by Water Allocation Committees, a form of participatory democracy sponsored by the Ceará government but controlled to some extent by experts who lay out the set of alternatives to be considered. In collaboration with the International Research Institute for Climate and Society, we at CRED have been studying the decision processes of Water Allocation Committees. Observations include videotapes of the meetings themselves and interviews with some of the participants.

As with the Uganda meetings, data analysis has not progressed to the point where detailed conclusions are possible. Two features stand out, however. One is that much negotiation takes place outside the formal meeting, so that interviews are an extremely important supplement to the videotapes. Second, one observes (here, as in other contexts) strategic use of the uncertainty of forecasts: uncertainty is used or ignored in a position statement in order to justify a position favorable to the individual or group putting forth the argument.
The latter observation has led us back to the laboratory, to try to observe the degree to which uncertainty changes people's tradeoffs between their own and others' outcomes. This research is being conducted by Miguel Fonseca in the CRED laboratory. Underlying the research, of course, is the existence of social goals: people do value others' outcomes (e.g., other farmers or urbanites in Fortaleza); the question is, how much.

(iii) Risk Sharing in East Africa

There are many situations in which strategies are available that transfer risk (often from a buyer to a seller) to the mutual long-term benefit of both parties. A familiar example in industrialized countries is protection against "lemons" by an enforceable guarantee of quality for which a premium price is paid. If the buyer of a defective item might suffer a serious loss, then she will be willing to pay a premium for an inspected or guaranteed product. The seller thereby assumes part of the risk: he cannot sell a defective item, and sustains a loss (but less serious than that of the buyer). Such losses are more than compensated, in the long run, by the receipt of a higher price for the items he does sell. Third-party warranties or insurance can sometimes provide similar benefits when losses are large compared with the wealth of the participants: risk is shared among many purchasers.

In a new setting, the introduction of such financial mechanisms can encounter two different sets of problems. One is "fit" with local culture. Transactions may be governed by strong norms, and the real payoffs may therefore involve social as well as economic goals. The second problem is the "long run" nature of the benefit. On a particular occasion, it may be the loss that is salient, so acceptance of risk may be placed in doubt. There is a large literature showing suboptimal behavior and little learning in stochastic environments.

We are conducting two types of studies relevant to risk sharing in East Africa. One involves onsite studies of the understanding and acceptance of risk-sharing contracts. This involves both "fit" and acceptance of risk. The other research consists of laboratory studies of learning in stochastic environments. Under what circumstances do people learn to think about long-run outcomes rather than immediate losses? Results on both fronts, although very preliminary, are highly encouraging. In collaboration with the International Research Institute for Climate and Society, we have been studying the acceptance of insurance contracts by farmers in Malawi. In the laboratory, we have developed protocols that lead to excellent learning. It remains to be seen to what extent such learning transfers from one type of stochastic setting to another.
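To make the buyer/seller risk-transfer arithmetic sketched above concrete, here is a minimal expected-value example; all of the numbers (defect rate, losses, premium) are invented for illustration and are not drawn from the field studies.

```python
# Invented parameters: a fraction of items is defective; an undetected defect costs the
# buyer a large loss, while honoring a guarantee costs the seller a smaller loss, so
# shifting the risk to the seller creates a surplus that the premium can split.
DEFECT_RATE = 0.10
BASE_PRICE  = 100.0
PREMIUM     = 5.0    # extra price the buyer pays for a guaranteed item
BUYER_LOSS  = 60.0   # buyer's loss from an undetected defective item
SELLER_LOSS = 25.0   # seller's (smaller) loss from honoring the guarantee

# Buyer's expected cost per purchase.
cost_unguaranteed = BASE_PRICE + DEFECT_RATE * BUYER_LOSS     # 106.0
cost_guaranteed   = BASE_PRICE + PREMIUM                      # 105.0 (defect risk borne by the seller)

# Seller's expected revenue per item sold.
revenue_unguaranteed = BASE_PRICE                                        # 100.0
revenue_guaranteed   = BASE_PRICE + PREMIUM - DEFECT_RATE * SELLER_LOSS  # 102.5

print(f"buyer expected cost:     {cost_unguaranteed:.1f} without vs {cost_guaranteed:.1f} with guarantee")
print(f"seller expected revenue: {revenue_unguaranteed:.1f} without vs {revenue_guaranteed:.1f} with guarantee")
# Both parties come out ahead in expectation, which is the long-run mutual benefit described above.
```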

7 Implications for Decision Analysis

Incorporating social goals into a utility analysis would be a daunting task. For any one affiliation, there is a considerable list of possible social goals (Table 4), and this is compounded by multiple affiliations. The task is made impossible by the formation of transient affiliations, accompanied by transient social goals. The analysis of Uganda listening groups offered one set of examples: transient affiliation with visitors. Other examples are seen in routine politenesses reciprocated between strangers who are placed together for a few hours (say, in an airplane) or even for a few seconds (say, going through a doorway nearly simultaneously). Fortunately, this sort of utility complexity is already ruled out by the phenomena of individual choice with nonsocial goals, as reviewed in Section 4.

The advice for prescriptive analysis remains what it has always been in decision analysis: start by determining the decision-maker's actual goals, and the contingencies under which each will be attained, given a particular plan or strategy. A simplified analysis might simply weight goal values by probabilities and sum. This looks like expected-utility theory, but it is not: it accepts the context-dependence of goals and of the weights placed on goals.

Decision makers sometimes wish to attain near-guarantees for certain important goals. Often this is possible, if the decision maker is prepared to commit very extensive resources toward their achievement. It is then important to look carefully at worst-case scenarios, or at least at very unfavorable scenarios. Would the decision maker really be willing, if it turned out to be needed, to devote such extensive resources to the given goals? If not, then the proposed decision rule should be revised. This advice of course has nothing directly to do with social goals as such, but it often comes up in connection with social goals that are viewed as extremely important. Particularly for environmental decision making, attention has to be paid to the values placed on environmental goals per se and on cooperation per se. Sanctions for noncooperators may be needed, but they will work only if most parties cooperate for intrinsic reasons.

Summary

We define a social goal as either a goal to affiliate (with any size or type of group, temporary or long-term) or a goal that arises from a group affiliation. Several different broad categories of goals derive from group affiliation. Some are linked to affiliation per se, including the goals of adhering to group norms, enhancing group reputation, and simply sharing success with others in the group. Social goals also include aspirations for particular roles within a group, and they include consequences of attaining special group roles, especially role obligations such as norm-setting or norm-enforcement. In addition, affiliation often leads to goals concerning consequences for other people, for example, in-group nepotism or discrimination against out-group members.

Many environmental decision situations nominally appear to be commons dilemmas, in which the dominant strategy for each player is noncooperation, even though all would be better off with large-scale cooperation. These decision situations might be analyzed differently if social goals were taken into account. Goals of social-norm adherence and reciprocity can lead both to punishment of norm violators and to intrinsic rewards for group successes. For example, the efforts in recent years by European countries to reduce greenhouse gas emissions may in part reflect a "European environmental identity" for decision makers and their constituents, combining a continent-wide social identity with environmental values and leading individuals to act cooperatively, despite noncooperation by major greenhouse gas emitters such as the United States and China.

The analysis of decision situations should also take into account the complex social identity of decision makers, including multiple simultaneous affiliations, multiple role aspirations, role-derived obligations, and multiple reciprocity goals. The literature on constructed choice makes it very doubtful that all of these multiple goals can be modeled by utility theory (see also [31]). The choice of plans available, as well as other contextual features of the decision situation, determines what goals are activated and how they are weighted. In this context, heretofore unanswered questions come to the fore: How are social goals traded off against economic goals and against other social goals, and how are their relative weights affected by uncertainty and time delays?

At CRED, our laboratory experiments and field experiences suggest that group interactions are influenced by these social goals and group identities, as in the social dilemma described above, in which cooperation can be increased by an arbitrary group symbol, and in the study of the effects of groups on decision framing. Our field studies in Uganda and Brazil also highlight the importance of social goals for real-world meetings and decision processes, suggesting that goals of consensus and agreement can drive meetings and discussions about climate forecasts. In continuing this research, we hope to elucidate in greater detail the important role of social goals in environmental decisions.

Acknowledgments

Research supported by NSF grant SES-0345840. Many thanks to Elke Weber, Carla Roncoli, and the CRED team for their assistance with this chapter.

References

1. D. Ariely and T. S. Wallsten. Seeking subjective dominance in multidimensional space: An explanation of the asymmetric dominance effect. Organizational Behavior and Human Decision Processes, 63:223–232, 1995.

2. P. Arora and D. H. Krantz. To cooperate or not to cooperate: Impact of unrelated collaboration on social dilemmas. Poster, Society for Judgment and Decision Making, Toronto, 2005.
3. P. Arora, N. Peterson, and D. H. Krantz. Group affiliation and the intrinsic rewards of cooperation. Technical report, Department of Psychology, Columbia University, New York, 2006.
4. K. J. Arrow. Social Choice and Individual Values. Monograph 12, Yale University: Cowles Foundation for Research in Economics, 1951.
5. R. Axelrod and W. D. Hamilton. The evolution of cooperation. Science, 211:1390–1396, 1981.
6. J. Bendor. Uncertainty and the evolution of cooperation. Journal of Conflict Resolution, 37:709–733, 1993.
7. M. B. Brewer. The many faces of social identity: Implications for political psychology. Political Psychology, 22:115–125, 2001.
8. N. T. Brewer, G. B. Chapman, S. Brownlee, and E. A. Leventhal. Cholesterol control, medication adherence and illness cognition. British Journal of Health Psychology, 7:433–448, 2002.
9. M. B. Brewer and R. M. Kramer. Choice behavior in social dilemmas: Effects of social identity, group size, and decision framing. Journal of Personality and Social Psychology, 50:543–549, 1986.
10. J. S. Bruner. The act of discovery. Harvard Educational Review, 31:21–32, 1961.
11. C. F. Camerer and E. Fehr. When does "economic man" dominate social behaviour? Science, 311:47–52, 2006.
12. G. B. Chapman. Time discounting of health outcomes. In G. Loewenstein, D. Read, and R. Baumeister, editors, Time and Decision: Economic and Psychological Perspectives on Intertemporal Choice, pages 395–417. Russell Sage Foundation, New York, 2003.
13. M. de Condorcet. Essai sur l'application de l'analyse à la probabilité des décisions rendues à la pluralité des voix. Paris, 1785.
14. R. M. Dawes and D. M. Messick. Social dilemmas. International Journal of Psychology, 35:111–116, 2000.
15. E. L. Deci and A. C. Moller. The concept of competence: A starting place for understanding intrinsic motivation and self-determined extrinsic motivation. In A. J. Elliot and C. S. Dweck, editors, Handbook of Competence and Motivation, pages 579–597. Guilford, New York, 2005.
16. M. Douglas. No free gifts: Introduction to Mauss's essay on The Gift. Risk and Blame: Essays in Cultural Theory. Routledge, London, 1992.
17. C. Feh. Alliances and successes in Camargue stallions. Animal Behaviour, 57:705–713, 1999.
18. E. Fehr and U. Fischbacher. Third-party punishment and social norms. Evolution and Human Behavior, 25:63–87, 2004.
19. L. Festinger, S. Schachter, and K. Back. Social Pressure in Informal Groups. Harper and Row, New York, 1950.
20. P. C. Fishburn. A study of independence in multivariate utility theory. Econometrica, 37:107–121, 1969.
21. A. P. Fiske. Five core motives, plus or minus five. In S. J. Spencer, S. Fein, M. P. Zanna, and J. M. Olson, editors, Motivated Social Perception, pages 233–246. Lawrence Erlbaum, Mahwah, NJ, 2003.
22. W. Güth. On ultimatum bargaining experiments: A personal review. Journal of Economic Behavior and Organization, 27:329–344, 1995.

23. G. L. Hammer and H. Meinke. Linking climate, agriculture, and decision making: Experiences and lessons for improving applications of climate forecasts in agriculture. In N. Nicholls, G. L. Hammer, and C. Mitchell, editors, Applications of Seasonal Climate Forecasting in Agricultural and Natural Ecosystems. Kluwer Academic Press, The Netherlands, 2000.
24. G. Hardin. The tragedy of the commons. Science, 162:1243–1248, 1968.
25. E. T. Higgins. Beyond pleasure and pain. American Psychologist, 52:1280–1300, 1997.
26. C. K. Hsee and H. Kunreuther. The affection effect in insurance decisions. Journal of Risk and Uncertainty, 20:149–159, 2000.
27. J. Huber, J. W. Payne, and C. Puto. Adding asymmetrically dominated alternatives: Violations of regularity and the similarity hypothesis. Journal of Consumer Research, 9:90–98, 1982.
28. D. Kahneman and A. Tversky. Prospect theory: An analysis of decision under risk. Econometrica, 47:263–291, 1979.
29. D. H. Krantz and H. Kunreuther. Goals and plans in protective decision making. Working paper, Wharton Risk Center WP #06-18, University of Pennsylvania, 2006. http://opim.wharton.upenn.edu/risk/library/06-18.pdf.
30. G. Loewenstein. Frames of mind in intertemporal choice. Management Science, 34:200–214, 1988.
31. J. M. Weber, S. Kopelman, and D. M. Messick. A conceptual review of decision making in social dilemmas: Applying a logic of appropriateness. Personality and Social Psychology Review, 8:281–307, 2004.
32. A. B. Markman and C. P. Moreau. Analogy and analogical comparison in choice. In D. Gentner, K. J. Holyoak, and B. N. Kokinov, editors, The Analogical Mind: Perspectives from Cognitive Science, pages 363–399. The MIT Press, Cambridge, MA, 2001.
33. M. Mauss. The Gift: The Form and Reason for Exchange in Archaic Societies. Routledge & Kegan Paul, New York, 1990.
34. B. McCay. ITQs and community: An essay on environmental governance. Agricultural and Resource Economics Review, 33:162–170, 2004.
35. R. F. Meyer. Preferences over time. In R. L. Keeney and H. Raiffa, editors, Decisions with Multiple Objectives: Preferences and Value Tradeoffs, Chapter 9. Wiley, New York, 1976.
36. K. F. Milch. Framing effects in group decisions revisited: The relationship between reasons and choice. Master's thesis, Department of Psychology, Columbia University, New York, 2006.
37. B. Orlove. Lines in the Water: Nature and Culture at Lake Titicaca. University of California Press, Berkeley, 2002.
38. E. Ostrom. Governing the Commons: The Evolution of Institutions for Collective Action. Cambridge University Press, New York, 1990.
39. E. Ostrom. Collective action and the evolution of social norms. The Journal of Economic Perspectives, 14:137–158, 2000.
40. E. Ostrom, T. Dietz, N. Dolsak, P. Stern, S. Stonich, and E. Weber, editors. The Drama of the Commons. National Research Council, National Academy Press, Washington, DC, 2002.
41. T. R. Palfrey and H. Rosenthal. Private incentives in social dilemmas: The effects of incomplete information about altruism. Journal of Public Economics, 35:309–332, 1988.

42. J. W. Payne, J. R. Bettman, and E. J. Johnson. The Adaptive Decision Maker. Cambridge University Press, New York, 1993.
43. J. Phillips and B. Orlove. Improving climate forecast communication for farm management in Uganda. Final report, NOAA Office of Global Programs, 2004.
44. D. Read. Is time-discounting hyperbolic or sub-additive? Journal of Risk and Uncertainty, 23:5–32, 2001.
45. D. A. Redelmeier and D. N. Heller. Time preference in medical decision making and cost-effectiveness analysis. Medical Decision Making, 13:212–217, 1993.
46. G. Samuelsson and O. Dehlin. Family network and mortality: Survival chances through the lifespan of an entire age cohort. International Journal of Aging and Human Development, 37:277–295, 1993.
47. S. Schachter. The Psychology of Affiliation: Experimental Studies of the Sources of Gregariousness. Stanford University Press, Stanford, CA, 1959.
48. C. Sedikides, D. Ariely, and N. Olsen. Contextual and procedural determinants of partner selection: Of asymmetric dominance and prominence. Social Cognition, 17:118–139, 1999.
49. E. Shafir. Uncertainty and the difficulty of thinking through disjunctions. Cognition, 50:403–430, 1994.
50. P. Slovic. The construction of preferences. American Psychologist, 50:364–371, 1995.
51. P. Suarez, A. Patt, and Potsdam Institute for Climate Impact Research. The risks of climate forecast application. Risk, Decision and Policy, 9:75–89, 2004.
52. R. Tronstad, T. Teegerstrom, and D. E. Osgood. The role of technology for reaching underserved audiences. American Journal of Agricultural Economics, 86:767–771, 2004.
53. J. C. Turner, R. J. Brown, and H. Tajfel. Social comparison and group interest in ingroup favoritism. European Journal of Social Psychology, 9:187–204, 1979.
54. A. Tversky. Intransitivity of preferences. Psychological Review, 76:31–48, 1969.
55. A. Tversky and D. Kahneman. The framing of decisions and the psychology of choice. Science, 211:453–458, 1981.
56. A. Tversky and D. Kahneman. Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty, 5:297–323, 1992.
57. A. Tversky, P. Slovic, and D. Kahneman. The causes of preference reversal. The American Economic Review, 80:204–217, 1990.
58. T. A. Waite. Background context and decision making in hoarding gray jays. Behavioral Ecology and Sociobiology, 12:127–134, 2001.
59. E. U. Weber, E. J. Johnson, K. F. Milch, H. Chang, J. C. Brodscholl, and D. G. Goldstein. Asymmetric discounting in intertemporal choice: A query theory account. Psychological Science, 18:516–523, 2007.
60. R. B. Zajonc. Social facilitation. Science, 149:269–274, 1965.
