Social Constructivism, Mental Models, and Problems of Obedience




Patricia H. Werhane, Laura P. Hartman, Dennis Moberg, Elaine Englehardt and Michael Pritchard

Submitted to the 2009 EBEN Annual Conference

I. Introduction

Reframing the 1960s Milgram obedience experiment through a new lens, we shall analyze that experiment from original perspectives offered by the theoretical work of Immanuel Kant, Dennis Moberg, Patricia Werhane, Dolly Chugh, and Max Bazerman on social construction, mental models, and bounded awareness. Our thesis is that there are important synergies for the next generation of ethical leaders based on the alignment of modified or adjusted mental models. This entails a synergistic application of moral imagination through collaborative input and critique, rather than "me too" obedience. More specifically, we will analyze the Milgram results using frameworks relating to mental models (Werhane et al., 2009), as well as work by Moberg on "ethics blind spots" (2006) and by Bazerman and Chugh on "bounded awareness" (2006, 2007).

Using these constructs to examine the Milgram experiment, we will argue that the way the experiments are framed, the presence of an authority figure, the appeal to the authority of science, and the situation in which the naïve participant finds herself or himself all create a bounded awareness: a narrow blind spot that encourages a climate of obedience, brackets out the opportunity to ask the moral question "Am I hurting another fellow human being?" and may preclude the subject from using moral imagination to opt out of the experiment. Similarly, in commerce, many moral failures can be traced to narrow or blinded mental models that preclude taking into account the moral dimensions of a decision or action. In turn, some of these moral failures are caused by a failure to question managerial decisions and commands from a moral point of view, because of mental models that construct a perceived authority (faultily translated as truth or wisdom) of a managerial team or its leadership.

We will conclude that these forms of almost blind obedience to authority are correctable, but with difficulty. Bazerman and Chugh suggest that people could learn to become more observant (2006, 2007), and Moberg argues that "ethics blind spots can be corrected by a self-improvement regimen" (2006). While questioning whether either acute observation or self-improvement is always possible, we will argue that linking the modification of mental models to an unbinding of awareness represents an important synergistic relationship, and one that can build effectively on the lessons learned from our experience with moral imagination.

II. The Milgram Experiments

In the early 1960s, Dr. Stanley Milgram undertook his noteworthy study of human obedience to authority. Puzzled by the question of how otherwise decent people could knowingly contribute to the massive genocide of the Holocaust during World War II, Milgram designed an experiment that sought to create a conflict between one's willingness to obey authority and one's personal conscience. A psychology professor at Yale University, Milgram had examined accounts given at the Nuremberg war trials by those accused of genocide. A common proffered defense was that they were just following the orders of others. Milgram's experiment was designed to test the hypothesis that a willingness to obey authority can account for behavior that an individual would otherwise avoid and would regard as wrong, even in non-military contexts.

The original experiment involved three participants. The first participant assumed the role of "teacher" and was actually the subject of the experiment. The teacher was told that this was an experiment to determine the effect of punishment on learning. The second participant was identified as the "experimenter," and was usually played by a 31-year-old high school biology teacher wearing a gray technician's coat. Occasionally, this role was played by Milgram himself.

The "learner" was a 47-year-old accountant with a kindly appearance, also a confederate of the experiment. Although it was made to appear to the teacher that the roles of teacher and learner were determined by drawing lots, in fact the roles were pre-determined. The initial volunteers included 40 men recruited through a newspaper ad, which offered volunteers $4.50 in exchange for their participation (Milgram, 1963, p. 372). Eventually, more than a thousand people, including both men and women, participated in the study.

Milgram created a machine that appeared to be an electric shock generator, with switches representing shock levels that started at 15 volts and increased in 15-volt increments to 450 volts. To give teachers some idea of what a shock would feel like, each was given a sample shock of 45 volts. In addition to labels identifying each switch by a specific voltage level, the switches were also labeled in groups with terms such as "slight shock," "moderate shock" and "danger: severe shock." The final two switches were labeled "XXX" (Milgram, 1963, p. 373). Since the learner was a confederate of the experimenter, he was never actually connected to the machine and was instead instructed how and when to respond and, in doing so, to pretend to receive actual shocks.

The teachers were given a list of word pairs (e.g., "blue/girl") to be read aloud to the learner. Each of the first words of a pair would then be presented to the learner, followed by a set of candidate words (e.g., "boy, girl, grass, tent"). The teacher was instructed to deliver a shock to the learner every time an incorrect answer was selected (e.g., "grass"), and then to repeat the correct paired word (in an effort to "teach"). Prior to reaching 150 volts, the learner would utter an occasional low grunt. At 150 volts, however, the learner would insist, with a cry of pain, that he wanted the experiment to stop. If the teacher showed any resistance to continuing, the researcher would follow an experiment protocol and insist that the teacher had to continue. As the experiment progressed, the teacher would hear the learner desperately plead to be released or even complain about having a heart condition (again, based on Milgram's specific experiment protocol). Once the 300-volt level was reached, the learner would bang on the wall and demand to be released. Beyond this point, the learner would become completely silent for the remaining questions.

Because the teacher and the learner were sequestered in different rooms during most variations of the experiment, the teacher/subject at this point did not necessarily know the condition of the learner, or why he was failing to respond. If the teacher turned to the experimenter for direction, the experimenter instructed the teacher to treat the silence as an incorrect response and to deliver a further shock. Most teachers did ask the experimenter at this point whether the experiment should continue. The experimenter was directed by the protocol to issue a series of commands to prod the teacher along:

1. "Please continue."
2. "The experiment requires that you continue."
3. "It is absolutely essential that you continue."
4. "You have no other choice, you must go on."
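To make the protocol concrete, the following is a minimal sketch, in Python, of the session logic as described above. The function names, the `teacher_complies` decision rule, and the scripted reactions are our illustrative assumptions; the real sessions, of course, turned on human judgment rather than a fixed rule.

```python
# Sketch of Milgram's session protocol as a simple state machine (illustrative
# only; the decision rule and names below are our assumptions, not Milgram's).

SHOCK_LEVELS = range(15, 451, 15)   # 15 volts to 450 volts in 15-volt steps

PRODS = [
    "Please continue.",
    "The experiment requires that you continue.",
    "It is absolutely essential that you continue.",
    "You have no other choice, you must go on.",
]

def learner_reaction(volts):
    """The confederate's scripted responses, as described in the text."""
    if volts < 150:
        return "occasional low grunt"
    if volts < 300:
        return "cries of pain; demands that the experiment stop"
    if volts == 300:
        return "bangs on the wall and demands to be released"
    return "silence (treated as an incorrect answer)"

def run_session(teacher_complies, verbose=False):
    """teacher_complies(volts, prod_index) -> bool; returns the maximum shock."""
    max_shock = 0
    for volts in SHOCK_LEVELS:
        prod = 0
        while not teacher_complies(volts, prod):
            if prod == len(PRODS) - 1:
                return max_shock      # teacher still refuses after the final prod
            prod += 1                 # experimenter escalates to the next prod
        max_shock = volts             # "shock" delivered: the measure of obedience
        if verbose:
            print(volts, "->", learner_reaction(volts))
    return max_shock

# Usage: a teacher who obeys until the learner's first protest at 150 volts,
# then refuses regardless of prods, yields a maximum shock of 150 volts.
assert run_session(lambda volts, prod: volts <= 150) == 150
```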


The level of shock that the teacher was willing to reach was used as the measure of obedience (Milgram, 1974, p. 21). Milgram asked a variety of different groups of people (psychiatrists, graduate students and faculty in the behavioral sciences, college sophomores, and middle-class adults) what percentage of the teachers they thought would, at some point, disobey the experimenter. Respondents uniformly predicted that nearly all the teachers would stop before 450 volts was reached. The psychiatrists predicted that most teachers would refuse to continue once they reached the 150-volt level (the point at which the learner first demands that the process be stopped), and that only one in a thousand would go all the way to 450 volts (Milgram, 1974, p. 31).

The actual results of the study were surprising. Sixty-five percent of the teachers in the original study delivered what they took to be the maximum shock of 450 volts (Milgram, 1963). Of the 40 participants in the study, 26 delivered the full 450 volts, while only 14 stopped before reaching the highest level. It is important to note that many of the subjects became extremely agitated, distraught and angry at the experimenter. Yet they continued to follow orders, pressing the 450-volt lever. As mentioned above, Milgram's study continued with over a thousand participants and nineteen variations on the experimental design. In most of these variations, nearly two-thirds of the teachers eventually pressed the 450-volt lever. Originally, only men were sought for the study; but women also participated, with similar results. Throughout the research, Milgram found that, after hearing the learner's first cry of pain at 150 volts, nearly 80 percent of the participants nevertheless continued with the experiment all the way to the maximum level of 450 volts (Burger, 2009, p. 2).

As discussed earlier, the experimenter and learner in Milgram's study were collaborators. The only actual shocks administered were received by the teachers (the 45-volt sample shock at the outset). Nevertheless, the learner was a convincing actor; during the debriefing period, virtually all the teachers indicated that they believed they really were administering shocks, and at progressively higher levels. They also acknowledged experiencing great difficulty in continuing to do so.


The significance of the Milgram studies lies, first, in their ability to offer some insight into why so many of the participants acted as they did, despite their belief that they were causing intense pain, if not serious injury, to the learners. Rather than attribute their willingness to continue to sadistic impulses, Milgram identified obedience as the key element. Milgram enumerated several factors that explain such high levels of obedience:

1. The fact that the study was sponsored by Yale University (a trusted and authoritative academic institution) led many participants to believe that the experimenter was a credible and reliable researcher who had the support of the university.
2. The experiment seemingly had the worthy purpose of increasing learning.
3. The experimenter said that the shocks were painful but not dangerous.
4. The selection of teacher and learner status seemed random and voluntary.
5. The subjects felt an obligation to continue once they had accepted the responsibility of assisting with the research.
6. Although the teacher was forced into conflict with the learner (who begged to be allowed to stop), the learner was typically out of view, while the equally demanding experimenter was in full view of the teacher.
7. There was little time for reflection on the part of the teacher (Milgram, 1963, pp. 377-8).

Milgram also contended that the research brought into conflict two deeply ingrained tendencies: the tendency to obey legitimate authorities and the tendency to avoid deliberately harming others. In reading the transcripts of the experiments, as well as in listening to the recordings, it is evident that great pressure is exerted on the teacher to continue the experiment, even though the learner screams that great harm is being done to him (Milgram, 1963, p. 378). The teacher must strongly confront the experimenter, who keeps demanding that the experiment continue. This strong pressure from the experimenter to continue, combined with the teacher's need to confront the experimenter, is particularly troubling in the transcripts and audio of the experiments. What is surprising is the extent to which the authoritative role of the experimenter wins this struggle, despite the absence of any threat of physical force.

Soon after the experiments were completed, Milgram's procedures were subjected to serious ethical criticism. His reliance on deceiving volunteers about the nature of his study, and about the psychological risks to which it exposed them without their knowledge, was questioned. In the late 1970s, research institutions like Yale University developed policies requiring researchers to gain the approval of institutional review boards (IRBs) before proceeding with research involving human participants. Requirements such as obtaining the informed consent of volunteers and minimizing risks of harm to participants became paramount, which brought studies replicating Milgram's procedures to a halt. Now, several decades after his findings were published, viewers might easily dismiss the results as a reflection of some other era, a deference to authority characteristic of a time nearly half a century ago that has now been morally outgrown. To the contrary, however, a recent, modified replication of Milgram's studies suggests that this is not the case.

Dominic Packer (2008) identified a crucial factor in Milgram's experiments: the 150-volt level. He noted that it was at this level that the learner began to complain loudly about the shocks he (allegedly) was receiving. Analysis of the Milgram data also indicated that about 80 percent of those who continued past this point proceeded all the way to 450 volts. If it could be assumed that someone willing today to go from 150 volts to 165 volts would behave as teachers did in the original Milgram study, a projection could be made without actually requiring teachers to continue past 150 volts. Packer suggested that all that would need to be determined is whether the teacher would continue, if pressed to do so (Packer, 2008, pp. 301-4).

Proceeding on Packer's suggestion, Burger (2009) partially replicated the Milgram experiments, stopping at 150 volts. The Institutional Review Board at Santa Clara University, responsible for approving the use of human subjects in research such as Burger's, agreed that the risk of harming participants would be acceptably low if the experiment stopped at the very beginning of audible resistance on the part of the learner. In Burger's replication of the original experiment, with this single variant (stopping the experiment once the decision to continue or not was made at 150 volts), 70 percent of the participants indicated a willingness to continue past 150 volts, a difference not statistically significant when compared with the original Milgram study (Burger, 2009, p. 8). Nor was there a significant difference between the women and the men in either the Milgram studies or the Burger replication (66.7 percent of the male participants and 72.7 percent of the female were willing to continue) (Burger, 2009, p. 8). In interviewing the participants afterward, Burger found that those who stopped at 150 volts believed they were responsible for their own behavior, for administering the shocks. As Milgram found in his study, participants who continued to the full extent of the shocks held the experimenter accountable. Both Burger and Milgram strove to recruit individuals from a variety of walks of life. Further, Burger's study screened out anyone who had taken more than two psychology classes or had familiarity with the Milgram study.
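As a rough illustration of why 70 percent versus 65 percent is not a statistically significant difference, consider a two-proportion z-test. The figures above correspond to 26 of 40 obedient participants in Milgram's original study; the quoted gender percentages are consistent with 28 of 40 participants (12 of 18 men, 16 of 22 women) in Burger's base condition, though the assumption of n = 40 there is our inference from those percentages, and this back-of-the-envelope test is ours, not Burger's published analysis:

\[
\hat{p} = \frac{26 + 28}{40 + 40} = 0.675, \qquad
z = \frac{0.70 - 0.65}{\sqrt{\hat{p}\,(1-\hat{p})\left(\frac{1}{40} + \frac{1}{40}\right)}} \approx 0.48,
\]

far below the |z| = 1.96 threshold for significance at the .05 level.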

The obedience experiments have been conducted under highly controlled, deliberately contrived conditions. Milgram was interested in making inferences from how participants behaved in these circumstances to how ordinary, otherwise decent people might behave outside the experimental laboratory. It seems reasonable to assume that if ordinary people can be persuaded (not coerced) into administering what they believe to be severe physical pain to others at the prodding of an experimenter, they can also be persuaded to administer other sorts of harms at the beck and call of those above them in hierarchically organized business and institutional settings in general.

In many versions of the Milgram experiments, as in the Burger experiment, those who were shocked were typically out of view of the teachers; thus the agent-teachers were unlikely to be first-hand witnesses to the harms suffered. In organizational settings, unlike in the Milgram study, it is possible for agents not even to notice that they have been instrumental in causing harm to anyone. Second, responsibility for whatever harms may result from individual behavior can readily be shifted to others or simply diffused. ("I simply did my job, what I was told to do . . ." "My superiors have assured me, even insisted, that 'the buck' stops with them.") Turning to organizational structures in general, Milgram draws worrisome conclusions about what commonly happens to individual conscience and moral sensitivity:

Each individual possesses a conscience which to a greater or lesser degree serves to restrain the unimpeded flow of impulses destructive to others. But when he merges his person into an organizational structure, a new creature replaces autonomous man, unhindered by the limitations of individual morality, free of humane inhibition, mindful only of the sanctions of authority (Milgram, 1974, p. 188).

So, although his original concern was to better understand how otherwise decent people could play such an instrumental role in bringing about the violent death and suffering of millions of innocent people in the Nazi Holocaust, Milgram apparently believed his findings had important implications for "business as usual" in the corporate, business world as well.

III. Mental Models, Bounded Awareness and Blind Spots

One of the dominant theories in philosophy and the social sciences, beginning with Immanuel Kant, is "social constructivism." The thesis of social constructivism is that "our conceptual scheme mediates even our most basic perceptual experiences" (Railton, 1986, p. 172). We learn from Kant that our minds do not mirror experience or reality. Rather, our minds project and reconstitute experience. His reasoning, in brief, is as follows. Whereas the content of each of our experiences may vary dramatically, the ways in which we organize, order, and think about our experiences are universally the same. For example, Kant argued, we all experience the world in three dimensions, in a space-time continuum, and we engage in similar sorting mechanisms such as quantity, quality, same as, different from, equal to, and so forth. Kant also claimed that all humans order the world causally; that is, when an event occurs, we assume it has a cause, and, likewise, all events are assumed to be causally related to other events. Yet we organize our experiences through formal concepts: they lack content and cannot be perceived. For example, one cannot perceive time; one merely experiences events temporally. "Cause" and "effect," likewise, are not observable phenomena but ways in which we frame relationships between phenomena. Kant concluded that all human beings order and organize their experiences through an identical set of formal concepts. So, although the content of each of our experiences may be quite different, the ways in which we order these experiences are exactly the same. This would explain how we can imaginatively understand experiences we have never encountered and communicate with people of cultures, ethnic backgrounds, or historical periods quite different from ours.

A. Mental Models

The idea of mental models, the basis for social constructivism, emerged from Kant's conclusion that the human mind organizes and orders its experiences, and that human knowledge is based on these constructions, as opposed to what may or may not exist apart from our experiences in the external world. A contemporary version of the idea of a mental model is grounded on the understanding that our minds actively interact with the data of our experiences, selectively filtering and framing that data (Senge, 1990; Gentner and Whitley, 1997, pp. 210-11; Gorman, 1992; Werhane, 1999). This paradigm differs from Kant's conclusion that we all order the world with exactly the same constructs. Our idea is that we do order the world, but we do so in different ways, depending on our learning experiences; and this ordering and organizing process is always incomplete. Earlier work by Werhane explores the social origins of our mental models and the biases that may result from the brackets and omissions that become necessary because we simply cannot mentally process all that we experience (Werhane, 1999). However, because mental models are not genetically fixed or totally locked in during early experiences, one can evaluate and change one's mind sets if one is committed to doing so (Werhane et al., 2009; Werhane, 1999).

While the human mind constructs the frameworks for its experiences, these constructions are socially learned and subject to alteration or change. These frameworks are usually called mind sets or mental models, a term that comes from the cognitive science literature (Senge, 1990). The term "mental model," or "mind set," connotes the idea that human learning and interaction do not merely result from passively formed mental representations or mental pictures of our experiences, representations derived simply from the stimuli or data to which we are subject. Rather, our minds interact with the data of our experiences, selectively filtering and framing that data (Senge, 1990, ch. 10; Gentner and Whitley, 1997, pp. 210-11; Gorman, 1992; Werhane, 1999). However, in the process of focusing, framing, organizing and ordering what we experience, we bracket and leave out data, simply because we cannot attend to all that we experience. The result is that this limiting process sometimes taints or colors what we experience. The resulting mind sets or points of view are incomplete, and sometimes distorted, narrow, and single-framed.

"Mental models are often the mechanisms whereby humans are able to generate descriptions of system purpose and form, explanations of system functioning and observed system states, and predictions of future system states" (Rouse and Morris, 1986, p. 351). Mental models might be hypothetical constructs of the experience in question or scientific theories; they might be schema that frame the experience, through which individuals process information, conduct experiments, and formulate theories; or mental models may simply refer to human knowledge about a particular set of events or a system. Mental models account for our ability to describe, explain, and predict, and may function as protocols to account for human expectations often formulated in accordance with these models (Gorman, 1992, pp. 192-235; Rouse and Morris, 1986).

More recently, Campbell et al. (2009), borrowing the term "pattern recognition" from neuroscience, describe the act of making assumptions based on prior experiences and judgments. Unlike mental models, however, pattern recognition does not cause the exclusion of information; instead, we perceive a given situation as familiar and consequently accord to that situation all of the characteristics of our prior experiences. As a result, we often believe we understand this new situation and ignore any distinctive qualities it may have (Campbell et al., 2009). Pattern recognition, then, embodies the human tendency to sense that "I have been in this situation before; I know just what to do." Thus, like mental models, pattern recognition can miss important data and, indeed, be shortsighted or even dangerous.

To understand the power of mental models and pattern recognition, and the impact they have on decision-making, it is critical to appreciate that they are shared, and often culturally biased, ways of perceiving, organizing experience, and learning, because the schema or mental models we employ are socially acquired and altered through religion, socialization, educational upbringing and other experiences (Johnson, 1993; Werhane, 1999). On the other hand, because they are incomplete and socially derived, one can evaluate and change one's mental models and patterns of recognition as well. Moreover, even what appear to be purely descriptive, objective accounts are framed by the selective processes through which data is collected and sorted. Thus, the often sharp distinction between the descriptive and the normative is blurred. Interestingly, our ideas of what is right or wrong, good or bad, our moral notions, are also forms of mental models that frame or shape our normative choices and judgments. As Mark Johnson writes, "[m]oral reasoning is a constructive imaginative activity. . . . Our most fundamental moral concepts . . . are defined metaphorically. The way we conceptualize a particular situation will depend on our use of systematic conceptual metaphors . . ." (Johnson, 1993, p. 2). Thus, moral notions are not merely objective evaluative tools for making moral judgments; they shape our moral decisions. More than merely informing us about what is going on, they actually arrange what is going on by selecting the variables to be understood in a particular way. As such, they order the world in a way that appears to be coherent, shaping one's vision of it.

B. Bounded Awareness and Blind Spots

Continuing to build on theories of incomplete information, Max Bazerman and Dolly Chugh suggest that many decision-makers suffer from "focusing failures," some more critical to decision-making outcomes than others (Bazerman and Chugh, 2006; Chugh and Bazerman, 2007). These failures are created by what they term a "bounded awareness": a limited and often "blinded" processing of perceptions or phenomena that can lead to flawed decision-making. Similarly, Moberg demonstrates how what he calls ethics blind spots, in a manner similar to bounded awareness or narrow mental models, undermine moral decision-making in organizations. We shall argue that the Bazerman-Chugh description of focusing failures and Moberg's notion of ethics blind spots are closely aligned with the concept of mental models.

Bazerman and Chugh suggest that no one can be aware of everything in his or her surroundings. Thus, we select and focus on what draws our attention, or on what we take to be the salient phenomena for the purposes in which we are then engaged (Bazerman and Chugh, 2006; Chugh and Bazerman, 2007). In our terminology, we create and perpetuate mental models that focus our attention but sometimes divert it from, ignore, or miss other salient data important to ourselves, our company or its projects. It is this kind of failure to notice that is most troubling. As Bazerman and Chugh warn, "What you fail to see can hurt you" (Chugh and Bazerman, 2007, p. 1). More importantly, we might add, it can hurt those among us who are most vulnerable.

IV. Obstacles to Ethical Decision-Making

A. Bounded Awareness, Mental Models and Focusing Failures

Distorted mental models interfere with ethical decision-making. Another significant obstacle to ethical decision-making is a focusing failure created by bounded awareness, which encourages us to hone in on certain elements of our environment or context to the exclusion of others. As we have discussed, the way we learn about our world is to figure out precisely what is worth paying attention to and what we should ignore. Indeed, we do not have the capacity to attend to every stimulus; we must filter, or we would find ourselves subject to overload and unable to comprehend at all (Senge, 1990; Bazerman and Chugh, 2006). We filter and refocus on such a constant basis that it is done almost entirely within our subconscious. While this cognitive mechanism is necessary to our processing abilities, it is the automatic nature of the behavior that creates the arena of potential vulnerability. Bounded awareness "leads people to ignore accessible, perceivable and important information while paying attention to other equally accessible but irrelevant information" (Bazerman and Chugh, 2005).

A focusing failure generated through bounded awareness was effectively demonstrated by research originally conducted by Neisser (1979), and the demonstration is currently used, to stunning effect, by Transport for London in the United Kingdom for the purpose of encouraging road safety. The original research asked viewers to count the number of passes made by a group of people tossing a basketball in a video clip. Two groups tossed balls at the same time, one wearing white shirts and the other black shirts, so following the ball of a single group required some attention. The short video, however, also included a woman walking across the entire field of vision with a black umbrella. The woman was perfectly clear and unobstructed. Yet only 21 percent of the viewers reported having seen the woman during the viewing. The more recent "Awareness Test" (Transport for London, 2009) creates a similar circumstance. However, instead of a woman with an umbrella, a man in a full bear costume "moonwalks" (dances backwards) across the space where the individuals are tossing the basketballs. In the experience of one of the current authors, sightings of the moonwalking bear range from zero to a high of 5 percent, based on anecdotal consulting and classroom use with more than 1000 participants.

Scholars in the field of perceptual psychology explain that we develop a blindness to whatever we are not paying attention to (Simons and Levin, 2003; Mack and Rock, 1998). If we are told to count the number of basketball passes during a particular clip, we obey the authority figure and begin to count. Indeed, those of us who are more effective at tuning out extraneous information are more successful at counting; if a bird flies into the picture, ignore it. Some might contend that our effort to ignore the bird is therefore not a blind spot but instead a masterful and professional focus. But do we become so triumphant over our distractibility, our inattention, that we fail to notice key information? How do we discriminate between the moonwalking bear that distracts us from our task at hand and the moonwalking bear that represents a critical ethical challenge for us or for our organization? Has our obedience created not a focus but a blind spot?

Moreover, while Bazerman and Chugh's definition of bounded awareness, above, warns that we might be led to ignore vital information while paying attention "to other equally accessible but irrelevant information" (Chugh and Bazerman, 2007, p. 2), we would expand their definition to include those circumstances in which one is attending to other equally accessible and sometimes equally important information. In other words, leaders and other decision-makers are often faced with myriad challenges, stimuli and decisions. Our awareness is not bounded merely when we pay attention to information irrelevant to decisions, but also when we are overloaded by information, all of it relevant, and are unable to determine which of it is most worthy of our attention: a failure to focus and prioritize.

In applying a similar concept directly to ethical decision-making in organizations, Moberg (2006) coined the term "ethics blind spots" to demonstrate that work organizations rarely send official signals to employees about their moral actions. Managers are more attentive to employees' competence than to their character; consequently, instances of both exemplary moral behavior and ethical lapses often go unrecognized (Wojciszke, Bazinska and Jaworski, 1998). Employees hear much more about how to get their jobs done than about how to complete them ethically. Compounding this problem is that employees tend to be hypervigilant about their bosses' moral failures. When they look to officials as role models, they often see people who are hypocritical, untrustworthy, and self-absorbed.

B. Moral Impulse Control (Emotional Tagging)

Contemporary psychological models hold that moral behavior emerges through one of two processes (Lapsley and Hill, 2008). One is a "hot," impulsive process in which the person is guided by a gut feeling about what is moral and what is not. The other is a "cool," deliberative process involving careful analysis and the conscious application of moral norms. Moral psychologists have several different ideas about exactly how this dual-processing system operates (Haidt, 2008). However, there seems to be general agreement that hot processes often precede one's reaction to morally charged situations, and that cool processes are then evoked to explain, justify, and rationalize these intuitions (e.g., Haidt, 2001). Because gut feelings seem to originate in brain locations that are evolutionarily primitive (Greene et al., 2001), some speculate that they represent adaptations that may no longer reflect modern life. According to one formulation, people possess three sets of evolutionary impulses that can be traced back to our reptilian, mammalian, and characteristically human origins (Narvaez, 2008a). The oldest of these are the biologically based motivations toward survival, including fight-flight, territoriality, maintenance of hierarchy and group order, and obedience to authority. Mammalian impulses are more empathetic, modeled after nursing and maternal care, communication between mother and offspring, and play (Narvaez, 2008a). Biologically based intuitions that are neither reptilian nor mammalian, but rather characteristically human, include those associated with forward-looking thought, a sense of fairness, and perspective-taking. These more evolutionarily recent inclinations are most consistent with "cool processes."

In situations like the Milgram experiments, which of these processes is likely to be evoked? Clearly, the reptilian impulses would urge compliance with the authority figure. However, the more mammalian and human programming would elicit a sympathetic concern for the person being harmed, and perhaps even a recognition that moral norms would not countenance giving painful shocks to people unable to perform memory tasks. Psychologists generally theorize that one of three mechanisms determines which of these "hot" processes might influence behavior in the experiment.

1. Early childhood experiences. Mammalian systems are molded in the first years of life. If caregivers provide touching, holding, and playing, the young mammal (person) develops his or her full capacity for sympathy and compassion (e.g., Laible and Thompson, 2000). Thus, one's early experience with caregivers might influence one's obedience to authority.

2. Social norm development and influence. Humans are social creatures. As such, they express their biological impulses in ways that adjust to their social circumstances (Haidt and Bjorklund, 2008). While asocial behavior is possible, it is not common. Norms exert influence depending upon the particulars of the situation. In Western societies, for example, most people use communication or institutions rather than violence when faced with territorial incursion by a neighbor. Thus, one's learning of social norms relevant to when obedience is moral or immoral might influence one's obedience to authority. In the US military, for example, recruits are taught to obey any and all orders from their superiors unless they are unethical. If this is regularly repeated and reinforced by sanctions, it is likely to have an effect on morally repugnant obedience.

3. Situational priming. Darcia Narvaez and her colleagues (2008b; Narvaez et al., 2006) assert that some knowledge schemas (i.e., prototypes, scripts, episodes, constructs about the self, goals, beliefs, expectations) become chronically accessible through priming, experience, and repetition, and these influence how moral decisions are made spontaneously, with a minimum of conscious thought. Some schemas concern self-constructs of moral character (e.g., morality is important to me; Gandhi is my role model), and if these are chronically accessible, the person is prone to act consistently with them. Other relevant schemas concern the situational context in which moral behavior might be enacted (e.g., it is wise to be skeptical of people in uniforms). If self- and situational schemas like these are both chronically accessible, acts of obedience in the Milgram experiment are less likely to emerge.

C. Misleading Memories via Pattern Recognition

Psychologists have examined the cognitive processes of people who have attained very high levels of performance in such activities as chess (Chase and Simon, 1973), sports (Allard and Starks, 1980), medical diagnostics (Elstein, Shulman and Sprafka, 1990), and computer programming (Adelson and Soloway, 1985). As a result, they have identified several common elements of expertise (Ericsson and Charness, 1994) that also apply to moral behavior (Dreyfus and Dreyfus, 1991). One element of expertise is pattern recognition, in which exposure to certain chunks of data triggers particular action sequences that are well adapted to the situation. As part of this process, the expert mentally simulates the action sequence based upon mental models of the situation derived from experience (Klein, 2004). These mental models are the result of "deliberate practice," in which the individual accumulates experience by facing typical situations in a progressively more complicated sequence (Ericsson and Charness, 1994). Throughout the process, the expert acquires "tacit knowledge" about the task. This term derives from the work of Michael Polanyi, who contended that "we know more than we can tell" (1966, p. 4). Thus, although experts draw on their knowledge when they make decisions, they may not be able to explain accurately why they have acted, or to teach another person to make the same decisions in the same circumstances.

The same processes are involved in the behavior of individuals who do not attain high levels of expertise. In effect, everyone learns to recognize patterns, and everyone acquires a great deal of tacit knowledge concerning those features of the work environment that repeat themselves. Although they may never be known for world-class performance, ordinary people are masters of their own mundane universe, and they employ pattern recognition for this purpose. A problem arises when they are exposed to situations that do not fit the patterns they have learned to rely on. In the Milgram experiments, subjects received orders from individuals who had the appearance of scientists. Subjects were told that they were to act as an assistant. Their "supervisor" wore garb that conveyed that he was a medical expert. If the subject had ever been to a hospital or a research laboratory, the pattern "act as an assistant to a scientist or medical expert" fit obedience. In "typical" work contexts where employees have learned patterns representative of their own tacit knowledge, they will have immunity to many obedience situations. Their reaction may be, "something is wrong here; he wouldn't ask me to do that." However, if the patterns learned represent borderline unethical deeds, like giving rude customers less service than they deserve, the move to a practice that actually cheats rude customers may be hard to detect.

D. Role Identification

Another obstacle to ethical decision-making involves role identification. We often identify with our roles, particularly in stressful situations, rather than stepping back from a role to realize that it is just one of many that define us. For example, in the Milgram experiments the naive teacher-subjects often do not step back from their identification as part of a scientific experiment to ask what they are doing, and why. Nor do they all ask themselves, "Would I do this in another context?" Nor do they always consider how this behavior fits or conflicts with their other roles as parents or employees or citizens. Identification with a particular role, then, can blind one to challenging what is demanded by that role, particularly in difficult or stressful situations.


E. Inappropriate Self-Interest

Though we shall discuss in Section V strategies by which to evaluate and recalibrate one's mental models, those models are insidious in their ability to cloud our judgment while, at the same time, allowing us to rest certain in the conviction that our decisions are ethical. Often, by recognizing the possibility of unethical behavior, we are able to relieve the vulnerability. For instance, where a conflict of interest exists in a business relationship, a business person may be able to relieve the concern by disclosing the conflict and obtaining the consent of all parties involved. However, there remain circumstances in which our mental models prevent us either from recognizing the conflict or from acknowledging its full impact on our decision-making processes. Consider the example of Supreme Court Justice Antonin Scalia, who refused to recuse himself from a court case involving then-Vice President Cheney, even though the Justice had just recently gone duck hunting with the Vice President on a weekend holiday trip. He explained, in a somewhat tautological argument that might seem to demonstrate a mental model in play, "Since I do not believe my impartiality can reasonably be questioned, I do not think it would be proper for me to recuse" (Reuters, 2004; Mears, 2004). The Sierra Club had sought the recusal based on an appearance of impropriety, and Scalia had denied the request. The boundedness of his awareness prevented him from perceiving a possible conflict of interest (see Chugh et al., 2005).

V. Addressing Obstacles

a. The Choice against Our Own Blindness and Organizational Blind Spots: The self-improvement regimen

Without compelling and dependable guidance from organizational leadership about how to be a good person, employees are for the most part on their own. They must choose to be ethical because they want to do so, because they see it as a natural expression of their moral ambitions. And they must emulate role models of their own choosing: not just people above them in the chain of command, but anyone who naturally inspires them (Moberg, 2006).

Organizations that strive to establish cultures in which values and ethics are prominent may counter these and other forces discussed above. If codes and credos are consistently used in decision-making, and if managers mentor subordinates to improve their character as well as their competence, then employees will have a better idea of what is expected of them from a moral standpoint. Much depends, however, on how organizations treat events in which employees' job performance is at odds with their moral behavior. If top salespeople are excused for their ethical wrongdoing (Bellizi and Hite, 1989) and upstanding organizational citizens are furloughed for performance lapses, that reinforces the priorities that created the ethics blind spots in the first place.

But, as Moberg argues, individual inattentional blindness and institutional blind spots are "correctable." This is because mental models are socially learned and incomplete, so these models can be reframed.

Part of the possibility for reframing comes through strategic recalibration of those mental models, as well as through the development of a strong moral imagination, both subjects we shall develop in the following sections. For companies, these imperatives may entail changing the work environment, perhaps reframing goals, missions and even corporate cultures, so that mistakes are not discouraged and managers are encouraged to think for themselves and to push back against what they find to be questionable behavior. In either case, as both Bazerman and Moore, and Moberg, conclude, this involves taking responsibility, moral courage, and challenging the status quo. Thus, through self-awareness, moral acuity and imagination we can become aware of our limitations and our failure to perceive important data, or, in Scalia's case with Dick Cheney, a failure to perceive that there might be a conflict of interest. Putting that ability into action, however, is more difficult.

b. Combating Inattentional Blindness

We have established that our ethical judgments deteriorate pervasively as a result of the inattentional blindness and bounded awareness that follow from biased mental models. Bazerman and Moore's proposition to combat that deterioration can ultimately be reduced to a facile admonition, followed by six relatively basic strategies which are nevertheless extraordinarily challenging to implement (2009). The facile admonition: do not trust your gut. In Bazerman and Moore's terminology, all too often we trust our intuition, and intuition does not always guide us toward objectively rational behavior. Instead of following our sometimes faulty or arbitrary reasoning processes, in order to engage habitually in optimal decision-making, one should specifically and consistently adhere to a decision-making model. While a model might take into account various elements otherwise considered by one's intuition, it also systematically integrates large amounts of information according to choices made in advance (ex ante) about what data to select and how to code it. Individual "gut instinct," on the other hand, might integrate data in a way that is influenced by biased mental models, or that ignores information because of inattentional blindness or blind spots. When a decision-making model directs the gathering and weighing of information, these errors are far less likely to occur.
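To make the contrast concrete, the sketch below shows one simple kind of decision-making model, a weighted-additive scoring rule. It is our illustration under assumed criteria and weights, not Bazerman and Moore's own formulation; the point is only that the model fixes, ex ante, which data are consulted and how they are combined, so that no criterion can be quietly bracketed out by a biased mental model.

```python
# A minimal sketch of a weighted-additive decision model (our illustration).
# The criteria, weights, and 0-10 scoring scale are hypothetical assumptions.

from typing import Dict

# Criteria and weights chosen before any option is examined (ex ante).
WEIGHTS: Dict[str, float] = {
    "financial_return": 0.4,
    "legal_compliance": 0.3,
    "stakeholder_harm": 0.3,   # scored so that less harm means a higher score
}

def score(option: Dict[str, float]) -> float:
    """Integrate the pre-selected criteria; none can be quietly ignored."""
    missing = set(WEIGHTS) - set(option)
    if missing:
        # Unlike gut instinct, the model refuses to proceed on incomplete data.
        raise ValueError(f"missing data for criteria: {sorted(missing)}")
    return sum(WEIGHTS[c] * option[c] for c in WEIGHTS)

# Usage: two hypothetical options, each criterion rated on a 0-10 scale.
plan_a = {"financial_return": 9.0, "legal_compliance": 4.0, "stakeholder_harm": 2.0}
plan_b = {"financial_return": 6.0, "legal_compliance": 9.0, "stakeholder_harm": 8.0}
print(score(plan_a), score(plan_b))  # 5.4 vs. 7.5: plan_b wins despite lower return
```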

Bazerman and Moore's second strategy to improve decision-making in a managerial environment may seem intuitive when considered in an academic domain: acquire expertise. However, they caution that expertise is not the equivalent of the "relatively mindless, passive learning obtained via experience." What will ameliorate decisions is "much more than the unclear feedback of uncertain, uncontrollable and often delayed results. Rather, it necessitates constant monitoring and awareness of our decision-making processes" (2009). Third, we need to challenge our expertise, deliberately exposing ourselves to environments and individuals that represent threats or opposing perspectives to our patterns, biases, judgments or plans. Bazerman and Moore term this strategy "debiasing" one's judgment. In order to engage in debiasing, and to ensure that it remains stable within our thought processes, one must first identify the mental models to which one originally responds and admit that they are flawed, thus accepting the need for change. Debiasing is also difficult because, as professionals, decision-makers are habituated to a great deal of positive reinforcement and support. Unless and until one accepts one's intuition as an impediment, she or he will resist the proposal that alternative perspectives or models could be "more right," and will instead perceive them as an assault on her or his self-esteem.

Once the individual is able to recognize, listen to and understand a distinct aspect of a particular decision, Bazerman and Moore cite three strategies that provoke the decision-maker to adopt an empathetic, stakeholder approach to the dilemma. By acknowledging the considerable value in analogous cases and experiences, rather than in specific episodes, a decision-maker can gain "generalizable insight" in place of fact-based lessons. By extrapolating to the perspective of an outsider rather than an insider, as well as by understanding the mental models that others bring to the dilemma, we are most effectively equipped to recalibrate our own naturally biased mental models to adjust or adapt to the realistic exigencies of our interpersonal environment.

c. Moral imagination

Linking the modification of mental models to an unbinding of awareness represents an important synergistic relationship, and one that can build effectively on the lessons learned from our experience with moral imagination. Moral imagination has been defined as "a necessary ingredient in responsible moral judgment" that can enable one, in particular circumstances, to "discover and evaluate possibilities not merely determined by that circumstance, or limited by its operative mental models, or merely framed by a set of rules or rule-governed concerns. In managerial decision-making, moral imagination entails perceiving norms, social roles, and relationships entwined in any situation" (Werhane, 1999, p. 93). The importance of moral imagination resides in the following idea: within organizations, managers who strive for success and excellence risk, in many cases, finding themselves caught in a cognitive trap in which only a narrow, partial perspective on reality seems possible.

In such cases, as we have argued in the previous section, managers' interpretation of reality can become distorted or "blinded," their ability to grasp ethical dimensions sometimes impaired, and their ability to exercise moral judgment impeded. In the worst scenarios, as organizational psychologists demonstrate, a competitive culture may degenerate into a neurotic "search for glory" (Horney, 1950); managers may confuse reality with a self-created world of fiction characterized by collective folie à deux processes, such as psychotic delusions of grandeur or depressive delusions of persecution (Kets De Vries, 1984); and managerial decision-making may be heavily biased by phenomena of overconfidence: unreasonable optimism about future outcomes, inconsistency in risk-taking decisions, and excessive confidence in personal skills (Kahneman and Lovallo, 1993; Camerer and Lovallo, 1999). To ameliorate these risks, and those we have outlined previously, the capacity to engage in moral imagination, along with a "self-improvement regimen," are key assets.

i. Disengagement from the context

The first stage of activating moral imagination is to try to disengage from the particular issue and its context in order to discover what mental models are at work. Ethical failures of managerial decision-making are often the result not of weak moral development or a lack of understanding of what is right or wrong, but rather of bounded awareness of the facts and of the moral implications and social consequences of "business decisions." Disengagement means asking questions such as: "What are the operative narratives in this context?" "Who is affected in this situation?" "What motivates the decision-makers?" "What conflicts and values are at stake?" "How does this look from an outsider's perspective, or from the point of view of a stranger, someone from another industry, or someone from an alien culture?" The role of moral imagination in this process is essential: to develop and apply moral principles, managers need first to reach an appropriate understanding of the complex circumstances they are facing (perception) and of what important facts and assumptions are left out of their initial perceptions. By exercising understanding and disengagement in the process of developing a capacity for moral imagination, managers will be less inclined to underestimate or fail to take into account salient aspects, e.g., the ethical implications, involved in complex decisions. One way to become disengaged is to imagine yourself in the shoes of a disinterested observer. Another is to step out of your comfort zone: engage in some business-related activity that is not "normal" for your routine. For a company, this would mean imagining itself as, say, a foreign company working in a new country, or hiring workers with different social practices, or dealing with shareholders who insisted on daily involvement in company business.

ii. Delving into possibilities

The second stage of developing a robust moral imagination involves delving into possibilities. Similar to Bazerman and Moore's third strategy (debiasing), this means considering new alternatives in approaching a particular issue. What societal, corporate and personal values are at stake? Do any of these challenge the status quo? In this stage, it is the combination of moral imagination with moral reasoning that enables creative moral managerial decision-making. A corporate ethics program that embraces the idea of moral imagination (i.e., one aiming at training decision-makers to exercise their capacity for imaginative thinking) would therefore aim to develop moral standards that strike a balance between initial, context-based moral intuitions and the imaginative reflection that de-contextualizes thinking from the status quo. Moral imagination here can be seen as activating a thought process similar to Rawls's notion of reflective equilibrium (Rawls, 1971). By continuously going back and forth between the (specific) case at hand and the (general) company mission and values, between the local culture, social norms and traditions and more abstract personal values and moral principles, managers will be able to think through the issues they are facing and consequently reinforce their motivation toward ethical decision-making. This process does not require that managers deny their local identities or parochial interests. On the contrary, they will place these contextual elements under moral scrutiny until, as Rawls points out, their "considered judgments," duly pruned and adjusted, are in equilibrium with their more general principles, or with what might be considered "moral minimums": those values that represent widespread agreement, across different cultural, social and historical contexts, about what actions are morally justifiable or (more easily) morally questionable, but with no claim to be absolute.

Their validity needs to be continuously reaffirmed over time, open to revision and refinement as new situations or innovative thinking may require. Again, this approach is reminiscent of Bazerman and Moore's debiasing strategy: admitting that our mental models may be flawed, and thereby accepting the possible need for change. A practical application of this dynamic process can be seen, for example, in the evolution of environmental standards or in discussions around any justification for torture.

iii. Focus on outcomes

The third stage of developing moral imagination takes into account practical issues and consequences. Here, one questions the viability of the alternatives at stake. Can they be operationalized? And what might be the consequences, negative and positive, for all the stakeholders involved? This approach to moral imagination requires one to recognize and appreciate social norms and rules of behavior, and the idea that developing "moral sensitivity" is a crucial task in enabling every person to behave ethically. Too often in modern corporations, managers find themselves trapped in narrow decision-making frameworks, biased by short-term pressures that burden their roles and responsibilities, and fail to integrate into their thinking an adequate appreciation of social norms and ethical principles. Moral imagination allows managers to connect with the external world, to "feel a concern for the welfare of others," or, to use modern management language, to take into consideration the impacts of corporate action on all the organization's stakeholders.


VI. Conclusion

"We're only human." The common refrain allows us to explain (in an effort to defend and justify) our errors in decision-making; but, in so declaring, we also betray our awareness of those errors, thus giving rise to a natural accountability for their implications. Once we have awareness, not only of the acts but also of the mental models operative in our thinking, we can begin exploring means by which to correct them. It is that cognition that engenders the responsibility. It was that acute cognition, rather than the acts themselves, that created the gravest discomfort in Milgram's participants: the intense distress of realizing what they had done without questioning those actions. In the current work, we have sought to recalibrate the lessons of those experiments by applying the concept of mental models, the cognitive frameworks with which the participant approached the experiment. Mental models bind our awareness within a particular scaffold and then selectively filter the content we subsequently receive. Through our proposed accountable recalibration, we can cultivate strategies anew, creating new habits and galvanizing more intentional and evolved mental models.

The central question that challenges each viewer or reader of Milgram's obedience experiments is not simply why the subject/participant acted in the manner observed, but how that reader would act in the same situation. We ask ourselves what we would have done: whether we would have questioned the authority figure, refused to continue, stopped the experiment. Of course, observers (non-participants) gawk at the results, suggesting anecdotally that hindsight offers us clearer vision than present temporal experience. However, it is precisely that real-time experience with which our minds interact when they selectively filter and frame in order to create our mental models. While we organize and order our world through mental models, we do not often do so with the luxury of analytical hindsight. To the contrary, if we doubt whether our actions might have differed from those of Milgram's participants, we are simply asking whether we order the world in a manner so terribly distinct from others, and have thereby proven Werhane's thesis surrounding the manufacture of bounded constructs from incomplete data. Many years post-Milgram, we continue to uncover our own focusing failures in the interpretation of those results, recognizing our bias against history and in favor of currency, while (predictably) ignoring the moral blind spots that remain. Consider the presumption in anticipating the 2009 Burger results of the Milgram replication: after more than forty years, one might expect that the human race has, if not evolved, at least learned a small lesson from its collateral historical events.

We have shared experiences in the intervening years; yet we recognize distinct patterns based on our particular and personal biases, mental models, and social schema. Accordingly, no single descriptive analysis is possible; only vastly diverse normative prisms exist through which we continue to view identical scenarios. In the end, the same mental models persist, permeating our otherwise autonomous ability to "think for ourselves" when, in fact, that is precisely what the models represent: each person's individually generated construct of personal experience.

Bazerman and Chugh remind us, though, that our incomplete constructs often omit the data most necessary for effective decision-making. "What you fail to see can hurt you," they submit (Chugh and Bazerman, 2007, p. 1); but our business pressures induce precisely the opposite belief. We whet and hone toward singular objectives, creating exclusionary silos, when the reality of our professional dilemmas demands the broadest perspectives possible. In fact, to be our most effective, efficient, and ethical "best," we are meant both to count the basketball passes and to see the moonwalking bear; in other words, we are meant to perform the apparent and essential functions of our positions in order to meet bottom-line objectives and, at the same time, to guard against any ethical risk or vulnerability that might threaten those objectives, whether anticipated or incidental.

Ariely (2008) explains the vulnerabilities and risks inherent in failing to attend to these ethics blind spots. Vision is "one of the best things we do," he explains. "We have a huge part of our brain dedicated to vision. Bigger than dedicated to anything else . . . we are evolutionarily designed to do vision. And if we have these predictable repeatable mistakes in vision, which we're so good at, what's the chance that we don't make even more mistakes in something we're not as good at. For example, financial decision making." But are we to abandon all hope of community understanding, of victory over common bias, since, as Haidt contends (2009), "[o]ur minds were not designed by evolution to discover the truth; they were designed to play social games"? Ariely (2008) describes these visual impairments or obstructions, but he also offers a metaphor for overcoming them: "For some reason, when it comes to the mental world, when we design things like healthcare and retirement and stock markets, we somehow forget the idea that we are limited. . . . If we understood our cognitive limitations in the same way that we understand our physical limitations, even though they don't stare us in the face in the same way, we could design a better world. And that, I think, is the hope of this thing."

Maloney (2001) anticipates guidance later articulated by Bazerman and Moore (2009) when he concedes that "prophetic voices do attempt from time to time to shock us out of those common biases." Because these biases are deeply rooted, as we discussed above, they do not yield effortlessly, and we remain blind to many of them. It is therefore imperative to surmount the obstacles that impede effective and ethical decision-making by integrating the processes discussed above into our evolved mental models. If we do not attend to this blindness, if we do not revisit our mental models and develop a strong moral imagination in order to challenge the intuitions that otherwise persist without question or deliberation, we are destined to accept common bias. Maloney (2001) cautions that one day "others will shake their heads, when reminiscing about us, and say: how could they have thought that?" And what shall be our only answer? We were not thinking.

REFERENCES

Ariely, D. 2008. Are we in control of our own decisions? The Entertainment Gathering at the TED Conference (Dec.). http://www.ted.com/index.php/talks/dan_ariely_asks_are_we_in_control_of_our_own_decisions.html (accessed May 28, 2009).

Banaji, M. et al. 2003. How (un)ethical are you? Harvard Business Review (Dec.).

Bazerman, M. and D. Chugh. 2006. Decisions without blinders. Harvard Business Review (Jan.): 88-97.

Bazerman, M. and D. Moore. 2009. Judgment in Managerial Decision Making. Hoboken, NJ: John Wiley & Sons.

Bellizzi, J. A. and R. E. Hite. 1989. Supervising unethical sales force behavior. Journal of Marketing 53(2): 36-47.

Burger, J. M. 2009. Replicating Milgram: Would people still obey today? American Psychologist 64: 1-11.

Campbell, A., J. Whitehead and S. Finkelstein. 2009. Why good leaders make bad decisions. Harvard Business Review (Feb.): 60-66.

Chase, W. and H. Simon. 1973. Perception in chess. Cognitive Psychology 4: 55-81.

Chugh, D. and M. Bazerman. 2007. Bounded awareness: What you fail to see can hurt you. Mind & Society 6: 1-18.

Chugh, D., M. H. Bazerman and M. R. Banaji. 2005. Bounded ethicality as a psychological barrier to recognizing conflicts of interest. In D. A. Moore, D. M. Cain, G. Loewenstein and M. Bazerman, eds. Conflicts of Interest. London: Cambridge University Press, 74-95.

Gentner, D. and E. W. Whitley. 1997. Mental models of population growth. In Bazerman, Tenbrunsel and Wade-Benzoni, eds. Environment, Ethics and Behavior. San Francisco: New Lexington Press.

Gorman, M. 1992. Simulating Science. Bloomington, IN: Indiana University Press.

Haidt, J., cited in Kristof, N. 2009. Would you slap your father? If so, you're a liberal. New York Times (May 28). http://www.nytimes.com/2009/05/28/opinion/28kristof.html (accessed May 28, 2009).

Johnson, M. 1993. Moral Imagination. Chicago: University of Chicago Press.

Kant, I. 1787/1965. Critique of Pure Reason. Trans. Norman Kemp Smith. New York: Bedford/St. Martin's.

Kets de Vries, M. and D. Miller. 1984. The Neurotic Organization. San Francisco: Jossey-Bass.

Klein, G. 2004. The Power of Intuition. New York: Currency.

Mack, A. and I. Rock. 1998. Inattentional Blindness. Cambridge, MA: MIT Press.

Maloney, R. 2001. Authenticity and contact with youth. Review for Religious 60: 262.

Mears, B. 2004. Scalia won't recuse himself from Cheney case. CNN.com (May 6). http://www.cnn.com/2004/LAW/03/18/scalia.recusal/.

Milgram, S. 1963. Behavioral study of obedience. Journal of Abnormal and Social Psychology 67: 371-378.

Milgram, S. 1965. Some conditions of obedience and disobedience to authority. Human Relations 18: 57-76.

Milgram, S. 1974. Obedience to Authority: An Experimental View. New York: Harper and Row.

Moberg, D. J. 2000. Role models and moral exemplars. Business Ethics Quarterly 10: 675-696.

Moberg, D. J. 2006. Ethics blind spots in organizations. Organization Studies 27(3): 413-428.

Narvaez, D. 2008a. Triune ethics: The neurobiological roots of our multiple moralities. New Ideas in Psychology 26: 95-119.

Narvaez, D. 2008b. Human flourishing and moral development: Cognitive science and neurobiological perspectives on virtue development. In L. Nucci and D. Narvaez, eds. Handbook of Moral and Character Education. Mahwah, NJ: Erlbaum, 310-327.

Narvaez, D., D. K. Lapsley, S. Hagele and B. Laskey. 2006. Moral chronicity and social information-processing: Tests of a social cognitive approach to moral personality. Journal of Research in Personality 40(6): 966-985.

Neisser, U. 1979. The concept of intelligence. Intelligence 3: 217-227.

Packer, D. J. 2008. Identifying systematic disobedience in Milgram's obedience experiments. Perspectives on Psychological Science 3: 301-304.

Polanyi, M. 1966. The Tacit Dimension. Garden City, NY: Basic Books.

Railton, P. 1986. Moral realism. Philosophical Review 95: 168-175.

Rawls, J. 1971. A Theory of Justice. Cambridge, MA: Harvard University Press.

Reuters. 2004. Justice Scalia refuses to recuse in Cheney case. Mar. 18. http://www.commondreams.org/headlines04/0318-06.htm.

Transport for London, City of London. Awareness Test ("original" version). http://www.dothetest.co.uk/ (accessed April 20, 2009).

Werhane, P. H. 1999. Moral Imagination and Management Decision-Making. New York: Oxford University Press.

Werhane, P. H., S. Kelley, L. Hartman and D. Moberg. 2009. Profitable Partnerships for Poverty Alleviation. New York: Routledge/Taylor and Francis.

Wojciszke, B., R. Bazinska and M. Jaworski. 1998. On the dominance of moral categories in impression formation. Personality and Social Psychology Bulletin 24: 1245-1257.

Zimbardo, P. 1972. The Psychology of Imprisonment: Privation, Power and Pathology. Stanford, CA: Stanford University Press.
