Un-making artificial moral agents


Ethics and Information Technology (2008) 10: 123–133. DOI 10.1007/s10676-008-9174-6. © Springer 2008

Deborah G. Johnson (University of Virginia, Charlottesville, USA) and Keith W. Miller (University of Illinois at Springfield, Springfield, USA)
E-mail: [email protected]

Abstract. Floridi and Sanders' seminal work, "On the Morality of Artificial Agents," has catalyzed attention around the moral status of computer systems that perform tasks for humans, effectively acting as "artificial agents." Floridi and Sanders argue that the class of entities considered moral agents can be expanded to include computers if we adopt the appropriate level of abstraction. In this paper we argue that the move to distinguish levels of abstraction is far from decisive on this issue. We also argue that adopting certain levels of abstraction out of context can be dangerous when the level of abstraction obscures the humans who constitute computer systems. We arrive at this critique of Floridi and Sanders by examining the debate over the moral status of computer systems using the notion of interpretive flexibility. We frame the debate as a struggle over the meaning and significance of computer systems that behave independently, and not as a debate about the 'true' status of autonomous systems. Our analysis leads to the conclusion that while levels of abstraction are useful for particular purposes, when it comes to agency and responsibility, computer systems should be conceptualized and identified in ways that keep them tethered to the humans who create and deploy them.

Key words: artificial moral agents, autonomy, computer modeling, computers and society, independence, levels of abstraction, sociotechnical systems

Abbreviations: STS: Science and Technology Studies

Introduction

Luciano Floridi has made an enormous contribution to the field of computer ethics through a large body of work that brings computers and computation to bear on a range of philosophical concepts and ethical issues. In this paper we focus in particular on Floridi and Sanders' seminal piece, "On the Morality of Artificial Agents."[1] One reason that Floridi and Sanders' paper has generated intense debate is that its claims are relevant to a range of traditional and emerging fields including cognitive science, information studies, computer ethics, computer science (especially artificial intelligence), and philosophy of technology. Scholars in these fields have quite different stakes in the debate about the status of artificial agents and this tends to add complexities and keep the debate moving. Each community of scholars (and to some extent each scholar) asks different questions and puts down anchors at various points in

this complex territory, and then insists that everything else must fall into place around the anchor claims. Each scholar and community has axes to grind – messages they want to convey to particular audiences. To understand the territory, we have to step back and consider where one or another group of scholars has dug in and why they place their anchors exactly where they do.

Many of the interlocutors in this debate seem to believe that what is at issue is "the truth" about the moral status of computer systems. In this paper, we argue that this issue is not a matter of truth. There is no preexisting right answer to the question whether computer systems are (or ever could be considered) moral agents; there is no truth to be uncovered, no test that involves identifying whether a system meets or does not meet a set of criteria. We argue that even if computer systems exhibit particular features, we are not "required" in the name of consistency to draw a particular conclusion about their moral status. Our analysis begins by stepping back from the debate and trying to understand what exactly is at stake and why the debate goes on as it does.

[1] Luciano Floridi and Jeff W. Sanders. On the Morality of Artificial Agents. Minds and Machines, 14(3): 349–379, 2004.

The debate focuses on computer systems that are conceptualized as artificial agents; they are computational artifacts that, unlike many other artifacts, behave with some kind or degree of independence. The debate can be described as a struggle or negotiation over the meaning and significance of computational artifacts that behave with this independence and, indeed, their independence leads some to refer to these systems as "autonomous agents." Two examples that are often discussed are robots that perform tasks at remote locations (including other planets) with only periodic commands from human "handlers" and network bots that can be given an item to buy and then search for the best value available, taking into consideration price, quality, and vendor reputation. The range of current and future applications is enormous and growing as more and more systems are automated by increasingly complex software and hardware. These emerging technologies can and do make decisions about what information we receive when we search, what medicines or doses of radiation we receive, how airplane and automobile traffic is ordered, what financial transactions are implemented, how weapons are targeted and launched, and so on.

Our analysis concedes that computational artifacts can behave independently from the humans that create or deploy them in some sense of the term "independent." However, we argue that in one very important sense of the term "independent", computer systems are not ever completely independent from their human designers. We argue, as well, that it is dangerous to conceptualize computer systems as autonomous moral agents.
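The network-bot example can be made concrete. The following minimal sketch is our own illustration, not drawn from any particular product or from Floridi and Sanders; the offer data, weights, and function names are hypothetical. It shows a bot that, once given an item to buy, ranks candidate offers by a weighted score over price, quality, and vendor reputation and "decides" without further human input.

from dataclasses import dataclass

# Hypothetical sketch of a shopping bot that selects an offer on its own,
# illustrating the limited sense in which such artifacts "behave independently."

@dataclass
class Offer:
    vendor: str
    price: float        # lower is better
    quality: float      # 0.0 to 1.0, higher is better
    reputation: float   # 0.0 to 1.0, higher is better

def score(offer: Offer, max_price: float) -> float:
    """Combine the three criteria into one number using weights chosen by the bot's designers."""
    price_term = 1.0 - (offer.price / max_price)   # normalize price into the 0..1 range
    return 0.5 * price_term + 0.3 * offer.quality + 0.2 * offer.reputation

def choose_offer(offers: list[Offer]) -> Offer:
    """The bot's 'decision': pick the highest-scoring offer with no further human input."""
    max_price = max(o.price for o in offers)
    return max(offers, key=lambda o: score(o, max_price))

if __name__ == "__main__":
    offers = [
        Offer("VendorA", price=19.99, quality=0.8, reputation=0.9),
        Offer("VendorB", price=14.50, quality=0.6, reputation=0.7),
        Offer("VendorC", price=24.00, quality=0.9, reputation=0.95),
    ]
    print(choose_offer(offers).vendor)

Even in this toy case, the weights, the scoring rule, and the decision to act on the top-ranked offer are all choices made in advance by the humans who wrote and deployed the bot.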

Interpretive flexibility

When we step back from this debate about the status of computational artifacts and try to understand what is being contested, the debate seems to fit accounts of technological development provided by science and technology studies (STS) scholars. STS scholars argue that during the early stages of technological development, technology has interpretive flexibility. The technology is not yet a "thing" both in the sense that the material design has not yet solidified and in the sense that there is disagreement about its meaning or significance.[2] Various actors and


interest groups struggle over the benefits and drawbacks of various designs. The standard exemplar in STS is Pinch and Bijker's description of the development of the bicycle.[3] They identify a variety of bicycle designs that were developed, adopted for a time, then rejected and replaced with another design. The pattern and direction of development was neither predetermined nor linear; at each stage a variety of interest groups were part of the process and each group pressed for particular features. The bicycle design that eventually took hold was the outcome of struggles and negotiations among the various interest groups.

The idea that technology has interpretative flexibility in its early stages of development is important here because it makes the (perhaps obvious but often misrepresented) point that technologies are not developed in a single act, nor are they created intact as we (consumers and users) later encounter them. Rather the technologies we have today were developed through a process of iteration: early prototypes were rejected or modified, users responded in a variety of ways to various features, users and the public attached meanings and found uses for devices that the designers never intended. Of course, in hindsight it may look like the technology, as we know it now, was on a path fated to end with the technology we currently see. That is the nature of hindsight; it puts the present in a perspective from which it looks like it was fated to occur. In reality, however, as STS scholars point out, the process of technological development is contingent and in the early stages what is being developed has many valences; there are many potential directions in which the design and meaning might evolve, including going nowhere at all, e.g., cold fusion.

So it is with computer systems and computational artifacts. They are developed over time in contexts in which various actors push and pull in a variety of directions with a variety of concerns and interests. The actual design and meaning that is eventually adopted by users is the outcome of struggles between modelers, programmers, clients, distributors, marketing representatives, users, lawyers, and so on.

[2] Johnson makes this point with regard to nanotechnology, referring to it as a technology 'in the making.' Deborah G. Johnson. Nanoethics: An Essay on Ethics and Technology 'in the Making'. Nanoethics, 1(1): 21–30, 2007.

[3] Wiebe Bijker and Trevor Pinch. The Social Construction of Facts and Artifacts. In Wiebe Bijker, Thomas Hughes and Trevor Pinch, editors, The Social Construction of Technological Systems, pp. 17–51. MIT Press, Cambridge, Mass., 1987.


Indeed, the process of development continues on even after a product is brought to market; for example, the process continues when users find work-arounds and bypass features the designer thought essential.[4] Systems often evolve into "things" that are very different from that which was initially conceived.

It is important to note that the interpretative flexibility of technology generally does not go on forever. Closure and stabilization occur when a particular form of the technology takes hold and the various actors and interest groups involved agree upon a meaning or cluster of meanings. Indeed, STS literature is filled with case studies illustrating technologies at various stages of interpretive flexibility, technologies that eventually move to stabilization and closure. For example, drivers get used to steering wheels rather than sticks for steering, computer users get used to a Windows environment, VHS wins over Betamax for video format, and so on.

When we recognize the interpretive flexibility of technologies in their early stages of development, we acknowledge that technology is, in part at least, "socially constructed." A technology is not a single "something" that catches on or fails; technologies are being made, i.e., being delineated as "things" and given meaning, as they are being developed. Various actors in the process develop understandings and begin to attach different meanings and significance to the "thing." Think, for example, of the struggles over the delineation, meaning, and significance of genetically modified foods, cloning, and nanotechnology. Indeed, these are all cases where debate continues and where there is often disagreement about the object of discussion, e.g., what counts as genetically modified? What is nanotechnology?

Using this notion of interpretive flexibility, the debate about artificial agents can be seen as a struggle over the meaning of computer systems that operate independently in space and time from the humans that create and deploy them. The debate arises because these systems have a kind and degree of independence not found in other, prior technologies. Unquestionably, this degree and kind of independence is significant. The question is what are we to "make" of it? Are these systems "beasts" designed by modern day Victor Frankensteins? Are they precursors to HAL (in the film 2001)? Are they "merely" automated versions of maids and servants? Are they the next step in the evolution of prosthetic tools that extend human capabilities? In the frame of interpretive flexibility, the question is not what these systems (truly) are; the question is: "What should we 'make' of them?" Deciding the question "Can computers ever be considered moral agents?" is more like the

question "Should eighteen-year-olds be allowed to vote?" or the question "Should marijuana be considered a dangerous drug?" than it is like the question "What is the half-life of uranium-238?" As scholars engaged in the debate about the moral agency of artifacts, we are actors in the process, actors who are making contributions to the construction of the meaning of artificial agents. Seen from this perspective, the question "Can artificial agents be moral agents?" is misleading. More accurate questions are: "How should we conceptualize computer systems that behave independently?" and "What terms should we use to refer to computational systems that behave independently from their creators and deployers?" In answering the latter questions it is important to look at the criteria we use in other cases and it is helpful to identify how the case at hand is like and unlike other cases in which agency is attributed to entities that behave independently, e.g., grown children, animals. However, consistency cannot be determinative here since there will always be similarities and differences.

In this framework, Floridi and Sanders' "On the Morality of Artificial Agents" can be seen as a major effort to provide an understanding of a new kind of technology that exhibits expanded independence. Debate arises because scholars have quite different ideas about the meaning and significance of these computational artifacts. To some extent, there seems to be agreement that these entities can be characterized and referred to as "artificial agents" but this agreement has been achieved because of the ambiguity of "agent". In other words, it is not at all clear what it means to say that something is an "artificial agent." Human agency has a particular meaning in philosophy of mind and ethics, but "agent" also refers to lawyers and brokers who act as "agents" of their clients[5]; and "agent" seems to be used in relation to machines simply when machines perform tasks. Thus, when we refer to computer systems as "artificial agents", it is unclear which, if any, of these meanings are being attributed to computer systems. Thus, even though the interlocutors in the debate have agreed on the use of the term "artificial agent", because of the ambiguity of the term, the debate over meaning continues. "Artificial agent" still has interpretative flexibility.

[4] See John L. Pollock. When Is a Work-Around? Conflict and Negotiation in Computer Systems Development. Science, Technology & Human Values, 30(4): 496–514, 2005.

[5] Deborah G. Johnson and Thomas M. Powers. Computers as Surrogate Agents. In J. van den Hoven and J. Weckert, editors, Information Technology and Moral Philosophy. Cambridge University Press, Cambridge, 2008.


Putting in anchors

Acknowledging that computational artifacts that behave independently are in a stage of interpretive flexibility and framing the debate as a struggle over the meaning of "artificial agents", we can now examine the struggle more carefully. To get a handle on the debate, we can identify two distinct groups of scholars who differ on the status of artificial agents but agree on important related themes. [We admit that we have not identified all possible views in this debate; our oversimplification is intended to highlight key parameters of the debate.]

The first group of scholars is committed to computation as a model, either a model of reality or a tool for scientific exploration. We will call this group Computational Modelers and we believe that Floridi and Sanders fall into this category. Computational Modelers see computational modeling as the ultimate in the philosophical endeavor to capture reality. Their agenda can be seen to be rooted in the Enlightenment tradition and the idea that reason will lead to truth. Computational Modelers have a stake in using computation as a (if not the) foundation of a body of knowledge that brings insight to a wide range of areas; i.e., they have a stake in the value of computation as a model. Whether computing is used as a model of reality or a pragmatic model (i.e., a useful way of thinking about and manipulating nature) is an interesting question, but not one that gets much attention from Computational Modelers. Instead the assumption seems to be that computational models represent reality.

In any case, while Computational Modelers may be attracted to computation as a model, they get caught up in much more than models. Computers are machines, machines that behave; computers can be used not just to model behavior, but to produce behavior. This may well be the source of the move to refer to computers as agents. Since computer systems behave, their behavior can be compared to human behavior. The effective outcomes of computer behavior and human behavior can be the same. For example, before computerization of the stock exchange, a buyer would call a stockbroker, check on the price of a stock and tell her to purchase a particular number of shares of the stock. The stockbroker would call someone, say certain words, fill out paperwork, and the buyer would eventually receive paper confirmation of ownership of shares of the stock. Now that the entire process has been computerized, a buyer can set up a computer system to execute the purchase of the stock if and when the price reaches a certain point. Few human movements are involved; no words need be spoken; and paper


may never be used. It would seem here that the computer system acts as an agent replacing the stockbroker who acted as an agent on behalf of a client in the old system. While we will challenge the comparability of the behaviors here, for now, the point is that Computational Modelers seem to want to make much of the fact that machines can now perform tasks that only humans performed in the past. The fact that computer systems can do tasks that humans do seems to support the validity of the computational model involved. [Of course, it should be no surprise that the computer system accomplishes tasks done by humans for that is precisely what systems designers and programmers designed the system to do. They developed a computer system to replace (though not necessarily replicate) the prior system.]

A second group in the philosophical debate about the moral status of artificial agents seeks to illuminate the role that computers (and technology in general) play in the lives of human beings, especially their moral lives. Here we would place ourselves, in contrast to Floridi. The agenda of this group is to draw attention to the power of technology, especially computer technology, and its moral implications. This means bringing to light the moral character of computer systems, the values embedded in their design, and the ways in which they affect the moral lives of human beings. The adoption and use of computer technology has powerful effects and those who decide about its design and adoption have enormous power. Among other things, better understanding of the moral implications of computer technology can lead to better steering of technological development. Thus, the kind of knowledge sought by this group of scholars has an implicit, if not explicit, goal of informing human action and directing social change in the future. We will label this second group the Computers-in-Society group. When it comes to artificial agents, this group acknowledges that computer systems behave independently in some sense of the term "independent," and sees this independence as a moral concern. Computers are increasingly being assigned tasks that humans controlled and performed in the past; Deep Blue defeats Kasparov[6], robots replace workers on an assembly line, and computer systems instead of physicians recommend treatments. As we mentioned earlier, what information we receive when we search, what medicines or doses of radiation we receive, how

airplane and automobile traffic is ordered, what financial transactions are implemented, how weapons are launched and targeted are now tasks done (decisions made) by computer systems. If computers are performing tasks and making these critical decisions, then, the Computers-in-Society scholars argue, there must be some accountability for the effects of these systems on human well-being.

The Computers-in-Society group explores questions about the role of computer systems in decision-making by classifying computer systems as a form of technology, a new form with special features, but nevertheless, a form of technology that is, by definition, created and deployed by humans. There are a variety of ways in which members of this group express their concerns; for example, they draw attention to the ways in which computers affect the world in which humans act and live. Computers-in-Society scholars also give accounts that uncover the ways in which the design of technology can make a difference in what human actors can do and what humans actually do. This group of scholars has a stake in illuminating the contribution of computer systems to morality and some go as far as to say that computer systems have "agency" at least in the sense that they make causal contributions to what humans do and what occurs. Nevertheless, the Computers-in-Society group has a stake in not establishing computer systems as autonomous moral agents. Their concern is that framing computer systems as moral agents will deflect responsibility away from the human beings who design and deploy computer systems. In a sense, this group has a two-part agenda: show that technology is an important component of morality but also show that technology is under human control.[7]

So, Computational Modelers and Computers-in-Society scholars disagree over the meaning of "artificial agent", that is, they disagree about the status of computer systems that behave independently. Each group draws on different philosophical traditions – the first group generally draws on logic and philosophy of mind and the second group draws on ethics and social philosophy. The Computational Modelers have a stake in the validity of computational models and seem to believe that establishing the parallels between computer system behavior and human behavior is crucial to the validity of the computational model. They seem to believe that if computer systems are given the status of moral agents, this will be testimony to the achievement of the computational model in


capturing reality. On the other hand, the Computers-in-Society group has little stake in computational modeling. For this group it is all well and good if computational models can be used to produce new knowledge and devices that make the world a better place for humans. However, when it comes to the moral agency of computer systems, this group finds the claims of Computational Modelers both odd and dangerous. The claims are odd because they do not seem to acknowledge that computer systems are an extension of human activity and human agency, and because their view of agency and morality is out of sync with the idea of morality as a human system of contextualized ideas and meanings. The claims seem dangerous because they imply that computer systems can operate without any human responsibility for the system behavior.

So, each group of scholars has its own agenda and the groups converge and conflict on the topic of the moral status of artificial agents. To return to the concept of interpretive flexibility, it seems that the Computational Modelers are pushing for a "frame of meaning" for computer systems that has heretofore been reserved for humans: the frame of "moral agent." Why this frame? The answer we have suggested here is that it furthers the agenda of Computational Modelers to establish the validity of computational modeling. On the other hand, the Computers-in-Society group pushes against a "frame of meaning" that casts computer systems as moral agents. While the Computers-in-Society group would frame computer systems as moral – moral entities – the group is against ascribing the status of moral agent to any computer system on grounds that doing so is dangerous. As already explained, the Computers-in-Society group argues against ascribing moral agency to computer systems on grounds that doing so will deflect responsibility from the humans who create and deploy them. For this group of scholars, it is important to keep the connection between computer systems and humans well in sight.

[6] For quick insight into the nature of this defeat from one of Deep Blue's programmers, see Robert Andrews. A Decade After Kasparov's Defeat, Deep Blue Coder Relives Victory. Wired, May 11, 2007. Available at http://www.wired.com/science/discoveries/news/2007/05/murraycampbell_qa.

[7] See D.G. Johnson. Computer Systems: Moral Entities but not Moral Agents. Ethics and Information Technology, 8(4): 195–204, 2006.

Critique of Computational Modelers and Floridi–Sanders

So, the debate between the Computational Modelers and the Computers-in-Society group is a debate about how to "construct" the meaning and significance of computer systems that have a particular kind or degree of independence. If the Computational Modelers win the debate, artificial agents will be understood to have the potential for moral standing (moral agency) of a kind that ultimately might lead to a prohibition on turning them off. If the Computers-in-Society group


wins, artificial agents will be understood to be important components of the moral world but they will always be understood to be human constructions under human control. They would always be understood to be "tethered" to humans in the sense that they are the products of human invention, are deployed by humans for human purposes, operate in contexts maintained by humans, and cannot function without some degree of human control (even though that control may be distant in time and space).

We can now explain and critique the Floridi and Sanders contribution to this debate. By introducing the notion of levels of abstraction, Floridi and Sanders argue that computer systems can have moral agency insofar as moral agency is understood to make sense at a particular level of abstraction. They argue that we can conceptualize computer systems at different levels of abstraction and within one level of abstraction a computational entity may be autonomous while it is not so at another level of abstraction. Floridi and Sanders then seem to claim that since computational entities can be conceptualized as autonomous at one level of abstraction, they can be autonomous moral agents. This move is especially important since it is the kind of move that Floridi, and Floridi and Sanders, make with regard to a number of moral concepts. For example, in their paper on artificial evil[8] they argue that a line of code could be evil – as understood at a particular level of abstraction.

We agree with Floridi and Sanders about levels of abstraction; that is, entities such as computer programs can be conceptualized and viewed at different levels of abstraction (or within different conceptual frameworks). However, Floridi and Sanders misstep when they generalize concepts and terms from one level of abstraction to another. Thus, while a computational entity might be conceptualized and understood to be independent, even autonomous, at some level of abstraction, it would be misleading to maintain that it is, therefore, independent or autonomous at another level of abstraction. Floridi and Sanders seem to have confused, on the one hand, something being autonomous within a level of abstraction and, on the other hand, something being autonomous writ large. They seem to move from autonomous agents as understood at one level of abstraction to autonomous agent broadly understood or understood in the context of moral theory. [They also seem to think that morality and moral theory have different levels of abstraction rather than being


a particular level of abstraction but this is a point we will discuss in a moment.]

Grodzinsky, Miller, and Wolf[9] draw attention to this flaw in Floridi–Sanders' reasoning when they discuss the level of abstraction of designers versus the level of abstraction of users. When one considers the significant difference between a computer artifact as seen from the viewpoint of its designer and the same artifact as seen from the viewpoint of a user (who does not have access to the source code and computational state of the artifact), Grodzinsky et al. argue that it is counterintuitive to define moral agency from the perspective of the user and entirely ignore the designer's viewpoint since the designer's viewpoint (level of abstraction) is necessary for the computer artifact to exist at all.

In allowing that an entity might be a moral agent at one level of abstraction and not at another, Floridi and Sanders have reduced autonomous moral agency to a highly variable and relativistic notion. Indeed, it seems odd to think of moral agency out of the context of moral concepts and theories. In an important sense, morality is a particular level of abstraction with particular concepts and meaning. "Moral agency" has a deep history in philosophical thought, and is inextricably connected to other concepts such as action, intentionality, and mental states. While these concepts and their relationships can be modeled and represented, to say that they can be reduced to a different level of abstraction seems at least to beg the question, if not to be entirely misguided. Re-conceptualizing moral agency at another level of abstraction is comparable to re-conceptualizing something like color in terms of sound.

The argument can be made in a somewhat different way. While Floridi and Sanders have made a solid case for an account of computer systems as autonomous at some (or several) level(s) of abstraction, they have not shown that "moral" could ever be delineated at the same level of abstraction. To establish that computer systems can be autonomous moral agents, Floridi and Sanders would have to show that within some level of abstraction in which a computer system is autonomous, the notion of "moral" could also be delineated. This would, then, give us an account of autonomous moral agent at some level of abstraction. We have suggested that this can't be done without begging the question.

[8] L. Floridi and J.W. Sanders. Artificial Evil and the Foundation of Computer Ethics. Ethics and Information Technology, 3(1): 55–66, 2001.

[9] Frances S. Grodzinsky, Keith W. Miller and Marty J. Wolf. The Ethics of Designing Artificial Agents. CEPE, July 12–14, 2007, San Diego, CA. Abstract available at http://cepe2007.sandiego.edu/abstractDetail.asp?ID=14.
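The designer/user contrast discussed above can be illustrated in computational terms. The following minimal sketch is our own construction with hypothetical names, not code from Grodzinsky et al. or from Floridi and Sanders. At the user's level of abstraction only the observable behavior of act() is available, and the artifact can look self-directed; at the designer's level of abstraction the same artifact is nothing more than a handful of human-written rules.

# Hypothetical illustration of two levels of abstraction on one artifact.
# User's LoA: observable inputs and outputs only (no access to source or state).
# Designer's LoA: the explicit, human-authored rules that produce that behavior.

class ThermostatAgent:
    """A trivial artifact that 'decides' when to switch heating on or off."""

    def __init__(self, setpoint: float = 20.0):
        self._setpoint = setpoint    # chosen by the designer
        self._heating_on = False     # internal state hidden from the user

    def act(self, room_temp: float) -> str:
        # Designer's LoA: a simple rule with a 0.5-degree hysteresis band,
        # written and tuned by humans.
        if room_temp < self._setpoint - 0.5:
            self._heating_on = True
        elif room_temp > self._setpoint + 0.5:
            self._heating_on = False
        return "heat on" if self._heating_on else "heat off"


# User's LoA: the user only observes a stream of decisions.
agent = ThermostatAgent()
for temperature in [21.0, 19.2, 19.8, 20.8]:
    print(temperature, "->", agent.act(temperature))

Describing the artifact only through the observation loop at the bottom leaves out exactly what the Grodzinsky et al. argument insists on: without the designer's level of abstraction, where the setpoint and the rule are visible as human choices, the artifact would not exist at all.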


Comparable behavior, comparable status/identity

The Floridi and Sanders account is flawed for other, related reasons as well. Earlier we noted that while Computational Modelers start out with a focus on models, they emphasize the capacity of computer systems to behave or to produce behavior. As indicated earlier, it is this move from models to operational systems that may well lead Computational Modelers to the plausibility of moral agency for computer systems. However, the focus on behavior turns attention away from the processes by which the behavior is achieved.

To begin to see the problem here, consider the difference in the way we evaluate scientific models versus operational systems. Scientific models are tested against the natural world they represent. Operational systems are aimed at utility, at achieving certain tasks as understood by their human users. The aims, purposes, and evaluation criteria for each kind of model are very different. The validity of a scientific model is a matter of comparison with the natural world. In this respect, scientific models are constrained by the natural world; we know when we have a good model of an ecological system or a weather system by seeing whether it replicates the behavior that actually occurs in the natural environment. On the other hand, operational systems have no such constraints. They are designed to achieve tasks and there is no need to accurately model how things are done in the natural or human world. For example, operational systems such as those that regulate air traffic, buy stocks, and perform tasks via robots on the moon are not required to behave as humans do. A computer system designed to route air traffic may incorporate descriptions of the behavior of airplanes, but it also gives directions to airplanes. The system is configured to track the planes, "decide" and signal the pilots. The system is designed to make safe and efficient decisions about when and where real aircraft should approach runways. The important point is that operational models of this kind are not designed to model human behavior; they function in ways that are quite distinct from human thinking. Indeed, such systems are often designed to exceed the capacity of humans in the speed and even the quality of their decisions.

So, when we compare an operational computer system and a human performing the same tasks, the behavior of the human need not look anything like the behavior of the computer system and vice versa. The function achieved has to be equivalent, not the two entities performing the function. Thus, when it comes to operational systems, there need be no match between the behavior of the system and the behavior of the human. To say that


computer systems can be moral agents is, then, reducing morality to functionality – a reduction that seems antithetical to the idea of moral agency.

An example may help make the point. Let us assume that a person opening a door for another person carrying a large package is a small act of beneficence, a positive moral act by a human moral agent. If, however, the door is opened by an electric eye and motor assembly as the person approaches the door, we do not say that the door-opener has performed an act of beneficence. The function performed is equivalent, but the underlying processes (voluntary, autonomous act versus mechanical operations) are significantly different. It may be that the persons who envisioned, designed, and installed the mechanical door-opener had just this situation in mind, in which case those humans might be considered beneficent in envisioning, designing, and installing the opener. But that does not make the door-opener itself a virtuous or praiseworthy moral agent.

Moreover, the mismatch between computer behavior and human behavior is only part of the problem. The more important and, perhaps, more interesting flaw in Floridi and Sanders' analysis is their failure to recognize that operational systems often achieve their functions by agreements among humans to "count" computer activity as equivalent to human performances. Only through human decision-making do certain computer operations come to be recognized as – to stand for or be considered equivalent to – particular human actions.

Our earlier discussion of computerized stock purchasing can be extended to illustrate the point. When the automated stock system was designed, it had to meet key requirements that were critical features of the old system. The new system had to operationalize notions of ownership, money, transfer of funds, possession, stock, etc. These requirements were translated into the automated environment in such a way that humans could come to use the system in a way that made sense to them. The computerized stock exchange worked because humans agreed to count certain electronic activities as "authorizing a transaction" and "purchasing a stock". To put this in another way, the computerized system created a new way of achieving "purchase of a stock." However, this was possible only because humans agreed to treat electronic configurations as such. Systems designers were not looking to find "purchasing a stock" in computer systems; they created a system in which human beings would be able to engage in activities that could count as "purchasing a stock" and this meant that the system had to have features that could be interpreted as comparable to features in the paper and ink and words system. The system had to be


constructed physically and symbolically; that is, there had to be computer operations and human meanings assigned to those operations.

So, there are two important flaws in Floridi and Sanders' account. First, computer systems do not model human behavior in the way scientific models model natural systems. Second, operational systems often achieve human functions because humans agree to count computer operations as performances of a particular kind. These are precisely the mistakes that the Computational Modelers, including Floridi and Sanders, seem to make in the case of moral agency. Their logic assumes that because operational systems have certain outcomes, the behavior is equivalent to human behavior producing the same outcomes. They fail to see that tasks can be achieved by very different means and they fail to see that whether or not the processes count as moral action is a matter of human convention. Moreover, they fail to recognize that the meanings that humans give to particular operations of a computer system are contingent. In other words, the operations of the computer stock exchange don't have to count as "purchasing a stock"; it is only through human convention that they count as such.

The power of the misconception

While this mistake of conflating equivalent functions with equivalent behavior and equivalent entities is (we hope) now clear, the significance of the mistake should not be underestimated. Some Computational Modelers tell us that in the future we will be compelled to treat certain computer systems – if they develop in certain ways – as entities with moral status. They make predictions of a future in which robots will be so sophisticated that we (humans) will have a moral responsibility to refrain from turning them off. Note that the compelling force here seems to be consistency. Were we not to acknowledge certain kinds of robots as moral agents, we would be wrongly treating silicon-based moral agents differently than carbon-based moral agents when the silicon-based agents met all the criteria we apply to carbon-based (human) moral agents. (So the argument goes.) Lest we be accused of using a straw man or overstating what this group of scholars claims, consider what Sullins[10] writes: "Certainly if we were to build computational systems that had cognitive abilities similar to that of


humans we would have to extend them moral consideration in the same way we do for other humans." Note the low threshold for moral consideration; it is "cognitive abilities similar to that of humans" (our emphases added here and below). Once this low threshold is met, we must, according to Sullins ("we would have to extend…"), give the same moral consideration to computer systems that we give to human moral agents ("the same way we do for other humans").

Consider a parallel argument made in relation to an electronic stock exchange. Suppose a group of computer programmers decide to design and build a system that they hope could be implemented to replace a paper and ink stock exchange. They get the system functional – we can even imagine that it is a flawless system. The system specifies functional equivalents to such operations as "requesting to purchase a particular stock at a specified price", "authorizing the transfer of funds from a bank", and "purchasing shares of a stock". Are we compelled to recognize these functional equivalents as the actions they represent? Of course not. We would be foolish to use this system to purchase stock unless and until the system was embedded in the appropriate social institutions, that is, unless and until many other humans agreed to use the system in the way it was designed to be used. A wide range of actors (regulatory and government agencies, banks, law enforcement, and users) would have to adopt the system and agree to act as if operations of the new system would count as their functional equivalents, e.g., "purchase of a stock", "transfer of funds", etc.

Another way to see the problem here is to return to our earlier discussion of technological development. Sullins presumes a version of technological determinism, that is, he presumes that technological development follows a natural order of development unaffected by social and political forces and human choices. The STS critique of technological determinism is that technological development is contingent; it is influenced by a wide range of social, political, economic, cultural, and historical factors. Sullins fails to recognize how the debate about the significance of autonomous computer systems reflects and will shape the meaning and the design of computer systems. In adopting this technologically deterministic view, Sullins and others hide the power that is being exerted by a variety of interest groups. There are enormous amounts of money, time, and effort being invested in the development of autonomous systems and the money, time, and effort is being invested by groups that have specific, distinctively human interests in


mind – more sophisticated and effective weapons, more global and efficient markets, less crime and terrorism, faster response times, etc. Ignoring the interests of these powerful interest groups, the Computational Modelers push for a conception of computer systems that is compatible with these interests. Attributing moral agency to computer systems simply hides those groups and their interests.

Scholars in the Computers-in-Society group have a stake in the rejection of technological determinism. As long as technology is seen as inevitable and predetermined, many actors who are powerfully affected by particular technologies will stay out of the discussion and debate about whether and how that technology should be developed. This prevents technological development from being steered for human well-being, that is, for the good of all those who are affected. Using the term "artificial moral agents" encourages the view that these sophisticated computer systems are the way they are because they must be exactly that way. The term suggests that this is the only way they can be and the only way we can think about them. In fact, these systems can be developed in a myriad of ways; they develop in the ways humans choose to develop them, and their meaning and status are fluid. On Bill Joy's (2000) technologically deterministic vision of artificial agents, we are to believe that artificial agents will, ultimately and necessarily, become so advanced (and superior to humans) that they will (indeed should) take over humanity.[11] This is conceivable only if one accepts that technology follows some predetermined path of development that can't be stopped because it is out of human control. By contrast, the Computers-in-Society group insists that there are human forces at work shaping the development of any technology including autonomous computer systems, and we ought to carefully consider how they are being constructed, not merely passively observe their "emergence" in some "natural progression."

[10] John Sullins. Ethics and Artificial Life: From Modeling to Moral Agents. Ethics and Information Technology, 7(3): 144, 2005.

[11] Although he does not advocate the position, Bill Joy's title to his important article seems apt: Why the Future Doesn't Need Us. WIRED, 8(4): 238–262, 2000.

The dangers of constructing artificial moral agents

If we take our cue from the Computers-in-Society group, the independence and self-generation of computer systems are not the relevant or key features. This group concedes that many computer systems can now – and more will soon – behave independently. Such computer systems can at some level of abstraction meet criteria such as those proposed by


Floridi and Sanders; i.e., they have the ability to respond to a stimulus by a change of state (interactivity), the capacity to change state without an external stimulus (autonomy), and the ability to change the transition rules by which states are changed (adaptability). However, fulfilling these criteria does not convince the Computers-in-Society group that computer systems are or "must" ever be treated as moral agents. Instead, they insist that computer systems should be conceptualized and given meanings that encourage certain kinds of human behavior. This means, among other things, understanding computer systems in ways that keep those who design and deploy them accountable.

To illustrate this point, consider a complex bot that learns and generates its own rules of behavior. Suppose it generates rules of behavior that make it an extremely risky entity. Say it makes medical decisions or regulates financial transactions and no humans understand exactly how it does what it does. According to the Computers-in-Society group, to conceptualize such systems as autonomous moral agents is to absolve those who design and deploy them from any responsibility for doing so. It is similar to absolving from responsibility people who put massive amounts of chemicals in the ocean on grounds that they did not know precisely how the chemicals would react with salt water or algae (though they knew generally about chemical reactions and toxicity). Ignorance is no excuse since they knew they were engaged in risky activity. Just as we might ask, "What is the best way to think about chemicals to ensure that people know they are dangerous?" the Computers-in-Society group asks, "What is the best way to think about computer systems to ensure that those who create and deploy them do so safely?"

The Computers-in-Society group claims that it is dangerous to construct computer systems as autonomous agents because the construction implies that no humans are responsible for what the systems do; they argue that – no matter how independently they behave – computer systems should be understood in ways that keep them conceptually tethered to human agents. Here the Computers-in-Society group must answer two distinct questions: Are computer systems in fact tethered to human agents (descriptive question) and should they (normatively) always be understood to be tethered to human agents? We claim that the answer to both of these questions is "yes." Descriptively, all computer systems (indeed, all artifacts) are tethered to humans in a complex web of intentions and consequences, and normatively, concepts of computer systems should reflect the connection of the system to human agents.
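A minimal sketch may make the worry concrete. The toy agent below is our own hypothetical construction, not an implementation from Floridi and Sanders; its names, parameters, and update rule are invented for illustration. At the level of abstraction of its observable behavior it arguably exhibits all three properties: it responds to stimuli, changes state on its own internal schedule, and rewrites its own transition rule. Yet every mechanism by which it does so is a few lines of code written, parameterized, and deployed by humans.

import random

# Hypothetical toy agent meeting the three criteria at one level of abstraction
# (interactivity, autonomy, adaptability) while remaining human-authored code.

class ToyAgent:
    def __init__(self, threshold: float = 0.5, seed: int = 42):
        self.threshold = threshold       # the "transition rule" it will later rewrite
        self.energy = 1.0                # internal state
        self._rng = random.Random(seed)  # chosen by the humans who deploy it

    def stimulate(self, signal: float) -> None:
        # Interactivity: an external stimulus changes the agent's state.
        self.energy += signal

    def tick(self) -> str:
        # Autonomy: state changes without any external stimulus.
        self.energy -= 0.1 + 0.05 * self._rng.random()
        return "act" if self.energy > self.threshold else "rest"

    def adapt(self, reward: float) -> None:
        # Adaptability: the agent changes the rule by which it changes state.
        self.threshold -= 0.1 * reward

agent = ToyAgent()
agent.stimulate(0.3)
for step in range(3):
    outcome = agent.tick()
    agent.adapt(reward=1.0 if outcome == "act" else -1.0)
    print(step, outcome, round(agent.threshold, 2))

Nothing in this behavior compels us to treat the agent as a moral agent; the thresholds, the update rule, and the decision to deploy it remain traceable to the people who wrote and ran the code, which is exactly the tether the Computers-in-Society group wants kept in view.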


The first claim may be the easier to defend, so we will start there. Computer systems are always tethered to human agency in the following ways: they are created by humans, created to perform tasks on behalf of humans, and they are designed to be used by humans. Any computer system is deployed through systems that function as socio-technical systems; for example, we could not have the Internet (or anything like it) were it not for hundreds of social institutions, agreements, laws, contracts, and more. Users and their institutions are the beginning and ending point for all computer systems. Even computer subsystems that take their input from and deliver their output to other computer systems are ultimately connected to humans. They are deployed by humans, they are used for some human purpose, and they have indirect effects on humans. To be sure, computer systems can be understood at various levels of abstraction, including levels of abstraction in which no reference is made to humans. But the whole point of adopting these levels of abstraction is to do some human work, e.g., being able to understand and/or control what is going on in a system. Abstractions are the work of humans and the abstractions themselves do not exist separately from humans. Thus, attributing "moral agency" to computer systems merely because there is a level of abstraction in which these connections are unnecessary is misleading. Our descriptive claim is, then, that computer systems are always tethered (connected) to human beings, though there are a multitude of ways to conceptualize and abstract the workings of these systems.

Our normative claim is that the connections between computer systems and human beings should be recognized in the way we understand computer systems. Are we saying that certain levels of abstraction should never be used to describe systems? No, that is not the point. Levels of abstraction in which human actors and agency do not appear are useful for a variety of purposes. Abstraction is an effective tool that allows us to focus on some details while ignoring other details. Our point is that it is misleading and perhaps deceptive to uncritically transfer concepts developed at one level of abstraction to another level of abstraction. Obviously, there are levels of abstraction in which computer behavior appears autonomous, but the appropriate use of the term "autonomous" at one level of abstraction does not mean that computer systems are, therefore, "autonomous" in some broad and general sense. We should not allow the existence of a particular level of abstraction to determine the outcome of the broader debate about the moral agency of computer systems. Thus, part of our normative claim is that statements at a particular level of abstraction must always


be qualified as applicable only at that level of abstraction. Debate about the moral agency of computer systems takes place at a certain level of abstraction and the implication of our analysis is that discourse at this level should reflect and acknowledge the people who create, control, and use computer systems. In this way, developers, owners, and users are never let off the hook of responsibility for the consequences of system behavior.

Conclusion

The debate about whether computer systems can ever be "moral agents" is a debate among humans about what they will make of computational artifacts that are currently being developed. It is also a debate about the direction of future developments. This is a debate, not a "discovery" of a fact of nature concerning these artifacts. Floridi and Sanders' 2004 paper is an important touchstone in the debate, worthy of serious criticism. They are, we contend, a strong voice in the service of a group of scholars we have called "Computational Modelers" who have a stake in attaching the label "moral agent" to computational artifacts. We contrasted their ideas with those of the "Computers-in-Society" group, a group within which we place ourselves. The relationships between human moral agents and the artifacts they design, implement, and deploy should continue to be carefully examined by scholars, artifact developers, and the public, for we are all likely to be increasingly affected by these human-tethered artifacts.

References

Robert Andrews. A Decade After Kasparov's Defeat, Deep Blue Coder Relives Victory. Wired, May 11, 2007. Available at http://www.wired.com/science/discoveries/news/2007/05/murraycampbell_qa.

Wiebe Bijker and Trevor Pinch. The Social Construction of Facts and Artifacts. In Wiebe Bijker, Thomas Hughes and Trevor Pinch, editors, The Social Construction of Technological Systems, pp. 17–51. MIT Press, Cambridge, Mass., 1987.

Luciano Floridi and Jeff W. Sanders. Artificial Evil and the Foundation of Computer Ethics. Ethics and Information Technology, 3(1): 55–66, 2001.

Luciano Floridi and Jeff W. Sanders. On the Morality of Artificial Agents. Minds and Machines, 14(3): 349–379, 2004.

Frances S. Grodzinsky, Keith W. Miller and Marty J. Wolf. The Ethics of Designing Artificial Agents. CEPE, San Diego, CA, July 12–14, 2007. Abstract available at http://cepe2007.sandiego.edu/abstractDetail.asp?ID=14.

Deborah G. Johnson. Computer Systems: Moral Entities but not Moral Agents. Ethics and Information Technology, 8(4): 195–204, 2006.

Deborah G. Johnson. Nanoethics: An Essay on Ethics and Technology 'in the Making'. Nanoethics, 1(1): 21–30, 2007.

Deborah G. Johnson and Thomas M. Powers. Computers as Surrogate Agents. In J. van den Hoven and J. Weckert, editors, Information Technology and Moral Philosophy. Cambridge University Press, Cambridge, 2008.

Bill Joy. Why the Future Doesn't Need Us. WIRED, 8(4): 238–262, 2000.

John L. Pollock. When Is a Work-Around? Conflict and Negotiation in Computer Systems Development. Science, Technology & Human Values, 30(4): 496–514, 2005.

John Sullins. Ethics and Artificial Life: From Modeling to Moral Agents. Ethics and Information Technology, 7(3): 139–148, 2005.
