Joseph F. Rychlak, Artificial Intelligence and Human Reason: A Teleological Critique


Joseph F. Rychlak, Artificial Intelligence and Human Reason: A Teleological Critique, New York: Columbia University Press, 1991, xii + 209 pp., $35.00 (cloth), ISBN 0-231-07290-2.

Joseph Rychlak's book is a clearly defined critique of artificial intelligence (AI) as well as an exposition of his own theory of human agency. It is a psychological study that draws on experimental research and a broad philosophical tradition. The author clearly states his methodological and philosophical purpose: "I espouse a teleological or humanistic brand of psychology" (p. xi). This means that he critically distances himself from the majority of AI theories. Rychlak assesses them with regard to human agency, comparing it with the alleged agency (consciousness) of digital machines. There is no doubt as to his final verdict: these theories neglect some vital aspects of human subjectivity. They are not so much wrong as only partially true.

Rychlak bases his comparison of the functioning of a human being and the operation of a machine on analyses of such specific issues as the nature of language, reasoning, learning, perception, and the functioning of the brain. As a conclusion arising from his examination of these subjects, he formulates a thesis about the predicational character of human subjectivity, which is revealed not only on the linguistic level but on the neural and other behavioral levels as well. No artificial system, in either its hardware or its software, gives any evidence of this feature. Rychlak's argument rests on a historical presentation of the philosophical and scientific concepts that specified the predicational nature of human reasoning: reasoning as expressed in definition, demonstration, language usage, and scientific theories. Predication (Greek kathegorein) is defined as "the cognitive act of affirming, denying, or qualifying broader patterns of meaning in relation to narrower or targeted patterns of meaning" (p. 7). It relates the meaning of signals, signs, and information to the broader context outside them. This process occurs on the basis of opposition; therefore, contextuality and opposition are the constitutive features of propositions. Apposition is the opposite operation: matching signals and information within the limits of the system, at the same level, apart from the context. The difference between opposition and apposition constitutes the difference between human agency and the functioning of a digital machine.

This difference is also the key to understanding John Searle's famous Chinese room experiment (Searle, 1980). Rychlak's interpretation (Chapter 2, "The Chinese Room and Its Implications") stresses the following differences: (a) between the extrospective description of objects, processes, and subjects (prevailing in AI) and the introspective one, of which only the latter reveals their unique positioning; (b) between the signals processed in the machine and the significant information distinctive of humans; (c) between logical procedures and mechanical processes, where the latter are due to natural regularities and the former are arbitrary; and (d) between the mere matching of signals and the meaningful opposition among them with respect to the context. All of the above justifies the thesis that the person manipulating the signals in the Chinese room does not have consciousness, because she or he does not predicate anything. In fact, Rychlak does not add anything new to Searle's argumentation; he only outlines and strengthens the rationale behind it.

In Chapter 3, "The Many Faces of Cognitive Simulation", Rychlak conducts interesting analyses concerning simulation. He critically assesses the AI thesis that a digital machine is a simulation of most human cognitive activities. In Rychlak's opinion, simulation is a type of representation, a model of a phenomenon; it is not the same as imitation or duplication. Neglecting this distinction is the most common mistake made by AI and the cognitive sciences. A simulation is a representation that is not identical with the simulated object. Rychlak uses two categories, the Bios and the Logos, which refer, respectively, to the hardware and the program in digital machines, and also to the brain and the mind in a human being. He clearly states: "There is none of the simulations occurring here that is true in the realm of the Bios. In the realm of the Logos, there is only duplication taking place!" (p. 49). One should also differentiate between a process and its content, between physical (biological) processes and logical procedures. The biological level is determined and controlled in a different way from the logical one. The Bios does not 'produce' the Logos, Rychlak concludes.

In passing, he formulates interesting remarks on demonstrative logic (on which Boolean logic, implemented in the Turing machine, is based) and on dialectical logic. The former treats opposition only as disjunction (either A or non-A); the latter admits simultaneous A and non-A relationships. Dialectical logic, proper to speech and conversation, occurs before reasoning (following the path from premises to conclusions) takes place. Artificial reason is not able to simulate this part of human thinking; it lacks the predicational ability to match opposite signs within the larger context of meanings. A computer simulation is not a proper representation of the full range of human reasoning; it is not wrong, but only partially true: "It imitates what we humans do after we have come to a predication encompassed in a premise (assumption, belief, etc.) on which we then act" (p. 65). Unfortunately, Rychlak pays little attention to the role of dialectical logic; therefore, his remarks on it do not constitute a consistent theory.

His general theses about the creative role of language in the formation of representations are much more developed and uniform. Rychlak accurately observes that the concept of representation plays the same role as the concept of idea in Locke's and Hume's tradition: it is the content of the process of reasoning, it comes from the environment, and language acquisition is the main mechanism through which it is acquired. Rychlak argues that this theory is false, showing how this old assumption is used in today's AI concepts such as frames, scripts, and plans. Empirical studies of children's language acquisition make it clear that this process is not a simple association or accumulation of words by which children imitate adults (as inputs). Children, despite what certain AI theoreticians say, use words according to their own preferences and intentions and ascribe meanings to them, even within the framework of a limited linguistic experience. The ease with which they change word order and bring contrary meanings together proves that language acquisition and usage happen on the basis of a partially innate mechanism of predication in terms of purposes and intentions.

In this introspective and propositional perspective, Rychlak defines human agency in the following words: "Agency is the organism's capacity to behave/believe in conformance with, in contradiction of, in addition to, without regard for environmental or biological promptings or determinants" (p. 102). Next he asks: "Can we ascribe this feature to the artifacts?" In fact, some AI exponents have argued that negative feedback is sufficient evidence of the purpose-oriented functioning of a machine (Rosenblueth et al., 1943; Minsky, 1986). Others see the agency of digital machines in the recursion of their programs, which seems to them to be the same as the self-reflexivity of human thought (Hofstadter, 1980). It is believed that the principle of recursion ("what is higher is also lower") is universal enough to explain the functioning of a computer as well as of a human mind. Rychlak contrasts such a conclusion with the argument of Lucas (1961), and also with that of Nagel and Newman (1958), who claimed that the mind has a vital advantage over a digital computer. He concludes that agency as self-reflexivity is not the same as recursivity and that the difference lies mainly in predication.

Chapter 7, "Predication in Human Perception and Brain Functioning", applies the category of predication to the interpretation of the neural level of cognition. Rychlak bases this chapter on neurophysiological research. He starts with the difference between sensation (direct signals based on bodily mechanisms) and perception, which is an indirect function and, as such, inferential. Perception can be directed, learned, and predicated; its proper description is a formal- and final-causal model, whereas the proper description of sensation is an efficient-cause model. Perception thus appears to have a vital predicational character. It should, however, be pointed out that Rychlak does not develop these conclusions any further. This is strange, for he could have found substantial justification for his thesis in Searle's Intentionality (1983, pp. 37–78), even though he quotes Searle widely. Surprisingly, he does not mention Jean Piaget's works, either.

Summing up his observations on human and artificial intelligence, Rychlak repeats that, being only partially true, the computer analogy has a fatal impact on the psychology of humans, even though computers are very useful to them. Digital machines can never act as humans act – "taking the position", "for the sake of" – on the basis of the opposition of different meanings in a wide context. They do not act purposively, though some parts of their structure and functioning are quasi-goal-oriented. In contrast, Rychlak says, "human beings are telic animals" (p. 177). The notion of "goal" or "telos" is here the crucial criterion of the difference between humans and digital machines; it is the differentia specifica of human beings. Its realization is human creativity, which differs from a machine's efficiency. Cognitive science, as well as the humanities in general, seems not to be interested in it at all, concludes Rychlak.1

Notes

1 English translation assistance kindly provided by Aldona Zwierzynska-Coldicott.

References

Hofstadter, Douglas R. (1980), Gödel, Escher, Bach: An Eternal Golden Braid, New York: Vintage.
Lucas, J. R. (1961), ‘Minds, Machines, and Gödel’, Philosophy 36, pp. 112–127.
Minsky, Marvin (1986), The Society of Mind, New York: Simon and Schuster.
Nagel, Ernest and Newman, James R. (1958), Gödel’s Proof, New York: New York University Press.
Rosenblueth, Arturo, Wiener, Norbert and Bigelow, Julian (1943), ‘Behavior, Teleology, and Purpose’, Philosophy of Science 10, pp. 18–24.
Searle, John R. (1980), ‘Minds, Brains, and Programs’, Behavioral and Brain Sciences 3, pp. 417–457.
Searle, John R. (1983), Intentionality: An Essay in the Philosophy of Mind, Cambridge, U.K.: Cambridge University Press.

Department of Philosophy and Sociology,
Maria Curie-Skłodowska University,
20-031 Lublin, Poland
E-mail: [email protected]

MAREK HETMAŃSKI
