Scientific knowledge and environmental policy: why science needs values



Environmental Sciences. Publication details, including instructions for authors and subscription information: http://www.informaworld.com/smpp/title~content=t713682447

Scientific knowledge and environmental policy: why science needs values. Michael S. Carolan, Department of Sociology, Colorado State University, Fort Collins, USA. Online publication date: 01 December 2006.

To cite this article: Carolan, Michael S. (2006) 'Scientific knowledge and environmental policy: why science needs values', Environmental Sciences, 3(4), 229–237. DOI: 10.1080/15693430601058224. URL: http://dx.doi.org/10.1080/15693430601058224


Environmental Sciences December 2006; 3(4): 229 – 237

ENVIRONMENTAL ESSAY

Scientific knowledge and environmental policy: why science needs values

MICHAEL S. CAROLAN


Department of Sociology, Colorado State University, Fort Collins, USA

Abstract

While the term 'science' is invoked with immense frequency in the political arena, it continues to be misunderstood. Perhaps the most repeated example of this – particularly in environmental policy and regulatory debates – is when science is called upon to provide the unattainable: namely, proof. What is scientific knowledge and, more importantly, what is it capable of providing us? These questions must be answered – by policymakers, politicians, the public, and scientists themselves – if we hope ever to resolve today's environmental controversies in a just and equitable way. This paper begins by critically examining the concepts of uncertainty and proof as they apply to science. Discussion then turns to the issue of values in science – that is, the normative decisions that are made routinely in the environmental sciences, often without being recognized as such. To conclude, insights are gleaned from the preceding sections to help us understand how science should be utilized and conducted, particularly as it applies to environmental policy.

Keywords: Proof, science, values, policy, global warming, objectivity, junk science, uncertainty

Correspondence: Michael S. Carolan, Department of Sociology, B236 Clark, Colorado State University, Fort Collins, CO 80523-1784, USA. E-mail: [email protected]

ISSN 1569-3430 print / ISSN 1744-4225 online © 2006 Taylor & Francis. DOI: 10.1080/15693430601058224

1. Introduction

There remains – among the public, media, politicians, and even scientists – a distorted view as to what scientific knowledge is and what it is capable of providing. This view, at its most basic level, is that science is only about facts: a way of knowing that rests upon objectivity and precision and that stands outside of history (which is to say, it is asocial). This (socially) constructed separation between facts and values has a long history, arguably going back to Plato and his belief that theoretical thought and practical action should be separated. Yet, as I detail, this separation exists only in theory; practically speaking, scientific knowledge is an amalgamation of both facts and values.

This brings me to the term 'uncertain proof.' Science is not about proof.1 While this statement may initially sound absurd, proof, in its strictest sense, is a chimera when discussed in the context of science. At best, we can say that, while science can help to reduce the uncertainty surrounding a given phenomenon, it does not (it cannot) eliminate that uncertainty in its entirety. In other words, proof is approximated within science, not absolutely obtained. All proof, in its most practical sense, is therefore to a degree uncertain. Uncertain proof thus need not be the product of 'bad' or 'junk' science. Rather, it represents the end result of science even when it is done with great methodological care – a point we would do well to remember as we debate 'the science' that lies behind our understanding of environmental problems.

Indeed, the social nature of science is only amplified when environmental phenomena are the subject of investigation. The numerous systems that make up environmental problems – which involve both natural (hydrological, atmospheric, etc.) and social (economic, political, etc.) variables – produce an emergent complexity that is greater than the sum of its parts. Our view of this tremendous complexity is always bounded, however. As Haraway (1988) notes, we are not infinite beings capable of seeing everything from nowhere – what she calls a 'god-trick.' Given that we cannot line up all of 'the facts' of a situation in a manner that reveals reality in its totality – a point that is particularly true when dealing with the immense complexity that underlies environmental issues (e.g. global climate change) – non-objective variables help shape, as I later detail, just what we 'see' when we look at the world.

The idea, then, that we can rid environmental science of values is not only unlikely but absurd. In the end, environmental science rests, at least in part, on normative assumptions, which is to say it rests upon the making of social, political, and, ultimately, ethical decisions. Importantly, this is not a symptom of ill-conceived science or of what is often branded today as 'junk science.' Indeed, I could not conceive of environmental science taking place without such value-laden decisions. The goal, then, should not be to rid science of values, for that would only cripple it. Rather, it is to manage those normative assumptions in a just and deliberative way.
The policy challenge being reviewed here centers on the idea that environmental controversies may not solely be the result of poor, inadequate, or junk science. I therefore seek to problematize the conventionally held belief that more and/or better science can unequivocally settle all such disputes. Indeed, this paper illustrates how just the opposite can be true: the belief that science alone can resolve all environmental disputes might in fact be the source of some of this contestation. For example, how scientists define and operationalize concepts can be of significant consequence for their findings. Yet, many concepts employed within the ecological sciences are not objectively given, such as 'climate change' (as I detail shortly), 'species' (Queiroz 2005), and 'biodiversity' (Carolan 2006). This is not to suggest that these terms have no non-discursive, material correlate. Rather, it is only to point out that their definition and operationalization is in part a reflection of beliefs and assumptions about how these entities should look (and how they ought to be conceptualized). No amount of science can alone resolve a dispute that centers on an entity that is not objectively given. Such definitional issues are matters of policy, not science. Yet, until we realize this point, and understand that such problems cannot be resolved in a purely objective manner, environmental conflicts will continue indefinitely.

This paper begins by critically examining the concepts of uncertainty and proof from a philosophy of science perspective. As I detail, such concepts do not represent epistemic states or speak to different levels of knowledge. Uncertainty and proof, rather, are best understood as discursive constructs, which reveal more about the individuals making the claims than they do about the quantity and quality of the knowledge involved. Discussion then centers on the intertwining of facts and values in environmental science.
Here, I also speak to the claim that environmental science not only makes value judgments but needs those judgments in order to have anything to say. To conclude, insights are gleaned from earlier sections to make policy-relevant suggestions as to how science should be used within the arena of environmental politics.


2. Understanding the capabilities of science

Questions of truth and proof have long been debated among philosophers. Perhaps the closest approximation to a theory of proof came in the early twentieth century via Mach and the Vienna Circle – what is known specifically as verificationism and more generally as logical positivism. According to this branch of thought, truths are inductively acquired through repeated positive verifications of a theory. Thus, in order to prove the veracity of, say, the statement 'all swans are white' (assuming an unproblematic understanding of the term 'white'), one must repeatedly verify it against empirical observations. And if one repeatedly witnesses only white swans, then one can say that the statement has been confirmed, which is to say the statement has been (loosely) proven.

Significant logical problems, however, undermine this form of truth-seeking. While numerous critiques can be summoned to dispose of logical positivism's brand of proof, I will only mention that provided by Karl Popper (1961 (1934)). I focus on Popper's critique simply because it represents, arguably, the most succinct and efficient of the lot (although Popper's alternative to verificationism, what he calls falsificationism, is no less problematic). Theories, as Popper notes, often take the form all Xs are Ys: 'all swans are white' or 'all bodies attract one another with a force proportional to the product of their masses and inversely proportional to the square of their distance apart.' To refute such a theoretical form thus requires that one find a single X that is not also a Y. To prove such a theory, however, would require that one observe every single X to make sure that it is indeed also a Y. Yet, this would mean that one must engage in an exhaustive search of not only the entire universe, but of the entire universe from the beginning to the end of time.
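Popper's asymmetry can be sketched in a few lines of code (an illustrative sketch of my own, not part of the original essay; the function name is hypothetical). Any finite run of confirming observations merely leaves the universal claim standing, while a single counterexample refutes it:

```python
# Popper's asymmetry between verification and falsification, sketched for
# the universal claim "all swans are white". A finite run of confirming
# observations leaves the claim unproven; one counterexample refutes it.

def is_refuted(observations):
    """The claim 'all swans are white' is refuted by a single non-white swan."""
    return any(colour != "white" for colour in observations)

observations = ["white"] * 10_000   # 10,000 confirmations...
print(is_refuted(observations))     # ...and the claim merely survives: False

observations.append("black")        # a single black swan...
print(is_refuted(observations))     # ...refutes it outright: True
```

The point, of course, is not computational: however large the confirming sample, only the impossible exhaustive survey of every swan, past and future, would turn survival into proof.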
Let us think of this point in the context of two highly contentious environmental controversies: global climate change and the risks/safety of genetically modified (GM) food. How could one prove, in the strictest sense, that global climate change is anthropogenically driven (as demanded by some on the political right) or that GM food is safe (as demanded by some on the political left)? What evidentiary data would one have to acquire in order to reach such an epistemic state in either case? In both cases, such proof would require the impossible: that one take an all-knowing and all-seeing position.

Take the case of global climate change. To prove the existence of anthropogenically driven climate change we would need not only an unattainable amount of terrestrial data (e.g. actual, versus proxy, temperature readings dating back hundreds of thousands of years, to be sure that long-timescale cycles are not missed), but also extraterrestrial data (e.g. sunspot cycles going back millions of years, again to be sure that long-timescale cycles are not missed). The risk/safety of GM food, on the other hand, can only be known inductively, by monitoring its impacts on humans and the environment after its release. Yet, even if 100 or 1000 years elapse with no discernible negative impacts, this is not proof of its safety. One problem with proof-as-corroboration, as it applies to the safety/risk of GM food, centers on its heavy reliance on experiential data. What if, during those 1000 years, we were simply not looking in the right place for unknown negative effects (for if we knew what to look for, those effects would hardly be so unknown)? Given that we are not all-seeing, God-like beings, we must not mistake 'not observing' for 'not existing,' for, again, perhaps our limited gaze simply missed observing a particular (yet still very real) effect.
This is why, for example, some view the Precautionary Principle as an appropriate guide to action when dealing with certain technologies: because it decentralizes the responsibility for certifying a technology as ‘safe,’ not only over time but also across social networks and forms of expertise that are in possession of different forms of relevant knowledge (Jasanoff 2000).
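The gap between 'not observing' and 'not existing' admits a simple probabilistic gloss (an illustrative sketch of my own; the rates and detection probability below are hypothetical, not drawn from the essay). If an adverse effect occurs with a small true probability per monitoring period, and monitoring detects it only imperfectly when it does occur, a long run of apparently clean observations remains likely even though the effect is real:

```python
# Probability of observing zero adverse events over n monitoring periods,
# when the effect truly occurs with probability p per period and monitoring
# detects an occurrence with probability d. All figures are hypothetical.

def p_all_clear(p, d, n):
    """Chance of n consecutive periods with no *observed* adverse event."""
    per_period_miss = 1 - p * d   # either no event, or an event we fail to see
    return per_period_miss ** n

# A rare effect (0.1% per year) caught by monitoring only 20% of the time:
# after a century of surveillance, "no observed harm" is still the expected
# outcome roughly 98% of the time.
print(round(p_all_clear(p=0.001, d=0.2, n=100), 3))  # -> 0.98
```

On this toy model, even a millennium of clean records (n = 1000) leaves roughly an 82% chance of observing nothing at all; absence of observed harm is weak evidence of absence.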


This also highlights an important limitation of 'closed' scientific debates (where only concise, objective knowledge is deemed admissible): their tendency to ignore the possible existence of what is called 'non-knowledge' (those unknowns that we do not know about). Non-knowledge is not the same as uncertainty. Uncertainty can be plugged into predictive models, which results in those probabilistic statements we hear so much about in the media. Yet, as the above example involving the risks of GM food illustrates, models by definition are not reality (that is why they are called 'models') and as such present only a partial picture of the world they are designed to represent. Thus, non-knowledge, which references those unknown variables and processes that are not included in predictive models because we do not know about them, speaks to something distinct from uncertainty, which represents those unknowns that we do know about and that can be included in models. An illustration of this can be found in an exchange at a London public meeting in 2001 between the chair of the UK scientific Advisory Committee on Releases to the Environment of GMOs (ACRE) and a member of the Agriculture and Environment Biotechnology Commission (AEBC) advisory body. At issue was the topic of scientific uncertainty as it applied to understandings of scientific knowledge (and the limits associated with such ways of knowing).


AEBC: Do you think people are reasonable to have concerns about possible 'unknown unknowns' where GM plants are concerned?

ACRE Chair: Which unknowns?

AEBC: That's precisely the point. They aren't possible to specify in advance. Possibly they could be surprises arising from unforeseen synergistic effects, or from unanticipated social interventions. All people have to go on is analogous experience with other technologies . . .

ACRE: I'm afraid it's impossible for me to respond unless you can give me a clear indication of the unknowns you are speaking about.

AEBC: In that case don't you think you should add health warnings to the advice you're giving ministers, indicating that there may be 'unknown unknowns' which you can't address?

ACRE: No, as scientists, we have to be specific. We can't proceed on the basis of imaginings from some fevered brow . . . (as quoted in Wynne 2005: 84)

The point of this section is not to undermine science. Rather, my goal is to emphasize science's limitations so that we do not ask from it more than it can provide (recognizing, in the end, that even science has epistemic blind spots). The work of Naomi Oreskes (1999) is particularly insightful in this regard, for she illustrates how acceptance can exist in science without proof. In one case, she notes the tremendous scientific consensus that emerged around Wegener's theory of continental drift in the first decades of the twentieth century. This was then replaced by a general scientific consensus around plate tectonics in the 1960s and 1970s. And in each case, consensus emerged without direct evidence to support (let alone prove) either theory (rather, the relevant inferences were abductive). Indeed, even after plate tectonics was empirically confirmed in the mid-1980s, the consensus surrounding the veracity of this theory (or mechanism) is still not complete – which is to say, even after empirical confirmation the theory, in its strictest sense, has not been proven.
Outliers in the data continue to exist, and these have been assembled, for example, by Expanding Earth theorists in such a way as to provide an alternative explanation of the data (Scalera & Karl-Heinz 2003). To say, then, that plate tectonics has been confirmed, in a strict philosophy of science sense, is not the same as saying it has been proven. Oreskes prefers the term 'robust consensus' as an alternative to 'proof.' In a widely discussed article in the journal Science, Oreskes (2004) uses this idea of a robust consensus to silence climate change skeptics who continue to focus upon the uncertainty of the science of global warming. In this piece, she conducts a systematic review of peer-reviewed scientific work on the subject, concluding that there is tremendous consensus among the experts as to the existence of anthropogenic climate change. Thus, rather than focusing on the uncertainty of the science of global warming (which will never be entirely vanquished), Oreskes argues we should use this tremendous consensus – which is about as close to proof as science gets – as the rallying point for action.


3. The confluence of facts and values in environmental science

So what, then, is scientific knowledge? Admittedly, this question is not easily answered. Philosophers have for centuries been in search of 'essential' criteria with which to demarcate science from non-science and have yet to come up with anything definitive (as hinted at in the previous section) (Kuhn 1970 (1962); Mulkay 1976). In fact, many sociologists would argue that searches for clean and precise definitions of science will always be problematic. Instead, they argue that science is simply what scientists do; something that is bounded off rhetorically and organizationally for strategic ends, so as to provide, for example, scientists with a discursive means to protect their knowledge claims from non-certified (non-expert) sources (Gieryn 1983; Jasanoff 1987). For those uncomfortable with such a 'loose' definition, a growing consensus can also be noted among scholars concerning the so-called 'scientific method': namely, that there is no mode of inquiry used exclusively by scientists (Haack 2003). If science involves a different mode of inquiry, it is because its practitioners investigate with care. This is what Kuhn (1970 (1962)) was arguing when he differentiated between 'observation' (what scientists do) and 'perception' (a less methodical form of inquiry); the former he described as 'collected with difficulty' while the latter he referred to as 'the given' (p. 126).

Even so, scientists are not all-seeing; even science, to repeat an earlier point, has epistemic blind spots. Simply because scientific observations are conducted with care and difficulty does not mean that those observations are 'pure' – that they exist independent of a social context. Perhaps a few examples will help to convey this point. Take, for example, confidence limits. Confidence limits represent an evidentiary threshold used across most scientific disciplines.
Yet, these limits are not objectively given, which is to say they reflect a socially imposed burden of proof (in accordance with conventions as to what an 'acceptable' limit is). In other words, confidence levels are not (and cannot be) deduced through scientific methods alone, but rather are a reflection of values, preconceived systems of order, and beliefs. To further build upon this point, let us look briefly at the field of epidemiology. As members of a social network – with its own norms, identity structures, and social conventions – epidemiologists' understanding of what counts as evidence is different, at least to a degree, from that of those embedded within different social networks, such as community activists. Social conventions strongly discourage professional epidemiologists from making false positives (e.g. concluding that there is a causal link between contaminant X and a cancer cluster when there is not). This explains why a higher confidence limit (of at least 90%) is typically the norm within such professional fields. Scientists have a reputation and identity to uphold as a result of their membership in a larger professional community. Such type I errors (false positives), as they are called, could thus be not only embarrassing but potentially damaging to their future membership within this community (and the resources associated with it, e.g. access to grants). Conversely, community activists, given their social networks, have equally compelling reasons to avoid false negatives (type II errors) at significant cost. For them, erring on the side of caution, if it means saving their lives and the lives of loved ones, is well worth making a type I error every now and again (Kleinman 2005). This in part explains why proponents of what has been called 'popular epidemiology' prefer a much lower confidence limit when examining causal relationships between pollutants and cancer clusters (Brown & Mikkelsen 1997).

Yet, values are not only relied upon to make sense of science at its back-end, when knowledge claims and data are being evaluated. Values also play an essential role at the front-end of science – such as when scientists operationalize (and define) their object of study. Take the field of climate science. Climate science is an unquestionably rigorous scientific discipline. Nevertheless, its very object of study – namely, climate change – is not pre-given, which is to say a definition of this phenomenon is not revealed through scientific methods. For example, should definitions of 'climate change' speak only to anthropogenic drivers or should 'natural' forces also be allowed in such a definition?
The Framework Convention on Climate Change (FCCC), for example, defines climate change as 'a change of climate which is attributed directly or indirectly to human activity that alters the composition of the global atmosphere and which is in addition to natural climate variability over comparable time periods' (although human influences on atmospheric chemistry, like the effects of particulates on climate, are not included under FCCC conceptions of 'human activity') (Pielke 2005: 549). Conversely, the Intergovernmental Panel on Climate Change (IPCC) defines climate change as 'any change in climate over time whether due to natural variability or as a result of human activity' (IPCC 1995: 56). And such divergent definitions can produce equally divergent policy outcomes. The FCCC definition, for example, creates a condition where action to combat global climate change can occur only after such changes have been attributed to human activity, while the IPCC definition sets a much lower epistemic benchmark to trigger policy action (Pielke 2005). Should definitions of climate change include or exclude non-human forces? The correct answer depends upon the normative assumptions brought to the issue: e.g. about how climate should behave, how nature ought to look, and the like. We can thus say that climate science, because definitions of 'climate' and 'climate change' are not pre-given, rests, at least in part, upon value judgments. For, ultimately, without such definitions no measuring and modeling of these phenomena could possibly occur.

4. Science, democracy, and policy

Rather than further barricading the so-called 'halls of science' from all non-scientists, perhaps we should take a different route: opening those doors to include a wider variety of stakeholders, and in doing so recognizing that knowledge production is not something that occurs only in specialized spaces but across the social field (Jasanoff 2003).
Such a move, importantly, would not undermine science, but instead would reflect a more honest understanding of it. As mentioned, value-laden decisions are already made routinely in science: what’s a sufficient level of evidence; what’s ‘climate’ and how should it behave; etc.? Indeed, even claiming that these questions are best left for scientists to answer is a value statement, for it rests upon a belief as to who should answer such questions.


There are those, however, who would like to see such deliberations shut down. In one example, the United States Congress has recently taken steps to purge all so-called 'junk science' (read: non-objective, value-laden science) from the legislative and regulatory process. Perhaps the most pronounced example of this comes through the Data Quality Act of 2001. This Act was slipped into law, with no debates or hearings, as a rider on an appropriations bill for the US Treasury Department. Specifically, the Act authorized the Office of Management and Budget to create guidelines that 'ensure and maximize data quality.' And in those instances when the information supplied is believed to lack sufficient 'quality, objectivity, utility, or integrity,' it can be sent back for correction (Consolidated Appropriation Act 2001). While it has an air of reason to it, this law has created an epistemic benchmark that is impossible to meet; as mentioned earlier, given the elusive nature of definitive proof in science, one can always point to the data and call for more quality, objectivity, utility, and integrity. Thus, rather than improving the science upon which regulatory decisions are made, the Act has been used as a tool by parties subject to regulation to challenge any evidence that does not work in their favor (Michaels 2005). In one case, a petition was filed in the United States in 2003 asking the Environmental Protection Agency (EPA) to retract its 1986 publication entitled Guidance for Preventing Asbestos Disease Among Auto Mechanics, on the grounds that the EPA had not conducted a complete analysis of the scientific and medical literature on asbestos-related diseases. The EPA, in response, was forced to withdraw the publication and has yet to revise it in a manner consistent with the Data Quality Act (Michaels & Monforton 2005).
The Data Quality Act is based upon the aforementioned distorted view of science: the view that science is only about the production of concise, objective facts. Another underlying assumption of the Data Quality Act is that 'sound' (non-junk) science will unproblematically produce 'good' policy. This, however, is a fallacious assertion. If we examine the relationship between science and policy more closely, we find that disagreements over policy often have little to do with science. Take the current position of the USA toward the Kyoto Protocol. While other commentators have highlighted how the Bush administration has actively worked to distort and play up the uncertainty of the science of global warming so as to justify a business-as-usual policy stance (e.g. McCright & Dunlap 2000, 2003; Mooney 2005), the science of this matter is to some degree secondary to fundamental value-positions when it comes to policy. Thus, even if there were general acceptance of the science of global warming (which, as noted by Oreskes (2004), is already the case within at least the scientific community), this most assuredly would not usher in the end of policy disagreements on the subject. Given the science, should the Kyoto Protocol be signed and ratified? This question can only be answered by reflecting upon one's values. For the Bush administration, whose evaluative hierarchy arguably locates economic variables above those linked to the environment, the science does not justify such action, given that the Protocol may negatively impact those very things that it values (GDP, economic growth, the 'free market,' etc.). Conversely, for many on the political left, the science does justify the signing and ratification of the Kyoto Protocol. This is because their evaluative hierarchy arguably places environmental variables (e.g. biodiversity, preservation, and ecosystem health) on an equal (if not greater) plane with economic variables.
It is therefore important that politicians and policymakers make these underlying values more explicit, rather than masking them behind arguments for 'proof' and denouncements of 'junk' science. For as long as such policy disputes continue to be framed as being purely over 'the facts' of the matter, such conflicting value positions will never be aired and discussed in a deliberative manner. And environmental controversies, I fear, will continue with little hope of resolution.


5. Conclusion

Policy deliberation is minimally served by those who use uncertainty as justification for the status quo. Calls for proof have a seemingly rational air to them – e.g. 'what's wrong with waiting to act until we are absolutely certain about the causal mechanisms behind phenomenon X?' The problem, however, is that uncertainty can always be used to justify a business-as-usual stance. Given that uncertainty will always be a part of our fragmented understanding of complex environmental phenomena, the term adds very little when evoked within policy deliberations. Rather than focusing on the points of disagreement, why not assess 'the science' of a given phenomenon by the amount of agreement (or consensus) it generates among those qualified to study it? Thus, for example, rather than pointing out that there exist a few dissenting expert voices on the science of global climate change – and using that as justification for the status quo – let us focus instead on the vast consensus that exists among climate scientists as to the reality of global climate change (Oreskes 2004). The question that should therefore be asked is not 'Do we have proof?' but 'Is there scientific consensus, and if so how much?' Of course, the next question then becomes 'How much consensus is enough?' The answer to this question, however, is not pre-given, and as such is best left for the public and politicians to sort out, given its explicit reliance upon statements of value.

Beyond this, a broader array of stakeholders needs to be given a greater voice in 'the science' of environmental science. This needs to occur at both the front- and back-ends of science: at the back-end, the public should be able to weigh in on how evidence is evaluated (e.g. what confidence limits ought to be used); at the front-end, they should be allowed to contribute to the definition of important scientific artifacts (e.g. how 'climate' and 'climate change' ought to be defined).
In all, we must be more honest with ourselves when it comes to understanding what scientific knowledge is (an amalgamation of facts and values) and what it is capable of providing (uncertain proof). Through such recognition, science's ability to resolve questions and controversies can be maximized as its underlying normative assumptions are managed in a just and deliberative way. And perhaps then meaningful policies can be developed that not only protect the environment, but do so while also having an eye toward issues of social justice, equity, and consensus.

To be clear, I am not arguing that science is of no value to conservation and regulatory efforts. Environmental policy needs science. Yet, importantly, the science it needs is not the mythical view of science that we are taught about in grade school, which rests upon the holy trinity of eternal truths, objectivity, and proof. That science does not exist. Some may nevertheless argue that scientists, because of their expertise on these issues, should still be bestowed with the cognitive authority to make these value-laden calls. Yet, in light of the 'is-ought' problem first detailed by David Hume centuries ago (which recognizes the logical problems that arise when statements of fact are taken for statements of value), the normative nature of these questions undermines that epistemic authority. In the end, then, a scientist is no better suited to deal with these questions of value than is a non-scientist. And given that politicians are democratically elected while scientists are not, we could do far worse than to have these questions (e.g. about how ecological phenomena should be defined and how uncertainty ought to be managed) answered in the political arena.


Note

1. Indeed, even the once ironclad (100% certain) mathematical proof is beginning to be questioned. With the rise of computer-assisted calculations, proofs are being developed that involve calculations of such magnitude that they cannot be completely peer reviewed, thus opening the door to a degree of uncertainty (Sautoy 2006).


References

Brown P, Mikkelsen E. 1997. No safe place. Berkeley, CA: University of California Press.
Carolan M. 2006. Science, expertise, and the democratization of the decision-making process. Soc Nat Resour 19:661–668.
Consolidated Appropriations Act. 2001. P.L. 106–554, §515.
Gieryn T. 1983. Boundary-work and the demarcation of science from non-science: Strains and interests in professional ideologies of scientists. Am Sociol Rev 48:781–795.
Haack S. 2003. Defending science – within reason: Between scientism and cynicism. Amherst, NY: Prometheus.
Haraway D. 1988. Situated knowledges: The science question in feminism and the privilege of partial perspective. Feminist Stud 14:575–599.
IPCC (Intergovernmental Panel on Climate Change). 1995. IPCC second assessment report. Cambridge: Cambridge University Press.
Jasanoff S. 1987. Contested boundaries in policy-relevant science. Soc Stud Sci 17:195–230.
Jasanoff S. 2000. Commentary: Between risk and precaution – reassessing the future of GM crops. J Risk Res 3:277–282.
Jasanoff S. 2003. Technologies of humility: Citizen participation in governing science. Minerva 41:223–244.
Kleinman D. 2005. Science and technology in society. Malden, MA: Blackwell.
Kuhn TS. 1970 (1962). The structure of scientific revolutions. Chicago, IL: University of Chicago Press.
McCright A, Dunlap R. 2000. Challenging global warming as a social problem: An analysis of the conservative movement’s counter-claims. Soc Probl 47:499–522.
McCright A, Dunlap R. 2003. Defeating Kyoto: The conservative movement’s impact on U.S. climate change policy. Soc Probl 50:348–373.
Michaels D. 2005. Scientific evidence and public policy. Public Health Matters 95:S5–S7.
Michaels D, Monforton C. 2005. Manufacturing uncertainty: Contested science and the protection of the public’s health and environment. Public Health Matters 95:S39–S48.
Mooney C. 2005. The Republican war on science. New York: Basic Books.
Mulkay MJ. 1976. Norms and ideology in science. Soc Sci Info 15:637–656.
Oreskes N. 1999. The rejection of continental drift. New York: Oxford University Press.
Oreskes N. 2004. The scientific consensus on climate change. Science 306:1686.
Pielke R. 2005. Misdefining ‘climate change’: Consequences for science and action. Environ Sci Policy 8:548–561.
Popper K. 1961 (1934). The logic of scientific discovery. New York: Science Editions.
Queiroz K. 2005. Ernst Mayr and the modern concept of species. PNAS 102:6600–6607.
Scalera G, Jacob K-H. 2003. Why expanding Earth? Rome: INGV Publishers.
Sautoy M. 2006. Burden of proof. New Scientist, 26 August – 1 September (2566):41–43.
Wynne B. 2005. Reflexing complexity: Post-genomic knowledge and reductionist returns in public science. Theory Cult Soc 22:67–94.
