An emerging experimental approach to public administration research in Canada


Carey Doberstein, Assistant Professor of Political Science, University of British Columbia


There has been a surge in experimental research in public administration over the last five years (Bouwman and Grimmelikhuijsen 2016). Results from experiments conducted internationally have, for example, pushed back on long-standing assumptions in the field, such as budget-maximizing bureaucratic behaviour (Moynihan 2013), and identified key causal effects of transformational leadership on performance in public organizations (Bellé 2014). Both examples demonstrate the practical value of experimental research to practitioners and scholars alike. Yet the Canadian public administration research community has been slower to follow, though there are hints of an emerging interest in experimentation.

Why conduct experiments?

If virtually all public administration research to date has been non-experimental and has yielded many critical insights into the fundamental dynamics of policymaking and administration, why conduct experiments at all? Many pressing questions in public administration have been answered through traditional methods and richly descriptive analysis, but many others involve causality. Among them: what shapes behaviour in the public sector? How can more effective and efficient arenas of decision-making be designed? How can programs and policies be crafted to respond most effectively to the problems identified? For scholars and practitioners interested in such questions of causality in core areas of public administration, experiments represent the gold standard of research design, offering the opportunity to establish analytical controls that allow specific and credible causal claims to be made.

When scholars aim to make causal claims, they are confronted by the “endogeneity problem”: the factors under investigation may be correlated with one another in ways that are impossible to sort out. The endogeneity threat can emerge from potential two-way causal relations,

omitted variables (known and unknown), and selection bias, and no amount of control variables can solve this problem (Blom-Hansen, Morton, and Serritzlew 2015). Yet the endogeneity problem can, in most respects, be resolved through an experimental research design. Experiments can be conducted in the “lab” (a simulated environment, often drawing on student samples) or in more natural settings using public servants as research subjects. In either case, they must include control and experimental groups to which subjects are randomly assigned. In short, random assignment of research subjects to experimental or control conditions means that observed and unobserved (that is, endogenous) factors that might affect outcomes are equally likely to be present in the experimental and control groups, and thus cannot explain the phenomenon under investigation. The researcher can therefore make specific causal claims far more credibly.
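
To make this logic concrete, the following is a minimal simulation sketch in Python; the scenario and all numbers are invented for illustration. An unobserved confounder (labelled “motivation” here) inflates a naive observational comparison, while coin-flip assignment recovers the true effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_effect = 2.0

# Unobserved confounder (e.g., an employee's intrinsic motivation).
motivation = rng.normal(size=n)

# Observational world: motivated subjects disproportionately opt into
# the "treatment" (say, a training program), so treatment status and
# motivation are entangled -- the endogeneity problem.
opted_in = (motivation + rng.normal(size=n)) > 0
y_obs = true_effect * opted_in + 3.0 * motivation + rng.normal(size=n)
naive_estimate = y_obs[opted_in].mean() - y_obs[~opted_in].mean()

# Experimental world: a coin flip decides treatment, so motivation is
# equally likely to appear in both groups and cancels out on average.
assigned = rng.random(n) < 0.5
y_exp = true_effect * assigned + 3.0 * motivation + rng.normal(size=n)
experimental_estimate = y_exp[assigned].mean() - y_exp[~assigned].mean()

print(f"true effect:                  {true_effect:.2f}")
print(f"naive observational estimate: {naive_estimate:.2f}")        # badly inflated
print(f"randomized estimate:          {experimental_estimate:.2f}")  # close to 2.0
```

Running the sketch shows the observational contrast overstating the effect by more than a factor of two, purely because of who selects into treatment; randomization removes that distortion without the researcher ever measuring the confounder.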

Strengthening evidence-based decision-making

Parallel to the experimental turn in public administration research, policymakers and civil servants are seeking “evidence-based solutions,” which demands that we grapple not only with what this means but also with the methods that generate evidence in the field. The turn to evidence-based decision-making in bureaucracies will increase demand for experimental methods from practitioners seeking robust findings on causal effects related to organizational performance, policy reforms, or program designs, as demonstrated in the examples from Canadian scholarship and practice below.

A central concern of public administration scholars is the relationship between the citizen and the administrative state, including perceptions of service quality (particularly with outsourced delivery), equitable treatment among eligible recipients, and questions of accountability. Étienne Charbonneau of ENAP in Montreal is among the few Canadian public administration scholars who have published experimental work, and he has contributed fascinating findings on citizen perceptions of government performance drawn from public reporting and benchmarking. Charbonneau and Van Ryzin (2015) examine how citizens consume comparative performance information about elementary schools in the United States. They discovered that, contrary to assumptions among public administrators, citizens appear to give less weight to more contextualized, nuanced performance benchmarks (for example, comparisons with last year's scores or with similar schools) and more weight to simple comparisons of the school to the state average. Transparent government means little to citizens if they cannot or do not make sense of it; these are important findings that ought to prompt reflection among performance evaluators.
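
The general logic of such a survey experiment can be sketched as follows. This is not the authors' actual instrument or data; the framings, the 1–7 rating scale, and all numbers are hypothetical, and serve only to show how randomly assigned benchmark framings of the same score are compared:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical survey data: each respondent sees the same school score
# paired with one randomly assigned benchmark framing, then rates the
# school's performance on a 1-7 scale. All values below are invented.
framings = ["state_average", "similar_schools", "last_year"]
n_per_arm = 500
ratings = {
    "state_average":   rng.normal(5.2, 1.1, n_per_arm).clip(1, 7),
    "similar_schools": rng.normal(4.4, 1.1, n_per_arm).clip(1, 7),
    "last_year":       rng.normal(4.5, 1.1, n_per_arm).clip(1, 7),
}

# Because framing was randomly assigned, a simple difference in mean
# ratings across arms estimates the causal effect of the framing.
for frame in framings:
    print(f"{frame:>15}: mean rating {ratings[frame].mean():.2f}")

# Two-sample t-test: state-average framing vs. similar-schools framing.
t, p = stats.ttest_ind(ratings["state_average"], ratings["similar_schools"])
print(f"t = {t:.2f}, p = {p:.4f}")
```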

A second theme in the growing field of experimental public administration is bureaucratic behaviour. For example, Doberstein (2016) used a survey experiment to examine how policy analysts in the BC provincial government assessed the credibility of policy research from academics, think tanks, and advocacy groups. The experimental design was simple: for half of the respondents, the authorship of the studies was randomly switched (unbeknownst to them) while the content remained the same, allowing for systematic comparison of average credibility assessments across the control and experimental groups. The findings lend evidence to the hypothesis that academic research receives a “credibility bonus” from those in government, whereas think tank and advocacy organization research suffers a “credibility penalty” on account of organizational identity, regardless of research content. Merely the name of the organization that conducted the research powerfully shaped how policy professionals in government viewed its credibility. The discovery of such systematic biases in information processing within government was uniquely aided by the experimental method and would have been unlikely to surface through other methods, such as interviews with those same policy analysts.
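
A hedged sketch of the analysis behind a simplified two-arm version of that design, with invented data (the actual study's instrument, scale, and estimates differ): identical research content, a randomly assigned source label, and a difference in mean credibility ratings across groups:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical data in the spirit of the design described above: every
# respondent rates the credibility (1-10) of the same research summary;
# half are randomly shown an academic byline, half a think-tank byline.
n = 400
shown_academic = rng.random(n) < 0.5

# Invented "true" ratings: identical content, but the academic label
# earns a credibility bonus of about one point on the scale.
ratings = rng.normal(6.0, 1.5, n) + 1.0 * shown_academic
ratings = ratings.clip(1, 10)

# Random assignment means the content and the respondents are, in
# expectation, identical across groups; only the label differs, so the
# difference in means isolates the label's causal effect.
bonus = ratings[shown_academic].mean() - ratings[~shown_academic].mean()
t, p = stats.ttest_ind(ratings[shown_academic], ratings[~shown_academic])
print(f"estimated credibility bonus: {bonus:.2f} points (t = {t:.2f}, p = {p:.4f})")
```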

A third theme of experimentation in public administration is the controlled testing of program models by bureaucratic agencies in conjunction with social scientists. The Behavioural Insights Team in the UK (also known as the “Nudge Unit”) famously conducts experimental trials on policy instrument settings to improve effectiveness and efficiency, spurring similar policy shops in the Canadian federal government at Employment and Social Development Canada (ESDC) and the Innovation Hub in the Privy Council Office (PCO), as well as Behavioural Insights Units (BIUs) at the provincial level in Ontario and BC. Manitoba was host to “Mincome,” a Canadian guaranteed annual income field experiment that ran from 1974 to 1979 and was analyzed recently by economist Evelyn Forget (2011). The Social Research and Demonstration Corporation (SRDC), a Canadian non-profit research organization, has for decades devised large-scale demonstration projects using random assignment evaluation designs, most recently on how to deliver career development services most effectively and on postsecondary financing reforms for under-represented groups.

Canada is also home to one of the most comprehensive randomized controlled trials of a program model to address chronic homelessness, known as Housing First. The Mental Health Commission of Canada (MHCC) was given $110 million to test the Housing First approach in a variety of environments and with different populations in five Canadian cities (MHCC 2014). Over 2,000 participants were recruited from shelters and from the streets, randomly placed into experimental and control groups, and tracked over a four-year period. The results show that those in the Housing First program model were substantially more likely to remain housed and exhibited better health outcomes than the control group (business as usual), and that for the most chronically homeless this approach saved the government $21.72 for every $10 of public spending. These experimental findings directly shaped the federal government's policy shift toward Housing First in 2013.
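
Analytically, the headline housing outcome reduces to a difference in proportions between randomized arms. A sketch with invented counts (not the trial's actual figures or sample sizes):

```python
import numpy as np
from scipy import stats

# Hypothetical retention counts in the spirit of the trial described
# above: number of participants stably housed at follow-up, by group.
housed_tx, n_tx = 620, 1000    # Housing First arm (invented numbers)
housed_ctl, n_ctl = 310, 1000  # business-as-usual arm (invented numbers)

p_tx, p_ctl = housed_tx / n_tx, housed_ctl / n_ctl

# Two-proportion z-test: because the groups were formed by random
# assignment, this difference can be read causally.
p_pool = (housed_tx + housed_ctl) / (n_tx + n_ctl)
se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_tx + 1 / n_ctl))
z = (p_tx - p_ctl) / se
p_value = 2 * stats.norm.sf(abs(z))

print(f"stably housed: {p_tx:.0%} (Housing First) vs {p_ctl:.0%} (control)")
print(f"difference = {p_tx - p_ctl:.0%}, z = {z:.2f}, p = {p_value:.3g}")
```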

Next steps forward

These examples demonstrate that experimental work is being conducted in Canadian public administration, in practice and in the scholarly community, with considerable room for growth. What considerations should guide the next frontiers of this subfield? First, we should resist using experiments as a way of “knowing more and more about less and less” (McGrath 1964) and instead design experiments with attention focused on the big questions in public administration. Second, experiments can be expensive, but they have demonstrated their value, chiefly by presenting clear and credible results in a form that decision makers need in order to move forward. Partnering with government agencies and departments on pilot experimental studies is one way to share these costs, and community foundations and traditional granting agencies have also shown considerable interest in these methods. Third, most experimental public administration research in Canada and elsewhere has focused on individuals as the unit of analysis, but much administrative and political decision-making takes place in small groups, from internal administrative deliberations to public-private negotiations, and experimental work ought to expand into this area. Experiments must not remain simple designs with individuals as the unit of analysis testing one variable of interest; they must become more advanced, given the complex nature of modern public administration.

How to get there? First, experimental methods in public administration should not be seen as a panacea or a substitute for other methods, but as complements to traditional observational research. Second, our graduate students need to be exposed to experimental work and receive training sufficient to conduct experiments, just as we train them in interview methods and policy analysis tools. The infrastructure to do this in Canadian graduate schools is spotty (this author, as a graduate student, took supplemental course work in the United States to gain fluency in these methods), and departments should look to expand expertise in this area, perhaps on a collaborative basis. Third, through education and outreach, we need to generate buy-in from the professional public service community in Canada, as has been done with the federal government's Innovation Hub and similar BIUs in BC and Ontario (the latter of which has completed numerous pilots ranging from tax collection to public health), so that practitioners cooperate and even collaborate with our experimental research programs to establish realism in our designs, while we maintain the highest ethical standards for our interventions.

References

Bellé, Nicola. 2014. “Leading to make a difference: A field experiment on the performance effects of transformational leadership, perceived social impact, and public service motivation.” Journal of Public Administration Research and Theory 24 (1): 109–36.

Blom-Hansen, Jens, Rebecca Morton, and Søren Serritzlew. 2015. “Experiments in public management research.” International Public Management Journal 18 (2): 151–70.

Bouwman, Robin, and Stephan Grimmelikhuijsen. 2016. “Experimental public administration from 1992 to 2014: A systematic literature review and ways forward.” International Journal of Public Sector Management 29 (2): 110–31.

Charbonneau, Étienne, and Gregg G. Van Ryzin. 2015. “Benchmarks and citizen judgments of local government performance: Findings from a survey experiment.” Public Management Review 17 (2): 288–304.

Doberstein, Carey. 2016. “Whom do bureaucrats believe? A randomized controlled experiment testing perceptions of credibility of policy research.” Policy Studies Journal. Early view.

Forget, Evelyn L. 2011. “The town with no poverty: The health effects of a Canadian guaranteed annual income field experiment.” Canadian Public Policy 37 (3): 283–305.

McGrath, Joseph E. 1964. “Toward a theory of method for research on organizations.” In New Perspectives in Organization Research, edited by W. Cooper, H. Leavitt, and M. Shelly. New York: Wiley, pp. 533–57.

Mental Health Commission of Canada. 2014. National Final Report: Cross-Site At Home/Chez Soi Project. Available at http://www.mentalhealthcommission.ca/sites/default/files/mhcc_at_home_report_national_cross-site_eng_2_0.pdf.

Moynihan, Donald. 2013. “Does public service motivation lead to budget maximization? Evidence from an experiment.” International Public Management Journal 16 (2): 179–96.
