A Comparison of Physician Assistant Programs by National Certification Examination Scores

Research Article

A Comparison of Physician Assistant Programs by National Certification Examination Scores

Roderick S. Hooker, PhD, PA; Brian Hess, PhD; Daisha Cipher, PhD

Purpose: The purpose of this study was to determine whether differences in attributes of physician assistant (PA) programs explain variation in Physician Assistant National Certifying Examination (PANCE) scores.

Methods: Variability in PANCE scores, aggregated over a 5-year period (1997-2001), was examined using a selected set of PA program attribute variables, as well as examinee age and gender. The following program variables were included: type of institution (private vs. public), type of degree (non-master’s vs. master’s), Carnegie higher education institution typology (i.e., undergraduate institutions vs. graduate-level research I and II institutions), class size, duration of program, and cost of tuition.

Results: A total of 18,276 PANCE scores were identified. A multiple regression analysis indicated that only 3.6% of the total variability in PANCE scores was accounted for by the set of program variables, examinee age, and gender. Squared semipartial correlations for each explanatory variable were less than 1%, indicating no meaningful differences in PANCE performance for specific types of PA programs.

Conclusions: Differences in selected PA program characteristics, examinee age, and gender were not meaningfully associated with PANCE performance. Given these results, future research may want to focus on psychological or individual-level predictors of PANCE performance nested within PA programs.

(Perspective on Physician Assistant Education 2002;13(2):81-86)

Rod Hooker is an associate professor in the Department of Physician Assistant Studies, University of Texas Southwestern Medical Center, Dallas, Texas. Brian Hess is the director of test development and research for the National Commission on Certification of Physician Assistants, Norcross, Georgia. Daisha Cipher is an assistant professor in the Department of Biostatistics, School of Public Health, University of North Texas Health Science Center at Fort Worth, Fort Worth, Texas.

Correspondence should be addressed to: Roderick Hooker, PhD, PA, University of Texas Southwestern Medical Center, 5323 Harry Hines Boulevard, Dallas, TX 75390-9090. Voice: 214-648-1701. Fax: 214-648-1003. E-mail: [email protected]

Introduction

Entrance into a career as a PA in the United States requires a license or certificate from one of the states, the District of Columbia, Guam (a U.S. territory), or the federal government (e.g., the military services, Veterans Administration, Public Health Service, and Bureau of Prisons). Within most legislative domains, this license or certificate is predicated on graduation from an accredited PA program and on passing the Physician Assistant National Certifying Examination (PANCE). The National Commission on Certification of Physician Assistants (NCCPA) develops and administers the PANCE.1

As of the end of 2002, there were 134 accredited PA programs in the United States, together providing a wide range of styles and approaches to educating students who will successfully pass the PANCE and become primary care clinicians. These programs, whose curricula are modeled after those of allopathic and osteopathic medical schools, teach the basics of medical science followed by clinical rotations. This occurs, on average, over a continuous 26-month period (with a range of 12–45 months, depending on the prior qualifications of matriculants). Programs must meet certain accreditation standards.2-4

Unlike medical schools, which seem to have a standard profile of education, content, and degree, there is a wide variety of PA programs from which to choose. PA programs may be housed in research-oriented universities, hospitals, colleges, and 2-year community colleges. Some programs grant a master’s degree, while others grant a bachelor’s degree or a certificate. Approximately half of the institutions that sponsor PA programs are established through public funding at the state level; the rest are private.5 The tuition at these schools varies widely. Hooker revealed a 16-fold difference in total tuition from the least expensive to the most expensive PA program in 2000. Higher tuition costs are associated with privately funded institutions.2 While tuition may be a factor in an applicant’s selection of a program, it is
not the only factor that must be considered. Applicants need to know the characteristics that are likely to contribute to their success as students. Without this information, an applicant probably selects a PA program based on geographical location, reputation, type of degree awarded, and possibly the pass rate on the PANCE (as declared by the program).

At the program level, little is known about what contributes to becoming a successful PA. In many ways, each PA program is an entity unto itself, an experiment in medical education. Few programs have tried to measure themselves against some other model of PA education in order to evaluate how well they are achieving some endpoint. A high pass rate on the PANCE, one measure of PA program success, seems to be the only uniform endpoint that all programs strive for.

To date, there has been limited systematic study of which PA program attributes contribute to success on the PANCE. One study by Oakes and colleagues compared student attributes and PANCE scores; they found that higher scores in the first and third trimesters of a military PA program predicted higher PANCE scores.6,7 McDowell, Clemens, and Frosch analyzed PANCE scores and found that master’s degree students had higher-than-average pass rates on the PANCE, as well as higher-than-average core, primary care, and clinical skills scores. They also found that students educated by PA programs whose accreditation status had been granted for the maximum period (signaling no concerns by the accreditation agency) performed significantly higher on the clinical skills portion of the PANCE.7

Because little is known about how PA programs compare to each other in academic achievement and capability, we set out to study differences in the attributes of PA programs by examining the PANCE performance of their graduates. Specifically, the object of our study was to try to explain a proportion of the variability in PANCE scores using a set
of PA program variables as well as examinee age and gender. In addition, by using a set of explanatory variables, it was possible to determine whether differences in specific PA program characteristics, including examinee age and gender, could explain variability in PANCE scores for first-time PANCE examinees. Our goal was not to make inferences about PA clinical competency in the field but to see if optimal program characteristics emerged.
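To make the analytic goal concrete, the full model implied by this design can be sketched as a single multiple regression of each first-time examinee’s score on the program attributes plus age and gender (the symbols below are our shorthand, not notation taken from the article):

\[
\mathrm{PANCE}_i = \beta_0 + \beta_1\,\mathrm{Private}_i + \beta_2\,\mathrm{Masters}_i + \beta_3\,\mathrm{ResearchUniv}_i + \beta_4\,\mathrm{ClassSize}_i + \beta_5\,\mathrm{Duration}_i + \beta_6\,\mathrm{Tuition}_i + \beta_7\,\mathrm{Age}_i + \beta_8\,\mathrm{Female}_i + \varepsilon_i
\]

The unique contribution of any one predictor is its squared semipartial correlation, the drop in explained variance when that predictor alone is removed from the full model:

\[
sr_j^{\,2} = R^2_{\mathrm{full}} - R^2_{\mathrm{full\ without\ }j}
\]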

Method

We identified a set of explanatory variables by which PA programs could be aggregated. These program attribute variables included:
• Type of institution (private vs. public)
• Type of degree (non-master’s vs. master’s)
• Carnegie higher education institution typology (i.e., undergraduate institutions vs. graduate-level research I and II institutions)
• Class size
• Duration of program
• Cost of tuition
Examinee age and gender were also included in the set as possible explanatory variables.

Identifying Program Variables

Many of the PA program attributes allowed for natural aggregations, such as public versus private institutions, type of degree conferred, size of class, duration of program, and total tuition of the program. This information was obtained for all programs from the 2000 Physician Assistant Programs Directory and verified by visiting each program’s Web site.8 PA programs were grouped and labeled according to identified characteristics and submitted to the NCCPA for coding and analysis. PA program characteristics with first-time examinee PANCE scores were aggregated for 5 years, 1997 through 2001. It should be noted that individual and program-specific PANCE scores were not identifiable at any time during the study.
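To make the grouping and coding step concrete, the following minimal sketch (Python/pandas; program names, values, and column labels are hypothetical, and the actual coding was carried out by the NCCPA on its own records) shows how directory attributes can be dichotomized and merged onto examinee records:

import pandas as pd

# Hypothetical program-attribute records of the kind listed in a programs
# directory; all values are illustrative only.
programs = pd.DataFrame({
    "program_id":  [101, 102, 103],
    "institution": ["private", "public", "private"],
    "degree":      ["masters", "certificate", "bachelors"],
    "carnegie":    ["research", "undergraduate", "undergraduate"],
    "class_size":  [45, 30, 60],
    "duration_mo": [27, 24, 26],
    "tuition_usd": [42000, 11000, 35000],
})

# Dichotomous (0/1) recodes of each categorical attribute, used later as
# explanatory variables.
programs["private"] = (programs["institution"] == "private").astype(int)
programs["masters"] = (programs["degree"] == "masters").astype(int)
programs["research_univ"] = (programs["carnegie"] == "research").astype(int)

# Hypothetical first-time examinee PANCE records, merged onto their
# program's attributes to form an analytic file.
examinees = pd.DataFrame({
    "program_id": [101, 101, 102, 103],
    "age":        [27, 31, 25, 29],
    "female":     [1, 0, 1, 1],
    "pance":      [520, 610, 480, 555],
})
analytic = examinees.merge(programs, on="program_id", how="left")
print(analytic.head())

Coding each attribute as 0/1 is what allows a program characteristic such as degree type to enter the regression as a single explanatory variable.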

PANCE Scores

Individual PANCE scores are based on the number of correct responses a candidate makes as well as on the difficulty of the items. Candidates’ responses to the items they see are entered into a computer program that uses a Rasch model to estimate each candidate’s proficiency. Scores on different forms of the examination are equated. Equating compensates for minor differences in difficulty between different forms and prevents unfair advantage to candidates who take easier forms and disadvantage for those taking a harder form. The equated scores are then standardized such that a reference group defines the metric of the scale. Standardized PANCE scores usually range from 200 to 800, although higher and lower scores are possible.

Data Analysis

Descriptive statistics were computed for each program variable, age, and gender, and preliminary analysis consisted of one-way ANOVAs and Pearson correlation coefficients. Statistical significance was determined using p
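A minimal, self-contained sketch of this analytic sequence follows, using simulated data in place of the actual 18,276 first-time scores (statsmodels and scipy are our choices of software, not necessarily what the authors used):

import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Simulated stand-in for the merged examinee-by-program analytic file.
rng = np.random.default_rng(0)
n = 500
analytic = pd.DataFrame({
    "private":       rng.integers(0, 2, n),
    "masters":       rng.integers(0, 2, n),
    "research_univ": rng.integers(0, 2, n),
    "class_size":    rng.integers(20, 80, n),
    "duration_mo":   rng.integers(12, 46, n),
    "tuition_usd":   rng.integers(5000, 80000, n),
    "age":           rng.integers(22, 45, n),
    "female":        rng.integers(0, 2, n),
    "pance":         rng.normal(500, 100, n),
})

# Preliminary analyses: a one-way ANOVA (e.g., PANCE by degree type) and a
# Pearson correlation (e.g., PANCE with tuition).
groups = [g["pance"].to_numpy() for _, g in analytic.groupby("masters")]
f_stat, p_anova = stats.f_oneway(*groups)
r, p_corr = stats.pearsonr(analytic["tuition_usd"], analytic["pance"])

# Multiple regression of PANCE scores on program attributes, age, and gender.
predictors = ["private", "masters", "research_univ", "class_size",
              "duration_mo", "tuition_usd", "age", "female"]
full = smf.ols("pance ~ " + " + ".join(predictors), data=analytic).fit()
print("R^2 for the full model:", round(full.rsquared, 3))

# Squared semipartial correlation for each predictor: the drop in R^2 when
# that predictor alone is removed from the full model.
for var in predictors:
    reduced_formula = "pance ~ " + " + ".join(v for v in predictors if v != var)
    reduced = smf.ols(reduced_formula, data=analytic).fit()
    print(var, "squared semipartial:", round(full.rsquared - reduced.rsquared, 4))

In the study itself, this full model accounted for only 3.6% of the variance in PANCE scores, and every squared semipartial correlation fell below 1%.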