Assessment of physician competence in prehospital trauma care
Injury Vol. 26, No. 7, pp. 471-474, 1995. Copyright © 1995 Elsevier Science Ltd. Printed in Great Britain. All rights reserved. 0020-1383(95)00075-5
N. Notzer¹, A. Eldad² and Y. Donchin³

¹Unit of Medical Education, Tel Aviv University, Israel; ²Burn Unit and ³Department of Anaesthesiology, Hadassah Hebrew University Medical School, Israel

In an attempt to develop a model to measure the competence of physicians providing emergency care under difficult field conditions, 75 Israeli army medical corps physicians were evaluated through the use of four instruments: a debriefing interview, peer assessment, self-assessment and a written examination. The special on-site assessment model was designed to examine actual events, enabling an assessment of performance in real situations rather than simulated cases. Significant positive correlations were found between the results of the written examination and the peer evaluation on two of four measures (r = 0.36, P = 0.001; r = 0.23, P = 0.05), as well as on the two measures regarding self-evaluation and peer evaluation (r = 0.54, P = 0.001; r = 0.38, P = 0.05). It was found that those physicians who were trained in the army's medical officer course scored significantly higher on the written examination (P = 0.001) and were rated more highly by their senior peers (P = 0.048) than those who did not receive such training. It was concluded that it is advantageous to use a combination of knowledge (written examination) and performance (peer assessment or self-assessment) measures in order to arrive at a more comprehensive assessment of competence. In addition, the written examination format should be expanded and developed to include more clinical vignettes requiring treatment decisions, making this instrument a more clinically oriented measure of physician competence in trauma care.

Injury, Vol. 26, No. 7, 471-474, 1995
Introduction

Physicians on ambulance duty or serving in the army are called upon to treat trauma patients under difficult and often extreme circumstances, on the highway or at the scene of a military or civil emergency. Assessment of trauma care in the field is more difficult than in the hospital, as the field situation is far from a controlled or standardized research setting. The many variables which complicate on-the-scene trauma care include the number of casualties (who usually must be treated by a single physician), weather and lighting conditions, availability of transport for the victims, distance from the closest trauma centre and the equipment available on the scene. Is it feasible to evaluate objectively the care delivered by physicians under the most difficult of conditions? Is it possible for an expert assessor, far removed from the scene
of the incident, to assess the competence of these physicians?

Much of the literature reports on a variety of instruments for clinical evaluation which have been tested in hospital settings under carefully controlled research conditions [1,2]. These instruments include direct observation, oral and written examinations, global rating scales, medical record review, patient management profiles, computer simulations and simulated patients [1,3]. In the present study, an attempt was made to select and examine evaluation instruments which would be suitable for the unique and variable conditions present at the scene of an incident requiring trauma care.

It is difficult to pinpoint one instrument which will adequately assess clinical competence. The multiple choice question (MCQ) examination is convenient to use and is a quantifiable and reliable measure for testing clinical knowledge. However, clinicians agree that it is not sufficient to assess clinical competence. Other instruments, such as the debriefing interview, peer review or self-evaluation, add valuable information, but they are costly and require a high level of involvement on the part of senior staff. In addition, the results obtained from these measures are difficult to quantify. Global rating scales, which depend on peer review and self-evaluation, definitely have a place in the field of clinical assessment but should be used in conjunction with other, more objective measures [1]. In light of these considerations, and in order to develop a predictive model for future use, the evaluation was based on a combination of four methods often used in competence assessment: a written examination (MCQ), a debriefing session, peer review and self-evaluation.

This study was designed to address two specific issues: (1) to examine the feasibility and adequacy of, and the relationships between, a number of instruments designed to evaluate the physician's competence in trauma care in field situations, in order to suggest a comprehensive assessment model for the future; and (2) to identify background variables which influence the physician's competence in providing trauma care, in order to arrive at guidelines for training medical officers.
Materials and methods
The Israel Defence Forces (IDF) Medical Corps, which is the main provider of in-situ trauma care in cases of military and civilian emergencies, has adopted the trauma treatment regimen of the American College of Surgeons, the Advanced Trauma Life Support (ATLS) Course for Physicians [4]. This system provided the standards of clinical performance for the study.

The study population comprised 75 physicians in the regular army and on military reserve duty, known as medical officers (MOs), who had provided on-the-scene treatment, over a period of one year, to patients sustaining traumatic injuries of a moderate to severe nature, as defined by a trauma score below 9 on Champion's Trauma Severity Score [5]. It should be noted that the study was not based on a sample population but rather included all incidents which met the study criteria.

Procedure

Five senior traumatologists, with military backgrounds and at least 10 years' experience in trauma care, conducted the assessment sessions. Four assessment instruments were developed and pre-tested by the research team. In order to improve inter-rater reliability, the assessors attended preparatory sessions on how to perform a debriefing interview and a peer review assessment (using criteria based on the ATLS) and how to administer the written examination.

The assessment lasted approximately 2 h and took place within 24-36 h of the incident, while the event was still fresh and the MO was unaware of subsequent treatment received by the patient in the hospital. If military conditions permitted, the peer assessor was immediately transported to the site of a reported incident to facilitate an accurate reconstruction of the incident. In cases of major events, the medical unit was often still engaged in treating the casualties when the assessor arrived on the scene. Owing to the high cost in manpower and complicated logistics, only one senior peer was assigned to evaluate each incident.
The assessment instruments

Debriefing interview
The peer assessor conducted the debriefing and completed a standardized form which included a detailed description of the incident, the diagnostic and treatment considerations and the medical procedures performed at the scene. The researchers' intention was to develop a debriefing format which could be quantified and used as an additional measure in the analysis of performance.
Peer assessment of the MO
Following the debriefing session, the MO was rated according to four criteria. The first two criteria related to the assessment of the particular incident, as reflected by the MO's management of the incident (i.e. his medical decisions) and the level of the MO's clinical skills. The last two criteria related to a general assessment of the MO's ability to provide trauma care, based on the MO's theoretical knowledge and his overall competence in caring for trauma victims. Each of the four criteria was rated on a 4-point scale, from 1 (unacceptable) to 4 (very good).
Written examination
An MCQ examination was administered to test the physician's knowledge in the medical areas required for the provision of trauma care. The questions related to primary care in common and rather uncomplicated trauma cases, with approximately half of the items testing factual recall and the other half testing comprehension and clinical application. The questions were pre-tested in courses in trauma management administered by the Israel Defense Forces Medical Corps and their reliability was found to be acceptable (0.75). The blueprint of the examination showed a representative selection of the areas relevant to trauma care. The number of test questions (50) was kept to a minimum because of the time constraints of the assessment, with a maximum of 50 min allotted to the written examination.
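[Editorial illustration.] The reliability coefficient of 0.75 quoted above is the kind of figure produced by an internal-consistency statistic for dichotomously scored items; the paper does not state which coefficient was used, but Kuder-Richardson 20 (KR-20) is the usual choice for MCQ data. The following Python sketch, using simulated responses rather than the study's data, shows how KR-20 would be computed for a 75-examinee, 50-item test.

import numpy as np

def kr20(responses: np.ndarray) -> float:
    # Kuder-Richardson 20 for a matrix of 0/1 item scores
    # (rows = examinees, columns = items).
    k = responses.shape[1]                         # number of items
    p = responses.mean(axis=0)                     # proportion correct per item
    q = 1.0 - p
    total_var = responses.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

# Simulated data (not the study's): 75 examinees answering 50 items,
# generated from a simple logistic ability/difficulty model.
rng = np.random.default_rng(0)
ability = rng.normal(size=(75, 1))
difficulty = rng.normal(size=(1, 50))
p_correct = 1.0 / (1.0 + np.exp(-(ability - difficulty)))
responses = (rng.random((75, 50)) < p_correct).astype(int)

print(f"KR-20 = {kr20(responses):.2f}")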
Self-assessment by the MO
Following the debriefing interview, the MO was asked to evaluate himself according to two criteria: (1) his overall competence in trauma care provision, and (2) his management of the incident. The MO rated himself on a 4-point scale, from 1 (unacceptable) to 4 (very good).
Results

The four instruments proved feasible to employ with respect to their ease of implementation and the degree of co-operation received from both the assessors and the MOs. In the debriefing interview, much valuable anecdotal information was collected, but the results could not be quantified and were difficult to categorize. As a result, the debriefing interview was mainly used as the basis for the peer assessment.

Peer assessment
The mean scores (on a scale of 1-4) and standard deviations for the two incident-related criteria were 2.8 (SD 0.83) for management of the incident and 3.0 (SD 0.86) for clinical skills. The ratings for the general criteria were 3.0 (SD 0.77) for overall competence and 2.8 (SD 0.77) for theoretical knowledge. The distribution of the ratings by the peer assessors showed no difference in their ratings of the MOs on the four criteria in question, nor in the evaluation of the general criteria as opposed to the incident-related criteria.

Written examination
The mean score in the written examination was 59.7 per cent (SD 12). Scores for the sub-categories ranged from a high of 73 (the chest) to a low of 53 (general medical knowledge), with standard deviations of 19 and 17, respectively. The large standard deviations attest to the heterogeneous nature of the MO population tested.
Self-assessment
The mean scores (on a 4-point scale) and standard deviations for the two self-assessment criteria were 2.8 (SD 0.95) for overall competence in trauma care and 3.0 (SD 0.79) for management of the incident. In the self-assessment, as in the peer assessment, there was little difference between the ratings on the different criteria.
Relationships between results
Some relationships were found between the results of the assessment instruments (with the exception of the debriefing), as demonstrated by correlations and multiple regression analysis.
Table I. Correlations between results of the written examination and the peer/self-assessment

Assessment criteria            Peer assessment    Self-assessment
Management of the incident     0.17               0.21*
Clinical skills                0.23*              -
Theoretical knowledge          0.14               -
Overall competence             0.36**             0.22*

*P < 0.05; **P < 0.001
First, a significant positive correlation was found between the written examination and the peer evaluation on two out of four measures: overall competence (r = 0.36, P < 0.001) and clinical skills (r = 0.23, P < 0.05) (Table I). The correlation for overall competence was high, despite the variability of the conditions prevailing in the incidents evaluated. In the regression analyses, only overall competence in trauma treatment significantly predicted scores in the written examination (P = 0.01), while comprehension questions on the written examination proved the best predictors of the MO's overall competence.

Secondly, positive correlations, although of rather low magnitude, were found between the MO's self-assessment and the written examination with respect to both of the criteria measured: overall competence (r = 0.22, P < 0.05) and management of the incident (r = 0.21, P < 0.05).

Thirdly, significant positive correlations were found between the MO's self-assessment and the assessment of the senior peer on the two measures which both the MO and the senior peer were asked to evaluate, with a higher correlation for management of the incident (r = 0.54, P < 0.001) and a somewhat lower correlation for overall competence in trauma treatment (r = 0.38, P < 0.05) (Table II and Figure 1).

The relationships of background variables to the results of the assessment instruments were examined. Personal and military background variables included seniority in the medical profession, field of specialization and previous military posting. With regard to the non-military background variables, seniority itself did not prove to be a positive contributing factor in trauma care knowledge: test scores on the written examination declined in proportion to the number of years which had elapsed since graduation from medical school (P = 0.001). Similarly, when the MO's field of specialization was considered, surgeons did not score higher on the written examination than non-surgeons. In addition, when the peer assessment was analysed with respect to seniority and field of specialization, no relationship was found between overall competence in trauma care and the MO's medical specialty or the number of years since graduation from medical school.
[Figure 1. Correlations between assessment of general competence (peer and self-assessment) and scores on the written examination.]
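[Editorial illustration.] Correlations of the kind reported in Table I and Figure 1 are Pearson coefficients computed over one record per MO. The sketch below uses hypothetical stand-in arrays (a written examination percentage and a 1-4 peer rating), not the study's data; scipy.stats.pearsonr returns both the coefficient and its two-sided P value.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 75  # one record per MO

# Hypothetical stand-ins for the study's per-MO measures:
written = rng.normal(59.7, 12.0, n)        # written examination score, per cent
noise = rng.normal(0.0, 0.55, n)
peer_overall = np.clip(np.round(written / 30.0 + noise), 1, 4)  # 1-4 rating

r, p = stats.pearsonr(written, peer_overall)
print(f"written examination vs peer-rated overall competence: r = {r:.2f}, P = {p:.4f}")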
The effect of the military experience variables was evident for the written examination only: the younger MOs in compulsory service scored significantly higher on the written examination than the reserve officers (P = 0.03), and those who had attended the army medical corps' medical officer course had significantly higher results than the MOs who had not attended the course (P = 0.001). On the other hand, previous experience in trauma treatment and the previous military assignment of the MOs did not correlate with the written examination results. With regard to the peer assessment, none of the military background variables correlated with the assessment of overall competence in trauma treatment as evaluated by the MOs' peers. On the other hand, the senior peers' assessment of clinical skills favoured the graduates of the military medical academy: a t-test showed a significant difference between the mean ratings of those who participated in the army medical officer course (3.06, SD 0.85) and those who did not (2.33, SD 0.82) (P = 0.048).
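[Editorial illustration.] As a worked check on the comparison just reported, a two-sample t-test can be reproduced directly from the group means and standard deviations with scipy.stats.ttest_ind_from_stats. The group sizes below are assumptions, since the paper does not report how the 75 MOs were divided between course graduates and non-graduates; the resulting P value therefore differs from the published 0.048.

from scipy import stats

# Mean clinical-skills ratings from the paper; the group sizes (nobs1, nobs2)
# are assumed for illustration only.
t, p = stats.ttest_ind_from_stats(
    mean1=3.06, std1=0.85, nobs1=50,  # attended the medical officer course (assumed n)
    mean2=2.33, std2=0.82, nobs2=25,  # did not attend (assumed n)
)
print(f"t = {t:.2f}, P = {p:.4f}")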
Discussion

The Israeli emergency medical system is staffed mostly by young physicians who are not specialists in trauma care, a factor which reinforces the need to assure quality standards of pre-hospital trauma treatment. This study demonstrated, once again, the need for two types of assessment
Table II. Correlations between the peer assessment and the MO's self-assessment

                               Self-assessment
Peer evaluation                Overall competence    Management of the incident
Overall competence             0.38*                 0.48**
Management of the incident     0.40**                0.54**

*P < 0.05; **P < 0.001
instruments, one to measure knowledge and the other to assess performance [6]. Although the two types of measures were significantly correlated, the combined results more fully reflected the ability of the MO to provide trauma care in the field. This is similar to the results of studies conducted under controlled conditions in hospitals and medical schools [2,7,8]. Based on the findings of the present study, it is advocated that the MCQ examination should contain a greater number of questions which test comprehension, as well as questions simulating clinical situations requiring treatment decisions.

The highly significant correlation between the senior traumatologist's assessment and the MO's self-assessment highlights the potential for the routine use of a structured self-assessment instrument. This less expensive measure could provide a reliable routine report of the MO's performance in the non-cognitive sphere, i.e. in clinical procedures [9], while the peer evaluation tool would be used only periodically for quality assurance.

The lack of correlation found with background variables is worthy of note. The widely accepted assumption that seniority, specialty training and previous medical or military experience will guarantee effective trauma care proved to be false. The only significant factor which enhanced performance was participation in the army's special military medicine and trauma courses. These observations may be explained in part by the different environments and trauma care facilities found in the hospital and pre-hospital settings, and by the fact that trauma medicine is a relatively new specialty in Israel. These factors have obvious ramifications for training new MOs and for refresher course programmes for veteran MOs [10].

In addition to the data and conclusions gleaned from the study, reports from medical officers in the field showed that the very fact that this study was conducted put the subject of quality control in trauma care on the agenda and, in their estimation, contributed to the improvement of trauma care.
References

1 Neufeld VR and Norman GR, eds. Assessing Clinical Competence. New York: Springer, 1985.
2 Anderson BO, Sun JH, Moore EE et al. The development and evaluation of a clinical test of surgical resident proficiency. Surgery 1989; 106: 347.
3 Lloyd JS and Langsley DG, eds. How to Evaluate Residents. Chicago: American Board of Medical Specialties, 1986.
4 American College of Surgeons Committee on Trauma. Advanced Trauma Life Support Course Instructor Manual. Chicago: American College of Surgeons, 1989.
5 Champion HR, Sacco WJ, Hannan DS et al. Assessment of injury severity: the triage index. Crit Care Med 1980; 8: 201.
6 Abrahamson S. Evaluation: definition, process, competence (keynote address). In: Proceedings of the International Workshop on Evaluation. Seoul: Seoul National University College of Medicine, National Teacher Training Center for Health Personnel, 1992, 21.
7 Maatsch JL. Model for a criterion-referenced medical specialty test. Final report, Office of Medical Education Research and Development, Michigan State University, in collaboration with the American Board of Emergency Medicine.
8 Forsythe GB, McGaghie WC and Friedman CP. Construct validity of medical clinical competence measures: a multitrait-multimethod matrix study using confirmatory factor analysis. Am Educ Res J 1986; 23: 315.
9 Arnold L, Willoughby TL and Calkins EV. Self-evaluation in undergraduate medical education: a longitudinal perspective. J Med Educ 1985; 60: 21.
10 Kluger Y, Rivkind A, Donchin Y et al. A novel approach to military combat trauma education. J Trauma 1991; 31: 564.
Paper accepted 19 April 1995.

Requests for reprints should be addressed to: Dr Netta Notzer PhD, Unit of Medical Education, Sackler Faculty of Medicine, Tel Aviv University, POB 39040, Tel Aviv 69976, Israel.