
Original Research

Assessment of English-Language Proficiency for General Practitioner Registrars

ANNA CHUR-HANSEN, PHD; TARYN ELIZABETH ELLIOTT (BA HONS PSYCH); NIGEL CHARLES KLEIN, B. ED., BAVE; CATE ADELE HOWELL, MBBS

Introduction: English-language proficiency of medical practitioners is an issue attracting increasing attention in medical education. To best provide language education support, it is essential that learning needs are assessed and that useful feedback and advice are provided. We report the outcomes of a language assessment that was embedded within the context of a comprehensive general practice learning-needs analysis.

Methods: A group of general practitioner registrars (N = 18) training in Adelaide, South Australia, participated in the learning-needs analysis. The analysis used reliable, validated rating scales that provided information on both verbal and written language skills. These scales were used in the context of an objective structured clinical interview. The interviews were videotaped to enable multiple ratings per candidate. Following the learning-needs analysis, ratings were collated and fed back individually to participants according to a feedback report and template.

Results: Of this sample, 5 (28%) were found to have no need for any assistance with either spoken or written language, 5 had poor handwriting, 5 were considered to have minor difficulties, and 3 (17%) were identified as having substantial spoken and written English-language difficulties. These outcomes allowed medical educators to focus the language education support offered to the general practitioner registrars appropriately.

Conclusions: Language skills can be usefully assessed within a more comprehensive learning-needs analysis. In combination with this assessment, the provision of specific feedback and recommendations for appropriate language-learning opportunities is essential.

Key Words: English-language proficiency, training, general practitioner, trainee

Dr. Chur-Hansen: Associate Professor, Discipline of Psychiatry, University of Adelaide, South Australia; Ms. Elliott: Senior Education Research Officer, Adelaide to Outback GP Training Program; Mr. Klein: Manager, Adelaide to Outback GP Training Program, Medical Education; Dr. Howell: Director, Primary Care Mental Health Unit, Discipline of General Practice, University of Adelaide, South Australia. Dr. Chur-Hansen was paid by the Adelaide to Outback GP Training Program as a consultant to collaborate with the Adelaide to Outback medical education team in the design of the project reported on in this article. In accordance with the ethical requirements of a research project, this report on an education initiative ensures that trainees’ confidentiality is maintained. The report has been written with all data based on group performance so that no individuals can be identified. The authors thank Leticia Supple and Mabel Chee for acting as raters. Correspondence: Anna Chur-Hansen, PhD, Discipline of Psychiatry, University of Adelaide, South Australia, 5005; e-mail: anna.churhansen@adelaide.edu.au.



© 2007 Wiley Periodicals, Inc. Published online in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/chp.092

Introduction

The necessity for requiring minimum standards for the English-language proficiency of international medical graduates is uncontroversial and well-accepted.1 Australian universities similarly accept the necessity of screening international medical students’ English-language proficiency before admission to medical courses. Some universities also screen all incoming cohorts for language skills, regardless of the student’s country of origin, offering interventions to improve English where need is identified.2 Such screening of doctors entering an Australian college specialty training program has not, to our knowledge, been documented in the literature. High-level language proficiency is essential for medical practitioners. The ability of a physician to interact with a patient, based on both interpersonal skills and language proficiency, has been shown to influence patient satisfaction, patient adherence, the likelihood of seeking treatment, and health outcomes.3 Recent newspaper reports of language difficulties negatively affecting patient care do not instill confidence in the public.4 A sound command of language is also fundamental to effective dealings with peers, colleagues, educators, and the wider health professional community.

JOURNAL OF CONTINUING EDUCATION IN THE HEALTH PROFESSIONS, 27(1):36–41, 2007


In addition to the recognition that good language skills lead to better clinical encounters, a recent Australian General Practice Training requirement for regional training providers (RTPs) stipulates that RTPs must formally assess the learning needs of all general practitioners enrolled in the program (GP registrars) from 2006 onward.5 One of the areas requiring assessment is language. All identified learning needs must then be addressed within registrar training. The Adelaide to Outback GP Training Program (A2O) is 1 of 21 RTPs for the specialty of general practice in Australia. Its region covers both urban and rural areas within South Australia. Typically, GP registrars enrolled in the program will complete 3 years of training, depending on their recognized prior learning. Their first year in the program is spent working in the hospital environment, gaining experience across a number of disciplines. Following this, the registrars complete 2 years of community-based GP placements, in combination with attendance at a number of mandatory training activities. Training is based on the apprenticeship model, and support is staged toward achieving independent practice.

In this article, we describe a procedure for assessing the spoken and written English-language proficiency of GP registrars in A2O commencing their first year of community placements. The aim was to identify both their strengths and areas for improvement in language, which could then inform coaching by English-language specialists. The exercise was completely formative, with no summative consequences for candidates. The contextual setting for this project was within a comprehensive learning-needs analysis (LNA) to prepare registrars for entry into GP community training.
Method

Several bodies, most notably the Educational Commission for Foreign Medical Graduates in the United States, have employed physician examiners and standardized patients to evaluate the spoken English-language proficiency of candidates seeking to enter postgraduate medical education programs.3,6,7 Nurse examiners have also been used to assess spoken-language skills.3 The typical method employed involves rating each candidate on his or her spoken language in the context of an objective structured clinical interview (OSCI), using a 4-point Likert scale ranging from 1 (low comprehensibility) to 4 (very high comprehensibility). However, Rothman and Cusimano7 elaborated on these ratings. In their study, candidates were rated on a 7-item speaking performance rating instrument, which assessed pronunciation, speech flow, grammar, vocabulary, question handling, listening comprehension, and coherence of approach. Often in studies that assess spoken English during an OSCI, medical communication skills, such as the demonstration of empathy and building rapport, are also rated on the basis of the candidate’s performance during the interview. A written account of the interview has also been rated in some research to assess written English skills.8

Chur-Hansen and Vernon-Roberts9 have designed and validated a Language Rating Scale to rate Australian medical undergraduates’ spoken English-language proficiency during an OSCI with a standardized patient. A Written Language Rating Scale complements the ratings of spoken language. The items on the 2 scales reflect the concerns expressed over a number of years by clinical educators regarding the specific weaknesses identified in students’ language skills.10 The instruments were designed so that areas for improvement could be identified, with a view to engaging the student in specialized assistance from an English-language specialist. That is, the instruments serve the purpose of providing formative feedback to candidates as well as providing information about language proficiency to medical educators. They have also been designed so that reassessments can be made following any interventions, to determine whether improvements have been made. In accord with the typical method, the Language Rating Scale was designed for use during an OSCI, with ratings made by an examiner and a standardized patient. Following the interview, the candidate immediately writes an account of what took place, including what was discussed and what decisions, if any, were made with the patient. Assessors then rate the linguistic quality of this account on the Written Language Rating Scale; before making any judgments, they view a videotape of the interview so that they are aware of what actually took place. There are 12 items on the Language Rating Scale, each rated from 1 (poor) to 5 (excellent) with a midpoint of 3 (adequate).
Spoken language is rated for use of correct tense; use of appropriate register (that is, professional or “lay” language); comprehensibility of speech due to accent; appropriate rate of speech; appropriate use of nonverbal communication (congruent with the spoken language); appropriate response to requests, apologies, and thanks from the patient; understanding of informal language (such as colloquialisms and slang); appropriate use of informal language; clarification where comprehension due to language skills is lacking; audibility of speech; fluency of speech; and a global, overall impression of spoken-language proficiency. The Written Language Rating Scale comprises 10 items rated on the same 5-point scale, as described. The items are appropriate content; use of appropriate register (avoiding value judgments and jargon); use of appropriate vocabulary; appropriate use of tense; appropriate use of articles, pronouns, and prepositions; appropriate use of spelling, punctuation, and capitals; legibility of handwriting; appropriate use of conventions (such as forms of address); fluency of written expression; and a global, overall impression of written-language proficiency. Both instruments allow for qualitative comments about language to be recorded in addition to ratings and allow for missing data—for example, if the candidate does not use colloquial language, this cannot be rated. This does not pose any problem because the instruments are not scored to produce totals.
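Because unratable items are recorded as missing rather than scored, collating the ratings means summarizing each item only over the ratings that were actually made. A minimal sketch of that collation, using hypothetical scores rather than data from the study:

```python
# Illustrative sketch (not the authors' code): summarizing one rating-scale
# item across raters and candidates, where None marks an item that could
# not be rated (e.g., the candidate used no colloquial language).
from statistics import mean, stdev

# Hypothetical scores for a single item, one entry per rater-candidate pair.
scores = [5, 4, None, 3, 5, 4, None, 2, 5]

rated = [s for s in scores if s is not None]
n_missing = len(scores) - len(rated)

summary = {
    "missing_n": n_missing,
    "missing_pct": round(100 * n_missing / len(scores), 1),
    "range": (min(rated), max(rated)),
    "mean": round(mean(rated), 1),
    "sd": round(stdev(rated), 1),
}
```

Because the instruments are not scored to produce totals, no imputation of missing values is needed; each item is simply reported with its own count of missing ratings, as in the tables below.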




Procedure

A total of 18 candidates were involved in the language learning-needs analysis (LNA), making up the entire intake of trainees for the A2O training program for that year. Eleven of these were born in Australia or New Zealand. For those who were born overseas, the mean length of time spent living in Australia was 13.8 years (SD 9.0 years); time spent in Australia ranged from 2 to 30 years. The group consisted of both Australian Medical Council Exam graduates and Australian graduates. In any rating task, it is essential that assessors be adequately trained. All raters for this exercise were involved in a 2-hour training session in which all items on the Language Rating Scale were defined, explained, and discussed. Two videotapes of GP registrars interviewing a standardized patient, deliberately selected to represent high English-language proficiency and lower proficiency, were viewed and rated using the instrument, and the results were discussed and considered. Standardized patients were trained in their role not only for authenticity but also with a view to ensuring that they allowed the candidate ample opportunity to speak and offer explanations during the interview. They were trained to incorporate slang into the interview, specifically “knocked up,” “up the duff,” and “When is it, like, safe to have sex?” On the day of testing, registrars were required to interview the standardized patient for a maximum of 15 minutes. The case involved “Maria,” a 16-year-old student presenting to the GP requesting the combined oral contraceptive pill because she wished to commence sexual activity with her 16-year-old boyfriend. A medical educator observed the encounter and rated first the candidate’s clinical skills and then the candidate’s spoken language. The standardized patient also rated spoken language immediately following the interview, as well as rating the GP’s medical communication skills.
Both sets of ratings for spoken language were completely independent, with no conferring between them in arriving at ratings. Five minutes after each interview was allowed for completion of ratings. The candidates were allocated to 1 of 4 medical educator–standardized patient pairs. Two of the medical educators were men, and as a requirement of the case, all standardized patients were women and all were in the 30- to 40-year age group. All interviews were videotaped. On completion of the exercise, the 8 raters met with the first author for a debriefing session, to report on the process. Three others—the second author (T.E.E.), a research assistant, and an English-language specialist—subsequently viewed the videotapes and made ratings. Thus, each candidate received 5 independent ratings for spoken-language proficiency on the Language Rating Scale.

Immediately on completion of the interview, candidates moved to a second station, where they were provided with a sheet with the following instructions: “You have 10 minutes to record the interview you have just completed. Write an account of the interview so that another health care professional would understand what took place and the decisions and conclusions drawn by you and the patient. You should write in complete sentences and in paragraphs. Your writing must be legible. Please avoid the use of specialized medical terminology. If it is used, please ensure that it is explained.” The 3 raters, all of whom rated spoken language, were provided with detailed instructions related to the Written Language Rating Scale criteria and how to rate the written account. These 3 raters first observed the videotaped recording of the interview and then proceeded to rate written-English proficiency using the instrument. The first author (A.C.H.) was provided with all raw data and collated the results, producing a summary report for each candidate, for the provision of formative feedback. The identity of each candidate was blinded, and she had no information other than the names of the raters.

Results

In total, 18 candidates were rated for spoken and written English-language proficiency. Data were tabulated, and both the ranges and averages for each item on both instruments were examined. A summary of the ranges, means, and standard deviations across the group can be found in TABLE 1 (spoken language on the Language Rating Scale) and TABLE 2 (written language on the Written Language Rating Scale). For the purpose of referring candidates for assistance to an English-language specialist, a score of 1 or 2 (poor) on any one criterion of either instrument was used as the cutoff. A score of 3 (adequate) resulted in referral to a medical educator for further discussion and action if required. Scores of 4 and 5 (excellent) were taken to be indicative of no problems with language proficiency. Reliability analysis revealed that the overall reliability found among raters for both the Language Rating Scale and the Written Language Rating Scale was high (

TABLE 1. Ranges and Means for Items on the Language Rating Scale (N = 18)

Item                                         Missing, No. (%)   Range   Mean (SD)
Use of correct tense                         ––                 2–5     4.6 (0.7)
Avoidance of jargon                          2 (2.2)            2–5     4.2 (0.9)
Comprehensible accent                        2 (2.2)            2–5     4.4 (1.0)
Rate of speech appropriate                   ––                 2–5     4.5 (0.8)
Appropriate nonverbals                       ––                 2–5     4.5 (0.7)
Response to apologies, thanks, requests      4 (4.4)            2–5     4.5 (0.8)
Understanding of informal language           18 (20.0)          1–5     4.5 (1.0)
Use of informal language                     43 (47.8)          1–5     4.0 (1.2)
Clarification where comprehension lacking    11 (12.2)          2–5     4.4 (0.9)
Audibility of speech                         ––                 3–5     4.6 (0.7)
Fluency of speech                            ––                 1–5     4.5 (0.9)
Overall impression of language proficiency   1 (1.1)            2–5     4.5 (0.8)

Note: Total number of scores = 90 (18 candidates × 5 raters).

TABLE 2. Ranges and Means for Items on the Written Language Rating Scale (N = 18)

Item                                         Missing, No. (%)   Range   Mean (SD)
Appropriate content                          ––                 2–5     4.0 (0.7)
No value judgments made                      1 (1.9)            2–5     4.3 (0.7)
Appropriate vocabulary                       ––                 2–5     4.3 (0.9)
Appropriate tense                            ––                 2–5     4.1 (1.0)
Use of pronouns, articles, prepositions      1 (1.9)            2–5     3.9 (1.1)
Use of spelling, punctuation, capitals       ––                 1–5     3.8 (1.0)
Legibility of handwriting                    ––                 2–5     3.5 (0.9)
Use of conventions, eg, use of Mr., Ms.      ––                 1–5     3.7 (1.3)
Fluency of written expression                ––                 1–5     3.9 (1.2)
Overall impression of written proficiency    1 (1.9)            2–5     3.9 (0.9)

Note: Total number of scores = 54 (18 candidates × 3 raters).
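The referral rule described in the Results can be sketched as follows (an illustrative sketch only; the function name and data layout are assumptions, not taken from the study):

```python
# Illustrative sketch of the referral cutoff: a score of 1 or 2 (poor) on
# any criterion triggers referral to an English-language specialist; a
# score of 3 (adequate) on any criterion triggers discussion with a
# medical educator; scores of 4-5 throughout indicate no concerns.
def referral(item_scores):
    scores = [s for s in item_scores if s is not None]  # skip unrated items
    if any(s <= 2 for s in scores):
        return "English-language specialist"
    if any(s == 3 for s in scores):
        return "medical educator"
    return "no referral"
```

Because the cutoff applies to any single criterion rather than to a total score, one poor rating is enough to trigger a referral regardless of how strong the remaining items are.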