WAYS TO PROFICIENCY IN SPOKEN ENGLISH AS A FOREIGN LANGUAGE – TRACING INDIVIDUAL DEVELOPMENT

October 6, 2017 | Author: Irena Czwenar | Category: Second Language Acquisition
Irena Czwenar
Kielce University of Humanities and Natural Sciences


1. Introduction
The usual way of measuring linguistic development in the practice of language teaching is through the application of batteries of language tests addressing various subsystems and skills. Oral proficiency testing invariably involves assessment of the students' performance in an oral interview carried out by a group of examiners, or raters.
Since proficiency is a complex construct, the measurement of its development must involve an explicit identification of its perceived dimensions. The aspects of oral proficiency addressed in this study include the qualitative aspects of fluency, linguistic accuracy and complexity. Although the choice of proficiency dimensions, i.e. marking criteria, varies across existing examination formats, the three aspects of proficiency examined in this study feature in the widely accepted standardized assessment frameworks (cf. the CEFR, Council of Europe, 2001; the ACTFL Proficiency Guidelines, 1999). Moreover, SLA research has a tradition of measuring the quality of performance in terms of fluency, accuracy and complexity (cf. Skehan and Foster, 1999; Robinson, 2001; Tarone, 1980; Larsen-Freeman, 2006).
2. Spoken language characteristics
The present section briefly describes the most important features of spoken language, which reflect various aspects of the speech production process:
Speech is delivered via the oral/auditory channel, which means that it is produced by interlocutors talking face-to-face in a particular context. This inevitably affects the way in which speakers 'package' information and the language choices they make.
Spoken language is typically dynamic and interactive; discourse develops as a result of interaction between the speakers, and between the speakers and the context.
Most speech is produced spontaneously, with no possibility of planning or rehearsing in advance. The real-time, 'online' processing makes it sensitive to the constraints of short-term memory.
The above properties have profound implications for the organisation and quality of the subsystems of the spoken language, that is, grammar, lexicon and phonology. In most general terms, the syntax of the spoken language tends to be fragmented and relatively simple; phrasal and clausal structures are less elaborated than those typical of the written genres. A similar lack of elaboration characterises the spoken vocabulary, which is of a narrower range and more repetitive than the vocabulary used in the written language. Spontaneous, unprepared talk abounds in hesitation phenomena, including repetitions, reformulations, silent and filled pauses.
2.1. The grammar of the spoken language
In structural terms, spoken discourse does not resemble a hierarchy, and subordination and embedding are far less frequent than in the written mode. The most common pattern of clause combination in spoken language is the linking of clauses in a sequential way, with clauses being added one after another (which is a result of real-time processing). Because the processes of conceptualization, formulation and articulation of a message run in parallel (cf. Levelt, 1989), speakers have no time to work out complex patterns of main and subordinate clauses. Thus, clause subordination is rare in informal spoken English; the prevailing clause-linking device is coordination, with conjunctions such as and, but, or. There are examples of clause subordination by means of because and so, but these two connectors often act more like coordinating than subordinating conjunctions. Subordinate clauses often occupy complete speaker turns, in which case they do not appear to be overtly connected to any specific main clause. Very often, they refer to and complement the other speaker's turn. Clausal blends, i.e. syntactic structures which are completed differently from the way in which they were begun, are also typical in spoken English (cf. Carter & McCarthy, 2006).
Some of the most common structural features characterising the grammar of the spoken language are given below:
Clauses and phrases tend to be linked through chaining or coordination.
There is a high incidence of subordinate clauses which do not appear to be connected to any particular main clauses.
Grammatical structures are far less complex than in the written language. Post-modification is rare.
Many constructions are incomplete or simply abandoned by the speaker.
2.2. Lexical properties of the spoken language
The specificity of spoken language vocabulary is closely associated with the nature of speech and cognitive, psychological and social factors underlying the processes of speech production. It is important to note at this point that certain aspects of spoken vocabulary may be present in one spoken genre or text-type, yet not necessarily observable in another. Therefore, the properties of spoken language vocabulary discussed in this section should be seen as representing informal, conversational register, rather than applying to the spoken language in general. The choice of this particular variety of spoken English is motivated by the intention to highlight those aspects of lexis which are the most salient characteristics of the kind of spoken discourse investigated in the present study. The general characteristics of informal conversational vocabulary, in terms of its complexity, range and the frequency of individual words, based on the findings of corpus-based research (cf. Carter & McCarthy, 1997; Biber et al., 1999), include the following:
Speakers avoid 'lexical and syntactic elaboration'; as a result they rarely use complex and sophisticated words (Biber et al., 1999).
A fair amount of conversational lexis serves interpersonal and interactional purposes rather than transactional ones.
Spoken language is characterised by a high occurrence of prefabricated lexical expressions, often idiomatic in structure and meaning.
Many 'words' cannot be classified in terms of traditional grammar; e.g. 'now' may be used to refer to time, but also as a discourse marker used to close down a topic or phase of a conversation.
Avoidance of 'lexical and syntactic elaboration' is reflected in a relatively low level of lexical density (cf. Biber et al., 1999). Another statistical measure of the vocabulary profile of a text is lexical variation (McCarthy, 1990; Schmitt, 2000). This parameter makes use of the distinction between types and tokens; repetitions of the same word are treated as one type, while each occurrence of a word is a token. A text in which many tokens are repeated has a relatively low number of types, and consequently, its lexical variation expressed as the type/token ratio is low. The two measures reflect slightly different dimensions of vocabulary statistics. A text containing numerous repetitions of content words may be characterised by a high lexical density and a low lexical variation at the same time.
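The two statistics can be illustrated with a short Python sketch. The content-word test below uses a small hand-picked function-word list as a crude stand-in for proper part-of-speech tagging, so the figures are purely illustrative:

```python
def type_token_ratio(words):
    """Lexical variation: distinct word forms (types) over total words (tokens)."""
    return len(set(words)) / len(words)

# Crude stand-in for POS tagging: anything outside this small
# function-word list is treated as a content word (an assumption
# made only for the sake of the example).
FUNCTION_WORDS = {"the", "a", "an", "and", "but", "or", "on", "in",
                  "to", "of", "is", "was", "it", "i", "you", "that"}

def lexical_density(words):
    """Proportion of content words among all tokens."""
    content = [w for w in words if w.lower() not in FUNCTION_WORDS]
    return len(content) / len(words)

tokens = "the cat sat on the mat and the cat slept on the mat".split()
print(round(type_token_ratio(tokens), 2))  # 7 types / 13 tokens -> 0.54
print(round(lexical_density(tokens), 2))   # 6 content words / 13 tokens -> 0.46
```

A text full of repeated content words would keep lexical density high while driving the type/token ratio down, which is exactly the divergence between the two measures described above.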
2.3. The effects of processing constraints on the quality of speech
Spoken language is rarely prepared in advance and rehearsed. The 'online' production of speech means that processes of planning and execution of utterances run in parallel. As well as encoding his/her own utterances, the interlocutor has to decode the language produced by the other participant(s) in interaction. The most obvious outcome of the difficulties involved in the encoding and decoding of messages is the occurrence of dysfluency phenomena in the form of pauses, repeats and reformulations. Processing constraints also have an impact on the length and complexity of syntactic structures the speaker is able to produce, since the possibilities of planning utterances ahead of the actual production are severely affected by the limitations of the human working memory (reported to have a span of 5-7 words). By the same token, the size of the syntactic structure which can be held incomplete in memory until the next planning phase begins is reduced. Another consequence of this mode of production is the fact that structures occupying initial and middle positions of a clause are relatively simple when compared with those occupying final positions (Biber et al., 1999).
3. The study
3.1. Participants and procedures
The aim of the study reported in this article was to identify properties which characterise the spoken English of non-native speaking students of English. The aspects of spoken English which will be discussed below include the following:
Fluency of oral performance
Grammatical and lexical accuracy, measured as the number of errors per 100 words
Lexical variation measured by means of the type-token ratio
Grammatical complexity represented by the number of structural units, such as dependent clauses and phrases
The study was conducted at the English Philology department with students attending a three-year bachelor degree programme in English. On entering the college, the students typically represent a level of proficiency in English comparable to that of Cambridge First Certificate candidates. Graduates are expected to have attained a level of proficiency corresponding to the CAE (Certificate in Advanced English) and to be nearing the level of the CPE (Certificate of Proficiency in English).
The students' achievement is measured by means of various types of language tests addressing language subsystems and skills. Oral proficiency is assessed on the basis of the student's performance in an oral interview by a group of examiners using descriptive assessment criteria; the examiners' scores are then averaged to give the final grade. The same questions keep recurring as the students take their final examinations: Do the students improve their oral proficiency skills in the course of college training? If they do, how much progress do they make every year? Can the students' performance on oral tasks be measured more objectively than through assessment issued by a group of raters? Can we isolate the crucial components of their oral performance in order to provide clearer evidence of improvement?
The present study was undertaken in response to the above general considerations and its ultimate aim was to develop a structured way of grasping the sense of progress and development of the oral proficiency of upper-intermediate and advanced students of English. Owing to the lack of a general index of foreign language acquisition (Ellis, 1994) and to the complexity of language, it is impossible to capture the sense of progress by examining improvement in a single component of proficiency (Skehan & Foster, 1997). Therefore, the specific aims of the study were to investigate the gains in oral fluency, accuracy and complexity of the spoken language production of the students.
The method of investigation adopted in the study involved longitudinal monitoring of the students' progress in oral proficiency. To this end, the students were interviewed in a set of oral tasks applied serially over a period of three years. The interviews were conducted on a one-to-one basis.
The interview questions, which were determined in advance, were open-ended in order to elicit longer responses from the students. The rationale behind using open-ended questions was to obtain a substantial amount of 'natural-sounding' spoken language data. After pilot-testing of the format, the first interview was administered in the first weeks of instruction; this point is referred to as 'Year 0'. The consecutive interviews were repeated in the final weeks of Year One, Year Two and Year Three. In this way, eight of the nine subjects took part in a series of four interviews and one subject participated in three, so the procedure eventually yielded 35 interviews.
The interview schedule was built around three tasks, and the interview format resembled the procedure formerly used in FCE examinations. The tasks elicited transactional language in the form of long turns. The same tasks were repeated on every application of the interview. The following tasks were used:
The interviewee responded to questions about her home town, family, interests and future plans
On the basis of a visual prompt, the interviewee compared and contrasted two situations, adding her personal reflections on the problem
The interviewee expressed her opinion on a wider issue of successful language learning
All the interviews were tape-recorded and transcribed for later analysis. The transcripts were saved in electronic form, producing a mini-corpus of learner spoken English containing 24,500 words. The data were subsequently coded and analyzed in terms of fluency, accuracy and complexity measures. The prevailing methods of data analysis were quantitative.
Fluency was operationalised in terms of three temporal measures: (a) rate of speech, defined as the number of words per minute; (b) frequency of pauses, i.e. the number of silent and filled pauses per 100 words; (c) mean length of a pause. The temporal parameters were measured by means of the spectral analysis tools available in the Praat software package (Boersma & Weenink, 2004).
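As a sketch, the three temporal measures can be computed from a word count, the total speaking time, and a list of detected pause durations. The inputs here are hypothetical; in the study these values came from Praat's analysis of the recordings:

```python
def fluency_measures(n_words, duration_sec, pause_lengths_sec):
    """Return (rate of speech, pauses per 100 words, mean pause length).

    Assumed inputs: a word count, total speaking time in seconds, and
    a list of pause durations in seconds (e.g. exported from Praat).
    """
    rate = n_words / (duration_sec / 60.0)               # words per minute
    pause_freq = len(pause_lengths_sec) / n_words * 100  # pauses per 100 words
    mean_pause = (sum(pause_lengths_sec) / len(pause_lengths_sec)
                  if pause_lengths_sec else 0.0)         # mean pause length (s)
    return rate, pause_freq, mean_pause

# Example: 300 words in 3 minutes with 24 half-second pauses
rate, freq, mean_p = fluency_measures(300, 180.0, [0.5] * 24)
print(round(rate, 1), round(freq, 1), round(mean_p, 2))  # 100.0 8.0 0.5
```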
The measure of accuracy used in this study was the number of errors per one hundred words. In order to perform the error count consistently, a distinction between error type and token was recognized, as suggested by Lennon (1991). Thus, errors which were identical at the level of lexical realization were treated as tokens of the same type. The procedure of error count included only error types.
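Lennon's type/token distinction for errors can be sketched as follows. The error strings and word count are invented for illustration; identifying the erroneous sequences themselves still requires human judgement:

```python
def errors_per_100_words(error_instances, total_words):
    """Accuracy as error types per 100 words: instances that are
    identical at the level of lexical realization collapse into a
    single type, and only types enter the count."""
    error_types = set(error_instances)
    return len(error_types) / total_words * 100

# 'he go' occurs twice but counts as one error type
errors = ["he go", "an university", "he go", "more better"]
print(round(errors_per_100_words(errors, 150), 2))  # 3 types in 150 words -> 2.0
```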
Linguistic complexity was examined at lexical and grammatical levels. The main measure of lexical complexity was lexical variation operationalised as the type-token ratio. Grammatical complexity was described in terms of syntactic sophistication at phrase and clause levels. As a point of reference, a taxonomy of phrasal and clausal units was drawn up, adapted from the wide range of descriptive categories provided by Miller & Weinert (1998) and Biber et al. (1999).
For each data set, the mean score was calculated to illustrate the group's average on the given parameter in successive years. In addition, the standard deviation was calculated for each set of results to show the degree of dispersion within the group.
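As a check on the descriptive statistics, the reported group figures can be reproduced from the raw scores. For instance, the Year 0 column of Table 1 yields the reported mean, and the reported SD matches the population standard deviation (that the study used the population formula is an assumption inferred from this match):

```python
import statistics

# Year 0 rate-of-speech scores for Students A-I (Table 1, words per minute)
year0 = [115.7, 123.5, 96.2, 86.54, 106.8, 111.4, 102.7, 107.7, 77]

mean = statistics.mean(year0)    # group average (M)
sd = statistics.pstdev(year0)    # dispersion (population SD)
print(round(mean, 2), round(sd, 2))  # 103.06 13.67
```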

3.2. Results and discussion
3.2.1. Fluency
Each of the fluency measures used in the study reflects a different aspect of fluent speech production, yet all these parameters are inextricably linked to one another. The rate of speech is taken to reflect the speed and automaticity of speech production. An increase in the rate of speech is thus considered to indicate improvement in fluency. The frequency and distribution of pauses show how much information the speaker is able to encode in a single act of planning a message. The falling trend in the frequency of pausing is thus indicative of better performance. The mean length of a pause shows how much time the speaker needs to encode the next message. The falling trend in the mean length of a pause is interpreted as evidence of improvement. In order to render the degree of progress in a more transparent way, the results displayed in this section will be limited to one measure of fluency, that is, the rate of speech.
Student    Year 0    Year 1    Year 2    Year 3
A          115.7     116.5     115.9     123.8
B          123.5     122.2     141.9     158.6
C           96.2      75.4      75        82.3
D           86.54     92.1     113.6     113.2
E          106.8      98.4     103.1     105.9
F          111.4     102.2     100.3     116.8
G          102.7      --       121.5     102.3
H          107.7      88.5     106.3     106.3
I           77        83.3      78.6      87
M          103.06     97.33    106.24    110.69
SD          13.67     14.97     19.54     21.06

Table 1. Scores obtained for the rate of speech (number of words per minute).
As can be seen from Table 1, improvement in the rate of speech over the three-year period is indicated by the mean score, which increased from 103.06 words per minute in Year 0 to 110.69 in Year 3. However, there is a certain degree of inter-individual variation; the standard deviation, which illustrates dispersion within the group, increases as well. This implies that the rate of improvement varies across students: while some made considerable progress (A, B, D, F, I), others showed a slower rate of improvement or stability (E, G, H), or even regression (C).
Figure 1 communicates the above scores in a visual format, which offers a better opportunity for drawing comparisons between the speakers. For example, it shows the substantial difference in the rate of speech between Student B and Student C. The lines in the graph – alternately ascending and descending – also show the extent of fluctuation in the rate of speech for more than half the students. Only three of them (A, B, D) improved their speech rate consistently over the years.

Figure 1. Rate of speech: within-group variation.
3.2.2. Accuracy
The measure of accuracy used in the study was the number of errors per 100 words. This general measure of accuracy has been used in research on learner language before (cf. Ellis & Barkhuizen, 2005). Erroneous utterances were identified in terms of extent (cf. Lennon, 1990). All instances of error were cross-checked against BNC data for greater reliability and consistency with native-speaker norms. Instances of error immediately self-corrected by the speakers were excluded from the count. Two broad categories of error were adopted: grammatical and lexical. The problem of error embedding was resolved by counting an erroneous sequence of language containing an embedded error as one token. The details of the scores for accuracy are displayed in Table 2 and Figure 2 below.
The group's mean score in accuracy improved over time: the number of errors per 100 words fell from 5.74 to 4.10. The dispersion within the group decreased, as indicated by the decline in the standard deviation from 1.52 to 1.09. Considerable improvement took place in the case of Students B and I, whose frequency of error was reduced by more than 50%. Only one student (D) obtained a raw score indicating a higher frequency of error at the final point of measurement than at the onset of the study. Although her final score was not markedly different from the mean, there was no improvement in comparison with her initial score. The other students showed moderate progress on this measure, even though the rate of improvement fluctuated from year to year.
Student    Year 0    Year 1    Year 2    Year 3
A           7.99      5.11      4.98      5.36
B           5.35      2.60      3.79      2.48
C           3.98      4.43      3.62      3.03
D           3.84      5.06      4.81      4.21
E           6.46      4.02      4.56      5.20
F           3.56      3.54      3.78      3.13
G           6.11      --        4.86      5.50
H           7.63      6.93      8.77      4.77
I           6.74      6.04      3.77      3.23
M           5.74      4.72      4.77      4.10
SD          1.52      1.27      1.51      1.09

Table 2. Errors per one hundred words (number of errors divided by the total number of words, multiplied by 100).

Figure 2. Accuracy measured as the frequency of error.
3.2.3. Lexical complexity
The parameter used for measuring lexical complexity was the type/token (T/T) ratio, taken to be an index of lexical variation or richness. It is arrived at by dividing the total number of different words, i.e. types, by the total number of words in a text (tokens). This measure is known to be affected by the length of the text: the longer the text, the lower the T/T ratio. Because the texts produced by the students in the study vary in length, it was necessary to obtain a standardised ratio. The actual scores were therefore obtained with the help of WordSmith Tools software (Scott, 1998), which made it possible to divide each text into segments of equal size (400 words) and calculate the mean score across all the segments.
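The standardisation can be sketched as follows. This is a simplified re-implementation of the idea, not WordSmith's exact algorithm; any trailing partial segment is simply discarded here:

```python
def standardized_ttr(tokens, segment_size=400):
    """Mean type/token ratio over consecutive equal-sized segments,
    expressed as a percentage; averaging over fixed-size windows
    neutralises the effect of overall text length."""
    ratios = []
    for start in range(0, len(tokens) - segment_size + 1, segment_size):
        segment = tokens[start:start + segment_size]
        ratios.append(len(set(segment)) / segment_size)
    return sum(ratios) / len(ratios) * 100

# A toy text: 200 distinct words followed by 200 repeats, twice over
tokens = ([f"w{i}" for i in range(200)] + ["repeat"] * 200) * 2
print(round(standardized_ttr(tokens), 2))  # 201 types per 400-token segment -> 50.25
```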
Student    Year 0    Year 1    Year 2    Year 3
A          40.00     34.75     41.25     41.50
B          36.75     40.25     41.00     40.33
C          36.50     42.00     41.38     42.50
D          40.87     42.75     45.50     42.25
E          39.25     41.00     40.50     41.25
F          39.75     39.25     38.00     40.50
G          40.00     --        40.00     47.75
H          37.00     38.70     35.75     39.50
I          41.50     43.00     41.13     44.50
M          39.07     40.21     40.50     42.23
SD          1.75      2.53      2.50      2.39

Table 3. Lexical variation calculated as the type/token ratio.


Figure 3. Lexical complexity: the type/token ratio.
As can be seen from the figures in Table 3 and the graph in Figure 3, there was a steady increase in the mean score for the T/T ratio. However, individual results show a high degree of fluctuation in lexical complexity over the years. The SD value is at its lowest in Year 0 and remains relatively constant from Year 1 onwards, which suggests that the students demonstrated similar tendencies – rising or falling – in their use of rich vocabulary at successive points of measurement. It can be inferred from Figure 3 that almost all the students followed the same route of lexical complexity development over the years, the exceptions being Student A, who evidently under-performed on this measure in Year 1; Student D, who obtained the highest score in the group in Year 2; and Student G, whose score in Year 3 exceeded the mean by a large margin.
3.2.4. Grammatical complexity
The grammatical complexity measure presented below, that is, the amount of clausal subordination, was defined as the number of dependent clauses, both finite and non-finite, per 100 words. The study actually used one more measure of grammatical complexity – the number of complex phrases per 100 words – but given the limited scope of this article the latter will not be presented graphically.
The notion of complexity, as applied in this analysis, is used in compliance with the standards defined for spoken English, i.e., clauses which were incomplete were discarded from the count. Clauses containing errors within the verb phrase boundaries were ignored.
The scores obtained for grammatical complexity are shown in Table 4 and Figure 4.
Student    Year 0    Year 1    Year 2    Year 3
A           3.27      5.49      5.63      5.21
B           4.74      5.20      5.46      5.73
C           4.14      5.22      6.41      5.81
D           4.32      4.57      4.29      4.81
E           3.23      4.02      3.54      4.97
F           3.36      4.67      4.86      5.68
G           2.32      --        3.68      5.35
H           2.79      4.25      3.83      4.37
I           3.54      3.69      4.51      4.99
M           3.52      4.64      4.69      5.21
SD          0.67      0.60      0.97      0.43

Table 4. Grammatical complexity: dependent clauses per 100 words.
As can be seen from the data, all the students improved their performance in the use of subordination, and most improved consistently throughout the years. The standard deviation reached its lowest value in Year 3, which indicates less variation on this measure in the group's performance at the final stage of language instruction. Even though in some cases there was a falling trend in the final year (Students A and C), the end results were markedly better than at the initial stage. Overall, there is a gradual rise in the value of the mean score across the consecutive years of the study. Figure 4 below demonstrates this growing tendency clearly.


Figure 4. Grammatical complexity: the use of subordination.
4. Conclusion
The findings of the study suggest that sustainable development of oral proficiency in a foreign language is not ensured for every member of the same student group, despite the fact that all the students are involved in the same language teaching programme. However, in spite of the differences in the levels of fluency, accuracy and complexity achieved by individual students, the central tendencies for each measure, as shown by the mean scores, are positive. The fact that some of the students improved one aspect of oral proficiency while neglecting another may indicate that their linguistic systems had not stabilised sufficiently, i.e. they were still at the phase of restructuring. The finding that some aspects of language proficiency may improve while other aspects regress is supported by similar evidence from studies of linguistic development reported in the SLA literature (cf. Larsen-Freeman, 2006).
With respect to fluency development, the findings showed that some of its temporal aspects improved over time, while others remained at the same level. Sets of data obtained at successive points of measurement demonstrated an upward trend for the rate of speech and a downward trend for the frequency of pauses, as indicated by the mean scores. However, the mean length of the pause remained relatively constant throughout the years. The results also showed greater accuracy in the use of lexical items and grammatical structures in the students' oral performance. As regards complexity, the results showed a marked increase in the index of lexical complexity (the T/T ratio), and in both indices of grammatical complexity (dependent clauses and complex phrases counted per 100 running words).
As regards individual improvement, fluency, accuracy and complexity levels did not improve in equal proportions in the consecutive years, and development along one or two of these dimensions seems to have taken place at the expense of another. A plausible pattern that emerges from the data obtained is that of conflicting priorities in individual paths of linguistic development; enhancement in one of the aspects of proficiency seems to exert an inhibitory effect on the development of another/other dimension(s).
A variety of factors may contribute to the shaping of individual paths of development. An important factor which significantly contributes to language improvement is the sum of the learner's linguistic experience. Individual student profiles sketched on the basis of information provided via questionnaires (not discussed in this article) suggest that the subjects of the study had rather limited experience of using English for real communication in naturalistic settings. Even though the students were offered ample opportunities to engage in communicative activities, their learning experience was nevertheless situated in the language classroom context. The length of language learning experience preceding admission to the college does not seem to have affected the students' proficiency level in English. In fact, the best results were obtained by students who had learnt English for a relatively short period of time before entering the college (e.g. Students B and F). The same students were found to make better use of the available resources for language enhancement and to take every opportunity to improve their English. This observation emphasises the role of learner autonomy and responsibility for one's own learning.
References:
ACTFL Proficiency Guidelines (1999) New York: American Council on the Teaching of Foreign Languages.
Biber, D., Johansson, S., Leech, G., Conrad, S. and Finegan, E. (1999) Longman Grammar of Spoken and Written English. Harlow: Longman.
Boersma, P. and Weenink, D. (2004) Praat: doing phonetics by computer. http://www.fon.uva.nl/praat/
Carter, R. and McCarthy, M. (1995) Grammar and spoken language. Applied Linguistics 16(2), 141-158.
Carter, R. and McCarthy, M. (1997) Written and spoken vocabulary. In N. Schmitt and M. McCarthy (eds.) Vocabulary. Description, Acquisition and Pedagogy (pp. 20-39). Cambridge: Cambridge University Press.
Carter, R. and McCarthy, M. (2006) Cambridge Grammar of English: a Comprehensive Guide. Spoken and Written English. Grammar and Usage. Cambridge: Cambridge University Press.
Council of Europe (2001) Common European Framework of Reference for Languages: learning, teaching, assessment. Cambridge: Cambridge University Press.
Ellis, R. (1994) The Study of Second Language Acquisition. Oxford: Oxford University Press.
Ellis, R. and Barkhuizen, G. (2005) Analysing Learner Language. Oxford: Oxford University Press.
Larsen-Freeman, D. (2006) The emergence of complexity, fluency, and accuracy in the oral and written production of five Chinese learners of English. Applied Linguistics 27(4), 590-619.
Lennon, P. (1991) Error: some problems of definition, identification, and distinction. Applied Linguistics 12(2), 180-195.
Levelt, W. J. M. (1989) Speaking: from Intention to Articulation. Cambridge, MA: MIT Press.
McCarthy, M. (1990) Vocabulary. Oxford: Oxford University Press.
McCarthy, M. (1998) Spoken Language and Applied Linguistics. Cambridge: Cambridge University Press.
Miller, J. and Weinert, R. (1998) Spontaneous Spoken Language. Syntax and Discourse. Oxford: Clarendon Press.
Robinson, P. (2001) Task complexity, task difficulty, and task production: exploring interactions in a componential framework. Applied Linguistics 22(1), 27-57.
Schmitt, N. (2000) Vocabulary in Language Teaching. Cambridge: Cambridge University Press.
Scott, M. (1998) WordSmith Tools. Version 3.0. Oxford: Oxford University Press.
Skehan, P. and Foster, P. (1997) Task type and task processing conditions as influences on foreign language performance. Language Teaching Research 1 (3), 185-211.
Skehan, P. and Foster, P. (1999) The influence of task structure and processing conditions on narrative retellings. Language Learning 49(1), 93-120.
Tarone, E. (1980) Communication strategies, foreigner talk and repair in interlanguage. Language Learning 30, 417-431.

Key words: fluency, accuracy, complexity, spoken English proficiency, individual development
Abstract
The article reports on the results of a longitudinal study conducted with the aim of finding evidence of improvement in the oral proficiency of nine students of English as a foreign language, whose target level of proficiency was upper-intermediate to advanced. All the students followed an English philology programme combined with a teacher training module run by a college of tertiary education in Poland. The general question behind the study concerned the level and range of improvement in the oral proficiency of the students in the consecutive years of their involvement in the college programme. More focused questions aimed to investigate the extent of improvement along the dimensions of fluency, accuracy and complexity and the possible ways in which the three dimensions interact in the course of linguistic development. The data presented in the article were elicited through a series of interviews administered over a period of three years. The results showed that, over time, the students developed greater fluency, spoke more accurately and used increasingly complex grammar and lexis. The findings also showed that the students developed their oral proficiency in different ways, by prioritizing one or two of its dimensions over the others.

