5.6. Training with Tasks and Items for Reading, Listening and Linguistic Competences
The objective of the activities described in this section is to ensure that panellists can relate their interpretation of the CEFR levels to exemplar test items and tasks, so that they can later build on this common understanding when estimating the difficulty of locally produced items (Chapter 6) and when undertaking standard setting.
The techniques described can be used for test items and test tasks used to evaluate receptive skills and – where appropriate – to evaluate other aspects of language use, such as grammar and vocabulary.
Tasks which involve integrated skills (e.g. listening to a text and answering questions, and then using the information gained to make a summary) will need to be considered from the point of view of the difficulty of both the receptive and productive aspects of the task. There may be a deliberate difference in the difficulty level of the two parts of the task, and this needs to be addressed in training. Item difficulty may vary (and be varied systematically, if one so wishes) depending on the read or heard text, on the comprehension ability tested and on the response that the test taker needs to make to indicate comprehension.
As with performance samples, training with illustrative tasks and items with known difficulty values should take place first and then be followed by the process of analysing locally produced items (Chapter 6).
Training with illustrative test tasks and items covers, in this order, reading, then listening, then linguistic competences.
It is essential to start with the skill of reading. Just as it is easier to work with spoken and written performances (which can be observed directly) than with receptive skills (which cannot), it is far easier to work with reading texts and items in print, which can be read and reread at will, than with listening texts and items, which cannot be seen and can only be replayed over several rounds of listening.
Once the process of assessing items for reading has been completed, organising the session for the skill of listening and working with listening texts will be easier, as the participants will already be familiar with the task to be done. The coordinator needs to decide how to organise the sessions and to estimate their duration, depending on the context and the background of the participants.
Even if participants have already attended the general Familiarisation session described in Chapter 3, a sorting exercise with descriptors for the skill concerned is a necessary training activity before difficulty estimation and standard setting begin.
The CEFR provides overall, general scales (e.g. “Reception”, “Overall Reading Comprehension”, “Overall Listening Comprehension”), and also specific scales that describe different receptive language activities (e.g. “Listening as a Member of an Audience”) and strategies (“Identifying Cues and Inferring”).
Coordinators need to decide on the most relevant scales for the examination in the context in which it is administered. Work should always start with analysis and discussion of overall scales (e.g. “Overall Reading Comprehension”). Then the coordinators may pool the most context-relevant subscales for the skill concerned (e.g. “Listening as a Member of an Audience”), or use the self-assessment reformulations of the CEFR descriptors employed in the DIALANG project (CEFR Appendix C), and ask participants to sort the descriptors into the six CEFR levels (see Section 3.2.1 Activity f).
Standardisation of items testing linguistic competences will need to take a slightly different approach to the one followed with reading and listening because of the need for a specification of the type of exponents that can be expected to be relevant to different levels. The CEFR provides general descriptors for elements of communicative language competence (CEFR Section 5.2; Manual Tables A1–A3), but such linguistic specifications are unique to each language. Section 4.3 outlines the tools currently available. The DIALANG project also developed a set of specifications, with advice to item writers, for 14 languages.
The standardisation process follows three phases similar to the training procedures employed with standardised performance samples:
Phase 1: Illustration: A first assessment of the level of one text and its corresponding tasks and items. This preliminary activity will help the participants tune in to the CEFR levels for the skill being assessed.
It is essential to consider both the question of the level of the source text and the difficulty of the individual item(s) associated with it. A text does not have a “level”. It is the competence of the test takers as demonstrated by their responses to the items that can be related to a CEFR level. The most that can be said about a text is that it is suitable for inclusion in a test aimed at a particular level.