
5.6. Training with Tasks and Items for Reading, Listening and Linguistic Competences

The objective of the activities described in this section is to ensure that panellists can relate their interpretation of the CEFR levels to exemplar test items and tasks so that they can later build from this common understanding in order to:

  • relate locally relevant test items to the CEFR levels;

  • as added value, gain insights into developing test items that can eventually claim to be related to CEFR levels.

The techniques described can be used for test items and test tasks used to evaluate receptive skills and – where appropriate – to evaluate other aspects of language use, such as grammar and vocabulary.

Tasks which involve integrated skills (e.g. listening to a text and answering questions, and then using the information gained to make a summary) will need to be considered from the point of view of the difficulty of both the receptive and productive aspects of the task. There may be a deliberate difference in the difficulty level of the two parts of the task, and this needs to be addressed in training. Item difficulty may vary (and be varied systematically, if one so wishes) depending on the read or heard text, on the comprehension ability tested and on the response that the test taker needs to make to indicate comprehension.

As with performance samples, training with illustrative tasks and items with known difficulty values should take place first and then be followed by the process of analysing locally produced items (Chapter 6).

Training with illustrative test tasks and items includes, in this order:

  1. Becoming fully aware of the range of CEFR subscales of descriptors for specific areas that are available in the CEFR (see Chapter 4).

  2. Identifying the content relevance of the tasks analysed in terms of construct coverage vis-à-vis CEFR levels and scales. As mentioned in Section 4.3.2, the findings in the Dutch CEFR construct project (Alderson et al. 2006), and the resulting CEFR Content Analysis Grid for Listening & Reading, may be very useful.

  3. Estimating the level each task and item represents in terms of the relevant CEFR descriptors.

  4. Discussing the possible reasons for discrepancies between estimated and empirically established levels.

  5. Confirming the level of difficulty against empirical data.
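Steps 4 and 5 above can be supported with a simple quantitative check. The following sketch (not part of the Manual; the level labels and item data are invented for illustration) compares panellists' estimated CEFR levels with the empirically established level of each item, reporting exact- and adjacent-agreement rates so that items with large discrepancies can be singled out for discussion:

```python
# Hypothetical sketch: quantify the discrepancy between estimated and
# empirically established CEFR levels (steps 4-5 above).
CEFR_LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]

def agreement_stats(estimated, empirical):
    """Return (exact agreement rate, adjacent agreement rate) for two
    equal-length lists of CEFR level labels, one entry per item."""
    assert len(estimated) == len(empirical)
    exact = adjacent = 0
    for est, emp in zip(estimated, empirical):
        gap = abs(CEFR_LEVELS.index(est) - CEFR_LEVELS.index(emp))
        exact += gap == 0      # same level
        adjacent += gap <= 1   # same or neighbouring level
    n = len(estimated)
    return exact / n, adjacent / n

# Invented judgments for six items:
panel = ["B1", "B2", "A2", "B1", "C1", "B2"]   # panel estimates
bank  = ["B1", "B1", "A2", "B2", "C1", "B2"]   # empirical levels
print(agreement_stats(panel, bank))
```

Exact agreement is deliberately strict; adjacent agreement gives a more forgiving picture when estimates fall one level off, which is a common pattern worth discussing in training.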

It is essential to start with the skill of reading. In the same way that it is easier to work on spoken and written performance (which can be observed directly) than to work on receptive skills (which cannot be observed), it is far easier to work on reading and rereading texts and items in print (that can be seen) than it is to work on listening items and texts (which cannot be seen) in several rounds of listening.

Once the process of assessing items for reading has been completed, organising the session for the skill of listening and working with listening texts will be easier, as the participants will already be familiar with the task to be done. The coordinator needs to decide how to organise the sessions and to estimate the duration of the sessions, depending on the context and the background of the participants.

      5.6.1. Familiarisation Required

Even if participants have already attended a general Familiarisation session described in Chapter 3, a sorting exercise with descriptors for the skill concerned before starting difficulty estimation and standard setting is a necessary training exercise.
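The sorting exercise lends itself to a simple tally. The sketch below (an illustration only; descriptor identifiers, participant names and the flagging threshold are invented) flags descriptors that many participants placed at the wrong level, since these are the ones the coordinator will want to discuss before moving on to difficulty estimation:

```python
# Hypothetical sketch: flag descriptors frequently misplaced in a
# CEFR descriptor-sorting exercise.
from collections import Counter

def misplaced_descriptors(answers, key, threshold=0.5):
    """answers: {participant: {descriptor_id: assigned level}};
    key: {descriptor_id: correct level}.
    Return (descriptor_id, Counter of placements) for each descriptor
    whose misplacement rate meets the threshold."""
    flagged = []
    for desc, correct in key.items():
        placements = [a[desc] for a in answers.values()]
        wrong = sum(level != correct for level in placements)
        if wrong / len(placements) >= threshold:
            flagged.append((desc, Counter(placements)))
    return flagged

# Invented data: three participants sort two descriptors.
key = {"D1": "B1", "D2": "B2"}
answers = {
    "P1": {"D1": "B1", "D2": "B1"},
    "P2": {"D1": "B1", "D2": "B2"},
    "P3": {"D1": "B1", "D2": "C1"},
}
print(misplaced_descriptors(answers, key))
```

Returning the full distribution of placements, rather than only an error count, shows whether a descriptor is consistently misread one level too low or simply scattered across levels.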

The CEFR provides overall, general scales (e.g. “Reception”, “Overall Reading Comprehension”, “Overall Listening Comprehension”), and also specific scales that describe different receptive language activities (e.g. “Listening as a Member of an Audience”) and strategies (“Identifying Cues and Inferring”).

Coordinators need to decide on the most relevant scales for the examination in the context in which it is administered. Work should always start with analysis and discussion of overall scales (e.g. “Overall Reading Comprehension”). Then the coordinators may pool the most context-relevant subscales for the skill concerned (e.g. “Listening as a Member of an Audience”), or use the self-assessment reformulations of the CEFR descriptors employed in the DIALANG project (CEFR Appendix C), and ask participants to sort the descriptors into the six CEFR levels (see Section 3.2.1 Activity f).

Standardisation of items testing linguistic competences will need to take a slightly different approach to the one followed with reading and listening, because of the need for a specification of the type of exponents that can be expected to be relevant to different levels. The CEFR provides general descriptors for elements of communicative language competence (CEFR Section 5.2; Manual Tables A1–A3), but such linguistic specifications are unique to each language. Section 4.3 outlines the tools currently available. The DIALANG project also developed a set of specifications, with advice to item writers, for 14 languages.

      5.6.2. Training for Standard Setting

The standardisation process follows three phases similar to the training procedures employed with standardised performance samples:

Phase 1: Illustration: A first assessment of the level of one text and its corresponding tasks and items. This preliminary activity will help the participants tune into the CEFR levels for the skill being assessed.

It is essential to consider both the question of the level of the source text and the difficulty of the individual item(s) associated with it. A text does not have a “level”. It is the competence of the test takers as demonstrated by their responses to the items that can be related to a CEFR level. The most that can be said about a text is that it is suitable for inclusion in a test aimed at a particular level.
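The point that a text has no “level” of its own can be made concrete: what can be estimated empirically is each item's difficulty from test-taker responses. A minimal sketch (not from the Manual; the scores are invented) computes the classical facility value, i.e. the proportion of test takers answering an item correctly:

```python
# Hypothetical sketch: estimate an item's empirical difficulty as its
# facility value (proportion correct) from dichotomous item scores.
def facility(responses):
    """responses: list of 0/1 scores for one item across test takers."""
    return sum(responses) / len(responses)

item_scores = [1, 1, 0, 1, 0, 1, 1, 0]  # invented 0/1 scores
print(facility(item_scores))
```

Two items attached to the same text can therefore have very different facility values, which is exactly why levels attach to demonstrated competence rather than to the source text.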

Table 5.4: Reference Sources in the CEFR

| Content | CEFR Reference |
| Situations, content categories, domains | Table 5 in CEFR 4.1 |
| Communication themes | The lists in CEFR 4.2 |
| Communicative tasks | The lists in CEFR 4.3 |
| Communicative activities and strategies | The lists in CEFR |
| Texts and text-types | The lists in CEFR 4.6.2 and 4.6.3 |
| Text characteristics: length, coherence and structure of test tasks | The information in CEFR; the description in CEFR 7.1, 7.2 and 7.3 |