Language assessment meeting 13

CHAPTER 6

ASSESSING LISTENING 


OBSERVING THE PERFORMANCE OF THE FOUR SKILLS 

Before focusing on listening itself, think about the two interacting concepts of performance and observation. All language users perform the acts of listening, speaking, reading, and writing. They of course rely on their underlying competence in order to accomplish these performances. When you propose to assess someone's ability in one or a combination of the four skills, you assess that person's competence, but you observe the person's performance. Sometimes the performance does not indicate true competence: a bad night's rest, illness, an emotional distraction, test anxiety, a memory block, or other student-related reliability factors could affect performance, thereby providing an unreliable measure of actual competence. So, one important principle for assessing a learner's competence is to consider the fallibility of the results of a single performance, such as that produced in a test. As with any attempt at measurement, it is your obligation as a teacher to triangulate your measurements: consider at least two (or more) performances and/or contexts before drawing a conclusion. That could take the form of one or more of the following designs: 
• several tests that are combined to form an assessment (a brief scoring sketch follows this list) 
• a single test with multiple test tasks to account for learning styles and performance variables 
• in-class and extra-class graded work 
• alternative forms of assessment (e.g., journal, portfolio, conference, observation, self-assessment, peer-assessment). 
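
As a rough illustration of the first design above (several tests combined into one assessment), here is a minimal Python sketch. The component names, scores, and weights are invented for the example; the only point is that the conclusion rests on more than one observed performance.

```python
# Hypothetical sketch of "triangulating" a grade from several observed
# performances instead of trusting a single test score.
# Component names, scores, and weights are invented for illustration.

def combined_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Return a weighted average (0-100) of several performance scores."""
    total_weight = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total_weight

# One (hypothetical) learner: two tests, graded in-class work, and a portfolio
scores = {"test_1": 62, "test_2": 78, "in_class_work": 85, "portfolio": 80}
weights = {"test_1": 0.3, "test_2": 0.3, "in_class_work": 0.2, "portfolio": 0.2}
print(f"Triangulated score: {combined_score(scores, weights):.1f}")
# The single low test (62) no longer determines the conclusion on its own.
```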

THE IMPORTANCE OF LISTENING 

Listening has often played second fiddle to its counterpart, speaking. In the standardized testing industry, a number of separate oral production tests are available (Test of Spoken English, Oral Proficiency Interview, and PhonePass, to name several that are described in Chapter 7 of this book), but it is rare to find just a listening test. One reason for this emphasis is that listening is often implied as a component of speaking. How could you speak a language without also listening? In addition, the overtly observable nature of speaking renders it more empirically measurable than listening. But perhaps a deeper cause lies in universal biases toward speaking. A good speaker is often (unwisely) valued more highly than a good listener. To determine if someone is a proficient user of a language, people customarily ask, "Do you speak Spanish?" People rarely ask, "Do you understand and speak Spanish?" 
Every teacher of language knows that one's oral production ability (other than monologues, speeches, reading aloud, and the like) is only as good as one's listening comprehension ability. But of even further impact is the likelihood that input in the aural-oral mode accounts for a large proportion of successful language acquisition. In a typical day, we do measurably more listening than speaking (with the exception of one or two of your friends who may be nonstop chatterboxes!). Whether in workplace, educational, or home contexts, aural comprehension far outstrips oral production in quantifiable terms of time, number of words, effort, and attention. 

BASIC TYPES OF LISTENING 

As with all effective tests, designing appropriate assessment tasks in listening begins with the specification of objectives, or criteria. Those objectives may be classified in terms of several types of listening performance. Think about what you do when you listen. Literally in nanoseconds, the following processes flash through your brain: 
1. You recognize speech sounds and hold a temporary "imprint" of them in short-term memory. 

2. You simultaneously determine the type of speech event (monologue, interpersonal dialogue, transactional dialogue) that is being processed and attend to its context (who the speaker is, location, purpose) and the content of the message. 

3. You use (bottom-up) linguistic decoding skills and/or (top-down) background schemata to bring a plausible interpretation to the message, and assign a literal and intended meaning to the utterance. 

Each of these stages represents a potential assessment objective: 
• comprehending surface structure elements such as phonemes, words, intonation, or a grammatical category 
• understanding of pragmatic context 
• determining meaning of auditory input 
• developing the gist, a global or comprehensive understanding 

MICRO- AND MACROSKILLS OF LISTENING 

A useful way of synthesizing the above two lists is to consider a finite number of micro- and macroskills implied in the performance of listening comprehension. Richards' (1983) list of microskills has proven useful in the domain of specifying objectives for learning and may be even more useful in forcing test makers to carefully identify specific assessment objectives. In the following box, the skills are subdivided into what I prefer to think of as microskills (attending to the smaller bits and chunks of language, in more of a bottom-up process) and macroskills (focusing on the larger elements involved in a top-down approach to a listening task). The micro- and macroskills provide 17 different objectives to assess in listening. 
Microskills 

1. Discriminate among the distinctive sounds of English. 
2.  Retain chunks of language of different lengths in short-term memory. 
3. Recognize English stress patterns, words in stressed and unstressed positions, rhythmic structure, intonation contours, and their role in signaling information. 
4. Recognize reduced forms of words. 
5. Distinguish word boundaries, recognize a core of words, and interpret word order patterns and their significance. 
6. Process speech at different rates of delivery. 
7. Process speech containing pauses, errors, corrections, and other performance variables. 
8. Recognize grammatical word classes (nouns, verbs, etc.), systems (e.g., tense, agreement/pluralization), patterns, rules, and elliptical forms. 
9. Detect sentence constituents and distinguish between major and minor constituents. 
10. Recognize that a particular meaning may be expressed in different grammatical forms. 
11. Recognize cohesive devices in spoken discourse. 

 DESIGNING ASSESSMENT TASKS: INTENSIVE LISTENING

Once you have determined objectives, your next step is to design the tasks, including making decisions about how you will elicit performance and how you will expect the test-taker to respond. We will look at tasks that range from intensive listening performance, such as minimal phonemic pair recognition, to extensive comprehension of language in communicative contexts. The focus in this section is on the microskills of intensive listening. 

DESIGNING ASSESSMENT TASKS: RESPONSIVE LISTENING 

     A question-and-answer format can provide some interactivity in these lower-end listening tasks. The test-taker's response is the appropriate answer to a question. 

DESIGNING ASSESSMENT TASKS: SELECTIVE LISTENING 

A third type of listening performance is selective listening, in which the test-taker listens to a limited quantity of aural input and must discern within it some specific information. A number of techniques have been used that require selective listening. 

Listening Cloze 

Listening cloze tasks (sometimes called cloze dictations or partial dictations) require the test-taker to listen to a story, monologue, or conversation and simultaneously read the written text in which selected words or phrases have been deleted. Cloze procedure is most commonly associated with reading only (see Chapter 9). In its generic form, the test consists of a passage in which every nth word (typically every seventh word) is deleted and the test-taker is asked to supply an appropriate word. In a listening cloze task, test-takers see a transcript of the passage that they are listening to and fill in the blanks with the words or phrases that they hear. 
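
As a rough sketch of the generic fixed-ratio procedure just described, the following Python fragment deletes every nth word of a transcript and keeps an answer key. The passage is invented for illustration, and in practice teachers often choose deletions by hand rather than mechanically.

```python
import re

def listening_cloze(transcript: str, nth: int = 7) -> tuple[str, list[str]]:
    """Delete every nth word of a transcript; return gapped text and answer key."""
    gapped, answers = [], []
    for i, word in enumerate(transcript.split(), start=1):
        if i % nth == 0:
            core = re.sub(r"[^\w'-]+$", "", word)   # keep trailing punctuation visible
            answers.append(core)
            gapped.append("_____" + word[len(core):])
        else:
            gapped.append(word)
    return " ".join(gapped), answers

# Invented passage purely for illustration
passage = ("The bus was late this morning, so I missed the beginning of the "
           "lecture and had to borrow notes from a friend after class.")
text, key = listening_cloze(passage)
print(text)
print("Answer key:", key)
```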

Information Transfer 

Selective listening can also be assessed through an information transfer technique in which aurally processed information must be transferred to a visual representation, such as labeling a diagram, identifying an element in a picture, completing a form, or showing routes on a map. 

DESIGNING ASSESSMENT TASKS: EXTENSIVE LISTENING 

Drawing a clear distinction between any two of the categories of listening referred to here is problematic, but perhaps the fuzziest division is between selective and extensive listening. As we gradually move along the continuum from smaller to larger stretches of language, and from micro- to macroskills of listening, the probability of using more extensive listening tasks increases. Some important questions about designing assessments at this level emerge. 

Dictation 

Dictation is a widely researched genre of assessing listening comprehension. In a dictation, test-takers hear a passage, typically of 50 to 100 words, recited three times: first, at normal speed; then, with long pauses between phrases or natural word groups, during which time test-takers write down what they have just heard; and finally, at normal speed once more so they can check their work and proofread. 
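
The three-reading routine can be pictured with a small Python sketch that lays out a reading plan, approximating the natural word groups by splitting at punctuation. The sample passage and the ten-second pause are assumptions, not part of the procedure itself.

```python
import re

def dictation_plan(passage: str, pause_seconds: int = 10) -> list[str]:
    """Lay out the three readings described above; phrase groups are
    approximated by splitting at punctuation."""
    phrases = [p.strip() for p in re.split(r"(?<=[,;.!?])\s+", passage) if p.strip()]
    plan = ["Reading 1 (natural speed): " + passage,
            "Reading 2 (pauses for writing):"]
    plan += [f'  "{p}"  [pause {pause_seconds} s]' for p in phrases]
    plan.append("Reading 3 (natural speed, for checking): " + passage)
    return plan

# Invented passage; pause length is an assumed value a teacher would adjust
sample = ("The next bus leaves at nine fifteen. Please wait at the main gate, "
          "and bring your ticket with you.")
print("\n".join(dictation_plan(sample)))
```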

Communicative Stimulus-Response Tasks 
Another, more authentic, example of extensive listening is found in a popular genre of assessment task in which the test-taker is presented with a stimulus monologue or conversation and then is asked to respond to a set of comprehension questions. Such tasks (as you saw in Chapter 4 in the discussion of standardized testing) are commonly used in commercially produced proficiency tests. The monologues, lectures, and brief conversations used in such tasks are sometimes a little contrived, and certainly the subsequent multiple-choice questions don't mirror communicative, real-life situations. But with some care and creativity, one can create reasonably authentic stimuli, and in some rare cases the response mode actually approaches complete authenticity. 

Authentic Listening Tasks 

Ideally, the language assessment field would have a stockpile of listening test types that are cognitively demanding, communicative, and authentic, not to mention interactive by means of an integration with speaking. However, the nature of a test as a sample of performance and a set of tasks with limited time frames implies an equally limited capacity to mirror all the real-world contexts of listening performance. 
Reference
Buck, Gary. (2001). Assessing listening. Cambridge: Cambridge University Press. 
Mendelsohn, David J. (1998). Teaching listening. Annual Review of Applied Linguistics, 18, 81-101. 
Richards, Jack C. (1983). Listening comprehension: Approach, design, procedure. TESOL Quarterly, 17, 219-239. 


CHAPTER 7

ASSESSING SPEAKING 

BASIC TYPES OF SPEAKING 

1. Imitative. At one end of a continuum of types of speaking performance is the ability to simply parrot back (imitate) a word or phrase or possibly a sentence. While this is a purely phonetic level of oral production, a number of prosodic, lexical, and grammatical properties of language may be included in the criterion performance. 
2. Intensive. A second type of speaking frequently employed in assessment contexts is the production of short stretches of oral language designed to demonstrate competence in a narrow band of grammatical, phrasal, lexical, or phonological relationships (such as prosodic elements: intonation, stress, rhythm, juncture). 
3. Responsive. Responsive assessment tasks include interaction and test comprehension but at the somewhat limited level of very short conversations, standard greetings and small talk, simple requests and comments, and the like. 
4. Interactive. The difference between responsive and interactive speaking is in the length and complexity of the interaction, which sometimes includes multiple exchanges and/or multiple participants. 
5. Extensive (monologue). Extensive oral production tasks include speeches, oral presentations, and story-telling, during which the opportunity for oral interaction from listeners is either highly limited (perhaps to nonverbal responses) or ruled out altogether. 

MICRO- AND MACROSKILLS OF SPEAKING 


Microskills 

1.  Produce differences among English phonemes and allophonic  variants. 
2. Produce chunks of language of different lengths. 
3. Produce English stress patterns, words in stressed and unstressed positions, rhythmic structure, and intonation contours.

Macroskills 

1. Appropriately accomplish communicative functions according to situations, participants, and goals. 
2. Use appropriate styles, registers, implicature, redundancies, pragmatic conventions, conversation rules, floor-keeping and -yielding, interrupting, and other sociolinguistic features in face-to-face conversations. 

3. Convey links and connections between events and communicate such relations as focal and peripheral ideas, events and feelings, new information and given information, generalization and exemplification. 

DESIGNING ASSESSMENT TASKS: IMITATIVE SPEAKING

You may be surprised to see the inclusion of simple phonological imitation in a consideration of assessment of oral production. After all, endless repeating of words, phrases, and sentences was the province of the long-since-discarded Audiolingual Method, and in an era of communicative language teaching, many believe that nonmeaningful imitation of sounds is fruitless. Such opinions have faded in recent years as we discovered that an overemphasis on fluency can sometimes lead to the decline of accuracy in speech. And so we have been paying more attention to pronunciation, especially suprasegmentals, in an attempt to help learners be more comprehensible. 

PHONEPASS TEST 

The PhonePass test elicits computer-assisted oral production over a telephone. Test-takers read aloud, repeat sentences, say words, and answer questions. With a downloadable test sheet as a reference, test-takers are directed to telephone a designated number and listen for directions. The test has five sections. 

DESIGNING ASSESSMENT TASKS: INTENSIVE SPEAKING 

   At the intensive level, test-takers are prompted to produce short stretches of discourse (no more than a sentence) through which they demonstrate linguistic ability at a specified level of language. Many tasks are "cued" tasks in that they lead the test taker into a narrow band of possibilities. 

DESIGNING ASSESSMENT TASKS: RESPONSIVE SPEAKING 

Assessment of responsive tasks involves brief interactions with an interlocutor, differing from intensive tasks in the increased creativity given to the test-taker and from interactive tasks by the somewhat limited length of utterances. 

TEST OF SPOKEN ENGLISH (TSE) 

Somewhere straddling responsive, interactive, and extensive speaking tasks lies another popular commercial oral production assessment, the Test of Spoken English (TSE). The TSE is a 20-minute audiotaped test of oral language ability within an academic or professional environment. TSE scores are used by many North American institutions of higher education to select international teaching assistants. The scores are also used for selecting and certifying health professionals such as physicians, nurses, pharmacists, physical therapists, and veterinarians. 

DESIGNING ASSESSMENT TASKS: INTERACTIVE SPEAKING 

The final two categories of oral production assessment (interactive and extensive speaking) include tasks that involve relatively long stretches of interactive discourse (interviews, role plays, discussions, games) and tasks of equally long duration but that involve less interaction (speeches, telling longer stories, and extended explanations and translations). The obvious difference between the two sets of tasks is the degree of interaction with an interlocutor. Also, interactive tasks are what some would describe as interpersonal, while the final category includes more transactional speech events. 


ORAL PROFICIENCY INTERVIEW (OPI) 

The best-known oral interview format is one that has gone through a considerable metamorphosis over the last half-century, the Oral Proficiency Interview (OPI). Originally known as the Foreign Service Institute (FSI) test, the OPI is the result of a historical progression of revisions under the auspices of several agencies, including the Educational Testing Service and the American Council on the Teaching of Foreign Languages (ACTFL). 




