Abstract
With the growth of interpreter training programs at home and abroad, the number of prospective interpreters who may enter Taiwan's interpretation market increases each year. Given this shared market, it is worth examining how interpreter training institutions assess prospective interpreters' competence, how their examination practices differ, and whether their testing methods can meet the Taiwanese market's demands for interpreting ability. This study compares and analyzes the test-writing and scoring practices used in the examinations of existing interpreter training institutions in Taiwan and abroad, to serve as a reference for training and testing organizations in their own test-writing and scoring work. Data were collected through interviews or questionnaires, supplemented by document analysis, and verified by email or telephone follow-ups. The collected data were analyzed, compared, and organized under the themes of exam policies, test-writing, and scoring. The test-writing portion examines test specifications, test writers' qualifications, test-writing principles, sources of test materials, and exam procedures and workflow. The study found that, in order to preserve the performance-based nature of interpreting exams, test difficulty generally cannot be fully controlled by objective means; because difficulty is hard to keep consistent, most training institutions instead take it into account during scoring. The scoring portion examines raters' qualifications, scoring criteria and methods, rater meetings and rater training, and scoring procedures. The study found that training institutions at home and abroad rely heavily on raters' professional judgment; every school has scoring criteria of varying levels of detail, but these criteria are often not truly reflected in actual scoring practice.
In the past decade, Taiwan and other countries have seen a growth in the number of interpreter training programs and, as a result, an increasing number of new interpreters have entered the job market. As the market becomes more competitive, a natural question is how these aspiring interpreters are judged by their respective training programs, at their exit exams, to be ready to work as interpreters. This study aims to answer this question by comparing the exit exam practices of Taiwanese, Chinese, British and American programs that train English-Chinese interpreters. Eleven such programs were chosen: seven in Taiwan, one in China, two in Britain, and one in the USA. Data were collected through interviews, questionnaires, correspondence, and analysis of relevant documents, then analyzed, coded and sorted into three categories: exam policies, test-writing practices, and evaluation practices. The analysis showed that interpreter programs generally did not use specific criteria to judge or control the difficulty level of tests; instead, test difficulty was often taken into account during evaluation. Evaluation also relied heavily on interpreting experts' holistic judgment. All of the programs had developed evaluation criteria for their exit exams, but these criteria were often not thoroughly followed in actual exam evaluation.
| Translated title of the contribution | Interpretation Evaluation Practices: Comparison of Eleven Schools in Taiwan, China, Britain and the USA |
| --- | --- |
| Original language | Chinese (Traditional) |
| Pages (from-to) | 1-42 |
| Number of pages | 42 |
| Journal | 翻譯論叢 |
| Volume | 1 |
| Issue number | 1 |
| Publication status | Published - Sept 2008 |
User-Defined Keywords
- 口譯考試 (interpretation examination)
- 命題 (test-writing)
- 評分 (evaluation)
- 專業考試 (professional examination)
- 結業考試 (exit examination)
- evaluation
- exit examination
- interpretation examination
- professional examination
- test-writing