Assessment Criteria at Exams: A Hot Topic These Days

There have been some exchanges lately in the blogosphere about the evaluation criteria applied to conference interpreting students at their final exams. This is natural – it being exam season and all, people (and by people, I mean both students AND their trainers) are thinking hard about what is expected of students at their final exams and whether they are going to be able to deliver.

Think positive!

I’d like to make my own modest contribution to this debate, which was started by Jerome (@jamizuno) at the 2interpreters blog in his post on Graduation exams evaluation, was picked up in a post by Elisabet (@tulkur) on Research on Quality in Interpreting and led to an interesting exchange on Twitter. (The post What Every Client Wants? that came out on Lifeinlincs last week is not directly related to our discussion, but touches upon a similar topic.)

I won’t tell you what I had to say in response to Elisabet’s and Jerome’s posts – for that, you can read their comment sections. I’d just like to keep a promise I made to Jerome to share with readers the guidelines* that external examiners are asked to apply at the exams here in La Laguna. Here they are:

ASSESSMENT GUIDELINES FOR THE EMCI – LA LAGUNA

The following is a set of standardised assessment criteria which we supply to all members of the assessment panel so that they are aware of how they should be evaluating students.

List of agreed criteria:

Mother tongue: The student must be capable of expressing themselves appropriately in their mother tongue, demonstrating a rich vocabulary and appropriate use of register; this includes an absence of any interference from other languages in the mother tongue, i.e. no structural calques from the source language to the target language (capacity to reformulate).

Comprehension: The student must demonstrate in their speech that they have an excellent understanding of each one of their working languages which extends to specialist and technical subjects.

Terminology: The student must be capable of understanding terminology & expressions that are unique to the source language and find suitable equivalents in the target language.

Fluency: The student must demonstrate good articulation in their mother tongue which will allow them to deliver fluid speeches, without unnecessary pauses, so that the listening public is easily able to understand the message.

Precision: The student must express ideas clearly and precisely and without any unnecessary rambling.

Problem solving: The student must be able to show that they are capable of reacting to and solving problems that arise during the course of a speech, problems such as – not understanding a key word, losing track of several ideas in a row, etc.

Presentation: The student must demonstrate the ability to deliver his/her version with confidence, fluency, well-paced timing, suitable body language for the type of speech being given, and good eye contact.

Stress management: The student must be able to correctly deliver his/her speech despite being under more pressure than is normal in the classroom environment, given that the assessment panel is made up of unfamiliar individuals.

Technique: The student must demonstrate, in both the simultaneous and consecutive exams, that they have the technical skills needed to deliver a good speech in each of the modules, showing correct use of the specific techniques applied in each of them.

Preparation for professional practice: The student must be able to demonstrate that they are fully prepared to enter the labour market despite their lack of experience.

All these criteria are laid out uniformly in an assessment form that is given to the assessors so that they are able to take notes which will allow them to award the final mark.

So there you have it. I’m happy to engage with fellow trainers and students in a discussion on the suitability (and objective measurability) of these criteria. They are not gospel, and it should be emphasized that they are by no means the only set of criteria out there. That said, they do seem to work quite well in practice.

I have seen other approaches outlined in internal documents shared by other schools and trainers. Each system will have its plus and minus points and will weigh the various factors differently, but most assess roughly the same aspects of the student’s performance. (Note: Jerome’s blog post refers to the “process-oriented” approach proposed by Sylvia Kalina, which is worth checking out if you’re interested in that sort of thing.)

Anyway, to my mind, what is important is that students on a given training course should have a clear idea of the particular approach taken by their school. They should be told what exactly they will be assessed on, as well as the level at which they will be expected to perform at the different stages of the course (i.e. mid-terms and finals).

I went through this assessment form with my students prior to the consecutive mid-terms in February, and I’m pleased to say that they were basically able to brainstorm the full list of criteria without any help from me. So they clearly know what is expected of them. And fortunately, most of them have been able to deliver it as well, at both the consecutive and the simultaneous mid-terms. Now there’s just that small matter of those upcoming finals to deal with…

*Thanks to the MIC administration at the ULL for granting permission to share this document on my blog

13 thoughts on “Assessment Criteria at Exams: A Hot Topic These Days”

  1. Thanks for that, Michelle. I will be starting my CI course at Leeds in September, so it is very useful to see the criteria of another university that’s highly regarded for its CI programme.

  2. Dear colleague,

    Thank you for this very interesting post. May I ask a couple of questions?

    When you write that the criteria are laid out uniformly in an assessment form, does that mean that there is no guideline about the relative weight of the different criteria?

    Do you have an example of the assessment form? Is it a grid with 10 columns corresponding to the 10 criteria?

    Are the candidate’s interpretations recorded and accessible after the exam?

    Finally, how are students rated? Do you use numerical marks or tags like “passed” or “almost passed”?

    Sorry, I know these are a lot of questions, but the topic is fascinating, and it looks like you just opened Pandora’s box!

    Cédric

    • Hi there,

      Thanks for your interest. I’ll answer your questions as best I can.

      The bit that says the criteria are laid out in an assessment form is actually the tail end of the document I shared, which I didn’t draft myself. There is no strict guideline about the relative weighting of the different criteria, but I think examiners generally try to give them roughly equal weight.

      The actual form that I have seen examiners fill out (and which I’ve used when on the MIC juries myself) is not in the shape of a grid at all. There is one comment box for each interpretation, where examiners are meant to record their impressions. Beside that box are “pass” and “fail” boxes for them to tick. The comments recorded in the boxes are used as the basis for the pass/fail decision and also for the detailed feedback given to students in the post-exam interview (where they receive the results).

      All exam speeches (which are delivered live) are video-recorded, as are the interpretations. Any student who has questions about the results can sit down with their instructors after the fact and review the recordings.

      The exams are not graded on a 1-10 scale; rather, the mark given is an overall pass or fail based on the assessment of the various criteria and the jury’s deliberations. The “degree” of the pass or fail will be reflected in the comments given to the student in the interview, but doesn’t have any practical value: i.e. a “strong” pass (where a student aces the exam) will be counted the same toward the final course result as a weak pass (where someone just scrapes through).

      The aptitude test results *are* graded on a scale of 1-10, however, since only the top 16 students of 100 or more applicants can be accepted and I think the course administrators find a numerical grading system useful when ranking the applicants.

      Hope this helps!

      • Hello,

        Yes, it helps a lot, thank you very much for sharing this and for the very thorough answer.

        I would like to contribute to the debate started by Jerome by giving two references which I find informative and useful for those who want to know more about testing and assessment.

        Angelelli, Claudia, and Holly E. Jacobson. Testing and Assessment in Translation and Interpreting Studies: A Call for Dialogue between Research and Practice. Philadelphia: John Benjamins, 2009.

        Sawyer, David B. Fundamental Aspects of Interpreter Education: Curriculum and Assessment. Amsterdam: John Benjamins, 2004.

        I suppose there is a lot more out there about testing and assessment, of course, above all in the domain of second language testing.

        I heard about a seminar held at the EP whose goal was to prepare candidates for the accreditation test. Mock tests were an important part of the seminar. The candidates could listen to the jury deliberating, and apparently it helped them understand what the test was really about. Of course, there may be a risk of passing the mock tests and failing the real one, but it doesn’t happen that often, I would say.

        I am wondering how students would react to such a seminar during the last stage of their interpreting course.

        Cédric

      • Hi again,

        Thanks for the references. I’m sure they’ll be useful to readers who may not have come across these sources before. Are you doing research in the field of CI?

        As for the EP seminar, you may be referring to the “top-up course” offered to candidates who have just barely failed the admission exams. I don’t know if it’s still run by the EU institutions (the EP and SCIC have a joint accreditation process), but at least as of a few years ago, they would send candidates who seemed very promising but had just barely failed the accreditation test on a three-week intensive course to prepare them for a second go at the exam. My understanding was that this “top-up”, as they called it, involved plenty of practice with feedback from staff, plus mock exams. I didn’t realize that they would also allow candidates to listen in on jury deliberations, although I can easily imagine how useful that would be! As to the chances of passing the mock exam and failing the real one, anything is possible. I did hear that they were getting pretty good results from the top-up approach, though.

        As to whether this listening in on the jury would be a good exercise to run in the final stretch of a training course, it’s worth considering. Just off the top of my head, the potential problem I see would be that the “jury” might not be quite so frank about expressing their views if the student was listening in, so the students wouldn’t get the “real thing”, if you like. Also, the students already get plenty of feedback in class, so the added value of the mock exam exercise would have to come from somewhere else. It would be less the feedback provided and more the experience of the actual exam situation. On the MIC, they do run mock exams (in the exam room, with the students having to dress up for it, etc.) prior to all the real exams, and I think it’s a very useful exercise for all involved. I’ll ask them what they think of your idea of letting them spy on the jury as well – they might be up for it ;)!

        Thanks,

        Michelle

  3. Hi
    Your guidelines seem to focus mainly on A language assessment. Do you use the same criteria when students are working into their B?

    • Hi,

      Good point. The course trains almost exclusively A language interpreters, so that would explain the bias in the assessment criteria. There are rare occasions where a B has to be assessed, though. I’d have to ask whether there are established criteria for those situations. From what I have seen observing examiners in past years, they always arrange to have someone with an A in the language assess the student working into a B. Factors weighed include whether the accent impedes communication and the student’s ability to work in different (especially higher) registers in their B, in addition to all the other factors relating to completeness, accuracy, etc.

      Hope that helps!

  4. Pingback: Weekly favorites (May 14-20) | Adventures in Freelance Translation

  5. Thanks for sharing them, Michelle. I believe this is also a very useful step toward something I particularly cherish: the more we share and discuss, the closer we come to transparent, understandable and equal criteria. I’ll see if TÖI wants to share their guidelines.

    • Thanks, I agree wholeheartedly! I’d like to see what guidelines they use at your school.

      I’m currently in the process of developing evaluation criteria for another course I’m working on, and it’s interesting to see the input coming from my fellow trainers. Each has their own system, and yet all seem to be trying to measure the same things, just in slightly different ways.

  6. I have been thinking of becoming a court interpreter, but I cannot find any information about what to do or where to start. I speak Spanish and I feel that being a court interpreter would be a great career for me, but I need to learn more about it. It has been a struggle because, like I mentioned above, I can’t find anyone in Phoenix, AZ who could answer my questions or walk me through the process.

    What I really want to know is: What schools should I look into? How long would it take to get certified or learn enough to begin working as an interpreter? How difficult is it to get hired? What do people like and dislike about the profession? What is the working environment like? What is the stress level? What kind of qualities do you need to do well in the position? Any comments would be appreciated.

    I am currently working at a law firm as a paralegal; I began working here without any experience at all. I really like it and have considered law school, but the truth is I do not want to go to school for that long and have such a stressful career. I don’t think I am passionate enough to become an attorney, and I don’t feel like all that loan debt and all those years in school would be worth it to me. I have always liked translating for others, and I like the idea of having less stress while still being able to be in court, helping others when there is a language barrier.

    If there is anyone out there that could help me, please feel free to contact me via email.

    yadynoyum@gmail.com

    Thank you very much!
