Today I got my course evaluations for last semester, just in time to help me prepare for the next semester.
I was relieved to see that my minimalist syllabus (with extra info on the web) from last semester was viewed favorably: 100% of the class of ~30 students said the syllabus was accurate and gave them the information they needed.
- 100% of the students felt the course stimulated them to think critically about the course material, said they would take another course from me, and believed that I had 'exceptional' knowledge of the course material. Thank you!
- 97% said I am prompt in returning graded material and providing feedback about student performance. The one student who disagreed with this statement must have been very cranky -- I ALWAYS give exams and homework back in the very next class after the exam or homework due date. Or maybe waiting 2 days seems like a long time to this person?
The most variable score was for a question about whether the amount learned in the course was similar to or different from what students expected. I never know what to make of this question. If a student expected the course to be bad and it was, then the course met their expectations and I would get a high score, and a high score is generally considered good for this question. If a student expected the course to be bad and it wasn't, I'd get a low score. The result is usually an uninterpretable range. Last term, most students seemed to have learned more than they expected. I suppose that's good, but it seems to suggest they had low expectations, and that's not so good.
At my previous university, faculty had no choice: teaching evaluations were published for all to see. I had no problem with this, although I would have preferred that the published reviews include some factual information about the courses in addition to the student evaluations.
At my current university, we have to give permission for certain parts of the evaluation to be released. I haven't done this, in part out of inertia (you have to take the initiative to do it, and I just haven't), and in part because I think the evaluation questions are very poorly worded at this university. The question about whether the course met expectations is just one example.
Another very prominent question on the evaluation form has to do with whether the student liked the classroom. I don't get to choose my classroom, so what does this have to do with me or my teaching abilities? This question is always my lowest score. Last semester, 12% really liked the classroom, 69% rated it good to very good, 15% thought the room was just OK, and 4% thought it was very poor. If my evaluations were published, I trust that students could sort this question out from the ones that have real meaning for a course, but even so, it bothers me to be rated on this.
I suppose this question could be used to identify major problems with a particular room, but there must be other ways to acquire that information. The main value of this how-did-you-like-your-classroom question, however, is that it is entertaining for faculty to compare their results for the very same room. The 'goodness' of any particular classroom tends to fluctuate with the time of day: early in the morning, the room is not good; it gets a little better up until lunch; gets worse again just after lunch; and then improves into late afternoon. It is also interesting to compare the Room Question results for faculty who team-teach the same course in a single term. The results can be remarkably different. This shows that evaluations are best used to get a cosmic sense of whether a course worked well or not; the results shouldn't be picked apart in detail.
4 comments:
My alma mater had a student-published book of faculty reviews that were quite informative. I think it used a star-based rating system (like for movies), which I never found useful except at the extremes.
I liked the quotes describing people's voices (for the particularly monotonous or gratingly high-pitched) and irritating habits (frequent throat-clearing, perpetual hacking cough, spitting on the front row). It was also good to know if faculty were outwardly sexist, for example, because students weren't shy about saying so in their reviews.
I know that room evaluations frequently depended on how much sunlight was coming through the windows throughout the day. Sun in your eyes = not good. Freezing cold room = not good. Leaky roof = not good. Construction noise = not good.
Everything else = usually okay.
But I'm very amused by the same-room/different-professor rating phenomenon. I'd love to get a social scientist's take on the origins of that kind of variation in individuals' responses. Do happy students really not notice where they are?
"97% said I am prompt in returning graded material and providing feedback about student performance. The one student who disagreed with this statement must have been very cranky -- I ALWAYS give exams and homework back in the very next class after the exam or homework due date."
Lots of reasons besides cranky. Could have ticked the wrong box, or not read the question properly.
When a stack of evaluation forms has a typo referring to a portion of the class that didn't exist, and yet several students rate that non-existent portion anyway, you know there are quite a few spurious answers out there!
Why in the world would they ask about the classroom? Who writes these evaluations? I agree, they sound a bit irrelevant and silly.
Evaluations at an adult education institution I studied at had two sections.
One related to the teacher, the other to the institution: ease of enrolment, facilities, etc.
Our teacher always used to stress the difference when we filled in the forms! We always praised him; the departmental organisation was rated lower...