Today I got my course evaluations for last semester, just in time to help me prepare for the next semester.
I was relieved to see that my minimalist syllabus (with extra info on the web) from last semester was viewed favorably: 100% of the class of ~30 students said the syllabus was accurate and gave them the information they needed.
- 100% of the students felt the course stimulated them to think critically about the course material, said they would take another course from me, and believed that I had 'exceptional' knowledge of the course material. Thank you!
- 97% said I am prompt in returning graded material and providing feedback about student performance. The one student who disagreed with this statement must have been very cranky -- I ALWAYS give exams and homework back in the very next class after the exam or homework due date. Or maybe waiting 2 days seems like a long time to this person?
The most variable score was for a question about whether the amount learned in the course was similar to or different from student expectations. I never know what to make of this question. If a student expected the course to be bad and it was, then the course met their expectations and I would get a high score, and a high score is generally considered good for this question. If a student expected the course to be bad and it wasn't, I'd get a low score. The result is usually an uninterpretable range. Last term, most students seem to have learned more than they expected. I suppose that's good, but it seems to suggest they had low expectations, and that's not so good.
At my previous university, faculty had no choice about teaching evaluations being published for all to see, and I had no problem with this, although I would have preferred it if the reviews were accompanied by some factual information about the courses in addition to the student evaluations.
At my current university, we have to give permission for certain parts of the evaluation to be released. I haven't done this, in part out of inertia (you have to take the initiative to do it, and I just haven't), and in part because I think the evaluation questions are very poorly worded at this university. The question about whether the course met expectations is just one example.
Another very prominent question on the evaluation form has to do with whether the student liked the classroom. I don't get to choose my classroom, so what does this have to do with me or my teaching abilities? This question is always my lowest score. Last semester, 12% really liked the classroom, 69% had ratings ranging from good to very good, 15% thought the room was just OK, and 4% thought it was very poor. If my evaluations were published, I trust that students could sort this question out from the ones that have real meaning for a course, but even so, it bothers me to be rated on this.
I suppose this question could be used to identify major problems with a particular room, but there must be other ways to acquire that information. The main value of this how-did-you-like-your-classroom question, however, is that it is entertaining for faculty to compare their results for the very same room. The 'goodness' of any particular classroom tends to fluctuate with time of day (early in the morning, the room is not good; it gets a little better up until lunch; gets worse again just after lunch, and then gets better until late afternoon). It is also interesting to compare the Room Question results for faculty who team-teach the same course in a single term. The results can be remarkably different. This shows that evaluations are best used to get a cosmic sense of whether a course worked well or not, and the results shouldn't be picked apart in detail.