As we enter the season of Teaching Evaluations for the term that, for many of us, is winding down (or up, depending on your mood), let us consider two issues:
(1) today's issue: Is the end of the term (but before the final exam) the (a) best, (b) worst, (c) as good as any other time to ask students to evaluate their professor's teaching?
(2) an issue for a not-too-distant future post (to give you time to think about it): What is the strangest comment you have seen in a teaching evaluation? (in your own evaluations or someone else's). I am particularly interested in strange comments that are not entirely off-topic.
A few months ago, a colleague told me about the results of a survey of undergraduates taking introductory Science courses at his institution. I was very surprised by the results (which I will not describe here): the alarming and depressing findings were the complete opposite of what I have observed (or thought I had observed) in my own classes and in my own department. So I asked this colleague for more information about the survey. How were the questions worded? When was the survey given to the students?
Aha. The answers to both questions explained a lot, and one of them is relevant to today's post regarding the timing of the survey: it was given a few days before the final exam, when students are at their most stressed.
Yes, my colleague said, but if you give students a survey (or a teaching evaluation) after the final exam, most of them won't do it. And you can't give the survey/evaluation too early because the students won't have enough information about the class to respond authoritatively. And participation has to be voluntary, so you can't threaten them with consequences if they don't do the survey (and that might be counterproductive anyway). The only time to get a decent participation rate is just before the final exam.
OK... but what if that skews the results? (And what if the people interpreting the survey results don't take that possibility into account, and assume there is a crisis because the students seem kind of stressed out?)
Assuming that student evaluations of teaching are going to continue to be employed by universities: If there were a way to ensure that (most) students would do the evaluation, is it "better" to have them do the evaluations just before the end of the term or after they get their grade? That is, is it better to have students do the evaluations when they aren't sure how they are doing in the class or when they know exactly how they did in the class?
And what is meant by "better" anyway? There are various ways to interpret that, but "better" in this context means a time when students will provide the most fair and thorough evaluations, after reflecting deeply on their own role in the learning process and how much they got out of the class.
I don't know the answer, but I do know that students are typically given teaching evaluations at a time when they are feeling a lot of stress about all the end-of-term activities (exams, papers). It would be interesting to know whether teaching evaluations and other surveys of student opinions of their courses would be substantially different after the term is over as compared to just-before-the-term-end. Surely someone has studied this?
At my institution, evaluations are done online and made available during the last couple of weeks of the semester. As an incentive, students are eligible for prizes if they fill out the narrative sections of the evaluations before a specific date. The more evaluations they complete, the greater their chances of winning.
Also, students who have not completed the evaluations are prompted to do so when they try to access their grades after the end of the semester.
The strangest comment I ever received was on a module that contained 18 lectures. The student didn't like the course because "it was like we were being taught... lectured even!".
At my undergraduate institution they started online surveys that students could complete at any time between the last day of classes and the last day of exams.
As an incentive, when students filled out the evaluation they could see their final grades a week early.
Worked like a charm. I imagine it was also much cheaper to run that system than having a bunch of people manually entering the data and comments from pieces of paper.
(1) Late in the term, but before they begin to freak out about finals.
(2) "Will you marry me?" followed by "So unfair! The professor expected us to know algebra" (in a calculus-based physics class).
As an undergrad (and grad) student, there were several occasions where I disliked a course/professor's teaching style/requirement/etc. while I was taking it, only to come to appreciate it several semesters later after I had been able to use the knowledge gained in the course or had a chance to put the professor's style/course topic/etc. in a broader context. It would be interesting to have students complete course evaluations both during the course and 1-2 semesters later, and compare the results. And by during the course, I mean towards the end of the teaching period (maybe the 2nd or 3rd to last lecture?), not during finals. Most students are too stressed before finals or too exhausted after them to give a thoughtful and accurate assessment.
My undergraduate institution experimented with several different strategies for these surveys. I thought the best one was to give the survey after the final exam, but to prevent students from seeing their grades until they had filled out the survey.
Both the surveys and the grades were handled by the same main undergraduate portal website, so this was presumably easy to engineer. After some period of time (say two weeks or a month) grades became available even if you hadn't filled out the survey. But, most undergraduates are curious enough about grades that this provided a sufficient incentive.
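The gating described above (grades unlock immediately once the survey is done, or unconditionally after a grace period) is simple to express in code. This is a minimal sketch; the function name, the two-week grace period, and the date-based check are illustrative assumptions, not details from the commenter's actual portal.

```python
from datetime import date, timedelta

# Assumed policy: grades become visible to everyone after this long,
# even if the survey was never completed.
GRACE_PERIOD = timedelta(weeks=2)

def grades_visible(survey_completed: bool, grades_posted_on: date, today: date) -> bool:
    """Return True if the student may see their grades.

    Completing the survey unlocks grades immediately; otherwise the
    student waits out the grace period.
    """
    if survey_completed:
        return True
    return today - grades_posted_on >= GRACE_PERIOD
```

The key design point is that the incentive is a delay, not a denial: curiosity about grades drives participation, but no student is permanently locked out for declining the survey.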
At my undergraduate university, I seem to recall evaluations always being filled out during our last regular class, but at my graduate institution, they are filled out anytime in the last 2 weeks of the course, and most professors opt for as early in that period as possible. I've received my own TA evaluations, and I haven't noticed that this significantly impacted the results (or impacted what I thought the results would be). I love the idea of bribing the students with their grades, but I'm just not sure how accurate evals would be after the exam but before the results are revealed. I also like the idea of doing surveys later (1-2 months) because I also think there were courses that, in the middle of them, I didn't quite appreciate, but did later.
If you were attempting to design the worst possible method of assessing teaching effectiveness, it’d be pretty fucken hard to outdo student evaluations. Students haven’t the faintest fucken clue what good teaching is about, they don’t know what they need to learn, and they don’t know how to judge whether they are learning it.
Do armies poll their basic trainees to see if their drill sergeants are doing a good job? FUCKE NO!
I never really liked doing student evaluations. It felt like a waste of time, but the fact that it counted as one lab session, and lab counted for 30 of the 120 credits that year, made it a really big deal. It was the prof's way of letting us know that we are ultimately responsible for how we get taught, or even of getting us to think about how we might teach. Still, I never really wanted to bother filling them in, but the nagging and the promise of grades definitely worked.
This is the second place I read this comment from you. I'll set you straight now.
The most professional armies get feedback from their soldiers on course content and instructional technique. Even after real-life operations, soldiers and officers go through After Action Reports (AARs). One aspect of AARs is that everyone becomes very honest with themselves and admits their faults so that the whole team's performance can improve.
If students don't understand what they need to learn then the instructor has failed. A good instructor will illustrate the importance/usefulness of the course and put the subtopics in context.
Our university has moved to online teaching evaluations. Students can complete them any day during the last 2 weeks of classes. The response rate so far is worse than with in-class evaluations, but unofficial reports from faculty are that their ratings have increased (which could be explained by the 10+% who aren't completing them).
Ideally students should rate faculty/instructors more than once during the semester, just as we grade them throughout the semester and not just at the end. E.g., after a major assignment, after a certain percentage of lectures, etc. But I doubt that change will happen.
I don't think online evaluations with a larger window of opportunity are any different from asking the students to do an in-class/on-paper evaluation if the timing is still end-of-term. The timing relative to stressful events is the same.
@Nathan and CPP
Perfect... this ignoramus Comrade Prof is always around with his swear-word-filled comments, where he thinks he is the smartest guy around. Let's see you work your way out of this one, CPP.
I give informal evaluations (through Blackboard's survey section) three times during the semester, with some overlap of questions with the university evaluations and some things I am curious about. I offer extra credit for completing the assessments. It gives me a good chance to see what they like and don't like, and them a chance to feel like they are participating in the course.
From this, I cannot say that I see any reason why the university wide evaluations couldn't be conducted a month before the end of the semester. Students are fully able to evaluate the course at that point and they aren't all panicky about their impending final exams. I get more feedback, longer and more thoughtful verbatim answers, and a higher response rate when the evals are done mid-November.
In classes where sections are taught in sequence by different professors you could compare evals of those who teach early (presuming the evals are done at the end of each section) with those who teach at the end. If the same professor teaches at different times in different years, those comparisons would be especially valuable.
I've been TAing large classes for several semesters where students are asked to fill out evaluations after the exam, and surprisingly few leave without doing so.
It would also be interesting to see if various instructions can affect the results, eg saying things like "we really look forward to and appreciate your feedback" right before they fill them out etc - can these sorts of comments affect the results? So many potential variables!
You might be able to increase participation by pointing out (whether just before the evaluation, or randomly through the semester) how the current course has been adapted based on previous years' evaluations.
For example, "students in past years commented that the readings in this book were less useful, so we made it recommended reading rather than required reading."
I think if students have a reason to believe that their evaluations are being taken seriously, they'll be more likely to participate.
I miss the days of paper. Students who do not come to class are the most likely to be bitter about a poor grade.
I gave a "practice" mid-term eval this year, which ended up not being entirely anonymous because of a low sample size. A student who had dropped the course unofficially had the worst to say.
This term, I have at least 3 students out of 40 who stopped coming to class but are still on the books. I expect it has to do with financial aid: you can't go below a certain number of credits.
Although this is the 4th time I'm teaching an intro level liberal arts physics course, I still don't feel I've graduated to doing it with ease. The evals are just stressful.
@Anon @ 7:29 am:
I decided last year that doing two evals, one semester (or more) apart, would be ideal. And what do you know, my students evidently thought so too!
One of my profs this term put an (ungraded) question on the midterm asking for our input on the class; he promised that if there was anything the majority didn't like, he'd fix it in the latter part of the semester. Sadly, he's the one who needs the least input because he is totally awesome. I wish all profs would do this!
Anonymous 4:04: I once saw the opposite comment. Instead of the students complaining that they had to know algebra in a calc-based physics class, they claimed the TA was teaching them calculus in an algebra-based class. Except he wasn't. They just didn't know what calculus was, and assumed "algebra that they'd forgotten" must be calculus.
My favorite comment received in an evaluation: (large lecture format science course aimed at non-science majors)
"Prof (Anonymous) is very clumsy"
Our student evaluation surveys are done in the last week of lectures. We have a week off after lectures and before exams. I haven't gotten the sense that students are panicky, and I haven't seen that show up in course evals.
As for the merits of student evaluation surveys: they're terrible; the only thing in their favor is that everything else anyone has come up with is even worse.
I think a more interesting question is: what are the effects of switching from paper to online surveys? I've spoken to colleagues at other universities who were very negative about the switch: they said it lowered overall response rates while making students who don't attend classes more likely to respond, so a smaller, and more bitter, fraction of the class ended up responding.
How about "This woman should not be allowed to have children!" and "But we NEED to be spoon-fed!"?
At my institution, undergraduates can view some numeric scores and one of the short narrative answers on the evaluations. This is an incredibly popular feature, as it helps students get a feel for a class before signing up for it in the future. Students can only see the "public" portions of the evaluations if they've completed evaluations for their own classes in the previous quarter. This system works very well: students have an incentive to complete the evaluations but aren't forced to. They can fill them in any time during the last 3 weeks of the quarter, so the sample is unlikely to consist only of stressed students heading into a final.
I once got a comment on an evaluation (for an Intro Psychology course) that said: "how are we supposed to study for your exams if you put examples on the exams that we've never seen before?" Of course, this student didn't recognize that the EXAMPLES were new on the exam, but the CONCEPTS were the same as those they'd studied -- they were supposed to apply their knowledge in exams rather than regurgitate. This comment has drastically changed how I discuss exam preparation in my Intro class (which is filled with freshmen who need to hear this explicitly explained to them).
A friend of mine once got an evaluation (for a 200-level Social Psychology course) that said "don't be so scientific." What is she supposed to do, INTUIT the course material instead? Psychology is an empirical science, but apparently this student was expecting pop psychology or Freud. Again, this was an incredibly revealing comment.
Our univ gives the online evals in the last 2 weeks of classes. I agree with most that this is the worst time for it. I think the best would be a few weeks after the midterm, and certainly after the last drop date.
Last semester, I had one commenter who cursed and told me to go back to whatever Asian country I came from. This semester, I ignored the emails telling me to remind my students to do the evals. They get enough email reminders from the people doing the evals anyway. I'll let the chips fall where they will.
I second ryan's suggestion (#3). Perhaps he and I went to the same place. Making grades accessible rapidly to those who fill out course evals and delaying them for those who do not tends to ensure they are done. And it allows them to be done after the exam while ensuring they happen before they see their grades. A win-win!