Speaking of evaluations... like many university faculty in the past month or two, I recently filled out a survey as part of the NRC rating process. Earlier in the year I filled out a form providing information to be used by the NRC for ranking my own department; this latest survey involved ranking 15 other departments (the so-called 'reputational ranking' part of the process).
I've talked with other colleagues about how we each went about our rankings; not about our actual rankings, but about our methods. We each got a wide-ranging list that included at least one of the supposed top-ranked departments in our field as well as some schools we didn't even know had a Ph.D. program in our field. The information provided up front included the number of graduate faculty, various data about Ph.D.'s awarded, and demographic data about the number of women and ethnic minority faculty.
Some of my colleagues said they did their survey very quickly because they already had opinions about the places on their lists and they didn't need to look at the data provided or read information on a department's webpages. This raises questions about how to rank unfamiliar departments. That is, if you don't know anything about a particular Ph.D. program, couldn't name any faculty in that department, and don't know anyone who got a Ph.D. there, does that mean the program is deficient or does it mean that you should check the 'no opinion/insufficient info' option because perhaps the program is strong in a sub-field different from your own?
Others spent a lot of time reading all the information provided and following links to other webpages. I didn't spend a huge amount of time on the survey, but I also didn't blast through it without considering the data. I had opinions about all but perhaps one place on my list prior to getting the survey, but I didn't want to fall into the trap of rating a program highly just because it had always been rated highly. I wanted to think about each place in terms of its program in recent years, not what it was like more than 10 years ago.
Some faculty approached their surveys by considering each program only relative to the other programs on their list of 15, while others viewed each program in a context beyond that list. I did the latter, but I assume that the ultimate dataset will be large enough that these variations won't matter in the end. It might not matter anyway if each list is constructed to include a wide variety of programs.
Another issue we discussed is whether the demographic data about faculty gender and ethnicity are an indicator of the quality of a program, and why those data are reported up front in this survey rather than information about faculty publications and funding. We all agreed that we would have liked more information on traditional measures of a department's research activity.
What do you think? All other things being equal between 2 large departments, should a total lack of women faculty in one result in its getting a lower ranking, and, if so, a slightly lower ranking or a substantially lower ranking?