Wednesday, September 29, 2010

Don't Take It Personally

So, how's everyone feeling after the release of the NRC rankings? Did you sit by your computer, waiting for the magic moment when the results appeared on the internet? Or did you torture your department chair (and/or dean) to tell you the results last week, when most of them got a preview of the results? Did you get any gossipy emails before or after from colleagues in programs that were ranked higher (or lower)? Or are you wondering what the NRC is (Nuclear Regulatory Commission? National Ranking Committee?) and why you should care?

My answers: No, I was busy for much of the day until late afternoon; when I had a chance, I read an un-illuminating e-mail from the chair. I knew that he knew the results last week, but I also knew that he would not break under torture, and so I did not try. I did get gossipy emails, mostly before, including from a department chair at another university, but these emails were quite vague so all I knew was that my department would be/should be pleased. National Research Council. I don't know why you should care; I actually don't care if you care, but I'm going to write about the NRC rankings anyway because it was an Event in Academe that happened today.

And in fact my department seems to be reasonably pleased with the results, even though our department has changed in some important ways in the 5 years since the data were collected.

I joined my current department after 1995, so this was the first time the department has been NRC-ranked since I've been here. Of course, other faculty have come and gone since 1995 as well, and these rankings aren't about any particular individual. Nevertheless, to the extent that we semi-care about rankings and to the extent that we can even interpret the new NRC rankings, it's hard not to take such things a little bit personally and hope that our illustrious presence will help our program in some quantifiable way.

As I was thinking about how I feel about rankings as an individual in a program being ranked, I remembered an incident involving a report written by a visiting committee just before I arrived in one of my tenure-track positions. I was hired after the retirement of a professor who had never published much but who was much loved by students and colleagues. I also liked this man very much; he was extremely kind to me as a newly arrived assistant professor, and went out of his way to help me get started.

From my (egotistical) point of view, I believed I was going to be an asset to the department because I was an active researcher and I cared about teaching. Maybe I wouldn't ever be as beloved as Professor X, but I hoped I could contribute to the department and university in some important ways.

I was therefore kind of hurt when the report said that hiring me didn't result in any net gain to the department because I was in the same field as distinguished Professor X, whose retirement was a great loss to the department, and it was too soon to tell if I would amount to anything. Considering that I had already published more than Professor X and was arriving with a grant, I thought they could have been a bit more optimistic about me. Indeed, I kept hearing the phrase "big shoes to fill" whenever someone commented on the fact that I had "replaced" Professor X. It was depressing.

General rankings are less personal, but the publications and scholarly reputation of each of us contribute to the rankings, so it's hard not to take the results somewhat personally, for good or bad.

Of course, there are different ways you can view the results, depending on the results and on your perception of your role in your department relative to your colleagues; for example:

- If the results of the NRC or other ranking of your program are good, you may feel quite good about your contributions to this ranking.

- If the results are not so good, then you have at least two options, assuming that you care enough to have an opinion: (1) You can be annoyed at the flawed methods that resulted in the underestimation of your program; or (2) You can be annoyed that your under-performing colleagues are dragging you down with them.

So which is it? Who is happy/unhappy with the NRC results for their program? (And would you rather have A Specific Number, or do you like the way these new results are presented?)

16 comments:

AnonEECSProf said...

The NRC rankings? I wasn't paying attention before they came out. But watching the conversation about them today, I've had to laugh at the ineptness of these rankings in my field.

I'm in an EECS (Electrical Engineering and Computer Science) department. The NRC rankings in my area have some, er, issues....

For instance, NRC didn't count conference papers for EECS programs. (In our field, conference papers are our main form of publication, and conferences can be as prestigious/selective as journals, or more so.) NRC counted conference papers for pure CS departments, but not for joint EECS departments, which creates some weird biases in their numbers.

The NRC data also appears to contain errors in data collection, some so outlandish that we were all wondering whether anyone did any sanity checking on their figures at all. (For instance, when calculating the publications per faculty figure for the University of Washington's CS department, they apparently included adjunct professors in the count of faculty, but didn't count their publications in the count of publications. Being near Microsoft, UW's CS department has an enormous number of adjunct faculty, which totally screwed up UW's NRC ranking.)

The result is rankings that are absurd. For instance, they show relationships like University of Central Florida beating UC Berkeley, or University of South Carolina beating MIT. It makes one wonder whether anyone with any intelligence actually looked at the numbers before they released them.

So overall I quite enjoyed the release of the rankings today. It was good for some unexpected comic relief. Now I get to wait to hear the wailing from the departments that got shafted, and the exultation from the departments that got a boost. Woo-hoo!

It's a shame the NRC rankings only come out once every decade; what will I do for entertainment for the next ten years?

I can't believe I'd ever be saying this, but: it sounds like the leading authority for rankings in my field might be US News and World Report. (Gack! Wash my mouth out with soap!)

Anonymous said...

There is a message board used by people in my field (mostly for sophomoric attacks on job candidates) that has been going crazy about these rankings since yesterday. I just moved departments, and my new department did very well while my old department did surprisingly poorly. Both departments are pretty similar on the dimensions I find meaningful, though, so I don't think the rankings mean much.

Jen said...

My grad-school program (in the broad Biology category) actually fared pretty well - we were above the median in all categories except the percentage of graduate students who go on to academic careers after finishing their degree - only 11.5%. This shocked me at first, but the more I think about it, the more sense it makes based on what I know about the students in my program - most of them have gone on to careers in industry, law, high school education, editing/publishing, etc. Also, about a quarter of the students in the program are in the MD/PhD program, and the vast majority of them choose to go into medicine full time, rather than do a dual medicine/research track. I'm one of a few in my class of 20 either currently in or actively seeking an academic career. Given that my program kept harping on its success in preparing students for academic careers, this must be a wake-up call for them.

Anonymous said...

Wow! According to the NRC, my program really sucks!

The NRC has confirmed what I long suspected - the results make me neither happy nor unhappy.

Anonymous said...

I've not looked at the ratings, though I've been told by a statistician that the report this year does not try to make the rather silly rankings based on noisy data that have been done in the past.

I did check to see if my department was rated. Being a fairly new field, it was not. Only 19 departments at our university were rated (chosen mainly by how many other places had essentially identical programs---uniqueness is strongly penalized in NRC ratings).

Since my department was not ranked, I can continue with my belief that we are the best in the world.

My former department was rated, but I didn't bother to look it up: I hate to have to "register" at sites to get preliminary data, and I was not about to pay $90 for the final report to satisfy what was at most idle curiosity.

Anonymous said...

@gasstationwithoutpumps: I got the data for free--when they ask you to register, they only ask for email, first name, and last name.

I was curious to see how my department fared. Answer: it didn't fare at all. My school is evidently small enough that it wasn't even on the list. I wonder what the magic cutoff is?

I chose MyU because it seemed to be a good compromise. Despite being a small school, it had a decent amount of modern, well-maintained analytical equipment (greater in quantity and better in quality, in fact, than what was available at the large LocalStateU where I did my postbac work). Yet the department was small enough that everyone would know my name (we have ~50 grad students in the department).

I think it's a good school, and many people in my field would recognize the name. But I guess it just wasn't a worthy enough school to make the NRC rankings. Oh well.

Anonymous said...

Well, in the case of U. South Carolina ECE, I think it is a reflection of the fact that it's a small department with a faculty member who has a very visible and productive research program. While MIT EECS may well be a better program than U South Carolina, it does say something about the productivity (or lack thereof) of some faculty members at schools like MIT EECS and/or Berkeley EECS.

Anonymous said...

"Yet the department was small enough that everyone would know my name (we have ~50 grad students in the department)."

I'm at a big state university (one of the top 100 in world listings), and 50 grad students is not a small department here. Only 2 departments have over 100 grad students (computer science and education, both with large masters programs). Most departments are in the 30-70 student range, so 50 is a pretty typical size.

I'm sure that there are schools where some programs have several hundred grad students, but I suspect that these departments are typically dysfunctional: teaching that large a group is very difficult to do well, so things have to get very bureaucratic and perfunctory.

50 grads is a good size for a departmental grad program---enough for some diversity of students and continuity, small enough for individual attention and advising.

Anonymous said...

My group did really, really well. We all got together last night and had a party and were happy to be a part of the group. The key difference between how I felt about this and how your statement made me think you felt is that none of us were focused on how OUR presence in the group affected the rankings. Our firm belief is that it takes a village to have a good graduate training program.

UnlikelyGrad said...

@gasstationwithoutpumps:

Yeah, I didn't think it was terribly small. (I looked at smaller departments when I was applying but they didn't seem to have the resources I needed to do the sort of work I wanted to do.)

I guess I understand why the NRC included schools like the FamousStateU where my sister teaches--I applied there (different department, of course), went to visit, and was appalled at how many grad students there were. I think that department admits something like 100/year, which I just couldn't live with. Of course, those 100+ students evidently could live with it and went there; the NRC ratings are presumably useful for them.

My question is this--if a department with 50 grad students is not unusual, why aren't these departments ranked too? Don't these smaller departments take on the vast majority of grad students? Of course, if you get down to a very small department, the statistics won't mean much. So where should the NRC draw the line? (Note: my school has about twice as many students as Caltech, but somehow Caltech ended up on the list.)

I guess it all comes down to what the purpose of these rankings is supposed to be. If they're meant to help prospective grad students evaluate schools, then they're pretty much useless for everyone but the applicants who want to go to a large school. If they're meant to evaluate research capabilities, they're ignoring the brilliant people who just happen to work at small schools. (There are a couple of profs in our department who bring in close to $1M in research money and publish ~10 papers/year.)

So what are the rankings good for? As far as I can tell, only establishing prestige for people who already have it. Because those who don't have it don't get included in the survey...

EECSGeek said...

Anonymous writes: "While MIT EECS may well be a better program than U South Carolina, it does say something about the productivity (or lack thereof) of some faculty members at schools like MIT EECS and/or Berkeley EECS."

Cute. That's easy to say -- if you don't know anything about the field. In my view, anyone who thinks the NRC rankings say something negative about the productivity of MIT EECS profs is either off their rocker or not very well informed about the field. MIT EECS profs are among the best in the world, in both research productivity and intellectual leadership.

sophia said...

I think its gud for me....

Anonymous said...

To EECSGeek:

Yes, there are many MIT EECS professors who are leaders in their fields with excellent work, productive students and plenty of funding. However, there are quite a few who are living off past glory and not pursuing funding aggressively. For example, you will run into groups that are down to 1-3 students and don't publish that regularly. On the other hand, at Stanford, for instance, most professors have large groups with a lot of funding. You can also see this in the number of papers that appear at premier ECE conferences (DAC, ISSCC, IEDM, IMS) from MIT compared to other schools. So yes, this lack of uniformity among MIT EECS faculty contributes to the lower ranking.

I'm just saying that the emperor does not have clothes.

male humanist said...

I looked pretty carefully at the rankings in my field. My judgment about it squares with AnonEECSProf's about hers/his.

My own department did very well -- I'd say at the high end of where we could reasonably be expected to be placed. But when I looked at the whole list, my pride/enthusiasm/whatever drained quickly away. The list is absurd, leaving out a very top U Cal department altogether, ignoring a superb department at a generally lousy university, putting a "Uh, who???" department in the top ten, etc.

And, if you look at the methodology, it's not surprising. The methodology is very bizarre. It's as if they assumed that the national reputation of a department is correct, got lots of people's opinions, and then tacked weightings onto their 'objective' criteria in a linear regression so that the results would fit the reputation. And of course, since a lot of the factors are garbage (at least in my field), that's going to lead to a very noisy echo of national reputation.

How on earth did they ever come up with this idea?

AnonEECSProf said...

I apologize if I'm flogging a dead horse, but has anyone seen this article?

NRC: Nonsensical Ranking Clowns

A respected computer scientist reports that:

"The complicated regression analysis used to generate the scoring formula led to the percentage of female faculty in a given department actually counting against that department’s reputation score (!)."

What does it say about the validity of the NRC rankings, and the NRC methodology, when their ranking penalizes gender diversity? When it is biased against women? Should we accept rankings whose foundation is explicitly and unapologetically sexist?

My understanding is that the weights were derived by polling a small sample of faculty (via a methodology never clearly disclosed), then attempting to retrofit weights that would match their subjective evaluation of various institutions. If the NRC retrofitting methodology is valid, what does it say about the sample of faculty polled -- and indeed, about our field as a whole -- that apparently the best way to capture our subjective evaluation of peer institutions is by down-ranking all institutions with an above-average number of female faculty?
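
For anyone curious about how a retrofit like that can end up penalizing female faculty, here is a toy sketch in Python. Everything in it is invented for illustration (the criteria, the numbers, and the exact fitting procedure are my guesses at the general approach, not the NRC's actual data or formula), but it shows the mechanism: if the survey reputation scores being fit happen to be lower for departments with more women, an ordinary least-squares retrofit hands that back as a negative weight on the percent-female criterion, and the resulting "objective" score both inherits the bias and ends up as little more than an echo of the reputation it was fit to.

# Toy illustration with invented data; NOT the NRC's data or actual method.
import numpy as np

rng = np.random.default_rng(42)
n = 60  # hypothetical number of departments

# Made-up "objective" criteria for each department.
pubs_per_faculty = rng.normal(loc=5.0, scale=1.5, size=n)
pct_female = rng.uniform(low=5.0, high=40.0, size=n)

# Made-up survey scores: driven by publications, but also systematically lower
# for departments with more women (i.e., a biased pool of raters).
reputation = 2.0 * pubs_per_faculty - 0.1 * pct_female + rng.normal(scale=1.0, size=n)

# The "retrofit": ordinary least squares of reputation on the criteria.
X = np.column_stack([np.ones(n), pubs_per_faculty, pct_female])
weights, *_ = np.linalg.lstsq(X, reputation, rcond=None)
print("fitted weights [intercept, pubs/faculty, %female]:", np.round(weights, 3))

# The composite score is just the fitted value, so it reproduces the raters'
# bias (a negative weight on %female) and tracks reputation almost exactly.
score = X @ weights
print("corr(score, reputation):", round(np.corrcoef(score, reputation)[0, 1], 3))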

This is disturbing and disappointing all around.

Doctor Pion said...

What is so odd is that the fuzzy scheme they came up with makes formerly important boundaries like upper quartile, well, even fuzzier. I haven't looked hard, but there must be cases where you can use one of the schemes to argue that you are now in the top quartile while using another to justify adding just one more position to push you into the top quartile!

It is hard to respect a process where the methodology has changed as many times as in this case. The only thing missing was a ranking done the same way as in 1995, just to provide a point of comparison.