Thursday, July 31, 2014

CV Gap Years

Every year I get asked to write letters for the evaluation of faculty at other institutions for tenure and/or promotion. My typical thought process on being asked to write a letter for someone I don't know well is: "OK, I've heard of that person/read their papers/seen them at conferences. Sure, I'll write a letter." Then I note the due date and send off a quick e-mail agreeing to write the letter. Most often the request arrives in the summer and I write the letters in summer or early fall. [If you click on the 'tenure' label in the frame on the right -- perhaps after scrolling down a bit -- you will see my previous comments on writing tenure letters.]

When it comes time to study in detail the materials relevant to the evaluation -- for example, the CV and selected publications -- in many recent cases (recent = past 5 years) there have been complications. Example complications: unexplained gaps in the publication record (at least, unexplained to outside reviewers), lack of advisees and lack of publications with advisees, and/or few to no grants (and no research proposals pending with the individual as PI). In a recent example, I was asked to comment specifically on publication quality and quantity, grants, and other research aspects, but I found this difficult owing to some of these complications.

I can think of 'good' explanations for all of those complications. A gap in publications could be related to a massive time commitment setting up a lab and preparing new classes; it could also be related to personal issues that would not trigger an official extension of the probationary period and that would not be explained in a cover letter to external letter writers. Lack of advisees could be caused by unsuccessful attempts at advising students who quit or failed for reasons completely unrelated to the advising ability or practices of the faculty member. And we all know that it is difficult to get grants these days (although we still have to try, so a lack of pending research proposals is troubling).

The host institution is of course aware of all these issues, knows the context, and will likely do what it wants about them -- ignore them completely and focus on the individual's potential or treat them as fatal flaws that justify denial of tenure/promotion -- no matter what my letter says. And there are other significant factors (teaching ability) that are typically not known by outside letter-writers who are asked to comment on scholarship.

Sometimes I think that these letters are just a necessary formality and there is nothing useful that I can say in my letter. It's not constructive to think about that while working on one of these letters, so I try to think about how -- as a faculty member reading other people's letters for colleagues -- I find some letters to be quite useful. These letters can be useful not so much for whether the individual thinks the candidate should or should not be tenured and/or promoted but for the perspective they provide about the person's body of work.

So I try to focus on that aspect of my letters. After (re)reading some of the candidate's publications and thinking about their ideas and work and trajectory, I try to express what I think about that person's scholarship and their impact on the field. (I have written before about how I do not like to do comparisons with others in the field and I do not like to answer the question of whether someone would get tenure at my institution.) Writing in detail about the candidate's research may or may not be of interest to faculty and administrators but I think it's the best contribution I can make to the process, more so than any detailed comments about the data in the CV.

Thursday, July 10, 2014

Room for Improvement

Student comments on my teaching of a particular course:

Great professor!
I have enjoyed this class!
I liked the readings.
This course required too much previous knowledge.
Professor very helpful with homework.
Homework very useful for class.
Well-constructed lectures.
Very organized lectures.
She speaks very clearly.
She answered my homework questions.
She provided images and charts to supplement the subject matter.
The in-class exercises were helpful.
I liked the practice exercises we did in groups during lecture.
I liked that she asked questions during class and this helped deepen my understanding of concepts.
Useful supplementary material to help us understand lecture material.
She explained the topics completely in class. Didn't use a textbook as a crutch.
It was great that lecture and lab material were well coordinated.
She was always ready to answer questions.
She was always willing to help with any questions.
She provided the subject matter very clearly.
The last project was too much work for this level of class.
Lecture presentations very clear.
I liked the in-class exercises.
You should improve your teaching methods.

Note that almost all of the comments are in the 3rd person (except for the last one), as if the students were writing to someone else about me, rather than writing to me with feedback. I don't know whether it matters, in terms of the type and level of feedback, whether the student is imagining an unknown audience or speaking directly to me. At evaluation time, I give a little talk to the class about the importance of this feedback and how it is used by instructors and the department/college/university, but I think there is still general confusion among students about what exactly the purpose of these evaluations is, who reads them, and whether anyone cares what they think.

These are overall nice comments, and unfortunately also rather classic in that the criticisms are too vague to help me understand what the specific complaints are.

The last comment, despite being too vague to be useful in any specific way, is absolutely right. Despite being deep into my mid-career years, I don't want my teaching to fossilize. I want to improve. In recent years I have attended teaching workshops and gotten some ideas from those. When I team-teach, a faculty colleague is in the classroom with me, so I get some peer feedback. And last term, I jettisoned the too-long and too-detailed textbook and provided focused readings, including some that I wrote myself. That seems to have worked quite well (or at least no one said they missed having a textbook), so perhaps that counts as an improvement. I would also like to do some new things involving e-learning and have been to some workshops and meetings about that.

I am thinking about teaching because I was just looking at my evaluations, though mostly I am enjoying having lots of uninterrupted time for research. This week I even managed to submit a manuscript on which I am primary author. It's been about two years since I've been able to do that (and I don't mean to imply that I did it alone -- an excellent colleague was essential to the completion of this paper).

As I was finishing the paper (and a related grant proposal) recently, it occurred to me that I could create a new teaching module based on this work and incorporate it into the class for which I just received teaching evaluations (not, of course, as extra work but replacing some older material). Probably more than any major change in teaching style, a realistic way that I can improve my teaching is to find good ways to incorporate new material -- specifically, integrating New Science with Classic Science, so that students learn the fundamental stuff without which they are incomplete as scientists and people and yet are also exposed to new things that help them see where the field is at (including being exposed to unresolved questions that might inspire them).

Anyway, it's been a busy summer so far. My father recently asked me if my husband "also has the summer off" and I was actually quite calm about it this time. Have you had a similar conversation with anyone yet this summer? Parents? Neighbors? Friends? Students? Assuming that you do in fact work in the summer even if you are not teaching, did you (1) smile serenely and let them continue to exist in ignorance; (2) correct them (a) calmly, (b) not calmly; or (3) lapse into stony silence (if having a conversation) or send a glaring emoticon (if in e-contact)? (or other..).

Monday, June 23, 2014

Men are from Pluto

A colleague and I were talking about this and that recently and he said that at some point he needs to find a new research topic, as the one that he has been working on (very successfully, and in fact sort-of pioneered) is getting very crowded. It's not as much fun (says him) to be in a crowd instead of way out ahead.

So then he said that it was difficult to start working on a very-different topic because it can be difficult to get funding if you lack a track-record and expertise in that new thing. True enough. So I said, "Collaborate" (unsaid but well known: That's what I do).

He said, "No, you can't project authority if you collaborate."


Context: We are both full professors and therefore getting adequate credit for our work is not a career life-or-death issue as it is for early-career scientists. For the early-careerers, this can be important (depending on your particular context). Collaboration can still be a significant research component -- enjoyable and rewarding in many cases* -- as long as you also stand out from the crowd in some way for your ideas and expertise.

But other than that, who cares about projecting authority? OK, some people do. My colleague clearly does, and he is very good at it (projecting authority). I don't really care. Well, I do a bit (I don't like being overlooked), but I don't think collaborating has lessened my "authority". If anything, it has increased it.

I reject as a general philosophy the idea that collaborating de-authoritizes you (I just made that word up), although if that's what floats your boat, go ahead and enjoy your authority (alone).

* if your colleagues are not jerks, and if they don't hold up manuscripts and proposals.

Monday, June 09, 2014

Measure for Measure of Success

Something that I have been seeing more and more in grant proposal reviews (my own and those of colleagues who have shared theirs with me) is the idea that it's not enough to have a record of success advising grad students, undergrads, and postdocs in research -- you have to understand and explain your advising techniques and you have to have a plan for assessing and improving.

OK, I get that, but even when I attempt to do those things, it isn't good enough for some reviewers. They think that my colleagues and I are relying too much on past success and traditional measures of success (degrees, publications, conference presentations, post-graduation employment). They are not convinced that that is sufficient. They want something different. Apparently, unless you change something, you are not improving and therefore are not being transformative, or something.

Example reviews (comments condensed/reworded to remove any identifying vocabulary):

A highly qualified PhD student has already been identified for this research but the mentoring of this student and an undergraduate is largely assumed based on prior experience of the PIs. The PIs have records of successful advising but should include in the proposal a more intentional discussion of how they plan to train the next generation of scientists. The mechanism for success is not explained and there is no plan for assessing success of their mentoring. How will successful training of the graduate student be determined other than by the record of publications, presentations, and completion of the thesis? Although the research is potentially transformative and this is an excellent team of researchers, because of these shortcomings in the broader impacts I have given the proposal a lower rating.

That makes no sense to me. I am definitely not saying that we all deserve to have all of our grants awarded just because we have had past success. However, I think that if the proposed research is deemed excellent by a reviewer and the PI has a demonstrated record of success with advising, it does not make sense to downgrade a proposal rating for the reasons given in the example review above, contributing to the rejection of the proposal and therefore a lack of funding for the graduate student.

Here's another:

[From a review of a proposal that included one week of salary for a soft-money research scientist who runs a lab in which students would do some analyses for a proposed project]: Description of the mentoring of the postdoc is not well developed. There is no mention of career counseling. Mentoring in professional activities such as writing proposals and papers is confined to discussions and support for participation in conferences and workshops. There is no mention of how the postdoc will be mentored to collaborate with diverse groups of researchers and students. There is no description of the postdoc's career path in the context of developing an effective mentoring plan for him.

And this:

[From a review of a proposal that included a substantial component of support for undergraduate research]: These PIs have a long record of success in advising undergraduate students in research but no evidence is presented for how the field of research on undergraduate research will be advanced. 

These are just anecdotes, of course, plucked from reviews of different proposals by different PIs. At least one of the proposals even involved a colleague who does research on teaching and learning. It wasn't enough. Some of us PIs have attended national and local workshops on teaching and learning, read some of the relevant literature, even co-authored papers (some with education specialists) in science ed journals. It's not enough.

I think that giving attention to effective advising is an important component of research (and therefore grant proposals), but I also think these and similar reviews show that certain reviewers have run amok and are harming the very people (students, postdocs) they think they are helping.

Friday, May 16, 2014


A colleague recently shared a review of his (rejected) proposal. The most negative review contained this statement:
There were a number of editorial errors found throughout the proposal. Some were missing commas and the like,
The mind, like, boggles.

Now I need to know: have you ever
  • commented on punctuation in a review? (minor errors, egregious errors, pet peeves?) -- did it affect your overall rating of the proposal, as far as you know?
  • received a comment on punctuation in a review? (minor errors, egregious errors, pet peeves?)
And if there were punctuation, grammar, and/or spelling errors in a review that criticized you for real or perceived punctuation errors, how did you feel about that?

Did you think, "Oh well, the reviewer is just bashing out a review on a webform and of course shouldn't be held to their own high standards for punctuation in proposals. Perhaps that missing comma on page 10 of the proposal indicates that I am a sloppy and untransformative scientist and therefore didn't deserve a grant that would have supported student research."

Or did you think something a bit more negative about the reviewer's punctuated hypocrisy?

Thursday, May 08, 2014

Liveblogging the Exam

Although taking an exam is most certainly more stressful than giving an exam, giving an exam can be quite stressful. I am not asking for sympathy, I am just stating a fact.

The most stressful exams to administer are those in large classes in which students are packed into every available seat and there may be (alas, too often) issues with cheating. You can devote considerable time and effort to anti-cheating activities such as giving multiple versions of the exam, you can have students sign an honor statement, you can patrol the classroom non-stop during the exam, or you can just hope for the best.

Giving an exam to a small or medium-sized class is less stressful because the logistics are easier, but that is not to say that giving an exam even in these circumstances is lacking in stress. Or, at least, that is my opinion. Is there anyone who would rather give an exam than have a regular class? I would much rather have non-exam class time. [I am deliberately not addressing the possibility of not having exams at all. In some courses I do not give exams, in others I do, depending on the course.]

OK, so I am about to give an exam in a medium-sized class. I am not dexterous enough to blog while handing out the exam (and the TA does not seem to be in evidence), but I can combine semi-live and liveblogging to try to capture the essence of the experience from the front of the room.

I enter the room. They are all here, in their seats, staring at their notes. This has been a very punctual class, so I am not surprised they are all here on time. In a more typical class, students would appear throughout the first 10+ minutes of the class, even on an exam day, making a lot of noise as they rush in, grab an exam, find a seat, and deal with the logistics of finding a writing utensil and putting their water bottle in a suitable place.

Put your notes away! This takes a moment. I try not to let it eat into exam time (there is another class in this room right after ours), so I start handing out the exam to the nearest row of students who are note-free.

Can we start now? I should remember to say that they can start as soon as they get the exam but sometimes I forget and then someone always asks. It is not a large class, so the time difference between those who get the exam paper first and those who get it last is about a minute. If anyone in the back needs that extra minute, they can have it at the end. In a large class, I need a fleet of assistants to help hand out exam forms so that no student has to wait too long to get the exam or to get their question answered if they have one during the exam.

The room is never totally quiet. Someone is always turning a page, even if the exam has.. one page. And certainly if the exam paper has 2 pages: rustle rustle rustle. The second-most common noise is erasing. There is a lot of erasing going on at this very moment. Some students have very impressive erasers.

Two students just had questions about different exam questions. Both were answered easily by my pointing to a key word or words in the exam question. In both cases the student immediately saw that the answer to their question was right in front of them and thanked me.

The first student is done with the exam, halfway through the allotted time.

The second student just finished, with 20 minutes left to go. Make that three students.

Are the ones who finish very early the ones who are doing well in the course so far? In fact, there does not seem to be a strong correlation between those who are doing very well or very not-well in the course (to date) and those who finish the exam early, just as I predict that there will be a random collection of students (with respect to course performance) at the very last second of the exam.

At various times in my teaching past, I have timed myself taking an exam that I wrote. The purpose of this was to see how much time it took just for the physical act of writing (correct) answers, in the ideal case in which the answer is immediately known. The amount of time available for students to take the exam should of course be greater than this, but greater by how much? For certain courses, I developed a simple formula and adjusted the number and type of test questions accordingly.
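The post doesn't give the actual formula, but the idea above can be sketched. In this hypothetical version (the 4x multiplier and the 50-minute period are assumptions for illustration, not the author's numbers), student exam time is estimated by scaling the instructor's own answer-writing time:

```python
# Hypothetical sketch of the exam-timing idea: scale the time the
# instructor needs to write out correct answers to estimate a fair
# time allotment for students. The multiplier and period length are
# illustrative assumptions only.

def exam_time_needed(instructor_minutes: float, multiplier: float = 4.0) -> float:
    """Minutes students should get, given the instructor's timed pass."""
    return instructor_minutes * multiplier

def fits_in_period(instructor_minutes: float,
                   period_minutes: float = 50.0,
                   multiplier: float = 4.0) -> bool:
    """True if the scaled exam fits within the class period."""
    return exam_time_needed(instructor_minutes, multiplier) <= period_minutes

# Example: a 12-minute instructor pass scales to 48 minutes, which
# fits a 50-minute period; a 15-minute pass (60 minutes) does not,
# so questions would need to be cut or simplified.
```

Under this sketch, "adjusting the number and type of test questions" amounts to trimming until `fits_in_period` holds for the scheduled class period.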

I haven't done that in a while. Perhaps I am in that dangerous stage of my teaching career when I assume that I 'just know' how to do things like create an exam that is fair in length and level. I like to think there would be some warning signs (in my teaching evaluations? in other student comments?) if I have gone astray (or were to do so in the future).

I have started grading the exams turned in early. Students in previous classes have told me that it stresses them out if I start grading while some of them are still taking the test but it's not as if I am chortling in an evil way as I slash giant red X's through incorrect answers. I am not even sighing. I suppose it could be disconcerting if I also made happy sounds while grading. So I try to be subtle and quiet, serious and respectful as I make my first forays into grading amidst continued test-taking by the remaining students. Do you start grading turned-in exams whilst other students are still taking the test?

.. with a few minutes left to go, ~50% of the class remains. Some are just staring at the exam paper, some are writing rapidly. Some are obsessively clicking their writing utensils.

The deluge of exam-turning-in is about to begin. Some students acknowledge my presence as they turn in the exam and some do not make eye contact. I don't try to read anything into this. Some of the no-eye-contact students may have done quite well and are still feeling the residual effects of exam-stress. My general (delusional?) impression is that the students found the exam to be reasonable. I do not sense major unhappiness or anxiety.

We are in the final minute; 20% of the class remains.

Has anyone ever studied what % of students stay until the very end of an exam period, even if they are finished? Is there a universal value? Or is there a characteristic value for each professor, and/or for particular types of classes, institutions, etc.? The next time you give an exam, please record (and post in a comment) the % of students who stay until the very last possible moment, and note whether this was a large, medium, or small class. Do you think the % who cling onto every second available is a function of your exam philosophy or something else?

In this case, 8% of the class needed the exam extracted from them with some effort on my part. I do not like that. What do you do when you have to pry an exam away from a student when time is up? Do you just take it? Do you loom over them? Do you try increasingly forceful statements? Do you beg them to turn it in? Tell them you will not accept the exam if it is not turned in now? Do you walk out of the room?

The exam is done. I was interested in doing some grading while the exam was going on (efficient use of time! good way to see how some students did on the exam!), but now I am not.

Only one student has asked when I would have the exam graded (answer: "in a few days if possible"). I did not mention that cats are an essential part of my grading ritual (TMI).

Thursday, April 24, 2014


This is a poll I have done before (4 years ago), and I am interested to see if the results will be different now. I am curious about how often you check your citation statistics -- number of citations, h-index, who is citing you etc. You can sign up to get alerts about this, or you can go and check yourself at one of the citation-counting sites. How often do you get information about your citations?

Whether or not you answered the poll 4 years ago (in April! is there something about April that makes me think about citations?), do you think your citation-checking habits have changed in the last few years? 
