How can we break the cycle of deadline-driven research? In computer science, there has been a growing trend in the past decade or so for researchers to publish in workshops and conferences in order to increase the length of their publication list. This situation is especially true of junior faculty, worried about getting tenure; graduate students, worried about getting job interviews; and now even undergraduates, worried about getting into graduate school. In promotions and tenure committee meetings at some schools, discussion of the number of papers can overshadow discussion of the quality and impact of the candidate’s work.
We have successfully trained deans and provosts that, in computer science, papers in premier conferences count as much as or more than papers in journals. So the pressure to publish in conferences is all the more intense, and presumably the more the better. To accommodate the research capacity of our field, new workshops and conferences (and journals) proliferate, resulting today in extensive Web sites that maintain double-digit rankings of conferences. It is now common practice to see the conference acceptance rate included with each publication listed on a candidate’s resume. (I will not repeat the cogent arguments that others have made on the subject of journals vs. conferences, published in CACM [1, 2, 3], but they are relevant to this topic as well.)
We are now in a state where our junior faculty are mentoring graduate students with this deadline-driven approach to research. It is the only value system they know, and they are passing it on to the next generation. When one of my own graduate students said to me, after we agreed that we would submit a journal version of our conference paper, “Jeannette, the author guidelines for Journal X don’t specify a page limit,” I knew something was very wrong with our current culture in computing. We are now in a state where the common thought-chunk of research is a 12-month effort that fits in 12 pages.
We, as faculty advisors, are in a bind: Do we say to our students, “Yes, go ahead and submit to that conference [whose due date is looming],” or “No, don’t waste your time writing for that conference. Your work is not ready. Spend the time developing the work”? Do we give in to the peer pressure our students feel, potentially making them less competitive when they are on the job market? We need to promote a culture that encourages faculty and student researchers to take the time needed to work out their ideas, so that they submit when they feel ready, based on the import of their contribution.
Moreover, conservatism tends to win out in program committees, when submissions are competing for a finite number of conference slots, and in panel reviews for funding agencies, when proposals are competing for finite resources. This attitude leaves less room for the bold, creative, risk-taking, visionary ideas, especially those that are not fully fleshed out with all the i’s dotted and t’s crossed. Note that I have nothing against conferences: they are important for expeditious exchange of technical ideas, as well as networking among researchers and between academia and industry. I have nothing against (high-quality) incremental research: some research agendas are long-term in vision, but rely on making progress step by step, building on prior research results.
The consequences of this deadline-driven research are potentially bad for the field. Our focus should be on the quality of the research we do. Our goal should be on advancing the frontiers of science and engineering.
So how can we break this cycle? One place to start is with the department heads. At hiring time, among other factors, we should look for a candidate’s big idea (or two), not number of publications. In mentoring junior faculty, we need to stress the importance of quality and impact. At faculty evaluation time, we should promote and grant tenure based on quality and impact.
Hopefully, we in the community can at least start a dialogue on this topic. It is for the good of our field--to keep it healthy, exciting, and vibrant.
[1] J. Crowcroft, S. Keshav, and N. McKeown, “Scaling the Academic Publication Process to Internet Scale,” Communications of the ACM, Vol. 52, No. 1, January 2009, pp. 27-30.
[2] M.Y. Vardi, “Conferences vs. Journals in Computing Research,” Communications of the ACM, Vol. 52, No. 5, May 2009, p. 5.
[3] K. Birman and F.B. Schneider, “Program Committee Overload in Systems,” Communications of the ACM, Vol. 52, No. 5, May 2009, pp. 34-37.
In general, I think that our community is suffering from an information overload problem. Thanks to huge improvements in computer software and, more recently, the growth of the Internet, it is becoming easier and easier to produce and distribute articles. Fifty years ago, writing an article was a long process. The author had to find an idea, write it on paper, find a secretary to typeset the paper, submit it to a journal (there were very few conferences), wait while the editor sent the paper by post to reviewers, ... Once accepted, the paper was sent to the publisher, which had to typeset it again in its own format. The practical difficulty of writing these papers forced researchers to mainly submit mature papers.
Today, with the Internet and all the available software, typesetting is easier and any student can write and submit a paper; no secretary is required to typeset it. We have removed all the middlemen (or middlewomen) who participated in the paper writing/reviewing/publishing process, but the result is that we now suffer from a different problem. There are far too many papers written on any topic, with lots of conferences where, thanks to cheap airlines, many researchers present their papers in front of an audience that is either reading its email or preparing its next papers. Thanks to laptops and the Internet, we can now work from anywhere at any time. In the past, attending a conference meant listening to the presenters and trying to understand the paper being presented...
I think that a key challenge our community needs to address is how to detect, in an ocean of papers, the key innovative ones that deserve to be widely distributed. The new CACM is a very good step in this direction, and I hope that CACM will publish the key innovative papers that all computer scientists must read. But CACM alone will not be sufficient; we will need innovative techniques to efficiently (and hopefully quickly) detect the most innovative papers in each CS subfield.
For more on the effect of conferences on our field, please also read Lance Fortnow's "Viewpoint: time for computer science to grow up," in the CACM August 2009 issue, pp. 33-35.