An old joke tells of a driver, returning home from a party where he had one drink too many, who hears a warning over the radio about a car careening down the wrong side of the highway. "A car?" he wonders aloud. "There are lots of cars on the wrong side of the road!"
I am afraid that driver is us, the computing-research community. What I'm referring to is the way we go about publishing our research results. As far as I know, we are the only scientific community that considers conference publication as the primary means of publishing our research results. In contrast, the prevailing academic standard of "publish" is "publish in archival journals." Why are we the only discipline driving on the conference side of the "publication road?"
Conference publication has had a dominant presence in computing research since the early 1980s. Still, during the 1980s and 1990s, there was ambivalence in the community, partly due to pressure from promotion and tenure committees, about conference vs. journal publication. Then, in 1999, the Computing Research Association published a Best Practices Memo, titled "Evaluating Computer Scientists and Engineers for Promotion and Tenure," that legitimized conference publication as the primary means of publication in computing research. Since then, the dominance of conference publication over journals has increased, though the ambivalence has not completely disappeared. (In fact, ACM publishes 36 technical journals.)
Recently, our community has begun voicing discomfort with conference publication. A Usenix Workshop on Organizing Workshops, Conferences, and Symposia for Computer Systems (WOWCS), held in San Francisco in April 2008, focused on the paper selection process, which is not working too well these days, according to many people. (You can find the proceedings at http://www.usenix.net/events/wowcs08/ and a follow-up wiki at http://wiki.usenix.org/bin/view/Main/Conference/CollectedWisdom.)
Two presentations at the workshop evolved into thought-provoking Communications' Viewpoint columns. In the January 2009 issue, we published "Scaling the Academic Publication Process to Internet Scale" by J. Crowcroft, S. Keshav, and N. McKeown (p. 27). In this issue, you will find "Program Committee Overload in Systems" by K. Birman and F.B. Schneider (p. 34). The former attempts to offer a technical solution to the paper-selection problem, while the latter points us to the nontechnical origins of the problem, expressing hope "to initiate an informed debate and a community response."
I hope the outcome from WOWCS and the Viewpoint columns published here will initiate an informed debate. But I fear these efforts have not addressed the most fundamental question: Is the conference-publication "system" serving us well today? Before we try to fix the conference publication system, we must determine whether it is worth fixing.
My concern is our system has compromised one of the cornerstones of scientific publication: peer review. Some call computing-research conferences "refereed conferences," but we all know this is just an attempt to mollify promotion and tenure committees. The reviewing process performed by program committees is done under extreme time and workload pressures, and it does not rise to the level of careful refereeing. There is some expectation that conference papers will be followed up by journal papers, where careful refereeing will ultimately take place. In truth, only a small fraction of conference papers are followed up by journal papers.
Years ago, I was told that the rationale behind conference publication is that it ensures fast dissemination, but physicists ensure fast dissemination by depositing preprints at www.arxiv.org and by having a very fast review cycle. For example, a submission to Science, a premier scientific journal, typically reaches an editorial decision in two months. This is faster than our conference publication cycle!
So, I want to raise the question whether "we are driving on the wrong side of the publication road." I believe that our community must have a broad and frank conversation on this topic. This discussion began in earnest in a workshop at the 2008 Snowbird Conference on "Paper and Proposal Reviews: Is the Process Flawed?" (see http://doi.acm.org/10.1145/1462571.1462581).
I cannot think of a forum better than Communications in which to continue this conversation. I am looking forward to your opinions.
Moshe Y. Vardi
©2009 ACM 0001-0782/09/0500 $5.00
I'm afraid the editorial is completely one-sided, and I'm seriously concerned about the ability of Communications to be a good forum "in which to continue this conversation" when its E-i-C has clearly made up his mind.
For the record, other than for areas with involved theoretical results (for which I am prepared to consider that conference reviewing may be too quick), I believe that conference reviewing is not just "careful" but extremely high-quality: by top people and in direct contrast with other submissions for quality and appeal. Considering the quality of the top conferences in my field (programming languages and software engineering) and contrasting it to the average quality of the top journals, I cannot imagine many researchers with a sense of taste preferring the latter. (This is an average-quality assessment and it is not to say that very high quality papers do not also appear in journals and sometimes only in journals.)
I also find it sad that at the exact moment when all other sciences are trying to deal with the inadequacies of their procedures (arXiv is a telltale sign) we bring back a decades-old, fully-resolved question. Conference publication combines a very high-quality vetting mechanism with a forum for quick dissemination of ideas and community forming. It is also a means that easily accommodates industry researchers and practitioners (whose world is not centered around "promotion and tenure committees"). In short, it is a publication mechanism for the 21st century.
Finally, there is nothing wrong with a field setting its own standards for scholarship. The humanities have single-authored books. The sciences have journals. We have a mixture of conferences and journals.
If the problem is "program committee overload," as the Birman and Schneider article suggests, perhaps we should indeed look to physics to solve it: most physics papers (e.g., in the Physical Review A-E series of journals, the most standard full-length paper venue in physics) are reviewed by just a single reviewer. Our conference "program committee" overload is due to assigning three or more reviewers per paper.
I could not agree more; in fact, I had been toying with the idea of submitting an opinion piece on this very topic to CACM.
As program chair of an ACM conference, I can attest to the many flaws of the conference review process, in spite of everyone's best efforts. Time constraints, concentrated load, the difficulty of holding a relaxed rebuttal phase, competition among authors and referees, pressure to go with average ratings, financial concerns (attendance), and other factors all bias and weaken the process in important ways.
And as an interdisciplinary researcher, I experience first-hand how our conference-driven publication practices hurt us in terms of impact, reach, and visibility. Computing journals have very low impact factors and very long dissemination times, compared to other science disciplines. Conferences hurt the viability of journals because top researchers are busy refereeing conference papers. The time constraints of conferences make us submit papers in a rush before they are ready, or impose long delays. And competition for the top conferences, as has been noted by others, means that many good papers do not get the attention they deserve.
I propose a simple solution: abolish conference proceedings. Papers will then be submitted to journals instead. Journals will receive more and better papers, and refereeing resources will shift naturally from conferences to journals. As a result, journals will gain impact, improve quality, and speed up their processes. With our full attention, they will become viable once again, and the review process will be more rigorous, effective, and timely. Deadlines will no longer be concentrated, so we can submit better work, revise it until it is ready, and profit immediately and directly from reviewers' feedback, since the same referee can judge improvements to a paper. I could go on, but the many advantages seem obvious.
We would still hold conferences, of course. In many cases where conferences and journals are nicely aligned, presentations can be selected and invited from among the best papers published in the previous year. For newer areas and groundbreaking work, a conference or workshop can still accept submissions, but it should not publish proceedings; publication is the job of journals.
ACM should take the lead in such a transition because it publishes the proceedings of most top computing conferences, as well as many of the top computing journals. Therefore ACM has everything to gain from leading the way. ACM is also the only body that could successfully shepherd such an undertaking.
The switch would not be easy, but with careful planning we could manage a phased transition over a few years and catch up to the rest of the scientific community.
The key issue is not conferences vs. journals. What I don't really understand is why reviewers (editors, etc.) need so much time, often months, to review a 20-page journal paper. If their time is so valuable for other things, they should not accept reviewer positions.
When a paper can be reviewed in a timely manner, it really doesn't matter whether it's a conference or a journal paper, as long as it is accepted.
Just because the EiC has an opinion does not mean that Communications cannot be a forum for a continued conversation. Communications does not publish only opinions that the EiC agrees with; it publishes well-reasoned and well-argued opinions.
This is a difficult issue, with no clear-cut line between the two "sides", in my opinion. After submitting papers to journals only to have the results looking decidedly dated when the papers were eventually published, I have moved to a higher proportion of conference papers.
On the issue of subsequent journal publication of conference papers, a practical problem is that many conferences insist on retaining copyright on the paper, and many journals have a requirement that the research should not have been published before. The tensions here are obvious. Otherwise, I would agree that this two-phase process gives the advantages of rapid dissemination and good, archival publication.
One last comment: in my experience, conference reviewing procedures are of mixed quality, depending on the conference. A good conference has good reviewing procedures (and probably a fairly high rejection rate); others are of dubious quality. More and more, I look at who stands behind a conference when filtering the flood of CFPs received in my email. If a conference has ACM or IEEE backing in some form, my confidence level is greatly increased.
I would like to add my experience as a researcher in bibliometrics. While in other sciences journals are the main or only way to disseminate research findings, a distinctive feature of computer science research is the importance of selective conferences. This peculiarity of research in computer science complicates the process of bibliometric evaluation for two reasons. First, we need to convince other scientists that conference papers are valuable research outcomes and need to be considered in the research evaluation process. Even when this step has been accomplished, however, the problem is how to evaluate conference papers: the two major bibliometric data sources (Thomson Web of Science and Elsevier Scopus) do not index conference papers (only recently has the former launched a conference proceedings index). Thus, bibliometric indicators computed from these data sources usually do not consider conference publications, and the resulting scores for computer scientists with a significant share of conference papers (not published in journals) are lower than expected. Google Scholar includes everything, but the consistency and accuracy of its output are admittedly lower than those of the commercial citation-enhanced databases mentioned above. As a result, even when the importance of conference papers is recognized, I have experienced non-trivial problems in the bibliometric evaluation of this source of publication.
Several blogs followed up on this topic. See also Dan Reed's "Publishing Quarks: Considering Our Culture."