Peer-reviewed publication has been part of scientific scholarship since 1665, when the Royal Society's founding editor Henry Oldenburg created the first scientific journal. As Jeannette Wing nicely argued in her recent blog post here, it is the public, formal, and final archival nature of the Oldenburg model that established the importance of publications to scientific authors, as well as to their academic standing and careers.
Recently, as the communication of research results reaches breakneck speed, some have argued that it is time to fundamentally examine the peer review model, and perhaps to modify it to suit modern times. One such proposal recently posed to me via email is Open Peer Review, a model not entirely unlike the Wikipedia editing model in many ways. Astute readers will appreciate the irony that it is the Wikipedia editing model that often makes academics squirm in their seats.
The proposal for open peer review holds that the incumbent peer review process suffers from bias, suppression, and control by elites against competing non-mainstream theories, models, and methodologies. By opening up the system, we might increase the accountability and transparency of the process and mitigate other flaws. Unfortunately, while we have anecdotal evidence of these issues, quantifying the flaws with hard numbers and data remains difficult, since reviews often remain confidential.
Perhaps more distressing is that several experiments in open peer review (such as those conducted by the journal Nature in 2006, the British Medical Journal in 1999, and the Journal of Interactive Media in Education in 1996) have had mixed results in terms of the quality and tone of the reviews. Interestingly, and perhaps unsurprisingly, many of those invited to review under the new model decline to do so, potentially reducing the pool of reviewers. This is particularly worrisome for academic conferences and journals at a time when we desperately need more reviewers to handle the growth in the number of submissions.
A competing proposal is open peer commentary, which elicits and publishes commentary on peer-reviewed articles. This can be done prior to publication, or even after the date of publication. In fact, recent SIGCHI conferences have already started experimenting with this idea, with several popular paper panels in which papers are first presented and opinions from a panel are then openly discussed with the audience. The primary aim here is to increase participation, which might also improve transparency. The idea of an open debate, with improved transparency, is of course the cornerstone of the Wikipedia editing model (and our research project WikiDashboard).
Finally, it is worth pointing out the context in which these proposals might be evaluated. We live in a different time than Oldenburg. In the meantime, communication technology has undergone several revolutions of gigantic proportions. Now, real-time research results are often distributed, blogged, tweeted, facebooked, googled, and discussed in virtual meetings. As researchers, we can ill afford to stare at these changes and not respond.
Beyond fixing problems of bias, suppression, and transparency, we also need to be mindful of the speed of innovation and whether our publication processes can keep up. Web review management systems like PrecisionConference have gone a long way in scaling up the peer-review process. What else can we do to respond to this pace of growth while remaining true to the openness and quality of research?
I think that open peer review works in Wikipedia, in part, because everyone is an expert in something, and Wikipedia articles span a broad spectrum.
There has been a great deal of debate in the Health 2.0 community (e.g., on e-patients.net) about new dimensions of openness, engagement and participation ... and the problems that ensue when one relies too heavily on the experts.
I'm sympathetic to and supportive of many open data and other transparency initiatives, but I still think that conferences [at least in computer science] and journals that rely on credentialed experts for peer review have their place. It is worth noting that most of the references on Wikipedia - especially in medical articles - are to results published in peer-reviewed journals (or, at least, their abstracts).
While I think it is important to consider alternatives or enhancements to the current peer review process (First Monday comes to mind as a good example of promoting timely review and publication), I think it's even more important to broaden the notion of how we evaluate the impact that research[ers] can have. For example, I've read some incredibly insightful, impactful and thoroughly referenced or substantiated blog posts that will never appear in a conference or a journal, and yet in most academic and industry research organizations, such "contributions" count for little or nothing.
Thanks for putting this out there, Ed, it's definitely worth having a conversation about. When reading through this post, two questions came to mind.
First, you present various alternative models for peer review or, essentially, for filtering what gets published. Open peer review, open peer commentary, and current peer review systems are all slightly different and offer different advantages and disadvantages. As it stands, the post is more of an open question, but I'd be interested to hear what you (and others) think would be more or less valuable specific directions to pursue in changing peer review.
Second, I wonder about discipline specificity. One of the arguments I've often heard for computing research being primarily conference-based (rather than journal-based) is that the journal review process takes so long that, by the time an article is published and available, it's already out of date. Rather than take on this oft-debated argument, I instead want to ask whether the various models for open peer review we might consider would be more or less amenable to computing research than to other fields. For instance, Behavioral and Brain Sciences uses the open peer commentary format, due in part, they argue, to the highly complex and multidisciplinary nature of research in that field. I wonder, then, whether, just as certain publication formats may be more or less suited to different disciplines or research areas, different models of peer review might also vary in their appropriateness or effectiveness.
Joe also makes an excellent point about what "counts," a point that resonates in many ways with Jeannette's post. As we think about potential ways of altering peer review, questions of legitimacy, contribution, and authority may be some of the most important to consider.
Just got back from the NSF/CRA workshop on ultra-large-scale interaction, so haven't gotten a chance to respond directly until now.
Both of you raise great points. Joe's point, I think, helps us not lose sight of the fact that we're talking about publication records that are used to judge the impact a researcher has had. Oldenburg's model was invented to facilitate discussion among the members of the Royal Society, as far as I can tell. The use of publication lists to help assess impact is probably a later side effect.
Eric, you ask "I'd be interested to hear what you (and others) think would be more or less valuable specific directions to pursue in changing peer review." Here are some raw thoughts:
If what we care about is assessing a person's research output, what are some of the alternative models? Joe's comments suggest that perhaps in the future everything will be published online (probably as long narratives, but maybe each individual idea as a blog post? gasp!), and that we can use web metrics to assess impact. Here I have in mind Clay Shirky's Folksonomy posted online. It's certainly influential. How many quotations, and quoted by whom? How many people tweeted about the blog posts? What position does the person occupy in the social network? These all could be impact measures. Sounds pretty radical, eh? I think we're probably too invested in the current model to change to these somewhat-radical impact measure models.
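Just to make the idea concrete, here is a purely hypothetical sketch of what such a web-based impact measure might look like. Every function name, weight, and data point below is invented for illustration; this is not a proposal for an actual metric, only a toy combining quotation counts, tweet counts, and social-network position into one number:

```python
# Hypothetical sketch: combining web signals into a single "impact" score.
# All weights and data are invented for illustration only.

def degree_centrality(edges, node):
    """Fraction of the other researchers this one is directly connected to."""
    nodes = {n for edge in edges for n in edge}
    degree = sum(1 for a, b in edges if node in (a, b))
    return degree / (len(nodes) - 1) if len(nodes) > 1 else 0.0

def impact_score(quotations, tweets, centrality,
                 w_quote=1.0, w_tweet=0.1, w_net=5.0):
    """Weighted sum of quotation count, tweet count, and network position."""
    return w_quote * quotations + w_tweet * tweets + w_net * centrality

# Toy co-discussion network among five researchers.
edges = [("alice", "bob"), ("alice", "carol"),
         ("bob", "dave"), ("carol", "eve")]
c = degree_centrality(edges, "alice")  # alice reaches 2 of 4 others -> 0.5
print(impact_score(quotations=12, tweets=40, centrality=c))  # 12 + 4 + 2.5 = 18.5
```

The hard part, of course, is not computing such a score but agreeing on what the signals and weights should be, and how to keep them from being gamed.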
Something closer to reality might be to start by getting rid of the dead-tree model (read: paper). Often I hear that the limiting factor in how many articles can be published in a journal or conference proceeding is the number of pages the publisher wants to print (and the cost associated with that), or the amount of time we have for paper presentations at a conference. Boohoo! All journals and conference proceedings should be published only digitally now. Yes, this will mean changing the current economic models for journal publishers, but I think it is necessary. For example, we cannot keep up with the growth of the field while publishing only 20-30-ish papers in TOCHI or other journals. It's simply not sustainable. Creating more journals just fragments the community. Instead, there should be some growth model for enlarging journals so they accommodate more content every year (important for fields that are growing, like HCI, perhaps indexed to the number of faculty members or researchers in that field). That would also mean enlarging the editorial board as the field grows. This also partially answers your question about discipline specificity: growing fields should get larger journals over time, and dying fields should get smaller journals over time.
I also think open commentary can be implemented now, and organizations like ACM would do well to take the lead in trying such experiments.
One outlet for scientific results could be simply a blog: I'm experimenting with this in an effort to write a conference paper directly on my blog - http://www.ikangai.com/blog/tag/public-paper-writing
This makes the results visible to the public and allows for early input from the community during the writing process. Such input can be regarded as informal peer review that helps authors sharpen their ideas.
I'm aware that such a publication/review process requires a completely different take on the publication process itself, and that information might become even more scattered: numerous scientific blogs could appear basically anywhere in the world and would probably compete for the community's input.