Peer review is often described as one of the cornerstones of good science. The idea is simple: before a scientific work is published, it is reviewed by at least two people from the same field, who decide whether it is worth publishing. Peer review is widely seen as the thing that distinguishes science from pseudoscience. In reality, however, things are not so simple, and this simplified view can even be dangerous, because it can lend pseudoscience credibility once it has managed to slip through the peer review process.
Two recent stories have highlighted some of the flaws in the peer review process. The first was a paper consisting of nothing but ten pages filled with the sentence “Get me off your fucking mailing list”. The paper was created by the computer scientists David Mazières and Eddie Kohler; the Guardian has a story on it. It is actually pretty old: they made it in 2005 and sent it to the dubious conferences and journals that flooded their e-mail inboxes. What made the news recently is that the paper was actually accepted by a publication called the International Journal of Advanced Computer Technology (IJACT). Mazières and Kohler didn't pay the publication fee, so the paper wasn't really published, but it should be pretty obvious that no peer review was going on; most likely the replies from the journal were part of a fully automated process.
Fake Open Access Journals
There is a whole bunch of journals out there known as predatory journals. They run a pretty simple form of scam: they create a web page that looks like a serious scientific publication and send out mails to researchers asking them to submit their work. Then they charge a fee for publication. This is widely known; the blog Scholarly Open Access lists hundreds of these journals. Sometimes the lack of peer review is blamed on the whole open access publishing model, fueled by the fact that Jeffrey Beall, the author of the Scholarly Open Access blog, isn't exactly a friend of open access. Blaming the whole open access model for fake journals seems hardly reasonable to me. (See this blog post from PLoS founder Michael Eisen on the topic; I mostly agree with what he writes.)
The second recent peer review story was a paper in which a parenthetical note with the sentence “should we cite the crappy Gabor paper here?” seemed to have slipped through the review of the journal Ethology. Retraction Watch has a story on it. Maybe even more interesting than the fact itself is the explanation given by one of the authors: the note was added by one of the authors after the peer review. Which opens up quite an interesting question: what exactly is reviewed during peer review? The paper that is finally published, or just some preliminary version that remains open to further editing after the review?
Many odd stories about peer review
These are just two of the latest examples. There is a large number of odd stories about peer review. In 2012 Retraction Watch reported that several Elsevier journals had received reviews from fake reviewers; the reasons for this remained unknown. Also in 2012, a Korean scientist managed to review his own papers. (I wrote a story on that back then; Google Translate link.)
When the psychologists Stuart Ritchie, Christopher French and Richard Wiseman failed to replicate a very controversial study by Daryl Bem that claimed to have found signs of precognition, they had a hard time finding a publisher. When they submitted their replication to the British Journal of Psychology, it was rejected by one of the reviewers. It later turned out that this reviewer was none other than Daryl Bem himself, the author of the study they had failed to replicate. The study was finally published in PLoS ONE. (The whole topic of failed replications and the reluctance of journals to publish them is of course a large problem in its own right.)
Earlier this year the scientist Cyril Labbé found out that a large number of papers published by the IEEE and Springer in supposedly peer reviewed conference proceedings had been generated with SCIgen. It is unclear why that happened. SCIgen is a joke computer program that creates computer science papers that look real but contain only gibberish and make no sense. The intent of SCIgen was to make fun of conferences with low submission standards, which its authors successfully demonstrated at the World Multiconference on Systemics, Cybernetics and Informatics (WMSCI) in 2005. If you have always wanted a scientific paper with your name on it, just go to the SCIgen web page and it will create one for you. It is also free software, so if you want your own gibberish paper generator, you can have it. (My own article on SCIgen and Labbé; Google Translate link.)
Publish the Review?
This blog post at Scientific American makes an interesting point while discussing the mailing list paper: there is a lack of transparency in peer review. The only thing a reader of a supposedly peer reviewed paper knows is that the journal claims it is peer reviewed. There is no proof that the review actually happened.
There are a number of proposals for how to improve the peer review process. One is pretty straightforward: the reviews themselves could be published. A peer review is usually a lengthy text in which the reviewer explains why they think a piece of science is worth publishing. Making the reviews public would not only make it much harder to fake reviews and add transparency, it would also have another obvious benefit: the review itself could become part of the scientific discourse, containing insights that other researchers might find valuable.
Another idea that is gaining traction is post-publication review. Scientists demand ways to comment on scientific works after they have been published. PubMed, the leading database of medical articles, started PubMed Commons as a way for scientists to share reviews of already published works.
Some proponents of Open Science have more radical ideas: publish everything as soon as possible and review later, even going as far as publishing data as it comes in and publishing the analysis afterwards. This has upsides and downsides. The upside is that it would make the scientific process much faster and much more transparent. The obvious downside is that it would remove any distinction between good and bad science that the peer review process delivered in the past. However, given how badly that works in practice (see the examples above), I'm inclined to say the advantages would probably outweigh the disadvantages. It should also be noted that something like this is already happening in many fields where it is common to publish preliminary versions of articles on preprint servers and go through a formal peer review process or conference presentation months or even years later. Some journals are already experimenting with new forms of peer review; PeerJ is one of the more prominent examples.
There's a lot of debate about the future of peer review, and it is heavily intertwined with other debates about open access publishing and preregistration. One thing should be clear, though: while peer review might be an early indicator of whether something is good science, it is hardly a reliable one.