
Fraudulent Peer Review

The organization COPE (Committee on Publication Ethics) has issued a statement indicating that there are attempts to manipulate the peer review process on a large scale.

While not many details are available, the statement indicates that some agencies provide "services" to scientific publishers that include fake peer reviewers. The strategy of these agencies seems to be to submit papers to scientific journals and at the same time to propose fake peer reviewers to the same journal, in the hope that these will be chosen to review the submitted article. They then submit favorable reviews in the name of the non-existent reviewers.

This sounds similar to a story from 2012 that I recently also mentioned here, where papers in journals from the publisher Elsevier were retracted because the peer reviewers didn't exist. The current news indicates that this has happened on a much larger scale than previously known.

Probably a large number of publications will be retracted following these incidents.

Press Releases exaggerate Research and Journalists are happy to uncritically repeat the exaggerations

A study published today in the British Medical Journal investigates the often unhealthy relationship between biomedical and health-related studies, the press releases about them, and the resulting news articles. There's a widespread feeling among scientifically minded people that “the media gets it wrong”. While this is hardly controversial, it's always good to have some scientific data on the details. The study is titled “The association between exaggeration in health related science news and academic press releases: retrospective observational study”; the main authors are Petroc Sumner and Christopher Chambers.

The authors took press releases from 20 major UK universities. They then checked each press release and the resulting news articles for typical exaggerations in the field. They looked at three very common examples: claiming causation where the study only shows correlation, drawing inferences about humans from animal studies, and giving practical advice about behavior change. There is one important limitation the authors point out: They didn't ask whether the studies themselves were already exaggerated; they only tried to measure the exaggerations that go beyond the study itself.

The main results are unsettling, but to be expected: Press releases exaggerate a lot (between 36 % and 40 %). If the press release is exaggerated, journalists are much more likely to also exaggerate (around 80 % for all three examples). If the press release does not exaggerate, there is still a substantial chance that the journalist will do so. Journalists especially like to exaggerate consumer advice.
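
Purely for illustration, here is a small Python sketch that shows how conditional exaggeration rates like the ones above are computed. The counts are made up, not the study's data; they only make the arithmetic behind percentages such as "around 80 %" concrete.

    # Hypothetical counts, NOT data from the BMJ study; they only illustrate
    # how "exaggeration rates" of the kind quoted above are computed.
    press_releases_total = 462
    press_releases_exaggerated = 180            # ~39% of press releases exaggerate

    news_from_exaggerated_pr = 150              # news articles based on exaggerated press releases
    news_exaggerated_given_exaggerated_pr = 120 # of those, articles that also exaggerate

    news_from_accurate_pr = 200                 # news articles based on accurate press releases
    news_exaggerated_given_accurate_pr = 35     # of those, articles that exaggerate anyway

    pr_rate = press_releases_exaggerated / press_releases_total
    rate_if_pr_exaggerated = news_exaggerated_given_exaggerated_pr / news_from_exaggerated_pr
    rate_if_pr_accurate = news_exaggerated_given_accurate_pr / news_from_accurate_pr

    print(f"Press releases that exaggerate: {pr_rate:.0%}")
    print(f"News exaggerated when the press release exaggerates: {rate_if_pr_exaggerated:.0%}")
    print(f"News exaggerated when the press release does not: {rate_if_pr_accurate:.0%}")

With these invented numbers the script prints 39 %, 80 % and 18 %, i.e. the same kind of conditional proportions the study reports.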

More exaggeration does not mean more news articles

There is one result that is a bit more difficult to interpret. The authors found that whether or not a press release is exaggerated makes hardly any difference in media uptake. One has to be careful here not to jump to conclusions too fast and not to make the same kind of exaggeration mistakes this whole study is about. The result could be interpreted as a sign that science doesn't have to exaggerate in press releases to get media coverage. But another very plausible explanation is that the more interesting studies are less likely to be exaggerated, while the less interesting studies manage to fill that gap by exaggerating their results.

I wondered whether a causal relationship could be checked with a different study design. It would certainly be possible to run some kind of randomized controlled trial, though I'm not sure whether that would be ethical, as you'd have to deliberately produce exaggerated press releases to do so.

Who's to blame?

Apart from the data, the study has already led to some discussion about who's to blame and what to do about it. Interestingly, both the study itself and an editorial by Ben Goldacre tend to argue that scientists are to blame and should change. Both argue that they don't believe journalism will change (certainly something for me and my colleagues to think about).

Science journalist Ed Yong made a strong statement on Twitter, arguing that all the blame should go to the journalists: “We are meant to be the bullshit filters. That is our job.” I can't argue with that.

It's certainly interesting that the scientists seem to put the blame on science while the journalist blames his own profession. However, in the end I think there's neither an excuse for writing exaggerated news articles nor one for writing exaggerated press releases.

Ben Goldacre has some very practical suggestions for how to change science press releases. He argues that press releases should contain the full names of both the PR people and the scientists responsible for writing them, to improve accountability. He also proposes that press releases should be integrated much more closely into the scientific publishing process: they should be linked from the study itself and they should be open to post-publication review and criticism from the scientific community. I think these are good ideas, though probably not sufficient to tackle the problem. (By the way, here is the press release about this study, and it is not linked from the study itself. They could lead by example.)

The Problem with Peer Review

Peer review is often described as one of the cornerstones of good science. The idea is simple: Before a scientific work is published it is reviewed by at least two people from the same field, who decide whether it is worth publishing. Peer review is widely seen as the thing that distinguishes science from pseudoscience. In reality, however, things are not so simple, and this simplified view can even be dangerous, because it can give pseudoscience credibility once it has managed to slip through the peer review process.

[Image: the mailing list paper, captioned "This is peer reviewed science"]
Lately two stories highlighted some of the flaws in the peer review process. The first was a paper consisting of ten pages filled with the sentence “Get me off your fucking mailing list”. The paper was created by the computer scientists David Mazières and Eddie Kohler; the Guardian has a story on it. It is actually pretty old: they made it in 2005 and sent it to dubious conferences and journals that flooded their e-mail inboxes. What made the news lately is that the paper actually got accepted by a publication called the International Journal of Advanced Computer Technology (IJACT). Mazières and Kohler didn't pay the publication fees, so the paper wasn't really published, but it should be pretty obvious that no peer review was going on; most likely the replies from the journal were part of some fully automated process.

Fake Open Access Journals

There is a whole bunch of journals out there that are called predatory journals. It is actually a pretty simple form of scam: They create a web page that looks like a serious scientific publication and send out e-mails to researchers asking them to publish their work. Then they charge a small fee for publication. This is widely known; the blog Scholarly Open Access lists hundreds of these journals. Sometimes the lack of peer review is blamed on the whole open access publishing model, fueled by the fact that Jeffrey Beall, the author of the Scholarly Open Access blog, isn't exactly a friend of open access. Blaming the whole open access model for fake journals hardly seems reasonable to me. (See this blog post from PLoS co-founder Michael Eisen on the topic; I mostly agree with what he writes.)

The second peer review story that came up lately was a paper in which a note asking “should we cite the crappy Gabor paper here?” seemed to have slipped through the review of the journal Ethology. Retraction Watch has a story on it. Maybe even more interesting than the incident itself is the explanation given by one of the authors: the sentence was edited in by one of the authors after the peer review. That opens up quite an interesting question: What exactly gets reviewed in peer review? The paper that's finally published, or just some preliminary version that's open to further editing after the review?

Many odd stories about peer review

These are just two of the latest examples; there is a large number of odd stories about peer review. In 2012 Retraction Watch reported that several Elsevier journals had received reviews from fake reviewers. The reasons for this remained unknown. Also in 2012, a Korean scientist managed to review his own papers. (I wrote a story on that back then / Google Translate link.)

When the psychologists Stuart Ritchie, Christopher French and Richard Wiseman failed to replicate a very controversial study by Daryl Bem that claimed to have found signs of precognition, they had a hard time finding a publisher. When they submitted their replication to the British Journal of Psychology it was rejected by one of the reviewers. Later it turned out that this reviewer was none other than Daryl Bem himself, the author of the study they had failed to replicate. The study was finally published in PLoS ONE. (The whole topic of failed replications and the reluctance of journals to publish them is of course a large problem of its own.)

[Image: a SCIgen paper, captioned "My very own SCIgen publication - would it get a peer review?"]
Earlier this year the scientist Cyril Labbé found out that a large number of papers published by IEEE and Springer in supposedly peer reviewed conference proceedings had been generated with SCIgen. It is unclear why that happened. SCIgen is a joke computer program that creates computer science papers that look real but contain only gibberish and make no sense. The intent of SCIgen was to make fun of conferences with low submission standards, which the authors successfully demonstrated at the World Multiconference on Systemics, Cybernetics and Informatics (WMSCI) in 2005. If you've always wanted a scientific paper with your name on it, just go to the SCIgen web page and it will create one for you. It is also free software, so if you want your own gibberish paper generator you can have it. (my own article on SCIgen/Labbé / Google Translate)
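
SCIgen generates its papers by randomly expanding a hand-written context-free grammar, so the output is grammatical-looking but meaningless. The following toy sketch in Python uses a tiny made-up grammar (it is not SCIgen's actual code or rule set) to illustrate the basic idea:

    import random

    # A tiny, made-up context-free grammar in the spirit of SCIgen.
    # Non-terminals are uppercase keys; each maps to a list of possible expansions.
    GRAMMAR = {
        "TITLE": [["A", "ADJ", "APPROACH", "for", "NOUN"]],
        "ADJ": [["Scalable"], ["Decentralized"], ["Probabilistic"], ["Heterogeneous"]],
        "APPROACH": [["Methodology"], ["Framework"], ["Algorithm"]],
        "NOUN": [["Byzantine Fault Tolerance"], ["Neural Networks"],
                 ["the Turing Machine"], ["Red-Black Trees"]],
    }

    def expand(symbol):
        """Recursively expand a symbol: terminals are returned as-is,
        non-terminals are replaced by a randomly chosen production."""
        if symbol not in GRAMMAR:
            return symbol
        production = random.choice(GRAMMAR[symbol])
        return " ".join(expand(s) for s in production)

    if __name__ == "__main__":
        # Prints something like: "A Probabilistic Framework for Red-Black Trees"
        print(expand("TITLE"))

A real-looking paper only needs a much larger grammar covering abstracts, sections, citations and generated figures; the underlying mechanism stays the same, which is why the output superficially passes as computer science while meaning nothing.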

Publish the Review?

This blog post at Scientific American makes an interesting point while discussing the mailing list paper: there's a lack of transparency in peer review. Readers of a supposedly peer reviewed paper only know that the journal claims it is peer reviewed. They have no proof that the review really happened.

There are a number of proposals for how to improve the peer review process. One is pretty straightforward: the reviews themselves could be published. Usually a peer review is a lengthy text in which the reviewer explains why they think a piece of science is worth publishing. Making the reviews public would not only make it much harder to create fake reviews and add transparency, it would also have another obvious benefit: The review itself can become part of the scientific discourse, containing insights that other researchers might find valuable.

Another idea that is gaining traction is post-publication review. Scientists demand ways to comment on scientific works after they've been published. PubMed, the leading database of medical articles, started PubMed Commons as a way for scientists to share reviews of already published works.

[Image: PeerJ, captioned "PeerJ is experimenting with new ways of transparent peer review."]
Some proponents of Open Science have more radical ideas: Publish everything as soon as possible and review later, even going as far as publishing data as it comes in and publishing the analysis later. This has upsides and downsides. The upside is that it makes the scientific process much faster and much more transparent. The obvious downside is that it would remove the distinction between good and bad science that the peer review process delivered in the past. However, given how badly that works in practice (see the examples above), I'm inclined to say the advantages will probably outweigh the disadvantages. It should also be noted that something like this is already happening in many fields, where it is common to publish preliminary versions of articles on preprint servers and to go through a formal peer review process or conference presentation months or even years later. Some journals already experiment with new ways of peer review; PeerJ is one of the more prominent examples.

There's a lot of debate about the future of peer review, and it is heavily intertwined with other debates about open access publishing and preregistration. One thing should be clear though: While peer review might be an early indicator of whether or not something is good science, it is hardly a reliable one.

Welcome to the betterscience.org blog

If you've been following the news in the past months you could read that scientific studies found that pesticides are linked to autism, an intelligent computer passed the so-called Turing test, a new mathematical algorithm will endanger the security of the internet, a higher concentration of antioxidants makes organic food healthier, PowerPoint slides make people stupid, and apples improve women's sex lives.

I have checked some of these claims and ignored others. However, I am quite certain that all of them are just plain wrong. You can find stories like these almost on a daily basis; there's a never ending flow of bogus science stories in the media. It would be easy to blame the journalists and sensationalist media. But there are various mechanisms at work: scientists exaggerating their research, press releases exaggerating scientific claims, and journalists either uncritically repeating them or reporting them in a completely misleading way.

Bogus news stories about scientific results are just the tip of the iceberg. In recent years I have become increasingly interested in everything that science gets wrong. This started when I became aware of the problem of publication bias: many scientific results never get published. I learned that there's a big debate about a reproducibility crisis in science; often enough, scientific results cannot be replicated when other scientists try to do so. The bottom line is that far too many published scientific results are simply wrong and huge amounts of resources are wasted.

While these problems get more attention, some people want to try out radically new ways of doing science. A community that has gathered around the idea of Open Science wants to turn the scientific publication process around and bring much more transparency to science.

Science will never be perfect. Mistakes and preliminary results that later turn out to be wrong are an essential part of science. But many of the problems are fixable.

I find these issues incredibly interesting. At first I thought about writing a book about them, but I decided that starting a blog would be an easier task. So here it is, and hopefully it will offer some interesting insights into a debate that is crucial to science.

Before I end this introduction I want to make something clear: I'm not against science. In fact, I love science. It's the only way we have to reliably find out things about the world around us. It is great that these days we know so many weird things about the universe that generations before us couldn't even imagine. Sometimes the flaws of the scientific process are used by the proponents of pseudoscience. However, they offer no better alternative. Proponents of so-called alternative medicine want to replace science with personal experiences, creationists want to replace science with ancient books, others want to replace science with the latest woo woo they found somewhere on the Internet. None of these things offer a meaningful alternative to science. The only way to fix science is better science.