
Chocolate, weight loss, bad studies and bad journalism

On Friday a documentary about dietary and weight loss studies aired on the German-French television channel Arte. The authors of the documentary had run an experiment in which they created a fake study claiming that eating dark chocolate can help with weight loss. The study was done in cooperation with science journalist John Bohannon, who wrote about the experiment on io9.com.

They conducted a real study and published it in a journal, but it was obviously flawed in many ways. They then published a press release and created a webpage for an "Institute of Diet and Health" that doesn't exist. Shortly afterwards a number of media outlets started reporting on the study; the first big one was "Bild", the largest German newspaper.

There are a number of ways in which this study was flawed:
  • The study had 15 participants, which is a very low number for such a study.
  • The study was published in an obviously bad journal (International Archives of Medicine). There are a large number of such scientific journals that will publish just about anything if you pay them for it. Recently there was a case where someone successfully submitted a paper containing only the sentence "Get me off Your Fucking Mailing List" to such a journal.
  • In the documentary it is mentioned that during the measurements the participants in the control group received a glass of water before they were weighed, which biases the comparison by adding weight to the control group.
  • The authors were cherry-picking their results. They took a lot of measurements on the study participants, and by pure chance some of the values they measured would show a statistically significant improvement. This kind of flaw in scientific studies is best explained by this xkcd comic (a short simulation sketch below makes it concrete).

The last point is probably the most interesting, because it cannot necessarily be spotted in the final publication. One way to avoid it is the pre-registration of studies, together with their methodology, in public trial registers. There is increasing pressure to pre-register trials in medicine. Unfortunately, that debate has barely reached the field of nutrition, where study registration is rarely done at all.
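
To make the cherry-picking flaw concrete, here is a minimal simulation sketch. It is my own illustration, not the documentary's actual analysis; the group size and the number of measured outcomes are assumptions chosen for the example:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

N_PER_GROUP = 8     # assumed group size, in the spirit of the tiny chocolate study
N_OUTCOMES = 18     # assumed number of measured outcomes (weight, cholesterol, ...)
N_STUDIES = 1000    # repeat the whole fake study many times

lucky_studies = 0
for _ in range(N_STUDIES):
    for _ in range(N_OUTCOMES):
        # Both groups are drawn from the same distribution: there is no real effect.
        chocolate = rng.normal(size=N_PER_GROUP)
        control = rng.normal(size=N_PER_GROUP)
        p = stats.ttest_ind(chocolate, control, equal_var=False).pvalue
        if p < 0.05:
            lucky_studies += 1
            break  # cherry-pick the first "significant" outcome and stop looking

print(f"{lucky_studies / N_STUDIES:.0%} of pure-noise studies found "
      "at least one 'significant' result")
```

With 18 independent outcomes tested at p < 0.05, roughly 1 - 0.95^18 ≈ 60 % of such studies "succeed" by pure chance, and only the winning measurement makes it into the paper and the press release.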

The point of all this is of course that many real studies on nutrition aren't much better. While the whole story got a fair amount of praise, there was also a debate about the ethics of such a stunt. The questions at hand aren't so simple. Obviously the participants of the study were misled. However, it is not uncommon to mislead study participants about the real intent of the research; in psychology a lot of studies would simply be impossible to conduct otherwise.

Another point of criticism is that the study wasn't approved by an institutional review board. It would be an interesting question whether an ethics board would have approved a study whose sole intent was to expose flaws in journalism and in the scientific publication process.

My personal opinion is that the ethical issues raised by such a stunt are at best minor compared to the ethical issues with all the supposedly serious studies that get published all the time and have the same flaws.

The only issue I might have with the whole story is that I feel the reality is often even grimmer. I'm pretty sure that with more effort the study could have been published in a reputable journal; according to Bohannon, the fallback to an obviously fraudulent journal was due to the documentary's time constraints.

Often enough, media stories about health and nutrition (and about a lot of other things) aren't based on studies at all. It's not rare for these stories to be based merely on the opinions of individual researchers, preliminary lab research, or as-yet-unpublished studies.

I don't know if this was the source for the chocolate study idea, but three years ago the British Medical Journal had a publication about the positive effects of the ingredients of dark chocolate. Not only did that trigger a number of media reports, but the German Society of Internal Medicine (DGIM) also issued a press release seriously proposing that health insurers could cover the cost of dark chocolate for patients with metabolic syndrome. (Here's a talk by Gerd Antes mentioning this issue.)

These things happen on a daily basis, and they don't just happen in nutrition science.


Press releases exaggerate research, and journalists are happy to uncritically repeat the exaggerations

A study published today in the British Medical Journal investigates the often unhealthy relationship between biomedical and health-related studies, the press releases about them, and the resulting news articles. There's a widespread feeling among scientifically minded people that “the media gets it wrong”. That is hardly controversial, but it's always good to have some scientific data on the details. The study is titled “The association between exaggeration in health related science news and academic press releases: retrospective observational study”; the main authors are Petroc Sumner and Christopher Chambers.

The authors took press releases from 20 major UK universities and checked both the press releases and the resulting news articles for typical exaggerations in the field. They looked at three very common examples: claiming causation where the study only shows correlation, inferring conclusions about humans from animal studies, and giving practical advice about behavior change. There is one important limitation the authors point out: they didn't ask whether the studies themselves were already exaggerated; they only tried to measure the exaggerations that go beyond the study itself.

The main results are unsettling, but to be expected: press releases exaggerate a lot (between 36 % and 40 %). If the press release is exaggerated, journalists are much more likely to exaggerate as well (around 80 % for all three examples). If the press release does not exaggerate, there is still a substantial chance that the journalist will do so. Journalists especially like to exaggerate consumer advice.
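
As a rough back-of-the-envelope sketch of how these conditional rates combine: the first two figures below come from the summary above, while the rate for non-exaggerated press releases is my own assumed placeholder, since the result is only described here as a "substantial chance":

```python
p_pr_exaggerated = 0.38       # midpoint of the reported 36-40 %
p_news_if_exaggerated = 0.80  # reported ~80 %
p_news_if_clean = 0.15        # assumption standing in for "substantial chance"

# Law of total probability: overall share of exaggerated news articles.
p_news_exaggerated = (p_pr_exaggerated * p_news_if_exaggerated
                      + (1 - p_pr_exaggerated) * p_news_if_clean)

# Share of exaggerated articles that follow an exaggerated press release.
share_via_press_release = (p_pr_exaggerated * p_news_if_exaggerated
                           / p_news_exaggerated)

print(f"exaggerated news overall: {p_news_exaggerated:.0%}")             # ~40 %
print(f"traceable to the press release: {share_via_press_release:.0%}")  # ~77 %
```

Under these assumed numbers, most exaggerated articles would follow an exaggerated press release, which already hints at the "who's to blame" question below.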

More exaggeration does not mean more news articles

There is one result that is a bit more difficult to interpret: the authors found that whether or not a press release is exaggerated makes hardly any difference in media uptake. One has to be careful not to jump to conclusions here and make the same exaggeration mistakes this whole study is about. The finding could be read as a sign that science doesn't have to exaggerate in press releases to get media coverage. But another very plausible explanation is that the more interesting studies are less likely to be exaggerated, while the less interesting studies make up for that gap by exaggerating their results.

I wondered whether a causal relationship could be checked with a different study design. It would certainly be possible to run some kind of randomized controlled trial, though I'm not sure whether this would be ethical, as you'd have to deliberately produce exaggerated press releases to do so.

Who's to blame

Apart from the data, the study has already led to some discussion about who's to blame and what to do about it. Interestingly, both the study itself and an editorial by Ben Goldacre argue that scientists are to blame and should change. Both say they don't believe journalism will change (certainly something for me and my colleagues to think about).

Science journalist Ed Yong made a strong statement on Twitter, arguing that all the blame should go to the journalists: “We are meant to be the bullshit filters. That is our job.” I can't argue with that.

It's certainly interesting that the scientists seem to put the blame on science while the journalist blames his own profession. However, in the end I think there's no excuse either for writing exaggerated news articles or for issuing exaggerated press releases.

Ben Goldacre has some very practical suggestions for changing science press releases. He argues that, to improve accountability, press releases should carry the full names of both the PR people and the scientists responsible for writing them. He also proposes that press releases should be much more integrated into the scientific publishing process: they should be linked from the study itself and open to post-publication review and criticism from the scientific community. I think these are good ideas, though probably not sufficient to tackle the problem. (By the way, here is the press release about this study, and it is not linked from the study itself. They could lead by example.)