Does That Health Advice Seem Too Good to Be True? Here’s Why


Is milk good for you or bad for you? Scientists and health journalists have been sending mixed signals about the question for years. Every aspect of it, from the skim vs. full-fat debate to the claim that all that calcium makes it good for your bone health, has ridden the roller coaster of scientific opinion.

That’s why when we saw the bold, clear title of a report called "Milk and Dairy Products: Good or Bad for Human Health? An Assessment of the Totality of Scientific Evidence," we were ready to rejoice. Its summary concluded that “scientific evidence supports that intake of milk and dairy products contributes to meeting nutrient recommendations and may protect against the most prevalent, chronic non-communicable diseases, whereas very few adverse effects have been reported.” Has the answer to all of our questions finally arrived? Not so fast.

First, we looked at the paper’s “Competing interests and funding” section. There we found an acknowledgement that three of the review’s six authors had received research funding from something called the Dairy Research Institute, a registered trademark of Dairy Management Inc., an industry group whose stated purpose, according to its website, is "to increase demand for dairy through research, education, and innovation." To their credit, the authors acknowledged their conflicts of interest, saying the sponsors had no input in their research. But should we take their word for it?

“I don’t question the notion that milk has health benefits,” says New York Times health and nutrition reporter Anahad O’Connor, "it’s just that when the list of competing interests and funding is so long, it’s practically a section unto itself, you look at the studies reviewed and they're also industry funded, and the fact that they have a section on dairy alternatives that plays down any health benefits, it seems like this is a paper that raises some red flags.”

The battle with Big Sugar is perhaps our best example of what happens when we take industry-funded health studies at face value. Remember when saturated fat was a risk factor for heart disease, and sugar wasn’t? There was a conflict of interest there that is now well documented.

The main issue with industry-funded research is not so much the quality of the work but rather how scientists ask the initial research question and how they interpret the results, says Marion Nestle, a professor of nutrition and food studies at New York University. “The science could be fine,” she says, “but industry-funded studies tend to get spun to favor the sponsor.”

When scientists design a study, “they’re the ones deciding where the goalposts are going to be,” says O’Connor. Analysis of the funding sources of papers and their conclusions has found that research paid for by industry is more likely to turn up results that are good for the sponsor, and researchers funded by the beverage industry don’t find the links between soda consumption and obesity that independently funded researchers do. “It’s such a stark contrast that it’s shocking,” O’Connor says.

Michael Moss, the Pulitzer Prize–winning journalist and author of Salt Sugar Fat, says he has known scientists who took money from Coca-Cola but readily bit the hand that fed them. “I don’t rule out industry funding as a deal breaker,” he says, but it’s important to consider what a study isn’t telling you. An industry group may fund research on a very specific aspect of its product that it’s confident it can use to slap a health claim on the label, but what’s relevant to consumers is the product as a whole.

When it comes to looking at all studies on a subject and drawing conclusions about what it means people should eat, “this is really rocket science stuff," says Moss. "It's very difficult for even the most well-intentioned people.”

So what’s a consumer to do? Watch for signs of trouble in studies, in reports from journalists (not all of us are looking into conflicts of interest all the time), and especially in corporate press releases and social media accounts. Here are some of the top things that set off our alarm bells.

  1. The conclusions are sweeping. Actual big conclusions take decades of research.
  2. The sources of research funding include groups named after the thing being studied.
  3. The study was entirely observational. That is, it was documenting behavior and not asking people to change. You can’t draw conclusions about what’s causing what, and it’s possible to find all sorts of spurious associations.
  4. Researchers relied on people's memory. Quick: What did you have for lunch last week?
  5. The study was just looking at a snapshot in time. Say the research was done the week after Thanksgiving. “Suddenly you’re considered a very high consumer of turkey or sweet potatoes,” O’Connor says, and that skews results.
  6. Researchers only studied a small number of people. How small is too small depends on the type of study, but larger experiments generally are less likely to turn up fluke results.
  7. 50 percent of what? The conclusions say eating a certain food will decrease your risk for, say, heart disease by 50 percent, but don’t say what the baseline risk actually is. If your risk for heart disease was low in the first place, the food’s effect would be less impressive than “cut in half” sounds.
  8. The paper is the only example of research with such conclusions. If he had the power, Moss says, “I would ban dissemination of all scientific studies until they’d been replicated independently.” While that’s not realistic, you can be more confident the results of a study hold up if other scientists ran the exact same experiment and got the same results.
  9. The dramatic results were in rats. Not humans. 
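To see why the baseline in point 7 matters, here is a minimal worked example. The numbers are illustrative assumptions, not figures from any study:

```python
# Illustrative numbers only: suppose a headline says a food
# "cuts heart-disease risk by 50 percent" (a relative reduction).
baseline_risk = 0.02       # assumed baseline: 2 in 100 people affected
relative_reduction = 0.50  # the headline "50 percent" figure

new_risk = baseline_risk * (1 - relative_reduction)
absolute_reduction = baseline_risk - new_risk

# Risk falls from 2% to 1%: "cut in half" is true, but the
# absolute benefit is just 1 fewer case per 100 people.
print(f"Risk falls from {baseline_risk:.0%} to {new_risk:.0%}")
print(f"Absolute reduction: {absolute_reduction:.0%}")
```

The same "50 percent" headline would describe a far bigger real-world benefit if the baseline risk were 20 percent instead of 2, which is why a relative figure without the baseline tells you very little.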
