Your guide to nutrition nonsense part 3: Assessing scientific papers

Welcome to the third installment of our guide to avoiding nutrition quacks and food myths. This time, we’re focusing on scientific papers.

People who sell fad diets and unhelpful health hacks know that people want to see the evidence. So, they’ll often share links to papers that “back up” their claims.

After all, if they can’t point to any evidence that their product works, why should anyone believe them? 

But unless you’re trained in the art of science, making sense of scientific papers can be challenging or impossible. 

However, you can get a sense of how reliable the evidence is, even if you don’t have a Ph.D. in nutrition. Here, we’ll show you how.

Below are some simple things to look out for in a scientific paper. You won't have to wade through unwieldy acronyms or unpronounceable chemical names.

1. The date

Is this research old? If it’s, say, from the 1960s or 1970s, that’s a red flag. It doesn’t mean the research wasn’t conducted well, but science moves fast, and the consensus may have moved on since then. 

And if a product really is a wonder supplement (or whatever), there would be more recent evidence to fall back on.

2. Which journal?

Not all scientific journals are equal. Some of the big names, like JAMA, The BMJ, and Nature, need no introduction, but others are less reliable, and some are even fraudulent.

Respectable journals have a strict peer review process. Having many eyes on a manuscript before it’s published helps ensure that the findings are solid.

However, so-called predatory journals put cash flow above scientific rigor. They may have fake editorial boards, and they often publish incorrect or misleading studies.

It’s increasingly easy to set up a website, so there’s a growing number of dodgy online publications. 

If a paper is published in one of these outlets, it doesn't necessarily mean that the research is rubbish — some legitimate scientists get hoodwinked into publishing in these journals. However, it's another clue.

Here’s a handy list of hundreds of faux journals. Still, it's incomplete, so it's no fail-safe.  

3. Impact factor

A journal's impact factor measures how often papers in that journal are cited by other papers. The reasoning is that if other scientists are citing these papers, they must have merit.

For example, Nature has an impact factor of 69.5, JAMA's is 157.3, and The BMJ's is 96.22. 

These journals are big hitters. If a journal has an impact factor above 3, it’s considered good. And if it’s over 10, it’s exceptional.

Impact factor isn’t a perfect tool, but it’s worth checking. Just Google “[the journal's name] impact factor,” and it should come up.

Again, just because a journal has a low impact factor doesn't mean it's unreliable.

But if a paper makes big claims — for instance, that a simple remedy reverses obesity — you have to ask why it didn’t make it into a bigger publication. If a finding truly is that impressive, why is it in an obscure journal?

Here's a good example of a paper with so many red flags that you can disregard it:

The paper in question was published in September 2022. Although it focuses on concerns about COVID-19 vaccines, it appears in the Journal of Insulin Resistance. That's a bit of a mismatch — red flag number one.

This journal is quite new and relatively small. So, it doesn’t have an impact factor yet. As we’ve mentioned, this score has to be taken with a pinch of salt, but this is red flag number two.

The author of this paper is Aseem Malhotra. He's also a member of the journal’s editorial board.

On its own, perhaps you could let that slide — there's a fee to publish in open-access journals, not everyone has the funds to pay, and editorial board members may get a discount.

However, we’re now at red flag number three, making the paper less than credible. 

4. Participants

The first section of a scientific paper is called an abstract. It's a summary of what the scientists did and what they found, and you can glean a lot from it without getting into the weeds.

One thing to check is whether the study was carried out in humans, animals, or in a laboratory. 

If you find terms like “cell lines” or “cell culture,” the study was likely done in a lab — not in a living human or animal.

This type of research is important, but what happens in a test tube won’t necessarily happen in a complex collection of organs, tissues, and cells like you.

Animal studies are one step up from lab studies, but we can’t assume that a person will respond in the same way as a lab rat.

So, while they’re important steps in the scientific process, animal and lab studies can’t prove that X supplement will benefit you. 

5. What type of study?

There are many ways to do research, and different methods give us different types of information. They’re all important in different ways. We'll introduce some of the most common types below.

Meta-analyses and systematic reviews

The most reliable information comes from meta-analyses and systematic reviews.

This is where researchers collect information from other studies and analyze it all together.

So, seeing “meta-analysis” or “systematic review” in a paper's title or abstract is a good sign.

Double-blind, placebo-controlled

The gold standard of research is called a double-blind, placebo-controlled clinical trial. 

First, this means that the scientists compared whatever they were testing with a placebo or some other control.

Using a control is important because simply taking part in a study can make participants feel some kind of effect, good or bad — the placebo (or nocebo) effect.

A control might be a placebo, such as a sugar pill. Or it can be another active compound.

Comparing the ingredient of interest against something else helps scientists work out whether the effect they're seeing comes from the compound itself or from simply taking part in the trial.

Second, “double-blind” means that neither the participants nor the researchers know who is receiving the active compound and who is receiving the placebo. 

This means that the researchers can’t consciously or subconsciously influence participants’ reactions. 

It’s worth noting that in nutritional research, it’s not always possible to "blind" people. This is especially true when studying whole foods or dietary patterns. 

Still, looking out for the word “placebo” or “control” is useful. If the researchers aren’t comparing their intervention with anything, it’s difficult to draw conclusions.

For instance, imagine someone was investigating whether a supplement reduced the symptoms of flu, but they didn’t compare it to anything else — everyone took the same supplement.

At the end of the 2-week study, everyone in the trial felt better. If you were unscrupulous, you could say that the supplement improved flu symptoms.

But as we all know, the flu usually gets better on its own — after 2 weeks, most people would feel better anyway. 

If the scientists had compared their supplement against a placebo or another compound, they could have seen whether it performed any better than the alternative.

Alternatively, they could have given the supplement to only half of the participants to test how the groups' recoveries differed.


If you see the word "randomization," that's also a good sign. It means that the scientists assigned participants to the experimental and control groups randomly.

This helps reduce the risk of bias and makes the two groups roughly equivalent at the start.

Randomization also helps ensure that any changes that the researchers spot stem from what they're testing, rather than inherent differences between the groups.

Epidemiological studies

In epidemiological studies, scientists collect data from hundreds or thousands of people. 

This type of research can provide really useful information. However, it can only show correlation, not causation.

The results of epidemiological studies help guide scientists when they design randomized controlled trials, which can then show whether any links are causal.

6. Industry funding

Scientists need money to conduct research, and getting your hands on that cash can be challenging.

Food companies and supplement manufacturers often have deep pockets. They may want to know whether their products work and whether they can make specific claims in their ads and on their packaging.

There’s nothing inherently wrong with using industry funds — it’s how many scientific discoveries are made. However, it’s not always clear how much influence a company has over the results of a study.

So, if you find a study on a particular supplement that’s funded by the manufacturers of that supplement, it’s worth noting. 

Still, it doesn’t mean that the research was poor quality — if it’s published in a good journal, the paper will have gone through a peer-review process. 

To get an idea of affiliations, try clicking on the names of the authors at the top of the paper. This may show you where they work.

And there’s sometimes a section called “Funding,” “Acknowledgements,” “Disclosure,” or “Conflicts of interest.” These can show you other funding sources. 

7. Retracted?

Although this rarely happens, some papers do get retracted.

It might be because of misconduct or errors that make the paper's findings untrustworthy. 

Or, the scientists might have breached ethics guidelines or plagiarized work.

If you search the title of the paper on the National Library of Medicine's database — PubMed — it will have “Retracted” at the start, like this one. Or it might have a big red banner at the top, like this one.

If research has been retracted, no one should be using it as evidence.

The take-home

None of these checks on its own can tell you whether a particular paper backs up dubious claims.

But a few red flags together indicate that the evidence might not be up to scratch.

You don't need to delve deep into the paper to check for all of the red flags we list above.

And if the red flags build up, you should approach the product with caution.


Curing the pandemic of misinformation on COVID-19 mRNA vaccines through real evidence-based medicine - Part 1. Journal of Insulin Resistance. (2022). 

Journal Impact Factor: Its use, significance, and limitations. World Journal of Nuclear Medicine. (2014).

Potential predatory scholarly open-access publishers. (2022). 

Predatory journals: no definition, no defence. Nature. (2019). 

Randomized, double-blind, placebo-controlled, linear dose, crossover study to evaluate the efficacy and safety of a green coffee bean extract in overweight subjects. Diabetes, Metabolic Syndrome, and Obesity. (2021). 

Retracted: Exploring the medication pattern of Chinese medicine for peptic ulcer based on data mining. Journal of Healthcare Engineering. (2022). 

Retracted science and the retraction index. Infection and Immunity. (2011). 

The bogus academic journal racket is officially out of control. (2014).