Now, more than ever, there is so much misinformation out there about health. Falling for bad information about research studies can lead you to make choices that harm your health, either by convincing you to skip useful health care or, worse, by steering you toward something actively harmful.
How can you know the difference?
Here's a checklist of things to look at as you evaluate research headlines you come across online. Read on to learn more about each one.

- Check your own bias – I know, I know: you think you’re totally even-handed in your evaluation of research. But we all have biases, and it’s worth taking a moment to ask yourself “What am I expecting to read here?” and “How invested am I in having this article back up my current views?” You might be surprised to find you already have strong feelings about what you’re about to read.
- Check the source – This might be the author of the article, the publication, the friend who shared it, or all of these. Are any of them trying to sell you something? Trying to make a political point? If so, factor that into your assessment.
- Check the article after the headline – Unfortunately, many people share articles online, sometimes even with inflammatory commentary, without reading the actual article. Don’t be that person. Read the whole article before you share so you don’t perpetuate bad info or embarrass yourself.
- Check the original source – Some news sources report on research studies based only on a press release, which was already exaggerated to grab attention. When the publication then pumps the story up again to grab yours, it can end up very distorted. Reliable news sources will include enough information about a research study that you can track down the real thing and see if it’s a match.
- Check the experts – The article should include perspective and quotes from a reliable expert in the same field, one who is NOT affiliated with the research study. These outside voices can be very telling, and often point out important concerns and limitations of the research.
- Check the context – Research studies should always be viewed in the context of the *entire body* of research. Yes, it’s true that a single study can sometimes call the entire body of research into question, but any study that runs contrary to the body of work on that topic should be viewed with extra scrutiny and should hold up to replication in future studies.
- Check the numbers – In this era of chasing virality, results routinely get overstated. Beware of pilot studies with very small numbers (see the simulation sketch after this list), studies that imply a correlation means one thing caused the other, and studies that use phrasing like “approaching significance” to describe results that don’t reach statistical significance.
- Check the population – The study should be done either on a large, broad sample of the general population, with few exclusions, or on a population that matches the one it’s being applied to. Sometimes it’s obvious: A study done on rats cannot be assumed to apply to humans. Sometimes it’s less obvious: A study done on first-time parents giving birth in hospitals in Taiwan probably cannot be accurately applied to a fourth-time parent planning a home birth in Amsterdam.
- Check the risk wording – One really easy way to make tiny risk numbers look more impressive is to use multipliers – and often they’re rounded up to get there. Don’t just look at relative-risk words like “double the risk!” or “half the chance!” – look at the actual numbers. If those numbers are not included, be very, very wary. 1 in 10 million is *technically* double 1 in 20 million, but the actual risk is very small either way (see the arithmetic sketch after this list).
- Check your own bias again – We are all susceptible to “confirmation bias” – the tendency to more easily accept things that agree with beliefs we already hold. Take a minute to think back on your reaction as you read about this new research. Did you find yourself thinking “I KNEW IT!” or “There’s no way this is right, I just have to find where they screwed up”? Those are signs you’ve fallen for confirmation bias.
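
About those small pilot studies: here’s a minimal simulation sketch in Python showing why they deserve extra suspicion. Everything in it is made up for illustration – the 30% recovery rate, the 10-person arms, and the names are hypothetical, not from any real study. It runs thousands of fake two-arm pilots where the treatment does nothing and counts how often chance alone produces a gap big enough for an exciting headline.

```python
import random

random.seed(42)  # fixed seed for reproducibility; any seed shows the same pattern

TRUE_RATE = 0.30     # hypothetical: BOTH groups recover 30% of the time
N_PER_GROUP = 10     # a tiny pilot-study arm
TRIALS = 10_000      # number of simulated do-nothing pilot studies

big_gaps = 0
for _ in range(TRIALS):
    # Count recoveries in each arm; the "treatment" changes nothing.
    treated = sum(random.random() < TRUE_RATE for _ in range(N_PER_GROUP))
    control = sum(random.random() < TRUE_RATE for _ in range(N_PER_GROUP))
    # A 20-percentage-point gap (2 people out of 10) sounds headline-worthy.
    if abs(treated - control) >= 2:
        big_gaps += 1

print(f"Do-nothing pilots with a 20+ point gap: {big_gaps / TRIALS:.0%}")
```

Close to half of these do-nothing pilots show a “dramatic” 20-point gap by chance alone. With a thousand people per arm, the same gap would be vanishingly rare – which is why sample size matters so much.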
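
And here’s the relative-versus-absolute-risk arithmetic in Python, a minimal sketch using the same made-up 1-in-10-million and 1-in-20-million figures from the risk-wording item:

```python
# The made-up figures from the "check the risk wording" item above.
baseline_risk = 1 / 20_000_000   # 1 in 20 million
doubled_risk  = 1 / 10_000_000   # 1 in 10 million

relative_risk = doubled_risk / baseline_risk       # the headline number
absolute_increase = doubled_risk - baseline_risk   # the number that matters

print(f"Relative risk: {relative_risk:.0f}x baseline ('double the risk!')")
print(f"Absolute increase: {absolute_increase * 1_000_000:.2f} extra cases per million people")
```

The headline “double the risk!” is technically true, yet the absolute increase works out to 0.05 extra cases per million people – one extra case per 20 million. That’s why the actual numbers belong next to any multiplier.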