Eight secrets to making sense of scientific studies
A practical guide to separating flimsy research from solid science.
- 23 September 2025
- 7 min read
- by Linda Geddes

Every year millions of scientific studies are published. Some of them are genuinely groundbreaking, with far-reaching consequences for medicine, technology, or other areas of science.
However, many more are exploratory in nature, with findings that may be overturned or refined by subsequent research, while a small proportion suffer from flawed methodologies or subpar science.
Unfortunately, the internet is rife with scientific-sounding claims that either misrepresent the findings of well-conducted studies or are based on poor-quality research.
Though deciding which research is trustworthy might seem daunting, the good news is you don’t need a science degree to identify which studies are worth your attention and which ones belong in the bin. Here are eight simple rules for spotting good science, and bad.
1. Where was the research published?
A good starting point is the journal a study has been published in. The best journals send every paper they're considering publishing to experts in the same field, who read it carefully, spot weaknesses and suggest improvements. If the paper doesn’t meet the journal’s standards it's rejected or sent back for revision.
Although this ‘peer-review’ system isn’t foolproof, it’s the best system science currently has for filtering out weak or misleading studies.
A quick search should reveal if a journal sends its articles for peer review. You can also look at a journal’s ‘impact factor’ – how frequently its papers are cited by other researchers, on average. And you can try pasting the title of the study into Google Scholar, and look at how often this individual study has been cited.
Be particularly wary of studies that have been published as pre-prints. These haven’t yet been vetted by experts, which means their methods, data and conclusions may contain errors, overstatements or flaws that mean they’ll never be published in a reputable journal.
Although pre-prints enable exciting findings to be shared quickly, they can also spread misinformation.
2. How long ago was the study published?
Science moves quickly. As methods improve and new evidence accumulates, findings may be revised or overturned entirely.
This doesn’t mean older studies are worthless, but if a study was published more than five to ten years ago, you should investigate whether its findings have been confirmed by later studies and separate research groups.
One approach is to look at recent reviews and meta-analyses, which pull together results from many different studies and can help make sense of conflicting findings. Cochrane Reviews are considered the gold standard for such assessments in health care.
3. Who are the authors?
A ‘Dr’ or ‘Professor’ before someone’s name doesn’t guarantee expertise in the specific topic the article addresses.
Lack of relevant experience can raise questions about the quality of the research, particularly if their findings contradict established research or appear in a less reputable journal.
You can check an author’s previous publications and expertise by searching a database such as PubMed or ResearchGate, or simply googling them.
Pay particular attention to the first and last authors listed on the paper: the first author will typically have conducted most of the experiments and written up the manuscript, while the last author is usually a senior researcher who has supervised and overseen the study.
Also watch out for potential conflicts of interest, such as employment by or funding from a company or organisation that might benefit from the findings. While this doesn’t automatically invalidate the research, it is important to interpret the findings with this in mind.
Authors’ affiliations are typically listed at the top of an article, while most journals also include a separate funding or acknowledgements section as well as a conflicts of interest statement, often near the end.
4. Who or what are the study subjects?
Most medical research is carried out in cells, animals or humans.
Studies in cells are very useful for uncovering biological mechanisms, but don’t reflect the full complexity of living organisms.
This is why researchers typically progress to animal studies and then to clinical trials, which assess the safety and effectiveness of treatments in humans.
Sometimes compelling-sounding research cited in the media or online refers to cell or animal experiments, which may not necessarily apply to humans.
Even studies in humans can be misleading if participants don’t reflect the wider population. Sometimes researchers focus on narrow subgroups for valid reasons – such as testing whether they are more likely to benefit from a specific treatment – but this could mean the findings are irrelevant for anyone else.
Also watch out for epidemiological research, which studies patterns of disease in populations. Although it can be a powerful way of spotting trends and potential risk factors, it has limits.
A common pitfall is confusing correlation with causation: just because two things occur together doesn’t mean one directly causes the other. For example, both ice cream sales and shark attacks increase in summer, but eating an ice cream probably doesn’t increase your risk of being bitten by a shark.
Further experiments in cells, animals and humans will usually be needed to ascertain whether correlations do involve causation.
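The ice cream example above can be made concrete with a toy simulation. The numbers below are entirely invented for illustration: a hidden confounder (temperature) drives both ice cream sales and shark-attack exposure, so the two end up strongly correlated even though neither causes the other.

```python
import random
import statistics

random.seed(42)

# Invented daily data: temperature drives both ice cream sales and
# beach visits (a proxy for shark-attack exposure). Neither variable
# causes the other, yet they become correlated through the confounder.
temps = [random.uniform(5, 35) for _ in range(365)]
ice_cream = [t * 10 + random.gauss(0, 20) for t in temps]
shark_risk = [t * 0.5 + random.gauss(0, 2) for t in temps]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(ice_cream, shark_risk)
print(f"Correlation between ice cream sales and shark risk: {r:.2f}")
```

The correlation comes out strongly positive, yet deleting either variable would have no effect on the other: only the shared driver (temperature) links them.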
5. How big is the sample size?
In general the smaller a study’s sample size, the less confidence you can have in its findings.
Sometimes, small studies are necessary, such as in rare medical conditions, and good researchers will usually explain these limitations.
Small studies can also provide a useful starting point for further research, but their results should be treated with caution until confirmed in larger trials.
Studies that include thousands or hundreds of thousands of participants are far more likely to produce reliable conclusions.
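The reason bigger samples are more trustworthy can be seen in the arithmetic: the uncertainty around an estimate shrinks roughly with the square root of the sample size. Here is a minimal sketch using the standard 95% margin-of-error formula for a proportion (the sample sizes are illustrative, not from any real study):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for an observed proportion p
    in a sample of n people (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# A study observing a 50% rate, at four illustrative sample sizes:
for n in (20, 200, 2000, 200_000):
    moe = margin_of_error(0.5, n)
    print(f"n = {n:>7}: estimate = 50% ± {moe * 100:.1f} percentage points")
```

Going from 20 participants to 2,000 shrinks the uncertainty roughly tenfold, which is why a finding from a handful of people can vanish when a larger trial repeats the experiment.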
6. Is there a control group?
In clinical trials, results from participants should be compared with those from a ‘control group’: a group of individuals who haven’t been exposed to the treatment or condition being tested.
To reduce bias, participants shouldn’t know whether they’re in the test or control group – this is referred to as being ‘blinded’ – and in a double-blind study, even the researchers are unaware which participants are in which group until the very end.
However, depending on the type of treatment being studied, blinding isn’t always possible – or ethical.
7. Do the results support the conclusions?
This is probably the trickiest thing to check if you’re not an expert in the field, which is why checking the journal’s reputation and seeking out review papers that synthesise all the evidence on a particular research question is so important.
Even so, there are some clues to look out for.
The biggest clue is whether the authors have actually measured what they claim to have measured, rather than stretching the data.
Good researchers will frame their results in language such as ‘more likely’ and ‘less likely’, or ‘increased risk of’ and ‘decreased risk of’.
They will also be clear about the limitations of their study. If a researcher is making bold claims about the implications of their research for a specific disease or condition, this should be a red flag.
It is also worth questioning whether the effect being reported has any real-life relevance: results can be statistically significant, but the difference might be so small that it is unlikely to have practical implications.
One thing to be aware of in media reporting of scientific findings is the difference between ‘relative’ and ‘absolute’ risk. If a drug increases the risk of a specific side effect from 1 in 1,000 people to 2 in 1,000 people, that’s a 100% increase in relative terms, but the absolute risk of experiencing that side effect is still only 0.2% – a rise of just 0.1 percentage points.
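Running the numbers from that example makes the gap between the two framings clear:

```python
# Worked example: a side effect rising from 1 in 1,000 people
# to 2 in 1,000 people.
baseline = 1 / 1000
with_drug = 2 / 1000

relative_increase = (with_drug - baseline) / baseline   # doubling = 100% rise
absolute_increase = with_drug - baseline                # 0.1 percentage points

print(f"Relative risk increase: {relative_increase:.0%}")   # 100%
print(f"Absolute risk with the drug: {with_drug:.1%}")      # 0.2%
print(f"Absolute risk increase: {absolute_increase:.1%}")   # 0.1%
```

A headline quoting the relative figure sounds dramatic; the absolute figure tells you how likely the side effect actually is for any one person.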
8. What are other researchers saying about this paper?
Science relies on collective scrutiny, so when a researcher makes a bold or controversial claim it’s worth seeing how others are responding to it in the general media, specialist journals and on social media.
Fact-checking sites and health organisations may also weigh in on controversial studies, providing valuable context or warnings: listen to their guidance.