
Do Your Own Research: Really?


In today’s world of accessible information and social media, we’re often told to (or want to) “do our own research”. It has become somewhat of a joke – the rallying cry of conspiracy theorists stuck deep in a rabbit hole of misinformation created by trolls (or hostile states).


In matters of health, misinformation abounds, sometimes from well-meaning people, sometimes from people who stand to gain from spreading it.

The thing is, there is valuable information to be found amongst all the noise. Can someone who is not an expert find it? How? To do this, you need to be able to evaluate the quality of the information, and this will often depend on its format.


Science Journalism


Most people with no science background will find medical information in the form of science journalism. In science journalism articles, subjects are simplified (sometimes oversimplified) for a lay audience. A science journalist will (in theory) read a scientific paper and interpret the findings in their own article.


In reality, that is not always quite what happens. Since more spectacular articles get read by more people, journalists often present the findings of the source paper far more confidently than the original authors did. This is then compounded by the tendency of other journalists covering the same subject to read another newspaper article rather than the (dry and technical) source material. Other shortcuts include reading only the abstract and conclusion and never looking at the methods or discussion, which means the journalist cannot tell whether the conclusion is actually warranted by the results and study design.


There is quality science journalism, but one has to keep the above caveats in mind when using newspaper articles as sources of science information. If the article is what is commonly called “clickbait” (with titles like X Things Your Doctor Won’t Tell You / Does Not Want You To Know About Y), chances are the information in it is very far from the original source’s; in some cases it can even arrive at the complete opposite conclusion from the source material or… be simply made up. Another rule of thumb: if something sounds too good to be true (Lose Weight With Our No-Exercise All You Can Eat Bacon Donut Diet)… it’s probably not true.


Curated Content


Another common source of information is content that is curated and moderated by users. Wikipedia, for instance, is a common one.


It may be surprising, but for most subjects Wikipedia articles are relatively accurate. Wikipedia writers and editors are volunteers who take pride in vetting sources for content, and they normally do a good job of keeping articles up to date and correct. Some pages are written (and/or vetted) by specialists in the relevant field. Other pages, however, especially on controversial subjects, do get vandalized – often by one or a few persistent, single-minded individuals who use multiple accounts to evade bans (the page on homeopathy, for instance, has been legendary for this).


Curated content sources are generally a good starting point for researching information. Good articles will link to scientific papers as references, which a reader can go and read for themselves.


Going to the source: science papers


Science journal articles are often a bit dry and technical, but they are the actual source of the information, written by the people who actually did the research. But here too one has to be careful. Not all research is equal: some papers published under the appearance of research can have sketchy methods that make their conclusions less solid than they seem, or can be outright fraudulent. Research journals themselves also vary in quality. An article is not automatically good simply because it was published in the format of a science journal.


Generally, a good research journal will publish only peer-reviewed articles. Peer review means that people in the same field as the author have read the paper, commented on it, and either accepted it, asked for modifications or clarifications, or rejected it outright. Peer review, while it does not always catch bad papers, is essential because the techniques of a given field are often so specific that only someone who has worked with them knows their caveats and weaknesses.


Real-life example: someone wrote a paper in which they used a technique called Raman spectroscopy to detect physical changes in homeopathic remedy samples. Anyone familiar with this technique knows one very important detail: it is very sensitive to contaminants. A knowledgeable reviewer would also notice that the author failed to purify their reagents correctly, and that the spectra pictured in the article show a contaminant commonly found in commercial solvents. This alone makes the author’s conclusions completely invalid, and the paper worthless. Normally the paper should have been rejected by a good reviewer (but in this particular case, it somehow slipped through and got published anyway).

But let’s say you’ve found your source paper, and that it is published in a good journal that practices peer review. How can you, as someone who is not an expert, get information out of it? For this you have to know the anatomy of a research article.


First you have an abstract or summary. This is the TL;DR of the article: it packs the essentials of the work and its conclusions into one paragraph. The abstract is a good starting point in reading. Often it contains all you need to know to check whether the newspaper piece that sent you to it reported the findings correctly. In the case of clinical studies, it often also contains a very important detail the newspaper almost never reports: the confidence the authors have in their results, in the form of statistical certainty. This is where the tone of the newspaper article will diverge from the science paper: science never states things as certain, it always qualifies them with statistics. If you want to be able to evaluate the quality of most clinical studies, the subject to study is statistics.
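
To make that last point concrete, here is a minimal sketch in Python, using entirely made-up numbers, of the kind of statistic hiding behind a typical clinical result: a difference in disease rates between two groups, reported with a 95% confidence interval rather than as a flat yes or no.

```python
from math import sqrt

# Hypothetical numbers: 40 out of 1000 people taking vitamin X got disease Y,
# versus 55 out of 1000 in the control group.
treated_cases, treated_n = 40, 1000
control_cases, control_n = 55, 1000

p_treated = treated_cases / treated_n
p_control = control_cases / control_n
risk_reduction = p_control - p_treated  # absolute difference in disease rates

# Normal-approximation 95% confidence interval for the difference in rates.
se = sqrt(p_treated * (1 - p_treated) / treated_n
          + p_control * (1 - p_control) / control_n)
low, high = risk_reduction - 1.96 * se, risk_reduction + 1.96 * se

print(f"Risk reduction: {risk_reduction:.3f} (95% CI: {low:.3f} to {high:.3f})")
# If the interval includes 0, the data are still compatible with no effect at all,
# which is exactly the nuance a headline tends to drop.
```

With these invented numbers the interval includes zero, so the honest summary would be “we saw a small difference, but we cannot rule out that the vitamin does nothing”, which is a far cry from a confident headline.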


In the body of the paper, you first have the introduction. The introduction presents the context for the research, often with a short review of previous work. It is usually relatively easy to read, even for a non-expert, and worth a look for someone who wants to learn a bit more about the field, because you will sometimes find references to what are called “review” papers in it. Review papers are overviews of the state of the research on a subject, written by leading specialists.


Then you have the methods. This is the part that is very difficult to interpret for someone who is not in the field, but which should have been vetted by reviewers. It describes the study design, essentially how the author set out to do the research, and why.


Next are the results. This part is interesting if you know how to read graphs.


Finally, there are the discussion and conclusion. The discussion can be a bit difficult to go through, but it is often a very important part of a paper. This is where the authors will discuss what the results might mean, and why they are arriving at their chosen conclusion. Often this is where the difficulties and possible problems with the research will be discussed in detail – another part you will not find in newspaper articles.


Example: you have a study about the effect of taking a particular vitamin in preventing a disease. In the conclusion, there is a nicely statistically significant result: yes, vitamin X does prevent Y. But hidden in that pesky discussion section are the factors that the authors know they failed to control for. This might mean the conclusion is a lot less certain than it seems at first sight, but the newspaper article, written by someone who wants clicks and views and does not take the time to read dry discussions about study design, presents this as “Vitamin X Prevents Y”.


This is why people often get the impression that science “contradicts itself” – the scientist is saying “I saw something that means X is really good for Y, we need to look into it because Z might also have something to do with it”, the journalist quotes “X is really good for Y”. Then when further work comes along and shows “never mind, looks like this was mostly due to Z, which previous research hadn’t controlled for”, the line becomes “X is useless, actually” because it will get clicks.
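
To see how an uncontrolled Z can fool everyone, here is a tiny simulation sketch, again in Python with invented numbers, in which vitamin X has no effect at all on Y, yet appears protective simply because the people who take X also tend to have Z.

```python
import random

random.seed(0)

# Hypothetical setup: vitamin X does nothing. A hidden factor Z (say, a generally
# healthy lifestyle) makes people both more likely to take X and less likely to get Y.
n = 100_000
with_x = without_x = 0
cases_with_x = cases_without_x = 0

for _ in range(n):
    z = random.random() < 0.5                    # does this person have factor Z?
    x = random.random() < (0.7 if z else 0.2)    # Z makes taking vitamin X more likely
    y = random.random() < (0.05 if z else 0.15)  # Z (not X) lowers the risk of disease Y

    if x:
        with_x += 1
        cases_with_x += y
    else:
        without_x += 1
        cases_without_x += y

print(f"Disease rate among X takers:   {cases_with_x / with_x:.3f}")
print(f"Disease rate among non-takers: {cases_without_x / without_x:.3f}")
# X looks protective, but only because the people who take X tend to have Z.
```

Run it and X looks clearly protective, even though by construction it does nothing; only a study that controls for Z would reveal that.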


Should I “do my own research” then?


Yes, I think more people should get interested in science and educate themselves despite all this. But for it to work, you have to be aware of the limits of your knowledge and understand that Google, Facebook and YouTube do not replace a science or medicine education. Sometimes you will not be able to tell whether a piece of information is good or not, and that is fine as long as you are aware of it when the time comes to act on it. In matters of medicine, it’s always best to discuss the actions you want to take based on information found online with your doctor.
