Every morning over breakfast, I scan the newspaper headlines for health stories. I do this not just for my own edification, but because I know that if there’s something juicy in the headlines, my phone will soon be ringing off the hook with questions about it. As a consumer health reference librarian, it’s my job to help people find answers.
If you read this magazine, you probably have a lot in common with my callers. You’re proactive about your health. You try to eat right and exercise, and you follow health stories in the news in an effort to stay abreast of the latest research. You’re also a critical thinker who makes your own decisions, and wants to make sure you’re getting good, solid information. So when you see a statistic in a popular magazine, or read a blurb about how studies show this or that fact to be true, you sometimes wonder about the research on which those conclusions are based. Maybe you’d like to know more about the studies or evaluate the research yourself. Maybe you’d like to know whether a study is objective and reliable, or whether it is really relevant to your own concerns.
You might be able to glean some of this information by reading further in the article. But to get all the facts, you may need to go back to the original research report. How do you get your hands on a medical or scientific study? Once you do, how do you weigh its strengths and weaknesses and make sense out of all those charts, tables and impossibly long words?
While it can be challenging, the good news is you don’t need to be a math whiz or have a science degree to make sense out of most health-related research! To prove the point, let’s walk through the process step-by-step, from reading an intriguing story in the news to obtaining and evaluating the study itself.
Medicine Makes News
“PICKLE JUICE CURES FOOT FUNGUS” screams the headline. Sounds intriguing, but if you’re smart, you’re not going out to buy pickle juice – at least, not without some further reading. “Newspaper headlines are good for alerting you to the topic of a study, but not for reporting its conclusions,” says Barbara Gastel, a physician and professor of journalism at Texas A&M University. “Headlines are written by copy editors who may be in a hurry and in need of a ‘hook.’ Since they haven’t had a chance to research the topic carefully, they may inadvertently distort something. Specifically, since the function of a headline is to attract attention to the article, a certain amount of exaggeration may creep in.”
Remember the “telephone game” you played as a kid? A sentence was whispered from person to person down the line, and ended up hopelessly (and often hilariously) mangled at the end. The translation from scientific study to news article to headline can sometimes resemble that process. If you read the article more carefully, however, you may notice – way down at the bottom – some suggestion that the scientist’s conclusions are a bit more modest than the news headline advertised. It might say something like: “Pickle juice appears to kill foot fungus in wombats; more study is needed before applying these results to humans.”
Depending on the quality and depth of the article, you might also get some information about the scientists responsible for the research, and their affiliation. Does the article mention where and when the study has been or will be published? Or has it been published at all?
Be a little more cautious if the research was just presented at a conference or meeting. “At conferences,” Gastel points out, “researchers often present preliminary findings that can be quite tantalizing, but that may or may not pan out.” If a study is accepted by a peer-reviewed journal, explains Gastel, that indicates it’s undergone close review by other experts in the field. This doesn’t necessarily mean it’s right, she cautions, but it does mean it’s plausible.
“As a reader,” notes Gastel, “I would put more stock in a study in the Journal of the American Medical Association (JAMA), the New England Journal of Medicine or Annals of Internal Medicine than something that was merely presented at a conference or touted in a daily newspaper.”
While general-interest publications may accurately report stories, explains Gastel, they sometimes get less scientifically rigorous material to begin with. In their interviews with general-interest journalists, researchers will sometimes make broader claims about the importance of a study than they would in scientific literature. They know that peer-level scientists will read their work critically and take issue with any flaws or exaggerated claims, while reporters or the general public may lack the skills to catch such subtleties.
Beyond evaluating the details of the study, you also have to consider who is choosing to publish and support it. Jonathan Marks, a molecular anthropologist at the University of North Carolina, notes that “science has been radically transformed in the last generation into a business, so that virtually every statement is a grant proposal, or an advocacy for one thing or another.”
That means that the article you read in the newspaper may actually be a press release directly from the public relations department of the university or company at which the study was conducted; they of course have an interest in the research and the institution being viewed favorably by the public.
If you want to gauge a study’s credibility, take a look at who funded it. Corporate-funded studies aren’t necessarily deceptive bunk, but it pays to consider the credibility of the source and the role its agenda may have played – both in the study itself and in the publicity strategy around it. Studies funded by corporations or organizations that stand to benefit from the results deserve more scrutiny than those conducted by independent or governmental organizations.
Going to the Source
One of the most common questions I hear about medical journal articles is “Why can’t I get the whole thing on the Internet?” This complaint comes from frustrated clients who have finally found a citation or abstract for the study they are interested in but who then – click as they might – are not able to see the full text. The reason for this: copyright and profit. The medical journals sell expensive subscriptions and have no incentive to give their information away for free. That said, there are some journals that do; check the Web sites listed in the Resources section for suggested links.
Often, the very best places to get help tracking down studies (and information about studies) are libraries. Your local public library may subscribe to major journals like JAMA and the New England Journal of Medicine. Many have online full-text databases that are available to anyone with a library card, and most offer interlibrary loan services. If you are near a medical school or teaching hospital, you may be able to get access to their libraries. There are also consumer health libraries – medical libraries designed for the general public. There are more than 200 of these around the country and their numbers are growing (see the Resources section for directories).
Sifting Out the Science
So let’s say you finally get your hands on the original pickle-juice study. If all those tables and charts and numbers are starting to intimidate you, relax! Once you’ve learned the general layout of a research article and understand a few key statistical concepts, it’s not difficult to start picking it apart.
One of the most important sections of a research article is the methods section, which describes how the study was designed, how many subjects were included, how outcomes were measured, and so on. Pay close attention here: a flawed study design or a too-small sample size will render any conclusions suspect.
Not all studies are alike. Some types of studies provide more reliable conclusions than others. According to Richard K. Riegelman, professor of Epidemiology-Biostatistics and Medicine at the George Washington University School of Public Health, the three most important types of studies are: randomized clinical trials; longitudinal or cohort studies; and case control studies. “What they all share,” he says, “is having a comparison or control group. This concept of making a comparison is key.” Riegelman explains the different types of studies this way:
- Randomized clinical trials, also called randomized controlled studies (or RCTs), are the gold standard in medical research. They consist of two groups of people, whose characteristics are as similar as possible. Subjects are randomly assigned to groups. The best studies are “double blinded,” meaning that codes are used so that neither the subjects nor the researchers know who is in which group. One group is given the treatment, and one is given a placebo, or fake treatment. Then the groups are followed to see what happens.
- Cohort studies follow two groups of similar people over an extended period of time. No intervention or treatment is provided.
- Case control studies are not as reliable as either of the previous two but sometimes are the only way to begin studying a new disease or a localized outbreak. In these studies, people who already have a certain condition are compared with similar people who do not. Because these studies work backwards in time, there are more opportunities for bias to creep in. The people who are sick may remember things differently than the others, or some of the people who did have the disease may have died already and thus not been counted in the study, etc.
Weighing the Value
Once you’ve evaluated the structure of the study, you should again review the methods section to see how many people were studied and how they were chosen. You want to try to determine if there were any biases in how the groups were selected that might account for the results.
The number of people studied is also a key factor in how much confidence the results deserve. RCTs are generally done with a few hundred to a few thousand people. “What they’re looking for,” explains Riegelman, “is something called ‘statistical power.’” What you’re looking for, of course, is a study big enough to demonstrate a significant effect. A study of a dozen people probably can’t tell you much – the margin for error and variability is too great.
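For readers who like to tinker, here is a rough Python sketch of why sample size matters. All the risk rates are invented for illustration (10 percent of treated subjects get fungus, 20 percent of controls); the point is only that small trials give wildly variable answers while large ones cluster near the truth:

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def simulated_differences(n_per_group, trials=1000):
    """Estimate the risk difference (control minus treated) across many
    simulated trials. True rates – 10% treated, 20% control – are
    invented for illustration, not taken from any real study."""
    diffs = []
    for _ in range(trials):
        treated = sum(random.random() < 0.10 for _ in range(n_per_group))
        control = sum(random.random() < 0.20 for _ in range(n_per_group))
        diffs.append((control - treated) / n_per_group)
    return diffs

small = simulated_differences(6)    # a dozen subjects in all
large = simulated_differences(500)  # a thousand subjects in all

# Tiny trials bounce all over the place (sometimes even suggesting the
# treatment causes harm); big trials cluster near the true 0.10 gap.
print("12-subject studies:  ", min(small), "to", max(small))
print("1000-subject studies:", min(large), "to", max(large))
```

Run it a few times with different seeds and the lesson holds: the dozen-person studies scatter widely around the truth, which is exactly why researchers insist on statistical power.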
At last, you get to the results section. This is where you discover what the study determined. It’s also where you’ll find all the numbers and tables. Don’t freak out! You don’t have to know how to do all the calculations yourself to get some meaning out of these figures. According to Riegelman, you should have three key concerns about the results presented: “Is it real?” “Is it big?” “Does it apply to me?” Let’s address these questions one at a time. First, you want to determine whether the effect the researchers found is real, or whether it could have happened by chance. This is what researchers are trying to determine with their statistical tests.
Statistical significance is “upside down and backwards logic,” Riegelman explains. “We are taught in most areas to build a conclusion from evidence. With a statistical-significance test, we are seeking proof by elimination.” What he means is that statisticians attempt to rule out coincidence and error as explanations for the result. The word “significant” here doesn’t mean “important”; it refers to how unlikely it is that the result arose by chance alone. The key number to look for is the p-value: the smaller it is, the less likely the result is a fluke. By convention, the p-value for any given test should be less than .05, meaning there is less than a 5 percent probability that a result this large would appear by chance alone.
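To see Riegelman’s “proof by elimination” in action, here is a hedged Python sketch of one standard approach, a permutation test. The trial numbers are invented for illustration, and real studies usually apply formal statistical tests rather than this brute-force shuffle, but the logic is the same: assume the treatment does nothing, and ask how often chance alone produces a gap as big as the one observed.

```python
import random

random.seed(42)  # fixed seed for reproducibility

# Invented trial numbers: 10 of 100 pickle-juice drinkers got fungus,
# versus 20 of 100 in the placebo group.
observed_diff = 20 / 100 - 10 / 100  # a 10-percentage-point gap

# Null hypothesis: pickle juice does nothing, so the 30 fungus cases
# among 200 subjects fell into the two groups purely by chance.
# Shuffle the outcomes many times and count how often chance alone
# produces a gap at least as large as the one observed.
outcomes = [1] * 30 + [0] * 170
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(outcomes)
    treated, placebo = outcomes[:100], outcomes[100:]
    if abs(sum(placebo) - sum(treated)) / 100 >= observed_diff:
        extreme += 1

p_value = extreme / trials
print(f"Estimated p-value: {p_value:.3f}")
```

If the printed p-value comes out under .05, chance alone would rarely produce a gap this big, and the result would conventionally be called statistically significant.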
Next step: Are the results big enough to be important? Let’s say a study reports that “consuming pickle juice resulted in half as many cases of foot fungus as a placebo.” The key statistical concept that describes such changes in risk is relative risk (a close cousin of the odds ratio often reported alongside it). It compares your chances of developing foot fungus if you drink pickle juice with your chances if you don’t, expressed as a ratio. If 10 out of 100 pickle-juice drinkers get fungus compared with 20 out of 100 nondrinkers, the relative risk for drinkers is 0.5 – half the risk. Put the other way around, nondrinkers face twice the risk.
But what if the chances of having foot fungus without taking pickle juice were only two in a million? Does dropping the chances to one in a million really make enough difference for you to go out and drink the nasty stuff? Relative risk is fairly meaningless if you don’t know the baseline occurrence of the disease. In some cases, statistical significance does not translate into clinical significance.
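If you’d like to see the gap between relative and absolute risk worked out, here is a short Python sketch using the article’s made-up pickle-juice numbers:

```python
# The article's made-up pickle-juice numbers.
drinkers_cases, drinkers_total = 10, 100
nondrinkers_cases, nondrinkers_total = 20, 100

risk_drinkers = drinkers_cases / drinkers_total           # 0.10
risk_nondrinkers = nondrinkers_cases / nondrinkers_total  # 0.20

# Relative risk: drinkers face half the risk of nondrinkers.
relative_risk = risk_drinkers / risk_nondrinkers          # 0.5

# Absolute risk reduction: 10 fewer cases per 100 people – sizable.
absolute_reduction = risk_nondrinkers - risk_drinkers     # 0.10

# Same relative risk for a rare disease: 1 vs. 2 cases per million.
rare_relative_risk = (1 / 1_000_000) / (2 / 1_000_000)    # still 0.5
rare_absolute = 2 / 1_000_000 - 1 / 1_000_000             # 1 case per million

print(relative_risk, absolute_reduction)
print(rare_relative_risk, rare_absolute)
```

Both scenarios halve the risk, but the absolute benefit shrinks from 10 cases per 100 people to 1 per million – which is why a headline quoting only relative risk can make a tiny effect sound dramatic.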
Then there’s relevancy. How do you know whether a study really applies to you? The discussion section of a study tries to put the results into a bigger context. It will frequently address problems of the research design, and pose alternative ways to interpret the results. It will also address the generalizability of the result. If the study was done in teenage male army recruits, will the same results hold in middle-aged women? If it was done in mice or monkeys or wombats, do we have reason to believe that it will apply to humans?
The conclusion is the place where the big picture of the whole study should come together. After reading the results and pondering different possible explanations for them, do you agree with the conclusions the authors make? You have to decide for yourself whether or not a study has bearing on your own circumstances and decisions. When in doubt, consult a wise health professional.
Once you start investigating scientific proclamations and health claims yourself, you might discover that this line of inquiry can become almost addictive. Getting information straight from the source is empowering! So, the next time you come face to face with a vague statistic or questionable fact, don’t hesitate to probe a little further. What you find out could prevent you from making a lot of wrong-way turns – and from making some regrettable pickle-juice purchases.