I love hearing about life-changing medical breakthroughs as much as the next geezer, but more for their entertainment value than their prescriptive qualities. Anyone who’s been paying attention in recent years can’t help but find some amusement in the oscillating nature of scientific inquiry: “PSA Tests Save Lives!” “PSA Tests Lead to Unnecessary Treatment!” “Stents Prevent Heart Attacks!” “Drug Therapy More Effective Than Stents!” “Fish Oil Boosts Brain Health!” “Fish Oil Offers No Protection Against Dementia!”
The most recent example of researchers toppling a long-held treatment approach? Last week, headlines announced the results of a major study suggesting that some 70 percent of breast-cancer patients who would otherwise be candidates for chemotherapy don’t really need the toxic treatment. “This is very powerful,” study coauthor Ingrid A. Mayer, PhD, told the New York Times. “It really changes the standard of care.”
At least until new research refutes its findings.
That may sound predictably cynical, but it’s a plausible conclusion to reach after hearing about the work of renowned Greek researcher John P. A. Ioannidis, PhD. Since the mid-1990s, Ioannidis has been scrutinizing clinical studies from all over the world to determine their usefulness. His findings reveal why so many of these medical breakthroughs turn out to be less conclusive than the headlines would suggest.
“Overall, not only are most research findings false, but furthermore, most of the true findings are not useful,” he notes in a 2016 PLoS Medicine report.
It might be easy to dismiss Ioannidis as some anti-science crank if you choose to ignore his résumé — which includes stints at Harvard, Johns Hopkins, the National Institutes of Health, and Stanford — as well as the grudgingly positive response to his critique from his peers in the research community. His initial salvo, a 2005 paper in PLoS Medicine, featured a mathematical model that predicted with surprising accuracy the rates at which the results of published clinical studies were refuted. And the numbers were jaw-dropping: 80 percent of nonrandomized studies, 25 percent of standard randomized studies, and 10 percent of large randomized studies.
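The intuition behind that model is a piece of conditional-probability arithmetic: the fraction of "positive" findings that are actually true depends not just on a study's significance threshold, but on its statistical power and on how plausible the hypothesis was before the study ran. Here is a minimal sketch of that idea, following the simplest version of the 2005 model without its bias terms; the input numbers below are illustrative assumptions, not figures from the paper:

```python
def ppv(prior_odds, power, alpha):
    """Positive predictive value of a claimed finding, in the spirit of Ioannidis (2005).

    prior_odds: ratio R of true to false hypotheses being tested in a field
    power:      chance a true effect is detected (1 - beta)
    alpha:      significance threshold (false-positive rate)
    """
    true_positives = power * prior_odds   # true effects that test positive
    false_positives = alpha               # null effects that test positive anyway
    return true_positives / (true_positives + false_positives)

# A large, well-powered trial of a plausible hypothesis (assumed inputs)
print(round(ppv(prior_odds=0.5, power=0.8, alpha=0.05), 2))   # -> 0.89
# A small exploratory study chasing a long-shot hypothesis (assumed inputs)
print(round(ppv(prior_odds=0.05, power=0.2, alpha=0.05), 2))  # -> 0.17
```

Under these made-up inputs, the well-powered trial is right almost nine times out of ten, while the underpowered exploratory study is wrong most of the time, mirroring the pattern in the refutation rates above.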
“You can question some of the details of John’s calculations, but it’s hard to argue that the essential ideas aren’t absolutely correct,” Doug Altman, director of Oxford’s Centre for Statistics in Medicine, told the Atlantic in a 2010 profile of Ioannidis.
Those ideas go to the heart of what’s wrong with medical research and why the elderly, especially, should think twice before embracing any new healthy-living protocol that comes down the pike.
The basic problem, Ioannidis argues, is that scientists bring their biases to bear on every aspect of their research: which questions they ask, whom they recruit as participants, what measurements they use, how they analyze the data, and how they present their results. Too often, he notes, researchers set out to prove a theory and design the study to deliver that result, rather than posing a question and accepting whatever answer emerges.
“At every step in the process, there is room to distort results, a way to make a stronger claim or to select what is going to be concluded,” Ioannidis told the Atlantic. “There is an intellectual conflict of interest that pressures researchers to find whatever it is that is most likely to get them funded.”
To earn financial support and career advancement, scientists need to publish their research in well-regarded journals. Eye-catching results are more likely to be printed in such publications, to make the front page of your newspaper, or to headline the evening news, only to be refuted at a later date.
Even the most venerable peer-reviewed journals are not immune to the more-than-occasional scientific faux pas. In a 2005 report published in the Journal of the American Medical Association, Ioannidis and his team reviewed 49 of the most widely cited research findings published between 1990 and 2003, including popular studies on hormone-replacement therapy, coronary stents, aspirin to control blood pressure, and vitamin E to lower the risk of heart disease. Among the 34 findings that had since been retested, 14 (41 percent) turned out to be inaccurate or exaggerated.
The real problem, Ioannidis argues, lies in the unreasonable expectations we place on the scientific community. As long as healthcare consumers expect that every finding issued from research institutions is correct, scientists will continue to deliver questionable “breakthroughs.” If we lower those expectations and begin to recognize scientific inquiry as the process it is, we may begin seeing more responsible — albeit less headline-grabbing — thinking about health-supporting strategies.
“Science is a noble endeavor, but it’s also a low-yield endeavor,” Ioannidis notes. “I’m not sure that more than a very small percentage of medical research is ever likely to lead to major improvements in clinical outcomes and quality of life. We should be very comfortable with that fact.”