By Jonathan Caine, MD
Nearly every day, medical studies are published, or simply announced to the media. The sexier the headline, the more publicity for the authors. Unfortunately, most of the public (reporters included) are blissfully unaware of how to evaluate a study’s methods (often biased) and conclusions (often overstated or simply wrong). Not to mention my own pet peeve: articles about the studies often appear in print long before the actual studies themselves.
So, when the phones start ringing in doctors’ offices from patients concerned about what they heard on the “Today Show”, we have no way of searching for the source material to intelligently try to answer their concerns.
For the past 15 years I have been teaching 3rd year medical students as they rotated through their Pediatric clerkship.
Over the years, I have developed my “Dirty Dozen” list of “Rules for Reading and Interpreting Medical Journal Articles.” I talk to the students about this on Day 1 of their rotation and hand them a copy.
1 – Even the most prestigious of medical journals can still publish junk science.
Exhibit A – The Lancet, the British medical journal that published the original Andrew Wakefield article that spawned the MMR vaccine and Autism brouhaha. The journal subsequently took the highly unusual step of retracting the article after finding, in retrospect, that it was not only bad science but deliberately fraudulent as well.
2 – Many studies are poorly designed and have serious methodological flaws making their conclusions invalid.
Nevertheless, somehow they still get funded and they still get published, despite what is supposed to be editorial review. Perhaps the image of the large glossy medical journals filled with expensive ads from large pharmaceutical companies is too great a lure to resist.
3 – Beware of making conclusions from studies with inherent selection bias of participants.
Stan Freberg, among his other talents, was a marketing genius and one of the first to inject parody into advertising. He once created a magazine ad for Chun King Chinese Food. It showed a lineup of nine smiling Chinese men and one frowning Caucasian man all dressed in scrub suits and white lab coats with stethoscopes. The tag line was: “Nine out of ten doctors recommend Chun King Chow Mein!” A funny, but brilliant demonstration of selection bias.
4 – Prospective Studies are better than Retrospective Studies, which in turn are better than Case-Control Studies.
As the study design becomes less reliable the conclusions become more suspect.
5 – When reading Case Reports remember the plural of Anecdote is Anecdotes, not Data.
An Anecdote is an interesting story, nothing more. Often it is a “One Hit Wonder” whose results are never to be duplicated.
6 – Statistical Significance is not the same as Clinical Significance.
The benchmark in publishing study results as “significant” is often the famous P-value less than 0.05. That is, the probability that results at least as extreme would have occurred by chance alone is less than 5%. Even so, many studies have results that are statistically significant, but have no real significance in the diagnosis or treatment of patients. For example, a study may show that the residents of one town have an IQ 1/10 of a point higher than those of another town. Statistically quite valid, but with no perceived value in the real world. Still, statistical significance can make the study “important” enough to be published.
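To see how a trivial difference becomes “statistically significant” once the sample is large enough, here is a minimal sketch. All the numbers are hypothetical and chosen only for illustration (two towns with mean IQs of 100.1 and 100.0, SD 15, 500,000 residents each); this is a large-sample z-test, not a reanalysis of any real study.

```python
from statistics import NormalDist
from math import sqrt

# Hypothetical illustration (not real data): two towns whose mean IQs
# differ by 1/10 of a point, with SD 15 and 500,000 residents each.
mean_a, mean_b, sd, n = 100.1, 100.0, 15.0, 500_000

# Large-sample two-sided z-test for a difference in means.
se = sd * sqrt(2 / n)                    # standard error of the difference
z = (mean_a - mean_b) / se
p_value = 2 * (1 - NormalDist().cdf(z))  # two-sided p-value

# Effect size (Cohen's d): the difference in SD units.
cohens_d = (mean_a - mean_b) / sd

print(f"p = {p_value:.5f}")   # far below 0.05: "statistically significant"
print(f"d = {cohens_d:.4f}")  # under 0.01 SD: clinically meaningless
```

The p-value clears the 0.05 bar easily, yet the effect size is less than a hundredth of a standard deviation — exactly the gap between statistical and clinical significance the rule describes.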
7 – Correlation is not the same as Causation.
A recent headline in the news was: “Conception during winter raises autism risk.” This was an article describing a study in the as yet unpublished June issue of Pediatrics. The researchers found an increased correlation between conception in the months from December to March and a subsequent diagnosis of autism. However, there was no direct evidence that winter itself was the cause of this alleged increase. Maybe it was something occurring in the second trimester, not the first, that is the significant factor. Maybe spring is the problem season, not winter. Or maybe neither has anything to do with it.
8 – The Consensus of Experts’ Opinions is often wrong, but seldom in doubt.
Or, as Dr. Alvan Feinstein opined, “The agreement of ‘experts’ has been a traditional source of all the errors that have been established throughout medical history.”
9 – Variations of Normal do not equate with Pathology.
Pediatricians see this all the time since we are poring over BMI percentile graphs daily. The muscular but lean athletic teen whose BMI is greater than the 95th percentile, but who looks like Michelangelo carved his body out of marble. The graph says he is “obese”, but your eyes tell you a different story.
10 – Clustering of symptoms without a clear etiology does not make a distinct Clinical Syndrome.
My favorite non-syndrome syndrome is Chronic Fatigue Syndrome. You name the symptom and it’s part of this elusive “Syndrome”: Fatigue, aches, pains, sleep problems, allergy symptoms, hypotension, dizziness, anxiety and depression. The following have at one time or another been used to “treat” this condition: Rest, exercise, SSRIs, analgesics, antihistamines, decongestants, Florinef, Tenormin, clonazepam, methylphenidate, acyclovir, immune globulins, interferon and galantamine.
11 – Increased awareness of a condition is not the same as increased incidence.
I refer you to the current autism controversy. The conventional wisdom is that the incidence of autism has been rising dramatically over the past several years, casting doubt on the theory that autism is somehow a genetic condition. A recent study from England in adults purportedly shows that the prevalence of autism among British adults is 1% – closely matching the rate in recent studies of US and British children. The authors conclude, “This favors the interpretation that methods of ascertainment have changed in more recent surveys of children compared with the earliest surveys in which the rates reported were considerably lower.”
12 – Relief of anxiety is a form of treatment and may affect the results of a study.
We all know about the so-called “Placebo Effect”, where an inert substance can have an unexpected positive result in treating a medical condition. Medical studies often compare drug treatments head-to-head against placebo treatments to see if they actually work. A 2008 study showed that about 50% of internists and rheumatologists who participated had prescribed placebo treatments to patients who were unaware of this. In a more recent study from December 2010, patients with Irritable Bowel Syndrome (Yes, syndrome) were actually told they were going to be receiving a placebo drug treatment. Sixty percent of them showed improvement compared with only 35% of those with IBS who remained on their standard treatment.
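For readers curious whether a 60%-vs-35% split like that could plausibly be chance, here is a back-of-the-envelope sketch using a pooled two-proportion z-test. The post does not give the group sizes, so the 40-per-arm figure below is a hypothetical assumption for illustration only, not a figure from the actual study.

```python
from statistics import NormalDist
from math import sqrt

# Hypothetical group sizes (the post does not state them): 40 patients per arm.
n1 = n2 = 40
p1, p2 = 0.60, 0.35  # improvement rates quoted in the post

# Pooled two-proportion z-test for the difference in improvement rates.
pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = 2 * (1 - NormalDist().cdf(z))  # two-sided p-value

print(f"z = {z:.2f}, p = {p_value:.3f}")
```

Under these assumed sample sizes the difference clears p < 0.05 — a reminder that even an openly labeled placebo can produce a measurable, testable effect.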
After the students are done, I give them one Bonus Rule to follow once they go into practice that may prevent future cases of agita:
“First do no harm” doesn’t mean, “If it’s not that harmful, then do it, because it just might work.”