The average American is bombarded daily with the results of scientific studies as presented through the untrained and generally unqualified pulpit of the mainstream media: television, social media, newspapers and magazines. The output is dizzying, and much of the research is either poorly done or improperly represented.
Even a simple research question like "Is taking a multivitamin beneficial for health?" has been the subject of several conflicting reports in recent years, with completely opposite answers coming from credible studies done at credible institutions.
It can be extremely hard for even the most trained and educated professional to make sense of all this information. For those without a scientific background, putting it all together can be downright confusing.
Take it from someone with training in statistics, epidemiology and evaluating scientific research: there is a lot of garbage science out there. Much of it is happening at world-class institutions.
The reasons for this are many, but at the heart of the problem is one simple fact: doing science is really, really, really hard! In our universe of space, time and matter, and our world of financial and logistical barriers, it is extremely challenging to answer any research question definitively without somehow corrupting the findings along the way.
For example, let's say we wanted to know if Chemical X, released into the air by Corporation Z, causes lung cancer in humans. Sounds simple, right? We have many study designs we could use, but the most pristine way to answer this question would be to do what is called a Randomized Controlled Trial (RCT). In that study design we would take a sufficiently large group of individuals and assign each person to one of two groups. We would then expose one group to Chemical X and the other to a placebo solution that resembles Chemical X but is in fact known to be harmless.
By randomly assigning each member of our large pool of study subjects to one of our two groups, inferential statistics tells us that the two groups are likely to end up with similar distributions of demographic and potentially confounding traits (e.g., age, smoking, socioeconomic status). For that reason our results will have the best chance of being free from confounding factors. Our study subjects cannot know which agent they receive (Chemical X or placebo), a technique known as blinding.
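For readers who like to see this balancing act in action, here is a toy simulation. All the numbers (10,000 hypothetical subjects, a 25% smoking rate, ages 20 to 70) are invented purely for illustration:

```python
import random

random.seed(0)

# Hypothetical pool of 10,000 subjects; ages and a 25% smoking rate are invented.
subjects = [{"age": random.randint(20, 70), "smoker": random.random() < 0.25}
            for _ in range(10_000)]

# Random assignment: shuffle, then split down the middle.
random.shuffle(subjects)
exposed, control = subjects[:5_000], subjects[5_000:]

def smoking_rate(group):
    return sum(s["smoker"] for s in group) / len(group)

def mean_age(group):
    return sum(s["age"] for s in group) / len(group)

# The two groups end up with nearly identical smoking rates and mean ages.
print(f"smoking rate: {smoking_rate(exposed):.3f} vs {smoking_rate(control):.3f}")
print(f"mean age:     {mean_age(exposed):.1f} vs {mean_age(control):.1f}")
```

Run it and the two groups come out nearly identical on both traits, without anyone ever measuring them during assignment; that is the quiet power of randomization.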
Think of some of the problems even this method would cause. Most cancers take decades to develop. If it took 30 years for Chemical X to cause lung cancer, imagine the financial cost of following our patients for 30 years. It would run to tens of millions in today's dollars, a daunting amount for even the most well-financed research institution. So cost is one major strike against doing the RCT, even if it is the best method.
Think of the logistical challenges. Even scientists rarely stay at the same institution for an entire career, so who will conduct our trial?
Furthermore, certain patients would get lost over the 30 years and be unavailable for inclusion in the study's final results. It is unlikely that this loss would be random; a certain type of patient is more likely to be lost to follow-up. Most likely, patients with job difficulties or social or family problems would be more likely to relocate to a place where follow-up was difficult or impossible.
Patients from certain geographic areas may be less likely to follow up. For example, let's imagine some of the study patients live in a wealthy neighborhood close to the study site, where access to medical care is better. Another group of patients lives in a much poorer neighborhood 30 miles away. Who, over the course of our long study, would be more likely to follow up? Could we, by mere geography, be introducing a key form of inadvertent bias into our study?
Or perhaps patients who develop stress or mental health problems may be less likely to comply with the rules of the study. This is another way we could introduce bias into our findings. Imagine if we inadvertently excluded our poorest or most mentally ill subjects simply because they were less likely to follow up. They are the ones most likely to smoke, live in bad neighborhoods, be exposed to environmental toxins, and have poor nutrition and poor access to care. This could certainly bias our results in many different ways.
Certainly some of our patients would die of other causes during our 30-year study, and we obviously would not have as much follow-up for them as for the healthy ones who survive until the study's end. We would intrinsically be excluding the sickest patients from our study at baseline. Think of how that could influence our results.
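To see how this kind of selective attrition can quietly distort a result, here is a toy simulation with entirely made-up numbers: smokers are given both a higher 30-year cancer risk and a higher chance of being lost to follow-up, and we compare the true cancer rate with the rate measured among only those subjects who remain:

```python
import random

random.seed(2)

# Toy cohort with invented numbers: smokers (25% of subjects) have a higher
# 30-year cancer risk (12% vs 4%) AND a higher chance of being lost to
# follow-up (40% vs 10%) -- moving away, missing visits, and so on.
def simulate(n=100_000):
    true_cases = observed_cases = observed_n = 0
    for _ in range(n):
        smoker = random.random() < 0.25
        cancer = random.random() < (0.12 if smoker else 0.04)
        lost = random.random() < (0.40 if smoker else 0.10)
        true_cases += cancer
        if not lost:  # only subjects who stay are counted at the study's end
            observed_n += 1
            observed_cases += cancer
    return true_cases / n, observed_cases / observed_n

true_rate, completer_rate = simulate()
print(f"true cancer rate:           {true_rate:.3f}")
print(f"rate among those followed:  {completer_rate:.3f}")
```

Because the highest-risk subjects drop out disproportionately, the rate measured among the people we can still reach comes out lower than the true rate. Nobody falsified anything; the bias was baked in by who stayed.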
You could try to solve some of these problems by following a smaller group, but lung cancer is a relatively rare outcome. If we only follow 100 patients in each group and our exposed group has 6 cases while our unexposed group has 4, can we say with confidence that those 2 extra cases were caused by Chemical X rather than mere random chance? Mind you, in this example those 2 extra cases amount to a relative risk of 1.5, a 50% increase! The realities of inferential statistics demand a large group in studies of rare outcomes like this, so that we can exclude the possibility that the observed difference between groups is merely due to statistical chance; small groups are generally not reliable.
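A back-of-the-envelope sketch in Python makes the point concrete. Using the hypothetical counts above (6 cases vs 4 cases in groups of 100), it computes the relative risk and then simulates how often pure chance produces a gap that large when both groups actually share the same underlying rate:

```python
import random

random.seed(1)

# Hypothetical counts from the text: 6 cases among 100 exposed subjects,
# 4 cases among 100 unexposed subjects.
cases_exposed, cases_unexposed, n = 6, 4, 100

relative_risk = (cases_exposed / n) / (cases_unexposed / n)
print(f"relative risk: {relative_risk:.2f}")  # 1.50, i.e. a 50% increase

# Null hypothesis: both groups share one underlying rate (10 cases / 200 people).
p_null = (cases_exposed + cases_unexposed) / (2 * n)

def cases(p, size):
    """Simulate the number of cases in a group of `size` subjects."""
    return sum(random.random() < p for _ in range(size))

# How often does chance alone produce a gap of 2 or more cases?
trials = 20_000
big_gaps = sum(abs(cases(p_null, n) - cases(p_null, n)) >= 2
               for _ in range(trials))
print(f"chance of a 2+ case gap: {big_gaps / trials:.2f}")
```

With these made-up counts, a gap of two or more cases turns out to be more likely than not under chance alone, which is exactly why a 6-versus-4 result in groups of 100 proves nothing by itself.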
Also consider the ethical problems this would cause. It is obviously not right to give people an agent that scientists suspect may be carcinogenic. What if we decided to give Chemical X only to adults who provide informed consent? We would be excluding children from the study. Could this not substantially alter the results? What if Chemical X is most carcinogenic in childhood?
How would we assess the outcome of lung cancer in our randomized controlled trial anyway? The only reliable way to detect a lung cancer is to find a mass with a CT scan and then biopsy the mass once it is detected. Is it ethical to give thousands of people a CT scan every few years? Studies have shown that ionizing radiation is a carcinogen with a clear dose-response relationship (meaning the more radiation you give, the higher the cancer rate).
Of course, most masses that develop in people's lungs are NOT cancerous, but every patient with a mass would need a biopsy for definitive diagnosis in our study. As you might imagine, sticking a needle into people's lungs, even under the most controlled conditions, can cause all sorts of medical complications. Undoubtedly a few of our thousands of study subjects would suffer serious or life-threatening complications over the 30 years. Even if we could get such a procedure approved by an Institutional Review Board, we would likely have to exclude our sickest patients from these dangerous biopsies (people with limited cardiopulmonary function). We may have introduced another methodological error likely to corrupt our statistics!
Furthermore, how would we even assess exposure to Chemical X? It is logistically nearly impossible to ask thousands of people to carry monitors for Chemical X for a lifetime. Could we rely on medical records? Geography? Air quality testing? Environmental records? (We will touch more on this in my next column, about other study methods.)
Imagine another dilemma. Let's say Chemical X indeed causes lung cancer after 20 years of exposure, but it causes asthma first. The individuals most sensitive to Chemical X would develop severe asthma (and would ultimately go on to develop lung cancer), and from that new asthma they could infer that they were getting Chemical X rather than the placebo, despite the blinding. Maybe this would cause them to drop out of the study. Maybe it would cause them to seek medical attention sooner and be more likely to be diagnosed. How could each possibility influence our results?
Or what if we weren't studying a toxic chemical but a potentially beneficial drug? Imagine a study design where our blinded subjects get either placebo pills (usually a harmless sugar pill) or a powerful medicine with a possible therapeutic effect. The medicine, unlike the placebo, would be likely to cause side effects. Patients who inferred they were getting the actual drug rather than the placebo would no longer be blinded.
This might prompt them to change their behavior. They could seek other treatments or medical attention, avoid other treatments, change other habits, or simply become more vulnerable to powerful placebo effects (which studies suggest can explain up to a third of the observed benefit in research studies). This would be an especially serious concern if the outcome we were investigating was subjective, like depression, anxiety or pain. Some psychologists argue that the very small efficacy of antidepressant medicines in studies is due to this effect and not to any real antidepressant properties of the medicines.
All these questions, and many, many more, arise in the randomized controlled trial, the study design known to be our purest method of conducting research. The point of this article is that there are literally dozens of places where even a randomized controlled trial can run into trouble and produce results that are wrong, distorted or vulnerable to manipulation. It should therefore be no surprise that our science seems to change from study to study.
When we set out to answer a seemingly simple question like "Does Chemical X cause lung cancer?", it is easy to imagine the hundreds of places where our "clean study" may become polluted. Conducting science in a universe of space, time and matter is nearly impossible even under the best of circumstances.