Responding to a critique of meta-analysis

Steve Simon

2005-10-10

A contributor to the Evidence-Based Medicine list offered a possible criticism of meta-analysis. The criticism ran along these lines (I am paraphrasing and summarizing):

Suppose we have two randomized trials that come to exactly opposite conclusions. Assume that bias and confounding are not an issue here. Then one of the studies must be wrong. When a meta-analysis averages a correct value and an incorrect value, you get a meaningless result.

Now assume that the two results differ because the trials studied two very different patient populations. An average here is also misleading, unless you weight by the proportion of the overall target population that each of these patient populations represents.

As an aside this reminds me of the old joke that a statistician is the only person who could stick his head in an oven and his feet in a bucket of ice and say that he feels fine on average.

Here’s the gist of my response.

The error that this person describes is not an error of meta-analysis, but rather the error of a meta-analysis that ignores heterogeneity. Is heterogeneity present in all meta-analyses, and does it lead to flawed conclusions in all of them? Arguing that it is possible to make an error by applying meta-analysis naively does not imply that all meta-analyses are flawed, especially since the people who perform meta-analyses have long been aware of the problems caused by heterogeneity and have proposed several solutions to them.
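To make that concrete, here is a minimal sketch in Python of one standard solution, the DerSimonian-Laird random-effects model, applied to two hypothetical studies with exactly opposite effects. The numbers are mine, invented purely for illustration, and this is a sketch rather than a production analysis. Cochran’s Q and I² flag the disagreement between the studies, and the random-effects standard error widens accordingly, so the analysis does not silently report the naive average as if it were a precise result.

```python
# A minimal sketch, assuming two hypothetical studies with opposite
# effects (+0.5 and -0.5) and equal standard errors of 0.1. The
# DerSimonian-Laird estimator is one standard response to heterogeneity;
# the numbers are illustrative, not taken from any real meta-analysis.

import math

effects = [0.5, -0.5]   # hypothetical study effect estimates
ses = [0.1, 0.1]        # hypothetical standard errors

# Fixed-effect (inverse-variance) pooling: this is the naive "average"
# that the critique describes.
w = [1 / s**2 for s in ses]
theta_fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
se_fixed = math.sqrt(1 / sum(w))

# Cochran's Q and I-squared quantify how inconsistent the studies are.
q = sum(wi * (yi - theta_fixed) ** 2 for wi, yi in zip(w, effects))
df = len(effects) - 1
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# DerSimonian-Laird estimate of the between-study variance tau^2.
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects pooling: the point estimate can be similar, but the
# standard error widens to reflect the disagreement between studies.
w_re = [1 / (s**2 + tau2) for s in ses]
theta_re = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))

print(f"Fixed-effect estimate: {theta_fixed:.2f} (SE {se_fixed:.2f})")
print(f"Cochran's Q = {q:.1f} on {df} df, I^2 = {i_squared:.0f}%")
print(f"Random-effects estimate: {theta_re:.2f} (SE {se_re:.2f})")
```

With these invented numbers, the fixed-effect estimate is 0.00 with a misleadingly tight standard error of about 0.07, while I² is 98% and the random-effects standard error grows to 0.50. In other words, a careful meta-analysis does not just average the two contradictory trials; it detects and reports the conflict.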

Furthermore, if you have two contradictory RCTs, it may be that the patients you see are dissimilar to the patients in either of the two trials. Maybe there is a third reality that applies to your particular patients. In that case you can’t trust the first trial, you can’t trust the second trial, and you can’t trust the meta-analysis either.

So what can we conclude here? Meta-analysis can be misleading? That’s hardly surprising. All research can be misleading if it is conducted poorly. And all research can be misleading if you fail to consider how the patients you see are different from the patients being studied by the researchers themselves.

All of our research tools are flawed. The trick comes from understanding when the flaws are trivial and when they are fatal.

Furthermore, this looks like a “straw man” argument to me. You describe an application of meta-analysis so simplistic that it does not represent anything even remotely close to how meta-analysis is practiced today. You cite a “flaw” of meta-analysis that has been well known for at least a decade or two and toward which substantial research effort has been directed.

If meta-analysis is indeed a flawed process, cite a particular meta-analysis that has led to an incorrect conclusion and then explain to us why it failed. That would lead to a far more productive debate than hypothetical speculation.

Finally, to say a research method is flawed is troublesome when you don’t offer a serious alternative. Some people argue that a large-scale randomized trial is the definitive approach and that meta-analysis is only second best. Others argue that a carefully done meta-analysis is superior. Ironically, heterogeneity is one of meta-analysis’s strengths, as its proponents have noted. If a research finding is replicated across a variety of patient populations under a variety of research designs, a meta-analysis is the best way to demonstrate the robustness of that finding. In other words, a uniform finding across heterogeneous studies is far more persuasive than a single finding in a single population.

But rather than offer a serious alternative to meta-analysis, the author of these comments seems to be saying that almost all research is flawed. That’s a dangerous position to take, because disregarding good research findings, no matter what research design generated them, can be hazardous to our health.

Rather than asking whether meta-analysis is flawed, I would ask, “Are we better off or worse off having the tool called meta-analysis in our research arsenal?” And my answer is that we’re much better off, as long as we keep these tools out of the hands of amateurs.

You can find an earlier version of this page on my original website.