Bad examples of data analysis are bad examples to use in teaching

Steve Simon

2016-08-07

I’m on various email discussion groups and every once in a while someone sends out a request that sounds something like this.

I’m teaching a class (or running a journal club or giving a seminar) on research design (or evidence-based medicine or statistics) and I’d like to find an example of a research study that uses bad statistical analysis.

And there’s always a flood of responses back. But if I were less busy, I’d jump into the conversation and say “Stop! Don’t do it!” Here’s why.

Sharing bad examples has several problems. It encourages black-and-white thinking rather than a more nuanced interpretation. It can also breed fear and/or cynicism in your students. The alternative is to pick examples based on criteria other than how good or bad the statistical analysis is, or to pair bad examples with good examples that show the right way to do research.

Black-and-white thinking. An important critical thinking skill is deciding when a problem is so serious that you should totally disregard the study. Think of this as the fatal flaw approach to critical thinking. It’s easy to find fatally flawed studies. The problem is that fatal flaws are far outnumbered by non-fatal flaws.

For example, lack of blinding is a frequent limitation of research, but it is rarely a fatal flaw. If it were, you’d have to disregard almost all of the published research on surgical interventions. I do worry about unblinded studies, but I worry about lots of things. It is a combination of weaknesses that makes a study unpersuasive, and I normally don’t let one particular type of weakness dominate my thinking.

Most flaws are not fatal, but rather they decrease the persuasiveness of a research study, unless counterbalanced by other strengths.

Many researchers are already predisposed to think in black-and-white terms. Your job is to increase their sense of nuance and get them to think about shades of gray. If you give them a clearly flawed study, they’ll lose that sense of nuance.

Fear. You and I both know that Statistics is not a difficult thing to learn if you take the time to learn it well. There are subtleties, and you need an eye for detail. But the sort of people who are in your journal club have already mastered far more difficult skills, such as inserting a feeding tube so that it goes down into the stomach rather than into the lungs.

Put yourself in your students’ shoes. You’re going to show them a study that, at least on the face of it, looks normal, meaning that it appears in a reasonably prominent journal and is written by authors with reasonable credentials. Then you’re going to reveal something to them that they probably didn’t know before. After all, you’re the teacher and they’re the students. And that revelation will demolish the study’s conclusions. This might spook them. Even if they understand the fatal flaw after you explain it (not a slam dunk by any means), they will start worrying about what other unrecognized fatal flaws are still out there.

They’ll probably heed your message that you should always consult with a statistician before running a study. But they’ll take it a bit further and become fearful of making a critical appraisal of a study without checking with a statistician first. They’ll be afraid of spending a lot of time outlining the “trivial” features of a study, only to have all of that effort wasted because they missed the one fatal flaw.

Cynicism. The flip side of fear is cynicism. It may be easy to lie with Statistics (so the saying goes), but it is even easier to lie without them. The problem is that your students are likely to remember only the first half of this statement. When you show fatally flawed studies, they’ll probably start thinking that Statistics is a powerful weapon. If it can destroy this study, maybe it can destroy any study. Maybe every study has a hidden fatal flaw that, once revealed by an experienced Statistician, will devastate the conclusions. This is the path to a cynical disregard of all published research.

Alternatives. So what would I recommend instead of bad examples? I argue that you should choose examples not based on how bad (or good) they are, but rather on how interesting they are to your students. If you are a Statistician, you may need help here from someone who is familiar with the medical expertise and background of your students.

Then tally the strengths and weaknesses of the study. If your students are able to recognize half or more of the ones that you notice, they are doing very well. Even if the weaknesses strongly outweigh the strengths, avoid excessively negative comments. The limitations of a study mean that you should hold off on acting on the authors’ recommendations until they can provide more persuasive arguments through a better research study. Then talk about what that better research study would look like. What additional steps could the authors take to add strengths or remove some of the existing weaknesses?

Now you probably won’t heed my advice. It’s too much fun to trash other people’s research. But here’s an alternative. Find a study that is fatally flawed and pair it with a study of the same topic that is fairly persuasive. These studies could both make the same claim or they could make contradictory claims. It doesn’t matter. Contrast what one research team did well with what the other team did not (or could not) do.

If you don’t take the time to pair your bad example with a good example, don’t be surprised if your students revert to black-and-white thinking, or if they become cynical or fearful.

You can find an earlier version of this page on my blog.