Surviving Statistical Spitting Matches

Steve Simon

2006-04-25

I generally dislike an outline or bullet format for presenting information, but I came across a website that provides such valuable information that I am willing to overlook the lack of narrative text. The title of this web page is quite provocative, “Surviving Statistical Spitting Matches,” and it offers a lot of good advice. The author (Bert Kritzer) developed this material for a professional development seminar for senior staff of the National Conference of State Legislatures in October 1996. He points out that statistics provide evidence, but they seldom give definitive answers. That is an arguable point, but it is worth bearing in mind. In my experience, I have seen many situations where the data themselves are ambiguous and give rise to several competing hypotheses. If two contradictory hypotheses are both consistent with the data, most people will still try to argue that the data provide evidence for their preferred perspective. A rational person should instead recognize that the data collected so far are insufficient to resolve the controversy. But I have also seen situations where the data provide definitive answers.

Dr. Kritzer offers several questions that you should ask when someone presents you with a statistical argument. The first is

Has the analyst measured the phenomenon or the perception of the phenomenon?

This is an important question to ask. Are you measuring something real, or just what people perceive? I would offer a counterpoint, however: perceptions are sometimes equally important outcome measures. If a patient leaves your office feeling like he/she has not been cured, you have a problem, even if the objective measures of disease say otherwise.

Another important question is

How big is big enough?

which is equivalent to a discussion of the clinical significance of the findings. I’ve written extensively about clinical importance on these web pages (and even in Chapter 3 of my new book). Here are a couple of relevant pages:

Another important issue is the use of leading questions in a survey to produce a desired outcome. Dr. Kritzer asks

Were the data designed to get a specific answer?
How were the questions asked?
Who designed the data collection?

and distinguishes between

Asking questions to get specific answers

and

Asking questions to get specific information

This point needs some elaboration. There are a lot of people who use opinion polls to rally support behind their cause. If they want to maximize the evidence in favor of their position, there are lots of little games they can play. For example, if you want to skew the result of a particular question, you might precede it with a comment or another question that frames the debate from your particular perspective. The Statistical Assessment Service website has an interesting example of this.

A survey commissioned for the 60 Plus Association asked if the respondent would be “more or less likely to vote for your Member of Congress if they vote to ELIMINATE the estate tax.” Unfortunately, the question was preceded with the description that “in order to pay for this tax bill, some sons and daughters have had to sell off the farms and small businesses they just inherited.” www.stats.org/record.jsp?type=news&ID=318

Getting back to Dr. Kritzer’s presentation, there is another important pair of questions,

Compared to what?
Compared to when?

which are linked to a series of graphs (graph 1, graph 2, graph 3, graph 4) showing a trend (or lack of a trend) in disciplinary cases from a variety of different perspectives. You might draw markedly different conclusions depending on your frame of reference.
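To make the frame-of-reference point concrete, here is a minimal sketch with entirely made-up numbers (these are not Dr. Kritzer’s data). It shows how the same series of disciplinary case counts can look like a rising trend when presented as raw counts, but flat or even slightly declining once you divide by the size of the profession.

```python
# Hypothetical yearly counts of disciplinary cases (made-up numbers for illustration).
cases = {1990: 42, 1991: 45, 1992: 44, 1993: 48, 1994: 51, 1995: 55, 1996: 60}

# Hypothetical number of practicing attorneys each year (also made up).
attorneys = {1990: 8000, 1991: 8600, 1992: 9300, 1993: 10100,
             1994: 11000, 1995: 12000, 1996: 13100}

years = sorted(cases)

# Raw counts suggest a steady rise in misconduct...
print("raw counts:         ", [cases[y] for y in years])

# ...but cases per 1,000 attorneys drift slightly downward over the same period.
rates = [round(1000 * cases[y] / attorneys[y], 2) for y in years]
print("per 1,000 attorneys:", rates)
```

Neither presentation is “the” correct one; the point is simply that the comparison you choose, to what and to when, drives the conclusion you draw.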

Another important series of questions you should ask, according to Dr. Kritzer, is

What is the right comparison?
What is the question I am trying to answer?
Do these statistics speak to that question or another question?

He highlights this with a comparison of mean versus median jury awards, but these questions address a far larger concern than just mean versus median comparisons.
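The mean-versus-median issue is easy to illustrate. Here is a small sketch using made-up award amounts (not actual jury data): because a handful of enormous awards pulls the mean far above the typical case, the two summaries tell very different stories about the same numbers.

```python
import statistics

# Hypothetical jury awards in dollars (made-up numbers for illustration only).
# Most awards are modest, but one very large award skews the distribution.
awards = [40_000, 55_000, 60_000, 75_000, 90_000, 120_000, 5_000_000]

print(f"mean award:   ${statistics.mean(awards):,.0f}")    # about $777,000, pulled up by the outlier
print(f"median award: ${statistics.median(awards):,.0f}")  # $75,000, the 'typical' case
```

If someone quotes “the average award” to argue a point, asking to see the median (and the full distribution) is often the quickest way to find out whether the statistic speaks to the question at hand.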

Dr. Kritzer also asks

How has the analyst’s professional orientation affected his or her interpretations?

and while knowledge of a person’s background and potential conflicts of interest is important, often professional orientation is just a red herring. I dislike arguments that start out with a statement like “You’re only saying this because you are a ...”. While knowing that someone is an allopath, or is on the payroll of a pharmaceutical firm, or is an advocate for the environment may color their judgment, I believe that these people can still make careful and thoughtful interpretations of data. In a seminar of mine, I talk about how Stephen Senn, one of the foremost experts on crossover trials, has been unfairly denigrated because he receives consulting income from pharmaceutical companies.

Dr. Kritzer’s point, I presume, is that lawyers tend to view the world from an adversarial viewpoint, whereas social scientists tend to take a more collegial view.

There are a lot more ideas presented on this web page. While I dislike the spartan format of an outline, Dr. Kritzer lists some important and provocative questions.