The perils of shortening a survey

Steve Simon

2016-03-22

Dear Professor Mean, I’m trying to publish a research study that involves some survey data, but the peer-reviewer is complaining about something I did. There was a scale that I used that had five items, but because the survey was already very long, I used only three of the five items. The peer reviewer seems to think that I arbitrarily chose these three items after looking at the data. How should I respond?

Let’s be honest here. Your choice of which three questions to use was arbitrary. You didn’t run a formal data analysis showing that a three-item scale has psychometric properties just as good as those of the five-item scale.
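(If you did have item-level responses to the full scale from a pilot or an earlier study, one common way to compare the two versions is internal consistency, usually Cronbach’s alpha. The sketch below is purely illustrative: the data are simulated, and the choice of which three columns to keep is hypothetical.)

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 100 respondents answering a 5-item scale on a 1-5 rating.
rng = np.random.default_rng(0)
full_scale = rng.integers(1, 6, size=(100, 5)).astype(float)

# Hypothetical shortened version keeping items 1, 3, and 5.
short_scale = full_scale[:, [0, 2, 4]]

print("alpha, 5-item scale:", round(cronbach_alpha(full_scale), 3))
print("alpha, 3-item scale:", round(cronbach_alpha(short_scale), 3))
```

Absent that kind of comparison, the honest description of the shortening is exactly what it was: a judgment call.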

So just say in the methods section that you needed to shorten the survey and that the choice of which items to remove was made based on the subjective judgment of a subject matter expert (yourself).

Then when the excessively picky reviewer (in my opinion) rejects your paper, send it to a different journal. In general, it’s a bad idea to change scales, because a reviewer is likely to raise concerns. But I think that the practicalities of research sometimes require you to shorten a survey, because a shorter survey with uncertain psychometric properties might still be preferable to a longer survey that half of your volunteers refuse to complete.

Research is often a series of difficult compromises between theoretical ideals and practical realities. Shortening your scale is a limitation that you do need to acknowledge, but having a limitation like this should not disqualify you from publishing your results. If we only published research with no limitations, the research journals would be a lot thinner.

The one thing that you should emphasize is that the decision to use three items rather than five was made PRIOR TO THE COLLECTION OF ANY DATA. You can prove this, if the peer-reviewer doesn’t trust you, by offering to share the protocol that you submitted to, and that was approved by, the IRB. It’s still an arbitrary choice, but it is not one that could have been biased by the data analysis. It’s not like you ran ten different analyses and then chose the one with the smallest p-value.

You can find an earlier version of this page on my blog.