Dear Professor Mean, I review a lot of observational studies in the literature, and I am concerned about response rates: how low can they fall before they produce problems with selection bias? I’ve heard that anything lower than 80% is a problem. Is that correct?
You need to be a bit careful here. An observational study means (more or less) that the patients get to select the treatment they receive rather than having the choice dictated by the flip of a coin. Any time the patients get to choose, you have the risk of selection bias, even if 100% of the patients participate in the observational study.
Now I know there are some observational studies where patients don’t get to choose, such as a study comparing BRCA1 to BRCA2 breast cancer patients, but the overall point is still valid.
Problems that arise when patients fail to respond to an invitation to participate in a survey, for example, are better characterized as nonresponse bias. There is no consensus in the research community on how low a participation rate has to fall before it raises concerns about nonresponse bias. In some settings, you have basic demographic information about the nonresponders, and you can mitigate concerns about nonresponse if you can show that the demographic profile of the nonresponders is comparable to the demographic profile of the responders.
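Here is a minimal sketch of that kind of check, using a Pearson chi-square test on a two-by-three table of responders and nonresponders broken down by age group. All of the counts and the age groupings are hypothetical, invented purely for illustration:

```python
# Hypothetical counts by age group: <40, 40-60, >60
responders    = [120, 95, 60]
nonresponders = [30, 25, 20]

def chi_square(row1, row2):
    """Pearson chi-square statistic for a 2 x k contingency table."""
    col_totals = [a + b for a, b in zip(row1, row2)]
    grand = sum(col_totals)
    stat = 0.0
    for row in (row1, row2):
        row_total = sum(row)
        for obs, col_total in zip(row, col_totals):
            # Expected count under the hypothesis that responders and
            # nonresponders have the same demographic profile.
            exp = row_total * col_total / grand
            stat += (obs - exp) ** 2 / exp
    return stat

stat = chi_square(responders, nonresponders)
print(f"chi-square = {stat:.2f} on {len(responders) - 1} df")
# → chi-square ≈ 0.81 on 2 df
```

A small statistic (well below the 5% critical value of 5.99 on 2 degrees of freedom) offers some reassurance that nonresponders resemble responders demographically, though it can never rule out differences on the variables you did not measure.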
I personally do not worry about nonresponse if the response rate is greater than 90%. I start to worry if the response rate drops below 70%, and I get very nervous if it drops below 50%. But this is a very arbitrary choice on my part. You could perhaps justify some of these numbers with a sensitivity analysis.
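One simple sensitivity analysis is to compute worst-case bounds on an estimated proportion: assume first that every nonresponder would have answered "yes," then that every one would have answered "no." The observed 60% proportion below is hypothetical:

```python
def worst_case_bounds(p_hat, response_rate):
    """Worst-case bounds on a true proportion when the nonresponders
    might all be 1s or all be 0s on the outcome of interest."""
    lower = p_hat * response_rate                       # all nonresponders are 0s
    upper = p_hat * response_rate + (1 - response_rate) # all nonresponders are 1s
    return lower, upper

# Observed proportion of 60% among responders, at three response rates.
for rate in (0.9, 0.7, 0.5):
    lo, hi = worst_case_bounds(0.60, rate)
    print(f"response rate {rate:.0%}: bounds [{lo:.2f}, {hi:.2f}]")
```

Notice how the bounds widen as the response rate falls: at a 50% response rate the worst-case interval spans half the unit interval, which is one way to justify getting nervous at that point.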
I have not seen any published cutpoints, arbitrary or not, in the peer-reviewed literature. If there are any, I’d love to see them.