The Science-Based Medicine blog defends itself

Steve Simon

2010-11-09

I get a few fan letters from people, which are greatly appreciated, but when I get the rare critical response, I am even more grateful. It doesn’t matter whether the criticism is valid or not. Someone who takes on the unpleasant task of critiquing my work offers some valuable insights.

One of my webpages, Is there something better than Evidence Based Medicine out there?, was highlighted and criticized by David Gorski on the Science-Based Medicine blog. Here are some of the things I learned from that criticism. This is an expansion of comments I left on that blog entry.

The criticism was actually quite harsh, and I was described as being “naive” and “in denial.” I alternate between being irritated and amused by that characterization, but I’m trying to take it all in stride. The problem with being accused of naivete is that it is difficult to counter the argument. And it is even more difficult to argue about being in denial: any counterargument I make is just further proof of my denial.

The blog entry also mentioned people who claim to be a “self-appointed champion” of EBM. So far I’ve been wildly unsuccessful in getting anyone else to appoint me to any role within the EBM community, so self-appointment is my only option. I think of self-appointment as a referral from the only authority who truly understands what is going on. I found out later that the comment about self-appointed champion was not referring to me, but that’s still a title that I’d gladly take.

Enough of the cute stuff. There are some valid criticisms of my writing, but first I need to discuss the term Science-Based Medicine (SBM).

What is SBM? Here’s a definition found on the opening entry in the SBM blog:

“the use of the best scientific evidence available, in the light of our cumulative scientific knowledge from all relevant disciplines, in evaluating health claims, practices, and products.”

But how does this differ from David Sackett’s definition of EBM?

“the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients.”

The only substantial difference I see is the adjective “scientific,” which appears twice in the definition of SBM. The claim on the SBM blog is that EBM ignores scientific plausibility. Actually, “ignores” is too strong a word.

“EBM ‘levels of evidence’ hierarchy renders each entry sufficient to trump those below it. Thus a ‘positive’ clinical trial is given more weight than ‘physiology, bench research or “first principles”,’ even when the latter definitively refute the claim.” Source: http://www.sciencebasedmedicine.org/?p=42

A comment on the blog post that criticized me also seems to get at the whole point of SBM.

“SBM is what EBM should have been. The difference is that SBM erects guard rails of scientific plausibility while EBM, without the strictures of scientific rigor, can be pushed effortlessly into the weeds by passing fashions and passions.”

I disagreed with this comment. After all, I wrote a book about EBM, and I mention scientific plausibility in Chapter 4. Is that sufficient consideration? I have also criticized the rigid hierarchy of EBM on my website, and I am not the only proponent of EBM who has done so.

“The view is widely held that experimental methods (randomised controlled trials) are the ‘gold standard’ for evaluation and that observational methods (cohort and case control studies) have little or no value. This ignores the limitations of randomised trials, which may prove unnecessary, inappropriate, impossible, or inadequate. Many of the problems of conducting randomised trials could often, in theory, be overcome, but the practical implications for researchers and funding bodies mean that this is often not possible. The false conflict between those who advocate randomised trials in all situations and those who believe observational data provide sufficient evidence needs to be replaced with mutual recognition of the complementary roles of the two approaches. Researchers should be united in their quest for scientific rigour in evaluation, regardless of the method used.”

But if someone wants to point out that EBM needs work, I’m fine with that. What I dislike is the suggestion that EBM needs to be replaced with something better.

Now, I criticized SBM when I wrote

So I think that this criticism of EBM is putting up a “straw man” to knock down. No thoughtful practitioner of EBM, to my knowledge, has suggested that EBM ignore scientific mechanisms.

and I was rightly criticized for falling for the “no true Scotsman” fallacy. I’d like to believe that most practitioners of EBM do consider scientific mechanisms, and that the people who don’t are practicing PIEBM (Poorly Implemented Evidence Based Medicine). But I really don’t have any data to support this belief.

I’d argue that a definition of EBM

“the integration of best research evidence with clinical expertise and patient values” Source: Sackett DL, Straus SE, Richardson WS, et al. Evidence-based medicine: how to practice and teach EBM. 2d ed. Edinburgh: Churchill Livingstone, 2000.

allows for the incorporation of mechanisms under the umbrella of clinical expertise. But this is a stretch, and besides, how people define EBM and how they practice it are not necessarily the same thing.

I think that scientific plausibility does have some issues. What do you do, for example, when there are scientifically plausible explanations on both sides of a hypothesis? And who decides what is plausible? But I don’t really want to find myself on the opposite side of the fence from those who advocate greater use of scientific plausibility in medical research. So when I said

“I would argue further that it is a form of methodolatry to insist on a plausible scientific mechanism as a pre-requisite for ANY research for a medical intervention. It should be a strong consideration, but we need to remember that many medical discoveries preceded the identification of a plausible scientific mechanism.”

that was my own version of a straw man. The SBM website believes that scientific plausibility is insufficiently considered by proponents of EBM, but as far as I can tell, they have not advocated that scientific plausibility replace randomized trials at the top of the EBM hierarchy. In particular, Dr. Gorski’s comment

“We do not criticize EBM for an ‘exclusive’ reliance on RCTs but rather for an overreliance on RCTs devoid of scientific context.”

is probably a fairer characterization than mine. In my defense, I did not say that the SBM blog was guilty of insisting on a plausible scientific mechanism for any research, but I still should have been clearer.

So how would you resolve this issue? I mentioned in my comment on the SBM blog how difficult this would be.

“We can each accumulate dueling anecdotes of when EBM proponents get it right or when they get it wrong, but I doubt that there will ever be any solid empirical evidence to adjudicate the controversy. Without such evidence, we’ll be forever stuck accusing the other side of being too naive or too cynical. You see EBM as being wrong often enough that you see value in creating a new label, SBM. I see SBM as being that portion of EBM that is being done thoughtfully and carefully, and don’t see the need for a new label.”

I generally bristle when people want to create a new and improved version of EBM and then give it a new label.

There’s a group trying to replace the term “evidence based medicine” with “value based medicine” and I see the same problems here. In my experience, people who practice EBM thoughtfully do incorporate patient values into the equation, but others want to create a new label that emphasizes something they see lacking overall in the term “evidence based medicine.”

Instead, I prefer the Sicily statement on EBM. They see EBM as something that evolves over time.

“The term ‘Evidence-based medicine’ was introduced in the medical literature in 1991. An original definition suggested the process was ‘an ability to assess the validity and importance of evidence before applying it to day-to-day clinical problems’. The initial definition of evidence-based practice was within the context of medicine, where it is well recognised that many treatments do not work as hoped. Since then, many professions allied to health and social care have embraced the advantages of an evidence-based approach to practice and learning. Therefore we propose that the concept of evidence-based medicine be broadened to evidence-based practice to reflect the benefits of entire health care teams and organisations adopting a shared evidence-based approach. This emphasises the fact that evidence-based practitioners may share more attitudes in common with other evidence-based practitioners than with non evidence-based colleagues from their own profession who do not embrace an evidence-based paradigm.

“EBP evolved from the application of clinical epidemiology and critical appraisal to explicit decision making within the clinician’s daily practice, but this was only one part of the larger process of integration of evidence into practice. Initially there was a paucity of tools and programmes to help health professionals learn evidence-based practice. In response to this need, workshops based on those founded at McMaster by Sackett, Haynes, Guyatt and colleagues were set up around the world. During this period several textbooks on EBP were published accompanied by the development of on-line supportive materials.

“The initial focus on critical appraisal led to debate on the practicality of the use of evidence within patient care. In particular, the unrealistic expectation that evidence should be tracked down and critically appraised for all knowledge gaps led to early recognition of practical limitations and disenfranchisement amongst some practitioners. The growing awareness of the need for good evidence also led to awareness of the possible traps of rapid critical appraisal. For example problems, such as inadequate randomisation or publication bias, may cause a dramatic overestimation of therapeutic effectiveness. In response, pre-searched, pre-appraised resources, such as the systematic reviews of the Cochrane Collaboration, the evidence synopses of Clinical Evidence and secondary publications such as Evidence Based Medicine have been developed, though these currently only cover a small proportion of clinical questions.”

I also believe there is some societal value in testing therapies that are in wide use, even when there is no scientifically valid reason to believe that those therapies work. Dr. Gorski disagreed:

“Simon then appeals to there being some sort of ‘societal value’ to test interventions that are widely used in society even when those interventions have no plausible mechanism. I might agree with him, except for two considerations. First, no amount of studies will convince, for example, homeopaths that homeopathy doesn’t work. Witness Dana Ullman if you don’t believe me. Second, research funds are scarce and likely to become even more so over the next few years. From a societal perspective, it’s very hard to justify allocating scarce research dollars to the study of incredibly implausible therapies like homeopathy, reiki, or therapeutic touch. (After all, reiki is nothing more than faith healing based on Eastern mystic religious beliefs rather than Christianity.) Given that, for the foreseeable future, research funding will be a zero sum game, it would be incredibly irresponsible to allocate funds to studies of magic and fairy dust like homeopathy, knowing that those are funds that won’t be going to treatment modalities that might actually work.”

I realize that some people would never be convinced, no matter how many negative trials are published on a topic, but I also believe that there are enough people who would be convinced to justify the expense and trouble of running these trials. I also disagree with the comment about scarce resources. The money spent on health care is a big, big pot, and the money spent on research is peanuts by comparison. If we spend some research money to help ensure that the big pot is spent well, we have been good stewards of the limited research funds.

I do have to mention a financial conflict of interest here. One of my regular clients for P.Mean Consulting has been Cleveland Chiropractic College. Some chiropractors bristle at the thought that they are part of alternative medicine, but the association is strong enough in most people’s minds that I should disclose this relationship. The folks at Cleveland Chiropractic College have not been using me much recently, but I’d love to have them back as a regular client.

I believe that everybody deserves their day in court, and as long as someone is not trying to abuse the research method to make a point, I’m happy to work with them. I generally put aside any skeptical doubts and try to see what the data say. For what it’s worth, I have found the people at Cleveland Chiropractic College to be very level-headed. They want to find out where chiropractic works and where it doesn’t, because it is a waste of everybody’s time and energy to continue to use ineffective therapies. This is perhaps not a general tendency among chiropractors, so I am very fortunate here.

I am also partially supported at my UMKC job through a grant looking at the economic expenditures of patients who use CAM providers. This is an NIH grant, and I do not believe that holding an NIH grant on a topic makes you biased. Some people, though, feel that anyone associated with a grant is tempted to exaggerate the problem being studied so as to increase the chances of getting future funding.

I have a view about alternative medicine that some might characterize as conflicted. I gave a talk about what alternative medicine can teach us about evidence based medicine, and you can find an overview of this talk at my old website. You should read this to get a sense of my perspective on alternative medicine.

The SBM blog frequently cites the p-value fallacy and the failure to adopt Bayesian methods as critical failings of EBM. I disagreed in my response to Dr. Gorski’s blog post.

“But I’m still confused about the Bayesian argument you are making on this site. I can imagine one Bayesian placing randomized trials at the top of the hierarchy of evidence, and I can imagine another Bayesian rejecting any research that requires going ‘against huge swaths of science that has been well-characterized for centuries.’ I can even imagine a Bayesian having ‘a bit of a soft spot for the old woo.’ In each case, the Bayesians would incorporate their (possibly wrong-headed) beliefs into their prior distribution. I see the argument about Bayesian versus p-values as orthogonal to the arguments about SBM versus EBM. Am I missing something?”

I’d be very interested in what Dr. Gorski and others on the SBM blog say about Bayesian methods.
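To make that point concrete, here is a minimal sketch of two Bayesians analyzing the same trial. Everything in it is my own illustration, not anything published on the SBM blog: the trial counts and the two Beta priors are invented solely to show that the prior, not the Bayesian machinery itself, does the work.

```python
# A toy Beta-binomial analysis: the same data under two hypothetical priors.
from scipy import stats

# Hypothetical trial: 30 of 50 patients respond to the therapy.
successes, n = 30, 50

# Two hypothetical priors on the response rate:
# the skeptic's is centered near 0.2, the enthusiast's near 0.8.
priors = {"skeptic": (4, 16), "enthusiast": (16, 4)}

for name, (a, b) in priors.items():
    # Beta prior plus binomial data gives a Beta posterior (conjugacy).
    posterior = stats.beta(a + successes, b + n - successes)
    print(f"{name}: posterior mean = {posterior.mean():.2f}, "
          f"P(response rate > 0.5) = {posterior.sf(0.5):.2f}")
```

From identical data, the skeptic still doubts that the therapy helps most patients, while the enthusiast is nearly certain it does. Both analyses are impeccably Bayesian, which is why I see the Bayesian-versus-p-value debate as orthogonal to the SBM-versus-EBM debate.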

Summary

If I had to say one thing about EBM, I would say that it is largely self-correcting. The flaws in EBM, to a large extent, are discovered by the tools of EBM itself, and there are many examples of this.

So rather than create something new, why not just let EBM evolve to reflect a greater emphasis on plausible scientific mechanisms? The blueprint in the Swaen reference could easily be used to provide convincing evidence that studies without a plausible scientific mechanism are more likely to produce false positives.
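The arithmetic behind that prediction is standard pre-study-odds reasoning. Here is a minimal sketch, using conventional test settings and hypothetical prior probabilities (my own numbers, not anything from the Swaen reference), of how the false-positive rate among “significant” findings grows as the prior plausibility of the hypothesis shrinks.

```python
# How prior plausibility affects the chance that a "significant"
# result is actually a false positive.
alpha, power = 0.05, 0.80  # conventional type I error rate and power

# Hypothetical prior probabilities that the hypothesis is true.
for prior in (0.50, 0.10, 0.01):
    p_significant = power * prior + alpha * (1 - prior)  # P(significant)
    ppv = power * prior / p_significant                  # P(true | significant)
    print(f"prior = {prior:.2f}: "
          f"P(false positive | significant) = {1 - ppv:.2f}")
```

Under these assumptions, a significant result for a hypothesis with a one-percent prior plausibility is a false positive about 86% of the time. Whether real studies behave this way is exactly the sort of empirical question a Swaen-style analysis could answer.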

Postscript

One of the other commenters on Dr. Gorski’s blog entry noted a glaring error in an entirely different post on my website.

From the first link on post modernism: “A tulip bulb is a rhizhome.” AAArgh!!! I’ll get around to reading the rest. But, please, please, Mr. Simon fix that grievous mistake. I know it is a little thing, but a tulip bulb does have a center, a form, and direction and is completely different from a rhizome. Change it to an iris rhizome, or ginger rhizome (well, there is a type of bulb iris, completely different flower). Please. More people know what ginger root looks like. At least more than those who know what a tulip bulb looks like (kind of like a tapered onion, it even has layers).

I really did not know that. I guess that shows how little I truly know about science. In my defense, my mind was probably still a bit fuzzy after reading all that post-modern writing. I see a lot of value in post-modern philosophy when it isn’t taken to excess, but it is very hard to read.

You can find an earlier version of this page on my original website.