Relative Efficacy: A curious phenomenon in therapy, and what happened to the client?

In 1936, Rosenzweig (2) was the first to notice that whenever different therapy models were compared, they were invariably found to be equally effective. He borrowed a metaphor from Alice in Wonderland: “At last,” the Dodo said, “everybody has won and all must have prizes.”

This phenomenon of outcome equivalence between therapies is now called “the dodo bird effect”. Every now and then a new study finds a difference. We saw many of them when CBT was the new kid on the block. When these studies are put under scrutiny, however, it usually turns out either that they did not control for researcher allegiance, which has a large impact on outcome and easily explains the difference, or that the comparison was not to a treatment intended to be therapeutic, making it unfair. The dodo bird effect is now such a universally accepted, well-tested phenomenon that even if a single study holds up to that scrutiny, we should wait to see whether it is reproducible before taking it seriously.

If you compare any number of different models that are intended to be therapeutic, they are always better than waiting lists, and never better than each other. Even models with vastly differing theoretical underpinnings and processes have equivalent outcomes. Can you believe that nearly 80 years after Rosenzweig, some researchers are still designing studies that pit one therapy against another? The eight such studies funded by the NIMH between 1992 and 2009 cost 11 million dollars (3, pp. 267-8)!

The popular hypothesis to explain the “Dodo” is that factors common to all approaches account for effectiveness. That hypothesis has satisfied the dodo, but it did not set out to explain the stranger phenomenon: therapy is effective, yet only 13% of outcome variance is attributable to the therapy itself. The 87% that is extratherapeutic can't happen unless the client has the therapy. (A weird phenomenon still to be explained.)

The general perception in our community, and often among students wanting to learn, is that the clever and complex model of therapy must account for effectiveness. But clever models fare just as well as theoretically simple ones. Family therapy, while effective, has an effect size of .65 (4) compared to .8 (3) for individual therapy, yet its theoretical underpinnings are far more complex.
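
To make those effect-size figures concrete: an effect size in this literature is typically Cohen's d, the difference between the treatment and comparison group means divided by their pooled standard deviation. A minimal sketch in Python, with purely illustrative numbers (not data from the cited studies):

```python
import math

def cohens_d(treatment, control):
    """Standardized mean difference: (mean_t - mean_c) / pooled SD."""
    n_t, n_c = len(treatment), len(control)
    mean_t = sum(treatment) / n_t
    mean_c = sum(control) / n_c
    var_t = sum((x - mean_t) ** 2 for x in treatment) / (n_t - 1)
    var_c = sum((x - mean_c) ** 2 for x in control) / (n_c - 1)
    pooled_sd = math.sqrt(((n_t - 1) * var_t + (n_c - 1) * var_c) / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Hypothetical outcome scores (higher = better functioning)
therapy = [62, 68, 71, 75, 80, 66, 73]
waitlist = [55, 60, 58, 64, 62, 59, 61]
print(round(cohens_d(therapy, waitlist), 2))
```

An effect size of .8 means the average treated client ends up about 0.8 standard deviations better off than the average untreated one.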

We do know that “the intention to be therapeutic” is an important commonality.

Compare, say, EMDR to psychoanalysis. What is happening in both to produce equivalent outcomes?

What do you think?

Let's not answer this question too soon; as history informs us, jumping to the wrong hypothesis may slow us down by 80 years.

Back in 1986, Luborsky and colleagues came up with the bright idea of studying the therapist, rather than the therapy, as the random factor. A refreshing idea, nearly 30 years ago! They looked at the raw data from four large studies (3, p. 169) and showed that therapist effects were much larger than treatment effects. Then Blatt et al. (1996) looked at the NIMH Treatment of Depression Collaborative Research Program, a highly regarded and well-controlled study. Effectiveness scores were available, so they regrouped the data by therapist rather than by treatment, forming three new groups: effective, moderately effective, and less effective therapists. They found significant differences between these groups, independent of the type of treatment and unrelated to the therapists' experience.

The Dodo would agree that it makes a lot more sense to look for the specific ingredients of effectiveness in the therapist, not in the therapy, but where is the client?

I was looking forward to reading the second edition of The Great Psychotherapy Debate. My biggest question was not, “What do we know about the 13% of outcome attributed to what happens in the therapy room?” but, “What do we know now, 14 years later, about the 87% of outcome variance that was called extratherapeutic (client factors)?”

Well, sadly, it's not there.

So I googled. The top 10 hits take us back to 1992 and Lambert et al.

Ho hum.

References:

2. Rosenzweig, S. (1936). Some implicit common factors in diverse methods of psychotherapy: “At last the Dodo said, ‘Everybody has won and all must have prizes.’” American Journal of Orthopsychiatry, 6, 412-415.

3. Wampold, B. E., & Imel, Z. E. (2015). The Great Psychotherapy Debate: The Evidence for What Makes Psychotherapy Work. Routledge.

4. Sprenkle, D. H. (2002). Effectiveness Research in Marriage and Family Therapy. AAMFT.

5 thoughts on “Relative Efficacy: A curious phenomenon in therapy, and what happened to the client?”

    1. Fantastic review of the state of the research. That 87% is a biggie, and everyone wants a bite at it. Unfortunately, it's mostly “noise in the signal”, that is, error, plus a variety of factors the clinician (and therapy) have very little influence or control over (e.g., premorbid functioning). A few have suggested it relates to client strengths and resources. The problem is that as soon as you make “accessing strengths and resources” part of your treatment approach, you are right back in the 13%. Some of the confusion stems from thinking that the 13% isn't very significant. Recall that coronary artery bypass surgery has the same effect size as psychotherapy (in general). It shares with psychotherapy the impact of extratherapeutic factors on outcome (e.g., patient premorbid functioning [respiratory functioning] has a big impact on the outcome).

      1. Thanks Scott. You have been saying we can improve our individual effectiveness by looking at our edge. What if we need to look at the edge of the field of psychotherapy, for good and bad, to improve its effectiveness? People who wouldn't survive bypass got fringey treatments out of desperation, and the success of those fringey things led the way. People who should have been cured but weren't got discussed at M&M (morbidity and mortality) meetings. The extremes were where the learning happened, a place to look without all the noise. I think there has been some of that, but the obsession with the model, instead of the therapist and the client, hasn't got us very far.

      2. I think therapy is amazingly effective, in general. I've waited in vain for new techniques that will revolutionize care via greater effectiveness. Do I hope it happens? Yes, sort of. I've also come to the conclusion that psychotherapy does not work like medicine, so the search may continue to be fruitless. Every other month, I get an email and article about a new method. The claims are amazing. Generally, there is no research. When the data finally start to accumulate, it turns out the methods are no more effective than anything else. Perhaps something will come along that revolutionizes how change takes place. What we know right now, however, is that each clinician can get better. With measurement, feedback, and deliberate practice, outcome becomes more reliable and better. We can WAIT, claiming we “can't measure” or that something better may come along, or we can get to work. I opt for the latter!

  1. Hi Gabrielle, I think you are right on the money with this. We know lots of things make a difference in therapy, but those lots of things tend to make the argument lopsided, as they all have something to do with the therapist: the therapeutic alliance, the theory, the model, even the environment in which the therapist works. Little is made of the myriad of factors a client brings to therapy. Sure, we know about transference and resistance, but the client is a dynamic organism who changes from one minute to the next, biologically and therefore psychologically. I suspect true measurement of therapy is beyond us, not for want of trying, but because we are unable to control for all the variables. Guess we just aren't the exact science some would like us to be.
