Psychotherapy research that hangs in the air exactly the same way that bricks don’t

The obsession with causality and with investigating pathology in the field of psychotherapy, while fascinating, hasn’t led to improved outcomes in mental health interventions. Our understanding of the brain has advanced dramatically in recent decades, but this hasn’t led to any corresponding advance in interventions to reduce suffering.

If you drop a brick on your foot, once you get past the shock and pain, you would most likely be interested in doing anything you can to reduce the pain and help the healing.

Imagine that when you seek advice you discover the experts are mostly interested in the brick. They want to know its structure and make-up, its chemical composition, the velocity and acceleration of impact. Then, if you manage to get them interested in your foot, they start to measure chemical imbalances in said foot and ascribe your pain and suffering to those imbalances, crossing their arms and smiling with satisfaction. They then draw your attention to decades of research findings looking at structural changes found on sophisticated scans of similarly injured feet. Next they tell you that these structural changes are causing your suffering and limitation, with the implication that you should be grateful for such psycho-education. You then find out that after 50 years of such research they are no better now than they were then at speeding up the healing process or reducing your pain and suffering. Meanwhile your grandmother has bandaged your foot, elevated it, applied ice and brought you a nice cup of tea. Researchers chuckle dismissively at this, having been too busy with their PhDs to listen to their own grandmothers.

Today is the 8th anniversary of the founding of The Milton H Erickson Institute of Tasmania, and Erickson’s 114th birthday. A good day for a provocative post!

Share this post if you would like to see psychotherapy research focus on assisting healing.

Building a therapeutic relationship when a client seems difficult

“There is no resistant client. It is only that we have not found a way to work with them.” – Karl Tomm

Most therapists learn a framework for working with “difficult” clients, but so often, in the moment, in the middle of the difficulty, knowing what to do is far from obvious. Reflecting after the session, or in a supervision conversation, it often becomes clear, but by then it is too late.

This is probably one of the commonest issues a therapist brings to supervision, and one of the most obvious areas where deliberate practice has a chance to improve effectiveness.

But what to practice?

At The Centre of Effective Therapy, we teach that the most useful beacon to alert you to a problem with the therapeutic relationship is your own response. The best thing about that is that it is hard to miss.

There are 3 common responses that we have to clients.

The first is that we like them. We are aware that the client is collaborative, responsive and willing, and the work flows. This reminds us why we do this work and we enjoy the interaction.

The second response is that we find ourselves feeling overwhelmed, inadequate or anxious. Our heart sinks and we feel the sudden need to refer this client to someone better than us. We might look at the clock, wishing the session was over. This client seemingly has a huge problem, and the difference is that they want us to fix it. They are not invested in a collaborative process, and while we might see that there is something they personally could do differently to ease the problem, they are not ready to see it yet. When we see their name in our appointment book for follow-up, we might even hope they don’t turn up. But they always come! Steve de Shazer called these people complainants, but if you have learned a different framework, you will likely identify with the feeling, and know where you are in your frame.

The third response we have is frustration. We get a disturbing desire to shake our client. We might feel irritated or annoyed that they are wasting our time and are unwilling to look at their issues. We think they are avoidant. When we see their name in our appointment book for follow-up, we know they likely will not turn up. Steve de Shazer called these clients visitors, but again, in different frames the response is still the same, and is the easiest thing for us to identify in the moment.

Rob McNeilly takes these responses to the extreme and says that we fall in love with our customers, we feel suicidal with our complainants, and we feel homicidal with our visitors.

We all want to enjoy our work and be effective. We know the alliance is pivotal, and when we call a client difficult, it is usually because we are feeling either frustrated or inadequate. This is our cue to restore the relationship. We could say, simply, that we need to do something so that our response changes from irritation or inadequacy to liking the client. It is our job to do this, not our client’s.

In our programmes, we teach students that if they have either of these two responses, they simply need to down tools and validate the client’s experience.

That’s it. Simple.

You’ll notice, though, that when you get it right, the validation is a bit different in the two categories. If we feel inadequate, we will likely find ourselves validating the client’s experience of their problem and how huge it is. If we feel frustrated, we tend to validate the client’s experience of not wanting to be here. We may even engage them in the idea of how to get out of having to come again, and find at this point that we have a willing collaborator!

If we do this sufficiently, we will then be able to engage them in looking at what they might be able to do differently, and as we do this, we notice our response shifts: we start to enjoy the work and we start to like the client.

We restore the relationship by fixing our response.

Next time you find yourself feeling suicidal or homicidal with your client, try downing tools and validating, actively working to restore the relationship by focusing on their experience, not yours. If you then find yourself enjoying the work and liking the client, you know it is fine to proceed. If the unhelpful response recurs, down tools again, validate, restore and move on.

Another thing I have noticed that helps when I am disengaging is to ask the client what they like to do. A different version of them appears as they talk about what they like, and I find it much easier to be interested and engaged, discovering their resources and knowing that bringing those resources to the problem area will later be fruitful.

I remember reading that Erickson liked to find something about every client that he could appreciate. In one instance he was struggling to find anything to appreciate in an old man who seemed to have little concern for anything other than himself, and just when it seemed there was nothing, the old man smiled, revealing a mouth full of decaying teeth, one of them gold. The one gold tooth!

What do you do to engage a so-called difficult client? Please leave a comment.

The Decline Effect: “I believe in miracles”

I think it was Sir William Osler who said that when a new medication comes out, you should use it as often as you can while it is still effective. Science has long known about this phenomenon. I didn’t know it was called “the decline effect” until I read this great blog post from Scott Miller.

We have seen it in therapy with the promise of new therapies, like EMDR and CBT. Initial research gave great hope, and then, with time, more research showed they were no better. Then, sadly, we throw the baby out with the bathwater. Science doesn’t look at the amazing phenomenon and try to come up with a hypothesis. It keeps the same old hypothesis and ignores the phenomenon.

In therapy, we have the opportunity to use these weird phenomena for the good of the client, while patiently waiting for science to tell us more. Things like the placebo effect, allegiance and the decline effect are a nuisance in research, or at least to those researchers trying to explore an observer-independent reality (which Humberto Maturana reminds us we have no way of knowing exists outside our closed nervous system), but they are gold nuggets for the savvy therapist.

There is this wonderful phenomenon that happens when the media start to report about effective treatments. “New treatment offers hope”, “Miracle cure for…”, that sort of thing. Something happens in the therapist who goes to learn this new technique, and something happens in the client who reads about it in the newspaper, and in their families who have been so worried about them, and it’s nothing short of magic.

We human beings are capable of believing, and therefore of experiencing, magic. A child believes in Santa, or the tooth fairy, and we were all children once.

A woman came to see me to give up smoking. She wanted hypnosis. She was convinced it would work because a friend of hers had seen me for the same thing and had a “miraculous cure.” And so I harnessed her expectancy and launched into what she had heard I had done with her friend. Whamo, a one-session wonder. In the follow-up session, though, she said she wanted to do some more hypnosis. As I write this I remember I was reluctant, as I usually don’t like to mess with magic, and of course my doubt may have led to the outcome. She started smoking again. She told me that this kind of thing happens to her a lot. She will go and see some healing person for some issue, say a chiropractor for her back, and the first session is always the stuff of miracles, and the second brings on a recurrence of some symptoms.

So I wondered with her. “What is it that you bring to that first session that is your recipe for magic?” I spoke some words like openness, not knowing, expectancy, faith, being the zen student, probably none of them sufficient to do justice to the experience, but she started to connect with it. We then did a session of hypnosis to connect her to that ability, so the magic could be a function of her and not of the healer or the healing. She stopped smoking and agreed not to mess with the magic.

If you haven’t met Raymond the mouse before, I hope you will enjoy this clip.

https://www.youtube.com/embed/O-l_BT_qTkQ

I invite you to find your own way to put down the broom and sweep that decline effect out of your work and your life.

My husband Rob McNeilly has created an online course to explore these wonderful phenomena of effective therapy. We have been exploring them in conversations and he has distilled them into something wonderful. He is a masterful teacher and of course I am biased, but if you like the idea of exploring ways to identify what you as an individual can do to explore your growing edge, take a look here.

Allegiance: A powerful force in therapy

We all want to be effective in our work. There’s nothing more satisfying than when a client gets over their problem. What would it be like to believe that is possible with all clients? Well, it turns out that believing is seeing.

Allegiance is the degree to which the person delivering the treatment believes that the treatment is efficacious. It turns out that in therapy, allegiance has a large impact on outcome (effect size up to .65) (1).

In therapy, unlike medicine, the protocol and its specific effects are not that important, but the degree to which the therapist believes that what they do will work is crucial. Allegiance effects account for as much as 70% of treatment effects (1). Adherence to a treatment manual without attention to the client, on the other hand, may actually make things worse.

This is another weird phenomenon of therapy that is sorely missing a scientific explanation.

We don’t need to wait for an explanation, however, to use the power of this phenomenon in our work. The trouble, though, is that we also know from outcome research that the model is not the thing that creates effectiveness. The 400+ different models are all as good as each other, and the choice of model only accounts for up to 1% of outcome variability.

How can we use the power of allegiance while still acknowledging how small a role the model plays in effectiveness? It’s hard for me to believe in a model when it only accounts for 1% of the variability in outcomes (1).

Believing in ourselves might seem like a good idea. Therapist effects exceed treatment effects, accounting for 3-7% of outcome variance. Who we are is more important than what we do. But wait, therein lies another weird phenomenon. Professional self-doubt seems to correlate with effectiveness. The new, fresh student, filled with enthusiasm and hope and knowing that they don’t know, is often more effective than the seasoned therapist who knows that they know. Supershrinks also doubt themselves more. Scott Miller says it more clearly than I can here.

So that leaves the client. I choose to believe in the client.

Milton H Erickson said that “a baby doesn’t know it can walk, but you do”. You watch that baby learn to walk with a solid belief that they will succeed, and you don’t realise how powerful the mood of your knowing is for that baby to walk into. He was atheoretical in his approach to therapy, but one of the most spectacular things about him and his work was his unwavering belief in the client.

Some years ago a talented student I was working with rang me up in tears because she had failed an oral exam. The protocol was that she could do it again the following day. She needed calming before we could do anything useful, so I grappled for a distraction. I told her a story. The night before I had had a dream that our newly installed gas hot water stopped working mid shower, and I had a bizarre phone call trying to get the old electric one put back in. I woke and thought, “what a weird dream”. Half an hour later my son yelled out from the shower that the hot water had gone off. I said “Oh my god I dreamed that happened”. My husband rang the gas man who was just around the corner so he came straight over. It turned out that someone walking down our street had reached into our front yard and turned the gas meter off. Easily fixed and kind of funny. The student said to me “For god’s sake Gabrielle, why can’t you dream something useful?!” We both laughed. The next morning I texted her that I had dreamed that she passed her exam. She said when she read the text she had a wonderful feeling of confidence. She went into the exam feeling invincible and performed outstandingly.

We’ve all heard stories of faith healing, or shamanic healing, and the power of belief. Conversely, in Australian Aboriginal culture, pointing the bone causes the death of the recipient. We’ve all had clients who believe they have an incurable mental illness proclaimed by a well-intended, if ill-informed, psychiatrist, and we must first work to dispel this myth.

When we get resigned, we start to believe that a client can’t get better and we throw away one of the most powerful aspects of what we do. Once we understand the power of belief we can use it for the good of the client. One of the ways that I notice I can restore my faith in the client is to ask what they like to do. Resources and resourcefulness appear, often in surprising ways.

What do you do to restore allegiance when your faith wavers? Please leave a comment.

1. Wampold, B. E. (2001). The Great Psychotherapy Debate: Models, Methods, and Findings. Mahwah, NJ: Erlbaum.

Relative Efficacy: A curious phenomenon in therapy, and what happened to the client?

In 1936, Rosenzweig (2) was the first to notice that any time different therapy models were compared, they were invariably found to be equally effective. He borrowed a metaphor from Alice in Wonderland: “At last,” the Dodo said, “everybody has won and all must have prizes.”

This phenomenon of outcome equivalence of therapies is now called “the dodo bird effect”. Every now and then we see a new study find a difference. We saw lots of them when CBT was the new kid on the block. When these studies are put under scrutiny, however, it usually turns out either that they have not controlled for allegiance, which has a large impact on outcome and easily explains the difference, or that the comparison wasn’t to a treatment intended to be therapeutic, so it was an unfair comparison. It is now such a universally accepted, well-tested phenomenon that if a single study holds up to that scrutiny, we really should wait to see if it is reproducible before we take it seriously.

If you compare any number of different models that are intended to be therapeutic, they are always better than waiting lists, and never better than each other. Even models with vastly differing theoretical underpinnings and processes have equivalent outcomes. Can you believe that nearly 80 years after Rosenzweig, some researchers are still designing studies that pit one therapy against another! The 8 such studies funded by the NIMH between 1992 and 2009 cost 11 million dollars! (3, pp. 267-8)

The popular hypothesis to explain the “Dodo” is that factors common to all approaches account for effectiveness. That hypothesis has satisfied the dodo, but it didn’t set out to explain the phenomenon that therapy is effective, with only 13% of outcome variance attributable to the therapy. The 87% that is extratherapeutic can’t happen unless the client has the therapy. (Weird phenomenon still to be explained.)

The general perception in our community, and often in students wanting to learn, is that the clever and complex model of therapy must account for effectiveness. But clever models fare just as well as theoretically simple ones. Family Therapy, while effective, has an effect size of .65 (4) compared to .8 (3) for individual therapy, yet its theoretical underpinnings are far more complex.
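As an aside, for anyone who, like me, finds effect sizes hard to picture: here is a rough back-of-envelope sketch (my own illustration, not something from Wampold’s book) of how a between-groups effect size d translates into the “percent of outcome variance” language used in these posts, assuming two equal-sized groups and the standard conversion from d to a correlation r:

$$ r = \frac{d}{\sqrt{d^{2} + 4}}, \qquad \text{proportion of variance explained} \approx r^{2} $$

So d = 0.8 gives r² ≈ 0.14, close to the 13% attributed to therapy overall; d = 0.65 gives r² ≈ 0.10; and a small between-model difference of, say, d = 0.2 (a figure I am assuming purely for illustration) gives r² ≈ 0.01, or about 1%.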

We do know that “the intention to be therapeutic” is an important commonality.

Compare, say, EMDR to psychoanalysis. What is happening in both to get equivalent outcomes?

What do you think?

Let’s not answer this question too soon; as history informs us, jumping to the wrong hypothesis may slow us down by 80 years.

Back in 1986, Luborsky and colleagues came up with the bright idea of looking at the therapist, rather than the therapy, as the random factor to be studied. A refreshing idea, nearly 30 years ago! They looked at the raw data from 4 big studies (3, p 169) and showed that therapist effects were much larger than treatment effects. Then Blatt et al. (1996) looked at the NIMH Treatment of Depression Collaborative Research Program, a highly regarded and well-controlled study. Effectiveness scores were available, so they rearranged the piles from different treatments to different therapists. They made 3 new piles so they could now compare effective, moderately effective and less effective therapists. They found there were significant differences, independent of the type of treatment, and not related to the therapists’ experience.

The Dodo would agree that it makes a lot more sense to look for the specific ingredients of effectiveness in the therapist, not in the therapy, but where is the client?

I was looking forward to reading the second edition of The Great Psychotherapy Debate. My biggest question was not “What do we know about the 13% of outcome attributed to what happens in the therapy room?” but “What do we know now, 14 years later, about the 87% of outcome variance that was called extra-therapeutic (client factors)?”

Well, sadly, it’s not there.

So I googled. The top 10 hits take us back to 1992 and Lambert et al.

Ho hum.

References:

2. Rosenzweig, S. (1936). Some implicit common factors in diverse methods of psychotherapy: “At last the Dodo said, ‘Everybody has won and all must have prizes.’” American Journal of Orthopsychiatry, 6, 412-415.

3. Wampold, B. E., & Imel, Z. E. (2015). The Great Psychotherapy Debate: The Evidence for What Makes Psychotherapy Work (2nd ed.). Routledge.

4. Sprenkle, D. H. (2002). Effectiveness Research in Marriage and Family Therapy. AAMFT.

The Science of Therapy

In a conversation between Humberto Maturana and Heinz von Foerster (Truth and Trust), Maturana says that the difference between science and philosophy is that science conserves the phenomenon to be explained and alters the explanatory principles to fit, whereas philosophy conserves the principles. When reflecting on the lack of progress psychotherapy has made in the last 50 years, I wonder if it’s because the practice of explaining psychotherapy is becoming a philosophical pursuit rather than a scientific one.

I find it frustrating that the things we know about therapy are very rarely explored scientifically. I’m not claiming to know how, and statistics was my worst subject at Uni, but I’d like to follow someone who does. Science is supposed to explore and explain observable phenomena. There are observable, reproducible phenomena that happen when a client comes to therapy, but the field of therapy rarely applies the scientific method to explain them. Where else would you find researchers still designing trials to explore what we have known for 50 years, namely that therapy is effective?

Well, I guess in part it has to do with the next step. We still don’t know why or how therapy is effective. When faced with such a dilemma, science is supposed to come up with a hypothesis that explains the phenomenon and then test it. It’s not supposed to ignore an inconvenient phenomenon and persist with its principles.

Humberto Maturana, a Chilean biologist, has a really meticulous way of speaking about this. In this article he impeccably details what a scientific explanation is: Maturana, H. R., “Ontology of Observing: The Biological Foundations of Self Consciousness and the Physical Domain of Existence” (here is the link: http://www.inteco.cl/articulos/004/texto_ing.htm).

I had to have a little lie-down after reading it, though. My husband says that reading Maturana is like eating Christmas pudding. I thought I’d have a go at summarising the part about science into a sort of deconstructed Christmas soufflé.

Here goes:

A scientist’s job is to provide an explanation of a phenomenon.

This scientist needs to understand that they themselves are observing the phenomenon to be explained and cannot extract themselves from their own observing. So the phenomenon to be explained is integrally tied up in their observing, and is not in itself some independent reality.

An everyday explanation is always an answer to a question about the origin of the phenomenon to be explained, and is accepted or rejected by the listener depending on whether or not it satisfies certain criteria of acceptability that the listener specifies.

A scientific explanation, however, is the criterion of validation of a scientific statement. It specifies the phenomenon to be explained. It provides a generative mechanism (the hypothesis) that, if allowed to operate, gives rise to the phenomenon to be explained, and it tells you what to do so that, if you do it, you will be able to observe the phenomenon being explained in the first place.

So what would a scientist do when confronted with psychotherapy research findings?

Start with the phenomenon to be explained:

Let’s pick one.

A client comes to therapy and gets over their problem. The reason they got over their problem is a function of them, not the therapy. But if they hadn’t come to therapy and had been on a waiting list, they wouldn’t have got better. A weird phenomenon, hey! Therapy is effective, but the largest outcome variance is client factors, which seemingly have nothing to do with the therapy.

A hypothesis:

Someone comes to therapy in a disconnected state. They’re all over the place like a mad woman’s knitting. They are running around like a headless chook. Then they have a therapeutic conversation that generates the experience of reconnection, and they are now cooking with gas. They are “in the zone” and then they go out into their life connected with their resourcefulness and resolve their problem with their own client factors.

When that doesn’t work, reconnection isn’t enough. The missing resource then needs to be learned. The savvy therapist then creates an experience where the client is reconnected with their ability to learn, and with their own individual process of learning. The client then goes out into their life and learns the missing resource and gets over their problem.

What about feedback though?

The phenomenon

Feedback improves effectiveness. Well at least it has the potential to. It does seem to matter what you do with the feedback.

Feedback from the client improves outcome. Feedback from the therapist doesn’t improve outcome.

What’s the hypothesis that provides a generative mechanism that, if it were to run, would create the phenomenon to be explained?

I’m guessing client feedback puts the attention back on the client and stops the therapy veering into irrelevant territory. What do you think?

If you’re interested in this conversation, sign up for this blog and leave a comment.

A General Practice of Therapy

I remember as a kid wanting to be a teacher. I applied to medicine because I got good marks and that’s what you did. I got in, and it seemed like a good idea at the time. I didn’t enjoy it much in the early years of the degree, but with time and experience the practical aspects appealed, and the experience of making a difference to someone was very satisfying. I was headed down a surgical path until the 80 hour weeks, as interesting as they were, made me realise that family life and having children would be pushed to the side. I switched paths to General Practice and was surprised that I really enjoyed it, particularly getting to know people and families and being involved in their ongoing wellbeing.

After 10 years in General Practice most GPs have seen everything once and are moving into more specialised interest areas. For me, medicine was becoming a “paint by numbers” experience, with practices and protocols taking any hint of creativity from its practice. The one area where this wasn’t happening was mental health, where mainstream medicine was sorely lacking in effectiveness. It was at this time, taking stock and looking at my books, that I realised most people were coming to see me for counselling. I began to feel a fraud, as I’d had no training in counselling as an undergraduate, and even in the general practice training programme at that time we had only one afternoon with a psychologist and a weekend workshop with Steve Biddulph.

It was then that a flyer for The Diploma of Solution Oriented Psychotherapy, run by Doug Carter and Rob McNeilly at The Brief Therapy Centre in Hobart, arrived in my in-tray, and I liked what it said. It was about assuming resourcefulness and helping people reconnect with their own nascent strengths to move past the problem, rather than diagnosing and pathologising. I knew my patients were resourceful. I often thought that if I’d been through what they had, I wouldn’t be doing as well as they were, so I was keen to find out more.

I went along, and from the first I was hooked. It was exactly what was missing in medicine, which is a problem-fixing approach. The solution oriented approach seemed to be the missing half, the other side of the whole, and with both I no longer felt stuck. After 4 years of training I began teaching the approach, and found it amusing that I had come full circle.

General Practice provided a unique environment for practising small aspects of therapy and building skills, and medicine in many ways was a good grounding for working with people who are suffering. I am interested in doing good work, but research into the effectiveness of therapy is mostly unhelpful to the practitioner wanting to do better. There are a few notable exceptions, including Scott Miller and his work on FIT, and Bruce Wampold, who wrote The Great Psychotherapy Debate: The Evidence for What Makes Psychotherapy Work, a marvellous distillation of outcome research.

The tangle is in part that psychological studies can’t be blinded. As Jay Haley said in 1994, “The realisation that a therapist influences the data he theorises from and cannot be objective has caused therapists to realise they are part of the truth they seek.”

I am interested in exploring what we should learn, how we can teach, and what we should practise to improve effectiveness in the management of mental health and wellbeing. Part of this is asking: what questions should we be asking that, if they were answered, would tell us what to do?