I believe in CBT and my research shows it works! Therapy allegiance in psychotherapy research


Looking to find a psychologist to help you with your problems? Within the world of psychology, there are many flavours of talking therapy, including Cognitive Behavioural Therapy (CBT) and psychodynamic psychotherapy. Although there are many similarities between different psychotherapies (e.g. empathy, trusting relationships, agreed goals), many of these therapies have distinct philosophies, theories, and specific techniques they employ to effect positive change.

Most psychologists have a certain degree of bias towards a particular flavour of psychotherapy. Perhaps a particular style of therapy makes more sense to one person because its philosophies and theories broadly match that person’s values and way of seeing the world. Or a person’s personality or life experiences (possibly including therapy themselves) may draw them toward a particular treatment style or approach. Or the school where a person studied may have concentrated on a specific approach, or they may have been taught by an especially charismatic professor. Suffice it to say that the reasons are rarely simple and may span any or all of the above.

This article isn’t going to discuss which therapies are better (or indeed whether any are); instead I’ll be looking at what happens when psychology researchers with a bias towards a particular flavour of therapy conduct research into therapies: does their preferred approach come out smelling of roses, or does a researcher’s bias have no effect on their research outcomes?

“But that’s obvious – of course their research will be biased!”

I hear you say. Well, you may have a point, but consider two things:

  1. Researchers (should) go to great lengths to ensure that their research is as free as possible from experimenter bias. For example, you may have heard of the idea of a double-blind trial, where half the research participants are given the actual treatment being researched and the other half are given a placebo treatment. The point is that not only are the participants blind to which treatment they receive, but also that the researcher doesn’t know, in case their biases (conscious or otherwise) affect the way they treat the participant, record related data, etc. (a toy sketch of such blinded allocation follows this list).
  2. Within the field of psychology, there is a large literature regarding cognitive biases, so it might be natural to assume that psychology researchers would be especially aware of the pitfalls associated with biases affecting their research and guard against them.
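
To make the blinding idea in point 1 concrete, here is a toy sketch, entirely hypothetical rather than taken from the article, of allocating participants so that only an independent allocator can link a participant’s code to treatment or placebo:

```python
# A toy, hypothetical sketch of double-blind allocation: participants are
# randomised to two arms, and clinicians, raters, and participants only ever
# see an opaque arm code ("A"/"B"); the key stays sealed until analysis.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]
random.shuffle(participants)

# Only an independent allocator holds this mapping; everyone else sees codes only.
allocation = {p: ("A" if i % 2 == 0 else "B") for i, p in enumerate(participants)}
arm_key = {"A": "active treatment", "B": "placebo"}  # kept sealed until unblinding

print(allocation["P01"])  # e.g. 'B', which is meaningless to anyone without arm_key
```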

What is Researcher Allegiance?

Researcher Allegiance has been defined as “…a researcher’s belief in the superiority of a treatment [and] … the superior validity of the theory of change that is associated with the treatment”. It is regarded as “a very strong determinant of outcome in clinical trials … [and] … one of the greatest threats to the internal validity of findings from RCTs”.

In the case of psychotherapy research, a good example of potential Researcher Allegiance is whether or not a researcher developed the treatment under investigation (this is one of several tests commonly used to determine potential Researcher Allegiance).

Without going into any detail, there are a number of potentially interconnected reasons why someone may wish to (subconsciously or otherwise) show that their therapy works – it may be for egotistical reasons, to build or perpetuate their academic reputation; future grants/funding may depend on their research showing how successful the therapy is; or it may be ideological and/or political.

Previous research into Researcher Allegiance in psychotherapy outcome research has been inconclusive – earlier studies seemed to show a strong effect of Researcher Allegiance on outcomes, whereas more recent studies have shown mixed results.

The study

Munder et al. (2013) conducted an extensive literature search of the existing research on Researcher Allegiance in psychotherapy research and selected 30 meta-analyses to bring together into a meta-meta-analysis (yep, that’s right!).

The broad aim was to investigate the association between Researcher Allegiance and psychotherapy research outcomes. But given the complexities of the studies and the number of variables involved, the researchers also wanted to know whether any of these variables moderated the results. For example, would outcomes be affected by the age of the research, the methods used to measure Researcher Allegiance, the characteristics of the treatments, or the target populations?

The authors were also interested in any biases that the authors of each of the 30 meta-analyses might hold, and whether those biases would moderate the Researcher Allegiance-outcome association across the 30 studies. Specifically, did any of these authors have a bias toward the idea that Researcher Allegiance affects research outcomes? In other words, were the authors of the 30 meta-analyses themselves biased towards the main point of the research? And if so, what effect would this have?

The technique Munder and colleagues used to measure these potential second-order biases is worth mentioning: they defined a dichotomous (yes/no) variable indicating whether the authors of each of the 30 meta-analyses were likely to be biased, as follows (a minimal coding sketch follows the list):

  • Not having a bias: If the authors expressed neutral or negative attitudes towards the Researcher Allegiance bias hypothesis
  • Having a bias: If the authors had previously published on Researcher Allegiance (i.e. did one of the authors cite themselves in the description of Researcher Allegiance) or if they had portrayed Researcher Allegiance as being a definitive biasing factor in their article
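
As a minimal sketch of this coding rule (the function and argument names below are hypothetical, not the authors’ actual instrument), the “having a bias” condition amounts to a simple either/or check:

```python
# Hypothetical sketch of the dichotomous AAB coding described above.
def positive_aab(previously_published_on_ra: bool,
                 portrayed_ra_as_definitive: bool) -> bool:
    """Code a meta-analysis team as having a positive AAB if either condition holds."""
    return previously_published_on_ra or portrayed_ra_as_definitive

# Example: a team that cites its own earlier work on Researcher Allegiance
print(positive_aab(previously_published_on_ra=True,
                   portrayed_ra_as_definitive=False))  # True
```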

The results

A significant (p<.001) association between Researcher Allegiance and psychotherapy outcome was found, corresponding to a medium effect size (d=0.54). Expressed another way, Researcher Allegiance accounted for 6.9% of the variance in the outcomes of the primary studies. Interestingly, this effect was significantly moderated (p=.002) by whether the authors of the 30 meta-analyses were judged to have an allegiance to the Researcher Allegiance bias hypothesis (AAB). To quote the paper:

“In meta-analyses with positive AAB, the RA-outcome association corresponded to a moderate to large effect of r=.394, while in meta-analyses with no positive AAB a small to moderate effect of r=.177 was found.”
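
The correspondence between d=0.54 and the 6.9%-of-variance figure may not be obvious. Here is a quick back-of-the-envelope check (my arithmetic, not taken from the paper), using the common conversion r = d / sqrt(d^2 + 4) and then squaring r to get the proportion of variance explained:

```python
# Back-of-the-envelope check of how d = 0.54 relates to "6.9% of variance".
import math

d = 0.54
r = d / math.sqrt(d**2 + 4)  # standard conversion from Cohen's d to r
print(round(r, 3))           # ~0.261
print(round(r**2 * 100, 1))  # ~6.8% of variance, close to the reported 6.9%
```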

To investigate this effect further and to test whether it was confounded by other moderating variables, a meta-regression was used – this analysis concluded that:

“…meta-analyses with positive AAB were more likely to include comparative studies as opposed to controlled studies (n=28, r=.495, p=.007) and to use blind as opposed to nonblind or unclear RA assessment (n=28, r=.538, p=.003).”

and that, although this further analysis reduced the size of the effect:

“AAB remained a marginally significant predictor (p=.051), while study design, age group, and blinding did not predict the RA-outcome association (ps≥.49). Thus, there seemed to be some redundancy among predictors, as indicated by the drop in the predictive power of AAB. However, AAB remained the only clear predictor of the RA-outcome association.”

However, the authors continue by warning against over-interpretation of these findings due to shared variance (multicollinearity) among predictors and the reduced (n=22) number of meta-analyses included in this further analysis.
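
For readers unfamiliar with meta-regression, the sketch below shows the general idea (with hypothetical data and variable names; the authors’ actual model will have differed in its details): regress each meta-analysis’s RA-outcome effect on candidate moderators, weighting each effect by the inverse of its sampling variance:

```python
# Illustrative, hypothetical inverse-variance weighted meta-regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
k = 22                                      # meta-analyses in the reduced model
effect_r = rng.normal(0.26, 0.15, k)        # RA-outcome association per meta-analysis
sampling_var = rng.uniform(0.005, 0.03, k)  # sampling variance of each effect
aab = rng.integers(0, 2, k)                 # positive allegiance to the RA bias hypothesis
comparative = rng.integers(0, 2, k)         # comparative vs. controlled studies
blind = rng.integers(0, 2, k)               # blind vs. non-blind RA assessment

X = sm.add_constant(np.column_stack([aab, comparative, blind]))
fit = sm.WLS(effect_r, X, weights=1.0 / sampling_var).fit()
print(fit.summary())  # coefficient and p-value for each moderator
```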

Other moderating variables

Looking at other moderating variables, the authors found the following:

  • No significant moderating effects: treatment format (individual/group), age group, defined/undefined clinical populations, non-randomised studies included/excluded, study design (comparative/controlled), primary/mixed outcome dimensions, sample size weighted/not weighted, different sources used to measure Researcher Allegiance and different indicators, publication year of meta-analysis
  • Significant moderating effects: There were no other significant moderating variables, although the meta-analyses that used a reprint method with blinded raters (compared to those not reporting blinding) showed a trend for larger effects (p=.096)

How can Researcher Allegiance bias be avoided?

This paper suggests the following ways to prevent the Researcher Allegiance bias affecting psychotherapy outcome research:

  • Comparative studies should be conducted collaboratively by teams with mixed allegiances
  • Study therapists in all treatment conditions should be motivated to learn and deliver their respective treatments
  • Meta-analyses on the efficacy of treatments should include a consideration of Researcher Allegiance as a potential rival explanation of their findings

Munder T, Brütsch O, Leonhart R, Gerger H, & Barth J (2013). Researcher allegiance in psychotherapy outcome research: An overview of reviews. Clinical Psychology Review, 33(4), 501-511. PMID: 23500154

