Psychologists' scoring of forensic tools depends on which side they believe has hired them
A brilliant experiment has demonstrated that adversarial
pressures skew forensic psychologists' scoring of supposedly objective
risk assessment tests, and that this "adversarial allegiance" is not due
to selection bias or preexisting differences among evaluators.
The
researchers duped about 100 experienced forensic psychologists into
believing they were part of a large-scale forensic case consultation
at the behest of either a public defender service or a specialized
prosecution unit. After two days of formal training by recognized experts on two widely used
forensic instruments -- the Psychopathy Checklist-Revised (PCL-R) and the Static-99R --
the psychologists were paid $400 to spend a third day reviewing cases
and scoring subjects. The National Science Foundation picked up the $40,000 tab.
Unbeknownst
to them, the psychologists were all looking at the same set of four
cases. But they were "primed" to consider the cases from either a defense or
prosecution point of view by a research confederate, an actual attorney who posed as working for either the public defender service or the specialized Sexually Violent Predator (SVP) prosecution unit. In his
defense attorney guise, the confederate made mildly partisan but
realistic statements such as "We try to help the court understand that
... not every sex offender really poses a high risk of reoffending." In
his prosecutor role, he said, "We try to help the court understand that
the offenders we bring to trial are a select group [who] are more likely
than other sex offenders to reoffend." In both conditions, he hinted at
future work opportunities if the consultation went well.
The deception was so cunning that only four astute participants smelled a rat; their data were discarded.
As
expected, the adversarial allegiance effect was stronger for the PCL-R,
which is more subjectively scored. (Evaluators must decide, for
example, whether a subject is "glib" or "superficially charming.")
Scoring differences on the Static-99R reached statistical
significance in only one of the four cases.
This is just the latest in a series of stunning findings on allegiance bias from the research team led by Daniel Murrie of the University of Virginia and Marcus Boccaccini of Sam Houston State University. The tendency of experts to skew data to fit the side that retains them should come as no big surprise. After all, it is consistent with the National Academy of Sciences' 2009 report calling into question the reliability of many types of forensic science evidence, including supposedly objective techniques such as fingerprint analysis.
The groundbreaking research, to be published in the journal Psychological Science, echoes
previous findings by the same group regarding partisan bias in actual court cases. But
by conducting a true experiment in which participants were randomly assigned to either a defense or prosecution condition, the researchers could rule out selection bias as a cause. In other words, the adversarial allegiance bias cannot be solely due to attorneys shopping around for simpatico experts: the randomly assigned groups showed no preexisting differences in their attitudes about civil commitment laws for sex offenders.
Sexually
Violent Predator cases are an excellent arena for studying adversarial
allegiance, because the typical case boils down to a "battle of the
experts." Often, the only witnesses are psychologists, all of whom have
reviewed essentially the same material but have differing
interpretations about mental disorder and risk. In actual cases, the
researchers note, the adversarial pressures are far higher than in this
experiment:
"This evidence of allegiance was particularly striking because our experimental manipulation was less powerful than experts are likely to encounter in most real cases. For example, our participating experts spent only 15 minutes with the retaining attorney, whereas experts in the field may have extensive contact with retaining attorneys over weeks or months. Our experts formed opinions based on files only, which were identical across opposing experts. But experts in the field may elicit different information by seeking different collateral sources or interviewing offenders in different ways. Therefore, the pull toward allegiance in this study was relatively weak compared to the pull typical of most cases in the field. So the large group differences provide compelling evidence for adversarial allegiance."
Although the group's findings have heretofore been published only in academic journals and have found a limited audience outside of the profession, this might change. A Huffington Post blogger, Wray Herbert, has published a piece on the current findings, which he called "disturbing." And I predict more public interest if and when mainstream journalists and science writers learn of this extraordinary line of research.
In the latest study, Murrie and Boccaccini conducted follow-up analyses to determine how often matched pairs of experts differed in the expected direction. On the three cases in which clear allegiance effects showed up in PCL-R scoring, more than one-fourth of score pairings had differences of more than six points in the expected direction. Six points equates to about two standard errors of measurement (SEMs), a gap that should occur by chance in only about 2 percent of cases. A similar, albeit milder, effect was found with the Static-99R.
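To make that 2 percent figure concrete, here is a minimal sketch of the arithmetic. The assumptions are mine for illustration, not the study's: a PCL-R SEM of about 3 points and normally distributed scoring error.

```python
# Minimal sketch of the "2 percent" arithmetic. Assumed: PCL-R SEM ~ 3
# points and normally distributed scoring error (illustrative values only).
from scipy.stats import norm

SEM = 3.0    # assumed standard error of measurement, in PCL-R points
diff = 6.0   # score gap between a matched prosecution/defense pair

z = diff / SEM                  # 6 points is about 2 SEMs
p_one_tailed = 1 - norm.cdf(z)  # chance of a gap that large in the expected direction

print(f"{z:.1f} SEMs -> one-tailed probability of {p_one_tailed:.1%}")
# 2.0 SEMs -> one-tailed probability of 2.3%, roughly the "2 percent" cited
```

(Modeling the gap between two independently scored tests would inflate the error by a factor of the square root of 2; the 2 percent figure matches the simpler one-score reading.)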
Adversarial
allegiance effects might be even stronger in less structured assessment
contexts, the researchers warn. For example, clinical
diagnoses and assessments of emotional injuries involve even more subjective judgment than scoring of the Static-99R or PCL-R.
But ... WHICH psychologists?!
For me, this study raised a tantalizing question: Since only some of the psychologists succumbed to the allegiance effect, what distinguished those who were swayed by the partisan pressures from those who were not?
The short answer is, "Who knows?"
The researchers told me that they ran all kinds of post-hoc analyses in an effort to answer this question, and could not find a smoking gun. As in a previous research project that I blogged about, they did find evidence for individual differences in scoring of the PCL-R, with some evaluators assigning higher scores than others across all cases. However, they found nothing about individual evaluators that would explain susceptibility to adversarial allegiance. Likewise, the allegiance effect could not be attributed to a handful of grossly biased experts in the mix.
In fact, although score differences tended to go in the expected direction -- with prosecution experts giving higher scores than defense experts on both instruments -- there was a lot of variation even among the experts on the same side, and plenty of overlap between experts on opposing sides.
So, on average, prosecution experts scored the PCL-R about three points higher than did the defense experts. But the scores given by experts on any given case ranged widely even within the same group. For example, in one case, prosecution experts gave PCL-R scores ranging from about 12 to 35 (out of a total of 40 possible points), with a similarly wide range among defense experts, from about 17 to 34 points. There was quite a bit of variability in scoring of the Static-99R, too; on one of the four cases, scores ranged all the way from a low of two to a high of ten (the maximum score being 12).
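As a back-of-the-envelope illustration of that overlap (my own sketch, not an analysis from the study), treat each side's scores as roughly normal, with the reported 3-point gap between group means and a within-group spread of about 6 points eyeballed from the ranges above:

```python
# Rough illustration of overlap between prosecution and defense PCL-R
# scores. The 3-point mean gap is as reported above; the within-group SD
# of ~6 points is my assumption, eyeballed from the score ranges described.
from scipy.stats import norm

mean_gap = 3.0  # average prosecution-minus-defense difference, in points
sd = 6.0        # assumed within-group standard deviation (hypothetical)

d = mean_gap / sd               # standardized gap (Cohen's d ~ 0.5)
overlap = 2 * norm.cdf(-d / 2)  # overlapping coefficient of two equal-SD normals

print(f"d = {d:.2f}, distribution overlap ~ {overlap:.0%}")
# d = 0.50, distribution overlap ~ 80% -- a real group shift, far from separation
```

On those assumptions, roughly 80 percent of the two distributions overlap, which squares with the observation that the bias is a group-level tilt rather than wholesale partisanship.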
When the researchers debriefed the participants themselves, they didn't have a clue as to what caused the effect. That's likely because bias is mostly unconscious, and people tend to recognize it in others but not in themselves. So, when asked about factors that make psychologists vulnerable to allegiance effects, the participants endorsed things that applied to others and not to them: Those who worked at state facilities thought private practitioners were more vulnerable; experienced evaluators thought that inexperience was the culprit. (It wasn't.)
I tend to think that greater training in how to avoid falling prey to cognitive biases (see my previous post exploring this) could make a difference. But this may be wrong; the experiment to test my hypothesis has not been run.
The study is: "Are forensic experts biased by the side that retained them?" by Daniel C. Murrie, Marcus T. Boccaccini, Lucy A. Guarnera and Katrina Rufino, forthcoming in Psychological Science. Contact the first author if you would like to be put on the list to receive a copy of the article as soon as it becomes available.