February 11, 2013

Exercise: Priming students to detect covert biases

In an eye-opening exercise in my graduate forensic psychology course, I had two groups separately analyze a sanitized forensic report. The subject of the report was a 16-year-old boy named "John" who had committed a relatively minor sex offense; the evaluation issue was treatment amenability. After independent group discussions, the two groups shared their impressions as follows:

Group A: "John has a conduct disorder and is narcissistic. His misconduct appears to be escalating. There are ominous warning signs of budding psychopathy. He is at a crossroads in his life; he could go bad fast."

Group B: "This report is biased. The evaluator has joined with John's mother, and is channeling the mother's antagonism toward John. There is evidence of racism, homophobia, and political conservatism. The evaluator’s antipathy toward John feels personal – perhaps he has a wayward teenage son?"

The two groups looked across the table at each other, flabbergasted. Some suspected a trick. "Did you really give us the same report to read?" one student queried.

Yes, everyone had read the identical report. And, in case you wondered, group selection was random; there were no baseline differences that would explain the groups' divergent opinions.

Rather, the difference was in how the two groups were primed to read the report. Their instructions:

Group A: "Read the report with the goal of trying to understand John. What makes him tick? Does he have any potential clinical diagnoses? What is your prognosis for his future?"

Group B: "Read the report with the goal of trying to understand the perspective of the report writer. Do you see any problems with his method or his analysis? If so, do they suggest any potential biases?"

This was no abstract academic exercise. Channeling John's hateful mother, this seminal report reads like something torn from the pages of an Ann Rule true-crime book, replete with enough (uncorroborated) animal torture and arson to excite any true believer in the infamous Macdonald triad. Going unchallenged at the time, the report had a hugely prejudicial impact on decision-makers. For years to come, institutional bureaucrats and forensic experts quoted liberally from it to bolster their opinions that John was dangerous.

This is not an isolated or unusual case. Alarmist reports like this have remarkable staying power, their uncorroborated claims taking on a life of their own as they ripple through their subjects' lives, resisting rational analysis or contestation. The power of a single forensic evaluator is truly frightening at times.

Cutting through the hype

So how did a group of graduate students manage to see through the hype that had buffaloed seasoned professionals, to take the measure of the evaluator and expose his subterranean biases? Remarkably, all it took was a simple admonition to think critically, and to be alert to potential biases.

Ideally, we should always be exercising these analytical faculties. We should train ourselves to simultaneously process at least two levels of analysis, asking ourselves both:

A. What does this report tell us about its subject?

B. What are the limitations of this report? How might its findings be flawed by insufficient or unreliable information, unconscious assumptions and biases, or other factors?

Cognitive biases

In the class exercise, Group A was focused only on Question A, whereas Group B focused on Question B. When forensic experts review a report, our approach should be bidirectional, and incorporate both perspectives.

Constructive skepticism benefits from an understanding of cognitive biases and how they work. In the instant case, the most obvious of these was confirmatory bias. This is the tendency to actively seek out and assign more weight to information that confirms one's prior beliefs, while discounting or ignoring disconfirmatory data. Clinicians who fall under the spell of psychopathy theory, for example, tend to see psychopaths lurking behind every bush. A clue to the author's preconceptions in John's case was found in a footnote citing Stanton Samenow's Inside the Criminal Mind, an influential but decidedly polemic treatise that vigorously disavows social factors in crime and -- as its title implies -- caricatures criminals as a breed apart from normal human beings.

Once you detect such selective perception in play, you may see related cognitive biases which the discerning expert should always be on the lookout for in forensic (and other) reports. These include, but are not limited to:

  • Salience bias, in which inordinate attention is paid to exotic or highly distinctive information, at the expense of ordinary features of a case that may be important. In John's case, the evaluator overweighted the mother's fanciful tales about John's early childhood ("He never cried like a normal baby!"), while ignoring more proximate evidence of John's confusion over his sexuality. In criminal cases, salience bias often contributes to racial stereotyping.

  • Hindsight bias, or the tendency, once an event has occurred, to see it as having been more predictable than it actually was. Using hindsight, forensic experts are prone to overvalue known facts that tend to explain an event; a countermeasure is to deliberately consider information that supports alternate conclusions.

  • Availability bias, in which the probability of an event is judged by how easy it is to think of examples. Especially when combined with ignorance of base rates, this can lead to a tendency to overpredict dramatic events, even when -- as in the case of black swans -- their likelihood is actually low.

  • Illusory correlation, in which a relationship is imagined between variables that are in fact unrelated. In John's case, the mother's dramatic tales -- even if true -- may have had little or nothing to do with John's teenage misconduct. However, when read by subsequent decision-makers in a cultural climate that privileges psychopathy as an explanation for criminal conduct, they had an enormously prejudicial impact. 

(Wikipedia maintains an exhaustive list of these decision-making biases, along with links to their definitions.)

To avoid perpetuating biases, forensic evaluators should train themselves to think like "Agent J" in Men in Black. Rather than jumping to superficially plausible conclusions, try to consciously develop alternate hypotheses and test their fit with the evidence. This scientific mindset kept Agent J (Will Smith) from assuming that little Tiffany, a blonde girl carrying quantum physics textbooks through the ghetto at night, was the innocent party just because she did not superficially resemble the monsters who were also out and about. Here is the scene from Men in Black that I show in my class, in which Agent J explains his logic in shooting Tiffany -- rather than the monsters -- during a training simulation:
