
September 4, 2013

'Authorship bias' plays role in research on risk assessment tools, study finds

Reported predictive validity higher in studies by an instrument's designers than by independent researchers

The use of actuarial risk assessment instruments to predict violence is becoming more and more central to forensic psychology practice. And clinicians and courts rely on published data to establish that the tools live up to their claims of accurately separating high-risk from low-risk offenders.

But as it turns out, the predictive validity of risk assessment instruments such as the Static-99 and the VRAG depends in part on the researcher's connection to the instrument in question.

Publication bias in pharmaceutical research has been well documented

Published studies authored by tool designers reported predictive validity findings approximately twice as high as those from investigations by independent researchers, according to a systematic meta-analysis encompassing 30,165 participants in 104 samples from 83 independent studies.
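To make that comparison concrete, here is a minimal sketch of how a subgroup contrast like this can be computed: pool the per-study effect sizes separately for designer-authored and independent samples, then take the ratio. All the study-level numbers below are invented for illustration, and the simple fixed-effect pooling is my assumption; the actual meta-analysis used its own data and methods.

```python
# A minimal sketch of an authorship-bias subgroup comparison.
# All study-level numbers are hypothetical; the real meta-analysis
# used different data and more sophisticated (random-effects) methods.
import math

def pooled_log_or(log_ors, variances):
    """Fixed-effect, inverse-variance weighted mean of log odds ratios."""
    weights = [1.0 / v for v in variances]
    return sum(w * es for w, es in zip(weights, log_ors)) / sum(weights)

# hypothetical per-study log odds ratios (predictive validity) and variances
designer    = pooled_log_or([1.3, 1.1, 1.5], [0.10, 0.08, 0.12])  # tool designers
independent = pooled_log_or([0.6, 0.7, 0.5], [0.09, 0.11, 0.10])  # independent teams

print(f"pooled OR, designer-authored studies: {math.exp(designer):.2f}")
print(f"pooled OR, independent studies:       {math.exp(independent):.2f}")
print(f"ratio (authorship effect):            {math.exp(designer - independent):.2f}")
# -> a ratio near 2 would correspond to the "twice as high" finding
```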

Conflicts of interest shrouded

Compounding the problem, in not a single case did instrument designers openly report this potential conflict of interest, even when a journal's policies mandated such disclosure.

As the study authors point out, an instrument’s designers have a vested interest in their procedure working well. Financial profits from manuals, coding sheets and training sessions depend in part on the perceived accuracy of a risk assessment tool. Indirectly, developers of successful instruments can be hired as expert witnesses, attract research funding, and achieve professional recognition and career advancement.

These potential rewards may make tool designers more reluctant to publish studies in which their instrument performs poorly. This "file drawer problem," well established in other scientific fields, has led to a call for researchers to publicly register intended studies in advance, before their outcomes are known.

The researchers found no evidence that the authorship effect was due to higher methodological rigor in studies carried out by instrument designers, such as better inter-rater reliability or more standardized training of instrument raters.

"The credibility of future research findings may be questioned in the absence of measures to tackle these issues," the authors warn. "To promote transparency in future research, tool authors and translators should routinely report their potential conflict of interest when publishing research investigating the predictive validity of their tool."

The meta-analysis examined all published and unpublished research on the nine most commonly used risk assessment tools over a 45-year period:
  • Historical, Clinical, Risk Management-20 (HCR-20)
  • Level of Service Inventory-Revised (LSI-R)
  • Psychopathy Checklist-Revised (PCL-R)
  • Spousal Assault Risk Assessment (SARA)
  • Structured Assessment of Violence Risk in Youth (SAVRY)
  • Sex Offender Risk Appraisal Guide (SORAG)
  • Static-99
  • Sexual Violence Risk-20 (SVR-20)
  • Violence Risk Appraisal Guide (VRAG)

Although the researchers were not able to break down so-called "authorship bias" by instrument, the effect appeared more pronounced with actuarial instruments than with instruments based on structured professional judgment, such as the HCR-20. The majority of the samples in the study involved actuarial instruments. The three most common instruments studied were the Static-99 and VRAG, both actuarials, and the PCL-R, a measure of psychopathy that has been criticized for its vulnerability to partisan allegiance and other subjective examiner effects.

This is the latest important contribution by the hard-working team of Jay Singh of Molde University College in Norway and the Department of Justice in Switzerland; the late Martin Grann of the Centre for Violence Prevention at the Karolinska Institute in Stockholm, Sweden; and Seena Fazel of Oxford University.

One goal was to settle once and for all a dispute over whether the authorship bias effect is real. The effect was first reported in 2008 by the team of Blair, Marcus and Boccaccini, in regard to the Static-99, VRAG and SORAG instruments. Two years later, the co-authors of two of those instruments, the VRAG and SORAG, fired back a rebuttal disputing the allegiance effect finding. However, Singh and colleagues say the statistic the rebuttal relied upon, the area under the receiver operating characteristic curve (AUC), may not have been up to the task, and that its authors "provided no statistical tests to support their conclusions."

Prominent researcher Martin Grann dead at 44

Sadly, this will be the last contribution to the violence risk field by team member Martin Grann, who has just passed away at the young age of 44. His death is a tragedy for the field. Writing in the legal publication Dagens Juridik, editor Stefan Wahlberg noted Grann's "brilliant intellect" and "genuine humanism and curiosity":
Martin Grann came in the last decade to be one of the most influential voices in both academic circles and in the public debate on matters of forensic psychiatry, risk and hazard assessments of criminals and ... treatment within the prison system. His very broad knowledge in these areas ranged from the law on one hand to clinical therapies at the individual level on the other -- and everything in between. This week, he would also debut as a novelist with the book "The Nightingale."

The article, Authorship Bias in Violence Risk Assessment? A Systematic Review and Meta-Analysis, is freely available online via PLoS ONE (HERE).


August 8, 2013

Cluelessness, complacency and the great unknown

The case of the self-blind psychologist

An experienced forensic psychologist -- let's call him Dr. Short -- applies for a job as a forensic evaluator. He is rejected based on his written work sample. He files a formal protest, insisting that the report was fine.

As you all know, forensic reports should be (in the words of an excellent trainer I once had) both "fact-based" and "fact-limited." In other words, we must (a) carefully explain the data that support our opinion, and (b) exclude irrelevant information, especially that which is intrusive or prejudicial.[1]

Dr. Short's report was neither fact-based nor fact-limited. The adduced evidence did not support his forensic opinions, and the report was littered with extraneous material insinuating bad moral character. We learned of the subject's unorthodox sexual tastes and former gang associations, neither of which were relevant to the very limited forensic issue at hand. Using ethnic terms to describe the subject's hair, Dr. Short inadvertently revealed more about his biases than about the subject.

Obviously, based on his vehement insistence that his report was fine, Dr. Short was blind to these deficiencies. Which got me to thinking: Since biases are largely unconscious, can people be made aware of them? Can blind spots be overcome? How can we come to understand what we do not know?

"The anosognosia of everyday life"

Pondering these questions in connection with one of my seminars at Bond University, I stumbled across some intriguing philosophical discourse on the various types of unknowns, and how to remedy them:

The simplest type of unknown has been labeled a "known unknown." This is something we don't know, and know we don't know. Let’s say you learn that someone you are evaluating in a sanity proceeding had ingested an obscure substance just before the crime. If you don’t know the substance’s potential effects, the solution is straightforward (assuming you are motivated): Do the research.

In some cases, we know the question, but no answer exists. For example, we know that six out of ten individuals who score as high risk on actuarial instruments will not reoffend violently, due to the base rates of violence. What we don’t know is how to distinguish the true from false positives. So that’s a known unknown with an unknown answer. But if we are at least aware of the issue, we can explain the field’s empirical knowledge gap in our reports.
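Since this base-rate point trips up even professionals, here is a minimal sketch of the arithmetic. The sensitivity, specificity and base-rate figures are hypothetical round numbers chosen for illustration, not values from any particular instrument:

```python
# A minimal sketch of the base-rate arithmetic behind the "six out of ten"
# claim. All input values are assumptions for illustration only.
def positive_predictive_value(sensitivity, specificity, base_rate):
    """Bayes' rule: P(violent reoffense | scored high risk)."""
    true_pos = sensitivity * base_rate               # reoffenders flagged high risk
    false_pos = (1 - specificity) * (1 - base_rate)  # non-reoffenders flagged high risk
    return true_pos / (true_pos + false_pos)

ppv = positive_predictive_value(sensitivity=0.70, specificity=0.70, base_rate=0.20)
print(f"P(violent reoffense | high-risk score) = {ppv:.0%}")
# -> about 37%, meaning roughly six of ten high scorers would NOT reoffend
```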

However, unknown unknowns [2] are an entirely different kettle of fish. These are things we don't know and don't realize that we don't know. We don't know that there even IS a question that needs to be asked. Without being able to frame the question, we obviously cannot figure out an answer. Put simply: We are clueless.

Unknown unknowns are a major problem in forensic psychology, with its dearth of racial, ethnic and cultural diversity among researchers and practitioners.[3] Vast experiential divides lead evaluators to impose their own moral standards without even realizing they are doing so. In condemning his subject's sexual promiscuity and drug use, for example, Dr. Short made false and universalizing assumptions that revealed ignorance of lifestyles other than his own. (This reminded me of the dilemma of African American prisoners interacting with white guards in remote, rural prisons; because the farming communities from which these guards are recruited are devoid of mainstream African Americans, the guards tended to assume that all Black people had the characteristics of Black convicts.)

"The anosognosia of everyday life" is the rather gloomy term coined by David Dunning of Cornell University, who specializes in decision-making processes, to describe such routine ignorance.[4] Dunning is a great believer in ignorance as a driving force that shapes our lives in ways in which we will never know. 
"Put simply, people tend to do what they know and fail to do that which they have no conception of. In that way, ignorance profoundly channels the course we take in life."

Apropos of Dr. Short's report, Dunning notes that cluelessness on the part of a so-called expert does not imply dishonesty or a lack of caring:
"People can be clueless in a million different ways, even though they are largely trying to get things right in an honest way. Deficits in knowledge, or in information the world is giving them, leads people toward false beliefs and holes in their expertise."

Laziness a major culprit

Unknown unknowns are not unfathomable mysteries that can never be solved. They are caused by laziness and complacency, which block the process of discovery as surely as a dam holds back water. It’s what German cognitive scientist Dietrich Dorner was talking about when he wrote, in The Logic of Failure, that “to the ignorant, the world looks simple.”[5] We’ve all known people who are incompetent, but whose very incompetence prevents them from recognizing that incompetence. There’s even an unwieldy term for this condition (named after the researchers who studied it, naturally): Just call it the Dunning-Kruger Effect. Quoting Dunning yet again:
"Unknown unknown solutions haunt the mediocre without their knowledge. The average detective does not realize the clues he or she neglects. The mediocre doctor is not aware of the diagnostic possibilities or treatments never considered. The run-of-the-mill lawyer fails to recognize the winning legal argument that is out there. People fail to reach their potential as professionals, lovers, parents and people simply because they are not aware of the possible."

Before leaving the topic of the great unknowns, I must mention one final type of unknown, an especially pernicious one in forensic work. Unknown knowns, which undoubtedly beset Dr. Short, are unconscious beliefs and prejudices that guide one’s perceptions and interactions. Perhaps the 19th century humorist Josh Billings captured the quality of these unknowns the best when he wryly observed:
"It ain't what you don't know that gets you into trouble. It's what you think you know that just ain't so." [6]

Tackling the great unknown

So, is there any hope for our wayward Dr. Short, oblivious to his biases and blind spots? The answer, as in many facets of life, is: It depends. One of the most elementary lessons one learns as a novice psychologist is that people don’t change unless they are motivated to change. (Hence an entire area of psychology is devoted to enhancing motivation to change, through so-called “motivational interviewing.”) Effective change is rarely compelled. If Dr. Short is open to feedback and correction, this experience could be a wake-up call. On the other hand, his very protest speaks to an impaired capacity for self-reflection, a brittle ego defense that may be difficult to penetrate.

Either way, Dr. Short's dilemma can serve as a lesson for others, including both students and practitioners. The key to opening the locks on the dam of knowledge is readily available: It is simply a genuine desire to learn, and a willingness to confront life’s complexities. To those with a thirst for knowledge, the world is complex, and that complexity is what makes it so fascinating.

Here, in a nutshell, is the advice I gave to the graduate students at Bond during last week’s lecture that touched on the paradoxes of the unknowns:
    If you haven't faced it, it's not easy to imagine this life
  • To reduce the unknown unknowns, seek broad knowledge. Seek out people from other walks of life, who may not share your views or experiences. Travel outside your comfort zone, not just geographically but culturally as well. These experiences can open one’s eyes to difference. Travel vicariously by reading widely, especially OUTSIDE of the insular, micro-focused and ahistorical field of psychology.
  • Study up on cognitive biases and how they work. Especially, understand confirmatory bias, and build in hypothesis testing (including the testing of alternate hypotheses) as a routine practice. (Excellent resources on cognitive biases include Nate Silver's The Signal and the Noise and Carol Tavris and Elliot Aronson's Mistakes Were Made (but not by me), which brilliantly and unforgettably explains how two people can start out much the same but diverge dramatically so that they ultimately stare at each other as strangers across a great chasm.)
  • Create formal feedback loops so that you learn how cases you were involved in were resolved, how your work was received, and whether your opinions proved accurate. 
  • Don't assume you know the answer. Ask questions. And then ask more questions.
  • Stay humble. Arrogance, or overconfidence in one’s wisdom, can short-circuit understanding as surely as TSA security checkpoints destroy the fun of flying. (That rather strained metaphor is a clue that this post was penned from 40,000 feet in the air.)
  • Finally, and most critically: When you look across the table, try to see a fellow human being, someone who perhaps lost their way in life's dark wood, rather than an alien or a monster. Before you judge someone, try to walk a mile in his shoes.

Ultimately, Dr. Short's dilemma flows not only from complacency but from an essential deficit in empathy, an inability to truly see -- and understand -- the fellow human being sitting across from him in that forensic interview room.

* * * * *

Notes
  1. This is discussed in both the American Psychological Association's Ethics Code (Standard 4.04, Minimizing Intrusions on Privacy, states that psychologists should include in written reports "only information germane to the purpose for which the communication is made") as well as the Specialty Guidelines for Forensic Psychology (see, for example, 10.01, Focus on Legally Relevant Factors).

  2. The term "unknown unknown" is sometimes credited to US Secretary of Defense Donald Rumsfeld, who used it to explain why the United States went to war with Iraq over mythical Weapons of Mass Destruction (WMDs). Although the phrase gained currency at that time, others had already used it.

  3. Heilbrun, K., & Brooks, S. (2010). Forensic psychology and forensic science: A proposed agenda for the next decade. Psychology, Public Policy, and Law, 16, 219-253. 

  4. For further conversation on this topic, see: Morris, E. (2010, June 20). The anosognosic's dilemma: Something's wrong but you'll never know what it is. New York Times blog. Also see: Dunning, D. (2005). Self-Insight: Roadblocks and Detours on the Path to Knowing Thyself (Essays in Social Psychology). Psychology Press, pp. 14-15; Dunning, D. & Kruger, J. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121-1134.

  5. I cribbed the Dorner quote from Dr. Wayne Petherick, Associate Professor of Criminology and coordinator of the criminology program at Bond University. 

  6. Some attribute this quote not to Josh Billings but to Mark Twain, who was kicking around at the same time. Others claim it was neither. For now, the true origin of the quote is just one more of life's unknowns. 

July 29, 2013

ABC experiment exposes everyday racial profiling

In the wake of George Zimmerman's acquittal in the killing of Trayvon Martin, some have portrayed the killer as an outlier. But although most people aren't running around shooting down young Black men wearing hoodies, an experiment produced by ABC TV's "What Would You Do?" suggests that racial profiling is more the rule than the exception when it comes to perceptions of crime. 

In the experiment, three people armed with burglary tools sequentially stage the theft of a bicycle chained up in a public park. First, a white teenager. Then, a Black one. Finally, a young blonde girl tries her luck. Does anyone try to stop them?

Watch the video and be amazed. Then, pass it along.



The discrepancies in public perceptions graphically depicted in this video may help to explain the disproportionate outcomes under Florida's "stand your ground" law, under which it is legal to kill if one believes one is in imminent peril. Since Floridians enacted the controversial law eight years ago, those invoking it have been more likely to succeed if their victim was Black rather than white, according to an analysis by the Tampa Bay Times. About three in four of those who killed African Americans faced no penalty, compared with six in ten of those who killed whites.

In a case at least as egregious as Zimmerman's, a white man named Michael David Dunn is awaiting trial for shooting to death an African American teenager, Jordan Davis, at a gas station. Dunn had initiated a confrontation with Davis and his friends over the volume of the youths' music. Rolling Stone ran a moving profile of the case as an exemplar of the racial animus underlying stand-your-ground laws.

An American Psychological Association essay, "After the acquittal: The need for honest dialogue about racial prejudice and stereotyping," provides further resources on this important topic.

This post comes to you from Waikiki, where I arrived this morning from Queensland, Australia in advance of Tropical Storm (now downgraded to Tropical Depression) Flossie. I hope the storm doesn't stop anyone from attending this week's APA convention.

June 4, 2013

Newspaper unfairly maligned forensic psychologist, news council holds

In an unprecedented case, a Washington news council has determined that the Seattle Times was inaccurate and unfair to a forensic psychologist targeted in an investigative series on the state's sexually violent predator program.

Reporter Christine Willmsen went too far in her four-part investigative series on the costs of implementing SVP laws by singling out psychologist Richard Wollert for public censure. Relying on prosecution sources, she portrayed Wollert as a defense hack who promulgated unorthodox theories in order to line his own pockets, quoting detractors who called him an "outlier" and "a symphony with one note" who spoke "mumbo jumbo."

During Saturday's three-hour hearing, Wollert testified that the Times series had "tainted the Washington jury pool" by implying that psychologists who testified for the defense were not credible, and damaged his professional reputation. He said his annual income plummeted from about $450,000 to between $175,000 and $200,000 in the wake of the January 2012 series. 

Click on the video to watch the three-hour hearing. Then vote (HERE).

By a 7-to-2 vote, the Washington News Council found that the Times did not make sufficient efforts to contact sources other than prosecutors. On the larger question of whether the "Price of Protection" series was "accurate, fair, complete and balanced" in its portrayal of Wollert, the council sided with Wollert by a narrower, 6-to-4 margin. The council was evenly split on whether the headline of the first article in the series, "State Wastes Millions Helping Sex Predators Avoid Lock-up," was "accurate and fair." The votes by the ombudsman body, after a public hearing live-streamed from Seattle Town Hall, have no legal authority. The Times refused to attend what it labeled a "quasi-judicial spectacle," objecting to the council's "assumed authority." 

Accuracy, fairness and journalistic ethics

This is the second investigative series by Times reporter Willmsen to raise protests; in 2004, a local school board challenged the fairness of a series on "Coaches Who Prey." The SVP series illustrates how what passes for investigative journalism these days is often just piling on against the underdog. Reflective of the corporate monopolization of daily media, it bears little resemblance to the muckraking spirit that dominated when I was in journalism school, in the heady afterglow of Woodward and Bernstein's Watergate exposé.

Web page of the Seattle Times series
The basic premise of this series was certainly newsworthy: That Washington’s SVP law -- the first in the United States -- created "a cottage industry of forensic psychologists" who are gorging themselves at the public trough. (That's a theme of this blog as well.) But by relying almost exclusively on prosecution sources, Willmsen became nothing more than a mouthpiece for government efforts to discredit and silence experts who present judges and juries with information that they don't like.

The case raises important distinctions among accuracy, balance and fairness in journalism. Journalists' voluntary code of ethics calls for accuracy, fairness and honesty in reporting and interpreting information. Balance, on the other hand, is not always desirable. As the Times noted in its rebuttal letter to the news council, "The pursuit of balance resulted in years of articles and broadcasts that gave the 1 percent of scientists who were climate-change deniers the same weight as the 99 percent who were certain that human activities were having an adverse impact on global climate."

Yet in this case, Willmsen's embeddedness with prosecutors resulted in such a profound lack of balance that the series was blatantly biased, and crossed the line from news reporting into advocacy. Consider Willmsen's one-sidedness in reporting on three of her major themes:

Money:

Richard Wollert testifying (from Seattle Times video)
The main theme of the series was that defense-retained experts were gouging the state. Willmsen wrote that Wollert made more than $100,000 on one SVP case; in a video from the series, Wollert is shown testifying that he earned $1.2 million from sexually violent predator cases in Washington and other states over a two-year period. That's a big chunk of taxpayer money, and the revelation undoubtedly caused public outrage against defense attorneys and their experts.

Willmsen wrote that government experts were not paid that much. However, this is demonstrably false. During the period that Willmsen was collecting data for the series, a California psychiatrist who is popular with Washington prosecutors was charging $450 per hour (the average among forensic psychologists being about half that) and -- like Wollert -- had billed more than $100,000 in a single case. His name does not show up anywhere in the series.

Following publication of the series, Washington capped the fees of defense-retained SVP experts at $10,000 for evaluations, a fee that includes all travel expenses, and $6,000 for testifying (including preparation time, travel, and deposition testimony). The fees of prosecution-retained experts were not capped.

Boilerplate reports:

In the video, Willmsen targets defense-retained expert Ted Donaldson for writing boilerplate reports in which the names were changed, but the text remained virtually the same.

Boilerplate reports are indeed a travesty. Especially when someone is facing potentially lifelong detention for a crime that is only a future possibility, experts should present a keen understanding of what makes that specific individual tick. But, having reviewed dozens of reports in Washington SVP cases, I can attest to the fact that government hacks are at least as guilty of this sin. In fact, one of the most popular of prosecution-retained experts in Washington is infamous for writing novel-length boilerplate reports. In two recent cases that I am aware of, he even forgot to excise the name of the previous individual. So, one will be reading along in his report on Mr. Smith, and suddenly come across big chunks of material describing Mr. Jones.

Faulty science:

In her reporting, Willmsen lambasted an actuarial tool developed by Wollert, the MATS-1, as illegitimate. The idea behind the MATS-1 is to fully account for the reduction in risk that occurs as offenders age. The reporter quoted Karl Hanson, a Canadian researcher who is unhappy with Wollert because Wollert modified his popular Static-99 tool to create the MATS-1. But a fair and unbiased report would not rely for the unvarnished truth on a source who is essentially a business rival. In truth, as I've blogged about myriad times, there are ample grounds to critique the methodology and accuracy of any actuarial technique. In contrast to her disdain for the newer MATS-1, Willmsen lauds the Static-99 as the most widely used actuarial tool for assessing sex offender risk. But just because McDonald's sells far more burgers than In-N-Out Burger does not mean its beef is any purer.

As Wollert told the council members, "When we're talking about the advancement of science, people have different ideas"; one test of good research is whether "it's accepted over time." By that standard, Wollert's theories are doing pretty well. His published thesis that actuarial tools overestimated risk among elderly offenders was once controversial but is now widely accepted. Similarly, he was one of the first to publish criticism of the predictive abilities of the MnSOST-R actuarial tool; recently, that instrument was pulled from use because of its inaccuracy in predicting sex offender recidivism. And Wollert was emphasizing Bayesian reasoning -- most recently popularized by Nate Silver in The Signal and the Noise -- before many in our field realized how essential it was. 

While it is certainly laudable for the press to investigate bloated fees and the waste of taxpayer money, by laying all blame on the defense, the Times likely prejudiced the jury pool and squelched zealous representation by defense attorneys, who in terms of both resources and legal clout are like David battling the state’s mighty Goliath. Instead of proffering Wollert as a whipping boy, then, an impartial investigation would have uncovered exemplars of problematic practices on both sides of the legal aisle.

A true muckraking journalist would have dug even further, to ask:
  • How did opportunist politicians bamboozle the public into enacting costly laws that do little to protect the public, while simultaneously distracting from more fruitful efforts at reducing sexual violence? 
  • How has the SVP laws' legal requirement of a mental disorder expanded the influence of forensic psychology, with battles of the experts over sham diagnoses boosting the fortunes of shrewd practitioners, many of whom were toiling away as lowly prison and hospital hacks prior to these laws?  

Talking to the press

During their public deliberations, several council members chastised Wollert for declining Willmsen's repeated requests for an interview. Wollert said that the reporter had been rude and exuded bias when she approached him. But the panelists said Wollert had no legitimate expectation of deference or politeness.

"You cut your own throat," said John Knowlton, a council member and journalism professor at Green River Community College.

But must a source submit to an interview, even if he knows that it is a trap, and that his words will be twisted and used against him? This is a dodgy question. While many journalists are conscientious and above-board, others are not. Recall Janet Malcolm's provocative opening words in The Journalist and the Murderer, examining the ethics and morality of journalism in connection with journalist Joe McGinniss's betrayal of murder suspect Jeffrey MacDonald in Fatal Vision:
"Every journalist who is not too stupid or too full of himself to notice what is going on knows that what he does is morally indefensible. He is a kind of confidence man, preying on people's vanity, ignorance or loneliness, gaining their trust and betraying them without remorse. Like the credulous widow who wakes up one day to find the charming young man and all her savings gone, so the consenting subject of a piece of nonfiction learns -- when the article or book appears -- his hard lesson."
Perhaps Wollert could have adequately protected himself by asking the reporter to submit her questions in advance, and by responding in writing, via email. But who knows?

His decision to trust his instincts may have come back to bite him. Then again, he might have been bitten even worse had he let down his guard with an agenda-driven journalist who had him in her crosshairs.

THE WASHINGTON NEWS COUNCIL ENCOURAGES THE PUBLIC TO JOIN IN THE DEBATE. CLICK HERE TO VIEW THE VIDEO AND HERE TO CAST YOUR VOTE. COMPREHENSIVE RESOURCES ON THE CASE ARE AVAILABLE HERE. SEATTLE'S NONPROFIT JOURNALISM PROJECT CROSSCUT HAS A GOOD REPORT ON THE NEWS COUNCIL DECISION; YOU CAN ADD YOUR COMMENTS ON THAT WEBSITE.

April 28, 2013

Forensic practice: A no-compassion zone?

Murder trial prompts professional dialogue

Does empathy or compassion have a place in a forensic evaluation? Or should an evaluator turn off all feelings in order to remain neutral and unbiased?

That question is at the center of a controversy in the murder trial of Jodi Arias that I blogged about last week, with the prosecutor accusing a defense-retained psychologist of unethical conduct for giving a self-help book to the defendant.

Under heavy-artillery fire, Richard Samuels* denied prosecutor Juan Martinez's accusation of "having feelings for" the defendant, who killed her ex-boyfriend and is claiming self defense. Samuels testified he gave Arias a book because he is a "compassionate person" and thought the book would help her, but that his objectivity was never compromised. The exchange prompted a juror to ask Samuels:  "Do you believe absolutely that it is possible to remain purely unbiased in an evaluation once compassion creeps in?"

Martinez called a rebuttal witness to testify that gift-giving is a boundary violation and unethical. Newly minted psychologist Janeen DeMarte, appearing in court for only the third time, testified that a forensic evaluator should never feel compassion for a defendant, as such feelings compromise integrity (a position she modified under cross-examination).

Given these starkly divergent positions, I was curious what other forensic psychologists think. So, I initiated a conversation with a group of seasoned professionals, publishing two brief video excerpts of the relevant testimony on YouTube (click on the images below to watch the excerpts) to guide the conversation.

View the Richard Samuels excerpt (18 minutes) by clicking on the above image.

View the Janeen DeMarte excerpt (10 minutes) by clicking on the image.

Gift-giving: A bad idea

Contrary to the prosecutor’s insistence, our Code of Ethics does not prohibit gift-giving. Nor do the Forensic Psychology Specialty Guidelines (which are aspirational rather than binding). It's an ethical gray area.** As with much involving ethics, it all depends. But still, the consensus was that giving a book to a defendant is a mistake. Whether or not it affects one's objectivity, it gives the appearance of potential bias. And in forensic psychology, maintaining credibility is essential. "Gift giving," as one colleague put it, "gives the appearance of either a personal or therapeutic relationship with the defendant."

Samuels's error lay in failing to think through his action, and recognize how his blurring of boundaries could damage his credibility and thus undermine his testimony. Ultimately, by discrediting his own work, he potentially caused harm to the very client whom he was attempting to help.

The nature of the book itself further undermined the expert's credibility in this case. As another colleague pointed out, what good is a self-help book, Your Erroneous Zones: Step-by-Step Advice for Escaping the Trap of Negative Thinking and Taking Control of Your Life, going to do a woman who is in jail and facing the death penalty for stabbing and shooting someone to death?

On the other hand, although gift-giving is a slippery slope, there are times when only a curmudgeon would not give. For example, if you are conducting a lengthy evaluation and you decide to buy yourself a drink or a snack from the vending machines, do you refuse the subject a soda, for fear it would undermine objectivity or lend an appearance of bias? How rude!

Empathy: It's only human 

The general consensus was that, without some measure of empathy, one cannot hope to understand the subject or the situation. One is left with "an equally problematic perspective that dehumanizes and decontextualizes the evaluation," in the words of another psychologist.

"There is an orientation toward forensic work that is strikingly cold," noted yet another colleague. "I have seen some highly experienced forensic examiners who use their 'objectivity' with icy precision and thereby fail to establish the kind of rapport necessary to obtain a complete account of the offense or other important information…. The absence of empathy can be just as biasing as too much of it."

Or, as Jerome Miller wrote, in one of my favorite quotes from the forensic trenches, "It takes unusual arrogance to dismiss a fellow human being’s lost journey as irrelevant."

In other words, without empathy, any claim to objectivity is illusory, because there is no true understanding. And that, too, is dangerous. DeMarte's extreme position thus errs in the opposite direction from Samuels's, in advocating for forensic psychologists to be automaton-like technocrats.

Indeed, the main danger of empathy as discussed by leaders in our field, such as Gary Melton and colleagues in Psychological Evaluations for the Courts, is not that it biases the evaluator, but that it potentially seduces vulnerable subjects into revealing too much, thus unwittingly increasing their legal jeopardy. For this reason, Daniel Shuman, in a minority position in the field, argues that using clinical techniques to enhance empathy is unethical because this can -- wittingly or unwittingly -- cause harm to evaluatees. 

After all, our training as therapists makes us good at projecting understanding, and at least the illusion of compassion. Our subjects often let down their guard and experience the encounter as therapeutic, even when we clearly inform them that we are not there to help them in any way, and even when we remain vigilant to control our expressions of empathy.

"The best forensic evaluations bring all the clinical skills learned to promote self-disclosure and emotional emitting (empathy, reflective comments, attention to feelings, suspension of moral judgment, etc.)," a colleague commented. "We know how to get people to talk about things that they might otherwise wish to hide from others and themselves. Most defendants feel understood or at least feel they have been heard at the conclusion of an assessment."

Behaviors, not emotions, can be unethical

A third general consensus emerging from our professional dialogue was that feelings themselves are "almost never unethical." Which is fortunate, as we can never know for certain what another person is thinking or feeling. Rather, it is the behavior that follows that can be problematic; we must remain alert to what feelings a subject is evoking in us, lest they lead us astray. Sticking close to the data, and being transparent in our formulations, can keep us from behaving incompetently or problematically in response to our feelings, whether of empathy and compassion or -- at least as problematic -- dislike or revulsion. 

Bottom line: Do not check your empathy at the jailhouse door. You need it in order to do your job. And also to remain human.

Thanks to all of the many eloquent and insightful colleagues who contributed to this conversation.


NOTES:

*Samuels has taken down his website (svpexpertwitness.com), so I am providing a link to an old cached version.  

**Psychology ethicist Ofer Zur has written more on gift-giving in psychotherapy, with links to the gift-giving provisions of various professional ethics codes.

April 17, 2013

'Digital lynch mob' assaults expert witness in televised murder trial

Imagine you are testifying in a high-profile murder case being live-streamed over the Internet. Suddenly, an angry mob swarms all over you. More than 10,000 people sign an online petition urging a boycott of your lecture contracts. Your book draws a thousand negative reviews on Amazon. You are stalked, and a photo of you dining with the trial attorney is posted on Facebook, implying unethical conduct. You even get death threats.

That is the social media-coordinated avalanche that hit domestic violence expert Alyce LaViolette, testifying for the defense in the capital murder trial of Jodi Arias. The unrelenting cyber assaults so rattled LaViolette that she suffered an anxiety attack that landed her in the emergency room.

But the ER visit may only encourage the cyber-stalkers, who revel online over her discomfiture and obvious emotional deterioration over the course of seven grueling days of court testimony.

This type of Internet mobbing, in which cyber-posses enforce social norms through public shaming, is becoming more and more commonplace. One of the most widely known examples of such Internet vigilantism was the 2005 case of "Dog Poop Girl," a South Korean woman who gained infamy after she refused to clean up after her dog on a Seoul subway; the harassment eventually escalated to the point that she was forced to quit her university job.

But what was LaViolette's crime?

The domestic violence counselor had the audacity to opine that Jodi Arias was a victim of domestic violence -- that she was dominated and abused (physically, emotionally and sexually) by the man she eventually killed. Such an opinion bolsters Arias's claim that she killed her ex-boyfriend in self defense.

Murder tragedies as entertainment

Unfortunately for LaViolette, her analysis runs counter to the dominant narrative in a gendered morality play produced by media conglomerate Turner Broadcasting and distributed through its cable channels HLN, CNN and In Session. In this good-versus-evil melodrama, Arias is a psychopathic female who killed a morally righteous man in a fit of jealous rage. Period. End of story. Airbrushed out are all the nuances, the shades of grey inevitably present in any such violent tragedy. 

The burgeoning infotainment industry has perfected a profit-making formula of sensationalized true-crime "reporting" that plays on viewers' emotions, whipping audiences into a frenzy of self-righteous indignation in which they clamor for guilty verdicts -- very often against female transgressors. Nancy Grace's shrill ranting over the Casey Anthony murder acquittal garnered HLN a record audience of almost three million viewers. More recently, HLN went after another woman, Elizabeth Johnson, suspected in the mysterious disappearance of her baby.

The Arias case seems Heaven-sent for this voyeuristic style of entertainment, in which vulturous pundits mete out tantalizing morsels of crime "facts" to their addicted audience. Travis Alexander provides titillation from the grave via thousands of graphic emails, instant messages, texts and phone chats in which he degrades his paramour as a "whore," "slut," "corrupted carcass" and "three-hole wonder" whom he can sexually violate at will. For her part, Arias is a demonstrable liar. When her ex-boyfriend was found with a gunshot wound to the head, a slit throat, and more than two dozen stab wounds, she initially claimed innocence. After police demolished her alibi defense, she then claimed that two intruders broke into the home and killed Alexander, before finally admitting to the killing but claiming self defense.

Cast in the starring role of swashbuckling hero in this sordid drama is prosecutor Juan Martinez, a dapper man with a quick mind and an acerbic style, whose meteoric rise from the son of Mexican immigrants to a top government attorney is the stuff of American legend. Women line up outside the Maricopa County, Arizona courthouse, swooning at the sight of him as they jockey for photographs and autographs.

"This is murder trial as entertainment," Josh Mankiewicz, a correspondent for NBC's Dateline program (which ran two segments on the case), told reporter Michael Kiefer of the Arizona Republic. "This is not a trial like O.J. (Simpson's) that sheds new light on society. This is not about race or money. It's a perfect tabloid storm. It is occurring in the absence of any other tabloid storm."

Nancy Grace, "Dr. Drew" and the other pundits capitalizing on such trials foster a false sense of intimacy by calling everyone by first names. They encourage vicarious audience participation on Facebook, Twitter, online polls and other social media. But this is no value-neutral production. This is an archetypal trope that requires a guilty verdict; as one insightful media critic noted, acquittals do not produce the desired catharsis.

Public shaming run amok

In such an emotionally charged climate, anyone affiliated with the defense automatically becomes a villain. However, it is interesting to observe the disparate treatment of LaViolette as compared with a male expert witness, psychologist Richard Samuels. The prosecutor aggressively attacked them both. Playing not only to the jurors but to his sizeable out-of-court fan base, Martinez paced back and forth like a tiger smelling blood, demanding of his cornered prey that they give only "yes or no" answers to his myriad questions. Under his withering cross-examination, both witnesses came across as defensive and evasive. Both were vulnerable due to their confirmatory biases -- a failure to seek out evidence that might disconfirm their case theories. But, objectively, Samuels would seem to invite at least as much criticism as LaViolette, due to his bumbling style, his test scoring errors, and his questionable case formulation (he diagnosed posttraumatic stress disorder using a rating scale on which Arias endorsed a fictitious trauma, of witnessing Alexander's murder at the hands of imaginary intruders).

However, the public's palpable fury against LaViolette far outstrips that targeting Samuels. Consistent with the Turner Network's gendered narrative of criminal villainy, the cyber-posse is fueled by a potent combination of misogyny and homophobia: The expert witness in their crosshairs is "emasculating," "a bull dyke," "a man-hater," "fat," "buck-teethed," "a bitch."

The Internet fosters this culture of hate. Its cloak of anonymity is disinhibitory, emboldening people to spew bile with impunity. In The Cult of the Amateur, Andrew Keen warns that the deluge of anonymous online content is altering public debate, manipulating opinion, blurring the boundaries between experts and the uninformed and weakening the vitality of professional media -- newspapers, magazines, music and movies.

The proliferation of bottom-feeders on Twitter and YouTube is one thing. But it is quite another thing when cyber-bullying seeps into the courtroom, intimidating witnesses and threatening the presumption of innocence.

Can inundated jurors remain unbiased?

Legal experts worry that a virtual deluge of unreliable and biased information -- readily available at the click of the mouse or a TV remote -- is undermining jurors' neutrality. In their off hours, curious jurors in the Arias case can tune in not only to the cable TV and social media debacle, but can watch the defendant's entire videotaped police interrogation -- including excised portions -- as well as a police interview with Arias's parents, in which they speak of her mental problems. Pro- and anti-Arias websites have sprung up. And it's not just outsiders who are furiously Tweeting, texting and blogging about the case.  Witnesses are watching the trial from home and texting the prosecutor with suggestions for cross-examination. Jodi Arias herself is tweeting from the jail, through a friend. ("HLN is an acronym for Haters Love Negativity," she tweeted.)

It would be naive to suppose that the Arias jury is immune to the inflammatory rhetoric swirling around the Internet. Some of the more sarcastic questions that jurors submitted for the expert witnesses sounded scripted by Nancy Grace. For example, one juror asked psychologist Samuels whether a bad haircut could induce posttraumatic stress disorder (PTSD), Samuels's diagnosis for Arias.

Yet trial judge Sherry Stevens -- who allowed cameras into the courtroom in the first place -- is now relying on the honor system rather than regaining control by sequestering the jury.  Complained defense attorney Kirk Nurmi: "The court asks the question of the jurors every morning, 'Have you seen anything on the media?' No one raises their hand... It is a fairy tale to assume that this jury is not hearing any of this. It is all over the news."

Kiefer, the Arizona Republic reporter who broke the story of witness LaViolette's cyber-bullying, gave examples of juror social-networking misconduct in other cases: A Michigan juror who posted a Facebook preview of her verdict ("Gonna be fun to tell the defendant they're GUILTY"); a juror in Britain who polled her social-media "friends" as to whether she should find a defendant guilty.

With more and more successful appeals of verdicts due to such Internet or social-media interference, according to a Reuters Legal survey, an appeal of any guilty verdict in the four-month Arias trial is a virtual certainty.

But any appeal will not mend the reputations of the expert witnesses called by the defense. As a retired Maricopa County Superior Court judge told Michael Kiefer, the Arizona Republic reporter, "it's the electronic version of a lynch mob."

Sree Sreenivasan, a journalism professor at Columbia University, told Kiefer he had never seen anything like the attack on LaViolette, but that it likely will become "standard operating procedure in prominent cases" -- witness intimidation taken to its logical extreme in a public culture of shaming and vilification.

If so, experts may think long and hard before accepting referrals in high-profile cases. That, in turn, could have a chilling effect on defendants' rights to a fair trial.

Michael Kiefer's insightful Arizona Republic reports on the social media debacle are HERE, HERE and HERE. A full collection of the live-streamed trial videos is located HERE.

March 5, 2013

Remarkable experiment proves pull of adversarial allegiance

Psychologists' scoring of forensic tools depends on which side they believe has hired them

A brilliant experiment has proven that adversarial pressures skew forensic psychologists' scoring of supposedly objective risk assessment tests, and that this "adversarial allegiance" is not due to selection bias or preexisting differences among evaluators.

The researchers duped about 100 experienced forensic psychologists into believing they were part of a large-scale forensic case consultation at the behest of either a public defender service or a specialized prosecution unit. After two days of formal training by recognized experts on two widely used forensic instruments -- the Psychopathy Checklist-R (PCL-R) and the Static-99R -- the psychologists were paid $400 to spend a third day reviewing cases and scoring subjects. The National Science Foundation picked up the $40,000 tab.

Unbeknownst to them, the psychologists were all looking at the same set of four cases. But they were "primed" to consider the case from either a defense or prosecution point of view by a research confederate, an actual attorney who pretended to work on a Sexually Violent Predator (SVP) unit. In his defense attorney guise, the confederate made mildly partisan but realistic statements such as "We try to help the court understand that ... not every sex offender really poses a high risk of reoffending." In his prosecutor role, he said, "We try to help the court understand that the offenders we bring to trial are a select group [who] are more likely than other sex offenders to reoffend." In both conditions, he hinted at future work opportunities if the consultation went well. 

The deception was so cunning that only four astute participants smelled a rat; their data were discarded.

As expected, the adversarial allegiance effect was stronger for the PCL-R, which is more subjectively scored. (Evaluators must decide, for example, whether a subject is "glib" or "superficially charming.") Scoring differences on the Static-99R only reached statistical significance in one out of the four cases.

The groundbreaking research, to be published in the journal Psychological Science, echoes previous findings by the same group regarding partisan bias in actual court cases. But by conducting a true experiment in which participants were randomly assigned to either a defense or prosecution condition, the researchers could rule out selection bias as a cause. In other words, the adversarial allegiance bias cannot be solely due to attorneys shopping around for simpatico experts, as the experimental participants were randomly assigned and had no group differences in their attitudes about civil commitment laws for sex offenders.

Sexually Violent Predator cases are an excellent arena for studying adversarial allegiance, because the typical case boils down to a "battle of the experts." Often, the only witnesses are psychologists, all of whom have reviewed essentially the same material but have differing interpretations about mental disorder and risk. In actual cases, the researchers note, the adversarial pressures are far higher than in this experiment:
"This evidence of allegiance was particularly striking because our experimental manipulation was less powerful than experts are likely to encounter in most real cases. For example, our participating experts spent only 15 minutes with the retaining attorney, whereas experts in the field may have extensive contact with retaining attorneys over weeks or months. Our experts formed opinions based on files only, which were identical across opposing experts. But experts in the field may elicit different information by seeking different collateral sources or interviewing offenders in different ways. Therefore, the pull toward allegiance in this study was relatively weak compared to the pull typical of most cases in the field. So the large group differences provide compelling evidence for adversarial allegiance."

This is just the latest in a series of stunning findings on allegiance bias among psychologists by the team led by Daniel Murrie of the University of Virginia and Marcus Boccaccini of Sam Houston State University. The tendency of experts to skew data to fit the side that retains them should come as no big surprise. After all, it is consistent with the 2009 National Academy of Sciences report calling into question the reliability of nearly all types of forensic science evidence, including supposedly objective techniques such as fingerprint analysis.

Although the group's findings have heretofore been published only in academic journals and have found a limited audience outside of the profession, this might change. A Huffington Post blogger, Wray Herbert, has published a piece on the current findings, which he called "disturbing." And I predict more public interest if and when mainstream journalists and science writers learn of this extraordinary line of research.

In the latest study, Murrie and Boccaccini conducted follow-up analyses to determine how often matched pairs of experts differed in the expected direction. In the three cases in which clear allegiance effects showed up in PCL-R scoring, more than one-fourth of score pairings differed by more than six points in the expected direction. Six points equates to about two standard errors of measurement (SEMs), a gap that should occur by chance in only about 2 percent of cases. A similar, albeit milder, effect was found with the Static-99R.
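For readers curious about that 2 percent figure, here is a minimal sketch of the arithmetic. Following the article's framing, it treats the six-point gap as a two-standard-error deviation on a normal curve; the SEM of roughly three PCL-R points is my assumption for illustration.

```python
# A minimal sketch of the chance probability behind the "two SEMs" claim.
# The SEM value is assumed; the article reports only that 6 points ~ 2 SEMs.
from math import erf, sqrt

def p_normal_exceeds(z):
    """One-tailed probability that a standard normal variate exceeds z."""
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

sem = 3.0      # assumed standard error of measurement, in PCL-R points
gap = 6.0      # observed score difference between opposing experts

z = gap / sem  # treat the gap as a deviation measured in SEM units
print(f"P(gap > {gap:.0f} points in the expected direction) = {p_normal_exceeds(z):.1%}")
# -> about 2.3%, matching the "only 2 percent of cases" figure
```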

Adversarial allegiance effects might be even stronger in less structured assessment contexts, the researchers warn. For example, clinical diagnoses and assessments of emotional injuries involve even more subjective judgment than scoring of the Static-99 or PCL-R.

But ... WHICH psychologists?!


For me, this study raised a tantalizing question: Since only some of the psychologists succumbed to the allegiance effect, what distinguished those who were swayed by the partisan pressures from those who were not?

The short answer is, "Who knows?"

The researchers told me that they ran all kinds of post-hoc analyses in an effort to answer this question, and could not find a smoking gun. As in a previous research project that I blogged about, they did find evidence for individual differences in scoring of the PCL-R, with some evaluators assigning higher scores than others across all cases. However, they found nothing about individual evaluators that would explain susceptibility to adversarial allegiance. Likewise, the allegiance effect could not be attributed to a handful of grossly biased experts in the mix.

In fact, although score differences tended to go in the expected direction -- with prosecution experts giving higher scores than defense experts on both instruments -- there was a lot of variation even among the experts on the same side, and plenty of overlap between experts on opposing sides.

So, on average prosecution experts scored the PCL-R about three points higher than did the defense experts. But the scores given by experts on any given case ranged widely even within the same group. For example, in one case, prosecution experts gave PCL-R scores ranging from about 12 to 35 (out of a total of 40 possible points), with a similarly wide range among defense experts, from about 17 to 34 points. There was quite a bit of variability on scoring of the Static-99R, too; on one of the four cases, scores ranged all the way from a low of two to a high of ten (the maximum score being 12).

When the researchers debriefed the participants themselves, they didn't have a clue as to what caused the effect. That's likely because bias is mostly unconscious, and people tend to recognize it in others but not in themselves. So, when asked about factors that make psychologists vulnerable to allegiance effects, the participants endorsed things that applied to others and not to them: Those who worked at state facilities thought private practitioners were more vulnerable; experienced evaluators thought that inexperience was the culprit. (It wasn't.)

I tend to think that greater training in how to avoid falling prey to cognitive biases (see my previous post exploring this) could make a difference. But this may be wrong; the experiment to test my hypothesis has not been run. 

The study is: "Are forensic experts biased by the side that retained them?" by Daniel C. Murrie, Marcus T. Boccaccini, Lucy A. Guarnera and Katrina Rufino, forthcoming from Psychological Science. Contact the first author (HERE) if you would like to be put on the list to receive a copy of the article as soon as it becomes available.

Click on these links for lists of my numerous prior blog posts on the PCL-R, adversarial allegiance, and other creative research by Murrie, Boccaccini and their prolific team. Among my all-time favorite experiments from this research team is: "Psychopathy: A Rorschach test for psychologists?"