July 6, 2010

Mitchell to plead insane

No surprise, but Brian David Mitchell has filed official notice that he plans to go for an insanity defense. Mitchell, as you all know, is awaiting trial in the 2002 kidnapping of Elizabeth Smart in Salt Lake City. The defense notice states an intent to rely upon unspecified "expert testimony as to mental disease or defect." I'm not holding my breath that the trial will start as scheduled on November 1, but when and if it does it is bound to be quite interesting.

I highly recommend that all forensic practitioners read U.S. District Judge Dale Kimball's lengthy ruling on Mitchell's competency to stand trial, issued four months ago. At 149 pages, it's the most comprehensive competency decision I have seen. In describing Mitchell as a cunning malingerer, the decision has plenty of implications for the insanity trial as well.

Utah abolished the insanity defense some years back. The state now uses the restrictive standard of Guilty But Mentally Ill, under which evidence of mental disorder can be introduced only for the limited purpose of disproving mens rea, or the mental state that must be present in order to be convicted of certain specific-intent offenses. (A handy chart showing the insanity standards of each U.S. state is HERE.) However, since the federal government is prosecuting Mitchell, he should still be able to rely upon the defense.

POSTSCRIPT: Subsequent news coverage on the government's response to the insanity filing is HERE.

July 1, 2010

Eyewitness ID: Landmark report urges overhaul

New Jersey case may change legal landscape, reduce wrongful convictions

Mistaken eyewitness identifications are the leading cause of wrongful convictions, playing a role in three out of four DNA exoneration cases to date, according to the Innocence Project. Now, a cutting-edge report commissioned by the Supreme Court of New Jersey recommends major changes to bring the courts into alignment with the current state of the science on eyewitness testimony.

Geoffrey Gaulkin, a retired judge, spent close to a year reviewing three decades of research and taking testimony from experts in a hearing that legal observers describe as unprecedented. His conclusion: About a third of witnesses who pick out a suspect choose the wrong person, and the courts have not kept pace with the science needed to prevent such wrongful identifications. Expert witnesses at the evidentiary hearing included John Monahan, law professor at the University of Virginia, Gary Wells of Iowa State University, and Steven Penrod of the John Jay College of Criminal Justice.

The state high court's request for a comprehensive probe stemmed from the case of Larry Henderson, who was convicted of manslaughter in 2004 based on a photographic identification procedure.

In his report, Gaulkin recommends far-reaching procedural safeguards, including procedures to assess the reliability of witnesses' identification of suspects. He also proposes that prosecutors, rather than defendants, should bear the burden of proof regarding the reliability of eyewitness testimony, and that juries and judges should be fully informed about the science of eyewitness identification and its fallibility.

Observers say the Special Master's findings on the science and the law represent a sea change that may eventually serve as a blueprint for other jurisdictions to revamp both their witness identification protocols and their rules on the use of eyewitness evidence in court.

Hat tip: Jane

June 30, 2010

Response bias: Faith or science?

Most extensively studied topic in applied psychological measurement

After one hundred years and thousands of research studies, perhaps we are no closer than before to understanding how response bias -- a test-taker's overly positive or negative self-presentation -- affects psychological testing. Perhaps what we think we know -- especially in the forensic context -- is mostly based on faith and speculation, with little real-life evidence.

That is the sure-to-be-controversial conclusion of a landmark analysis by Robert E. McGrath of Fairleigh Dickinson University, an expert on test measurement issues, and colleagues. The dryly titled "Evidence for response bias as a source of error variance in applied assessment," published in Psychological Bulletin, issues a challenge to those who believe the validity of response bias indicators has been established, especially in the forensic arena.

The authors conducted an exhaustive literature review, sifting through about 4,000 potential studies, in search of research on the real-world validity of measures of test response bias. They sought studies that examined whether response bias indicators actually did what we think they do -- suppress or moderate scores on the substantive tests being administered. They searched high and low across five testing contexts -- personality assessment, workplace testing, emotional disorders, disability evaluations, and forensic settings. Surprisingly, out of the initial mountain of candidate research, they found only 41 such applied studies.

Of relevance here, not a single study could be found that tested the validity of response bias indicators in real-world child custody or criminal court proceedings. Indeed, only one study specifically targeting a forensic population met inclusion criteria. That was a 2006 study by John Edens and Mark Ruiz in Psychological Assessment looking at the relationship between institutional misconduct and defensive responding on test validity scales.

Does the "Response Bias Hypothesis" hold water?

The authors tested what they labeled the response bias hypothesis, namely, the presumption that using a valid measure of response bias enhances the predictive accuracy of a valid substantive indicator (think of the K correction on the MMPI personality test). Across all five contexts, "the evidence was simply insufficient" to support that widely accepted belief.

McGrath and colleagues theorize that biased responding may be a more complex and subtle phenomenon than most measures are capable of gauging. This might explain why the procedure used in typical quick-and-dirty research studies -- round up a bunch of college kids and tell them to either fake or deny impairment in exchange for psych 101 credits -- doesn't translate into the real world, where more subtle factors such as religiosity or type of job application can affect response styles.

It is also possible, they say, that clinical lore has wildly exaggerated base rates of dishonest responding, which may be rarer than commonly believed. They cite evidence calling into question clinicians' widespread beliefs that both chronic pain patients and veterans seeking disability for posttraumatic stress disorder are highly inclined toward symptom exaggeration.

Unless and until measures of response bias are proven to work in applied settings, using them is problematic, the authors assert. In particular, courts may frown upon use of such instruments due to their apparent bias against members of racial and cultural minorities. For example, use of response bias indicators has been found to disproportionately eliminate otherwise qualified minority candidates from job consideration, due to their higher scores on positive impression management. (Such a finding is not surprising, given Claude Steele's work on the pervasive effects of stereotype threat.)

"What is troubling about the failure to find consistent support for bias indicators is the extent to which they are regularly used in high-stakes circumstances, such as employee selection or hearings to evaluate competence to stand trial and sanity," the authors conclude. "The research implications of this review are straightforward: Proponents of the evaluation of bias in applied settings have some obligation to demonstrate that their methods are justified, using optimal statistical techniques for that purpose…. [R]egardless of all the journal space devoted to the discussion of response bias, the case remains open whether bias indicators are of sufficient utility to justify their use in applied settings to detect misrepresentation."

This is a must-read article that challenges dominant beliefs and practices in forensic psychological assessment.

June 29, 2010

Tweet, tweet!

I was checking out the British Psychological Society's Research Digest blog series on "the bloggers behind the blogs." The series features Vaughan Bell, the man behind the Mind Hacks blog that I admire, and Jesse Bering, whose Scientific American blog Bering in Mind is always fascinating. I noticed that all of these bloggers report that they now "tweet" as well. Not to be left behind, I decided to sign up, too. So now I'm on Twitter. Although, like Scott Greenfield at Simple Justice, I worry about de-evolution -- "Orwell's nightmare on steroids." Also, I confess that I don't know what I'm doing.

APA 2010: Exciting forensic programming

I was vacillating about whether to attend the upcoming American Psychological Association convention in San Diego, but browsing through the schedule sold me. The American Psychology-Law Society (Division 41) is sponsoring almost two dozen top-notch sessions featuring timely topics and appearances by many forensic psychology luminaries. Especially timely is the focus on juvenile justice issues. Here's a sampling of the great offerings:

Juvenile justice track
  • "Life Without Parole for Juvenile Offenders: Current Legal, Developmental, and Psychological Issues" features Thomas Grisso, Bryan Stevenson, Barry Feld, and Christopher Slobogin, dissecting the recent Sullivan and Graham cases and discussing the role of forensic examiners.
  • Judicial Panel on Reducing Racial and Ethnic Disproportionality, hosted by forensic psychology scholar Richard Wiener, features three juvenile court judges and an attorney from the National Council of Juvenile and Family Court Judges.
  • "The Construct of Empathy in the Treatment of Adolescents in the Juvenile Justice System," moderated by Lois Condie of Harvard Medical School, will include a presentation by forensic psychologist and professor Frank DiCataldo, whose outstanding book The Perversion of Youth I reviewed here.
Other Div. 41 hot picks
  • "Forensic Assessment": Scholars Daniel Murrie, Richard Rogers, and others will discuss the reliability of sanity evaluations, mistaken assumptions regarding Miranda waivers, evaluating the competence of violence risk assessors, and other timely forensic assessment issues.
  • "Mental Health Courts -- The MacArthur Research" features stalwarts John Monahan, Hank Steadman, and others.
  • "Long-Term Solitary Confinement's Impact on Psychological Well-Being -- The Colorado Study" looks to be an especially powerful panel including presentations by Stuart Grassian, an early scholar of segregation psychosis, AP-LS fellow Joel Dvoskin, and Jamie Fellner, an attorney with Human Rights Watch, talking about "Supermax Confinement and the Mind."
  • "Juror Decision Making": Margaret Bull Kovera and other scholars will present recent empirical findings in jury research.
  • "Social Cognition in Court -- Understanding Laypersons' Interrogation Schemas and Prototypes" features false confession scholars Saul Kassin, Solomon M. Fulero, and others.
Early registration ends Wednesday (after which the price goes up), so register now if you plan to attend. Now that you know which panels I am attending, I hope to see many of y'all down in sunny San Diego in just a couple of months.

June 28, 2010

How sex offender registries endanger kids

"Shred Your Sex Offender Map"
-- Forbes.com

That's the advice of Lenore Skenazy, author of the book Free-Range Kids and founder of the movement with the same name. Writing in her "Oddly Enough" column at Forbes, she gives three reasons why "the sex offender registry is making our kids LESS safe":
Recently I consulted my local Serial Killer Registry and found out I'm living next door to a guy who killed three lunchroom ladies when they refused to give him seconds on the chili!

Oh please. I'm kidding. There's no registry of murderers out there. There's no armed robber registry either. Not even one for drunk drivers. No, the only easily available registry for all Americans to consult is the Sex Offender Registry. Because ex-sex offenders are so much scarier than murderers?

No, the reason there's now a sex offender registry in every state ... is that sex offenders have become the focus of intense parental fear. Who could blame us moms and dads, when we hear about kiddie kidnappings 24/7 on the news? The problem is not with nervous parents. The problem is with the registries. Turns out, they're worse than useless.

They are making our kids LESS safe. How? Well, there are three big problems with the registry.
Skenazy's column, explaining the three essential problems, continues HERE.