
October 27, 2013

Black swan crash lands on Florida SVP program

Audit finds low recidivism, critiques reliance on inflated Static-99 risk estimates


Dan Montaldi’s words were prophetic.

Speaking to Salon magazine last year, the former director of Florida's civil commitment program for sex offenders called innovative rehabilitation programs "fragile flowers." The backlash from one bad deed that makes the news can bring an otherwise successful enterprise crashing down.

Montaldi was referring to a community reintegration program in Arizona that was derailed by the escape of a single prisoner in 2010.

But he could have been talking about Florida, where, just a year after his Salon interview, the highly publicized rape and murder of an 8-year-old girl is sending shock waves through the treatment community. Cherish Perrywinkle was abducted from a Walmart, raped and murdered, allegedly by a registered sex offender who had twice been evaluated and found not to meet criteria for commitment as a sexually violent predator (SVP).

Montaldi resigned amidst a witch hunt climate generated by the killing and a simultaneous investigative series in the Sun Sentinel headlined "Sex Predators Unleashed." His sin was daring to mention the moral dilemma of locking up people because they might commit a crime in the future, when recidivism rates are very low. Republican lawmakers called his statements supportive of "monsters" and said it made their "skin crawl."

Montaldi's comments were contained in an email to colleagues in the Association for the Treatment of Sexual Abusers, in response to the alarmist newspaper series. He observed that, as a group, sex offenders were "statistically unlikely to reoffend." In other words, Cherish Perrywinkle's murder was a statistical anomaly (also known as a black swan: an event so rare that it is effectively impossible to predict or prevent). He went on to say that in a free society, the civil rights of even "society's most feared and despised members" are an important moral concern. A subscriber to the private listserv apparently leaked the email to the news media.

The Sun Sentinel series had also criticized the decline in the proportion of paroled offenders who were recommended for civil commitment under Montaldi's directorship. "Florida's referral rate is the lowest of 17 states with comparable sex-offender programs and at least three times lower than that of such large states as California, New York and Illinois," the newspaper reported.

Audit finds very low recidivism rates 


In the wake of the Sun Sentinel investigation, the Florida agency that oversees the Sexually Violent Predator Program has released a comprehensive review of the accuracy of the civil commitment selection process. Since Florida enacted its Sexually Violent Predator (SVP) law in 1999, more than 40,000 paroling sex offenders have been reviewed for possible commitment. A private corporation, GEO Care, LLC, runs the 720-bed civil detention facility in Arcadia for the state's Department of Children and Families.


Three independent auditors -- well-known psychologists Chris Carr, Anita Schlank and Karen C. Parker -- reviewed data from both a 2011 state analysis and an internal recidivism study conducted by the SVP program. They also reviewed data on 31,626 referrals obtained by the Sun Sentinel newspaper for its Aug. 18 exposé.

All of the data converged upon an inescapable conclusion: Current assessment procedures are systematically overestimating the risk that a paroling offender will commit another sex offense.

In other words, Montaldi’s controversial email about recidivism rates was dead-on accurate.

First, the auditors examined recidivism data for a set of sex offenders who were determined to be extremely dangerous predators, but who were nonetheless released into a community diversion program instead of being detained.

"This study provided an opportunity to see if offenders who were recommended for commitment as sexually violent predators, actually behaved as expected when they were placed back into the community," they explained.

Of the 140 released offenders, only five were convicted of a new felony sex offense during a follow-up period of up to 10 years. Or, to put it another way, more than 96 percent did not reoffend. "This finding indicates that many individuals who were thought to be at high risk, were not," the report concluded.

Next, they analyzed internal data from the program itself. As of March 2013, 710 of the roughly 1,500 men referred for civil commitment were later released for one reason or another. Of those, only 5.7 percent went on to be convicted of a new sexually motivated crime.

Interestingly, this reconviction rate is not much different from that of a larger group of 1,200 sex offenders who were considered but rejected for civil commitment after a face-to-face evaluation. About 3 percent of those offenders incurred a new felony sex offense conviction after five to 10 years, with about 4 percent being reconvicted over a longer follow-up period of up to 14 years.

Logo on wall of sex offender hearing room in Salem, MA
"The recommended and the non-recommended groups differed by less than 2 percent in the percentage of offenders obtaining a new felony sex offense conviction after release," the investigators found. "Such a minor difference is surprising and indicates that the traditional approach to determining SVP status needs to be improved. There are too many false positives (someone determined to fit the SVP definition when he does not, or someone determined to be likely to re-offend but he is not)."

Overestimation of risk was especially prevalent for older offenders. Only one out of 94 offenders over the age of 60 was arrested on a new sex offense charge, and that charge was ultimately dismissed.
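Before moving on, it is worth putting those percentages into false-positive terms. Here is a rough back-of-the-envelope sketch of my own (not the auditors' analysis), using only the group sizes and reconviction rates quoted above:

    # Back-of-the-envelope arithmetic using the percentages quoted above.
    # This is my own illustration, not a calculation from the audit itself.

    recommended_released = 710     # men recommended for commitment, later released
    recommended_rate = 0.057       # 5.7% reconvicted of a new sexually motivated crime
    rejected_rate = 0.04           # roughly 4% among the 1,200 men screened out

    reoffended = recommended_released * recommended_rate        # about 40 men
    did_not_reoffend = recommended_released - reoffended        # about 670 men

    print(f"Flagged men who reoffended:      {reoffended:.0f}")
    print(f"Flagged men who did not:         {did_not_reoffend:.0f}")
    print(f"Share of flags that were false:  {did_not_reoffend / recommended_released:.0%}")
    print(f"Gap between the two groups:      {recommended_rate - rejected_rate:.1%}")

On these numbers, roughly 19 of every 20 SVP recommendations turn out, in hindsight, to have been false alarms -- which is what the auditors mean by "too many false positives."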

Finally, the auditors reanalyzed the data obtained by the Sun Sentinel newspaper via a public records request. Of this larger group of about 30,000 paroling offenders who were NOT recommended for civil commitment, less than 2 percent were convicted of a new sex offense.

What the public is most concerned about, naturally, is sex-related murders, such as that of young Cherish Perrywinkle. Fourteen of the tens of thousands of men not recommended for civil commitment had new convictions for sexual murders. This is a rate of 0.047 percent -- less than five one-hundredths of 1 percent, and the very definition of a black swan.

Static-99R producing epidemic of false positives


Determining which offender will reoffend is extremely difficult when base rates of sex offender recidivism are so low. However, the auditors identified an actuarial risk assessment tool, the widely used Static-99R, as a key factor in Florida’s epidemic of over-prediction. Florida mandates use of this tool in the risk assessment process.

Florida Civil Commitment Center
In 2009, government evaluators in Florida and elsewhere in the United States began a controversial practice of comparing some offenders to a select set of norms called "high risk." This practice dramatically inflates risk estimates, thereby alarming jurors in adversarial legal proceedings. The decision rules for using this comparison group are unclear and have not been empirically tested.

The recidivism rate of the Static-99R "high risk" comparison sample is several times higher than the actual recidivism rate of even the highest-risk offenders, the auditors noted. Thus, consistent with research findings from other states, they found that use of these high-risk norms is a major factor in the exaggeration of sex offender risk in Florida.

(It is certainly gratifying to see mainstream leadership in the civil commitment industry coming around to what people like me have been pointing out for years now.)

"The precision once thought to be present in using the Static-99 has diminished," the report states. "It seems apparent that less weight needs to be given to the Static-99R in sexually violent predator evaluations."

What goes around comes around


Due to the identified problems with actuarial tools, and the Static-99R in particular, the independent auditors are recommending that more weight be placed on clinical judgment. 

"It now appears that clinical judgment, guided by the broad and ever-expanding base of empirical data, may be superior to simply quoting 'rates,' which may lack sufficient application to the offenders being evaluated."

Ironically, the subjectivity of clinical judgment is the very problem that actuarial tools were designed to remedy. I have my doubts that clinical judgment will end up being all that reliable in adversarial proceedings, either. Perhaps the safest practice would be to "bet the base rate," or estimate risk based on local base rates of reoffending for similar offenders. This, however, would result in far fewer civil commitments.
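To illustrate what betting the base rate means in practice, here is a minimal sketch. The 4 percent base rate and the hypothetical cohort of 1,000 offenders are illustrative numbers in the ballpark of the Florida figures above, not parameters from any study:

    # A minimal sketch of "betting the base rate." All numbers are illustrative.

    base_rate = 0.04        # local reoffense base rate for similar offenders
    cohort = 1000           # hypothetical group of paroling offenders

    # Strategy A: bet the base rate -- predict that no individual will reoffend,
    # and report each person's risk as roughly 4 percent.
    errors_betting_base_rate = cohort * base_rate               # ~40 missed recidivists

    # Strategy B: flag the "highest-risk" 20 percent for commitment, and assume
    # (generously) that every true recidivist lands in the flagged group.
    flagged = cohort * 0.20
    false_alarms = flagged - cohort * base_rate                 # ~160 wrongly flagged

    print(f"Wrong calls when betting the base rate: {errors_betting_base_rate:.0f}")
    print(f"Wrong calls when flagging the top 20%:  {false_alarms:.0f}")

At a 4 percent base rate, simply predicting "no" for everyone is 96 percent accurate; any procedure that flags people for commitment has to be remarkably precise just to break even on errors, which is exactly why low base rates bedevil these evaluations.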

Consistent with recent research, the auditors also recommended re-examining the practice of mandating lengthy treatment that can lead to demoralization and, in some cases, iatrogenic (or harmful) effects.

Although the detailed report may be helpful to forensic evaluators and the courts, it looks like Florida legislators aiming to appease a rattled public will ignore the findings and move in the opposite direction. Several are now advocating for new black swan legislation to be known as "Cherish’s Law."

As sex offender researcher and professor Jill Levenson noted in a commentary on the website of WLRN in Florida, such an approach is penny-wise but pound-foolish: 

“Every dollar spent on hastily passed sex offender policies is a dollar not spent on sexual assault victim services, child protection, and social programs designed to aid at-risk families…. We need to start thinking about early prevention and fund, not cut, social service programs for children and families. Today's perpetrators are often yesterday's victims."

* * * * *

Photo credit: Mike Stocker, Sun Sentinel
BREAKING NEWS: Montaldi has just been replaced as director of the civil commitment program by Kristin Kanner, a longtime prosecutor from Broward County, Florida who headed that county's Sexually Violent Predator Unit for almost a decade. Not only does she hold a law degree from the Florida College of Law, but she also has undergraduate degrees in psychology and public policy from Duke. Word on the street is that she is an extremely competent and ethical person. It will be interesting to see how she will be treated by the media and politicians in the event that any black swan crash lands on the facility during her watch.

 * * * * *

The full report on the Florida SVP program is available HERE.  

Related post: 

Systems failure or black swan? New frame needed to stop "Memorial Crime Control" frenzy (Oct. 19, 2010)

October 8, 2013

Study: Risk tools don't work with psychopaths

If you want to know whether that psychopathic fellow sitting across the table from you will commit a violent crime within the next three years, you might as well flip a coin as use a violence risk assessment tool.

Popular risk assessment instruments such as the HCR-20 and the VRAG perform no better than chance in predicting risk among prisoners high in psychopathy, according to a new study published in the British Journal of Psychiatry. The study followed a large, high-risk sample of released male prisoners in England and Wales.

Risk assessment tools performed fairly well for men with no mental disorder. Their utility decreased for men diagnosed with schizophrenia or depression, was worse still for those with substance abuse, and ranged from poor to no better than chance for individuals with personality disorders. But the instruments bombed completely when it came to men with high scores on the Psychopathy Checklist-Revised (PCL-R) (which, as regular readers of this blog know, has real-world validity problems all its own).
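For those unfamiliar with how "no better than chance" is quantified: findings like these are typically expressed as an area under the ROC curve (AUC), where .50 is coin-flip territory. The snippet below uses simulated scores -- not the Coid team's data -- just to show what chance-level versus modest discrimination looks like:

    # What "no better than chance" looks like in AUC terms. Simulated data only.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 500
    reoffended = rng.random(n) < 0.20                       # hypothetical 20% base rate

    useless_score = rng.normal(size=n)                      # carries no information
    modest_score = rng.normal(size=n) + 0.5 * reoffended    # weakly related to outcome

    print(round(roc_auc_score(reoffended, useless_score), 2))   # about .50 -- a coin flip
    print(round(roc_auc_score(reoffended, modest_score), 2))    # about .64 -- modestly better than chance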

"Our findings have major implications for risk assessment in criminal populations," noted study authors Jeremy Coid, Simone Ullrich and Constantinos Kallis. "Routine use of these risk assessment instruments will have major limitations in settings with high prevalence of severe personality disorder, such as secure psychiatric hospitals and prisons."

The study, "Predicting future violence among individuals with psychopathy," may be requested from the first author, Jeremy Coid (click HERE).  

September 4, 2013

'Authorship bias' plays role in research on risk assessment tools, study finds

Reported predictive validity higher in studies by an instrument's designers than by independent researchers

The use of actuarial risk assessment instruments to predict violence is becoming more and more central to forensic psychology practice. And clinicians and courts rely on published data to establish that the tools live up to their claims of accurately separating high-risk from low-risk offenders.

But as it turns out, the predictive validity of risk assessment instruments such as the Static-99 and the VRAG depends in part on the researcher's connection to the instrument in question.

Publication bias in pharmaceutical research has been well documented

Published studies authored by tool designers reported predictive validity findings around two times higher than those reported in investigations by independent researchers, according to a systematic meta-analysis that included 30,165 participants in 104 samples from 83 independent studies.

Conflicts of interest shrouded

Compounding the problem, in not a single case did instrument designers openly report this potential conflict of interest, even when a journal's policies mandated such disclosure.

As the study authors point out, an instrument’s designers have a vested interest in their procedure working well. Financial profits from manuals, coding sheets and training sessions depend in part on the perceived accuracy of a risk assessment tool. Indirectly, developers of successful instruments can be hired as expert witnesses, attract research funding, and achieve professional recognition and career advancement.

These potential rewards may make tool designers more reluctant to publish studies in which their instrument performs poorly. This "file drawer problem," well established in other scientific fields, has led to a call for researchers to publicly register intended studies in advance, before their outcomes are known.

The researchers found no evidence that the authorship effect was due to higher methodological rigor in studies carried out by instrument designers, such as better inter-rater reliability or more standardized training of instrument raters.

"The credibility of future research findings may be questioned in the absence of measures to tackle these issues," the authors warn. "To promote transparency in future research, tool authors and translators should routinely report their potential conflict of interest when publishing research investigating the predictive validity of their tool."

The meta-analysis examined all published and unpublished research on the nine most commonly used risk assessment tools over a 45-year period:
  • Historical, Clinical, Risk Management-20 (HCR-20)
  • Level of Service Inventory-Revised (LSI-R)
  • Psychopathy Checklist-Revised (PCL-R)
  • Spousal Assault Risk Assessment (SARA)
  • Structured Assessment of Violence Risk in Youth (SAVRY)
  • Sex Offender Risk Appraisal Guide (SORAG)
  • Static-99
  • Sexual Violence Risk-20 (SVR-20)
  • Violence Risk Appraisal Guide (VRAG)

Although the researchers were not able to break down so-called "authorship bias" by instrument, the effect appeared more pronounced with actuarial instruments than with instruments that used structured professional judgment, such as the HCR-20. The majority of the samples in the study involved actuarial instruments. The three most common instruments studied were the Static-99 and VRAG, both actuarials, and the PCL-R, a structured professional judgment measure of psychopathy that has been criticized for its vulnerability to partisan allegiance and other subjective examiner effects.

This is the latest important contribution by the hard-working team of Jay Singh, of Molde University College in Norway and the Department of Justice in Switzerland; the late Martin Grann, of the Centre for Violence Prevention at the Karolinska Institute in Stockholm, Sweden; and Seena Fazel of Oxford University.

A goal was to settle once and for all a dispute over whether the authorship bias effect is real. The effect was first reported in 2008 by the team of Blair, Marcus and Boccaccini, in regard to the Static-99, VRAG and SORAG instruments. Two years later, the co-authors of two of those instruments, the VRAG and SORAG, fired back a rebuttal, disputing the allegiance effect finding. However, Singh and colleagues say the statistic they used, the area under the receiver operating characteristic curve (AUC), may not have been up to the task, and they "provided no statistical tests to support their conclusions."

Prominent researcher Martin Grann dead at 44

Sadly, this will be the last contribution to the violence risk field by team member Martin Grann, who has just passed away at the young age of 44. His death is a tragedy for the field. Writing in the legal publication Dagens Juridik, editor Stefan Wahlberg noted Grann's "brilliant intellect" and "genuine humanism and curiosity":
Martin Grann came in the last decade to be one of the most influential voices in both academic circles and in the public debate on matters of forensic psychiatry, risk and hazard assessments of criminals and ... treatment within the prison system. His very broad knowledge in these areas ranged from the law on one hand to clinical therapies at the individual level on the other -- and everything in between. This week, he would also debut as a novelist with the book "The Nightingale."

The article, Authorship Bias in Violence Risk Assessment? A Systematic Review and Meta-Analysis, is freely available online via PLOS ONE (HERE).


July 18, 2013

Most civilly detained sex offenders would not reoffend, study finds

Other new research finds further flaws with actuarial methods in forensic practice

At least three out of every four men being indefinitely detained as Sexually Violent Predators in Minnesota would never commit another sex crime if they were released.

That’s the conclusion of a new study by the chief researcher for the Department of Corrections in Minnesota, the state with the highest per capita rate of preventive detention in the United States.

Using special statistical procedures and a new actuarial instrument called the MnSOST-3 that is better calibrated to current recidivism rates, Grant Duwe estimated that the recidivism rate for civilly committed sex offenders -- if released -- would be between about 5 and 16 percent over four years, and about 18 percent over their lifetimes. Only two of the 600 men detained since Minnesota's law was enacted have been released, making hollow the law's promise of rehabilitation after treatment.

Duwe -- a criminologist and author of a book on the history of mass murder in the United States -- downplays the troubling Constitutional implications of this finding, focusing instead on the SVP law’s exorbitant costs and weak public safety benefits. He notes that "Three Strikes" laws, enacted in some U.S. states during the same time period as SVP laws based on a similar theory of selective incapacitation of the worst of the worst, have also not had a significant impact on crime rates.

The problem for the field of forensic psychology is that forensic risk assessment procedures have astronomical rates of false positives, or over-predictions of danger, and it is difficult to determine which small proportion of those predicted to reoffend would actually do so.

Minnesota has taken the lead in civilly detaining men with sex crime convictions, despite the state's only middling crime rates. Unlike in most U.S. states with SVP laws, sex offenders referred for possible detention are not entitled to a jury trial and, once detained, do not have a right to periodic reviews. Detention also varies greatly by county, so geographic locale can make the difference between a lifetime behind bars and a chance to move on with life after prison.

Ironically, as noted by other researchers, by the time an offender has done enough bad deeds to be flagged for civil commitment, his offending trajectory is often on the decline. Like other criminals, sex offenders tend to age out of criminality by their 40s, making endless incarceration both pointless and wasteful.

The study, To what extent does civil commitment reduce sexual recidivism? Estimating the selective incapacitation effects in Minnesota, is forthcoming from the Journal of Criminal Justice. Contact the author (HERE) to request a copy. 

Other hot-off-the-press articles of related interest:

Risk Assessment in the Law: Legal Admissibility, Scientific Validity, and Some Disparities between Research and Practice 


Daniel A. Krauss and Nicholas Scurich, Behavioral Sciences and the Law

ABSTRACT: Risk assessment expert testimony remains an area of considerable concern within the U.S. legal system. Historically, controversy has surrounded the constitutionality of such testimony, while more recently, following the adoption of new evidentiary standards that focus on scientific validity, the admissibility of expert testimony has received greater scrutiny. Based on examples from recent appellate court cases involving sexual violent predator (SVP) hearings, we highlight difficulties that courts continue to face in evaluating this complex expert testimony. In each instance, we point to specific problems in courts' reasoning that lead it to admit expert testimony of questionable scientific validity. We conclude by offering suggestions for how courts might more effectively evaluate the scientific validity of risk expert testimony and how mental health professionals might better communicate their expertise to the courts.
Contact Dr. Krauss (HERE) for a copy of this very interesting and relevant article. The following two articles are freely available online:

The utility of assessing "external risk factors" when selecting Static-99R reference groups


Brian Abbott, Open Access Journal of Forensic Psychology

ABSTRACT: The Static-99 has been one of the most widely used sexual recidivism actuarial instruments. It has been nearly four years since the revised instrument, the Static-99R, has been released for use. Peer-reviewed literature has been published regarding the basis for changing the scoring system for the age-at-release item, the utility of relative risk data, and variability of sexual recidivism rates across samples. Thus far, the peer-reviewed literature about the Static-99R has not adequately addressed the reliability and validity of the system to select among four possible actuarial samples (reference groups) from which to obtain score-wise observed and predicted sexual recidivism rates to apply to the individual being assessed. Rather, users have been relying upon the Static-99R developers to obtain this information through a website and workshops. This article provides a critical analysis of the reliability and validity of using the level of density of risk factors external to the Static-99R to select a single reference group among three options and discusses its implications in clinical and forensic practice. The use of alternate methods to select Static-99R reference groups is explored.

Calibration performance indicators for the Static-99R: 2013 update


Greg DeClue and Terence Campbell, Open Access Journal of Forensic Psychology

ABSTRACT: Providing comprehensive statistical descriptions of tool performance can help give researchers, clinicians, and policymakers a clearer picture of whether structured assessment instruments may be useful in practice. We report positive predictive value (PPV), negative predictive value (NPV), number needed to detain (NND), and number safely discharged (NSD), along with associated confidence intervals (CIs) for each value of the Static-99R, for one data set. Values reported herein apply to detected sexual recidivism during a 5-year fixed follow-up for the samples that the Static-99R developers consider to be roughly representative of all adjudicated sex offenders.

BLOGGER NOTE: I'm posting this research update while stranded at LAX en route to Brisbane, Australia, where I will be giving a series of seminars and trainings at Bond University before flying to Honolulu to give a full-day continuing education training at the American Psychological Association convention. (Registration for that is still open, I am told.) I'll try to blog as time allows, and I hope to see some of you at these venues.

June 13, 2013

International violence risk researchers launch free news service

I don't know about you, but I find it incredibly hard to keep up with the burgeoning research in risk assessment. In this era of international fear and carceral control, disciplines from psychology to criminology to nursing to juvenile justice are cranking out more articles each month, and the deluge can feel overwhelming.

Fortunately, two prominent researchers are offering to help us stay organized and up to speed -- for free. The newly created Alliance for International Risk Research (AIRR) will send out a monthly email containing references to all new articles related to forensic risk assessment from over 80 scholarly journals. And all you have to do is sign up.

Jay Singh and Kevin Douglas, AIRR Editors-in-Chief
The AIRR is brought to you by Jay Singh and Kevin Douglas. Dr. Singh, a newly appointed professor and senior researcher for the Department of Justice in Switzerland, is one of the best and brightest around (I've featured his important research on violence risk assessment more than once on this blog); Dr. Douglas is an award-winning psychology professor at Simon Fraser University and co-author of the widely used HCR-20 violence risk assessment tool, among others. 

Their goal is to keep clinicians, policymakers, and researchers up to date in this rapidly evolving field, thus promoting evidence-based practices in the mental health and criminal justice systems. For articles published in languages other than English, the AIRR even boasts an "international coordinator" who will help disseminate the research to a global audience.

Signing up is easy: Just go to THIS LINK and provide your email contact information. The AIRR promises not to bother you with solicitations, survey participation requests or conference announcements -- "simply the latest risk-related research at your fingertips."

Don't delay! The first AIRR bulletin will be arriving in inboxes on Sept. 1.

May 2, 2013

Spring reading recommendations -- forensic and beyond

Marauding bands of juvenile killers. Gang rapist-kidnappers. Wife beaters.
We’re talking elephants, dolphins and parrots, respectively. That's my forensic psychology angle on Animal Wise, a fascinating new book by nature journalist Virginia Morell.

Not long ago, it was taboo in science circles to claim that animals have minds. But the burgeoning field of animal cognition, having broken out of the straitjacket imposed by 20th-century behaviorism, is now mounting a full-on challenge to the notion of an evolutionary hierarchy with humans at the top. Morell, a science writer for National Geographic and Science magazines, traveled around the world interviewing animal scientists and observing their research projects on everything from architecturally minded rock ants and sniper-like archerfish to brainy birds, laughing rats, grieving elephants, scheming dolphins, loyal dogs, and quick-witted chimpanzees.

She found cutting-edge scientists who not only regard animals as sentient beings, but even refer to their study subjects as trusted colleagues. Professor Tetsuro Matsuzawa in Kyoto, for example, has set up his lab so that when the chimpanzees "come to work" each morning, they enter on elevated catwalks and sit higher than the humans, which makes them feel more comfortable. He cannot understand why humans feel so threatened by his discovery that chimpanzees are capable of holding much more information in immediate memory than can we humans.

"I really do not understand this need for us always to be superior in all domains. Or to be so separate, so unique from every other animal. We are not. We are not plants; we are members of the animal kingdom."

YouTube video of Alex the parrot showing his cognitive skills

Animal researchers are realizing that not only do all animals have individual personalities, but some -- such as chimpanzees and dolphins -- even develop cultures. This engaging and thought-provoking book can be read on many levels. It is highly informative while also being quite entertaining. But on a deeper level, it probes the moral dimensions of science.

Morell's 2008 National Geographic article from which the book grew is HERE. Her Slate article, "What are animals thinking?" is HERE. My Amazon review (if you are so inclined, click on "yes," this review was helpful) is HERE.


The Signal and the Noise

If you haven't yet read Nate Silver's important The Signal and the Noise, it’s past time to grab a copy. Silver’s analytic method is central to forensic psychology. Best known for his spot-on predictions of U.S. presidential races, Silver argues that accurate predictions are possible in some (limited) contexts -- but only when one learns how to recognize the small amount of signal in an overwhelming sea of noise. And also when one approaches the prediction using Bayes's Theorem. This is one of those engrossing books that really stays with you, and has very practical applications in forensic assessments. I find it especially useful in writing reports. Plus, it helps one understand current events involving prediction, like the story of six Italian scientists being sent to prison for failing to predict a deadly earthquake. (Earthquakes are inherently unpredictable, and Silver explains why.) 

* * * * *

Speaking of forensic report writing, if you want to tune up your own report writing skills, or you are teaching or supervising students, I highly recommend Michael Karson and Lavita Nadkarni's book, Principles of Forensic Report Writing, due out at the end of this month. Karson and Nadkarni take an innovative and thoughtful approach, helping us to think outside of the box about this essential aspect of our trade.


Other recommendations

Beyond forensics, here are a few other worthwhile books I've read recently:

If American history interests you, check out bestselling author Tony Horwitz's Midnight Rising, about John Brown's ill-fated raid on Harpers Ferry and its role in the abolitionist movement, or Tim Egan's The Big Burn, about the massive fire in the U.S. Northwest that helped change the political landscape and establish the national Forest Service. Both are engrossing and educational; I listened to the audio versions during lengthy road trips.

* * * * *

If you are into dystopic fiction, I recommend Hillary Jordan's When She Woke. In the not-distant future, the government has gone broke, and can no longer afford to maintain its massive prison system. So, instead of incarceration, law-breakers -- in a modern-day riff on The Scarlet Letter -- are dyed bright colors for the length of their sentences. In a globally warmed Texas ruled by Christian fundamentalists, Hannah Payne wakes up bright red, for the crime of aborting her baby. This edge-of-your-seat tale isn't too far-fetched, given current trends, as laws are being passed in Oklahoma and elsewhere to criminalize abortion, and as the public shaming of sex offenders (who in the novel are "melachromed" blue and killed on sight by vigilantes) becomes more and more entrenched.

* * * * *

Finally, I'm just launching into Gary Greenberg's hot-off-the-press book on the DSM, The Book of Woe: The DSM and the Unmaking of Psychiatry, and I can already tell it's going to be a doozy. More on that soon, time permitting....

April 7, 2013

Risk screening worthless with juvenile sex offenders, study finds

Boys labeled as 'sexually violent predators' not more dangerous

Juveniles tagged for preventive detention due to their supposedly higher level of sexual violence risk are no more likely to sexually reoffend than adolescents who are not so branded, a new study has found.

Only about 12 percent of youths who were targeted for civil commitment as sexually violent predators (SVP's) but then freed went on to commit a new sex offense. That compares with about 17 percent of youths screened out as lower risk and tracked over the same five-year follow-up period.

Although the two groups had essentially similar rates of sexual and violent reoffending, overall criminal reoffending was almost twice as high among the youths who were NOT petitioned for civil commitment (66 percent versus 35 percent), further calling into question the judgment of the forensic evaluators.

Because of the youths' overall low rates of sexual recidivism, civil detention has no measurable impact on rates of sexual violence by youthful offenders, asserted study author Michael Caldwell, a psychology professor at the University of Wisconsin and an expert on juvenile sex offending.

The study, just published in the journal Sexual Abuse, is one in a growing corpus pointing to flaws in clinical prediction of risk.

It tracked about 200 juvenile delinquents eligible for civil commitment as Sexually Violent Persons (SVP's). The state where the study was conducted was not specified; at least eight of the 20 U.S. states with SVP laws permit civil detention of juveniles, and all allow commitment of adults based on offenses committed as a juvenile.

As they approached the end of their confinement period, the incarcerated juveniles underwent a two-stage screening process. In the first phase, one of a pool of psychologists at the institution evaluated them to determine whether they had a mental disorder that made them "likely" to commit a future act of sexual violence. Just over one in every four boys was found to meet this criterion, thereby triggering a prosecutorial petition for civil commitment.

After the initial probable cause hearing but before the final civil commitment hearing, an evaluator from a different pool of psychologists conducted a second risk assessment. These  psychologists were also employed by the institution but were independent of the treatment team. Astonishingly, the second set of psychologists disagreed with the first in more than nine out of ten cases, screening out 50 of the remaining 54 youths. (Only four youths were civilly committed, and a judge overturned one of these commitments, so ultimately all but three boys from the initial group of 198 could be tracked in the community to see whether or not they actually reoffended.)

Evaluators typically did not rely on actuarial risk scales to reach their opinions, Caldwell noted, and their methods remained something of a mystery. Youths were more likely to be tagged for civil detention at the first stage if they were white, had multiple male victims, and had engaged in multiple instances of sexual misconduct in custody, Caldwell found.

However, no matter what method they used or which factors they considered, the psychologists likely would have had little success in predicting which youths would reoffend. Even "the most carefully developed and thoroughly studied" methods for predicting juvenile recidivism have shown very limited accuracy, Caldwell pointed out. This is mainly due to a combination of youths' rapid social maturation and their very low base rates of recidivism; it is quite hard to successfully predict a rare event.

Indeed, a recent meta-analysis revealed that none of the six most well-known and best-researched instruments for appraising risk among juvenile sex offenders showed consistently accurate results. Studies that did find significant predictive validity for an instrument were typically conducted by that instrument's authors rather than independent researchers, raising questions about their objectivity.

"Juveniles are still developing their personality, cognitions, and moral judgment, processes that reflect considerable plasticity," noted lead author Inge Hempel, a psychology graduate student in the Netherlands, and her colleagues. "There are still many possible developmental pathways, and no one knows what causes persistent sexual offending."

Caldwell agrees with Hempel and her colleagues that experts' inability to accurately predict which juveniles will commit future sex crimes calls into question the ethics of civil commitment.

"From the perspective of public policy, these results raise questions about whether SVP commitment laws, as written, should apply to juveniles adjudicated for sexual offenses," he wrote. "If SVP laws could be reliably applied to high risk juvenile offenders, the benefit of preventing a lifetime of potential victims makes for a compelling case. However, the task of identifying the small subgroup of juveniles adjudicated for sexual offenses who are likely to persist in sexual violence into adulthood is at least extremely difficult, and may be technically infeasible."

* * * * *

The articles are:

Michael Caldwell: Accuracy of Sexually Violent Person Assessments of Juveniles Adjudicated for Sexual Offenses, Sexual Abuse: A Journal of Research and Treatment. Request it from the author HERE.

Inge Hempel, Nicole Buck, Maaike Cima and Hjalmar van Marle: Review of Risk Assessment Instruments for Juvenile Sex Offenders: What is Next? International Journal of Offender Therapy and Comparative Criminology. Request it from the first author HERE.

March 25, 2013

Miracle of the day: 80-year-old man recaptures long-lost youth

(Or: How committing a new sex crime can paradoxically LOWER risk on the Static-99R)

"How old is the offender?"

Age is an essential variable in many forensic contexts. Older people are at lower risk for criminal recidivism. Antisocial behaviors, and even psychopathic character traits, diminish as criminals reach their 30s and 40s. Men who have committed sex offenses are at considerably lower risk of further such misconduct, due to a combination of decreased testosterone levels and the changes in thinking, health, and lifestyle that happen naturally with age.

Calculating a person's age would seem very straightforward, and certainly not something requiring a PhD: Just look up his date of birth, subtract that from today's date, and -- voila! Numerous published tests provide fill-in-the-blank boxes to make this calculation easy enough for a fourth-grader.

One forensic instrument, however, bucks this common-sense practice. The developers of the Static-99R, the most widely used tool for estimating the risk of future sexual recidivism, have given contradictory instructions on how to score its very first item: Offender age.

In a new paper, forensic evaluator Dean Cauley and PsyD graduate student Michelle Brownfield report that divergent field practices in the scoring of this item are producing vastly different risk estimates in legal cases -- estimates that in some cases defy all logic and common sense.

Take Fred. Fred is 80 years old, and facing possible civil commitment for the rapes of two women when he was 18 years old. He served 12 years in prison for those rapes. Released from prison at age 30, he committed several strings of bank robberies that landed him back in prison on six separate occasions.

At age 80 (and especially with his only known sex offenses committed at age 18), his risk for committing a new sex offense if released from custody is extremely low -- something on the order of 3 percent. But evaluators now have the option of using any of three separate approaches with Fred, with each approach producing quite distinct opinions and recommendations.

Procedure 1: Age is age (the old-fashioned method)

The first, and simplest, approach is to list Fred's actual chronological age on Item 1 of the Static-99R. Using this approach, Fred gets a three-point reduction in risk for a total score of one point, making his actuarial risk of committing a new sex offense around 3.8 percent.

Evaluators adopting this approach argue that advancing age mitigates risk, independent of any technicalities about when an offender was released from various periods of incarceration. These evaluators point to the Static-99R's coding manuals and workbook, along with recent publications, online seminars, and sworn testimony by members of the Static-99 Advisory Committee. Additionally, they point to a wealth of age-related literature from the fields of criminology and psychology to support their scoring.

Procedure 2: Reject the Static-99R as inappropriate

A second approach is not to use the Static-99R at all, because Fred's release from prison for his "index offenses" (the rapes) was far more than two years ago, making Fred unlike the members of the samples from which the Static-99R's risk levels were calculated. Evaluators adopting this approach point to publications by members of the Static-99 Advisory Committee, generally accepted testing standards and actuarial science test standards to support their choice to not use the test at all.

Procedure 3: The amazing elixir of youth

But there is a third approach. One that magically transports Fred back to his youth, back to the days when a career in bank robbing seemed so promising. (Bank robbery is no longer alluring; it is quietly fading away like the career of a blacksmith.) The last five decades of Fred's life fade away, and he becomes 30 again -- his age when he was last released from custody on a sex offense conviction.

Now Fred not only loses his three-point age reduction, but he gains a point for being between the ages of 18 and 34.9. A four point difference! The argument for this approach is that it most closely conforms to the scoring methods used on the underlying samples of sex offenders, who were scored based on their date of release from their index sexual offense. These evaluators can correctly point to information imparted at training seminars, advice given by some members of the Static-99R Advisory Committee, and sworn testimony by developers of the test itself. They can also point to an undated FAQ #27 on the Static-99 website to support their opinion.

Fred could rape someone to reduce his risk!

Back-dating age to the time of the last release from a sex offense-related incarceration allows for a very bizarre twist:

Let's say that after Fred was released from prison on his most recent robbery stint, back when he was a vigorous young man of 61, he committed another rape. Being 60 or over, Fred would now get the four-point reduction in risk to which his age entitles him. This would cut his risk by two-thirds -- from 11.4 percent (at a score of 5) all the way down to a mere 3.8 percent (at a score of 1)!

While such a scenario might seem far-fetched, it is not at all unusual for an offender to be released from prison at, say, age 58 or 59, but to not undergo a civil commitment trial for a couple of years, until age 60 or 61. Such an offender's score will vary by two points (out of a total of 12 maximum points) depending upon how the age item is scored. And, as Cauley and Brownfield describe, the members of the Static-99R development team have, at different times, given contradictory advice on how to score the age item.
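To make the scoring dispute concrete, here is a toy calculation using only the point values and risk percentages cited in the example above (the intermediate age brackets follow the instrument's published weights). It models nothing but the disputed age item; it is emphatically not a Static-99R scoring program:

    # Toy illustration of the disputed age item, using the values quoted above.
    # Not a Static-99R scoring program.

    def age_item(age):
        """Point values for Item 1 (age at release), as described in the article."""
        if age >= 60:
            return -3
        if age >= 40:
            return -1
        if age >= 35:
            return 0
        return 1                      # ages 18 to 34.9

    FRED_OTHER_ITEMS = 4              # implied by Fred's totals of 1 versus 5
    RISK = {1: "3.8%", 5: "11.4%"}    # risk figures quoted in the article

    # Procedure 1: score Fred at his actual age of 80.
    actual = FRED_OTHER_ITEMS + age_item(80)            # 4 - 3 = 1  -> 3.8%

    # Procedure 3: back-date Fred to 30, his age at last release from a
    # sex-offense incarceration.
    backdated = FRED_OTHER_ITEMS + age_item(30)         # 4 + 1 = 5  -> 11.4%

    # The paradox: a new rape committed at age 61 resets the back-dated age,
    # restoring the age reduction and "lowering" his score again.
    after_new_rape = FRED_OTHER_ITEMS + age_item(61)    # 4 - 3 = 1  -> 3.8%

    print(actual, RISK[actual])
    print(backdated, RISK[backdated])
    print(after_new_rape, RISK[after_new_rape])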

By completely negating the very substantial body of research on age and crime, this technocratic method creates other very concerning -- and paradoxical -- implications, Cauley and Brownfield argue: As the risk estimate for a more persistent offender is lowered, the offender who does not reoffend is stuck with a risk score that is forever jacked up.

Back-dating an offender's age is also at odds with the research that generated the test itself, they say, because the offenders in the samples used to construct the Static-99R had finished serving their sentences on their index sexual offenses within two years of being studied. In other words, none of the offenders had been released many years earlier, and there was none of this curious time-travel business in regard to their ages. As the instrument's developers noted in a publication just last year, the Static-99 "was developed on, and intended for, sexual offenders with a current or recent sexual offense."

So, if you are evaluating an old geezer in the local pen and he tells you that he is only 30 years old, don't assume that he has a delusional belief that he has discovered the elixir of youth -- or that he's pulling your leg. He just might be reciting the age that he was just assigned by a technocratic Static-99R evaluator.

The paper, "Static-99R: Item #1 -- What is the Offender's Age? A lack of consensus leads to a defective actuarial," is available for download both HERE and HERE.

March 5, 2013

Remarkable experiment proves pull of adversarial allegiance

 Psychologists' scoring of forensic tools depends on which side they believe has hired them

A brilliant experiment has proven that adversarial pressures skew forensic psychologists' scoring of supposedly objective risk assessment tests, and that this "adversarial allegiance" is not due to selection bias, or preexisting differences among evaluators.

The researchers duped about 100 experienced forensic psychologists into believing they were part of a large-scale forensic case consultation at the behest of either a public defender service or a specialized prosecution unit. After two days of formal training by recognized experts on two widely used forensic instruments -- the Psychopathy Checklist-R (PCL-R) and the Static-99R -- the psychologists were paid $400 to spend a third day reviewing cases and scoring subjects. The National Science Foundation picked up the $40,000 tab.

Unbeknownst to them, the psychologists were all looking at the same set of four cases. But they were "primed" to consider the case from either a defense or prosecution point of view by a research confederate, an actual attorney who pretended to work on a Sexually Violent Predator (SVP) unit. In his defense attorney guise, the confederate made mildly partisan but realistic statements such as "We try to help the court understand that ... not every sex offender really poses a high risk of reoffending." In his prosecutor role, he said, "We try to help the court understand that the offenders we bring to trial are a select group [who] are more likely than other sex offenders to reoffend." In both conditions, he hinted at future work opportunities if the consultation went well. 

The deception was so cunning that only four astute participants smelled a rat; their data were discarded.

As expected, the adversarial allegiance effect was stronger for the PCL-R, which is more subjectively scored. (Evaluators must decide, for example, whether a subject is "glib" or "superficially charming.") Scoring differences on the Static-99R only reached statistical significance in one out of the four cases.

The groundbreaking research, to be published in the journal Psychological Science, echoes previous findings by the same group regarding partisan bias in actual court cases. But by conducting a true experiment in which participants were randomly assigned to either a defense or prosecution condition, the researchers could rule out selection bias as a cause. In other words, the adversarial allegiance bias cannot be solely due to attorneys shopping around for simpatico experts, as the experimental participants were randomly assigned and had no group differences in their attitudes about civil commitment laws for sex offenders.

Sexually Violent Predator cases are an excellent arena for studying adversarial allegiance, because the typical case boils down to a "battle of the experts." Often, the only witnesses are psychologists, all of whom have reviewed essentially the same material but have differing interpretations about mental disorder and risk. In actual cases, the researchers note, the adversarial pressures are far higher than in this experiment:
"This evidence of allegiance was particularly striking because our experimental manipulation was less powerful than experts are likely to encounter in most real cases. For example, our participating experts spent only 15 minutes with the retaining attorney, whereas experts in the field may have extensive contact with retaining attorneys over weeks or months. Our experts formed opinions based on files only, which were identical across opposing experts. But experts in the field may elicit different information by seeking different collateral sources or interviewing offenders in different ways. Therefore, the pull toward allegiance in this study was relatively weak compared to the pull typical of most cases in the field. So the large group differences provide compelling evidence for adversarial allegiance."

This is just the latest in a series of stunning findings by this team of psychologists, led by Daniel Murrie of the University of Virginia and Marcus Boccaccini of Sam Houston State University, on an allegiance bias among psychologists. The tendency of experts to skew data to fit the side that retains them should come as no big surprise. After all, it is consistent with 2009 findings by the National Academy of Sciences calling into question the reliability of all types of forensic science evidence, including supposedly more objective techniques such as DNA typing and fingerprint analysis.

Although the group's findings have heretofore been published only in academic journals and have found a limited audience outside of the profession, this might change. A Huffington Post blogger, Wray Herbert, has published a piece on the current findings, which he called "disturbing." And I predict more public interest if and when mainstream journalists and science writers learn of this extraordinary line of research.

In the latest study, Murrie and Boccaccini conducted follow-up analyses to determine how often matched pairs of experts differed in the expected direction. On the three cases in which clear allegiance effects showed up in PCL-R scoring, more than one-fourth of score pairings had differences of more than six points in the expected direction. Six points equates to about two standard errors of measurement (SEMs), which should happen by chance in only 2 percent of cases. A similar, albeit milder, effect was found with the Static-99R.
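For readers who want to check that "2 percent" figure, the sketch below reproduces the arithmetic as I understand it, assuming the commonly cited standard error of measurement of about 3 points for the PCL-R and treating a two-SEM gap as a z-score of 2. Those assumptions are mine; consult the study itself for the authors' exact method.

    # Rough reconstruction of the chance-probability arithmetic described above.
    # Assumes a PCL-R standard error of measurement of about 3 points and treats
    # a two-SEM gap as a z-score of 2; consult the study for the exact method.
    from scipy.stats import norm

    SEM = 3.0
    gap = 6.0                     # score difference between opposing experts

    z = gap / SEM                 # "about two standard errors of measurement"
    p = 1 - norm.cdf(z)           # one-tailed: a gap this large, in the expected direction
    print(f"z = {z:.1f}, chance probability = {p:.1%}")   # roughly 2%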

Adversarial allegiance effects might be even stronger in less structured assessment contexts, the researchers warn. For example, clinical diagnoses and assessments of emotional injuries involve even more subjective judgment than scoring of the Static-99 or PCL-R.

But ... WHICH psychologists?!


For me, this study raised a tantalizing question: Since only some of the psychologists succumbed to the allegiance effect, what distinguished those who were swayed by the partisan pressures from those who were not?

The short answer is, "Who knows?"

The researchers told me that they ran all kinds of post-hoc analyses in an effort to answer this question, and could not find a smoking gun. As in a previous research project that I blogged about, they did find evidence for individual differences in scoring of the PCL-R, with some evaluators assigning higher scores than others across all cases. However, they found nothing about individual evaluators that would explain susceptibility to adversarial allegiance. Likewise, the allegiance effect could not be attributed to a handful of grossly biased experts in the mix.

In fact, although score differences tended to go in the expected direction -- with prosecution experts giving higher scores than defense experts on both instruments -- there was a lot of variation even among the experts on the same side, and plenty of overlap between experts on opposing sides.

So, on average prosecution experts scored the PCL-R about three points higher than did the defense experts. But the scores given by experts on any given case ranged widely even within the same group. For example, in one case, prosecution experts gave PCL-R scores ranging from about 12 to 35 (out of a total of 40 possible points), with a similarly wide range among defense experts, from about 17 to 34 points. There was quite a bit of variability on scoring of the Static-99R, too; on one of the four cases, scores ranged all the way from a low of two to a high of ten (the maximum score being 12).

When the researchers debriefed the participants themselves, they didn't have a clue as to what caused the effect. That's likely because bias is mostly unconscious, and people tend to recognize it in others but not in themselves. So, when asked about factors that make psychologists vulnerable to allegiance effects, the participants endorsed things that applied to others and not to them: Those who worked at state facilities thought private practitioners were more vulnerable; experienced evaluators thought that inexperience was the culprit. (It wasn't.)

I tend to think that greater training in how to avoid falling prey to cognitive biases (see my previous post exploring this) could make a difference. But this may be wrong; the experiment to test my hypothesis has not been run. 

The study is: "Are forensic experts biased by the side that retained them?" by Daniel C. Murrie, Marcus T. Boccaccini, Lucy A. Guarnera and Katrina Rufino, forthcoming from Psychological Science. Contact the first author (HERE) if you would like to be put on the list to receive a copy of the article as soon as it becomes available.

Click on these links for lists of my numerous prior blog posts on the PCL-R, adversarial allegiance, and other creative research by Murrie, Boccaccini and their prolific team. Among my all-time favorite experiments from this research team is: "Psychopathy: A Rorschach test for psychologists?"

January 27, 2013

Showdown looming over predictive accuracy of actuarials

Large error rates thwart individual risk prediction
Brett Jordan David Macdonald (Creative Commons license)
If you are involved in risk assessments in any way (and what psychology-law professional is not, given the current cultural landscape?), now is the time to get up to speed on a major challenge that's fast gaining recognition.

At issue is whether the margins of error around scores are so wide as to prevent reliable prediction of an individual's risk, even as risk instruments show some (albeit weak) predictive accuracy on a group level. If the problem is unsolvable, as critics maintain, then actuarial tools such as the Static-99 and VRAG should be barred from court, where they can literally make the difference between life and death.

The debate has been gaining steam since 2007, with a series of back-and-forth articles in academic journals (see below). Now, the preeminent journal Behavioral Sciences and the Law has published findings by two leading forensic psychologists from Canada and Scotland that purport to demonstrate once and for all that the problem is "an accurate characterization of reality" rather than a statistical artifact as the actuarials' defenders had argued.

So-called actuarial tools have become increasingly popular over the last couple of decades in response to legal demand. Instruments such as the Static-99 (for sexual risk) and the VRAG (for general violence risk) provide quick-and-dirty ways to guess at an individual's risk of violent or sexual recidivism. Offenders are scored on a set of easy-to-collect variables, such as age and number of prior convictions. The assumption is that an offender who attains a certain score resembles the larger group of offenders in that score range, and therefore is likely to reoffend at the same rate as the collective.

Responding to criticisms of the statistical techniques they used in their previous critiques, Stephen Hart of Simon Fraser University and David Cooke of Glasgow Caledonian University developed an experimental actuarial tool that worked on par with existing actuarials to separate offenders into high- and low-risk groups.* The odds of sexual recidivism for subjects in the high-risk group averaged 4.5 times that of those in the low-risk group. But despite this large average difference, the researchers established through a traditional statistical procedure, logistic regression, that the margins of error around individual scores were so large as to make risk distinctions between individuals "virtually impossible." In only one out of 90 cases was it possible to say that a subject's predicted risk of failure was significantly higher than the overall baseline of 18 percent. (See figure.)

Vertical lines show the confidence intervals around individual risk estimates; ranges this wide would be required to reach the traditional 95 percent level of certainty.
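
For readers who want to see the mechanics, below is a rough sketch, using simulated data rather than Hart and Cooke's, of the kind of analysis described above: fit a logistic regression of recidivism on an actuarial score, then attach 95 percent confidence intervals to each individual's predicted probability (here via a standard Wald-type approximation on the logit scale). The sample size, score range and base rate are made-up values chosen to loosely mimic the study; this is an illustration of the statistical point, not a reproduction of the published analysis.

```python
# Rough sketch with simulated data (not Hart and Cooke's) of individual-level
# confidence intervals from a logistic regression of recidivism on a risk score.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 90                                   # same order as the study's 90 cases
score = rng.integers(0, 7, size=n)       # hypothetical 0-6 actuarial score
true_p = 1 / (1 + np.exp(-(-3.0 + 0.4 * score)))  # base rate near 18 percent
reoffend = rng.binomial(1, true_p)

X = sm.add_constant(score.astype(float))
fit = sm.Logit(reoffend, X).fit(disp=0)

# Wald 95 percent interval on the linear predictor, then back-transform
eta = X @ fit.params
se = np.sqrt(np.sum((X @ fit.cov_params()) * X, axis=1))
risk = 1 / (1 + np.exp(-eta))
lo = 1 / (1 + np.exp(-(eta - 1.96 * se)))
hi = 1 / (1 + np.exp(-(eta + 1.96 * se)))

for s in sorted(set(score.tolist())):
    i = int(np.where(score == s)[0][0])
    print(f"score {s}: risk {risk[i]:.2f}, 95% CI [{lo[i]:.2f}, {hi[i]:.2f}]")
```

Run it and the intervals for most scores span much of the range between the overall base rate and far higher values, which is exactly the problem the authors describe: the group-level separation is real, but the individual estimates are too imprecise to do legal work.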

The brick wall limiting predictive accuracy at the individual level is not specific to violence risk. Researchers in more established fields, such as medical pathology, have also hit it. Many of you will know of someone diagnosed with cancer and given six months to live who managed to soldier on for years (or, conversely, who bit the dust in a matter of weeks). Such cases are not flukes: They owe to the fact that the six-month figure is just a group average, and cannot be accurately applied to any individual cancer patient.
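
A toy calculation makes the same point. Assuming, purely for illustration, that survival times follow an exponential distribution with a mean of six months, a sizable share of patients will land far from that average in both directions:

```python
# Toy example: a six-month *average* survival is compatible with very
# different individual outcomes (exponential distribution assumed only
# for illustration).
import math

mean_months = 6.0
rate = 1.0 / mean_months

p_beyond_two_years = math.exp(-rate * 24)     # still alive after 24 months
p_within_one_month = 1 - math.exp(-rate * 1)  # dies within the first month

print(f"Survive beyond two years: {p_beyond_two_years:.1%}")  # about 2%
print(f"Die within one month:     {p_within_one_month:.1%}")  # about 15%
```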

Attempts to resolve this problem via new technical procedures are "a waste of time," according to Hart and Cooke, because the problem is due to the "fundamental uncertainty in individual-level violence risk assessment, one that cannot be overcome." In other words, trying to precisely predict the future using "a small number of risk factors selected primarily on pragmatic grounds" is futile; all the analyses in the world "will not change reality."

Legal admissibility questionable 

The current study has grave implications for the legal admissibility of actuarial instruments in court. Jurisdictions that rely upon the Daubert evidentiary standard should not be allowing procedures for which the margins of error are "large, unknown, or incalculable," Hart and Cooke warn.

By offering risk estimates in the form of precise odds of a new crime within a specific period of time, actuarial methods project an image of certitude that is especially dangerous when the underlying accuracy is illusory. Being told that an offender "belongs to a group with a 78 percent likelihood of committing another violent offense within seven years" is highly prejudicial and may poison the judgment of triers of fact. More covertly, such figures also influence the judgment of the clinician, who -- through a process known as "anchoring bias" -- may tend to weigh other information in the case in light of the individual's actuarial risk score.

Classic '56 Chevy in Cuba. Photo credit: Franciscovies
With professional awareness of this issue growing, it is not only irresponsible but ethically indefensible not to inform the courts or others who retain our services about the limitations of actuarial risk assessment. The Ethics Code of the American Psychological Association, for example, requires informing clients of "any significant limitations of [our] interpretations." Unfortunately, I rarely (if ever) see limitations adequately disclosed, either in written reports or court testimony, by evaluators who rely upon the Static-99, VRAG, Psychopathy Checklist-Revised (which Cooke and statistician Christine Michie of Glasgow University tackled in a 2010 study) and similar instruments in forming opinions about individual risk.

In fact, more often than not I see the opposite: Evaluators tout the actuarial du jour as being far more accurate than "unstructured clinical judgment." That's like an auto dealer telling you, in response to your query about a vehicle's gas mileage, that it gets far more miles per gallon than your old 1956 Chevy. Leaving aside Cuba (where a long-running U.S. embargo hampers imports), there are about as many gas-guzzling '56 Chevys on the roads in 2013 as there are forensic psychologists relying on unstructured clinical judgment to perform risk assessments. 

Time to give up the ghost? 

Hart and Cooke recommend that forensic evaluators stop the practice of using these statistical algorithms to make "mechanistic" and "formulaic" predictions. They are especially critical of the practice of providing specific probabilities of recidivism, which are highly prejudicial and likely to be inaccurate.

"This actually isn’t a radical idea; until quite recently, leading figures in the field of forensic mental health [such as Tom Grisso and Paul Appelbaum] argued that making probabilistic predictions was questionable or even ill advised," they point out. “Even in fields where the state of knowledge is arguably more advanced, such as medicine, it is not routine to make individual predictions.”

They propose instead a return to evidence-based approaches that more holistically consider the individual and his or her circumstances:

From both clinical and legal perspectives, it is arbitrary and therefore inappropriate to rely solely on a statistical algorithm developed a priori - and therefore developed without any reference to the facts of the case at hand - to make decisions about an individual, especially when the decision may result in deprivation of liberties. Instead, good practice requires a flexible approach, one in which professionals are aware of and rely on knowledge of the scientific literature, but also recognize that their decisions ultimately require consideration of the totality of circumstances - not just the items of a particular test. 

In the short run, I am skeptical that this proposal will be accepted. The foundation underlying actuarial risk assessment may be hollow, but too much construction has occurred atop it. Civil commitment schemes rely upon actuarial tools to lend an imprimatur of science, and statutes in an increasing number of U.S. states mandate use of the Static-99 and related statistical algorithms in institutional decision-making.

The long-term picture is more difficult to predict. We may look back sheepishly on today's technocratic approaches, seeing them as emblematic of overzealous and ignorant pandering to public fear. Or -- more bleakly -- we may end up with a rigidly controlled society like that depicted in the sci-fi drama Gattaca, in which supposedly infallible scientific tests determine (and limit) the future of each citizen.

* * * * *

I recommend the article, "Another Look at the (Im-)Precision of Individual Risk Estimates Made Using Actuarial Risk Assessment Instruments." It's part of an upcoming special issue on violence risk assessment, and it provides a detailed discussion of the history and parameters of the debate. (Click HERE to request it from Dr. Hart.) Other articles in the debate include the following (in rough chronological order):
  • Hart, S. D., Michie, C. and Cooke, D. J. (2007a). Precision of actuarial risk assessment instruments: Evaluating the "margins of error" of group v. individual predictions of violence.  British Journal of Psychiatry, 190, s60–s65. 
  • Mossman, D. and Sellke, T. (2007). Avoiding errors about "margins of error" [Letter]. British Journal of Psychiatry, 191, 561. 
  • Harris, G. T., Rice, M. E. and Quinsey, V. L. (2008). Shall evidence-based risk assessment be abandoned? [Letter]. British Journal of Psychiatry, 192, 154. 
  • Cooke, D. J. and Michie, C. (2010). Limitations of diagnostic precision and predictive utility in the individual case: A challenge for forensic practice. Law and Human Behavior, 34, 259–274. 
  • Hanson, R. K. and Howard, P. D. (2010). Individual confidence intervals do not inform decision makers about the accuracy of risk assessment evaluations. Law and Human Behavior, 34, 275–281. 
*The experimental instrument used for this study was derived from the SVR-20, a structured professional judgment tool. The average recidivism rate among the total sample was 18 percent, with 10 percent of offenders in the low-risk group and 33 percent of those in the high-risk group reoffending. The instrument's Area Under the Curve, a measure of predictive validity, was .72, which is in line with that of other actuarial instruments.
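
As a quick arithmetic check, the rounded group rates in this footnote reproduce the odds ratio cited in the main text (the published 4.5 figure presumably comes from the unrounded data):

```python
# Odds ratio implied by the footnote's rounded group recidivism rates;
# it comes out near the 4.5-to-1 figure reported from the unrounded data.
p_low, p_high = 0.10, 0.33
odds_low = p_low / (1 - p_low)      # about 0.11
odds_high = p_high / (1 - p_high)   # about 0.49
print(f"odds ratio = {odds_high / odds_low:.1f}")  # prints 4.4
```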