
August 9, 2013

Deaths at Minnesota detention site bringing public scrutiny

State legislator calls SVP practices unconstitutional

Two back-to-back deaths at Minnesota's draconian Moose Lake facility have prompted new calls for reform of the United States' largest per capita preventive detention apparatus. More than 600 men are being indefinitely warehoused behind razor wire at Moose Lake, after having served prison terms for sexual offending. Only one has ever been released.

Yesterday, a state legislator publicly decried the state's current civil detention practices as unconstitutional in an interview on Minnesota Public Radio.

LISTEN TO THE INTERVIEW (7 MINUTES)

"Minnesota just can't continue to … lock people up with no hope of release. It isn't constitutional, and I think there's wide recognition of that fact," said Rep. Tina Liebling, who is leading reform efforts.

Moose Lake
In the wake of a federal class-action lawsuit by detainees, a federal task force recommended a number of changes to the program. But the state legislature is balking at implementing changes, which could include setting up alternative placement facilities and wresting some power away from the Moose Lake treatment bureaucracy by giving the courts more discretion and mandating biennial case reviews by independent forensic experts.

Liebling said that out of the 20 U.S. states with laws allowing civil incapacitation of dangerous sex offenders after they have completed their prison terms, no other state has a "one-size-fits-all" procedure that doesn't allow for any hope of release.

"We can't hold people for their entire lives because we are worried about what they might do in the future," she told reporter Cathy Wurzer. "Unless we're prepared to lock up everybody who might pose any kind of risk … we need to get better at dealing with people as individuals … [and not solely based on] what they did 10 or 15 or 20 years ago."

Liebling expressed optimism about increasing public interest and knowledge stemming from the class-action lawsuit and the recent deaths, which included one suicide and one death from unexplained causes. "This is definitely something the public needs to be aware of…. People need to know that there are sex offenders living among us, and most of them are doing so successfully."

* * * * *

Related blog posts: 

"Most civilly detained sex offenders would not reoffend, study finds: Other new research finds further flaws with actuarial methods in forensic practice" (July 18, 2013) 

Blogger urges new paradigm for sex offenders (Feb. 23, 2012)

Challenge to Minnesota commitment gains ground (Sept. 23, 2012)

July 18, 2013

Most civilly detained sex offenders would not reoffend, study finds

Other new research finds further flaws with actuarial methods in forensic practice

At least three out of every four men being indefinitely detained as Sexually Violent Predators in Minnesota would never commit another sex crime if they were released.

That’s the conclusion of a new study by the chief researcher for the Department of Corrections in Minnesota, the state with the highest per capita rate of preventive detention in the United States.

Using special statistical procedures and a new actuarial instrument called the MnSOST-3 that is better calibrated to current recidivism rates, Grant Duwe estimated that the recidivism rate for civilly committed sex offenders -- if released -- would be between about 5 and 16 percent over four years, and about 18 percent over their lifetimes. Only two of the 600 men detained since Minnesota's law was enacted have been released, making hollow the law's promise of rehabilitation after treatment.

Duwe -- a criminologist and author of a book on the history of mass murder in the United States -- downplays the troubling constitutional implications of this finding, focusing instead on the SVP law's exorbitant costs and weak public safety benefits. He notes that "Three Strikes" laws -- enacted in some U.S. states during the same period as SVP laws, and based on a similar theory of selectively incapacitating the worst of the worst -- have likewise had no significant impact on crime rates.

The problem for the field of forensic psychology is that forensic risk assessment procedures have astronomical rates of false positives, or over-predictions of danger, and it is difficult to determine which small proportion of those predicted to reoffend would actually do so.
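The arithmetic behind those false positives is worth making concrete. Here is a minimal sketch using Bayes' rule, with hypothetical numbers -- an 18 percent base rate, echoing Duwe's lifetime estimate, and an instrument assumed (generously) to be 75 percent sensitive and 75 percent specific. The figures are illustrative only, not properties of any actual instrument:

```python
def positive_predictive_value(base_rate, sensitivity, specificity):
    """Bayes' rule: P(reoffends | flagged as high risk)."""
    true_pos = base_rate * sensitivity            # dangerous and flagged
    false_pos = (1 - base_rate) * (1 - specificity)  # safe but flagged anyway
    return true_pos / (true_pos + false_pos)

# Hypothetical figures: 18% lifetime base rate, 75% sensitivity/specificity.
ppv = positive_predictive_value(0.18, 0.75, 0.75)
print(f"P(reoffends | flagged): {ppv:.0%}")  # roughly 40%
```

Even under these generous assumptions, roughly three of every five men flagged as high risk would never reoffend; at the lower four-year base rates Duwe estimates, the proportion of false positives climbs higher still.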

Minnesota has taken the lead in civilly detaining men with sex crime convictions, despite the state's only middling crime rates. Unlike in most U.S. states with SVP laws, sex offenders referred for possible detention are not entitled to a jury trial and, once detained, do not have a right to periodic reviews. Detention also varies greatly by county, so geographic locale can make the difference between a lifetime behind bars and a chance to move on with life after prison.

Ironically, as noted by other researchers, by the time an offender has done enough bad deeds to be flagged for civil commitment, his offending trajectory is often on the decline. Like other criminals, sex offenders tend to age out of criminality by their 40s, making endless incarceration both pointless and wasteful.

The study, To what extent does civil commitment reduce sexual recidivism? Estimating the selective incapacitation effects in Minnesota, is forthcoming in the Journal of Criminal Justice. Contact the author (HERE) to request a copy.

Other hot-off-the-press articles of related interest:

Risk Assessment in the Law: Legal Admissibility, Scientific Validity, and Some Disparities between Research and Practice 


Daniel A. Krauss and Nicholas Scurich, Behavioral Sciences and the Law

ABSTRACT: Risk assessment expert testimony remains an area of considerable concern within the U.S. legal system. Historically, controversy has surrounded the constitutionality of such testimony, while more recently, following the adoption of new evidentiary standards that focus on scientific validity, the admissibility of expert testimony has received greater scrutiny. Based on examples from recent appellate court cases involving sexually violent predator (SVP) hearings, we highlight difficulties that courts continue to face in evaluating this complex expert testimony. In each instance, we point to specific problems in courts' reasoning that lead them to admit expert testimony of questionable scientific validity. We conclude by offering suggestions for how courts might more effectively evaluate the scientific validity of risk expert testimony and how mental health professionals might better communicate their expertise to the courts.
Contact Dr. Krauss (HERE) for a copy of this very interesting and relevant article. The following two articles are freely available online:

The utility of assessing "external risk factors" when selecting Static-99R reference groups


Brian Abbott, Open Access Journal of Forensic Psychology

ABSTRACT: The Static-99 has been one of the most widely used sexual recidivism actuarial instruments. It has been nearly four years since the revised instrument, the Static-99R, was released for use. Peer-reviewed literature has been published regarding the basis for changing the scoring system for the age-at-release item, the utility of relative risk data, and variability of sexual recidivism rates across samples. Thus far, the peer-reviewed literature about the Static-99R has not adequately addressed the reliability and validity of the system to select among four possible actuarial samples (reference groups) from which to obtain score-wise observed and predicted sexual recidivism rates to apply to the individual being assessed. Rather, users have been relying upon the Static-99R developers to obtain this information through a website and workshops. This article provides a critical analysis of the reliability and validity of using the level of density of risk factors external to the Static-99R to select a single reference group among three options and discusses its implications in clinical and forensic practice. The use of alternate methods to select Static-99R reference groups is explored.

Calibration performance indicators for the Static-99R: 2013 update


Greg DeClue and Terence Campbell, Open Access Journal of Forensic Psychology

ABSTRACT: Providing comprehensive statistical descriptions of tool performance can help give researchers, clinicians, and policymakers a clearer picture of whether structured assessment instruments may be useful in practice. We report positive predictive value (PPV), negative predictive value (NPV), number needed to detain (NND), and number safely discharged (NSD), along with associated confidence intervals (CIs) for each value of the Static-99R, for one data set. Values reported herein apply to detected sexual recidivism during a 5-year fixed follow-up for the samples that the Static-99R developers consider to be roughly representative of all adjudicated sex offenders.

BLOGGER NOTE: I'm posting this research update while stranded at LAX en route to Brisbane, Australia, where I will be giving a series of seminars and trainings at Bond University before flying to Honolulu to give a full-day continuing education training at the American Psychological Association convention. (Registration for that is still open, I am told.) I'll try to blog as time allows, and I hope to see some of you at these venues.

June 4, 2013

Newspaper unfairly maligned forensic psychologist, news council holds

In an unprecedented case, a Washington news council has determined that the Seattle Times was inaccurate and unfair to a forensic psychologist targeted in an investigative series on the state's sexually violent predator program.

Reporter Christine Willmsen went too far in her four-part investigative series on the costs of implementing SVP laws by singling out psychologist Richard Wollert for public censure. Relying on prosecution sources, she portrayed Wollert as a defense hack who promulgated unorthodox theories in order to line his own pockets, quoting detractors who called him an "outlier" and "a symphony with one note" who spoke "mumbo jumbo."

During Saturday's three-hour hearing, Wollert testified that the Times series had "tainted the Washington jury pool" by implying that psychologists who testified for the defense were not credible, and damaged his professional reputation. He said his annual income plummeted from about $450,000 to between $175,000 and $200,000 in the wake of the January 2012 series. 

Click on the video to watch the three-hour hearing. Then vote (HERE).

By a 7-to-2 vote, the Washington News Council found that the Times did not make sufficient efforts to contact sources other than prosecutors. On the larger question of whether the "Price of Protection" series was "accurate, fair, complete and balanced" in its portrayal of Wollert, the council sided with Wollert by a narrower, 6-to-4 margin. The council was evenly split on whether the headline of the first article in the series, "State Wastes Millions Helping Sex Predators Avoid Lock-up," was "accurate and fair." The votes by the ombudsman body, after a public hearing live-streamed from Seattle Town Hall, have no legal authority. The Times refused to attend what it labeled a "quasi-judicial spectacle," objecting to the council's "assumed authority." 

Accuracy, fairness and journalistic ethics

This is the second investigative series by Times reporter Willmsen to raise protests; in 2004, a local school board challenged the fairness of a series on "Coaches Who Prey." The SVP series illustrates how what passes for investigative journalism these days is often just piling on against the underdog. Reflective of the corporate monopolization of daily media, it bears little resemblance to the muckraking spirit that dominated when I was in journalism school, in the heady afterglow of Woodward and Bernstein's Watergate exposé.

Web page of the Seattle Times series
The basic premise of this series was certainly newsworthy: that Washington’s SVP law -- the first in the United States -- created "a cottage industry of forensic psychologists" who are gorging themselves at the public trough. (That's a theme of this blog as well.) But by relying almost exclusively on prosecution sources, Willmsen became little more than a mouthpiece for government efforts to discredit and silence experts who present judges and juries with information they don't like.

The case raises important distinctions among accuracy, balance and fairness in journalism. Journalists' voluntary code of ethics calls for accuracy, fairness and honesty in reporting and interpreting information. Balance, on the other hand, is not always desirable. As the Times noted in its rebuttal letter to the news council, "The pursuit of balance resulted in years of articles and broadcasts that gave the 1 percent of scientists who were climate-change deniers the same weight as the 99 percent who were certain that human activities were having an adverse impact on global climate."

Yet in this case, Willmsen's embeddedness with prosecutors resulted in such a profound lack of balance that the series was blatantly biased, and crossed the line from news reporting into advocacy. Consider Willmsen's one-sidedness in reporting on three of her major themes:

Money:

Richard Wollert testifying (from Seattle Times video)
The main theme of the series was that defense-retained experts were gouging the state. Willmsen wrote that Wollert made more than $100,000 on one SVP case; in a video from the series, Wollert is shown testifying that he earned $1.2 million from sexually violent predator cases in Washington and other states over a two-year period. That's a big chunk of taxpayer money, and the revelation undoubtedly caused public outrage against defense attorneys and their experts.

Willmsen wrote that government experts were not paid that much. However, this is demonstrably false. During the period that Willmsen was collecting data for the series, a California psychiatrist who is popular with Washington prosecutors was charging $450 per hour (the average among forensic psychologists being about half that) and -- like Wollert -- had billed more than $100,000 in a single case. His name does not show up anywhere in the series.

Following publication of the series, Washington capped the fees of defense-retained SVP experts at $10,000 for evaluations, a fee that includes all travel expenses, and $6,000 for testifying (including preparation time, travel, and deposition testimony). The fees of prosecution-retained experts were not capped.

Boilerplate reports:

In the video, Willmsen targets defense-retained expert Ted Donaldson for writing boilerplate reports in which the names were changed, but the text remained virtually the same.

Boilerplate reports are indeed a travesty. Especially when someone is facing potentially lifelong detention for a crime that is only a future possibility, experts should present a keen understanding of what makes that specific individual tick. But, having reviewed dozens of reports in Washington SVP cases, I can attest to the fact that government hacks are at least as guilty of this sin. In fact, one of the most popular of prosecution-retained experts in Washington is infamous for writing novel-length boilerplate reports. In two recent cases that I am aware of, he even forgot to excise the name of the previous individual. So, one will be reading along in his report on Mr. Smith, and suddenly come across big chunks of material describing Mr. Jones.

Faulty science:

In her reporting, Willmsen lambasted an actuarial tool developed by Wollert, the MATS-1, as illegitimate. The idea behind the MATS-1 is to fully account for the reduction in risk that occurs as offenders age. The reporter quoted Karl Hanson, a Canadian researcher who is unhappy with Wollert because Wollert modified his popular Static-99 tool to create the MATS-1. But a fair and unbiased report would not rely for the unvarnished truth on a source who is essentially a business rival. In truth, as I've blogged myriad times, there are plenty of grounds to critique the methodology and accuracy of any actuarial technique. In contrast to her disdain for the newer MATS-1, Willmsen lauds the Static-99 as the most widely used actuarial tool for assessing sex offender risk. But just because McDonald's sells far more burgers than In-N-Out Burger does not mean its beef is any purer.

As Wollert told the council members, "When we're talking about the advancement of science, people have different ideas"; one test of good research is whether "it's accepted over time." By that standard, Wollert's theories are doing pretty well. His published thesis that actuarial tools overestimated risk among elderly offenders was once controversial but is now widely accepted. Similarly, he was one of the first to publish criticism of the predictive abilities of the MnSOST-R actuarial tool; recently, that instrument was pulled from use because of its inaccuracy in predicting sex offender recidivism. And Wollert was emphasizing Bayesian reasoning -- most recently popularized by Nate Silver in The Signal and the Noise -- before many in our field realized how essential it was. 

While it is certainly laudable for the press to investigate bloated fees and the waste of taxpayer money, by laying all blame on the defense, the Times likely prejudiced the jury pool and squelched zealous representation by defense attorneys, who in terms of both resources and legal clout are like David battling the state’s mighty Goliath. Instead of proffering Wollert as a whipping boy, then, an impartial investigation would have uncovered exemplars of problematic practices on both sides of the legal aisle.

A true muckraking journalist would have dug even further, to ask:
  • How did opportunist politicians bamboozle the public into enacting costly laws that do little to protect the public, while simultaneously distracting from more fruitful efforts at reducing sexual violence? 
  • How has the SVP laws' legal requirement of a mental disorder expanded the influence of forensic psychology, with battles of the experts over sham diagnoses boosting the fortunes of shrewd practitioners, many of whom were toiling away as lowly prison and hospital hacks prior to these laws?  

Talking to the press

During their public deliberations, several council members chastised Wollert for declining Willmsen's repeated requests for an interview. Wollert said that the reporter had been rude and exuded bias when she approached him. But the panelists said Wollert had no legitimate expectation of deference or politeness.

"You cut your own throat," said John Knowlton, a council member and journalism professor at Green River Community College.

But must a source submit to an interview, even if he knows that it is a trap, and that his words will be twisted and used against him? This is a thorny question. While many journalists are conscientious and above-board, others are not. Recall Janet Malcolm's provocative opening words in The Journalist and the Murderer, examining the ethics and morality of journalism in connection with journalist Joe McGinniss's betrayal of murder suspect Jeffrey MacDonald in Fatal Vision:
"Every journalist who is not too stupid or too full of himself to notice what is going on knows that what he does is morally indefensible. He is a kind of confidence man, preying on people's vanity, ignorance or loneliness, gaining their trust and betraying them without remorse. Like the credulous widow who wakes up one day to find the charming young man and all her savings gone, so the consenting subject of a piece of nonfiction learns -- when the article or book appears -- his hard lesson."
Perhaps Wollert could have adequately protected himself by asking the reporter to submit her questions in advance, and by responding in writing, via email. But who knows?

His decision to trust his instincts may have come back to bite him. Then again, he might have been bitten even worse had he let down his guard with an agenda-driven journalist who had him in her crosshairs.

THE WASHINGTON NEWS COUNCIL ENCOURAGES THE PUBLIC TO JOIN IN THE DEBATE. CLICK HERE TO VIEW THE VIDEO AND HERE TO CAST YOUR VOTE. COMPREHENSIVE RESOURCES ON THE CASE ARE AVAILABLE HERE. SEATTLE'S NONPROFIT JOURNALISM PROJECT CROSSCUT HAS A GOOD REPORT ON THE NEWS COUNCIL DECISION; YOU CAN ADD YOUR COMMENTS ON THAT WEBSITE.

April 7, 2013

Risk screening worthless with juvenile sex offenders, study finds

Boys labeled as 'sexually violent predators' not more dangerous

Juveniles tagged for preventive detention due to their supposedly higher level of sexual violence risk are no more likely to sexually reoffend than adolescents who are not so branded, a new study has found.

Only about 12 percent of youths who were targeted for civil commitment as sexually violent predators (SVPs) but then freed went on to commit a new sex offense. That compares with about 17 percent of youths screened out as lower risk and tracked over the same five-year follow-up period.

Although the two groups had essentially similar rates of sexual and violent reoffending, overall criminal reoffending was almost twice as high among the youths who were NOT petitioned for civil commitment (66 percent versus 35 percent), further calling into question the judgment of the forensic evaluators.

Because of the youths' overall low rates of sexual recidivism, civil detention has no measurable impact on rates of sexual violence by youthful offenders, asserted study author Michael Caldwell, a psychology professor at the University of Wisconsin and an expert on juvenile sex offending.

The study, just published in the journal Sexual Abuse, is one in a growing corpus pointing to flaws in clinical prediction of risk.

It tracked about 200 juvenile delinquents eligible for civil commitment as Sexually Violent Persons (SVPs). The state where the study was conducted was not specified; at least eight of the 20 U.S. states with SVP laws permit civil detention of juveniles, and all allow commitment of adults based on offenses committed as juveniles.

As they approached the end of their confinement period, the incarcerated juveniles underwent a two-stage screening process. In the first phase, one of a pool of psychologists at the institution evaluated them to determine whether they had a mental disorder that made them "likely" to commit a future act of sexual violence. Just over one in every four boys was found to meet this criterion, thereby triggering a prosecutorial petition for civil commitment.

After the initial probable cause hearing but before the final civil commitment hearing, an evaluator from a different pool of psychologists conducted a second risk assessment. These psychologists were also employed by the institution but were independent of the treatment team. Astonishingly, the second set of psychologists disagreed with the first in more than nine out of ten cases, screening out 50 of the remaining 54 youths. (Only four youths were civilly committed, and a judge overturned one of these commitments, so ultimately all but three boys from the initial group of 198 could be tracked in the community to see whether or not they actually reoffended.)

Evaluators typically did not rely on actuarial risk scales to reach their opinions, Caldwell noted, and their methods remained something of a mystery. Youths were more likely to be tagged for civil detention at the first stage if they were white, had multiple male victims, and had engaged in multiple instances of sexual misconduct in custody, Caldwell found.

However, no matter what method they used or which factors they considered, the psychologists likely would have had little success in predicting which youths would reoffend. Even "the most carefully developed and thoroughly studied" methods for predicting juvenile recidivism have shown very limited accuracy, Caldwell pointed out. This is mainly due to a combination of youths' rapid social maturation and their very low base rates of recidivism; it is quite hard to successfully predict a rare event.

Indeed, a recent meta-analysis revealed that none of the six most well-known and best-researched instruments for appraising risk among juvenile sex offenders showed consistently accurate results. Studies that did find significant predictive validity for an instrument were typically conducted by that instrument's authors rather than independent researchers, raising questions about their objectivity.

"Juveniles are still developing their personality, cognitions, and moral judgment, processes that reflect considerable plasticity," noted lead author Inge Hempel, a psychology graduate student in the Netherlands, and her colleagues. "There are still many possible developmental pathways, and no one knows what causes persistent sexual offending."

Caldwell agrees with Hempel and her colleagues that experts' inability to accurately predict which juveniles will commit future sex crimes calls into question the ethics of civil commitment.

"From the perspective of public policy, these results raise questions about whether SVP commitment laws, as written, should apply to juveniles adjudicated for sexual offenses," he wrote. "If SVP laws could be reliably applied to high risk juvenile offenders, the benefit of preventing a lifetime of potential victims makes for a compelling case. However, the task of identifying the small subgroup of juveniles adjudicated for sexual offenses who are likely to persist in sexual violence into adulthood is at least extremely difficult, and may be technically infeasible."

* * * * *

The articles are:

Michael Caldwell: Accuracy of Sexually Violent Person Assessments of Juveniles Adjudicated for Sexual Offenses, Sexual Abuse: A Journal of Research and Treatment. Request it from the author HERE.

Inge Hempel, Nicole Buck, Maaike Cima and Hjalmar van Marle: Review of Risk Assessment Instruments for Juvenile Sex Offenders: What is Next? International Journal of Offender Therapy and Comparative Criminology. Request it from the first author HERE.

March 25, 2013

Miracle of the day: 80-year-old man recaptures long-lost youth

(Or: How committing a new sex crime can paradoxically LOWER risk on the Static-99R)

"How old is the offender?"

Age is an essential variable in many forensic contexts. Older people are at lower risk for criminal recidivism. Antisocial behaviors, and even psychopathic character traits, diminish as criminals reach their 30s and 40s. Men who have committed sex offenses become considerably less likely to commit further such misconduct, due to a combination of declining testosterone levels and the changes in thinking, health, and lifestyle that occur naturally with age.

Calculating a person's age would seem very straightforward, and certainly not something requiring a PhD: Just look up his date of birth, subtract that from today's date, and -- voila! Numerous published tests provide fill-in-the-blank boxes to make this calculation easy enough for a fourth-grader.

One forensic instrument, however, bucks this common-sense practice. The developers of the Static-99R, the most widely used tool for estimating the risk of future sexual recidivism, have given contradictory instructions on how to score its very first item: Offender age.

In a new paper, forensic evaluator Dean Cauley and PsyD graduate student Michelle Brownfield report that divergent field practices in the scoring of this item are producing vastly different risk estimates in legal cases -- estimates that in some cases defy all logic and common sense.

Take Fred. Fred is 80 years old, and facing possible civil commitment for the rapes of two women when he was 18 years old. He served 12 years in prison for those rapes. Released from prison at age 30, he committed several strings of bank robberies that landed him back in prison on six separate occasions.

At age 80 (and especially with his only known sex offenses committed at age 18), his risk for committing a new sex offense if released from custody is extremely low -- something on the order of 3 percent. But evaluators now have the option of using any of three separate approaches with Fred, with each approach producing quite distinct opinions and recommendations.

Procedure 1: Age is age (the old-fashioned method)

The first and simplest approach is to list Fred's actual chronological age on Item 1 of the Static-99R. Using this approach, Fred gets a three-point reduction in risk, for a total score of one point, making his actuarial risk of committing a new sex offense around 3.8 percent.

Evaluators adopting this approach argue that advancing age mitigates risk, independent of any technicalities about when an offender was released from various periods of incarceration. These evaluators point to the Static-99R's coding manuals and workbook, along with recent publications, online seminars, and sworn testimony by members of the Static-99 Advisory Committee. Additionally, they point to a wealth of age-related literature from the fields of criminology and psychology to support their scoring.

Procedure 2: Reject the Static-99R as inappropriate

A second approach is not to use the Static-99R at all, because Fred's release from prison for his "index offenses" (the rapes) was far more than two years ago, making Fred unlike the members of the samples from which the Static-99R's risk levels were calculated. Evaluators adopting this approach point to publications by members of the Static-99 Advisory Committee, generally accepted testing standards and actuarial science test standards to support their choice to not use the test at all.

Procedure 3: The amazing elixir of youth

But there is a third approach. One that magically transports Fred back to his youth, back to the days when a career in bank robbing seemed so promising. (Bank robbery is no longer alluring; it is quietly fading away like the career of a blacksmith.) The last five decades of Fred's life fade away, and he becomes 30 again -- his age when he was last released from custody on a sex offense conviction.

Now Fred not only loses his three-point age reduction, but he gains a point for being between the ages of 18 and 34.9. A four-point difference! The argument for this approach is that it most closely conforms to the scoring methods used on the underlying samples of sex offenders, who were scored based on their date of release from their index sexual offense. These evaluators can correctly point to information imparted at training seminars, advice given by some members of the Static-99R Advisory Committee, and sworn testimony by developers of the test itself. They can also point to an undated FAQ #27 on the Static-99 website to support their opinion.

Fred could rape someone to reduce his risk!

Back-dating age to the time of the last release from a sex offense-related incarceration allows for a very bizarre twist:

Let's say that after Fred was released from prison on his most recent robbery stint, back when he was a vigorous young man of 61, he committed another rape. Being 60 or over, Fred would now get the four-point reduction in risk to which his age entitles him. This would cut his risk by two-thirds -- from 11.4 percent (at a score of 5) all the way down to a mere 3.8 percent (at a score of 1)!

While such a scenario might seem far-fetched, it is not at all unusual for an offender to be released from prison at, say, age 58 or 59, but to not undergo a civil commitment trial for a couple of years, until age 60 or 61. Such an offender's score will vary by two points (out of a total of 12 maximum points) depending upon how the age item is scored. And, as Cauley and Brownfield describe, the members of the Static-99R development team have, at different times, given contradictory advice on how to score the age item.
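The scoring swing Cauley and Brownfield describe can be sketched in a few lines. This is a hypothetical illustration, not the official Static-99R coding form: the age weights for the youngest and oldest bands and the rate estimates for scores of 1 and 5 come from the examples above, the middle-band weights are the commonly published ones, and the four points assumed for Fred's other items are chosen so the totals match the example.

```python
# Hypothetical sketch of the Static-99R age item (Item 1). Consult the
# actual coding manual before relying on any of these weights.
def age_item(age):
    if 18 <= age < 35:
        return 1    # ages 18-34.9 add a point
    if 35 <= age < 40:
        return 0
    if 40 <= age < 60:
        return -1
    return -3       # 60 and over: the three-point reduction

OTHER_ITEMS = 4  # assumed sum of Fred's remaining items (illustrative)

# Procedure 1: use Fred's actual chronological age.
score_actual = OTHER_ITEMS + age_item(80)     # 4 + (-3) = 1

# Procedure 3: back-date Fred to 30, his age at last release on a sex offense.
score_backdated = OTHER_ITEMS + age_item(30)  # 4 + 1 = 5

# Rate estimates cited above for these scores:
rates = {1: "3.8 percent", 5: "11.4 percent"}
print(score_actual, rates[score_actual])        # 1 3.8 percent
print(score_backdated, rates[score_backdated])  # 5 11.4 percent
```

The same man and the same history yield a two-thirds difference in the risk estimate presented to the court, depending solely on which scoring advice the evaluator follows.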

By completely negating the very substantial body of research on age and crime, this technocratic method creates other very concerning -- and paradoxical -- implications, Cauley and Brownfield argue: As the risk estimate for a more persistent offender is lowered, the offender who does not reoffend is stuck with a risk score that is forever jacked up.

Back-dating an offender's age is also at odds with the research that generated the test itself, they say, because the offenders in the samples used to construct the Static-99R had finished serving their sentences on their index sexual offenses within two years of being studied. In other words, none of the offenders had been released many years earlier, and there was none of this curious time-travel business in regard to their ages. As the instrument's developers noted in a publication just last year, the Static-99 "was developed on, and intended for, sexual offenders with a current or recent sexual offense."

So, if you are evaluating an old geezer in the local pen and he tells you that he is only 30 years old, don't assume that he has a delusional belief that he has discovered the elixir of youth -- or that he's pulling your leg. He just might be reciting the age that he was just assigned by a technocratic Static-99R evaluator.

The paper, "Static-99R: Item #1 -- What is the Offender's Age? A lack of consensus leads to a defective actuarial," is available for download both HERE and HERE.

March 19, 2013

California high court upholds parolee confidentiality right

Two years ago, I reported on a California appellate opinion upholding the sacredness of patient-therapist confidentiality even for convicted felons who are mandated to treatment as a condition of parole. Today, the California Supreme Court upheld the gist of the ruling -- but with a proviso. Using strained logic, the court held that the breach of confidentiality was not so prejudicial as to merit overturning Ramiro Gonzales's civil commitment, as the Sixth District Court of Appeal had done.

Gonzales is a developmentally disabled man whose therapist turned over prejudicial therapy records to a prosecutor seeking to civilly detain him as a sexually violent predator (SVP). Forensic psychology experts Brian Abbott and Tim Derning testified for the defense; called by the prosecution were psychologists Thomas MacSpeiden and Jack Vognsen.

As I wrote two years ago, the ruling is good news for psychology ethics and should serve as a reminder that we are obligated to actively resist subpoenas requesting confidential records of therapy.

Today's California Supreme Court ruling is HERE. My prior post, with much more detail on the case, is HERE. The Sixth District Court of Appeal opinion from 2011, available HERE, provides a nice overview of both federal and California case law on confidentiality in forensic cases.
 
Hat tip: Adam Alban

March 5, 2013

Remarkable experiment proves pull of adversarial allegiance

Psychologists' scoring of forensic tools depends on which side they believe has hired them

A brilliant experiment has proven that adversarial pressures skew forensic psychologists' scoring of supposedly objective risk assessment tests, and that this "adversarial allegiance" is not due to selection bias, or preexisting differences among evaluators.

The researchers duped about 100 experienced forensic psychologists into believing they were part of a large-scale forensic case consultation at the behest of either a public defender service or a specialized prosecution unit. After two days of formal training by recognized experts on two widely used forensic instruments -- the Psychopathy Checklist-R (PCL-R) and the Static-99R -- the psychologists were paid $400 to spend a third day reviewing cases and scoring subjects. The National Science Foundation picked up the $40,000 tab.

Unbeknownst to them, the psychologists were all looking at the same set of four cases. But they were "primed" to consider the case from either a defense or prosecution point of view by a research confederate, an actual attorney who pretended to work on a Sexually Violent Predator (SVP) unit. In his defense attorney guise, the confederate made mildly partisan but realistic statements such as "We try to help the court understand that ... not every sex offender really poses a high risk of reoffending." In his prosecutor role, he said, "We try to help the court understand that the offenders we bring to trial are a select group [who] are more likely than other sex offenders to reoffend." In both conditions, he hinted at future work opportunities if the consultation went well. 

The deception was so cunning that only four astute participants smelled a rat; their data were discarded.

As expected, the adversarial allegiance effect was stronger for the PCL-R, which is more subjectively scored. (Evaluators must decide, for example, whether a subject is "glib" or "superficially charming.") Scoring differences on the Static-99R only reached statistical significance in one out of the four cases.

The groundbreaking research, to be published in the journal Psychological Science, echoes previous findings by the same group regarding partisan bias in actual court cases. But by conducting a true experiment in which participants were randomly assigned to either a defense or prosecution condition, the researchers could rule out selection bias as a cause. In other words, the adversarial allegiance bias cannot be solely due to attorneys shopping around for simpatico experts, as the experimental participants were randomly assigned and had no group differences in their attitudes about civil commitment laws for sex offenders.

Sexually Violent Predator cases are an excellent arena for studying adversarial allegiance, because the typical case boils down to a "battle of the experts." Often, the only witnesses are psychologists, all of whom have reviewed essentially the same material but have differing interpretations about mental disorder and risk. In actual cases, the researchers note, the adversarial pressures are far higher than in this experiment:
"This evidence of allegiance was particularly striking because our experimental manipulation was less powerful than experts are likely to encounter in most real cases. For example, our participating experts spent only 15 minutes with the retaining attorney, whereas experts in the field may have extensive contact with retaining attorneys over weeks or months. Our experts formed opinions based on files only, which were identical across opposing experts. But experts in the field may elicit different information by seeking different collateral sources or interviewing offenders in different ways. Therefore, the pull toward allegiance in this study was relatively weak compared to the pull typical of most cases in the field. So the large group differences provide compelling evidence for adversarial allegiance."

This is just the latest in a series of stunning findings by this team of psychologists led by Daniel Murrie of the University of Virginia and Marcus Boccaccini of Sam Houston State University on an allegiance bias among psychologists. The tendency of experts to skew data to fit the side that retains them should come as no big surprise. After all, it is consistent with the 2009 report of the National Academy of Sciences calling into question the reliability of many types of forensic science evidence, including supposedly objective techniques such as fingerprint analysis.

Although the group's findings have heretofore been published only in academic journals and have found a limited audience outside of the profession, this might change. A Huffington Post blogger, Wray Herbert, has published a piece on the current findings, which he called "disturbing." And I predict more public interest if and when mainstream journalists and science writers learn of this extraordinary line of research.

In the latest study, Murrie and Boccaccini conducted follow-up analyses to determine how often matched pairs of experts differed in the expected direction. In the three cases in which clear allegiance effects showed up in PCL-R scoring, more than one-fourth of score pairings had differences of more than six points in the expected direction. Six points equates to about two standard errors of measurement (SEMs), which should happen by chance in only 2 percent of cases. A similar, albeit milder, effect was found with the Static-99R.
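The 2 percent figure can be reproduced with a quick normal-model calculation. This is a sketch of the arithmetic the post implies, under the assumption that the score gap is treated as a one-sided standard-normal deviate measured in SEM units, with a PCL-R SEM of roughly three points:

```python
from math import erfc, sqrt

def upper_tail(z):
    # One-sided P(Z > z) for a standard normal variable
    return erfc(z / sqrt(2)) / 2

sem = 3.0          # assumed PCL-R standard error of measurement (~3 points)
diff = 6.0         # the observed score gap between paired experts
z = diff / sem     # six points = about two SEMs
p = upper_tail(z)  # roughly 0.023 -- the "only 2 percent of cases" in the text
```

So a six-point gap in the expected direction sits about two SEMs out, which a chance model puts at roughly one pairing in 44, rather than the more than one in four that the study actually observed.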

Adversarial allegiance effects might be even stronger in less structured assessment contexts, the researchers warn. For example, clinical diagnoses and assessments of emotional injuries involve even more subjective judgment than scoring of the Static-99 or PCL-R.

But ... WHICH psychologists?!


For me, this study raised a tantalizing question: Since only some of the psychologists succumbed to the allegiance effect, what distinguished those who were swayed by the partisan pressures from those who were not?

The short answer is, "Who knows?"

The researchers told me that they ran all kinds of post-hoc analyses in an effort to answer this question, and could not find a smoking gun. As in a previous research project that I blogged about, they did find evidence for individual differences in scoring of the PCL-R, with some evaluators assigning higher scores than others across all cases. However, they found nothing about individual evaluators that would explain susceptibility to adversarial allegiance. Likewise, the allegiance effect could not be attributed to a handful of grossly biased experts in the mix.

In fact, although score differences tended to go in the expected direction -- with prosecution experts giving higher scores than defense experts on both instruments -- there was a lot of variation even among the experts on the same side, and plenty of overlap between experts on opposing sides.

So, on average prosecution experts scored the PCL-R about three points higher than did the defense experts. But the scores given by experts on any given case ranged widely even within the same group. For example, in one case, prosecution experts gave PCL-R scores ranging from about 12 to 35 (out of a total of 40 possible points), with a similarly wide range among defense experts, from about 17 to 34 points. There was quite a bit of variability on scoring of the Static-99R, too; on one of the four cases, scores ranged all the way from a low of two to a high of ten (the maximum score being 12).

When the researchers debriefed the participants themselves, they didn't have a clue as to what caused the effect. That's likely because bias is mostly unconscious, and people tend to recognize it in others but not in themselves. So, when asked about factors that make psychologists vulnerable to allegiance effects, the participants endorsed things that applied to others and not to them: Those who worked at state facilities thought private practitioners were more vulnerable; experienced evaluators thought that inexperience was the culprit. (It wasn't.)

I tend to think that greater training in how to avoid falling prey to cognitive biases (see my previous post exploring this) could make a difference. But this may be wrong; the experiment to test my hypothesis has not been run. 

The study is: "Are forensic experts biased by the side that retained them?" by Daniel C. Murrie, Marcus T. Boccaccini, Lucy A. Guarnera and Katrina Rufino, forthcoming from Psychological Science. Contact the first author (HERE) if you would like to be put on the list to receive a copy of the article as soon as it becomes available.

Click on these links for lists of my numerous prior blog posts on the PCL-R, adversarial allegiance, and other creative research by Murrie, Boccaccini and their prolific team. Among my all-time favorite experiments from this research team is: "Psychopathy: A Rorschach test for psychologists?"

February 5, 2013

Texas SVP jurors ignoring actuarial risk scores

Expert witness for defense makes a (small) difference, study finds

The fiery debate surrounding the validity of actuarial tools to predict violence risk raises a question: How much influence do these instruments really have on legal decision-makers? The answer, at least when it comes to jurors in Sexually Violent Predator trials in Texas:

Not much.

"Despite great academic emphasis on risk measures - and ongoing debates about the value, accuracy, and utility of risk-measure scores reported in SVP hearings - our findings suggest these risk measure scores may have little impact on jurors in actual SVP hearings."

The researchers surveyed 299 jurors at the end of 26 sexually violent predator trials. Unfortunately, they could not directly measure the relationship between risk scores and civil commitment decisions because, this being Texas, juries slam-dunked 25 out of 26 sex offenders, hanging in only one case (which ultimately ended in commitment after a retrial).  

Instead of the ultimate legal outcome, the researchers had to rely on proxy outcome measures, including jurors' ratings of how dangerous an individual was (specifically, how likely he would be to commit a new sex offense within one year of release), and their assessment of how difficult it was to make a decision in their case.

There was no evidence that jurors' assessments of risk or decision difficulty varied based on respondents' scores on risk assessment tools, which in each case included the Static-99, MnSOST-R and the PCL-R. This finding, by the prolific team of Marcus Boccaccini, Daniel Murrie and colleagues, extends into the real world prior mock-trial evidence that jurors in capital cases and other legal proceedings involving psychology experts are more heavily influenced by clinical than by actuarial testimony.

What did make a difference to jurors was whether the defense called at least one witness, and in particular an expert witness. Overall, there was a huge imbalance in expert testimony, with almost all of the trials featuring two state experts, but only seven of 26 including even one expert called by the defense.

"Skepticism effect"

The introduction of a defense expert produced a "skepticism effect," the researchers found, in which jurors became more skeptical of experts' ability to predict future offending. However, jurors' lower risk ratings in these cases could also have been due to real differences in the cases. In SVP cases involving legitimately dangerous sex offenders, defense attorneys often have trouble finding experts willing to testify. In other words, the researchers note, "the reduced ratings of perceived risk associated with the presence of a defense expert may be due to nonrandom selection … as opposed to these defense experts' influencing jurors."

A back story here pertains to the jury pool in the Texas county in which civil commitment trials are held. All SVP trials take place in Montgomery County, a "very white community," an attorney there told me. A special e-juror selection process for SVP jurors whitens the jury pool even more, disproportionately eliminating Hispanics and African Americans. Meanwhile, many of those being referred for civil commitment are racial minorities. The potentially Unconstitutional race discrepancy is the basis for one of many current legal challenges to the SVP system in Texas.

Once a petition for civil commitment as a sexually violent predator is filed in Texas, the outcome is a fait accompli. Since the inception of the state's SVP law, only one jury has unanimously voted against civil commitment. Almost 300 men have been committed, and not a single one has been released.

Overall, the broad majority of jurors in the 26 SVP trials were of the opinion that respondents were likely to reoffend in the next year. Based on this heightened perception of risk, the researchers hypothesize that jurors may have found precise risk assessment ratings irrelevant because any risk was enough to justify civil commitment.

In a previous survey of Texas jurors, more than half reported that even a 1 percent chance of recidivism was enough to qualify a sex offender as dangerous. To be civilly committed in Texas, a sex offender must be found "likely" to reoffend, but the state's courts have not clarified what that term means.  

Risk scores could also be irrelevant to jurors motivated more by a desire for retribution than a genuine wish to protect the public, the researchers pointed out. "Although SVP laws are ostensibly designed to provide treatment and protect the public, experimental research suggests that many mock jurors make civil commitment decisions based more on retributive motives -- that is, the desire to punish sexual offenses -- than the utilitarian goal of protecting the public…. Jurors who adopt this mindset may spend little time thinking about risk-measure scores."

All this is not to say that actuarial scores are irrelevant. They are highly influential in the decisions that take place leading up to an SVP trial, including administrative referrals for full evaluations, the opinions of the evaluators themselves as to whether an offender meets civil commitment criteria, and decisions by prosecutors as to which cases to select for trial.

"But the influence of risk scores appears to end at the point when laypersons make decisions about civilly committing a select subgroup of sexual offenders," the researchers noted.

Bottom line: Once a petition for civil commitment as a sexually violent predator is filed in Texas, it's the end of the line. The juries are ultra-punitive, and the deck is stacked, with government experts outnumbering experts called by the defense in every case. It remains unclear to what extent these results might generalize to SVP proceedings in other states with less conservative jury pools and/or more balanced proceedings.

  • The study, "Do Scores From Risk Measures Matter to Jurors?" by Marcus Boccaccini, Darrel Turner, Craig Henderson and Caroline Chevalier of Sam Houston State University and Daniel Murrie of the University of Virginia, is slated for publication in an upcoming issue of Psychology, Public Policy, and Law. To request a copy, email the lead researcher (HERE).

January 5, 2013

SVP verdict overturned for prosecutorial misconduct -- again

Prosecutor impugned defense witness in hebephilia case

In a highly unusual development, a California appeals court has overturned the civil commitment of a convicted sex offender for the second time in a row due to egregious prosecutorial misconduct.

The prosecutor in the most recent trial engaged in a "pervasive pattern" of misconduct and "flagrantly" violated the law by implying that jurors would become social pariahs if they did not vote to civilly commit sex offender Dariel Shazier, the appellate court wrote.

Prosecutor Jay Boyarsky, now the second in command of the district attorney's office in Santa Clara County (San Jose), also improperly impugned the reputation of the forensic psychologist who testified for the defense, according to the scathing opinion by the Sixth District Court of Appeal.
Prosecutor Jay Boyarsky
"This is not a case in which the prosecutor engaged in a few minor incidents of improper conduct. Rather, the prosecutor engaged in a pervasive pattern of inappropriate questions, comments and argument, throughout the entire trial, each one building on the next, to such a degree as to undermine the fairness of the proceedings. The misconduct culminated in the prosecutor flagrantly violating the law in closing argument, telling the jury to consider the reaction of their friends and family to their verdict, implying they would be subject to ridicule and condemnation if they found in favor of defendant."
This was the second civil commitment verdict against Dariel Shazier to be overturned on appeal due to prosecutorial misconduct. The license of the previous prosecutor, Benjamin Field, was suspended in 2010 based on his severe misconduct in several cases, including Shazier's 2006 trial. In the first of Shazier's three trials, a jury deadlocked as to whether the convicted sex offender qualified for civil detention as a sexually violent predator.

The case revolves around the controversial diagnosis of hebephilia. Shazier served nine years in prison for sexual misconduct with teenage boys. At the end of his sentence, in 2003, the district attorney began efforts to commit him indefinitely to a locked hospital based on his risk of reoffense. At Shazier's most recent trial, two state evaluators testified that he suffered from hebephilia, thereby making him eligible for civil commitment. However, they admitted that hebephilia was highly controversial and had only come into vogue with the advent of civil commitment laws.

Incendiary questioning of defense expert witness

The appellate court chastised the prosecutor for stepping far over the line in his questioning of a psychologist who was called by the defense to rebut the diagnosis of hebephilia. Psychologist Ted Donaldson testified that hebephilia is not a legitimate mental disorder, and that socially unacceptable or immoral conduct does not constitute a mental illness.

On cross-examination, Boyarsky questioned Donaldson about previous cases in which he had testified that sex offenders were not mentally disordered. Naturally, Donaldson had not brought the files from all of his old cases to court with him. This, the appellate court wrote, gave the prosecutor an excuse to recite inflammatory facts from select cases, which the defense correctly complained "were only brought up to incite the passions and prejudice of the jury."

The appellate court also chastised Boyarsky for impugning Donaldson's character. In his closing argument, the prosecutor described Donaldson as "completely biased and not helpful," called his opinion "laughable," and implied that he was biased because he had repeatedly testified for the defense:
"He has got a streak that would make Cal Ripken jealous. Cal Ripken the baseball player and the Iron Man that played in something like 4,000 straight games. Dr. Donaldson’s streak of 289 straight times testifying exclusively for the defense. Now he would like to tell you that is not his fault, because he offered to teach the State of California all his wisdom. His brilliance has yet to be fully appreciated by this society. It is appreciated by defense attorneys who pay him...."
Boyarsky also improperly attacked a psychiatric technician at Atascadero State Hospital (where Shazier was undergoing sex offender treatment while awaiting the outcome of his case) who testified for the defense. The appellate court critiqued "rhetorical attempts to degrade and disparage" that witness during cross-examination. The justices highlighted Boyarsky's question: "Mr. Ross, you don't know what you’re talking about, do you?"
"Here, the prosecutor’s questioning … was clearly argumentative, and was not intended to glean relevant information. 'An argumentative question is a speech to the jury masquerading as a question. The questioner is not seeking to elicit relevant testimony. Often it is apparent that the questioner does not even expect an answer. The question may, indeed, be unanswerable. . . . An argumentative question that essentially talks past the witness, and makes an argument to the jury, is improper because it does not seek to elicit relevant, competent testimony, or often any testimony at all.'(People v. Chatman (2006) 38 Cal.4th 344, 384.)"
The appellate opinion strongly rebuked trial judge Alfonso Fernandez for overruling repeated objections by defense attorney Patrick Hoopes. "Defense counsel objected to all of the prosecutor's improper questions, statements and arguments. We observe that not one of counsel's well-taken objections was sustained by the court. The court erred in overruling these objections."

Who’s grooming who?

In a humorous twist, Boyarsky was also reprimanded for misusing the loaded term "grooming" during his closing argument.

During the trial, a government expert had testified that Shazier "groomed" his victims by slowly manipulating them into situations in which he could violate sexual boundaries with them.

The prosecutor seized on this in his closing argument, warning the jury that Shazier had "groomed" them during his testimony. "The grooming behavior, the manipulation, it still continues," Boyarsky stated.

The appellate court agreed with the defense that this statement was "intended to inflame the jury, making them each feel like victims in the case." The justices went even further, noting that Shazier was not necessarily the one doing the grooming:
"During trial, Dr. Murphy defined grooming as a 'slow, steady manipulation to get a person in a compromising position or violate boundaries without awareness.' The irony here is that the prosecutor's conduct toward the jury throughout the trial closely fit Dr. Murphy's definition of grooming."

The unanimous appellate ruling is HERE. San Jose Mercury News coverage is HERE; the San Francisco Chronicle's, HERE.

December 16, 2012

Training: Controversies in sexually violent predator evaluations

I am excited to announce that the American Psychology-Law Society has accepted a panel that I put together on "Emergent controversies in civil commitment evaluations of sexually violent predators." I hope some of you will join me at the annual conference in Portland, Oregon on March 7-9.

The symposium will address three areas of controversy in the sex offender civil commitment field:
  • Mental abnormality and psychiatric diagnosis in court (my topic)
  • Recidivism risk assessment (addressed by my esteemed colleague Jeffrey Singer)
  • Volitional control (Frederick Winsmann, clinical instructor at Harvard Medical School, will present a promising new assessment model)
Here's the symposium abstract:
Over the past three decades, Sexually Violent Predator litigation has emerged as perhaps the most contentious area of forensic psychology practice. In an effort to assist the courts, a cadre of experts has proffered a confusing array of constantly changing assessment methods, psychiatric diagnoses, and theories of sex offending. Now, some federal and state courts are beginning to subject these often-competing claims to greater scrutiny, for example via Daubert and Frye evidentiary hearings. This symposium will alert forensic practitioners, lawyers and academics to some of the most prominent minefields on the SVP battleground, revolving around three central areas of contestation: psychiatric diagnosis, risk assessment, and the elusive construct of volitional control. The presenters will review recent scholarly literature and court rulings addressing: (1) the reliability and validity of psychiatric diagnoses in sexually dangerous person litigation, (2) forensic risk assessment tools and how risk data should be reported to triers of fact, and (3) how best to address the issue of volitional impairment, a Constitutionally required element for civil commitment. The focus will be on how to assist the courts while remaining within the limits of scientific knowledge and our profession's ethical boundaries.
The conference schedule hasn't been issued yet so I don’t know which day our panel is presenting, but I will keep you posted when I find out, probably in January. In the meantime, if you are looking to pick up Continuing Education (CE) credits, the pre-conference workshops are a good way to get some high-quality forensic training:
  • The ever-informative Randy Otto on "Improving Clinical Judgment and Decision Making in Forensic Psychological Evaluation," with a heavy focus on identifying and reducing bias (full-day workshop) 
  • Paul J. Frick on "Developmental Pathways to Conduct Disorder: Implications for Understanding and Treating Severely Aggressive and Antisocial Youth" (full-day workshop)
  • Amanda Zelechoski on "Trauma-Informed Care in Forensic Settings" (full-day workshop)
  • Kathy Pezdek on "How to Present Statistical Information to Judges and Jurors" (half-day workshop)
  • Steven Penrod on "Things That Jurors (and Judges) Ought to Know About Eyewitness Reliability" (half-day workshop)
Portland is a lovely city, especially in the spring, so register now, and mark your calendars for what is sure to be a lively and educational event.

December 14, 2012

Judge bars Static-99R risk tool from SVP trial

Developers staunchly refused requests to turn over data
For several years now, the developers of the most widely used sex offender risk assessment tool in the world have refused to share their data with independent researchers and statisticians seeking to cross-check the instrument's methodology.

Now, a Wisconsin judge has ordered the influential Static-99R instrument excluded from a sexually violent predator (SVP) trial, on the grounds that failure to release the data violates a respondent's legal right to due process.

The ruling may be the first time that the Static-99R has been excluded altogether from court. At least one prior court, in New Hampshire, barred an experimental method that is currently popular among government evaluators, in which Static-99R risk estimates are artificially inflated by comparing sex offenders to a specially selected "high-risk" sub-group, a procedure that has not been empirically validated in any published research. 

In the Wisconsin case, the state was seeking to civilly commit Homer Perren Jr. as a sexually dangerous predator after he completed a 10-year prison term for an attempted sexual assault on a child age 16 or under. The exclusion of the Static-99R ultimately did not help Perren. This week, after a 1.5-day trial, a jury deliberated for only one hour before deciding that he met the criteria for indefinite civil commitment at the Sand Ridge Secure Treatment Center.*

Dec. 18 note: After publishing this post, I learned that the judge admitted other "actuarial" risk assessment instruments, including the original Static-99 and the MnSOST-R, which is way less accurate than the Static-99R and vastly overpredicts risk. He excluded the RRASOR, a four-item ancestor of the Static-99. In hindsight, for the defense to get the Static-99R excluded was a bit like cutting off one's nose to spite one's face.

The ruling by La Crosse County Judge Elliott Levine came after David Thornton, one of the developers of the Static-99R and a government witness in the case, failed to turn over data requested as part of a Daubert challenge by the defense. Under the U.S. Supreme Court's 1993 ruling in Daubert v. Merrell Dow Pharmaceuticals, judges are charged with the gatekeeper function of filtering evidence for scientific reliability and validity prior to its admission in court.

Defense attorney Anthony Rios began seeking the data a year ago so that his own expert, psychologist Richard Wollert, could directly compare the predictive accuracy of the Static-99R with that of a competing instrument, the "Multisample Age-Stratified Table of Sexual Recidivism Rates," or MATS-1. Wollert developed the MATS-1 in an effort to improve the accuracy of risk estimation by more precisely considering the effects of advancing age. It incorporates recidivism data on 3,425 offenders published by Static-99R developer Karl Hanson in 2006, and uses the statistical method of Bayes's Theorem to calculate likelihood ratios for recidivism at different levels of risk.
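In odds form, Bayes's Theorem updates a recidivism base rate by multiplying the prior odds by a likelihood ratio, which is the general machinery the MATS-1 relies on. A minimal sketch of that update, using purely hypothetical numbers (the actual MATS-1 likelihood ratios and base rates are not given in this post):

```python
def posterior_from_lr(base_rate, lr):
    """Bayes's Theorem in odds form: posterior odds = prior odds x likelihood ratio."""
    prior_odds = base_rate / (1 - base_rate)
    post_odds = prior_odds * lr
    return post_odds / (1 + post_odds)

# Hypothetical illustration: a 10 percent base rate combined with a
# likelihood ratio of 2.5 for a given risk level
print(round(posterior_from_lr(0.10, 2.5), 3))  # prints 0.217
```

A likelihood ratio of 1 leaves the base rate unchanged, which is why age strata with different base rates can be handled in one consistent framework: the age adjustment enters through the prior rather than being bolted on as a point deduction.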

The state's attorney objected to the disclosure request, calling the data "a trade secret."

Hanson, the Canadian psychologist who heads the Static-99 enterprise, has steadfastly rebuffed repeated requests to release data on which the family of instruments is based. Public Safety Canada, his agency, takes the position that it will not release data on which research is still being conducted, and that "external experts can review the data set only to verify substantive claims (i.e., verify fraud), not to conduct new analyses," according to a document filed in the case.

Thornton estimated that the raw data would remain proprietary for another five years, until the research group finishes its current projects and releases the data to the public domain.

While declining to release the data to the defense, Hanson agreed to release it to Thornton, the government's expert and a co-developer of the original Static-99, so that Thornton could analyze the relative accuracy of the two instruments. 

The American Psychological Association's Ethics Code requires psychologists to furnish data, after their research results are published, to "other competent professionals who seek to verify the substantive claims through reanalysis" (Section 8.14).

At least five researchers have been rebuffed in their attempts to review Static-99 data over the past few years, for purposes of research replication and reanalysis. As described in their 2008 article, Hanson's steadfast refusals to share data required Wollert and his colleagues, statisticians Elliot Cramer and Jacqueline Waggoner, to perform complex statistical manipulations to develop their alternate methodology. (Correspondence between Hanson and Cramer can be viewed HERE.) Hanson also rejected a request by forensic psychologists Brian Abbott and Ted Donaldson; see comments section, below.


Since the Static-99 family of instruments (which includes the Static-99, Static-99R, and Static-2002) began to be developed more than a decade ago, they have been in a near-constant state of flux, with risk estimates and instructions for interpretation subject to frequent and dizzying changes.

It is unfortunate, with the stakes so high, that all of these researchers cannot come together in a spirit of open exchange. I'm sure that would result in more scientifically sound, and defensible, risk estimations in court.

The timing of this latest brouhaha is apropos, as reports of bias, inaccuracy and outright fraud have shaken the psychological sciences this year and led to more urgent calls for transparency and sharing of data by researchers. Earlier this year, a large-scale project was launched to systematically try to replicate studies published in three prominent psychological journals.

A special issue of Perspectives on Psychological Science dedicated to the problem of research bias in psychology is available online for free (HERE).

*Hat tip to blog reader David Thompson for alerting me that the trial had concluded.