
February 15, 2020

Flawed science? Two efforts launched to improve scientific validity of psychological test evidence in court

There’s this forensic psychologist, we’ll call him Dr. Harms, who is infamous for his unorthodox approach. He scampers around the country deploying a bizarre admixture of obscure, outdated and unpublished tests that no one else has ever heard of.

Oh, and the Psychopathy Checklist (PCL-R). Dr. Harms never omits that. To him, everyone is a chillingly dangerous psychopath. Even a 30-year-old whose last crime was at age 15.

What’s most bizarre about Dr. Harms’s esoteric method is that he gets away with it. Attorneys may try to challenge him in court, but their protests usually fall flat. Judges rule that any weaknesses in his method should go to the “weight” that jurors give Dr. Harms’s opinions, rather than the admissibility of his tests.

Psychological tests hold a magical allure as objective truth. They retain their luster even while forensic science techniques previously regarded as bulletproof are undergoing unprecedented scrutiny. Based in large part on our briefcases full of tests, courts have granted psychologists outsized influence over an ever-increasing array of thorny issues, from future dangerousness to parental fitness to refugee trauma. Behind the scenes, meanwhile, a lucrative test-production industry is laughing all the way to the bank.

In other forensic “science” niches such as bite-mark analysis and similar types of pattern matching that have contributed to wrongful convictions, appellate attorneys have had to wage grueling, decades-long efforts to rein in shoddy practice. (See Radley Balko's The Cadaver King and the Country Dentist for more on this.) But leaders in the field of forensic psychology are taking the bull by the horns and inviting us to do better, proposing novel ways for us to self-police.

New report slams “junk science” psychological assessments


In one of two significant developments, a group of researchers today released evidence of systematic problems with the state of psychological test admissibility in court. The researchers' comprehensive survey found that only about two-thirds of the tools used by clinicians in forensic settings were generally accepted in the field, while even fewer – only about four in ten – were favorably reviewed in authoritative sources such as the Mental Measurements Yearbook.

Despite this, psychological tests are rarely challenged when they are introduced in court, Tess M.S. Neal and her colleagues found. Even when they are, the challenges fail about two-thirds of the time. Worse yet, there is little relationship between a tool’s psychometric quality and the likelihood of it being challenged.

Slick ad for one of a myriad of new psych tests.
“Some of the weakest tools tend to get a pass from the courts,” write the authors of the newly issued report, “Psychological Assessments in Legal Contexts: Are Courts Keeping ‘Junk Science’ Out of the Courtroom?”

The report, currently in press in the journal Psychological Science in the Public Interest, proposes that standard batteries be developed for forensic use, based on the consensus of experts in the field as to which tests are the most reliable and valid for assessing a given psycholegal issue. It further cautions against forensic deployment of newly developed tests that are being marketed by for-profit corporations before adequate research or review by independent professionals.

"Life or death" call to halt prejudicial use of psychopathy test


In a parallel development in the field, 13 prominent forensic psychologists have issued a rare public rebuke of improper use of the controversial Psychopathy Checklist (PCL-R) in court. The group is calling for a halt to the use of the PCL-R in the sentencing phase of death-penalty cases as evidence that a convicted killer will be especially dangerous if sentenced to life in prison rather than death.

As I’ve reported previously in a series of posts (here and here, for example), scores on the PCL-R swing wildly in forensic settings based on which side hired the expert. In a phenomenon known as adversarial allegiance, prosecution-retained experts produce scores in the high-psychopathy range in about half of cases, as compared with less than one out of ten cases for defense experts.

Research does not support testimony being given by prosecution experts in capital trials that PCL-R scores can accurately predict serious violence in institutional settings such as prison, according to the newly formed Group of Concerned Forensic Mental Health Professionals. And once such a claim is made in court, its prejudicial impact on jurors is hard to overcome, potentially leading to a vote for execution.

The "Statement of Concerned Experts," whose authors include prominent professionals who helped to develop and test the PCL-R, is forthcoming from the respected journal Psychology, Public Policy, and Law.

Beware the all-powerful law of unintended consequences


This scrutiny of how psychological instruments are being used in forensic practice is much needed and long overdue. Perhaps it will eventually trickle down to our friend Dr. Harms, although I have a feeling that won't happen before his retirement.

But never underestimate the law of unintended consequences.

The research group that surveyed psychological test use in the courts developed a complex, seemingly objective method to sort tests according to whether they were generally accepted in the field and/or favorably reviewed by independent researchers and test reviewers.

Ironically enough, one of the tests that they categorized as meeting both criteria – general acceptance and favorable review – was the PCL-R, the same test being targeted by the other consortium for its improper deployment and prejudicial impact in court. (Perhaps not so coincidentally, that test is a favorite of the aforementioned Dr. Harms, who likes to score it high.)

The disconnect illustrates the fact that science doesn’t exist in a vacuum. Psychopathy is a value-laden construct that owes its popularity in large part to current cultural values, which favor the individual-pathology model of criminal conduct over notions of rehabilitation and desistance from crime.

It’s certainly understandable why reformers would suggest the development of “standard batteries … based on the best clinical tools available.” The problem comes in deciding what is “best.”

Who will be privileged to make those choices (which will inevitably reify the dominant orthodoxy and its implicit assumptions)?

What alternatives will those choices exclude? And at whose expense?

And will that truly result in fairer and more scientifically defensible practice in the courtroom?

It’s exciting that forensic psychology leaders are drawing attention to the dark underbelly of psychological test deployment in forensic practice. But despite our best efforts, I fear that equitable solutions may remain thorny and elusive.

September 3, 2015

Adversarial allegiance: Frontier of forensic psychology research

A colleague recently commented on how favorably impressed he was by the open-mindedness of two other forensic examiners, who had had the courage to change their opinions in the face of new evidence. The two had initially recommended that a man be civilly committed as a sexually violent predator, but changed their minds three years later.

My colleague's admiration was short-lived. It evaporated when he realized that the experts’ change of heart had come only after they switched teams: Initially retained by the government, they were now in the employ of the defense.

"Adversarial allegiance" is the name of this well-known phenomenon in which some experts' opinions tend to drift toward the party retaining their services. This bias is insidious because it operates largely outside of conscious awareness, and can affect even ostensibly objective procedures such as the scoring and interpretation of standardized psychological tests.

Partisan bias is nothing new to legal observers, but formal research on its workings is in its infancy. Now, the researchers spearheading the exploration of this intriguing topic have put together a summary review of the empirical evidence they have developed over the course of the past decade. The review, by Daniel Murrie of the Institute of Law, Psychiatry and Public Policy at the University of Virginia and Marcus Boccaccini of Sam Houston State University, is forthcoming in the Annual Review of Law and Social Science.

Forensic psychologists’ growing reliance on structured assessment instruments gave Murrie and Boccaccini a way to systematically explore partisan bias. Because many forensic assessment tools boast excellent interrater reliability in the laboratory, the team could quantify the degradation of fidelity that occurs in real-world settings. And when scoring trends correlate systematically with which side the evaluator is testifying for, adversarial allegiance is a plausible culprit.
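
To see the core of that logic, here is a minimal sketch in Python, using invented scores rather than data from any actual study: take opposing experts' scores for the same offenders, compute an intraclass correlation (the reliability statistic these studies rely on), and compare it against the near-perfect figures reported in laboratory research.

```python
# A toy illustration of the field-reliability logic, with invented
# numbers: (prosecution score, defense score) for the same offender.
pairs = [(34, 24), (18, 11), (26, 17), (14, 12), (30, 19), (22, 15)]

n, k = len(pairs), 2
grand_mean = sum(sum(p) for p in pairs) / (n * k)

# One-way ANOVA components for ICC(1): between-offender variance
# versus within-pair (rater disagreement) variance.
ms_between = k * sum((sum(p) / k - grand_mean) ** 2 for p in pairs) / (n - 1)
ms_within = sum((x - sum(p) / k) ** 2 for p in pairs for x in p) / (n * (k - 1))

icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
mean_gap = sum(pros - dfns for pros, dfns in pairs) / n

print(f"Field ICC: {icc:.2f} (lab studies typically report ~.85 or higher)")
print(f"Mean prosecution-minus-defense gap: {mean_gap:.1f} points")
# A depressed field ICC combined with a gap that runs consistently in
# one direction is the signature of allegiance, not random error.
```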

Daniel Murrie
Such bias has been especially pronounced with the Psychopathy Checklist-Revised, which is increasingly deployed as a weapon by prosecutors in cases involving future risk, such as capital murder sentencing hearings, juvenile transfer to adult courts, and sexually violent predator commitment trials. In a series of ground-breaking experiments, the Murrie-Boccaccini team found that scores on the PCL-R vary hugely and systematically based on whether an expert is retained by the prosecution or the defense, with the differences often exceeding what is statistically plausible based on chance.
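
What does “statistically plausible” mean here? A back-of-the-envelope calculation makes the point. Assuming a standard error of measurement of roughly three points for the PCL-R total score (the figure commonly cited for the instrument), two competent, unbiased raters should only rarely disagree by more than about eight points:

```python
import math

# ASSUMPTION: a standard error of measurement (SEM) of roughly 3
# points for the PCL-R total score, the commonly cited figure.
SEM = 3.0

# The standard error of the difference between two independent
# ratings of the same individual is SEM * sqrt(2).
se_diff = SEM * math.sqrt(2)

# About 95% of honest rater disagreements should fall within
# +/- 1.96 standard errors of zero.
plausible_gap = 1.96 * se_diff

print(f"SE of a two-rater difference: {se_diff:.1f} points")    # ~4.2
print(f"95% plausible disagreement:  +/- {plausible_gap:.1f}")  # ~8.3
# Score gaps that routinely exceed this band, and always in the
# retaining party's favor, cannot be chalked up to chance.
```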

Systematic bias was also found in the scoring of two measures designed to predict future sexual offending, the popular Static-99 and the now-defunct Minnesota Sex Offender Screening Tool Revised (MnSOST-R).

One shortcoming of the team’s initial observational research was that it couldn’t eliminate the possibility that savvy attorneys had preselected experts who were predisposed toward one side or the other. To test this possibility, two years ago the team designed a devious experimental study in which they recruited forensic psychologists and psychiatrists and randomly assigned them to either a prosecution or defense legal unit. To increase validity, the experts were even paid $400 a day for their services.

Marcus Boccaccini
The findings provided proof-positive of the strength of the adversarial allegiance effect. Forensic experts assigned to the bogus prosecution unit gave higher scores on both the PCL-R and the Static-99R than did those assigned to the defense. The pattern was especially pronounced on the PCL-R, due to the subjectivity of many of its items. ("Glibness" and "superficiality," for example, cannot be objectively measured.)

The research brought further bad tidings. Even when experts assign the same score on the relatively simple Static-99R instrument, they often present these scores in such a way as to exaggerate or downplay risk, depending on which side they are on. Specifically, prosecution-retained experts are far more likely to endorse use of "high-risk" norms that significantly elevate risk.
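
To make the mechanics of norm selection concrete, here is a toy sketch with hypothetical percentages (not values from any actual Static-99R actuarial table), showing how the identical score translates into starkly different risk statements depending on the reference group an evaluator chooses:

```python
# Toy illustration of reference-group selection. The recidivism
# percentages below are HYPOTHETICAL placeholders, not values from
# any actual Static-99R actuarial table.
five_year_recidivism = {
    "routine_corrections":   {3: 0.05, 4: 0.07, 5: 0.10, 6: 0.14},
    "preselected_high_risk": {3: 0.12, 4: 0.17, 5: 0.23, 6: 0.31},
}

def report_risk(score: int, reference_group: str) -> str:
    """Look up the 5-year recidivism estimate for a score under the
    chosen reference group's norms."""
    rate = five_year_recidivism[reference_group][score]
    return f"Score {score} under '{reference_group}': {rate:.0%}"

# The identical score, framed two ways:
print(report_risk(5, "routine_corrections"))    # e.g. 10%
print(report_risk(5, "preselected_high_risk"))  # e.g. 23%
# This is why norm selection matters: an evaluator who defaults to
# high-risk norms more than doubles the reported risk without the
# offender's score changing at all.
```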

Several somewhat complementary theories have been advanced to explain why adversarial allegiance occurs. Prominent forensic psychologist Stanley Brodsky has attributed it to the social psychological process of in-group allegiance. Forensic psychologists Tess Neal and Tom Grisso have favored a more cognitive explanation, positing heuristic biases such as the human tendency to favor confirmatory over disconfirmatory information. More cynically, others have attributed partisan bias to conscious machinations in the service of earning more money. Murrie and Boccaccini remain agnostic, saying that all of these factors could play a role, depending upon the evaluator and the situation.

One glimmer of hope is that the allegiance effect is not universal. The research team found that only some of the forensic experts they studied are swayed by which side retains them. Hopefully, the burgeoning interest in adversarial allegiance will lead to future research exploring not only the individual and situational factors that trigger bias, but also what keeps some experts from shading their opinions toward the retaining party.

Even better would be if the courts took an active interest in this problem of bias. Some Australian courts, for example, have introduced a method called "hot tubs," in which experts for all sides must come together and hash out their differences outside of court.

In the meantime, watch out if someone tries to recruit you at $400 a day to come and work for a newly formed legal unit. It might be another ruse, designed to see how you hold up to adversarial pressure.

* * * * *

The article is: Adversarial Allegiance among Expert Witnesses, forthcoming from The Annual Review of Law and Social Science. To request it from the first author, click HERE



April 19, 2015

Static-99: A bumpy developmental path

By Brian Abbott, PhD and Karen Franklin, PhD* 

The Static-99 is the most widely used instrument for assessing sex offenders’ future risk to the public. Indeed, some state governments and other agencies even mandate its use. But bureaucratic faith may be misplaced. Conventional psychological tests go through a standard process of development, beginning with the generation and refinement of items and proceeding through set stages that include pilot testing and replication, leading finally to peer review and formal publication. The trajectory of the Static-99 has been more haphazard: Since its debut 15 years ago, the tool has been in a near-constant state of flux. Myriad changes in items, instructions, norms and real-world patterns of use have cast a shadow over its scientific validity. Here, we chart the unorthodox developmental course of this tremendously popular tool.
 
 
Static-99 and 99R Developmental Timeline
1990
The first Sexually Violent Predator (SVP) law passes in the United States, in Washington. A wave of similar laws begins to sweep the nation.
1997
The US Supreme Court upholds the constitutionality of preventive detention of sex offenders (Kansas v. Hendricks).
1997
R. Karl Hanson, a psychologist working for the Canadian prison system, releases a four-item tool to assess sex offender risk. The Rapid Risk Assessment for Sex Offence Recidivism (RRASOR) uses data from six settings in Canada and one in California.[1]
1998
Psychologists David Thornton and Don Grubin of the UK prison system release a similar instrument, the Structured Anchored Clinical Judgment (SACJ-Min) scale.[2]
1999
Hanson and Thornton combine the RRASOR and SACJ-Min to produce the Static-99, which is accompanied by a three-page list of coding rules.[3] The instrument's original validity data derive from four groups of sex offenders, including three from Canada and one from the UK (and none from the United States). The new instrument is atheoretical, with scores interpreted based on the recidivism patterns among these 1,208 offenders, most of them released from prison in the 1970s.
2000
Hanson and Thornton publish a peer-reviewed article on the new instrument.[4]
2003
New coding rules are released for the Static-99, in an 84-page, unpublished booklet that is not peer reviewed.[5] The complex and sometimes counterintuitive rules may lead to problems with scoring consistency, although research generally shows the instrument can be scored reliably.
2003
The developers release a new instrument, the Static-2002, intended to "address some of the weaknesses of Static-99."[6] The new instrument is designed to be more logical and easier to score; one item from the Static-99 – pertaining to whether the subject had lived with a lover for at least two years – was dropped due to issues with its reliability and validity. Despite these advantages, the Static-2002 never catches on, and does not achieve the popularity of the Static-99 in forensic settings.
2007
Leslie Helmus, a graduate student working with Karl Hanson, reports that contemporary samples of sex offenders have much lower offense rates than did the antiquated, non-US samples upon which the Static-99 was originally developed, both in terms of base rates of offending and rates of recidivism after release from custody.[7]
September 2008
Helmus releases a revised actuarial table for the Static-99, in which evaluators can look up their subjects' total scores to find the corresponding risk estimates.[8] Another Static-99 developer, Amy Phenix, releases the first of several "Evaluators’ Handbooks."[9]
October 2008
At an annual convention of the Association for the Treatment of Sexual Abusers (ATSA), Andrew Harris, a Canadian colleague of Hanson's, releases a new version of the Static-99 with three separate "reference groups" (Complete, CSC and High Risk) to which subjects can be compared. Evaluators are instructed to report a range of risks for recidivism, with the lower bound coming from a set of Canadian prison cases (the so-called CSC, or Correctional Service of Canada, group), and the upper bound derived from a so-called "high-risk" group of offenders. The risk of the third, or "Complete," group is hypothesized to fall somewhere between those of the other two.[10]
November 2008
At a workshop sponsored by a civil commitment center in Minnesota, Thornton and a government evaluator named Dennis Doren propose yet another new method of selecting among the new reference groups. In a procedure called “cohort matching,” they suggest comparing an offender with either the CSC or High-Risk reference group based on how well the subject matches a list of external characteristics they had created but never empirically tested or validated.[11]
December 2008
Phenix and California psychologist Dale Arnold put forth yet another idea for improving the accuracy of the Static-99: After reporting the range of risk based on a combination of the CSC and High-Risk reference groups, evaluators are encouraged to consider a set of external factors, such as whether the offender had dropped out of treatment and the offender's score on Robert Hare's controversial Psychopathy Checklist-Revised (PCL-R). This new method does not seem to catch on.[12] [13]
2009
An official Static-99 website, www.static99.org, debuts.[14]
Winter 2009
The Static-99 developers admit that norms they developed in 2000 are not being replicated: The same score on the Static-99 equates with wide variations in recidivism rates depending on the sample to which it is compared. They theorize that the problem is due to large reductions in Canadian and U.S. recidivism rates since the 1970s-1980s. They call for the development of new norms.[15]
September 2009
Hanson and colleagues roll out a new version of the Static-99, the Static-99R.[16] The new instrument addresses a major criticism by more precisely considering an offender's age at release, an essential factor in reoffense risk. The old Static-99 norms are deemed obsolete. They are replaced by data from 23 samples collected by Helmus for her unpublished Master's thesis. The samples vary widely in regard to risk. For estimating risk, the developers now recommend use of the cohort matching procedure to select among four new reference group options. They also introduce the concepts of percentile ranks and relative risk ratios, along with a new Evaluators’ Workbook for Static-99R and Static-2002R. Instructions for selecting reference groups other than routine corrections are confusing and speculative, and research is lacking to demonstrate that selecting a nonroutine reference group produces more accurate risk estimates.[17]
November 2009
Just two months after its introduction, the Evaluators’ Workbook for Static-99R and Static-2002R is withdrawn due to errors in its actuarial tables.[18] The replacement workbook provides the same confusing and speculative method for selecting a nonroutine reference group, a method that lacks scientific validation and reliability.
2010
An international team of researchers presents large-scale data from the United States, New Zealand and Australia indicating that the Static-99 would be more accurate if it took better account of an offender's age.[19] The Static-99 developers do not immediately embrace these researchers' suggestions.
January 2012
Amy Phenix and colleagues introduce a revised Evaluators’ Workbook for Static-99R and Static-2002R.[20] The new manual makes a number of revisions both to the underlying data (including percentile rank and relative risk ratio data) and to the recommended procedure for selecting a reference group. Now, in an increasingly complex procedure, offenders are to be compared to one of three reference groups, based on how many external risk factors they have. The groups include Routine Corrections (low risk), Preselected Treatment Need (moderate risk), and Preselected High Risk Need (high risk). Subsequent research shows that using density of external risk factors to select among the three reference group options is not valid and has no proven reliability.[21] A fourth reference group, Nonroutine Corrections, may be selected using a separate cohort-matching procedure. New research indicates that evaluators who are retained most often by the prosecution are more likely than others to select the high-risk reference group,[22] which has base rates much higher than in contemporary sexual recidivism studies and will thus produce exaggerated risk estimates.[23]
July 2012
Six months later, the percentile ranks and relative risk ratios are once again modified, with the issuance of the third edition of the Static-99R and Static-2002R Evaluators’ Handbook.[24] No additional data are provided to demonstrate that selecting a nonroutine reference group produces more accurate risk estimates than choosing the routine corrections reference group.
October 2012
In an article published in Criminal Justice & Behavior, the developers concede that risk estimates for the 23 offender samples undergirding the Static-99 vary widely. Further, absolute risk levels for typical sex offenders are far lower than previously reported, with the typical sex offender having about a 7% chance of committing a new sex offense within five years. They theorize that the Static-99 might be inflating risk of reoffense due to the fact that the offenders in its underlying samples tended to be higher risk than average.[25]
2012
The repeated refusal of the Static-99 developers to share their underlying data with other researchers, so that its accuracy can be verified, leads to a court order excluding use of the instrument in a Wisconsin case.[26]
October 2013
At an annual ATSA convention, Hanson and Phenix report that an entirely new reference group selection system will be released in a peer-reviewed article in Spring 2014.[27] The new system will include only two reference groups: Routine Corrections and Preselected High Risk High Need. An atypical sample of offenders from a state hospital in Bridgewater, Massachusetts, dating back to 1958 is to be removed altogether, along with some other samples, while some new data sets are to be added.
October 2014
At the annual ATSA convention, the developers once again announce that the anticipated rollout of the new system has been pushed back pending acceptance of the manuscript for publication. Helmus nonetheless presents an overview.[28] She reports that the new system will abandon two of the current four reference groups, retaining only Routine Corrections and Preselected High Risk Need. Evaluators should now use the Routine Corrections norms as the default unless local norms (with a minimum of 100 recidivists) are available. Evaluators will be permitted to choose the Preselected High Risk Need norms based on “strong, case-specific justification.” No specific guidance or empirical evidence is proffered to support such a procedure. A number of other new options for reporting risk information are also presented, including the idea of combining Static-99 data with that from newly developed, so-called "dynamic risk instruments."
January 2015
At an ATSA convention presentation followed by an article in the journal Sexual Abuse,[29] the developers announce further changes in their data sets and how Static-99R scores should be interpreted. Only two of the original four "reference groups" are still standing. Of these, the Routine group has grown by 80% (to 4,325 subjects), while the High-Risk group has shrunk by 35%, to a paltry 860 individuals. Absent from the article is any actuarial table on the High-Risk group, meaning the controversial practice by some government evaluators of inflating risk estimates by comparing sex offenders' Static-99R scores with the High-Risk group data has still not passed any formal peer review process. The developers also correct a previous statistical method as recommended by Ted Donaldson and colleagues back in 2012,[30] the effect of which is to further lower risk estimates in the high-risk group. Only sex offenders in the Routine group with Static-99R scores of 10 are now statistically more likely than not to reoffend. It is unknown how many sex offenders were civilly committed in part due to reliance on the now-obsolete data.

References


[1] Hanson, R. K. (1997). The development of a brief actuarial risk scale for sexual offense recidivism. (Unpublished report 97-04). Ottawa: Department of the Solicitor General of Canada.
[2] Grubin, D. (1998). Sex offending against children: Understanding the risk. Unpublished report, Police Research Series Paper 99. London: Home Office.
[3] Hanson, R.K. & Thornton, D. (1999). Static 99: Improving Actuarial Risk Assessments for Sex Offenders. Unpublished paper.
[4] Hanson, R. K., & Thornton, D. (2000). Improving risk assessments for sex offenders: A comparison of three actuarial scales. Law and Human Behavior, 24(1), 119-136.
[5] Harris, A. J. R., Phenix, A., Hanson, R. K., & Thornton, D. (2003). Static-99 coding rules: Revised 2003. Ottawa, ON: Solicitor General Canada.
[6] Hanson, R.K., Helmus, L., & Thornton, D. (2010). Predicting recidivism amongst sexual offenders: A multi-site study of Static-2002. Law & Human Behavior, 34, 198-211.
[7] Helmus, L. (2007). A multi-site comparison of the validity and utility of the Static-99 and Static-2002 for risk assessment with sexual offenders. Unpublished Honour’s thesis, Carleton University, Ottawa, ON, Canada.
[8] Helmus, L. (2008, September). Static-99 Recidivism Percentages by Risk Level. Last Updated September 25, 2008. Unpublished paper.
[9] Phenix, A., Helmus, L., & Hanson, R.K. (2008, September). Evaluators’ Workbook. Unpublished, September 28, 2008
[10] Harris, A. J. R., Hanson, K., & Helmus, L. (2008). Are new norms needed for Static-99? Workshop presented at the ATSA 27th Annual Research and Treatment Conference on October 23, 2008, Atlanta: GA. Available at www.static99.org.
[11] Doren, D., & Thornton, D. (2008). New Norms for Static-99: A Briefing. A workshop sponsored by Sand Ridge Secure Treatment Center on November 10, 2008. Madison, WI.
[12] Phenix, A. & Arnold, D. (2008, December). Proposed Considerations for Conducting Sex Offender Risk Assessment Draft 12-14-08. Unpublished paper.
[13] Abbott, B. (2009). Applicability of the new Static-99 experience tables in sexually violent predator risk assessments. Sexual Offender Treatment, 1, 1-24.
[14] Helmus, L., Hanson, R. K., & Thornton, D. (2009). Reporting Static-99 in light of new research on recidivism norms. The Forum, 21(1), Winter 2009, 38-45.
[15] Ibid.
[16] Hanson, R. K., Phenix, A., & Helmus, L. (2009, September). Static-99(R) and Static-2002(R): How to Interpret and Report in Light of Recent Research. Paper presented at the 28th Annual Research and Treatment Conference of the Association for the Treatment of Sexual Abusers, Dallas, TX, September 28, 2009.
[17] DeClue, G. & Zavodny, D. (2014). Forensic use of the Static-99R: Part 4. Risk Communication. Journal of Threat Assessment and Management, 1(3), 145-161.
[18] Phenix, A., Helmus, L., & Hanson, R.K. (2009, November). Evaluators’ Workbook. Unpublished, November 3, 2009.
[19] Wollert, R., Cramer, E., Waggoner, J., Skelton, A., & Vess, J. (2010). Recent Research (N = 9,305) Underscores the Importance of Using Age-Stratified Actuarial Tables in Sex Offender Risk Assessments. Sexual Abuse: A Journal of Research and Treatment, 22 (4), 471-490. See also: "Age tables improve sex offender risk estimates," In the News blog, Dec. 1, 2010.
[20] Phenix, A., Helmus, L., & Hanson, R.K. (2012, January). Evaluators’ Workbook. Unpublished, January 9, 2012.
[21] Abbott, B.R. (2013). The Utility of Assessing “External Risk Factors” When Selecting Static-99R Reference Groups. Open Access Journal of Forensic Psychology, 5, 89-118.
[22] Chevalier, C., Boccaccini, M. T., Murrie, D. C. & Varela, J. G. (2014), Static-99R Reporting Practices in Sexually Violent Predator Cases: Does Norm Selection Reflect  Adversarial Allegiance? Law & Human Behavior. To request a copy from the author, click HERE.
[23] Abbott (2013) op. cit.
[24] Phenix, A., Helmus, L., & Hanson, R.K. (2012, July). Evaluators’ Workbook. Unpublished, July 26, 2012.
[25] Helmus, Hanson, Thornton, Babchishin, & Harris (2012), Absolute recidivism rates predicted by Static-99R and Static-2002R sex offender risk assessment tools vary across samples: A meta-analysis, Criminal Justice & Behavior. See also: "Static-99R risk estimates wildly unstable, developers admit," In the News blog, Oct. 18, 2012.
[27] Hanson, R.K. & Phenix, A. (2013, October). Report writing for the Static-99R and Static-2002R. Preconference seminar presented at the 32nd Annual Research and Treatment Conference of the Association for the Treatment of Sexual Abusers, Chicago, IL, October 30, 2013. See also: "Static-99 'norms du jour' get yet another makeover," In the News blog, Nov. 17, 2013.
[28] Helmus, L.M. (2014, October). Absolute recidivism estimates for Static-99R and Static-2002R: Current research and recommendations. Paper presented at the 33rd Annual Research and Treatment Conference of the Association for the Treatment of Sexual Abusers, San Diego, CA, October 30, 2014.
[29] Hanson, R. K., Thornton, D., Helmus, L-M, & Babchishin, K. (2015). What sexual recidivism rates are associated with Static-99R and Static-2002R scores? Sexual Abuse: A Journal of Research and Treatment, 1-35.
[30] Donaldson, T., Abbott, B., & Michie, C. (2012). Problems with the Static-99R prediction estimates and confidence intervals. Open Access Journal of Forensic Psychology, 4, 1-23.

* * * * *

*Many thanks to Marcus Boccaccini, Gregory DeClue, Daniel Murrie and other knowledgeable colleagues for their valuable feedback.  


* * * * *

Related blog posts:
· Static-99 "norms du jour" get yet another makeover (Nov. 17, 2013)
· Age tables improve sex offender risk estimates (Dec. 1, 2010)
· New study: Do popular actuarials work? (April 20, 2010)
· Delusional campaign for a world without risk (April 3, 2010)