October 5, 2009

ABF doctoral fellowship opportunity

The American Bar Foundation is recruiting fellows for its Doctoral Fellowships in Law and Social Science for the 2010-2011 academic year. The goal is to "develop the next generation of scholars in the field of law and social science" by supporting "original and significant research on law, the legal profession, and legal institutions." The stipend is $27,000 plus expenses.

Eligible applicants must have completed all doctoral requirements except the dissertation by September 1, 2010. Both the doctoral research and the proposed research must be in the area of sociolegal studies or in social scientific approaches to law, the legal profession, or legal institutions. The research must address significant issues in the field and show promise of a major contribution to social scientific understanding of law and legal process. Minority students are especially encouraged to apply.

The Foundation has other fellowship and student opportunities as well, including the Law and Social Science Dissertation Fellowship and Mentoring Program, focusing on the study of law and inequality, and the Summer Research Diversity Program.

For more details, visit the ABF website's fellowships page.

October 4, 2009

SVP industry sneak peek: Problems in Actuaryland

You psychologists and attorneys working in the trenches of Sexually Violent Predator (SVP) litigation will be interested in the controversy over the Static-99 and its progeny, the Static-2002, that erupted at the annual conference of the Association for the Treatment of Sexual Abusers (ATSA) in Dallas.

By way of background, the Static-99 is -- as its website advertises -- "the most widely used sex offender risk assessment instrument in the world, and is extensively used in the United States, Canada, the United Kingdom, Australia, and many European nations." Government evaluators rely on it in certifying individuals as dangerous enough to merit civil commitment on the basis of possible future offending. Some states, including California, New York, and Texas, mandate its use in certain forensic evaluations of sex offenders.


Underlying the instrument's popularity is its scientific veneer, based on two simple-sounding premises:

1. that it represents a "pure actuarial approach" to risk, and

2. that such an approach is inherently superior to "clinical judgment."

But, as with so many things that seem deceptively simple, it turns out that neither premise is entirely accurate.

Why the actuarial approach?

An actuarial method is a statistical algorithm in which variables are combined to predict the likelihood of a given outcome. For example, actuarial formulas determine how much you will pay for automobile or homeowners' insurance by combining relevant factors specific to you (e.g., your age, gender, claims history) and your context (e.g., type of car, local crime rates, regional disaster patterns).
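To make "combining variables" concrete, here is a minimal sketch of an actuarial-style formula in Python. The predictors and weights are invented for this illustration; a real insurer or risk instrument would estimate them from outcome data.

```python
import math

def actuarial_risk(age, prior_claims, high_crime_area):
    """Combine predictors into a probability via a simple logistic model.

    All weights below are hypothetical, chosen only to illustrate the
    mechanical, judgment-free nature of an actuarial calculation.
    """
    score = -1.0                               # baseline (intercept)
    score += 0.03 * max(0, 25 - age)           # younger drivers score higher
    score += 0.40 * prior_claims               # each prior claim adds risk
    score += 0.50 * (1 if high_crime_area else 0)
    return 1 / (1 + math.exp(-score))          # logistic link -> probability

print(round(actuarial_risk(age=22, prior_claims=2, high_crime_area=True), 3))
```

Note that nothing in the formula requires, or permits, an evaluator's judgment: identical inputs always yield identical outputs, which is precisely the consistency the method trades on.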

The idea of using such a mechanical approach in clinical predictions traces back to Paul Meehl's famous 1954 monograph. Reviewing about 20 studies of event forecasting, from academic success to future violence, Meehl found that simple statistical models usually did better than human judges at predicting outcomes. Over the ensuing half-century, Meehl's work has attained mythical stature as evidence that clinical judgment is inherently unreliable.

But, as preeminent scholars Daniel Kahneman (a Nobel laureate) and Gary Klein point out in the current issue of the American Psychologist, "this conclusion is unwarranted." Algorithms outperform human experts only under certain conditions: in "low-validity" environments, where conditions are highly complex and future outcomes uncertain. Algorithms work better in these limited circumstances mainly because they eliminate inconsistency. In contrast, in "high-validity," or predictable, environments, experienced and skillful judges often do better than mechanical predictions:
Where simple and valid cues exist, humans will find them if they are given sufficient experience and enough rapid feedback to do so -- except in the environments ... labeled 'wicked,' in which the feedback is misleading.
Even more crucially, in reference to using the Static-99 to predict relatively rare events such as sex offender recidivism, Meehl never claimed that statistical models were especially accurate. He just said they were wrong a bit less often than clinical judgments. Predicting future human behavior will never be simple because -- unlike machines -- humans can decide to change course.

Predictive accuracy

Putting it generously, the Static-99 is considered only "moderately" more accurate than chance, or the flip of a coin, at predicting whether or not a convicted sex offender will commit a new sex crime. (For you more statistically minded folks, its accuracy as measured by the "Area Under the Curve," or AUC statistic, ranges from about .65 to .71, which in medical research is classified as poor.)
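For readers who want the AUC made concrete: the Area Under the Curve is simply the probability that a randomly chosen recidivist receives a higher score than a randomly chosen non-recidivist (so .50 is a coin flip and 1.0 is perfect). A small Python sketch, using made-up scores chosen to land in the reported .65 to .71 range:

```python
# Hypothetical instrument scores, invented for illustration only
recidivists     = [3, 4, 5, 2, 4]
non_recidivists = [2, 4, 3, 1, 5, 2]

def auc(pos, neg):
    """Probability a positive case outranks a negative case (ties count half)."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(round(auc(recidivists, non_recidivists), 2))  # prints 0.67
```

An AUC of .67 means that roughly one time in three, the instrument ranks a man who will not reoffend above one who will.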

The largest cross-validation study to date -- forthcoming in the respected journal Psychology, Public Policy, and Law -- paints a bleaker picture of the Static-99's predictive accuracy in settings other than the one in which it was normed. Studying its use with almost 2,000 Texas offenders, the researchers found its performance may be "poorer than often assumed." More worrisome from the perspective of individual liberties, both the Static-99 and a sister actuarial, the MnSOST-R, tend to overestimate risk. The study found that three basic offender characteristics -- age at release, number of prior arrests, and type of release (unconditional versus supervised) -- often predicted recidivism as well as, or even better than, the actuarials. The study's other take-home message is that every jurisdiction using the Static-99 (or any similar tool) needs to conduct local validation studies. That is, even if the instrument had some validity in predicting the behavior of offenders in faraway times or places, does it help make accurate predictions in the here and now?

Recent controversies

Even before this week's controversy, the Static-99 had seen its share of disputation. At last year's ATSA conference, the developers conceded that the old risk estimates, in use since the instrument was developed in 1999, are now invalid. They announced new estimates that significantly lower average risks. For years, some in the SVP industry had insisted that one need not know the base rates of offending to accurately predict risk. The latest risk estimates -- likely reflecting the dramatic decline in sex offending in recent decades -- appear to validate the concerns of psychologists such as Rich Wollert, who have long argued that population-specific base rates are essential to accurately predicting an individual offender's risk.

In another change presented at the ATSA conference, the developers conceded that an offender's current age is critical to estimating his risk, as critics have long insisted. Accordingly, a new age-at-release item has been added to the instrument. The new item will benefit older offenders, and provide fertile ground for appeals by older men who were committed under SVP laws using now-obsolete Static-99 risk calculations. Certain younger offenders, however, will see their risk estimates rise.

Clinical judgment introduced

In what may prove to be the instrument's most calamitous quagmire, the developers instructed evaluators at a training session on Wednesday to choose one of four reference groups in order to determine an individual sex offender's risk. The groups are labeled as follows:
  • routine sample
  • non-routine sample
  • pre-selected for treatment need
  • pre-selected for high risk/need
The scientific rationale for using these smaller data sets as comparison groups is not clear at this time; little guidance is being given on how to reliably select the proper reference group; and some worry that criterion contamination may invalidate the procedure. In the highly polarized SVP arena, the new system will give prosecution-oriented evaluators a quick and easy way to elevate their estimate of an offender's risk by comparing the individual to the highest-risk group rather than to the lower recidivism figures for sex offenders as a whole. This, in turn, will create at least a strong appearance of bias.

Thus, this new procedure will introduce a large element of clinical judgment into a procedure whose very existence is predicated on doing away with such subjectivity. There is also a very real danger that evaluators will be overconfident in their judgments. Although truly skilled experts know when and what they don’t know, as Kahneman and Klein remind us:
    Nonexperts (whether or not they think they are) certainly do not know when they don't know. Subjective confidence is therefore an unreliable indication of the validity of intuitive judgments and decisions.
With the limited information available at the time, it is not surprising that some state legislatures chose to mandate the use of the Static-99 and related actuarial tools in civil commitment proceedings. After all, the use of mechanical or statistical procedures can reduce inconsistency and thereby limit the role of bias, prejudice, and illusory correlation in decision-making. This is especially essential in an emotionally charged arena like the sex offender civil commitment industry.

But if, as some suspect, the actuarials' poor predictive validity owes primarily to the low base rates of recidivism among convicted sex offenders, then reliance on any actuarial device may have limited utility in the real world. People have the capacity to change, and the less likely an event is to occur, the harder it is to accurately predict. In other words, out of 100 convicted sex offenders standing in the middle of a field, it is very hard to accurately pick out those five or ten who will be rearrested for another sex crime in the next five years.
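The base-rate problem in that last paragraph can be put in numbers. The sketch below computes positive predictive value -- the chance that someone flagged as high risk actually reoffends -- from a base rate, sensitivity, and specificity. The 70% operating figures are hypothetical, chosen for illustration, not the Static-99's actual characteristics.

```python
def positive_predictive_value(base_rate, sensitivity, specificity):
    """Fraction of 'high risk' flags that are correct, via Bayes' rule."""
    true_pos  = base_rate * sensitivity          # reoffenders correctly flagged
    false_pos = (1 - base_rate) * (1 - specificity)  # non-reoffenders flagged
    return true_pos / (true_pos + false_pos)

# Hypothetical: 10 of 100 reoffend; the tool catches 70% of them
# and correctly clears 70% of the rest.
ppv = positive_predictive_value(base_rate=0.10, sensitivity=0.70, specificity=0.70)
print(round(ppv, 2))  # prints 0.21
```

In other words, under these assumptions roughly four of every five men flagged as high risk would not in fact be rearrested for a new sex crime: the rarer the outcome, the more the false positives swamp the true ones.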

Unfortunately, with its modest accuracy at best, its complex statistical language and, now, its injection of clinical judgment into a supposedly actuarial calculation, the Static-99 also has the potential to create confusion and lend an aura of scientific certitude above and beyond what the state of the science merits.

The new scoring information is slated to appear on the Static-99 website on Monday (October 5).

Related resource: Ethical and practical concerns regarding the current status of sex offender risk assessment, Douglas P. Boer, Sexual Offender Treatment (2008)


Photo credit: Chip 2904 (Creative Commons license).
Hat tip to colleagues at the ATSA conference who contributed to this report.

October 1, 2009

Elizabeth Smart testifies at competency hearing

Kidnap victim Elizabeth Smart provided dramatic testimony today in David Mitchell's long-anticipated competency-to-stand-trial hearing.

But Mitchell wasn't in the room to hear her. He was removed from the courtroom when he refused to stop singing a Mormon hymn, as he does whenever he comes to court.

Smart's testimony was ostensibly intended to establish that Mitchell was acting rationally in order to further his criminal conduct, rather than being motivated by religious delusions as the defense has maintained.

A "calm, poised, articulate" Smart testified that Mitchell was obsessed with sex and used religion to further his predatory goals. She described Mitchell as "evil, wicked, manipulative, sneaky, slimy, selfish, greedy."

But defense attorney Robert Steele said Smart's testimony hinted that Mitchell is delusional, according to coverage in the Salt Lake Tribune. Last week, he argued unsuccessfully that Smart should not be allowed to offer opinions about Mitchell's state of mind or motivations.

Mitchell has refused to submit to any psychological evaluations or diagnostic tests.

His wife and co-defendant, Wanda Barzee, has twice been found incompetent for trial and is undergoing forced treatment with antipsychotic medications. Her next competency hearing is scheduled for Oct. 23.

A transcript of Smart’s 100-minute testimony is online HERE.

September 21, 2009

Intellectual competence and the death penalty

That's the title of a new blog some of you will be interested in. Produced by Kevin McGrew, director of the Institute for Applied Psychometrics, its focus is "psychometric measurement issues and research related to intelligence testing that may have bearing on capital punishment cases for individuals with an intellectual disability."

The blog is just a few months old, but it's already loaded with resources pertinent to capital litigation, including recent court cases as well as professional journals, associations, blogs, and experts. It's even got a poll you can take, indicating what topics you would like Dr. McGrew to tackle next. The professor clearly enjoys blogging, as he's already running at least two other IQ-related forums.

Clicking on the image at left will take you directly to the site, which today just happens to feature my blog.

September 18, 2009

Should forensic psychologists have minimal training?

Would you trust a "master's level dentist" to pull your tooth? Or a "bachelor's degree attorney" to defend you in court?

Not hardly.

Terminal master's degree programs in forensic psychology represent just this type of degradation in quality, says Carl Clements, a psychology professor at the University of Alabama, who argues that forensic psychology training should remain at the traditional doctoral or postdoctoral level.

But critics like Clements are spitting in the wind. Paralleling forensic psychology's breakneck growth and immense popularity, degree programs -- including many online, distance-learning options -- are sprouting up like mushrooms after a heavy rain. And just like mushrooms, they will be impossible to eliminate.

The field's perceived glamour, including the allure of the mythical profiler, has produced a bumper crop of impressionable young people willing to shell out cash for a forensic degree. Massive prison growth, along with prisoners' rights cases mandating mental health evaluation and treatment, has produced abundant jobs for psychologists.

Educational institutions have responded with alacrity. New training programs take a variety of forms, according to a survey in the current issue of Training and Education in Professional Psychology:
  • PhD in clinical psychology with specialty track in forensic psychology (about 10 programs)
  • PsyD in clinical psychology with forensic specialty track (about 10 programs)
  • PhD in nonclinical (e.g., social or experimental) psychology with forensic or legal emphasis (about 10)
  • Joint psychology-law degree programs (6)
  • Master's degree in forensic psychology (12)
  • Bachelor's degree in forensic psychology (John Jay College of Criminal Justice)
  • Undergraduate psychology-law courses (increasingly common and popular)
In addition to all of these different degree options, more and more predoctoral internships offer forensic rotations. About 17% of APA-accredited internships now offer a major forensic rotation, with another 47% offering a minor rotation, according to the Association of Psychology Postdoctoral and Internship Centers (APPIC).

Yet with all of this rapid growth, there is no consensus as to what training models and curricula are adequate in order to prepare students for real-world forensic practice. With that in mind, David DeMatteo of Drexel University and colleagues are proposing a set of core competencies for doctoral-level forensic psychology training curricula. At minimum, they say, students should get training and experience in the traditional areas of substantive psychology and research methodology, along with specialized advanced training in:
  • Legal knowledge
  • Integrative law-psychology knowledge
  • Ethics and professional issues in forensic psychology
  • Clinical forensic psychology
Aren't all of these areas already integrated into current forensic psychology degree programs?

Again, not hardly.

Reviewing the curricula for the roughly 35 [as of their review] doctoral or joint-degree programs with training in forensic psychology, DeMatteo and colleagues found* only three programs that included all four components. For example, only about 40% offered courses falling under "legal knowledge." More alarmingly, only three programs reported offering courses specifically addressing ethical and professional issues in forensic psychology.

So, will all of the self-described forensic psychologists emerging from these newly minted degree programs be able to find work in the field? I predict that those who travel the traditional path of postdoctoral specialization will fare the best. Those with terminal master's (or even bachelor's) degrees will be restricted to lower-level occupations such as correctional counselor or social services case manager. While they may meet the demands of the prison industry for warm bodies with letters after their names, these practitioners certainly won't be called as experts in court.

But there is a greater danger in these bare-bones forensic training programs. Not only do they offer false promises to students, but they sacrifice the intensive clinical training, including experience working with severely mentally ill populations, that is a key foundation for forensic work. The lack of adequate training in the law and in ethics will likely cause even more disastrous outcomes when these professionals take on forensic cases.

I know, I know. I am just spitting in the wind, too. Financial exigencies always win out.

Related resources:

What's it take to become a forensic psychologist?


*SOURCE: David DeMatteo, Geoffrey Marczyk, Daniel Krauss & Jeffrey Burl (2009). Educational and training models in forensic psychology. Training and Education in Professional Psychology, 3(3), 184-191. Request from the author HERE.