
April 30, 2010

Criminal Justice and Behavior: Free articles

During the month of May only, Sage Publications is offering free access to select articles from the 2009 volume of Criminal Justice and Behavior. Selections of potential interest to my readers include the following:
The Prediction of Violence in Adult Offenders: A Meta-Analytic Comparison of Instruments and Methods of Assessment
by Mary Ann Campbell, Sheila French and Paul Gendreau

Closing The Revolving Door? Substance Abuse Treatment as an Alternative to Traditional Sentencing for Drug-Dependent Offenders
by Tara D. Warner and John H. Kramer

Inferring Sexually Deviant Behavior from Corresponding Fantasies: The Role of Personality and Pornography Consumption
by Kevin M. Williams, Barry S. Cooper, Teresa M. Howell, John C. Yuille and Delroy L. Paulhus

Credit goes to Jarrod Steffan, a forensic and clinical psychologist in Wichita, Kansas, who specializes in criminal forensic psychology, for alerting me to this special offer.

April 3, 2010

Delusional campaign for a world without risk

Convicted sex offender Anthony Sowell seemed run-of-the-mill. He scored a low "1" on the Static-99, the popular actuarial tool designed to quantify risk for sexual recidivism. Now, he is suspected in the murders of 11 women whose remains were found at his Cleveland, Ohio home. (His publicly leaked risk evaluation is HERE.)

John Albert Gardner III, who like Oscar-winning film director Roman Polanski was convicted of molesting a 13-year-old girl, looked almost as routine. Paroling after a 5-year prison stint, he scored a "2" on the Static. Now, he stands charged with the highly publicized San Diego rape-murder of teenager Chelsea King.

While the United States crashes and burns -- jobs disappearing, home values plummeting, public school teachers begging for basic supplies like paper and pencils -- politicians are hosting emergency hearings to determine "what went wrong."

Who you gonna blame? The Static-99

The same California politicians who enthusiastically enacted a law -- the ambitiously titled Sex Offender Control and Containment Act of 2006 -- mandating the use of this scientifically flawed actuarial tool are now jumping all over prison bureaucrats for mandating its use to determine which paroling sex offenders should be most carefully monitored. Maybe they should have listened to those who have been saying all along that actuarial tools are not a panacea.

When I got a call from a news reporter exploring that angle, I found myself in the amusing position of (half-heartedly) defending the Static-99. As I tried to explain to the reporter (who then misquoted me), finding a needle in a haystack ain’t easy. At the risk of sounding perseverative: it's the statistical problem of low base rates. If only about one of every ten paroling sex offenders will reoffend sexually, picking out that one is difficult. And picking the one who will commit an exceedingly rare crime like the Chelsea King murder is virtually impossible. The hysterical masses can't seem to grasp that:
  • The broad majority of men who are apprehended and prosecuted for a sex offense are never rearrested for another, and
  • The broad majority of sex crimes are committed by men who fly below the radar because they have never been apprehended before. To catch these guys, you'd have to engage in massive over-prediction, producing an epidemic of what we call "false positives."
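To put numbers on the over-prediction problem, here is a minimal sketch (the base rate, sensitivity, and specificity figures are illustrative assumptions, not data from any study): with a roughly 10 percent base rate, even a screening tool that correctly classifies 70 percent of future reoffenders and 70 percent of non-reoffenders mislabels far more safe men than it catches.

```python
def flag_counts(n, base_rate, sensitivity, specificity):
    """True and false positives produced by a hypothetical screening tool
    applied to n offenders."""
    reoffenders = round(n * base_rate)
    safe = n - reoffenders
    true_pos = round(reoffenders * sensitivity)    # future reoffenders correctly flagged
    false_pos = round(safe * (1 - specificity))    # safe men wrongly flagged
    return true_pos, false_pos

# Illustrative assumptions: 10% base rate, 70% sensitivity and specificity
tp, fp = flag_counts(1000, 0.10, 0.70, 0.70)
print(tp, fp)                      # 70 true flags vs. 270 false flags
print(round(tp / (tp + fp), 2))    # 0.21 -- roughly 4 of every 5 flagged men never reoffend
```

Casting the net wide enough to catch most future offenders means wrongly flagging hundreds of men who will never reoffend.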
And that's just what the mobs are calling for. As one man in the crowd lobbying for the new "Chelsea's Law" put it, anyone who "touches a child" should automatically lose all Constitutional rights.

Be careful what you wish for. Even in a fascist police state, bad stuff will still happen. In fact, a misplaced emphasis on eliminating risk will paradoxically decrease public safety, by eliminating primary prevention programs that actually work to reduce crime. In California, prison officials told an emergency meeting of the Assembly Select Committee on Prisons and Rehabilitation Reform they would need $1 billion more a year to return every paroled sex offender to prison on the basis of minor violations like Gardner's. That would mean taking even more pencils away from teachers in a state nearly bankrupted by its massive prison infrastructure.

All aboard the opportunist train

It's understandable why parents of crime victims like Chelsea King lobby for tougher laws. It's a way to deny their impotence and channel their feelings of sadness, guilt and rage.

And it's similarly easy to understand why politicians jump on the bandwagon. Powerless to fix our shattered economy and lacking the political will to tackle more complex social problems, they seize on random horrors to make themselves look good. Illusory efficacy wins votes.

And then the other opportunists jump on board. Crime Victims United used this week's hearing to lobby against early release of nonviolent prisoners. (Can you say non sequitur?)

Not to be outdone, a group of embittered forensic psychologists has jumped on the Chelsea bandwagon. Forming a secret "consortium," they have complained to the state Attorney General's Office that if they were still evaluating paroling prisoners for potential civil commitment as Sexually Violent Predators (SVPs), they would have done a better job of protecting the public. The evaluators, who are shielding their identities through an attorney, claim that the state's new contract bidding policy for SVP evaluations "results in the loss of life of untold victims" "for the sake of economic expediency." Their propaganda, aired on the incendiary Larry King Live show, conveniently omits mention of their pecuniary interest: Many of these state contractors were billing more than $1 million per year, again while school teachers begged for budget crumbs.

Cultural myopia and moral relativism

Underlying these empty moral campaigns is a set of intertwined myths and lopsided values:
  • Rare sex crimes are a significant threat to public safety. As the mob vents its impotent rage against the government and its spawn -- the mythical sexual predator -- the fact remains that the biggest killer of 15- to 24-year-olds worldwide remains motor vehicle accidents. This is followed closely by suicide, the fourth-leading killer of children over age 10 in seven developed nations.
  • Only sex crimes count. Is sexual assault really all that much worse than murder, torture, or other serious crimes? Why is it treated so differently? Are legislatures devoting as many resources to combating the "pseudocommander mass murderer" or the burgeoning militia movement stoked up by its racist hatred of Obama?
  • Only a certain type of sex crime counts. Many of the same angry folks who want to suspend the Constitutional rights of some accused sex criminals are busy defending others. When the Chelsea King case broke, the reaction to another breaking sex crime story was its polar opposite. Responding to news that star football quarterback Ben Roethlisberger was accused of a second rape, media pundits went wild on lying, gold-digging women who falsely accuse men of rape. In fact, some of the men who most vitriolically despise sexual predators are rapists themselves. Rape is endemic on many college campuses, with fraternity boys virtually immune from prosecution. As an excellent National Public Radio series describes, young men face few consequences for using alcohol as a weapon with which to sexually assault naive young women, who are then often forced to quit school. If we really want to make the world a safer place, we need to look a little closer to home. Instead of focusing on an easy bogeyman, let's put our efforts into primary prevention of rape and child molestation. And if we truly want to stop criminals from reoffending, let's not eliminate rehabilitation programs in prison!

  • Science is capable of eliminating (or at least drastically reducing) risk. The search for blame has become reflexive. Whenever anything bad happens, the what-went-wrong tenor of media coverage encourages finger-pointing, public wrath, and -- ultimately -- pointless (or worse) legal tweaks by opportunist politicians. When hoodlums sneak into a zoo and taunt a tiger into attacking them, it's the zoo's fault for not building high enough fences. (Remember that 2007 case?) When a speeding truck careens over the side of a bridge, traffic engineers get blamed. "They" -- shorthand for the amorphous "government" -- can never do enough to protect their citizens from all conceivable danger.
It's hard to accept that random danger is a part of life. Sometimes, bad stuff just happens.

February 11, 2010

Skeem to give psychopathy training in Oregon

Save the date: Friday, April 9

On the heels of a hugely successful training featuring Stephen Hart of Simon Fraser University on sex offender risk assessment, Alexander Millkey and Michelle Guyton at Northwest Forensic Institute in Portland are doing it again. This time, they've scored Jennifer Skeem of the University of California at Irvine, who will provide advanced training on the controversial construct of psychopathy.

As many of you know, Dr. Skeem is an eminent scholar who has received the prestigious Saleem Shah Award for Early Career Excellence from the American Psychology-Law Society (APA Div 41) and the Distinguished Assistant Professor Award for Research at UC Irvine. She has published more than 70 scientific articles, chapters, and books, and is co-editor and an author of the excellent new book, Psychological Science in the Courtroom: Consensus and Controversy. Her research areas include psychopathy, violence risk assessment, and effective supervision strategies for individuals mandated for psychiatric care.

In this training, she will challenge prevailing assumptions that psychopathy is a unitary and homogeneous construct or something that can be reduced to a score on the Psychopathy Checklist-Revised (PCL-R). She will also present data challenging the deeply entrenched idea that people with psychopathic traits are incurable cases that should be diverted from treatment settings to environments where their behavior can merely be monitored and controlled.

The all-day training is on Friday, April 9 at Portland State University, and is followed by a networking reception with Dr. Skeem. Registrants will receive six hours of continuing education credits. The cost is only $175, or $75 for students.

For more information and to register, go to the Institute's website.

October 14, 2009

Texas death case illustrates Atkins quagmire

The U.S. Supreme Court's 2002 decision in Atkins v. Virginia to outlaw the death penalty for mentally retarded defendants has opened up a "welter of uncertainty" in courts around the nation. So-called "Atkins inquiries" into whether a defendant is mentally retarded rely heavily on mental health experts, who may disagree on everything from the definition and identification of mental retardation to whether the specific defendant meets the threshold criteria.

This familiar spectacle of dueling experts takes a particularly ominous turn when experts misstate the science in these high-stakes (literally, life or death) cases. Fact-finders are often ill-equipped to disentangle the highly complex technical and scientific issues pertaining to whether or not a defendant meets the magic cutoff that will spare his life.

Over at his new blog, Intellectual competence and the death penalty, Kevin McGrew critically analyzes the latest case exemplifying these legal pitfalls, especially in the increasingly common situation in which the defendant is from another culture or speaks a language other than English. The case is that of Virgilio Maldonado, out of the U.S. District Court for the Southern District of Texas.

McGrew believes this case represents "a miscarriage of justice" that typifies the problems inherent in Atkins inquiries:
"The courts appear ill-equipped to handle the complex psychological measurement issues presented, issues that are, at times, confounded by the inclusion of data from dubious procedures, interpretations of test scores that are not grounded in any solid empirical research, and the deference to a single intelligence battery (the WAIS series) as the 'gold standard' when a more appropriate instrument (or combination of WAIS-III/IV and other measures) might have been administered, but the results of the more appropriate measure are summarily dismissed based on personal opinion (and not sound theory or empirical research)."
Those of you who practice in this area will be interested in McGrew's in-depth dissection of the IQ testing problems that arise when defendants are not proficient in English. Often, tests are wrongly selected, misadministered, and misinterpreted under these circumstances.

In the Maldonado case, the prosecution's psychological expert decided to upwardly adjust the defendant's IQ score to a specific number based on his "clinical judgment" as to cultural and educational factors.

"It’s around the 80s, I guess, if you had to pin me down. Around the 80s; somewhere in there," the psychologist testified.

As McGrew points out:
"Adjusting obtained IQ scores, either up or down, … in the absence of any scientifically established procedure … is troubling and is not consistent with accepted psychological assessment practices or standards."
McGrew also critiques courts' frequent practice of putting the WAIS tests on a pedestal as the "gold standard," to the point of dismissing Spanish-language tests that are normed on relevant Spanish-speaking populations.

McGrew's in-depth analysis is HERE. The 144-page Maldonado decision is online HERE.

October 4, 2009

SVP industry sneak peek: Problems in Actuaryland

You psychologists and attorneys working in the trenches of Sexually Violent Predator (SVP) litigation will be interested in the controversy over the Static-99 and its progeny, the Static-2002, that erupted at the annual conference of the Association for the Treatment of Sexual Abusers (ATSA) in Dallas.

By way of background, the Static-99 is -- as its website advertises -- "the most widely used sex offender risk assessment instrument in the world, and is extensively used in the United States, Canada, the United Kingdom, Australia, and many European nations." Government evaluators rely on it in certifying individuals as dangerous enough to merit civil commitment on the basis of possible future offending. Some states, including California, New York, and Texas, mandate its use in certain forensic evaluations of sex offenders.


Underlying the instrument's popularity is its scientific veneer, based on two simple-sounding premises:

1. that it represents a "pure actuarial approach" to risk, and

2. that such an approach is inherently superior to "clinical judgment."

But, as with so many things that seem deceptively simple, it turns out that neither premise is entirely accurate.

Why the actuarial approach?

An actuarial method is a statistical algorithm in which variables are combined to predict the likelihood of a given outcome. For example, actuarial formulas determine how much you will pay for automobile or homeowners' insurance by combining relevant factors specific to you (e.g., your age, gender, claims history) and your context (e.g., type of car, local crime rates, regional disaster patterns).
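A toy version of such an algorithm (the factor names, weights, and tier cutoffs here are invented for illustration) makes the mechanical nature of the approach clear: each input maps to a fixed weight, and the total maps to a risk tier with no room for judgment.

```python
# Hypothetical actuarial scoring: every factor carries a fixed point value,
# and the summed score maps mechanically to a risk tier.
POINTS = {
    "age_under_25": 2,
    "prior_claims": 3,
    "high_crime_area": 1,
}

def actuarial_score(profile):
    """Sum the fixed weights for each factor present in the profile."""
    return sum(POINTS[factor] for factor in profile if profile[factor])

def risk_tier(score):
    """Map a total score to a tier -- no discretion involved."""
    return "high" if score >= 4 else "moderate" if score >= 2 else "low"

driver = {"age_under_25": True, "prior_claims": True, "high_crime_area": False}
score = actuarial_score(driver)
print(score, risk_tier(score))   # 5 high
```

Two evaluators scoring the same profile will always reach the same answer -- which is precisely the consistency the actuarial approach is meant to buy.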

The idea of using such a mechanical approach in clinical predictions traces back to Paul Meehl's famous 1954 monograph. Reviewing about 20 studies of event forecasting, from academic success to future violence, Meehl found that simple statistical models usually did better than human judges at predicting outcomes. Over the ensuing half-century, Meehl's work has attained mythical stature as evidence that clinical judgment is inherently unreliable.

But, as preeminent scholars Daniel Kahneman (a Nobel laureate) and Gary Klein point out in the current issue of the American Psychologist, "this conclusion is unwarranted." Algorithms outperform human experts only under certain conditions, that is, when environmental conditions are highly complex and future outcomes uncertain. Algorithms work better in these limited circumstances mainly because they eliminate inconsistency. In contrast, in more "high-validity," or predictable, environments, experienced and skillful judges often do better than mechanical predictions:
Where simple and valid cues exist, humans will find them if they are given sufficient experience and enough rapid feedback to do so -- except in the environments ... labeled 'wicked,' in which the feedback is misleading.
Even more crucially, in reference to using the Static-99 to predict relatively rare events such as sex offender recidivism, Meehl never claimed that statistical models were especially accurate. He just said they were wrong a bit less often than clinical judgments. Predicting future human behavior will never be simple because -- unlike machines -- humans can decide to change course.

Predictive accuracy

Putting it generously, the Static-99 is considered only "moderately" more accurate than chance, or the flip of a coin, at predicting whether or not a convicted sex offender will commit a new sex crime. (For you more statistically minded folks, its accuracy as measured by the "Area Under the Curve," or AUC statistic, ranges from about .65 to .71, which in medical research is classified as poor.)
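For readers who want the AUC intuition in concrete terms: the AUC is simply the probability that a randomly chosen recidivist scores higher on the instrument than a randomly chosen non-recidivist, with 0.50 representing pure chance. A minimal sketch, using invented scores (not actual Static-99 data):

```python
def auc(recidivist_scores, non_recidivist_scores):
    """Probability that a random recidivist outscores a random non-recidivist
    (ties count as half). 0.5 = coin flip; 1.0 = perfect discrimination."""
    wins = 0.0
    for r in recidivist_scores:
        for n in non_recidivist_scores:
            if r > n:
                wins += 1
            elif r == n:
                wins += 0.5
    return wins / (len(recidivist_scores) * len(non_recidivist_scores))

# Made-up instrument scores for illustration only
recidivists = [2, 4, 5, 6]
non_recidivists = [1, 2, 3, 4, 5]
print(auc(recidivists, non_recidivists))   # 0.725 -- in the "moderate" range
```

An AUC near .70 means that roughly three times out of ten, a man who will go on to reoffend scores no higher than one who will not.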

The largest cross-validation study to date -- forthcoming in the respected journal Psychology, Public Policy, and Law -- paints a bleaker picture of the Static-99's predictive accuracy in a setting other than that in which it was normed. In the study of its use with almost 2,000 Texas offenders, the researchers found its performance may be "poorer than often assumed." More worrisomely from the perspective of individual liberties, both the Static-99 and a sister actuarial, the MnSOST-R, tend to overestimate risk. The study found that three basic offender characteristics -- age at release, number of prior arrests, and type of release (unconditional versus supervised) -- often predicted recidivism as well as, or even better than, the actuarials. The study's other take-home message is that every jurisdiction that uses the Static-99 (or any similar tool) needs to conduct local studies to see whether it really works. That is, even if the instrument had some validity in predicting the behavior of offenders in faraway times and places, does it help make accurate predictions in the here and now?

Recent controversies

Even before this week's controversy, the Static-99 had seen its share of disputation. At last year's ATSA conference, the developers conceded that the old risk estimates, in use since the instrument was developed in 1999, are now invalid. They announced new estimates that significantly lower average risks. Some in the SVP industry had insisted for years that evaluators do not need to know the base rates of offending in order to accurately predict risk. The latest estimates -- likely reflecting the dramatic decline in sex offending over recent decades -- appear to vindicate psychologists such as Rich Wollert, who have long argued that consideration of population-specific base rates is essential to accurately predicting an individual offender's risk.

In another change presented at the ATSA conference, the developers conceded that an offender's current age is critical to estimating his risk, as critics have long insisted. Accordingly, a new age-at-release item has been added to the instrument. The new item will benefit older offenders, and provide fertile ground for appeals by older men who were committed under SVP laws using now-obsolete Static-99 risk calculations. Certain younger offenders, however, will see their risk estimates rise.

Clinical judgment introduced

In what may prove to be the instrument's most calamitous quagmire, the developers instructed evaluators at a training session on Wednesday to choose one of four reference groups in order to determine an individual sex offender's risk. The groups are labeled as follows:
  • routine sample
  • non-routine sample
  • pre-selected for treatment need
  • pre-selected for high risk/need
No clear scientific rationale has yet been offered for using these smaller data sets as comparison groups, little guidance is being given on how to reliably select the proper reference group, and some worry that criterion contamination may invalidate the procedure. In the highly polarized SVP arena, this new system will give prosecution-oriented evaluators a quick and easy way to inflate their estimate of an offender's risk by comparing the individual to the highest-risk group rather than to the lower recidivism figures for sex offenders as a whole. This, in turn, will create at least a strong appearance of bias.

Thus, this new procedure will introduce a large element of clinical judgment into a procedure whose very existence is predicated on doing away with such subjectivity. There is also a very real danger that evaluators will be overconfident in their judgments. Although truly skilled experts know when and what they don’t know, as Kahneman and Klein remind us:
    Nonexperts (whether or not they think they are) certainly do not know when they don't know. Subjective confidence is therefore an unreliable indication of the validity of intuitive judgments and decisions.
With the limited information available at the time, it is not surprising that some state legislatures chose to mandate the use of the Static-99 and related actuarial tools in civil commitment proceedings. After all, the use of mechanical or statistical procedures can reduce inconsistency and thereby limit the role of bias, prejudice, and illusory correlation in decision-making. This is especially essential in an emotionally charged arena like the sex offender civil commitment industry.

But if, as some suspect, the actuarials' poor predictive validity owes primarily to the low base rates of recidivism among convicted sex offenders, then reliance on any actuarial device may have limited utility in the real world. People have the capacity to change, and the less likely an event is to occur, the harder it is to accurately predict. In other words, out of 100 convicted sex offenders standing in the middle of a field, it is very hard to accurately pick out those five or ten who will be rearrested for another sex crime in the next five years.

Unfortunately, with its modest accuracy at best, its complex statistical language and, now, its injection of clinical judgment into a supposedly actuarial calculation, the Static-99 also has the potential to create confusion and lend an aura of scientific certitude above and beyond what the state of the science merits.

The new scoring information is slated to appear on the Static-99 website on Monday (October 5).

Related resource: Ethical and practical concerns regarding the current status of sex offender risk assessment, Douglas P. Boer, Sexual Offender Treatment (2008)


Photo credit: Chip 2904 (Creative Commons license).
Hat tip to colleagues at the ATSA conference who contributed to this report.

August 23, 2009

MMPI feud hits prime time

MMPI-2-RF version and Fake Bad Test at issue

The Minnesota Multiphasic Personality Inventory is the most widely used and best known personality test in the world. Daily, it is introduced in court in everything from child custody cases to civil lawsuits to criminal proceedings. Despite the fact that public tax monies sponsored its original development, it's become a major cash cow for the University of Minnesota, raking in about $1 million a year in royalties. But now, media focus on a bitter professional dispute is causing some in the field to wonder whether the legendary test has seen its day.

"Feud over famed test erupts at U," blared the headline on an in-depth investigation by Maura Lerner of the Minneapolis/St. Paul Star-Tribune.

At issue is last year's "dramatic makeover" of the highly profitable septuagenarian. The slimmed-down MMPI-2-RF (restructured form) has just 338 questions rather than the old test's 567.

Leading the critics is James Butcher, a retired psychologist whose career centered around the old MMPI-2. He claims the MMPI-2-RF is so radically altered that it amounts to a new instrument. "These folks have made a new test and they are using the name MMPI ... with all the 70 years of tradition to market [it]." If he's right, of course, then all of the myriad studies and normative data on the old test are irrelevant to interpreting scores on the MMPI-2-RF.

Of even more direct relevance to us forensic folks is the controversy surrounding the new "Fake Bad Scale." As I've blogged about previously (links below), the "FBS" is being used to brand personal injury claimants as malingerers. Critics say the 43-item scale "discriminates against women and is prone to 'false positives.' "

Defenders of the new test deride critics as "the Mult Cult" (a twist on Multiphasic). And, to be fair, some do have a vested interest in the previous edition: Butcher earns a 30 percent cut of the $600,000 annual royalties on the MMPI-2's computerized interpretation system.

All of the hoopla has led to a series of formal investigations. In one, a university audit revealed that most of the MMPI research grant money was going to projects involving advisory board members of University Press's test division, who in some cases had even reviewed their own grant proposals.

The Press's solution to the controversy, according to the Star Tribune, is to "let the marketplace decide."

Hmmm. Is the marketplace really the best judge of good science?

The hoopla is likely to benefit the MMPI's competitors. In forensic circles, the test's former monopoly hold is giving way as the Personality Assessment Inventory and other newer instruments gain ground.

Whatever side we practicing psychologists choose, the major test distributors won't care. They distribute them all, and they will continue to rake in enormous sums of money. Don't you just love those over-the-top shipping and handling fees charged by you-know-who?

Related blog posts:

August 20, 2009

Upcoming trainings

American Psychology-Law Society

Mark your calendars for the upcoming AP-LS conference, to be held March 18-20, 2010 in Vancouver, BC. The deadline for conference submissions is October 5. Information about the conference is available at the conference website. Proposals for posters, papers, and symposia can be submitted HERE.

Risk for Sexual Violence Protocol (RSVP)

On October 22, Stephen Hart is going to be down in Portland, Oregon, giving an all-day training on his newly developed instrument, the RSVP (which replaces the Sexual Violence Risk-20 instrument). Dr. Hart is well worth catching. A member of the Mental Health, Law, and Policy Institute at Simon Fraser University in Canada, he is an internationally renowned researcher, forensic psychologist and past president of the American Psychology-Law Society. More information on the training is at the website of Northwest Forensic Institute.

February 23, 2009

Latest on controversial "Fake Bad Scale"

I wanted to alert my psychologist readers to the latest in the controversy over the "Fake Bad Scale" of the Minnesota Multiphasic Personality Inventory, a topic I have blogged about previously (HERE). If you are planning to use this Scale, you should be aware of this article and the others on both sides of the controversy.

The Fake Bad Scale (FBS) was developed to identify malingering of emotional distress among claimants in personal injury cases. It was recently added to MMPI-2 scoring materials, resulting in its widespread dissemination to clinicians who conduct psychological evaluations.

The latest article, in the interesting new journal Psychological Injury & Law, summarizes concerns about the Scale's reliability, validity, and potential bias against women, trauma victims, and people with disabilities.

The article concludes that the scale is not sufficiently reliable or valid to be used in court:
"Based on a review and a careful analysis of a large amount of published FBS research, the FBS does not appear to be a sufficiently reliable or valid test for measuring 'faking bad,' nor should it be used to impute the motivation to malinger in those reaching its variable and imprecise cutting scores. We agree with the conclusions of the three judges in Florida that the FBS does not meet the Frye standards of being scientifically sound and generally accepted in the field, and that expert testimony based on the scale should be excluded from consideration in court. The samples used to develop the FBS are not broadly representative of the populations evaluated by the MMPI-2, nor are its criteria used to define malingering objective and replicable. There is insufficient evidence of its psychometric reliability or validity, and there is no consensus about appropriate cut-off scores or use of norms."
The article is "Potential for Bias in MMPI-2 Assessments Using the Fake Bad Scale (FBS)." The Abstract and a "free preview" are online HERE; the full article requires a subscription but can be requested directly from the first author, James Butcher. Butcher and co-authors Carlton Gass, Edward Cumella, Zina Kally and Carolyn Williams present just one side of the heated controversy; a rebuttal is scheduled for publication in an upcoming issue of the journal, followed by other pro and con articles.

Related blog resources:

New MMPI scale invalid as forensic lie detector, courts rule: Injured plaintiffs falsely branded malingerers? (March 5, 2008) – contains links and citations to other sources

"Fake Bad Scale": Lawyers advocate exposing in court (May 20, 2008)

A list of FBS references and statement from the test's publisher is HERE

Hat tip: Ken Pope

December 30, 2008

Will “revolutionary” Diana Screen end pedophile menace?

Vatican enlisting psychologists to perform miracles

The new movie Doubt paints the issue of pedophilic priests in shades of gray. Is the priest (played by Philip Seymour Hoffman) really a pedophile? Or is the head nun (Meryl Streep) just after him because, with his friendly manner and long fingernails, he fits her stereotype? Most provocative of all is the ostracized boy's mother (Viola Davis), who cares more about the priest's kindness to her son than about whether the relationship is sexual.

The movie is set in the 1960s, two decades before the pedophilia scandals sprang into the limelight to tarnish the reputation of the Catholic Church. Revelations of sexual misconduct by priests resulted in staggering financial losses -- an estimated $2 billion in civil damages paid by the U.S. Catholic Church alone.

Anxious to mend its reputation and plug the money drain, the Vatican just announced a new fix: Candidates for the priesthood will undergo psychological screening to determine their suitability for the job.

What makes a candidate unsuitable, according to the Vatican? "Uncertain sexual identity," "deep-seated homosexual tendencies," and "grave immaturity" are among the factors. Painting a pseudoscientific veneer on the campaign, the Vatican said "expert" psychologists will screen select candidates on a case-by-case basis.

Mental health professionals, already flush with domain expansion into the emergent sex offender industry, are rushing into this new and potentially lucrative niche.

Leading the charge is Gene Abel, the psychiatrist who invented the controversial Abel Screen, which measures sexual proclivities based on how long men look at visual images of different types of models. Abel is promoting a new "pass/fail" test called the Diana Screen as a "breakthrough in technology" that can accurately identify men who have molested children.

"Who should use it?" asks the tool's website. "Any organization where there are professionals or volunteers who work with children," including churches, youth groups, schools, hospitals, foster care homes, and amusement parks.

In an appeal that combines sex panic emotionalism with a promise of revenue, Abel asks professionals to step forward and "make a difference" by becoming Diana Screen administrators: "You don't just add to your business opportunity, you take a stand against molestation and you help others to also take a stand."

Who can resist an appeal like that?

A quick web search found several psychologists already offering to do Diana Screens for employers. One bragged of having a "Certificate of Achievement" from Abel "in recognition of [his] knowledge about this important technology."

Child molesters are a heterogeneous bunch, with no unitary psychological "profile." So, before rushing to sign on, I decided to read the published literature on the Diana Screen to find out how it works, and whether it is reliable and valid.

Searching "Diana Screen" in an academic database, I did not get any hits. An Internet search was slightly more productive. I found several presentations by Abel. He presented the Diana Screen to the Society for Sex Therapy and Research; the Assessment, Treatment and Safe Management of Sexually Abusing Children, Adolescents, and Adults conference, and the California Coalition on Sexual Offending (CCOSO).

At these conferences, Abel reported on research he conducted with 100-plus applicants for priesthood training jobs. Unfortunately, the research does not appear to have been peer-reviewed or published - key factors that courts weigh when assessing admissibility under the Daubert standard.

Searching further, I found some strategically placed advertising; searches with the keywords "child molestation" cause Diana Screen ads to pop up on some news sites. The Screen was also a featured exhibitor at this year's conference of the Chartered Property Casualty Underwriters Society, which offers "cutting-edge tools" for "risk management professionals."

More humorously, in the blogosphere I bumped into a group of sex offenders discussing how easy it is to beat the test (and its precursor, the Abel). All you have to do, wrote one man, is ignore the instructions to rate your sexual arousal level to each slide, and instead respond at "a regular timing interval," which is what is really being measured. [PS: The link to their conversation went dead after this post was published.]

"You'll laugh when you find out just how easily the test can be beaten! The entire thing rides on the theory that no one will know what it's really testing."

Another agreed: "It's so seriously EASY to play the test like a harp."

These sex offenders would likely quarrel with the Screen developers' claim that it can identify "over 50 percent of actual child sexual abusers."

But my own question about the 50 percent success rate was, How can they know they are identifying half of all pedophiles? And, perhaps more importantly from an ethical point of view, what is the rate of false positives - people whom the test wrongly identifies as child molesters?

Hoping to learn more, I contacted the company directly and asked for any published research. In due time, I received a packet of materials - glossy brochures and fliers, a sample report, graphs, and more promises that the Screen will help "bring an end to child molestation." No references to published research, though.

The materials did include a handout on the aforementioned (unpublished?) study of candidates for religious ordination. Of the 135 applicants screened, 18 (or about 13 percent) failed the test. Of those, 7 "were found to be true sexual risks to children" (based on follow-up inquiry and polygraph testing), while 2 "were found to have mental health problems" and 9 "required a closer look, but were found to have little or no risk."

Stated another way, that's a false positive rate of at least 50 percent. Even if it is just a screening test, psychologists should be cautious in administering a test with such a high false-positive rate and no published, peer-reviewed data on its reliability or validity.
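Using the handout's own numbers, the arithmetic is easy to verify (a quick sketch; the variable names are mine, and only the counts come from the handout):

```python
# Figures reported in the Diana Screen handout on 135 ordination candidates.
screened = 135
failed = 18        # candidates who failed the screen
true_risks = 7     # later judged true sexual risks to children
mental_health = 2  # flagged, but for unrelated mental health problems
no_risk = 9        # flagged, later found to pose little or no risk

# Of those who failed, how many were not actually risks to children?
# Counting only the 9 cleared cases gives the "at least 50 percent" floor;
# counting the 2 mental-health cases as well pushes the rate higher.
fp_floor = no_risk / failed
fp_ceiling = (no_risk + mental_health) / failed

print(f"Failure rate: {failed / screened:.1%}")         # share of candidates flagged
print(f"False-positive rate (floor): {fp_floor:.0%}")   # 50%
print(f"False-positive rate (ceiling): {fp_ceiling:.0%}")
```

Either way you count, at least half of the candidates who failed the screen turned out not to be sexual risks at all.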

More fundamentally, this type of testing raises philosophical issues about how far society should go in the name of protecting children, especially when most victimization is done not by teachers or amusement park workers but by family members. Who, for example, should be screened? As a colleague commented, it is one thing to screen airline pilots for alcohol abuse, but if priests, teachers, hospital employees, and even carnival workers will be screened, where will we draw the line? How much personal information are employers entitled to know? And what recourse will there be for those who are denied employment or lose their jobs based on their innermost thoughts, their sexual identity, an incident in their distant pasts, or - worst of all - erroneous test results?

The most pernicious problem with false positives is, how can one really know? As the movie Doubt suggests, proving innocence is difficult, and those who claim to be protecting children may have more complicated motives.

* * * * *

JULY 2015 POSTSCRIPT: The Atlantic has just published an interesting article on the controversies swirling around Abel Assessment by Maurice Chammah, a staff writer at The Marshall Project.

October 9, 2008

Challenge to juvenile sex offender risk prediction

Harsh federal law on shaky scientific ground

Did you know that each year, about 10,000 children will have to register as sex offenders for life?

That's part of the Sex Offender Registration and Notification Act, embedded in the Adam Walsh Child Protection and Safety Act passed by the U.S. Congress two years ago. Under SORNA, these arrested juveniles will be subject to warrantless searches for the rest of their lives, despite the fact that as kids they did not have the same types of due process rights that protect adults in criminal court.

SORNA marks a huge departure from past juvenile justice practices, which recognized that children are different, and that most juvenile crime is "adolescent-limited."

So, here's some food for thought:
  • What if it turns out that this new practice is not just extremely harsh, but paradoxically puts the public at heightened risk by impeding rehabilitation, and consigning kids who would otherwise move on with their lives to the status of permanent social pariahs?
  • And what if it turns out that the "scientific" methods the states use to determine which juveniles are at high risk for sexual reoffending are completely worthless?
Well, it looks like both of those things are true.

Prediction tools don't work

This month's Psychology, Public Policy, and Law published an important study showing that the systems in place to determine which juveniles are at high risk for recidivism simply don't do the job.

The researchers followed high-risk juvenile males for an average of about six years. They rated them on the highly touted Juvenile Sex Offender Assessment Protocol (J-SOAP-II) and the risk protocols developed by three states (Texas, New Jersey, and Wisconsin). Not only did the systems not work, but they were not even consistent with each other!

"This finding suggests that a juvenile's assessed level of risk may be more dependent on the state he lives in than on his actual recidivism risk," the authors concluded.

And SORNA's own tiered risk system fared even worse: Juveniles designated as high risk actually recidivated at lower rates than others.

In summary, the researchers concluded that the risk tools that have such important implications for the lives and futures of adolescents are both "nonscientific" and "arbitrary."

Treatment works

Although the efficacy of sex offender treatment among adults is contested, among adolescents the study findings were clear: Developmental factors play a big role in adolescent sexual behavior, and risk for reoffense can be reduced through high-quality treatment.

This is consistent with other recent research showing that even the most intractable offenders can be rehabilitated -- and at a cost far lower than the cost of punishment.

The authors concluded that SORNA as it applies to youth is not only misguided but is likely to do more harm than good:
"The legislation … is based on the assumption that juvenile sex offenders are on a singular trajectory to becoming adult sexual offenders. This assumption is not supported by these results, is inconsistent with the fundamental purpose of the juvenile court, and may actually impede the rehabilitation of youth."
Now, consider these facts:
  • Most juvenile sex offenders stop offending by early adulthood.
  • Among delinquents, just as many non-sex offenders as sex offenders go on to engage in adult sexual offending.
  • At least one in five adolescent males commits a sexual assault. (See Abbey, referenced below.)
What do these facts add up to?

The need for widescale prevention efforts, instead of ineffective stigmatization of a few unlucky individuals. (Funding for such efforts has dropped precipitously - probably not coincidentally, as sanctions have grown increasingly punitive; see Koss citation, below.)

Other challenges to SORNA

Meanwhile, other aspects of SORNA face challenges, and a few such challenges are headed for the U.S. Supreme Court. Specifically, legal challenges assert that SORNA exceeds federal authority by encroaching on state and local decision-making.

As summarized in the current issue of the American Bar Association journal, at least two courts have sided with critics and invalidated some or all of the registry law, and in a third case the new law has been put on hold until arguments are heard. (I reported on one of those cases, U.S. v. Waybright, back in August – the blog post with links is here.)

SORNA-style databases are already being extended to domestic violence offenders, and if they are upheld by the U.S. Supreme Court they are likely to extend even further. That is the conclusion of Wayne A. Logan, a law professor at Florida State University and author of the forthcoming book Knowledge as Power: A History of Criminal Registration Laws in America.

So, warn your kids now: Don't ever get arrested. You may be publicly stigmatized - and perhaps even subject to warrantless searches - for the rest of your life.

For further information:

Caldwell, M.F., Ziemke, M.H., & Vitacco, M.J. (2008). An examination of the Sex Offender Registration and Notification Act as applied to juveniles: Evaluating the ability to predict sexual recidivism. Psychology, Public Policy, and Law, 14(2), 89-114.

Abbey, A. (2005). Lessons learned and unanswered questions about sexual assault perpetration. Journal of Interpersonal Violence, 20(1), 39-42.

Koss, M.P. (2005). Empirically enhanced reflections on 20 years of rape research. Journal of Interpersonal Violence, 20(1), 100-107.

For further information on the juvenile registration requirements of SORNA, see the U.S. Department of Justice's online fact sheet; this month's Police Chief magazine also has a summary of SORNA that includes the juvenile provisions (online here). The full text of the Adam Walsh Child Protection and Safety Act is here.

The American Bar Association article, "The National Pulse: Crime Registries Under Fire -- Adam Walsh Act mandates sex offender lists, but some say it's unconstitutional," is available here.

August 6, 2008

Two new journals

Just what we all need – more journals!

Psychological Injury and Law

The first issue of Psychological Injury and Law has hit the news stands.

Well, not exactly. But it's hit the web, and articles in the premiere issue are available for free downloads without a subscription.

The journal bills itself as "a multidisciplinary forum for the dissemination of research articles and scholarly exchanges about issues pertaining to the interface of psychology and law in the area of trauma, injury, and their psychological impact."

Spearheading the new journal - and an associated new organization, the Association for Scientific Advancement in Psychological Injury and Law - is Gerald Young, a psychology professor at York University in Ontario and co-author of Causality of Psychological Injury: Presenting Evidence in Court, among other texts.

Young and colleagues hope to promote research, guide the application of that research in forensic cases, and improve cross-disciplinary communication.

Topics of focus will include PTSD, chronic pain, traumatic brain injury, and malingering.

Articles in the first issue, available here for free download, include:
  • Expert Testimony on Psychological Injury: Procedural and Evidentiary Issues
  • Forensic Psychology, Psychological Injuries and the Law
  • Psychological Injury and Law: Assumptions and Foundations, Controversies and Myths, Needed Directions
  • Posttraumatic Stress Disorder: Current Concepts and Controversies
That final article, by Steven Taylor and Gordon Asmundson, provides a concise summary of PTSD research, with a focus on malingering in the forensic context.

Happy downloading!

The Jury Expert

Also new online is the American Society of Trial Consultants' The Jury Expert. Now in its second issue, the e-journal "features articles by academics, researchers, popular writers and speakers, and trial consultants. The focus is on practical tips for litigators and on the accurate interpretation and translation of social sciences theory into litigation practice."

The current issue includes articles on case themes, witness preparation, an overview of eyewitness research, tips for using RSS feeds, a new form of forensic animation, and the use of religion research in legal cases.

The Jury Expert will publish six times per year and - best of all - subscriptions are free.

Check it out here.

July 3, 2008

MnSOST-R actuarial instrument critiqued

More questions about validity of controversial SVP tool

WARNING: This post is technical, and meant as a heads-up to professionals working in the SVP field, especially those who are still encountering (or using) the MnSOST-R. I would advise readers and subscribers who do not work in this area to skip this post – and have a nice 4th of July Holiday.

The current issue of the preeminent forensic psychology journal Law & Human Behavior has a scathing critique of the Minnesota Sex Offender Screening Tool – Revised (MnSOST-R) by University of Minnesota Professor William Grove and graduate student Scott Vrieze. Through a series of statistical analyses, the authors argue that this instrument does not result in more accurate prediction of sex offender recidivism than simply knowing the base rate for such recidivism. The instrument fails to meet basic evidentiary standards and should be excluded from SVP civil commitment trials, they argue.

Despite the fact that the MnSOST-R is used in at least 13 of the 17 states that have SVP civil commitment laws, there is little published information on its reliability or validity. The authors review the available information, which in and of itself makes the article imperative for those using the instrument.

Another contribution is the authors' critique of the recently popularized technique of using AUCs (the Area Under the Curve, from signal detection theory) as a measure of test accuracy. Recidivism rates of sex offenders would have to be about seven times higher than they are in order for AUC estimates to be reliable, the authors argue:
"An AUC statistic … can lull the clinician into thinking that, if the AUC is suitably high, the test will perform satisfactorily…. This is far from necessarily so…."
In the same issue of the journal, Douglas Mossman offers a rebuttal: "Contrary to what Vrieze and Grove suggest, ARAIs (actuarial risk assessment instruments) of modest accuracy yield probabilistic information that is more relevant to legal decision-making than just ‘betting the base rate.' "

The Vrieze and Grove critique follows a series of similar, statistically based critiques of the MnSOST-R and similar actuarials by Richard Wollert. These include:
  • Wollert, R. (2002a). The importance of cross-validation in actuarial test construction: Shrinkage in the risk estimates for the Minnesota Sex Offender Screening Tool-Revised. Journal of Threat Assessment, 2(1), 89-104.
  • Wollert, R. (2002b). Additional flaws in the Minnesota Sex Offender Screening Tool-Revised. Journal of Threat Assessment, 2(4), 65-78.
  • Wollert, R. (2006). Low base rates limit expert certainty when current actuarials are used to identify sexually violent predators: An application of Bayes's theorem. Psychology, Public Policy, and Law, 12(1), 56-85.
These articles are not light reading; they amount to complicated battles among statisticians. But forensic psychologists are expected to be aware of these debates when they testify about the use of actuarial instruments in SVP proceedings.
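The base-rate problem driving these critiques can be illustrated with a short calculation (a hypothetical sketch; the sensitivity, specificity, and base-rate figures below are invented for illustration and are not drawn from any of the articles): when recidivism base rates are low, even a reasonably accurate test flags mostly non-recidivists.

```python
# Illustration of the low-base-rate problem with actuarial risk tools.
# All numbers are hypothetical, chosen only to show the effect.
def positive_predictive_value(sensitivity, specificity, base_rate):
    """Bayes' theorem: P(recidivist | flagged high risk)."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

for base_rate in (0.05, 0.15, 0.35):
    ppv = positive_predictive_value(sensitivity=0.80, specificity=0.75,
                                    base_rate=base_rate)
    print(f"base rate {base_rate:.0%}: "
          f"P(recidivist | high-risk flag) = {ppv:.0%}")
```

At a 5 percent base rate, fewer than one in six offenders flagged as high risk would actually recidivate - the kind of result that can make a seemingly high AUC misleading in practice.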

The Vrieze and Grove abstract is here; the Mossman rebuttal abstract is here. For the full articles you either have to pay or have access to a university database. A handy medical primer on ROC/AUC statistics, complete with slidable graphics, is here.

May 22, 2008

Major ruling on forensic neuropsychology

Flexible wins epic
Battle of the Batteries


The Democrats have Obama versus Clinton. American Idol has the battle of the two Davids. But whoever heard of the battle between the fixed and the flexible batteries?

The New Hampshire Supreme Court, for one. And in that more obscure battle in the field of neuropsychology, the court this week handed a resounding victory to the flexible battery. Although I haven't seen anyone dancing in the streets, it's a victory that forensic psychologists and neuropsychologists should be celebrating.

A bit of background: The "fixed" battery approach involves rigid administration of a fixed set of tests. The most popular such batteries are the Halstead-Reitan and the Luria. The flexible or "Boston Process" approach, in contrast, involves administering a core set of tests, supplemented by extra tests chosen on the basis of specific case factors and hypotheses.

When I was a neuropsychology intern, I was trained in the Boston Process Approach. As it turns out, the overwhelming majority of neuropsychologists in a recent survey - 94% - said they use some type of flexible battery approach. As the New Hampshire Supreme Court pointed out, that makes it the standard of practice in the field.

The case involves the alleged lead poisoning of Shelby Baxter, now 13, when she was a toddler. The civil case against Ms. Baxter's landlord, whom the Baxters claim knew the apartment was contaminated, was dismissed after the trial judge excluded neuropsychological evidence using the Boston Process approach as not scientific. The case will now go forward.

The plaintiffs' neuropsychologist, Barbara Bruno-Golden, Ed.D, had substantial experience with lead-exposed children, and each individual test in her battery was published, tested, and peer reviewed, as befitting reliable science under the legal standard of Daubert and New Hampshire statutory law.

At a 6-day Daubert evidentiary hearing, the defense called controversial neuropsychologist David Faust, Ph.D., who testified that although Dr. Bruno-Golden's approach was generally accepted in clinical practice, it was not so in a forensic setting. The plaintiff's experts, as well as the American Academy of Clinical Neuropsychology in an amicus brief, correctly countered that there is no separate standard for forensic practice.

In its exhaustive and thoroughly reasoned opinion, the Supreme Court soundly rejected Faust's reasoning, issuing a monumental blow to the minority of forensic neuropsychologists who staunchly cling to the fixed battery approach.

"Under the defendants' position, no psychologist who uses a flexible battery would qualify as an expert, even though the flexible battery approach is the prevalent and well-accepted methodology for neuropsychology," the court pointed out. "Therefore, the implication … is that no neuropsychologist, or even psychiatrist or psychologist since, in their view, all combinations of tests need to be validated and reliable, could ever assist a trier of fact in a legal case."

The court held that any weaknesses in Bruno-Golden’s methodology - if indeed such existed - were properly handled through cross-examination and counterbalancing evidence in the adversarial trial process.

The case, Baxter v. Temple, is online here. A news article is here. A blog commentary at Traumatic Brain Injury is here.

Photo credit: 02ma (Creative Commons license)

May 20, 2008

"Fake Bad Scale": Lawyers advocate exposing in court

When a controversial test is being used against their client, attorneys may weigh the following questions:
  • Should I seek an evidentiary hearing (under Frye or Daubert) and try to exclude the test?
  • Or, should I let the test come in as evidence, and educate the jury about weaknesses in the underlying science?
This question regularly comes up at Sexually Violent Predator trials, regarding the controversial Static-99 risk assessment tool. Now, it is coming up in civil personal injury trials, regarding the MMPI-2's "Fake Bad Scale" (which I blogged about here back in March).

Increasingly, attorneys are choosing the second option when the science underlying a test is weak. They are openly critiquing the test and its findings, and allowing jurors to form their own conclusions. Yesterday's Lawyers USA features an article on how plaintiffs' attorneys are "turning the tables" on the Fake Bad Scale:
Although plaintiffs' attorneys are unanimous in despising the Fake Bad Scale, there is a mini-debate about whether it is more effective to exclude the test before trial or allow it in and discredit it while cross-examining the defense expert.

"It's a tough call, frankly," said Dorothy Clay Sims, a founding partner of Sims, McCarty, Amat & Stakenborg in Ocala, Fla., who has won three hearings over excluding the test.

"Frye and Daubert hearings are tough, but courts don't seem to like this test, so it's difficult to give up a hearing that you have a good chance of winning," she said. "On the other hand, once the Fake Bad Scale is demystified for the jury, and you pierce through it, they look at the defense doctor and say 'Oh, come on.' "
The article features the case of Sarah Jenkins, a medical receptionist who suffered tissue injuries and cognitive problems after her pick-up truck was hit by a delivery truck. She scored in the faking range on the Fake Bad Scale.

Rather than fighting to exclude the test, experienced trial attorney Dean Heiling made it a centerpiece. He cross-examined the defense expert at length about the test, and through his own expert exposed the controversy in the field about the test's validity.

Most interestingly, he put his client on the stand in rebuttal, and had her go through each test item and her answer with the jury.

Jurors deliberated only three hours before returning a verdict of $225,749.

The lesson to forensic psychologists: Know your tests, and know their weaknesses.


The full story, by Sylvia Hsieh, is here, although it is only available to subscribers. For more on the controversy over the scale, see my previous post here.

Hat tip: Ken Pope

March 5, 2008

New MMPI scale invalid as forensic lie detector, courts rule

Injured plaintiffs falsely branded malingerers?

Psychology's most widely used personality test, the MMPI, jumped into the national spotlight today in a fascinating David-and-Goliath controversy pitting corporate interests such as Halliburton against the proverbial little guy.

At issue is the "Fake Bad Scale" that was incorporated into the Minnesota Multiphasic Personality Inventory last year for use in personal injury litigation. A front-page critique in today's Wall Street Journal includes publication of the items on the contested scale, a test security breach that will no doubt have the publisher seeing red.

Although a majority of forensic neuropsychologists said in a recent survey that they use the scale, critics say it brands too many people - especially women - as liars. Research finding an unacceptably large false-positive rate includes a large-scale study by MMPI expert James Butcher, who found that the scale classified high percentages of bonafide psychiatric inpatients as fakers.

One possible reason for this is that the scale includes many items that people with true pain or trauma-induced disorders might endorse, such as "My sleep is fitful and disturbed" and "I have nightmares every few nights." Yet hearing the term "Fake Bad" will likely make a prejudicial impact on jurors even if they hear from opposing experts who say a plaintiff is not faking.

The controversy came to a head last year in two Florida courtrooms, where judges barred use of the scale after special hearings on its scientific validity. In a case being brought against a petroleum company, a judge ruled that there was "no hard medical science to support the use of this scale to predict truthfulness." Other recent cases in which the scale has been contested include one against Halliburton brought by a former truck driver in Iraq.

The 43-item scale was developed by psychologist Paul Lees-Haley, who works mainly for defendants in personal injury cases and charges $600 an hour for his depositions and court appearances, according to the Journal article. In 1991, he paid to have an article supportive of the scale published in Psychological Reports, which the WSJ describes as "a small Montana-based medical journal."

The scale was not officially incorporated into the MMPI until last year, after a panel of experts convened by the University of Minnesota Press reported that it was supported by a "preponderance of the current literature." Critics maintain that the review process was biased: At least 10 of the 19 studies considered were done by Lees-Haley or other insurance defense psychologists, while 21 other studies – including Butcher's – were allegedly excluded from consideration.

Later last year, the American Psychological Association's committee on disabilities protested to the publisher that the scale had been added to the MMPI prematurely.

Lees-Haley, meanwhile, defends the scale as empirically validated and says criticism is being orchestrated by plaintiff's attorneys such as Dorothy Clay Sims, who has written guides on how to challenge the Fake Bad scale in court.

Even if the scale was valid before today, questions are certain to arise about the extent to which it will remain valid once litigants start studying for it by using today's publication of all 43 items along with the scoring key.

The lesson for forensic practitioners: Be aware of critical literature and controversy surrounding any test that you use in a forensic context, and be prepared to defend your use of the test in court.

The article, "Malingerer Test Roils Personal-Injury Law; 'Fake Bad Scale' Bars Real Victims, Its Critics Contend," which includes ample details on the controversy, is only available to Wall Street Journal subscribers, but you can try retrieving it with a Google news search using the term "MMPI Fake Bad." The University of Minnesota Press webpage on the contested scale is here, along with a list of research citations.

Here are citations to the major pro and con research articles:

"Meta-analysis of the MMPI-2 Fake Bad Scale: Utility in forensic practice," Nelson, Nathaniel W., Sweet, Jerry J., & Demakis, George J., Clinical Neuropsychologist, Vol 20(1), Feb 2006, pp. 39-58

"The construct validity of the Lees-Haley Fake Bad Scale: Does this measure somatic malingering and feigned emotional distress?: Butcher, James N., Arbisi, Paul A., & Atlis, Mera M., Archives of Clinical Neuropsychology, Vol 18(5), Jul 2003, pp. 473-485.

Postscript: Test distributor Pearson Assessments responded with alacrity - not to the heart of the controversy but to the Journal's reprinting of test items. The company, which makes a mint from selling and scoring the MMPI and other psychological tests, got the WSJ to remove the online link to the test items. In a "news flash," Pearson says it is "evaluating the impact of the article" and asks psychologists to report any other instances of "illegal" reproduction of the scale in publications, websites, chat rooms, or blogs.

NOTE: For more of my posts about the MMPI-2's Fake Bad Scale, search the blog using the term "MMPI" (the search box is in the upper left corner of the page).

January 23, 2008

Appellate courts grapple with controversial sex offender risk assessment tools

Rulings on Abel and Static 99

Without scientific-sounding risk assessment tools, forensic psychologists in the sex offender civil commitment industry would have a hard time earning a living. Increasingly, instruments designed specifically for this burgeoning industry are being scrutinized by the courts. Here are two new appellate cases in point:

Louisiana appellate court approves profiling with Abel

In a troubling ruling out of Louisiana, an appellate court OK'd expert witness testimony that a man was 81 percent likely to have molested a child based on his psychological test results.

Interpreting the defendant's scores on the Abel Assessment for Sexual Interest, clinical psychologist Maureen Brennan had testified that "there is an 81 percent chance that anyone with that pattern has at some point in their life been sexually inappropriate with a child" and that the defendant would falsely deny that fact.

After hearing that powerful testimony back in 2006, a jury deliberated only one hour before convicting schoolteacher Timothy Brannon of Beauregard Parish of all 12 counts against him.

Over defense objections, the trial judge had qualified Dr. Brennan as an expert in the "characteristics and diagnosis of child sexual abuse and perpetrators."

The Third Circuit Court of Appeals found no problem with Dr. Brennan's testimony, including her use of Rorschach inkblot testing to help predict Brannon's conduct. Among other reasons for not finding error, the appellate court pointed out that substantial other evidence implicated the schoolteacher.

Both the Abel instrument and the Rorschach are highly controversial in court. The Abel uses visual reaction times to sexual imagery to deduce individuals' relative sexual interests in different types of people; Abel has responded to criticisms by clarifying that the instrument is not intended to assist triers of fact in reaching decisions about an individual's guilt or innocence.

More importantly, even when reliable and valid psychological tests are administered, the science is never strong enough to assign a mathematical probability of guilt.

I have placed the opinion online here.

7th Circuit questions reliability of Static 99

In this case, 31-year-old Christopher McIlrath was appealing his 4-year sentence in an internet sting conviction. He argued that the trial judge improperly dismissed the testimony of forensic psychologist Eric Ostrov, who had administered the Static 99 actuarial risk assessment tool and testified that McIlrath matched the characteristics of offenders with a 9 to 13 percent chance of recidivism.

The appellate court rejected McIlrath’s argument that he should have been sentenced just to home confinement based on the Static 99 data. While not directly ruling on the admissibility of the instrument (the rules of evidence don’t apply at sentencing hearings), the court expressed skepticism about the Static 99’s reliability in predicting recidivism risk.

The EvidenceProf blog has more on that case; the case itself can be found here.

Hat tip: Wendy Murphy

December 4, 2007

Detection of feigned mental retardation

The Clinical Neuropsychologist has a new article on the problems in detecting feigned mental retardation. That issue is getting more attention these days in the wake of the U.S. Supreme Court ruling in Atkins v. Virginia, outlawing the execution of mentally retarded defendants. The British Psychological Society's Research Digest blog has more on the study, which is entitled "Identification of feigned mental retardation using the new generation of malingering detection instruments: Preliminary findings."

July 15, 2007

Attorneys in the testing room – Yes or No?

Hey, you forensic psychologists: When an attorney asks if she can sit in while you test her client, what do you tell her?

If you think there is one accepted answer, think again.

The National Academy of Neuropsychology and the American Academy of Clinical Neuropsychology say you should keep that attorney out. Her presence may violate test standardization, skewing the results.

But some forensic psychologists, such as Randy Otto, say banning third-party observers may be legally problematic. Some states allow defendants in court-ordered evaluations to bring in observers. And when a defendant speaks another language, we may need an interpreter in the room.

This issue is heating up, as the Committee on Psychological Tests and Assessment (CPTA) of the American Psychological Association’s Board of Scientific Affairs prepares to issue a policy statement on third-party observers.

You can watch the fireworks as proponents debate their positions at the American Psychological Association convention in San Francisco next month. The debate, “Third-Party Observers in Psychological and Neuropsychological Forensic Psychological Assessment,” will be Saturday, August 18, at noon.

Another source of information is the latest issue of the journal Ethics & Behavior. Robert Cramer and the eminent forensic psychologist Stanley Brodsky have co-authored an article, “Undue Influence or Ensuring Rights? Attorney Presence During Forensic Psychology Evaluations."

The article summarizes the neuropsychological literature on extraneous influences in testing and the limited literature on the effects of attorney presence in the testing room. It also discusses legal and ethical mandates pertaining to attorney presence and offers suggestions for forensic evaluators on how to answer the attorney who asks to sit in.

Requests for reprints of the article may be sent to crame001@bama.ua.edu.