Wednesday, June 30, 2010

Response bias: Faith or science?

Most extensively studied topic in applied psychological measurement

After one hundred years and thousands of research studies, perhaps we are no closer than we were to understanding how response bias -- a test-taker's overly positive or negative self-presentation -- affects psychological testing. Perhaps what we think we know -- especially in the forensic context -- is based mostly on faith and speculation, with little real-life evidence.

That is the sure-to-be controversial conclusion of a landmark analysis by Robert E. McGrath of Fairleigh Dickinson University, an expert on test measurement issues, and colleagues. The dryly titled "Evidence for response bias as a source of error variance in applied assessment," published in Psychological Bulletin, issues a challenge to those who believe the validity of testing bias indicators has been established, especially in the forensic arena.

The authors conducted an exhaustive literature review, sifting through about 4,000 potential studies, in search of research on the real-world validity of measures of test response bias. They sought studies that examined whether response bias indicators actually did what we think they do -- suppress or moderate scores on the substantive tests being administered. They searched high and low across five testing contexts -- personality assessment, workplace testing, emotional disorders, disability evaluations, and forensic settings. Surprisingly, out of the initial mountain of candidate research, they found only 41 such applied studies.

Of relevance here, not a single study could be found that tested the validity of response bias indicators in real-world child custody or criminal court proceedings. Indeed, only one study specifically targeting a forensic population met inclusion criteria. That was a 2006 study by John Edens and Mark Ruiz in Psychological Assessment looking at the relationship between institutional misconduct and defensive responding on test validity scales.

Does the "Response Bias Hypothesis" hold water?


The authors tested what they labeled the response bias hypothesis, namely, the presumption that using a valid measure of response bias enhances the predictive accuracy of a valid substantive indicator (think of the K correction on the MMPI personality test). Across all five contexts, "the evidence was simply insufficient" to support that widely accepted belief.

McGrath and colleagues theorize that biased responding may be a more complex and subtle phenomenon than most measures are capable of gauging. This might explain why the procedure used in typical quick-and-dirty research studies -- round up a bunch of college kids and tell them to either fake or deny impairment in exchange for psych 101 credits -- doesn't translate into the real world, where more subtle factors such as religiosity or type of job application can affect response styles.

It is also possible, they say, that clinical lore has wildly exaggerated base rates of dishonest responding, which may be rarer than commonly believed. They cite evidence calling into question clinicians' widespread beliefs that both chronic pain patients and veterans seeking disability for posttraumatic stress disorder are highly inclined toward symptom exaggeration.

Unless and until measures of response bias are proven to work in applied settings, using them is problematic, the authors assert. In particular, courts may frown upon use of such instruments due to their apparent bias against members of racial and cultural minorities. For example, use of response bias indicators has been found to disproportionately eliminate otherwise qualified minority candidates from job consideration, due to their higher scores on positive impression management. (Such a finding is not surprising, given Claude Steele's work on the pervasive effects of stereotype threat.)

"What is troubling about the failure to find consistent support for bias indicators is the extent to which they are regularly used in high-stakes circumstances, such as employee selection or hearings to evaluate competence to stand trial and sanity," the authors conclude. "The research implications of this review are straightforward: Proponents of the evaluation of bias in applied settings have some obligation to demonstrate that their methods are justified, using optimal statistical techniques for that purpose…. [R]egardless of all the journal space devoted to the discussion of response bias, the case remains open whether bias indicators are of sufficient utility to justify their use in applied settings to detect misrepresentation."

This is a must-read article that challenges dominant beliefs and practices in forensic psychological assessment.

5 comments:

  1. Anonymous -- June 30, 2010

    McGrath's paper is trash. It fails to review 99% of all papers ever written on Symptom Validity Testing. It is a treatise based on ignorance and prejudice. As I said, it is junk. Read it for yourself and see. This man should never be APA President if he writes prejudiced trash like this.

  2. Anonymous -- July 04, 2010

    When you need to make an important decision, i.e., public safety, you can't disregard impression management!

  3. Anonymous -- July 05, 2010

    Bottom line for forensic psychologists: When dangerousness/public safety is a factor, response bias cannot be ignored.

  4. Do you have the list of the 41 studies that did look at applied settings? Could you post them or where to get access to them?

    It would be interesting to examine the profiles of how people present themselves given different settings. I saw some slides that showed considerable reduction in the overall profile dependent upon setting.

  5. Hi Dr. Jackson, I do not have the list. However, they are referenced in the article, which is available from the author at: mcgrath@fdu.edu.

