Computer-based Assessment of Malingering: Response Requirement Effects

Philip Schatz, Ph.D., Saint Joseph's University, Psychology Department
Stephen Moelter, M.S., Drexel University, Neuropsychology Program
Maria Fragnito, Saint Joseph's University, Psychology Department


Abstract

Four computer-based tests were administered to 42 normal subjects who were provided with detailed instructions to do their best or to fake bad. The tasks were: 1) a revised version of the Rey 15-item test; 2) a computerized version of the Rey Dot-Counting test; 3) the University of Pennsylvania Trucks test; and 4) a custom fill-in-the-blank "Phrases" test (e.g., "_____ birthday to you."). Experimental tasks were administered in this order to all subjects, each of whom completed both experimental conditions, which were counterbalanced. Millisecond-accurate reaction time data were recorded (PowerLaboratory; Chute & Westall, 1996) for all subject responses throughout the experiments, including responses to stimuli as well as to instruction screens.

Subjects who were "faking bad" achieved significantly lower percent-correct scores on all experimental tasks. On tasks where obvious "bad" responses were not readily apparent (15-item, Phrases), subjects faking bad required significantly more time to respond. In contrast, where the alternate response was a mouse-click away (Trucks) or obviously a number (Dot-Counting), the groups did not differ on response time, and those faking bad actually completed these tasks in less total time.

Analysis of response time to identical instruction screens revealed a learning curve for both groups. This would be expected for those subjects who were doing their best. However, for subjects who were faking bad, such a learning curve in the context of an inability to count dots, recall simple items, or complete over-learned phrases is clearly "out of register".

The response requirements and reaction time components of these tests are sufficiently complex that profiling of individuals exhibiting inconsistent response sets or questionable motivation on evaluations may be possible.


Introduction:

Neuropsychologists are often asked to identify potential malingerers. In making these judgments the neuropsychologist attempts to document, with objective assessment procedures, evidence of neurobehavioral and/or neuropsychological dysfunction (Iverson & Franzen, 1996).

Individuals attempting to "fake bad" will often perform worse than individuals with actual brain injuries. Symptom Validity Tests are measures designed to detect test performance so poor that it falls below plausible levels, even for impaired populations (Etcoff & Kampfer, 1996).

Computer-based assessment techniques can objectively and unobtrusively record response reaction times. Many individuals attempting to malinger fail to realize this measurement is being recorded.

Individuals attempting to malinger may know to perform to a certain level to avoid suspicion of symptom exaggeration. However, even if they are aware that reaction time is being recorded, it is difficult for these individuals to calculate or manipulate response reaction time with any degree of sophistication (Guttierrez & Gur, 1998).


Purpose

To determine if computer-based tests of malingering will successfully discriminate between individuals who are faking symptoms and those who are attempting to do their best. Response reaction times will also be evaluated as a means of discriminating between these groups. Performance on forced-choice tasks which have obvious incorrect alternatives will be compared to performance on tasks which require user-generated solutions with less obvious alternatives.

H1 - Individuals who are faking symptoms will make significantly more errors than those performing their best.

H2 - Reaction times on instruction screens will be similar for individuals faking symptoms and those performing their best. The lack of delayed response on instruction screens will be "out of register" with impaired performance on subsequent tasks for individuals faking symptoms.

H3 - Reaction times will be longer for individuals faking symptoms when they must generate an incorrect response to a simple item with an obvious correct answer.


Methods

Participants: Participants were 42 students at Saint Joseph's University who participated in partial fulfillment of course requirements.

Apparatus: Tasks were administered to participants on Macintosh Computers running PowerLaboratory Software (© Chute & Westall, 1996). Experimental stimuli (Figure 1) consisted of:

  1. a revised version of the Rey 15-item test;
  2. a computerized version of the Rey Dot-Counting test;
  3. the University of Pennsylvania Trucks test;
  4. a custom fill-in-the-blank "Phrases" test (e.g., "_____ birthday to you.")

Procedures: Experimental tasks were administered in the above order to all participants, each of whom completed both experimental conditions (fake symptoms/no symptoms), which were counterbalanced. Millisecond-accurate reaction time data were recorded for all responses throughout the experiments, including responses to stimuli as well as to instruction screens. All participants received the same instructions explaining the symptoms they were to portray (Figure 2).
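The timing procedure above can be sketched in a few lines. This is a hypothetical Python illustration of latency capture, not PowerLaboratory's actual API; the prompt, the `timed_response` helper, and the canned responder are all assumptions for demonstration.

```python
import time

def timed_response(prompt, get_response):
    """Present a prompt and return (response, latency in ms).

    The clock runs from stimulus onset until the responder returns,
    mirroring how a testing program can log latency unobtrusively.
    """
    start = time.perf_counter()
    response = get_response(prompt)  # e.g., a key press or mouse click
    latency_ms = (time.perf_counter() - start) * 1000.0
    return response, latency_ms

# A canned responder stands in for a participant here:
resp, rt = timed_response("_____ birthday to you.", lambda p: "Happy")
```

The same wrapper can time instruction screens as well as stimuli, which is what makes the "learning curve on instructions versus failure on items" comparison possible.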


Results:

Individuals faking symptoms provided significantly more incorrect responses on all tasks (Fig. 3).
Task                Fake Symptoms   Not Faking   Results
15-item                  26%            76%      F(1,74)=24.7; p<.001
Dots-Random              31%            88%      F(1,64)=74.4; p<.001
Dots-Grouped             41%            94%      F(1,64)=53.0; p<.001
Trucks                   23%            43%      F(1,64)=5.4;  p<.05
Phrases: Fill-in         61%           100%      F(1,73)=34.4; p<.001
Phrases: Words           60%            98%      F(1,73)=31.4; p<.001
Phrases: Pictures        42%            98%      F(1,73)=25.4; p<.001
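The F ratios above come from one-way comparisons of the two conditions on each task. As a rough sketch of that computation, a two-group one-way ANOVA can be written out directly; the scores below are made up for illustration and are not the study's data.

```python
def one_way_anova(group_a, group_b):
    """Return (F, df_between, df_within) for a two-group one-way ANOVA."""
    scores = group_a + group_b
    grand_mean = sum(scores) / len(scores)
    means = [sum(g) / len(g) for g in (group_a, group_b)]
    # Between-group sum of squares: distance of each group mean from the grand mean.
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip((group_a, group_b), means))
    # Within-group sum of squares: spread of scores around their own group mean.
    ss_within = sum((x - m) ** 2
                    for g, m in zip((group_a, group_b), means)
                    for x in g)
    df_between = 1                 # k - 1, with k = 2 groups
    df_within = len(scores) - 2    # N - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

# Hypothetical percent-correct scores for each condition:
faking = [20, 25, 30]
best_effort = [70, 75, 80]
f, df1, df2 = one_way_anova(faking, best_effort)
```

With a library available, `scipy.stats.f_oneway(faking, best_effort)` gives the same F and an exact p value.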

On tasks where correct answers were obvious (15-item, Dots-Grouped, Phrases-All), response reaction times were significantly longer for individuals faking symptoms. On tasks where correct answers were less obvious (Dots-Random, Trucks), task RTs were significantly shorter for individuals faking symptoms (Figure 4).

Response reaction times for instruction screens revealed a learning curve for both groups. While this would be expected for individuals free from symptomatology, it is clearly out of register for individuals claiming symptoms (Figure 5).


Discussion:

Computer-based assessment of malingering offers the advantage of unobtrusively recording response reaction times. Even when individuals faking symptoms are aware that reaction times are being recorded, the sophisticated strategies required to manipulate them are not readily available.

Our results revealing learning curves in reaction times on instruction screens support the notion that individuals faking symptoms are unaware that response reaction times are being recorded.

Significant differences between individuals faking and not faking symptoms on all tasks suggest that these tasks warrant further psychometric study regarding their utility in detecting malingerers.

Individuals faking symptoms required more time to generate incorrect responses on tasks where correct answers were obvious. On tasks where correct answers were not obvious, these same individuals required less time and appeared to simply choose a tangible alternate answer.

The combined response requirements and reaction time components of these tests are sufficiently complex that profiling of individuals exhibiting inconsistent response sets or questionable motivation on evaluations may be possible.


References:

Chute, D.L. & Westall, B. (1996). PowerLaboratory Software. Philadelphia, PA: MacLaboratory, Inc.

Etcoff, L.M. & Kampfer, K.M. (1996). Practical guidelines in the use of symptom validity and other psychological tests to measure malingering and symptom exaggeration in traumatic brain injury cases. Neuropsychology Review, 6(4), 171-201.

Guttierrez, J.M. & Gur, R.C. (1998). Detection of malingering using forced-choice techniques. In Reynolds (Ed.), Detection of Malingering During Head Injury Litigation, (pp. 81-104). New York: Plenum Press.

Iverson, G.L. & Franzen, M.D. (1996). Using multiple objective memory procedures to detect simulated malingering. Journal of Clinical and Experimental Neuropsychology, 18(1), 38-51.