Philip Schatz, Ph.D., Brendan Putz, B.S.,
Saint Joseph's University, Department of Psychology, Philadelphia PA.
We documented the stability of scores on three custom-developed computer-based neurological assessment measures, Trails A & B and the Digit-Symbol subtest of the WAIS-R, over a one-year period. Thirty-eight student athletes participated in preseason assessments during the 2000 (Time 1) and 2001 (Time 2) athletic seasons. Spearman's rank-order test-retest correlations were low for Trails A (r = -.10), Trails B (r = .30), and Digit Symbol (r = .11). Dependent-samples t-tests revealed significant improvements over the one-year interval on Trails B (51.8 seconds versus 43.2 seconds) [t(37) = -3.43, p = .001] and the Digit-Symbol subtest (127 seconds to complete all items versus 102 seconds) [t(37) = -7.13, p = .001]. Reliable change indices (RCI) were used to determine the percentage of scores that had changed significantly over the one-year period. RCI equations revealed significant change for 11% of participants on Trails A, 23% on Trails B, and 27% on Digit Symbol. Results suggest that even a relatively low percentage of individual scores changing over a one-year assessment interval can substantially decrease test-retest coefficients. While it is not clear why such task improvements occurred over this period, the computer-based nature of the tasks may have contributed to these improvements, secondary to widespread computer ownership and use by college students.
Researchers have demonstrated psychometric equivalence between traditional and computerized versions of tests (Elwood & Griffin, 1972; Campbell et al., 1999).
Computer-based assessment has inherent features that are absent in traditional forms, such as timing of response latencies, automated analysis of response patterns, transfer of results to a database for further analysis, and the ease with which normative data can be collected and compared to existing normative databases.
Computer-based assessment measures provide more precise control over the presentation of test stimuli, thereby potentially increasing test reliability (Maroon et al., 2000).
The Reliable Change Index (RCI) is designed to detect the statistical significance of observed changes from one testing occasion to another (Maassen, 2000). A change is considered significant if its magnitude is sufficiently large in proportion to the associated error variance of the test.
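As a sketch of the idea, the widely used Jacobson-Truax formulation of the RCI divides the observed score change by the standard error of the difference; this is one of several RCI variants discussed by Maassen (2000), and the Time 1 standard deviation used below is an illustrative value, not one reported in this study.

```python
import math

def reliable_change_index(x1, x2, sd1, r_xx):
    """Jacobson-Truax reliable change index.

    x1, x2 : scores at Time 1 and Time 2
    sd1    : standard deviation of Time 1 scores
    r_xx   : test-retest reliability coefficient
    """
    se_measurement = sd1 * math.sqrt(1 - r_xx)   # standard error of measurement
    s_diff = math.sqrt(2 * se_measurement ** 2)  # standard error of the difference
    return (x2 - x1) / s_diff

# |RCI| > 1.96 indicates change beyond measurement error (p < .05, two-tailed).
# Illustrative values only (Trails B means; sd1 = 12.0 is assumed):
rci = reliable_change_index(x1=51.8, x2=43.2, sd1=12.0, r_xx=0.30)
```

A participant is flagged as having reliably changed when the resulting index exceeds the critical value; the percentage of flagged participants is what Table 1 reports as "RCI % change".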
The purpose of this study was to compare RCIs, t-tests, and correlation coefficients as measures of test stability for scores obtained one year apart.
Participants were 38 college student athletes, ages 17-22, who volunteered for pre-season testing prior to participation in the 2000 and 2001 athletic seasons. Participants were 23 females and 15 males from the Field Hockey and Soccer teams at Saint Joseph's University. Inclusion in the project required completion of two successive years of pre-season testing.
Participants completed computer-based versions of the Trail-Making Tests A & B and the Digit-Symbol subtest of the WAIS-R, created using PowerLaboratory software. For Trails A & B, total completion time was recorded; for Digit Symbol, time to complete all items was recorded.
Spearman's rank-order correlations revealed low test-retest reliability coefficients for all measures.
Dependent samples t-tests revealed significant differences between performance at Time 1 and Time 2, with significant decreases noted in total time to complete all measures.
Reliable Change Indices revealed only a small percentage of subjects had experienced significant changes in time to complete tasks from Time 1 to Time 2.
All results are presented in Table 1.
| Measure | Time 1 | Time 2 | t-test | Spearman's r | RCI % change |
|---|---|---|---|---|---|
| Trails A time | 29.5 sec | 28.4 sec | t(37) = -0.53; p = .60 | r = -.095; p = .58 | 11% |
| Trails B time | 51.8 sec | 43.2 sec | t(37) = -3.43; p = .001 | r = .30; p = .065 | 23% |
| Digit Symbol time | 127 sec | 102 sec | t(37) = -7.13; p = .001 | r = .11; p = .49 | 27% |
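The group-level analyses in Table 1 (dependent-samples t-test and Spearman's rank-order correlation) can be sketched with SciPy. The raw scores were not published with the poster, so the arrays below are randomly generated placeholders centered near the Trails B means; only the analysis calls, not the data, reflect the study.

```python
import numpy as np
from scipy import stats

# Hypothetical placeholder data: 38 paired completion times (seconds),
# simulated near the reported Trails B means.
rng = np.random.default_rng(seed=42)
time1 = rng.normal(loc=51.8, scale=12.0, size=38)   # preseason 2000
time2 = rng.normal(loc=43.2, scale=10.0, size=38)   # preseason 2001

t_stat, t_p = stats.ttest_rel(time1, time2)   # dependent-samples t-test
rho, rho_p = stats.spearmanr(time1, time2)    # rank-order test-retest correlation
```

Note that the t-test asks whether the group mean shifted, while the correlation asks whether individuals kept their rank order; as the Results show, these can dissociate.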
Even a small percentage of individual scores changing over a one-year assessment interval can substantially decrease test-retest coefficients.
Computer-based tasks may be more susceptible to test-retest improvements, secondary to widespread computer ownership and use by college students.
Developers of computer-based measures should perform longitudinal research on test psychometrics, controlling for computer expertise.
Campbell KA, Rohlman DS, Storzbach D, et al. Test-retest reliability of psychological and neurobehavioral tests self-administered by computer. Assessment. 1999;6:21-32.
Elwood DL, Griffin R. Individual intelligence testing without the examiner. J Consult Clin Psychol. 1972;38:9-14.
Maroon JC, Lovell MR, Norwig J, Podell K, Powell JW, Hartl R. Cerebral concussion in athletes: evaluation and neuropsychological testing. Neurosurgery. 2000;4:659-669.