Correlation
- estimates the proportion of score variance that is error variance
- degree of consistency or agreement between two independently derived sets of scores
- stated as a correlation coefficient ranging from -1.0 to +1.0
Pearson's Product-Moment Correlation Coefficient
- takes into account both a person's position in the group and the amount of deviation from the group mean
- significance depends on sample size
- with 10 cases, r = .40 is not significant
- with 100 cases, r = .40 is significant
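The dependence of significance on sample size can be checked directly. A minimal sketch (function names are mine): compute Pearson's r from deviation scores, then the usual t statistic for a correlation, t = r·sqrt(n-2)/sqrt(1-r²), compared against the critical t for df = n-2.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment r: sum of cross-products of deviations
    from each group mean, divided by the product of the deviation SSqs."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def t_statistic(r, n):
    """t = r * sqrt(n - 2) / sqrt(1 - r^2), with df = n - 2."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

# The same r = .40 yields very different t values as n grows:
print(round(t_statistic(0.40, 10), 2))   # ~1.23, below t_crit(df=8) = 2.31 -> not significant
print(round(t_statistic(0.40, 100), 2))  # ~4.32, above t_crit(df=98) = 1.98 -> significant
```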
Spearman's Rank-Order Correlation Coefficient
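Spearman's coefficient correlates ranks rather than raw scores. A hypothetical sketch using the difference-of-ranks formula, rho = 1 - 6·Σd²/(n(n²-1)), which holds when there are no tied scores:

```python
def rank(values):
    """Rank scores 1..n (1 = lowest); assumes no tied scores."""
    order = sorted(values)
    return [order.index(v) + 1 for v in values]

def spearman_rho(x, y):
    """rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)), d = rank difference."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Monotonic but non-linear data still gives a perfect rank-order correlation:
print(spearman_rho([1, 2, 3, 4, 5], [2, 4, 8, 16, 32]))  # 1.0
```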
Test-retest Reliability
- repeat the identical test on a second occasion
- the reliability coefficient is the correlation between the scores obtained by the same person on the two administrations
- the scores should be identical except for error variance
- error variance corresponds to random fluctuations in performance
- i.e., broken pencil, illness, fatigue...
- practice effects
- must state interval, as r decreases with time
- >6 months = Coefficient of Stability
- MMPI example
Scorer/Inter-Rater Reliability
- measure of examiner variance
- objective versus subjective measures
- the higher the degree of judgment required in scoring, the greater the chance of examiner variance
- measures the degree of consistency between two or more examiners
- .80 or better is good
- can use Pearson's, Spearman's, or kappa
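For categorical ratings, Cohen's kappa corrects the observed agreement for agreement expected by chance: kappa = (p_o - p_e) / (1 - p_e). A minimal sketch with hypothetical ratings (function name and data are mine):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Kappa = (p_o - p_e) / (1 - p_e): observed agreement corrected
    for the agreement expected by chance alone."""
    n = len(r1)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n   # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    p_e = sum(c1[k] * c2[k] for k in c1) / n ** 2    # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Two raters assign diagnostic categories to ten hypothetical cases:
rater1 = ["A", "A", "B", "B", "A", "B", "A", "A", "B", "A"]
rater2 = ["A", "A", "B", "A", "A", "B", "A", "A", "B", "A"]
print(round(cohens_kappa(rater1, rater2), 2))  # ~0.78
```

Note that 9/10 raw agreement shrinks to kappa ≈ .78 once chance agreement is removed, which is why kappa is preferred over simple percent agreement.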
Alternate-Form Reliability
- to avoid problems with test-retest
- use of comparable forms
- measures "temporal stability"
- also measures consistency of response to different item samples
- concept of "item sampling"
- a lucky break versus a hard test... to what extent do scores on the test depend on factors specific to the selection of items?
- short interval = measure of relationship between forms
- long interval = measure of test-retest and alternate forms
- very time-consuming and labor-intensive
Split-Half Reliability
- single administration of test - split in half
- two scores for each person
- measure of consistency of content sampling
- typically odd- versus even-numbered items
Methods of Estimating Internal Consistency
Methods used to determine the extent to which all the items on a given test measure the same skill (i.e., the test is "consistently" measuring the same skill).
- Spearman-Brown Formula: estimates the reliability of the full-length test from the correlation between its halves.
- Inter-item Consistency Formula:
- The degree of correlation between all the items on a scale (e.g., Cronbach's alpha).
- Tests of Homogeneity:
- The degree to which a test measures a single factor, or the extent to which items in a scale are unifactorial.
- Tests of Heterogeneity: The degree to which a test measures different factors.
- The Kuder-Richardson formulas: Used on tests with dichotomous items.
- Kuder-Richardson 20 (KR20) - mean of all possible split-half coefficients
- The Coefficient Alpha: used for tests that contain non-dichotomous items (items that can individually be scored along a range of values, such as attitude polls and essay tests).
- Most often used for survey scales and objective assessments
- yields a lower bound estimate of reliability
- can be negative if inter-item correlations are negative
- if items are dichotomously scored, coefficient alpha equals KR20 value
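Coefficient alpha can be sketched directly from its definition, alpha = k/(k-1) · (1 - Σ(item variances)/variance(totals)). The data below are hypothetical; population variances are used throughout (either estimator works if used consistently). Because the items are scored 0/1, the result is also the KR-20 value:

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(totals)).
    For dichotomous (0/1) items this equals KR-20."""
    k = len(item_scores[0])
    item_vars = [pvariance([p[i] for p in item_scores]) for i in range(k)]
    total_var = pvariance([sum(p) for p in item_scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 0/1 responses for five examinees on a 4-item test:
scores = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]
print(round(cronbach_alpha(scores), 2))  # 0.8
```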