Correlation
 estimate portion of variance that is error variance
 degree of consistency or agreement between two independently derived sets of scores
 stated as a correlation coefficient, -1.0 to +1.0
 Example
Problem of Intervening Variables: Spurious Relationships
 x= ice cream sales; y=violent crime; z= heat waves
 x= ice cream sales; y=drownings; z= heat waves
 x= number of electrical appliances; y=decreased birth rates; z= industrialization
 x= smoking; y=lung cancer; z= tissue damage
 x= age; y=reading ability; z= education
Pearson's Product-Moment Correlation Coefficient
 person's position in group and amount of deviation from group mean
 significance depends on size of sample
 10 cases: r = .40 is not significant
 100 cases: r = .40 is significant
 more detail
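The sample-size point above can be sketched in pure Python: Pearson's r, plus the usual t statistic for testing r against zero (t = r*sqrt(n-2)/sqrt(1-r^2)), which shows why the same r = .40 is significant at n = 100 but not at n = 10. A minimal sketch, not a statistics-library replacement:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson's product-moment r: covariance over the product of SDs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def t_stat(r, n):
    """t for testing rho = 0; grows with sqrt(n), so the same r gains significance."""
    return r * sqrt(n - 2) / sqrt(1 - r ** 2)

print(round(t_stat(0.40, 10), 2))   # ~1.23, below the .05 critical t of ~2.31 (df = 8)
print(round(t_stat(0.40, 100), 2))  # ~4.32, above the .05 critical t of ~1.98 (df = 98)
```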
Spearman's Rank-Order Correlation Coefficient
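A hedged sketch of the rank-order coefficient, assuming no tied ranks (ties would need averaged ranks, which this skips):

```python
def ranks(values):
    """1-based ranks; ties are broken by position, not averaged (a simplification)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    out = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        out[i] = rank
    return out

def spearman_rho(xs, ys):
    """Spearman's rho via the no-ties shortcut: 1 - 6*sum(d^2) / (n*(n^2 - 1))."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

Because it works on ranks, rho captures any monotonic relationship, not just a linear one.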
Correlation versus Regression (Variance)
 r_xy = correlation between x and y
ex: x = SAT scores, y = first-year, first-semester (FYFS) GPA: r = .30 to .40
 r²_xy = variance in y explained by x
ex: x = SAT scores, y = FYFS GPA: r² = 10 to 20%
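The r versus r² distinction can be checked directly; squaring the r = .30 to .40 range quoted above gives roughly the 10 to 20% figure:

```python
# Squaring r gives the proportion of variance in y explained by x.
for r in (0.30, 0.40):
    print(f"r = {r:.2f} -> r^2 = {r * r:.0%} of FYFS GPA variance explained")
```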
Test-Retest Reliability
 repeat identical test on a second occasion
 correlation between scores obtained by same person
 The scores should be the same minus the error variance
 error variance corresponds to random fluctuations in performance
 i.e., broken pencil, illness, fatigue...
 practice effects
 must state interval, as r decreases with time
 >6 months = Coefficient of Stability
 MMPI example
Scorer/InterRater Reliability
 measure of examiner variance
 objective versus subjective measures
 high degree of judgement = high chance of variance
 measure degree of consistency between two or three examiners
 .80 or better is good
 can use Pearson's, Spearman's, or kappa
 more detail
Alternate-Form Reliability:
 to avoid problems with test-retest
 use of comparable forms
 measures "temporal stability"
 also measures consistency of response to different item samples
 concept of "item sampling"
lucky break versus hard test... to what extent do scores on the test depend on
factors specific to the selection of items?
 short interval = measure of relationship between forms
 long interval = measure of test-retest and alternate forms
 very time consuming and work intensive
 more detail
Split-Half Reliability
 single administration of a test, split in half
 two scores for each person
 measure of consistency of content sampling
 odd versus even
 more detail
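The odd/even split can be sketched end to end. Splitting halves the test's length, so the half-test correlation is usually stepped back up with the Spearman-Brown formula (2r / (1 + r)), which the next section names:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson's r (repeated here so the sketch is self-contained)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def split_half_reliability(item_scores):
    """item_scores: one list of per-item scores per person; odd vs. even items."""
    odd = [sum(person[0::2]) for person in item_scores]
    even = [sum(person[1::2]) for person in item_scores]
    r_half = pearson_r(odd, even)
    return 2 * r_half / (1 + r_half)  # Spearman-Brown step-up to full length
```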
Methods of Estimating Internal Consistency
Method used to determine the extent to which all the items on a given test measure the same skill (i.e., the test "consistently" measures the same skill).
 Spearman-Brown Formula.
 Inter-Item Consistency Formula:
 The degree of correlation between all the items on a scale (e.g., Cronbach's alpha).
 Tests of Homogeneity:
 The degree to which a test measures a single factor, or the extent to which items in a scale are unifactorial.
 Tests of Heterogeneity: The degree to which a test measures different factors.
 The Kuder-Richardson formulas: Used on tests with dichotomous items.
 Kuder-Richardson 20 (KR-20): the mean of all possible split-half coefficients
 The Coefficient Alpha: used for tests that contain non-dichotomous items (items that can individually be scored along a range of values, such as attitude polls and essay tests).
 Most often used for survey scales and objective assessments
 yields a lower bound estimate of reliability
 can be negative if interitem correlations are negative
 if items are dichotomously scored, coefficient alpha equals the KR-20 value
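The alpha/KR-20 relationship in the last bullet can be verified numerically: alpha uses each item's variance, KR-20 uses each item's pass rate p (the variance of a dichotomous item is p(1-p)), so on 0/1 data the two coincide. A sketch using population variances:

```python
def _var(vals):
    """Population variance (divide by N, not N - 1)."""
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def cronbach_alpha(item_scores):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(item_scores[0])
    item_vars = [_var([p[i] for p in item_scores]) for i in range(k)]
    totals = [sum(p) for p in item_scores]
    return k / (k - 1) * (1 - sum(item_vars) / _var(totals))

def kr20(item_scores):
    """KR-20: the same formula with p*(1-p) as each dichotomous item's variance."""
    k = len(item_scores[0])
    n = len(item_scores)
    ps = [sum(p[i] for p in item_scores) / n for i in range(k)]
    totals = [sum(p) for p in item_scores]
    return k / (k - 1) * (1 - sum(p * (1 - p) for p in ps) / _var(totals))
```

On a small dichotomous example such as [[1,1,1],[1,1,0],[1,0,0],[0,0,0]], both functions return the same value (.75 here), matching the bullet above.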
