Peter H. Schönemann
Professor Emeritus • Department of Psychological Sciences • Purdue University

Note: numbers in brackets refer to Publications list

Factor Analysis

Early work on Procrustes methods (least-squares maps T for given A, B in B = AT + E, chosen to minimize the sum of squared residuals in E, usually under some constraint on T, such as, in this case, orthogonality [1, 3, 6, 8, 17, 21]) and methods for machine rotation to simple structure [4].
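In the orthogonal case the least-squares map has a closed-form solution via the singular value decomposition of A'B. A minimal NumPy sketch (the data here are illustrative, not taken from any of the papers):

```python
import numpy as np

def orthogonal_procrustes(A, B):
    """Orthogonal T minimizing ||A @ T - B||_F (the closed-form SVD solution)."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

# Usage: recover a known orthogonal map from noiseless data.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 4))
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # a random orthogonal matrix
T = orthogonal_procrustes(A, A @ Q)
print(np.allclose(T, Q))   # True
```

With noisy B the recovered T is still the best orthogonal fit in the least-squares sense, which is what makes the method useful for comparing factor solutions across studies.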

After finally having been made aware of it (by Heerman at a meeting of the Psychometric Society in 1964, after obtaining a Ph.D. in psychometrics at the UofI), intensive study of the implications and ramifications of factor indeterminacy: in the classical factor model, the number of factors always exceeds the number of observed variables, so that no unique solution for "factor scores" exists [11, 12, 20, 24, 26]. This defect of the factor model vitiates any claim that factors provide an objective basis for defining "intelligence", which had been Spearman's declared objective [40, 47, 52, 57, 75, 76]. The same indeterminacy affects models of the LISREL type.
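The indeterminacy is easy to exhibit numerically. The sketch below (a standard textbook construction, not the notation of any particular paper above) builds two sets of factor scores for a hypothetical one-factor model that both reproduce the loadings and the unit factor variance exactly, yet correlate with each other at only 2*rho2 - 1, the known minimum:

```python
import numpy as np

rng = np.random.default_rng(0)
p, m, n = 5, 1, 50   # observed variables, common factors, "observations"

# Hypothetical one-factor model: all loadings 0.6, unit variances.
lam = np.full((p, m), 0.6)
Sigma = lam @ lam.T + np.diag(np.full(p, 1.0 - 0.36))   # Lambda Lambda' + Psi

# Build "data" Z with exactly the model covariance, plus an extra
# orthonormal direction S uncorrelated with every observed variable.
Q, _ = np.linalg.qr(rng.standard_normal((n, p + m)))
L = np.linalg.cholesky(Sigma)
Z = np.sqrt(n) * Q[:, :p] @ L.T      # Z'Z / n == Sigma exactly
S = np.sqrt(n) * Q[:, p:]            # Z'S == 0,  S'S / n == I

w = np.linalg.solve(Sigma, lam)      # regression weights Sigma^{-1} Lambda
rho2 = (lam.T @ w).item()            # squared multiple correlation of f on z
arb = S * np.sqrt(1.0 - rho2)        # admissible arbitrary component (m = 1)

f1 = Z @ w + arb                     # two equally valid sets of factor scores
f2 = Z @ w - arb

print(np.allclose(Z.T @ f1 / n, lam))    # both reproduce the loadings: True
print(np.allclose(Z.T @ f2 / n, lam))    # True
print(round((f1.T @ f2 / n).item(), 3))  # yet corr(f1, f2) = 2*rho2 - 1 ~ 0.475
```

Nothing in the model can distinguish f1 from f2, so any substantive interpretation of "the" factor scores is underdetermined by the data.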

E. B. Wilson, an acknowledged scholar of the first rank, first drew attention to this issue in 1928. It was subsequently discussed by numerous competent psychometricians and statisticians (see [24] for an unsanitized history of this problem). During the Thurstone era of classical psychometrics, the whole problem area faded into oblivion, until it was eventually revived in the early 1970s. It is still the subject of debate today [79, 80]. Most importantly, it bears directly on Jensen's specious claim that Spearman's g factor provides "an operational definition of intelligence".

Papers [26, 40, 52] highlight one of several peculiar consequences this indeterminacy implies for the classical factor model:

The factors of the factor model can always be chosen in such a way that they predict any criterion whatever perfectly (in a multiple regression sense).

For example, regardless of the observed variables from which the factors are derived, they can always be chosen so as to predict the dates of Easter Sunday perfectly [52].
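One way to see why such a construction is possible (a sketch in standard factor-analytic notation; the details of the construction in [52] may differ):

```latex
\[
  z \;=\; \Lambda f + \Psi^{1/2} u, \qquad \Sigma \;=\; \Lambda\Lambda' + \Psi ,
\]
so the $p$ observed variates are accounted for by $m + p$ latent ones. Every
\[
  f^{*} \;=\; \Lambda'\Sigma^{-1} z + P s, \qquad
  P P' \;=\; I_m - \Lambda'\Sigma^{-1}\Lambda ,
\]
with $s$ arbitrary subject to $\operatorname{Cov}(z,s) = 0$ and
$\operatorname{Var}(s) = I_m$, satisfies the model. For any criterion $y$ with
residual $e = y - \hat{y}$ after regression on the observed $z$, choosing $s$
to contain $e/\sigma_e$ as a component places $y$ in the span of $(z, s)$,
hence in the span of the admissible factor scores $(f^{*}, u^{*})$; the
multiple regression of $y$ on the factors is then exact.
```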

Paper [32] shows that the power of maximum likelihood factor analysis is poor, barely exceeding twice the alpha level for moderate sample sizes (100-200). Paper [20] presents a comprehensive discussion of an alternative to the factor model, regression component analysis. This methodology is not afflicted by any indeterminacy problems, and in practice it gives very similar numerical results. However, in contrast to the factor model, in the absence of further constraints it does not pose as a falsifiable theory; rather, it is a purely descriptive method for reducing the data at hand.