Peter H. Schonemann and James H. Steiger
Regression component analysis
British Journal of Mathematical and Statistical Psychology, 1976, 29, 175-189.
Regression component decompositions (RCD) are defined as a special case of component decompositions where the pattern contains the regression weights for predicting the observed variables from the latent variables. Compared to factor analysis, RCD has a broader range of applicability, greater ease and simplicity of computation, and a more logical and straightforward theory.
The usual distinction between factor analysis as a falsifiable model, and component analysis as a tautology, is shown to be misleading, since a special case of regression component decomposition can be defined which is not only falsifiable, but empirically indistinguishable from the factor model.
In this paper we explore alternatives to the factor model that are unaffected by indeterminacy problems. We found that a straightforward generalization of the familiar principal component decomposition fills the bill in almost all respects - ease of computability, least squares properties, uniquely defined component scores, etc. - except one: such component decompositions are in general not falsifiable. They are simply tautological transformations of the data - nothing is proven by them. However, as noted in the Abstract, they can be rendered falsifiable by adjoining suitable constraints (e.g., that the last p-m latent roots be equal, or that the regression weights can be rotated to simple structure).
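The tautological character of an unconstrained component decomposition can be seen in a small numerical sketch (ours, not from the paper; all variable names are illustrative). With the pattern taken as the regression weights of the observed variables on the components, retaining all p components reproduces the data exactly, so nothing is tested; only an added constraint, such as equality of the last p-m latent roots, could be contradicted by data.

```python
import numpy as np

# Illustrative sketch (not code from the paper): a principal component
# decomposition viewed as a regression component decomposition.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))      # n subjects x p observed variables
X = (X - X.mean(0)) / X.std(0)         # standardize columns
R = X.T @ X / X.shape[0]               # correlation matrix

evals, V = np.linalg.eigh(R)           # eigh returns ascending eigenvalues
order = np.argsort(evals)[::-1]
evals, V = evals[order], V[:, order]

# Pattern P: regression weights of observed variables on the components.
P = V * np.sqrt(evals)
# Component scores F: uniquely defined, unit-variance, uncorrelated.
F = X @ V / np.sqrt(evals)

# Retaining all p components reproduces X exactly -- a tautology:
assert np.allclose(F @ P.T, X)

# Retaining m < p components gives a least-squares approximation;
# falsifiability enters only through added constraints on the roots
# or on the rotated pattern, which the data can contradict.
m = 2
X_hat = F[:, :m] @ P[:, :m].T
```

The decomposition "always works" on any data set in exactly the sense criticized in the text: the assertion above holds for arbitrary full-rank data, so fitting it establishes nothing.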
In fact, it is precisely this type of invariance condition across samples of tests and subjects, the so-called hierarchy requirement, which rendered Spearman's factor model into a falsifiable (and, unfortunately, universally falsified) theory of intelligence. Another example of a strong falsifiability constraint is of course Simple Structure, though it was usually not treated as a falsifiable hypothesis but taken for granted, as if it were a gift from heaven.
By substituting the first principal component (PC1) for Spearman's g, Jensen in effect deflated Spearman's intelligence theory into a tautological component decomposition which is no longer falsifiable. It always has to work locally (on a given data set). However, in this case the problem is to establish invariance across studies and samples of tests, which is by no means trivial. In particular, PC1s from different studies are not "the same" simply because the regression weights look similar. For more on this abuse of language see Schonemann (1998), especially the Reply to Garriga-Trillo (p. 804f).
The main point is that component decompositions and factor analyses, though superficially similar in computational respects, are fundamentally different in purpose and claim: if the factor model were to fit for a small number of dimensions, it would establish a striking fact of nature, a significant discovery. That had been Spearman's objective and claim. On the other hand, if one uses the technology of factor analysis as a "general scientific method" (Thurstone) without testing any falsifiable hypotheses, then it degenerates into an unnecessarily complicated tool for data reduction, for which regression component decompositions provide a much simpler and less problematic alternative.