Extracting the most significant factors or dimensions

Factor extraction is the next step in factor analysis. This step determines the most significant factors or dimensions that depict the interrelations among the set of variables (Pallant, 2007).

Extraction applies a method that reduces the number of dimensions while retaining most of the variance in the original data set (Johnson and Wichern, 2007). SPSS provides seven common extraction approaches (Table 5.10) for determining the factors. The factor loading values are printed in the Factor Matrix to indicate the correlations between variables and factors. These values range from -1 to +1, and a higher absolute value (positive or negative) indicates a stronger correlation (Bruin, 2006; Walker and Maddan, 2012).
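
As an illustration only (SPSS itself performs this step through its menus), the following minimal Python sketch, using numpy and scikit-learn on made-up data, extracts two factors and prints a loading matrix; the data set, variable counts, and seed are arbitrary assumptions, not part of the analysis described here.

```python
# Illustrative sketch (not the SPSS routine): extract factors and inspect
# the loadings, i.e. the correlations between variables and factors.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))          # placeholder for a real survey data set
X[:, 1] += X[:, 0]                     # induce correlation among some variables
X[:, 4] += X[:, 3]

Z = StandardScaler().fit_transform(X)  # standardise before extraction
fa = FactorAnalysis(n_components=2).fit(Z)

# For standardised data, the rows of components_ give the factor loading
# matrix: values close to -1 or +1 indicate a strong correlation.
loadings = fa.components_.T            # shape: (n_variables, n_factors)
print(np.round(loadings, 2))
```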

Principal components: Maximise the variance extracted by orthogonal components to convert correlated variables into principal components (Pearson, 1901; Hotelling, 1933).
Principal axis factoring (canonical factor analysis): Maximise the covariance extracted by orthogonal factors to identify the factors which have the highest canonical correlations with the observed variables (Rao, 1955).
Image factoring: Use multiple regression based on the correlation matrix of the predicted variables to provide an empirical factor analysis (Kaiser, 1963).
Maximum likelihood: Estimate factor loadings for the population that maximise the likelihood of sampling the observed correlation matrix (Lawley and Maxwell, 1962).
Alpha factoring: Maximise the generalisability of orthogonal factors (Kaiser and Caffrey, 1965).
Unweighted least squares: Minimise squared residual correlations (Jöreskog, 1977).
Generalised least squares: Weight items by shared variance before minimising squared residual correlations (Browne, 1973).

Table 5.10 Comparison of the factor extraction techniques in SPSS (Tabachnick and Fidell, 2007, p. 633)

Out of these seven techniques, Principal Components Analysis (PCA) (Hotelling, 1933) and Principal Axis Factoring (PAF, or canonical factor analysis, developed by Rao, 1955) are the most popular approaches for extracting factors (Field, 2005; Walker and Maddan, 2012). In PCA, the original variables are transformed into a smaller set of linear combinations, using both variance and covariance (Tabachnick and Fidell, 2007; Walker and Maddan, 2012), to locate principal components based on their eigenvectors (principal directions) and eigenvalues (strength/length) (Smith, 2002; Stevens, 2002). In PAF, factors are estimated using only the covariance (common variance) in order to identify the underlying common factors (Suhr, 2005; Pallant, 2007; Tabachnick and Fidell, 2007).
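
To make the eigenvector and eigenvalue mechanics concrete, here is a short Python sketch (plain numpy on synthetic data, not the SPSS implementation) of PCA-style extraction via eigendecomposition of the correlation matrix; the data and sizes are invented for illustration.

```python
# Sketch of PCA-style extraction: eigendecompose the correlation matrix.
# Eigenvectors give the principal directions; eigenvalues their strength.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))
X[:, 1] += 0.8 * X[:, 0]                  # a correlated pair -> shared component

R = np.corrcoef(X, rowvar=False)          # correlation matrix of the variables
eigenvalues, eigenvectors = np.linalg.eigh(R)
order = np.argsort(eigenvalues)[::-1]     # sort components, largest variance first
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

# Loadings: scale each eigenvector by the square root of its eigenvalue.
loadings = eigenvectors * np.sqrt(eigenvalues)
print("Eigenvalues:", np.round(eigenvalues, 2))
print("Loadings:\n", np.round(loadings, 2))
```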

Both PCA and PAF can yield very similar results (Stevens, 2002), particularly when the correlation coefficients between the original variables are strong (Walford, 2009). Nevertheless, PAF may be more reliable if some measurement error is present (Walford, 2009; Henriques, 2011), whereas PCA is considered the better choice for a simple empirical summary of the data set (Tabachnick and Fidell, 2007).

Once the extraction is completed, it is necessary to determine the number of factors that should be retained. To facilitate this decision, three common techniques can be used; the first and third are also illustrated in the code sketch after Figure 5.7:

(1) Kaiser's criterion (Kaiser, 1960). Only factors with an eigenvalue of 1 or above are retained. The eigenvalue of a factor represents the amount of variance explained by that factor; hence, a factor with an eigenvalue of 1 makes an average contribution to the overall variance, equivalent to that of a single standardised variable.

(2) Cattell's scree plot (Cattell, 1966). This technique uses a graphical representation of the eigenvalues of the successive factors to show how the explained variance tails off. Cattell (1966) proposed retaining all factors above the point at which the scree plot becomes horizontal or levels off (e.g., Figure 5.7).

(3) Horn's parallel analysis (Horn, 1965). This technique uses a simulation method (e.g., the Monte Carlo method of Metropolis and Ulam, 1949) to compare the size of the observed eigenvalues with those obtained from a randomly generated data set of the same size (Pallant, 2007; Walker and Maddan, 2012). Only the factors whose eigenvalues exceed the corresponding eigenvalues derived from the random data set are retained (Pallant, 2007). This statistical sampling technique (Eckhardt, 1987) can correct the bias in Kaiser's criterion, provided a 'sufficiently large' sample is used (Dino, 2009).

Figure 5.7 Scree plot (Walker and Maddan, 2012, p. 466)
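
To show how the first and third retention rules can be applied in practice, the sketch below (Python with numpy on synthetic data; the cut-off logic is a simplified rendering of Horn's procedure, and all sizes and seeds are arbitrary) computes the observed eigenvalues, applies Kaiser's criterion, and runs a Monte Carlo parallel analysis.

```python
# Sketch: retain factors by Kaiser's criterion and Horn's parallel analysis.
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_vars = 300, 8
X = rng.normal(size=(n_obs, n_vars))
X[:, 1] += X[:, 0]; X[:, 3] += X[:, 2]    # two genuinely shared dimensions

def sorted_eigenvalues(data):
    """Eigenvalues of the correlation matrix, largest first."""
    vals = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))
    return vals[::-1]

observed = sorted_eigenvalues(X)

# Kaiser's criterion: keep factors whose eigenvalue exceeds 1.
kaiser_k = int(np.sum(observed > 1))

# Parallel analysis: compare against eigenvalues of random data of the
# same shape, averaged over many Monte Carlo replications.
n_sims = 500
random_eigs = np.array([
    sorted_eigenvalues(rng.normal(size=(n_obs, n_vars)))
    for _ in range(n_sims)
])
threshold = random_eigs.mean(axis=0)      # the 95th percentile is also common
# Simplification: a fuller implementation stops at the first factor that
# fails the comparison rather than counting all exceedances.
parallel_k = int(np.sum(observed > threshold))

print("Kaiser retains:", kaiser_k, "factors")
print("Parallel analysis retains:", parallel_k, "factors")
```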
