# 16 The Relationship Between The Three Tables

To see the relationships among the three tables, let’s start from the Factor Matrix (or Component Matrix in PCA); we will use the term factor to cover components in PCA as well. The elements of this matrix represent the correlation of each item with each factor. Now, square each element to obtain the squared loadings, i.e., the proportion of variance in the item explained by each factor. Summing the squared loadings across factors gives you the proportion of the item’s variance explained by all factors in the model. This is known as common variance or communality, and these values make up the Communalities table.

Going back to the Factor Matrix, if you square the loadings and sum down the items you get the Sums of Squared Loadings (in PAF) or eigenvalues (in PCA) for each factor. These become the elements of the Total Variance Explained table. Summing down the rows of that table (i.e., summing across the factors) under the Extraction column we get 2.511 + 0.499 = 3.01, the total (common) variance. In words, this is the total (common) variance explained by the two-factor solution for all eight items. Equivalently, since the Communalities table represents the total common variance explained by both factors for each item, summing down the items in the Communalities table also gives you the total (common) variance explained, in this case

0.437 + 0.052 + 0.319 + 0.460 + 0.344 + 0.309 + 0.851 + 0.236 = 3.01

which is the same result we obtained from the Total Variance Explained table.

In summary:

- Squaring the elements in the Factor Matrix gives you the squared loadings
- Summing the squared loadings of the Factor Matrix across the factors gives you the communality estimates for each item in the Extraction column of the Communalities table.
- Summing the squared loadings of the Factor Matrix down the items gives you the Sums of Squared Loadings (PAF) or eigenvalue (PCA) for each factor across all items.
- Summing the eigenvalues or Sums of Squared Loadings in the Total Variance Explained table gives you the total common variance explained.
- Summing down all items of the Communalities table is the same as summing the eigenvalues or Sums of Squared Loadings down all factors under the Extraction column of the Total Variance Explained table.
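The bullet points above can be sketched numerically. The loading matrix below is a small hypothetical example (four items, two factors), not the actual SAQ-8 solution, but the row/column bookkeeping is exactly the same:

```python
# HYPOTHETICAL 4-item, 2-factor loading matrix (illustration only,
# not the actual SAQ-8 Factor Matrix).
loadings = [
    [0.60, 0.20],   # item 1
    [0.50, -0.30],  # item 2
    [0.70, 0.10],   # item 3
    [0.40, 0.50],   # item 4
]

# Communality for each item: sum of squared loadings ACROSS factors (rows).
# These are the Extraction values of the Communalities table.
communalities = [sum(l ** 2 for l in row) for row in loadings]

# Sums of Squared Loadings (PAF) / eigenvalues (PCA) for each factor:
# sum of squared loadings DOWN the items (columns). These are the
# Extraction values of the Total Variance Explained table.
ssl = [sum(row[j] ** 2 for row in loadings) for j in range(2)]

# Both routes give the same total common variance explained.
total_from_items = sum(communalities)    # summing the Communalities table
total_from_factors = sum(ssl)            # summing the Total Variance Explained table
assert abs(total_from_items - total_from_factors) < 1e-12
```

For this hypothetical matrix, both sums come out to 1.65, mirroring how 3.01 is reached by either route in the tables above.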

**Quiz**

**True or False** (the following assumes a two-factor Principal Axis Factor solution with 8 items)

1. The elements of the Factor Matrix represent correlations of each item with a factor.
2. Each squared element of Item 1 in the Factor Matrix represents the communality.
3. Summing the squared elements of the Factor Matrix down all 8 items within Factor 1 equals the first Sums of Squared Loadings under the Extraction column of the Total Variance Explained table.
4. Summing down all 8 items in the Extraction column of the Communalities table gives us the total common variance explained by both factors.
5. The total common variance explained is obtained by summing all Sums of Squared Loadings of the Initial column of the Total Variance Explained table.
6. The total Sums of Squared Loadings in the Extraction column of the Total Variance Explained table represents the total variance, which consists of total common variance plus unique variance.
7. In common factor analysis, the sum of squared loadings is the eigenvalue.

**Answers:**

1. T
2. F, the communality is the sum of the squared elements across both factors
3. T
4. T
5. F, sum all eigenvalues from the Extraction column of the Total Variance Explained table
6. F, the total Sums of Squared Loadings represents only the total *common* variance, excluding unique variance
7. F, eigenvalues are only applicable for PCA

**Maximum Likelihood Estimation (2-factor ML)**

Since this is a non-technical introduction to factor analysis, we won’t go into detail about the differences between Principal Axis Factoring (PAF) and Maximum Likelihood (ML). The main concept to know is that ML is also a common factor analysis method that uses R² (the squared multiple correlation) to obtain initial estimates of the communalities, but uses a different iterative process to obtain the extraction solution. To run a factor analysis using maximum likelihood estimation, under Analyze – Dimension Reduction – Factor – Extraction – Method choose Maximum Likelihood.
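The R² initial estimate is the squared multiple correlation (SMC) of each item regressed on all the others, which can be read off the inverse of the item correlation matrix as SMC_i = 1 − 1/(R⁻¹)_ii. A minimal sketch for the simplest possible case, two items with a hypothetical correlation of 0.3, where the SMC reduces to r²:

```python
# Initial communality estimate as a squared multiple correlation (SMC):
# SMC_i = 1 - 1/(R_inv)_ii, where R is the item correlation matrix.
# Two-item case with a HYPOTHETICAL correlation r = 0.3, so each item's
# SMC should reduce to r**2 = 0.09.

r = 0.3
R = [[1.0, r], [r, 1.0]]

# Inverse of a 2x2 matrix [[a, b], [c, d]] is [[d, -b], [-c, a]] / det.
det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
R_inv_diag = [R[1][1] / det, R[0][0] / det]  # diagonal of R inverse

smc = [1 - 1 / d for d in R_inv_diag]  # initial communality estimates

# Regressing either item on the other gives R-squared = r**2.
assert all(abs(s - r ** 2) < 1e-9 for s in smc)
```

With more than two items the same diagonal-of-the-inverse identity applies; only the matrix inversion gets bigger.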

Although the initial communalities are the same between PAF and ML, the final extraction loadings will be different, which means you will have different Communalities, Total Variance Explained, and Factor Matrix tables (although the Initial columns will overlap). The other main difference is that you will obtain a Goodness-of-fit Test table, which gives you an absolute test of model fit. Non-significant values suggest a good-fitting model. Here the *p*-value is less than 0.05, so we reject the two-factor model.

**Goodness-of-fit Test**

| Chi-Square | df | Sig. |
| --- | --- | --- |
| 198.617 | 13 | 0.000 |

In practice, you would obtain chi-square values for multiple factor analysis runs, which we tabulate below for 1 to 8 factors. The table shows the number of factors extracted (or attempted), as well as the chi-square, degrees of freedom, *p*-value, and iterations needed to converge. Note that as you increase the number of factors, the chi-square value and degrees of freedom decrease but the iterations needed and *p*-value increase. Practically, you want to make sure the number of iterations you specify exceeds the iterations needed. Additionally, NS means no solution and N/A means not applicable. In SPSS, no solution is obtained when you run 5 to 7 factors because the degrees of freedom become negative (which cannot happen). The eight-factor solution is not even applicable in SPSS, because it will spew out the warning “You cannot request as many factors as variables with any extraction method except PC. The number of factors will be reduced by one.” This means that if you try to extract an eight-factor solution for the SAQ-8, it will default back to the seven-factor solution.

Now that we understand the table, let’s see if we can find the threshold at which the absolute fit indicates a good-fitting model. It looks like the *p*-value becomes non-significant at a three-factor solution. Note that this differs from the eigenvalues-greater-than-1 criterion, which chose 2 factors, and from Percent of Variance Explained, by which you would choose 4–5 factors. We talk to the Principal Investigator, and at this point we still prefer the two-factor solution. Note that there is no “right” answer in picking the best factor model, only what makes sense for your theory. We will talk about interpreting the factor loadings when we discuss factor rotation, to further guide us in choosing the correct number of factors.

| Number of Factors | Chi-square | Df | p-value | Iterations needed |
| --- | --- | --- | --- | --- |
| 1 | 553.08 | 20 | < 0.05 | 4 |
| 2 | 198.62 | 13 | < 0.05 | 39 |
| 3 | 13.81 | 7 | 0.055 | 57 |
| 4 | 1.386 | 2 | 0.5 | 168 |
| 5 | NS | -2 | NS | NS |
| 6 | NS | -5 | NS | NS |
| 7 | NS | -7 | NS | NS |
| 8 | N/A | N/A | N/A | N/A |
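The Df column follows the standard degrees-of-freedom formula for a maximum likelihood factor model with p observed items and k factors, df = [(p − k)² − (p + k)] / 2. A quick check reproduces the column above and shows exactly why the 5- to 7-factor runs have no solution:

```python
# Degrees of freedom for the ML factor model goodness-of-fit test:
# df = ((p - k)**2 - (p + k)) / 2, for p items and k factors.
# Negative df means the model has more free parameters than the
# correlation matrix has unique elements, so no solution exists.

def ml_factor_df(p, k):
    return ((p - k) ** 2 - (p + k)) // 2

# Reproduce the Df column of the table for the 8-item SAQ-8.
dfs = {k: ml_factor_df(8, k) for k in range(1, 8)}
assert dfs == {1: 20, 2: 13, 3: 7, 4: 2, 5: -2, 6: -5, 7: -7}
```

The formula makes the breakdown point predictable in advance: for 8 items, df first goes negative at 5 factors, matching where SPSS stops returning a solution.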

**Quiz**

**True or False**

1. The Initial column of the Communalities table for the Principal Axis Factoring and the Maximum Likelihood method are the same given the same analysis.
2. Since they are both factor analysis methods, Principal Axis Factoring and the Maximum Likelihood method will result in the same Factor Matrix.
3. In SPSS, both Principal Axis Factoring and Maximum Likelihood methods give chi-square goodness-of-fit tests.
4. You can extract as many factors as there are items when using ML or PAF.
5. When looking at the Goodness-of-fit Test table, a *p*-value less than 0.05 means the model is a good-fitting model.
6. In the Goodness-of-fit Test table, the lower the degrees of freedom the more factors you are fitting.

**Answers:**

1. T
2. F, the two use the same starting communalities but a different estimation process to obtain extraction loadings
3. F, only Maximum Likelihood gives you chi-square values
4. F, you can extract as many components as items in PCA, but SPSS will only extract up to the total number of items minus 1
5. F, greater than 0.05
6. T, we are taking away degrees of freedom but extracting more factors