Factor Analysis does not completely mitigate the singular covariance matrix problem!

I created a [Jupyter notebook](https://gist.github.com/fumoboy007/14e5b8e7dc7f6168213f470797d185dd) to gather some empirical evidence. The notebook uses Factor Analysis to generate a model and then finds the minimum training set size needed to avoid the singular covariance matrix problem. Here are the results:

[![A graph of the relationship between the number of factors and the minimum training set size needed to avoid singularity][1]][1]

As you can see, the training set size still needs to be greater than the number of factors to avoid the singular covariance matrix problem.

I don’t know the precise mathematical reason, but it makes sense conceptually: Factor Analysis reduces the dimensionality to $k$ factors, and a sample covariance matrix estimated from $n$ examples has rank at most $n - 1$, so the training set still needs more than $k$ examples to avoid perfect multicollinearity.
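To illustrate the intuition, here is a minimal sketch (not the notebook code) that checks the rank of a sample covariance matrix in a $k$-dimensional factor space. The data are just random Gaussian samples standing in for factor scores; the point is that the covariance matrix only becomes full rank once the number of examples exceeds $k$:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 5  # dimensionality after reduction (number of factors)

for n in (4, 5, 6, 7):
    # n examples in the k-dimensional factor space
    X = rng.standard_normal((n, k))

    # Sample covariance has rank at most min(n - 1, k),
    # so it is singular whenever n <= k.
    cov = np.cov(X, rowvar=False)
    rank = np.linalg.matrix_rank(cov)
    print(f"n={n}: rank={rank}, singular={rank < k}")
```

With `n = 4` or `n = 5` the covariance matrix is singular; with `n = 6` or more it is (almost surely) full rank, matching the empirical results above.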

 [1]: https://i.sstatic.net/MsJQz.png