Why does PCA on some matrices return a matrix with one fewer row?
It often happens that running PCA on an N×M matrix (to reduce the M feature dimensions) returns an M×(M−1) loadings matrix rather than the M×M one you might expect: one principal component has gone missing. I used to accept this fact only through the mathematical derivation, but today I came across a very vivid explanation:
Consider what PCA does. Put simply, PCA (as most typically run) creates a new coordinate system by (1) shifting the origin to the centroid of your data, (2) squeezing and/or stretching the axes to make them equal in length, and (3) rotating your axes into a new orientation. (For more details, see this excellent CV thread: Making sense of principal component analysis, eigenvectors & eigenvalues.) However, it doesn't just rotate your axes any old way. Your new X1 (the first principal component) is oriented in your data's direction of maximal variation. The second principal component is oriented in the direction of the next greatest amount of variation that is orthogonal to the first principal component. The remaining principal components are formed likewise.
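To make those three steps concrete, here is a minimal NumPy sketch of that procedure (the function name `pca` and the `scale` flag are my own labels, not anything from the quoted answer; step (2) only applies when you standardize):

```python
import numpy as np

def pca(X, scale=False):
    """Minimal PCA sketch for an (n_samples, n_features) array X."""
    # (1) Shift the origin to the centroid of the data.
    Xc = X - X.mean(axis=0)
    # (2) Optionally squeeze/stretch each axis to unit standard deviation.
    if scale:
        Xc = Xc / Xc.std(axis=0, ddof=1)
    # (3) Rotate: the right singular vectors of the centered data are the
    # principal directions, ordered by how much variation they capture.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt.T   # the data expressed in the new coordinate system
    return scores, Vt    # rows of Vt are the principal components
```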
With this in mind, let's examine @amoeba's example. Here is a data matrix with two points in a three-dimensional space:

X = [[0, 0, 0],
     [3, 3, 3]]
Let's view these points in a (pseudo) three-dimensional scatterplot:
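The plot is easy to reproduce; a quick matplotlib sketch of the same two points (using the data matrix above) might look like this:

```python
import numpy as np
import matplotlib.pyplot as plt

X = np.array([[0.0, 0.0, 0.0],
              [3.0, 3.0, 3.0]])

fig = plt.figure()
ax = fig.add_subplot(projection="3d")          # the "(pseudo) 3-D" view
ax.scatter(X[:, 0], X[:, 1], X[:, 2], s=60)
ax.plot(X[:, 0], X[:, 1], X[:, 2], "--")       # the line both points lie on
ax.set_xlabel("x1"); ax.set_ylabel("x2"); ax.set_zlabel("x3")
plt.show()
```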
So let's follow the steps listed above. (1) The origin of the new coordinate system will be located at (1.5, 1.5, 1.5). (2) The axes are already equal in length. (3) The first principal component will go diagonally from (0,0,0) to (3,3,3), which is the direction of greatest variation for these data. Now, the second principal component must be orthogonal to the first, and should go in the direction of the greatest remaining variation. But what direction is that? Is it from (0,0,3) to (3,3,0), or from (0,3,0) to (3,0,3), or something else? Both points lie exactly on the first principal component's line, so there is no remaining variation, and there cannot be any more principal components.
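You can verify this numerically: after centering, the singular values of the data measure the variation along each candidate axis, and only one of them is nonzero (a small check, again assuming the two-point matrix above):

```python
import numpy as np

X = np.array([[0.0, 0.0, 0.0],
              [3.0, 3.0, 3.0]])
Xc = X - X.mean(axis=0)              # rows become (-1.5, ...) and (+1.5, ...)

s = np.linalg.svd(Xc, compute_uv=False)
print(s)                             # [3.674..., 0.]: one direction of variation
print(np.linalg.matrix_rank(Xc))     # 1 -> only one principal component exists
```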
With N=2 data, we can fit (at most) N−1=1 principal components.
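The same argument works for any N: centering forces the rows of the data matrix to sum to zero, so the centered matrix has rank at most N−1, and PCA can return at most N−1 components. A quick check with (hypothetical) random data:

```python
import numpy as np

rng = np.random.default_rng(0)
for n in (2, 3, 4):
    X = rng.normal(size=(n, 5))          # n random points in 5-D
    Xc = X - X.mean(axis=0)              # centering removes one degree of freedom
    print(n, np.linalg.matrix_rank(Xc))  # rank is n - 1: at most n - 1 PCs
```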