
If we do an eigendecomposition on the covariance matrix, we lose the original data (everything is filtered through the covariance matrix), but we have redescribed the data in terms of how they map onto the new vectors. The eigenvalues describe the variance along each new dimension. If we look at the variability along this new first component, it looks like it will have a standard deviation of around 2, while the second dimension will have a much smaller standard deviation, maybe less than 1. Any point would be redescribed by its closest point on the first or second new dimension. So, suppose we wanted to rotate the axes and redescribe the data based on these two dimensions.
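A rough sketch of this in Python, using simulated data rather than the data shown above (the sample size, seed, and standard deviations of 2 and 0.5 are assumptions chosen to mimic the description):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical correlated 2-D data: variability of ~2 along the
# diagonal x = y and ~0.5 across it.
n = 1000
component_scores = rng.normal(0.0, [2.0, 0.5], size=(n, 2))
rotation = np.array([[1, -1],
                     [1, 1]]) / np.sqrt(2)   # columns: the two diagonals
data = component_scores @ rotation.T

# Eigendecomposition of the covariance matrix
cov = np.cov(data, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)     # ascending order
order = np.argsort(eigenvalues)[::-1]               # sort descending
eigenvalues = eigenvalues[order]
eigenvectors = eigenvectors[:, order]

# Eigenvalues are the variances along the new dimensions, so their
# square roots are the standard deviations (roughly 2 and 0.5 here).
print(np.sqrt(eigenvalues))

# Redescribe each point by its coordinates on the new axes.
scores = data @ eigenvectors
```

The rotated `scores` contain the same points expressed on the new axes; the original x, y coordinates are recoverable only up to what the covariance matrix preserved.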

The data appear to be correlated, falling along the line x = y. This diagonal is the vector of highest variance, and might be considered the first principal component. Since we have only two dimensions, the only remaining orthogonal dimension is the one along the negative diagonal: the second principal component.
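This can be seen directly from a covariance matrix for strongly correlated x and y (the value 0.9 below is an assumed correlation, not taken from the data above): the eigenvectors come out along the two diagonals, and they are orthogonal.

```python
import numpy as np

# Hypothetical covariance matrix for strongly correlated x and y
cov = np.array([[1.0, 0.9],
                [0.9, 1.0]])

eigenvalues, eigenvectors = np.linalg.eigh(cov)
order = np.argsort(eigenvalues)[::-1]               # largest variance first
eigenvectors = eigenvectors[:, order]

pc1 = eigenvectors[:, 0]   # along the positive diagonal, ~(0.707, 0.707)
pc2 = eigenvectors[:, 1]   # along the negative diagonal (sign is arbitrary)

print(pc1 @ pc2)           # ~0: the two components are orthogonal
```

Because there are only two dimensions, once the first component is fixed along x = y, the second is fully determined up to sign.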
