Principal Component Analysis in Python

I'd like to use principal component analysis (PCA) for dimensionality reduction. Does numpy or scipy already have it, or do I have to roll my own using numpy.linalg.eigh?

I don't just want to use singular value decomposition (SVD) because my input data are quite high-dimensional (~460 dimensions), so I think SVD will be slower than computing the eigenvectors of the covariance matrix.

I was hoping to find a pre-made, debugged implementation that already makes the right decisions about when to use which method, and that maybe includes other optimizations I don't know about.

You might have a look at MDP.

I have not had the chance to test it myself, but I've bookmarked it exactly for its PCA functionality.
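From memory, the node-based API looks roughly like this (untested here, so treat it as a sketch and check the MDP docs for the exact keyword names):

    import numpy as np
    import mdp   # Modular toolkit for Data Processing

    x = np.random.random((100, 25))             # 100 observations x 25 variables

    pcanode = mdp.nodes.PCANode(output_dim=5)   # keep 5 components
    pcanode.train(x)
    pcanode.stop_training()
    y = pcanode.execute(x)                      # 100 x 5 projection onto the principal axes
    print(y.shape)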

A few months later, here's a small class PCA, and a picture:

    #!/usr/bin/env python
    """ a small class for Principal Component Analysis
    Usage:
        p = PCA( A, fraction=0.90 )
    In:
        A: an array of e.g. 1000 observations x 20 variables, 1000 rows x 20 columns
        fraction: use principal components that account for e.g.
            90 % of the total variance

    Out:
        p.U, p.d, p.Vt: from numpy.linalg.svd, A = U . d . Vt
        p.dinv: 1/d or 0, see NR
        p.eigen: the eigenvalues of A*A, in decreasing order (p.d**2).
            eigen[j] / eigen.sum() is variable j's fraction of the total variance;
            look at the first few eigen[] to see how many PCs get to 90 %, 95 % ...
        p.npc: number of principal components,
            e.g. 2 if the top 2 eigenvalues are >= `fraction` of the total.
            It's ok to change this; methods use the current value.

    Methods:
        The methods of class PCA transform vectors or arrays of e.g.
        20 variables, 2 principal components and 1000 observations,
        using partial matrices U' d' Vt', parts of the full U d Vt:
        A ~ U' . d' . Vt' where e.g.
            U' is 1000 x 2
            d' is diag([ d0, d1 ]), the 2 largest singular values
            Vt' is 2 x 20.  Dropping the primes,

        d . Vt      2 principal vars = p.vars_pc( 20 vars )
        U           1000 obs = p.pc_obs( 2 principal vars )
        U . d . Vt  1000 obs, p.obs( 20 vars ) = pc_obs( vars_pc( vars ))
            fast approximate A . vars, using the `npc` principal components

        Ut              2 pcs = p.obs_pc( 1000 obs )
        V . dinv        20 vars = p.pc_vars( 2 principal vars )
        V . dinv . Ut   20 vars, p.vars( 1000 obs ) = pc_vars( obs_pc( obs )),
            fast approximate Ainverse . obs: vars that give ~ those obs.

    Notes:
        PCA does not center or scale A; you usually want to first
            A -= A.mean(axis=0)
            A /= A.std(axis=0)
        with the little class Center or the like, below.

    See also:
        http://en.wikipedia.org/wiki/Principal_component_analysis
        http://en.wikipedia.org/wiki/Singular_value_decomposition
        Press et al., Numerical Recipes (2 or 3 ed), SVD
        PCA micro-tutorial
        iris-pca .py .png
    """

    from __future__ import division
    import numpy as np
    dot = np.dot
        # import bz.numpyutil as nu
        # dot = nu.pdot

    __version__ = "2010-04-14 apr"
    __author_email__ = "denis-bz-py at t-online dot de"

    #...............................................................................
    class PCA:
        def __init__( self, A, fraction=0.90 ):
            assert 0 <= fraction <= 1
                # A = U . diag(d) . Vt, O( m n^2 ), lapack_lite --
            self.U, self.d, self.Vt = np.linalg.svd( A, full_matrices=False )
            assert np.all( self.d[:-1] >= self.d[1:] )  # sorted
            self.eigen = self.d**2
            self.sumvariance = np.cumsum(self.eigen)
            self.sumvariance /= self.sumvariance[-1]
            self.npc = np.searchsorted( self.sumvariance, fraction ) + 1
            self.dinv = np.array([ 1/d if d > self.d[0] * 1e-6  else 0
                                   for d in self.d ])

        def pc( self ):
            """ e.g. 1000 x 2 U[:, :npc] * d[:npc], to plot etc. """
            n = self.npc
            return self.U[:, :n] * self.d[:n]

        # These 1-line methods may not be worth the bother;
        # then use U d Vt directly --

        def vars_pc( self, x ):
            n = self.npc
            return self.d[:n] * dot( self.Vt[:n], x.T ).T  # 20 vars -> 2 principal

        def pc_vars( self, p ):
            n = self.npc
            return dot( self.Vt[:n].T, (self.dinv[:n] * p).T ) .T  # 2 PC -> 20 vars

        def pc_obs( self, p ):
            n = self.npc
            return dot( self.U[:, :n], p.T )  # 2 principal -> 1000 obs

        def obs_pc( self, obs ):
            n = self.npc
            return dot( self.U[:, :n].T, obs ) .T  # 1000 obs -> 2 principal

        def obs( self, x ):
            return self.pc_obs( self.vars_pc(x) )  # 20 vars -> 2 principal -> 1000 obs

        def vars( self, obs ):
            return self.pc_vars( self.obs_pc(obs) )  # 1000 obs -> 2 principal -> 20 vars


    class Center:
        """ A -= A.mean() /= A.std(), inplace -- use A.copy() if need be
            uncenter(x) == original A . x
        """
            # mttiw
        def __init__( self, A, axis=0, scale=True, verbose=1 ):
            self.mean = A.mean(axis=axis)
            if verbose:
                print "Center -= A.mean:", self.mean
            A -= self.mean
            if scale:
                std = A.std(axis=axis)
                self.std = np.where( std, std, 1. )
                if verbose:
                    print "Center /= A.std:", self.std
                A /= self.std
            else:
                self.std = np.ones( A.shape[-1] )
            self.A = A

        def uncenter( self, x ):
            return np.dot( self.A, x * self.std ) + np.dot( x, self.mean )


    #...............................................................................
    if __name__ == "__main__":
        import sys

        csv = "iris4.csv"  # wikipedia Iris_flower_data_set
            # 5.1,3.5,1.4,0.2  # ,Iris-setosa ...
        N = 1000
        K = 20
        fraction = .90
        seed = 1
        exec "\n".join( sys.argv[1:] )  # N= ...
        np.random.seed(seed)
        np.set_printoptions( 1, threshold=100, suppress=True )  # .1f
        try:
            A = np.genfromtxt( csv, delimiter="," )
            N, K = A.shape
        except IOError:
            A = np.random.normal( size=(N, K) )  # gen correlated ?

        print "csv: %s  N: %d  K: %d  fraction: %.2g" % (csv, N, K, fraction)
        Center(A)
        print "A:", A

        print "PCA ...",
        p = PCA( A, fraction=fraction )
        print "npc:", p.npc
        print "% variance:", p.sumvariance * 100

        print "Vt[0], weights that give PC 0:", p.Vt[0]
        print "A . Vt[0]:", dot( A, p.Vt[0] )
        print "pc:", p.pc()

        print "\nobs <-> pc <-> x: with fraction=1, diffs should be ~ 0"
        x = np.ones(K)
        # x = np.ones(( 3, K ))
        print "x:", x
        pc = p.vars_pc(x)  # d' Vt' x
        print "vars_pc(x):", pc
        print "back to ~ x:", p.pc_vars(pc)

        Ax = dot( A, x.T )
        pcx = p.obs(x)  # U' d' Vt' x
        print "Ax:", Ax
        print "A'x:", pcx
        print "max |Ax - A'x|: %.2g" % np.linalg.norm( Ax - pcx, np.inf )

        b = Ax  # ~ back to original x, Ainv A x
        back = p.vars(b)
        print "~ back again:", back
        print "max |back - x|: %.2g" % np.linalg.norm( back - x, np.inf )

    # end pca.py

[figure: iris-pca.png]

PCA using numpy.linalg.svd is super easy. Here's a simple demo:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.misc import lena

    # the underlying signal is a sinusoidally modulated image
    img = lena()
    t = np.arange(100)
    time = np.sin(0.1*t)
    real = time[:,np.newaxis,np.newaxis] * img[np.newaxis,...]

    # we add some noise
    noisy = real + np.random.randn(*real.shape)*255

    # (observations, features) matrix
    M = noisy.reshape(noisy.shape[0],-1)

    # singular value decomposition factorises your data matrix such that:
    #
    #   M = U*S*Vt      (where '*' is matrix multiplication)
    #
    # * U and V are the singular matrices, containing orthogonal vectors of
    #   unit length in their rows and columns respectively.
    #
    # * S is a diagonal matrix containing the singular values of M - these
    #   values squared divided by the number of observations will give the
    #   variance explained by each PC.
    #
    # * if M is considered to be an (observations, features) matrix, the PCs
    #   themselves would correspond to the rows of S^(1/2)*Vt. if M is
    #   (features, observations) then the PCs would be the columns of
    #   U*S^(1/2).
    #
    # * since U and V both contain orthonormal vectors, U*Vt is equivalent
    #   to a whitened version of M.

    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    V = Vt.T

    # PCs are already sorted by descending order
    # of the singular values (ie by the
    # proportion of total variance they explain)

    # if we use all of the PCs we can reconstruct the noisy signal perfectly
    S = np.diag(s)
    Mhat = np.dot(U, np.dot(S, Vt))
    print "Using all PCs, MSE = %.6G" % (np.mean((M - Mhat)**2))

    # if we use only the first 20 PCs the reconstruction is less accurate
    Mhat2 = np.dot(U[:, :20], np.dot(S[:20, :20], V[:,:20].T))
    print "Using first 20 PCs, MSE = %.6G" % (np.mean((M - Mhat2)**2))

    fig, [ax1, ax2, ax3] = plt.subplots(1, 3)
    ax1.imshow(img)
    ax1.set_title('true image')
    ax2.imshow(noisy.mean(0))
    ax2.set_title('mean of noisy images')
    ax3.imshow((s[0]**(1./2) * V[:,0]).reshape(img.shape))
    ax3.set_title('first spatial PC')
    plt.show()

matplotlib.mlab has a PCA implementation.
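From memory, usage looks roughly like this (the attribute names may differ between matplotlib versions, and mlab.PCA was later deprecated, so treat this as a sketch):

    import numpy as np
    from matplotlib.mlab import PCA

    data = np.random.randn(250, 4)     # 250 observations x 4 variables
    results = PCA(data)                # centers and standardizes internally

    print(results.fracs)               # fraction of variance explained by each component
    print(results.Wt)                  # projection weights, one principal axis per row
    print(results.Y[:5])               # the data projected onto the principal axes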

You can use sklearn:

    import sklearn.decomposition as deco
    import numpy as np

    x = (x - np.mean(x, 0)) / np.std(x, 0)  # You need to normalize your data first
    pca = deco.PCA(n_components)            # n_components is the number of components to keep
    x_r = pca.fit(x).transform(x)
    print ('explained variance (first %d components): %.2f' % (n_components, sum(pca.explained_variance_ratio_)))
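To make that concrete, here is a self-contained version with made-up data (the array x and the choice of 10 components are just for illustration):

    import numpy as np
    from sklearn.decomposition import PCA

    x = np.random.rand(200, 460)                    # 200 samples x 460 features (made up)
    x = (x - x.mean(axis=0)) / x.std(axis=0)        # normalize first, as above

    pca = PCA(n_components=10)
    x_r = pca.fit_transform(x)                      # 200 x 10

    print(x_r.shape)
    print(pca.explained_variance_ratio_.sum())      # total variance kept by the 10 components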

SVD should work fine with 460 dimensions. It takes about 7 seconds on my Atom netbook. The eig() method takes more time (as it should, since it uses more floating point operations) and will almost always be less accurate.

If you have fewer than 460 examples, then what you want to do is diagonalize the scatter matrix (x - datamean)^T (x - datamean), assuming your data points are columns, and then left-multiply by (x - datamean). That could be faster in the case where you have more dimensions than data.
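Roughly, that trick looks like this (a sketch, assuming the data matrix has shape (dimensions, examples) with data points as columns, as described above; the variable names are just illustrative):

    import numpy as np

    # made-up data: 460 dimensions, only 100 examples, data points as columns
    rng = np.random.RandomState(0)
    X = rng.randn(460, 100)
    Xc = X - X.mean(axis=1, keepdims=True)    # subtract the data mean from each column

    # small 100 x 100 scatter matrix (x - datamean)^T (x - datamean)
    # instead of the 460 x 460 covariance matrix
    G = Xc.T.dot(Xc)
    evals, evecs = np.linalg.eigh(G)          # ascending order

    order = np.argsort(evals)[::-1]           # sort descending by eigenvalue
    evals, evecs = evals[order], evecs[:, order]

    d = 10
    # left-multiply by (x - datamean) to lift the eigenvectors back to 460 dimensions
    U = Xc.dot(evecs[:, :d])                  # 460 x d
    U /= np.linalg.norm(U, axis=0)            # normalize each principal direction

    proj = U.T.dot(Xc)                        # d x 100 projection of the data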

You can quite easily "roll your own" using scipy.linalg (assuming a pre-centered dataset data):

    covmat = data.dot(data.T)
    evs, evmat = scipy.linalg.eig(covmat)

Then evs are your eigenvalues, and evmat is your projection matrix.

If you want to keep d dimensions, use the first d eigenvalues and the first d eigenvectors.

Given that scipy.linalg has the decompositions and numpy has the matrix multiplications, what else do you need?
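To spell that out with the sorting step (eig/eigh do not guarantee descending order, so "the first d" means the d largest after sorting; this sketch assumes data has variables as rows and observations as columns, matching covmat = data.dot(data.T) above, and uses eigh since the covariance matrix is symmetric):

    import numpy as np
    import scipy.linalg

    rng = np.random.RandomState(0)
    data = rng.randn(460, 500)                 # 460 variables (rows) x 500 observations
    data -= data.mean(axis=1, keepdims=True)   # pre-center each variable

    covmat = data.dot(data.T)                  # 460 x 460 scatter/covariance matrix
    evs, evmat = scipy.linalg.eigh(covmat)     # symmetric matrix, eigenvalues ascending

    order = np.argsort(evs)[::-1]              # sort descending by eigenvalue
    evs, evmat = evs[order], evmat[:, order]

    d = 10
    reduced = evmat[:, :d].T.dot(data)         # d x 500: observations in the top-d subspace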

I just finished reading the book Machine Learning: An Algorithmic Perspective. All the code examples in the book are written in Python (almost all with Numpy). The code snippets in chapter 10.2, Principal Components Analysis, may be worth reading; it uses numpy.linalg.eig.
By the way, I think SVD can handle 460 x 460 dimensions very well. I have computed a 6500 x 6500 SVD with numpy/scipy.linalg.svd on a very old PC: a Pentium III at 733 MHz. To be honest, the script needed a lot of memory (about 1.x GB) and a lot of time (about 30 minutes) to get the SVD result. But I think 460 x 460 on a modern PC will not be a big problem, unless you need to do the SVD very many times.

You do not need the full singular value decomposition (SVD), which computes all the eigenvalues and eigenvectors and can be prohibitive for large matrices. scipy and its sparse module provide generic linear algebra functions that work on both sparse and dense matrices, among them the eig* family of functions:

http://docs.scipy.org/doc/scipy/reference/sparse.linalg.html#matrix-factorizations

Scikit-learn provides a Python PCA implementation, which currently only supports dense matrices.

Timings:

    In [1]: A = np.random.randn(1000, 1000)

    In [2]: %timeit scipy.sparse.linalg.eigsh(A)
    1 loops, best of 3: 802 ms per loop

    In [3]: %timeit np.linalg.svd(A)
    1 loops, best of 3: 5.91 s per loop
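For instance, a sketch of pulling out only the k leading principal components with eigsh (eigsh expects a symmetric matrix, so here it is applied to a covariance matrix rather than to the raw data; k = 10 is just an example):

    import numpy as np
    from scipy.sparse.linalg import eigsh

    rng = np.random.RandomState(0)
    X = rng.randn(1000, 460)
    X -= X.mean(axis=0)                        # center the features

    C = X.T.dot(X) / (X.shape[0] - 1)          # 460 x 460 symmetric covariance matrix

    k = 10
    evals, evecs = eigsh(C, k=k, which='LM')   # only the k largest-magnitude eigenpairs

    order = np.argsort(evals)[::-1]            # eigsh does not guarantee descending order
    evals, evecs = evals[order], evecs[:, order]

    X_reduced = X.dot(evecs)                   # 1000 x k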

Here is another implementation of a PCA module for Python, using numpy, scipy and C extensions. The module carries out PCA using either an SVD or the NIPALS (Nonlinear Iterative Partial Least Squares) algorithm, which is implemented in C.