PCA in MATLAB
I used MATLAB's pca function: [coeff,score,latent,tsquared,explained,mu] = pca(so10); where so10 is a matrix. I have a few questions: 1. Does this function center (de-mean) the data? 2. Computing coeff uses the SVD algorithm, and I don't quite understand the relationship between PCA and SVD. 3. What does score represent, and what does each of its rows represent? 4. What does mu represent?
2018-01-19
1 and 4: MATLAB has help documentation for this. I'm not sure what you mean by "de-centering" (note that, per the documentation below, pca centers the data by default), and the PCA results are expressed in the array's own dimensions.
Here is the help documentation; please read it carefully:
coeff = pca(X) returns the principal component coefficients, also known as loadings, for the n-by-p data matrix X. Rows of X correspond to observations and columns correspond to variables. The coefficient matrix is p-by-p. Each column of coeff contains coefficients for one principal component, and the columns are in descending order of component variance. By default, pca centers the data and uses the singular value decomposition (SVD) algorithm.
coeff = pca(X,Name,Value) returns any of the output arguments in the previous syntaxes using additional options for computation and handling of special data types, specified by one or more Name,Value pair arguments.
For example, you can specify the number of principal components pca returns or an algorithm other than SVD to use.
[coeff,score,latent] = pca(___) also returns the principal component scores in score and the principal component variances in latent. You can use any of the input arguments in the previous syntaxes.
Principal component scores are the representations of X in the principal component space. Rows of score correspond to observations, and columns correspond to components.
The principal component variances are the eigenvalues of the covariance matrix of X.
[coeff,score,latent,tsquared] = pca(___) also returns the Hotelling's T-squared statistic for each observation in X.
[coeff,score,latent,tsquared,explained,mu] = pca(___) also returns explained, the percentage of the total variance explained by each principal component and mu, the estimated mean of each variable in X.
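To make the documentation concrete, here is a minimal MATLAB sketch (a small random matrix stands in for so10, which is not available here; the X - mu expression needs R2016b+ for implicit expansion) that checks what mu and score actually are:

X = randn(100, 5);                                % 100 observations, 5 variables (stand-in for so10)
[coeff, score, latent, tsquared, explained, mu] = pca(X);

% mu is the per-variable (column) mean that pca subtracted before decomposing:
max(abs(mu - mean(X, 1)))                         % ~0

% score holds the centered observations expressed in the principal-component basis;
% row i of score gives observation i's coordinates along the components:
max(max(abs(score - (X - mu) * coeff)))           % ~0

% latent contains the variances of the columns of score (eigenvalues of cov(X)),
% and explained is latent as a percentage of the total variance:
max(abs(latent - var(score)'))                    % ~0
max(abs(explained - 100 * latent / sum(latent)))  % ~0

% The original data are recoverable from score, coeff and mu:
max(max(abs(X - (score * coeff' + mu))))          % ~0

So pca does center the data by default, mu is the mean it removed, and each row of score is one observation of so10 rewritten in principal-component coordinates.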
2. The difference between PCA and SVD is that they decompose the matrix in different ways. I suggest you read the Wikipedia articles on SVD and PCA; the formulas there are quite clear.
Follow-up question:
I have read the help documentation, but I still don't understand what score and mu represent. I also keep feeling that PCA and SVD are two different methods, yet PCA involves the SVD algorithm, so I can't quite work out the relationship between the two.
Follow-up answer:
They really are two methods, very close and very related.
PCA: T = X*W, where W is the matrix of loadings (the eigenvectors of the covariance matrix of X) and T is the matrix of scores.
SVD: X = U*S*V', a factorization of X itself into singular vectors and singular values.
PCA gives you the eigenvector (loading) matrix; SVD gives you the factor matrices of the decomposition. For centered X the two coincide: W = V and T = X*W = U*S, which is why pca can compute its result via SVD.
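A minimal MATLAB sketch of that connection (again with a random X standing in for so10): pca's default algorithm amounts to the SVD of the centered data, up to a per-column sign convention.

X  = randn(100, 5);
n  = size(X, 1);
Xc = X - mean(X, 1);                              % center the data, as pca does by default
[coeff, score, latent] = pca(X);                  % PCA outputs
[U, S, V] = svd(Xc, 'econ');                      % SVD of the centered data

% Up to a possible sign flip of each column:
%   coeff  corresponds to V                       (right singular vectors = loadings W)
%   score  corresponds to U*S                     (T = Xc*V = U*S)
%   latent corresponds to diag(S).^2 / (n-1)      (singular values -> component variances)
max(abs(latent - diag(S).^2 / (n - 1)))           % ~0
max(max(abs(abs(score) - abs(U * S))))            % ~0 (abs() ignores the sign ambiguity)
max(max(abs(abs(coeff) - abs(V))))                % ~0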