Study notes on SPCA, larsen (LARS-EN), the Elastic Net, and related methods

1. The full Elastic Net model (no L1 constraint) coincides with Ridge Regression

beta_en = larsen(X, y, 1e-9, 0, [], false, false);     % stop = 0: remove the L1 constraint, leaving only the ridge weight delta = 1e-9
beta_ridge = (X'*X + 1e-9*eye(p))\X'*y;                % closed-form ridge solution
beta_lasso = lasso(Xtilde, ytilde, 0, false, false);   % lasso on the augmented data; stop = 0 again removes the L1 constraint


All three solutions above are numerically almost identical.
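
Here Xtilde and ytilde denote the standard augmented data that reduce the Elastic Net to a LASSO problem (Zou & Hastie, 2005). A minimal sketch of their construction, assuming X is n-by-p and y is centered:

delta = 1e-9;
Xtilde = [X; sqrt(delta)*eye(p)];   % append p artificial "ridge" rows to X
ytilde = [y; zeros(p,1)];           % with matching zero responses
% OLS on (Xtilde, ytilde) minimizes ||y - X*b||^2 + delta*||b||^2, so with
% no L1 constraint the lasso call above reproduces the ridge solution.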

2. The LARS algorithm

B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least Angle Regression. Ann. Statist. 32(2):407-499, 2004.

LARS and the LASSO give almost identical results; their paths coincide except at steps where a coefficient crosses zero, which the LASSO modification handles by dropping that variable.

b_lar = lar(X, y);       % LARS path
b_lasso = lasso(X, y);   % LASSO path
b_ols = X\y;             % ordinary least squares, for reference
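
A quick sanity check, assuming (as in SpaSM) that lar and lasso return the full piecewise-linear path with one column per breakpoint, and that X has full column rank: the unconstrained end of both paths is the OLS solution.

assert(norm(b_lar(:,end) - b_ols) < 1e-12)     % LARS path ends at OLS
assert(norm(b_lasso(:,end) - b_ols) < 1e-12)   % so does the LASSO path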

3. The LASSO and an alternative closed-form solution. With orthonormal predictors the LASSO objective separates over coordinates, so each coefficient is a soft-thresholded OLS coefficient: b_j = sign(b_ols_j)*max(|b_ols_j| - lambda/2, 0). The test below verifies this.

clear; close all; clc;

%% TEST
% Assert that LASSO gives expected results when run with orthogonal
% predictors. 

n = 100;
p = 10;
X = gallery('orthog',n,5);
X = X(:,2:p+1);
y = center((1:100)');

b_lasso = lasso(X, y);   % full LASSO path
b_ols = X'*y;            % OLS solution; the columns of X are orthonormal, so X\y == X'*y

% First compare using theoretical value of lambda/2 at each breakpoint.
% Call this value gamma
gamma = [sort(abs(b_ols), 'descend'); 0];
b_lasso2 = zeros(size(b_lasso));
for i = 1:length(gamma)
  b_lasso2(:,i) = sign(b_ols).*max(abs(b_ols) - gamma(i),0);
end

assert(norm(b_lasso - b_lasso2) < 1e-12)

% Then compare at an arbitrary constraint value; the corresponding lambda
% is reported by the lasso procedure.
t = 150; % constraint on the L1 norm of beta
[b_lasso, info] = lasso(X, y, t, false);
b_lasso2 = sign(b_ols).*max(abs(b_ols) - info.lambda/2, 0);

assert(norm(b_lasso - b_lasso2) < 1e-12)

4. Without an L1 constraint, SPCA yields the same loading matrix as PCA

clear; close all; clc;

%% TEST
% Assert that the full SPCA model and PCA are equal

n = 100;
p = 25;
Z = rand(p);
C = Z'*Z;   % random positive semi-definite covariance matrix

X = center(mvnrnd(zeros(1,p), C, n));

K = p; % all possible components
delta = 5; % any value will do
stop = 0; % no L1 constraint
B = spca(X, [], K, delta, stop);

[U, D, V] = svd(X, 'econ');

assert(norm(abs(V) - abs(B)) < 1e-12)   % compare up to sign; loadings are sign-indeterminate


5. In an spca call, setting stop = -[250 125 100] specifies how many nonzero loadings each component keeps: a negative stop value is interpreted as a per-column cardinality constraint.
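
A minimal sketch of such a call, assuming the SpaSM interface spca(X, Gram, K, delta, stop) and that delta = inf selects the soft-thresholding variant:

K = 3;                   % three sparse components
delta = inf;             % soft-thresholding variant (assumed SpaSM convention)
stop = -[250 125 100];   % nonzero loadings per component
B = spca(X, [], K, delta, stop);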

6. Solving AX = B

This can be solved with the MSE-style closed form inv(A'*A + lambda*I)*A'*B, or by calling an LSQR routine; tested within MSE, both give the same recognition rate.

[eigvector, istop] = lsqr2(A, B, options.ReguAlpha, nRepeat);
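
For reference, a minimal sketch of both approaches for a single right-hand side b (one column of B); lambda, the tolerance, and the iteration cap are illustrative values, and the ridge damping is expressed through the standard augmented system so that MATLAB's built-in lsqr can be used:

lambda = 0.1;
x_ridge = (A'*A + lambda*eye(size(A,2))) \ (A'*b);   % normal equations
Aaug = [A; sqrt(lambda)*eye(size(A,2))];             % damping as extra rows
baug = [b; zeros(size(A,2),1)];
x_lsqr = lsqr(Aaug, baug, 1e-10, 1000);              % iterative solve
% x_ridge and x_lsqr agree up to the iterative tolerance.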


Additionally, if an L1 constraint is imposed on the solution, the lars function can be called as follows, solving one column of B at a time:

eigvector = cell(size(B,2), 1);   % one sparse solution per column of B
for i = 1:size(B,2)
  eigvector_T = lars(A, B(:,i), 'lasso', -(max(LassoCardi)+5), 1, Gram, LassoCardi);
  eigvector{i} = eigvector_T;
end

7. The Elastic Net

beta = arg min ||y - X*beta||^2 + delta*||beta||^2 + lambda*||beta||_1.

Calling convention: [b, info] = elasticnet(X, y, delta, stop, storepath, verbose)

For example:

  s1 = RandStream.create('mrg32k3a','Seed', 42);
  s0 = RandStream.setDefaultStream(s1);
  % Create data set
  n = 30; p = 40;
  correlation = 0.2;
  Sigma = correlation*ones(p) + (1 - correlation)*eye(p);
  mu = zeros(p,1);
  X = mvnrnd(mu, Sigma, n);
  % Model is lin.comb. of first three variables plus noise
  y = X(:,1) + X(:,2) + X(:,3) + 0.5*randn(n,1);
  % Preprocess data
  X = normalize(X);
  y = center(y);
  % Run the Elastic Net (ridge weight delta plus the full L1 path)
  delta = 1e-3;
  [beta, info] = elasticnet(X, y, delta, 0, true, true);
  % Plot results
  h1 = figure(1);
  plot(info.s, beta, '.-');
  xlabel('s'), ylabel('\beta', 'Rotation', 0)
  % Restore random stream
  RandStream.setDefaultStream(s0);


[1] H. Zou and T. Hastie. Regularization and variable selection via the elastic net. J. Royal Stat. Soc. B. 67(2):301-320, 2005.

[2] H. Zou and H. H. Zhang. On the adaptive elastic-net with a diverging number of parameters. Ann. Statist. 37(4):1733-1751, 2009.

To be continued.
