6. Self-Taught Learning

  • Summary:

1) The core idea: features learned by the sparse autoencoder's hidden layer are fed into a softmax classifier.

2) The exercise essentially ties together the code written in the previous exercises.

3) fprintf('Test Accuracy: %f%%\n', 100*mean(pred(:) == testLabels(:))); — this one-liner for printing the accuracy is neatly written (see the sketch after this list).

4) The experiment shifts every training label up by 1, which shows that labels only serve to tag examples and to check predictions at test time; which values you assign does not matter, as long as training and testing use the same convention.

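  A minimal sketch (with made-up toy labels, not data from the exercise) illustrating points 3) and 4): the one-liner compares the prediction vector against the label vector element-wise, and a uniform label shift is harmless as long as both sides use the same convention.

% Toy illustration; trueDigits and pred are hypothetical values for demonstration.
trueDigits = [0 1 2 3 4 0 1 2];   % the actual digits
testLabels = trueDigits + 1;      % shifted into the range 1-5, as in the exercise
pred       = [1 2 3 3 5 1 2 4];   % hypothetical classifier output, same 1-5 convention
% pred(:) == testLabels(:) is a logical 0/1 vector; its mean is the fraction correct.
fprintf('Test Accuracy: %f%%\n', 100*mean(pred(:) == testLabels(:)));
% prints: Test Accuracy: 75.000000%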

  UFLDL Self-Taught Learning

  The exercise requires downloading the starter code: stl_exercise.zip

  stlExercise.m

clear;close all;clc;
disp('Currently running script:');
disp([mfilename('fullpath'),'.m']);
%% CS294A/CS294W Self-taught Learning Exercise

%  Instructions
%  ------------
% 
%  This file contains code that helps you get started on the
%  self-taught learning. You will need to complete code in feedForwardAutoencoder.m
%  You will also need to have implemented sparseAutoencoderCost.m and 
%  softmaxCost.m from previous exercises.
%
%% ======================================================================
%  STEP 0: Here we provide the relevant parameters values that will
%  allow your sparse autoencoder to get good filters; you do not need to 
%  change the parameters below.

inputSize  = 28 * 28;% 28 * 28 = 784
numLabels  = 5;% digits 0-4 form the labeled set and digits 5-9 the unlabeled set; each group has 5 classes
hiddenSize = 200;
sparsityParam = 0.1; % desired average activation of the hidden units.
                     % (This was denoted by the Greek letter rho, which looks like a lower-case "p",
                     %  in the lecture notes).
lambda = 3e-3;       % weight decay parameter
beta = 3;            % weight of sparsity penalty term
maxIter = 400;

%% ======================================================================
%  STEP 1: Load data from the MNIST database
%
%  This loads our training and test data from the MNIST database files.
%  We have sorted the data for you so that you will not have to
%  change it.

% Load MNIST database files
mnistData   = loadMNISTImages('mnist/train-images-idx3-ubyte');
mnistLabels = loadMNISTLabels('mnist/train-labels-idx1-ubyte');

% Set Unlabeled Set (All Images)

% Simulate a Labeled and Unlabeled set
% labeledSet has size [30596,1]
labeledSet   = find(mnistLabels >= 0 & mnistLabels <= 4);
% unlabeledSet has size [29404,1]
unlabeledSet = find(mnistLabels >= 5);
% round: rounds to the nearest integer
% numTrain = 15298, numel(labeledSet) = 30596, so the training and test sets are the same size
numTrain = round(numel(labeledSet)/2);
% split the digit 0-4 data in half; the first half becomes the training set
% trainSet has size [15298,1]
trainSet = labeledSet(1:numTrain);

testSet  = labeledSet(numTrain+1:end);

unlabeledData = mnistData(:, unlabeledSet);

trainData   = mnistData(:, trainSet);
% Shift all labels up by 1, into the range 1-5. This differs from the earlier softmax
% exercise, where only the label 0 was remapped to 10: here every label is offset,
% so each label is one greater than the digit it denotes.
trainLabels = mnistLabels(trainSet)' + 1; % Shift Labels to the Range 1-5

display_network(trainData(:,1:100)); % Show the first 100 images
% print the first 10 training labels to confirm that every label has indeed been shifted by 1
disp('Labels of the first 10 training samples:');
disp(trainLabels(1:10));
set(gcf,'NumberTitle','off');
set(gcf,'Name','First 100 training images');

testData   = mnistData(:, testSet);

testLabels = mnistLabels(testSet)' + 1;   % Shift Labels to the Range 1-5

% Output Some Statistics
fprintf('# examples in unlabeled set: %d\n', size(unlabeledData, 2));
fprintf('# examples in supervised training set: %d\n\n', size(trainData, 2));
fprintf('# examples in supervised testing set: %d\n\n', size(testData, 2));
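% Expected counts, per the sizes noted above: 29404 unlabeled examples,
% 15298 supervised training examples, and 15298 supervised test examples.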

%% ======================================================================
%  STEP 2: Train the sparse autoencoder
%  This trains the sparse autoencoder on the unlabeled training
%  images. 

%  Randomly initialize the parameters
theta = initializeParameters(hiddenSize, inputSize);

%% ----------------- YOUR CODE HERE ----------------------
%  Find opttheta by running the sparse autoencoder on
%  unlabeledTrainingImages
tic;
opttheta = theta; 

%  Use minFunc to minimize the function
addpath minFunc/
options.Method = 'lbfgs'; % Here, we use L-BFGS to optimize our cost
                          % function. Generally, for minFunc to work, you
                          % need a function pointer with two outputs: the
                          % function value and the gradient. In our problem,
                          % sparseAutoencoderCost.m satisfies this.
options.maxIter = 400;	  % Maximum number of iterations of L-BFGS to run 
options.display = 'on';


[opttheta, loss] = minFunc( @(p) sparseAutoencoderCost(p, ...
                                   inputSize, hiddenSize, ...
                                   lambda, sparsityParam, ...
                                   beta, unlabeledData), ...
                              theta, options);
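% opttheta now holds the trained parameters stacked as [W1(:); W2(:); b1; b2],
% the same layout produced by initializeParameters.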

disp(['Sparse autoencoder training with 400 L-BFGS iterations took: ',num2str(toc),' s']);

%% -----------------------------------------------------
                          
% Visualize weights
% extract the first-layer weights and display them
W1 = reshape(opttheta(1:hiddenSize * inputSize), hiddenSize, inputSize);
figure;
display_network(W1');

set(gcf,'NumberTitle','off');
set(gcf,'Name','First-layer weights of the trained sparse autoencoder');
print -djpeg weights.jpg

%%======================================================================
%% STEP 3: Extract Features from the Supervised Dataset
%  
%  You need to complete the code in feedForwardAutoencoder.m so that the 
%  following command will extract features from the data.

trainFeatures = feedForwardAutoencoder(opttheta, hiddenSize, inputSize, ...
                                       trainData);

testFeatures = feedForwardAutoencoder(opttheta, hiddenSize, inputSize, ...
                                       testData);
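% trainFeatures and testFeatures have size hiddenSize x numExamples:
% one 200-dimensional learned feature vector per image.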

%%======================================================================
%% STEP 4: Train the softmax classifier

softmaxModel = struct;  
%% ----------------- YOUR CODE HERE ----------------------
%  Use softmaxTrain.m from the previous exercise to train a multi-class
%  classifier. 

%  Use lambda = 1e-4 for the weight regularization for softmax

% You need to compute softmaxModel using softmaxTrain on trainFeatures and
% trainLabels
tic;
lambda = 1e-4;

options.maxIter = 100;
% Note: the inputs are the hidden-layer features, so the input dimension here is hiddenSize
softmaxModel = softmaxTrain(hiddenSize, numLabels, lambda, ...
                            trainFeatures, trainLabels, options);

disp(['Softmax training with 100 iterations took: ',num2str(toc),' s']);
                        
%% -----------------------------------------------------


%%======================================================================
%% STEP 5: Testing 

%% ----------------- YOUR CODE HERE ----------------------
% Compute Predictions on the test set (testFeatures) using softmaxPredict
% and softmaxModel

% You will have to implement softmaxPredict in softmaxPredict.m
% Note: the test inputs here are also the hidden-layer activations
[pred] = softmaxPredict(softmaxModel, testFeatures);

%% -----------------------------------------------------

% Classification Score
% A concise one-liner for computing the accuracy:
fprintf('Test Accuracy: %f%%\n', 100*mean(pred(:) == testLabels(:)));
%Test Accuracy: 98.254674%

% (note that we shift the labels by 1, so that digit 0 now corresponds to
%  label 1)
%
% Accuracy is the proportion of correctly classified images
% The results for our implementation was:
%
% Accuracy: 98.3%
%
% 

%% -----------------------------------------------------
% UFLDL notes that using raw pixels instead of the learned features gives only about
% 96% accuracy, so run a comparison experiment.
% Since raw pixels are fed in at test time, the softmax must also be trained on the raw images.

softmaxModel = struct;  

tic;
lambda = 1e-4;

options.maxIter = 100;

softmaxModel = softmaxTrain(inputSize, numLabels, lambda, ...
                            trainData, trainLabels, options);

disp(['Softmax training with 100 iterations took: ',num2str(toc),' s']);

[pred] = softmaxPredict(softmaxModel, testData);


fprintf('Test Accuracy: %f%%\n', 100*mean(pred(:) == testLabels(:)));
%Test Accuracy: 96.764283%

  feedForwardAutoencoder.m

function [activation] = feedForwardAutoencoder(theta, hiddenSize, visibleSize, data)

% theta: trained weights from the autoencoder
% visibleSize: the number of input units (784 in this exercise) 
% hiddenSize: the number of hidden units (200 in this exercise) 
% data: Our matrix containing the training data as columns.  So, data(:,i) is the i-th training example. 
  
% We first convert theta to the (W1, W2, b1, b2) matrix/vector format, so that this 
% follows the notation convention of the lecture notes. 
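% theta stacks the parameters as [W1(:); W2(:); b1; b2], so W1 occupies the first
% hiddenSize*visibleSize entries and b1 starts at index 2*hiddenSize*visibleSize + 1.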

W1 = reshape(theta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
b1 = theta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);

%% ---------- YOUR CODE HERE --------------------------------------
%  Instructions: Compute the activation of the hidden layer for the Sparse Autoencoder.
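%  W1*data maps each column (one example per column) to the hidden layer;
%  bsxfun(@plus, ..., b1) adds the bias vector to every column, and sigmoid
%  yields the hidden-layer activations.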

activation = sigmoid(bsxfun(@plus, W1*data, b1));

%-------------------------------------------------------------------

end

%-------------------------------------------------------------------
% Here's an implementation of the sigmoid function, which you may find useful
% in your computation of the costs and the gradients.  This inputs a (row or
% column) vector (say (z1, z2, z3)) and returns (f(z1), f(z2), f(z3)). 

function sigm = sigmoid(x)
    sigm = 1 ./ (1 + exp(-x));
end

  
