Week 7 Programming Assignment
function sim = gaussianKernel(x1, x2, sigma)
%GAUSSIANKERNEL returns a radial basis function kernel between x1 and x2
%   sim = gaussianKernel(x1, x2) returns a gaussian kernel between x1 and x2
%   and returns the value in sim

% Ensure that x1 and x2 are column vectors
x1 = x1(:); x2 = x2(:);

% You need to return the following variables correctly.
sim = 0;

% ====================== YOUR CODE HERE ======================
% Instructions: Fill in this function to return the similarity between x1
%               and x2 computed using a Gaussian kernel with bandwidth
%               sigma
%

sim = exp(-sum((x1 - x2).^2) / (2 * sigma^2));

% =============================================================
end
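A quick sanity check (my own, not part of the assignment): with x1 = [1 2 1], x2 = [0 4 -1], and sigma = 2, the squared distance is 1 + 4 + 4 = 9, so the kernel value is exp(-9/8), approximately 0.324652.

% Sanity check for gaussianKernel
x1 = [1; 2; 1];
x2 = [0; 4; -1];
sim = gaussianKernel(x1, x2, 2);   % expected: exp(-9/8) ~= 0.324652
fprintf('Gaussian kernel value: %f\n', sim);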
function [C, sigma] = dataset3Params(X, y, Xval, yval)
%DATASET3PARAMS returns your choice of C and sigma for Part 3 of the exercise
%where you select the optimal (C, sigma) learning parameters to use for SVM
%with RBF kernel
%   [C, sigma] = DATASET3PARAMS(X, y, Xval, yval) returns your choice of C and
%   sigma. You should complete this function to return the optimal C and
%   sigma based on a cross-validation set.
%

% You need to return the following variables correctly.
C = 1;
sigma = 0.3;

% ====================== YOUR CODE HERE ======================
% Instructions: Fill in this function to return the optimal C and sigma
%               learning parameters found using the cross validation set.
%               You can use svmPredict to predict the labels on the cross
%               validation set. For example,
%                   predictions = svmPredict(model, Xval);
%               will return the predictions on the cross validation set.
%
%  Note: You can compute the prediction error using
%        mean(double(predictions ~= yval))
%

A = [0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30];
minerror = Inf;
minC = Inf;
minsigma = Inf;

for i = 1:length(A)
    for j = 1:length(A)
        % Train with (C, sigma) = (A(i), A(j)) and measure validation error
        model = svmTrain(X, y, A(i), @(x1, x2) gaussianKernel(x1, x2, A(j)));
        predictions = svmPredict(model, Xval);
        err = mean(double(predictions ~= yval));
        if (err < minerror)
            minerror = err;
            minC = A(i);
            minsigma = A(j);
        end
    end
end

C = minC;
sigma = minsigma;

% =========================================================================
end
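The grid search above trains 8 x 8 = 64 models, one per (C, sigma) pair, and keeps the pair with the lowest validation error. A minimal driver sketch, assuming the exercise's ex6data3.mat supplies X, y, Xval, yval:

% Hypothetical driver for the parameter search
load('ex6data3.mat');                 % assumed to provide X, y, Xval, yval
[C, sigma] = dataset3Params(X, y, Xval, yval);
% Retrain on the full training set with the selected parameters
model = svmTrain(X, y, C, @(x1, x2) gaussianKernel(x1, x2, sigma));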
function [model] = svmTrain(X, Y, C, kernelFunction, ...
                            tol, max_passes)
%SVMTRAIN Trains an SVM classifier using a simplified version of the SMO
%algorithm.
%   [model] = SVMTRAIN(X, Y, C, kernelFunction, tol, max_passes) trains an
%   SVM classifier and returns trained model. X is the matrix of training
%   examples. Each row is a training example, and the jth column holds the
%   jth feature. Y is a column matrix containing 1 for positive examples
%   and 0 for negative examples. C is the standard SVM regularization
%   parameter. tol is a tolerance value used for determining equality of
%   floating point numbers. max_passes controls the number of iterations
%   over the dataset (without changes to alpha) before the algorithm quits.
%
% Note: This is a simplified version of the SMO algorithm for training
%       SVMs. In practice, if you want to train an SVM classifier, we
%       recommend using an optimized package such as:
%
%           LIBSVM   (http://www.csie.ntu.edu.tw/~cjlin/libsvm/)
%           SVMLight (http://svmlight.joachims.org/)
%

if ~exist('tol', 'var') || isempty(tol)
    tol = 1e-3;
end

if ~exist('max_passes', 'var') || isempty(max_passes)
    max_passes = 5;
end

% Data parameters
m = size(X, 1);
n = size(X, 2);

% Map 0 to -1
Y(Y==0) = -1;

% Variables
alphas = zeros(m, 1);
b = 0;
E = zeros(m, 1);
passes = 0;
eta = 0;
L = 0;
H = 0;

% Pre-compute the Kernel Matrix since our dataset is small
% (in practice, optimized SVM packages that handle large datasets
%  gracefully will _not_ do this)
%
% We have implemented an optimized vectorized version of the kernels here so
% that the SVM training will run faster.
if strcmp(func2str(kernelFunction), 'linearKernel')
    % Vectorized computation for the Linear Kernel
    % This is equivalent to computing the kernel on every pair of examples
    K = X*X';
elseif strfind(func2str(kernelFunction), 'gaussianKernel')
    % Vectorized RBF Kernel
    % This is equivalent to computing the kernel on every pair of examples
    X2 = sum(X.^2, 2);
    K = bsxfun(@plus, X2, bsxfun(@plus, X2', - 2 * (X * X')));
    K = kernelFunction(1, 0) .^ K;
else
    % Pre-compute the Kernel Matrix
    % The following can be slow due to the lack of vectorization
    K = zeros(m);
    for i = 1:m
        for j = i:m
            K(i,j) = kernelFunction(X(i,:)', X(j,:)');
            K(j,i) = K(i,j); % the matrix is symmetric
        end
    end
end

% Train
fprintf('\nTraining ...');
dots = 12;
while passes < max_passes,

    num_changed_alphas = 0;
    for i = 1:m,

        % Calculate Ei = f(x(i)) - y(i) using (2).
        % E(i) = b + sum (X(i, :) * (repmat(alphas.*Y,1,n).*X)') - Y(i);
        E(i) = b + sum (alphas.*Y.*K(:,i)) - Y(i);

        if ((Y(i)*E(i) < -tol && alphas(i) < C) || (Y(i)*E(i) > tol && alphas(i) > 0)),

            % In practice, there are many heuristics one can use to select
            % the i and j. In this simplified code, we select them randomly.
            j = ceil(m * rand());
            while j == i,  % Make sure i \neq j
                j = ceil(m * rand());
            end

            % Calculate Ej = f(x(j)) - y(j) using (2).
            E(j) = b + sum (alphas.*Y.*K(:,j)) - Y(j);

            % Save old alphas
            alpha_i_old = alphas(i);
            alpha_j_old = alphas(j);

            % Compute L and H by (10) or (11).
            if (Y(i) == Y(j)),
                L = max(0, alphas(j) + alphas(i) - C);
                H = min(C, alphas(j) + alphas(i));
            else
                L = max(0, alphas(j) - alphas(i));
                H = min(C, C + alphas(j) - alphas(i));
            end

            if (L == H),
                % continue to next i.
                continue;
            end

            % Compute eta by (14).
            eta = 2 * K(i,j) - K(i,i) - K(j,j);
            if (eta >= 0),
                % continue to next i.
                continue;
            end

            % Compute and clip new value for alpha j using (12) and (15).
            alphas(j) = alphas(j) - (Y(j) * (E(i) - E(j))) / eta;

            % Clip
            alphas(j) = min (H, alphas(j));
            alphas(j) = max (L, alphas(j));

            % Check if change in alpha is significant
            if (abs(alphas(j) - alpha_j_old) < tol),
                % continue to next i; restore the old value
                alphas(j) = alpha_j_old;
                continue;
            end

            % Determine value for alpha i using (16).
            alphas(i) = alphas(i) + Y(i)*Y(j)*(alpha_j_old - alphas(j));

            % Compute b1 and b2 using (17) and (18) respectively.
            b1 = b - E(i) ...
                 - Y(i) * (alphas(i) - alpha_i_old) * K(i,i) ...
                 - Y(j) * (alphas(j) - alpha_j_old) * K(i,j);
            b2 = b - E(j) ...
                 - Y(i) * (alphas(i) - alpha_i_old) * K(i,j) ...
                 - Y(j) * (alphas(j) - alpha_j_old) * K(j,j);

            % Compute b by (19).
            if (0 < alphas(i) && alphas(i) < C),
                b = b1;
            elseif (0 < alphas(j) && alphas(j) < C),
                b = b2;
            else
                b = (b1+b2)/2;
            end

            num_changed_alphas = num_changed_alphas + 1;

        end

    end

    if (num_changed_alphas == 0),
        passes = passes + 1;
    else
        passes = 0;
    end

    fprintf('.');
    dots = dots + 1;
    if dots > 78
        dots = 0;
        fprintf('\n');
    end
    if exist('OCTAVE_VERSION')
        fflush(stdout);
    end
end
fprintf(' Done! \n\n');

% Save the model
idx = alphas > 0;
model.X = X(idx,:);
model.y = Y(idx);
model.kernelFunction = kernelFunction;
model.b = b;
model.alphas = alphas(idx);
model.w = ((alphas.*Y)'*X)';

end
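A minimal usage sketch for svmTrain on a hypothetical toy set (the data and parameters here are illustrative, not from the exercise). Labels must be in {0, 1}; the function maps 0 to -1 internally:

% Hypothetical 2-D examples, one per row; separable by x1 + x2 = 6
X = [1 1; 2 2; 4 4; 5 5];
y = [1; 1; 0; 0];                          % {0, 1} labels, as svmTrain expects
model = svmTrain(X, y, 1, @linearKernel, 1e-3, 20);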
function pred = svmPredict(model, X)
%SVMPREDICT returns a vector of predictions using a trained SVM model
%(svmTrain).
%   pred = SVMPREDICT(model, X) returns a vector of predictions using a
%   trained SVM model (svmTrain). X is a m x n matrix where each
%   example is a row. model is a svm model returned from svmTrain.
%   predictions pred is a m x 1 column of predictions of {0, 1} values.
%

% Check if we are getting a column vector, if so, then assume that we only
% need to do prediction for a single example
if (size(X, 2) == 1)
    % Examples should be in rows
    X = X';
end

% Dataset
m = size(X, 1);
p = zeros(m, 1);
pred = zeros(m, 1);

if strcmp(func2str(model.kernelFunction), 'linearKernel')
    % We can use the weights and bias directly if working with the
    % linear kernel
    p = X * model.w + model.b;
elseif strfind(func2str(model.kernelFunction), 'gaussianKernel')
    % Vectorized RBF Kernel
    % This is equivalent to computing the kernel on every pair of examples
    X1 = sum(X.^2, 2);
    X2 = sum(model.X.^2, 2)';
    K = bsxfun(@plus, X1, bsxfun(@plus, X2, - 2 * X * model.X'));
    K = model.kernelFunction(1, 0) .^ K;
    K = bsxfun(@times, model.y', K);
    K = bsxfun(@times, model.alphas', K);
    p = sum(K, 2);
else
    % Other non-linear kernel
    for i = 1:m
        prediction = 0;
        for j = 1:size(model.X, 1)
            prediction = prediction + ...
                model.alphas(j) * model.y(j) * ...
                model.kernelFunction(X(i,:)', model.X(j,:)');
        end
        p(i) = prediction + model.b;
    end
end

% Convert predictions into 0 / 1
pred(p >= 0) = 1;
pred(p <  0) = 0;

end
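Typical usage, mirroring how the exercise evaluates a trained model (the variable names Xval and yval are illustrative):

% Predict on the validation set and report accuracy
pred = svmPredict(model, Xval);
fprintf('Validation accuracy: %.2f%%\n', mean(double(pred == yval)) * 100);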
function word_indices = processEmail(email_contents)
%PROCESSEMAIL preprocesses the body of an email and
%returns a list of word_indices
%   word_indices = PROCESSEMAIL(email_contents) preprocesses
%   the body of an email and returns a list of indices of the
%   words contained in the email.
%

% Load Vocabulary
vocabList = getVocabList();

% Init return value
word_indices = []; % empty list

% ========================== Preprocess Email ===========================

% Find the Headers ( \n\n and remove )
% Uncomment the following lines if you are working with raw emails with the
% full headers

% hdrstart = strfind(email_contents, ([char(10) char(10)]));
% email_contents = email_contents(hdrstart(1):end);

% Lower case
email_contents = lower(email_contents);

% Strip all HTML
% Looks for any expression that starts with < and ends with >, does not
% contain any < or > inside the tag, and replaces it with a space
email_contents = regexprep(email_contents, '<[^<>]+>', ' ');

% Handle Numbers
% Look for one or more characters between 0-9
email_contents = regexprep(email_contents, '[0-9]+', 'number');

% Handle URLs
% Look for strings starting with http:// or https://
email_contents = regexprep(email_contents, ...
                           '(http|https)://[^\s]*', 'httpaddr');

% Handle Email Addresses
% Look for strings with @ in the middle
email_contents = regexprep(email_contents, '[^\s]+@[^\s]+', 'emailaddr');

% Handle $ sign
email_contents = regexprep(email_contents, '[$]+', 'dollar');

% ========================== Tokenize Email ===========================

% Output the email to screen as well
fprintf('\n==== Processed Email ====\n\n');

% Process file
l = 0;

while ~isempty(email_contents)

    % Tokenize and also get rid of any punctuation
    [str, email_contents] = ...
       strtok(email_contents, ...
              [' @$/#.-:&*+=[]?!(){},''">_<;%' char(10) char(13)]);

    % Remove any non alphanumeric characters
    str = regexprep(str, '[^a-zA-Z0-9]', '');

    % Stem the word
    % (the porterStemmer sometimes has issues, so we use a try catch block)
    try str = porterStemmer(strtrim(str));
    catch str = ''; continue;
    end;

    % Skip the word if it is too short
    if length(str) < 1
       continue;
    end

    % Look up the word in the dictionary and add to word_indices if
    % found
    % ====================== YOUR CODE HERE ======================
    % Instructions: Fill in this function to add the index of str to
    %               word_indices if it is in the vocabulary. At this point
    %               of the code, you have a stemmed word from the email in
    %               the variable str. You should look up str in the
    %               vocabulary list (vocabList). If a match exists, you
    %               should add the index of the word to the word_indices
    %               vector. Concretely, if str = 'action', then you should
    %               look up the vocabulary list to find where in vocabList
    %               'action' appears. For example, if vocabList{18} =
    %               'action', then, you should add 18 to the word_indices
    %               vector (e.g., word_indices = [word_indices ; 18]; ).
    %
    % Note: vocabList{idx} returns the word with index idx in the
    %       vocabulary list.
    %
    % Note: You can use strcmp(str1, str2) to compare two strings (str1 and
    %       str2). It will return 1 only if the two strings are equivalent.
    %

    for i = 1:length(vocabList)
        if (strcmp(vocabList{i}, str))
            word_indices = [word_indices; i]; % append the matching index
        end
    end

    % =============================================================

    % Print to screen, ensuring that the output lines are not too long
    if (l + length(str) + 1) > 78
        fprintf('\n');
        l = 0;
    end
    fprintf('%s ', str);
    l = l + length(str) + 1;

end

% Print footer
fprintf('\n\n=========================\n');

end
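A usage sketch, assuming the exercise's readFile helper and the provided emailSample1.txt are available in the working directory:

% Preprocess a sample email and map its words to vocabulary indices
file_contents = readFile('emailSample1.txt');   % assumed exercise helper
word_indices = processEmail(file_contents);

As a design note, the linear scan over vocabList could equivalently be written as idx = find(strcmp(str, vocabList)), which returns the matching index without an explicit loop.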
function x = emailFeatures(word_indices)
%EMAILFEATURES takes in a word_indices vector and produces a feature vector
%from the word indices
%   x = EMAILFEATURES(word_indices) takes in a word_indices vector and
%   produces a feature vector from the word indices.

% Total number of words in the dictionary
n = 1899;

% You need to return the following variables correctly.
x = zeros(n, 1);

% ====================== YOUR CODE HERE ======================
% Instructions: Fill in this function to return a feature vector for the
%               given email (word_indices). To help make it easier to
%               process the emails, we have already pre-processed each
%               email and converted each word in the email into an index in
%               a fixed dictionary (of 1899 words). The variable
%               word_indices contains the list of indices of the words
%               which occur in one email.
%
%               Concretely, if an email has the text:
%
%                  The quick brown fox jumped over the lazy dog.
%
%               Then, the word_indices vector for this text might look
%               like:
%
%                   60  100   33   44   10   53  60  58   5
%
%               where, we have mapped each word onto a number, for example:
%
%                   the   -- 60
%                   quick -- 100
%                   ...
%
%              (note: the above numbers are just an example and are not the
%               actual mappings).
%
%              Your task is to take one such word_indices vector and
%              construct a binary feature vector that indicates whether a
%              particular word occurs in the email. That is, x(i) = 1 when
%              word i is present in the email. Concretely, if the word
%              'the' (say, index 60) appears in the email, then x(60) = 1.
%              The feature vector should look like:
%
%              x = [ 0 0 0 0 1 0 0 0 ... 0 0 0 0 1 ... 0 0 0 1 0 ..];
%

x(word_indices) = 1;

% =========================================================================
end
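Chaining the two steps: the indices produced by processEmail become a 1899-dimensional binary feature vector, with one nonzero entry per distinct vocabulary word found in the email.

% Build the feature vector from the indices computed above
x = emailFeatures(word_indices);
fprintf('Length of feature vector: %d\n', length(x));
fprintf('Number of non-zero entries: %d\n', sum(x > 0));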
function sim = linearKernel(x1, x2)
%LINEARKERNEL returns a linear kernel between x1 and x2
%   sim = linearKernel(x1, x2) returns a linear kernel between x1 and x2
%   and returns the value in sim

% Ensure that x1 and x2 are column vectors
x1 = x1(:); x2 = x2(:);

% Compute the kernel
sim = x1' * x2;  % dot product

end
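A quick check of the dot product: for x1 = [1 2 3] and x2 = [4 5 6], the kernel value is 1*4 + 2*5 + 3*6 = 32.

sim = linearKernel([1; 2; 3], [4; 5; 6]);   % expected: 32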