Sparse Autoencoder (Part 2)

Gradient checking and advanced optimization

In this section, we describe a method for numerically checking the derivatives computed by your code to make sure that your implementation is correct. Carrying out the derivative checking procedure described here will significantly increase your confidence in the correctness of your code.

Suppose we want to minimize \textstyle J(\theta) as a function of \textstyle \theta. For this example, suppose \textstyle J : \Re \mapsto \Re, so that \textstyle \theta \in \Re. In this 1-dimensional case, one iteration of gradient descent is given by

\begin{align}
\theta := \theta - \alpha \frac{d}{d\theta}J(\theta).
\end{align}

Suppose also that we have implemented some function \textstyle g(\theta) that purportedly computes \textstyle \frac{d}{d\theta}J(\theta), so that we implement gradient descent using the update \textstyle \theta := \theta - \alpha g(\theta).
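As a minimal illustration (not from the original notes), here is what this 1-D update looks like in Python for the toy objective \textstyle J(\theta) = \theta^2, whose true derivative is \textstyle 2\theta:

```python
# A toy 1-D example: J(theta) = theta^2, with purported derivative g(theta) = 2*theta.
def J(theta):
    return theta ** 2

def g(theta):
    return 2.0 * theta                       # implements dJ/dtheta

theta = 5.0                                  # arbitrary starting point
alpha = 0.1                                  # learning rate
for _ in range(100):
    theta = theta - alpha * g(theta)         # theta := theta - alpha * g(theta)
print(theta)                                 # approaches the minimizer theta = 0
```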

 

Recall the mathematical definition of the derivative as

\begin{align}
\frac{d}{d\theta}J(\theta) = \lim_{\epsilon \rightarrow 0}
\frac{J(\theta+ \epsilon) - J(\theta-\epsilon)}{2 \epsilon}.
\end{align}

Thus, at any specific value of \textstyle \theta, we can numerically approximate the derivative as follows:

\begin{align}
\frac{J(\theta+{\rm EPSILON}) - J(\theta-{\rm EPSILON})}{2 \times {\rm EPSILON}}
\end{align}

 

Thus, given a function \textstyle g(\theta) that is supposedly computing \textstyle \frac{d}{d\theta}J(\theta), we can now numerically verify its correctness by checking that

\begin{align}
g(\theta) \approx
\frac{J(\theta+{\rm EPSILON}) - J(\theta-{\rm EPSILON})}{2 \times {\rm EPSILON}}.
\end{align}

The degree to which these two values should approximate each other will depend on the details of \textstyle J. But assuming \textstyle {\rm EPSILON} = 10^{-4}, you'll usually find that the left- and right-hand sides of the above will agree to at least 4 significant digits (and often many more).
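A minimal Python sketch of this check, reusing the toy objective \textstyle J(\theta) = \theta^2 from above (hypothetical, not part of the original notes):

```python
EPSILON = 1e-4

def J(theta):
    return theta ** 2            # toy objective

def g(theta):
    return 2.0 * theta           # purported derivative of J, to be checked

theta = 1.7
numeric = (J(theta + EPSILON) - J(theta - EPSILON)) / (2.0 * EPSILON)
print(g(theta), numeric)         # should agree to at least 4 significant digits
```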

 

Suppose we have a function \textstyle g_i(\theta) that purportedly computes \textstyle \frac{\partial}{\partial \theta_i} J(\theta); we'd like to check if \textstyle g_i is outputting correct derivative values. Let \textstyle \theta^{(i+)} = \theta + {\rm EPSILON} \times \vec{e}_i, where

\begin{align}
\vec{e}_i = \begin{bmatrix}0 \\ 0 \\ \vdots \\ 1 \\ \vdots \\ 0\end{bmatrix}
\end{align}

is the \textstyle i-th basis vector (a vector of the same dimension as \textstyle \theta, with a "1" in the \textstyle i-th position and "0"s everywhere else). So, \textstyle \theta^{(i+)} is the same as \textstyle \theta, except its \textstyle i-th element has been incremented by EPSILON. Similarly, let \textstyle \theta^{(i-)} = \theta - {\rm EPSILON} \times \vec{e}_i be the corresponding vector with the \textstyle i-th element decreased by EPSILON. We can now numerically verify \textstyle g_i(\theta)'s correctness by checking, for each \textstyle i, that:

\begin{align}
g_i(\theta) \approx
\frac{J(\theta^{(i+)}) - J(\theta^{(i-)})}{2 \times {\rm EPSILON}}.
\end{align}

 

Here the parameter is a vector; to verify the correctness of the computation along each dimension, we hold all other components fixed and perturb one coordinate at a time.
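A sketch of this coordinate-wise check in Python (a generic helper, not the original implementation), assuming \textstyle \theta is stored as a flat NumPy array:

```python
import numpy as np

def numerical_gradient(J, theta, eps=1e-4):
    """Central-difference estimate of dJ/dtheta_i, perturbing one coordinate at a time."""
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        e_i = np.zeros_like(theta)
        e_i[i] = 1.0                                   # i-th basis vector
        grad[i] = (J(theta + eps * e_i) - J(theta - eps * e_i)) / (2.0 * eps)
    return grad

# Example with J(theta) = 0.5 * ||theta||^2, whose gradient is theta itself:
J = lambda th: 0.5 * np.dot(th, th)
theta = np.array([0.3, -1.2, 2.0])
print(numerical_gradient(J, theta))                    # close to [0.3, -1.2, 2.0]
```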

When implementing backpropagation to train a neural network, in a correct implementation we will have that

\begin{align}
\nabla_{W^{(l)}} J(W,b) &= \left( \frac{1}{m} \Delta W^{(l)} \right) + \lambda W^{(l)} \\
\nabla_{b^{(l)}} J(W,b) &= \frac{1}{m} \Delta b^{(l)}.
\end{align}

This result shows that the final block of pseudo-code in the Backpropagation Algorithm section is indeed implementing gradient descent. To make sure your implementation of gradient descent is correct, it is usually very helpful to use the method described above to numerically compute the derivatives of \textstyle J(W,b), and thereby verify that your computations of \textstyle \left(\frac{1}{m}\Delta W^{(l)} \right) + \lambda W^{(l)} and \textstyle \frac{1}{m}\Delta b^{(l)} are indeed giving the derivatives you want.
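As a hedged sketch of how such a check might be wired up (the names cost_fn and analytic_grad are hypothetical stand-ins for your own code, with \textstyle W^{(l)}, b^{(l)} flattened into a single parameter vector):

```python
import numpy as np

def check_backprop_gradients(cost_fn, analytic_grad, theta, eps=1e-4):
    """Relative difference between an analytic gradient (e.g. from backpropagation,
    flattened into one vector) and the central-difference numerical gradient.
    cost_fn(theta) should evaluate J(W,b) with W and b unpacked from theta."""
    numeric = np.zeros_like(theta)
    for i in range(theta.size):
        e_i = np.zeros_like(theta)
        e_i[i] = 1.0
        numeric[i] = (cost_fn(theta + eps * e_i) - cost_fn(theta - eps * e_i)) / (2.0 * eps)
    return np.linalg.norm(numeric - analytic_grad) / np.linalg.norm(numeric + analytic_grad)

# A small relative difference (e.g. below 1e-6) suggests that
# (1/m) * Delta_W + lambda * W and (1/m) * Delta_b match the true derivatives.
```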

 

Autoencoders and Sparsity

 

An autoencoder neural network is an unsupervised learning algorithm that applies backpropagation, setting the target values to be equal to the inputs. I.e., it uses \textstyle y^{(i)} = x^{(i)}.

Here is an autoencoder:

[Figure: autoencoder network architecture (Autoencoder636.png)]

 

We will write \textstyle a^{(2)}_j(x) to denote the activation of hidden unit \textstyle j when the network is given a specific input \textstyle x. Further, let

\begin{align}
\hat\rho_j = \frac{1}{m} \sum_{i=1}^m \left[ a^{(2)}_j(x^{(i)}) \right]
\end{align}

be the average activation of hidden unit \textstyle j (averaged over the training set). We would like to (approximately) enforce the constraint

\begin{align}
\hat\rho_j = \rho,
\end{align}

where \textstyle \rho is a sparsity parameter, typically a small value close to zero (say \textstyle \rho = 0.05). In other words, we would like the average activation of each hidden neuron \textstyle j to be close to 0.05 (say). To satisfy this constraint, the hidden unit's activations must mostly be near 0.
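As a concrete sketch (assuming a sigmoid hidden layer and a data matrix X whose columns are the training examples \textstyle x^{(i)}; these are assumptions, not part of the original notes), \textstyle \hat\rho could be computed as:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def average_hidden_activation(W1, b1, X):
    """rho_hat_j = (1/m) * sum_i a2_j(x^{(i)}); X holds one training example per column."""
    A2 = sigmoid(W1 @ X + b1[:, np.newaxis])   # hidden-layer activations, shape (s2, m)
    return A2.mean(axis=1)                     # average activation per hidden unit, shape (s2,)
```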

To achieve this, we will add an extra penalty term to our optimization objective that penalizes \textstyle \hat\rho_j deviating significantly from \textstyle \rho. Many choices of the penalty term will give reasonable results. We will choose the following:

\begin{align}
\sum_{j=1}^{s_2} \rho \log \frac{\rho}{\hat\rho_j} + (1-\rho) \log \frac{1-\rho}{1-\hat\rho_j}.
\end{align}

Here, \textstyle s_2 is the number of neurons in the hidden layer, and the index \textstyle j is summing over the hidden units in our network. If you are familiar with the concept of KL divergence, this penalty term is based on it, and can also be written

\begin{align}
\sum_{j=1}^{s_2} {\rm KL}(\rho || \hat\rho_j),
\end{align}

where \textstyle {\rm KL}(\rho || \hat\rho_j) = \rho \log \frac{\rho}{\hat\rho_j} + (1-\rho) \log \frac{1-\rho}{1-\hat\rho_j} is the KL divergence between a Bernoulli random variable with mean \textstyle \rho and a Bernoulli random variable with mean \textstyle \hat\rho_j.

Our overall cost function is now

\begin{align}
J_{\rm sparse}(W,b) = J(W,b) + \beta \sum_{j=1}^{s_2} {\rm KL}(\rho || \hat\rho_j),
\end{align}

where \textstyle J(W,b) is as defined previously, and \textstyle \beta controls the weight of the sparsity penalty term. The term \textstyle \hat\rho_j (implicitly) depends on \textstyle W,b also, because it is the average activation of hidden unit \textstyle j, and the activation of a hidden unit depends on the parameters \textstyle W,b.
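A small sketch of this penalized cost (the default value of \textstyle \beta here is an arbitrary illustrative choice):

```python
import numpy as np

def kl_divergence(rho, rho_hat):
    # sum_j KL(rho || rho_hat_j) for Bernoulli distributions with means rho and rho_hat_j
    return np.sum(rho * np.log(rho / rho_hat)
                  + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

def sparse_cost(J_wb, rho_hat, rho=0.05, beta=3.0):
    # J_sparse(W,b) = J(W,b) + beta * sum_j KL(rho || rho_hat_j)
    return J_wb + beta * kl_divergence(rho, rho_hat)
```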

 

To incorporate the sparsity penalty into backpropagation, the error term for the hidden layer picks up an extra term: where the standard backpropagation rule would give \textstyle \delta^{(2)}_i = \left( \sum_{j=1}^{s_3} W^{(2)}_{ji} \delta^{(3)}_j \right) f'(z^{(2)}_i), we now instead compute

\begin{align}
\delta^{(2)}_i =
  \left( \left( \sum_{j=1}^{s_3} W^{(2)}_{ji} \delta^{(3)}_j \right)
+ \beta \left( - \frac{\rho}{\hat\rho_i} + \frac{1-\rho}{1-\hat\rho_i} \right) \right) f'(z^{(2)}_i) .
\end{align}
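A sketch of this modified error term for a sigmoid hidden layer, where \textstyle f'(z^{(2)}) = a^{(2)}(1 - a^{(2)}) (vectorized over a batch; the array shapes are assumptions, not from the original notes):

```python
import numpy as np

def hidden_delta_with_sparsity(W2, delta3, A2, rho_hat, rho=0.05, beta=3.0):
    """delta2 including the sparsity term.
    W2: (s3, s2), delta3: (s3, m), A2: (s2, m) hidden activations, rho_hat: (s2,)."""
    sparsity_term = beta * (-rho / rho_hat + (1 - rho) / (1 - rho_hat))   # shape (s2,)
    fprime = A2 * (1 - A2)                     # f'(z^{(2)}) for the sigmoid
    return (W2.T @ delta3 + sparsity_term[:, np.newaxis]) * fprime
```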

 

Visualizing a Trained Autoencoder

 

Consider the case of training an autoencoder on \textstyle 10 \times 10 images, so that \textstyle n = 100. Each hidden unit \textstyle i computes a function of the input:

\begin{align}
a^{(2)}_i = f\left(\sum_{j=1}^{100} W^{(1)}_{ij} x_j  + b^{(1)}_i \right).
\end{align}

We will visualize the function computed by hidden unit \textstyle i, which depends on the parameters \textstyle W^{(1)}_{ij} (ignoring the bias term for now), using a 2D image. In particular, we think of \textstyle a^{(2)}_i as some non-linear feature of the input \textstyle x, and ask: what input \textstyle x would cause \textstyle a^{(2)}_i to be maximally activated?

 

If we suppose that the input is norm constrained by \textstyle ||x||^2 = \sum_{i=1}^{100} x_i^2 \leq 1, then one can show (try doing this yourself) that the input which maximally activates hidden unit \textstyle i is given by setting pixel \textstyle x_j (for all 100 pixels, \textstyle j=1,\ldots, 100) to

\begin{align}
x_j = \frac{W^{(1)}_{ij}}{\sqrt{\sum_{j=1}^{100} (W^{(1)}_{ij})^2}}.
\end{align}

By displaying the image formed by these pixel intensity values, we can begin to understand what feature hidden unit \textstyle i is looking for.
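A sketch of this visualization step, assuming \textstyle W^{(1)} is stored as an \textstyle (s_2, 100) NumPy array with one row per hidden unit:

```python
import numpy as np

def maximally_activating_inputs(W1):
    """For each hidden unit i, the norm-constrained input that maximally activates it:
    x_j = W1[i, j] / sqrt(sum_j W1[i, j]^2).  Each row reshapes to a 10x10 image."""
    norms = np.sqrt((W1 ** 2).sum(axis=1, keepdims=True))
    return W1 / norms

# e.g. images = maximally_activating_inputs(W1).reshape(-1, 10, 10)
```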

When an autoencoder is run on an image, the earlier hidden units generally capture low-level features such as edges, while hidden units further back capture features with deeper semantics.
