Summary: Sparse Coding. In the sparse autoencoder algorithm, we try to learn a set of weights W (and corresponding biases b) from which we can obtain sparse feature vectors σ(Wx + b); these features are very useful for reconstructing the input samples. Sparse coding can be viewed as a variant of the sparse autoencoder: it tries to learn the data's feature set directly. Using the basis vectors corresponding to this feature set, the learned... Read full post
posted @ 2014-09-19 19:57 老姨
Summary: Sparse Coding. Sparse coding is a class of unsupervised methods for learning sets of over-complete bases to represent data efficiently. (over-complete bases, unsupervised) The... Read full post
posted @ 2014-09-19 17:19 老姨
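As a sketch of the idea in the sparse-coding post above: for each input x, infer a sparse coefficient vector a over an over-complete basis D by minimizing a reconstruction-plus-L1 objective, ||x − Da||² + λ||a||₁. The dimensions, λ value, and the ISTA solver below are illustrative assumptions, not details from the post:

```python
import numpy as np

def ista(x, D, lam=0.1, steps=200):
    """Infer sparse codes a minimizing 0.5*||x - D a||^2 + lam*||a||_1 via ISTA."""
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        grad = D.T @ (D @ a - x)               # gradient of the reconstruction term
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-thresholding
    return a

rng = np.random.default_rng(0)
D = rng.normal(size=(20, 50))                  # over-complete basis: 50 atoms in 20 dims
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
x = D[:, :3] @ np.array([1.0, -2.0, 0.5])      # sample built from just 3 atoms
a = ista(x, D)
print(np.sum(np.abs(a) > 1e-3))                # only a few coefficients are non-negligible
```

The L1 penalty is what makes the code sparse; without it the over-complete basis would admit infinitely many dense solutions.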
Summary: Pooling: Overview. After obtaining features using convolution, we would next like to use them for classification. In theory, one could use all the extracted features with a classifier such as a sof... Read full post
posted @ 2014-09-19 16:12 老姨
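The pooling step the summary above refers to can be sketched as follows; the 2×2 region size and the choice of mean (rather than max) pooling are illustrative assumptions:

```python
import numpy as np

def mean_pool(feature_map, p):
    """Mean-pool a 2-D feature map over non-overlapping p x p regions."""
    h, w = feature_map.shape
    assert h % p == 0 and w % p == 0
    return feature_map.reshape(h // p, p, w // p, p).mean(axis=(1, 3))

fm = np.arange(16, dtype=float).reshape(4, 4)  # toy convolved feature map
pooled = mean_pool(fm, 2)
print(pooled)  # each entry is the mean of one 2x2 block: [[2.5 4.5] [10.5 12.5]]
```

Pooling shrinks the feature dimension handed to the classifier and makes the representation more tolerant of small translations.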
Summary: Fully Connected Networks. In the sparse autoencoder, one design choice that we had made was to "fully connect" all the hidden units to all the input uni... Read full post
posted @ 2014-09-19 15:44 老姨
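The alternative to full connectivity discussed in that post is local connectivity via convolution: each hidden unit sees only a small image patch instead of every pixel. The sketch below is a plain "valid" cross-correlation (kernel not flipped, as is conventional in CNN code); the image size and averaging kernel are made up for illustration:

```python
import numpy as np

def convolve_valid(image, kernel):
    """'Valid' 2-D convolution: each output unit depends only on one
    kh x kw patch, unlike a fully connected hidden unit."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.ones((5, 5))                          # toy 5x5 input image
k = np.full((3, 3), 1.0 / 9)                   # 3x3 averaging kernel
print(convolve_valid(img, k).shape)            # (3, 3)
```

With a 3×3 kernel, each output unit has 9 weights instead of 25, and the same 9 weights are shared across all patch locations.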
Summary: Sparse Autoencoder Recap. In the sparse autoencoder, we had 3 layers of neurons: an input layer, a hidden layer, and an output layer. In our previous des... Read full post
posted @ 2014-09-19 15:18 老姨
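A minimal forward pass through the three layers recapped above (input → hidden → output, where the output layer tries to reconstruct the input). Layer sizes are hypothetical and the weights are random, i.e. untrained:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_in, n_hid = 8, 3                             # hypothetical layer sizes
W1 = rng.normal(scale=0.1, size=(n_hid, n_in)) # input -> hidden weights
b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.1, size=(n_in, n_hid)) # hidden -> output weights
b2 = np.zeros(n_in)

x = rng.normal(size=n_in)                      # one input sample
h = sigmoid(W1 @ x + b1)                       # hidden activations: the learned features
x_hat = sigmoid(W2 @ h + b2)                   # output layer reconstructs the input
cost = 0.5 * np.sum((x_hat - x) ** 2)          # reconstruction term of the cost
print(h.shape, x_hat.shape)
```

Training would minimize this reconstruction cost plus a sparsity penalty on the hidden activations h; only the untrained forward pass is shown here.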
Summary: Reposted from http://www.cnblogs.com/tornadomeet/archive/2013/03/25/2980357.html. With a multi-layer neural network, we can obtain more complex function representations of the input, because each layer of the network is a nonlinear transformation of the previous layer. Of course, this requires the activation function of each layer to be nonlinear... Read full post
posted @ 2014-09-19 10:22 老姨
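The point above about needing nonlinear activations can be checked numerically: without a nonlinearity, stacked layers collapse into a single linear map, so depth adds no representational power. A small sketch, with random matrices and tanh chosen as an example activation:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 4))
W2 = rng.normal(size=(4, 4))
x = rng.normal(size=4)

# Two stacked linear layers equal one linear layer with weights W2 @ W1:
two_linear = W2 @ (W1 @ x)
one_linear = (W2 @ W1) @ x
print(np.allclose(two_linear, one_linear))     # True

# With a nonlinearity between layers, the composition is no longer linear:
nonlinear = W2 @ np.tanh(W1 @ x)
print(np.allclose(nonlinear, one_linear))
```

This is why each layer's activation function must be nonlinear for a deeper network to represent richer functions than a shallow one.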