Abstract: The Noisy-Channel Model. $p(e)$: the language model; $p(f|e)$: the translation model, where $e$ is the English sentence and $f$ is the French sentence. The probability of translating from French into English: $$p(e|f)=\frac{p(e)\,p(f|e)}{p(f)}\propto p(e)\,p(f|e)$$ Read more
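A minimal sketch of noisy-channel decoding, assuming we already have log-probability tables for both models (the tables, sentences, and the `decode` helper below are made up for illustration):

```python
# Toy noisy-channel decoder: pick the e maximizing p(e) * p(f|e), in log space.
log_p_lm = {"the house": -1.0, "house the": -5.0}          # language model p(e)
log_p_tm = {("la maison", "the house"): -0.5,
            ("la maison", "house the"): -0.5}              # translation model p(f|e)

def decode(f, candidates):
    # argmax over English candidates of log p(e) + log p(f|e)
    return max(candidates, key=lambda e: log_p_lm[e] + log_p_tm[(f, e)])

print(decode("la maison", ["the house", "house the"]))     # -> "the house"
```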
posted @ 2016-05-22 14:11 姜楠
Abstract: Long short-term memory: make that short-term memory last for a long time. Paper reference: "A Critical Review of Recurrent Neural Networks for Sequence Learning". Read more
posted @ 2016-05-18 21:19 姜楠
Abstract: Sigmoid Function $$ \sigma(z)=\frac{1}{1+e^{-z}} $$ Features: 1. axial symmetry: $$ \sigma(z)+\sigma(-z)=1 $$ 2. gradient: $$ \frac{\partial\sigma(z)}{\partial z}=\sigma(z)\left(1-\sigma(z)\right) $$ Read more
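A quick NumPy check of both properties (a sketch added here, not from the post):

```python
import numpy as np

def sigmoid(z):
    # sigma(z) = 1 / (1 + exp(-z))
    return 1.0 / (1.0 + np.exp(-z))

z = np.linspace(-5.0, 5.0, 11)
s = sigmoid(z)

# Axial symmetry: sigma(z) + sigma(-z) = 1
assert np.allclose(s + sigmoid(-z), 1.0)

# Gradient: sigma'(z) = sigma(z) * (1 - sigma(z)), checked by finite differences
eps = 1e-6
numeric = (sigmoid(z + eps) - sigmoid(z - eps)) / (2 * eps)
assert np.allclose(s * (1 - s), numeric, atol=1e-8)
```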
posted @ 2016-05-13 14:38 姜楠
Abstract: Linear neuron: $$y=b+\sum\limits_i{x_i w_i}$$ Binary threshold neuron: $$z = \sum\limits_i{x_i w_i}$$ $$y=\left\{\begin{aligned} 1,~~~~~~~&z\gt \theta\\ 0,~~~~~~~&z\le \theta \end{aligned}\right.$$ Read more
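A small NumPy sketch of the two neuron types; the weights, bias, and threshold below are arbitrary:

```python
import numpy as np

def linear_neuron(x, w, b):
    # y = b + sum_i x_i * w_i
    return b + np.dot(x, w)

def binary_threshold_neuron(x, w, theta):
    # z = sum_i x_i * w_i ; fire (1) iff z exceeds the threshold theta
    z = np.dot(x, w)
    return 1 if z > theta else 0

x = np.array([1.0, 0.5, -0.2])
w = np.array([0.4, 0.3, 0.9])
print(linear_neuron(x, w, b=0.1))                # 0.47
print(binary_threshold_neuron(x, w, theta=0.3))  # z = 0.37 > 0.3 -> 1
```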
posted @ 2016-05-13 13:29 姜楠
Abstract: Softmax function. The softmax function $y=[y_1,\cdots,y_m]$ is defined as: $$y_i=\frac{\exp(z_i)}{\sum\limits_{j=1}^m{\exp(z_j)}}, \quad i=1,2,\cdots,m$$ It has a very convenient derivative: $$\frac{\partial y_i}{\partial z_j}=y_i(\delta_{ij}-y_j)$$ Read more
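A NumPy sketch of the function together with a finite-difference check of the derivative property (subtracting the max before exponentiating is a standard numerical-stability trick, not something from the post):

```python
import numpy as np

def softmax(z):
    # exp(z_i - c) / sum_j exp(z_j - c) equals the definition for any constant c
    e = np.exp(z - np.max(z))
    return e / e.sum()

z = np.array([1.0, 2.0, 3.0])
y = softmax(z)

# Jacobian from the derivative property: dy_i/dz_j = y_i * (delta_ij - y_j)
jac = np.diag(y) - np.outer(y, y)

# Finite-difference check of the column j = 0
eps = 1e-6
dz = np.zeros_like(z); dz[0] = eps
numeric = (softmax(z + dz) - softmax(z - dz)) / (2 * eps)
assert np.allclose(jac[:, 0], numeric, atol=1e-8)
```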
posted @ 2016-05-13 13:12 姜楠
Abstract: Paper reference: word2vec Parameter Learning Explained. 1. One-word context model. In our setting, the vocabulary size is $V$, and the hidden layer size is $N$. The input $x$ is a one-hot representation… Read more
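A minimal forward-pass sketch of the one-word-context model with random weights; `W` and `W_prime` stand for the paper's $W$ and $W'$, and the sizes are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
V, N = 10, 4                                  # vocabulary size V, hidden size N
W = rng.normal(scale=0.1, size=(V, N))        # input -> hidden weights
W_prime = rng.normal(scale=0.1, size=(N, V))  # hidden -> output weights

def forward(word_index):
    # With a one-hot input x, h = W^T x simply copies row `word_index` of W.
    h = W[word_index]
    u = h @ W_prime                           # a score for every vocabulary word
    y = np.exp(u - u.max())
    return y / y.sum()                        # softmax posterior over the vocabulary

p = forward(3)
assert np.isclose(p.sum(), 1.0)
```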
posted @ 2016-05-09 19:54 姜楠
Abstract: Energy-based Model. The probability distribution (softmax function): \[p(x)=\frac{\exp(-E(x))}{\sum\limits_{x'}{\exp(-E(x'))}}\] When there are hidden units $h$, $p(x)$ is obtained by marginalizing over them: \[p(x)=\sum\limits_h p(x,h)\] Read more
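A toy numerical instance of the distribution over a small discrete state space (the energies are arbitrary):

```python
import numpy as np

E = np.array([1.0, 0.5, 2.0, 0.1, 3.0])  # arbitrary energies E(x) for 5 states

p = np.exp(-E)
p /= p.sum()                             # divide by Z = sum_x' exp(-E(x'))

assert np.isclose(p.sum(), 1.0)
assert p.argmax() == 3                   # lowest energy -> highest probability
```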
posted @ 2016-05-06 18:54 姜楠
Abstract: 1. A basic LSTM encoder-decoder. Encoder: $X$ is the input sentence; $C$ is the final hidden state produced by the encoder, referred to as the Context Vector: \[C=\mathrm{LSTM}(X).\] Decoder: each output is the next step's input, and the first input is the Context Vector produced by the encoder… Read more
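A minimal sketch of the encode/decode loop; a plain tanh RNN cell stands in for the LSTM to keep it short, the output layer is the identity, and all sizes and weights are arbitrary placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8                                        # hidden/embedding size (illustrative)
Wx, Wh = rng.normal(size=(D, D)), rng.normal(size=(D, D))

def rnn_step(x, h):
    # Plain tanh RNN cell standing in for the LSTM.
    return np.tanh(Wx @ x + Wh @ h)

def encode(X):
    # Run over the input sentence; the final hidden state is the Context Vector C.
    h = np.zeros(D)
    for x in X:
        h = rnn_step(x, h)
    return h

def decode(C, steps):
    # First input is the Context Vector; each output is fed back as the next input.
    h, y, outputs = np.zeros(D), C, []
    for _ in range(steps):
        h = rnn_step(y, h)
        y = h                                # identity output layer for simplicity
        outputs.append(y)
    return outputs

X = [rng.normal(size=D) for _ in range(5)]   # a made-up "sentence" of 5 vectors
Y = decode(encode(X), steps=3)
```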
posted @ 2016-04-20 22:17 姜楠
Abstract: Update gate $z_t$: defines how much of the previous memory to keep around: \[z_t = \sigma ( W^z x_t+ U^z h_{t-1} )\] Reset gate $r_t$: determines how to combine the new input with the previous memory: \[r_t = \sigma ( W^r x_t+ U^r h_{t-1} )\] Read more
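A single GRU step in NumPy built from these two gates, using the candidate state $\tilde{h}_t=\tanh(W x_t+U(r_t\circ h_{t-1}))$ and the update $h_t=(1-z_t)\circ h_{t-1}+z_t\circ\tilde{h}_t$ (Cho et al.'s convention); all weight matrices are random placeholders:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
D = 4                                     # hidden size == input size, for brevity
Wz, Uz = rng.normal(size=(D, D)), rng.normal(size=(D, D))
Wr, Ur = rng.normal(size=(D, D)), rng.normal(size=(D, D))
W,  U  = rng.normal(size=(D, D)), rng.normal(size=(D, D))

def gru_step(x_t, h_prev):
    z_t = sigmoid(Wz @ x_t + Uz @ h_prev)            # update gate
    r_t = sigmoid(Wr @ x_t + Ur @ h_prev)            # reset gate
    h_tilde = np.tanh(W @ x_t + U @ (r_t * h_prev))  # candidate memory
    return (1 - z_t) * h_prev + z_t * h_tilde        # new hidden state

h = np.zeros(D)
for x_t in rng.normal(size=(3, D)):                  # a made-up 3-step input
    h = gru_step(x_t, h)
```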
posted @ 2016-04-14 21:46 姜楠
Abstract: Repost - Recurrent Neural Network Tutorial, Part 4 – Implementing a GRU/LSTM RNN with Python and Theano. The code for this post is on Github. This is part 4, the last part of the Recurrent Neural Network Tutorial… Read more
posted @ 2016-03-02 15:49 姜楠