Dropout: A Simple Way to Prevent Neural Networks from Overfitting

  • For a dropout layer, each unit is kept with some probability p (the keep probability, e.g. 0.5) during training; at inference time (the forward pass for prediction) the keep probability is 1.0
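The train/test behavior above can be sketched with "inverted" dropout, where the kept activations are rescaled by 1/p during training so that inference needs no dropping or scaling at all (a minimal sketch; the function name and test values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(y, p, training):
    """Inverted dropout: keep each unit with probability p during training.

    Kept activations are scaled by 1/p so their expectation matches
    inference, where the layer acts as the identity (keep prob 1.0).
    """
    if not training:
        return y  # inference: nothing dropped, nothing scaled
    mask = (rng.random(y.shape) < p).astype(y.dtype)
    return y * mask / p

y = np.ones((4, 5))
train_out = dropout(y, p=0.5, training=True)   # entries are 0.0 or 2.0
test_out = dropout(y, p=0.5, training=False)   # identical to y
```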

1. How a dropout network differs from a standard network

A standard network computes:

  • $z_i^{(l+1)} = \sum_j w_{ij}^{(l+1)} y_j^{(l)} + b_i^{(l+1)} = \mathbf{w}_i^{(l+1)} \mathbf{y}^{(l)} + b_i^{(l+1)}$
  • $y_i^{(l+1)} = f(z_i^{(l+1)})$

With dropout, the network instead computes:

  • $r_j^{(l)} \sim \mathrm{Bernoulli}(p)$
  • $\tilde{\mathbf{y}}^{(l)} = \mathbf{r}^{(l)} * \mathbf{y}^{(l)}$
  • $z_i^{(l+1)} = \sum_j w_{ij}^{(l+1)} \tilde{y}_j^{(l)} + b_i^{(l+1)} = \mathbf{w}_i^{(l+1)} \tilde{\mathbf{y}}^{(l)} + b_i^{(l+1)}$
  • $y_i^{(l+1)} = f(z_i^{(l+1)})$
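The four dropout equations above can be traced step by step in numpy (a minimal sketch; the layer sizes, ReLU as $f$, and random seed are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(z):
    return np.maximum(z, 0.0)

# illustrative layer sizes
n_in, n_out, p = 4, 3, 0.5
W = rng.normal(size=(n_out, n_in))   # w^{(l+1)}
b = rng.normal(size=n_out)           # b^{(l+1)}
y = rng.normal(size=n_in)            # y^{(l)}

r = rng.binomial(1, p, size=n_in)    # r_j^{(l)} ~ Bernoulli(p)
y_tilde = r * y                      # y~^{(l)} = r^{(l)} * y^{(l)} (elementwise)
z = W @ y_tilde + b                  # z^{(l+1)} = w^{(l+1)} y~^{(l)} + b^{(l+1)}
y_next = relu(z)                     # y^{(l+1)} = f(z^{(l+1)})
```

Note that the mask `r` zeroes units of the *previous* layer's output before the weighted sum, which is exactly how the paper defines a "thinned" network.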



It follows that dropout should be applied after the nonlinear activation (ReLU, etc.):

-> CONV/FC -> BatchNorm -> ReLU (or other activation) -> Dropout -> CONV/FC ->
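The layer ordering above can be sketched for the fully connected case (a minimal sketch using inverted dropout; the simplified batch norm without learned scale/shift and all sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def batchnorm(x, eps=1e-5):
    # per-feature normalization over the batch (no learned gamma/beta here)
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

def block(x, W, b, p, training):
    """FC -> BatchNorm -> ReLU -> Dropout, matching the ordering above."""
    h = x @ W + b                       # FC
    h = batchnorm(h)                    # BatchNorm
    h = np.maximum(h, 0.0)              # ReLU
    if training:                        # Dropout comes after the activation
        mask = rng.random(h.shape) < p
        h = h * mask / p                # inverted-dropout scaling
    return h

x = rng.normal(size=(8, 4))
W = rng.normal(size=(4, 6))
b = np.zeros(6)
out = block(x, W, b, p=0.5, training=True)
```

Placing dropout after ReLU means entire post-activation units are zeroed, which is what the thinned-network view of the equations assumes.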

posted on 2017-03-13 15:13 by 未雨愁眸