2016-03-09 Reading Notes: dump text + report on MS ResNet + deep feelings about DL

 

1. A short piece from 2014; it reads as uninteresting today. link = here

 

2. On ultra-deep networks: a news write-up on Kaiming He's 152-layer ResNet on ImageNet

LINK = here

My previous understanding was only that it achieved the highest classification accuracy on ImageNet, but I have now come across a new description:

“Microsoft's new neural network is as good as the other networks at spotting what's in the photo (which is often better than an untrained human at telling the difference between two very similar breeds of dog, which is one of the tests), but it's twice as good at working out where in the photo it needs to look.”

The article points out that the decay of the feedback signal is the hard part of training. A quick review of the training process, forward pass + backward propagation:

"The way these things learn is you feed data into the lowest layers of the network, the signals propagate to the top layer and then you provide feedback as to whether the learning was good or not,"

"The reinforcement signal is sent back down through the layers. The problem has been, those signals would get exceedingly weak after just a few layers, so you don't get any correction into the lower layers. It's been a huge limiting factor."

If we imagine some connection scheme in which every layer receives the backward update/feedback almost simultaneously, then backpropagation would be guaranteed to proceed effectively (eliminating the feedback-signal attenuation, or even outright vanishing, described above). But that is only an idealized approach; implementing it concretely is still difficult.
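To make the decay concrete, here is a minimal numeric sketch (my own illustration, not from the article): in a plain deep network with sigmoid activations, each layer scales the backward signal by sigmoid'(z) = s(1-s) <= 0.25, so after a few dozen layers almost nothing reaches the bottom. Weight-matrix factors are omitted for clarity.

    import numpy as np

    np.random.seed(0)
    depth = 50
    grad = np.ones(16)   # gradient arriving at the top layer

    # Walk the backward pass down through `depth` sigmoid layers of a
    # plain (non-residual) network. Each layer multiplies the gradient
    # by sigmoid'(z) = s * (1 - s), which is at most 0.25.
    for layer in range(depth):
        z = np.random.randn(16)          # stand-in pre-activations
        s = 1.0 / (1.0 + np.exp(-z))
        grad = grad * s * (1.0 - s)
        if layer % 10 == 9:
            print("after layer", layer + 1, "mean |grad| =", np.abs(grad).mean())

Running this, the mean gradient magnitude collapses geometrically with depth, which is exactly the "exceedingly weak after just a few layers" effect the quote describes.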

A practical compromise is to skip certain layers during backward propagation, or more precisely, to short-circuit them:

Instead, the idea the team came up with was "to organise the layers so that instead of flowing through every layer, the signals can skip several layers to get to the lower layers in the network".
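A minimal sketch of the residual idea behind that quote (my own toy numpy version; the real ResNet blocks are convolutional with batch normalization): the block computes F(x) + x, and the "+ x" identity path lets the backward signal bypass the block's layers.

    import numpy as np

    def residual_block(x, w1, w2):
        # F(x): two small dense layers with a ReLU in between, a toy
        # stand-in for the conv stacks in the actual ResNet.
        h = np.maximum(0.0, w1 @ x)
        fx = w2 @ h
        # The skip connection: output = F(x) + x. On the backward pass
        # the "+ x" identity path carries the gradient through unchanged,
        # so the signal reaches lower layers even when grad(F) is tiny.
        return fx + x

    dim = 8
    rng = np.random.default_rng(0)
    x = rng.standard_normal(dim)
    w1 = 0.1 * rng.standard_normal((dim, dim))
    w2 = 0.1 * rng.standard_normal((dim, dim))
    print(residual_block(x, w1, w2))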

Building on the success of the residual learning network, Microsoft's next steps fall into two directions:

a. "Going deeper is just one way [to get better results]"

b. "parallel training, so we train the whole system across the machine, where each machine might have four or eight GPUs, so we can train even deeper networks in parallel." (see the sketch of this data-parallel idea below)
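A hedged sketch of synchronous data-parallel training in the spirit of quote b (my illustration on a toy least-squares problem, not Microsoft's actual system): each worker computes gradients on its own shard of the minibatch, the gradients are averaged (an all-reduce), and every replica applies the same update.

    import numpy as np

    def grad_on_shard(w, xs, ys):
        # Gradient of mean squared error (x*w - y)^2 on one worker's shard.
        return (2.0 * (xs * w - ys) * xs).mean()

    rng = np.random.default_rng(1)
    w = 0.0          # shared model parameter, replicated on every worker
    n_workers = 4    # stand-in for "each machine might have four or eight GPUs"
    lr = 0.1

    for step in range(200):
        grads = []
        for _ in range(n_workers):
            xs = rng.standard_normal(32)   # this worker's shard of the minibatch
            ys = 3.0 * xs                  # synthetic target: true weight is 3.0
            grads.append(grad_on_shard(w, xs, ys))
        # Synchronous all-reduce: average the per-worker gradients, then
        # apply the same update on every replica.
        w -= lr * float(np.mean(grads))

    print(w)   # converges close to 3.0

Because the averaged gradient equals the gradient over the full combined minibatch, the replicas stay in lockstep, which is what makes training "even deeper networks in parallel" feasible.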

 

3. LINK = here

This one reads as Carlos's rant about deep learning: we know that it works, but not why it works.

The problem, again, is the missing theoretical foundation of deep learning.

Rants aside, the author's account of his own learning path is quite interesting, and the links he gives are worth a look. I am also planning to enroll again in the Coursera courses. Onward, young man.

 

posted on 2016-03-12 21:30 馒头山小八路