How is dropout implemented in TensorFlow?

# Call the dropout function (TensorFlow 1.x Session API)
import tensorflow as tf

a = tf.Variable([1.0, 2.0, 3.0, 4.5])
sess = tf.Session()
init_op = tf.global_variables_initializer()
sess.run(init_op)
# The second argument is keep_prob: each element survives with probability 0.5
a = tf.nn.dropout(a, 0.5)
print(sess.run(a))
Output (one possible run): [2. 0. 6. 0.]
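Note that the example above uses the TensorFlow 1.x Session API, where the second argument of tf.nn.dropout is keep_prob. For comparison, here is a minimal sketch of the same call under TensorFlow 2.x, where the argument is instead rate, the drop probability (rate = 1 - keep_prob):

# TensorFlow 2.x sketch: eager execution, and the second argument
# is rate (the probability of DROPPING an element), not keep_prob
import tensorflow as tf

a = tf.constant([1.0, 2.0, 3.0, 4.5])
# rate=0.5 drops each element with probability 0.5 and scales
# the survivors by 1 / (1 - rate) = 2
print(tf.nn.dropout(a, rate=0.5).numpy())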

Reading TensorFlow's source code, dropout is essentially implemented as follows (re-created here in NumPy):

# A NumPy re-creation of the same logic
import numpy as np

a = np.array([[1, 2, 3, 4.5]])
keep_prob = 0.5
# One uniform sample in [0, 1) per element
uniform_data = np.random.uniform(0.0, 1.0, 4)
print("uniform_data:", uniform_data)
# floor(keep_prob + u) is 1 with probability keep_prob, otherwise 0
binary_data = np.floor(keep_prob + uniform_data)
print("binary:", binary_data)
# Zero out the dropped elements and scale the survivors by 1/keep_prob
a = (a / keep_prob) * binary_data
print("after dropout:", a)

Output:
uniform_data: [0.67023007 0.35026259 0.66169766 0.25046903]
binary: [1. 0. 1. 0.]
after dropout: [[2. 0. 6. 0.]]
keep_prob must lie in (0, 1] (keep_prob = 0 would divide by zero); the larger it is, the more likely each element of a is kept. The floor trick works because floor(keep_prob + u) equals 1 exactly when the uniform sample u >= 1 - keep_prob, which happens with probability keep_prob. Since the mask comes from a random uniform draw, dropout's output is not deterministic: a different random set of neurons is zeroed on every run.
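Why divide by keep_prob? So that the expected value of each element is unchanged: E[output] = keep_prob * (x / keep_prob) = x. A small sketch that checks this empirically (the helper numpy_dropout is my own wrapper around the logic above):

import numpy as np

def numpy_dropout(x, keep_prob):
    # Bernoulli(keep_prob) mask via the floor trick shown above
    mask = np.floor(keep_prob + np.random.uniform(0.0, 1.0, x.shape))
    # Inverted dropout: scale the survivors so E[output] == x
    return (x / keep_prob) * mask

x = np.array([1.0, 2.0, 3.0, 4.5])
# Averaging over many runs should approximately recover x
mean = np.mean([numpy_dropout(x, 0.5) for _ in range(100000)], axis=0)
print("empirical mean:", mean)  # close to [1. 2. 3. 4.5]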