TF pooling functions: an introduction to tf.nn.max_pool
Reposted from this expert's post: http://blog.csdn.net/mao_xiao_feng/article/details/53453926
Max pooling is the maximum-value pooling operation used in CNNs, and its usage is very similar to convolution.
Some details can be cross-referenced with "[TensorFlow] How does tf.nn.conv2d implement convolution?"
tf.nn.max_pool(value, ksize, strides, padding, name=None)
It takes four main parameters, much like convolution:
The first parameter, value: the input to be pooled. A pooling layer usually follows a convolutional layer, so the input is typically a feature map, still with shape [batch, height, width, channels].
The second parameter, ksize: the size of the pooling window, given as a four-element vector, typically [1, height, width, 1]. We do not want to pool over the batch or channels dimensions, so those two entries are set to 1.
The third parameter, strides: as with convolution, the step the window moves in each dimension, typically [1, stride, stride, 1].
The fourth parameter, padding: as with convolution, either 'VALID' or 'SAME'.
The function returns a Tensor of the same type, still of the form [batch, height, width, channels].
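How ksize, strides, and padding determine the output spatial size follows the standard TensorFlow shape rules. Below is a minimal helper sketch of my own (pooled_size is a hypothetical name, not a TensorFlow function) that reproduces them:

import math

def pooled_size(in_size, window, stride, padding):
    # 'VALID': only windows that fit entirely inside the input
    if padding == 'VALID':
        return (in_size - window) // stride + 1
    # 'SAME': the input is padded so that output size = ceil(input / stride)
    elif padding == 'SAME':
        return math.ceil(in_size / stride)
    raise ValueError("padding must be 'VALID' or 'SAME'")

# e.g. a 4x4 feature map with a 2x2 window:
print(pooled_size(4, 2, 1, 'VALID'))   # 3
print(pooled_size(4, 2, 2, 'VALID'))   # 2
print(pooled_size(8, 2, 2, 'SAME'))    # 4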
Example code:
Suppose we have a two-channel image; the first channel and the second channel are the two 4×4 matrices used to build the constant `a` in the code below.
Use a program to perform max pooling:
import tensorflow as tf

a = tf.constant([
    [[1.0, 2.0, 3.0, 4.0],
     [5.0, 6.0, 7.0, 8.0],
     [8.0, 7.0, 6.0, 5.0],
     [4.0, 3.0, 2.0, 1.0]],
    [[4.0, 3.0, 2.0, 1.0],
     [8.0, 7.0, 6.0, 5.0],
     [1.0, 2.0, 3.0, 4.0],
     [5.0, 6.0, 7.0, 8.0]]
])
a = tf.reshape(a, [1, 4, 4, 2])

pooling = tf.nn.max_pool(a, [1, 2, 2, 1], [1, 1, 1, 1], padding='VALID')
with tf.Session() as sess:
    print("image:")
    image = sess.run(a)
    print(image)
    print("result:")
    result = sess.run(pooling)
    print(result)
Here the stride is 1 and the window size is 2×2. The output is:
image:
[[[[ 1.  2.]
   [ 3.  4.]
   [ 5.  6.]
   [ 7.  8.]]

  [[ 8.  7.]
   [ 6.  5.]
   [ 4.  3.]
   [ 2.  1.]]

  [[ 4.  3.]
   [ 2.  1.]
   [ 8.  7.]
   [ 6.  5.]]

  [[ 1.  2.]
   [ 3.  4.]
   [ 5.  6.]
   [ 7.  8.]]]]
result:
[[[[ 8.  7.]
   [ 6.  6.]
   [ 7.  8.]]

  [[ 8.  7.]
   [ 8.  7.]
   [ 8.  7.]]

  [[ 4.  4.]
   [ 8.  7.]
   [ 8.  8.]]]]
The pooled feature map, worked out by hand, matches the result above, which confirms that the program's output is correct.
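As a quick cross-check (a minimal sketch of my own using NumPy, not part of the original post), the same 2×2-window, stride-1, 'VALID' max pooling can be computed by hand on the reshaped [1, 4, 4, 2] tensor; it reproduces the result printed above:

import numpy as np

# same data and reshape as the TensorFlow code above
a = np.array([
    [[1.0, 2.0, 3.0, 4.0],
     [5.0, 6.0, 7.0, 8.0],
     [8.0, 7.0, 6.0, 5.0],
     [4.0, 3.0, 2.0, 1.0]],
    [[4.0, 3.0, 2.0, 1.0],
     [8.0, 7.0, 6.0, 5.0],
     [1.0, 2.0, 3.0, 4.0],
     [5.0, 6.0, 7.0, 8.0]]
]).reshape(1, 4, 4, 2)

# 'VALID' output size: (4 - 2) // 1 + 1 = 3
out = np.zeros((1, 3, 3, 2))
for i in range(3):
    for j in range(3):
        # per-channel max over the 2x2 window starting at (i, j)
        out[0, i, j, :] = a[0, i:i+2, j:j+2, :].max(axis=(0, 1))
print(out)   # matches the 3x3 per-channel result from tf.nn.max_pool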
We can also change the stride:
pooling=tf.nn.max_pool(a,[1,2,2,1],[1,2,2,1],padding='VALID')
The result then becomes:
result:
[[[[ 8.  7.]
   [ 7.  8.]]

  [[ 4.  4.]
   [ 8.  8.]]]]
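The shape also checks out: with 'VALID' padding, a 2×2 window, and stride 2 on a 4×4 map, the output spatial size is (4 - 2)/2 + 1 = 2, so each channel is pooled down to a 2×2 map, as printed above.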
Below is the test code I wrote myself, together with the results:
import tensorflow as tf

# def max_pool(value, ksize, strides, padding, data_format="NHWC", name=None)

# an 8x8, single-channel, all-ones 4-D tensor
value = tf.ones([1, 8, 8, 1], dtype=tf.float32)
oplist = []

ksize = [1, 2, 2, 1]
strides = [1, 1, 1, 1]
reth = tf.nn.max_pool(value, ksize, strides, padding='VALID')
oplist.append([reth, 'case 1'])

ksize = [1, 4, 4, 1]
strides = [1, 1, 1, 1]
reth = tf.nn.max_pool(value, ksize, strides, padding='VALID')
oplist.append([reth, 'case 2'])

ksize = [1, 6, 6, 1]
strides = [1, 1, 1, 1]
reth = tf.nn.max_pool(value, ksize, strides, padding='VALID')
oplist.append([reth, 'case 3'])

ksize = [1, 2, 2, 1]
strides = [1, 2, 2, 1]
reth = tf.nn.max_pool(value, ksize, strides, padding='VALID')
oplist.append([reth, 'case 4'])

ksize = [1, 2, 2, 1]
strides = [1, 2, 2, 1]
reth = tf.nn.max_pool(value, ksize, strides, padding='SAME')
oplist.append([reth, 'case 5'])

with tf.Session() as a_sess:
    a_sess.run(tf.global_variables_initializer())
    for aop in oplist:
        print("----------{}---------".format(aop[1]))
        print("shape =", aop[0].shape)
        print("content=", a_sess.run(aop[0]))
        print('---------------------\n\n')
The output is:
C:\Users\Administrator\Anaconda3\python.exe C:/Users/Administrator/PycharmProjects/p3test/tf_maxpool.py
2017-05-10 16:43:25.690336: W c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE instructions, but these are available on your machine and could speed up CPU computations.
2017-05-10 16:43:25.691336: W c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE2 instructions, but these are available on your machine and could speed up CPU computations.
2017-05-10 16:43:25.691336: W c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
2017-05-10 16:43:25.692336: W c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-05-10 16:43:25.692336: W c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-05-10 16:43:25.692336: W c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
----------case 1---------
shape = (1, 7, 7, 1)
content= [[[[ 1.] [ 1.] [ 1.] [ 1.] [ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.] [ 1.] [ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.] [ 1.] [ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.] [ 1.] [ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.] [ 1.] [ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.] [ 1.] [ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.] [ 1.] [ 1.] [ 1.] [ 1.]]]]
---------------------

----------case 2---------
shape = (1, 5, 5, 1)
content= [[[[ 1.] [ 1.] [ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.] [ 1.] [ 1.]]]]
---------------------

----------case 3---------
shape = (1, 3, 3, 1)
content= [[[[ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.]]]]
---------------------

----------case 4---------
shape = (1, 4, 4, 1)
content= [[[[ 1.] [ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.] [ 1.]]]]
---------------------

----------case 5---------
shape = (1, 4, 4, 1)
content= [[[[ 1.] [ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.] [ 1.]]
  [[ 1.] [ 1.] [ 1.] [ 1.]]]]
---------------------

Process finished with exit code 0
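As a sanity check, each case's output shape matches the shape rules sketched earlier (assuming the standard 'VALID'/'SAME' formulas):
case 1: (8 - 2)/1 + 1 = 7 → (1, 7, 7, 1)
case 2: (8 - 4)/1 + 1 = 5 → (1, 5, 5, 1)
case 3: (8 - 6)/1 + 1 = 3 → (1, 3, 3, 1)
case 4: (8 - 2)/2 + 1 = 4 → (1, 4, 4, 1)
case 5: ceil(8 / 2) = 4 → (1, 4, 4, 1)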
Corrections are welcome if anything here is wrong.