Padding can be confusing at times, so here is a summary.

 

Padding in convolution

With SAME, the principle is to keep the output the same size as the input, but this only holds when the stride is 1.

To summarize:

n denotes the input size

f denotes the kernel size

s denotes the stride

p denotes the padding size

out denotes the output size

p = (f - 1) / 2

out = (n + 2p - f + 1) / s, rounded up

Equivalently, out = (n + 2p - f) / s + 1, rounded down

Substituting 2p = f - 1 into the second form: out = (n + 2p - f) / s + 1 = (n + f - 1 - f) / s + 1 = (n - 1) / s + 1, rounded down, which is hard to remember

Substituting into the first form instead: out = (n + 2p - f + 1) / s = (n + f - 1 - f + 1) / s = n / s, rounded up
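
A quick check with the numbers used in the code example further down:

n = 5, f = 3, s = 1: p = (3 - 1) / 2 = 1, out = (5 + 2*1 - 3 + 1) / 1 = 5, i.e. 5 / 1 = 5

n = 5, f = 3, s = 2: out = (5 + 2*1 - 3 + 1) / 2 = 2.5, rounded up to 3, i.e. 5 / 2 rounded up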

With VALID:

VALID has a single rule: no padding. The kernel simply slides over the input, and whatever fits is what you get.

So the formula is simple: out = (n - f + 1) / s, rounded up
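
To make the two formulas concrete, here is a minimal sketch of a helper (conv_out_size is a hypothetical name, not a TF API) that computes the output size along one spatial dimension; the printed values match the TF example further down.

import math

def conv_out_size(n, f, s, padding):
    # Output size along one spatial dimension, following the formulas above.
    if padding == 'SAME':
        return math.ceil(n / s)            # n / s, rounded up
    if padding == 'VALID':
        return math.ceil((n - f + 1) / s)  # (n - f + 1) / s, rounded up
    raise ValueError(padding)

print(conv_out_size(5, 3, 1, 'SAME'))   # 5
print(conv_out_size(5, 3, 2, 'SAME'))   # 3
print(conv_out_size(5, 3, 1, 'VALID'))  # 3
print(conv_out_size(5, 3, 2, 'VALID'))  # 2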

 

Padding in pooling

With SAME, padding is also applied, but not in order to keep the size unchanged. Instead, when the pooling window slides to the boundary, if the remaining region at the boundary is smaller than the window, it is padded up to the window size; otherwise nothing is padded.

With VALID, no padding is applied.

The output-size formulas are the same as for convolution: SAME gives n / s rounded up, VALID gives (n - f + 1) / s rounded up.
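
The "pad only when the window hangs over the edge" rule can also be written as a formula. Below is a minimal sketch (same_pad_total is a hypothetical name, not a TF API), assuming TF's usual behavior of deriving the total padding from the target output size and putting the extra pixel, if any, on the bottom/right side.

import math

def same_pad_total(n, f, s):
    # Total SAME padding along one dimension: just enough so that out = ceil(n / s).
    out = math.ceil(n / s)
    return max((out - 1) * s + f - n, 0)

print(same_pad_total(5, 3, 1))  # 2 -> one row/column on each side
print(same_pad_total(5, 3, 2))  # 2 -> one row/column on each side
print(same_pad_total(5, 2, 1))  # 1 -> nothing on top/left, one on bottom/right
print(same_pad_total(6, 3, 2))  # 1 -> only the last window needs padding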

 

A couple of extra notes:

Kernel sizes are best kept odd, which makes it easier to place the kernel's anchor (center) point.

The pooling stride is best kept greater than 1, otherwise pooling does not actually downsample.

 

Example

import numpy as np
import tensorflow as tf

x = tf.Variable(np.random.randint(1, 3, size=(1, 5, 5, 3)), dtype=tf.float32)


##### Convolution
w = tf.Variable(np.random.uniform(0, 1, size=(3, 3, 3, 4)), dtype=tf.float32)
w2 = tf.Variable(np.random.uniform(0, 1, size=(2, 2, 3, 4)), dtype=tf.float32)  # even kernel size
w3 = tf.Variable(np.random.uniform(0, 1, size=(4, 4, 3, 4)), dtype=tf.float32)  # even kernel size (defined but not used below)

### padding = SAME
# stride = 1
y1 = tf.nn.conv2d(x, w, strides=(1, 1, 1, 1), padding='SAME')     # p=(3-1)/2=1 (5+2*1-3+1)/1=5
print(y1.shape)     # (1, 5, 5, 4)
y12 = tf.nn.conv2d(x, w2, strides=(1, 1, 1, 1), padding='SAME')   # p=(2-1)/2=0.5, not rounded; total padding of 1, split asymmetrically. (5+2*0.5-2+1)/1=5
print(y12.shape)     # (1, 5, 5, 4)
# stride != 1
y2 = tf.nn.conv2d(x, w, strides=(1, 2, 2, 1), padding='SAME')     # p as above; (5+2*1-3+1)/2=2.5, rounded up to 3
print(y2.shape)     # (1, 3, 3, 4)
y22 = tf.nn.conv2d(x, w2, strides=(1, 2, 2, 1), padding='SAME')   # p as above; (5+2*0.5-2+1)/2=2.5, rounded up to 3
print(y22.shape)     # (1, 3, 3, 4)

### padding = VALID
# stride = 1
y3 = tf.nn.conv2d(x, w, strides=(1, 1, 1, 1), padding='VALID')      # (5-3+1)/1=3
print(y3.shape)     # (1, 3, 3, 4)
y32 = tf.nn.conv2d(x, w2, strides=(1, 1, 1, 1), padding='VALID')      # (5-2+1)/1=4
print(y32.shape)     # (1, 4, 4, 4)
# stride != 1
y4 = tf.nn.conv2d(x, w, strides=(1, 2, 2, 1), padding='VALID')      # (5-3+1)/2=1.5, rounded up to 2
print(y4.shape)     # (1, 2, 2, 4)
y42 = tf.nn.conv2d(x, w2, strides=(1, 2, 2, 1), padding='VALID')      # (5-2+1)/2=2; the last row/column of the input is never covered
print(y42.shape)     # (1, 2, 2, 4)


##### Pooling
w = [1, 3, 3, 1]  # pooling window (ksize): 3x3 over height and width

### padding = SAME
# stride = 1
y5 = tf.nn.max_pool(x, w, strides=(1, 1, 1, 1), padding='SAME')    # 5/1=5
print(y5.shape)     # (1, 5, 5, 3)
# stride != 1
y6 = tf.nn.max_pool(x, w, strides=(1, 2, 2, 1), padding='SAME')     # 5/2=2.5, rounded up to 3
print(y6.shape)     # (1, 3, 3, 3)

### padding = VALID
# stride = 1
y7 = tf.nn.max_pool(x, w, strides=(1, 1, 1, 1), padding='VALID')    # (5-3+1)/1=3
print(y7.shape)     # (1, 3, 3, 3)
# stride != 1
y8 = tf.nn.max_pool(x, w, strides=(1, 2, 2, 1), padding='VALID')    # (5-3+1)/2=1.5, rounded up to 2
print(y8.shape)     # (1, 2, 2, 3)
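
Newer TF releases (roughly 1.12+ and all of 2.x) also let tf.nn.conv2d take an explicit padding list instead of 'SAME'/'VALID', which makes the asymmetric padding of the even-sized kernel case visible. A minimal sketch, reusing x and w2 from above and assuming NHWC layout; the pad split used by 'SAME' (extra row/column on the bottom/right) is TF's convention, so treat this as illustrative.

y_explicit = tf.nn.conv2d(x, w2, strides=(1, 1, 1, 1),
                          padding=[[0, 0], [0, 1], [0, 1], [0, 0]])  # pad 1 row/column on bottom/right only
print(y_explicit.shape)  # (1, 5, 5, 4) -- same shape as y12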


References:

https://www.cnblogs.com/CK85/p/10287142.html  (the most straightforward)

https://zhuanlan.zhihu.com/p/77471866 

https://cloud.tencent.com/developer/article/1080155

https://zhuanlan.zhihu.com/p/74159232

https://zhuanlan.zhihu.com/p/46744988