SAME padding in PyTorch

Implement "same" padding for convolution operations

This mimics TensorFlow's SAME padding (written here as a functional interface, so that nn.Conv2d can simply call into F.conv2d_same_padding):

import torch.nn.functional as F

def conv2d_same_padding(input, weight, bias=None, stride=1, dilation=1, groups=1):
  # normalize int arguments to (h, w) pairs
  if isinstance(stride, int):
    stride = (stride, stride)
  if isinstance(dilation, int):
    dilation = (dilation, dilation)

  input_rows = input.size(2)
  filter_rows = weight.size(2)
  effective_filter_size_rows = (filter_rows - 1) * dilation[0] + 1
  out_rows = (input_rows + stride[0] - 1) // stride[0]
  padding_rows = max(0, (out_rows - 1) * stride[0] +
                        effective_filter_size_rows - input_rows)
  rows_odd = (padding_rows % 2 != 0)

  # same computation for the column dimension
  input_cols = input.size(3)
  filter_cols = weight.size(3)
  effective_filter_size_cols = (filter_cols - 1) * dilation[1] + 1
  out_cols = (input_cols + stride[1] - 1) // stride[1]
  padding_cols = max(0, (out_cols - 1) * stride[1] +
                        effective_filter_size_cols - input_cols)
  cols_odd = (padding_cols % 2 != 0)

  # if the total padding is odd, pad the extra pixel on the
  # bottom/right, matching TensorFlow's SAME behavior
  if rows_odd or cols_odd:
    input = F.pad(input, [0, int(cols_odd), 0, int(rows_odd)])

  return F.conv2d(input, weight, bias, stride,
                  padding=(padding_rows // 2, padding_cols // 2),
                  dilation=dilation, groups=groups)
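As a quick sanity check of the arithmetic above (a sketch of my own, not part of the original post): SAME padding guarantees an output size of ceil(input / stride), and the total padding is whatever makes that work out; when the total is odd, the extra pixel goes on the bottom/right. The helper below (a hypothetical name, for illustration only) computes this for one dimension in plain Python:

```python
def same_padding_1d(input_size, filter_size, stride=1, dilation=1):
    # effective kernel extent after dilation
    effective = (filter_size - 1) * dilation + 1
    # SAME guarantees out = ceil(input / stride)
    out = (input_size + stride - 1) // stride
    # total padding needed so the conv yields exactly `out` positions
    total = max(0, (out - 1) * stride + effective - input_size)
    # odd totals put the extra pixel on the after (bottom/right) side
    before, after = total // 2, total - total // 2
    return out, before, after

print(same_padding_1d(7, 3, stride=2))  # (4, 1, 1)
print(same_padding_1d(7, 3, stride=1))  # (7, 1, 1)
print(same_padding_1d(8, 3, stride=2))  # (4, 0, 1)  -- odd total, extra on the right
```

For example, with input 7, kernel 3, stride 2: out = ceil(7/2) = 4, total padding = (4-1)*2 + 3 - 7 = 2, so 1 pixel on each side. This is exactly the per-dimension computation the function above does for rows and columns.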

It was mostly copy-pasted from TensorFlow code in here and here.

"As you can see, there are a lot of hidden things going on there, and that's why it might not be worth adding a padding='same'. That said, I think not replicating TensorFlow's SAME behavior is not ideal either."

 

This article is based on:

Francisco Massa: Implement "same" padding for convolution operations?

Thanks!!!

 

posted @ 2018-04-25 22:10  寒杰士