On the different ways to call a function in PyTorch, with a look at the source
I have been using `softmax` recently and noticed that it can be written in several different ways; recording them here:

```python
# torch._C._VariableFunctions
torch.softmax(x, dim=-1)

# class
softmax = torch.nn.Softmax(dim=-1)
x = softmax(x)

# function
x = torch.nn.functional.softmax(x, dim=-1)
```
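All three variants compute the same result; here is a quick sanity check I added (the input shape is an arbitrary choice of mine):

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 3)

a = torch.softmax(x, dim=-1)        # built-in binding from torch._C._VariableFunctions
b = torch.nn.Softmax(dim=-1)(x)     # class (module) form
c = F.softmax(x, dim=-1)            # functional form

print(torch.equal(a, b), torch.equal(b, c))  # expect: True True
```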
In a quick test, the `torch.nn.Softmax` class was the slowest; the other two were about the same.
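For reference, a timing sketch along these lines (the tensor shape and iteration count are my own choices, not the original test setup; on CPU the gap mostly reflects the extra Python call overhead of the module):

```python
import timeit
import torch
import torch.nn.functional as F

x = torch.randn(64, 1000)            # arbitrary test shape
m = torch.nn.Softmax(dim=-1)         # construct the module once, outside the timed loop

print(timeit.timeit(lambda: torch.softmax(x, dim=-1), number=10_000))
print(timeit.timeit(lambda: m(x), number=10_000))
print(timeit.timeit(lambda: F.softmax(x, dim=-1), number=10_000))
```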
The source of `torch.nn.Softmax` is shown below. It is a class, and its `return F.softmax(input, self.dim, _stacklevel=5)` simply delegates to `torch.nn.functional.softmax`:
```python
class Softmax(Module):
    r"""Applies the Softmax function to an n-dimensional input Tensor
    rescaling them so that the elements of the n-dimensional output Tensor
    lie in the range [0,1] and sum to 1.

    Softmax is defined as:

    .. math::
        \text{Softmax}(x_{i}) = \frac{\exp(x_i)}{\sum_j \exp(x_j)}

    When the input Tensor is a sparse tensor then the unspecified
    values are treated as ``-inf``.

    Shape:
        - Input: :math:`(*)` where `*` means, any number of additional
          dimensions
        - Output: :math:`(*)`, same shape as the input

    Returns:
        a Tensor of the same dimension and shape as the input with
        values in the range [0, 1]

    Args:
        dim (int): A dimension along which Softmax will be computed (so every slice
            along dim will sum to 1).

    .. note::
        This module doesn't work directly with NLLLoss,
        which expects the Log to be computed between the Softmax and itself.
        Use `LogSoftmax` instead (it's faster and has better numerical properties).

    Examples::

        >>> m = nn.Softmax(dim=1)
        >>> input = torch.randn(2, 3)
        >>> output = m(input)
    """
    __constants__ = ['dim']
    dim: Optional[int]

    def __init__(self, dim: Optional[int] = None) -> None:
        super(Softmax, self).__init__()
        self.dim = dim

    def __setstate__(self, state):
        self.__dict__.update(state)
        if not hasattr(self, 'dim'):
            self.dim = None

    def forward(self, input: Tensor) -> Tensor:
        return F.softmax(input, self.dim, _stacklevel=5)

    def extra_repr(self) -> str:
        return 'dim={dim}'.format(dim=self.dim)
```
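Even though the module only forwards to `F.softmax`, the class form is still useful because it can be dropped into containers such as `nn.Sequential`, where a plain function call cannot be placed directly. A minimal sketch of my own:

```python
import torch
import torch.nn as nn

# The module form composes with nn.Sequential; the functional form would have
# to live inside a custom forward() instead.
model = nn.Sequential(
    nn.Linear(8, 4),
    nn.Softmax(dim=-1),
)

out = model(torch.randn(2, 8))
print(out.sum(dim=-1))  # each row sums to 1
```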
The source of `torch.nn.functional.softmax` is shown below. The line `ret = input.softmax(dim)` in effect calls the `softmax` function from `torch._C._VariableFunctions`:
```python
def softmax(input: Tensor, dim: Optional[int] = None, _stacklevel: int = 3, dtype: Optional[DType] = None) -> Tensor:
    r"""Applies a softmax function.

    Softmax is defined as:

    :math:`\text{Softmax}(x_{i}) = \frac{\exp(x_i)}{\sum_j \exp(x_j)}`

    It is applied to all slices along dim, and will re-scale them so that the elements
    lie in the range `[0, 1]` and sum to 1.

    See :class:`~torch.nn.Softmax` for more details.

    Args:
        input (Tensor): input
        dim (int): A dimension along which softmax will be computed.
        dtype (:class:`torch.dtype`, optional): the desired data type of returned tensor.
          If specified, the input tensor is casted to :attr:`dtype` before the operation
          is performed. This is useful for preventing data type overflows. Default: None.

    .. note::
        This function doesn't work directly with NLLLoss,
        which expects the Log to be computed between the Softmax and itself.
        Use log_softmax instead (it's faster and has better numerical properties).

    """
    if has_torch_function_unary(input):
        return handle_torch_function(softmax, (input,), input, dim=dim, _stacklevel=_stacklevel, dtype=dtype)
    if dim is None:
        dim = _get_softmax_dim("softmax", input.dim(), _stacklevel)
    if dtype is None:
        ret = input.softmax(dim)
    else:
        ret = input.softmax(dim, dtype=dtype)
    return ret
```
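To poke at the dispatch yourself, here is a small check (the identity comparison below assumes `torch/__init__.py` re-exports the binding from `torch._C._VariableFunctions` unchanged, which may vary across releases):

```python
import torch

x = torch.randn(2, 3)

# Tensor.softmax(dim) and torch.softmax(x, dim) should produce identical
# results, since both end up in the same built-in implementation.
print(torch.equal(x.softmax(dim=-1), torch.softmax(x, dim=-1)))  # expect True

# Is torch.softmax literally the object exported from _VariableFunctions?
print(torch.softmax is getattr(torch._C._VariableFunctions, "softmax", None))
```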
So why not just call the built-in C function directly? However, the blog post A selective excursion into the internals of PyTorch says:

> Note: That `bilinear` is exported as `torch.bilinear` is somewhat accidental. Do use the documented interfaces, here `torch.nn.functional.bilinear`, whenever you can!

In other words, the fact that the built-in C functions are reachable as `torch.xxx` is more or less accidental, and the documented `torch.nn.functional.xxx` interfaces are strongly recommended.
The latest official Transformer code also uses `torch.nn.functional.softmax`, so staying consistent with it seems like the better choice (even though they used the class form before...).
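As a usage illustration of that functional call (a minimal sketch of my own, not the official Transformer implementation), attention weights are a typical place where `F.softmax` shows up:

```python
import torch
import torch.nn.functional as F

q = torch.randn(2, 4, 8)                       # (batch, seq, head_dim), arbitrary sizes
k = torch.randn(2, 4, 8)

scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
attn = F.softmax(scores, dim=-1)               # normalize over the key dimension

print(attn.sum(dim=-1))                        # each row sums to 1
```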