02 Add & Norm (Residual and Normalization) in the Transformer: Code Implementation

[Figure: the Add & Norm block of the Transformer encoder]

First, there is a norm function.

Inside this norm step the residual connection is applied: it takes x and the light-pink z1 (the sub-layer output, i.e. the residual branch) as inputs, and outputs a single value, the purple-pink z1. In other words, the purple-pink z1 = LayerNorm(x + light-pink z1).

Normalization

\[y = \frac{x - E(x)}{\sqrt{\mathrm{Var}(x) + \epsilon}} \cdot \gamma + \beta \]

\(E(x)\): the mean of x;

\(\mathrm{Var}(x)\): the variance of x;

\(\epsilon\): a small constant added to the variance so the denominator can never be zero;

\(\gamma\) and \(\beta\): learnable parameters; both are updated as training proceeds.
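
To make the formula concrete, here is a minimal numeric sketch (not part of the original post) that evaluates it by hand over the last dimension and checks the result against PyTorch's built-in nn.LayerNorm; the tensor shape and eps value are chosen purely for illustration.

import torch

x = torch.randn(2, 4, 8)      # (batch, seq_len, d_model) -- illustrative shape
eps = 1e-5
gamma = torch.ones(8)         # gamma, initialised to ones (learnable in practice)
beta = torch.zeros(8)         # beta, initialised to zeros (learnable in practice)

mean = x.mean(-1, keepdim=True)                  # E(x) over the feature dimension
var = x.var(-1, unbiased=False, keepdim=True)    # Var(x), biased, as nn.LayerNorm uses
y_manual = (x - mean) / torch.sqrt(var + eps) * gamma + beta

y_builtin = torch.nn.LayerNorm(8, eps=eps)(x)    # same eps, default affine init
print(torch.allclose(y_manual, y_builtin, atol=1e-6))   # True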

import torch
import torch.nn as nn


class LayerNorm(nn.Module):

    def __init__(self, feature, eps=1e-6):
        """
        :param feature: size of the last dimension of the self-attention input x (i.e. d_model)
        :param eps: small constant added to the denominator to avoid division by zero
        """
        super(LayerNorm, self).__init__()
        self.a_2 = nn.Parameter(torch.ones(feature))   # gamma: learnable scale
        self.b_2 = nn.Parameter(torch.zeros(feature))  # beta: learnable shift
        self.eps = eps

    def forward(self, x):
        mean = x.mean(-1, keepdim=True)   # E(x) over the feature dimension
        std = x.std(-1, keepdim=True)     # standard deviation over the feature dimension
        return self.a_2 * (x - mean) / (std + self.eps) + self.b_2
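
A quick usage sketch (not from the original post); the shapes are illustrative. Note that this class divides by (std + eps), whereas the formula above adds ε inside the square root; the difference is negligible for small ε, and the same simplification appears in the widely used Annotated Transformer implementation.

layer_norm = LayerNorm(feature=512)   # feature = d_model
x = torch.randn(2, 10, 512)           # (batch, seq_len, d_model) -- illustrative shape
out = layer_norm(x)                   # per-position normalization over the last dimension
print(out.shape)                      # torch.Size([2, 10, 512])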

Residual + Normalization

class SublayerConnection(nn.Module):
    """
    This does more than the residual connection alone: it combines the residual and the LayerNorm.
    """
    def __init__(self, size, dropout=0.1):
        super(SublayerConnection, self).__init__()
        # Step 1: the LayerNorm defined above
        self.layer_norm = LayerNorm(size)
        # Step 2: dropout for regularization
        self.dropout = nn.Dropout(p=dropout)

    def forward(self, x, sublayer):
        """
        :param x: the input to self-attention
        :param sublayer: the self-attention layer (or any sub-layer) as a callable
        :return: dropout(LayerNorm(x + sublayer(x)))
        """
        # Add the residual, normalize the sum, then apply dropout
        return self.dropout(self.layer_norm(x + sublayer(x)))
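
A usage sketch (not in the original post) showing how the block is wired. In the real encoder the sublayer argument would be the self-attention or feed-forward module; an nn.Linear stands in here purely as a placeholder.

d_model = 512
block = SublayerConnection(size=d_model, dropout=0.1)
fake_sublayer = nn.Linear(d_model, d_model)   # placeholder for self-attention
x = torch.randn(2, 10, d_model)
out = block(x, fake_sublayer)                 # dropout(LayerNorm(x + sublayer(x)))
print(out.shape)                              # torch.Size([2, 10, 512])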