
PyTorch API

  1. PyTorch: loss functions for multi-label classification
import torch
import numpy as np

pred = np.array([[-0.4089, -1.2471, 0.5907],
                [-0.4897, -0.8267, -0.7349],
                [0.5241, -0.1246, -0.4751]])
label = np.array([[0, 1, 1],
                  [0, 0, 1],
                  [1, 0, 1]])

pred = torch.from_numpy(pred).float()
label = torch.from_numpy(label).float()

## Compute the loss directly from the raw logits with BCEWithLogitsLoss (recommended)
criterion1 = torch.nn.BCEWithLogitsLoss()
loss1 = criterion1(pred, label)
print(loss1)

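## MultiLabelSoftMarginLoss applies the same sigmoid + binary cross-entropy,
## averaged per class, so it returns the same value as BCEWithLogitsLoss here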
criterion2 = torch.nn.MultiLabelSoftMarginLoss()
loss2 = criterion2(pred, label)
print(loss2)

## Apply sigmoid first, then compute the loss on the probabilities with BCELoss
criterion3 = torch.nn.BCELoss()
loss3 = criterion3(torch.sigmoid(pred), label)
print(loss3)
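All three criteria print the same value. As a quick sanity check, the underlying binary cross-entropy can be written out by hand; this is a minimal sketch that continues from the snippet above (MultiLabelSoftMarginLoss averages the same quantity over classes, so it lands on the same number):

## Manual check: BCEWithLogitsLoss is the mean over all elements of
## -[y * log(sigmoid(x)) + (1 - y) * log(1 - sigmoid(x))]
p = torch.sigmoid(pred)
manual = -(label * torch.log(p) + (1 - label) * torch.log(1 - p))
print(manual.mean())  # same value as loss1, loss2 and loss3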
  2. Minimal tutorial on packing (pack_padded_sequence) and unpacking (pad_packed_sequence) sequences in pytorch
  • A tutorial on using pack_padded_sequence and pad_packed_sequence together with an LSTM.
  • Use case: when the variable-length sequences in a batch are padded to a common length, shorter sentences end in pad tokens, e.g. "HAPPY'pad''pad''pad'".
    We don't actually want those trailing pads to pass through the LSTM, since they degrade model accuracy, so pack_padded_sequence can be used to compress the padded batch (see the sketch after this list).
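A minimal sketch of that workflow (the batch size, sequence lengths, and LSTM dimensions below are illustrative assumptions, not taken from the tutorial):

import torch
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

# A batch of 3 sequences padded to length 5; the true lengths are 5, 3 and 2
batch = torch.randn(3, 5, 10)          # (batch, seq_len, feature)
lengths = torch.tensor([5, 3, 2])      # must be sorted descending when enforce_sorted=True

lstm = torch.nn.LSTM(input_size=10, hidden_size=16, batch_first=True)

# Pack so the LSTM skips the pad positions entirely
packed = pack_padded_sequence(batch, lengths, batch_first=True, enforce_sorted=True)
packed_out, (h_n, c_n) = lstm(packed)

# Unpack back to a padded tensor for downstream layers
out, out_lengths = pad_packed_sequence(packed_out, batch_first=True)
print(out.shape)       # torch.Size([3, 5, 16])
print(out_lengths)     # tensor([5, 3, 2])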