Computing Cross-Entropy
The cross-entropy formula:
$H(p, q)=-\sum_{x} p(x) \log q(x)$
where:
$p$ is the true distribution;
$q$ is the fitted (predicted) distribution.
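To make the formula concrete, here is a tiny hand computation (the distributions below are made-up values for illustration, not from the PyTorch examples): with a one-hot true distribution, cross-entropy reduces to the negative log-probability the model assigns to the correct class.

import math

# Hypothetical example: p is a one-hot true distribution, q the model's guess
p = [1.0, 0.0, 0.0]
q = [0.7, 0.2, 0.1]

# H(p, q) = -sum_x p(x) * log q(x); terms with p(x) = 0 contribute nothing
H = -sum(px * math.log(qx) for px, qx in zip(p, q) if px > 0)
print(H)   # ~0.3567, i.e. -log(0.7)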
Code:
import torch
import torch.nn as nn

# Example of target with class indices
loss = nn.CrossEntropyLoss()
input = torch.randn(3, 5, requires_grad=True)          # logits for 3 samples, 5 classes
target = torch.empty(3, dtype=torch.long).random_(5)   # one class index per sample
output = loss(input, target)
output.backward()

# Example of target with class probabilities
input = torch.randn(3, 5, requires_grad=True)
target = torch.randn(3, 5).softmax(dim=1)              # each row sums to 1: a probability distribution
output = loss(input, target)
output.backward()
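For the class-index form, what CrossEntropyLoss computes can be checked by hand. The sketch below is an illustrative addition (not part of the original example): it picks out the negative log-softmax score of the target class for each sample and averages, which should match the loss under the default reduction="mean".

import torch
import torch.nn as nn

# Sanity check: with class-index targets, CrossEntropyLoss is
# log_softmax followed by negative log-likelihood.
loss = nn.CrossEntropyLoss()   # default reduction="mean"
input = torch.randn(3, 5, requires_grad=True)
target = torch.empty(3, dtype=torch.long).random_(5)

log_q = input.log_softmax(dim=-1)
manual = -log_q[torch.arange(3), target].mean()
print(torch.allclose(loss(input, target), manual))   # True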
Code:
import torch
import torch.nn as nn

loss = nn.CrossEntropyLoss(reduction="none")   # keep the per-sample losses
input = torch.randn(3, 5, requires_grad=True)
target = torch.randn(3, 5).softmax(dim=1)
print(input)
print(target)
output = loss(input, target)
print(output)

# Manual computation: H(p, q) = -sum_x p(x) * log softmax(input)(x)
result = -target * input.log_softmax(dim=-1)
print(result.sum(dim=-1))   # should match the loss above
Output:
tensor([[-0.5266,  0.7340,  0.3893,  1.0589, -0.1002],
        [-0.9927,  0.1257,  0.9907, -2.5100,  1.2567],
        [-1.2552,  1.3865, -0.8273,  0.2558,  1.1324]], requires_grad=True)
tensor([[0.0480, 0.0481, 0.1544, 0.0197, 0.7299],
        [0.0234, 0.0515, 0.3599, 0.2912, 0.2741],
        [0.0902, 0.1992, 0.1011, 0.4815, 0.1281]])
tensor([2.0538, 2.0998, 1.8626], grad_fn=<SumBackward1>)
tensor([2.0538, 2.0998, 1.8626], grad_fn=<NegBackward0>)
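The last two tensors agree, confirming that for probability targets CrossEntropyLoss computes exactly $-\sum_{x} p(x) \log \operatorname{softmax}(input)(x)$ per sample. As one further illustrative check (my addition, assuming a PyTorch version that accepts float probability targets, i.e. 1.10 or later), a one-hot probability target reproduces the class-index form:

import torch
import torch.nn as nn
import torch.nn.functional as F

loss = nn.CrossEntropyLoss(reduction="none")
input = torch.randn(3, 5)
target_idx = torch.tensor([0, 2, 4])                          # class-index targets
target_onehot = F.one_hot(target_idx, num_classes=5).float()  # same targets as probabilities

print(loss(input, target_idx))      # class-index form
print(loss(input, target_onehot))   # one-hot probability form: identical values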