Introduction to nn.MarginRankingLoss


While reproducing the code of a paper, I ran into the MarginRankingLoss() function; below is what I found by searching Baidu:

Ranking loss function

For a batch of N samples D(x1, x2, y), x1 and x2 are the two inputs to be ranked, and y is the ground-truth label, taking values in {1, −1}. When y = 1, x1 should be ranked ahead of x2; when y = −1, x1 should be ranked behind x2.

The loss for the n-th sample is computed as:

$l_n = \max(0,\; -y \cdot (x_1 - x_2) + \text{margin})$

If x1 and x2 are ranked in the correct order and y · (x1 − x2) ≥ margin, the loss is 0.
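
As a quick sanity check of this formula, here is a minimal worked example; the scalar values are made up for illustration:

import torch
import torch.nn.functional as F

x1 = torch.tensor([0.8])
x2 = torch.tensor([0.3])

# y = 1: x1 is already ranked ahead of x2, so max(0, -1*(0.8-0.3)) = 0
y = torch.tensor([1.0])
print(F.margin_ranking_loss(x1, x2, y, margin=0.0))  # tensor(0.)

# y = -1: x1 should be ranked behind x2 but is not, so max(0, 1*(0.8-0.3)) = 0.5
y = torch.tensor([-1.0])
print(F.margin_ranking_loss(x1, x2, y, margin=0.0))  # tensor(0.5000)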

class MarginRankingLoss(_Loss):
    __constants__ = ['margin', 'reduction']

    def __init__(self, margin=0., size_average=None, reduce=None, reduction='mean'):
        super(MarginRankingLoss, self).__init__(size_average, reduce, reduction)
        self.margin = margin

    def forward(self, input1, input2, target):
        return F.margin_ranking_loss(input1, input2, target,
                                     margin=self.margin, reduction=self.reduction)

PyTorch implements this as the torch.nn.MarginRankingLoss class; you can also call the F.margin_ranking_loss function directly. The size_average and reduce arguments in the code above are deprecated. reduction takes one of three values (mean, sum, or none), each yielding a different return value ℓ(x, y). The default is mean, which corresponds to the per-sample loss above averaged over the batch:

$L = \{l_1, \dots, l_N\}$

$\ell(x, y) = \begin{cases} L, & \text{if reduction = 'none'} \\ \frac{1}{N}\sum_{i=1}^{N} l_i, & \text{if reduction = 'mean'} \\ \sum_{i=1}^{N} l_i, & \text{if reduction = 'sum'} \end{cases}$

margin defaults to 0.
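
The relationship between the three reduction modes, and the equivalence of the module and functional forms, can be checked directly. A small sketch (the inputs are random and only for illustration):

import torch
import torch.nn as nn
import torch.nn.functional as F

input1 = torch.randn(4)
input2 = torch.randn(4)
target = torch.tensor([1., -1., 1., -1.])

# reduction='none' returns the vector of per-sample losses l_1..l_N
per_sample = nn.MarginRankingLoss(reduction='none')(input1, input2, target)
# 'mean' and 'sum' are the mean/sum of that vector
mean_loss = nn.MarginRankingLoss(reduction='mean')(input1, input2, target)
sum_loss = nn.MarginRankingLoss(reduction='sum')(input1, input2, target)
print(torch.allclose(per_sample.mean(), mean_loss))  # True
print(torch.allclose(per_sample.sum(), sum_loss))    # True

# The module form just wraps the functional form
print(torch.allclose(mean_loss, F.margin_ranking_loss(input1, input2, target)))  # True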

Example:

import torch
import torch.nn.functional as F
import torch.nn as nn

def validate_MarginRankingLoss(input1, input2, target, margin):
    # Hand-written check: average max(0, -y * (x1 - x2) + margin) over the batch
    val = 0
    for x1, x2, y in zip(input1, input2, target):
        loss_val = max(0, -y * (x1 - x2) + margin)
        val += loss_val
    return val / input1.nelement()

torch.manual_seed(10)
margin = 0
loss = nn.MarginRankingLoss()
input1 = torch.randn([3], requires_grad=True)
input2 = torch.randn([3], requires_grad=True)
target = torch.tensor([1, -1, -1])
print(target)

# Built-in loss (default reduction='mean') and the hand-written version agree
output = loss(input1, input2, target)
print(output.item())
output = validate_MarginRankingLoss(input1, input2, target, margin)
print(output.item())

# reduction='none' returns the per-sample losses instead of averaging
loss = nn.MarginRankingLoss(reduction="none")
output = loss(input1, input2, target)
print(output)
'''
tensor([ 1, -1, -1])
0.015400052070617676
0.015400052070617676
tensor([0.0000, 0.0000, 0.0462], grad_fn=<ClampMinBackward>)
'''
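
To see the effect of margin, the same inputs can be re-run with a nonzero value; margin=0.5 below is an arbitrary choice for illustration:

import torch
import torch.nn as nn

torch.manual_seed(10)
input1 = torch.randn([3], requires_grad=True)
input2 = torch.randn([3], requires_grad=True)
target = torch.tensor([1, -1, -1])

# With margin=0.5, a pair must be separated by at least 0.5 in the correct
# direction before its loss reaches 0, so even correctly ordered pairs
# may now contribute a positive loss
loss = nn.MarginRankingLoss(margin=0.5, reduction="none")
print(loss(input1, input2, target))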

Author: Jev_0987

Link: https://www.cnblogs.com/jev-0987/p/17043369.html

License: This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 2.5 China Mainland License.
