Precision、Recall、F1-Score、Micro-F1、Macro-F1
https://blog.csdn.net/sinat_28576553/article/details/80258619
https://blog.csdn.net/zzc15806/article/details/83413333 (good worked example using 王者荣耀)
Precision: of all samples *predicted* positive (tip: read this straight off the formula), how many are correct. (The denominator covers: correctly predicted positive + incorrectly predicted positive, i.e. TP + FP.)
Recall: of all samples that are *actually* positive (tip: read this straight off the formula), how many were predicted correctly. (The denominator covers: correctly predicted positive + incorrectly predicted negative, i.e. TP + FN.)
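The two definitions above can be sketched directly from the raw counts (a minimal pure-Python sketch; the function names are just for illustration):

```python
def precision(tp, fp):
    # Of everything predicted positive, how many were truly positive?
    return tp / (tp + fp)

def recall(tp, fn):
    # Of everything actually positive, how many did we catch?
    return tp / (tp + fn)

# Class A from the table below: TP=2, FP=0, FN=2
print(precision(2, 0))  # 1.0
print(recall(2, 2))     # 0.5
```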
=====  ===  ===  ===  ======================
        A    B    C   total (for Micro-F1)
=====  ===  ===  ===  ======================
TP      2    2    1    5
FP      0    2    2    4
FN      2    1    1    4
=====  ===  ===  ===  ======================
Finally, Micro-F1 and Macro-F1. In a multi-class (or multi-label) task we can compute an F1 for each class, so we clearly need some way to combine the per-class scores.
There are two ways to combine them:
The first computes an overall Precision and Recall from the pooled counts of all classes, then computes F1 from those.
Using the table above: Precision = 5/(5+4) = 0.556, Recall = 5/(5+4) = 0.556; plug these into the F1 formula to get the final score. This is called Micro-F1 (micro-averaging).
The second computes Precision and Recall per class, computes each class's F1, and then averages the F1 scores.
For class A above: P = 2/(2+0) = 1.0, R = 2/(2+2) = 0.5, F1 = (2*1*0.5)/(1+0.5) = 0.667. Do the same for classes B and C, then average the three F1 scores. This is called Macro-F1 (macro-averaging).
If the class distribution in the dataset is imbalanced, micro-F1 is usually the better choice, because macro-F1 does not take the size of each class into account (every class counts equally).
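The micro/macro computation above can be checked end-to-end from the TP/FP/FN table (a sketch in plain Python, using the counts from the table):

```python
tp = {'A': 2, 'B': 2, 'C': 1}
fp = {'A': 0, 'B': 2, 'C': 2}
fn = {'A': 2, 'B': 1, 'C': 1}

# Micro: pool the counts first, then compute P, R, F1 once.
TP, FP, FN = sum(tp.values()), sum(fp.values()), sum(fn.values())
micro_p = TP / (TP + FP)   # 5/9 ≈ 0.556
micro_r = TP / (TP + FN)   # 5/9 ≈ 0.556
micro_f1 = 2 * micro_p * micro_r / (micro_p + micro_r)

# Macro: compute F1 per class, then take the unweighted mean.
f1s = []
for c in tp:
    p = tp[c] / (tp[c] + fp[c])
    r = tp[c] / (tp[c] + fn[c])
    f1s.append(2 * p * r / (p + r) if p + r else 0.0)
macro_f1 = sum(f1s) / len(f1s)

print(round(micro_f1, 3))  # 0.556
print(round(macro_f1, 3))  # 0.546
```

Note the two values differ because class A (perfect precision, small size) pulls the macro average up while micro weights every pooled prediction equally.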
https://blog.csdn.net/lyb3b3b/article/details/84819931
>>> from sklearn.metrics import precision_score, recall_score, f1_score
>>> y_true = [0, 1, 1, 0, 1, 0]
>>> y_pred = [1, 1, 1, 0, 0, 1]
>>> precision = precision_score(y_true, y_pred, average='binary')
>>> print(precision)
0.5
>>> recall = recall_score(y_true, y_pred, average='binary')
>>> print(recall)
0.6666666666666666
>>> f1 = f1_score(y_true, y_pred, average='binary')
>>> print(f1)
0.5714285714285715
micro-F1
>>> y_true = [1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 4, 4]
>>> y_pred = [1, 1, 1, 0, 0, 2, 2, 3, 3, 3, 4, 3, 4, 3]
>>> print(f1_score(y_true, y_pred, labels=[1, 2, 3, 4], average='micro'))
0.6153846153846153
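For comparison, the macro-averaged counterpart on the same data only needs `average='macro'` (a sketch; per-class F1s here are 0.75, 0.667, 0.5 and 0.5, so the macro mean lands near 0.604):

```python
from sklearn.metrics import f1_score

y_true = [1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 4, 4]
y_pred = [1, 1, 1, 0, 0, 2, 2, 3, 3, 3, 4, 3, 4, 3]

# Macro: F1 per label, then the unweighted mean over labels 1..4.
macro = f1_score(y_true, y_pred, labels=[1, 2, 3, 4], average='macro')
print(macro)  # ≈ 0.6042
```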
================ =========== ======== ========= ============
precision recall f_score numsamples
================ =========== ======== ========= ============
disgust 0 0 0 71
happy 0.65217 0.42254 0.51282 71
neutral 0.44444 0.72222 0.55026 72
scream 0.03509 0.02857 0.0315 70
squint 0.30769 0.61111 0.4093 72
surprise 0.1 0.07042 0.08264 71
Average 0.25657 0.30914 0.26442 427
================ =========== ======== ========= ============
============ === === === === === ===
0 1 2 3 4 5
============ === === === === === ===
neutral (0) 52 2 0 17 1 0
happy (1) 4 30 3 27 4 3
surprise (2) 18 1 5 2 1 44
squint (3) 16 3 0 44 7 2
disgust (4) 9 3 0 53 0 6
scream (5) 18 7 42 0 1 2
============ === === === === === ===
Precision of happy — read down column 1 of the confusion matrix (everything predicted as happy):
30/(30 + 2 + 1 + 3 + 3 + 7) = 0.65217
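The per-class precision calculation above generalises to the whole confusion matrix (a NumPy sketch using the matrix above, rows = true class, columns = predicted class):

```python
import numpy as np

labels = ['neutral', 'happy', 'surprise', 'squint', 'disgust', 'scream']
cm = np.array([
    [52,  2,  0, 17,  1,  0],   # neutral
    [ 4, 30,  3, 27,  4,  3],   # happy
    [18,  1,  5,  2,  1, 44],   # surprise
    [16,  3,  0, 44,  7,  2],   # squint
    [ 9,  3,  0, 53,  0,  6],   # disgust
    [18,  7, 42,  0,  1,  2],   # scream
])

# Precision for class j = cm[j, j] / column sum j (all samples predicted as j).
precision = cm.diagonal() / cm.sum(axis=0)
print(round(precision[1], 5))  # happy: 30/46 = 0.65217
```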