=== Summary ===
Correctly Classified Instances          45               90      %
Incorrectly Classified Instances         5               10      %
Kappa statistic                          0.792
Mean absolute error                      0.1
Root mean squared error                  0.3162
Relative absolute error                 20.7954 %
Root relative squared error             62.4666 %
Coverage of cases (0.95 level)          90      %
Mean rel. region size (0.95 level)      50      %
Total Number of Instances               50
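A summary block like the one above can be reproduced with Weka's Java API. The sketch below is a minimal example, assuming an ARFF file named data.arff and a J48 decision tree; any dataset and classifier could be substituted. Evaluation, crossValidateModel, toSummaryString, toClassDetailsString, and toMatrixString are the Weka classes and methods that produce the three sections shown in this article.

import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class SummaryDemo {
    public static void main(String[] args) throws Exception {
        // Load the dataset; "data.arff" is a placeholder file name.
        Instances data = new DataSource("data.arff").getDataSet();
        // Use the last attribute as the class attribute.
        data.setClassIndex(data.numAttributes() - 1);

        // J48 is used here only as an example classifier.
        J48 classifier = new J48();

        // 10-fold cross-validation; Evaluation accumulates all statistics.
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(classifier, data, 10, new Random(1));

        // Print the Summary, Detailed Accuracy By Class,
        // and Confusion Matrix sections.
        System.out.println(eval.toSummaryString("=== Summary ===", false));
        System.out.println(eval.toClassDetailsString());
        System.out.println(eval.toMatrixString());
    }
}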
=== Detailed Accuracy By Class ===
TP Rate (true positive rate)   FP Rate (false positive rate)   Precision   Recall   F-Measure   MCC (Matthews correlation coefficient)   ROC Area   PRC Area   Class
0.773 0 1 0.773 0.872 0.81 0.886 0.873 true
1 0.227 0.848 1 0.918 0.81 0.886 0.848 false
Weighted Avg. 0.9 0.127 0.915 0.9 0.898 0.81 0.886 0.859
=== Confusion Matrix ===
a b <-- classified as
17 5 | a = true
0 28 | b = false
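The per-class numbers in the table above follow directly from this confusion matrix. The sketch below recomputes precision, recall, F-Measure, FP rate, MCC, and the kappa statistic from the four cell counts using their standard definitions; it is an independent check of the reported values, not Weka's own code.

public class ConfusionMatrixMetrics {
    public static void main(String[] args) {
        // Cell counts from the confusion matrix, with "true" as the positive class.
        double tp = 17; // true instances classified as true
        double fn = 5;  // true instances classified as false
        double fp = 0;  // false instances classified as true
        double tn = 28; // false instances classified as false
        double total = tp + fn + fp + tn;

        double precision = tp / (tp + fp);                                 // 1.000
        double recall    = tp / (tp + fn);                                 // 0.773 (TP Rate)
        double fMeasure  = 2 * precision * recall / (precision + recall);  // 0.872
        double fpRate    = fp / (fp + tn);                                 // 0.000

        // Matthews correlation coefficient.
        double mcc = (tp * tn - fp * fn)
                / Math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)); // 0.810

        // Kappa: observed accuracy compared with the accuracy expected by chance.
        double observed = (tp + tn) / total;                               // 0.90
        double expected = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / (total * total);
        double kappa = (observed - expected) / (1 - expected);             // 0.792

        System.out.printf("precision=%.3f recall=%.3f F=%.3f FP rate=%.3f%n",
                precision, recall, fMeasure, fpRate);
        System.out.printf("MCC=%.3f kappa=%.3f accuracy=%.1f%%%n",
                mcc, kappa, observed * 100);
    }
}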
F-Measure is the harmonic mean of precision and recall: F = 2 × Precision × Recall / (Precision + Recall). For the true class above, F = 2 × 1 × 0.773 / (1 + 0.773) ≈ 0.872, which matches the table.
ROC Area is normally greater than 0.5, and the closer it is to 1, the better the model discriminates between the classes. Values between 0.5 and 0.7 indicate low accuracy, values between 0.7 and 0.9 moderate accuracy, and values above 0.9 high accuracy. A value of exactly 0.5 means the model has no discriminating power and no diagnostic value, while values below 0.5 do not reflect a realistic situation and are very rarely seen in practice.
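ROC Area is the area under the ROC curve (AUC). It can be read as the probability that a randomly chosen positive instance receives a higher predicted score than a randomly chosen negative one, which is why 0.5 corresponds to random guessing. The sketch below computes AUC by that pairwise definition; the scores and labels are made-up illustrative values, not taken from the experiment above.

public class RocAreaDemo {
    // AUC as the fraction of positive/negative pairs that are ranked correctly,
    // counting ties as half a concordant pair (Mann-Whitney formulation).
    static double rocArea(double[] scores, boolean[] isPositive) {
        double concordant = 0;
        long pairs = 0;
        for (int i = 0; i < scores.length; i++) {
            if (!isPositive[i]) continue;
            for (int j = 0; j < scores.length; j++) {
                if (isPositive[j]) continue;
                pairs++;
                if (scores[i] > scores[j]) concordant += 1.0;
                else if (scores[i] == scores[j]) concordant += 0.5;
            }
        }
        return concordant / pairs;
    }

    public static void main(String[] args) {
        // Hypothetical predicted probabilities for the positive class.
        double[] scores = {0.95, 0.80, 0.70, 0.60, 0.40, 0.30, 0.20, 0.10};
        boolean[] isPositive = {true, true, false, true, false, true, false, false};
        System.out.printf("ROC Area = %.3f%n", rocArea(scores, isPositive)); // 0.813
    }
}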