A Brief Look at LIME Usage

LIME stands for Local Interpretable Model-Agnostic Explanations. It is a local, interpretable, model-agnostic explanation algorithm introduced in Marco Tulio Ribeiro's 2016 paper "Why Should I Trust You?": Explaining the Predictions of Any Classifier. The algorithm is mainly used with text and image models.

Constructing a text explainer:

from lime.lime_text import LimeTextExplainer

# create the explainer
explainer = LimeTextExplainer()

# explain one prediction; predict_fn takes a list of strings and
# returns class probabilities as a numpy array
exp = explainer.explain_instance(raw_text_instance, predict_fn)

# as a list of (word, weight) pairs
exp.as_list()

# as a matplotlib figure
# %matplotlib inline
fig = exp.as_pyplot_figure()
exp.show_in_notebook(text=False)

# save the explanation as html
exp.save_to_file('/tmp/oi.html')
exp.show_in_notebook(text=True)

Example: http://marcotcr.github.io/lime/tutorials/Lime%20-%20basic%20usage%2C%20two%20class%20case.html
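Here raw_text_instance and predict_fn stand for your own text sample and classifier. Under the hood, LIME perturbs the instance, weights the perturbed samples by their proximity to the original, and fits a weighted linear surrogate whose coefficients become the explanation. A pure-NumPy sketch of that core idea, with a hypothetical toy scorer standing in for the black-box model (explain_locally and toy_model are illustrative names, not part of the lime API):

```python
import numpy as np

def explain_locally(predict_fn, n_words, n_samples=1000, kernel_width=0.75, seed=0):
    rng = np.random.default_rng(seed)
    # binary masks over the words of the instance: 1 = word kept, 0 = word removed
    Z = rng.integers(0, 2, size=(n_samples, n_words)).astype(float)
    Z[0] = 1.0  # include the unperturbed instance itself
    y = predict_fn(Z)
    # proximity weighting: samples closer to the original instance count more
    distances = 1.0 - Z.mean(axis=1)
    weights = np.exp(-(distances ** 2) / kernel_width ** 2)
    # weighted least squares fit of a linear surrogate (intercept column appended)
    X = np.hstack([Z, np.ones((n_samples, 1))])
    W = np.sqrt(weights)[:, None]
    beta, *_ = np.linalg.lstsq(X * W, y * W[:, 0], rcond=None)
    return beta[:-1]  # per-word weights, intercept dropped

# hypothetical black box: word 0 pushes the score up, word 2 pushes it down
toy_model = lambda Z: 2.0 * Z[:, 0] - 1.0 * Z[:, 2] + 0.1
coefs = explain_locally(toy_model, n_words=4)
```

The recovered coefficients mirror the toy model's behaviour: a large positive weight for word 0 and a negative weight for word 2, which is exactly the kind of (word, weight) list exp.as_list() returns.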

Constructing an image explainer:
from lime import lime_image

explainer = lime_image.LimeImageExplainer()

# image must be [height, width, channels]; classifier_fn returns class probabilities
explanation = explainer.explain_instance(image=x, classifier_fn=predict, segmentation_fn=segmentation)

# visualise the top predicted label; keep at most 10 superpixels with |weight| >= 0.05
img, msk = explanation.get_image_and_mask(explanation.top_labels[0], positive_only=False, negative_only=False, hide_rest=False, num_features=10, min_weight=0.05)
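get_image_and_mask returns the image together with a mask marking the superpixels whose weights pass min_weight. A pure-NumPy sketch of that selection logic, using a hypothetical 4x4 image split into two segments and made-up per-segment weights (the values here are invented for illustration):

```python
import numpy as np

# hypothetical 4x4 RGB image with two superpixels:
# segment 0 = left half, segment 1 = right half
image = np.arange(48, dtype=float).reshape(4, 4, 3)
segments = np.zeros((4, 4), dtype=int)
segments[:, 2:] = 1

# made-up LIME weights per segment; only |weight| >= min_weight is kept
weights = {0: 0.1, 1: 0.8}
min_weight = 0.5

mask = np.zeros_like(segments)
for seg, w in weights.items():
    if abs(w) >= min_weight:
        mask[segments == seg] = 1

# hide_rest=True behaviour: zero out everything outside the selected superpixels
img_hidden = image * mask[..., None]
```

Only segment 1 survives the min_weight cutoff, so the mask covers the right half of the image and the left half is blacked out.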

Two things to watch out for:

1. The image argument to explain_instance() must be a numpy array in [height, width, channels] layout, so a PyTorch tensor has to be converted before being passed in:

numpy_image = tensor_image.permute(1, 2, 0).numpy().astype(np.double)  # [C, H, W] -> [H, W, C]
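The permute(1, 2, 0) call maps a [channels, height, width] tensor to [height, width, channels]. The same axis reshuffle in pure NumPy, as a quick shape check (the sizes here are arbitrary):

```python
import numpy as np

chw = np.zeros((3, 32, 48))         # [channels, height, width], PyTorch's layout
hwc = np.transpose(chw, (1, 2, 0))  # [height, width, channels], what LIME expects
```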

2. The model needs a wrapper: convert the incoming numpy array back to a tensor, restore the [batch, channels, height, width] layout, and call .eval() before predicting.

def predict(batch):
    model.eval()
    batch = torch.as_tensor(batch, dtype=torch.float32)  # numpy -> tensor
    batch = batch.permute(0, 3, 1, 2)                    # [N, H, W, C] -> [N, C, H, W]
    with torch.no_grad():
        output = model(batch)
    return output.numpy()

posted @ 2020-10-30 21:54  Mydrizzle