Proj CDeepFuzz Paper Reading: CAGFuzz: Coverage-Guided Adversarial Generative Fuzzing Testing of Deep Learning Systems
Abstract
Background:
- Q: Existing methods either fail to account for the effect of small perturbations, or produce perturbations that only work on one specific model and are no longer valid examples on other models (possibly meaning the label would change?)
- Existing methods mostly rely on shallow feature constraints to measure the distance between a generated adversarial example and the original example, ignoring high-level semantics such as object category and scene semantics
This paper: CAGFuzz (Coverage-guided Adversarial Generative Fuzzing)
Task: Generate adversarial examples for DNN models
Method:
- train an Adversarial Example Generator (AEG) on general datasets
- Q: AEG trait: it only considers the data characteristics and avoids low generalization ability (i.e., it is not tied to a particular model)
- extract high-level (deep) features from the original and adversarial examples, and use cosine similarity between them to ensure the adversarial example stays semantically similar to the original example
- use the adversarial examples to retrain the model
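The deep-feature similarity check above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the 0.9 threshold and the toy feature vectors are assumptions for demonstration; in CAGFuzz the features would come from a deep CNN layer.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two deep-feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def semantically_similar(feat_orig: np.ndarray,
                         feat_adv: np.ndarray,
                         threshold: float = 0.9) -> bool:
    # Keep an adversarial example only if its deep features stay close
    # (in cosine similarity) to the original's features.
    # The threshold value here is illustrative, not from the paper.
    return cosine_similarity(feat_orig, feat_adv) >= threshold

# Toy feature vectors (hypothetical; stand-ins for CNN activations).
orig = np.array([1.0, 0.5, 0.2])
adv_ok = np.array([0.9, 0.6, 0.25])   # small semantic drift -> kept
adv_bad = np.array([-1.0, 0.1, 0.9])  # large semantic drift -> rejected
```

Examples that fail the check are discarded before being used for fuzzing or retraining.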
Experiments:
Datasets: MNIST (LeNet-1, LeNet-4, LeNet-5), CIFAR-10 (VGG-16, VGG-19, ResNet-20), ImageNet (VGG-16, VGG-19, ResNet-50)
Competitors: FGSM, DeepHunter, DeepXplore
Results: improvements in neuron coverage rate, in the number of hidden errors exposed, and in model accuracy after retraining
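For reference, the neuron coverage metric used in this line of work (introduced by DeepXplore) counts a neuron as covered if its scaled activation exceeds a threshold on at least one test input. A minimal numpy sketch, with an assumed threshold of 0.5 and a toy activation matrix:

```python
import numpy as np

def neuron_coverage(activations: np.ndarray, threshold: float = 0.5) -> float:
    """Fraction of neurons whose (per-neuron scaled) activation exceeds
    `threshold` on at least one input.

    activations: shape (num_inputs, num_neurons), values scaled to [0, 1].
    """
    covered = (activations > threshold).any(axis=0)  # covered at least once
    return float(covered.sum() / activations.shape[1])

# Toy activations for 2 inputs over 4 neurons (illustrative values).
acts = np.array([
    [0.1, 0.9, 0.3, 0.0],
    [0.6, 0.2, 0.4, 0.1],
])
# Only neurons 0 and 1 ever exceed 0.5, so coverage is 2/4 = 0.5.
```

Coverage-guided fuzzers like CAGFuzz and DeepHunter prioritize inputs that raise this kind of metric.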