Proj CDeepFuzz Paper Reading: The limitations of deep learning in adversarial settings
Abstract
This paper:
Task: formalize the space of adversaries against DNNs and introduce a new class of algorithms that craft adversarial samples from a precise understanding of the mapping between the network's inputs and outputs.
Experiments:
Method: craft adversarial samples via saliency maps built from the network's forward derivative (Jacobian); define a hardness measure to quantify how sensitive each class is to adversarial perturbations.
Results:
- Generates adversarial examples with a 97% success rate (the DNN's errors are verified by human inspection), modifying on average only 4.02% of the input features per sample
- Evaluates how sensitive different classes are to adversarial perturbations
- Evaluates existing defenses
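The Method line above can be sketched in code. This is a minimal, hypothetical illustration of the paper's Jacobian-based saliency map idea, not the authors' implementation: it uses a tiny hand-weighted softmax "network" (the paper attacks real DNNs, e.g. on MNIST) and a simplified single-feature perturbation rule instead of the paper's pixel-pair rule.

```python
import numpy as np

# Toy softmax "network": one linear layer with hand-picked weights,
# standing in for a trained DNN (hypothetical values, for illustration).
W = np.array([
    [ 1.0, 0.0, -1.0,  0.2,  0.0],   # class 0
    [ 0.0, 1.0,  0.5, -0.3,  0.1],   # class 1
    [-0.5, 0.2,  1.0,  0.0, -0.2],   # class 2
])

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict_probs(x):
    return softmax(W @ x)

def forward_derivative(x):
    """Jacobian dF/dx of the softmax outputs w.r.t. the input features."""
    p = predict_probs(x)
    dz = np.diag(p) - np.outer(p, p)   # derivative of softmax w.r.t. logits
    return dz @ W                      # chain rule through the linear layer

def saliency_map(x, target):
    """Keep features whose target-class derivative is positive and whose
    summed non-target derivative is negative; score by their product."""
    J = forward_derivative(x)
    jt = J[target]                     # derivative toward the target class
    jo = J.sum(axis=0) - jt            # summed derivative of other classes
    return np.where((jt > 0) & (jo < 0), jt * np.abs(jo), 0.0)

def jsma(x, target, theta=0.5, max_iter=50):
    """Greedily bump the most salient feature by `theta` until the model
    predicts `target` (simplified single-feature variant)."""
    x = x.copy()
    for _ in range(max_iter):
        if predict_probs(x).argmax() == target:
            break
        x[np.argmax(saliency_map(x, target))] += theta
    return x

x0 = np.array([0.0, 0.0, 1.0, 0.0, 0.0])   # clean sample: predicted class 2
target = 0                                  # adversarial target class
x_adv = jsma(x0, target)
```

In this toy setup the attack flips the prediction to the target class while touching only one of the five input features, echoing the paper's finding that a small fraction of features (4.02% on average) suffices.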