Proj CDeepFuzz Paper Reading: Towards Deep Learning Models Resistant to Adversarial Attacks
Abstract
Paper:
Github:
- https://github.com/MadryLab/mnist_challenge
- https://github.com/MadryLab/cifar10_challenge
Task:
1. Study adversarial robustness from the perspective of robust optimization.
2. Provide a concrete security guarantee: security against a first-order adversary, proposed as a natural and broad notion of robustness.
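The first-order adversary the paper studies is instantiated by projected gradient descent (PGD): repeatedly step in the sign of the loss gradient, then project back into an L-infinity ball around the original input. A minimal NumPy sketch of that loop, using a hypothetical toy quadratic loss rather than the repos' TensorFlow models:

```python
import numpy as np

def pgd_attack(x0, grad_fn, epsilon=0.3, alpha=0.01, steps=100):
    """PGD adversary: maximize the loss within an L_inf ball of radius epsilon.

    x0      : original (clean) input
    grad_fn : returns the gradient of the loss w.r.t. the input
    """
    # random start inside the ball, as in the paper's PGD-with-restarts setup
    x = x0 + np.random.uniform(-epsilon, epsilon, size=x0.shape)
    for _ in range(steps):
        x = x + alpha * np.sign(grad_fn(x))      # signed gradient ascent step
        x = np.clip(x, x0 - epsilon, x0 + epsilon)  # project back into the ball
    return x

# hypothetical toy loss L(x) = ||x||^2, gradient 2x; the adversary pushes x outward
x0 = np.array([0.5, -0.5])
x_adv = pgd_attack(x0, lambda x: 2.0 * x)
```

With enough steps the iterate saturates at the corner of the ball, `x0 + epsilon * sign(x0)`; adversarial training then minimizes the loss at such worst-case points, giving the paper's min-max formulation.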