Proj CDeepFuzz Paper Reading: Threat of adversarial attacks on deep learning in computer vision: A survey

Abstract

This paper: a survey of adversarial attacks on deep learning models in computer vision
Scope: 1. design of attacks 2. defenses 3. adversarial attacks in real-world scenarios 4. future directions

1. Intro

2. Definitions of Terms

3. Adversarial Attacks

3.1 Attacks For Classification

3.1.1 Box-Constrained L-BFGS

3.1.2 Fast Gradient Sign Method
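
FGSM perturbs the input one step in the direction of the sign of the loss gradient: x' = x + eps * sign(grad_x J(x, y)). A minimal NumPy sketch on a toy logistic-regression "model" (the model, weights, and eps here are illustrative stand-ins, not from the survey):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step FGSM: x_adv = x + eps * sign(dL/dx).

    L is binary cross-entropy of a logistic model p = sigmoid(w.x + b),
    whose gradient w.r.t. the input is (p - y) * w.
    """
    p = sigmoid(np.dot(w, x) + b)      # model confidence for class 1
    grad_x = (p - y) * w               # dL/dx for cross-entropy loss
    return x + eps * np.sign(grad_x)   # single L_inf-bounded step
```

The sign operation makes the perturbation L_inf-bounded by eps regardless of the gradient's magnitude, which is why FGSM is a one-shot, very cheap attack.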

3.1.3 Basic and Least-Likely-Class Iterative Methods
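
The Basic Iterative Method (BIM) applies FGSM repeatedly with a small step size, clipping after each step so the result stays in the eps-ball around the original input (the least-likely-class variant instead descends the loss of the least-likely label). A sketch on the same toy logistic model used above (model and hyperparameters are illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bim(x, y, w, b, eps, alpha, steps):
    """Basic Iterative Method: repeated small FGSM steps, each result
    clipped back into the L_inf eps-ball around the original input."""
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(np.dot(w, x_adv) + b)
        grad_x = (p - y) * w                      # cross-entropy input-gradient
        x_adv = x_adv + alpha * np.sign(grad_x)   # small FGSM step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project onto the eps-ball
    return x_adv
```

Typically alpha is much smaller than eps, so the iterate explores the eps-ball rather than jumping straight to its boundary.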

3.1.4 Jacobian-Based Saliency Map Attack (JSMA)

3.1.5 One Pixel Attack
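
The one-pixel attack modifies a single pixel, found in the paper via differential evolution. As a hedged stand-in for that search, the sketch below uses a toy *linear* scorer, where the best single pixel is simply the coordinate with the largest-magnitude weight (this greedy shortcut is my illustration, not the paper's algorithm):

```python
import numpy as np

def one_pixel_attack(x, w, b, lo=0.0, hi=1.0):
    """One-pixel attack sketch on a linear scorer f(x) = w.x + b.

    The original attack searches for the pixel with differential
    evolution; for a linear model the choice is exact: take the
    coordinate with the largest |w_i| and push it to the bound that
    moves f toward the opposite class.
    """
    f = np.dot(w, x) + b
    i = int(np.argmax(np.abs(w)))      # most influential "pixel"
    x_adv = x.copy()
    # If f > 0 (class 1) we want to lower f, else raise it; the bound
    # that does so depends on the sign of w[i].
    x_adv[i] = lo if (f > 0) == (w[i] > 0) else hi
    return x_adv
```

Whether the label actually flips depends on how influential that one pixel is; the point of the attack is that on real networks a single pixel surprisingly often suffices.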

3.1.6 Carlini and Wagner Attacks (C&W)

3.1.7 DeepFool
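
DeepFool seeks the minimal perturbation that crosses the decision boundary. For an affine binary classifier f(x) = w.x + b the answer is closed-form: the orthogonal projection r = -f(x) * w / ||w||^2 onto the boundary f = 0; for deep networks DeepFool iterates this step on a local linearization. A sketch of the linear case (the classifier here is an illustrative assumption):

```python
import numpy as np

def deepfool_linear(x, w, b, overshoot=0.02):
    """DeepFool for an affine binary classifier f(x) = w.x + b.

    The minimal L2 perturbation to the boundary f = 0 is the orthogonal
    projection r = -f(x) * w / ||w||^2.  A small 'overshoot' pushes the
    point just past the boundary so the predicted label actually flips.
    """
    f = np.dot(w, x) + b
    r = -f * w / np.dot(w, w)          # closed-form minimal perturbation
    return x + (1 + overshoot) * r
```

Because the perturbation is the projection distance, DeepFool tends to produce much smaller perturbations than fixed-eps methods like FGSM.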

3.1.8 Universal Adversarial Perturbations

3.1.9 UPSET and ANGRI

3.1.10 HOUDINI

3.1.11 Adversarial Transformation Networks (ATNs)

3.1.12 Miscellaneous Attacks

3.2 Attacks Beyond Classification/Recognition

3.2.1 Attack on Autoencoders and Generative Models

3.2.2 Attack on Recurrent Neural Networks

3.2.3 Attack on Deep Reinforcement Learning

3.2.4 Attack on Semantic Segmentation & Object Detection

3.2.5 Attack on Face Attributes

4. Attacks in the Real World

4.1 Cell-phone Camera Attack

4.2 Road Sign Attack

4.3 Generic Adversarial 3D Objects

4.4 Cyberspace Attacks

4.5 Robotic Vision & Visual QA Attacks

5. On the Existence of Adversarial Examples

5.1 Limits on Adversarial Robustness

5.2 Space of Adversarial Examples

5.3 Boundary Tilting Perspective

5.4 Prediction Uncertainty and Evolutionary Stalling of Training Cause Adversaries

5.5 Accuracy-Adversarial Robustness Correlation

5.6 More on Linearity As The Source

5.7 Existence of Universal Perturbations

6. Defenses Against Adversarial Attacks

6.1 Modified Training/Input
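
A representative defense in this family is adversarial training: augment each training batch with adversarial examples crafted against the current model. A toy sketch with FGSM-augmented logistic regression (the dataset, model, and hyperparameters are illustrative assumptions, not from the survey):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_train(X, y, eps=0.1, lr=0.5, epochs=200):
    """Adversarial training sketch: at every step, craft FGSM examples
    against the current weights and fit the mixed clean + adversarial
    batch with cross-entropy gradient descent on a logistic model."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1]) * 0.01
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        X_adv = X + eps * np.sign((p - y)[:, None] * w)  # per-example FGSM
        Xb = np.vstack([X, X_adv])                       # mixed batch
        yb = np.concatenate([y, y])
        pb = sigmoid(Xb @ w + b)
        w -= lr * (Xb.T @ (pb - yb)) / len(yb)           # cross-entropy grads
        b -= lr * float(np.sum(pb - yb)) / len(yb)
    return w, b
```

Training on the attacked inputs encourages a decision margin of at least eps, which is why this modified-training defense improves robustness against the same attack it trains with.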

6.2 Modifying The Network

6.3 Network Add-ons

7. Outlook of the Research Direction

8. Conclusion

posted @ 2023-08-29 16:07 雪溯