Post Category - Robust Learning

Abstract: Koh P W, Liang P. Understanding Black-box Predictions via Influence Functions[C]. International Conference on Machine Learning, 2017: 1885-1894. Read more
posted @ 2020-05-21 21:06 馒头and花卷 Views(422) Comments(0) Recommended(0)
Abstract: Ilyas A, Santurkar S, Tsipras D, et al. Adversarial Examples Are Not Bugs, They Are Features[C]. Neural Information Processing Systems, 2019: 125-136. Read more
posted @ 2020-05-15 16:07 馒头and花卷 Views(693) Comments(17) Recommended(1)
Abstract: Moosavi-Dezfooli S, Fawzi A, Frossard P, et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks[C]. Computer Vision and Pattern Recognition… Read more
posted @ 2020-05-07 21:27 馒头and花卷 Views(609) Comments(2) Recommended(0)
Abstract: Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, Ananthram Swami, The Limitations of Deep Learning in Adversarial Settings… Read more
posted @ 2020-05-07 11:34 馒头and花卷 Views(485) Comments(6) Recommended(1)
Abstract: Alexey Kurakin, Ian J. Goodfellow, Samy Bengio, Adversarial Examples in the Physical World. Overview: There are many methods for generating adversarial samples, but do such adversarial samples exist in the real world? Read more
posted @ 2020-05-05 20:31 馒头and花卷 Views(1108) Comments(4) Recommended(0)
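The attack this paper studies in the physical-world setting is the basic iterative method, i.e. FGSM applied in small clipped steps. A minimal PyTorch-style sketch, assuming a classifier `model` and cross-entropy loss (the names are illustrative, not from the original post):

```python
import torch
import torch.nn.functional as F

def basic_iterative_attack(model, x, y, eps=8/255, alpha=1/255, steps=10):
    """Iterative FGSM (Kurakin et al.): small signed-gradient steps,
    clipped to an eps-ball around the clean input and to [0, 1]."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # ascend the loss, then project back into the eps-ball
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.clamp(x_adv, x - eps, x + eps).clamp(0, 1)
    return x_adv
```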
Abstract: Su J, Vargas D V, Sakurai K, et al. One Pixel Attack for Fooling Deep Neural Networks[J]. IEEE Transactions on Evolutionary Computation, 2019, 23(5)… Read more
posted @ 2020-04-14 14:41 馒头and花卷 Views(337) Comments(0) Recommended(0)
Abstract: Nicholas Carlini, David Wagner, Towards Evaluating the Robustness of Neural Networks. Overview: Proposes methods for generating adversarial samples under different norms (ℓ0, ℓ2, ℓ∞)… Read more
posted @ 2020-04-08 16:54 馒头and花卷 Views(756) Comments(0) Recommended(0)
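For reference, the ℓ2 variant of the Carlini-Wagner attack solves the following objective (standard form from the paper, reproduced here for context; the change-of-variables trick is omitted):

$$
\min_{\delta}\ \|\delta\|_2^2 + c \cdot f(x+\delta), \qquad
f(x') = \max\Bigl(\max_{i \neq t} Z(x')_i - Z(x')_t,\ -\kappa\Bigr),
$$

where $Z(\cdot)$ are the logits, $t$ is the target class, and $\kappa$ controls the confidence of the adversarial example.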
Abstract: Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, Ananthram Swami, Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks… Read more
posted @ 2020-04-08 16:50 馒头and花卷 Views(408) Comments(1) Recommended(0)
Abstract: Dan Hendrycks, Norman Mu, et al., AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty. Overview: This post introduces the AugMix algorithm, which takes some existing augmentation… Read more
posted @ 2020-03-24 13:49 馒头and花卷 Views(595) Comments(0) Recommended(0)
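AugMix convexly mixes the outputs of several randomly sampled augmentation chains with Dirichlet weights, then interpolates with the clean image using a Beta-sampled weight. A minimal NumPy sketch, assuming a list `augmentations` of image-to-image functions (a hypothetical argument, not from the post):

```python
import random
import numpy as np

def augmix(image, augmentations, width=3, depth=3, alpha=1.0):
    """AugMix-style mixing: convex combination of `width` random
    augmentation chains, then a Beta-weighted mix with the clean image."""
    w = np.random.dirichlet([alpha] * width)  # chain mixing weights
    m = np.random.beta(alpha, alpha)          # clean-vs-augmented weight
    mixed = np.zeros_like(image, dtype=np.float32)
    for i in range(width):
        aug = image.astype(np.float32)
        for _ in range(random.randint(1, depth)):  # chain of 1..depth ops
            aug = random.choice(augmentations)(aug)
        mixed += w[i] * aug
    return m * image.astype(np.float32) + (1 - m) * mixed
```

In the paper this is paired with a Jensen-Shannon consistency loss between the clean and two AugMix views; the sketch above covers only the data-processing step.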
Abstract: Xie C, Tan M, Gong B, et al. Adversarial Examples Improve Image Recognition[J]. arXiv: Computer Vision and Pattern Recognition, 2019. Read more
posted @ 2020-03-12 14:59 馒头and花卷 Views(1311) Comments(0) Recommended(0)
Abstract: Zhang H, Yu Y, Jiao J, et al. Theoretically Principled Trade-off between Robustness and Accuracy[J]. arXiv: Learning, 2019. Read more
posted @ 2020-03-12 14:24 馒头and花卷 Views(1454) Comments(0) Recommended(0)
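This paper's TRADES objective balances natural accuracy against robustness through a KL regularizer (standard statement of the loss, added here for context):

$$
\min_f \ \mathbb{E}_{(x,y)}\Bigl[\, \mathcal{L}\bigl(f(x), y\bigr) \;+\; \beta \max_{\|x' - x\| \le \epsilon} \mathrm{KL}\bigl(f(x) \,\|\, f(x')\bigr) \Bigr],
$$

where the first term drives clean accuracy and the second penalizes sensitivity inside the $\epsilon$-ball, with $\beta$ setting the trade-off.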
Abstract: Madry A, Makelov A, Schmidt L, et al. Towards Deep Learning Models Resistant to Adversarial Attacks[J]. arXiv: Machine Learning, 2017. Read more
posted @ 2020-03-04 20:08 馒头and花卷 Views(904) Comments(0) Recommended(0)
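Madry et al. cast adversarial training as a saddle-point problem and solve the inner maximization with projected gradient descent (PGD); the standard formulation and update are:

$$
\min_\theta \ \mathbb{E}_{(x,y)\sim\mathcal{D}} \Bigl[ \max_{\|\delta\|_\infty \le \epsilon} \mathcal{L}(\theta, x + \delta, y) \Bigr],
\qquad
x^{t+1} = \Pi_{B_\infty(x,\epsilon)}\Bigl(x^t + \alpha\,\mathrm{sign}\bigl(\nabla_x \mathcal{L}(\theta, x^t, y)\bigr)\Bigr).
$$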
Abstract: Goodfellow I, Shlens J, Szegedy C, et al. Explaining and Harnessing Adversarial Examples[J]. arXiv: Machine Learning, 2014. Read more
posted @ 2020-03-04 19:35 馒头and花卷 Views(463) Comments(0) Recommended(0)
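The fast gradient sign method (FGSM) introduced in this paper perturbs the input with a single signed-gradient step:

$$
x_{\mathrm{adv}} = x + \epsilon \cdot \mathrm{sign}\bigl(\nabla_x J(\theta, x, y)\bigr).
$$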
Abstract: Papernot N, McDaniel P, Goodfellow I, et al. Practical Black-Box Attacks against Machine Learning[C]. Computer and Communications Security, 2017: 506-… Read more
posted @ 2020-03-04 19:32 馒头and花卷 Views(375) Comments(0) Recommended(1)
