Abstract: Wang X., He K., Guo C., Weinberger K., Hopcroft H., AT-GAN: A Generative Attack Model for Adversarial Transferring on Generative Adversarial Nets. arXiv ...
posted @ 2020-06-17 11:30 馒头and花卷
Abstract: Xiao C, Li B, Zhu J, et al. Generating Adversarial Examples with Adversarial Networks[J]. arXiv: Cryptography and Security, 2018. @article{xiao2018gen ...
posted @ 2020-06-16 10:46 馒头and花卷
Abstract: Wang Y, Zou D, Yi J, et al. Improving Adversarial Robustness Requires Revisiting Misclassified Examples[C]. international conference on learning representations ...
posted @ 2020-06-13 08:52 馒头and花卷
Abstract: Kingma D P, Ba J. Adam: A Method for Stochastic Optimization[J]. arXiv: Learning, 2014. @article{kingma2014adam, title=, author={Kingma, Diederik P a ...
posted @ 2020-06-04 21:59 馒头and花卷
Abstract: Moosavi-Dezfooli S, Fawzi A, Fawzi O, et al. Universal Adversarial Perturbations[C]. computer vision and pattern recognition, 2017: 86-94. @article{moo ...
posted @ 2020-06-03 21:51 馒头and花卷
Abstract: Schmidt L, Santurkar S, Tsipras D, et al. Adversarially Robust Generalization Requires More Data[C]. neural information processing systems, 2018: 5014 ...
posted @ 2020-06-02 20:46 馒头and花卷
Abstract: Samangouei P, Kabkab M, Chellappa R, et al. Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models[J]. arXiv: Comput ...
posted @ 2020-05-28 15:26 馒头and花卷
Abstract: Athalye A, Carlini N, Wagner D, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples[J]. arXiv: ...
posted @ 2020-05-28 15:05 馒头and花卷
Abstract: Keskar N S, Mudigere D, Nocedal J, et al. On Large Batch Training for Deep Learning: Generalization Gap and Sharp Minima[J]. arXiv: Learning, 2 ...
posted @ 2020-05-24 20:24 馒头and花卷
Abstract: Richard D. Gill, Product Integration. The ordinary integral is the Riemann integral, computed by partitioning a region infinitely finely, summing over the pieces, and taking the limit; there is another kind of integral in which the region is partitioned infinitely finely and the pieces are multiplied before taking the limit. This has many applications in survival models. Survival model: let the survival time be a random variable $T$; the survival function is then defined as ... (the excerpt cuts off here; see the sketch after this entry)
posted @ 2020-05-23 20:22 馒头and花卷
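The excerpt is cut off right at the definition of the survival function. A minimal worked-equation sketch, assuming the usual survival-analysis conventions (this completion is not quoted from the post itself): with $F$ the distribution function of the survival time $T$,
\[
S(t) := \mathrm{P}(T > t), \qquad
\Lambda(t) := \int_0^t \frac{\mathrm{d}F(s)}{S(s^-)}, \qquad
S(t) = \prod_{s \in (0,\,t]} \bigl(1 - \mathrm{d}\Lambda(s)\bigr),
\]
where $\Lambda$ is the cumulative hazard and the last expression is the product integral: partition $(0, t]$ into small intervals, multiply the factors $1 - \Delta\Lambda$ over the pieces, and pass to the limit as the partition is refined. This is exactly the "multiply instead of sum" construction the excerpt describes.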
Abstract: Koh P W, Liang P. Understanding black-box predictions via influence functions[C]. international conference on machine learning, 2017: 1885-1894. @arti ...
posted @ 2020-05-21 21:06 馒头and花卷
Abstract: Ilyas A, Santurkar S, Tsipras D, et al. Adversarial Examples Are Not Bugs, They Are Features[C]. neural information processing systems, 2019: 125-136.
posted @ 2020-05-15 16:07 馒头and花卷
Abstract: Moosavi-Dezfooli S, Fawzi A, Frossard P, et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks[C]. computer vision and pattern recognition ...
posted @ 2020-05-07 21:27 馒头and花卷
Abstract: Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, Ananthram Swami, The Limitations of Deep Learning in Adversarial Settings ...
posted @ 2020-05-07 11:34 馒头and花卷
Abstract: Alexey Kurakin, Ian J. Goodfellow, Samy Bengio, ADVERSARIAL EXAMPLES IN THE PHYSICAL WORLD. Overview: there are many methods for generating adversarial samples, but do such adversarial samples also exist in the real world? ...
posted @ 2020-05-05 20:31 馒头and花卷
Abstract: Pirmin Lemberger, Ivan Panico, A Primer on Domain Adaptation Theory and Applications, 2019. Overview: machine learning proceeds in two steps, training and testing, and it is usually assumed that the training samples and the test samples follow the same distribution; in practice this does not necessarily ...
posted @ 2020-05-04 19:29 馒头and花卷
Abstract: Wang J, Chen Y, Chakraborty R, et al. Orthogonal Convolutional Neural Networks[J]. arXiv: Computer Vision and Pattern Recognition, 2019. @article{wan ...
posted @ 2020-04-23 22:56 馒头and花卷
Abstract: Cyr E C, Gulian M, Patel R G, et al. Robust Training and Initialization of Deep Neural Networks: An Adaptive Basis Viewpoint[J]. arXiv: Learning, 201 ...
posted @ 2020-04-23 14:53 馒头and花卷
Abstract: He K, Zhang X, Ren S, et al. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification[C]. international conference ...
posted @ 2020-04-23 13:19 馒头and花卷
Abstract: Glorot X, Bengio Y. Understanding the difficulty of training deep feedforward neural networks[C]. international conference on artificial intelligence ...
posted @ 2020-04-23 10:51 馒头and花卷