Post category - [Cohort 5: Zou Yufu]
摘要:> "Zhang, Zhuosheng, et al. "SAFELearning: Secure Aggregation in Federated Learning with Backdoor Detectability." IEEE Transactions on Information For
摘要:> "Li, Haoyang, et al. "3DFed: Adaptive and Extensible Framework for Covert Backdoor Attack in Federated Learning." 2023 IEEE Symposium on Security an
摘要:> "Gu, Tianyu, et al. "Badnets: Evaluating backdooring attacks on deep neural networks." IEEE Access 7 (2019): 47230-47244." 本文提出了外包机器学习时选择值得信赖的提供商的重要
摘要:> "Liu, Kang, Brendan Dolan-Gavitt, and Siddharth Garg. "Fine-pruning: Defending against backdooring attacks on deep neural networks." Research in Att
摘要:> "Wang, Haotao, et al. "Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork." Advances in Neural Informa
摘要:> "Wu, Dongxian, and Yisen Wang. "Adversarial neuron pruning purifies backdoored deep models." Advances in Neural Information Processing Systems 34 (2
摘要:"Jia X, Zhang Y, Wu B, et al. LAS-AT: adversarial training with learnable attack strategy[C]//Proceedings of the IEEE/CVF Conference on Computer Visio
摘要:"Jeon J, Lee K, Oh S, et al. Gradient inversion with generative image prior[J]. Advances in neural information processing systems, 2021, 34: 29898-299
摘要:"Geiping J, Bauermeister H, Dröge H, et al. Inverting gradients-how easy is it to break privacy in federated learning?[J]. Advances in Neural Informat
摘要:"Zhao B, Mopuri K R, Bilen H. idlg: Improved deep leakage from gradients[J]. arXiv preprint arXiv:2001.02610, 2020." 本文发现共享梯度肯定会泄露数据真实标签。我们提出了一种简单但可靠的
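The iDLG preview above breaks off mid-sentence, but the observation it points to is easy to illustrate: with softmax plus cross-entropy, the gradient of the last fully-connected layer's weights for class i scales with (softmax_i - onehot_i), so for a single sample only the ground-truth class's row carries a negative sign, and the label can be read straight off the shared gradient. A minimal sketch under assumed toy settings (the model, input size, and seed are illustrative, not the paper's code):

```python
# Minimal sketch (assumed toy setup): infer the ground-truth label of a single
# sample from the gradient a federated-learning client would share.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
x = torch.rand(1, 1, 28, 28)   # one private sample
y = torch.tensor([3])          # its true label, unknown to the attacker

loss = nn.CrossEntropyLoss()(model(x), y)
shared_grads = torch.autograd.grad(loss, model.parameters())  # what gets shared

# Gradient of the last Linear layer's weight: row i scales with softmax_i - onehot_i,
# so only the true class's row sums negative (post-ReLU features are non-negative).
last_weight_grad = shared_grads[-2]
inferred_label = torch.argmin(last_weight_grad.sum(dim=1)).item()
print(inferred_label, y.item())  # the two should match
```

Note this sign trick assumes a batch of one; with larger batches it no longer pins down every label, which is part of why later gradient-inversion work treats labels differently.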
摘要:"Zhu, Ligeng, Zhijian Liu, and Song Han. "Deep leakage from gradients." Advances in neural information processing systems 32 (2019)." 本文从公开共享的梯度中获得私有训
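The DLG entry above is also cut short; the recipe it refers to is to initialize dummy data and labels and optimize them so that their gradient matches the gradient the victim shared. A minimal sketch under assumed toy settings (the small model, LBFGS, and iteration count are illustrative choices, not the authors' released code):

```python
# Minimal DLG-style sketch under assumed toy settings: recover a private sample
# by optimizing dummy data whose gradient matches the gradient the victim shared.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(32, 16), nn.Sigmoid(), nn.Linear(16, 4))

# Victim side: compute and "share" the gradient of one private (x, y) pair.
x_true, y_true = torch.rand(1, 32), torch.tensor([2])
true_grads = [g.detach() for g in torch.autograd.grad(
    F.cross_entropy(model(x_true), y_true), model.parameters())]

# Attacker side: random dummy data and a soft dummy label, optimized jointly.
dummy_x = torch.rand(1, 32, requires_grad=True)
dummy_y = torch.rand(1, 4, requires_grad=True)
opt = torch.optim.LBFGS([dummy_x, dummy_y])

def closure():
    opt.zero_grad()
    loss = torch.sum(-F.softmax(dummy_y, dim=-1) * F.log_softmax(model(dummy_x), dim=-1))
    dummy_grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    diff.backward()
    return diff

for _ in range(20):
    opt.step(closure)

print("label guess:", dummy_y.argmax().item(),
      "mean abs error:", (dummy_x - x_true).abs().mean().item())
```

Later gradient-inversion work, such as the Geiping et al. entry above, replaces the squared-error gradient match with a cosine-similarity objective plus image priors to scale this idea to larger models.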
摘要:"Liu, Yiyong, et al. "Membership inference attacks by exploiting loss trajectory." Proceedings of the 2022 ACM SIGSAC Conference on Computer and Commu
摘要:"Rezaei, Shahbaz, and Xin Liu. "On the difficulty of membership inference attacks." Proceedings of the IEEE/CVF Conference on Computer Vision and Patt
摘要:"Nasr M, Songi S, Thakurta A, et al. Adversary instantiation: Lower bounds for differentially private machine learning[C]//2021 IEEE Symposium on secu
摘要:"Jayaraman B, Evans D. Evaluating differentially private machine learning in practice[C]//USENIX Security Symposium. 2019." 本文对机器学习不同隐私机制进行评估。评估重点放在梯度
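The preview above stops right at "gradient"; as a rough, hedged illustration of the gradient-perturbation family of mechanisms that such evaluations commonly cover (DP-SGD style: clip each per-example gradient, then add Gaussian noise before the update), the sketch below uses an assumed toy model, clipping bound, and noise multiplier rather than the paper's experimental setup:

```python
# Assumed minimal sketch of one noisy-SGD step with per-example clipping
# (the gradient-perturbation mechanism); not the paper's evaluation code.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Linear(20, 2)
clip_norm, noise_multiplier, lr = 1.0, 1.1, 0.1
x_batch, y_batch = torch.randn(8, 20), torch.randint(0, 2, (8,))

# Clip each per-example gradient to clip_norm and accumulate.
summed = [torch.zeros_like(p) for p in model.parameters()]
for x, y in zip(x_batch, y_batch):
    grads = torch.autograd.grad(
        F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0)), model.parameters())
    total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    scale = (clip_norm / (total_norm + 1e-6)).clamp(max=1.0)
    for s, g in zip(summed, grads):
        s += g * scale

# Add Gaussian noise calibrated to the clipping bound, average, and update.
with torch.no_grad():
    for p, s in zip(model.parameters(), summed):
        noisy_grad = (s + noise_multiplier * clip_norm * torch.randn_like(s)) / len(x_batch)
        p -= lr * noisy_grad
```

A real evaluation would also track the cumulative privacy budget (epsilon, delta) over training steps with a privacy accountant; that bookkeeping is omitted here.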
摘要:"Salem A, Wen R, Backes M, et al. Dynamic backdoor attacks against machine learning models[C]//2022 IEEE 7th European Symposium on Security and Privac
摘要:"Song C, Shmatikov V. Auditing data provenance in text-generation models[C]//Proceedings of the 25th ACM SIGKDD International Conference on Knowledge
摘要:"Carlini, Nicholas, et al. "Membership inference attacks from first principles." 2022 IEEE Symposium on Security and Privacy (SP). IEEE, 2022." 本文认为成员
摘要:"Jia, Jinyuan, and Neil Zhenqiang Gong. "AttriGuard: A practical defense against attribute inference attacks via adversarial machine learning." 27th U
摘要:"arXiv:2111.09679, 2021." 文章关注机器学习模型的隐私泄露问题,成员推理攻击:给出一条样本,可以推断该样本是否在模型的训练数据集中——即便对模型的参数、结构知之甚少,该攻击仍然有效。本质还是使用影子模型的方法训练攻击模型。但是针对攻击者不知道目标模型的训练集,文章提出了影子学
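To make the shadow-model recipe in this last entry concrete, here is a hedged sketch of the standard pipeline: train several shadow models on data the attacker controls, collect "member" vs. "non-member" confidence vectors from them, and fit an attack classifier on those records. The dataset, model families, and split sizes below are illustrative assumptions, not the article's experiments:

```python
# Assumed minimal sketch of shadow-model membership inference.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=4000, n_features=20, n_classes=2, random_state=0)

attack_features, attack_labels = [], []
for s in range(5):                               # 5 shadow models
    idx = rng.permutation(len(X))[:1000]
    train_idx, out_idx = idx[:500], idx[500:]    # members vs. non-members of this shadow
    shadow = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    for rows, member in ((train_idx, 1), (out_idx, 0)):
        attack_features.append(shadow.predict_proba(X[rows]))  # confidence vectors
        attack_labels.append(np.full(len(rows), member))

attack_model = RandomForestClassifier(random_state=0).fit(
    np.vstack(attack_features), np.concatenate(attack_labels))

# Attack phase: query a (hypothetical) target model and classify its confidences.
target = LogisticRegression(max_iter=1000).fit(X[:500], y[:500])
print(attack_model.predict(target.predict_proba(X[:5])),    # queried on members
      attack_model.predict(target.predict_proba(X[-5:])))   # queried on non-members
```

In the original shadow-training formulation the attack model is usually trained per class and on a deliberately overfit target; the single classifier here only keeps the sketch short.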