摘要: "Nasr M, Songi S, Thakurta A, et al. Adversary instantiation: Lower bounds for differentially private machine learning[C]//2021 IEEE Symposium on secu 阅读全文
posted @ 2023-02-10 23:59 方班隐私保护小组
Abstract: Mugunthan, V., A. Peraire-Bueno, and L. Kagal. "PrivacyFL: A simulator for privacy-preserving and secure federated learning." 10.1145/3340531.3412…
posted @ 2023-02-10 23:36 方班隐私保护小组
Abstract: Thapa, C., M. Chamikara, and S. Camtepe. "SplitFed: When Federated Learning Meets Split Learning." (2020). This paper proposes a hybrid of federated learning (FL) and split learning (SL), called SplitFed learning (SFL), which can simultaneously address …
posted @ 2023-02-10 23:35 方班隐私保护小组
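As a concrete illustration of the split-learning step that SFL parallelizes across clients, here is a minimal, hypothetical PyTorch sketch of one client-server training round. The layer sizes, cut point, and optimizers are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch of one split-learning training step (the building block SFL
# parallelizes across clients). Model, shapes, and the cut point are
# illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

client_net = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())  # client-side part
server_net = nn.Sequential(nn.Linear(256, 10))                            # server-side part
opt_c = torch.optim.SGD(client_net.parameters(), lr=0.1)
opt_s = torch.optim.SGD(server_net.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

def split_train_step(x, y):
    # Client: forward up to the cut layer, send the "smashed data" to the server.
    smashed = client_net(x)
    sent = smashed.detach().requires_grad_(True)   # what actually crosses the network

    # Server: finish the forward pass, compute the loss, backprop to the cut layer.
    out = server_net(sent)
    loss = loss_fn(out, y)
    opt_s.zero_grad()
    loss.backward()
    opt_s.step()

    # Client: receive the gradient of the smashed data and finish backprop locally.
    opt_c.zero_grad()
    smashed.backward(sent.grad)
    opt_c.step()
    return loss.item()

# Example: one step on a random MNIST-sized batch.
x, y = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))
print(split_train_step(x, y))
```

Detaching and re-attaching the cut-layer activations is what keeps the raw data and the client-side weights on the client; SFL combines this split with FL-style parallel training and aggregation of the client parts.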
摘要: "Jayaraman B, Evans D. Evaluating differentially private machine learning in practice[C]//USENIX Security Symposium. 2019." 本文对机器学习不同隐私机制进行评估。评估重点放在梯度 阅读全文
posted @ 2023-02-10 23:09 方班隐私保护小组
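For context on the gradient-based mechanism such evaluations typically center on, here is a minimal NumPy sketch of gradient perturbation (per-example clipping plus Gaussian noise, the DP-SGD recipe). The clip norm and noise multiplier are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of gradient perturbation: clip each per-example gradient and
# add Gaussian noise before averaging. Clip norm and noise multiplier are
# illustrative, not values from the paper.
import numpy as np

def perturbed_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1,
                       rng=np.random.default_rng(0)):
    """per_example_grads: array of shape (batch_size, num_params)."""
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    # Clip each example's gradient to L2 norm <= clip_norm.
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    summed = clipped.sum(axis=0)
    # Gaussian noise scaled to the clipping bound gives per-step (eps, delta)-DP.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / per_example_grads.shape[0]

grads = np.random.default_rng(1).normal(size=(32, 100))  # fake per-example gradients
print(perturbed_gradient(grads)[:5])
```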
Abstract: Rothchild, Daniel, et al. "FetchSGD: Communication-efficient federated learning with sketching." International Conference on Machine Learning. PMLR, 2020. …
posted @ 2023-02-10 12:52 方班隐私保护小组
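The entry above only names the technique, so here is a minimal NumPy sketch of the Count Sketch compression that FetchSGD builds on. Sketch dimensions and the toy gradients are assumptions; the point is the linearity that lets a server sum client sketches and then estimate the heavy gradient coordinates, not the paper's actual implementation.

```python
# Minimal Count Sketch of a gradient vector: clients send a small sketch instead
# of the full gradient, sketches add linearly at the server, and heavy
# coordinates can be estimated back. Dimensions and toy data are illustrative.
import numpy as np

class CountSketch:
    def __init__(self, dim, rows=5, cols=200, seed=0):
        rng = np.random.default_rng(seed)
        self.buckets = rng.integers(0, cols, size=(rows, dim))  # coordinate -> bucket hash
        self.signs = rng.choice([-1.0, 1.0], size=(rows, dim))  # +/-1 sign hash
        self.table = np.zeros((rows, cols))

    def accumulate(self, vec):
        # Sketching is linear, so sketches from different clients can be summed.
        for r in range(self.table.shape[0]):
            np.add.at(self.table[r], self.buckets[r], self.signs[r] * vec)

    def estimate(self):
        # Median across rows de-biases bucket collisions; heavy coordinates survive.
        est = np.stack([self.signs[r] * self.table[r, self.buckets[r]]
                        for r in range(self.table.shape[0])])
        return np.median(est, axis=0)

dim, rng = 10_000, np.random.default_rng(1)
g1, g2 = rng.normal(0, 0.01, dim), rng.normal(0, 0.01, dim)
g1[3], g2[3], g1[777] = 4.0, 3.0, -5.0              # a few heavy coordinates

server = CountSketch(dim)
server.accumulate(g1)                               # client 1's sketched gradient
server.accumulate(g2)                               # client 2's sketched gradient
est = server.estimate()
topk = np.argsort(np.abs(est))[-2:]                 # server keeps only the top-k coordinates
print(sorted(topk.tolist()), est[np.sort(topk)])    # ~[3, 777], values ~[7.0, -5.0]
```

In FetchSGD itself the server additionally keeps momentum and error accumulation in sketch space and unsketches only the top-k coordinates each round.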