(5) PySyft + Opacus: Federated Learning with Differential Privacy

Note: if you would like more demos of what you can do with PySyft, you can follow @theoryffel and @openminedorg on Twitter. Thanks for all the feedback!

A lot of current work in privacy-preserving ML explores federated learning and differential privacy, yet it turns out to be surprisingly hard to use the two together, and open-source examples showing how to do so are scarce.

We present here a very simple example combining federated learning (FL) with differential privacy (DP), which can serve as a useful baseline for experimenting with these great techniques. More specifically, we show how Opacus, the DP library released by PyTorch, can be used in a PySyft FL workflow with very little overhead.

Disclaimer: there are many ways to improve on this, and if you feel like getting your hands dirty, I strongly encourage you to do so!

Learn more: if you want to dig deeper into DP, I have listed at the end of this post a series of OpenMined blog articles that explain these concepts in more depth =)

Setup

We demonstrate differentially private, federated training of a simple convolutional model on MNIST. The model is the same one used in a recent blog post by Kritika.

We consider a dataset split between two workers: each worker trains its model on its own partition in a differentially private way for 1 epoch, after which the two models are aggregated by averaging; this round is repeated over several global epochs (the run below uses 5). Of course, there are countless ways to improve on this, including splitting the dataset heterogeneously, adding secure aggregation, and much more.

Imports

We start with the classic PyTorch imports, plus the PrivacyEngine from Opacus that we will be using.

from tqdm import tqdm

import torch as th
from torchvision import datasets, transforms
from opacus import PrivacyEngine 

Next come the PySyft imports, along with our two workers, alice and bob!

import syft as sy

hook = sy.TorchHook(th)
alice = sy.VirtualWorker(hook, id="alice")
bob = sy.VirtualWorker(hook, id="bob")
workers = [alice, bob]

# this is done so that the local worker (you, in your notebook!) keeps a registry
# of objects like every other worker; this is disabled by default but needed here
sy.local_worker.is_client_worker = False

Federated setup

We now simulate the workers holding partitions of the dataset by actually sending the data to them, which is done with the .federate method. In a real-world setting, the workers would already hold their own data, and we would simply request pointers to it.

train_datasets = datasets.MNIST('../mnist',
                 train=True, download=True,
                 transform=transforms.Compose([transforms.ToTensor(),
                 transforms.Normalize((0.1307,), (0.3081,)),])
                 ).federate(*workers)
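As a quick sanity check (not part of the original demo), you can peek at how many samples each worker ended up with, using the same indexing by worker id that the setup loop below relies on:

for worker in workers:
    # each partition is addressed by its worker's id, e.g. train_datasets["alice"]
    print(worker.id, len(train_datasets[worker.id]))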

Next, we build one copy of the model per worker and send it to them. We also set up, for each worker, a dedicated optimizer and an Opacus privacy engine attached to that optimizer, which makes each worker's training differentially private and keeps track of the privacy budget spent. Note that we could set different privacy requirements for each partition of the dataset (a sketch of this follows the code below)!

We also create a local_model that will be used for model aggregation, as you will see shortly.

def make_model():
    return th.nn.Sequential(
        th.nn.Conv2d(1, 16, 8, 2, padding=3),
        th.nn.ReLU(),
        th.nn.MaxPool2d(2, 1),
        th.nn.Conv2d(16, 32, 4, 2),
        th.nn.ReLU(),
        th.nn.MaxPool2d(2, 1),
        th.nn.Flatten(), 
        th.nn.Linear(32 * 4 * 4, 32),
        th.nn.ReLU(),
        th.nn.Linear(32, 10)
    )

# the local version that we will use to do the aggregation
local_model = make_model()

models, dataloaders, optimizers, privacy_engines = [], [], [], []
for worker in workers:
    model = make_model()
    optimizer = th.optim.SGD(model.parameters(), lr=0.1)
    model.send(worker)
    dataset = train_datasets[worker.id]
    dataloader = th.utils.data.DataLoader(dataset, batch_size=128, shuffle=True, drop_last=True)
    privacy_engine = PrivacyEngine(model,
                                   batch_size=128, 
                                   sample_size=len(dataset), 
                                   alphas=range(2,32), 
                                   noise_multiplier=1.2,
                                   max_grad_norm=1.0)
    privacy_engine.attach(optimizer)
    
    models.append(model)
    dataloaders.append(dataloader)
    optimizers.append(optimizer)
    privacy_engines.append(privacy_engine)
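For instance, here is a minimal sketch of how the PrivacyEngine construction inside the loop above could use per-worker values instead of uniform ones; the dp_settings dict and its numbers are hypothetical, purely for illustration:

# Hypothetical per-worker DP requirements: say alice wants stronger privacy than bob.
dp_settings = {
    "alice": {"noise_multiplier": 1.5, "max_grad_norm": 0.8},
    "bob":   {"noise_multiplier": 1.0, "max_grad_norm": 1.2},
}

# Inside the setup loop above, build each engine from the worker's own settings:
settings = dp_settings[worker.id]
privacy_engine = PrivacyEngine(model,
                               batch_size=128,
                               sample_size=len(dataset),
                               alphas=range(2, 32),
                               noise_multiplier=settings["noise_multiplier"],
                               max_grad_norm=settings["max_grad_norm"])
privacy_engine.attach(optimizer)

Workers with a higher noise_multiplier will see their ε grow more slowly over training, at the cost of noisier gradients.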

Finally, we need functions to aggregate the remote models and to send out the new updates; we split this into two functions. send_new_models sends a copy of local_model to all parties, while federated_aggregation averages all the remote models and stores the new version in local_model. Note that this could easily be improved by weighting the average by each dataset's size, but the split here is homogeneous, so it is not necessary (a weighted variant is sketched after the code below).

def send_new_models(local_model, models):
    with th.no_grad():
        for remote_model in models:
            for new_param, remote_param in zip(local_model.parameters(), remote_model.parameters()):
                worker = remote_param.location
                remote_value = new_param.send(worker)
                remote_param.set_(remote_value)

            
def federated_aggregation(local_model, models):
    with th.no_grad():
        for local_param, *remote_params in zip(*([local_model.parameters()] + [model.parameters() for model in models])):
            param_stack = th.zeros(*remote_params[0].shape)
            for remote_param in remote_params:
                param_stack += remote_param.copy().get()
            param_stack /= len(remote_params)
            local_param.set_(param_stack)
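As an aside, here is a minimal sketch of what the size-weighted variant mentioned above could look like; weighted_federated_aggregation and dataset_sizes are hypothetical names introduced for illustration, not part of the original demo:

def weighted_federated_aggregation(local_model, models, dataset_sizes):
    # dataset_sizes[i] is the number of samples held by the i-th worker
    total = sum(dataset_sizes)
    with th.no_grad():
        for local_param, *remote_params in zip(*([local_model.parameters()] + [model.parameters() for model in models])):
            param_stack = th.zeros(*remote_params[0].shape)
            for remote_param, size in zip(remote_params, dataset_sizes):
                # weight each worker's contribution by its share of the total data
                param_stack += (size / total) * remote_param.copy().get()
            local_param.set_(param_stack)

It would be called with something like dataset_sizes=[len(train_datasets[w.id]) for w in workers]; with the homogeneous split used here it reduces to the plain average above.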

Training

Now comes the training! It happens in 3 steps: first, we send the latest version of the model to each worker. Second, we remotely train the models for one epoch and read off the privacy spent. Note that this loop over the workers could be run in parallel rather than sequentially. Last, we aggregate the models together.

def train(epoch, delta):
        
    # 1. Send new version of the model
    send_new_models(local_model, models)

    # 2. Train remotely the models
    for i, worker in enumerate(workers):
        dataloader = dataloaders[i]
        model = models[i]
        optimizer = optimizers[i]
        
        model.train()
        criterion = th.nn.CrossEntropyLoss()
        losses = []   
        for i, (data, target) in enumerate(tqdm(dataloader)):
            optimizer.zero_grad()
            output = model(data)
            loss = criterion(output, target)
            loss.backward()
            optimizer.step()
            losses.append(loss.get().item()) 

        sy.local_worker.clear_objects()
        epsilon, best_alpha = optimizer.privacy_engine.get_privacy_spent(delta) 
        print(
            f"[{worker.id}]\t"
            f"Train Epoch: {epoch} \t"
            f"Loss: {sum(losses)/len(losses):.4f} "
            f"(ε = {epsilon:.2f}, δ = {delta}) for α = {best_alpha}")

    # 3. Federated aggregation of the updated models
    federated_aggregation(local_model, models)

for epoch in range(5):
    train(epoch, delta=1e-5)

100%|██████████| 235/235 [00:49<00:00,  4.76it/s]
[alice]	Train Epoch: 0 	Loss: 0.6405 (ε = 0.86, δ = 1e-05) for α = 15
100%|██████████| 235/235 [00:48<00:00,  4.86it/s]
[bob]	Train Epoch: 0 	Loss: 0.5508 (ε = 0.86, δ = 1e-05) for α = 15
100%|██████████| 235/235 [00:47<00:00,  4.93it/s]
[alice]	Train Epoch: 1 	Loss: 0.1169 (ε = 0.90, δ = 1e-05) for α = 15
100%|██████████| 235/235 [00:47<00:00,  4.91it/s]
[bob]	Train Epoch: 1 	Loss: 0.1080 (ε = 0.90, δ = 1e-05) for α = 15
100%|██████████| 235/235 [00:47<00:00,  4.98it/s]
[alice]	Train Epoch: 2 	Loss: 0.0792 (ε = 0.94, δ = 1e-05) for α = 15
100%|██████████| 235/235 [00:46<00:00,  5.09it/s]
[bob]	Train Epoch: 2 	Loss: 0.0776 (ε = 0.94, δ = 1e-05) for α = 15
100%|██████████| 235/235 [00:59<00:00,  3.96it/s]
[alice]	Train Epoch: 3 	Loss: 0.0619 (ε = 0.97, δ = 1e-05) for α = 15
100%|██████████| 235/235 [00:49<00:00,  4.70it/s]
[bob]	Train Epoch: 3 	Loss: 0.0632 (ε = 0.97, δ = 1e-05) for α = 15
100%|██████████| 235/235 [00:48<00:00,  4.89it/s]
[alice]	Train Epoch: 4 	Loss: 0.0521 (ε = 1.01, δ = 1e-05) for α = 15
100%|██████████| 235/235 [00:46<00:00,  5.07it/s]
[bob]	Train Epoch: 4 	Loss: 0.0510 (ε = 1.01, δ = 1e-05) for α = 15

You can observe that the loss does indeed decrease!

That's all you need to know to get started; now feel free to experiment on your own and improve this demo!
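If you want a quick sanity check beyond the training loss, here is a minimal sketch (not part of the original demo) of how the aggregated local_model could be evaluated on the standard MNIST test split; since local_model never leaves your machine, this is plain PyTorch:

test_loader = th.utils.data.DataLoader(
    datasets.MNIST('../mnist', train=False, download=True,
                   transform=transforms.Compose([transforms.ToTensor(),
                                                 transforms.Normalize((0.1307,), (0.3081,))])),
    batch_size=128)

local_model.eval()
correct = 0
with th.no_grad():
    for data, target in test_loader:
        pred = local_model(data).argmax(dim=1)   # predicted class for each image
        correct += (pred == target).sum().item()
print(f"Test accuracy: {correct / len(test_loader.dataset):.4f}")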

Differential privacy articles on the OpenMined blog

Theory and examples

Code

Star PySyft on GitHub

You can also help our community by starring the repository! It helps raise awareness of the cool tools we are building.

Join our Slack!

The best way to keep up to date on the latest advancements is to join our community!

Putting it all together

Here is the full code 🙂

from tqdm import tqdm

import torch as th
from torchvision import datasets, transforms
from opacus import PrivacyEngine 
import syft as sy

hook = sy.TorchHook(th)
alice = sy.VirtualWorker(hook, id="alice")
bob = sy.VirtualWorker(hook, id="bob")
workers = [alice, bob]

sy.local_worker.is_client_worker = False

train_datasets = datasets.MNIST('../mnist',
                 train=True, download=True,
                 transform=transforms.Compose([transforms.ToTensor(),
                 transforms.Normalize((0.1307,), (0.3081,)),])
                 ).federate(*workers)

def make_model():
    return th.nn.Sequential(
        th.nn.Conv2d(1, 16, 8, 2, padding=3),
        th.nn.ReLU(),
        th.nn.MaxPool2d(2, 1),
        th.nn.Conv2d(16, 32, 4, 2),
        th.nn.ReLU(),
        th.nn.MaxPool2d(2, 1),
        th.nn.Flatten(), 
        th.nn.Linear(32 * 4 * 4, 32),
        th.nn.ReLU(),
        th.nn.Linear(32, 10)
    )

# the local version that we will use to do the aggregation
local_model = make_model()

models, dataloaders, optimizers, privacy_engines = [], [], [], []
for worker in workers:
    model = make_model()
    optimizer = th.optim.SGD(model.parameters(), lr=0.1)
    model.send(worker)
    dataset = train_datasets[worker.id]
    dataloader = th.utils.data.DataLoader(dataset, batch_size=128, shuffle=True, drop_last=True)
    privacy_engine = PrivacyEngine(model,
                                   batch_size=128, 
                                   sample_size=len(dataset), 
                                   alphas=range(2,32), 
                                   noise_multiplier=1.2,
                                   max_grad_norm=1.0)
    privacy_engine.attach(optimizer)
    
    models.append(model)
    dataloaders.append(dataloader)
    optimizers.append(optimizer)
    privacy_engines.append(privacy_engine)
    
def send_new_models(local_model, models):
    with th.no_grad():
        for remote_model in models:
            for new_param, remote_param in zip(local_model.parameters(), remote_model.parameters()):
                worker = remote_param.location
                remote_value = new_param.send(worker)
                remote_param.set_(remote_value)

            
def federated_aggregation(local_model, models):
    with th.no_grad():
        for local_param, *remote_params in zip(*([local_model.parameters()] + [model.parameters() for model in models])):
            param_stack = th.zeros(*remote_params[0].shape)
            for remote_param in remote_params:
                param_stack += remote_param.copy().get()
            param_stack /= len(remote_params)
            local_param.set_(param_stack)

def train(epoch, delta):
        
    # 1. Send new version of the model
    send_new_models(local_model, models)

    # 2. Train remotely the models
    for i, worker in enumerate(workers):
        dataloader = dataloaders[i]
        model = models[i]
        optimizer = optimizers[i]
        
        model.train()
        criterion = th.nn.CrossEntropyLoss()
        losses = []   
        for i, (data, target) in enumerate(tqdm(dataloader)):
            optimizer.zero_grad()
            output = model(data)
            loss = criterion(output, target)
            loss.backward()
            optimizer.step()
            losses.append(loss.get().item()) 

        sy.local_worker.clear_objects()
        epsilon, best_alpha = optimizer.privacy_engine.get_privacy_spent(delta) 
        print(
            f"[{worker.id}]\t"
            f"Train Epoch: {epoch} \t"
            f"Loss: {sum(losses)/len(losses):.4f} "
            f"(ε = {epsilon:.2f}, δ = {delta}) for α = {best_alpha}")

    # 3. Federated aggregation of the updated models
    federated_aggregation(local_model, models)

for epoch in range(5):
    train(epoch, delta=1e-5)

Many thanks to Kritika Prakash for proofreading this blog post =)

Original post: https://blog.openmined.org/pysyft-opacus-federated-learning-with-differential-privacy/