Freezing Parameters in PyTorch

When fine-tuning, the parameters of the backbone network usually need to be frozen. This takes two steps.

First, locate the target parameters and set their requires_grad attribute to False.

# Option 1: freeze every parameter in the backbone submodule.
for param in net.backbone.parameters():
    param.requires_grad = False

# Option 2: freeze parameters whose names contain a keyword.
key_word = "backbone"
for pname, param in net.named_parameters():
    if key_word in pname:
        param.requires_grad = False

Either parameters() or named_parameters() works here; both iterate over every learnable tensor, so weights and biases are covered together.
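
As a quick sanity check (assuming the net variable from the snippet above), you can list which parameters remain trainable after freezing:

# Count trainable vs. frozen parameters after freezing the backbone.
trainable = [name for name, p in net.named_parameters() if p.requires_grad]
frozen = [name for name, p in net.named_parameters() if not p.requires_grad]
print(f"trainable: {len(trainable)}, frozen: {len(frozen)}")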


Second, select only the parameters that still require gradients and pass them to the optimizer.

optimizer = torch.optim.SGD(filter(lambda p: p.requires_grad, net.parameters()),
                            lr=learning_rate, momentum=mom)
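
Putting both steps together, here is a minimal, self-contained sketch; the toy Net class and the learning_rate and mom values are placeholders for your own network and hyperparameters:

import torch
import torch.nn as nn

# Toy model with a "backbone" submodule standing in for a pretrained feature extractor.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
        self.head = nn.Linear(32, 10)

    def forward(self, x):
        return self.head(self.backbone(x))

net = Net()

# Step 1: freeze the backbone.
for param in net.backbone.parameters():
    param.requires_grad = False

# Step 2: hand only the trainable parameters to the optimizer.
learning_rate, mom = 0.01, 0.9   # illustrative values
optimizer = torch.optim.SGD(filter(lambda p: p.requires_grad, net.parameters()),
                            lr=learning_rate, momentum=mom)

# One dummy update: only net.head changes.
x, target = torch.randn(4, 128), torch.randint(0, 10, (4,))
loss = nn.functional.cross_entropy(net(x), target)
loss.backward()
optimizer.step()

Since the frozen parameters receive no gradients and are not handed to the optimizer, they stay at their pretrained values. Note that layers with running statistics, such as BatchNorm in a real backbone, still update those statistics in training mode unless the backbone is switched to eval().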


posted @ 2020-03-20 22:15  leizhao