Freeze a subset of model parameters during training

1. Set requires_grad = False

Freeze all parameters of the current model:

# gradients will no longer be computed or stored for these parameters
for p in self.parameters():
    p.requires_grad = False
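
A quick way to confirm the freeze took effect (a minimal sketch, using a toy nn.Linear as a stand-in for self above):

import torch
import torch.nn as nn

# hypothetical stand-in for `self`: a single linear layer
model = nn.Linear(4, 2)
for p in model.parameters():
    p.requires_grad = False

# the input requires grad so backward() has something to differentiate
x = torch.randn(1, 4, requires_grad=True)
model(x).sum().backward()
print(model.weight.grad)   # None: frozen parameters receive no gradient
print(x.grad is not None)  # True: gradients still flow through the layer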


Or freeze only specific layers, filtering by module name:

for n, m in self.named_modules():
    if 'stc' not in n:
        # freeze every module whose name does not contain 'stc'
        # (the root module, whose name is '', is also caught here first)
        for p in m.parameters():
            p.requires_grad = False
    else:
        # keep the 'stc' modules trainable
        for p in m.parameters():
            p.requires_grad = True
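
To check that only the intended layers stay trainable, here is a minimal sketch with a hypothetical model whose trainable branch is named 'stc', mirroring the filter above:

import torch.nn as nn

class Net(nn.Module):
    """Hypothetical model: a frozen backbone plus a trainable 'stc' head."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(8, 8)
        self.stc_head = nn.Linear(8, 2)

    def forward(self, x):
        return self.stc_head(self.backbone(x))

net = Net()
for n, m in net.named_modules():
    # the root module (name '') is visited first and freezes everything;
    # the 'stc' modules are re-enabled when they are visited afterwards
    if 'stc' not in n:
        for p in m.parameters():
            p.requires_grad = False
    else:
        for p in m.parameters():
            p.requires_grad = True

print([n for n, p in net.named_parameters() if p.requires_grad])
# ['stc_head.weight', 'stc_head.bias']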


2. Filter out the frozen parameters and pass only the trainable ones to the optimizer

if args.freeze_backbone_update:
    # hand only the still-trainable (requires_grad=True) parameters to SGD
    optimizer = torch.optim.SGD(filter(lambda para: para.requires_grad, org_model.parameters()),
                                args.lr,
                                momentum=args.momentum,
                                weight_decay=args.weight_decay)
else:
    # otherwise optimize every parameter of the model
    optimizer = torch.optim.SGD(org_model.parameters(),
                                args.lr,
                                momentum=args.momentum,
                                weight_decay=args.weight_decay)
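
Putting both steps together, a minimal sketch that reuses the hypothetical Net above, with made-up hyperparameter values in place of args:

import torch

# only parameters that still require gradients are handed to the optimizer
trainable = [p for p in net.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=0.01, momentum=0.9, weight_decay=1e-4)

total = sum(p.numel() for p in net.parameters())
updated = sum(p.numel() for p in trainable)
print(f'{updated}/{total} parameters will be updated')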

