nn.Linear nn.Conv2d nn.BatchNorm2d
Notes on the parameter shapes of conv, BN, and Linear layers.

Conv2d: https://blog.csdn.net/Strive_For_Future/article/details/83240232
1) conv2d.weight shape = [out_channels, in_channels, kernel_size, kernel_size]
2) conv2d.bias shape = [out_channels]
(A small shape-checking sketch is given at the end of this section.)

BatchNorm2d: https://www.cnblogs.com/tingtin/p/12523701.html
Size: the output has the same shape as the input.
m = nn.BatchNorm2d(2, affine=True)  # 2 is the number of channels (num_features); affine=True means the weight w and bias b are learnable
m.weight: tensor([1., 1.], requires_grad=True)
m.bias:   tensor([0., 0.], requires_grad=True)  # w and b are both vectors whose length equals the number of channels
(See the BatchNorm2d sketch below.)

Linear: https://www.cnblogs.com/tingtin/p/12425849.html
nn.Linear() builds a fully connected layer. Input and output are typically 2-D tensors of shape [batch_size, in_features] and [batch_size, out_features]; the weight is stored as [out_features, in_features], as its __init__ in the PyTorch source shows:

def __init__(self, in_features, out_features, bias=True):
    super(Linear, self).__init__()
    self.in_features = in_features
    self.out_features = out_features
    # weight is [out_features, in_features], bias is [out_features]
    self.weight = Parameter(torch.Tensor(out_features, in_features))
    if bias:
        self.bias = Parameter(torch.Tensor(out_features))
    else:
        self.register_parameter('bias', None)
    self.reset_parameters()
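A minimal sketch that checks the Conv2d parameter shapes listed above; the concrete sizes (3 input channels, 16 output channels, a 3x3 kernel, a 32x32 input) are arbitrary choices for illustration:

import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)
print(conv.weight.shape)  # torch.Size([16, 3, 3, 3]) -> [out_channels, in_channels, kernel_size, kernel_size]
print(conv.bias.shape)    # torch.Size([16])          -> [out_channels]

x = torch.randn(1, 3, 32, 32)  # [batch, in_channels, H, W]
print(conv(x).shape)           # torch.Size([1, 16, 30, 30]) (no padding, so H and W shrink by 2)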
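A sketch of the BatchNorm2d example from the notes above; a freshly constructed layer has its weight initialized to ones and its bias to zeros, and the output keeps the input shape:

import torch
import torch.nn as nn

m = nn.BatchNorm2d(2, affine=True)  # 2 channels; affine=True makes weight and bias learnable Parameters
print(m.weight)  # Parameter containing: tensor([1., 1.], requires_grad=True)
print(m.bias)    # Parameter containing: tensor([0., 0.], requires_grad=True)

x = torch.randn(4, 2, 8, 8)  # [batch, channels, H, W]
print(m(x).shape)            # torch.Size([4, 2, 8, 8]), same shape as the input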
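And a sketch for nn.Linear, confirming that the weight is stored as [out_features, in_features] and that a [batch_size, in_features] input maps to [batch_size, out_features]; the sizes 20 -> 30 and the batch size 128 are made up for the example:

import torch
import torch.nn as nn

fc = nn.Linear(in_features=20, out_features=30)
print(fc.weight.shape)  # torch.Size([30, 20]) -> [out_features, in_features]
print(fc.bias.shape)    # torch.Size([30])     -> [out_features]

x = torch.randn(128, 20)  # [batch_size, in_features]
print(fc(x).shape)        # torch.Size([128, 30])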