J2: ResNet50V2 in Practice: Implementation and Analysis
- 🍨 This article is a study-log post from the 🔗 365-Day Deep Learning Training Camp
- 🍦 Reference: 365-Day Deep Learning Training Camp, Week P2: Color Image Recognition
- 🍖 Original author: K同学啊 | tutoring and custom projects available
In the paper Identity Mappings in Deep Residual Networks, Kaiming He et al. propose a new residual unit; we refer to the ResNet structure from that paper as ResNetV2.
2. Experiments with Different Shortcut Structures¶
Panels (a)-(f) in the figure show the authors' different designs for the shortcut part of the residual structure; the results of these shortcut variants are summarized in the table below.
Classification error of ResNet-110 on the CIFAR-10 test set, with each type of shortcut connection applied to all residual units. Results with test error above 20% are marked "fail".
Running ResNet-110 with these shortcut variants on CIFAR-10, the authors found that the original design (a) performs best: an identity-mapping shortcut is optimal.
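In the paper's notation, a residual unit computes

$$y_l = h(x_l) + \mathcal{F}(x_l, \mathcal{W}_l), \qquad x_{l+1} = f(y_l),$$

where $h$ is the shortcut mapping and $f$ is the activation applied after the addition. With $h(x_l) = x_l$ (variant (a)) and $f$ also an identity (see the next section), the signal propagates directly from any shallow unit $l$ to any deeper unit $L$:

$$x_L = x_l + \sum_{i=l}^{L-1} \mathcal{F}(x_i, \mathcal{W}_i),$$

which is why the authors argue for keeping the shortcut path clean.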
3. Experiments with Activation Placement¶
The best result comes from (e) full pre-activation, followed by (a) original.
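To make the two orderings concrete, here is a minimal sketch of the branch layouts (the 64-channel width and two-layer branch are arbitrary choices for illustration):

import torch.nn as nn

# (a) original: weight -> BN -> ReLU inside the branch; ReLU applied after the addition
original_branch = nn.Sequential(
    nn.Conv2d(64, 64, 3, padding=1, bias=False),
    nn.BatchNorm2d(64),
    nn.ReLU(True),
    nn.Conv2d(64, 64, 3, padding=1, bias=False),
    nn.BatchNorm2d(64),
)  # out = relu(branch(x) + x)

# (e) full pre-activation: BN -> ReLU -> weight; nothing applied after the addition
preact_branch = nn.Sequential(
    nn.BatchNorm2d(64),
    nn.ReLU(True),
    nn.Conv2d(64, 64, 3, padding=1, bias=False),
    nn.BatchNorm2d(64),
    nn.ReLU(True),
    nn.Conv2d(64, 64, 3, padding=1, bias=False),
)  # out = branch(x) + x, so the shortcut path stays a pure identity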
II. Model Reproduction¶
1. Set Up the GPU¶
In [8]:
import torch
import torch.nn as nn
import torchvision
from torchvision import transforms, datasets
from torchvision.datasets import ImageFolder
from sklearn.model_selection import KFold
from torch.optim.lr_scheduler import StepLR, MultiStepLR, LambdaLR, ExponentialLR, CosineAnnealingLR, ReduceLROnPlateau
import os, PIL, pathlib, random

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device
Out[8]:
device(type='cuda')
2. Traverse the Data¶
In [9]:
data_dir = "D:/code/jupyter/data/resnet/bird_photos"
data_dir = pathlib.Path(data_dir)

image_count = len(list(data_dir.glob('*/*')))
print("Total number of images:", image_count)
Total number of images: 565
3. Import the Data¶
In [10]:
train_transforms = transforms.Compose([
    transforms.Resize([224, 224]),      # resize input images to a uniform size
    # transforms.RandomHorizontalFlip(),  # random horizontal flip (optional augmentation)
    transforms.ToTensor(),              # convert a PIL Image or numpy.ndarray to a tensor scaled to [0,1]
    transforms.Normalize(               # standardize each channel, which makes the model easier to train
        mean=[0.485, 0.456, 0.406],
        std=[0.229, 0.224, 0.225])      # per-channel mean/std computed from a random sample of the ImageNet dataset
])

total_data = datasets.ImageFolder(str(data_dir), transform=train_transforms)
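As a quick sanity check on class balance, ImageFolder exposes the per-sample labels via its `targets` attribute; a small sketch:

from collections import Counter

# number of images per class index (see class_to_idx below for the name mapping)
print(Counter(total_data.targets))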
In [11]:
total_data.class_to_idx
Out[11]:
{'Bananaquit': 0,
'Black Skimmer': 1,
'Black Throated Bushtiti': 2,
'Cockatoo': 3}
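For displaying predictions later it helps to invert this mapping; one way:

# map numeric label -> class name
idx_to_class = {v: k for k, v in total_data.class_to_idx.items()}
print(idx_to_class[0])  # 'Bananaquit'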
4. Split the Dataset¶
In [12]:
train_size = int(0.8 * len(total_data))
test_size = len(total_data) - train_size
train_dataset, test_dataset = torch.utils.data.random_split(total_data, [train_size, test_size])
train_dataset, test_dataset
Out[12]:
(<torch.utils.data.dataset.Subset at 0x204ff63fdf0>,
<torch.utils.data.dataset.Subset at 0x204ff63f5b0>)
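Note that random_split draws a fresh random permutation on every run. If a reproducible split is wanted, a seeded generator can be passed (the seed value 42 here is an arbitrary choice):

train_dataset, test_dataset = torch.utils.data.random_split(
    total_data,
    [train_size, test_size],
    generator=torch.Generator().manual_seed(42)  # fixes the split across runs
)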
In [14]:
batch_size = 32

train_dl = torch.utils.data.DataLoader(train_dataset,
                                       batch_size=batch_size,
                                       shuffle=True,
                                       num_workers=3)
test_dl = torch.utils.data.DataLoader(test_dataset,
                                      batch_size=batch_size,
                                      shuffle=False,  # the test set does not need shuffling
                                      num_workers=3)
In [15]:
for X, y in test_dl:
print("Shape of X [N, C, H, W]: ", X.shape)
print("Shape of y: ", y.shape, y.dtype)
break
Shape of X [N, C, H, W]: torch.Size([32, 3, 224, 224])
Shape of y: torch.Size([32]) torch.int64
5. Display Image Information¶
In [16]:
import matplotlib.pyplot as plt

# create a figure 80 inches wide and 20 inches high
plt.figure(figsize=(80, 20))
# X still holds the last batch fetched from test_dl above
for i, imgs in enumerate(X[:20]):
    # rearrange from CHW tensor layout to HWC for matplotlib
    npimg = imgs.numpy().transpose((1, 2, 0))
    # split the figure into a 2x10 grid and draw subplot i+1
    plt.subplot(2, 10, i+1)
    plt.imshow(npimg, cmap=plt.cm.binary)
    plt.axis('off')
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). (this warning is emitted once per image, 20 times in total)
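These warnings appear because Normalize shifts pixel values outside [0,1], and imshow clips them. A minimal sketch of one way to restore faithful colors, undoing the normalization before plotting (reusing the last `imgs` from the loop above):

import numpy as np

mean = np.array([0.485, 0.456, 0.406])
std  = np.array([0.229, 0.224, 0.225])

npimg = imgs.numpy().transpose((1, 2, 0))   # CHW -> HWC
npimg = np.clip(npimg * std + mean, 0, 1)   # invert Normalize, clamp for safety
plt.imshow(npimg)
plt.axis('off')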
2. The Residual Block¶
In [19]:
class Block2(nn.Module):
    """Pre-activation bottleneck residual block (the ResNetV2-style unit)."""
    def __init__(self, in_channel, filters, kernel_size=3, stride=1, conv_shortcut=False):
        super(Block2, self).__init__()

        # pre-activation: BN + ReLU are applied before the convolutions
        self.preact = nn.Sequential(
            nn.BatchNorm2d(in_channel),
            nn.ReLU(True)
        )

        self.shortcut = conv_shortcut
        if self.shortcut:
            # 1x1 convolution that matches the residual branch's output width (4*filters)
            self.short = nn.Conv2d(in_channel, 4*filters, 1, stride=stride, padding=0, bias=False)
        elif stride > 1:
            # a 1x1 max-pool simply subsamples the identity path when the block strides
            self.short = nn.MaxPool2d(kernel_size=1, stride=stride, padding=0)
        else:
            self.short = nn.Identity()

        # bottleneck: 1x1 reduce -> 3x3 -> 1x1 expand
        self.conv1 = nn.Sequential(
            nn.Conv2d(in_channel, filters, 1, stride=1, bias=False),
            nn.BatchNorm2d(filters),
            nn.ReLU(True)
        )
        self.conv2 = nn.Sequential(
            nn.Conv2d(filters, filters, kernel_size, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(filters),
            nn.ReLU(True)
        )
        self.conv3 = nn.Conv2d(filters, 4*filters, 1, stride=1, bias=False)

    def forward(self, x):
        x1 = self.preact(x)
        # a conv shortcut reads the pre-activated tensor; identity/pool shortcuts read the raw input
        if self.shortcut:
            x2 = self.short(x1)
        else:
            x2 = self.short(x)
        x1 = self.conv1(x1)
        x1 = self.conv2(x1)
        x1 = self.conv3(x1)
        x = x1 + x2
        return x
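A quick shape check of the block with a dummy input (the sizes mirror the first stage of the network below):

blk = Block2(64, 64, conv_shortcut=True)
out = blk(torch.randn(1, 64, 56, 56))
print(out.shape)  # torch.Size([1, 256, 56, 56]) -- channels expand to 4*filters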
3. Stacking Residual Blocks¶
In [20]:
class Stack2(nn.Module):
    """A stage of Block2 units: the first block widens channels, the last one downsamples."""
    def __init__(self, in_channel, filters, blocks, stride=2):
        super(Stack2, self).__init__()
        self.conv = nn.Sequential()
        # the first block uses a conv shortcut to expand channels to 4*filters
        self.conv.add_module(str(0), Block2(in_channel, filters, conv_shortcut=True))
        # middle blocks keep resolution and channels, with identity shortcuts
        for i in range(1, blocks-1):
            self.conv.add_module(str(i), Block2(4*filters, filters))
        # the stage's stride is applied in the last block, matching the Keras ResNetV2 layout
        self.conv.add_module(str(blocks-1), Block2(4*filters, filters, stride=stride))

    def forward(self, x):
        x = self.conv(x)
        return x
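And a matching check for a whole stage: three blocks with the stride-2 downsampling in the last one, as used for conv2 below:

stack = Stack2(64, 64, blocks=3)
out = stack(torch.randn(1, 64, 56, 56))
print(out.shape)  # torch.Size([1, 256, 28, 28]) -- widened to 256 channels, halved resolution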
4. Building ResNet50V2¶
In [21]:
class ResNet50V2(nn.Module):
    def __init__(self,
                 include_top=True,           # whether to include the fully-connected classification head at the top of the network
                 preact=True,                # whether to use pre-activation (the ResNetV2 arrangement)
                 use_bias=True,              # whether the stem convolution uses a bias
                 input_shape=[224, 224, 3],  # expected input size (kept for reference; not used below)
                 classes=1000,               # number of classes to classify images into
                 pooling=None):              # optional global pooling ('avg' or 'max') when include_top is False
        super(ResNet50V2, self).__init__()

        # stem: 7x7/2 convolution followed by 3x3/2 max-pooling
        self.conv1 = nn.Sequential()
        self.conv1.add_module('conv', nn.Conv2d(3, 64, 7, stride=2, padding=3, bias=use_bias, padding_mode='zeros'))
        if not preact:
            self.conv1.add_module('bn', nn.BatchNorm2d(64))
            self.conv1.add_module('relu', nn.ReLU())
        self.conv1.add_module('max_pool', nn.MaxPool2d(kernel_size=3, stride=2, padding=1))

        # four stages of residual blocks (3, 4, 6, 3 blocks); the last stage does not downsample
        self.conv2 = Stack2(64, 64, 3)
        self.conv3 = Stack2(256, 128, 4)
        self.conv4 = Stack2(512, 256, 6)
        self.conv5 = Stack2(1024, 512, 3, stride=1)

        # head: a final BN+ReLU (needed by pre-activation networks), then pooling / classifier
        self.post = nn.Sequential()
        if preact:
            self.post.add_module('bn', nn.BatchNorm2d(2048))
            self.post.add_module('relu', nn.ReLU())
        if include_top:
            self.post.add_module('avg_pool', nn.AdaptiveAvgPool2d((1, 1)))
            self.post.add_module('flatten', nn.Flatten())
            self.post.add_module('fc', nn.Linear(2048, classes))
        else:
            if pooling == 'avg':
                self.post.add_module('avg_pool', nn.AdaptiveAvgPool2d((1, 1)))
            elif pooling == 'max':
                self.post.add_module('max_pool', nn.AdaptiveMaxPool2d((1, 1)))

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.conv3(x)
        x = self.conv4(x)
        x = self.conv5(x)
        x = self.post(x)
        return x
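The next cell instantiates the default 1000-class model to mirror the reference architecture; for actually training on the four-class bird dataset, the head would be sized accordingly, e.g.:

model_4cls = ResNet50V2(classes=4).to(device)  # hypothetical 4-class head for bird_photos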
5. Inspect the Model Structure¶
In [24]:
model = ResNet50V2().to(device)
model
Out[24]:
ResNet50V2(
(conv1): Sequential(
(conv): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3))
(max_pool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
)
(conv2): Stack2(
(conv): Sequential(
(0): Block2(
(preact): Sequential(
(0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU(inplace=True)
)
(short): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(conv1): Sequential(
(0): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv2): Sequential(
(0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
)
(1): Block2(
(preact): Sequential(
(0): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU(inplace=True)
)
(short): Identity()
(conv1): Sequential(
(0): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv2): Sequential(
(0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
)
(2): Block2(
(preact): Sequential(
(0): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU(inplace=True)
)
(short): MaxPool2d(kernel_size=1, stride=2, padding=0, dilation=1, ceil_mode=False)
(conv1): Sequential(
(0): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv2): Sequential(
(0): Conv2d(64, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
)
)
)
(conv3): Stack2(
(conv): Sequential(
(0): Block2(
(preact): Sequential(
(0): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU(inplace=True)
)
(short): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(conv1): Sequential(
(0): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv2): Sequential(
(0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
)
(1): Block2(
(preact): Sequential(
(0): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU(inplace=True)
)
(short): Identity()
(conv1): Sequential(
(0): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv2): Sequential(
(0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
)
(2): Block2(
(preact): Sequential(
(0): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU(inplace=True)
)
(short): Identity()
(conv1): Sequential(
(0): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv2): Sequential(
(0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
)
(3): Block2(
(preact): Sequential(
(0): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU(inplace=True)
)
(short): MaxPool2d(kernel_size=1, stride=2, padding=0, dilation=1, ceil_mode=False)
(conv1): Sequential(
(0): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv2): Sequential(
(0): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
)
)
)
(conv4): Stack2(
(conv): Sequential(
(0): Block2(
(preact): Sequential(
(0): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU(inplace=True)
)
(short): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(conv1): Sequential(
(0): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv2): Sequential(
(0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
)
(1): Block2(
(preact): Sequential(
(0): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU(inplace=True)
)
(short): Identity()
(conv1): Sequential(
(0): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv2): Sequential(
(0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
)
(2): Block2(
(preact): Sequential(
(0): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU(inplace=True)
)
(short): Identity()
(conv1): Sequential(
(0): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv2): Sequential(
(0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
)
(3): Block2(
(preact): Sequential(
(0): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU(inplace=True)
)
(short): Identity()
(conv1): Sequential(
(0): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv2): Sequential(
(0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
)
(4): Block2(
(preact): Sequential(
(0): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU(inplace=True)
)
(short): Identity()
(conv1): Sequential(
(0): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv2): Sequential(
(0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
)
(5): Block2(
(preact): Sequential(
(0): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU(inplace=True)
)
(short): MaxPool2d(kernel_size=1, stride=2, padding=0, dilation=1, ceil_mode=False)
(conv1): Sequential(
(0): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv2): Sequential(
(0): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
)
)
)
(conv5): Stack2(
(conv): Sequential(
(0): Block2(
(preact): Sequential(
(0): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU(inplace=True)
)
(short): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
(conv1): Sequential(
(0): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv2): Sequential(
(0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
)
(1): Block2(
(preact): Sequential(
(0): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU(inplace=True)
)
(short): Identity()
(conv1): Sequential(
(0): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv2): Sequential(
(0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
)
(2): Block2(
(preact): Sequential(
(0): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU(inplace=True)
)
(short): Identity()
(conv1): Sequential(
(0): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv2): Sequential(
(0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
)
)
)
(post): Sequential(
(bn): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU()
(avg_pool): AdaptiveAvgPool2d(output_size=(1, 1))
(flatten): Flatten(start_dim=1, end_dim=-1)
(fc): Linear(in_features=2048, out_features=1000, bias=True)
)
)
In [22]:
# count the model's parameters and other statistics
import torchsummary as summary
summary.summary(model, (3, 224, 224))
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 64, 112, 112] 9,472
MaxPool2d-2 [-1, 64, 56, 56] 0
BatchNorm2d-3 [-1, 64, 56, 56] 128
ReLU-4 [-1, 64, 56, 56] 0
Conv2d-5 [-1, 256, 56, 56] 16,384
Conv2d-6 [-1, 64, 56, 56] 4,096
BatchNorm2d-7 [-1, 64, 56, 56] 128
ReLU-8 [-1, 64, 56, 56] 0
Conv2d-9 [-1, 64, 56, 56] 36,864
BatchNorm2d-10 [-1, 64, 56, 56] 128
ReLU-11 [-1, 64, 56, 56] 0
Conv2d-12 [-1, 256, 56, 56] 16,384
Block2-13 [-1, 256, 56, 56] 0
BatchNorm2d-14 [-1, 256, 56, 56] 512
ReLU-15 [-1, 256, 56, 56] 0
Identity-16 [-1, 256, 56, 56] 0
Conv2d-17 [-1, 64, 56, 56] 16,384
BatchNorm2d-18 [-1, 64, 56, 56] 128
ReLU-19 [-1, 64, 56, 56] 0
Conv2d-20 [-1, 64, 56, 56] 36,864
BatchNorm2d-21 [-1, 64, 56, 56] 128
ReLU-22 [-1, 64, 56, 56] 0
Conv2d-23 [-1, 256, 56, 56] 16,384
Block2-24 [-1, 256, 56, 56] 0
BatchNorm2d-25 [-1, 256, 56, 56] 512
ReLU-26 [-1, 256, 56, 56] 0
MaxPool2d-27 [-1, 256, 28, 28] 0
Conv2d-28 [-1, 64, 56, 56] 16,384
BatchNorm2d-29 [-1, 64, 56, 56] 128
ReLU-30 [-1, 64, 56, 56] 0
Conv2d-31 [-1, 64, 28, 28] 36,864
BatchNorm2d-32 [-1, 64, 28, 28] 128
ReLU-33 [-1, 64, 28, 28] 0
Conv2d-34 [-1, 256, 28, 28] 16,384
Block2-35 [-1, 256, 28, 28] 0
Stack2-36 [-1, 256, 28, 28] 0
BatchNorm2d-37 [-1, 256, 28, 28] 512
ReLU-38 [-1, 256, 28, 28] 0
Conv2d-39 [-1, 512, 28, 28] 131,072
Conv2d-40 [-1, 128, 28, 28] 32,768
BatchNorm2d-41 [-1, 128, 28, 28] 256
ReLU-42 [-1, 128, 28, 28] 0
Conv2d-43 [-1, 128, 28, 28] 147,456
BatchNorm2d-44 [-1, 128, 28, 28] 256
ReLU-45 [-1, 128, 28, 28] 0
Conv2d-46 [-1, 512, 28, 28] 65,536
Block2-47 [-1, 512, 28, 28] 0
BatchNorm2d-48 [-1, 512, 28, 28] 1,024
ReLU-49 [-1, 512, 28, 28] 0
Identity-50 [-1, 512, 28, 28] 0
Conv2d-51 [-1, 128, 28, 28] 65,536
BatchNorm2d-52 [-1, 128, 28, 28] 256
ReLU-53 [-1, 128, 28, 28] 0
Conv2d-54 [-1, 128, 28, 28] 147,456
BatchNorm2d-55 [-1, 128, 28, 28] 256
ReLU-56 [-1, 128, 28, 28] 0
Conv2d-57 [-1, 512, 28, 28] 65,536
Block2-58 [-1, 512, 28, 28] 0
BatchNorm2d-59 [-1, 512, 28, 28] 1,024
ReLU-60 [-1, 512, 28, 28] 0
Identity-61 [-1, 512, 28, 28] 0
Conv2d-62 [-1, 128, 28, 28] 65,536
BatchNorm2d-63 [-1, 128, 28, 28] 256
ReLU-64 [-1, 128, 28, 28] 0
Conv2d-65 [-1, 128, 28, 28] 147,456
BatchNorm2d-66 [-1, 128, 28, 28] 256
ReLU-67 [-1, 128, 28, 28] 0
Conv2d-68 [-1, 512, 28, 28] 65,536
Block2-69 [-1, 512, 28, 28] 0
BatchNorm2d-70 [-1, 512, 28, 28] 1,024
ReLU-71 [-1, 512, 28, 28] 0
MaxPool2d-72 [-1, 512, 14, 14] 0
Conv2d-73 [-1, 128, 28, 28] 65,536
BatchNorm2d-74 [-1, 128, 28, 28] 256
ReLU-75 [-1, 128, 28, 28] 0
Conv2d-76 [-1, 128, 14, 14] 147,456
BatchNorm2d-77 [-1, 128, 14, 14] 256
ReLU-78 [-1, 128, 14, 14] 0
Conv2d-79 [-1, 512, 14, 14] 65,536
Block2-80 [-1, 512, 14, 14] 0
Stack2-81 [-1, 512, 14, 14] 0
BatchNorm2d-82 [-1, 512, 14, 14] 1,024
ReLU-83 [-1, 512, 14, 14] 0
Conv2d-84 [-1, 1024, 14, 14] 524,288
Conv2d-85 [-1, 256, 14, 14] 131,072
BatchNorm2d-86 [-1, 256, 14, 14] 512
ReLU-87 [-1, 256, 14, 14] 0
Conv2d-88 [-1, 256, 14, 14] 589,824
BatchNorm2d-89 [-1, 256, 14, 14] 512
ReLU-90 [-1, 256, 14, 14] 0
Conv2d-91 [-1, 1024, 14, 14] 262,144
Block2-92 [-1, 1024, 14, 14] 0
BatchNorm2d-93 [-1, 1024, 14, 14] 2,048
ReLU-94 [-1, 1024, 14, 14] 0
Identity-95 [-1, 1024, 14, 14] 0
Conv2d-96 [-1, 256, 14, 14] 262,144
BatchNorm2d-97 [-1, 256, 14, 14] 512
ReLU-98 [-1, 256, 14, 14] 0
Conv2d-99 [-1, 256, 14, 14] 589,824
BatchNorm2d-100 [-1, 256, 14, 14] 512
ReLU-101 [-1, 256, 14, 14] 0
Conv2d-102 [-1, 1024, 14, 14] 262,144
Block2-103 [-1, 1024, 14, 14] 0
BatchNorm2d-104 [-1, 1024, 14, 14] 2,048
ReLU-105 [-1, 1024, 14, 14] 0
Identity-106 [-1, 1024, 14, 14] 0
Conv2d-107 [-1, 256, 14, 14] 262,144
BatchNorm2d-108 [-1, 256, 14, 14] 512
ReLU-109 [-1, 256, 14, 14] 0
Conv2d-110 [-1, 256, 14, 14] 589,824
BatchNorm2d-111 [-1, 256, 14, 14] 512
ReLU-112 [-1, 256, 14, 14] 0
Conv2d-113 [-1, 1024, 14, 14] 262,144
Block2-114 [-1, 1024, 14, 14] 0
BatchNorm2d-115 [-1, 1024, 14, 14] 2,048
ReLU-116 [-1, 1024, 14, 14] 0
Identity-117 [-1, 1024, 14, 14] 0
Conv2d-118 [-1, 256, 14, 14] 262,144
BatchNorm2d-119 [-1, 256, 14, 14] 512
ReLU-120 [-1, 256, 14, 14] 0
Conv2d-121 [-1, 256, 14, 14] 589,824
BatchNorm2d-122 [-1, 256, 14, 14] 512
ReLU-123 [-1, 256, 14, 14] 0
Conv2d-124 [-1, 1024, 14, 14] 262,144
Block2-125 [-1, 1024, 14, 14] 0
BatchNorm2d-126 [-1, 1024, 14, 14] 2,048
ReLU-127 [-1, 1024, 14, 14] 0
Identity-128 [-1, 1024, 14, 14] 0
Conv2d-129 [-1, 256, 14, 14] 262,144
BatchNorm2d-130 [-1, 256, 14, 14] 512
ReLU-131 [-1, 256, 14, 14] 0
Conv2d-132 [-1, 256, 14, 14] 589,824
BatchNorm2d-133 [-1, 256, 14, 14] 512
ReLU-134 [-1, 256, 14, 14] 0
Conv2d-135 [-1, 1024, 14, 14] 262,144
Block2-136 [-1, 1024, 14, 14] 0
BatchNorm2d-137 [-1, 1024, 14, 14] 2,048
ReLU-138 [-1, 1024, 14, 14] 0
MaxPool2d-139 [-1, 1024, 7, 7] 0
Conv2d-140 [-1, 256, 14, 14] 262,144
BatchNorm2d-141 [-1, 256, 14, 14] 512
ReLU-142 [-1, 256, 14, 14] 0
Conv2d-143 [-1, 256, 7, 7] 589,824
BatchNorm2d-144 [-1, 256, 7, 7] 512
ReLU-145 [-1, 256, 7, 7] 0
Conv2d-146 [-1, 1024, 7, 7] 262,144
Block2-147 [-1, 1024, 7, 7] 0
Stack2-148 [-1, 1024, 7, 7] 0
BatchNorm2d-149 [-1, 1024, 7, 7] 2,048
ReLU-150 [-1, 1024, 7, 7] 0
Conv2d-151 [-1, 2048, 7, 7] 2,097,152
Conv2d-152 [-1, 512, 7, 7] 524,288
BatchNorm2d-153 [-1, 512, 7, 7] 1,024
ReLU-154 [-1, 512, 7, 7] 0
Conv2d-155 [-1, 512, 7, 7] 2,359,296
BatchNorm2d-156 [-1, 512, 7, 7] 1,024
ReLU-157 [-1, 512, 7, 7] 0
Conv2d-158 [-1, 2048, 7, 7] 1,048,576
Block2-159 [-1, 2048, 7, 7] 0
BatchNorm2d-160 [-1, 2048, 7, 7] 4,096
ReLU-161 [-1, 2048, 7, 7] 0
Identity-162 [-1, 2048, 7, 7] 0
Conv2d-163 [-1, 512, 7, 7] 1,048,576
BatchNorm2d-164 [-1, 512, 7, 7] 1,024
ReLU-165 [-1, 512, 7, 7] 0
Conv2d-166 [-1, 512, 7, 7] 2,359,296
BatchNorm2d-167 [-1, 512, 7, 7] 1,024
ReLU-168 [-1, 512, 7, 7] 0
Conv2d-169 [-1, 2048, 7, 7] 1,048,576
Block2-170 [-1, 2048, 7, 7] 0
BatchNorm2d-171 [-1, 2048, 7, 7] 4,096
ReLU-172 [-1, 2048, 7, 7] 0
Identity-173 [-1, 2048, 7, 7] 0
Conv2d-174 [-1, 512, 7, 7] 1,048,576
BatchNorm2d-175 [-1, 512, 7, 7] 1,024
ReLU-176 [-1, 512, 7, 7] 0
Conv2d-177 [-1, 512, 7, 7] 2,359,296
BatchNorm2d-178 [-1, 512, 7, 7] 1,024
ReLU-179 [-1, 512, 7, 7] 0
Conv2d-180 [-1, 2048, 7, 7] 1,048,576
Block2-181 [-1, 2048, 7, 7] 0
Stack2-182 [-1, 2048, 7, 7] 0
BatchNorm2d-183 [-1, 2048, 7, 7] 4,096
ReLU-184 [-1, 2048, 7, 7] 0
AdaptiveAvgPool2d-185 [-1, 2048, 1, 1] 0
Flatten-186 [-1, 2048] 0
Linear-187 [-1, 1000] 2,049,000
================================================================
Total params: 25,549,416
Trainable params: 25,549,416
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.57
Forward/backward pass size (MB): 241.69
Params size (MB): 97.46
Estimated Total Size (MB): 339.73
----------------------------------------------------------------
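As a final sanity check, a dummy forward pass should yield a 1000-way logit vector:

model.eval()  # use running BN statistics for the check
with torch.no_grad():
    out = model(torch.randn(1, 3, 224, 224).to(device))
print(out.shape)  # torch.Size([1, 1000])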