[Project Practice] Handwritten Digit Recognition with a CNN
Since only the model-definition part of the earlier ANN-based code needs to change, this post is short; a few comments for my own future reference are enough.
Video link: https://www.bilibili.com/video/BV1Y7411d7Ys?p=10
```python
import torch
import torch.nn.functional as F

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = torch.nn.Conv2d(1, 10, kernel_size=5)   # convolution layers
        self.conv2 = torch.nn.Conv2d(10, 20, kernel_size=5)
        self.pooling = torch.nn.MaxPool2d(2)                  # pooling layer
        self.fc = torch.nn.Linear(320, 10)

    def forward(self, x):
        # input is (n, 1, 28, 28); after the conv/pool stages it is
        # flattened to (n, 320) for the fully-connected layer
        batch_size = x.size(0)
        x = F.relu(self.pooling(self.conv1(x)))
        x = F.relu(self.pooling(self.conv2(x)))
        x = x.view(batch_size, -1)  # the -1 works out to 320 here
        x = self.fc(x)
        return x
```
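The `320` in the fully-connected layer can be derived by hand: with no padding and stride 1, a convolution shrinks each side by `kernel_size - 1`, and the 2×2 max pool halves it. A quick sketch of that arithmetic (the helper name is my own, not from the course code):

```python
def out_size(size, kernel, stride=1):
    # side length after a conv/pool with no padding:
    # floor((size - kernel) / stride) + 1
    return (size - kernel) // stride + 1

s = 28                  # MNIST input is 1x28x28
s = out_size(s, 5)      # conv1 (5x5): 28 -> 24
s = out_size(s, 2, 2)   # 2x2 max pool: 24 -> 12
s = out_size(s, 5)      # conv2 (5x5): 12 -> 8
s = out_size(s, 2, 2)   # 2x2 max pool: 8 -> 4
print(20 * s * s)       # 20 channels x 4 x 4 = 320
```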
The assignment left at the end is to modify the convolutional layers yourself; here is my code:
```python
class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = torch.nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = torch.nn.Conv2d(10, 20, kernel_size=5)
        self.conv3 = torch.nn.Conv2d(20, 30, kernel_size=2)
        self.pooling = torch.nn.MaxPool2d(2)
        self.fc1 = torch.nn.Linear(30, 20)
        self.fc2 = torch.nn.Linear(20, 10)

    def forward(self, x):
        batch_size = x.size(0)
        x = F.relu(self.pooling(self.conv1(x)))  # -> (n, 10, 12, 12)
        x = F.relu(self.pooling(self.conv2(x)))  # -> (n, 20, 4, 4)
        x = F.relu(self.pooling(self.conv3(x)))  # -> (n, 30, 1, 1)
        x = x.view(batch_size, -1)               # -> (n, 30)
        x = self.fc1(x)
        x = self.fc2(x)
        return x
```
Completing this assignment helped me understand what `self.conv1 = torch.nn.Conv2d(1, 10, kernel_size=5)` means: 1 input channel becomes 10 output channels, convolved with a 5×5 kernel. So after `self.conv3 = torch.nn.Conv2d(20, 30, kernel_size=2)` you know that 20 channels go in, 30 channels come out, and the kernel is 2×2. At this point there are only 30 channels, and the feature map has shrunk to 1×1 (the spatial size goes 28 → 24 → 12 → 8 → 4 → 3 → 1 through the successive conv and pooling stages), so the fully-connected layers can only start reducing the dimension from 30.
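This shape bookkeeping can also be checked empirically by pushing a dummy batch through the three conv/pool stages — a quick sanity check I'm adding here, not part of the assignment code:

```python
import torch

conv1 = torch.nn.Conv2d(1, 10, kernel_size=5)
conv2 = torch.nn.Conv2d(10, 20, kernel_size=5)
conv3 = torch.nn.Conv2d(20, 30, kernel_size=2)
pooling = torch.nn.MaxPool2d(2)

x = torch.randn(4, 1, 28, 28)   # dummy batch of 4 MNIST-sized images
x = pooling(conv1(x))           # -> (4, 10, 12, 12)
x = pooling(conv2(x))           # -> (4, 20, 4, 4)
x = pooling(conv3(x))           # -> (4, 30, 1, 1): 30 channels, 1x1 map
print(x.view(4, -1).shape)      # 30 flattened features per sample
```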
I also ran this task on the CPU and the GPU separately. At the top of the script I added

```python
import time

start = time.time()
```

and at the end

```python
end = time.time()
print("Elapsed time: {}".format(end - start))
```

so the run times on CPU and GPU can be compared.
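As a side note, `time.perf_counter()` is the clock the Python docs recommend for this kind of interval measurement, since `time.time()` can be affected by system clock adjustments. The pattern is the same (the arithmetic line below is just a stand-in for the training loop):

```python
import time

start = time.perf_counter()
total = sum(i * i for i in range(100_000))  # stand-in for the training loop
end = time.perf_counter()
print("Elapsed time: {:.3f}s".format(end - start))
```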
CPU: 226 s GPU: 148 s

The GPU is indeed faster.
This post is from cnblogs (博客园), author: Lugendary. Please credit the original link when reposting: https://www.cnblogs.com/lugendary/p/16153560.html