Experiment 5: Handwritten Digit Recognition with a Fully Connected Neural Network

【Experiment Objectives】

Understand the principles of neural networks, and master forward inference and backpropagation;

Master how to implement training and inference of a fully connected neural network model using the PyTorch framework.
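Before the full experiment, the two ideas in the objectives can be seen on a single linear layer: the forward pass is just a matrix product, and `loss.backward()` performs backpropagation, filling `.grad`. The sketch below (illustrative tensors only, not part of the MNIST experiment) checks autograd's gradient against the hand-derived one.

```python
import torch

torch.manual_seed(0)

x = torch.randn(1, 3)                       # one sample with 3 features
w = torch.randn(3, 2, requires_grad=True)   # weights of a 3 -> 2 linear layer
y = x @ w                                   # forward pass: y = x W
loss = y.sum()                              # a scalar loss so backward() applies
loss.backward()                             # backpropagation fills w.grad

# For loss = sum(x W), d(loss)/dW_ij = x_i for every j,
# so the gradient is x^T repeated across the output columns.
manual_grad = x.t().expand(3, 2)
print(torch.allclose(w.grad, manual_grad))  # True
```

The same mechanism scales to the five-layer network below: each `functional.relu(layer(x))` in `forward` adds a node to the autograd graph, and one `loss.backward()` call propagates gradients through all of them.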

【Experiment Content】

1. Using the PyTorch framework, design a fully connected neural network to train on and recognize the MNIST handwritten digit dataset.

 

【Report Requirements】

Modify the network structure by changing the number of layers, and observe how the layer count affects training time, test time, and accuracy;
Modify the learning rate of the network, and observe its effect on training and test performance;
Modify the network structure by increasing or decreasing the number of neurons, and observe the effect on training and test performance.
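One way to organize these comparisons is to build the MLP from a list of layer sizes and time a short training run for each configuration. The sketch below is a self-contained illustration (not the assignment's required code): `make_mlp` and `time_training` are hypothetical helpers, and random tensors stand in for MNIST so it runs without the dataset.

```python
import time
import torch
import torch.nn.functional as F

def make_mlp(sizes):
    # Build Linear(+ReLU) layers from a size list, e.g. [784, 256, 10].
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(torch.nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:               # no ReLU after the output layer
            layers.append(torch.nn.ReLU())
    return torch.nn.Sequential(*layers)

def time_training(sizes, lr, steps=50):
    # Train briefly on synthetic data and report wall time and final loss.
    torch.manual_seed(0)
    model = make_mlp(sizes)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    x = torch.randn(256, sizes[0])           # stand-in for a batch of images
    y = torch.randint(0, sizes[-1], (256,))  # stand-in for digit labels
    start = time.time()
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        opt.step()
    return time.time() - start, loss.item()

# Vary depth here; learning rate and neuron counts vary the same way.
for sizes in [[784, 10], [784, 256, 10], [784, 512, 256, 128, 64, 10]]:
    t, loss = time_training(sizes, lr=0.1)
    print(f"layers={len(sizes) - 1}  time={t:.2f}s  final loss={loss:.3f}")
```

For the report itself, the same loop would call the real `train`/`test` routines below and record accuracy per configuration instead of loss on synthetic data.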

 


import torch
import torch.nn.functional as functional
import torch.optim as optim
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision import transforms

# global definitions
BATCH_SIZE = 100
MNIST_PATH = "../../../Data/MNIST"

# transform sequential
transform = transforms.Compose([
    transforms.ToTensor(),
    # mean, std of the MNIST training set
    transforms.Normalize((0.1307,), (0.3081,))
])

# training dataset
train_dataset = datasets.MNIST(root=MNIST_PATH, train=True,
                               download=True, transform=transform)
# training loader
train_loader = DataLoader(train_dataset, shuffle=True, batch_size=BATCH_SIZE)

# test dataset
test_dataset = datasets.MNIST(root=MNIST_PATH, train=False,
                              download=True, transform=transform)
# test loader
test_loader = DataLoader(test_dataset, shuffle=False, batch_size=BATCH_SIZE)


class FullyNeuralNetwork(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # layer definitions
        self.layer_1 = torch.nn.Linear(784, 512)  # 28 x 28 = 784 pixels as input
        self.layer_2 = torch.nn.Linear(512, 256)
        self.layer_3 = torch.nn.Linear(256, 128)
        self.layer_4 = torch.nn.Linear(128, 64)
        self.layer_5 = torch.nn.Linear(64, 10)    # 10 classes, one per digit

    def forward(self, data):
        # flatten each image into a 784-dimensional vector
        x = data.view(-1, 784)
        # do forward calculation
        x = functional.relu(self.layer_1(x))
        x = functional.relu(self.layer_2(x))
        x = functional.relu(self.layer_3(x))
        x = functional.relu(self.layer_4(x))
        x = self.layer_5(x)  # raw logits; CrossEntropyLoss applies softmax
        # return results
        return x


def train(epoch, model, criterion, optimizer):
    running_loss = 0.0
    for batch_idx, data in enumerate(train_loader, 0):
        inputs, target = data
        optimizer.zero_grad()
        # forward, backward, update
        outputs = model(inputs)
        loss = criterion(outputs, target)
        loss.backward()
        optimizer.step()
        # print averaged loss every 100 batches
        running_loss += loss.item()
        if batch_idx % 100 == 0:
            print('[%d, %5d] loss: %.3f' % (epoch, batch_idx, running_loss / 100))
            running_loss = 0.0


def test(model):
    correct = 0
    total = 0
    with torch.no_grad():
        for images, labels in test_loader:
            outputs = model(images)
            _, predicted = torch.max(outputs.data, dim=1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    accuracy = 100 * correct / total
    print("Accuracy on test set: %d %%" % accuracy)
    return accuracy


if __name__ == "__main__":
    # fully connected neural network model
    model = FullyNeuralNetwork()
    # loss function
    criterion = torch.nn.CrossEntropyLoss()
    # stochastic gradient descent optimizer
    optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.5)
    # record per-epoch test accuracy for plotting
    acc_list_test = []
    # training and gradient descent calculation
    for epoch in range(5):
        # training pass
        train(epoch, model, criterion, optimizer)
        # test model and record accuracy
        acc_list_test.append(test(model))
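Beyond batch accuracy, the "recognition" half of the task is a single-image prediction. The sketch below shows the three inference lines; the `model` here is an untrained stand-in with the same 784 → 10 shape (so its output digit is arbitrary), and `image` is a random tensor standing in for one MNIST sample. With the trained network above, the same lines return the recognized digit.

```python
import torch

# Untrained stand-in model with MNIST-shaped input/output.
model = torch.nn.Sequential(torch.nn.Linear(784, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 10))
image = torch.randn(1, 1, 28, 28)           # stand-in for one MNIST image

with torch.no_grad():                       # inference only, no gradients
    logits = model(image.view(-1, 784))     # flatten to 784 features
    digit = torch.argmax(logits, dim=1).item()

print("predicted digit:", digit)            # an integer in 0..9
```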

 

 

# accuracy on the test set, collected per epoch in acc_list_test
import matplotlib.pyplot as plt

y_test = acc_list_test
plt.plot(y_test)
plt.xlabel("Epoch")
plt.ylabel("Accuracy On TestSet")
plt.show()

 

posted @ 梦浮灯