7. Use at least three different activation functions on top of the multi-class classification experiment
ReLU activation function
Advantages:
1. SGD converges faster with ReLU than with sigmoid or tanh.
2. For x > 0 there is no gradient saturation or vanishing-gradient problem.
3. Computation is cheap: no exponential is needed; a single threshold comparison gives the activation value.
Disadvantages:
1. The output of ReLU is not zero-centered.
2. Dead ReLU problem (dead neurons): a ReLU unit stuck in the negative region is said to be "dead". ReLU is fragile during training: when x < 0 the gradient is 0, so no gradient flows through the neuron, it stops responding to any input, and its parameters are never updated again.
Two common causes of this: poor parameter initialization, or a learning rate set so high that parameter updates during training are too large.
Remedies: use Xavier initialization, and avoid setting the learning rate too high or use an algorithm such as Adagrad that adapts the learning rate automatically. The short sketch below illustrates the zero gradient in the negative region.
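A minimal PyTorch sketch (not part of the original experiment) showing that ReLU passes no gradient for negative pre-activations, which is exactly the Dead ReLU mechanism described above:
import torch
# one negative and one positive pre-activation value
x = torch.tensor([-2.0, 3.0], requires_grad=True)
torch.relu(x).sum().backward()
# the gradient is 0 where x < 0, so those weights would never be updated
print(x.grad)  # tensor([0., 1.])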
ELU activation function
Exponential Linear Unit (ELU): ELU(x) = x for x > 0 and α(exp(x) − 1) for x ≤ 0 (α = 1.0 by default in PyTorch). It keeps the advantages of ReLU and has no Dead ReLU problem, and its output mean is close to 0 (PReLU and Leaky ReLU also have this last property). It has a saturation region for negative inputs, which gives it some robustness to noise, and it can be viewed as lying between ReLU and Leaky ReLU. The downside is that it requires computing exp, so it is somewhat more expensive.
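A minimal sketch (not part of the original experiment) comparing ELU's soft saturation at −α with ReLU's hard zero on negative inputs:
import torch
import torch.nn as nn
x = torch.tensor([-5.0, -1.0, 0.0, 2.0])
print(nn.ReLU()(x))           # tensor([0., 0., 0., 2.])
print(nn.ELU(alpha=1.0)(x))   # approx. tensor([-0.9933, -0.6321, 0.0000, 2.0000])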
Sigmoid activation function
Advantages:
1. The output of sigmoid lies in (0, 1); this bounded range makes optimization stable, so it can be used in the output layer.
2. It is a continuous function and easy to differentiate.
Disadvantages:
1. Sigmoid saturates when the input has a very large absolute value (positive or negative): the function becomes flat and insensitive to small changes in the input.
During backpropagation the gradient is then close to 0, the weights are hardly updated, and vanishing gradients appear easily, making it impossible to train deep networks (see the short sketch after this list).
2. The output of sigmoid is not zero-centered, so the next layer receives non-zero-mean inputs, which affects its gradients.
3. It is computationally expensive, because sigmoid involves an exponential.
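A minimal sketch (not part of the original experiment) showing how the sigmoid gradient vanishes for inputs with large absolute value; its derivative σ(x)(1 − σ(x)) peaks at 0.25 at x = 0:
import torch
x = torch.tensor([-10.0, 0.0, 10.0], requires_grad=True)
torch.sigmoid(x).sum().backward()
# approx. tensor([4.54e-05, 2.5e-01, 4.54e-05]) -- nearly zero at |x| = 10
print(x.grad)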
Code
# Import the required packages
import torch
import numpy as np
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader
import torchvision
from torchvision import datasets, transforms
import matplotlib.pyplot as plt
from IPython import display
# Load the data (download=False assumes MNIST is already present under ./data)
mnist_train = datasets.MNIST(root='./data', train=True, download=False, transform=transforms.ToTensor())
mnist_test = datasets.MNIST(root='./data', train=False, download=False, transform=transforms.ToTensor())
# Training set loader
batch_size = 256
train_iter = DataLoader(
    dataset=mnist_train,
    shuffle=True,
    batch_size=batch_size,
    num_workers=0
)
# Test set loader
test_iter = DataLoader(
    dataset=mnist_test,
    shuffle=False,
    batch_size=batch_size,
    num_workers=0
)
# Flatten layer: reshapes each (1, 28, 28) image into a 784-dimensional vector
class FlattenLayer(torch.nn.Module):
    def __init__(self):
        super(FlattenLayer, self).__init__()
    def forward(self, x):
        return x.view(x.shape[0], 784)
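Aside (an alternative, not used in the original code): newer PyTorch versions provide nn.Flatten, which behaves the same way for these (N, 1, 28, 28) inputs:
flatten_alt = nn.Flatten()  # flattens every dimension except the batch dimension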
# Model-building function: pick one of three activation functions
num_input, num_hidden1, num_hidden2, num_output = 28 * 28, 512, 256, 10
def choose_model(model_type):
    if model_type == 'ReLU':
        activation = nn.ReLU()
    elif model_type == 'ELU':
        activation = nn.ELU()
    else:
        activation = nn.Sigmoid()
    model = nn.Sequential()
    model.add_module("flatten", FlattenLayer())
    model.add_module("linear1", nn.Linear(num_input, num_hidden1))
    model.add_module("activation1", activation)
    model.add_module("linear2", nn.Linear(num_hidden1, num_hidden2))
    # the same activation instance is reused; ReLU/ELU/Sigmoid are stateless, so this is safe
    model.add_module("activation2", activation)
    model.add_module("linear3", nn.Linear(num_hidden2, num_output))
    return model
# Use the ReLU activation function
model = choose_model('ReLU')
print(model)
Sequential(
(flatten): FlattenLayer()
(linear1): Linear(in_features=784, out_features=512, bias=True)
(activation1): ReLU()
(linear2): Linear(in_features=512, out_features=256, bias=True)
(activation2): ReLU()
(linear3): Linear(in_features=256, out_features=10, bias=True)
)
# Parameter initialization
# for param in model.parameters():
#     nn.init.normal_(param, mean=0, std=0.001)
for m in model.modules():
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)
        nn.init.constant_(m.bias, 0.1)
# Define the training function (it uses the global `optimizer` created before train() is called)
def train(net, train_iter, test_iter, loss, num_epochs):
    train_ls, test_ls, train_acc, test_acc = [], [], [], []
    for epoch in range(num_epochs):
        train_ls_sum, train_acc_sum, n = 0, 0, 0
        for x, y in train_iter:
            y_pred = net(x)
            l = loss(y_pred, y)
            optimizer.zero_grad()
            l.backward()
            optimizer.step()
            train_ls_sum += l.item()
            train_acc_sum += (y_pred.argmax(dim=1) == y).sum().item()
            n += y_pred.shape[0]
        train_ls.append(train_ls_sum)
        train_acc.append(train_acc_sum / n)
        test_ls_sum, test_acc_sum, n = 0, 0, 0
        with torch.no_grad():  # no gradients are needed during evaluation
            for x, y in test_iter:
                y_pred = net(x)
                l = loss(y_pred, y)
                test_ls_sum += l.item()
                test_acc_sum += (y_pred.argmax(dim=1) == y).sum().item()
                n += y_pred.shape[0]
        test_ls.append(test_ls_sum)
        test_acc.append(test_acc_sum / n)
        print('epoch %d, train_loss %.6f, test_loss %f, train_acc %.6f, test_acc %f'
              % (epoch + 1, train_ls[epoch], test_ls[epoch], train_acc[epoch], test_acc[epoch]))
    return train_ls, test_ls, train_acc, test_acc  # fixed: return the accuracy list, not the last batch sum
# Number of epochs and learning rate
num_epochs = 20
lr = 0.01
loss = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(),lr=lr)
# Start training
train_loss, test_loss, train_acc, test_acc = train(model, train_iter, test_iter, loss, num_epochs)
# Visualize the results
x = np.arange(1, len(train_loss) + 1)
plt.plot(x,train_loss,label="train_loss",linewidth=1.5)
plt.plot(x,test_loss,label="test_loss",linewidth=1.5)
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()
# Use the ELU activation function
model = choose_model('ELU')
print(model)
Sequential(
(flatten): FlattenLayer()
(linear1): Linear(in_features=784, out_features=512, bias=True)
(activation1): ELU(alpha=1.0)
(linear2): Linear(in_features=512, out_features=256, bias=True)
(activation2): ELU(alpha=1.0)
(linear3): Linear(in_features=256, out_features=10, bias=True)
)
# Number of epochs and learning rate
num_epochs = 20
lr = 0.01
loss = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(),lr=lr)
# Start training
train_loss, test_loss, train_acc, test_acc = train(model, train_iter, test_iter, loss, num_epochs)
# Visualize the results
x = np.arange(1, len(train_loss) + 1)
plt.plot(x,train_loss,label="train_loss",linewidth=1.5)
plt.plot(x,test_loss,label="test_loss",linewidth=1.5)
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()
# Use the Sigmoid activation function
model = choose_model('Sigmoid')
print(model)
Sequential(
(flatten): FlattenLayer()
(linear1): Linear(in_features=784, out_features=512, bias=True)
(activation1): Sigmoid()
(linear2): Linear(in_features=512, out_features=256, bias=True)
(activation2): Sigmoid()
(linear3): Linear(in_features=256, out_features=10, bias=True)
)
# Number of epochs and learning rate
num_epochs = 20
lr = 0.01
loss = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(),lr=lr)
# Start training
train_loss, test_loss, train_acc, test_acc = train(model, train_iter, test_iter, loss, num_epochs)
# Visualize the results
x = np.arange(1, len(train_loss) + 1)
plt.plot(x,train_loss,label="train_loss",linewidth=1.5)
plt.plot(x,test_loss,label="test_loss",linewidth=1.5)
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()