Neural Network Learning -- PyTorch Notes 03: Building Models

torch.nn

(1) Sequential container for building a network structure: torch.nn.Sequential

models = torch.nn.Sequential(
    torch.nn.Linear(input_data, hidden_layer),
    torch.nn.ReLU(),
    torch.nn.Linear(hidden_layer, output_data)
)
from collections import OrderedDict  # use an OrderedDict so each module gets a custom name
models2 = torch.nn.Sequential(OrderedDict([
    ("Line1", torch.nn.Linear(input_data, hidden_layer)),
    ("ReLu1", torch.nn.ReLU()),
    ("Line2", torch.nn.Linear(hidden_layer, output_data))
]))
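
With the OrderedDict form each submodule is registered under the name you chose, so the printed model and attribute access are more readable. A minimal sketch (Line1 is the name given in the snippet above):

print(models2)                       # the printed structure shows the custom module names
print(models2.Line1.weight.shape)    # torch.Size([hidden_layer, input_data])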

(2) Linear layer: torch.nn.Linear
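
torch.nn.Linear(in_features, out_features) applies the affine map y = xW^T + b. A quick shape sketch using the dimensions from this post:

layer = torch.nn.Linear(1000, 100)   # weight shape (100, 1000), bias shape (100,)
x = torch.randn(100, 1000)           # a batch of 100 samples
print(layer(x).shape)                # torch.Size([100, 100])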

(3) Activation function: torch.nn.ReLU
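
ReLU clamps negative inputs to zero, ReLU(x) = max(0, x):

relu = torch.nn.ReLU()
print(relu(torch.tensor([-1.0, 0.0, 2.0])))   # tensor([0., 0., 2.])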

(4) Loss functions: torch.nn.MSELoss (mean squared error), torch.nn.L1Loss (mean absolute error), torch.nn.CrossEntropyLoss (cross entropy)
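
All three follow the same call pattern loss = criterion(prediction, target) and average over all elements by default. A small worked sketch:

mse = torch.nn.MSELoss()
l1 = torch.nn.L1Loss()
ce = torch.nn.CrossEntropyLoss()

pred = torch.tensor([[0.5, 1.5]])
target = torch.tensor([[1.0, 1.0]])
print(mse(pred, target))   # ((-0.5)^2 + (0.5)^2) / 2 = 0.25
print(l1(pred, target))    # (0.5 + 0.5) / 2 = 0.5

logits = torch.tensor([[2.0, 0.5, 0.1]])
label = torch.tensor([0])    # CrossEntropyLoss takes raw logits and a class index
print(ce(logits, label))     # equals -log(softmax(logits)[0, 0])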


import torch
from torch.autograd import Variable
batch_n = 100
hidden_layer = 100
input_data = 1000
output_data = 10

x = Variable(torch.randn(batch_n, input_data), requires_grad=False)  # wrap x as a graph node; the input itself needs no gradient
y = Variable(torch.randn(batch_n, output_data), requires_grad=False)
models = torch.nn.Sequential(
    torch.nn.Linear(input_data, hidden_layer),
    torch.nn.ReLU(),
    torch.nn.Linear(hidden_layer, output_data)
)
epoch_n = 10000
learning_rate = 0.0001
loss_fn = torch.nn.MSELoss()

for epoch in range(epoch_n):
    y_pred = models(x)
    loss = loss_fn(y_pred, y)
    if epoch % 1000 == 0:
        print("Epoch: {}, Loss: {:.4f}".format(epoch, loss.item()))
    models.zero_grad()  # zero the gradients accumulated on the model parameters

    loss.backward()

    for param in models.parameters():  # gradient-descent update on every parameter
        param.data -= param.grad.data * learning_rate
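
As an aside: since PyTorch 0.4 Variable has been merged into Tensor, and the manual update step is usually written under torch.no_grad() instead of going through .data. A minimal equivalent of the loop above:

with torch.no_grad():
    for param in models.parameters():
        param -= learning_rate * param.grad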


The torch.optim package

Classes that optimize parameters automatically: SGD, Adagrad, RMSprop, Adam; each is constructed the same way, as sketched below.
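
Each takes the model's parameters and a learning rate (plus algorithm-specific options such as momentum or betas); a quick sketch:

optimizer = torch.optim.SGD(models.parameters(), lr=0.0001)
optimizer = torch.optim.Adagrad(models.parameters(), lr=0.0001)
optimizer = torch.optim.RMSprop(models.parameters(), lr=0.0001)
optimizer = torch.optim.Adam(models.parameters(), lr=0.0001)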

import torch
from torch.autograd import Variable
batch_n = 100
hidden_layer = 100
input_data = 1000
output_data = 10

x = Variable(torch.randn(batch_n, input_data), requires_grad=False)
y = Variable(torch.randn(batch_n, output_data), requires_grad=False)

models = torch.nn.Sequential(
    torch.nn.Linear(input_data, hidden_layer),
    torch.nn.ReLU(),
    torch.nn.Linear(hidden_layer, output_data)
)

epoch_n = 20
learning_rate = 0.0001
loss_fn = torch.nn.MSELoss()

optimizer = torch.optim.Adam(models.parameters(), lr=learning_rate)  # Adam adapts the learning rate used for each parameter update

for epoch in range(epoch_n):
    y_pred = models(x)
    loss = loss_fn(y_pred, y)
    print("Epoch: {}, Loss: {:.4f}".format(epoch, loss.item()))
    optimizer.zero_grad()  # zero the parameter gradients

    loss.backward()
    optimizer.step()  # update the model parameters

