Python: Installing and Using the Deep Neural Network Framework PyTorch, with an Example
The Deep Neural Network Framework PyTorch
Official installation guide:
https://pytorch.org/get-started/locally/
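The page above generates the exact install command for your OS, package manager, and CUDA version. As one illustrative example (assuming pip and a CPU-only build; check the selector on the page for the command matching your setup):

```shell
# Install PyTorch via pip (CPU-only build assumed; see the official
# selector for CUDA-enabled variants).
pip3 install torch
```

You can then verify the installation with `python3 -c "import torch; print(torch.__version__)"`.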
Test code: test.py
import torch

x = torch.rand(5, 3)
print("x =", x)
print("x.shape =", x.shape)
print("x.numpy() =", x.numpy())
print("x.tolist() =", x.tolist())

y = torch.rand(5, 3)
z = x + y
print("z =", z)
print("")

# Variable is deprecated since PyTorch 0.4; a tensor created with
# requires_grad=True can be used directly.
x_var = torch.ones(2, 2, requires_grad=True)
print("x_var =", x_var)

y_var = x_var.sum()
print("y_var =", y_var)
print("y_var.grad_fn =", y_var.grad_fn)

y_var.backward()
print("\nFirst backward pass, x_var.grad =", x_var.grad)
y_var.backward()
print("Second backward pass, x_var.grad =", x_var.grad)
# Note: gradients accumulate across backward passes. Deep learning uses
# multi-layer networks, and each backward pass adds to the previous result.
# For this reason, training loops zero the gradients before each backward pass.

# Zero the gradients
x_var.grad.zero_()
print(x_var.grad)
y_var.backward()
print("Backward pass after zeroing:", x_var.grad)
Run:
python3 test.py
The output is:
x = tensor([[0.1214, 0.1568, 0.7282],
        [0.6820, 0.8997, 0.8014],
        [0.1159, 0.1559, 0.1364],
        [0.3021, 0.2610, 0.9011],
        [0.9494, 0.9833, 0.7934]])
x.shape = torch.Size([5, 3])
x.numpy() = [[0.12138915 0.15684658 0.7282294 ]
 [0.68202746 0.8996668  0.801434  ]
 [0.1159091  0.15591073 0.13638169]
 [0.30212373 0.26102382 0.9010761 ]
 [0.94941247 0.983263   0.79336965]]
x.tolist() = [[0.12138915061950684, 0.15684658288955688, 0.7282294034957886],
 [0.6820274591445923, 0.8996667861938477, 0.8014339804649353],
 [0.11590909957885742, 0.15591073036193848, 0.13638168573379517],
 [0.3021237254142761, 0.2610238194465637, 0.901076078414917],
 [0.94941246509552, 0.9832630157470703, 0.7933696508407593]]
z = tensor([[0.3496, 0.4020, 1.1696],
        [1.4339, 1.6175, 0.8582],
        [0.2218, 0.9734, 1.0614],
        [1.0951, 1.2197, 1.5408],
        [1.0147, 1.0386, 1.4496]])

x_var = tensor([[1., 1.],
        [1., 1.]], requires_grad=True)
y_var = tensor(4., grad_fn=<SumBackward0>)
y_var.grad_fn = <SumBackward0 object at 0x10bd18ca0>

First backward pass, x_var.grad = tensor([[1., 1.],
        [1., 1.]])
Second backward pass, x_var.grad = tensor([[2., 2.],
        [2., 2.]])
tensor([[0., 0.],
        [0., 0.]])
Backward pass after zeroing: tensor([[1., 1.],
        [1., 1.]])
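The gradient accumulation shown above is why real training loops clear gradients before every backward pass, usually via the optimizer rather than by calling zero_() on each tensor by hand. A minimal sketch of that idiom (the single-tensor "model" and toy loss here are hypothetical placeholders for a real network and loss function):

```python
import torch

# A single trainable tensor stands in for a real model's parameters.
w = torch.ones(2, 2, requires_grad=True)
optimizer = torch.optim.SGD([w], lr=0.1)

for step in range(3):
    optimizer.zero_grad()     # clear accumulated gradients first
    loss = (w * w).sum()      # toy loss; backward computes d(loss)/dw = 2*w
    loss.backward()
    optimizer.step()          # update: w <- w - lr * w.grad

print("w =", w)
print("w.grad =", w.grad)
```

Because `optimizer.zero_grad()` runs at the top of each iteration, every `w.grad` reflects only the current step's backward pass instead of a running sum.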