PyTorch in Practice: Optimizing the Himmelblau Function

1. The Himmelblau Function

The Himmelblau function is defined as

F(x, y) = (x² + y - 11)² + (x + y² - 7)²

The optimization task is to find coordinates (x, y) that minimize the value of F(x, y).
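For reference, the Himmelblau function has four global minima, all with F = 0: approximately (3.0, 2.0), (-2.805, 3.131), (-3.779, -3.283) and (3.584, -1.848). Below is a minimal sketch (not part of the original post) that checks these values numerically, using a plain two-argument version of the function for convenience.

# Quick numerical check of the four known global minima;
# every printed value should be (very close to) 0.
def himmelblau_xy(x, y):
    return (x ** 2 + y - 11) ** 2 + (x + y ** 2 - 7) ** 2

known_minima = [(3.0, 2.0), (-2.805118, 3.131312),
                (-3.779310, -3.283186), (3.584428, -1.848126)]
for x, y in known_minima:
    print('F({:.6f}, {:.6f}) = {:.2e}'.format(x, y, himmelblau_xy(x, y)))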

A surface plot of the function is shown below (generated by the code that follows):

Implementation Code

import numpy as np
from matplotlib import pyplot as plt
import torch

# Define the Himmelblau function; x_y is a pair (x, y)
def himmelblau(x_y):
    return (x_y[0] ** 2 + x_y[1] - 11) ** 2 + (x_y[0] + x_y[1] ** 2 - 7) ** 2

# Generate the x-axis sample points
x = np.arange(-6, 6, 0.1)
# Generate the y-axis sample points
y = np.arange(-6, 6, 0.1)
print('x,y range:', x.shape, y.shape)
# Build a 2D mesh grid from x and y
X, Y = np.meshgrid(x, y)
print('X,Y maps:', X.shape, Y.shape)
# Evaluate the function on the grid to get the z values
Z = himmelblau([X, Y])

fig = plt.figure('himmelblau')
# fig.gca(projection='3d') no longer accepts a projection argument in recent
# Matplotlib versions; add_subplot does the same job
ax = fig.add_subplot(projection='3d')

# Draw the 3D surface
ax.plot_surface(X, Y, Z)
ax.view_init(60, -30)
ax.set_xlabel('x')
ax.set_ylabel('y')
plt.show()
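Besides the surface plot, a contour plot makes the four minima easier to spot. The following is a small optional sketch, not part of the original post, assuming the X, Y and Z arrays from the listing above are still in scope.

# Optional: contour view of the same grid; the four "wells" are the minima
fig2 = plt.figure('himmelblau contour')
plt.contour(X, Y, Z, levels=np.logspace(0, 3, 20))
plt.xlabel('x')
plt.ylabel('y')
plt.show()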

if __name__ == '__main__':
    # Other starting points worth trying: [1., 0.], [-4., 0.], [4., 0.]
    # x_y holds the coordinates (x, y); the goal is to find the optimal x_y.
    x_y = torch.tensor([0., 0.], requires_grad=True)
    # Create the optimizer; its target is x_y, with a learning rate of 0.001
    optimizer = torch.optim.Adam([x_y], lr=1e-3)
    for step in range(20000):

        # Evaluate the function at the current coordinates
        pred = himmelblau(x_y)
        # Gradients accumulate across backward passes instead of being replaced,
        # so clear them before computing new ones
        optimizer.zero_grad()
        # Compute the gradients with respect to x and y
        pred.backward()

        # Each call to .step() performs one update of x and y, roughly
        # x' = x - lr * ∇x and y' = y - lr * ∇y
        # (Adam additionally applies momentum and per-parameter scaling)
        optimizer.step()

        if step % 2000 == 0:
            print('step {}: x_y = {}, f(x) = {}'
                  .format(step, x_y.tolist(), pred.item()))

Output

x,y range: (120,) (120,)
X,Y maps: (120, 120) (120, 120)
step 0: x_y = [0.0009999999310821295, 0.0009999999310821295], f(x) = 170.0
step 2000: x_y = [2.3331806659698486, 1.9540694952011108], f(x) = 13.730916023254395
step 4000: x_y = [2.9820079803466797, 2.0270984172821045], f(x) = 0.014858869835734367
step 6000: x_y = [2.999983549118042, 2.0000221729278564], f(x) = 1.1074007488787174e-08
step 8000: x_y = [2.9999938011169434, 2.0000083446502686], f(x) = 1.5572823031106964e-09
step 10000: x_y = [2.999997854232788, 2.000002861022949], f(x) = 1.8189894035458565e-10
step 12000: x_y = [2.9999992847442627, 2.0000009536743164], f(x) = 1.6370904631912708e-11
step 14000: x_y = [2.999999761581421, 2.000000238418579], f(x) = 1.8189894035458565e-12
step 16000: x_y = [3.0, 2.0], f(x) = 0.0
step 18000: x_y = [3.0, 2.0], f(x) = 0.0
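As the comment in the code hints ([1., 0.], [-4., 0.], [4., 0.]), the minimum the optimizer converges to depends on the starting point, since the function has four global minima. Below is a minimal sketch, assuming the himmelblau function and imports from the listing above; the helper minimize_from is hypothetical and only for illustration.

# Run the same Adam loop from several starting points and report where it ends up.
def minimize_from(init, steps=20000, lr=1e-3):
    x_y = torch.tensor(init, requires_grad=True)
    optimizer = torch.optim.Adam([x_y], lr=lr)
    for _ in range(steps):
        pred = himmelblau(x_y)
        optimizer.zero_grad()
        pred.backward()
        optimizer.step()
    return x_y.tolist(), himmelblau(x_y).item()

for init in ([1., 0.], [-4., 0.], [4., 0.]):
    print(init, '->', minimize_from(init))

Different starting points may land in different minima; in every case the final f(x) should be effectively 0.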

posted @ 2020-10-09 17:11  gy77