[6] Building an Upgraded Custom gym Environment with Dynamic Obstacles (Reinforcement Learning)

Related articles:

[1] Installing gym and fixing errors encountered during installation

[2] Getting started with gym: a concise tutorial

[3] Simple drawing with gym

[4] Building your own gym environment, the most detailed guide on the web; you can learn it in 3 minutes!

[5] Building your own gym environment: defining your own myenv.py file in detail

[6] Building an upgraded custom gym environment with dynamic obstacles (reinforcement learning)

1. Environment Background

As shown in the figure: in the initial state of the environment, the robot sets out from the top-left corner to find the battery in the bottom-right corner. The static obstacles sit at cells 10 and 19; the dynamic obstacles are an airplane and a ship, and the arrows indicate the cells they can move to, which change over time. We assume here that they move at the same speed as the robot. The airplane moves up and down along its current column, while the ship moves back and forth between its current cell and the cell two places to its left. The movement ranges are as shown by the arrows.
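
The figure from the original post is not reproduced here; for reference, the 5×5 cell indices and the positions just described are laid out below.

     0   1   2   3   4      robot starts at 0
     5   6   7   8   9      ship (dynamic) at 9, moving between 7 and 9
    10  11  12  13  14      static obstacle at 10
    15  16  17  18  19      static obstacle at 19
    20  21  22  23  24      airplane (dynamic) at 21, moving up and down between 1 and 21; battery at 24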
 

 

Assume that in each training episode the robot can choose one of four directions at every step; an episode terminates when the robot hits an obstacle or exceeds 20 steps.

2. Environment Code

2.1 Defining __init__(self)

A 1×25 list is used to represent the 5×5 grid, numbered 0 to 24 from the top-left to the bottom-right (states). The robot starts at cell 0, the battery is at cell 24, the static obstacles are at 10 and 19, and the dynamic obstacles are at 9 and 21.

In this environment, moving up changes the state index by -5, down by +5, left by -1, and right by +1 (actions).

The state and the action are concatenated into a key of the form state_action, which is used to look up the result of an action and the reward for taking that action in that state (reward). For example, "23_3" (moving right from cell 23) leads into the battery cell and carries a reward of 100, while any key that moves the agent into an obstacle carries -200.

An episode ends when the robot hits an obstacle or reaches the destination. To visualize the result, a 700×700 rendering window is used; the x coordinates of the cell centres are [150, 250, 350, 450, 550] repeated 5 times (since the environment is a 1×25 list with every 5 entries forming a row), and the y coordinates are 150, 250, 350, 450, 550, each repeated 5 times.
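
The original post does not show __init__ itself, so here is a minimal sketch of what it might contain, reconstructed from the description above and from the reset() and render() code in sections 2.3 and 2.4. The class name GridEnv, the way the transition table T is built, and the ordering of the y coordinates (chosen so that state 0 is drawn at the top-left of the window and the terminal at (550, 150)) are assumptions; the TerminateStates and Rewards dictionaries hold the same entries that reset() builds and are left abbreviated here.

import gym

class GridEnv(gym.Env):
    """Sketch of the environment; "GridEnv" is a placeholder name."""
    metadata = {'render.modes': ['human', 'rgb_array']}

    def __init__(self):
        self.viewer = None
        self.states = list(range(25))               # a 1x25 list standing in for the 5x5 grid
        self.Terminal = 24                          # battery (goal) in the bottom-right cell
        self.state = 0                              # robot starts in the top-left cell
        self.StaticObs1, self.StaticObs2 = 10, 19   # static obstacles
        self.DynamicObs1, self.DynamicObs2 = 21, 9  # airplane and ship
        self.actions = [0, 1, 2, 3]                 # 0: -5 (up), 1: +5 (down), 2: -1 (left), 3: +1 (right)
        self.Obs1Dir, self.Obs2Dir = 0, 2           # initial movement directions of the dynamic obstacles
        self.gamma = 0.8

        # Transition table T: "state_action" -> next state. Moves that would leave the
        # grid are simply absent, which is how step() detects wall collisions.
        deltas = {0: -5, 1: 5, 2: -1, 3: 1}
        self.T = dict()
        for s in self.states:
            for a, d in deltas.items():
                ns = s + d
                if ns < 0 or ns > 24:
                    continue                        # off the top or bottom edge
                if (a == 2 and s % 5 == 0) or (a == 3 and s % 5 == 4):
                    continue                        # would wrap around a row edge
                self.T["%d_%d" % (s, a)] = ns

        # TerminateStates and Rewards hold the same entries that reset() builds
        # (see section 2.3); they are left abbreviated in this sketch.
        self.TerminateStates = dict()
        self.Rewards = dict()

        # Cell-centre coordinates for the 700x700 window: x repeats the row pattern,
        # y is ordered so that state 0 is drawn at the top-left and state 24 at (550, 150).
        self.x = [150, 250, 350, 450, 550] * 5
        self.y = [v for v in (550, 450, 350, 250, 150) for _ in range(5)]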

2.2 Defining step(self, action)

The idea is as follows: the main loop decides which action to take and passes it into step. Inside step, the obstacles are moved and the reward table is updated; then the reward brought by this action is determined, along with whether the episode should end. The function therefore returns: the next state, the reward, done, and info.

Step 1: determine whether the agent has collided with an obstacle. There are two cases:

  • In the previous step the agent and a moving obstacle were adjacent, and after the action their positions swapped.
  • In the previous step the agent and the obstacle were not adjacent, but after the move their positions overlap. We therefore need to record the obstacle's position both before and after it moves. In the environment's step(self, action) we then have:
    def step(self, action):
        self.temp = dict()
        self.temp[self.DynamicObs1] = 1
        self.temp[self.DynamicObs2] = 1

        # Update terminate states
        self.TerminateStates.pop(self.DynamicObs1) # remove the dynamic obstacles' old cells; pop() deletes the key and returns its value
        self.TerminateStates.pop(self.DynamicObs2)

        if self.Obs1Dir == 0:         # airplane moves along its column; 0 means moving up (index - 5)
            if self.DynamicObs1 == 1:
                self.Obs1Dir = 1
                self.DynamicObs1 += 5
            else:
                self.DynamicObs1 -= 5
        else:
            if self.DynamicObs1 == 21:
                self.DynamicObs1 -= 5
                self.Obs1Dir = 0
            else:
                self.DynamicObs1 += 5

        if self.Obs2Dir == 2:         # ship moves along its row; 2 means moving left (index - 1)
            if self.DynamicObs2 == 7:
                self.DynamicObs2 += 1
                self.Obs2Dir = 3
            else:
                self.DynamicObs2 -= 1
        else:
            if self.DynamicObs2 == 9:
                self.Obs2Dir = 2
                self.DynamicObs2 -= 1
            else:
                self.DynamicObs2 += 1

        self.TerminateStates[self.DynamicObs1] = 1
        self.TerminateStates[self.DynamicObs2] = 1

        # Update rewards dictionary
        self.Rewards = dict()
        if self.DynamicObs1 == 21:
            self.Rewards[str(self.DynamicObs1 - 5) + '_1'] = -200.0
            self.Rewards[str(self.DynamicObs1 - 1) + '_3'] = -200.0
            self.Rewards[str(self.DynamicObs1 + 1) + '_2'] = -200.0
        elif self.DynamicObs1 == 1:
            self.Rewards[str(self.DynamicObs1 + 5) + '_0'] = -200.0
            self.Rewards[str(self.DynamicObs1 - 1) + '_3'] = -200.0
            self.Rewards[str(self.DynamicObs1 + 1) + '_2'] = -200.0
        else:
            self.Rewards[str(self.DynamicObs1 - 5) + '_1'] = -200.0
            self.Rewards[str(self.DynamicObs1 - 1) + '_3'] = -200.0
            self.Rewards[str(self.DynamicObs1 + 1) + '_2'] = -200.0
            self.Rewards[str(self.DynamicObs1 + 5) + '_0'] = -200.0

        if self.DynamicObs2 == 9:
            self.Rewards[str(self.DynamicObs2 - 5) + '_1'] = -200.0
            self.Rewards[str(self.DynamicObs2 - 1) + '_3'] = -200.0
            self.Rewards[str(self.DynamicObs2 + 5) + '_0'] = -200.0
        else:
            self.Rewards[str(self.DynamicObs2 - 5) + '_1'] = -200.0
            self.Rewards[str(self.DynamicObs2 - 1) + '_3'] = -200.0
            self.Rewards[str(self.DynamicObs2 + 1) + '_2'] = -200.0
            self.Rewards[str(self.DynamicObs2 + 5) + '_0'] = -200.0

        self.Rewards[str(self.StaticObs1 - 5) + '_1'] = -200.0
        self.Rewards[str(self.StaticObs1 + 1) + '_2'] = -200.0
        self.Rewards[str(self.StaticObs1 + 5) + '_0'] = -200.0
        
        self.Rewards[str(self.StaticObs2 - 5) + '_1'] = -200.0
        self.Rewards[str(self.StaticObs2 - 1) + '_3'] = -200.0
        #self.Rewards[str(self.StaticObs2 + 5) + '_0'] = -200.0
        #self.Rewards[str(self.Terminal - 5) + '_1'] = 100.0
        self.Rewards[str(self.Terminal - 1) + '_3'] = 100.0

At this point we have handled, for a single step, the obstacles' movement, the bookkeeping of their positions, and the update of the rewards.

Next, based on the action passed in, we determine the reward the agent gets for taking it: whether it runs into a wall, whether it hits an obstacle, whether it has moved closer to the goal, and so on.

        state = self.state
        key = "%d_%d"%(state,action)

        # Detect whether this action will lead to crashing into the wall
        if key in self.T:
            next_state = self.T[key]
        else:
            next_state = state
            r = -200.0
            is_Done = True
            return next_state, r, is_Done, {}

        # Detect whether this action will lead to crashing into the obstacles
        self.state = next_state     # Update state
        is_Done = False
        if next_state in self.TerminateStates or (next_state in self.temp and state in self.TerminateStates):
            is_Done = True

        if key not in self.Rewards:
            if (self.Terminal - next_state) < (self.Terminal - state):  # closer to the terminal, which helps learning
                r = 20.0
            else:
                r = -50.0
        else:
            r = self.Rewards[key]

        return next_state, r, is_Done, {}

2.3 Defining reset(self)

The reset function re-initializes the environment; its body can simply be copied from __init__(self).

    def reset(self):
        # Reset states and directions
        self.Terminal = 24
        self.state = 0
        self.StaticObs1 = 10
        self.StaticObs2 = 19
        self.DynamicObs1 = 21
        self.DynamicObs2 = 9        # ship starts at cell 9 and moves between 7 and 9
        self.actions = [0, 1, 2, 3]
        self.Obs1Dir = 0
        self.Obs2Dir = 2
        self.gamma = 0.8
        # self.viewer = None

        # Reset episodes stopping criterion
        self.TerminateStates = dict()
        self.TerminateStates[self.StaticObs1] = 1
        self.TerminateStates[self.StaticObs2] = 1
        self.TerminateStates[self.DynamicObs1] = 1
        self.TerminateStates[self.DynamicObs2] = 1
        self.TerminateStates[self.Terminal] = 1

        # Reset rewards dictionary
        self.Rewards = dict()
        self.Rewards[str(self.DynamicObs1 - 5) + '_1'] = -200.0
        self.Rewards[str(self.DynamicObs1 - 1) + '_3'] = -200.0
        self.Rewards[str(self.DynamicObs1 + 1) + '_2'] = -200.0
        self.Rewards[str(self.DynamicObs2 - 5) + '_1'] = -200.0
        self.Rewards[str(self.DynamicObs2 - 1) + '_3'] = -200.0
        self.Rewards[str(self.DynamicObs2 + 5) + '_0'] = -200.0
        self.Rewards[str(self.StaticObs1 - 5) + '_1'] = -200.0
        self.Rewards[str(self.StaticObs1 + 1) + '_2'] = -200.0
        self.Rewards[str(self.StaticObs1 + 5) + '_0'] = -200.0
        self.Rewards[str(self.StaticObs2 - 5) + '_1'] = -200.0
        self.Rewards[str(self.StaticObs2 - 1) + '_3'] = -200.0
        self.Rewards[str(self.StaticObs2 + 5) + '_0'] = -200.0
        self.Rewards[str(self.Terminal - 5) + '_1'] = 100.0
        self.Rewards[str(self.Terminal - 1) + '_3'] = 100.0

        return self

2.4 Defining render(self, mode='human')


    def render(self, mode='human'):
        from gym.envs.classic_control import rendering
        screen_width = 700
        screen_height = 700

        if self.viewer is None:
            self.viewer = rendering.Viewer(screen_width,screen_height)

            # Plot the GridWorld
            self.line1 = rendering.Line((100,100),(600,100))
            self.line2 = rendering.Line((100, 200), (600, 200))
            self.line3 = rendering.Line((100, 300), (600, 300))
            self.line4 = rendering.Line((100, 400), (600, 400))
            self.line5 = rendering.Line((100, 500), (600, 500))
            self.line6 = rendering.Line((100, 600), (600, 600))

            self.line7 = rendering.Line((100, 100), (100, 600))
            self.line8 = rendering.Line((200, 100), (200, 600))
            self.line9 = rendering.Line((300, 100), (300, 600))
            self.line10 = rendering.Line((400, 100), (400, 600))
            self.line11 = rendering.Line((500, 100), (500, 600))
            self.line12 = rendering.Line((600, 100), (600, 600))


            # Plot dynamic obstacle_1
            self.obs1 = rendering.make_circle(40)
            self.obs1trans = rendering.Transform()    # translation=(250, 150)
            self.obs1.add_attr(self.obs1trans)
            self.obs1.set_color(1, 0, 0)

            # Plot dynamic obstacle_2
            self.obs2 = rendering.make_circle(40)
            self.obs2trans = rendering.Transform()
            self.obs2.add_attr(self.obs2trans)
            self.obs2.set_color(1, 0, 0)

            # Plot static obstacle_1
            self.obstacle_1 = rendering.make_circle(40)
            self.obstacle1trans = rendering.Transform()
            self.obstacle_1.add_attr(self.obstacle1trans)
            self.obstacle_1.set_color(0, 0, 0)

            # Plot static obstacle_2
            self.obstacle_2 = rendering.make_circle(40)
            self.obstacle2trans = rendering.Transform()
            self.obstacle_2.add_attr(self.obstacle2trans)
            self.obstacle_2.set_color(0, 0, 0)

            # Plot Terminal
            self.terminal = rendering.make_circle(40)
            self.circletrans = rendering.Transform(translation=(550, 150))
            self.terminal.add_attr(self.circletrans)
            self.terminal.set_color(0, 0, 1)

            # Plot robot
            self.robot= rendering.make_circle(30)
            self.robotrans = rendering.Transform()
            self.robot.add_attr(self.robotrans)
            self.robot.set_color(0, 1, 0)

            self.line1.set_color(0, 0, 0)
            self.line2.set_color(0, 0, 0)
            self.line3.set_color(0, 0, 0)
            self.line4.set_color(0, 0, 0)
            self.line5.set_color(0, 0, 0)
            self.line6.set_color(0, 0, 0)
            self.line7.set_color(0, 0, 0)
            self.line8.set_color(0, 0, 0)
            self.line9.set_color(0, 0, 0)
            self.line10.set_color(0, 0, 0)
            self.line11.set_color(0, 0, 0)
            self.line12.set_color(0, 0, 0)

            self.viewer.add_geom(self.line1)
            self.viewer.add_geom(self.line2)
            self.viewer.add_geom(self.line3)
            self.viewer.add_geom(self.line4)
            self.viewer.add_geom(self.line5)
            self.viewer.add_geom(self.line6)
            self.viewer.add_geom(self.line7)
            self.viewer.add_geom(self.line8)
            self.viewer.add_geom(self.line9)
            self.viewer.add_geom(self.line10)
            self.viewer.add_geom(self.line11)
            self.viewer.add_geom(self.line12)
            self.viewer.add_geom(self.obs1)
            self.viewer.add_geom(self.obs2)
            self.viewer.add_geom(self.obstacle_1)
            self.viewer.add_geom(self.obstacle_2)
            self.viewer.add_geom(self.terminal)
            self.viewer.add_geom(self.robot)

        if self.state is None:
            return None

        self.robotrans.set_translation(self.x[self.state], self.y[self.state])
        self.obs1trans.set_translation(self.x[self.DynamicObs1], self.y[self.DynamicObs1])
        self.obs2trans.set_translation(self.x[self.DynamicObs2], self.y[self.DynamicObs2])
        self.obstacle1trans.set_translation(self.x[self.StaticObs1], self.y[self.StaticObs1])
        self.obstacle2trans.set_translation(self.x[self.StaticObs2], self.y[self.StaticObs2])
        return self.viewer.render(return_rgb_array=mode == 'rgb_array')

    def close(self):
        if self.viewer:
            self.viewer.close()
            self.viewer = None

2.5 Testing the Environment
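
The original post does not include the test script; a minimal smoke test that drives the environment with random actions could look like the sketch below. GridEnv is a placeholder name for the environment class defined above, and the sleep call is only there to make the rendered animation watchable.

import random
import time

env = GridEnv()                      # "GridEnv" is a placeholder name for the class above
for episode in range(3):
    env.reset()
    for step in range(20):           # episodes are capped at 20 steps
        env.render()
        action = random.choice(env.actions)
        next_state, reward, done, info = env.step(action)
        time.sleep(0.2)              # slow the loop down so the window is watchable
        if done:
            break
env.close()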

Rendered result:

(Figure: the rendered environment window)

2.6 Training with the Q-learning Algorithm

Source code: https://download.csdn.net/download/sinat_39620217/16792297

https://gitee.com/dingding962285595/myenv/tree/master/gym/girdenv_plus
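
The actual training program lives in the repositories linked above; a minimal sketch of tabular Q-learning against this environment is given below. The hyperparameters, the GridEnv class name, and the printed log format are illustrative assumptions rather than the repository's exact settings.

import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.8, epsilon=0.1, max_steps=20):
    Q = defaultdict(float)                       # Q[(state, action)] -> estimated value
    for ep in range(episodes):
        env.reset()                              # reset() returns the env itself, so read env.state
        state = env.state
        episode_reward = 0.0
        for step in range(max_steps):
            # epsilon-greedy choice over the four moves
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: Q[(state, a)])
            next_state, r, done, _ = env.step(action)
            # one-step Q-learning update towards the greedy value of the next state
            best_next = max(Q[(next_state, a)] for a in env.actions)
            Q[(state, action)] += alpha * (r + gamma * best_next - Q[(state, action)])
            state = next_state
            episode_reward += r
            if done:
                break
        print("episode %d: %d steps, episode_reward = %.1f" % (ep, step + 1, episode_reward))
    return Q

Q = q_learning(GridEnv())                        # "GridEnv" is a placeholder class name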

Training result: the final path eventually becomes fixed.

Completed 116 training episodes; this episode took 8 steps. episode_reward: 100, average score: 94.000000
Completed 116 training episodes; this episode took 8 steps. episode_reward: 100, average score: 95.500000
Completed 117 training episodes; this episode took 8 steps. episode_reward: 100, average score: 97.000000
Completed 117 training episodes; this episode took 8 steps. episode_reward: 100, average score: 97.000000
Completed 118 training episodes; this episode took 8 steps. episode_reward: 100, average score: 97.000000
Completed 118 training episodes; this episode took 8 steps. episode_reward: 100, average score: 97.000000
Completed 119 training episodes; this episode took 8 steps. episode_reward: 100, average score: 97.000000
Completed 119 training episodes; this episode took 8 steps. episode_reward: 100, average score: 98.500000
After 16 s and 119 training episodes, the model reached the test criterion!

3. Training Results After Extending the Environment

It is worth trying a larger grid, for example changing the 5×5 grid to 6×6, and seeing how it performs. A sketch of the size-dependent pieces follows, and the adapted code is linked below.
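
When the grid is enlarged, everything that depends on the width 5 has to be re-derived: the transition table and its row-wrap checks, the cell-centre coordinates, and of course the obstacle positions, the terminal index, and the Rewards dictionary. The sketch below covers only the size-dependent layout pieces for an n×n grid; the helper name grid_layout, the 100-pixel cell size, and the 100-pixel margin are assumptions.

def grid_layout(n=6, cell=100, margin=100):
    """Hypothetical helper: cell-centre coordinates and transition table for an n x n grid."""
    centres = [margin + cell // 2 + i * cell for i in range(n)]   # 150, 250, ... with the defaults
    x = centres * n                                               # the column pattern repeats for every row
    y = [v for v in reversed(centres) for _ in range(n)]          # state 0 drawn at the top-left of the window
    deltas = {0: -n, 1: n, 2: -1, 3: 1}                           # same action encoding as the 5x5 version
    T = {}
    for s in range(n * n):
        for a, d in deltas.items():
            ns = s + d
            if ns < 0 or ns >= n * n:
                continue                                          # off the top or bottom edge
            if (a == 2 and s % n == 0) or (a == 3 and s % n == n - 1):
                continue                                          # would wrap around a row edge
            T["%d_%d" % (s, a)] = ns
    return x, y, T

With these defaults and n = 6, the window created in render() would also need to grow to roughly 800×800, and the grid lines would have to be redrawn accordingly.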

Code link: https://download.csdn.net/download/sinat_39620217/16792814

https://gitee.com/dingding962285595/myenv/tree/master/gym/girdenv_plus/girdenv_plus%E6%94%B9

Completed 103 training episodes; this episode took 10 steps. episode_reward: 100, average score: 82.000000
Completed 103 training episodes; this episode took 10 steps. episode_reward: 100, average score: 83.500000
Completed 104 training episodes; this episode took 10 steps. episode_reward: 100, average score: 85.000000
Completed 104 training episodes; this episode took 10 steps. episode_reward: 100, average score: 86.500000
Completed 105 training episodes; this episode took 10 steps. episode_reward: 100, average score: 88.000000
Completed 105 training episodes; this episode took 10 steps. episode_reward: 100, average score: 89.500000
Completed 106 training episodes; this episode took 10 steps. episode_reward: 100, average score: 91.000000
Completed 106 training episodes; this episode took 10 steps. episode_reward: 100, average score: 92.500000
Completed 107 training episodes; this episode took 10 steps. episode_reward: 100, average score: 94.000000
Completed 107 training episodes; this episode took 10 steps. episode_reward: 100, average score: 95.500000
Completed 108 training episodes; this episode took 10 steps. episode_reward: 100, average score: 97.000000
Completed 108 training episodes; this episode took 10 steps. episode_reward: 100, average score: 98.500000

Personally, I feel the environment could be extended even further; an 8×8 or a 7×8 grid would be ideal.

Reference blog: https://blog.csdn.net/Adobii/article/details/111825011

posted @ 2022-10-27 21:34 汀、人工智能