Analysis of the EpisodicLifeEnv environment wrapper in baselines

As the title says, the wrapper in question is:
```python
import gym


class EpisodicLifeEnv(gym.Wrapper):
    def __init__(self, env):
        """Make end-of-life == end-of-episode, but only reset on true game over.
        Done by DeepMind for the DQN and co. since it helps value estimation.
        """
        gym.Wrapper.__init__(self, env)
        self.lives = 0
        self.was_real_done = True

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self.was_real_done = done
        # check current lives, make loss of life terminal,
        # then update lives to handle bonus lives
        lives = self.env.unwrapped.ale.lives()
        if lives < self.lives and lives > 0:
            # for Qbert sometimes we stay in lives == 0 condition for a few frames
            # so it's important to keep lives > 0, so that we only reset once
            # the environment advertises done.
            done = True
        self.lives = lives
        return obs, reward, done, info

    def reset(self, **kwargs):
        """Reset only when lives are exhausted.
        This way all states are still reachable even though lives are episodic,
        and the learner need not know about any of this behind-the-scenes.
        """
        if self.was_real_done:
            obs = self.env.reset(**kwargs)
        else:
            # no-op step to advance from terminal/lost life state
            obs, _, _, _ = self.env.step(0)
        self.lives = self.env.unwrapped.ale.lives()
        return obs
```
The EpisodicLifeEnv wrapper is meant for environments in which the game gives the player multiple lives; the number of lives remaining is read via `lives = self.env.unwrapped.ale.lives()`.
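For a concrete feel for this counter, here is a minimal probe sketch. It assumes the old-style gym (pre-0.26, 4-tuple `step` API) with the Atari environments installed; the env id is only an example of a multi-life game:

```python
import gym

# Probe the ALE lives counter on a multi-life Atari game.
env = gym.make("QbertNoFrameskip-v4")
env.reset()
print(env.unwrapped.ale.lives())  # e.g. 4 starting lives for Qbert
obs, reward, done, info = env.step(env.action_space.sample())
print(env.unwrapped.ale.lives())  # unchanged until a life is actually lost
```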
The main piece of code that needs explaining is:
```python
if lives < self.lives and lives > 0:
    # for Qbert sometimes we stay in lives == 0 condition for a few frames
    # so it's important to keep lives > 0, so that we only reset once
    # the environment advertises done.
    done = True
```
From the comment we can tell that in Qbert, when the remaining lives first hit 0, the environment still returns done=False; only a few frames later does it report done=True. Suppose we changed the condition

```python
if lives < self.lives and lives > 0:
```

to

```python
if lives < self.lives and lives >= 0:
```
Then the `obs, reward, done, info` returned by `step` at the moment lives drop from 1 to 0 would be treated as the final frame of an episode, and the subsequent `reset` call would take this branch:
```python
else:
    # no-op step to advance from terminal/lost life state
    obs, _, _, _ = self.env.step(0)
```
In the few frames that follow, `self.was_real_done` is still False while `self.env.unwrapped.ale.lives()` remains 0, so this `reset` is effectively wasted: it only performs the no-op step rather than truly restarting the game, the learner is handed the lives == 0 limbo frames as a degenerate extra episode, and only once the environment finally advertises done does a second `reset` actually restart the game.
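To make this concrete, here is a sketch of a standard collection loop (same old-gym assumptions as above; the env id and step budget are illustrative). With the `>= 0` condition, the done forced at the 1 → 0 life transition lands on the `if done` branch while `was_real_done` is still False:

```python
import gym

# Standard episode loop; with "lives >= 0" the forced done arrives while
# was_real_done is still False, so the reset() below only executes the
# wrapper's internal no-op step(0) instead of truly restarting the game.
env = EpisodicLifeEnv(gym.make("QbertNoFrameskip-v4"))
obs = env.reset()
for _ in range(10000):
    obs, reward, done, info = env.step(env.action_space.sample())
    if done:
        obs = env.reset()
```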
Of course, this Qbert quirk can also be handled with a different modification:
```python
class EpisodicLifeEnv(gym.Wrapper):
    def __init__(self, env):
        """Make end-of-life == end-of-episode, but only reset on true game over.
        Done by DeepMind for the DQN and co. since it helps value estimation.
        """
        gym.Wrapper.__init__(self, env)
        self.lives = 0
        self.was_real_done = True

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        # self.was_real_done = done
        # check current lives, make loss of life terminal,
        # then update lives to handle bonus lives
        lives = self.env.unwrapped.ale.lives()
        if lives < self.lives:
            # for Qbert sometimes we stay in lives == 0 condition for a few frames
            # so it's important to keep lives > 0, so that we only reset once
            # the environment advertises done.
            done = True
        self.lives = lives
        return obs, reward, done, info

    def reset(self, **kwargs):
        """Reset only when lives are exhausted.
        This way all states are still reachable even though lives are episodic,
        and the learner need not know about any of this behind-the-scenes.
        """
        # if self.was_real_done:
        if self.lives == 0:
            obs = self.env.reset(**kwargs)
        else:
            # no-op step to advance from terminal/lost life state
            obs, _, _, _ = self.env.step(0)
        self.lives = self.env.unwrapped.ale.lives()
        return obs
```
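As a rough sanity check of this variant (a sketch under the same old-gym assumptions; env id and step budget are arbitrary, not a definitive test), one can count life-loss episode ends against true game restarts. With the modified wrapper, a full restart happens exactly when `env.lives` has reached 0 by the time `reset` is called, so for Qbert, which starts with several lives, the first counter should exceed the second by roughly that factor:

```python
import gym

# Count wrapper-level episode ends vs. true game restarts with the
# modified wrapper defined above.
env = EpisodicLifeEnv(gym.make("QbertNoFrameskip-v4"))
obs = env.reset()
life_episodes, true_restarts = 0, 0
for _ in range(20000):
    _, _, done, _ = env.step(env.action_space.sample())
    if done:
        life_episodes += 1
        if env.lives == 0:  # the next reset() will truly restart the game
            true_restarts += 1
        obs = env.reset()
print(life_episodes, true_restarts)
```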