Reinforcement Learning (Ed2) Chapters 1–2: Exercise Solutions

By Chapter 3 the number of exercises in the RL book becomes unbelievable.

The first two chapters are fairly simple, so I'm just jotting the answers down here on the blog. Later chapters will be posted as PDF updates.

1.1: With a randomized action choice, self-play produces different moves even from the very first step, so the method would have to learn two sets of value functions, one for moving first and one for moving second. In general I believe self-play would improve the ability to win over the long run, but it converges more slowly than playing against someone with knowledge. Indeed, self-play sets no learning objective tied to future opponents, which can make exploiting a particular opponent's weaknesses harder.

1.2: Mirror positions should be bound to the same entry in the value function, either by storing all the symmetric images of each state together or by rotating/reflecting boards to a canonical form during play (a sketch follows below). However, if the opponent does not play symmetrically and holds strange beliefs about particular patterns, the value function should treat those states separately in order to exploit the difference. That said, if trained against a well-playing opponent such an amendment is no longer necessary; in any case the agent has no prior information about its opponent.
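A minimal sketch (my own illustration, not the book's code) of the canonical-form idea: map every board to one representative of its symmetry class before looking it up, so all mirrored/rotated positions share a single value-table entry.

```python
# Canonicalize a tic-tac-toe board under the 8 symmetries of the square
# (4 rotations, each optionally mirrored), so that symmetric positions
# share one value-function entry.
def symmetries(board):
    """board: tuple of 9 cells in row-major order; yields its 8 symmetric images."""
    def rot90(b):   # rotate the 3x3 grid 90 degrees clockwise
        return tuple(b[6 - 3 * (i % 3) + i // 3] for i in range(9))
    def mirror(b):  # reflect left-right
        return tuple(b[3 * (i // 3) + 2 - i % 3] for i in range(9))
    b = board
    for _ in range(4):
        yield b
        yield mirror(b)
        b = rot90(b)

def canonical(board):
    """Deterministic representative: the lexicographically smallest image."""
    return min(symmetries(board))

# Usage: index the value table by canonical(board) instead of board itself.
values = {}
board = ('X', '', '', '', 'O', '', '', '', '')
values.setdefault(canonical(board), 0.5)
```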

1.3: Under the assumption that the agent still explores, greedy may be good; for example, in the 10-armed bandit problem the traditional solution is essentially greedy. However, less greedy schemes such as softmax have better performance and convergence speed, since they quickly learn the outcomes of all actions rather than only the seemingly great ones (both selection rules are sketched below). Of course, if the opponent changes its policy, greedy will be very slow to react, and a different action-selection method has to be considered.
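For concreteness, a small sketch (mine, assumes numpy; the ε and temperature values are arbitrary choices for illustration) of the two selection rules: ε-greedy exploits except for occasional uniform exploration, while softmax keeps a graded probability on every action.

```python
import numpy as np

def epsilon_greedy(Q, eps=0.1, rng=np.random.default_rng()):
    """Pick the best-looking action, except with probability eps pick uniformly."""
    if rng.random() < eps:
        return int(rng.integers(len(Q)))
    return int(np.argmax(Q))

def softmax_choice(Q, tau=0.25, rng=np.random.default_rng()):
    """Sample an action with probability proportional to exp(Q/tau)."""
    z = (np.asarray(Q, dtype=float) - np.max(Q)) / tau   # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return int(rng.choice(len(Q), p=p))
```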

1.4: Skipped; the problem statement seems unclear to me. Briefly: if we explore but also back up values through the exploratory moves, the earlier moves get wrongly credited with an outcome that only exploration could have produced, raising or lowering the evaluation of every action along the way, even though that action sequence may never be repeated, since it came from a random exploratory choice.

1.5: Skipped; too open-ended. Many of the improvements are actually discussed later in the book, e.g. combining with deep learning.

2.1: 75% (ε is the probability of exploring over the entire action space, including the optimal action, not just the non-optimal ones).
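The calculation, for two actions and ε = 0.5 (an exploratory step picks uniformly among all actions, greedy one included):

$$
P(\text{greedy}) \;=\; (1-\varepsilon) + \frac{\varepsilon}{2} \;=\; 0.5 + 0.25 \;=\; 0.75 .
$$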

2.2: Time steps 4 and 5 definitely, and possibly every step. At steps 4 and 5 the selected action is not greedy under the current sample-average estimates, so the ε case must have occurred there; see 2.1 for why any of the steps could have been exploratory as well (a quick check is sketched below).
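A quick check (my own sketch; the action/reward sequence is the one I recall from Exercise 2.2, with Q1(a) = 0 and sample-average updates) that flags the steps where the chosen action cannot have been greedy:

```python
# Replay the Exercise 2.2 sequence with sample-average estimates and report
# the time steps where the chosen action was not among the greedy ones.
actions = [1, 2, 2, 2, 3]      # A1..A5 (arms are 1-indexed)
rewards = [-1, 1, -2, 2, 0]    # R1..R5

Q = [0.0] * 4                  # Q1(a) = 0 for all four arms
N = [0] * 4
for t, (a, r) in enumerate(zip(actions, rewards), start=1):
    greedy = [i + 1 for i, q in enumerate(Q) if q == max(Q)]
    if a not in greedy:
        print(f"step {t}: A{t}={a} is not in the greedy set {greedy} -> exploration for sure")
    N[a - 1] += 1
    Q[a - 1] += (r - Q[a - 1]) / N[a - 1]   # incremental sample average
```

With this sequence the check prints steps 4 and 5 as the definite explorations.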

2.3: The one with ε = 0.01. Its limiting probability of selecting the optimal action is just above 0.99, higher than that of the ε = 0.1 method, so given enough time steps the method that explores less always ends up with the higher cumulative reward.
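The limiting probabilities of picking the optimal arm on the 10-armed testbed (an exploratory step lands on the optimal arm 1/10 of the time):

$$
(1-0.01) + \frac{0.01}{10} = 0.991
\qquad>\qquad
(1-0.1) + \frac{0.1}{10} = 0.91 .
$$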

2.4: The math is omitted; the point is that the weights get tied to n. Roughly, a 1/n-style step size (division) keeps relatively more weight on the earlier rewards, while a constant multiplicative step size emphasizes the later ones. The general weighting is written out below.
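For reference, unrolling $Q_{n+1} = Q_n + \alpha_n (R_n - Q_n)$ gives the general weighting the exercise asks for:

$$
Q_{n+1} \;=\; \Bigl[\prod_{i=1}^{n}(1-\alpha_i)\Bigr] Q_1
\;+\; \sum_{i=1}^{n} \alpha_i \Bigl[\prod_{k=i+1}^{n}(1-\alpha_k)\Bigr] R_i .
$$

With $\alpha_i = 1/i$ the products telescope and every reward gets the same weight $1/n$, while a constant $\alpha$ makes the weight on $R_i$ decay as $(1-\alpha)^{\,n-i}$, which is the "earlier vs. later" contrast above.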

2.5: TODO

2.6: This one is really nasty. My guess: the initial estimates of 5 are far too optimistic, so whatever gets selected, its value estimate drops sharply, sweep after sweep, until the estimates are near their true values. By then t has been growing but is still not large, so for a very brief period the algorithm exploits the few actions that currently look best (with so few samples, picking the optimal action by value is only about 40% accurate), which produces the spike in the figure. Right after that, as t keeps increasing, the second (exploration-bonus) term grows and triggers another round of exploration, pushing the reward back down, until the algorithm reaches a steady phase: n is large enough that the second term has effectively vanished and the algorithm is close to optimal. Hence the curve rises, dips, and rises again. A small simulation sketch follows below.
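A minimal testbed sketch (my own, not the book's code; assumes numpy and uses the parameters I believe the figures use: k = 10 arms, Q1 = 5 with α = 0.1 for optimistic greedy, c = 2 for UCB with sample averages) that reproduces the early oscillation/spike shape described above:

```python
import numpy as np

def run(method, steps=30, runs=2000, k=10, alpha=0.1, c=2.0, q0=5.0):
    """Average reward per step over many bandit runs for the first few steps."""
    rng = np.random.default_rng(0)
    avg_reward = np.zeros(steps)
    for _ in range(runs):
        q_true = rng.normal(0.0, 1.0, k)                   # stationary true values
        Q = np.full(k, q0 if method == "optimistic" else 0.0)
        N = np.zeros(k)
        for t in range(steps):
            if method == "ucb":
                # untried arms get an effectively infinite bonus
                bonus = c * np.sqrt(np.log(t + 1) / np.maximum(N, 1e-12))
                a = int(np.argmax(Q + bonus))
            else:                                          # optimistic greedy
                a = int(np.argmax(Q))
            r = rng.normal(q_true[a], 1.0)
            N[a] += 1
            if method == "ucb":
                Q[a] += (r - Q[a]) / N[a]                  # sample average
            else:
                Q[a] += alpha * (r - Q[a])                 # constant step size
            avg_reward[t] += r
    return avg_reward / runs

for m in ("optimistic", "ucb"):
    print(m, np.round(run(m), 2))                          # spike shows up around step k+1
```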

2.7: Math omitted, but the result is roughly an exponentially weighted average; to put it grandly, somewhat like a kernel method? The step-size trick itself is recalled below.
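For context, the trick in this exercise (as I recall it from the book) replaces the step size with $\beta_n$ below; since $\bar o_1 = \alpha$ we get $\beta_1 = 1$, so the initial estimate $Q_1$ drops out immediately, while later rewards still receive approximately exponential recency weighting:

$$
\beta_n \doteq \frac{\alpha}{\bar o_n},\qquad
\bar o_n \doteq \bar o_{n-1} + \alpha\,(1-\bar o_{n-1}),\qquad \bar o_0 \doteq 0 .
$$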

2.8: Much the same as 2.6; I actually discussed the two together above (the "second term" there is the UCB exploration bonus).

2.9: This question is a bit cheeky, since the book never introduced the sigmoid. Yes, the two are clearly equivalent: the sigmoid can be written as e^{2z}/(e^{z} + e^{2z}), which is exactly a softmax choice between two actions whose preferences differ by z.
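Written out for two actions with preferences $H_1$ and $H_2$, the softmax probability depends only on their difference and is exactly the logistic/sigmoid function:

$$
\Pr\{A_t = 1\} \;=\; \frac{e^{H_1}}{e^{H_1}+e^{H_2}}
\;=\; \frac{1}{1+e^{-(H_1-H_2)}}
\;=\; \sigma(H_1 - H_2).
$$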

2.10: In this setting, as long as the case is unknown, any algorithm claiming an expected reward above 0.5 is cheating, since both actions then have the same expected value. A traditional constant-α method might work, but the best result is, by inspection, plain greedy, even when the case label is given.
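For the arithmetic (assuming I remember the exercise's numbers correctly: true values 0.1/0.2 in one case and 0.9/0.8 in the other, each case equally likely):

$$
\mathbb{E}[R \mid a_1] = 0.5\cdot 0.1 + 0.5\cdot 0.9 = 0.5,
\qquad
\mathbb{E}[R \mid a_2] = 0.5\cdot 0.2 + 0.5\cdot 0.8 = 0.5,
$$

so without knowing the case no policy can expect more than 0.5; knowing the case and picking its better action gives $0.5\cdot 0.2 + 0.5\cdot 0.9 = 0.55$ at best.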

2.11: TODO

 

posted @ 2019-04-21 15:05  LyWangJapan