【RL】L7-Temporal-difference learning

TD learning of state values

The data/experience required by the algorithm:

  • $(s_0, r_1, s_1, \ldots, s_t, r_{t+1}, s_{t+1}, \ldots)$, or equivalently the sample set $\{(s_t, r_{t+1}, s_{t+1})\}_t$, generated by following the given policy $\pi$ (see the sketch below).
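For concreteness, here is a minimal sketch of collecting such experience. The five-state chain environment, its reward, and the uniform random policy are hypothetical stand-ins for illustration, not part of the original notes.

```python
import random

# Hypothetical toy MDP (an assumption for illustration, not from the notes):
# five states 0..4 on a line; actions move one step left (-1) or right (+1).
N_STATES = 5

def step(s, a):
    """Environment transition: returns (r_{t+1}, s_{t+1})."""
    s_next = max(0, min(N_STATES - 1, s + a))
    r = 1.0 if s_next == N_STATES - 1 else 0.0  # reward only at the right end
    return r, s_next

def policy(s):
    """The given policy pi: here, a uniform random choice of left/right."""
    return random.choice([-1, 1])

def collect_experience(s0=0, steps=100):
    """Generate the samples {(s_t, r_{t+1}, s_{t+1})}_t by following pi."""
    s = s0
    samples = []
    for _ in range(steps):
        r, s_next = step(s, policy(s))
        samples.append((s, r, s_next))
        s = s_next
    return samples
```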

The TD learning algorithm is

$$v_{t+1}(s_t) = v_t(s_t) - \alpha_t(s_t)\Big[ v_t(s_t) - \big[ r_{t+1} + \gamma v_t(s_{t+1}) \big] \Big], \qquad v_{t+1}(s) = v_t(s), \quad \forall s \neq s_t,$$

where $t = 0, 1, 2, \ldots$. Here, $v_t(s_t)$ is the estimate of the true state value $v_\pi(s_t)$, and $\alpha_t(s_t)$ is the learning rate for state $s_t$ at time $t$.

$\mathcal{S}$: the state space, with $s \in \mathcal{S}$.
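Continuing the toy setup from the sketch above, the update can be implemented directly. The constant learning rate $\alpha$ (used here in place of the state- and time-dependent $\alpha_t(s_t)$) and the discount $\gamma = 0.9$ are assumptions.

```python
def td_learning(samples, gamma=0.9, alpha=0.1):
    """TD learning of v_pi from samples {(s_t, r_{t+1}, s_{t+1})}_t.

    Implements the update above:
        v(s_t) <- v(s_t) - alpha * (v(s_t) - (r_{t+1} + gamma * v(s_{t+1}))),
    leaving v(s) unchanged for every s != s_t.
    """
    v = [0.0] * N_STATES  # initial estimates v_0(s)
    for s, r, s_next in samples:
        td_target = r + gamma * v[s_next]   # r_{t+1} + gamma * v_t(s_{t+1})
        v[s] -= alpha * (v[s] - td_target)  # update only the visited state s_t
    return v

# Usage: estimate v_pi from a long run of experience.
print(td_learning(collect_experience(steps=10_000)))
```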
