Learning to Use FinRL -- Code Reading Notes [0]: Basics

class BaseCallback(ABC):

It is defined under stable_baselines3/common (in callbacks.py). Once the FinRL library is installed, which pulls in stable_baselines3, it can be imported directly.

For example:

On Windows: C:\Users\admin\AppData\Roaming\Python\Python39\site-packages\stable_baselines3

On Linux: /usr/local/lib/python3.8/dist-packages/stable_baselines3

It is an abstract base class that encapsulates common steps such as on_step, update_locals, and saving/loading data.

This base class wraps the basic hooks the agent calls during training, such as on_step.
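To make this concrete, a custom callback only needs to subclass BaseCallback and implement _on_step; the public on_step wrapper updates n_calls and num_timesteps and stops training when _on_step returns False. A minimal sketch (the class name and print frequency below are made up for illustration, not part of stable_baselines3 or FinRL):

from stable_baselines3.common.callbacks import BaseCallback

class PrintProgressCallback(BaseCallback):
    # hypothetical example callback, not part of stable_baselines3 or FinRL
    def __init__(self, print_freq: int = 1000, verbose: int = 0):
        super().__init__(verbose=verbose)
        self.print_freq = print_freq

    def _on_step(self) -> bool:
        # n_calls and num_timesteps are kept up to date by BaseCallback.on_step()
        if self.n_calls % self.print_freq == 0:
            print(f"calls={self.n_calls}, timesteps={self.num_timesteps}")
        return True  # returning False would stop training early

The instance is then handed to the agent via model.learn(..., callback=PrintProgressCallback()).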

Many other callbacks extend it:
 
class EventCallback(BaseCallback):
    def __init__(self, callback: Optional[BaseCallback] = None, verbose: int = 0):
 
class CallbackList(BaseCallback):
    def __init__(self, callbacks: List[BaseCallback]):
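CallbackList simply forwards every event to each callback in the list, so several callbacks can run side by side during one learn() call; passing a plain Python list to model.learn() is equivalent, since the algorithm wraps it in a CallbackList internally. A rough sketch, assuming PPO on the toy CartPole-v1 env as a stand-in for a FinRL trading environment (paths and frequencies are arbitrary):

import gym
from stable_baselines3 import PPO
from stable_baselines3.common.callbacks import CallbackList, CheckpointCallback, EvalCallback

checkpoint_cb = CheckpointCallback(save_freq=5_000, save_path="./checkpoints/")
eval_cb = EvalCallback(gym.make("CartPole-v1"), eval_freq=5_000)

model = PPO("MlpPolicy", gym.make("CartPole-v1"), verbose=0)
# both callbacks fire during the same training run
model.learn(total_timesteps=50_000, callback=CallbackList([checkpoint_cb, eval_cb]))
# equivalent shorthand: model.learn(..., callback=[checkpoint_cb, eval_cb])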

 

class CheckpointCallback(BaseCallback):
    def __init__(self, save_freq: int, save_path: str, name_prefix: str = "rl_model", verbose: int = 0):
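CheckpointCallback periodically saves the model while training. A minimal sketch with arbitrary frequency and path; with name_prefix="rl_model" the snapshots are written into save_path with names of the form rl_model_<num_timesteps>_steps.zip:

from stable_baselines3.common.callbacks import CheckpointCallback

# save a model snapshot every `save_freq` callback calls (environment steps)
checkpoint_cb = CheckpointCallback(
    save_freq=10_000,
    save_path="./checkpoints/",
    name_prefix="rl_model",
    verbose=1,
)
# then: model.learn(total_timesteps=..., callback=checkpoint_cb)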
 
class EvalCallback(EventCallback):
    def __init__(
        self,
        eval_env: Union[gym.Env, VecEnv],
        callback_on_new_best: Optional[BaseCallback] = None,
        callback_after_eval: Optional[BaseCallback] = None,
        n_eval_episodes: int = 5,
        eval_freq: int = 10000,
        log_path: Optional[str] = None,
        best_model_save_path: Optional[str] = None,
        deterministic: bool = True,
        render: bool = False,
        verbose: int = 1,
        warn: bool = True,
    ):
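EvalCallback periodically evaluates the current policy on a separate evaluation environment, keeps the best model, and logs the results. A usage sketch, again with CartPole-v1 and PPO as placeholders for the FinRL environment and agent; the paths are arbitrary:

import gym
from stable_baselines3 import PPO
from stable_baselines3.common.callbacks import EvalCallback

eval_env = gym.make("CartPole-v1")          # a separate env, not the training one
eval_cb = EvalCallback(
    eval_env,
    n_eval_episodes=5,                      # episodes averaged per evaluation
    eval_freq=10_000,                       # evaluate every 10 000 steps
    best_model_save_path="./best_model/",   # best model is saved here
    log_path="./eval_logs/",                # evaluation results are logged here
    deterministic=True,
)

model = PPO("MlpPolicy", gym.make("CartPole-v1"), verbose=0)
model.learn(total_timesteps=100_000, callback=eval_cb)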
 
class StopTrainingOnRewardThreshold(BaseCallback):
    def __init__(self, reward_threshold: float, verbose: int = 0):
        super(StopTrainingOnRewardThreshold, self).__init__(verbose=verbose)
        self.reward_threshold = reward_threshold
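StopTrainingOnRewardThreshold is meant to be plugged into an EvalCallback via callback_on_new_best: each time evaluation produces a new best mean reward, it checks the threshold and stops training once it is reached. A sketch (the threshold 475 is just CartPole's solve score, an arbitrary placeholder):

import gym
from stable_baselines3.common.callbacks import EvalCallback, StopTrainingOnRewardThreshold

stop_cb = StopTrainingOnRewardThreshold(reward_threshold=475.0, verbose=1)
eval_cb = EvalCallback(
    gym.make("CartPole-v1"),
    callback_on_new_best=stop_cb,   # only checked when a new best mean reward is found
    eval_freq=10_000,
)
# then: model.learn(total_timesteps=..., callback=eval_cb)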
 
class StopTrainingOnMaxEpisodes(BaseCallback):
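StopTrainingOnMaxEpisodes stops training once a maximum number of episodes has been played (counted per environment when several envs run in parallel). Its constructor takes max_episodes and verbose; a minimal sketch with an arbitrary limit:

from stable_baselines3.common.callbacks import StopTrainingOnMaxEpisodes

# stop after roughly 100 episodes per environment; the limit here is arbitrary
max_episodes_cb = StopTrainingOnMaxEpisodes(max_episodes=100, verbose=1)
# then: model.learn(total_timesteps=..., callback=max_episodes_cb)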

posted on 2022-05-10 10:27 by 金凯旋