XGBoost tuning notes


Genetic-algorithm tuning for XGBoost

xgboost的遗传算法调参 - wzd321 - 博客园 (cnblogs.com)
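The wzd321 post linked above tunes XGBoost hyperparameters with a genetic algorithm. Below is a minimal self-contained sketch of that idea, not the post's exact code: the search space, the synthetic `make_regression` data, the population size, and the mutation rate are all illustrative assumptions.

```python
# A minimal GA sketch (illustrative, not the linked post's exact code).
# Assumptions: regression task, synthetic data, small discrete search space.
import random

import xgboost as xgb
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score

# Each "gene" is one hyperparameter drawn from a discrete pool.
SPACE = {
    'max_depth': list(range(3, 10)),
    'min_child_weight': [1, 2, 3, 4, 5, 6],
    'subsample': [0.6, 0.7, 0.8, 0.9, 1.0],
    'colsample_bytree': [0.6, 0.7, 0.8, 0.9, 1.0],
    'learning_rate': [0.01, 0.05, 0.1, 0.2],
}

def random_individual():
    return {k: random.choice(v) for k, v in SPACE.items()}

def fitness(params, X, y):
    # Cross-validated R^2 of an XGBoost regressor with these genes.
    model = xgb.XGBRegressor(n_estimators=200, **params)
    return cross_val_score(model, X, y, scoring='r2', cv=3).mean()

def crossover(a, b):
    # Uniform crossover: each gene comes from either parent.
    return {k: random.choice([a[k], b[k]]) for k in SPACE}

def mutate(ind, rate=0.2):
    # Resample each gene with probability `rate`.
    return {k: random.choice(SPACE[k]) if random.random() < rate else v
            for k, v in ind.items()}

def evolve(X, y, pop_size=10, generations=5):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=lambda p: fitness(p, X, y), reverse=True)
        parents = ranked[:pop_size // 2]                  # elitist selection
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=lambda p: fitness(p, X, y))

if __name__ == '__main__':
    X, y = make_regression(n_samples=500, n_features=10, random_state=0)
    print(evolve(X, y))
```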


```python
import pandas as pd
import xgboost as xgb
from sklearn.model_selection import GridSearchCV

if __name__ == '__main__':
    trainFilePath = 'dataset/soccer/train.csv'
    testFilePath = 'dataset/soccer/test.csv'
    data = pd.read_csv(trainFilePath)
    X_train, y_train = featureSet(data)   # featureSet / loadTestData are defined in the original post
    X_test = loadTestData(testFilePath)
    cv_params = {'n_estimators': [400, 500, 600, 700, 800]}
    other_params = {'learning_rate': 0.1, 'n_estimators': 500, 'max_depth': 5,
                    'min_child_weight': 1, 'seed': 0, 'subsample': 0.8,
                    'colsample_bytree': 0.8, 'gamma': 0, 'reg_alpha': 0, 'reg_lambda': 1}
    model = xgb.XGBRegressor(**other_params)
    optimized_GBM = GridSearchCV(estimator=model, param_grid=cv_params,
                                 scoring='r2', cv=5, verbose=1, n_jobs=4)
    optimized_GBM.fit(X_train, y_train)
    # grid_scores_ was removed from scikit-learn; cv_results_ is the current equivalent
    evalute_result = optimized_GBM.cv_results_
    print('CV results per candidate: {0}'.format(evalute_result))
    print('Best parameter values: {0}'.format(optimized_GBM.best_params_))
    print('Best model score: {0}'.format(optimized_GBM.best_score_))
```

Tune one group of parameters at a time, keeping the rest fixed (a staged-tuning sketch follows this list):

1. Number of boosting rounds, `n_estimators`: `{'n_estimators': [400, 500, 600, 700, 800]}`
2. Tree structure: `max_depth` (maximum tree depth) and `min_child_weight` (minimum leaf sample weight): `{'max_depth': range(3, 10, 2), 'min_child_weight': range(2, 7, 2)}`
3. Loss reduction, `gamma`: the minimum loss reduction required for a node to split: `{'gamma': [i/100.0 for i in range(0, 100)]}`
4. Sampling ratios: `subsample` controls the fraction of rows sampled for each tree, and `colsample_bytree` controls the fraction of columns sampled per tree (each column is a feature).
5. Regularization terms, coarse then refined: `{'reg_alpha': [0, 0.001, 0.005, 0.01, 0.05]}`, then `{'reg_alpha': [0.05, 0.1, 1, 2, 3], 'reg_lambda': [0.05, 0.1, 1, 2, 3]}`
6. Learning rate: `{'learning_rate': [0, 0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1]}`
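The six stages above can be chained so that each stage's winners are frozen before the next stage runs. A compact sketch, assuming `X_train`, `y_train`, and `other_params` from the code block above; the `subsample`/`colsample_bytree` grid is an assumption, since the post gives none:

```python
# Staged tuning sketch: freeze each stage's winners before the next stage.
# Assumes X_train, y_train, and other_params from the code above.
import xgboost as xgb
from sklearn.model_selection import GridSearchCV

stages = [
    {'n_estimators': [400, 500, 600, 700, 800]},
    {'max_depth': list(range(3, 10, 2)), 'min_child_weight': list(range(2, 7, 2))},
    {'gamma': [i / 100.0 for i in range(0, 100)]},
    # Illustrative grid; the post names these parameters but gives no values.
    {'subsample': [0.6, 0.7, 0.8, 0.9], 'colsample_bytree': [0.6, 0.7, 0.8, 0.9]},
    {'reg_alpha': [0, 0.001, 0.005, 0.01, 0.05]},
    {'reg_alpha': [0.05, 0.1, 1, 2, 3], 'reg_lambda': [0.05, 0.1, 1, 2, 3]},
    # The post's grid also lists 0, which would disable learning entirely.
    {'learning_rate': [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1]},
]

for cv_params in stages:
    search = GridSearchCV(xgb.XGBRegressor(**other_params), cv_params,
                          scoring='r2', cv=5, n_jobs=4)
    search.fit(X_train, y_train)
    other_params.update(search.best_params_)  # fix the winners for the next stage
    print(search.best_params_, search.best_score_)
```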


Tuning code reference: XGboost数据比赛实战之调参篇(完整流程) - 知乎 (zhihu.com)

Detailed parameter explanations: 干货|XGBoost进阶—调参+实战 - 知乎 (zhihu.com)

References:

XGboost的调参思路以及预测结果(附代码和流程) (nicethemes.cn)

干货|XGBoost进阶—调参+实战 - 知乎 (zhihu.com)

min_child_weight is a parameter that restricts splitting; it is compared against second-derivative (Hessian) sums, and a split is kept only when min_child_weight < min(h_left, h_right); see the reference below and the sketch after it.

XGBoost详解 - 简书 (jianshu.com)
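To make the note above concrete: XGBoost's pre-pruning compares min_child_weight against the summed second derivatives of the samples landing in each child. A sketch of the condition, where h_i is the Hessian of the loss at sample i:

```latex
H_L = \sum_{i \in I_L} h_i, \qquad H_R = \sum_{i \in I_R} h_i, \qquad
\text{split kept} \iff \min(H_L, H_R) \ge \texttt{min\_child\_weight}
```

For squared-error loss every h_i = 1, so min_child_weight then behaves like a minimum number of samples per leaf.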

posted @ 2022-06-28 08:57  cup_leo