[Kaggle Top Solution Walkthrough] House Price Prediction with Model Ensembling (Stacked Regressions: Top 4% on LeaderBoard, by Serigne)

This post walks through the model-building part of Serigne's kernel "Stacked Regressions: Top 4% on LeaderBoard".

1. Model overview

(Figure: overall diagram of the modelling and stacking workflow)

2. Modelling

2.1 Import libraries

import numpy as np
import pandas as pd
from sklearn.linear_model import ElasticNet, Lasso, BayesianRidge, LassoLarsIC
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.base import BaseEstimator, TransformerMixin, RegressorMixin, clone
from sklearn.model_selection import KFold, cross_val_score, train_test_split
from sklearn.metrics import mean_squared_error
import xgboost as xgb
import lightgbm as lgb

2.2 Define a cross-validation strategy

The KFold call here is meant to shuffle the dataset before cross-validation, since cross_val_score has no shuffle option of its own.

# Validation function
n_folds = 5  # use 5-fold cross-validation

def rmsle_cv(model):
    kf = KFold(n_folds, shuffle=True, random_state=42).get_n_splits(train.values)  # intended to shuffle the data before CV
    rmse = np.sqrt(-cross_val_score(model, train.values, y_train,
                                    scoring="neg_mean_squared_error", cv=kf))  # RMSE on the (log-transformed) target
    return rmse
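One caveat: KFold(...).get_n_splits(train.values) only returns the integer number of folds, so cross_val_score effectively receives cv=5 and the shuffling never actually takes effect. A minimal variant that does use the shuffled splits (assuming the same global train and y_train arrays) could look like this:

def rmsle_cv_shuffled(model):
    # Pass the KFold object itself so cross_val_score uses its shuffled splits
    kf = KFold(n_splits=n_folds, shuffle=True, random_state=42)
    rmse = np.sqrt(-cross_val_score(model, train.values, y_train,
                                    scoring="neg_mean_squared_error", cv=kf))
    return rmse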

2.3 Base models

First, initialize the base models: LASSO, Elastic Net, Kernel Ridge, Gradient Boosting Regression, XGBoost, and LightGBM.

2.3.1 LASSO Regression

This model can be very sensitive to outliers, so we wrap it in a pipeline with sklearn's RobustScaler() to make it more robust.

# Lasso regression
# RobustScaler(): scales features with statistics that are robust to outliers (median and IQR)
lasso = make_pipeline(RobustScaler(), Lasso(alpha=0.0005, random_state=1))
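For intuition, here is a rough sketch of what this pipeline does when fitted and used for prediction (X_tr, y_tr and X_te are hypothetical placeholders, not variables from the kernel):

# Conceptual sketch of make_pipeline(RobustScaler(), Lasso(...)), not sklearn internals
scaler = RobustScaler()
X_tr_scaled = scaler.fit_transform(X_tr)        # fit the scaler on the training features only
reg = Lasso(alpha=0.0005, random_state=1)
reg.fit(X_tr_scaled, y_tr)                      # fit Lasso on the scaled features
preds = reg.predict(scaler.transform(X_te))     # reuse the fitted scaler at prediction time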

2.3.2 Elastic Net Regression

# Elastic Net regression
ENet = make_pipeline(RobustScaler(), ElasticNet(alpha=0.0005, l1_ratio=.9, random_state=3))

2.3.3 Kernel Ridge Regression

# Kernel Ridge regression (polynomial kernel)
KRR = KernelRidge(alpha=0.6, kernel='polynomial', degree=2, coef0=2.5)
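For reference, sklearn's 'polynomial' kernel computes k(x, z) = (gamma * <x, z> + coef0) ** degree, so degree=2 with coef0=2.5 gives a quadratic kernel. A tiny sketch of the formula (gamma is set to 1.0 purely for illustration; KernelRidge's default gamma=None falls back to 1/n_features for this kernel):

def polynomial_kernel_sketch(x, z, gamma=1.0, coef0=2.5, degree=2):
    # k(x, z) = (gamma * <x, z> + coef0) ** degree
    return (gamma * np.dot(x, z) + coef0) ** degree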

2.3.4 Gradient Boosting Regression

# Ensemble method: Gradient Boosted Regression Trees (GBRT); the huber loss makes it robust to outliers
GBoost = GradientBoostingRegressor(n_estimators=3000, learning_rate=0.05,
                                   max_depth=4, max_features='sqrt',
                                   min_samples_leaf=15, min_samples_split=10,
                                   loss='huber', random_state=5)

2.3.5 XGBoost

# Ensemble method: XGBoost
model_xgb = xgb.XGBRegressor(colsample_bytree=0.4603, gamma=0.0468, 
                             learning_rate=0.05, max_depth=3, 
                             min_child_weight=1.7817, n_estimators=2200,
                             reg_alpha=0.4640, reg_lambda=0.8571,
                             subsample=0.5213, silent=1,
                             random_state =7, nthread = -1)


2.3.6 LightGBM

# Ensemble method: LightGBM
model_lgb = lgb.LGBMRegressor(objective='regression',num_leaves=5,
                              learning_rate=0.05, n_estimators=720,
                              max_bin = 55, bagging_fraction = 0.8,
                              bagging_freq = 5, feature_fraction = 0.2319,
                              feature_fraction_seed=9, bagging_seed=9,
                              min_data_in_leaf =6, min_sum_hessian_in_leaf = 11)

2.4 Base models scores

We check how each base model performs via the mean and standard deviation of its cross-validation RMSE.

2.4.1 LASSO score

score = rmsle_cv(lasso)
print("\nLasso score: {:.4f} ({:.4f})\n".format(score.mean(), score.std()))# 平均值和标准差

Lasso score: 0.1115 (0.0074)

2.4.2 Elastic Net score

score = rmsle_cv(ENet)
print("ElasticNet score: {:.4f} ({:.4f})\n".format(score.mean(), score.std()))

ElasticNet score: 0.1116 (0.0074)

2.4.3 Kernel Ridge (KRR) score

score = rmsle_cv(KRR)
print("Kernel Ridge score: {:.4f} ({:.4f})\n".format(score.mean(), score.std()))

Kernel Ridge score: 0.1153 (0.0075)

2.4.4 Gradient Boosting (GBoost) score

score = rmsle_cv(GBoost)
print("Gradient Boosting score: {:.4f} ({:.4f})\n".format(score.mean(), score.std()))

Gradient Boosting score: 0.1177 (0.0080)

2.4.5 XGBoost score


score = rmsle_cv(model_xgb)
print("Xgboost score: {:.4f} ({:.4f})\n".format(score.mean(), score.std()))

Xgboost score: 0.1161 (0.0079)

2.4.6 LightGBM score

score = rmsle_cv(model_lgb)
print("LGBM score: {:.4f} ({:.4f})\n" .format(score.mean(), score.std()))

LGBM score: 0.1153 (0.0057)

2.5 Stacking models

2.5.1 Averaging base models

We start with a simple stacking approach: averaging the predictions of the base models.

class AveragingModels(BaseEstimator, RegressorMixin, TransformerMixin):
    def __init__(self, models):
        self.models = models
        
    # we define clones of the original models to fit the data in
    def fit(self, X, y):
        self.models_ = [clone(x) for x in self.models]
        
        # Train cloned base models
        for model in self.models_:
            model.fit(X, y)

        return self
    
    # Now we do the predictions for cloned models and average them
    def predict(self, X):
        predictions = np.column_stack([
            model.predict(X) for model in self.models_
        ])  # stack predictions column-wise: one column per model
        return np.mean(predictions, axis=1)  # average across models for each sample
        
averaged_models = AveragingModels(models = (ENet, GBoost, KRR, lasso))
score = rmsle_cv(averaged_models)
print(" Averaged base models score: {:.4f} ({:.4f})\n".format(score.mean(), score.std()))

Averaged base models score: 0.1091 (0.0075)

As the score above shows, even a simple average of the base models already beats each individual model.
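As a side note, scikit-learn 0.21+ ships a built-in estimator that performs essentially the same prediction averaging; a sketch using it in place of the custom class:

from sklearn.ensemble import VotingRegressor  # requires scikit-learn >= 0.21

voting = VotingRegressor(estimators=[('enet', ENet), ('gboost', GBoost),
                                     ('krr', KRR), ('lasso', lasso)])
score = rmsle_cv(voting)
print("Voting (averaged) models score: {:.4f} ({:.4f})".format(score.mean(), score.std()))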

2.5.2 Stacking Averaged Models Class (Adding a Meta-model)

Simply put, the predictions of the ENet, KRR, and GBoost models are fed to a LASSO model (the meta-model), which makes the final prediction.
(Figure: stacking the base models' out-of-fold predictions into a meta-model)

class StackingAveragedModels(BaseEstimator, RegressorMixin, TransformerMixin):
    def __init__(self, base_models, meta_model, n_folds=5):  # base_models: first-level models, meta_model: second-level model, n_folds: folds for out-of-fold predictions
        self.base_models = base_models
        self.meta_model = meta_model
        self.n_folds = n_folds

    # We again fit the data on clones of the original models
    def fit(self, X, y):
        self.base_models_ = [list() for x in self.base_models]  # fitted clones of each base model, one list per model
        self.meta_model_ = clone(self.meta_model)  # clone of the meta-model
        kfold = KFold(n_splits=self.n_folds, shuffle=True, random_state=156)  # shuffled k-fold splitter

        # Train cloned base models, then create the out-of-fold predictions
        # that are needed to train the cloned meta-model
        out_of_fold_predictions = np.zeros((X.shape[0], len(self.base_models)))  # one column of meta-features per base model
        for i, model in enumerate(self.base_models):  # loop over base models
            for train_index, holdout_index in kfold.split(X, y):  # train_index: fitting folds, holdout_index: held-out fold
                instance = clone(model)  # fresh clone of the current base model
                self.base_models_[i].append(instance)
                instance.fit(X[train_index], y[train_index])  # fit on the training folds
                y_pred = instance.predict(X[holdout_index])  # predict on the held-out fold
                out_of_fold_predictions[holdout_index, i] = y_pred  # store prediction at [sample, model]

        # Now train the cloned meta-model using the out-of-fold predictions as new features
        self.meta_model_.fit(out_of_fold_predictions, y)  # the meta-model learns from the base models' out-of-fold predictions
        return self
   
    #Do the predictions of all base models on the test data and use the averaged predictions as 
    #meta-features for the final prediction which is done by the meta-model
    def predict(self, X):
        meta_features = np.column_stack([
            np.column_stack([model.predict(X) for model in base_models]).mean(axis=1)
            for base_models in self.base_models_ ])
        return self.meta_model_.predict(meta_features)
        
stacked_averaged_models = StackingAveragedModels(base_models = (ENet, GBoost, KRR),
                                                 meta_model = lasso)
score = rmsle_cv(stacked_averaged_models)
print("Stacking Averaged models score: {:.4f} ({:.4f})".format(score.mean(), score.std()))

Stacking Averaged models score: 0.1085 (0.0074)

2.6 Final Training and Prediction

2.6.1 Define the evaluation function (RMSLE: root mean squared logarithmic error)

def rmsle(y, y_pred):
    return np.sqrt(mean_squared_error(y, y_pred))  # RMSE on the log1p-transformed targets, i.e. RMSLE on the raw prices
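Since the models are trained on the log1p-transformed sale price, the test predictions in the next subsections are mapped back to actual prices with np.expm1, the inverse of np.log1p. A quick check with hypothetical prices:

# Hypothetical sanity check: expm1 undoes log1p, so log-space predictions map back to prices
raw_prices = np.array([200000.0, 150000.0, 310000.0])
assert np.allclose(np.expm1(np.log1p(raw_prices)), raw_prices)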

2.6.2 StackedRegressor (stacked model prediction)

stacked_averaged_models.fit(train.values, y_train)
stacked_train_pred = stacked_averaged_models.predict(train.values)
stacked_pred = np.expm1(stacked_averaged_models.predict(test.values))  # back-transform to sale prices for submission
print(rmsle(y_train, stacked_train_pred))

0.0781571937916

2.6.3 XGBoost

model_xgb.fit(train, y_train)
xgb_train_pred = model_xgb.predict(train)
xgb_pred = np.expm1(model_xgb.predict(test))
print(rmsle(y_train, xgb_train_pred))

0.0785165142425

2.6.4 LightGBM

model_lgb.fit(train, y_train)
lgb_train_pred = model_lgb.predict(train)
lgb_pred = np.expm1(model_lgb.predict(test.values))
print(rmsle(y_train, lgb_train_pred))

0.072050888492

# RMSLE on the entire train data for the weighted average of the three models

print('RMSLE score on train data:')
print(rmsle(y_train,stacked_train_pred*0.70 +
               xgb_train_pred*0.15 + lgb_train_pred*0.15 ))

RMSLE score on train data:
0.0752374213174

2.7 Ensemble prediction (weighted sum)

ensemble = stacked_pred*0.70 + xgb_pred*0.15 + lgb_pred*0.15

2.8 Submission

sub = pd.DataFrame()
sub['Id'] = test_ID
sub['SalePrice'] = ensemble
sub.to_csv('submission.csv',index=False)

3. Summary

Reading through this expert's kernel was very rewarding overall. I now have a basic understanding of model ensembling, but I still have some questions about hyperparameter tuning and will keep studying the relevant material.

Writing this wasn't easy, so leave a like before you go Ծ‸Ծ
