In real business problems we often need to model data and make predictions. In simple cases a chain of if/else rules (a single tree) is enough, but often the outcome depends on many factors, and each feature carries a different weight.

So how do we find reasonable weights for all these features? XGBoost is exactly such an algorithm.

Roughly speaking, XGBoost builds many decision trees, each correcting the errors of the ones before it, to improve prediction accuracy. Forgive my terrible math; there are plenty of references, e.g.: https://www.jianshu.com/p/7467e616f227
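The "many trees, each improving on the last" idea can be sketched with plain gradient boosting (XGBoost adds regularization and many optimizations on top of this). A minimal hand-rolled sketch on toy data, using scikit-learn trees rather than XGBoost itself:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy data: y depends on x non-linearly, plus noise
rng = np.random.RandomState(7)
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)

# Boosting: each new tree is fit to the residual of the current ensemble,
# then added with a small learning rate (shrinkage)
pred = np.zeros_like(y)
trees, lr = [], 0.1
for _ in range(100):
    tree = DecisionTreeRegressor(max_depth=3)
    tree.fit(X, y - pred)          # fit the current residuals
    pred += lr * tree.predict(X)   # shrink and add to the ensemble
    trees.append(tree)

print("training MSE:", np.mean((y - pred) ** 2))
```

Each tree alone is weak (depth 3), but the sum of 100 residual-fitting trees approximates the target closely; this is the mechanism XGBoost builds on.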

Below is a Python demo for the record.

Reference: https://machinelearningmastery.com/develop-first-xgboost-model-python-scikit-learn/

Lesson learned: when training a model, parameter tuning certainly matters, but the discriminative power of the features matters even more. Only if the features you feed in carry strong signal will the trained model be accurate.

# First XGBoost model for Pima Indians dataset
from numpy import loadtxt
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
# load data
dataset = loadtxt('pima-indians-diabetes.csv', delimiter=",")
# split data into X and y
X = dataset[:,0:8]
Y = dataset[:,8]
# split data into train and test sets
seed = 7
test_size = 0.33
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=test_size, random_state=seed)
# fit the model on the training data
model = XGBClassifier()
model.fit(X_train, y_train)
# make predictions for test data
y_pred = model.predict(X_test)
# predict() already returns class labels; round() is a no-op kept from the tutorial
predictions = [round(value) for value in y_pred]
# evaluate predictions
accuracy = accuracy_score(y_test, predictions)
print("Accuracy: %.2f%%" % (accuracy * 100.0))
posted on 2018-03-29 15:35 by 知己一生