Logistic regression: my own attempt
Summary: My own logistic regression attempt. 1. Fix the good/bad samples and the random seed, then inspect the binning first: data_sd = X1 num_cols=X1.columns import pycard as pc num_iv_woedf = pd.DataFrame() clf = pc.NumBin(max_bins_num=7,min_b
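The summary above is truncated, and `pycard` is a niche scorecard library, so here is a plain pandas/NumPy sketch of the same workflow: fix the random seed, bin a numeric feature into at most 7 bins (mirroring `max_bins_num=7`), and compute WOE and IV per bin. The synthetic `score`/`target` data is an assumption for illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)  # fixed random seed, as the post suggests
df = pd.DataFrame({
    "score": rng.normal(600, 50, 1000),
    "target": rng.integers(0, 2, 1000),  # 1 = bad, 0 = good (assumed convention)
})

# Equal-frequency binning into at most 7 bins (mirrors max_bins_num=7)
df["bin"] = pd.qcut(df["score"], q=7, duplicates="drop")

# WOE / IV per bin
grouped = df.groupby("bin", observed=True)["target"].agg(bad="sum", total="count")
grouped["good"] = grouped["total"] - grouped["bad"]
bad_dist = grouped["bad"] / grouped["bad"].sum()
good_dist = grouped["good"] / grouped["good"].sum()
grouped["woe"] = np.log(bad_dist / good_dist)
iv = ((bad_dist - good_dist) * grouped["woe"]).sum()
```

`pc.NumBin` presumably performs a smarter (e.g. monotonic or chi-merge) binning; equal-frequency `qcut` is the simplest stand-in.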
Logistic regression with toad
Summary: from sklearn.model_selection import train_test_split train,test=train_test_split(dd,test_size=0.6) toad.detect(dd) toad.quality(dd,target='target',iv_
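`toad.quality` ranks features by information value. Since the snippet is cut off, here is a sketch of the same split plus a plain-pandas approximation of that IV ranking; the `dd` DataFrame with random features is an assumption, and this hand-rolled `iv_of` is a stand-in for toad's internal binning, not its actual implementation.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
dd = pd.DataFrame({
    "f1": rng.normal(size=500),
    "f2": rng.normal(size=500),
    "target": rng.integers(0, 2, 500),
})

# Same split as the post (note: test_size=0.6 leaves only 40% for training)
train, test = train_test_split(dd, test_size=0.6, random_state=0)

def iv_of(col, target, bins=5):
    """Approximate IV of one feature via equal-frequency binning."""
    b = pd.qcut(col, bins, duplicates="drop")
    t = pd.crosstab(b, target)
    bad = t[1] / t[1].sum()
    good = t[0] / t[0].sum()
    woe = np.log((bad + 1e-9) / (good + 1e-9))  # epsilon guards empty bins
    return ((bad - good) * woe).sum()

# Rank features by IV, descending — roughly what toad.quality reports
quality = pd.Series(
    {c: iv_of(train[c], train["target"]) for c in ["f1", "f2"]}
).sort_values(ascending=False)
```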
LGBM
Summary: import pandas as pd from lightgbm import LGBMClassifier from sklearn.metrics import accuracy_score df3=pd.concat([df1,df2],axis=1) model = LGBMClassif
XGB
Summary: from xgboost import XGBClassifier model = XGBClassifier(learning_rate=0.1,max_depth=5,alpha=0.2) model.fit(x_train,y_train) p=model.predict_proba(x_te
Random forest
Summary: from sklearn.ensemble import RandomForestClassifier model=RandomForestClassifier(n_estimators=22,max_depth=7,min_samples_split=33,min_samples_leaf=18)
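The post's random forest with its exact hyperparameters, completed into a runnable sketch; the synthetic data and the evaluation step are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=8, random_state=2)
x_train, x_test, y_train, y_test = train_test_split(X, y, random_state=2)

# Hyperparameters exactly as in the post; the high min_samples_* values
# keep individual trees shallow, which helps against overfitting
model = RandomForestClassifier(n_estimators=22, max_depth=7,
                               min_samples_split=33, min_samples_leaf=18,
                               random_state=2)
model.fit(x_train, y_train)
score = model.score(x_test, y_test)  # mean accuracy on the held-out set
```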
Decision tree
Summary: 2. Decision tree from sklearn import tree clf = tree.DecisionTreeClassifier(criterion="gini",max_depth=5,min_samples_split=2,min_samples_leaf=52) clf.fit(x_train
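The decision tree with the post's parameters, made runnable; the synthetic training data is an assumption.

```python
from sklearn import tree
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=8, random_state=3)
x_train, x_test, y_train, y_test = train_test_split(X, y, random_state=3)

# Gini impurity, depth capped at 5, and a large min_samples_leaf=52
# so every leaf keeps enough samples for a stable bad-rate estimate
clf = tree.DecisionTreeClassifier(criterion="gini", max_depth=5,
                                  min_samples_split=2, min_samples_leaf=52)
clf.fit(x_train, y_train)
```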
Logistic regression
Summary: 2. Logistic regression. 2.1 The standard approach, but class imbalance must be considered. import matplotlib.pyplot as plt x=z.iloc[:,0:7] y=z.iloc[:,7:] from sklearn.model_selection import train_test_split from sklea
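One common way to handle the class-imbalance concern the post raises is `class_weight="balanced"`, which reweights classes inversely to their frequency instead of resampling. A sketch on synthetic imbalanced data (the data and the 7-feature layout mirroring `z.iloc[:,0:7]` are assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# ~10% positives: the imbalanced situation the post warns about
X, y = make_classification(n_samples=1000, n_features=7,
                           weights=[0.9, 0.1], random_state=4)
x_train, x_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                    random_state=4)

# class_weight="balanced" reweights samples instead of over/under-sampling
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(x_train, y_train)
pred = clf.predict(x_test)
```

Oversampling (e.g. SMOTE) or undersampling the majority class are the usual alternatives to reweighting.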
Plotting the KS curve to get the cutoff threshold and the exact KS
Summary: Model trial code. 1. Plot the p-values and compute KS. from sklearn.metrics import roc_curve from sklearn.pipeline import make_pipeline import matplotlib import matplotlib.pyplot as pl
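The KS statistic is the maximum gap between the TPR and FPR curves, and the threshold where that gap peaks is the natural score cutoff. A minimal sketch via `roc_curve` (the labels and random scores are illustrative placeholders):

```python
import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([1, 0, 1, 0, 1, 1, 1, 0, 0, 1,
                   1, 0, 1, 0, 1, 1, 1, 0, 0, 1])
rng = np.random.default_rng(0)
y_score = rng.random(20)  # placeholder model scores

fpr, tpr, thresholds = roc_curve(y_true, y_score)
ks = (tpr - fpr).max()                              # exact KS statistic
best_threshold = thresholds[(tpr - fpr).argmax()]   # cutoff where the gap peaks
```

Plotting `tpr` and `fpr` against `thresholds` gives the KS curve itself; the vertical distance between the two lines at `best_threshold` equals `ks`.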
Plotting the lift curve
Summary: Plot the lift curve target=np.array([1,0,1,0,1,1,1,0,0,1,1,0,1,0,1,1,1,0,0,1]) y_pre=np.random.rand(20) y_pre def lift(target,y_pre): data=pd.DataFrame({'target':
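The `lift` function body is cut off; here is one common way to complete it: sort by predicted score, split into equal-frequency buckets, and divide each bucket's bad rate by the overall bad rate. The bucketing scheme is an assumption, and a seeded generator replaces `np.random.rand` for reproducibility.

```python
import numpy as np
import pandas as pd

target = np.array([1, 0, 1, 0, 1, 1, 1, 0, 0, 1,
                   1, 0, 1, 0, 1, 1, 1, 0, 0, 1])  # labels from the post
rng = np.random.default_rng(0)
y_pre = rng.random(20)  # seeded stand-in for np.random.rand(20)

def lift(target, y_pre, n_buckets=5):
    data = pd.DataFrame({"target": target, "y_pre": y_pre})
    data = data.sort_values("y_pre", ascending=False).reset_index(drop=True)
    # Equal-frequency buckets over the score ranking
    data["bucket"] = pd.qcut(data.index, n_buckets, labels=False)
    overall = data["target"].mean()  # base bad rate
    return data.groupby("bucket")["target"].mean() / overall

lift_table = lift(target, y_pre)  # bucket 0 = highest scores
```

Plotting `lift_table` against bucket number gives the lift curve; a useful model shows lift well above 1 in the top buckets.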