Python Data Analysis and Data Mining for Beginners, Chapter 3: Common Machine Learning Algorithms, Section 3: Gradient Descent (Part 2: Hands-On)

In [1]:
from sklearn import datasets  # scikit-learn's bundled example datasets
 
 
In [2]:
boston = datasets.load_boston()  # load the Boston housing dataset (removed in scikit-learn 1.2)
X = boston.data
y = boston.target
# Drop unrealistic samples: the target is capped at 50.0, so y == 50 values are not true prices
X = X[y < 50]
y = y[y < 50]
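Why filter on y < 50? The target (median home value in $1000s) was right-censored at 50.0 when the data was collected, so samples sitting exactly at 50.0 are capped rather than real. A quick check, as a minimal sketch not in the original notebook (run it on the raw target, before the filtering above):

import numpy as np

raw_y = datasets.load_boston().target
print((raw_y == 50).sum())   # count of capped samples (16 of the 506)
print(raw_y.max())           # 50.0, the cap itself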
 
 
In [3]:
from sklearn.model_selection import train_test_split  # utility for splitting data into train and test sets
 
 
In [5]:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=666)
# split the data: 80% for training, 20% for testing; fixed seed for reproducibility
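A quick sanity check on the split sizes (a sketch, not output from the original post; after removing the 16 capped rows, 490 samples remain, so an 80/20 split should give 392 train and 98 test):

print(X_train.shape, X_test.shape)   # expected: (392, 13) (98, 13)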
 
 
In [6]:
from sklearn.preprocessing import StandardScaler  # feature standardization (zero mean, unit variance)
 
 
In [9]:
standardScaler = StandardScaler()
standardScaler.fit(X_train)  # learn each feature's mean and std from the training set only
 
 
Out[9]:
StandardScaler(copy=True, with_mean=True, with_std=True)
In [10]:
X_train_standard = standardScaler.transform(X_train)
X_test_standard = standardScaler.transform(X_test)  # scale the test set with the training statistics
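Standardization matters for gradient descent in particular: when features sit on very different scales, the loss surface is elongated, and a single learning rate either diverges along the large-scale features or crawls along the small ones. You can inspect the statistics the scaler learned, as a minimal sketch not in the original notebook:

print(standardScaler.mean_)    # per-feature means learned from X_train
print(standardScaler.scale_)   # per-feature standard deviations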
 
 
In [16]:
from sklearn.linear_model import SGDRegressor  # linear regression fitted with stochastic gradient descent
sgd_reg = SGDRegressor()  # instantiate with default parameters

For reference, here is the SGDRegressor signature and docstring (from the scikit-learn 0.19/0.20 era used in this post):
Init signature: SGDRegressor(loss='squared_loss', penalty='l2', alpha=0.0001, l1_ratio=0.15, fit_intercept=True, max_iter=None, tol=None, shuffle=True, verbose=0, epsilon=0.1, random_state=None, learning_rate='invscaling', eta0=0.01, power_t=0.25, warm_start=False, average=False, n_iter=None)
Docstring:    
Linear model fitted by minimizing a regularized empirical loss with SGD
SGD stands for Stochastic Gradient Descent: the gradient of the loss is
estimated each sample at a time and the model is updated along the way with
a decreasing strength schedule (aka learning rate).
The regularizer is a penalty added to the loss function that shrinks model
parameters towards the zero vector using either the squared euclidean norm
L2 or the absolute norm L1 or a combination of both (Elastic Net). If the
parameter update crosses the 0.0 value because of the regularizer, the
update is truncated to 0.0 to allow for learning sparse models and achieve
online feature selection.
This implementation works with data represented as dense numpy arrays of
floating point values for the features.
Read more in the :ref:`User Guide <sgd>`.
Parameters
----------
loss : str, default: 'squared_loss'
    The loss function to be used. The possible values are 'squared_loss',
    'huber', 'epsilon_insensitive', or 'squared_epsilon_insensitive'
    The 'squared_loss' refers to the ordinary least squares fit.
    'huber' modifies 'squared_loss' to focus less on getting outliers
    correct by switching from squared to linear loss past a distance of
    epsilon. 'epsilon_insensitive' ignores errors less than epsilon and is
    linear past that; this is the loss function used in SVR.
    'squared_epsilon_insensitive' is the same but becomes squared loss past
    a tolerance of epsilon.
penalty : str, 'none', 'l2', 'l1', or 'elasticnet'
    The penalty (aka regularization term) to be used. Defaults to 'l2'
    which is the standard regularizer for linear SVM models. 'l1' and
    'elasticnet' might bring sparsity to the model (feature selection)
    not achievable with 'l2'.
alpha : float
    Constant that multiplies the regularization term. Defaults to 0.0001
    Also used to compute learning_rate when set to 'optimal'.
l1_ratio : float
    The Elastic Net mixing parameter, with 0 <= l1_ratio <= 1.
    l1_ratio=0 corresponds to L2 penalty, l1_ratio=1 to L1.
    Defaults to 0.15.
fit_intercept : bool
    Whether the intercept should be estimated or not. If False, the
    data is assumed to be already centered. Defaults to True.
max_iter : int, optional
    The maximum number of passes over the training data (aka epochs).
    It only impacts the behavior in the ``fit`` method, and not the
    `partial_fit`.
    Defaults to 5. Defaults to 1000 from 0.21, or if tol is not None.
    .. versionadded:: 0.19
tol : float or None, optional
    The stopping criterion. If it is not None, the iterations will stop
    when (loss > previous_loss - tol). Defaults to None.
    Defaults to 1e-3 from 0.21.
    .. versionadded:: 0.19
shuffle : bool, optional
    Whether or not the training data should be shuffled after each epoch.
    Defaults to True.
verbose : integer, optional
    The verbosity level.
epsilon : float
    Epsilon in the epsilon-insensitive loss functions; only if `loss` is
    'huber', 'epsilon_insensitive', or 'squared_epsilon_insensitive'.
    For 'huber', determines the threshold at which it becomes less
    important to get the prediction exactly right.
    For epsilon-insensitive, any differences between the current prediction
    and the correct label are ignored if they are less than this threshold.
random_state : int, RandomState instance or None, optional (default=None)
    The seed of the pseudo random number generator to use when shuffling
    the data.  If int, random_state is the seed used by the random number
    generator; If RandomState instance, random_state is the random number
    generator; If None, the random number generator is the RandomState
    instance used by `np.random`.
learning_rate : string, optional
    The learning rate schedule:
    - 'constant': eta = eta0
    - 'optimal': eta = 1.0 / (alpha * (t + t0)) [default]
    - 'invscaling': eta = eta0 / pow(t, power_t)
    where t0 is chosen by a heuristic proposed by Leon Bottou.
eta0 : double, optional
    The initial learning rate [default 0.01].
power_t : double, optional
    The exponent for inverse scaling learning rate [default 0.25].
warm_start : bool, optional
    When set to True, reuse the solution of the previous call to fit as
    initialization, otherwise, just erase the previous solution.
average : bool or int, optional
    When set to True, computes the averaged SGD weights and stores the
    result in the ``coef_`` attribute. If set to an int greater than 1,
    averaging will begin once the total number of samples seen reaches
    average. So ``average=10`` will begin averaging after seeing 10
    samples.
n_iter : int, optional
    The number of passes over the training data (aka epochs).
    Defaults to None. Deprecated, will be removed in 0.21.
    .. versionchanged:: 0.19
        Deprecated
Attributes
----------
coef_ : array, shape (n_features,)
    Weights assigned to the features.
intercept_ : array, shape (1,)
    The intercept term.
average_coef_ : array, shape (n_features,)
    Averaged weights assigned to the features.
average_intercept_ : array, shape (1,)
    The averaged intercept term.
n_iter_ : int
    The actual number of iterations to reach the stopping criterion.
Examples
--------
>>> import numpy as np
>>> from sklearn import linear_model
>>> n_samples, n_features = 10, 5
>>> np.random.seed(0)
>>> y = np.random.randn(n_samples)
>>> X = np.random.randn(n_samples, n_features)
>>> clf = linear_model.SGDRegressor()
>>> clf.fit(X, y)
... #doctest: +NORMALIZE_WHITESPACE
SGDRegressor(alpha=0.0001, average=False, epsilon=0.1, eta0=0.01,
       fit_intercept=True, l1_ratio=0.15, learning_rate='invscaling',
       loss='squared_loss', max_iter=None, n_iter=None, penalty='l2',
       power_t=0.25, random_state=None, shuffle=True, tol=None,
       verbose=0, warm_start=False)
See also
 
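Before calling fit, it is worth seeing the algorithm itself once. The sketch below is a minimal hand-rolled stochastic gradient descent for plain least-squares linear regression, using the 'invscaling' step-size schedule eta = eta0 / t^power_t from the docstring above. It is for intuition only (the function name and structure are mine, and it omits the penalty term and stopping criterion that SGDRegressor implements):

import numpy as np

def sgd_linear_fit(X, y, n_epochs=50, eta0=0.01, power_t=0.25, seed=666):
    """Plain SGD for least squares: one sample per update, decaying step size."""
    rng = np.random.RandomState(seed)
    n_samples, n_features = X.shape
    w = np.zeros(n_features)
    b = 0.0
    t = 1
    for _ in range(n_epochs):
        for i in rng.permutation(n_samples):  # shuffle each epoch, as SGDRegressor does
            err = X[i] @ w + b - y[i]         # residual for this one sample
            eta = eta0 / t ** power_t         # 'invscaling' learning-rate schedule
            w -= eta * err * X[i]             # gradient of 0.5 * err^2 w.r.t. w
            b -= eta * err
            t += 1
    return w, b

w, b = sgd_linear_fit(X_train_standard, y_train)

On the standardized training data this typically lands close to the solution SGDRegressor finds below.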
In [14]:
%time sgd_reg.fit(X_train_standard, y_train)
sgd_reg.score(X_test_standard, y_test)  # R^2 on the standardized test set
 
 
 
c:\users\qq123\anaconda3\lib\site-packages\sklearn\linear_model\stochastic_gradient.py:128: FutureWarning: max_iter and tol parameters have been added in <class 'sklearn.linear_model.stochastic_gradient.SGDRegressor'> in 0.19. If both are left unset, they default to max_iter=5 and tol=None. If tol is not None, max_iter defaults to max_iter=1000. From 0.21, default max_iter will be 1000, and default tol will be 1e-3.
  "and default tol will be 1e-3." % type(self), FutureWarning)
 
Wall time: 178 ms
Out[14]:
0.8046509470914386
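The FutureWarning above is only about changing defaults: in this version, leaving max_iter and tol unset means max_iter=5 and tol=None, and from 0.21 the defaults become max_iter=1000 and tol=1e-3. Passing them explicitly silences the warning (a sketch, not run in the original post; the score will differ slightly between runs):

sgd_reg = SGDRegressor(max_iter=5, tol=None)   # today's implicit defaults, made explicit
sgd_reg.fit(X_train_standard, y_train)
sgd_reg.score(X_test_standard, y_test)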
In [15]:
sgd_reg = SGDRegressor(n_iter=100)  # 100 passes over the data (n_iter is deprecated; see below)
%time sgd_reg.fit(X_train_standard, y_train)
sgd_reg.score(X_test_standard, y_test)
 
 
 
Wall time: 5 ms
 
c:\users\qq123\anaconda3\lib\site-packages\sklearn\linear_model\stochastic_gradient.py:117: DeprecationWarning: n_iter parameter is deprecated in 0.19 and will be removed in 0.21. Use max_iter and tol instead.
  DeprecationWarning)
Out[15]:
0.8131724938971269
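With more epochs the fit improves (R^2 of about 0.813 versus 0.805) and training is still fast. As the DeprecationWarning says, n_iter is removed in 0.21, so on newer scikit-learn the equivalent call would be (a sketch, not run in the original post):

sgd_reg = SGDRegressor(max_iter=100, tol=None)   # replaces the deprecated n_iter=100
sgd_reg.fit(X_train_standard, y_train)
sgd_reg.score(X_test_standard, y_test)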