sklearn cluster KMeans
sklearn cluster KMeans
class KMeans(TransformerMixin, ClusterMixin, BaseEstimator): """K-Means clustering. Read more in the :ref:`User Guide <k_means>`. Parameters ---------- n_clusters : int, default=8 The number of clusters to form as well as the number of centroids to generate. init : {'k-means++', 'random'}, callable or array-like of shape \ (n_clusters, n_features), default='k-means++' Method for initialization: 'k-means++' : selects initial cluster centers for k-mean clustering in a smart way to speed up convergence. See section Notes in k_init for more details. 'random': choose `n_clusters` observations (rows) at random from data for the initial centroids. If an array is passed, it should be of shape (n_clusters, n_features) and gives the initial centers. If a callable is passed, it should take arguments X, n_clusters and a random state and return an initialization. n_init : int, default=10 Number of time the k-means algorithm will be run with different centroid seeds. The final results will be the best output of n_init consecutive runs in terms of inertia. max_iter : int, default=300 Maximum number of iterations of the k-means algorithm for a single run. tol : float, default=1e-4 Relative tolerance with regards to Frobenius norm of the difference in the cluster centers of two consecutive iterations to declare convergence. precompute_distances : {'auto', True, False}, default='auto' Precompute distances (faster but takes more memory). 'auto' : do not precompute distances if n_samples * n_clusters > 12 million. This corresponds to about 100MB overhead per job using double precision. True : always precompute distances. False : never precompute distances. .. deprecated:: 0.23 'precompute_distances' was deprecated in version 0.22 and will be removed in 1.0 (renaming of 0.25). It has no effect. verbose : int, default=0 Verbosity mode. random_state : int, RandomState instance or None, default=None Determines random number generation for centroid initialization. Use an int to make the randomness deterministic. See :term:`Glossary <random_state>`. copy_x : bool, default=True When pre-computing distances it is more numerically accurate to center the data first. If copy_x is True (default), then the original data is not modified. If False, the original data is modified, and put back before the function returns, but small numerical differences may be introduced by subtracting and then adding the data mean. Note that if the original data is not C-contiguous, a copy will be made even if copy_x is False. If the original data is sparse, but not in CSR format, a copy will be made even if copy_x is False. n_jobs : int, default=None The number of OpenMP threads to use for the computation. Parallelism is sample-wise on the main cython loop which assigns each sample to its closest center. ``None`` or ``-1`` means using all processors. .. deprecated:: 0.23 ``n_jobs`` was deprecated in version 0.23 and will be removed in 1.0 (renaming of 0.25). algorithm : {"auto", "full", "elkan"}, default="auto" K-means algorithm to use. The classical EM-style algorithm is "full". The "elkan" variation is more efficient on data with well-defined clusters, by using the triangle inequality. However it's more memory intensive due to the allocation of an extra array of shape (n_samples, n_clusters). For now "auto" (kept for backward compatibiliy) chooses "elkan" but it might change in the future for a better heuristic. .. versionchanged:: 0.18 Added Elkan algorithm Attributes ---------- cluster_centers_ : ndarray of shape (n_clusters, n_features) Coordinates of cluster centers. If the algorithm stops before fully converging (see ``tol`` and ``max_iter``), these will not be consistent with ``labels_``. labels_ : ndarray of shape (n_samples,) Labels of each point inertia_ : float Sum of squared distances of samples to their closest cluster center. n_iter_ : int Number of iterations run. See Also -------- MiniBatchKMeans : Alternative online implementation that does incremental updates of the centers positions using mini-batches. For large scale learning (say n_samples > 10k) MiniBatchKMeans is probably much faster than the default batch implementation. Notes ----- The k-means problem is solved using either Lloyd's or Elkan's algorithm. The average complexity is given by O(k n T), were n is the number of samples and T is the number of iteration. The worst case complexity is given by O(n^(k+2/p)) with n = n_samples, p = n_features. (D. Arthur and S. Vassilvitskii, 'How slow is the k-means method?' SoCG2006) In practice, the k-means algorithm is very fast (one of the fastest clustering algorithms available), but it falls in local minima. That's why it can be useful to restart it several times. If the algorithm stops before fully converging (because of ``tol`` or ``max_iter``), ``labels_`` and ``cluster_centers_`` will not be consistent, i.e. the ``cluster_centers_`` will not be the means of the points in each cluster. Also, the estimator will reassign ``labels_`` after the last iteration to make ``labels_`` consistent with ``predict`` on the training set. Examples -------- >>> from sklearn.cluster import KMeans >>> import numpy as np >>> X = np.array([[1, 2], [1, 4], [1, 0], ... [10, 2], [10, 4], [10, 0]]) >>> kmeans = KMeans(n_clusters=2, random_state=0).fit(X) >>> kmeans.labels_ array([1, 1, 1, 0, 0, 0], dtype=int32) >>> kmeans.predict([[0, 0], [12, 3]]) array([1, 0], dtype=int32) >>> kmeans.cluster_centers_ array([[10., 2.], [ 1., 2.]]) """ @_deprecate_positional_args def __init__(self, n_clusters=8, *, init='k-means++', n_init=10, max_iter=300, tol=1e-4, precompute_distances='deprecated', verbose=0, random_state=None, copy_x=True, n_jobs='deprecated', algorithm='auto'): self.n_clusters = n_clusters self.init = init self.max_iter = max_iter self.tol = tol self.precompute_distances = precompute_distances self.n_init = n_init self.verbose = verbose self.random_state = random_state self.copy_x = copy_x self.n_jobs = n_jobs self.algorithm = algorithm def _check_params(self, X): # precompute_distances if self.precompute_distances != 'deprecated': warnings.warn("'precompute_distances' was deprecated in version " "0.23 and will be removed in 1.0 (renaming of 0.25)" ". It has no effect", FutureWarning) # n_jobs if self.n_jobs != 'deprecated': warnings.warn("'n_jobs' was deprecated in version 0.23 and will be" " removed in 1.0 (renaming of 0.25).", FutureWarning) self._n_threads = self.n_jobs else: self._n_threads = None self._n_threads = _openmp_effective_n_threads(self._n_threads) # n_init if self.n_init <= 0: raise ValueError( f"n_init should be > 0, got {self.n_init} instead.") self._n_init = self.n_init # max_iter if self.max_iter <= 0: raise ValueError( f"max_iter should be > 0, got {self.max_iter} instead.") # n_clusters if X.shape[0] < self.n_clusters: raise ValueError(f"n_samples={X.shape[0]} should be >= " f"n_clusters={self.n_clusters}.") # tol self._tol = _tolerance(X, self.tol) # algorithm if self.algorithm not in ("auto", "full", "elkan"): raise ValueError(f"Algorithm must be 'auto', 'full' or 'elkan', " f"got {self.algorithm} instead.") self._algorithm = self.algorithm if self._algorithm == "auto": self._algorithm = "full" if self.n_clusters == 1 else "elkan" if self._algorithm == "elkan" and self.n_clusters == 1: warnings.warn("algorithm='elkan' doesn't make sense for a single " "cluster. Using 'full' instead.", RuntimeWarning) self._algorithm = "full" # init if not (hasattr(self.init, '__array__') or callable(self.init) or (isinstance(self.init, str) and self.init in ["k-means++", "random"])): raise ValueError( f"init should be either 'k-means++', 'random', a ndarray or a " f"callable, got '{self.init}' instead.") if hasattr(self.init, '__array__') and self._n_init != 1: warnings.warn( f"Explicit initial center position passed: performing only" f" one init in {self.__class__.__name__} instead of " f"n_init={self._n_init}.", RuntimeWarning, stacklevel=2) self._n_init = 1 def _validate_center_shape(self, X, centers): """Check if centers is compatible with X and n_clusters.""" if centers.shape[0] != self.n_clusters: raise ValueError( f"The shape of the initial centers {centers.shape} does not " f"match the number of clusters {self.n_clusters}.") if centers.shape[1] != X.shape[1]: raise ValueError( f"The shape of the initial centers {centers.shape} does not " f"match the number of features of the data {X.shape[1]}.") def _check_test_data(self, X): X = self._validate_data(X, accept_sparse='csr', reset=False, dtype=[np.float64, np.float32], order='C', accept_large_sparse=False) return X def _check_mkl_vcomp(self, X, n_samples): """Warns when vcomp and mkl are both present""" # The BLAS call inside a prange in lloyd_iter_chunked_dense is known to # cause a small memory leak when there are less chunks than the number # of available threads. It only happens when the OpenMP library is # vcomp (microsoft OpenMP) and the BLAS library is MKL. see #18653 if sp.issparse(X): return active_threads = int(np.ceil(n_samples / CHUNK_SIZE)) if active_threads < self._n_threads: modules = threadpool_info() has_vcomp = "vcomp" in [module["prefix"] for module in modules] has_mkl = ("mkl", "intel") in [ (module["internal_api"], module.get("threading_layer", None)) for module in modules] if has_vcomp and has_mkl: if not hasattr(self, "batch_size"): # KMeans warnings.warn( f"KMeans is known to have a memory leak on Windows " f"with MKL, when there are less chunks than available " f"threads. You can avoid it by setting the environment" f" variable OMP_NUM_THREADS={active_threads}.") else: # MiniBatchKMeans warnings.warn( f"MiniBatchKMeans is known to have a memory leak on " f"Windows with MKL, when there are less chunks than " f"available threads. You can prevent it by setting " f"batch_size >= {self._n_threads * CHUNK_SIZE} or by " f"setting the environment variable " f"OMP_NUM_THREADS={active_threads}") def _init_centroids(self, X, x_squared_norms, init, random_state, init_size=None): """Compute the initial centroids. Parameters ---------- X : {ndarray, sparse matrix} of shape (n_samples, n_features) The input samples. x_squared_norms : ndarray of shape (n_samples,) Squared euclidean norm of each data point. Pass it if you have it at hands already to avoid it being recomputed here. init : {'k-means++', 'random'}, callable or ndarray of shape \ (n_clusters, n_features) Method for initialization. random_state : RandomState instance Determines random number generation for centroid initialization. See :term:`Glossary <random_state>`. init_size : int, default=None Number of samples to randomly sample for speeding up the initialization (sometimes at the expense of accuracy). Returns ------- centers : ndarray of shape (n_clusters, n_features) """ n_samples = X.shape[0] n_clusters = self.n_clusters if init_size is not None and init_size < n_samples: init_indices = random_state.randint(0, n_samples, init_size) X = X[init_indices] x_squared_norms = x_squared_norms[init_indices] n_samples = X.shape[0] if isinstance(init, str) and init == 'k-means++': centers, _ = _kmeans_plusplus(X, n_clusters, random_state=random_state, x_squared_norms=x_squared_norms) elif isinstance(init, str) and init == 'random': seeds = random_state.permutation(n_samples)[:n_clusters] centers = X[seeds] elif hasattr(init, '__array__'): centers = init elif callable(init): centers = init(X, n_clusters, random_state=random_state) centers = check_array( centers, dtype=X.dtype, copy=False, order='C') self._validate_center_shape(X, centers) if sp.issparse(centers): centers = centers.toarray() return centers def fit(self, X, y=None, sample_weight=None): """Compute k-means clustering. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Training instances to cluster. It must be noted that the data will be converted to C ordering, which will cause a memory copy if the given data is not C-contiguous. If a sparse matrix is passed, a copy will be made if it's not in CSR format. y : Ignored Not used, present here for API consistency by convention. sample_weight : array-like of shape (n_samples,), default=None The weights for each observation in X. If None, all observations are assigned equal weight. .. versionadded:: 0.20 Returns ------- self Fitted estimator. """ X = self._validate_data(X, accept_sparse='csr', dtype=[np.float64, np.float32], order='C', copy=self.copy_x, accept_large_sparse=False) self._check_params(X) random_state = check_random_state(self.random_state) sample_weight = _check_sample_weight(sample_weight, X, dtype=X.dtype) # Validate init array init = self.init if hasattr(init, '__array__'): init = check_array(init, dtype=X.dtype, copy=True, order='C') self._validate_center_shape(X, init) # subtract of mean of x for more accurate distance computations if not sp.issparse(X): X_mean = X.mean(axis=0) # The copy was already done above X -= X_mean if hasattr(init, '__array__'): init -= X_mean # precompute squared norms of data points x_squared_norms = row_norms(X, squared=True) if self._algorithm == "full": kmeans_single = _kmeans_single_lloyd self._check_mkl_vcomp(X, X.shape[0]) else: kmeans_single = _kmeans_single_elkan best_inertia = None for i in range(self._n_init): # Initialize centers centers_init = self._init_centroids( X, x_squared_norms=x_squared_norms, init=init, random_state=random_state) if self.verbose: print("Initialization complete") # run a k-means once labels, inertia, centers, n_iter_ = kmeans_single( X, sample_weight, centers_init, max_iter=self.max_iter, verbose=self.verbose, tol=self._tol, x_squared_norms=x_squared_norms, n_threads=self._n_threads) # determine if these results are the best so far if best_inertia is None or inertia < best_inertia: best_labels = labels best_centers = centers best_inertia = inertia best_n_iter = n_iter_ if not sp.issparse(X): if not self.copy_x: X += X_mean best_centers += X_mean distinct_clusters = len(set(best_labels)) if distinct_clusters < self.n_clusters: warnings.warn( "Number of distinct clusters ({}) found smaller than " "n_clusters ({}). Possibly due to duplicate points " "in X.".format(distinct_clusters, self.n_clusters), ConvergenceWarning, stacklevel=2) self.cluster_centers_ = best_centers self.labels_ = best_labels self.inertia_ = best_inertia self.n_iter_ = best_n_iter return self def fit_predict(self, X, y=None, sample_weight=None): """Compute cluster centers and predict cluster index for each sample. Convenience method; equivalent to calling fit(X) followed by predict(X). Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) New data to transform. y : Ignored Not used, present here for API consistency by convention. sample_weight : array-like of shape (n_samples,), default=None The weights for each observation in X. If None, all observations are assigned equal weight. Returns ------- labels : ndarray of shape (n_samples,) Index of the cluster each sample belongs to. """ return self.fit(X, sample_weight=sample_weight).labels_ def fit_transform(self, X, y=None, sample_weight=None): """Compute clustering and transform X to cluster-distance space. Equivalent to fit(X).transform(X), but more efficiently implemented. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) New data to transform. y : Ignored Not used, present here for API consistency by convention. sample_weight : array-like of shape (n_samples,), default=None The weights for each observation in X. If None, all observations are assigned equal weight. Returns ------- X_new : ndarray of shape (n_samples, n_clusters) X transformed in the new space. """ # Currently, this just skips a copy of the data if it is not in # np.array or CSR format already. # XXX This skips _check_test_data, which may change the dtype; # we should refactor the input validation. return self.fit(X, sample_weight=sample_weight)._transform(X) def transform(self, X): """Transform X to a cluster-distance space. In the new space, each dimension is the distance to the cluster centers. Note that even if X is sparse, the array returned by `transform` will typically be dense. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) New data to transform. Returns ------- X_new : ndarray of shape (n_samples, n_clusters) X transformed in the new space. """ check_is_fitted(self) X = self._check_test_data(X) return self._transform(X) def _transform(self, X): """Guts of transform method; no input validation.""" return euclidean_distances(X, self.cluster_centers_) def predict(self, X, sample_weight=None): """Predict the closest cluster each sample in X belongs to. In the vector quantization literature, `cluster_centers_` is called the code book and each value returned by `predict` is the index of the closest code in the code book. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) New data to predict. sample_weight : array-like of shape (n_samples,), default=None The weights for each observation in X. If None, all observations are assigned equal weight. Returns ------- labels : ndarray of shape (n_samples,) Index of the cluster each sample belongs to. """ check_is_fitted(self) X = self._check_test_data(X) x_squared_norms = row_norms(X, squared=True) sample_weight = _check_sample_weight(sample_weight, X, dtype=X.dtype) return _labels_inertia(X, sample_weight, x_squared_norms, self.cluster_centers_, self._n_threads)[0] def score(self, X, y=None, sample_weight=None): """Opposite of the value of X on the K-means objective. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) New data. y : Ignored Not used, present here for API consistency by convention. sample_weight : array-like of shape (n_samples,), default=None The weights for each observation in X. If None, all observations are assigned equal weight. Returns ------- score : float Opposite of the value of X on the K-means objective. """ check_is_fitted(self) X = self._check_test_data(X) x_squared_norms = row_norms(X, squared=True) sample_weight = _check_sample_weight(sample_weight, X, dtype=X.dtype) return -_labels_inertia(X, sample_weight, x_squared_norms, self.cluster_centers_)[1] def _more_tags(self): return { '_xfail_checks': { 'check_sample_weights_invariance': 'zero sample_weight is not equivalent to removing samples', }, }
############
QQ 3087438119