version 0.20#
Warning
version 0.20 is the last version of scikit-learn to support Python 2.7 and Python 3.4. Scikit-learn 0.21 will require Python 3.5 or higher.
Legend for changelogs
Major Feature something big that you couldn’t do before.
Feature something that you couldn’t do before.
Efficiency an existing feature now may not require as much computation or memory.
Enhancement a miscellaneous minor improvement.
Fix something that previously didn’t work as documented – or according to reasonable expectations – should now work.
API Change you will need to change your code to have the same effect in the future; or a feature will be removed in the future.
version 0.20.4#
July 30, 2019
This is a bug-fix release with some bug fixes applied to version 0.20.3.
Changelog#
The bundled version of joblib was upgraded from 0.13.0 to 0.13.2.
sklearn.cluster#
Fix Fixed a bug in cluster.KMeans where KMeans++ initialisation could rarely result in an IndexError. #11756 by Joel Nothman.
sklearn.compose#
Fix Fixed an issue in compose.ColumnTransformer where using DataFrames whose column order differs between fit and transform could lead to silently passing incorrect columns to the remainder transformer. #14237 by Andreas Schuderer.
sklearn.decomposition#
Fix Fixed a bug in cross_decomposition.CCA, improving numerical stability when Y is close to zero. #13903 by Thomas Fan.
sklearn.model_selection#
Fix Fixed a bug where model_selection.StratifiedKFold shuffled each class's samples with the same random_state, making shuffle=True ineffective. #13124 by Hanmin Qin.
sklearn.neighbors#
Fix Fixed a bug in neighbors.KernelDensity which could not be restored from a pickle if sample_weight had been used. #13772 by Aditya Vyas.
version 0.20.3#
March 1, 2019
This is a bug-fix release with some minor documentation improvements and enhancements to features released in 0.20.0.
Changelog#
sklearn.cluster#
Fix Fixed a bug in cluster.KMeans where computation was single threaded when n_jobs > 1 or n_jobs = -1. #12949 by Prabakaran Kumaresshan.
sklearn.compose#
Fix Fixed a bug in compose.ColumnTransformer so that negative indexes in the columns list of the transformers are handled correctly. #12946 by Pierre Tallotte.
sklearn.covariance#
Fix Fixed a regression in covariance.graphical_lasso so that the case n_features=2 is handled correctly. #13276 by Aurélien Bellet.
sklearn.decomposition#
Fix Fixed a bug in decomposition.sparse_encode where computation was single threaded when n_jobs > 1 or n_jobs = -1. #13005 by Prabakaran Kumaresshan.
sklearn.datasets#
Efficiency sklearn.datasets.fetch_openml now loads data by streaming, avoiding high memory usage. #13312 by Joris van den Bossche.
sklearn.feature_extraction#
Fix Fixed a bug in feature_extraction.text.CountVectorizer which would result in the sparse feature matrix having conflicting indptr and indices precisions under very large vocabularies. #11295 by Gabriel Vacaliuc.
sklearn.impute#
Fix Added support for non-numeric data in sklearn.impute.MissingIndicator, which sklearn.impute.SimpleImputer already supported for some imputation strategies. #13046 by Guillaume Lemaitre.
sklearn.linear_model#
Fix Fixed a bug in linear_model.MultiTaskElasticNet and linear_model.MultiTaskLasso which were breaking when warm_start = True. #12360 by Aakanksha Joshi.
sklearn.preprocessing#
Fix Fixed a bug in preprocessing.KBinsDiscretizer where strategy='kmeans' fails with an error during transformation due to unsorted bin edges. #13134 by Sandro Casagrande.
Fix Fixed a bug in preprocessing.OneHotEncoder where the deprecation of categorical_features was handled incorrectly in combination with handle_unknown='ignore'. #12881 by Joris van den Bossche.
Fix Bins whose width is too small (i.e., <= 1e-8) are removed with a warning in preprocessing.KBinsDiscretizer. #13165 by Hanmin Qin.
sklearn.svm#
Fix Fixed a bug in svm.SVC, svm.NuSVC, svm.SVR, svm.NuSVR and svm.OneClassSVM where the scale option of parameter gamma was erroneously defined as 1 / (n_features * X.std()). It is now defined as 1 / (n_features * X.var()). #13221 by Hanmin Qin.
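The difference between the two definitions can be illustrated with a small stdlib-only sketch (the data here is hypothetical; pstdev/pvariance mirror numpy's population X.std() and X.var()):

```python
import statistics

# Hypothetical single-feature data, just to compare the two definitions.
X = [1.0, 2.0, 3.0, 6.0]
n_features = 1

std = statistics.pstdev(X)    # population standard deviation, like X.std()
var = statistics.pvariance(X) # population variance, like X.var()

gamma_old = 1 / (n_features * std)  # erroneous pre-fix definition
gamma_new = 1 / (n_features * var)  # corrected definition

# The two only coincide when the variance equals 1.
print(gamma_old, gamma_new)
```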
Code and Documentation Contributors#
With thanks to:
Adrin Jalali, Agamemnon Krasoulis, Albert Thomas, Andreas Mueller, Aurélien Bellet, bertrandhaut, Bharat Raghunathan, Dowon, Emmanuel Arias, Fibinse Xavier, Finn O’Shea, Gabriel Vacaliuc, Gael Varoquaux, Guillaume Lemaitre, Hanmin Qin, joaak, Joel Nothman, Joris van den Bossche, Jérémie Méhault, kms15, Kossori Aruku, Lakshya KD, maikia, Manuel López-Ibáñez, Marco Gorelli, MarcoGorelli, mferrari3, Mickaël Schoentgen, Nicolas Hug, pavlos kallis, Pierre Glaser, pierretallotte, Prabakaran Kumaresshan, Reshama Shaikh, Rohit Kapoor, Roman Yurchak, SandroCasagrande, Tashay Green, Thomas Fan, Vishaal Kapoor, Zhuyi Xue, Zijie (ZJ) Poh
version 0.20.2#
December 20, 2018
This is a bug-fix release with some minor documentation improvements and enhancements to features released in 0.20.0.
Changed models#
The following estimators and functions, when fit with the same data and parameters, may produce different models from the previous version. This often occurs due to changes in the modelling logic (bug fixes or enhancements), or in random sampling procedures.
sklearn.neighbors when metric=='jaccard' (bug fix)
use of 'seuclidean' or 'mahalanobis' metrics in some cases (bug fix)
Changelog#
sklearn.compose#
Fix Fixed an issue in compose.make_column_transformer which raised an unexpected error when columns was a pandas Index or pandas Series. #12704 by Hanmin Qin.
sklearn.metrics#
Fix Fixed a bug in metrics.pairwise_distances and metrics.pairwise_distances_chunked where the parameters V of the "seuclidean" metric and VI of the "mahalanobis" metric were computed after the data was split into chunks instead of being pre-computed on the whole data. #12701 by Jeremie du Boisberranger.
sklearn.neighbors#
Fix Fixed the sklearn.neighbors.DistanceMetric jaccard distance function to return 0 when two all-zero vectors are compared. #12685 by Thomas Fan.
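The corrected convention can be sketched in plain Python (an illustrative helper, not scikit-learn's implementation):

```python
def jaccard_distance(x, y):
    """Jaccard distance between two boolean vectors.

    Defined as 1 - |intersection| / |union|. When both vectors are
    all zeros the union is empty; the fixed convention is to return
    0 (the vectors are identical) rather than erroring or returning 1.
    """
    union = sum(1 for a, b in zip(x, y) if a or b)
    if union == 0:
        return 0.0
    inter = sum(1 for a, b in zip(x, y) if a and b)
    return 1.0 - inter / union

print(jaccard_distance([0, 0, 0], [0, 0, 0]))  # two all-zero vectors -> 0.0
```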
sklearn.utils#
Fix Calling utils.check_array on pandas.Series with categorical data, which raised an error in 0.20.0, now returns the expected output again. #12699 by Joris van den Bossche.
Code and Documentation Contributors#
With thanks to:
adanhawth, Adrin Jalali, Albert Thomas, Andreas Mueller, Dan Stine, Feda Curic, Hanmin Qin, Jan S, jeremiedbb, Joel Nothman, Joris van den Bossche, josephsalmon, Katrin Leinweber, Loic Esteve, Muhammad Hassaan Rafique, Nicolas Hug, Olivier Grisel, Paul Paczuski, Reshama Shaikh, Sam Waterbury, Shivam Kotwalia, Thomas Fan
version 0.20.1#
November 21, 2018
This is a bug-fix release with some minor documentation improvements and enhancements to features released in 0.20.0. Note that we also include some API changes in this release, so you might get some extra warnings after updating from 0.20.0 to 0.20.1.
Changed models#
The following estimators and functions, when fit with the same data and parameters, may produce different models from the previous version. This often occurs due to changes in the modelling logic (bug fixes or enhancements), or in random sampling procedures.
decomposition.IncrementalPCA (bug fix)
Changelog#
sklearn.cluster#
Efficiency Make cluster.MeanShift no longer try to do nested parallelism, as the overhead would hurt performance significantly when n_jobs > 1. #12159 by Olivier Grisel.
Fix Fixed a bug in cluster.DBSCAN with a precomputed sparse neighbors graph, which would add explicit zeros on the diagonal even when already present. #12105 by Tom Dupre la Tour.
sklearn.compose#
Fix Fixed an issue in compose.ColumnTransformer when stacking columns with types not convertible to a numeric. #11912 by Adrin Jalali.
API Change compose.ColumnTransformer now applies the sparse_threshold even if all transformation results are sparse. #12304 by Andreas Müller.
API Change compose.make_column_transformer now expects (transformer, columns) instead of (columns, transformer) to keep consistent with compose.ColumnTransformer. #12339 by Adrin Jalali.
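Under the new argument order, a call looks like the following (a sketch assuming scikit-learn >= 0.20.1 with pandas installed; the column names are hypothetical):

```python
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

X = pd.DataFrame({"city": ["London", "Paris", "London"],
                  "temp": [20.0, 25.0, 18.0]})

# New order: (transformer, columns), matching ColumnTransformer's tuples.
ct = make_column_transformer(
    (OneHotEncoder(), ["city"]),
    (StandardScaler(), ["temp"]),
)
Xt = ct.fit_transform(X)
print(Xt.shape)  # two one-hot columns plus one scaled column
```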
sklearn.datasets#
Fix datasets.fetch_openml now correctly uses the local cache. #12246 by Jan N. van Rijn.
Fix datasets.fetch_openml now correctly handles ignore attributes and row id attributes. #12330 by Jan N. van Rijn.
Fix Fixed integer overflow in datasets.make_classification for values of the n_informative parameter larger than 64. #10811 by Roman Feldbauer.
Fix Fixed the olivetti faces dataset DESCR attribute to point to the right location in datasets.fetch_olivetti_faces. #12441 by Jérémie du Boisberranger.
Fix datasets.fetch_openml now retries downloading when reading from the local cache fails. #12517 by Thomas Fan.
sklearn.decomposition#
Fix Fixed a regression in decomposition.IncrementalPCA where 0.20.0 raised an error if the number of samples in the final batch for fitting IncrementalPCA was smaller than n_components. #12234 by Ming Li.
sklearn.ensemble#
Fix Fixed a bug mostly affecting ensemble.RandomForestClassifier where class_weight='balanced_subsample' failed with more than 32 classes. #12165 by Joel Nothman.
Fix Fixed a bug affecting ensemble.BaggingClassifier, ensemble.BaggingRegressor and ensemble.IsolationForest, where max_features was sometimes rounded down to zero. #12388 by Connor Tann.
sklearn.feature_extraction#
Fix Fixed a regression in v0.20.0 where feature_extraction.text.CountVectorizer and other text vectorizers could error during stop words validation with custom preprocessors or tokenizers. #12393 by Roman Yurchak.
sklearn.linear_model#
Fix linear_model.SGDClassifier and variants with early_stopping=True would not use a consistent validation split in the multiclass case, which could cause a crash when using those estimators as part of parallel parameter search or cross-validation. #12122 by Olivier Grisel.
Fix Fixed a bug affecting linear_model.SGDClassifier in the multiclass case. Each one-versus-all step was run in a joblib.Parallel call and mutated a common parameter, causing a segmentation fault if called within a backend using processes and not threads. We now use require=sharedmem at the joblib.Parallel instance creation. #12518 by Pierre Glaser and Olivier Grisel.
sklearn.metrics#
Fix Fixed a bug in metrics.pairwise.pairwise_distances_argmin_min which returned the square root of the distance when the metric parameter was set to "euclidean". #12481 by Jérémie du Boisberranger.
Fix Fixed a bug in metrics.pairwise.pairwise_distances_chunked which didn't ensure the diagonal is zero for euclidean distances. #12612 by Andreas Müller.
API Change metrics.calinski_harabaz_score has been renamed to metrics.calinski_harabasz_score; the misspelled name is deprecated and will be removed in version 0.23. #12211 by Lisa Thomas, Mark Hannel and Melissa Ferrari.
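A sketch of the metric under its corrected name (assumes scikit-learn >= 0.20.1, where metrics.calinski_harabasz_score is available; the toy data is hypothetical):

```python
import numpy as np
from sklearn.metrics import calinski_harabasz_score

# Two compact, well-separated blobs: the score rewards tight clusters
# that are far apart, so it should be large here.
X = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [5.1, 5.1]])
labels = [0, 0, 1, 1]

score = calinski_harabasz_score(X, labels)
print(score > 1.0)  # True for this clearly clustered data
```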
sklearn.mixture#
Fix Ensure that the fit_predict method of mixture.GaussianMixture and mixture.BayesianGaussianMixture always yields assignments consistent with fit followed by predict, even if the convergence criterion is too loose or not met. #12451 by Olivier Grisel.
sklearn.neighbors#
Fix Force the parallelism backend to threading for neighbors.KDTree and neighbors.BallTree in Python 2.7 to avoid pickling errors caused by the serialization of their methods. #12171 by Thomas Moreau.
sklearn.preprocessing#
Fix Fixed a bug in preprocessing.OrdinalEncoder when passing manually specified categories. #12365 by Joris van den Bossche.
Fix Fixed a bug in preprocessing.KBinsDiscretizer where the transform method mutated the _encoder attribute. The transform method is now thread safe. #12514 by Hanmin Qin.
Fix Fixed a bug in preprocessing.PowerTransformer where the Yeo-Johnson transform was incorrect for lambda parameters outside of [0, 2]. #12522 by Nicolas Hug.
Fix Fixed a bug in preprocessing.OneHotEncoder where transform failed when set to ignore unknown numpy strings of different lengths. #12471 by Gabriel Marzinotto.
API Change The default value of the method argument in preprocessing.power_transform will be changed from box-cox to yeo-johnson to match preprocessing.PowerTransformer in version 0.23. A FutureWarning is raised when the default value is used. #12317 by Eric Chang.
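For reference, the Yeo-Johnson transform that #12522 corrects for lambda outside [0, 2] can be sketched in plain Python (an illustrative implementation of the published formula, not scikit-learn's code):

```python
import math

def yeo_johnson(x, lam):
    """Yeo-Johnson transform of a scalar x for parameter lam.

    Valid for any real lam, including values outside [0, 2]
    (the range for which PowerTransformer was previously wrong).
    """
    if x >= 0:
        if lam != 0:
            return ((x + 1) ** lam - 1) / lam
        return math.log(x + 1)
    if lam != 2:
        return -(((-x + 1) ** (2 - lam)) - 1) / (2 - lam)
    return -math.log(-x + 1)

# lam = 1 leaves the value unchanged: (x + 1)^1 - 1 = x
print(yeo_johnson(3.0, 1.0))  # 3.0
```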
sklearn.utils#
Fix Use float64 for the mean accumulator to avoid floating point precision issues in preprocessing.StandardScaler and decomposition.IncrementalPCA when using float32 datasets. #12338 by bauks.
Fix Calling utils.check_array on pandas.Series, which raised an error in 0.20.0, now returns the expected output again. #12625 by Andreas Müller.
Miscellaneous#
Fix When using site joblib by setting the environment variable SKLEARN_SITE_JOBLIB, added compatibility with joblib 0.11 in addition to 0.12+. #12350 by Joel Nothman and Roman Yurchak.
Fix Make sure to avoid raising FutureWarning when calling np.vstack with numpy 1.16 and later (use list comprehensions instead of generator expressions in many locations of the scikit-learn code base). #12467 by Olivier Grisel.
API Change Removed all mentions of sklearn.externals.joblib, and deprecated the joblib methods exposed in sklearn.utils, except for utils.parallel_backend and utils.register_parallel_backend, which allow users to configure parallel computation in scikit-learn. Other functionality is part of the joblib package and should be used directly, by installing it. The goal of this change is to prepare for unvendoring joblib in a future version of scikit-learn. #12345 by Thomas Moreau.
Code and Documentation Contributors#
With thanks to:
^__^, Adrin Jalali, Andrea Navarrete, Andreas Mueller, bauks, BenjaStudio, Cheuk Ting Ho, Connossor, Corey Levinson, Dan Stine, daten-kieker, Denis Kataev, Dillon Gardner, Dmitry Vukolov, Dougal J. Sutherland, Edward J Brown, Eric Chang, Federico Caselli, Gabriel Marzinotto, Gael Varoquaux, GauravAhlawat, Gustavo De Mari Pereira, Hanmin Qin, haroldfox, JackLangerman, Jacopo Notarstefano, janvanrijn, jdethurens, jeremiedbb, Joel Nothman, Joris van den Bossche, Koen, Kushal Chauhan, Lee Yi Jie Joel, Lily Xiong, mail-liam, Mark Hannel, melsyt, Ming Li, Nicholas Smith, Nicolas Hug, Nikolay Shebanov, Oleksandr Pavlyk, Olivier Grisel, Peter Hausamann, Pierre Glaser, Pulkit Maloo, Quentin Batista, Radostin Stoyanov, Ramil Nugmanov, Rebekah Kim, Reshama Shaikh, Rohan Singh, Roman Feldbauer, Roman Yurchak, Roopam Sharma, Sam Waterbury, Scott Lowe, Sebastian Raschka, Stephen Tierney, SylvainLan, TakingItCasual, Thomas Fan, Thomas Moreau, Tom Dupré la Tour, Tulio Casagrande, Utkarsh Upadhyay, Xing Han Lu, Yaroslav Halchenko, Zach Miller
version 0.20.0#
September 25, 2018
This release packs in a mountain of bug fixes, features and enhancements for the Scikit-learn library, and improvements to the documentation and examples. Thanks to our contributors!
This release is dedicated to the memory of Raghav Rajagopalan.
Highlights#
We have tried to improve our support for common data-science use-cases
including missing values, categorical variables, heterogeneous data, and
features/targets with unusual distributions.
Missing values in features, represented by NaNs, are now accepted in
column-wise preprocessing such as scalers. Each feature is fitted disregarding
NaNs, and data containing NaNs can be transformed. The new sklearn.impute
module provides estimators for learning despite missing data.
ColumnTransformer handles the case where different features
or columns of a pandas.DataFrame need different preprocessing.
String or pandas Categorical columns can now be encoded with
OneHotEncoder or
OrdinalEncoder.
TransformedTargetRegressor helps when the regression target
needs to be transformed to be modeled. PowerTransformer
and KBinsDiscretizer join
QuantileTransformer as non-linear transformations.
Beyond this, we have added sample_weight support to several estimators
(including KMeans, BayesianRidge and
KernelDensity) and improved stopping criteria in others
(including MLPRegressor,
GradientBoostingRegressor and
SGDRegressor).
This release is also the first to be accompanied by a Glossary of Common Terms and API Elements developed by Joel Nothman. The glossary is a reference resource to help users and contributors become familiar with the terminology and conventions used in Scikit-learn.
Sorry if your contribution didn’t make it into the highlights. There’s a lot here…
Changed models#
The following estimators and functions, when fit with the same data and parameters, may produce different models from the previous version. This often occurs due to changes in the modelling logic (bug fixes or enhancements), or in random sampling procedures.
cluster.MeanShift (bug fix)
decomposition.IncrementalPCA in Python 2 (bug fix)
decomposition.SparsePCA (bug fix)
ensemble.GradientBoostingClassifier (bug fix affecting feature importances)
isotonic.IsotonicRegression (bug fix)
linear_model.ARDRegression (bug fix)
linear_model.LogisticRegressionCV (bug fix)
linear_model.OrthogonalMatchingPursuit (bug fix)
linear_model.PassiveAggressiveClassifier (bug fix)
linear_model.PassiveAggressiveRegressor (bug fix)
linear_model.Perceptron (bug fix)
linear_model.SGDClassifier (bug fix)
linear_model.SGDRegressor (bug fix)
metrics.roc_auc_score (bug fix)
metrics.roc_curve (bug fix)
neural_network.BaseMultilayerPerceptron (bug fix)
neural_network.MLPClassifier (bug fix)
neural_network.MLPRegressor (bug fix)
The v0.19.0 release notes failed to mention a backwards incompatibility with model_selection.StratifiedKFold when shuffle=True due to #7823.
Details are listed in the changelog below.
(While we are trying to better inform users by providing this information, we cannot assure that this list is complete.)
Known Major Bugs#
#11924: linear_model.LogisticRegressionCV with solver='lbfgs' and multi_class='multinomial' may be non-deterministic or otherwise broken on macOS. This appears to be the case on Travis CI servers, but has not been confirmed on personal MacBooks. This issue has been present in previous releases.
#9354: metrics.pairwise.euclidean_distances (which is used several times throughout the library) gives results with poor precision, which particularly affects its use with 32-bit float inputs. This became more problematic in versions 0.18 and 0.19 when some algorithms were changed to avoid casting 32-bit data into 64-bit.
Changelog#
Support for Python 3.3 has been officially dropped.
sklearn.cluster#
Major Feature cluster.AgglomerativeClustering now supports single linkage clustering via linkage='single'. #9372 by Leland McInnes and Steve Astels.
Feature cluster.KMeans and cluster.MiniBatchKMeans now support sample weights via the new parameter sample_weight in the fit function. #10933 by Johannes Hansen.
Efficiency cluster.KMeans, cluster.MiniBatchKMeans and cluster.k_means passed with algorithm='full' now enforce row-major ordering, improving runtime. #10471 by Gaurav Dhingra.
Efficiency cluster.DBSCAN is now parallelized according to n_jobs regardless of algorithm. #8003 by Joël Billaud.
Enhancement cluster.KMeans now gives a warning if the number of distinct clusters found is smaller than n_clusters. This may occur when the number of distinct points in the data set is actually smaller than the number of clusters one is looking for. #10059 by Christian Braune.
Fix Fixed a bug where the fit method of cluster.AffinityPropagation stored cluster centers as a 3d array instead of a 2d array in case of non-convergence. For the same class, fixed undefined and arbitrary behavior in case of training data where all samples had equal similarity. #9612 by Jonatan Samoocha.
Fix Fixed a bug in cluster.spectral_clustering where the normalization of the spectrum was using a division instead of a multiplication. #8129 by Jan Margeta, Guillaume Lemaitre, and Devansh D.
Fix Fixed a bug in cluster.k_means_elkan where the returned iteration was 1 less than the correct value. Also added the missing n_iter_ attribute in the docstring of cluster.KMeans. #11353 by Jeremie du Boisberranger.
Fix Fixed a bug in cluster.mean_shift where the assigned labels were not deterministic if there were multiple clusters with the same intensities. #11901 by Adrin Jalali.
API Change Deprecated the unused parameter pooling_func in cluster.AgglomerativeClustering. #9875 by Kumar Ashutosh.
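The new sample_weight support in cluster.KMeans can be sketched as follows (assumes scikit-learn >= 0.20 and numpy; the one-cluster setup makes the effect of the weights easy to check, since the single center is the weighted mean):

```python
import numpy as np
from sklearn.cluster import KMeans

# Two 1-D points; weighting the first point 3x pulls the center toward it.
X = np.array([[0.0], [1.0]])

km = KMeans(n_clusters=1, n_init=1, random_state=0)
km.fit(X, sample_weight=[3.0, 1.0])

# Weighted mean: (3 * 0 + 1 * 1) / 4 = 0.25
print(km.cluster_centers_)
```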
sklearn.compose#
New module.
Major Feature Added compose.ColumnTransformer, which allows applying different transformers to different columns of arrays or pandas DataFrames. #9012 by Andreas Müller and Joris van den Bossche, and #11315 by Thomas Fan.
Major Feature Added compose.TransformedTargetRegressor, which transforms the target y before fitting a regression model. The predictions are mapped back to the original space via an inverse transform. #9041 by Andreas Müller and Guillaume Lemaitre.
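The round trip through the target transform can be sketched like this (assumes scikit-learn >= 0.20 and numpy; the synthetic target is hypothetical and chosen to be exactly linear after log1p):

```python
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.linear_model import LinearRegression

X = np.arange(6, dtype=float).reshape(-1, 1)
y = np.expm1(2.0 * X.ravel())  # linear in log1p space, highly skewed otherwise

# The regressor is fit on log1p(y); predictions are mapped back via expm1.
reg = TransformedTargetRegressor(regressor=LinearRegression(),
                                 func=np.log1p, inverse_func=np.expm1)
reg.fit(X, y)

pred = reg.predict(X[:1])
print(pred)  # close to y[0]
```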
sklearn.covariance#
Efficiency Runtime improvements to covariance.GraphicalLasso. #9858 by Steven Brown.
API Change covariance.graph_lasso, covariance.GraphLasso and covariance.GraphLassoCV have been renamed to covariance.graphical_lasso, covariance.GraphicalLasso and covariance.GraphicalLassoCV respectively; the old names are deprecated and will be removed in version 0.22. #9993 by Artiem Krinitsyn.
sklearn.datasets#
Major Feature Added datasets.fetch_openml to fetch datasets from OpenML. OpenML is a free, open data sharing platform and will be used instead of mldata as it provides better service availability. #9908 by Andreas Müller and Jan N. van Rijn.
Feature In datasets.make_blobs, one can now pass a list to the n_samples parameter to indicate the number of samples to generate per cluster. #8617 by Maskani Filali Mohamed and Konstantinos Katrioplas.
Feature Added a filename attribute to sklearn.datasets loaders that have a CSV file. #9101 by alex-33 and Maskani Filali Mohamed.
Feature The return_X_y parameter has been added to several dataset loaders. #10774 by Chris Catalfo.
Fix Fixed a bug in datasets.load_boston which had a wrong data point. #10795 by Takeshi Yoshizawa.
Fix Fixed a bug in datasets.load_iris which had two wrong data points. #11082 by Sadhana Srinivasan and Hanmin Qin.
Fix Fixed a bug in datasets.fetch_kddcup99, where data were not properly shuffled. #9731 by Nicolas Goix.
Fix Fixed a bug in datasets.make_circles, where no odd number of data points could be generated. #10045 by Christian Braune.
API Change Deprecated sklearn.datasets.fetch_mldata, to be removed in version 0.22. mldata.org is no longer operational. Until removal it will remain possible to load cached datasets. #11466 by Joel Nothman.
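The per-cluster n_samples feature of make_blobs can be sketched like this (assumes scikit-learn >= 0.20; the specific counts are arbitrary):

```python
from sklearn.datasets import make_blobs

# A list for n_samples gives per-cluster sample counts (new in 0.20);
# with centers=None, one center is generated per list entry.
X, y = make_blobs(n_samples=[10, 30, 60], centers=None, random_state=0)

counts = [list(y).count(k) for k in range(3)]
print(X.shape, counts)
```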
sklearn.decomposition#
Feature decomposition.dict_learning functions and models now support positivity constraints. This applies to the dictionary and sparse code. #6374 by John Kirkham.
Feature Fix decomposition.SparsePCA now exposes normalize_components. When set to True, the train and test data are centered with the train mean respectively during the fit phase and the transform phase. This fixes the behavior of SparsePCA. When set to False, which is the default, the previous abnormal behaviour still holds. The False value is for backward compatibility and should not be used. #11585 by Ivan Panico.
Efficiency Efficiency improvements in decomposition.dict_learning. #11420 and others by John Kirkham.
Fix Fix for an uninformative error in decomposition.IncrementalPCA: an error is now raised if the number of components is larger than the chosen batch size. The n_components=None case was adapted accordingly. #6452 by Wally Gauze.
Fix Fixed a bug where the partial_fit method of decomposition.IncrementalPCA used integer division instead of float division on Python 2. #9492 by James Bourbeau.
Fix In decomposition.PCA, selecting an n_components parameter greater than the number of samples now raises an error. Similarly, the n_components=None case now selects the minimum of n_samples and n_features. #8484 by Wally Gauze.
Fix Fixed a bug in decomposition.PCA where users would get an unexpected error with large datasets when n_components='mle' on Python 3 versions. #9886 by Hanmin Qin.
Fix Fixed an underflow in calculating KL-divergence for decomposition.NMF. #10142 by Tom Dupre la Tour.
Fix Fixed a bug in decomposition.SparseCoder when running OMP sparse coding in parallel using read-only memory mapped datastructures. #5956 by Vighnesh Birodkar and Olivier Grisel.
sklearn.discriminant_analysis#
Efficiency Memory usage improvement for _class_means and _class_cov in sklearn.discriminant_analysis. #10898 by Nanxin Chen.
sklearn.dummy#
Feature dummy.DummyRegressor now has a return_std option in its predict method. The returned standard deviations will be zeros.
Feature dummy.DummyClassifier and dummy.DummyRegressor now only require X to be an object with finite length or shape. #9832 by Vrishank Bhardwaj.
Feature dummy.DummyClassifier and dummy.DummyRegressor can now be scored without supplying test samples. #11951 by Rüdiger Busche.
sklearn.ensemble#
Feature ensemble.BaggingRegressor and ensemble.BaggingClassifier can now be fit with missing/non-finite values in X and/or multi-output Y to support wrapping pipelines that perform their own imputation. #9707 by Jimmy Wan.
Feature ensemble.GradientBoostingClassifier and ensemble.GradientBoostingRegressor now support early stopping via n_iter_no_change, validation_fraction and tol. #7071 by Raghav RV.
Feature Added the named_estimators_ parameter in ensemble.VotingClassifier to access fitted estimators. #9157 by Herilalaina Rakotoarison.
Fix Fixed a bug when fitting ensemble.GradientBoostingClassifier or ensemble.GradientBoostingRegressor with warm_start=True which previously raised a segmentation fault due to a non-conversion of CSC matrix into CSR format expected by decision_function. Similarly, Fortran-ordered arrays are converted to C-ordered arrays in the dense case. #9991 by Guillaume Lemaitre.
Fix Fixed a bug in ensemble.GradientBoostingRegressor and ensemble.GradientBoostingClassifier to have feature importances summed and then normalized, rather than normalizing on a per-tree basis. The previous behavior over-weighted the Gini importance of features that appear in later stages. This issue only affected feature importances. #11176 by Gil Forsyth.
API Change The default value of the n_estimators parameter of ensemble.RandomForestClassifier, ensemble.RandomForestRegressor, ensemble.ExtraTreesClassifier, ensemble.ExtraTreesRegressor, and ensemble.RandomTreesEmbedding will change from 10 in version 0.20 to 100 in 0.22. A FutureWarning is raised when the default value is used. #11542 by Anna Ayzenshtat.
API Change In classes derived from ensemble.BaseBagging, the attribute estimators_samples_ will return a list of arrays containing the indices selected for each bootstrap instead of a list of arrays containing the mask of the samples selected for each bootstrap. Indices allow repeating samples while masks do not. #9524 by Guillaume Lemaitre.
Fix Fixed a bug in ensemble.BaseBagging where one could not deterministically reproduce a fit result using the object attributes when random_state is set. #9723 by Guillaume Lemaitre.
sklearn.feature_extraction#
Feature Enabled the call to get_feature_names in an unfitted feature_extraction.text.CountVectorizer initialized with a vocabulary. #10908 by Mohamed Maskani.
Enhancement idf_ can now be set on a feature_extraction.text.TfidfTransformer. #10899 by Sergey Melderis.
Fix Fixed a bug in feature_extraction.image.extract_patches_2d which would throw an exception if max_patches was greater than or equal to the number of all possible patches, rather than simply returning the number of possible patches. #10101 by Varun Agrawal.
Fix Fixed a bug in feature_extraction.text.CountVectorizer, feature_extraction.text.TfidfVectorizer and feature_extraction.text.HashingVectorizer to support 64-bit sparse array indexing necessary to process large datasets with more than 2·10⁹ tokens (words or n-grams). #9147 by Claes-Fredrik Mannby and Roman Yurchak.
Fix Fixed a bug in feature_extraction.text.TfidfVectorizer which was ignoring the parameter dtype. In addition, feature_extraction.text.TfidfTransformer will preserve dtype for floating types and raise a warning if the requested dtype is integer. #10441 by Mayur Kulkarni and Guillaume Lemaitre.
sklearn.feature_selection#
Feature Added select-K-best features functionality to feature_selection.SelectFromModel. #6689 by Nihar Sheth and Quazi Rahman.
Feature Added the min_features_to_select parameter to feature_selection.RFECV to bound the number of features evaluated. #11293 by Brent Yi.
Feature feature_selection.RFECV's fit method now supports groups. #9656 by Adam Greenhall.
Fix Fixed the computation of n_features_to_compute for the edge case with tied CV scores in feature_selection.RFECV. #9222 by Nick Hoh.
sklearn.gaussian_process#
Efficiency In gaussian_process.GaussianProcessRegressor, the predict method is faster when using return_std=True, in particular when called several times in a row. #9234 by andrewww and Minghui Liu.
sklearn.impute#
New module, adopting preprocessing.Imputer as impute.SimpleImputer with minor changes (see under preprocessing below).
Major Feature Added impute.MissingIndicator which generates a binary indicator for missing values. #8075 by Maniteja Nandana and Guillaume Lemaitre.
Feature impute.SimpleImputer has a new strategy, 'constant', to complete missing values with a fixed one, given by the fill_value parameter. This strategy supports numeric and non-numeric data, and so does the 'most_frequent' strategy now. #11211 by Jeremie du Boisberranger.
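The new 'constant' strategy can be sketched as follows (assumes scikit-learn >= 0.20 and numpy; the fill value 0.0 is an arbitrary choice for illustration):

```python
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[1.0, np.nan],
              [np.nan, 3.0],
              [7.0, 6.0]])

# Every missing value is replaced by the fixed fill_value.
imp = SimpleImputer(strategy='constant', fill_value=0.0)
Xt = imp.fit_transform(X)
print(Xt)
```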
sklearn.isotonic#
Fix Fixed a bug in isotonic.IsotonicRegression which incorrectly combined weights when fitting a model to data involving points with identical X values. #9484 by Dallas Card.
sklearn.linear_model#
Feature linear_model.SGDClassifier, linear_model.SGDRegressor, linear_model.PassiveAggressiveClassifier, linear_model.PassiveAggressiveRegressor and linear_model.Perceptron now expose early_stopping, validation_fraction and n_iter_no_change parameters, to stop optimization by monitoring the score on a validation set. A new learning rate strategy, "adaptive", divides the learning rate by 5 each time n_iter_no_change consecutive epochs fail to improve the model. #9043 by Tom Dupre la Tour.
Feature Added a sample_weight parameter to the fit method of linear_model.BayesianRidge for weighted linear regression. #10112 by Peter St. John.
Fix Fixed a bug in logistic.logistic_regression_path to ensure that the returned coefficients are correct when multiclass='multinomial'. Previously, some of the coefficients would override each other, leading to incorrect results in linear_model.LogisticRegressionCV. #11724 by Nicolas Hug.
Fix Fixed a bug in linear_model.LogisticRegression where, when using the parameter multi_class='multinomial', the predict_proba method was returning incorrect probabilities in the case of binary outcomes. #9939 by Roger Westover.
Fix Fixed a bug in linear_model.LogisticRegressionCV where the score method always computed accuracy, not the metric given by the scoring parameter. #10998 by Thomas Fan.
Fix Fixed a bug in linear_model.LogisticRegressionCV where the 'ovr' strategy was always used to compute cross-validation scores in the multiclass setting, even if 'multinomial' was set. #8720 by William de Vazelhes.
Fix Fixed a bug in linear_model.OrthogonalMatchingPursuit that was broken when setting normalize=False. #10071 by Alexandre Gramfort.
Fix Fixed a bug in linear_model.ARDRegression which caused incorrectly updated estimates for the standard deviation and the coefficients. #10153 by Jörg Döpfert.
Fix Fixed a bug in linear_model.ARDRegression and linear_model.BayesianRidge which caused NaN predictions when fitted with a constant target. #10095 by Jörg Döpfert.
Fix Fixed a bug in linear_model.RidgeClassifierCV where the parameter store_cv_values was not implemented, though it was documented in cv_values as a way to set up the storage of cross-validation values for different alphas. #10297 by Mabel Villalba-Jiménez.
Fix Fixed a bug in linear_model.ElasticNet which caused the input to be overridden when using the parameters copy_X=True and check_input=False. #10581 by Yacine Mazari.
Fix Fixed a bug in sklearn.linear_model.Lasso where the coefficient had the wrong shape when fit_intercept=False. #10687 by Martin Hahn.
Fix Fixed a bug in sklearn.linear_model.LogisticRegression where multi_class='multinomial' with binary output and warm_start=True was broken. #10836 by Aishwarya Srinivasan.
Fix Fixed a bug in linear_model.RidgeCV where using integer alphas raised an error. #10397 by Mabel Villalba-Jiménez.
Fix Fixed the condition triggering gap computation in linear_model.Lasso and linear_model.ElasticNet when working with sparse matrices. #10992 by Alexandre Gramfort.
Fix Fixed a bug in linear_model.SGDClassifier, linear_model.SGDRegressor, linear_model.PassiveAggressiveClassifier, linear_model.PassiveAggressiveRegressor and linear_model.Perceptron, where the stopping criterion was stopping the algorithm before convergence. A parameter n_iter_no_change was added and set by default to 5. Previous behavior is equivalent to setting the parameter to 1. #9043 by Tom Dupre la Tour.
Fix Fixed a bug where liblinear and libsvm-based estimators would segfault if passed a scipy.sparse matrix with 64-bit indices. They now raise a ValueError. #11327 by Karan Dhingra and Joel Nothman.
API Change The default values of the
solverandmulti_classparameters oflinear_model.LogisticRegressionwill change respectively from'liblinear'and'ovr'in version 0.20 to'lbfgs'and'auto'in version 0.22. A FutureWarning is raised when the default values are used. #11905 by Tom Dupre la Tour and Joel Nothman.API Change Deprecate
positive=Trueoption inlinear_model.Larsas the underlying implementation is broken. Uselinear_model.Lassoinstead. #9837 by Alexandre Gramfort.API Change
n_iter_may vary from previous releases inlinear_model.LogisticRegressionwithsolver='lbfgs'andlinear_model.HuberRegressor. For Scipy <= 1.0.0, the optimizer could perform more than the requested maximum number of iterations. Now both estimators will report at mostmax_iteriterations even if more were performed. #10723 by Joel Nothman.
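The new n_iter_no_change stopping criterion for the SGD-based estimators can be set explicitly; a minimal sketch with illustrative data and parameter values:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=200, random_state=0)

# Stop only after 5 consecutive epochs (the new default) without an
# improvement greater than tol on the training loss.
clf = SGDClassifier(n_iter_no_change=5, tol=1e-3, random_state=0)
clf.fit(X, y)
```

After fitting, clf.n_iter_ reports the number of epochs actually run.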
sklearn.manifold#
Efficiency Speed improvements for both 'exact' and 'barnes_hut' methods in manifold.TSNE. #10593 and #10610 by Tom Dupre la Tour.
Feature Support sparse input in manifold.Isomap.fit. #8554 by Leland McInnes.
Feature manifold.t_sne.trustworthiness accepts metrics other than Euclidean. #9775 by William de Vazelhes.
Fix Fixed a bug in manifold.spectral_embedding where the normalization of the spectrum was using a division instead of a multiplication. #8129 by Jan Margeta, Guillaume Lemaitre, and Devansh D.
API Change Deprecate the precomputed parameter in function manifold.t_sne.trustworthiness. Instead, the new parameter metric should be used with any compatible metric including 'precomputed', in which case the input matrix X should be a matrix of pairwise distances or squared distances. #9775 by William de Vazelhes.
sklearn.metrics#
Major Feature Added the metrics.davies_bouldin_score metric for evaluation of clustering models without a ground truth. #10827 by Luis Osa.
Major Feature Added the metrics.balanced_accuracy_score metric and a corresponding 'balanced_accuracy' scorer for binary and multiclass classification. #8066 by @xyguo and Aman Dalmia, and #10587 by Joel Nothman.
Feature Partial AUC is available via the max_fpr parameter in metrics.roc_auc_score. #3840 by Alexander Niederbühl.
Feature A scorer based on metrics.brier_score_loss is also available. #9521 by Hanmin Qin.
Feature Added control over the normalization in metrics.normalized_mutual_info_score and metrics.adjusted_mutual_info_score via the average_method parameter. In version 0.22, the default normalizer for each will become the arithmetic mean of the entropies of each clustering. #11124 by Arya McCarthy.
Feature Added the output_dict parameter in metrics.classification_report to return classification statistics as a dictionary. #11160 by Dan Barkhorn.
Feature metrics.classification_report now reports all applicable averages on the given data, including micro, macro and weighted average as well as samples average for multilabel data. #11679 by Alexander Pacha.
Feature metrics.average_precision_score now supports binary y_true other than {0, 1} or {-1, 1} through the pos_label parameter. #9980 by Hanmin Qin.
Feature metrics.label_ranking_average_precision_score now supports sample_weight. #10845 by Jose Perez-Parras Toledano.
Feature Add the dense_output parameter to metrics.pairwise.linear_kernel. When False and both inputs are sparse, it will return a sparse matrix. #10999 by Taylor G Smith.
Efficiency metrics.silhouette_score and metrics.silhouette_samples are more memory efficient and run faster. This avoids some reported freezes and MemoryErrors. #11135 by Joel Nothman.
Fix Fixed a bug in metrics.precision_recall_fscore_support when a truncated range(n_labels) is passed as the value for labels. #10377 by Gaurav Dhingra.
Fix Fixed a bug due to floating point error in metrics.roc_auc_score with non-integer sample weights. #9786 by Hanmin Qin.
Fix Fixed a bug where metrics.roc_curve sometimes started on the y-axis instead of (0, 0), which is inconsistent with the documentation and other implementations. Note that this will not influence the result from metrics.roc_auc_score. #10093 by alexryndin and Hanmin Qin.
Fix Fixed a bug to avoid integer overflow. Casted the product to a 64-bit integer in metrics.mutual_info_score. #9772 by Kumar Ashutosh.
Fix Fixed a bug where metrics.average_precision_score would sometimes return nan when sample_weight contained 0. #9980 by Hanmin Qin.
Fix Fixed a bug in metrics.fowlkes_mallows_score to avoid integer overflow. Casted the return value of contingency_matrix to int64 and computed the product of square roots rather than the square root of the product. #9515 by Alan Liddell and Manh Dao.
API Change Deprecate the reorder parameter in metrics.auc as it is no longer required for metrics.roc_auc_score. Moreover, using reorder=True can hide bugs due to floating point error in the input. #9851 by Hanmin Qin.
API Change In metrics.normalized_mutual_info_score and metrics.adjusted_mutual_info_score, warn that average_method will have a new default value. In version 0.22, the default normalizer for each will become the arithmetic mean of the entropies of each clustering. Currently, metrics.normalized_mutual_info_score uses the default of average_method='geometric', and metrics.adjusted_mutual_info_score uses the default of average_method='max' to match their behaviors in version 0.19. #11124 by Arya McCarthy.
API Change The batch_size parameter of metrics.pairwise_distances_argmin_min and metrics.pairwise_distances_argmin is deprecated and will be removed in v0.22. It no longer has any effect, as the batch size is determined by the global working_memory config. See Limiting Working Memory. #10280 by Joel Nothman and Aman Dalmia.
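A few of the new metrics APIs above in action; the toy labels and scores are illustrative only:

```python
from sklearn.metrics import (balanced_accuracy_score, classification_report,
                             roc_auc_score)

y_true = [0, 0, 0, 1, 1, 1]
y_pred = [0, 0, 1, 1, 1, 1]
y_score = [0.1, 0.2, 0.6, 0.7, 0.8, 0.9]

# balanced accuracy is the mean of per-class recall: (2/3 + 3/3) / 2
bal = balanced_accuracy_score(y_true, y_pred)

# partial AUC (standardized) up to a false positive rate of 0.5
pauc = roc_auc_score(y_true, y_score, max_fpr=0.5)

# classification statistics as a nested dict instead of a string
report = classification_report(y_true, y_pred, output_dict=True)
```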
sklearn.mixture#
Feature Added function fit_predict to mixture.GaussianMixture and mixture.BayesianGaussianMixture, which is essentially equivalent to calling fit and predict. #10336 by Shu Haoran and Andrew Peng.
Fix Fixed a bug in mixture.BaseMixture where the reported n_iter_ was missing an iteration. It affected mixture.GaussianMixture and mixture.BayesianGaussianMixture. #10740 by Erich Schubert and Guillaume Lemaitre.
Fix Fixed a bug in mixture.BaseMixture and its subclasses mixture.GaussianMixture and mixture.BayesianGaussianMixture where lower_bound_ was not the max lower bound across all initializations (when n_init > 1), but just the lower bound of the last initialization. #10869 by Aurélien Géron.
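The new fit_predict shortcut, sketched on synthetic, well-separated blobs:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
# two well-separated clusters of 50 points each
X = np.vstack([rng.randn(50, 2), rng.randn(50, 2) + 10])

gm = GaussianMixture(n_components=2, random_state=0)
# fits the model and returns each sample's component in one call,
# essentially equivalent to gm.fit(X).predict(X)
labels = gm.fit_predict(X)
```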
sklearn.model_selection#
Feature Add the return_estimator parameter in model_selection.cross_validate to return estimators fitted on each split. #9686 by Aurélien Bellet.
Feature A new refit_time_ attribute will be stored in model_selection.GridSearchCV and model_selection.RandomizedSearchCV if refit is set to True. This allows measuring the complete time it takes to perform hyperparameter optimization and refit the best model on the whole dataset. #11310 by Matthias Feurer.
Feature Expose the error_score parameter in model_selection.cross_validate, model_selection.cross_val_score, model_selection.learning_curve and model_selection.validation_curve to control the behavior triggered when an error occurs in model_selection._fit_and_score. #11576 by Samuel O. Ronsin.
Feature BaseSearchCV now has an experimental, private interface to support customized parameter search strategies, through its _run_search method. See the implementations in model_selection.GridSearchCV and model_selection.RandomizedSearchCV and please provide feedback if you use this. Note that we do not assure the stability of this API beyond version 0.20. #9599 by Joel Nothman.
Enhancement Add an improved error message in model_selection.cross_val_score when multiple metrics are passed in the scoring keyword. #11006 by Ming Li.
API Change The default number of cross-validation folds cv and the default number of splits n_splits in the model_selection.KFold-like splitters will change from 3 to 5 in 0.22, as 3-fold has a lot of variance. #11557 by Alexandre Boucaud.
API Change The default of the iid parameter of model_selection.GridSearchCV and model_selection.RandomizedSearchCV will change from True to False in version 0.22 to correspond to the standard definition of cross-validation, and the parameter will be removed in version 0.24 altogether. This parameter is of greatest practical significance where the sizes of different test sets in cross-validation are very unequal, i.e. in group-based CV strategies. #9085 by Laurent Direr and Andreas Müller.
API Change The default value of the error_score parameter in model_selection.GridSearchCV and model_selection.RandomizedSearchCV will change to np.NaN in version 0.22. #10677 by Kirill Zhdanovich.
API Change Changed the ValueError exception raised in model_selection.ParameterSampler to a UserWarning for the case where the class is instantiated with a greater value of n_iter than the total space of parameters in the parameter grid. n_iter now acts as an upper bound on iterations. #10982 by Juliet Lawton.
API Change Invalid input for model_selection.ParameterGrid now raises TypeError. #10928 by Solutus Immensus.
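The return_estimator option described above, sketched on the iris dataset:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

X, y = load_iris(return_X_y=True)
cv_results = cross_validate(LogisticRegression(max_iter=1000), X, y,
                            cv=3, return_estimator=True)

# one fitted estimator per split, alongside the usual test scores
fitted = cv_results["estimator"]
```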
sklearn.multioutput#
Major Feature Added multioutput.RegressorChain for multi-target regression. #9257 by Kumar Ashutosh.
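A minimal RegressorChain sketch on synthetic multi-target data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.multioutput import RegressorChain

rng = np.random.RandomState(0)
X = rng.randn(100, 3)
# two related targets
Y = np.column_stack([X[:, 0] + X[:, 1], X[:, 0] - X[:, 2]])

# each model in the chain also receives the predictions for the targets
# earlier in `order` as extra features
chain = RegressorChain(LinearRegression(), order=[0, 1]).fit(X, Y)
```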
sklearn.naive_bayes#
Major Feature Added naive_bayes.ComplementNB, which implements the Complement Naive Bayes classifier described in Rennie et al. (2003). #8190 by Michael A. Alcorn.
Feature Add the var_smoothing parameter in naive_bayes.GaussianNB to give precise control over the variance calculation. #9681 by Dmitry Mottl.
Fix Fixed a bug in naive_bayes.GaussianNB which incorrectly raised an error for a prior list which summed to 1. #10005 by Gaurav Dhingra.
Fix Fixed a bug in naive_bayes.MultinomialNB which did not accept vector-valued pseudocounts (alpha). #10346 by Tobias Madsen.
sklearn.neighbors#
Efficiency neighbors.RadiusNeighborsRegressor and neighbors.RadiusNeighborsClassifier are now parallelized according to n_jobs regardless of algorithm. #10887 by Joël Billaud.
Efficiency sklearn.neighbors query methods are now more memory efficient when algorithm='brute'. #11136 by Joel Nothman and Aman Dalmia.
Feature Add the sample_weight parameter to the fit method of neighbors.KernelDensity to enable weighting in kernel density estimation. #4394 by Samuel O. Ronsin.
Feature Novelty detection with neighbors.LocalOutlierFactor: Add a novelty parameter to neighbors.LocalOutlierFactor. When novelty is set to True, neighbors.LocalOutlierFactor can then be used for novelty detection, i.e. predict on new unseen data. Available prediction methods are predict, decision_function and score_samples. By default, novelty is set to False, and only the fit_predict method is available. By Albert Thomas.
Fix Fixed a bug in neighbors.NearestNeighbors where fitting a NearestNeighbors model failed when a) the distance metric used is a callable and b) the input to the NearestNeighbors model is sparse. #9579 by Thomas Kober.
Fix Fixed a bug so that predict in neighbors.RadiusNeighborsRegressor can handle empty neighbor sets when using non-uniform weights. Also raises a new warning when no neighbors are found for samples. #9655 by Andreas Bjerre-Nielsen.
Fix Efficiency Fixed a bug in KDTree construction that results in faster construction and querying times. #11556 by Jake VanderPlas.
Fix Fixed a bug in neighbors.KDTree and neighbors.BallTree where pickled tree objects would change their type to the super class BinaryTree. #11774 by Nicolas Hug.
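The novelty-detection mode described above, sketched on synthetic data:

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.RandomState(0)
X_train = rng.randn(100, 2)

# novelty=True enables predict/decision_function/score_samples on new,
# unseen data; fit_predict is then unavailable
lof = LocalOutlierFactor(novelty=True).fit(X_train)
# a point inside the training cloud vs. one far outside it
pred = lof.predict([[0.0, 0.0], [8.0, 8.0]])
```

predict returns 1 for inliers and -1 for outliers.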
sklearn.neural_network#
Feature Add the n_iter_no_change parameter in neural_network.BaseMultilayerPerceptron, neural_network.MLPRegressor, and neural_network.MLPClassifier to give control over the maximum number of epochs without tol improvement. #9456 by Nicholas Nadeau.
Fix Fixed a bug in neural_network.BaseMultilayerPerceptron, neural_network.MLPRegressor, and neural_network.MLPClassifier where the stopping criterion was too eager: the new n_iter_no_change parameter now defaults to 10, replacing the previously hardcoded value of 2. #9456 by Nicholas Nadeau.
Fix Fixed a bug in neural_network.MLPRegressor where fitting quit unexpectedly early due to local minima or fluctuations. #9456 by Nicholas Nadeau.
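Setting the new n_iter_no_change parameter explicitly (the data and layer sizes below are illustrative):

```python
from sklearn.datasets import make_regression
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=200, n_features=5, random_state=0)

# stop only after 10 epochs (the new default) without tol improvement
mlp = MLPRegressor(hidden_layer_sizes=(20,), n_iter_no_change=10,
                   max_iter=500, random_state=0)
mlp.fit(X, y)
```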
sklearn.pipeline#
Feature The predict method of pipeline.Pipeline now passes keyword arguments on to the pipeline's last estimator, enabling the use of parameters such as return_std in a pipeline, with caution. #9304 by Breno Freitas.
API Change pipeline.FeatureUnion now supports 'drop' as a transformer to drop features. #11144 by Thomas Fan.
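The 'drop' placeholder for FeatureUnion, sketched on toy data:

```python
import numpy as np
from sklearn.pipeline import FeatureUnion
from sklearn.preprocessing import StandardScaler

X = np.arange(12, dtype=float).reshape(4, 3)

# 'drop' stands in for a transformer and removes that branch entirely,
# so only the scaled branch contributes columns to the output
union = FeatureUnion([("scaled", StandardScaler()), ("unused", "drop")])
Xt = union.fit_transform(X)
```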
sklearn.preprocessing#
Major Feature Expanded preprocessing.OneHotEncoder to allow encoding categorical string features as a numeric array using a one-hot (or dummy) encoding scheme, and added preprocessing.OrdinalEncoder to convert to ordinal integers. Those two classes now handle encoding of all feature types (including string-valued features) and derive the categories from the unique values in the features instead of the maximum value in the features. #9151 and #10521 by Vighnesh Birodkar and Joris van den Bossche.
Major Feature Added preprocessing.KBinsDiscretizer for turning continuous features into categorical or one-hot encoded features. #7668, #9647, #10195, #10192, #11272, #11467 and #11505 by Henry Lin, Hanmin Qin, Tom Dupre la Tour and Giovanni Giuseppe Costa.
Major Feature Added preprocessing.PowerTransformer, which implements the Yeo-Johnson and Box-Cox power transformations. Power transformations try to find a set of feature-wise parametric transformations to approximately map data to a Gaussian distribution centered at zero and with unit variance. This is useful as a variance-stabilizing transformation in situations where normality and homoscedasticity are desirable. #10210 by Eric Chang and Maniteja Nandana, and #11520 by Nicolas Hug.
Major Feature NaN values are ignored and handled in the following preprocessing methods: the preprocessing.MaxAbsScaler, preprocessing.MinMaxScaler, preprocessing.RobustScaler, preprocessing.StandardScaler, preprocessing.PowerTransformer and preprocessing.QuantileTransformer classes and the preprocessing.maxabs_scale, preprocessing.minmax_scale, preprocessing.robust_scale, preprocessing.scale, preprocessing.power_transform and preprocessing.quantile_transform functions, respectively addressed in issues #11011, #11005, #11308, #11206, #11306, and #10437. By Lucija Gregov and Guillaume Lemaitre.
Feature preprocessing.PolynomialFeatures now supports sparse input. #10452 by Aman Dalmia and Joel Nothman.
Feature preprocessing.RobustScaler and preprocessing.robust_scale can be fitted using sparse matrices. #11308 by Guillaume Lemaitre.
Feature preprocessing.OneHotEncoder now supports the get_feature_names method to obtain the transformed feature names. #10181 by Nirvan Anjirbag and Joris van den Bossche.
Feature A parameter check_inverse was added to preprocessing.FunctionTransformer to ensure that func and inverse_func are the inverse of each other. #9399 by Guillaume Lemaitre.
Feature The transform method of sklearn.preprocessing.MultiLabelBinarizer now ignores any unknown classes. A warning is raised stating the unknown classes found, which are ignored. #10913 by Rodrigo Agundez.
Fix Fixed bugs in preprocessing.LabelEncoder which would sometimes throw errors when transform or inverse_transform was called with empty arrays. #10458 by Mayur Kulkarni.
Fix Fix ValueError in preprocessing.LabelEncoder when using inverse_transform on unseen labels. #9816 by Charlie Newey.
Fix Fix bug in preprocessing.OneHotEncoder which discarded the dtype when returning a sparse matrix output. #11042 by Daniel Morales.
Fix Fix fit and partial_fit in preprocessing.StandardScaler in the rare case when with_mean=False and with_std=False, which was crashing when calling fit more than once and giving inconsistent results for mean_ depending on whether the input was a sparse or a dense matrix. mean_ will be set to None with both sparse and dense inputs. n_samples_seen_ will also be reported for both input types. #11235 by Guillaume Lemaitre.
API Change Deprecate the n_values and categorical_features parameters and the active_features_, feature_indices_ and n_values_ attributes of preprocessing.OneHotEncoder. The n_values parameter can be replaced with the new categories parameter, and the attributes with the new categories_ attribute. Selecting the categorical features with the categorical_features parameter is now better supported using compose.ColumnTransformer. #10521 by Joris van den Bossche.
API Change Deprecate preprocessing.Imputer and move the corresponding module to impute.SimpleImputer. #9726 by Kumar Ashutosh.
API Change The axis parameter that was in preprocessing.Imputer is no longer present in impute.SimpleImputer. The behavior is equivalent to axis=0 (impute along columns). Row-wise imputation can be performed with FunctionTransformer (e.g., FunctionTransformer(lambda X: SimpleImputer().fit_transform(X.T).T)). #10829 by Guillaume Lemaitre and Gilberto Olimpio.
API Change The NaN marker for missing values has been changed between preprocessing.Imputer and impute.SimpleImputer: missing_values='NaN' should now be missing_values=np.nan. #11211 by Jeremie du Boisberranger.
API Change In preprocessing.FunctionTransformer, the default of validate will change from True to False in 0.22. #10655 by Guillaume Lemaitre.
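Two of the new preprocessing tools sketched together (the toy values are illustrative):

```python
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer, PowerTransformer

X = np.array([[-3.0], [-1.0], [0.5], [2.0], [10.0]])

# bin one continuous feature into 3 equal-width ordinal buckets
disc = KBinsDiscretizer(n_bins=3, encode="ordinal", strategy="uniform")
Xb = disc.fit_transform(X)

# Yeo-Johnson handles zero/negative values; standardize=True (the
# default) rescales the result to zero mean and unit variance
rng = np.random.RandomState(0)
Xp = PowerTransformer(method="yeo-johnson").fit_transform(
    rng.lognormal(size=(100, 1)))
```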
sklearn.svm#
Fix Fixed a bug in svm.SVC where, when the argument kernel is unicode in Python 2, the predict_proba method was raising an unexpected TypeError given dense inputs. #10412 by Jiongyan Zhang.
API Change Deprecate the random_state parameter in svm.OneClassSVM as the underlying implementation is not random. #9497 by Albert Thomas.
API Change The default value of the gamma parameter of svm.SVC, NuSVC, SVR, NuSVR and OneClassSVM will change from 'auto' to 'scale' in version 0.22 to better account for unscaled features. #8361 by Gaurav Dhingra and Ting Neo.
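Passing gamma explicitly opts in to the future default (and silences the transition warning); a sketch on iris:

```python
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# gamma='scale' adapts gamma to the spread of the (possibly unscaled)
# features, rather than using a fixed 1 / n_features
clf = SVC(gamma="scale").fit(X, y)
```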
sklearn.tree#
Enhancement Although private (and hence not assured API stability), tree._criterion.ClassificationCriterion and tree._criterion.RegressionCriterion may now be cimported and extended. #10325 by Camil Staps.
Fix Fixed a bug in tree.BaseDecisionTree with splitter="best" where the split threshold could become infinite when values in X were near infinite. #10536 by Jonathan Ohayon.
Fix Fixed a bug in tree.MAE to ensure sample weights are used during the calculation of tree MAE impurity. Previous behaviour could cause suboptimal splits to be chosen since the impurity calculation considered all samples to be of equal weight. #11464 by John Stott.
sklearn.utils#
Feature utils.check_array and utils.check_X_y now have accept_large_sparse to control whether scipy.sparse matrices with 64-bit indices should be rejected. #11327 by Karan Dhingra and Joel Nothman.
Efficiency Fix Avoid copying the data in utils.check_array when the input data is a memmap (and copy=False). #10663 by Arthur Mensch and Loïc Estève.
API Change utils.check_array yields a FutureWarning indicating that arrays of bytes/strings will be interpreted as decimal numbers beginning in version 0.22. #10229 by Ryan Lee.
Multiple modules#
Feature API Change More consistent outlier detection API: Add a score_samples method in svm.OneClassSVM, ensemble.IsolationForest, neighbors.LocalOutlierFactor and covariance.EllipticEnvelope. It allows access to the raw score functions from the original papers. A new offset_ parameter allows linking the score_samples and decision_function methods. The contamination parameter of the ensemble.IsolationForest and neighbors.LocalOutlierFactor decision_function methods is used to define this offset_ such that outliers (resp. inliers) have negative (resp. positive) decision_function values. By default, contamination is kept unchanged at 0.1 for a deprecation period. In 0.22, it will be set to "auto", thus using method-specific score offsets. In the covariance.EllipticEnvelope decision_function method, the raw_values parameter is deprecated, as the shifted Mahalanobis distance will always be returned in 0.22. #9015 by Nicolas Goix.
Feature API Change A behaviour parameter has been introduced in ensemble.IsolationForest to ensure backward compatibility. In the old behaviour, the decision_function is independent of the contamination parameter; a threshold attribute depending on the contamination parameter is thus used. In the new behaviour, the decision_function is dependent on the contamination parameter, in such a way that 0 becomes its natural threshold to detect outliers. Setting behaviour to "old" is deprecated and will not be possible in version 0.22. Besides, the behaviour parameter will be removed in 0.24. #11553 by Nicolas Goix.
API Change Added a convergence warning to svm.LinearSVC and linear_model.LogisticRegression when verbose is set to 0. #10881 by Alexandre Sevin.
API Change Changed the warning type from UserWarning to exceptions.ConvergenceWarning for failing convergence in linear_model.logistic_regression_path, linear_model.RANSACRegressor, linear_model.ridge_regression, gaussian_process.GaussianProcessRegressor, gaussian_process.GaussianProcessClassifier, decomposition.fastica, cross_decomposition.PLSCanonical, cluster.AffinityPropagation, and cluster.Birch. #10306 by Jonathan Siebert.
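The relationship between the new score_samples and decision_function methods can be checked directly; a sketch with ensemble.IsolationForest on synthetic data:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(0)
X = rng.randn(200, 2)

iso = IsolationForest(random_state=0).fit(X)
# raw anomaly scores (higher means more normal) ...
scores = iso.score_samples(X)
# ... shifted by offset_ so that 0 separates outliers from inliers
dec = iso.decision_function(X)
```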
Miscellaneous#
Major Feature A new configuration parameter, working_memory, was added to control memory consumption limits in chunked operations, such as the new metrics.pairwise_distances_chunked. See Limiting Working Memory. #10280 by Joel Nothman and Aman Dalmia.
Feature The version of joblib bundled with scikit-learn is now 0.12. This uses a new default multiprocessing implementation, named loky. While this may incur some memory and communication overhead, it should provide greater cross-platform stability than relying on the Python standard library multiprocessing module. #11741 by the Joblib developers, especially Thomas Moreau and Olivier Grisel.
Feature An environment variable to use the site joblib instead of the vendored one was added (Environment variables). The main API of joblib is now exposed in sklearn.utils. #11166 by Gael Varoquaux.
Feature Add almost complete PyPy 3 support. Known unsupported functionalities are datasets.load_svmlight_file, feature_extraction.FeatureHasher and feature_extraction.text.HashingVectorizer. For running on PyPy, PyPy3-v5.10+, NumPy 1.14.0+, and SciPy 1.1.0+ are required. #11010 by Ronan Lamy and Roman Yurchak.
Feature A utility method sklearn.show_versions was added to print out information relevant for debugging. It includes the user system, the Python executable, the versions of the main libraries and BLAS binding information. #11596 by Alexandre Boucaud.
Fix Fixed a bug when setting parameters on a meta-estimator, involving both a wrapped estimator and its parameter. #9999 by Marcus Voss and Joel Nothman.
Fix Fixed a bug where calling sklearn.base.clone was not thread safe and could result in a "pop from empty list" error. #9569 by Andreas Müller.
API Change The default value of n_jobs is changed from 1 to None in all related functions and classes. n_jobs=None means unset. It will generally be interpreted as n_jobs=1, unless the current joblib.Parallel backend context specifies otherwise (see Glossary for additional information). Note that this change happens immediately (i.e., without a deprecation cycle). #11741 by Olivier Grisel.
Fix Fixed a bug in validation helpers where passing a Dask DataFrame resulted in an error. #12462 by Zachariah Miller.
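The working_memory limit applies to chunked operations such as metrics.pairwise_distances_chunked; a sketch (the 1 MiB cap is deliberately small to force several chunks):

```python
import numpy as np
import sklearn
from sklearn.metrics import pairwise_distances, pairwise_distances_chunked

X = np.random.RandomState(0).randn(2000, 5)

# cap temporary memory for the chunked computation at roughly 1 MiB, so
# the 2000 x 2000 distance matrix is produced in several row blocks
with sklearn.config_context(working_memory=1):
    chunks = list(pairwise_distances_chunked(X))

D = np.vstack(chunks)  # concatenating the blocks gives the full matrix
```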
Changes to estimator checks#
These changes mostly affect library developers.
Checks for transformers now apply if the estimator implements transform, regardless of whether it inherits from sklearn.base.TransformerMixin. #10474 by Joel Nothman.
Classifiers are now checked for consistency between decision_function and categorical predictions. #10500 by Narine Kokhlikyan.
Allow tests in utils.estimator_checks.check_estimator to test functions that accept pairwise data. #9701 by Kyle Johnson.
Allow utils.estimator_checks.check_estimator to check that there are no private settings apart from parameters during estimator initialization. #9378 by Herilalaina Rakotoarison.
The set of checks in utils.estimator_checks.check_estimator now includes a check_set_params test which checks that set_params is equivalent to passing parameters in __init__ and warns if it encounters parameter validation. #7738 by Alvin Chiang.
Add invariance tests for clustering metrics. #8102 by Ankita Sinha and Guillaume Lemaitre.
Add check_methods_subset_invariance to check_estimator, which checks that estimator methods are invariant if applied to a data subset. #10428 by Jonathan Ohayon.
Add tests in utils.estimator_checks.check_estimator to check that an estimator can handle read-only memmap input data. #10663 by Arthur Mensch and Loïc Estève.
check_sample_weights_pandas_series now uses 8 rather than 6 samples to accommodate the default number of clusters in cluster.KMeans. #10933 by Johannes Hansen.
Estimators are now checked for whether sample_weight=None equates to sample_weight=np.ones(...). #11558 by Sergul Aydore.
Code and Documentation Contributors#
Thanks to everyone who has contributed to the maintenance and improvement of the project since version 0.19, including:
211217613, Aarshay Jain, absolutelyNoWarranty, Adam Greenhall, Adam Kleczewski, Adam Richie-Halford, adelr, AdityaDaflapurkar, Adrin Jalali, Aidan Fitzgerald, aishgrt1, Akash Shivram, Alan Liddell, Alan Yee, Albert Thomas, Alexander Lenail, Alexander-N, Alexandre Boucaud, Alexandre Gramfort, Alexandre Sevin, Alex Egg, Alvaro Perez-Diaz, Amanda, Aman Dalmia, Andreas Bjerre-Nielsen, Andreas Mueller, Andrew Peng, Angus Williams, Aniruddha Dave, annaayzenshtat, Anthony Gitter, Antonio Quinonez, Anubhav Marwaha, Arik Pamnani, Arthur Ozga, Artiem K, Arunava, Arya McCarthy, Attractadore, Aurélien Bellet, Aurélien Geron, Ayush Gupta, Balakumaran Manoharan, Bangda Sun, Barry Hart, Bastian venthur, Ben Lawson, Benn Roth, Breno Freitas, Brent Yi, brett koonce, Caio Oliveira, Camil Staps, cclauss, Chady Kamar, Charlie Brummitt, Charlie Newey, chris, Chris, Chris Catalfo, Chris Foster, Chris Holdgraf, Christian Braune, Christian Hirsch, Christian Hogan, Christopher Jenness, Clement Joudet, cnx, cwitte, Dallas Card, Dan Barkhorn, Daniel, Daniel Ferreira, Daniel Gomez, Daniel Klevebring, Danielle Shwed, Daniel Mohns, Danil Baibak, Darius Morawiec, David Beach, David Burns, David Kirkby, David Nicholson, David Pickup, Derek, Didi Bar-Zev, diegodlh, Dillon Gardner, Dillon Niederhut, dilutedsauce, dlovell, Dmitry Mottl, Dmitry Petrov, Dor Cohen, Douglas Duhaime, Ekaterina Tuzova, Eric Chang, Eric Dean Sanchez, Erich Schubert, Eunji, Fang-Chieh Chou, FarahSaeed, felix, Félix Raimundo, fenx, filipj8, FrankHui, Franz Wompner, Freija Descamps, frsi, Gabriele Calvo, Gael varoquaux, Gaurav Dhingra, Georgi Peev, Gil Forsyth, Giovanni Giuseppe Costa, gkevinyen5418, goncalo-rodrigues, Gryllos Prokopis, Guillaume Lemaitre, Guillaume “vermeille” Sanchez, Gustavo De Mari Pereira, hakaa1, Hanmin Qin, Henry Lin, Hong, Honghe, Hossein Pourbozorg, Hristo, Hunan Rostomyan, iampat, Ivan PANICO, Jaewon Chung, Jake vanderPlas, jakirkham, James Bourbeau, James Malcolm, Jamie Cox, Jan Koch, Jan Margeta, 
Jan Schlüter, janvanrijn, Jason Wolosonovich, JC Liu, Jeb Bearer, jeremiedbb, Jimmy Wan, Jinkun Wang, Jiongyan Zhang, jjabl, jkleint, Joan Massich, Joël Billaud, Joel Nothman, Johannes Hansen, JohnStott, Jonatan Samoocha, Jonathan Ohayon, Jörg Döpfert, Joris van den Bossche, Jose Perez-Parras Toledano, josephsalmon, jotasi, jschendel, Julian Kuhlmann, Julien Chaumond, julietcl, Justin Shenk, Karl F, Kasper Primdal Lauritzen, Katrin Leinweber, Kirill, ksemb, Kuai Yu, Kumar Ashutosh, Kyeongpil Kang, Kye Taylor, kyledrogo, Leland McInnes, Léo DS, Liam Geron, Liutong Zhou, Lizao Li, lkjcalc, Loic Esteve, louib, Luciano viola, Lucija Gregov, Luis Osa, Luis Pedro Coelho, Luke M Craig, Luke Persola, Mabel, Mabel villalba, Maniteja Nandana, MarkIwanchyshyn, Mark Roth, Markus Müller, MarsGuy, Martin Gubri, martin-hahn, martin-kokos, mathurinm, Matthias Feurer, Max Copeland, Mayur Kulkarni, Meghann Agarwal, Melanie Goetz, Michael A. Alcorn, Minghui Liu, Ming Li, Minh Le, Mohamed Ali Jamaoui, Mohamed Maskani, Mohammad Shahebaz, Muayyad Alsadi, Nabarun Pal, Nagarjuna Kumar, Naoya Kanai, Narendran Santhanam, NarineK, Nathaniel Saul, Nathan Suh, Nicholas Nadeau, P.Eng., AvS, Nick Hoh, Nicolas Goix, Nicolas Hug, Nicolau Werneck, nielsenmarkus11, Nihar Sheth, Nikita Titov, Nilesh Kevlani, Nirvan Anjirbag, notmatthancock, nzw, Oleksandr Pavlyk, oliblum90, Oliver Rausch, Olivier Grisel, Oren Milman, Osaid Rehman Nasir, pasbi, Patrick Fernandes, Patrick Olden, Paul Paczuski, Pedro Morales, Peter, Peter St. John, pierreablin, pietruh, Pinaki Nath Chowdhury, Piotr Szymański, Pradeep Reddy Raamana, Pravar D Mahajan, pravarmahajan, QingYing Chen, Raghav Rv, Rajendra arora, RAKOTOARISON Herilalaina, Rameshwar Bhaskaran, RankyLau, Rasul Kerimov, Reiichiro Nakano, Rob, Roman Kosobrodov, Roman Yurchak, Ronan Lamy, rragundez, Rüdiger Busche, Ryan, Sachin Kelkar, Sagnik Bhattacharya, Sailesh Choyal, Sam Radhakrishnan, Sam Steingold, Samuel Bell, Samuel O. 
Ronsin, Saqib Nizam Shamsi, SATISH J, Saurabh Gupta, Scott Gigante, Sebastian Flennerhag, Sebastian Raschka, Sebastien Dubois, Sébastien Lerique, Sebastin Santy, Sergey Feldman, Sergey Melderis, Sergul Aydore, Shahebaz, Shalil Awaley, Shangwu Yao, Sharad vijalapuram, Sharan Yalburgi, shenhanc78, Shivam Rastogi, Shu Haoran, siftikha, Sinclert Pérez, SolutusImmensus, Somya Anand, srajan paliwal, Sriharsha Hatwar, Sri Krishna, Stefan van der Walt, Stephen McDowell, Steven Brown, syonekura, Taehoon Lee, Takanori Hayashi, tarcusx, Taylor G Smith, theriley106, Thomas, Thomas Fan, Thomas Heavey, Tobias Madsen, tobycheese, Tom Augspurger, Tom Dupré la Tour, Tommy, Trevor Stephens, Trishnendu Ghorai, Tulio Casagrande, twosigmajab, Umar Farouk Umar, Urvang Patel, Utkarsh Upadhyay, vadim Markovtsev, varun Agrawal, vathsala Achar, vilhelm von Ehrenheim, vinayak Mehta, vinit, vinod Kumar L, viraj Mavani, viraj Navkal, vivek Kumar, vlad Niculae, vqean3, vrishank Bhardwaj, vufg, wallygauze, Warut vijitbenjaronk, wdevazelhes, Wenhao Zhang, Wes Barnett, Will, William de vazelhes, Will Rosenfeld, Xin Xiong, Yiming (Paul) Li, ymazari, Yufeng, Zach Griffith, Zé vinícius, Zhenqing Hu, Zhiqing Xiao, Zijie (ZJ) Poh