Version 1.0#
For a short description of the main highlights of the release, please refer to Release Highlights for scikit-learn 1.0.
Legend for changelogs
Major Feature something big that you couldn’t do before.
Feature something that you couldn’t do before.
Efficiency an existing feature now may not require as much computation or memory.
Enhancement a miscellaneous minor improvement.
Fix something that previously didn’t work as documented – or according to reasonable expectations – should now work.
API Change you will need to change your code to have the same effect in the future; or a feature will be removed in the future.
Version 1.0.2#
December 2021
Fixed models#
Fix
cluster.Birch, feature_selection.RFECV, ensemble.RandomForestRegressor, ensemble.RandomForestClassifier, ensemble.GradientBoostingRegressor, and ensemble.GradientBoostingClassifier no longer raise a warning when fitted on a pandas DataFrame. #21578 by Thomas Fan.
Changelog#
sklearn.cluster#
Fix Fixed an infinite loop in cluster.SpectralClustering by moving an iteration counter from try to except. #21271 by Tyler Martin.
sklearn.datasets#
Fix datasets.fetch_openml is now thread-safe. Data is first downloaded to a temporary subfolder and then renamed. #21833 by Siavash Rezazadeh.
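The download-then-rename approach described above is the standard atomic-write pattern; a minimal stdlib sketch (the helper name and paths are illustrative, not scikit-learn's code):

```python
import os
import tempfile

def atomic_write(path: str, data: bytes) -> None:
    """Write data to path atomically: write to a temporary file in the
    same directory, then rename it over the destination. Concurrent
    readers never observe a partially written file."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        os.replace(tmp_path, path)  # atomic on both POSIX and Windows
    except BaseException:
        os.remove(tmp_path)
        raise
```

Because os.replace is atomic, a concurrent reader sees either the old complete file or the new complete file, never a partial download.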
sklearn.decomposition#
Fix Fixed the constraint on the objective function of decomposition.DictionaryLearning, decomposition.MiniBatchDictionaryLearning, decomposition.SparsePCA and decomposition.MiniBatchSparsePCA to be convex and match the referenced article. #19210 by Jérémie du Boisberranger.
sklearn.ensemble#
Fix ensemble.RandomForestClassifier, ensemble.RandomForestRegressor, ensemble.ExtraTreesClassifier, ensemble.ExtraTreesRegressor, and ensemble.RandomTreesEmbedding now raise a ValueError when bootstrap=False and max_samples is not None. #21295 by Haoyin Xu.
Fix Solved a bug in ensemble.GradientBoostingClassifier where the exponential loss computed the positive gradient instead of the negative one. #22050 by Guillaume Lemaitre.
sklearn.feature_selection#
Fix Fixed feature_selection.SelectFromModel by improving support for base estimators that do not set feature_names_in_. #21991 by Thomas Fan.
sklearn.linear_model#
Fix Fixed a bug in linear_model.RidgeClassifierCV where the predict method performed an argmax on the scores obtained from decision_function instead of returning the multilabel indicator matrix. #19869 by Guillaume Lemaitre.
Fix linear_model.LassoLarsIC now correctly computes AIC and BIC. An error is now raised when n_features > n_samples and the noise variance is not provided. #21481 by Guillaume Lemaitre and Andrés Babino.
sklearn.manifold#
Fix Fixed an unnecessary error when fitting manifold.Isomap with a precomputed dense distance matrix where the neighbors graph has multiple disconnected components. #21915 by Tom Dupre la Tour.
sklearn.metrics#
Fix All sklearn.metrics.DistanceMetric subclasses now correctly support read-only buffer attributes. This fixes a regression introduced in 1.0.0 with respect to 0.24.2. #21694 by Julien Jerphanion.
Fix sklearn.metrics.MinkowskiDistance now accepts a weight parameter that makes it possible to write code that behaves consistently with both scipy 1.8 and earlier versions. In turn, this means that all neighbors-based estimators (except those that use algorithm="kd_tree") now accept a weight parameter with metric="minkowski", yielding results that are always consistent with scipy.spatial.distance.cdist. #21741 by Olivier Grisel.
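For reference, the weight parameter follows the weighted Minkowski convention of recent scipy, (sum_i w_i * |u_i - v_i|**p)**(1/p); a pure-Python sketch of that formula (illustrative, not scikit-learn's implementation):

```python
def weighted_minkowski(u, v, p=2.0, w=None):
    """Weighted Minkowski distance: (sum_i w_i * |u_i - v_i| ** p) ** (1 / p).

    With w = None this reduces to the ordinary Minkowski distance; with
    p = 2 and unit weights it is the Euclidean distance."""
    if w is None:
        w = [1.0] * len(u)
    return sum(wi * abs(ui - vi) ** p for wi, ui, vi in zip(w, u, v)) ** (1.0 / p)
```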
sklearn.multiclass#
Fix multiclass.OneVsRestClassifier.predict_proba no longer errors when fitted on constant integer targets. #21871 by Thomas Fan.
sklearn.neighbors#
Fix neighbors.KDTree and neighbors.BallTree correctly support read-only buffer attributes. #21845 by Thomas Fan.
sklearn.preprocessing#
Fix Fixed a compatibility bug with NumPy 1.22 in preprocessing.OneHotEncoder. #21517 by Thomas Fan.
sklearn.tree#
Fix Prevents tree.plot_tree from drawing outside the boundary of the figure. #21917 by Thomas Fan.
Fix Support loading pickles of decision tree models when the pickle has been generated on a platform with a different bitness. A typical example is to train and pickle the model on a 64-bit machine and load it on a 32-bit machine for prediction. #21552 by Loïc Estève.
sklearn.utils#
Fix utils.estimator_html_repr now escapes all the estimator descriptions in the generated HTML. #21493 by Aurélien Geron.
Version 1.0.1#
October 2021
Fixed models#
Fix Non-fit methods in the following classes do not raise a UserWarning when fitted on DataFrames with valid feature names:
covariance.EllipticEnvelope, ensemble.IsolationForest, ensemble.AdaBoostClassifier, neighbors.KNeighborsClassifier, neighbors.KNeighborsRegressor, neighbors.RadiusNeighborsClassifier, neighbors.RadiusNeighborsRegressor. #21199 by Thomas Fan.
sklearn.calibration#
Fix Fixed calibration.CalibratedClassifierCV to take into account sample_weight when computing the base estimator prediction when ensemble=False. #20638 by Julien Bohné.
Fix Fixed a bug in calibration.CalibratedClassifierCV with method="sigmoid" that was ignoring sample_weight when computing the Bayesian priors. #21179 by Guillaume Lemaitre.
sklearn.cluster#
Fix Fixed a bug in cluster.KMeans, ensuring reproducibility and equivalence between sparse and dense input. #21195 by Jérémie du Boisberranger.
sklearn.ensemble#
Fix Fixed a bug that could produce a segfault in rare cases for ensemble.HistGradientBoostingClassifier and ensemble.HistGradientBoostingRegressor. #21130 by Christian Lorentzen.
sklearn.gaussian_process#
Fix Compute y_std properly with multi-target in sklearn.gaussian_process.GaussianProcessRegressor, allowing proper normalization in the multi-target setting. #20761 by Patrick de C. T. R. Ferreira.
sklearn.feature_extraction#
Efficiency Fixed an efficiency regression introduced in version 1.0.0 in the transform method of feature_extraction.text.CountVectorizer, which no longer checks for uppercase characters in the provided vocabulary. #21251 by Jérémie du Boisberranger.
Fix Fixed a bug in feature_extraction.text.CountVectorizer and feature_extraction.text.TfidfVectorizer by raising an error when min_df or max_df are floating-point numbers greater than 1. #20752 by Alek Lefebvre.
sklearn.linear_model#
Fix Improves stability of linear_model.LassoLars for different versions of openblas. #21340 by Thomas Fan.
Fix linear_model.LogisticRegression now raises a better error message when the solver does not support sparse matrices with int64 indices. #21093 by Tom Dupre la Tour.
sklearn.neighbors#
Fix neighbors.KNeighborsClassifier, neighbors.KNeighborsRegressor, neighbors.RadiusNeighborsClassifier, and neighbors.RadiusNeighborsRegressor with metric="precomputed" now raise an error for bsr and dok sparse matrices in the methods fit, kneighbors and radius_neighbors, due to the handling of explicit zeros in the bsr and dok sparse graph formats. #21199 by Thomas Fan.
sklearn.pipeline#
Fix pipeline.Pipeline.get_feature_names_out correctly passes feature names out from one step of a pipeline to the next. #21351 by Thomas Fan.
sklearn.svm#
Fix svm.SVC and svm.SVR now check for an inconsistency in their internal representation and raise an error instead of segfaulting. This fix also resolves CVE-2020-28975. #21336 by Thomas Fan.
sklearn.utils#
Enhancement utils.validation._check_sample_weight can perform a non-negativity check on the sample weights. It can be turned on using the only_non_negative bool parameter. Estimators that check for non-negative weights are updated: linear_model.LinearRegression (here the previous error message was misleading), ensemble.AdaBoostClassifier, ensemble.AdaBoostRegressor, neighbors.KernelDensity. #20880 by Guillaume Lemaitre and András Simon.
Fix Solved a bug in sklearn.utils.metaestimators.if_delegate_has_method where the underlying check for an attribute did not work with NumPy arrays. #21145 by Zahlii.
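A hedged sketch of what such a validation helper does (the name check_sample_weight and its signature are illustrative, not the private scikit-learn API):

```python
def check_sample_weight(sample_weight, n_samples, only_non_negative=False):
    """Validate a sample_weight sequence against n_samples.

    A None weight expands to uniform weights; otherwise the length must
    match, and with only_non_negative=True every weight must be >= 0.
    """
    if sample_weight is None:
        return [1.0] * n_samples
    weights = [float(w) for w in sample_weight]
    if len(weights) != n_samples:
        raise ValueError(
            f"sample_weight has {len(weights)} entries, expected {n_samples}"
        )
    if only_non_negative and any(w < 0 for w in weights):
        raise ValueError("Negative values in sample_weight are not supported")
    return weights
```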
Miscellaneous#
Fix Fitting an estimator on a dataset that has no feature names, when the estimator was previously fitted on a dataset with feature names, no longer keeps the old feature names stored in the feature_names_in_ attribute. #21389 by Jérémie du Boisberranger.
Version 1.0.0#
September 2021
Minimal dependencies#
Version 1.0.0 of scikit-learn requires Python 3.7+, NumPy 1.14.6+ and SciPy 1.1.0+. The optional minimal dependency is Matplotlib 2.2.2+.
Enforcing keyword-only arguments#
In an effort to promote clear and non-ambiguous use of the library, most
constructor and function parameters must now be passed as keyword arguments
(i.e. using the param=value syntax) instead of positional. If a keyword-only
parameter is used as positional, a TypeError is now raised.
#15005 #20002 by Joel Nothman, Adrin Jalali, Thomas Fan,
Nicolas Hug, and Tom Dupre la Tour. See SLEP009
for more details.
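The enforcement relies on Python's keyword-only parameter syntax (a bare * in the signature); a self-contained sketch with a made-up estimator:

```python
class Estimator:
    # Parameters after the bare * must be passed by keyword; passing
    # them positionally raises a TypeError at call time.
    def __init__(self, *, alpha=1.0, max_iter=100):
        self.alpha = alpha
        self.max_iter = max_iter

Estimator(alpha=0.5)      # OK: param=value syntax
try:
    Estimator(0.5)        # positional use of a keyword-only parameter
except TypeError as exc:
    print("TypeError:", exc)
```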
Changed models#
The following estimators and functions, when fit with the same data and parameters, may produce different models from the previous version. This often occurs due to changes in the modelling logic (bug fixes or enhancements), or in random sampling procedures.
Fix manifold.TSNE now avoids numerical underflow issues during affinity matrix computation.
Fix manifold.Isomap now connects disconnected components of the neighbors graph along some minimum distance pairs, instead of changing every infinite distance to zero.
Fix The splitting criterion of tree.DecisionTreeClassifier and tree.DecisionTreeRegressor can be impacted by a fix in the handling of rounding errors. Previously some extra spurious splits could occur.
Fix model_selection.train_test_split with a stratify parameter and model_selection.StratifiedShuffleSplit may lead to slightly different results.
Details are listed in the changelog below.
(While we are trying to better inform users by providing this information, we cannot assure that this list is complete.)
Changelog#
API Change The option for using the squared error via the loss and criterion parameters was made more consistent. The preferred way is by setting the value to "squared_error". Old option names are still valid and produce the same models, but are deprecated and will be removed in version 1.2. #19310 by Christian Lorentzen.
For ensemble.ExtraTreesRegressor, criterion="mse" is deprecated; use "squared_error" instead, which is now the default.
For ensemble.GradientBoostingRegressor, loss="ls" is deprecated; use "squared_error" instead, which is now the default.
For ensemble.RandomForestRegressor, criterion="mse" is deprecated; use "squared_error" instead, which is now the default.
For ensemble.HistGradientBoostingRegressor, loss="least_squares" is deprecated; use "squared_error" instead, which is now the default.
For linear_model.RANSACRegressor, loss="squared_loss" is deprecated; use "squared_error" instead.
For linear_model.SGDRegressor, loss="squared_loss" is deprecated; use "squared_error" instead, which is now the default.
For tree.DecisionTreeRegressor, criterion="mse" is deprecated; use "squared_error" instead, which is now the default.
For tree.ExtraTreeRegressor, criterion="mse" is deprecated; use "squared_error" instead, which is now the default.
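This kind of rename is typically handled by a small alias table that warns and translates; a hedged sketch of the general pattern (not scikit-learn's internal helper):

```python
import warnings

# Map of deprecated criterion/loss spellings to their new canonical name
# (values taken from the list above).
_DEPRECATED_NAMES = {
    "mse": "squared_error",
    "ls": "squared_error",
    "least_squares": "squared_error",
    "squared_loss": "squared_error",
}

def resolve_loss_name(name: str) -> str:
    """Return the canonical loss name, warning if a deprecated alias is used."""
    if name in _DEPRECATED_NAMES:
        new = _DEPRECATED_NAMES[name]
        warnings.warn(
            f"The option {name!r} was deprecated in favor of {new!r} and "
            "will be removed in version 1.2.",
            FutureWarning,
        )
        return new
    return name
```

The deprecated spellings keep producing the same model because they are translated to the new name before any fitting happens.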
API Change The option for using the absolute error via the loss and criterion parameters was made more consistent. The preferred way is by setting the value to "absolute_error". Old option names are still valid and produce the same models, but are deprecated and will be removed in version 1.2. #19733 by Christian Lorentzen.
For ensemble.ExtraTreesRegressor, criterion="mae" is deprecated; use "absolute_error" instead.
For ensemble.GradientBoostingRegressor, loss="lad" is deprecated; use "absolute_error" instead.
For ensemble.RandomForestRegressor, criterion="mae" is deprecated; use "absolute_error" instead.
For ensemble.HistGradientBoostingRegressor, loss="least_absolute_deviation" is deprecated; use "absolute_error" instead.
For linear_model.RANSACRegressor, loss="absolute_loss" is deprecated; use "absolute_error" instead, which is now the default.
For tree.DecisionTreeRegressor, criterion="mae" is deprecated; use "absolute_error" instead.
For tree.ExtraTreeRegressor, criterion="mae" is deprecated; use "absolute_error" instead.
API Change np.matrix usage is deprecated in 1.0 and will raise a TypeError in 1.2. #20165 by Thomas Fan.
API Change get_feature_names_out has been added to the transformer API to get the names of the output features. get_feature_names has in turn been deprecated. #18444 by Thomas Fan.
API Change All estimators store feature_names_in_ when fitted on pandas DataFrames. These feature names are compared to names seen in non-fit methods, e.g. transform, and will raise a FutureWarning if they are not consistent; see also Release Highlights for scikit-learn 1.0. These FutureWarnings will become ValueErrors in 1.2. #18010 by Thomas Fan.
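A toy sketch of the feature-name bookkeeping described above (ToyEstimator and FrameLike are made up for illustration; scikit-learn's checks are more thorough):

```python
import warnings

class ToyEstimator:
    """Records column names at fit time and checks them at transform time."""

    def fit(self, X):
        # DataFrame-like inputs expose their column labels via .columns.
        if hasattr(X, "columns"):
            self.feature_names_in_ = list(X.columns)
        return self

    def transform(self, X):
        seen = list(X.columns) if hasattr(X, "columns") else None
        if getattr(self, "feature_names_in_", None) is not None \
                and seen != self.feature_names_in_:
            warnings.warn(
                "The feature names should match those passed during fit.",
                FutureWarning,
            )
        return X

class FrameLike:
    # Stand-in for a pandas DataFrame: just carries column labels.
    def __init__(self, columns):
        self.columns = columns
```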
sklearn.base#
Fix config_context is now threadsafe. #18736 by Thomas Fan.
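Making a configuration context thread-safe usually means storing the active configuration per thread; a hedged stdlib sketch of that pattern (not scikit-learn's actual implementation):

```python
import threading
from contextlib import contextmanager

_local = threading.local()

def get_config():
    """Return this thread's configuration, creating the default lazily."""
    if not hasattr(_local, "config"):
        _local.config = {"assume_finite": False}
    return _local.config

@contextmanager
def config_context(**overrides):
    """Temporarily override configuration for the current thread only."""
    old = dict(get_config())
    get_config().update(overrides)
    try:
        yield
    finally:
        _local.config = old
```

Because each thread reads and writes its own copy, a context entered in one thread cannot leak settings into another.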
sklearn.calibration#
Feature calibration.CalibrationDisplay added to plot calibration curves. #17443 by Lucy Liu.
Fix The predict and predict_proba methods of calibration.CalibratedClassifierCV can now properly be used on prefitted pipelines. #19641 by Alek Lefebvre.
Fix Fixed an error when using an ensemble.VotingClassifier as base_estimator in calibration.CalibratedClassifierCV. #20087 by Clément Fauchereau.
sklearn.cluster#
Efficiency The "k-means++" initialization of cluster.KMeans and cluster.MiniBatchKMeans is now faster, especially in multicore settings. #19002 by Jon Crall and Jérémie du Boisberranger.
Efficiency cluster.KMeans with algorithm='elkan' is now faster in multicore settings. #19052 by Yusuke Nagasaka.
Efficiency cluster.MiniBatchKMeans is now faster in multicore settings. #17622 by Jérémie du Boisberranger.
Efficiency cluster.OPTICS can now cache the output of the computation of the tree, using the memory parameter. #19024 by Frankie Robertson.
Enhancement The predict and fit_predict methods of cluster.AffinityPropagation now accept sparse data type for input data. #20117 by Venkatachalam Natchiappan.
Fix Fixed a bug in cluster.MiniBatchKMeans where the sample weights were partially ignored when the input is sparse. #17622 by Jérémie du Boisberranger.
Fix Improved convergence detection based on center change in cluster.MiniBatchKMeans, which was almost never achievable. #17622 by Jérémie du Boisberranger.
Fix cluster.AgglomerativeClustering now supports readonly memory-mapped datasets. #19883 by Julien Jerphanion.
Fix cluster.AgglomerativeClustering correctly connects components when connectivity and affinity are both precomputed and the number of connected components is greater than 1. #20597 by Thomas Fan.
Fix cluster.FeatureAgglomeration no longer accepts a **params kwarg in the fit function, resulting in a more concise error message. #20899 by Adam Li.
Fix Fixed a bug in cluster.KMeans, ensuring reproducibility and equivalence between sparse and dense input. #20200 by Jérémie du Boisberranger.
API Change The cluster.Birch attributes fit_ and partial_fit_ are deprecated and will be removed in 1.2. #19297 by Thomas Fan.
API Change The default value for the batch_size parameter of cluster.MiniBatchKMeans was changed from 100 to 1024 for efficiency reasons. The n_iter_ attribute of cluster.MiniBatchKMeans now reports the number of started epochs and the n_steps_ attribute reports the number of mini batches processed. #17622 by Jérémie du Boisberranger.
API Change cluster.spectral_clustering raises an improved error when passed a np.matrix. #20560 by Thomas Fan.
sklearn.compose#
Enhancement compose.ColumnTransformer now records the output of each transformer in output_indices_. #18393 by Luca Bittarello.
Enhancement compose.ColumnTransformer now allows DataFrame input to have its columns appear in a changed order in transform. Further, columns that are dropped will not be required in transform, and additional columns will be ignored if remainder='drop'. #19263 by Thomas Fan.
Enhancement Adds a **predict_params keyword argument to compose.TransformedTargetRegressor.predict that passes keyword arguments to the regressor. #19244 by Ricardo.
Fix compose.ColumnTransformer.get_feature_names supports non-string feature names returned by any of its transformers. However, note that get_feature_names is deprecated; use get_feature_names_out instead. #18459 by Albert Villanova del Moral and Alonso Silva Allende.
Fix compose.TransformedTargetRegressor now takes nD targets with an adequate transformer. #18898 by Oras Phongpanagnam.
API Change Adds verbose_feature_names_out to compose.ColumnTransformer. This flag controls the prefixing of feature names out in get_feature_names_out. #18444 and #21080 by Thomas Fan.
sklearn.covariance#
Fix Adds arrays check to covariance.ledoit_wolf and covariance.ledoit_wolf_shrinkage. #20416 by Hugo Defois.
API Change Deprecates the following keys in cv_results_: 'mean_score', 'std_score', and 'split(k)_score' in favor of 'mean_test_score', 'std_test_score', and 'split(k)_test_score'. #20583 by Thomas Fan.
sklearn.datasets#
Enhancement datasets.fetch_openml now supports categories with missing values when returning a pandas dataframe. #19365 by Thomas Fan, Amanda Dsouza and EL-ATEIF Sara.
Enhancement datasets.fetch_kddcup99 raises a better message when the cached file is invalid. #19669 by Thomas Fan.
Enhancement Replace usages of __file__ related to resource file I/O with importlib.resources to avoid the assumption that these resource files (e.g. iris.csv) already exist on a filesystem, and by extension to enable compatibility with tools such as PyOxidizer. #20297 by Jack Liu.
Fix Shorten data file names in the openml tests to better support installing on Windows and its default 260-character limit on file names. #20209 by Thomas Fan.
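The importlib.resources approach mentioned above reads package data without assuming a real filesystem path; a small stdlib illustration using Python's own json package as the resource container:

```python
from importlib.resources import files

# files() returns a Traversable for a package; joining and reading works
# whether the package lives on disk, in a zip archive, or is frozen by a
# tool such as PyOxidizer, unlike paths derived from __file__.
source = (files("json") / "__init__.py").read_text(encoding="utf-8")
print(source.splitlines()[0])
```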
Fix datasets.fetch_kddcup99 returns dataframes when return_X_y=True and as_frame=True. #19011 by Thomas Fan.
API Change Deprecates datasets.load_boston in 1.0; it will be removed in 1.2. Alternative code snippets to load similar datasets are provided. Please refer to the docstring of the function for details. #20729 by Guillaume Lemaitre.
sklearn.decomposition#
Enhancement Added a new approximate solver (randomized SVD, available with eigen_solver='randomized') to decomposition.KernelPCA. This significantly accelerates computation when the number of samples is much larger than the desired number of components. #12069 by Sylvain Marié.
Fix Fixes incorrect multiple data-conversion warnings when clustering boolean data. #19046 by Surya Prakash.
Fix Fixed decomposition.dict_learning, used by decomposition.DictionaryLearning, to ensure determinism of the output. Achieved by flipping signs of the SVD output which is used to initialize the code. #18433 by Bruno Charron.
Fix Fixed a bug in decomposition.MiniBatchDictionaryLearning, decomposition.MiniBatchSparsePCA and decomposition.dict_learning_online where the update of the dictionary was incorrect. #19198 by Jérémie du Boisberranger.
Fix Fixed a bug in decomposition.DictionaryLearning, decomposition.SparsePCA, decomposition.MiniBatchDictionaryLearning, decomposition.MiniBatchSparsePCA, decomposition.dict_learning and decomposition.dict_learning_online where the restart of unused atoms during the dictionary update was not working as expected. #19198 by Jérémie du Boisberranger.
API Change In decomposition.DictionaryLearning, decomposition.MiniBatchDictionaryLearning, decomposition.dict_learning and decomposition.dict_learning_online, transform_alpha will be equal to alpha instead of 1.0 by default starting from version 1.2. #19159 by Benoît Malézieux.
API Change Renamed variables in decomposition.KernelPCA to improve readability. lambdas_ and alphas_ are renamed to eigenvalues_ and eigenvectors_, respectively. lambdas_ and alphas_ are deprecated and will be removed in 1.2. #19908 by Kei Ishikawa.
API Change The alpha and regularization parameters of decomposition.NMF and decomposition.non_negative_factorization are deprecated and will be removed in 1.2. Use the new parameters alpha_W and alpha_H instead. #20512 by Jérémie du Boisberranger.
sklearn.dummy#
API Change The attribute n_features_in_ in dummy.DummyClassifier and dummy.DummyRegressor is deprecated and will be removed in 1.2. #20960 by Thomas Fan.
sklearn.ensemble#
Enhancement HistGradientBoostingClassifier and HistGradientBoostingRegressor take cgroups quotas into account when deciding the number of threads used by OpenMP. This avoids performance problems caused by over-subscription when using those classes in a docker container, for instance. #20477 by Thomas Fan.
Enhancement HistGradientBoostingClassifier and HistGradientBoostingRegressor are no longer experimental. They are now considered stable and are subject to the same deprecation cycles as all other estimators. #19799 by Nicolas Hug.
Enhancement Improve the HTML rendering of ensemble.StackingClassifier and ensemble.StackingRegressor. #19564 by Thomas Fan.
Enhancement Added Poisson criterion to ensemble.RandomForestRegressor. #19836 by Brian Sun.
Fix Disallow computing the out-of-bag (OOB) score in ensemble.RandomForestClassifier and ensemble.ExtraTreesClassifier with a multiclass-multioutput target, since scikit-learn does not provide any metric supporting this type of target. Additional private refactoring was performed. #19162 by Guillaume Lemaitre.
Fix Improve numerical precision for weights boosting in ensemble.AdaBoostClassifier and ensemble.AdaBoostRegressor to avoid underflows. #10096 by Fenil Suchak.
Fix Fixed the range of the argument max_samples to be (0.0, 1.0] in ensemble.RandomForestClassifier and ensemble.RandomForestRegressor, where max_samples=1.0 is interpreted as using all n_samples for bootstrapping. #20159 by @murata-yu.
Fix Fixed a bug in ensemble.AdaBoostClassifier and ensemble.AdaBoostRegressor where the sample_weight parameter got overwritten during fit. #20534 by Guillaume Lemaitre.
API Change Removes the tol=None option in ensemble.HistGradientBoostingClassifier and ensemble.HistGradientBoostingRegressor. Please use tol=0 for the same behavior. #19296 by Thomas Fan.
sklearn.feature_extraction#
Fix Fixed a bug in feature_extraction.text.HashingVectorizer where some input strings would result in negative indices in the transformed data. #19035 by Liu Yu.
Fix Fixed a bug in feature_extraction.DictVectorizer by raising an error with unsupported value type. #19520 by Jeff Zhao.
Fix Fixed a bug in feature_extraction.image.img_to_graph and feature_extraction.image.grid_to_graph where singleton connected components were not handled properly, resulting in a wrong vertex indexing. #18964 by Bertrand Thirion.
Fix Raise a warning in feature_extraction.text.CountVectorizer with lowercase=True when there are vocabulary entries with uppercase characters to avoid silent misses in the resulting feature vectors. #19401 by Zito Relova.
sklearn.feature_selection#
Feature feature_selection.r_regression computes Pearson's R correlation coefficients between the features and the target. #17169 by Dmytro Lituiev and Julien Jerphanion.
Enhancement feature_selection.RFE.fit accepts additional estimator parameters that are passed directly to the estimator's fit method. #20380 by Iván Pulido, Felipe Bidu, Gil Rutter, and Adrin Jalali.
Fix Fixed a bug in isotonic.isotonic_regression where the sample_weight passed by a user was overwritten during fit. #20515 by Carsten Allefeld.
Fix Changed feature_selection.SequentialFeatureSelector to allow for unsupervised modelling so that the fit signature need not do any y validation and allows y=None. #19568 by Shyam Desai.
API Change Raises an error in feature_selection.VarianceThreshold when the variance threshold is negative. #20207 by Tomohiro Endo.
API Change Deprecates grid_scores_ in favor of split scores in cv_results_ in feature_selection.RFECV. grid_scores_ will be removed in version 1.2. #20161 by Shuhei Kayawari and @arka204.
sklearn.inspection#
Enhancement Added a max_samples parameter to inspection.permutation_importance. It enables drawing a subset of the samples to compute the permutation importance. This is useful to keep the method tractable when evaluating feature importance on large datasets. #20431 by Oliver Pfaffel.
Enhancement Added kwargs to format ICE and PD lines separately in the partial dependence plots inspection.plot_partial_dependence and inspection.PartialDependenceDisplay.plot. #19428 by Mehdi Hamoumi.
Fix Allow multiple scorers input to inspection.permutation_importance. #19411 by Simona Maggio.
API Change inspection.PartialDependenceDisplay exposes a class method: from_estimator. inspection.plot_partial_dependence is deprecated in favor of the class method and will be removed in 1.2. #20959 by Thomas Fan.
sklearn.kernel_approximation#
Fix Fixed a bug in kernel_approximation.Nystroem where the attribute component_indices_ did not correspond to the subset of sample indices used to generate the approximated kernel. #20554 by Xiangyin Kong.
sklearn.linear_model#
Major Feature Added linear_model.QuantileRegressor which implements linear quantile regression with L1 penalty. #9978 by David Dale and Christian Lorentzen.
Feature The new linear_model.SGDOneClassSVM provides an SGD implementation of the linear One-Class SVM. Combined with kernel approximation techniques, this implementation approximates the solution of a kernelized One-Class SVM while benefiting from a linear complexity in the number of samples. #10027 by Albert Thomas.
Feature Added a sample_weight parameter to linear_model.LassoCV and linear_model.ElasticNetCV. #16449 by Christian Lorentzen.
Feature Added a new solver lbfgs (available with solver="lbfgs") and a positive argument to linear_model.Ridge. When positive is set to True, the coefficients are forced to be positive (only supported by lbfgs). #20231 by Toshihiro Nakae.
Efficiency The implementation of linear_model.LogisticRegression has been optimised for dense matrices when using solver='newton-cg' and multi_class!='multinomial'. #19571 by Julien Jerphanion.
Enhancement The fit method preserves the numpy.float32 dtype in linear_model.Lars, linear_model.LassoLars, linear_model.LarsCV and linear_model.LassoLarsCV. #20155 by Takeshi Oura.
Enhancement Validate a user-supplied gram matrix passed to linear models via the precompute argument. #19004 by Adam Midvidy.
Fix linear_model.ElasticNet.fit no longer modifies sample_weight in place. #19055 by Thomas Fan.
Fix linear_model.Lasso and linear_model.ElasticNet no longer have a dual_gap_ not corresponding to their objective. #19172 by Mathurin Massias.
Fix sample_weight is now fully taken into account in linear models when normalize=True for both feature centering and feature scaling. #19426 by Alexandre Gramfort and Maria Telenczuk.
Fix Points with residuals equal to residual_threshold are now considered as inliers for linear_model.RANSACRegressor. This allows fitting a model perfectly on some datasets when residual_threshold=0. #19499 by Gregory Strubel.
Fix Sample weight invariance for linear_model.Ridge was fixed in #19616 by Olivier Grisel and Christian Lorentzen.
Fix The dictionary params in linear_model.enet_path and linear_model.lasso_path should only contain parameters of the coordinate descent solver. Otherwise, an error will be raised. #19391 by Shao Yang Hong.
API Change Raise a warning in linear_model.RANSACRegressor that, from version 1.2, min_samples needs to be set explicitly for models other than linear_model.LinearRegression. #19390 by Shao Yang Hong.
API Change The parameter normalize of linear_model.LinearRegression is deprecated and will be removed in 1.2. Motivation for this deprecation: the normalize parameter had no effect if fit_intercept was set to False and was therefore deemed confusing. The behavior of the deprecated LinearModel(normalize=True) can be reproduced with a Pipeline with LinearModel (where LinearModel is LinearRegression, Ridge, RidgeClassifier, RidgeCV or RidgeClassifierCV) as follows: make_pipeline(StandardScaler(with_mean=False), LinearModel()). The normalize parameter in LinearRegression was deprecated in #17743 by Maria Telenczuk and Alexandre Gramfort. Same for Ridge, RidgeClassifier, RidgeCV, and RidgeClassifierCV, in #17772 by Maria Telenczuk and Alexandre Gramfort. Same for BayesianRidge and ARDRegression, in #17746 by Maria Telenczuk. Same for Lasso, LassoCV, ElasticNet, ElasticNetCV, MultiTaskLasso, MultiTaskLassoCV, MultiTaskElasticNet and MultiTaskElasticNetCV, in #17785 by Maria Telenczuk and Alexandre Gramfort.
API Change The normalize parameter of OrthogonalMatchingPursuit and OrthogonalMatchingPursuitCV will default to False in 1.2 and will be removed in 1.4. #17750 by Maria Telenczuk and Alexandre Gramfort. Same for Lars, LarsCV, LassoLars, LassoLarsCV and LassoLarsIC, in #17769 by Maria Telenczuk and Alexandre Gramfort.
API Change Keyword validation has moved from __init__ and set_params to fit for the following estimators conforming to scikit-learn's conventions: SGDClassifier, SGDRegressor, SGDOneClassSVM, PassiveAggressiveClassifier, and PassiveAggressiveRegressor. #20683 by Guillaume Lemaitre.
sklearn.manifold#
Enhancement Implemented the 'auto' heuristic for the learning_rate in manifold.TSNE. It will become the default in 1.2. The default initialization will change to pca in 1.2. PCA initialization will be scaled to have standard deviation 1e-4 in 1.2. #19491 by Dmitry Kobak.
Fix Change numerical precision to prevent underflow issues during affinity matrix computation for manifold.TSNE. #19472 by Dmitry Kobak.
Fix manifold.Isomap now uses scipy.sparse.csgraph.shortest_path to compute the graph shortest path. It also connects disconnected components of the neighbors graph along some minimum distance pairs, instead of changing every infinite distance to zero. #20531 by Roman Yurchak and Tom Dupre la Tour.
Fix Decrease the numerical default tolerance in the lobpcg call in manifold.spectral_embedding to prevent numerical instability. #21194 by Andrew Knyazev.
sklearn.metrics#
Feature metrics.mean_pinball_loss exposes the pinball loss for quantile regression. #19415 by Xavier Dupré and Olivier Grisel.
Feature metrics.d2_tweedie_score calculates the D^2 regression score for Tweedie deviances with power parameter power. This is a generalization of the r2_score and can be interpreted as the percentage of Tweedie deviance explained. #17036 by Christian Lorentzen.
Feature metrics.mean_squared_log_error now supports squared=False. #20326 by Uttam kumar.
Efficiency Improved speed of metrics.confusion_matrix when labels are integral. #9843 by Jon Crall.
Enhancement A fix to raise an error in metrics.hinge_loss when pred_decision is 1d whereas it is a multiclass classification, or when the pred_decision parameter is not consistent with the labels parameter. #19643 by Pierre Attard.
Fix metrics.ConfusionMatrixDisplay.plot uses the correct max for the colormap. #19784 by Thomas Fan.
Fix Samples with zero sample_weight values do not affect the results from metrics.det_curve, metrics.precision_recall_curve and metrics.roc_curve. #18328 by Albert Villanova del Moral and Alonso Silva Allende.
Fix Avoid overflow in metrics.adjusted_rand_score with large amounts of data. #20312 by Divyanshu Deoli.
API Change metrics.ConfusionMatrixDisplay exposes two class methods, from_estimator and from_predictions, allowing to create a confusion matrix plot using an estimator or the predictions. metrics.plot_confusion_matrix is deprecated in favor of these two class methods and will be removed in 1.2. #18543 by Guillaume Lemaitre.
API Change metrics.PrecisionRecallDisplay exposes two class methods, from_estimator and from_predictions, allowing to create a precision-recall curve using an estimator or the predictions. metrics.plot_precision_recall_curve is deprecated in favor of these two class methods and will be removed in 1.2. #20552 by Guillaume Lemaitre.
API Change metrics.DetCurveDisplay exposes two class methods, from_estimator and from_predictions, allowing to create a DET curve plot using an estimator or the predictions. metrics.plot_det_curve is deprecated in favor of these two class methods and will be removed in 1.2. #19278 by Guillaume Lemaitre.
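The pinball loss added in metrics.mean_pinball_loss has a simple closed form; a pure-Python sketch of the formula (illustrative, not scikit-learn's implementation):

```python
def pinball_loss(y_true, y_pred, alpha=0.5):
    """Mean pinball (quantile) loss at quantile level alpha.

    Each residual contributes alpha * residual when under-predicting and
    (1 - alpha) * (-residual) when over-predicting; alpha = 0.5 gives
    half the mean absolute error.
    """
    total = 0.0
    for yt, yp in zip(y_true, y_pred):
        diff = yt - yp
        total += alpha * diff if diff >= 0 else (alpha - 1) * diff
    return total / len(y_true)
```

The asymmetry is the point: a high alpha penalizes under-prediction more, so minimizing the loss pushes predictions toward the alpha-quantile of the target.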
sklearn.mixture#
Fix Ensure that the best parameters are set appropriately in the case of divergence for mixture.GaussianMixture and mixture.BayesianGaussianMixture. #20030 by Tingshan Liu and Benjamin Pedigo.
sklearn.model_selection#
Feature Added model_selection.StratifiedGroupKFold, which combines model_selection.StratifiedKFold and model_selection.GroupKFold, providing the ability to split data preserving the distribution of classes in each split while keeping each group within a single split. #18649 by Leandro Hermida and Rodion Martynov.
Enhancement Warn only once in the main process for per-split fit failures in cross-validation. #20619 by Loïc Estève.
Enhancement The model_selection.BaseShuffleSplit base class is now public. #20056 by @pabloduque0.
Fix Avoid premature overflow in model_selection.train_test_split. #20904 by Tomasz Jakubek.
sklearn.naive_bayes#
Fix The fit and partial_fit methods of the discrete naive Bayes classifiers (naive_bayes.BernoulliNB, naive_bayes.CategoricalNB, naive_bayes.ComplementNB, and naive_bayes.MultinomialNB) now correctly handle the degenerate case of a single class in the training set. #18925 by David Poznik.
API Change The attribute sigma_ is now deprecated in naive_bayes.GaussianNB and will be removed in 1.2. Use var_ instead. #18842 by Hong Shao Yang.
sklearn.neighbors#
Enhancement The construction of neighbors.KDTree and neighbors.BallTree has been improved: their worst-case time complexity is reduced from \(\mathcal{O}(n^2)\) to \(\mathcal{O}(n)\). #19473 by jiefangxuanyan and Julien Jerphanion.
Fix neighbors.DistanceMetric subclasses now support readonly memory-mapped datasets. #19883 by Julien Jerphanion.
Fix neighbors.NearestNeighbors, neighbors.KNeighborsClassifier, neighbors.RadiusNeighborsClassifier, neighbors.KNeighborsRegressor and neighbors.RadiusNeighborsRegressor now validate weights in fit rather than in __init__. #20072 by Juan Carlos Alfaro Jiménez.
API Change The parameter kwargs of neighbors.RadiusNeighborsClassifier is deprecated and will be removed in 1.2. #20842 by Juan Martín Loyola.
sklearn.neural_network#
Fix neural_network.MLPClassifier and neural_network.MLPRegressor now correctly support continued training when loading from a pickled file. #19631 by Thomas Fan.
sklearn.pipeline#
API Change The predict_proba and predict_log_proba methods of pipeline.Pipeline now support passing prediction kwargs to the final estimator. #19790 by Christopher Flynn.
sklearn.preprocessing#
Feature The new preprocessing.SplineTransformer is a feature preprocessing tool for the generation of B-splines, parametrized by the polynomial degree of the splines, the number of knots n_knots and the knot positioning strategy knots. #18368 by Christian Lorentzen. preprocessing.SplineTransformer also supports periodic splines via the extrapolation argument. #19483 by Malte Londschien. preprocessing.SplineTransformer supports sample weights for the knot position strategy "quantile". #20526 by Malte Londschien.
Feature preprocessing.OrdinalEncoder supports passing through missing values by default. #19069 by Thomas Fan.
Feature preprocessing.OneHotEncoder now supports handle_unknown='ignore' and dropping categories. #19041 by Thomas Fan.
Feature preprocessing.PolynomialFeatures now supports passing a tuple to degree, i.e. degree=(min_degree, max_degree). #20250 by Christian Lorentzen.
Efficiency preprocessing.StandardScaler is faster and more memory efficient. #20652 by Thomas Fan.
Efficiency Changed the algorithm argument of cluster.KMeans in preprocessing.KBinsDiscretizer from auto to full. #19934 by Hleb Levitski.
Efficiency The implementation of fit for the preprocessing.PolynomialFeatures transformer is now faster. This is especially noticeable on large sparse input. #19734 by Fred Robinson.
Fix The preprocessing.StandardScaler.inverse_transform method now raises an error when the input data is 1D. #19752 by Zhehao Liu.
Fix preprocessing.scale, preprocessing.StandardScaler and similar scalers detect near-constant features to avoid scaling them to very large values. This problem happens in particular when using a scaler on sparse data with a constant column and sample weights, in which case centering is typically disabled. #19527 by Olivier Grisel and Maria Telenczuk and #19788 by Jérémie du Boisberranger.
Fix preprocessing.StandardScaler.inverse_transform now correctly handles integer dtypes. #19356 by @makoeppel.
Fix preprocessing.OrdinalEncoder.inverse_transform does not support sparse matrices and now raises the appropriate error message. #19879 by Guillaume Lemaitre.
Fix The fit method of preprocessing.OrdinalEncoder no longer raises an error when handle_unknown='ignore' and unknown categories are given to fit. #19906 by Zhehao Liu.
Fix Fixed a regression in preprocessing.OrdinalEncoder where large Python numeric values would raise an error due to overflow when cast to a C type (np.float64 or np.int64). #20727 by Guillaume Lemaitre.
Fix preprocessing.FunctionTransformer does not set n_features_in_ based on the input to inverse_transform. #20961 by Thomas Fan.
API Change The n_input_features_ attribute of preprocessing.PolynomialFeatures is deprecated in favor of n_features_in_ and will be removed in 1.2. #20240 by Jérémie du Boisberranger.
sklearn.svm#
API Change The parameter **params of svm.OneClassSVM.fit is deprecated and will be removed in 1.2. #20843 by Juan Martín Loyola.
sklearn.tree#
Enhancement Added a fontname argument to tree.export_graphviz for non-English characters. #18959 by Zero and wstates.
Fix Improved compatibility of tree.plot_tree with high DPI screens. #20023 by Thomas Fan.
Fix Fixed a bug in tree.DecisionTreeClassifier and tree.DecisionTreeRegressor where a node could be split even though it should not have been, due to incorrect handling of rounding errors. #19336 by Jérémie du Boisberranger.
API Change The n_features_ attribute of tree.DecisionTreeClassifier, tree.DecisionTreeRegressor, tree.ExtraTreeClassifier and tree.ExtraTreeRegressor is deprecated in favor of n_features_in_ and will be removed in 1.2. #20272 by Jérémie du Boisberranger.
sklearn.utils#
Enhancement Deprecated the default value random_state=0 in randomized_svd. Starting in 1.2, the default value of random_state will be set to None. #19459 by Cindy Bezuidenhout and Clifford Akai-Nettey.
Enhancement Added the helper decorator utils.metaestimators.available_if to provide flexibility in meta-estimators, making methods available or unavailable on the basis of state in a more readable way. #19948 by Joel Nothman.
Enhancement utils.validation.check_is_fitted now uses __sklearn_is_fitted__ if available, instead of checking for attributes ending with an underscore. This also makes pipeline.Pipeline and preprocessing.FunctionTransformer pass check_is_fitted(estimator). #20657 by Adrin Jalali.
Fix Fixed a bug in utils.sparsefuncs.mean_variance_axis where the precision of the computed variance was very poor when the real variance is exactly zero. #19766 by Jérémie du Boisberranger.
Fix The docstrings of properties that are decorated with utils.deprecated are now properly wrapped. #20385 by Thomas Fan.
Fix utils.stats._weighted_percentile now correctly ignores zero-weighted observations smaller than the smallest observation with positive weight for percentile=0. Affected classes are dummy.DummyRegressor for quantile=0 and ensemble.HuberLossFunction for alpha=0. #20528 by Malte Londschien.
Fix utils._safe_indexing explicitly takes a dataframe copy when integer indices are provided, avoiding a warning from pandas. This warning was previously raised in resampling utilities and in functions using those utilities (e.g. model_selection.train_test_split, model_selection.cross_validate, model_selection.cross_val_score, model_selection.cross_val_predict). #20673 by Joris Van den Bossche.
Fix Fixed a regression in utils.is_scalar_nan where large Python numbers would raise an error due to overflow in C types (np.float64 or np.int64). #20727 by Guillaume Lemaitre.
Fix Support for np.matrix is deprecated in check_array in 1.0 and will raise a TypeError in 1.2. #20165 by Thomas Fan.
API Change utils._testing.assert_warns and utils._testing.assert_warns_message are deprecated in 1.0 and will be removed in 1.2. Use the pytest.warns context manager instead. Note that these functions were neither documented nor part of the public API. #20521 by Olivier Grisel.
API Change Fixed several bugs in utils.graph.graph_shortest_path, which is now deprecated. Use scipy.sparse.csgraph.shortest_path instead. #20531 by Tom Dupre la Tour.
Code and documentation contributors
Thanks to everyone who has contributed to the maintenance and improvement of the project since version 0.24, including:
Abdulelah S. Al Mesfer, Abhinav Gupta, Adam J. Stewart, Adam Li, Adam Midvidy, Adrian Garcia Badaracco, Adrian Sadłocha, Adrin Jalali, Agamemnon Krasoulis, Alberto Rubiales, Albert Thomas, Albert Villanova del Moral, Alek Lefebvre, Alessia Marcolini, Alexandr Fonari, Alihan Zihna, Aline Ribeiro de Almeida, Amanda, Amanda Dsouza, Amol Deshmukh, Ana Pessoa, Anavelyz, Andreas Mueller, Andrew Delong, Ashish, Ashvith Shetty, Atsushi Nukariya, Aurélien Geron, Avi Gupta, Ayush Singh, baam, BaptBillard, Benjamin Pedigo, Bertrand Thirion, Bharat Raghunathan, bmalezieux, Brian Rice, Brian Sun, Bruno Charron, Bryan Chen, bumblebee, caherrera-meli, Carsten Allefeld, CeeThinwa, Chiara Marmo, chrissobel, Christian Lorentzen, Christopher Yeh, Chuliang Xiao, Clément Fauchereau, cliffordEmmanuel, Conner Shen, Connor Tann, David Dale, David Katz, David Poznik, Dimitri Papadopoulos Orfanos, Divyanshu Deoli, dmallia17, Dmitry Kobak, DS_anas, Eduardo Jardim, EdwinWenink, EL-ATEIF Sara, Eleni Markou, EricEllwanger, Eric Fiegel, Erich Schubert, Ezri-Mudde, Fatos Morina, Felipe Rodrigues, Felix Hafner, Fenil Suchak, flyingdutchman23, Flynn, Fortune Uwha, Francois Berenger, Frankie Robertson, Frans Larsson, Frederick Robinson, frellwan, Gabriel S Vicente, Gael Varoquaux, genvalen, Geoffrey Thomas, geroldcsendes, Hleb Levitski, Glen, Glòria Macià Muñoz, gregorystrubel, groceryheist, Guillaume Lemaitre, guiweber, Haidar Almubarak, Hans Moritz Günther, Haoyin Xu, Harris Mirza, Harry Wei, Harutaka Kawamura, Hassan Alsawadi, Helder Geovane Gomes de Lima, Hugo DEFOIS, Igor Ilic, Ikko Ashimine, Isaack Mungui, Ishaan Bhat, Ishan Mishra, Iván Pulido, iwhalvic, J Alexander, Jack Liu, James Alan Preiss, James Budarz, James Lamb, Jannik, Jeff Zhao, Jennifer Maldonado, Jérémie du Boisberranger, Jesse Lima, Jianzhu Guo, jnboehm, Joel Nothman, JohanWork, John Paton, Jonathan Schneider, Jon Crall, Jon Haitz Legarreta Gorroño, Joris Van den Bossche, José Manuel Nápoles Duarte, Juan Carlos Alfaro Jiménez, 
Juan Martin Loyola, Julien Jerphanion, Julio Batista Silva, julyrashchenko, JVM, Kadatatlu Kishore, Karen Palacio, Kei Ishikawa, kmatt10, kobaski, Kot271828, Kunj, KurumeYuta, kxytim, lacrosse91, LalliAcqua, Laveen Bagai, Leonardo Rocco, Leonardo Uieda, Leopoldo Corona, Loic Esteve, LSturtew, Luca Bittarello, Luccas Quadros, Lucy Jiménez, Lucy Liu, ly648499246, Mabu Manaileng, Manimaran, makoeppel, Marco Gorelli, Maren Westermann, Mariangela, Maria Telenczuk, marielaraj, Martin Hirzel, Mateo Noreña, Mathieu Blondel, Mathis Batoul, mathurinm, Matthew Calcote, Maxime Prieur, Maxwell, Mehdi Hamoumi, Mehmet Ali Özer, Miao Cai, Michal Karbownik, michalkrawczyk, Mitzi, mlondschien, Mohamed Haseeb, Mohamed Khoualed, Muhammad Jarir Kanji, murata-yu, Nadim Kawwa, Nanshan Li, naozin555, Nate Parsons, Neal Fultz, Nic Annau, Nicolas Hug, Nicolas Miller, Nico Stefani, Nigel Bosch, Nikita Titov, Nodar Okroshiashvili, Norbert Preining, novaya, Ogbonna Chibuike Stephen, OGordon100, Oliver Pfaffel, Olivier Grisel, Oras Phongpanangam, Pablo Duque, Pablo Ibieta-Jimenez, Patric Lacouth, Paulo S. Costa, Paweł Olszewski, Peter Dye, PierreAttard, Pierre-Yves Le Borgne, PranayAnchuri, Prince Canuma, putschblos, qdeffense, RamyaNP, ranjanikrishnan, Ray Bell, Rene Jean Corneille, Reshama Shaikh, ricardojnf, RichardScottOZ, Rodion Martynov, Rohan Paul, Roman Lutz, Roman Yurchak, Samuel Brice, Sandy Khosasi, Sean Benhur J, Sebastian Flores, Sebastian Pölsterl, Shao Yang Hong, shinehide, shinnar, shivamgargsya, Shooter23, Shuhei Kayawari, Shyam Desai, simonamaggio, Sina Tootoonian, solosilence, Steven Kolawole, Steve Stagg, Surya Prakash, swpease, Sylvain Marié, Takeshi Oura, Terence Honles, TFiFiE, Thomas A Caswell, Thomas J. 
Fan, Tim Gates, TimotheeMathieu, Timothy Wolodzko, Tim Vink, t-jakubek, t-kusanagi, tliu68, Tobias Uhmann, tom1092, Tomás Moreyra, Tomás Ronald Hughes, Tom Dupré la Tour, Tommaso Di Noto, Tomohiro Endo, TONY GEORGE, Toshihiro NAKAE, tsuga, Uttam kumar, vadim-ushtanit, Vangelis Gkiastas, Venkatachalam N, Vilém Zouhar, Vinicius Rios Fuck, Vlasovets, waijean, Whidou, xavier dupré, xiaoyuchai, Yasmeen Alsaedy, yoch, Yosuke KOBAYASHI, Yu Feng, YusukeNagasaka, yzhenman, Zero, ZeyuSun, ZhaoweiWang, Zito, Zito Relova