sklearn.model_selection
Tools for model selection, such as cross-validation and hyper-parameter tuning.
User guide: see the Cross-validation: evaluating estimator performance, Tuning the hyper-parameters of an estimator, and Learning curve sections for further details.
Splitters
GroupKFold | K-fold iterator variant with non-overlapping groups.
GroupShuffleSplit | Shuffle-Group(s)-Out cross-validation iterator.
KFold | K-Fold cross-validator.
LeaveOneGroupOut | Leave One Group Out cross-validator.
LeaveOneOut | Leave-One-Out cross-validator.
LeavePGroupsOut | Leave P Group(s) Out cross-validator.
LeavePOut | Leave-P-Out cross-validator.
PredefinedSplit | Predefined split cross-validator.
RepeatedKFold | Repeated K-Fold cross-validator.
RepeatedStratifiedKFold | Repeated Stratified K-Fold cross-validator.
ShuffleSplit | Random permutation cross-validator.
StratifiedGroupKFold | Stratified K-Fold iterator variant with non-overlapping groups.
StratifiedKFold | Stratified K-Fold cross-validator.
StratifiedShuffleSplit | Stratified ShuffleSplit cross-validator.
TimeSeriesSplit | Time Series cross-validator.
check_cv | Input checker utility for building a cross-validator.
train_test_split | Split arrays or matrices into random train and test subsets.
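As a minimal sketch of the two most common entry points, the snippet below holds out a test set with train_test_split and then iterates over folds with KFold; the iris data, the 25% test size, and the five folds are arbitrary illustrative choices:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import KFold, train_test_split

    X, y = load_iris(return_X_y=True)

    # Hold out 25% of the samples as a final test set.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0
    )

    # Iterate over 5 non-overlapping train/validation index splits.
    kf = KFold(n_splits=5, shuffle=True, random_state=0)
    for fold, (train_idx, val_idx) in enumerate(kf.split(X_train)):
        print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} validation")

The other splitters share the same split(X, y, groups) iterator interface, so they can be swapped in wherever an estimator or search accepts a cv argument.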
Hyper-parameter optimizers
GridSearchCV | Exhaustive search over specified parameter values for an estimator.
HalvingGridSearchCV | Search over specified parameter values with successive halving.
HalvingRandomSearchCV | Randomized search on hyper-parameters with successive halving.
ParameterGrid | Grid of parameters with a discrete number of values for each.
ParameterSampler | Generator on parameters sampled from given distributions.
RandomizedSearchCV | Randomized search on hyper-parameters.
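A minimal sketch of the two stable searches follows; the SVC estimator, grid values, and sampling distribution are illustrative, and note that the halving variants additionally require the experimental enable_halving_search_cv import at the time of writing:

    from scipy.stats import loguniform
    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)

    # Exhaustive search tries every combination in the grid.
    grid = GridSearchCV(
        SVC(), {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}, cv=5
    )
    grid.fit(X, y)
    print(grid.best_params_, grid.best_score_)

    # Randomized search draws a fixed number of candidates from distributions.
    rand = RandomizedSearchCV(
        SVC(), {"C": loguniform(1e-2, 1e2)}, n_iter=10, cv=5, random_state=0
    )
    rand.fit(X, y)
    print(rand.best_params_, rand.best_score_)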
Post-fit model tuning
FixedThresholdClassifier | Binary classifier that manually sets the decision threshold.
TunedThresholdClassifierCV | Classifier that post-tunes the decision threshold using cross-validation.
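A minimal sketch, assuming scikit-learn >= 1.5 where these classes were introduced; the logistic regression base estimator, the imbalanced toy data, and the 0.3 threshold are illustrative choices:

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import (
        FixedThresholdClassifier,
        TunedThresholdClassifierCV,
    )

    X, y = make_classification(weights=[0.9, 0.1], random_state=0)

    # Pin the decision threshold at 0.3 instead of the default 0.5.
    fixed = FixedThresholdClassifier(
        LogisticRegression(), threshold=0.3
    ).fit(X, y)

    # Tune the threshold by internal cross-validation for balanced accuracy.
    tuned = TunedThresholdClassifierCV(
        LogisticRegression(), scoring="balanced_accuracy"
    ).fit(X, y)
    print(tuned.best_threshold_)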
Model validation
cross_val_predict | Generate cross-validated estimates for each input data point.
cross_val_score | Evaluate a score by cross-validation.
cross_validate | Evaluate metric(s) by cross-validation and also record fit/score times.
learning_curve | Determine cross-validated training and test scores for different training set sizes.
permutation_test_score | Evaluate the significance of a cross-validated score with permutations.
validation_curve | Determine training and test scores for varying values of a hyper-parameter.
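A minimal sketch contrasting cross_val_score, which returns a single array of per-fold test scores, with cross_validate, which records several metrics plus fit and score times; the estimator and the metrics are illustrative choices:

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score, cross_validate

    X, y = load_iris(return_X_y=True)
    clf = LogisticRegression(max_iter=1000)

    # One metric, one array of per-fold test scores.
    scores = cross_val_score(clf, X, y, cv=5)
    print(scores.mean())

    # Several metrics at once, plus per-fold fit/score times.
    results = cross_validate(clf, X, y, cv=5, scoring=["accuracy", "f1_macro"])
    print(results["test_accuracy"], results["fit_time"])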
Visualization
LearningCurveDisplay | Learning Curve visualization.
ValidationCurveDisplay | Validation Curve visualization.
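Both displays expose a from_estimator entry point. A minimal sketch for the learning-curve display is below; matplotlib must be installed, and the decision-tree classifier and five folds are arbitrary illustrative choices:

    import matplotlib.pyplot as plt
    from sklearn.datasets import load_iris
    from sklearn.model_selection import LearningCurveDisplay
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)

    # Fit on increasing training-set sizes and plot train/test scores.
    LearningCurveDisplay.from_estimator(DecisionTreeClassifier(), X, y, cv=5)
    plt.show()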