Results 1051 - 1060 of 4,138 for document (6.81 seconds)
3.3. Tuning the decision threshold for class pr...
Classification is best divided into two parts: the statistical problem of learning a model to predict, ideally, class probabilities; and the decision problem of taking concrete action based on those pro...
scikit-learn.org/stable/modules/classification_threshold.html
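The snippet's two-part framing can be sketched in a few lines: first estimate probabilities, then act on them with a threshold chosen for the application. This is a minimal illustration, not the code from the linked page; the dataset, the 0.2 threshold, and the recall objective are all illustrative choices.

```python
# Sketch: separate the statistical problem (probability estimation) from
# the decision problem (picking a threshold). Toy data; 0.2 is an
# arbitrary lowered threshold favouring recall on the minority class.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)

# Step 1: the statistical problem -- estimate class probabilities.
proba = clf.predict_proba(X_test)[:, 1]

# Step 2: the decision problem -- act on those probabilities with a
# threshold chosen for the application instead of predict()'s 0.5.
y_default = (proba >= 0.5).astype(int)
y_lowered = (proba >= 0.2).astype(int)

print(recall_score(y_test, y_default), recall_score(y_test, y_lowered))
```

Lowering the threshold can only add positive predictions, so recall never decreases; recent scikit-learn versions also provide `TunedThresholdClassifierCV` to search the threshold automatically, which is what the linked page covers.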
5.1. Partial Dependence and Individual Conditio...
Partial dependence plots (PDP) and individual conditional expectation (ICE) plots can be used to visualize and analyze interaction between the target response and a set of input features of inter...
scikit-learn.org/stable/modules/partial_dependence.html
Plot the decision surface of decision trees tra...
Plot the decision surface of a decision tree trained on pairs of features of the iris dataset. See decision tree for more information on the estimator. For each pair of iris features, the decision ...
scikit-learn.org/stable/auto_examples/tree/plot_iris_dtc.html
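The core of that example is fitting a tree on one pair of features and evaluating it on a dense grid; coloring the grid predictions is what produces the decision surface. A minimal sketch with plotting omitted (the feature pair and grid resolution are arbitrary choices):

```python
# Sketch: decision surface of a tree on one iris feature pair.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
X = iris.data[:, [0, 1]]  # sepal length and sepal width -- one pair
y = iris.target

tree = DecisionTreeClassifier(random_state=0).fit(X, y)

# Evaluate the classifier over a grid covering the feature range.
xx, yy = np.meshgrid(
    np.linspace(X[:, 0].min() - 1, X[:, 0].max() + 1, 100),
    np.linspace(X[:, 1].min() - 1, X[:, 1].max() + 1, 100),
)
Z = tree.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
print(Z.shape)  # each grid cell now holds a predicted class
```

Passing `Z` to a contour plot (or using `DecisionBoundaryDisplay.from_estimator`) renders the axis-aligned regions characteristic of trees.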
Manifold learning on handwritten digits: Locall...
We illustrate various embedding techniques on the digits dataset. Load digits dataset: We will load the digits dataset and only use the first six of the ten available classes. We can plot the first hun...
scikit-learn.org/stable/auto_examples/manifold/plot_lle_digits.html
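The data-loading step described in the snippet, plus one of the embeddings the example illustrates, fits in a few lines. This is a sketch, not the example itself; `n_neighbors=30` is an arbitrary choice.

```python
# Sketch: embed the first six digit classes in 2-D with locally linear
# embedding, one of the manifold methods the example compares.
from sklearn.datasets import load_digits
from sklearn.manifold import LocallyLinearEmbedding

X, y = load_digits(n_class=6, return_X_y=True)  # classes 0-5 only

embedding = LocallyLinearEmbedding(n_components=2, n_neighbors=30)
X_2d = embedding.fit_transform(X)
print(X.shape, "->", X_2d.shape)  # 64-D pixel vectors down to 2-D
```

Scatter-plotting `X_2d` colored by `y` reproduces the kind of figure the example shows for each technique.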
Decision Boundaries of Multinomial and One-vs-R...
This example compares decision boundaries of multinomial and one-vs-rest logistic regression on a 2D dataset with three classes. We make a comparison of the decision boundaries of both methods that...
scikit-learn.org/stable/auto_examples/linear_model/plot_logistic_multinomial.html
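The two strategies can be set up side by side: a single multinomial (softmax) model versus an explicit one-vs-rest wrapper around binary models. A minimal sketch on synthetic data, not the linked example's code:

```python
# Sketch: multinomial vs one-vs-rest logistic regression on 3 classes.
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

X, y = make_blobs(n_samples=300, centers=3, n_features=2, random_state=0)

multinomial = LogisticRegression().fit(X, y)               # one joint model
ovr = OneVsRestClassifier(LogisticRegression()).fit(X, y)  # 3 binary models

# Both expose per-class linear boundaries, but they are fit with
# different objectives, so the decision regions generally differ.
print(multinomial.coef_.shape, len(ovr.estimators_))
```

Plotting each model's predictions over a 2-D grid makes the difference in decision regions visible, which is the point of the example.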
Comparing anomaly detection algorithms for outl...
This example shows characteristics of different anomaly detection algorithms on 2D datasets. Datasets contain one or two modes (regions of high density) to illustrate the ability of algorithms to c...
scikit-learn.org/stable/auto_examples/miscellaneous/plot_anomaly_comparison.html
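Two of the detectors that comparison covers share the same `fit_predict` interface, which makes them easy to swap on a common dataset. A small sketch (the data and the 10% contamination level are assumptions for illustration):

```python
# Sketch: two anomaly detectors on the same 2-D data; fit_predict
# returns +1 for inliers and -1 for outliers.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.RandomState(0)
inliers = rng.normal(size=(200, 2))              # one high-density mode
outliers = rng.uniform(low=-6, high=6, size=(20, 2))
X = np.vstack([inliers, outliers])

pred_if = IsolationForest(contamination=0.1, random_state=0).fit_predict(X)
pred_lof = LocalOutlierFactor(contamination=0.1).fit_predict(X)
print((pred_if == -1).sum(), (pred_lof == -1).sum())  # flagged counts
```

The linked example runs several such detectors over multiple toy datasets and colors their decision regions to contrast their inductive biases.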
3.5. Validation curves: plotting scores to eval...
Every estimator has its advantages and drawbacks. Its generalization error can be decomposed in terms of bias, variance and noise. The bias of an estimator is its average error for different traini...
scikit-learn.org/stable/modules/learning_curve.html
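The validation curves that chapter describes come from `sklearn.model_selection.validation_curve`, which scores a model across one hyperparameter range. A minimal sketch; the estimator and the `max_depth` grid are illustrative choices:

```python
# Sketch: training and cross-validation scores across one parameter
# range -- the raw data behind a validation curve.
from sklearn.datasets import load_iris
from sklearn.model_selection import validation_curve
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
depths = [1, 2, 4, 8]

train_scores, test_scores = validation_curve(
    DecisionTreeClassifier(random_state=0), X, y,
    param_name="max_depth", param_range=depths, cv=5,
)
# One row per parameter value, one column per CV fold; plotting the
# row means against `depths` gives the validation curve.
print(train_scores.mean(axis=1), test_scores.mean(axis=1))
```

A widening gap between the two mean curves as depth grows is the bias/variance trade-off the chapter discusses.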
Ability of Gaussian process regression (GPR) to...
This example shows the ability of the WhiteKernel to estimate the noise level in the data. Moreover, we show the importance of kernel hyperparameters initialization. Data generation: We will work i...
scikit-learn.org/stable/auto_examples/gaussian_process/plot_gpr_noisy.html
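The mechanism the example demonstrates is that adding a `WhiteKernel` term lets the marginal-likelihood optimization absorb observation noise into a learnable hyperparameter. A sketch with a known injected noise level (the data, noise scale, and initial kernel values are illustrative):

```python
# Sketch: recover a known noise level with RBF + WhiteKernel.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.RandomState(0)
X = np.sort(rng.uniform(0, 5, size=(80, 1)), axis=0)
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=80)  # noise std 0.3

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1.0)
gpr = GaussianProcessRegressor(kernel=kernel, random_state=0).fit(X, y)

# The optimized kernel_ exposes the learned noise level, which should
# sit near the true noise variance 0.3**2 = 0.09.
print(gpr.kernel_)
```

The example goes further and shows that a poor initialization of these hyperparameters can leave the optimizer in a bad local optimum, hence the emphasis on initialization.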
Decision boundary of semi-supervised classifier...
This example compares decision boundaries learned by two semi-supervised methods, namely LabelSpreading and SelfTrainingClassifier, while varying the proportion of labeled training data from small ...
scikit-learn.org/stable/auto_examples/semi_supervised/plot_semi_supervised_versus_svm_iris.html
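The key convention in scikit-learn's semi-supervised API is that unlabeled samples carry the label -1. A minimal sketch with one of the two methods; hiding 70% of the iris labels and the `knn` kernel choice are assumptions for illustration:

```python
# Sketch: LabelSpreading on mostly-unlabeled data (-1 = unlabeled).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.semi_supervised import LabelSpreading

X, y = load_iris(return_X_y=True)
rng = np.random.RandomState(0)

y_partial = y.copy()
unlabeled = rng.rand(len(y)) < 0.7   # hide ~70% of the labels
y_partial[unlabeled] = -1

model = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, y_partial)

# transduction_ holds the labels propagated to every sample, including
# the hidden ones, so we can score the recovery.
acc = (model.transduction_ == y)[unlabeled].mean()
print(acc)
```

The linked example sweeps that labeled proportion and plots the resulting decision boundaries for both LabelSpreading and SelfTrainingClassifier.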
1.11. Ensembles: Gradient boosting, random fore...
Ensemble methods combine the predictions of several base estimators built with a given learning algorithm in order to improve generalizability / robustness over a single estimator. Two very famous ...
scikit-learn.org/stable/modules/ensemble.html
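The two ensemble families named in the title, averaging (random forests) and boosting (gradient boosting), can be compared on the same data in a few lines. A sketch with illustrative data and default settings, not code from the chapter:

```python
# Sketch: the two ensemble families side by side under cross-validation.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, random_state=0)

models = {
    "forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "boosting": GradientBoostingClassifier(random_state=0),
}

results = {}
for name, model in models.items():
    # Mean 5-fold accuracy of each ensemble of base trees.
    results[name] = cross_val_score(model, X, y, cv=5).mean()
    print(name, round(results[name], 3))
```

Forests average many decorrelated deep trees to cut variance, while boosting fits shallow trees sequentially to cut bias; the chapter develops both views.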