Results 831 - 840 of 1,745 for "document" (1.76 sec)

  1. Imputing missing values before building an esti...

    Missing values can be replaced by the mean, the median or the most frequent value using the basic SimpleImputer. In this example we will investigate different imputation techniques: imputation by t...
    scikit-learn.org/stable/auto_examples/impute/plot_missing_values.html
    Sat Oct 11 07:51:25 UTC 2025
      121.1K bytes
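
    As a quick, hedged sketch of the idea in this snippet, the following uses SimpleImputer with the "mean" strategy on a tiny made-up array (the values and the strategy choice are illustrative, not taken from the linked example):

      import numpy as np
      from sklearn.impute import SimpleImputer

      # Toy matrix with one missing entry; the values are invented for illustration.
      X = np.array([[1.0, 2.0], [np.nan, 4.0], [7.0, 6.0]])

      # Replace missing values with the column mean; "median" and
      # "most_frequent" are the other basic strategies mentioned above.
      imputer = SimpleImputer(strategy="mean")
      print(imputer.fit_transform(X))  # the NaN becomes 4.0, the mean of its column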
     
  2. 1.13. Feature selection — scikit-learn 1.7.2 do...

    The classes in the sklearn.feature_selection module can be used for feature selection/dimensionality reduction on sample sets, either to improve estimators’ accuracy scores or to boost their perfor...
    scikit-learn.org/stable/modules/feature_selection.html
    Sat Oct 11 07:51:26 UTC 2025
      73.8K bytes
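
    A minimal illustration of the module described here, using SelectKBest (one of several selectors in sklearn.feature_selection) on the iris toy data; the scorer and the choice of k are arbitrary:

      from sklearn.datasets import load_iris
      from sklearn.feature_selection import SelectKBest, f_classif

      # Keep the two features with the highest ANOVA F-score.
      X, y = load_iris(return_X_y=True)
      X_reduced = SelectKBest(f_classif, k=2).fit_transform(X, y)
      print(X.shape, "->", X_reduced.shape)  # (150, 4) -> (150, 2)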
     
  3. 2.8. Density Estimation — scikit-learn 1.7.2 do...

    Density estimation walks the line between unsupervised learning, feature engineering, and data modeling. Some of the most popular and useful density estimation techniques are mixture models such as...
    scikit-learn.org/stable/modules/density.html
    Sat Oct 11 07:51:27 UTC 2025
      45.5K bytes
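
    The page covers mixture models and kernel density estimation; as one small assumed example, a Gaussian KernelDensity fit to random 1-D data (the bandwidth and the sample are illustrative):

      import numpy as np
      from sklearn.neighbors import KernelDensity

      # Random 1-D sample, purely for illustration.
      rng = np.random.RandomState(0)
      X = rng.normal(loc=0.0, scale=1.0, size=(200, 1))

      # Fit a Gaussian kernel density estimate and evaluate it at two points.
      kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(X)
      log_density = kde.score_samples(np.array([[0.0], [3.0]]))
      print(np.exp(log_density))  # higher density near the mode than in the tail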
     
  4. 7.3. Preprocessing data — scikit-learn 1.7.2 do...

    The sklearn.preprocessing package provides several common utility functions and transformer classes to change raw feature vectors into a representation that is more suitable for the downstream esti...
    scikit-learn.org/stable/modules/preprocessing.html
    Sat Oct 11 07:51:25 UTC 2025
      198.2K bytes
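
    As a small sketch of one such transformer, StandardScaler standardizing a made-up feature matrix (the data are invented for the example):

      import numpy as np
      from sklearn.preprocessing import StandardScaler

      # Invented feature vectors; StandardScaler removes the mean and scales
      # each column to unit variance.
      X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
      scaler = StandardScaler().fit(X)
      print(scaler.mean_)         # per-column means: [2., 20.]
      print(scaler.transform(X))  # standardized copy of X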
     
  5. 1.16. Probability calibration — scikit-learn 1....

    When performing classification you often want not only to predict the class label, but also to obtain a probability of the respective label. This probability gives you some kind of confidence on the p...
    scikit-learn.org/stable/modules/calibration.html
    Sat Oct 11 07:51:26 UTC 2025
      63.3K bytes
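
    A hedged sketch of the general idea (not the code from the linked page): wrapping a LinearSVC, which has no predict_proba of its own, in CalibratedClassifierCV on synthetic data:

      from sklearn.calibration import CalibratedClassifierCV
      from sklearn.datasets import make_classification
      from sklearn.svm import LinearSVC

      # Synthetic binary classification data, for illustration only.
      X, y = make_classification(n_samples=300, random_state=0)

      # Sigmoid (Platt) calibration with 3-fold cross-validation.
      clf = CalibratedClassifierCV(LinearSVC(), method="sigmoid", cv=3)
      clf.fit(X, y)
      print(clf.predict_proba(X[:3]))  # each row sums to 1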
     
  6. 1.10. Decision Trees — scikit-learn 1.7.2 docum...

    Decision Trees (DTs) are a non-parametric supervised learning method used for classification and regression. The goal is to create a model that predicts the value of a target variable by learning s...
    scikit-learn.org/stable/modules/tree.html
    Sat Oct 11 07:51:25 UTC 2025
      94.5K bytes
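
    For instance, a small classification tree on the iris toy data (the depth limit is chosen arbitrarily for this sketch):

      from sklearn.datasets import load_iris
      from sklearn.tree import DecisionTreeClassifier

      # Fit a shallow tree and check training accuracy.
      X, y = load_iris(return_X_y=True)
      tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
      print(tree.predict(X[:1]), tree.score(X, y))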
     
  7. 8.1. Toy datasets — scikit-learn 1.7.2 document...

    scikit-learn comes with a few small standard datasets that do not require downloading any file from an external website. They can be loaded using the following functions: These datasets are usefu...
    scikit-learn.org/stable/datasets/toy_dataset.html
    Sat Oct 11 07:51:24 UTC 2025
      63.3K bytes
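
    Two of the loaders the page refers to, shown as a quick sketch; no download is involved since the data ship with scikit-learn:

      from sklearn.datasets import load_digits, load_iris

      # Load two of the bundled toy datasets and inspect their shapes.
      iris = load_iris()
      digits = load_digits()
      print(iris.data.shape, iris.target_names)  # (150, 4) ['setosa' 'versicolor' 'virginica']
      print(digits.data.shape)                   # (1797, 64)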
     
  8. L1-based models for Sparse Signals — scikit-lea...

    The present example compares three L1-based regression models on a synthetic signal obtained from sparse and correlated features that are further corrupted with additive Gaussian noise: a Lasso; a...
    scikit-learn.org/stable/auto_examples/linear_model/plot_lasso_and_elasticnet.html
    Sat Oct 11 07:51:27 UTC 2025
      125.4K bytes
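
    A simplified stand-in for the linked example (the signal generation below is assumed, not copied from it): a Lasso recovering a sparse coefficient vector from noisy measurements:

      import numpy as np
      from sklearn.linear_model import Lasso

      # Synthetic design matrix and a sparse ground-truth coefficient vector.
      rng = np.random.RandomState(0)
      X = rng.randn(100, 20)
      true_coef = np.zeros(20)
      true_coef[:3] = [5.0, -4.0, 3.0]
      y = X @ true_coef + 0.5 * rng.randn(100)

      # L1 regularization drives most estimated coefficients to exactly zero.
      lasso = Lasso(alpha=0.1).fit(X, y)
      print(np.count_nonzero(lasso.coef_))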
     
  9. Model-based and sequential feature selection — ...

    This example illustrates and compares two approaches for feature selection: SelectFromModel which is based on feature importance, and SequentialFeatureSelector which relies on a greedy approach. We...
    scikit-learn.org/stable/auto_examples/feature_selection/plot_select_from_model_diabetes.html
    Sat Oct 11 07:51:26 UTC 2025
      123.3K bytes
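
    A condensed, assumed sketch of the two approaches the example compares, both driven here by a LassoCV estimator on the diabetes toy data:

      from sklearn.datasets import load_diabetes
      from sklearn.feature_selection import SelectFromModel, SequentialFeatureSelector
      from sklearn.linear_model import LassoCV

      X, y = load_diabetes(return_X_y=True)

      # Importance-based selection: keep features whose coefficient magnitude
      # clears the default threshold.
      sfm = SelectFromModel(LassoCV()).fit(X, y)

      # Greedy forward selection of two features.
      sfs = SequentialFeatureSelector(LassoCV(), n_features_to_select=2).fit(X, y)

      print(sfm.get_support())
      print(sfs.get_support())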
     
  10. A demo of the mean-shift clustering algorithm —...

    Reference: Dorin Comaniciu and Peter Meer, “Mean Shift: A robust approach toward feature space analysis”. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2002. pp. 603-619. Generate...
    scikit-learn.org/stable/auto_examples/cluster/plot_mean_shift.html
    Sat Oct 11 07:51:27 UTC 2025
      92.3K bytes
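
    A compact sketch along the lines of the demo (the blob parameters and bandwidth quantile are assumptions, not values from the page):

      import numpy as np
      from sklearn.cluster import MeanShift, estimate_bandwidth
      from sklearn.datasets import make_blobs

      # Synthetic blobs to cluster.
      X, _ = make_blobs(n_samples=500, centers=3, cluster_std=0.6, random_state=0)

      # Estimate a bandwidth from the data, then run mean shift.
      bandwidth = estimate_bandwidth(X, quantile=0.2, n_samples=200)
      ms = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit(X)
      print("estimated clusters:", len(np.unique(ms.labels_)))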
     