precision_recall_fscore_support
- sklearn.metrics.precision_recall_fscore_support(y_true, y_pred, *, beta=1.0, labels=None, pos_label=1, average=None, warn_for=('precision', 'recall', 'f-score'), sample_weight=None, zero_division='warn')
Compute precision, recall, F-measure and support for each class.
The precision is the ratio tp / (tp + fp) where tp is the number of true positives and fp the number of false positives. The precision is intuitively the ability of the classifier not to label a negative sample as positive.
The recall is the ratio tp / (tp + fn) where tp is the number of true positives and fn the number of false negatives. The recall is intuitively the ability of the classifier to find all the positive samples.
The F-beta score can be interpreted as a weighted harmonic mean of the precision and recall, where an F-beta score reaches its best value at 1 and its worst score at 0.
The F-beta score weights recall more than precision by a factor of beta. beta == 1.0 means recall and precision are equally important.
The support is the number of occurrences of each class in y_true.
Support beyond binary targets is achieved by treating multiclass and multilabel data as a collection of binary problems, one for each label. For the binary case, setting average='binary' will return metrics for pos_label. If average is not 'binary', pos_label is ignored and metrics for both classes are computed, then averaged or both returned (when average=None). Similarly, for multiclass and multilabel targets, metrics for all labels are either returned or averaged depending on the average parameter. Use labels to specify the set of labels to calculate metrics for.
Read more in the User Guide.
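As a rough, from-scratch illustration of the formulas above, here is a minimal NumPy sketch with made-up labels (not part of the scikit-learn API; values are illustrative only):

>>> import numpy as np
>>> y_true = np.array([1, 1, 1, 0, 0, 0])
>>> y_pred = np.array([1, 0, 1, 1, 0, 0])
>>> tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
>>> fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
>>> fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
>>> precision = tp / (tp + fp)
>>> recall = tp / (tp + fn)
>>> beta = 1.0  # beta == 1.0 weighs precision and recall equally
>>> fbeta = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
>>> round(float(precision), 3), round(float(recall), 3), round(float(fbeta), 3)
(0.667, 0.667, 0.667)

With beta > 1 the same formula shifts weight toward recall; with beta < 1, toward precision.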
- Parameters:
- y_true : 1d array-like, or label indicator array / sparse matrix
Ground truth (correct) target values.
- y_pred : 1d array-like, or label indicator array / sparse matrix
Estimated targets as returned by a classifier.
- beta : float, default=1.0
The strength of recall versus precision in the F-score.
- labels : array-like, default=None
The set of labels to include when average != 'binary', and their order if average is None. Labels present in the data can be excluded, for example in multiclass classification to exclude a “negative class”. Labels not present in the data can be included and will be “assigned” 0 samples. For multilabel targets, labels are column indices. By default, all labels in y_true and y_pred are used in sorted order.
Changed in version 0.17: Parameter labels improved for multiclass problem.
- pos_label : int, float, bool or str, default=1
The class to report if average='binary' and the data is binary, otherwise this parameter is ignored. For multiclass or multilabel targets, set labels=[pos_label] and average != 'binary' to report metrics for one label only.
- average : {‘micro’, ‘macro’, ‘samples’, ‘weighted’, ‘binary’} or None, default=None
This parameter is required for multiclass/multilabel targets. If None, the metrics for each class are returned. Otherwise, this determines the type of averaging performed on the data:
- 'binary': Only report results for the class specified by pos_label. This is applicable only if targets (y_{true,pred}) are binary. See the sketch after this parameter list.
- 'micro': Calculate metrics globally by counting the total true positives, false negatives and false positives.
- 'macro': Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
- 'weighted': Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters ‘macro’ to account for label imbalance; it can result in an F-score that is not between precision and recall.
- 'samples': Calculate metrics for each instance, and find their average (only meaningful for multilabel classification where this differs from accuracy_score).
- warn_for : list, tuple or set, for internal use
This determines which warnings will be made in the case that this function is being used to return only one of its metrics.
- sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
- zero_division : {“warn”, 0.0, 1.0, np.nan}, default=”warn”
Sets the value to return when there is a zero division:
- recall: when there are no positive labels
- precision: when there are no positive predictions
- f-score: both
Notes:
- If set to “warn”, this acts like 0, but a warning is also raised.
- If set to np.nan, such values will be excluded from the average.
Added in version 1.3: np.nan option was added.
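As referenced under average above, a small illustrative sketch of binary averaging (labels made up; outputs abbreviated, as in the Examples below): with average='binary', only the class selected by pos_label is scored, so swapping pos_label swaps the roles of the two classes:

>>> from sklearn.metrics import precision_recall_fscore_support
>>> y_true = [0, 1, 1, 0, 1]
>>> y_pred = [0, 1, 0, 0, 1]
>>> precision_recall_fscore_support(y_true, y_pred, average='binary', pos_label=1)
(1.0, 0.667, 0.8, None)
>>> precision_recall_fscore_support(y_true, y_pred, average='binary', pos_label=0)
(0.667, 1.0, 0.8, None)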
- Returns:
- precision : float (if average is not None) or array of float, shape = [n_unique_labels]
Precision score.
- recall : float (if average is not None) or array of float, shape = [n_unique_labels]
Recall score.
- fbeta_score : float (if average is not None) or array of float, shape = [n_unique_labels]
F-beta score.
- support : None (if average is not None) or array of int, shape = [n_unique_labels]
The number of occurrences of each label in y_true.
Notes
When true positive + false positive == 0, precision is undefined. When true positive + false negative == 0, recall is undefined. When true positive + false negative + false positive == 0, f-score is undefined. In such cases, by default the metric will be set to 0, and UndefinedMetricWarning will be raised. This behavior can be modified with zero_division.
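For instance, in the following sketch (made-up labels; outputs abbreviated) there are no positive predictions, so precision hits a zero division. zero_division substitutes the chosen value, while the default “warn” returns 0 and emits UndefinedMetricWarning:

>>> from sklearn.metrics import precision_recall_fscore_support
>>> y_true = [1, 1]
>>> y_pred = [0, 0]  # no positive predictions: tp + fp == 0
>>> precision_recall_fscore_support(y_true, y_pred, average='binary', zero_division=0.0)
(0.0, 0.0, 0.0, None)
>>> precision_recall_fscore_support(y_true, y_pred, average='binary', zero_division=1.0)
(1.0, 0.0, 0.0, None)

Recall stays 0.0 in both calls because positive labels do exist (tp + fn == 2), so it is defined.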
Examples
>>> import numpy as np
>>> from sklearn.metrics import precision_recall_fscore_support
>>> y_true = np.array(['cat', 'dog', 'pig', 'cat', 'dog', 'pig'])
>>> y_pred = np.array(['cat', 'pig', 'dog', 'cat', 'cat', 'dog'])
>>> precision_recall_fscore_support(y_true, y_pred, average='macro')
(0.222, 0.333, 0.267, None)
>>> precision_recall_fscore_support(y_true, y_pred, average='micro')
(0.33, 0.33, 0.33, None)
>>> precision_recall_fscore_support(y_true, y_pred, average='weighted')
(0.222, 0.333, 0.267, None)
It is possible to compute per-label precisions, recalls, F1-scores and supports instead of averaging:
>>> precision_recall_fscore_support(y_true, y_pred, average=None,
...     labels=['pig', 'dog', 'cat'])
(array([0.  , 0.  , 0.66]), array([0., 0., 1.]), array([0. , 0. , 0.8]), array([2, 2, 2]))
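Because every class in this example has the same support, averaging the per-label arrays reproduces the ‘macro’ figures above. A quick consistency check, reusing the y_true and y_pred arrays from the example:

>>> p, r, f, s = precision_recall_fscore_support(y_true, y_pred, average=None)
>>> round(float(p.mean()), 3), round(float(r.mean()), 3), round(float(f.mean()), 3)
(0.222, 0.333, 0.267)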