r_regression
- sklearn.feature_selection.r_regression(X, y, *, center=True, force_finite=True)
Compute Pearson’s r for each feature and the target.
Pearson’s r is also known as the Pearson correlation coefficient.
Linear model for testing the individual effect of each of many regressors. This is a scoring function to be used in a feature selection procedure, not a free-standing feature selection procedure.
The cross correlation between each regressor and the target is computed as:
E[(X[:, i] - mean(X[:, i])) * (y - mean(y))] / (std(X[:, i]) * std(y))
For more on usage, see the User Guide.
Added in version 1.0.
- Parameters:
- X : {array-like, sparse matrix} of shape (n_samples, n_features)
The data matrix.
- y : array-like of shape (n_samples,)
The target vector.
- center : bool, default=True
Whether or not to center the data matrix X and the target vector y. By default, X and y will be centered.
- force_finite : bool, default=True
Whether or not to force the Pearson’s R correlation to be finite. In the particular case where some features in X or the target y are constant, the Pearson’s R correlation is not defined. When force_finite=False, a correlation of np.nan is returned to acknowledge this case. When force_finite=True, this value will instead be forced to the minimal correlation of 0.0.
Added in version 1.1.
- Returns:
- correlation_coefficient : ndarray of shape (n_features,)
Pearson’s R correlation coefficients of features.
See also
f_regression
Univariate linear regression tests returning f-statistic and p-values.
mutual_info_regression
Mutual information for a continuous target.
f_classif
ANOVA f-value between label/feature for classification tasks.
chi2
Chi-squared stats of non-negative features for classification tasks.
Examples
>>> from sklearn.datasets import make_regression
>>> from sklearn.feature_selection import r_regression
>>> X, y = make_regression(
...     n_samples=50, n_features=3, n_informative=1, noise=1e-4, random_state=42
... )
>>> r_regression(X, y)
array([-0.15...,  1.        , -0.22...])