DictionaryLearning#
- class sklearn.decomposition.DictionaryLearning(n_components=None, *, alpha=1, max_iter=1000, tol=1e-08, fit_algorithm='lars', transform_algorithm='omp', transform_n_nonzero_coefs=None, transform_alpha=None, n_jobs=None, code_init=None, dict_init=None, callback=None, verbose=False, split_sign=False, random_state=None, positive_code=False, positive_dict=False, transform_max_iter=1000)[source]#
- Dictionary learning.
- Finds a dictionary (a set of atoms) that performs well at sparsely encoding the fitted data.
- Solves the optimization problem:

      (U^*, V^*) = argmin 0.5 || X - U V ||_Fro^2 + alpha * || U ||_1,1
                    (U,V)
                   with || V_k ||_2 <= 1 for all 0 <= k < n_components

- ||.||_Fro stands for the Frobenius norm and ||.||_1,1 stands for the entry-wise matrix norm, which is the sum of the absolute values of all the entries in the matrix. A short sketch at the end of the parameter list below evaluates this objective on a fitted model.
- Read more in the User Guide.
- Parameters:
- n_componentsint, default=None
- Number of dictionary elements to extract. If None, then n_components is set to n_features.
- alphafloat, default=1.0
- Sparsity controlling parameter. 
- max_iterint, default=1000
- Maximum number of iterations to perform. 
- tolfloat, default=1e-8
- Tolerance for numerical error. 
- fit_algorithm{‘lars’, ‘cd’}, default=’lars’
- 'lars': uses the least angle regression method to solve the lasso problem (lars_path);
- 'cd': uses the coordinate descent method to compute the Lasso solution (Lasso). Lars will be faster if the estimated components are sparse.
- Added in version 0.17: 'cd' coordinate descent method to improve speed.
- transform_algorithm{‘lasso_lars’, ‘lasso_cd’, ‘lars’, ‘omp’, ‘threshold’}, default=’omp’
- Algorithm used to transform the data:
- 'lars': uses the least angle regression method (lars_path);
- 'lasso_lars': uses Lars to compute the Lasso solution.
- 'lasso_cd': uses the coordinate descent method to compute the Lasso solution (Lasso). 'lasso_lars' will be faster if the estimated components are sparse.
- 'omp': uses orthogonal matching pursuit to estimate the sparse solution.
- 'threshold': squashes to zero all coefficients less than alpha from the projection dictionary * X'.
- Added in version 0.17: 'lasso_cd' coordinate descent method to improve speed.
- transform_n_nonzero_coefsint, default=None
- Number of nonzero coefficients to target in each column of the solution. This is only used by algorithm='lars' and algorithm='omp'. If None, then transform_n_nonzero_coefs=int(n_features / 10).
- transform_alphafloat, default=None
- If algorithm='lasso_lars' or algorithm='lasso_cd', alpha is the penalty applied to the L1 norm. If algorithm='threshold', alpha is the absolute value of the threshold below which coefficients will be squashed to zero. If None, defaults to alpha.
- Changed in version 1.2: When None, the default value changed from 1.0 to alpha.
- n_jobsint or None, default=None
- Number of parallel jobs to run. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
- code_initndarray of shape (n_samples, n_components), default=None
- Initial value for the code, for warm restart. Only used if code_init and dict_init are not None.
- dict_initndarray of shape (n_components, n_features), default=None
- Initial values for the dictionary, for warm restart. Only used if code_init and dict_init are not None.
- callbackcallable, default=None
- Callable that gets invoked every five iterations. - Added in version 1.3. 
- verbosebool, default=False
- To control the verbosity of the procedure. 
- split_signbool, default=False
- Whether to split the sparse feature vector into the concatenation of its negative part and its positive part. This can improve the performance of downstream classifiers. 
- random_stateint, RandomState instance or None, default=None
- Used for initializing the dictionary when dict_init is not specified, randomly shuffling the data when shuffle is set to True, and updating the dictionary. Pass an int for reproducible results across multiple function calls. See Glossary.
- positive_codebool, default=False
- Whether to enforce positivity when finding the code. - Added in version 0.20. 
- positive_dictbool, default=False
- Whether to enforce positivity when finding the dictionary. - Added in version 0.20. 
- transform_max_iterint, default=1000
- Maximum number of iterations to perform if algorithm='lasso_cd' or 'lasso_lars'.
- Added in version 0.22.
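A minimal sketch tying the parameters above to the objective from the class description (the data setup mirrors the Examples section below; the printed values depend on the run and are not normative):

import numpy as np
from sklearn.datasets import make_sparse_coded_signal
from sklearn.decomposition import DictionaryLearning

X, _, _ = make_sparse_coded_signal(
    n_samples=30, n_components=15, n_features=20, n_nonzero_coefs=10,
    random_state=42,
)

# Coordinate-descent fitting paired with a coordinate-descent transform.
dl = DictionaryLearning(
    n_components=15, alpha=1.0, fit_algorithm='cd',
    transform_algorithm='lasso_cd', transform_alpha=0.1, random_state=42,
)
U = dl.fit_transform(X)   # code U, shape (n_samples, n_components)
V = dl.components_        # dictionary V, shape (n_components, n_features)

# The objective from the class description, evaluated at (U, V):
objective = 0.5 * np.linalg.norm(X - U @ V, 'fro') ** 2 + dl.alpha * np.abs(U).sum()
print(objective)

# Each atom respects the constraint ||V_k||_2 <= 1 (up to numerical error).
print(np.linalg.norm(V, axis=1).max())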
 
- Attributes:
- components_ndarray of shape (n_components, n_features)
- Dictionary atoms extracted from the data.
- error_array
- Vector of errors at each iteration.
- n_features_in_int
- Number of features seen during fit. - Added in version 0.24. 
- feature_names_in_ndarray of shape (n_features_in_,)
- Names of features seen during fit. Defined only when X has feature names that are all strings.
- Added in version 1.0.
- n_iter_int
- Number of iterations run. 
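As a small illustration (synthetic data; a sketch, not part of the official examples), the fitted attributes can be inspected directly:

import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.RandomState(0)
X = rng.randn(30, 20)                      # 30 samples, 20 features

dl = DictionaryLearning(n_components=15, max_iter=100, random_state=0).fit(X)
print(dl.components_.shape)                # (15, 20): one atom per row
print(dl.error_[-1])                       # error at the last iteration
print(dl.n_features_in_)                   # 20
print(dl.n_iter_)                          # iterations actually run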
 
 - See also - MiniBatchDictionaryLearning
- A faster, less accurate version of the dictionary learning algorithm.
- MiniBatchSparsePCA
- Mini-batch Sparse Principal Components Analysis. 
- SparseCoder
- Find a sparse representation of data from a fixed, precomputed dictionary. 
- SparsePCA
- Sparse Principal Components Analysis. 
- References
- J. Mairal, F. Bach, J. Ponce, G. Sapiro, 2009: Online dictionary learning for sparse coding (https://www.di.ens.fr/~fbach/mairal_icml09.pdf)
- Examples

>>> import numpy as np
>>> from sklearn.datasets import make_sparse_coded_signal
>>> from sklearn.decomposition import DictionaryLearning
>>> X, dictionary, code = make_sparse_coded_signal(
...     n_samples=30, n_components=15, n_features=20, n_nonzero_coefs=10,
...     random_state=42,
... )
>>> dict_learner = DictionaryLearning(
...     n_components=15, transform_algorithm='lasso_lars', transform_alpha=0.1,
...     random_state=42,
... )
>>> X_transformed = dict_learner.fit(X).transform(X)

We can check the level of sparsity of X_transformed:

>>> np.mean(X_transformed == 0)
np.float64(0.527)

We can compare the average squared Euclidean norm of the reconstruction error of the sparse coded signal relative to the squared Euclidean norm of the original signal:

>>> X_hat = X_transformed @ dict_learner.components_
>>> np.mean(np.sum((X_hat - X) ** 2, axis=1) / np.sum(X ** 2, axis=1))
np.float64(0.056)

- fit(X, y=None)[source]#
- Fit the model from data in X. - Parameters:
- Xarray-like of shape (n_samples, n_features)
- Training vector, where - n_samplesis the number of samples and- n_featuresis the number of features.
- yIgnored
- Not used, present for API consistency by convention. 
 
- Returns:
- selfobject
- Returns the instance itself. 
 
 
 - fit_transform(X, y=None)[source]#
- Fit the model from data in X and return the transformed data. - Parameters:
- Xarray-like of shape (n_samples, n_features)
- Training vector, where - n_samplesis the number of samples and- n_featuresis the number of features.
- yIgnored
- Not used, present for API consistency by convention. 
 
- Returns:
- Vndarray of shape (n_samples, n_components)
- Transformed data. 
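For instance (a minimal sketch with synthetic data):

import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.RandomState(0)
X = rng.randn(30, 20)

dl = DictionaryLearning(n_components=10, max_iter=100, random_state=0)
V = dl.fit_transform(X)
print(V.shape)   # (30, 10): one row of code per sample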
 
 
 - get_feature_names_out(input_features=None)[source]#
- Get output feature names for transformation. - The feature names out will be prefixed by the lowercased class name. For example, if the transformer outputs 3 features, then the feature names out are: ["class_name0", "class_name1", "class_name2"]. - Parameters:
- input_featuresarray-like of str or None, default=None
- Only used to validate feature names with the names seen in - fit.
 
- Returns:
- feature_names_outndarray of str objects
- Transformed feature names. 
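For example (a small sketch; the names follow the lowercased-class-name convention described above):

import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.RandomState(0)
dl = DictionaryLearning(n_components=3, max_iter=50, random_state=0).fit(rng.randn(20, 5))
print(dl.get_feature_names_out())
# ['dictionarylearning0' 'dictionarylearning1' 'dictionarylearning2']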
 
 
 - get_metadata_routing()[source]#
- Get metadata routing of this object. - Please check User Guide on how the routing mechanism works. - Returns:
- routingMetadataRequest
- A MetadataRequest encapsulating routing information.
 
 
 - get_params(deep=True)[source]#
- Get parameters for this estimator. - Parameters:
- deepbool, default=True
- If True, will return the parameters for this estimator and contained subobjects that are estimators. 
 
- Returns:
- paramsdict
- Parameter names mapped to their values. 
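A one-line sketch:

from sklearn.decomposition import DictionaryLearning

dl = DictionaryLearning(n_components=5, alpha=0.5)
params = dl.get_params()
print(params['n_components'], params['alpha'])   # 5 0.5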
 
 
 - inverse_transform(X)[source]#
- Transform data back to its original space. - Parameters:
- Xarray-like of shape (n_samples, n_components)
- Data to be transformed back. Must have the same number of components as the data used to train the model. 
 
- Returns:
- X_originalndarray of shape (n_samples, n_features)
- Data reconstructed in the original feature space.
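A minimal round-trip sketch with synthetic data; the reconstruction is approximate because the code is sparse:

import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.RandomState(0)
X = rng.randn(30, 20)

dl = DictionaryLearning(n_components=10, max_iter=100, random_state=0).fit(X)
code = dl.transform(X)                 # (30, 10)
X_hat = dl.inverse_transform(code)     # (30, 20): back in the original space
print(np.mean((X - X_hat) ** 2))       # mean squared reconstruction error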
 
 
 - set_output(*, transform=None)[source]#
- Set output container. - See Introducing the set_output API for an example on how to use the API. - Parameters:
- transform{“default”, “pandas”, “polars”}, default=None
- Configure output of - transformand- fit_transform.- "default": Default output format of a transformer
- "pandas": DataFrame output
- "polars": Polars output
- None: Transform configuration is unchanged
- Added in version 1.4: "polars" option was added.
 
- Returns:
- selfestimator instance
- Estimator instance. 
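For example (a small sketch; assumes pandas is installed):

import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.RandomState(0)
X = rng.randn(20, 5)

dl = DictionaryLearning(n_components=3, max_iter=50, random_state=0)
dl.set_output(transform="pandas")
df = dl.fit(X).transform(X)
print(type(df))            # <class 'pandas.core.frame.DataFrame'>
print(list(df.columns))    # ['dictionarylearning0', 'dictionarylearning1', 'dictionarylearning2']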
 
 
 - set_params(**params)[source]#
- Set the parameters of this estimator. - The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. - Parameters:
- **paramsdict
- Estimator parameters. 
 
- Returns:
- selfestimator instance
- Estimator instance. 
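For example, updating a nested parameter inside a Pipeline (a minimal sketch; the step names "dict" and "clf" are illustrative):

from sklearn.pipeline import Pipeline
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import LogisticRegression

pipe = Pipeline([
    ("dict", DictionaryLearning(n_components=5)),
    ("clf", LogisticRegression()),
])

# Nested parameters use the <component>__<parameter> form:
pipe.set_params(dict__transform_algorithm="lasso_cd", clf__C=0.5)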
 
 
 - transform(X)[source]#
- Encode the data as a sparse combination of the dictionary atoms. - The coding method is determined by the object parameter transform_algorithm. - Parameters:
- Xndarray of shape (n_samples, n_features)
- Test data to be transformed; must have the same number of features as the data used to train the model.
 
- Returns:
- X_newndarray of shape (n_samples, n_components)
- Transformed data. 
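A minimal sketch: with transform_algorithm='omp', transform_n_nonzero_coefs bounds the number of nonzero coefficients per sample:

import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.RandomState(0)
X = rng.randn(30, 20)

dl = DictionaryLearning(
    n_components=10, max_iter=100,
    transform_algorithm='omp', transform_n_nonzero_coefs=3, random_state=0,
).fit(X)
code = dl.transform(X)
print(code.shape)                      # (30, 10)
print((code != 0).sum(axis=1).max())   # at most 3 nonzeros per row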
 
 
 
