Parallel#
- class sklearn.utils.parallel.Parallel(n_jobs=default(None), backend=default(None), return_as='list', verbose=default(0), timeout=None, pre_dispatch='2 * n_jobs', batch_size='auto', temp_folder=default(None), max_nbytes=default('1M'), mmap_mode=default('r'), prefer=default(None), require=default(None))[source]#
Tweak of joblib.Parallel that propagates the scikit-learn configuration.

This subclass of joblib.Parallel ensures that the active configuration (thread-local) of scikit-learn is propagated to the parallel workers for the duration of the execution of the parallel tasks.

The API does not change and you can refer to the joblib.Parallel documentation for more details.

Added in version 1.3.
- __call__(iterable)[source]#
Dispatch the tasks and return the results.
- Parameters:
- iterable : iterable
Iterable containing tuples of (delayed_function, args, kwargs) that should be consumed.
- Returns:
- results : list
List of results of the tasks.
- dispatch_next()[source]#
Dispatch more data for parallel processing.
This method is meant to be called concurrently by the multiprocessing callback. We rely on the thread-safety of dispatch_one_batch to protect against concurrent consumption of the unprotected iterator.
- dispatch_one_batch(iterator)[source]#
Prefetch the tasks for the next batch and dispatch them.
The effective size of the batch is computed here. If there are no more jobs to dispatch, return False, else return True.
The iterator consumption and dispatching are protected by the same lock, so calling this function should be thread-safe.