DESIGN: out-of-core / parallelism #39
Description
parallel .apply, see here
xref dask & distributed
This issue is a placeholder for discussion w.r.t. how much pandas 2.0 should be in charge of out-of-core / parallel operations.
For example, in IO operations (see #38) it makes sense to do parallel reading, where pandas can simply front for dask inline, without the user directly knowing that this is happening. A more sophisticated user can control this (maybe through some limited options, or by using dask directly), but the default should simply be that reads happen in parallel in appropriate scenarios.
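As a minimal sketch of what that kind of transparent dispatch could look like (the `parallel_read_csv` wrapper and the size threshold are hypothetical, not proposed API; only `dask.dataframe.read_csv` and `pandas.read_csv` are real calls):

```python
import os

import pandas as pd

try:
    import dask.dataframe as dd
    HAS_DASK = True
except ImportError:
    HAS_DASK = False

# Hypothetical threshold above which a parallel read is assumed to pay off.
PARALLEL_THRESHOLD_BYTES = 256 * 1024 ** 2  # 256 MB


def parallel_read_csv(path, **kwargs):
    """Read a CSV, transparently using dask for large files.

    The user just gets a pandas DataFrame back; whether dask was
    involved is an implementation detail.
    """
    if HAS_DASK and os.path.getsize(path) > PARALLEL_THRESHOLD_BYTES:
        # dask reads the file in chunks in parallel, and we materialize
        # the result back into a regular pandas DataFrame.
        return dd.read_csv(path, **kwargs).compute()
    return pd.read_csv(path, **kwargs)
```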
One could argue that .apply and .groupby are similar scenarios (again assuming the dataset is 'big' enough and we have a GIL-releasing function); see the sketch below.
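For reference, this is roughly what a user has to write today to get that parallelism explicitly through dask; the idea above is that pandas could do this routing itself. The example data and partition count are arbitrary.

```python
import numpy as np
import pandas as pd
import dask.dataframe as dd

df = pd.DataFrame({
    "key": np.random.randint(0, 100, size=1_000_000),
    "value": np.random.randn(1_000_000),
})

# Explicit opt-in: wrap the pandas frame in a dask frame with 8 partitions.
ddf = dd.from_pandas(df, npartitions=8)

# Partition-wise apply, run across partitions in parallel.
squared = ddf["value"].map_partitions(lambda s: s ** 2).compute()

# Groupby aggregation, also computed per partition and then combined.
means = ddf.groupby("key")["value"].mean().compute()
```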
We can also include the current use of numexpr for .query evaluation.
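This part already works today: .query and .eval accept an engine argument and will use numexpr when it is installed, evaluating the expression in a multi-threaded, chunked fashion without building large intermediate arrays.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "a": np.random.randn(1_000_000),
    "b": np.random.randn(1_000_000),
})

# With numexpr installed, the boolean expression is evaluated by numexpr
# rather than by building full temporary boolean arrays in numpy.
result = df.query("a > 0 and b < 0", engine="numexpr")

# The same machinery backs DataFrame.eval / pd.eval.
df["c"] = df.eval("a + b", engine="numexpr")
```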
So we are hiding dask / numexpr (or potentially other parallel / out-of-core processing libraries) from the user. Things just work, and are faster.
So how far should we take this? Certainly users can use these libraries directly, but it could be a win to include dask as an optional dependency (even though it depends on pandas), and dispatch computation for out-of-core / parallelism as appropriate.
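One possible shape for such dispatch, as a rough sketch only: the `maybe_parallel_apply` name, the row threshold, and the fallback logic are all made up for illustration, not proposed API.

```python
import pandas as pd

try:
    import dask.dataframe as dd
except ImportError:
    dd = None

# Hypothetical knob: below this size, parallel overhead is assumed to
# outweigh the benefit and we stay on the plain pandas path.
PARALLEL_ROW_THRESHOLD = 5_000_000


def maybe_parallel_apply(df, func, npartitions=8):
    """Row-wise apply, dispatched to dask when the frame is large enough
    and dask is importable; otherwise fall back to plain pandas.

    Note: dask's DataFrame.apply only supports axis=1, so this sketch is
    limited to row-wise application.
    """
    if dd is not None and len(df) > PARALLEL_ROW_THRESHOLD:
        ddf = dd.from_pandas(df, npartitions=npartitions)
        # Let dask infer the output metadata; a real implementation would
        # likely pass an explicit meta to avoid the inference warning.
        return ddf.apply(func, axis=1).compute()
    return df.apply(func, axis=1)
```

The key design point is that the dask dependency stays optional: when it is missing, or the data is small, the user gets exactly the behavior they have today.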