
Commit aa39279

committed
Merge remote-tracking branch 'upstream/master' into coarsen-reduce
* upstream/master: (22 commits)
  - Added fill_value for unstack (pydata#3541)
  - Add DatasetGroupBy.quantile (pydata#3527)
  - ensure rename does not change index type (pydata#3532)
  - Leave empty slot when not using accessors
  - interpolate_na: Add max_gap support. (pydata#3302)
  - units & deprecation merge (pydata#3530)
  - Fix set_index when an existing dimension becomes a level (pydata#3520)
  - add Variable._replace (pydata#3528)
  - Tests for module-level functions with units (pydata#3493)
  - Harmonize `FillValue` and `missing_value` during encoding and decoding steps (pydata#3502)
  - FUNDING.yml (pydata#3523)
  - Allow appending datetime & boolean variables to zarr stores (pydata#3504)
  - warn if dim is passed to rolling operations. (pydata#3513)
  - Deprecate allow_lazy (pydata#3435)
  - Recursive tokenization (pydata#3515)
  - format indexing.rst code with black (pydata#3511)
  - add missing pint integration tests (pydata#3508)
  - DOC: update bottleneck repo url (pydata#3507)
  - add drop_sel, drop_vars, map to api.rst (pydata#3506)
  - remove syntax warning (pydata#3505)
  - ...
2 parents d7f677c + 56c16e4 commit aa39279

33 files changed: +4533 additions, -1059 deletions

.github/FUNDING.yml

Lines changed: 2 additions & 0 deletions

@@ -0,0 +1,2 @@
+github: numfocus
+custom: http://numfocus.org/donate-to-xarray

doc/api.rst

Lines changed: 8 additions & 6 deletions

@@ -94,7 +94,7 @@ Dataset contents
    Dataset.rename_dims
    Dataset.swap_dims
    Dataset.expand_dims
-   Dataset.drop
+   Dataset.drop_vars
    Dataset.drop_dims
    Dataset.set_coords
    Dataset.reset_coords
@@ -118,6 +118,7 @@ Indexing
    Dataset.loc
    Dataset.isel
    Dataset.sel
+   Dataset.drop_sel
    Dataset.head
    Dataset.tail
    Dataset.thin
@@ -154,7 +155,7 @@ Computation
 .. autosummary::
    :toctree: generated/

-   Dataset.apply
+   Dataset.map
    Dataset.reduce
    Dataset.groupby
    Dataset.groupby_bins
@@ -263,7 +264,7 @@ DataArray contents
    DataArray.rename
    DataArray.swap_dims
    DataArray.expand_dims
-   DataArray.drop
+   DataArray.drop_vars
    DataArray.reset_coords
    DataArray.copy

@@ -283,6 +284,7 @@ Indexing
    DataArray.loc
    DataArray.isel
    DataArray.sel
+   DataArray.drop_sel
    DataArray.head
    DataArray.tail
    DataArray.thin
@@ -542,10 +544,10 @@ GroupBy objects
    :toctree: generated/

    core.groupby.DataArrayGroupBy
-   core.groupby.DataArrayGroupBy.apply
+   core.groupby.DataArrayGroupBy.map
    core.groupby.DataArrayGroupBy.reduce
    core.groupby.DatasetGroupBy
-   core.groupby.DatasetGroupBy.apply
+   core.groupby.DatasetGroupBy.map
    core.groupby.DatasetGroupBy.reduce

 Rolling objects
@@ -566,7 +568,7 @@ Resample objects
 ================

 Resample objects also implement the GroupBy interface
-(methods like ``apply()``, ``reduce()``, ``mean()``, ``sum()``, etc.).
+(methods like ``map()``, ``reduce()``, ``mean()``, ``sum()``, etc.).

 .. autosummary::
    :toctree: generated/
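The api.rst changes above document three renames: ``Dataset.drop`` becomes ``drop_vars``, label-based dropping becomes ``drop_sel``, and ``apply`` becomes ``map``. A minimal sketch of the renamed methods, using a made-up dataset (the variable name ``temperature`` and the coordinate values are purely illustrative):

```python
import numpy as np
import xarray as xr

# A tiny, hypothetical dataset purely for illustration.
ds = xr.Dataset(
    {"temperature": ("x", np.arange(4.0))},
    coords={"x": [10, 20, 30, 40]},
)

# drop_vars removes a named variable (formerly spelled Dataset.drop).
trimmed = ds.drop_vars("temperature")

# drop_sel drops entries by coordinate label.
subset = ds.drop_sel(x=[10, 40])

# Dataset.map (formerly Dataset.apply) maps a function over each variable.
doubled = ds.map(lambda v: v * 2)
```

The old spellings still exist at this point but emit deprecation warnings, so the docs switch to the new names throughout.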

doc/computation.rst

Lines changed: 6 additions & 3 deletions

@@ -95,6 +95,9 @@ for filling missing values via 1D interpolation.
 Note that xarray slightly diverges from the pandas ``interpolate`` syntax by
 providing the ``use_coordinate`` keyword which facilitates a clear specification
 of which values to use as the index in the interpolation.
+xarray also provides the ``max_gap`` keyword argument to limit the interpolation to
+data gaps of length ``max_gap`` or smaller. See :py:meth:`~xarray.DataArray.interpolate_na`
+for more.

 Aggregation
 ===========
@@ -183,7 +186,7 @@ a value when aggregating:

 Note that rolling window aggregations are faster and use less memory when bottleneck_ is installed. This only applies to numpy-backed xarray objects.

-.. _bottleneck: https://github.com/kwgoodman/bottleneck/
+.. _bottleneck: https://github.com/pydata/bottleneck/

 We can also manually iterate through ``Rolling`` objects:

@@ -462,13 +465,13 @@ Datasets support most of the same methods found on data arrays:
    abs(ds)

 Datasets also support NumPy ufuncs (requires NumPy v1.13 or newer), or
-alternatively you can use :py:meth:`~xarray.Dataset.apply` to apply a function
+alternatively you can use :py:meth:`~xarray.Dataset.map` to map a function
 to each variable in a dataset:

 .. ipython:: python

    np.sin(ds)
-   ds.apply(np.sin)
+   ds.map(np.sin)

 Datasets also use looping over variables for *broadcasting* in binary
 arithmetic. You can do arithmetic between any ``DataArray`` and a dataset:
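The ``max_gap`` keyword added to ``interpolate_na`` (pydata#3302) limits filling to gaps whose extent along the coordinate is at most ``max_gap``. A small sketch with a made-up array containing one short gap and one long gap:

```python
import numpy as np
import xarray as xr

# Hypothetical series: a one-NaN gap (coordinate extent 2, between x=0
# and x=2) and a three-NaN gap (extent 4, between x=2 and x=6).
da = xr.DataArray(
    [0.0, np.nan, 2.0, np.nan, np.nan, np.nan, 6.0],
    dims="x",
    coords={"x": np.arange(7)},
)

# With max_gap=2, only the short gap is linearly interpolated;
# the long gap is left as NaN.
filled = da.interpolate_na(dim="x", max_gap=2)
```

Gap length is measured as the coordinate distance between the valid points bounding the gap, so with an integer coordinate a single missing value counts as a gap of length 2.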

doc/conf.py

Lines changed: 6 additions & 5 deletions

@@ -340,9 +340,10 @@
 # Example configuration for intersphinx: refer to the Python standard library.
 intersphinx_mapping = {
     "python": ("https://docs.python.org/3/", None),
-    "pandas": ("https://pandas.pydata.org/pandas-docs/stable/", None),
-    "iris": ("http://scitools.org.uk/iris/docs/latest/", None),
-    "numpy": ("https://docs.scipy.org/doc/numpy/", None),
-    "numba": ("https://numba.pydata.org/numba-doc/latest/", None),
-    "matplotlib": ("https://matplotlib.org/", None),
+    "pandas": ("https://pandas.pydata.org/pandas-docs/stable", None),
+    "iris": ("https://scitools.org.uk/iris/docs/latest", None),
+    "numpy": ("https://docs.scipy.org/doc/numpy", None),
+    "scipy": ("https://docs.scipy.org/doc/scipy/reference", None),
+    "numba": ("https://numba.pydata.org/numba-doc/latest", None),
+    "matplotlib": ("https://matplotlib.org", None),
 }

doc/dask.rst

Lines changed: 1 addition & 1 deletion

@@ -292,7 +292,7 @@ For the best performance when using Dask's multi-threaded scheduler, wrap a
 function that already releases the global interpreter lock, which fortunately
 already includes most NumPy and Scipy functions. Here we show an example
 using NumPy operations and a fast function from
-`bottleneck <https://github.com/kwgoodman/bottleneck>`__, which
+`bottleneck <https://github.com/pydata/bottleneck>`__, which
 we use to calculate `Spearman's rank-correlation coefficient <https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient>`__:

 .. code-block:: python

doc/groupby.rst

Lines changed: 8 additions & 7 deletions

@@ -35,10 +35,11 @@ Let's create a simple example dataset:

 .. ipython:: python

-    ds = xr.Dataset({'foo': (('x', 'y'), np.random.rand(4, 3))},
-                    coords={'x': [10, 20, 30, 40],
-                            'letters': ('x', list('abba'))})
-    arr = ds['foo']
+    ds = xr.Dataset(
+        {"foo": (("x", "y"), np.random.rand(4, 3))},
+        coords={"x": [10, 20, 30, 40], "letters": ("x", list("abba"))},
+    )
+    arr = ds["foo"]
     ds

 If we groupby the name of a variable or coordinate in a dataset (we can also
@@ -93,15 +94,15 @@ Apply
 ~~~~~

 To apply a function to each group, you can use the flexible
-:py:meth:`~xarray.DatasetGroupBy.apply` method. The resulting objects are automatically
+:py:meth:`~xarray.DatasetGroupBy.map` method. The resulting objects are automatically
 concatenated back together along the group axis:

 .. ipython:: python

    def standardize(x):
        return (x - x.mean()) / x.std()

-   arr.groupby('letters').apply(standardize)
+   arr.groupby('letters').map(standardize)

 GroupBy objects also have a :py:meth:`~xarray.DatasetGroupBy.reduce` method and
 methods like :py:meth:`~xarray.DatasetGroupBy.mean` as shortcuts for applying an
@@ -202,7 +203,7 @@ __ http://cfconventions.org/cf-conventions/v1.6.0/cf-conventions.html#_two_dimen
     dims=['ny','nx'])
    da
    da.groupby('lon').sum(...)
-   da.groupby('lon').apply(lambda x: x - x.mean(), shortcut=False)
+   da.groupby('lon').map(lambda x: x - x.mean(), shortcut=False)

 Because multidimensional groups have the ability to generate a very large
 number of bins, coarse-binning via :py:meth:`~xarray.Dataset.groupby_bins`
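The groupby.rst diff above renames ``GroupBy.apply`` to ``GroupBy.map``. A runnable sketch of the documented pattern, reconstructing the example dataset from the diff:

```python
import numpy as np
import xarray as xr

# The example dataset from the groupby.rst diff, reconstructed here.
ds = xr.Dataset(
    {"foo": (("x", "y"), np.random.rand(4, 3))},
    coords={"x": [10, 20, 30, 40], "letters": ("x", list("abba"))},
)
arr = ds["foo"]

def standardize(x):
    # Standardize within each group: zero mean, unit standard deviation.
    return (x - x.mean()) / x.std()

# GroupBy.map (formerly GroupBy.apply) applies the function per group and
# concatenates the results back together along the grouped dimension.
out = arr.groupby("letters").map(standardize)
```

Because each group is standardized to mean zero, the overall mean of the result is zero as well, and the output keeps the input's shape and coordinates.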

doc/howdoi.rst

Lines changed: 1 addition & 1 deletion

@@ -44,7 +44,7 @@ How do I ...
    * - convert a possibly irregularly sampled timeseries to a regularly sampled timeseries
      - :py:meth:`DataArray.resample`, :py:meth:`Dataset.resample` (see :ref:`resampling` for more)
    * - apply a function on all data variables in a Dataset
-     - :py:meth:`Dataset.apply`
+     - :py:meth:`Dataset.map`
    * - write xarray objects with complex values to a netCDF file
      - :py:func:`Dataset.to_netcdf`, :py:func:`DataArray.to_netcdf` specifying ``engine="h5netcdf", invalid_netcdf=True``
    * - make xarray objects look like other xarray objects
