
Commit 4702c9d

Merge branch 'groupby-aggs-using-numpy-groupies' of github.com:andersy005/xarray into groupby_npg
* 'groupby-aggs-using-numpy-groupies' of github.com:andersy005/xarray:
  Bump actions/github-script from 4.0.2 to 4.1 (pydata#5730)
  Set coord name concat when `concat`ing along a DataArray (pydata#5611)
  Add .git-blame-ignore-revs (pydata#5708)
  Type annotate tests (pydata#5728)
  Consolidate TypeVars in a single place (pydata#5569)
  add storage_options arg to to_zarr (pydata#5615)
  dataset `__repr__` updates (pydata#5580)
  Xfail failing test on main (pydata#5729)
  Add xarray-dataclasses to ecosystem in docs (pydata#5725)
  extend show_versions (pydata#5724)
  Move docstring for xr.set_options to numpy style (pydata#5702)
  Refactor more groupby and resample tests (pydata#5707)
  Remove suggestion to install pytest-xdist in docs (pydata#5713)
  Add typing to the OPTIONS dict (pydata#5678)
  Change annotations to allow str keys (pydata#5690)
  Whatsnew for float-to-top (pydata#5714)
  Use isort's float-to-top (pydata#5695)
  Fix errors in test_latex_name_isnt_split for min environments (pydata#5710)
  Improves rendering of complex LaTeX expressions as `long_name`s when plotting (pydata#5682)
  Use same bool validator as other inputs (pydata#5703)
2 parents e6bcce9 + 9be0228 commit 4702c9d


67 files changed: +2182 -1924 lines changed

.git-blame-ignore-revs

Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
+# black PR 3142
+d089df385e737f71067309ff7abae15994d581ec
+
+# isort PR 1924
+0e73e240107caee3ffd1a1149f0150c390d43251
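
For local use, git's standard mechanisms pick this file up so that `git blame` skips these two reformatting commits, either per invocation or via repo-level config (the target file below is only an illustrative example):

    git blame --ignore-revs-file .git-blame-ignore-revs xarray/core/dataset.py
    git config blame.ignoreRevsFile .git-blame-ignore-revs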

.github/workflows/upstream-dev-ci.yaml

Lines changed: 1 addition & 1 deletion
@@ -122,7 +122,7 @@ jobs:
           shopt -s globstar
           python .github/workflows/parse_logs.py logs/**/*-log
       - name: Report failures
-        uses: actions/github-script@v4.0.2
+        uses: actions/github-script@v4.1
         with:
           github-token: ${{ secrets.GITHUB_TOKEN }}
           script: |

doc/contributing.rst

Lines changed: 1 addition & 7 deletions
@@ -628,13 +628,7 @@ Or with one of the following constructs::
     pytest xarray/tests/[test-module].py::[TestClass]::[test_method]

 Using `pytest-xdist <https://pypi.python.org/pypi/pytest-xdist>`_, one can
-speed up local testing on multicore machines. To use this feature, you will
-need to install `pytest-xdist` via::
-
-    pip install pytest-xdist
-
-
-Then, run pytest with the optional -n argument::
+speed up local testing on multicore machines, by running pytest with the optional -n argument::

     pytest xarray -n 4


doc/ecosystem.rst

Lines changed: 1 addition & 1 deletion
@@ -68,7 +68,7 @@ Extend xarray capabilities
 - `hypothesis-gufunc <https://hypothesis-gufunc.readthedocs.io/en/latest/>`_: Extension to hypothesis. Makes it easy to write unit tests with xarray objects as input.
 - `nxarray <https://github.com/nxarray/nxarray>`_: NeXus input/output capability for xarray.
 - `xarray-compare <https://github.com/astropenguin/xarray-compare>`_: xarray extension for data comparison.
-- `xarray-custom <https://github.com/astropenguin/xarray-custom>`_: Data classes for custom xarray creation.
+- `xarray-dataclasses <https://github.com/astropenguin/xarray-dataclasses>`_: xarray extension for typed DataArray and Dataset creation.
 - `xarray_extras <https://github.com/crusaderky/xarray_extras>`_: Advanced algorithms for xarray objects (e.g. integrations/interpolations).
 - `xpublish <https://xpublish.readthedocs.io/>`_: Publish Xarray Datasets via a Zarr compatible REST API.
 - `xrft <https://github.com/rabernat/xrft>`_: Fourier transforms for xarray data.

doc/whats-new.rst

Lines changed: 17 additions & 0 deletions
@@ -22,15 +22,25 @@ v0.19.1 (unreleased)

 New Features
 ~~~~~~~~~~~~
+- Xarray now does a better job rendering variable names that are long LaTeX sequences when plotting (:issue:`5681`, :pull:`5682`).
+  By `Tomas Chor <https://github.com/tomchor>`_.
 - Add a option to disable the use of ``bottleneck`` (:pull:`5560`)
   By `Justus Magin <https://github.com/keewis>`_.
 - Added ``**kwargs`` argument to :py:meth:`open_rasterio` to access overviews (:issue:`3269`).
   By `Pushkar Kopparla <https://github.com/pkopparla>`_.
+- Added ``storage_options`` argument to :py:meth:`to_zarr` (:issue:`5601`).
+  By `Ray Bell <https://github.com/raybellwaves>`_, `Zachary Blackwood <https://github.com/blackary>`_ and
+  `Nathan Lis <https://github.com/wxman22>`_.


 Breaking changes
 ~~~~~~~~~~~~~~~~

+- The ``__repr__`` of a :py:class:`xarray.Dataset`'s ``coords`` and ``data_vars``
+  ignore ``xarray.set_option(display_max_rows=...)`` and show the full output
+  when called directly as, e.g., ``ds.data_vars`` or ``print(ds.data_vars)``
+  (:issue:`5545`, :pull:`5580`).
+  By `Stefan Bender <https://github.com/st-bender>`_.

 Deprecations
 ~~~~~~~~~~~~
@@ -51,8 +61,15 @@ Internal Changes
   By `Deepak Cherian <https://github.com/dcherian>`_.
 - Explicit indexes refactor: decouple ``xarray.Index``` from ``xarray.Variable`` (:pull:`5636`).
   By `Benoit Bovy <https://github.com/benbovy>`_.
+- Fix ``Mapping`` argument typing to allow mypy to pass on ``str`` keys (:pull:`5690`).
+  By `Maximilian Roos <https://github.com/max-sixty>`_.
+- Annotate many of our tests, and fix some of the resulting typing errors. This will
+  also mean our typing annotations are tested as part of CI. (:pull:`5728`).
+  By `Maximilian Roos <https://github.com/max-sixty>`_.
 - Improve the performance of reprs for large datasets or dataarrays. (:pull:`5661`)
   By `Jimmy Westling <https://github.com/illviljan>`_.
+- Use isort's `float_to_top` config. (:pull:`5695`).
+  By `Maximilian Roos <https://github.com/max-sixty>`_.

 .. _whats-new.0.19.0:
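
One of the entries above (pydata#5682) improves how long LaTeX expressions in a variable's `long_name` are rendered as plot labels. A minimal sketch of the kind of usage this affects; the data and the LaTeX string here are illustrative, not taken from the PR:

    import numpy as np
    import xarray as xr

    # xarray uses the long_name (and units) attributes as the axis label when
    # plotting; pydata#5682 keeps a long LaTeX expression like this one from
    # being wrapped mid-expression in the label.
    da = xr.DataArray(
        np.random.rand(10),
        dims="time",
        attrs={"long_name": r"$\overline{u'w'} / u_*^2$", "units": "dimensionless"},
    )
    da.plot()  # requires matplotlib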

properties/test_encode_decode.py

Lines changed: 4 additions & 3 deletions
@@ -4,9 +4,10 @@
 These ones pass, just as you'd hope!

 """
-import pytest  # isort:skip
+import pytest

 pytest.importorskip("hypothesis")
+# isort: split

 import hypothesis.extra.numpy as npst
 import hypothesis.strategies as st
@@ -24,7 +25,7 @@

 @pytest.mark.slow
 @given(st.data(), an_array)
-def test_CFMask_coder_roundtrip(data, arr):
+def test_CFMask_coder_roundtrip(data, arr) -> None:
     names = data.draw(
         st.lists(st.text(), min_size=arr.ndim, max_size=arr.ndim, unique=True).map(
             tuple
@@ -38,7 +39,7 @@ def test_CFMask_coder_roundtrip(data, arr):

 @pytest.mark.slow
 @given(st.data(), an_array)
-def test_CFScaleOffset_coder_roundtrip(data, arr):
+def test_CFScaleOffset_coder_roundtrip(data, arr) -> None:
     names = data.draw(
         st.lists(st.text(), min_size=arr.ndim, max_size=arr.ndim, unique=True).map(
             tuple

properties/test_pandas_roundtrip.py

Lines changed: 5 additions & 5 deletions
@@ -28,7 +28,7 @@


 @st.composite
-def datasets_1d_vars(draw):
+def datasets_1d_vars(draw) -> xr.Dataset:
     """Generate datasets with only 1D variables

     Suitable for converting to pandas dataframes.
@@ -49,7 +49,7 @@ def datasets_1d_vars(draw):


 @given(st.data(), an_array)
-def test_roundtrip_dataarray(data, arr):
+def test_roundtrip_dataarray(data, arr) -> None:
     names = data.draw(
         st.lists(st.text(), min_size=arr.ndim, max_size=arr.ndim, unique=True).map(
             tuple
@@ -62,15 +62,15 @@ def test_roundtrip_dataarray(data, arr):


 @given(datasets_1d_vars())
-def test_roundtrip_dataset(dataset):
+def test_roundtrip_dataset(dataset) -> None:
     df = dataset.to_dataframe()
     assert isinstance(df, pd.DataFrame)
     roundtripped = xr.Dataset(df)
     xr.testing.assert_identical(dataset, roundtripped)


 @given(numeric_series, st.text())
-def test_roundtrip_pandas_series(ser, ix_name):
+def test_roundtrip_pandas_series(ser, ix_name) -> None:
     # Need to name the index, otherwise Xarray calls it 'dim_0'.
     ser.index.name = ix_name
     arr = xr.DataArray(ser)
@@ -87,7 +87,7 @@ def test_roundtrip_pandas_series(ser, ix_name):

 @pytest.mark.xfail
 @given(numeric_homogeneous_dataframe)
-def test_roundtrip_pandas_dataframe(df):
+def test_roundtrip_pandas_dataframe(df) -> None:
     # Need to name the indexes, otherwise Xarray names them 'dim_0', 'dim_1'.
     df.index.name = "rows"
     df.columns.name = "cols"

setup.cfg

Lines changed: 1 addition & 1 deletion
@@ -161,7 +161,7 @@ exclude=
 [isort]
 profile = black
 skip_gitignore = true
-force_to_top = true
+float_to_top = true
 default_section = THIRDPARTY
 known_first_party = xarray
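
For context, isort's `float_to_top` setting moves module-level imports that appear below other code up to the top of the file when sorting; this is also why `# isort: split` was added after `pytest.importorskip` in `properties/test_encode_decode.py` above, so the hypothesis imports are not floated above the skip check. A rough before/after sketch on a hypothetical module (not a file from this repo):

    # before: an import stranded below module-level code
    import os

    ROOT = os.sep

    import sys
    print(sys.platform)

    # after running isort with float_to_top enabled: the stranded import
    # is moved up and sorted with the others
    import os
    import sys

    ROOT = os.sep

    print(sys.platform)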

xarray/backends/api.py

Lines changed: 19 additions & 2 deletions
@@ -1319,6 +1319,7 @@ def to_zarr(
     append_dim: Hashable = None,
     region: Mapping[str, slice] = None,
     safe_chunks: bool = True,
+    storage_options: Dict[str, str] = None,
 ):
     """This function creates an appropriate datastore for writing a dataset to
     a zarr ztore
@@ -1330,6 +1331,22 @@
     store = _normalize_path(store)
     chunk_store = _normalize_path(chunk_store)

+    if storage_options is None:
+        mapper = store
+        chunk_mapper = chunk_store
+    else:
+        from fsspec import get_mapper
+
+        if not isinstance(store, str):
+            raise ValueError(
+                f"store must be a string to use storage_options. Got {type(store)}"
+            )
+        mapper = get_mapper(store, **storage_options)
+        if chunk_store is not None:
+            chunk_mapper = get_mapper(chunk_store, **storage_options)
+        else:
+            chunk_mapper = chunk_store
+
     if encoding is None:
         encoding = {}

@@ -1372,13 +1389,13 @@
     already_consolidated = False
     consolidate_on_close = consolidated or consolidated is None
     zstore = backends.ZarrStore.open_group(
-        store=store,
+        store=mapper,
         mode=mode,
         synchronizer=synchronizer,
         group=group,
         consolidated=already_consolidated,
         consolidate_on_close=consolidate_on_close,
-        chunk_store=chunk_store,
+        chunk_store=chunk_mapper,
         append_dim=append_dim,
         write_region=region,
         safe_chunks=safe_chunks,
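
A usage sketch of the new argument: the bucket URL and the `anon` key below are placeholders, and an fsspec-compatible filesystem package (e.g. s3fs for s3:// URLs) must be installed for this to work:

    import xarray as xr

    ds = xr.Dataset({"temperature": ("time", [10.0, 12.5, 11.2])})

    # With storage_options, `store` must be a string URL; per the diff above,
    # xarray builds an fsspec mapper from it via get_mapper(store, **storage_options).
    ds.to_zarr(
        "s3://my-bucket/temperature.zarr",  # placeholder path
        mode="w",
        storage_options={"anon": False},  # forwarded to the fsspec filesystem
    )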

xarray/backends/zarr.py

Lines changed: 3 additions & 0 deletions
@@ -713,6 +713,9 @@ def open_zarr(
         falling back to read non-consolidated metadata if that fails.
     chunk_store : MutableMapping, optional
         A separate Zarr store only for chunk data.
+    storage_options : dict, optional
+        Any additional parameters for the storage backend (ignored for local
+        paths).
     decode_timedelta : bool, optional
         If True, decode variables and coordinates with time units in
         {'days', 'hours', 'minutes', 'seconds', 'milliseconds', 'microseconds'}
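
The docstring addition above documents the matching read-side option. A sketch of reading back the store written in the `to_zarr` example, under the same assumptions (placeholder URL, fsspec filesystem package installed):

    import xarray as xr

    # storage_options is forwarded to the storage backend and, per the
    # docstring above, ignored for local paths.
    ds = xr.open_zarr(
        "s3://my-bucket/temperature.zarr",  # placeholder path
        storage_options={"anon": False},
    )
    print(ds)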
