@@ -51,6 +51,11 @@ We can save a Dataset to disk using the

.. ipython:: python

+    ds = xr.Dataset({'foo': (('x', 'y'), np.random.rand(4, 5))},
+                    coords={'x': [10, 20, 30, 40],
+                            'y': pd.date_range('2000-01-01', periods=5),
+                            'z': ('x', list('abcd'))})
+
    ds.to_netcdf('saved_on_disk.nc')

By default, the file is saved as netCDF4 (assuming netCDF4-Python is
@@ -61,7 +66,7 @@ the ``format`` and ``engine`` arguments.

Using the `h5netcdf <https://github.com/shoyer/h5netcdf>`_ package
by passing ``engine='h5netcdf'`` to :py:meth:`~xarray.open_dataset` can
-be quicker than the default ``engine='netcdf4'`` that uses the
+sometimes be quicker than the default ``engine='netcdf4'`` that uses the
`netCDF4 <https://github.com/Unidata/netcdf4-python>`_ package.
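
As a minimal sketch of how these options fit together (assuming the ``ds`` created above and that both the netCDF4-Python and h5netcdf packages are installed; ``ds_h5`` is just an illustrative name)::

    # write with the default backend, spelling out format and engine explicitly
    ds.to_netcdf('saved_on_disk.nc', format='NETCDF4', engine='netcdf4')

    # re-open the same file through h5netcdf, which can sometimes be quicker
    ds_h5 = xr.open_dataset('saved_on_disk.nc', engine='h5netcdf')
    ds_h5.close()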
@@ -191,8 +196,6 @@ will remove encoding information.
    :suppress:

    ds_disk.close()
-    import os
-    os.remove('saved_on_disk.nc')

.. _combining multiple files:
@@ -633,11 +636,6 @@ module:

    import pickle

-    ds = xr.Dataset({'foo': (('x', 'y'), np.random.rand(4, 5))},
-                    coords={'x': [10, 20, 30, 40],
-                            'y': pd.date_range('2000-01-01', periods=5),
-                            'z': ('x', list('abcd'))})
-
    # use the highest protocol (-1) because it is way faster than the default
    # text based pickle format
    pkl = pickle.dumps(ds, protocol=-1)
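
For reference, a minimal sketch of the reverse direction, loading the dataset back from the pickled bytes (``unpickled`` is just an illustrative name)::

    # restore the Dataset from the in-memory pickle bytes
    unpickled = pickle.loads(pkl)
    # the round-tripped object should match the original
    unpickled.identical(ds)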
@@ -697,6 +695,12 @@ To export just the dataset schema, without the data itself, use the
This can be useful for generating indices of dataset contents to expose to
search indices or other automated data discovery tools.
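
As a rough sketch of what such an export can look like, assuming the standard ``Dataset.to_dict`` method with ``data=False`` (the variable name ``schema`` is just illustrative)::

    # dims, coords, attrs and variable metadata are kept; array values are dropped
    schema = ds.to_dict(data=False)
    sorted(schema.keys())  # e.g. ['attrs', 'coords', 'data_vars', 'dims']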

+.. ipython:: python
+    :suppress:
+
+    import os
+    os.remove('saved_on_disk.nc')
+

.. _io.rasterio:
Rasterio