Commit aeb15b5 (2 parents: f82d112 + e1183e8)

Merge remote-tracking branch 'upstream/master' into scipy19-docs

* upstream/master: (43 commits)
  Add hypothesis support to related projects (#3335)
  More doc fixes (#3333)
  Improve the documentation of swap_dims (#3331)
  fix the doc names of the return value of swap_dims (#3329)
  Fix isel performance regression (#3319)
  Allow weakref (#3318)
  Clarify that "scatter" is a plotting method in what's new. (#3316)
  Fix whats-new date :/
  Revert to dev version
  Release v0.13.0
  auto_combine deprecation to 0.14 (#3314)
  Deprecation: groupby, resample default dim. (#3313)
  Raise error if cmap is list of colors (#3310)
  Refactor concat to use merge for non-concatenated variables (#3239)
  Honor `keep_attrs` in DataArray.quantile (#3305)
  Fix DataArray api doc (#3309)
  Accept int value in head, thin and tail (#3298)
  ignore h5py 2.10.0 warnings and fix invalid_netcdf warning test. (#3301)
  Update why-xarray.rst with clearer expression (#3307)
  Compat and encoding deprecation to 0.14 (#3294)
  ...


68 files changed: +1916 / -1393 lines

asv_bench/benchmarks/combine.py

Lines changed: 1 addition & 0 deletions

@@ -1,4 +1,5 @@
 import numpy as np
+
 import xarray as xr
 
 
asv_bench/benchmarks/indexing.py

Lines changed: 13 additions & 0 deletions

@@ -125,3 +125,16 @@ def setup(self, key):
         requires_dask()
         super().setup(key)
         self.ds = self.ds.chunk({"x": 100, "y": 50, "t": 50})
+
+
+class BooleanIndexing:
+    # https://github.com/pydata/xarray/issues/2227
+    def setup(self):
+        self.ds = xr.Dataset(
+            {"a": ("time", np.arange(10_000_000))},
+            coords={"time": np.arange(10_000_000)},
+        )
+        self.time_filter = self.ds.time > 50_000
+
+    def time_indexing(self):
+        self.ds.isel(time=self.time_filter)
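The new benchmark above times boolean indexing through ``isel`` (the pattern from issue #2227). A minimal sketch of the same pattern on a small in-memory dataset (sizes and names here are illustrative, not from the commit):

```python
import numpy as np
import xarray as xr

# Small stand-in for the 10-million-element benchmark dataset.
ds = xr.Dataset(
    {"a": ("time", np.arange(100))},
    coords={"time": np.arange(100)},
)

# A boolean DataArray over the "time" dimension...
time_filter = ds.time > 50

# ...used as a positional indexer: keeps only points where the mask is True.
subset = ds.isel(time=time_filter)  # times 51..99, i.e. 49 points
```

The benchmark measures exactly this `isel(time=<boolean mask>)` call, which #3319 made faster.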

doc/api.rst

Lines changed: 8 additions & 1 deletion

@@ -8,7 +8,7 @@ This page provides an auto-generated summary of xarray's API. For more details
 and examples, refer to the relevant chapters in the main part of the
 documentation.
 
-See also: :ref:`public api`_.
+See also: :ref:`public api`
 
 Top-level functions
 ===================
@@ -117,6 +117,9 @@ Indexing
    Dataset.loc
    Dataset.isel
    Dataset.sel
+   Dataset.head
+   Dataset.tail
+   Dataset.thin
    Dataset.squeeze
    Dataset.interp
    Dataset.interp_like
@@ -279,6 +282,9 @@ Indexing
    DataArray.loc
    DataArray.isel
    DataArray.sel
+   DataArray.head
+   DataArray.tail
+   DataArray.thin
    DataArray.squeeze
    DataArray.interp
    DataArray.interp_like
@@ -604,6 +610,7 @@ Plotting
 
    Dataset.plot
    DataArray.plot
+   Dataset.plot.scatter
    plot.plot
    plot.contourf
    plot.contour
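The newly documented ``head``, ``tail``, and ``thin`` methods (added in this release cycle, see #3298 in the commit list) select from the start, end, or at a stride along a dimension. A quick sketch on a toy dataset:

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({"a": ("x", np.arange(10))})

head = ds.head(x=3)  # first 3 elements along "x": 0, 1, 2
tail = ds.tail(x=3)  # last 3 elements along "x": 7, 8, 9
thin = ds.thin(x=2)  # every 2nd element along "x": 0, 2, 4, 6, 8
```

Per #3298 these also accept a bare integer (e.g. ``ds.head(3)``) applied to every dimension.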

doc/dask.rst

Lines changed: 11 additions & 9 deletions

@@ -75,13 +75,14 @@ entirely equivalent to opening a dataset using ``open_dataset`` and then
 chunking the data using the ``chunk`` method, e.g.,
 ``xr.open_dataset('example-data.nc').chunk({'time': 10})``.
 
-To open multiple files simultaneously, use :py:func:`~xarray.open_mfdataset`::
+To open multiple files simultaneously in parallel using Dask delayed,
+use :py:func:`~xarray.open_mfdataset`::
 
-    xr.open_mfdataset('my/files/*.nc')
+    xr.open_mfdataset('my/files/*.nc', parallel=True)
 
 This function will automatically concatenate and merge dataset into one in
 the simple cases that it understands (see :py:func:`~xarray.auto_combine`
-for the full disclaimer). By default, ``open_mfdataset`` will chunk each
+for the full disclaimer). By default, :py:func:`~xarray.open_mfdataset` will chunk each
 netCDF file into a single Dask array; again, supply the ``chunks`` argument to
 control the size of the resulting Dask arrays. In more complex cases, you can
 open each file individually using ``open_dataset`` and merge the result, as
@@ -132,6 +133,13 @@ A dataset can also be converted to a Dask DataFrame using :py:meth:`~xarray.Data
 
 Dask DataFrames do not support multi-indexes so the coordinate variables from the dataset are included as columns in the Dask DataFrame.
 
+.. ipython:: python
+    :suppress:
+
+    import os
+    os.remove('example-data.nc')
+    os.remove('manipulated-example-data.nc')
+
 Using Dask with xarray
 ----------------------
 
@@ -373,12 +381,6 @@ one million elements (e.g., a 1000x1000 matrix). With large arrays (10+ GB), the
 cost of queueing up Dask operations can be noticeable, and you may need even
 larger chunksizes.
 
-.. ipython:: python
-    :suppress:
-
-    import os
-    os.remove('example-data.nc')
-
 Optimization Tips
 -----------------
 
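The dask.rst text above mentions the fallback for complex cases: open each file individually with ``open_dataset`` and merge the results. A sketch of that merge step with two in-memory datasets standing in for opened files (the variable names are illustrative; in the real workflow each would come from ``xr.open_dataset(path)``):

```python
import numpy as np
import xarray as xr

# Stand-ins for two files holding different variables on a shared coordinate.
ds1 = xr.Dataset({"temp": ("time", np.arange(3.0))}, coords={"time": [0, 1, 2]})
ds2 = xr.Dataset({"precip": ("time", np.arange(3.0))}, coords={"time": [0, 1, 2]})

# Combine into one dataset; coordinates are aligned by label.
merged = xr.merge([ds1, ds2])
```

``open_mfdataset(..., parallel=True)`` automates this (plus concatenation) using Dask delayed, as the diff describes.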

doc/gallery/plot_cartopy_facetgrid.py

Lines changed: 1 addition & 1 deletion

@@ -41,6 +41,6 @@
 ax.set_extent([-160, -30, 5, 75])
 # Without this aspect attributes the maps will look chaotic and the
 # "extent" attribute above will be ignored
-ax.set_aspect("equal", "box-forced")
+ax.set_aspect("equal")
 
 plt.show()
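Context for this one-line gallery fix: the ``"box-forced"`` adjustable mode was removed from Matplotlib in the 3.x series, so the old call raises an error; plain ``set_aspect("equal")`` (optionally with ``adjustable="box"``) is the supported form. A minimal sketch on an ordinary Axes (no cartopy needed):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so no display is required
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# Modern replacement for the removed set_aspect("equal", "box-forced"):
ax.set_aspect("equal", adjustable="box")
```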

doc/indexing.rst

Lines changed: 1 addition & 10 deletions

@@ -236,9 +236,8 @@ The :py:meth:`~xarray.Dataset.drop` method returns a new object with the listed
 index labels along a dimension dropped:
 
 .. ipython:: python
-    :okwarning:
 
-    ds.drop(['IN', 'IL'], dim='space')
+    ds.drop(space=['IN', 'IL'])
 
 ``drop`` is both a ``Dataset`` and ``DataArray`` method.
 
@@ -393,14 +392,6 @@ These methods may also be applied to ``Dataset`` objects
 
 You may find increased performance by loading your data into memory first,
 e.g., with :py:meth:`~xarray.Dataset.load`.
 
-.. note::
-
-    Vectorized indexing is a new feature in v0.10.
-    In older versions of xarray, dimensions of indexers are ignored.
-    Dedicated methods for some advanced indexing use cases,
-    ``isel_points`` and ``sel_points`` are now deprecated.
-    See :ref:`more_advanced_indexing` for their alternative.
-
 .. note::
 
     If an indexer is a :py:meth:`~xarray.DataArray`, its coordinates should not
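The doc change above switches ``drop`` to the keyword form ``ds.drop(space=['IN', 'IL'])``; in later xarray releases this label-based drop was renamed :py:meth:`~xarray.Dataset.drop_sel`, which is the stable spelling today. A sketch on a small dataset (the variable and values are made up, loosely mirroring the doc's ``space`` example):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {"t": ("space", np.array([10.0, 12.0, 9.0]))},
    coords={"space": ["IA", "IL", "IN"]},
)

# Drop the listed index labels along the "space" dimension.
dropped = ds.drop_sel(space=["IN", "IL"])  # only "IA" remains
```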
