Commit c56894e

Merge remote-tracking branch 'upstream/master' into boolean-array-kleene

2 parents 708c553 + 99cf733

31 files changed: +177 −305 lines

README.md

Lines changed: 4 additions & 5 deletions

````diff
@@ -164,12 +164,11 @@ pip install pandas
 ```
 
 ## Dependencies
-- [NumPy](https://www.numpy.org): 1.13.3 or higher
-- [python-dateutil](https://labix.org/python-dateutil): 2.5.0 or higher
-- [pytz](https://pythonhosted.org/pytz): 2015.4 or higher
+- [NumPy](https://www.numpy.org)
+- [python-dateutil](https://labix.org/python-dateutil)
+- [pytz](https://pythonhosted.org/pytz)
 
-See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies)
-for recommended and optional dependencies.
+See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) for minimum supported versions of required, recommended and optional dependencies.
 
 ## Installation from sources
 To install pandas from source you need Cython in addition to the normal
````

azure-pipelines.yml

Lines changed: 0 additions & 89 deletions

```diff
@@ -16,95 +16,6 @@ jobs:
     name: Windows
     vmImage: vs2017-win2016
 
-- job: 'Checks'
-  pool:
-    vmImage: ubuntu-16.04
-  timeoutInMinutes: 90
-  steps:
-  - script: |
-      echo '##vso[task.prependpath]$(HOME)/miniconda3/bin'
-      echo '##vso[task.setvariable variable=ENV_FILE]environment.yml'
-      echo '##vso[task.setvariable variable=AZURE]true'
-    displayName: 'Setting environment variables'
-
-  # Do not require a conda environment
-  - script: ci/code_checks.sh patterns
-    displayName: 'Looking for unwanted patterns'
-    condition: true
-
-  - script: |
-      sudo apt-get update
-      sudo apt-get install -y libc6-dev-i386
-      ci/setup_env.sh
-    displayName: 'Setup environment and build pandas'
-    condition: true
-
-  # Do not require pandas
-  - script: |
-      source activate pandas-dev
-      ci/code_checks.sh lint
-    displayName: 'Linting'
-    condition: true
-
-  - script: |
-      source activate pandas-dev
-      ci/code_checks.sh dependencies
-    displayName: 'Dependencies consistency'
-    condition: true
-
-  # Require pandas
-  - script: |
-      source activate pandas-dev
-      ci/code_checks.sh code
-    displayName: 'Checks on imported code'
-    condition: true
-
-  - script: |
-      source activate pandas-dev
-      ci/code_checks.sh doctests
-    displayName: 'Running doctests'
-    condition: true
-
-  - script: |
-      source activate pandas-dev
-      ci/code_checks.sh docstrings
-    displayName: 'Docstring validation'
-    condition: true
-
-  - script: |
-      source activate pandas-dev
-      ci/code_checks.sh typing
-    displayName: 'Typing validation'
-    condition: true
-
-  - script: |
-      source activate pandas-dev
-      pytest --capture=no --strict scripts
-    displayName: 'Testing docstring validation script'
-    condition: true
-
-  - script: |
-      source activate pandas-dev
-      cd asv_bench
-      asv check -E existing
-      git remote add upstream https://github.com/pandas-dev/pandas.git
-      git fetch upstream
-      if git diff upstream/master --name-only | grep -q "^asv_bench/"; then
-        asv machine --yes
-        ASV_OUTPUT="$(asv dev)"
-        if [[ $(echo "$ASV_OUTPUT" | grep "failed") ]]; then
-          echo "##vso[task.logissue type=error]Benchmarks run with errors"
-          echo "$ASV_OUTPUT"
-          exit 1
-        else
-          echo "Benchmarks run without errors"
-        fi
-      else
-        echo "Benchmarks did not run, no changes detected"
-      fi
-    displayName: 'Running benchmarks'
-    condition: true
-
 - job: 'Web_and_Docs'
   pool:
     vmImage: ubuntu-16.04
```

doc/source/user_guide/scale.rst

Lines changed: 1 addition & 1 deletion

```diff
@@ -94,7 +94,7 @@ Use efficient datatypes
 
 The default pandas data types are not the most memory efficient. This is
 especially true for text data columns with relatively few unique values (commonly
-referred to as "low-cardinality" data). By using more efficient data types you
+referred to as "low-cardinality" data). By using more efficient data types, you
 can store larger datasets in memory.
 
 .. ipython:: python
```

doc/source/whatsnew/v1.0.0.rst

Lines changed: 1 addition & 0 deletions

```diff
@@ -403,6 +403,7 @@ or ``matplotlib.Axes.plot``. See :ref:`plotting.formatters` for more.
 
 - Floordiv of integer-dtyped array by :class:`Timedelta` now raises ``TypeError`` (:issue:`21036`)
 - Removed the previously deprecated :meth:`Index.summary` (:issue:`18217`)
+- Removed the previously deprecated "fastpath" keyword from the :class:`Index` constructor (:issue:`23110`)
 - Removed the previously deprecated :meth:`Series.get_value`, :meth:`Series.set_value`, :meth:`DataFrame.get_value`, :meth:`DataFrame.set_value` (:issue:`17739`)
 - Changed the default value of `inplace` in :meth:`DataFrame.set_index` and :meth:`Series.set_axis`. It now defaults to False (:issue:`27600`)
 - Removed support for nested renaming in :meth:`DataFrame.aggregate`, :meth:`Series.aggregate`, :meth:`DataFrameGroupBy.aggregate`, :meth:`SeriesGroupBy.aggregate`, :meth:`Rolling.aggregate` (:issue:`18529`)
```

environment.yml

Lines changed: 1 addition & 1 deletion

```diff
@@ -78,7 +78,7 @@ dependencies:
   - fastparquet>=0.3.2  # pandas.read_parquet, DataFrame.to_parquet
   - html5lib  # pandas.read_html
   - lxml  # pandas.read_html
-  - openpyxl  # pandas.read_excel, DataFrame.to_excel, pandas.ExcelWriter, pandas.ExcelFile
+  - openpyxl<=3.0.1  # pandas.read_excel, DataFrame.to_excel, pandas.ExcelWriter, pandas.ExcelFile
   - pyarrow>=0.13.1  # pandas.read_parquet, DataFrame.to_parquet, pandas.read_feather, DataFrame.to_feather
   - pyqt>=5.9.2  # pandas.read_clipboard
   - pytables>=3.4.2  # pandas.read_hdf, DataFrame.to_hdf
```

pandas/__init__.py

Lines changed: 1 addition & 0 deletions

```diff
@@ -24,6 +24,7 @@
     _np_version_under1p15,
     _np_version_under1p16,
     _np_version_under1p17,
+    _np_version_under1p18,
 )
 
 try:
```

pandas/conftest.py

Lines changed: 13 additions & 0 deletions

```diff
@@ -868,3 +868,16 @@ def float_frame():
     [30 rows x 4 columns]
     """
     return DataFrame(tm.getSeriesData())
+
+
+@pytest.fixture(params=[pd.Index, pd.Series], ids=["index", "series"])
+def index_or_series(request):
+    """
+    Fixture to parametrize over Index and Series, made necessary by a mypy
+    bug, giving an error:
+
+    List item 0 has incompatible type "Type[Series]"; expected "Type[PandasObject]"
+
+    See GH#?????
+    """
+    return request.param
+```
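A sketch of how a parametrized fixture like `index_or_series` is consumed: any test that takes it as an argument runs once with `pd.Index` and once with `pd.Series`. The `check_rename` helper and the rename scenario are illustrative assumptions, not from the commit:

```python
import pandas as pd
import pytest


@pytest.fixture(params=[pd.Index, pd.Series], ids=["index", "series"])
def index_or_series(request):
    # request.param is pd.Index on one run and pd.Series on the other
    return request.param


def check_rename(box):
    # the kind of body a test using the fixture would run; `box` is
    # whichever constructor the fixture supplied
    obj = box([1, 2, 3], name="a")
    return obj.rename("b").name


def test_rename(index_or_series):
    # pytest collects this twice, with ids "index" and "series"
    assert check_rename(index_or_series) == "b"
```

This keeps one test body covering both container types without duplicating it.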

pandas/core/indexes/base.py

Lines changed: 1 addition & 18 deletions

```diff
@@ -265,14 +265,7 @@ def _outer_indexer(self, left, right):
     # Constructors
 
     def __new__(
-        cls,
-        data=None,
-        dtype=None,
-        copy=False,
-        name=None,
-        fastpath=None,
-        tupleize_cols=True,
-        **kwargs,
+        cls, data=None, dtype=None, copy=False, name=None, tupleize_cols=True, **kwargs,
     ) -> "Index":
 
         from .range import RangeIndex
@@ -284,16 +277,6 @@ def __new__(
         if name is None and hasattr(data, "name"):
             name = data.name
 
-        if fastpath is not None:
-            warnings.warn(
-                "The 'fastpath' keyword is deprecated, and will be "
-                "removed in a future version.",
-                FutureWarning,
-                stacklevel=2,
-            )
-            if fastpath:
-                return cls._simple_new(data, name)
-
         if isinstance(data, ABCPandasArray):
             # ensure users don't accidentally put a PandasArray in an index.
             data = data.to_numpy()
```
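The blocks being deleted here (and in the other index constructors below) all follow the standard keyword-deprecation pattern: warn when the keyword is passed at all, then honor it one last time. A generic sketch of that pattern, using a hypothetical `make_index` helper rather than the real `Index.__new__`:

```python
import warnings


def make_index(data, fastpath=None):
    # `fastpath` defaults to None so we can tell "not passed" apart from
    # "passed as False"; any explicit value triggers the warning.
    if fastpath is not None:
        warnings.warn(
            "The 'fastpath' keyword is deprecated, and will be "
            "removed in a future version.",
            FutureWarning,
            stacklevel=2,
        )
    return list(data)


with warnings.catch_warnings(record=True) as w:
    warnings.simplefilter("always")
    make_index([1, 2, 3], fastpath=True)

assert issubclass(w[-1].category, FutureWarning)
```

Once the deprecation period ends, the keyword and the whole warning block are removed, which is exactly what this commit does.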

pandas/core/indexes/category.py

Lines changed: 0 additions & 12 deletions

```diff
@@ -1,6 +1,5 @@
 import operator
 from typing import Any
-import warnings
 
 import numpy as np
 
@@ -172,19 +171,8 @@ def __new__(
         dtype=None,
         copy=False,
         name=None,
-        fastpath=None,
     ):
 
-        if fastpath is not None:
-            warnings.warn(
-                "The 'fastpath' keyword is deprecated, and will be "
-                "removed in a future version.",
-                FutureWarning,
-                stacklevel=2,
-            )
-            if fastpath:
-                return cls._simple_new(data, name=name, dtype=dtype)
-
         dtype = CategoricalDtype._from_values_or_dtype(data, categories, ordered, dtype)
 
         if name is None and hasattr(data, "name"):
```

pandas/core/indexes/datetimelike.py

Lines changed: 4 additions & 1 deletion

```diff
@@ -284,7 +284,10 @@ def sort_values(self, return_indexer=False, ascending=True):
             sorted_index = self.take(_as)
             return sorted_index, _as
         else:
-            sorted_values = np.sort(self._ndarray_values)
+            # NB: using asi8 instead of _ndarray_values matters in numpy 1.18
+            # because the treatment of NaT has been changed to put NaT last
+            # instead of first.
+            sorted_values = np.sort(self.asi8)
             attribs = self._get_attributes_dict()
             freq = attribs["freq"]
```
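A sketch of why sorting the i8 view sidesteps the numpy 1.18 behavior change noted in the comment above: NaT's integer representation is the minimum int64 value, so an ascending sort of the i8 values always places NaT first, regardless of how `np.sort` orders datetime64 NaT in any given numpy version (the sample dates are arbitrary):

```python
import numpy as np

values = np.array(["2020-01-02", "NaT", "2020-01-01"], dtype="datetime64[ns]")

# NaT viewed as int64 is INT64_MIN, the smallest possible value,
# so it sorts first under plain integer ordering.
i8 = values.view("i8")
sorted_dt = np.sort(i8).view("datetime64[ns]")

assert np.isnat(sorted_dt[0])  # NaT first, on every numpy version
```

Sorting the datetime64 array directly would instead follow numpy's NaT placement, which flipped from first to last in 1.18.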

pandas/core/indexes/numeric.py

Lines changed: 1 addition & 12 deletions

```diff
@@ -1,5 +1,3 @@
-import warnings
-
 import numpy as np
 
 from pandas._libs import index as libindex
@@ -47,17 +45,8 @@ class NumericIndex(Index):
 
     _is_numeric_dtype = True
 
-    def __new__(cls, data=None, dtype=None, copy=False, name=None, fastpath=None):
+    def __new__(cls, data=None, dtype=None, copy=False, name=None):
         cls._validate_dtype(dtype)
-        if fastpath is not None:
-            warnings.warn(
-                "The 'fastpath' keyword is deprecated, and will be "
-                "removed in a future version.",
-                FutureWarning,
-                stacklevel=2,
-            )
-            if fastpath:
-                return cls._simple_new(data, name=name)
 
         # Coerce to ndarray if not already ndarray or Index
         if not isinstance(data, (np.ndarray, Index)):
```

pandas/core/indexes/range.py

Lines changed: 1 addition & 18 deletions

```diff
@@ -81,26 +81,9 @@ class RangeIndex(Int64Index):
     # Constructors
 
     def __new__(
-        cls,
-        start=None,
-        stop=None,
-        step=None,
-        dtype=None,
-        copy=False,
-        name=None,
-        fastpath=None,
+        cls, start=None, stop=None, step=None, dtype=None, copy=False, name=None,
     ):
 
-        if fastpath is not None:
-            warnings.warn(
-                "The 'fastpath' keyword is deprecated, and will be "
-                "removed in a future version.",
-                FutureWarning,
-                stacklevel=2,
-            )
-            if fastpath:
-                return cls._simple_new(range(start, stop, step), name=name)
-
         cls._validate_dtype(dtype)
 
         # RangeIndex
```

pandas/core/missing.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -339,7 +339,7 @@ def _interpolate_scipy_wrapper(
     }
 
     if getattr(x, "is_all_dates", False):
-        # GH 5975, scipy.interp1d can't hande datetime64s
+        # GH 5975, scipy.interp1d can't handle datetime64s
        x, new_x = x._values.astype("i8"), new_x.astype("i8")
 
     if method == "pchip":
```

pandas/io/msgpack/_packer.pyx

Lines changed: 1 addition & 1 deletion

```diff
@@ -234,7 +234,7 @@ cdef class Packer:
                     default_used = 1
                     continue
                 else:
-                    raise TypeError("can't serialize {thing!r}".format(thing=o))
+                    raise TypeError(f"can't serialize {repr(o)}")
             break
         return ret
```
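This and the following msgpack/sas changes are a mechanical migration from `str.format` to f-strings. The two styles produce identical strings, including the `!r` conversion, which matches `repr()` (the sample values below are arbitrary):

```python
o = b"payload"
ret = -1

# f-strings evaluate the expression inline; {x!r} and {repr(x)} agree
assert "can't serialize {thing!r}".format(thing=o) == f"can't serialize {repr(o)}"
assert "Unpack failed: error = {ret}".format(ret=ret) == f"Unpack failed: error = {ret}"
assert f"{o!r}" == repr(o)
```

Since f-strings are evaluated eagerly at the call site, the conversion is safe here: each format call sits directly inside the `raise` that uses it.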

pandas/io/msgpack/_unpacker.pyx

Lines changed: 3 additions & 4 deletions

```diff
@@ -99,7 +99,7 @@ cdef inline init_ctx(unpack_context *ctx,
 
 def default_read_extended_type(typecode, data):
     raise NotImplementedError("Cannot decode extended type "
-                              "with typecode={code}".format(code=typecode))
+                              f"with typecode={typecode}")
 
 
 def unpackb(object packed, object object_hook=None, object list_hook=None,
@@ -159,7 +159,7 @@ def unpackb(object packed, object object_hook=None, object list_hook=None,
         return obj
     else:
         PyBuffer_Release(&view)
-        raise UnpackValueError("Unpack failed: error = {ret}".format(ret=ret))
+        raise UnpackValueError(f"Unpack failed: error = {ret}")
 
 
 def unpack(object stream, object object_hook=None, object list_hook=None,
@@ -430,8 +430,7 @@ cdef class Unpacker:
             else:
                 raise OutOfData("No more data to unpack.")
         else:
-            raise ValueError("Unpack failed: error = {ret}"
-                             .format(ret=ret))
+            raise ValueError(f"Unpack failed: error = {ret}")
 
     def read_bytes(self, Py_ssize_t nbytes):
         """Read a specified number of raw bytes from the stream"""
```

pandas/io/sas/sas.pyx

Lines changed: 5 additions & 10 deletions

```diff
@@ -105,13 +105,11 @@ cdef const uint8_t[:] rle_decompress(int result_length,
             result[rpos] = 0x00
             rpos += 1
         else:
-            raise ValueError("unknown control byte: {byte}"
-                             .format(byte=control_byte))
+            raise ValueError(f"unknown control byte: {control_byte}")
 
     # In py37 cython/clang sees `len(outbuff)` as size_t and not Py_ssize_t
     if <Py_ssize_t>len(result) != <Py_ssize_t>result_length:
-        raise ValueError("RLE: {got} != {expect}".format(got=len(result),
-                                                         expect=result_length))
+        raise ValueError(f"RLE: {len(result)} != {result_length}")
 
     return np.asarray(result)
 
@@ -194,8 +192,7 @@ cdef const uint8_t[:] rdc_decompress(int result_length,
 
     # In py37 cython/clang sees `len(outbuff)` as size_t and not Py_ssize_t
     if <Py_ssize_t>len(outbuff) != <Py_ssize_t>result_length:
-        raise ValueError("RDC: {got} != {expect}\n"
-                         .format(got=len(outbuff), expect=result_length))
+        raise ValueError(f"RDC: {len(outbuff)} != {result_length}\n")
 
     return np.asarray(outbuff)
 
@@ -271,8 +268,7 @@ cdef class Parser:
                 self.column_types[j] = column_type_string
             else:
                 raise ValueError("unknown column type: "
-                                 "{typ}"
-                                 .format(typ=self.parser.columns[j].ctype))
+                                 f"{self.parser.columns[j].ctype}")
 
         # compression
         if parser.compression == const.rle_compression:
@@ -392,8 +388,7 @@ cdef class Parser:
                 return True
             return False
         else:
-            raise ValueError("unknown page type: {typ}"
-                             .format(typ=self.current_page_type))
+            raise ValueError(f"unknown page type: {self.current_page_type}")
 
     cdef void process_byte_array_with_data(self, int offset, int length):
```
pandas/tests/api/test_api.py

Lines changed: 1 addition & 0 deletions

```diff
@@ -189,6 +189,7 @@ class TestPDApi(Base):
         "_np_version_under1p15",
         "_np_version_under1p16",
         "_np_version_under1p17",
+        "_np_version_under1p18",
         "_tslib",
         "_typing",
         "_version",
```
