
Commit 10ca4f0

Merge pull request #5169 from cpcloud/doc-build-fixes

DOC/CLN: A few fixes and cleanup for doc warnings/errors

2 parents: f9e0b7d + ee00276

5 files changed: +63 -48 lines changed

doc/source/io.rst

Lines changed: 23 additions & 22 deletions
@@ -2932,55 +2932,56 @@ if the source datatypes are compatible with BigQuery ones.
 For specifics on the service itself, see `here <https://developers.google.com/bigquery/>`__
 
 As an example, suppose you want to load all data from an existing table
-: `test_dataset.test_table`
-into BigQuery and pull it into a DataFrame.
+``test_dataset.test_table`` into BigQuery and pull it into a ``DataFrame``.
 
-.. code-block:: python
+::
 
    from pandas.io import gbq
    data_frame = gbq.read_gbq('SELECT * FROM test_dataset.test_table')
 
-The user will then be authenticated by the `bq` command line client -
+The user will then be authenticated by the ``bq`` command line client -
 this usually involves the default browser opening to a login page,
 though the process can be done entirely from command line if necessary.
-Datasets and additional parameters can be either configured with `bq`,
-passed in as options to `read_gbq`, or set using Google's gflags (this
-is not officially supported by this module, though care was taken
-to ensure that they should be followed regardless of how you call the
+Datasets and additional parameters can be either configured with ``bq``,
+passed in as options to :func:`~pandas.read_gbq`, or set using Google's
+``gflags`` (this is not officially supported by this module, though care was
+taken to ensure that they should be followed regardless of how you call the
 method).
 
 Additionally, you can define which column to use as an index as well as a preferred column order as follows:
 
-.. code-block:: python
+::
 
    data_frame = gbq.read_gbq('SELECT * FROM test_dataset.test_table',
                              index_col='index_column_name',
                              col_order='[col1, col2, col3,...]')
 
-Finally, if you would like to create a BigQuery table, `my_dataset.my_table`, from the rows of DataFrame, `df`:
+Finally, if you would like to create a BigQuery table, `my_dataset.my_table`,
+from the rows of DataFrame, `df`:
 
-.. code-block:: python
+::
 
-   df = pandas.DataFrame({'string_col_name' : ['hello'],
-                          'integer_col_name' : [1],
-                          'boolean_col_name' : [True]})
+   df = pandas.DataFrame({'string_col_name': ['hello'],
+                          'integer_col_name': [1],
+                          'boolean_col_name': [True]})
    schema = ['STRING', 'INTEGER', 'BOOLEAN']
-   data_frame = gbq.to_gbq(df, 'my_dataset.my_table',
-                           if_exists='fail', schema = schema)
+   data_frame = gbq.to_gbq(df, 'my_dataset.my_table', if_exists='fail',
+                           schema=schema)
 
 To add more rows to this, simply:
 
-.. code-block:: python
+::
 
-   df2 = pandas.DataFrame({'string_col_name' : ['hello2'],
-                           'integer_col_name' : [2],
-                           'boolean_col_name' : [False]})
+   df2 = pandas.DataFrame({'string_col_name': ['hello2'],
+                           'integer_col_name': [2],
+                           'boolean_col_name': [False]})
    data_frame = gbq.to_gbq(df2, 'my_dataset.my_table', if_exists='append')
 
 .. note::
 
-   There is a hard cap on BigQuery result sets, at 128MB compressed. Also, the BigQuery SQL query language has some oddities,
-   see `here <https://developers.google.com/bigquery/query-reference>`__
+   There is a hard cap on BigQuery result sets, at 128MB compressed. Also, the
+   BigQuery SQL query language has some oddities, see `here
+   <https://developers.google.com/bigquery/query-reference>`__
 
 .. _io.stata:
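The ``schema`` list passed to ``to_gbq`` above pairs each DataFrame column with a BigQuery type string by position. A minimal sketch of that mapping (the ``bq_schema`` helper and its dtype table are hypothetical illustrations, not part of ``pandas.io.gbq``):

```python
# Hypothetical helper (not part of pandas): build the positional schema list
# that to_gbq expects from a sequence of dtype names.
DTYPE_TO_BIGQUERY = {
    'object': 'STRING',
    'int64': 'INTEGER',
    'float64': 'FLOAT',
    'bool': 'BOOLEAN',
    'datetime64[ns]': 'TIMESTAMP',
}

def bq_schema(dtype_names):
    """Map dtype names to BigQuery type strings, defaulting to STRING."""
    return [DTYPE_TO_BIGQUERY.get(name, 'STRING') for name in dtype_names]

# The example above has one string, one integer and one boolean column:
print(bq_schema(['object', 'int64', 'bool']))  # ['STRING', 'INTEGER', 'BOOLEAN']
```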

doc/source/missing_data.rst

Lines changed: 5 additions & 2 deletions
@@ -397,8 +397,11 @@ at the new values.
 .. _documentation: http://docs.scipy.org/doc/scipy/reference/interpolate.html#univariate-interpolation
 .. _guide: http://docs.scipy.org/doc/scipy/reference/tutorial/interpolate.html
 
-Like other pandas fill methods, ``interpolate`` accepts a ``limit`` keyword argument.
-Use this to limit the number of consecutive interpolations, keeping ``NaN`` s for interpolations that are too far from the last valid observation:
+
+Like other pandas fill methods, ``interpolate`` accepts a ``limit`` keyword
+argument. Use this to limit the number of consecutive interpolations, keeping
+``NaN`` values for interpolations that are too far from the last valid
+observation:
 
 .. ipython:: python
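The ``limit`` behaviour described in the changed text can be sketched as follows (assumes pandas and numpy are installed):

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, np.nan, np.nan, 5.0])

# Without a limit, every interior NaN is interpolated.
full = s.interpolate()

# With limit=2, only two consecutive NaNs are filled; the third stays NaN
# because it is too far from the last valid observation.
capped = s.interpolate(limit=2)

print(full.tolist())        # [1.0, 2.0, 3.0, 4.0, 5.0]
print(int(capped.isna().sum()))  # 1
```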

pandas/core/generic.py

Lines changed: 25 additions & 19 deletions
@@ -1982,29 +1982,35 @@ def interpolate(self, method='linear', axis=0, limit=None, inplace=False,
 
         Parameters
         ----------
-        method : {'linear', 'time', 'values', 'index' 'nearest',
-                  'zero', 'slinear', 'quadratic', 'cubic',
-                  'barycentric', 'krogh', 'polynomial', 'spline'
-                  'piecewise_polynomial', 'pchip'}
-            'linear': ignore the index and treat the values as equally spaced. default
-            'time': interpolation works on daily and higher resolution
+        method : {'linear', 'time', 'values', 'index', 'nearest', 'zero',
+                  'slinear', 'quadratic', 'cubic', 'barycentric', 'krogh',
+                  'polynomial', 'spline', 'piecewise_polynomial', 'pchip'}
+
+            * 'linear': ignore the index and treat the values as equally
+              spaced. This is the default.
+            * 'time': interpolation works on daily and higher resolution
                 data to interpolate given length of interval
-            'index': use the actual numerical values of the index
-            'nearest', 'zero', 'slinear', 'quadratic', 'cubic', 'barycentric',
-            'polynomial' is passed to `scipy.interpolate.interp1d` with the order given
-            both 'polynomial' and 'spline' requre that you also specify and order (int)
-            e.g. df.interpolate(method='polynomial', order=4)
-            'krogh', 'piecewise_polynomial', 'spline', and 'pchip' are all wrappers
-            around the scipy interpolation methods of similar names. See the
-            scipy documentation for more on their behavior:
-            http://docs.scipy.org/doc/scipy/reference/interpolate.html#univariate-interpolation
-            http://docs.scipy.org/doc/scipy/reference/tutorial/interpolate.html
+            * 'index': use the actual numerical values of the index
+            * 'nearest', 'zero', 'slinear', 'quadratic', 'cubic',
+              'barycentric', 'polynomial' are passed to
+              `scipy.interpolate.interp1d` with the order given; both
+              'polynomial' and 'spline' require that you also specify an
+              order (int), e.g. df.interpolate(method='polynomial', order=4)
+            * 'krogh', 'piecewise_polynomial', 'spline', and 'pchip' are all
+              wrappers around the scipy interpolation methods of similar
+              names. See the scipy documentation for more on their behavior:
+              http://docs.scipy.org/doc/scipy/reference/interpolate.html#univariate-interpolation
+              http://docs.scipy.org/doc/scipy/reference/tutorial/interpolate.html
+
         axis : {0, 1}, default 0
-            0: fill column-by-column
-            1: fill row-by-row
-        limit : int, default None. Maximum number of consecutive NaNs to fill.
+            * 0: fill column-by-column
+            * 1: fill row-by-row
+        limit : int, default None
+            Maximum number of consecutive NaNs to fill.
         inplace : bool, default False
+            Update the NDFrame in place if possible.
         downcast : optional, 'infer' or None, defaults to 'infer'
+            Downcast dtypes if possible.
 
         Returns
         -------
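The distinction the docstring draws between 'linear' (ignore the index) and 'index' (use its numeric values) can be sketched as follows (assumes pandas and numpy are installed):

```python
import numpy as np
import pandas as pd

s = pd.Series([0.0, np.nan, 10.0], index=[0, 1, 10])

# 'linear' ignores the index and treats the points as equally spaced,
# so the NaN becomes the midpoint of its neighbours.
linear = s.interpolate(method='linear')
print(linear.loc[1])  # 5.0

# 'index' weights by the numeric index values, so the fill at index 1
# sits one tenth of the way along the gap from 0 to 10.
by_index = s.interpolate(method='index')
print(by_index.loc[1])  # 1.0
```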

pandas/core/panel.py

Lines changed: 4 additions & 2 deletions
@@ -391,8 +391,8 @@ def to_excel(self, path, na_rep='', engine=None, **kwargs):
         ``io.excel.xlsx.writer``, ``io.excel.xls.writer``, and
         ``io.excel.xlsm.writer``.
 
-        Keyword Arguments
-        -----------------
+        Other Parameters
+        ----------------
         float_format : string, default None
             Format string for floating point numbers
         cols : sequence, optional
@@ -409,6 +409,8 @@ def to_excel(self, path, na_rep='', engine=None, **kwargs):
         startow : upper left cell row to dump data frame
         startcol : upper left cell column to dump data frame
 
+        Notes
+        -----
         Keyword arguments (and na_rep) are passed to the ``to_excel`` method
         for each DataFrame written.
         """

pandas/io/html.py

Lines changed: 6 additions & 3 deletions
@@ -782,7 +782,10 @@ def read_html(io, match='.+', flavor=None, header=None, index_col=None,
         latest information on table attributes for the modern web.
 
     parse_dates : bool, optional
-        See :func:`~pandas.read_csv` for details.
+        See :func:`~pandas.io.parsers.read_csv` for more details. In 0.13, this
+        parameter can sometimes interact strangely with ``infer_types``. If you
+        get a large number of ``NaT`` values in your results, consider passing
+        ``infer_types=False`` and manually converting types afterwards.
 
     tupleize_cols : bool, optional
         If ``False`` try to parse multiple header rows into a
@@ -824,12 +827,12 @@ def read_html(io, match='.+', flavor=None, header=None, index_col=None,
 
     See Also
     --------
-    pandas.read_csv
+    pandas.io.parsers.read_csv
     """
     if infer_types is not None:
         warnings.warn("infer_types will have no effect in 0.14", FutureWarning)
     else:
-        infer_types = True  # TODO: remove in 0.14
+        infer_types = True  # TODO: remove effect of this in 0.14
 
     # Type check here. We don't want to parse only to fail because of an
     # invalid value of an integer skiprows.
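The deprecation pattern in this hunk (warn when the caller explicitly passes the soon-to-be-removed ``infer_types`` argument, otherwise keep the old default) can be sketched in isolation; ``read_html_stub`` is a stand-in, not the pandas function:

```python
import warnings

def read_html_stub(infer_types=None):
    """Sketch of the infer_types handling shown in the diff above."""
    if infer_types is not None:
        warnings.warn("infer_types will have no effect in 0.14", FutureWarning)
    else:
        infer_types = True  # old default, kept until the 0.14 removal
    return infer_types

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    default_result = read_html_stub()          # no warning: nothing passed
    explicit_result = read_html_stub(False)    # warns: caller opted in
    print(len(caught))  # 1
```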
