@@ -258,7 +258,7 @@ after a delimiter:
    data = ' a, b, c\n 1, 2, 3\n 4, 5, 6'
    print(data)
    pd.read_csv(StringIO(data), skipinitialspace=True)
-
+
 Moreover, ``read_csv`` ignores any completely commented lines:

 .. ipython:: python
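The body of the ``ipython`` block falls outside this hunk; as a rough sketch (not the document's actual example), fully commented lines are skipped when a ``comment`` character is supplied:

.. code-block:: python

   from io import StringIO
   import pandas as pd

   # Lines beginning with the comment character are dropped entirely
   # before parsing; 'a,b,c' remains the header row.
   data = 'a,b,c\n# commented line\n1,2,3\n4,5,6'
   pd.read_csv(StringIO(data), comment='#')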
@@ -2962,7 +2962,7 @@ Notes & Caveats
   ``tables``. The sizes of a string based indexing column
   (e.g. *columns* or *minor_axis*) are determined as the maximum size
   of the elements in that axis or by passing the parameter
- - Be aware that timezones (e.g., ``pytz.timezone('US/Eastern')``)
+ - Be aware that timezones (e.g., ``pytz.timezone('US/Eastern')``)
   are not necessarily equal across timezone versions. So if data is
   localized to a specific timezone in the HDFStore using one version
   of a timezone library and that data is updated with another version, the data
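A brief sketch of the caveat above (the file name and data are illustrative, and the mismatch itself only appears across two different pytz installs):

.. code-block:: python

   import pandas as pd
   import pytz

   # Localize an index with the currently installed pytz and store it.
   rng = pd.date_range('2013-01-01', periods=3,
                       tz=pytz.timezone('US/Eastern'))
   df = pd.DataFrame({'A': range(3)}, index=rng)

   store = pd.HDFStore('store.h5')  # illustrative file name
   store.put('df_tz', df)
   # If this file is later updated from an environment with a different
   # pytz release, the stored timezone may not compare equal to the one
   # used here, even though both are named 'US/Eastern'.
   store.close()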
@@ -3409,14 +3409,14 @@ Google BigQuery (Experimental)
 The :mod:`pandas.io.gbq` module provides a wrapper for Google's BigQuery
 analytics web service to simplify retrieving results from BigQuery tables
 using SQL-like queries. Result sets are parsed into a pandas
-DataFrame with a shape and data types derived from the source table.
-Additionally, DataFrames can be appended to existing BigQuery tables if
+DataFrame with a shape and data types derived from the source table.
+Additionally, DataFrames can be appended to existing BigQuery tables if
 the destination table is the same shape as the DataFrame.

 For specifics on the service itself, see `here <https://developers.google.com/bigquery/>`__

-As an example, suppose you want to load all data from an existing BigQuery
-table: `test_dataset.test_table` into a DataFrame using the :func:`~pandas.io.read_gbq`
+As an example, suppose you want to load all data from an existing BigQuery
+table: `test_dataset.test_table` into a DataFrame using the :func:`~pandas.io.read_gbq`
 function.

 .. code-block:: python
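The query in the elided block is not shown in this hunk; a minimal sketch, assuming the ``query``/``project_id`` calling convention and a placeholder project id, might be:

.. code-block:: python

   from pandas.io import gbq

   # 'my-project-id' is a placeholder; substitute your own BigQuery
   # project id from the management console.
   df = gbq.read_gbq('SELECT * FROM test_dataset.test_table',
                     project_id='my-project-id')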
@@ -3447,14 +3447,14 @@ Finally, you can append data to a BigQuery table from a pandas DataFrame
 using the :func:`~pandas.io.to_gbq` function. This function uses the
 Google streaming API which requires that your destination table exists in
 BigQuery. Given that the BigQuery table already exists, your DataFrame should
-match the destination table in column order, structure, and data types.
-DataFrame indexes are not supported. By default, rows are streamed to
-BigQuery in chunks of 10,000 rows, but you can pass other chunk values
-via the ``chunksize`` argument. You can also see the progress of your
-post via the ``verbose`` flag which defaults to ``True``. The HTTP
-response code of Google BigQuery can be successful (200) even if the
-append failed. For this reason, if there is a failure to append to the
-table, the complete error response from BigQuery is returned, which
+match the destination table in column order, structure, and data types.
+DataFrame indexes are not supported. By default, rows are streamed to
+BigQuery in chunks of 10,000 rows, but you can pass other chunk values
+via the ``chunksize`` argument. You can also see the progress of your
+post via the ``verbose`` flag which defaults to ``True``. The HTTP
+response code of Google BigQuery can be successful (200) even if the
+append failed. For this reason, if there is a failure to append to the
+table, the complete error response from BigQuery is returned, which
 can be quite long given it provides a status for each row. You may want
 to start with smaller chunks to test that the size and types of your
 DataFrame match your destination table to make debugging simpler.
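A minimal sketch of such an append, assuming the ``chunksize``/``verbose`` keywords described above and a placeholder project id:

.. code-block:: python

   from pandas.io import gbq

   # The destination table must already exist and match df in column
   # order, structure, and data types; indexes are not uploaded.
   gbq.to_gbq(df, 'test_dataset.test_table',
              project_id='my-project-id',  # placeholder
              chunksize=10000,             # rows streamed per request
              verbose=True)                # report progress per chunk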
@@ -3470,9 +3470,9 @@ The BigQuery SQL query language has some oddities, see `here <https://developers

 While BigQuery uses SQL-like syntax, it has some important differences
 from traditional databases in functionality, API limitations (size and
-quantity of queries or uploads), and how Google charges for use of the service.
+quantity of queries or uploads), and how Google charges for use of the service.
 You should refer to Google documentation often as the service seems to
-be changing and evolving. BigQuery is best for analyzing large sets of
+be changing and evolving. BigQuery is best for analyzing large sets of
 data quickly, but it is not a direct replacement for a transactional database.

 You can access the management console to determine project ids by: