diff --git a/docs/bigtable-client-intro.rst b/docs/bigtable-client-intro.rst index 6c7e48790f3d..3f43371ec1db 100644 --- a/docs/bigtable-client-intro.rst +++ b/docs/bigtable-client-intro.rst @@ -21,18 +21,19 @@ defaults However, you may over-ride them and these will be used throughout all API requests made with the ``client`` you create. -Authorization +Configuration ------------- - For an overview of authentication in ``gcloud-python``, see :doc:`gcloud-auth`. - In addition to any authentication configuration, you can also set the - :envvar:`GCLOUD_PROJECT` environment variable for the project you'd like - to interact with. If you are Google App Engine or Google Compute Engine - this will be detected automatically. (Setting this environment - variable is not required, you may instead pass the ``project`` explicitly - when constructing a :class:`Client `). + :envvar:`GCLOUD_PROJECT` environment variable for the Google Cloud Console + project you'd like to interact with. If your code is running in Google App + Engine or Google Compute Engine the project will be detected automatically. + (Setting this environment variable is not required, you may instead pass the + ``project`` explicitly when constructing a + :class:`Client `). - After configuring your environment, create a :class:`Client ` diff --git a/docs/bigtable-cluster-api.rst b/docs/bigtable-cluster-api.rst index bedf581fbf44..17fba96840db 100644 --- a/docs/bigtable-cluster-api.rst +++ b/docs/bigtable-cluster-api.rst @@ -85,11 +85,13 @@ Check on Current Operation .. note:: When modifying a cluster (via a `CreateCluster`_, `UpdateCluster`_ or - `UndeleteCluster`_ request), the Bigtable API will return a long-running - `Operation`_. This will be stored on the object after each of + `UndeleteCluster`_ request), the Bigtable API will return a + `long-running operation`_ and a corresponding + :class:`Operation ` object + will be returned by each of :meth:`create() `, :meth:`update() ` and - :meth:`undelete() ` are called. + :meth:`undelete() `. You can check if a long-running operation (for a :meth:`create() `, @@ -176,4 +178,4 @@ Head next to learn about the :doc:`bigtable-table-api`. .. _ListClusters: https://github.com/GoogleCloudPlatform/cloud-bigtable-client/blob/2aae624081f652427052fb652d3ae43d8ac5bf5a/bigtable-protos/src/main/proto/google/bigtable/admin/cluster/v1/bigtable_cluster_service.proto#L44-L46 .. _GetOperation: https://github.com/GoogleCloudPlatform/cloud-bigtable-client/blob/2aae624081f652427052fb652d3ae43d8ac5bf5a/bigtable-protos/src/main/proto/google/longrunning/operations.proto#L43-L45 .. _UndeleteCluster: https://github.com/GoogleCloudPlatform/cloud-bigtable-client/blob/2aae624081f652427052fb652d3ae43d8ac5bf5a/bigtable-protos/src/main/proto/google/bigtable/admin/cluster/v1/bigtable_cluster_service.proto#L126-L128 -.. _Operation: https://github.com/GoogleCloudPlatform/cloud-bigtable-client/blob/2aae624081f652427052fb652d3ae43d8ac5bf5a/bigtable-protos/src/main/proto/google/longrunning/operations.proto#L73-L102 +.. _long-running operation: https://github.com/GoogleCloudPlatform/cloud-bigtable-client/blob/2aae624081f652427052fb652d3ae43d8ac5bf5a/bigtable-protos/src/main/proto/google/longrunning/operations.proto#L73-L102 diff --git a/docs/bigtable-data-api.rst b/docs/bigtable-data-api.rst index 776fe4a8429f..779efa991886 100644 --- a/docs/bigtable-data-api.rst +++ b/docs/bigtable-data-api.rst @@ -24,23 +24,11 @@ Cells vs. Columns vs. 
Column Families Modifying Data ++++++++++++++ -Since data is stored in cells, which are stored in rows, the -:class:`Row ` class is the only class used to -modify (write, update, delete) data in a +Since data is stored in cells, which are stored in rows, we +use the metaphor of a **row** in classes that are used to modify +(write, update, delete) data in a :class:`Table `. -Row Factory ------------ - -To create a :class:`Row ` object - -.. code:: python - - row = table.row(row_key) - -Unlike the previous string values we've used before, the row key must -be ``bytes``. - Direct vs. Conditional vs. Append --------------------------------- @@ -49,43 +37,65 @@ There are three ways to modify data in a table, described by the methods. * The **direct** way is via `MutateRow`_ which involves simply - adding, overwriting or deleting cells. + adding, overwriting or deleting cells. The + :class:`DirectRow ` class + handles direct mutations. * The **conditional** way is via `CheckAndMutateRow`_. This method first checks if some filter is matched in a a given row, then applies one of two sets of mutations, depending on if a match occurred or not. (These mutation sets are called the "true - mutations" and "false mutations".) + mutations" and "false mutations".) The + :class:`ConditionalRow ` class + handles conditional mutations. * The **append** way is via `ReadModifyWriteRow`_. This simply appends (as bytes) or increments (as an integer) data in a presumed - existing cell in a row. + existing cell in a row. The + :class:`AppendRow ` class + handles append mutations. -Building Up Mutations ---------------------- +Row Factory +----------- -In all three cases, a set of mutations (or two sets) are built up -on a :class:`Row ` before they are sent of -in a batch via :meth:`commit() `: +A single factory can be used to create any of the three row types. +To create a :class:`DirectRow `: .. code:: python - row.commit() + row = table.row(row_key) -To send **append** mutations in batch, use -:meth:`commit_modifications() `: +Unlike the previous string values we've used before, the row key must +be ``bytes``. + +To create a :class:`ConditionalRow `, +first create a :class:`RowFilter ` and +then .. code:: python - row.commit_modifications() + cond_row = table.row(row_key, filter_=filter_) -We have a small set of methods on the :class:`Row ` -to build these mutations up. +To create an :class:`AppendRow ` + +.. code:: python + + append_row = table.row(row_key, append=True) + +Building Up Mutations +--------------------- + +In all three cases, a set of mutations (or two sets) are built up +on a row before they are sent of in a batch via + +.. code:: python + + row.commit() Direct Mutations ---------------- Direct mutations can be added via one of four methods -* :meth:`set_cell() ` allows a +* :meth:`set_cell() ` allows a single value to be written to a column .. code:: python @@ -97,9 +107,9 @@ Direct mutations can be added via one of four methods Bigtable server will be used when the cell is stored. The value can either by bytes or an integer (which will be converted to - bytes as an unsigned 64-bit integer). + bytes as a signed 64-bit integer). -* :meth:`delete_cell() ` deletes +* :meth:`delete_cell() ` deletes all cells (i.e. for all timestamps) in a given column .. 
code:: python @@ -117,8 +127,9 @@ Direct mutations can be added via one of four methods row.delete_cell(column_family_id, column, time_range=time_range) -* :meth:`delete_cells() ` does - the same thing as :meth:`delete_cell() ` +* :meth:`delete_cells() ` does + the same thing as + :meth:`delete_cell() ` but accepts a list of columns in a column family rather than a single one. .. code:: python @@ -127,15 +138,16 @@ Direct mutations can be added via one of four methods time_range=time_range) In addition, if we want to delete cells from every column in a column family, - the special :attr:`ALL_COLUMNS ` value - can be used + the special :attr:`ALL_COLUMNS ` + value can be used .. code:: python - row.delete_cells(column_family_id, Row.ALL_COLUMNS, + row.delete_cells(column_family_id, row.ALL_COLUMNS, time_range=time_range) -* :meth:`delete() ` will delete the entire row +* :meth:`delete() ` will delete the + entire row .. code:: python @@ -145,57 +157,42 @@ Conditional Mutations --------------------- Making **conditional** modifications is essentially identical -to **direct** modifications, but we need to specify a filter to match -against in the row: +to **direct** modifications: it uses the exact same methods +to accumulate mutations. -.. code:: python - - row = table.row(row_key, filter_=filter_val) - -See the :class:`Row ` class for more information -about acceptable values for ``filter_``. - -The only other difference from **direct** modifications are that each mutation -added must specify a ``state``: will the mutation be applied if the filter -matches or if it fails to match. +However, each mutation added must specify a ``state``: will the mutation be +applied if the filter matches or if it fails to match. For example: .. code:: python - row.set_cell(column_family_id, column, value, - timestamp=timestamp, state=True) + cond_row.set_cell(column_family_id, column, value, + timestamp=timestamp, state=True) will add to the set of true mutations. -.. note:: - - If ``state`` is passed when no ``filter_`` is set on a - :class:`Row `, adding the mutation will fail. - Similarly, if no ``state`` is passed when a ``filter_`` has been set, - adding the mutation will fail. - Append Mutations ---------------- Append mutations can be added via one of two methods -* :meth:`append_cell_value() ` +* :meth:`append_cell_value() ` appends a bytes value to an existing cell: .. code:: python - row.append_cell_value(column_family_id, column, bytes_value) + append_row.append_cell_value(column_family_id, column, bytes_value) -* :meth:`increment_cell_value() ` +* :meth:`increment_cell_value() ` increments an integer value in an existing cell: .. code:: python - row.increment_cell_value(column_family_id, column, int_value) + append_row.increment_cell_value(column_family_id, column, int_value) Since only bytes are stored in a cell, the cell value is decoded as - an unsigned 64-bit integer before being incremented. (This happens on + a signed 64-bit integer before being incremented. (This happens on the Google Cloud Bigtable server, not in the library.) Notice that no timestamp was specified. This is because **append** mutations @@ -208,18 +205,10 @@ Starting Fresh -------------- If accumulated mutations need to be dropped, use -:meth:`clear_mutations() ` - -.. code:: python - - row.clear_mutations() - -To clear **append** mutations, use -:meth:`clear_modification_rules() ` .. 
code:: python - row.clear_modification_rules() + row.clear() Reading Data ++++++++++++ @@ -260,19 +249,20 @@ To make a `ReadRows`_ API request for a single row key, use >>> cell.timestamp datetime.datetime(2016, 2, 27, 3, 41, 18, 122823, tzinfo=) -Rather than returning a :class:`Row `, this method -returns a :class:`PartialRowData ` +Rather than returning a :class:`DirectRow ` +or similar class, this method returns a +:class:`PartialRowData ` instance. This class is used for reading and parsing data rather than for -modifying data (as :class:`Row ` is). +modifying data (as :class:`DirectRow ` is). -A filter can also be applied to the +A filter can also be applied to the results: .. code:: python row_data = table.read_row(row_key, filter_=filter_val) The allowable ``filter_`` values are the same as those used for a -:class:`Row ` with **conditional** mutations. For +:class:`ConditionalRow `. For more information, see the :meth:`Table.read_row() ` documentation. diff --git a/docs/bigtable-row.rst b/docs/bigtable-row.rst index 08ae1d4beaae..fae56b76f3d5 100644 --- a/docs/bigtable-row.rst +++ b/docs/bigtable-row.rst @@ -2,7 +2,8 @@ Bigtable Row ============ It is possible to use a :class:`RowFilter ` -when adding mutations to a :class:`Row ` and when +when adding mutations to a +:class:`ConditionalRow ` and when reading row data with :meth:`read_row() ` :meth:`read_rows() `. diff --git a/gcloud/bigtable/happybase/table.py b/gcloud/bigtable/happybase/table.py index 0f6314ccdfd7..dcbf45163015 100644 --- a/gcloud/bigtable/happybase/table.py +++ b/gcloud/bigtable/happybase/table.py @@ -603,12 +603,12 @@ def counter_inc(self, row, column, value=1): :rtype: int :returns: Counter value after incrementing. """ - row = self._low_level_table.row(row) + row = self._low_level_table.row(row, append=True) if isinstance(column, six.binary_type): column = column.decode('utf-8') column_family_id, column_qualifier = column.split(':') row.increment_cell_value(column_family_id, column_qualifier, value) - # See row.commit_modifications() will return a dictionary: + # See AppendRow.commit() will return a dictionary: # { # u'col-fam-id': { # b'col-name1': [ @@ -618,7 +618,7 @@ def counter_inc(self, row, column, value=1): # ... # }, # } - modified_cells = row.commit_modifications() + modified_cells = row.commit() # Get the cells in the modified column, column_cells = modified_cells[column_family_id][column_qualifier] # Make sure there is exactly one cell in the column. diff --git a/gcloud/bigtable/happybase/test_table.py b/gcloud/bigtable/happybase/test_table.py index b70abf77f2dc..cf95896a7696 100644 --- a/gcloud/bigtable/happybase/test_table.py +++ b/gcloud/bigtable/happybase/test_table.py @@ -871,10 +871,12 @@ def _counter_inc_helper(self, row, column, value, commit_result): table = self._makeOne(name, connection) # Mock the return values. 
table._low_level_table = _MockLowLevelTable() - table._low_level_table.row_values[row] = _MockLowLevelRow( + table._low_level_table.row_values[row] = row_obj = _MockLowLevelRow( row, commit_result=commit_result) + self.assertFalse(row_obj._append) result = table.counter_inc(row, column, value=value) + self.assertTrue(row_obj._append) incremented_value = value + _MockLowLevelRow.COUNTER_DEFAULT self.assertEqual(result, incremented_value) @@ -1431,8 +1433,10 @@ def list_column_families(self): self.list_column_families_calls += 1 return self.column_families - def row(self, row_key): - return self.row_values[row_key] + def row(self, row_key, append=None): + result = self.row_values[row_key] + result._append = append + return result def read_row(self, *args, **kwargs): self.read_row_calls.append((args, kwargs)) @@ -1449,6 +1453,7 @@ class _MockLowLevelRow(object): def __init__(self, row_key, commit_result=None): self.row_key = row_key + self._append = False self.counts = {} self.commit_result = commit_result @@ -1457,7 +1462,7 @@ def increment_cell_value(self, column_family_id, column, int_value): self.COUNTER_DEFAULT) self.counts[(column_family_id, column)] = count + int_value - def commit_modifications(self): + def commit(self): return self.commit_result diff --git a/gcloud/bigtable/row.py b/gcloud/bigtable/row.py index 028c697d0c3d..276f009b7bb1 100644 --- a/gcloud/bigtable/row.py +++ b/gcloud/bigtable/row.py @@ -34,94 +34,85 @@ class Row(object): - """Representation of a Google Cloud Bigtable Row. + """Base representation of a Google Cloud Bigtable Row. - .. note:: + This class has three subclasses corresponding to the three + RPC methods for sending row mutations: - A :class:`Row` accumulates mutations locally via the :meth:`set_cell`, - :meth:`delete`, :meth:`delete_cell` and :meth:`delete_cells` methods. - To actually send these mutations to the Google Cloud Bigtable API, you - must call :meth:`commit`. If a ``filter_`` is set on the :class:`Row`, - the mutations must have an associated state: :data:`True` or - :data:`False`. The mutations will be applied conditionally, based on - whether the filter matches any cells in the :class:`Row` or not. + * :class:`DirectRow` for ``MutateRow`` + * :class:`ConditionalRow` for ``CheckAndMutateRow`` + * :class:`AppendRow` for ``ReadModifyWriteRow`` :type row_key: bytes :param row_key: The key for the current row. :type table: :class:`Table ` :param table: The table that owns the row. + """ - :type filter_: :class:`RowFilter` - :param filter_: (Optional) Filter to be used for conditional mutations. - If a filter is set, then the :class:`Row` will accumulate - mutations for either a :data:`True` or :data:`False` state. - When :meth:`commit`-ed, the mutations for the :data:`True` - state will be applied if the filter matches any cells in - the row, otherwise the :data:`False` state will be. + def __init__(self, row_key, table): + self._row_key = _to_bytes(row_key) + self._table = table + + +class DirectRow(Row): + """Google Cloud Bigtable Row for sending "direct" mutations. + + These mutations directly set or delete cell contents: + + * :meth:`set_cell` + * :meth:`delete` + * :meth:`delete_cell` + * :meth:`delete_cells` + + These methods can be used directly:: + + >>> row = table.row(b'row-key1') + >>> row.set_cell(u'fam', b'col1', b'cell-val') + >>> row.delete_cell(u'fam', b'col2') + + .. note:: + + A :class:`DirectRow` accumulates mutations locally via the + :meth:`set_cell`, :meth:`delete`, :meth:`delete_cell` and + :meth:`delete_cells` methods. 
To actually send these mutations to the + Google Cloud Bigtable API, you must call :meth:`commit`. + + :type row_key: bytes + :param row_key: The key for the current row. + + :type table: :class:`Table ` + :param table: The table that owns the row. """ ALL_COLUMNS = object() """Sentinel value used to indicate all columns in a column family.""" - def __init__(self, row_key, table, filter_=None): - self._row_key = _to_bytes(row_key) - self._table = table - self._filter = filter_ - self._rule_pb_list = [] - if self._filter is None: - self._pb_mutations = [] - self._true_pb_mutations = None - self._false_pb_mutations = None - else: - self._pb_mutations = None - self._true_pb_mutations = [] - self._false_pb_mutations = [] + def __init__(self, row_key, table): + super(DirectRow, self).__init__(row_key, table) + self._pb_mutations = [] - def _get_mutations(self, state=None): + def _get_mutations(self, state): # pylint: disable=unused-argument """Gets the list of mutations for a given state. - If the state is :data`None` but there is a filter set, then we've - reached an invalid state. Similarly if no filter is set but the - state is not :data:`None`. + ``state`` is unused by :class:`DirectRow` but is used by + subclasses. :type state: bool - :param state: (Optional) The state that the mutation should be - applied in. Unset if the mutation is not conditional, - otherwise :data:`True` or :data:`False`. + :param state: The state that the mutation should be + applied in. :rtype: list :returns: The list to add new mutations to (for the current state). - :raises: :class:`ValueError ` """ - if state is None: - if self._filter is not None: - raise ValueError('A filter is set on the current row, but no ' - 'state given for the mutation') - return self._pb_mutations - else: - if self._filter is None: - raise ValueError('No filter was set on the current row, but a ' - 'state was given for the mutation') - if state: - return self._true_pb_mutations - else: - return self._false_pb_mutations + return self._pb_mutations - def set_cell(self, column_family_id, column, value, timestamp=None, - state=None): - """Sets a value in this row. + def _set_cell(self, column_family_id, column, value, timestamp=None, + state=None): + """Helper for :meth:`set_cell` - The cell is determined by the ``row_key`` of the :class:`Row` and the - ``column``. The ``column`` must be in an existing - :class:`.ColumnFamily` (as determined by ``column_family_id``). - - .. note:: - - This method adds a mutation to the accumulated mutations on this - :class:`Row`, but does not make an API request. To actually - send an API request (with the mutations) to the Google Cloud - Bigtable API, call :meth:`commit`. + ``state`` is unused by :class:`DirectRow` but is used by + subclasses. :type column_family_id: str :param column_family_id: The column family that contains the column. @@ -141,9 +132,8 @@ def set_cell(self, column_family_id, column, value, timestamp=None, :param timestamp: (Optional) The timestamp of the operation. :type state: bool - :param state: (Optional) The state that the mutation should be - applied in. Unset if the mutation is not conditional, - otherwise :data:`True` or :data:`False`. + :param state: (Optional) The state that is passed along to + :meth:`_get_mutations`. 
""" column = _to_bytes(column) if isinstance(value, six.integer_types): @@ -166,49 +156,19 @@ def set_cell(self, column_family_id, column, value, timestamp=None, mutation_pb = data_pb2.Mutation(set_cell=mutation_val) self._get_mutations(state).append(mutation_pb) - def append_cell_value(self, column_family_id, column, value): - """Appends a value to an existing cell. - - .. note:: - - This method adds a read-modify rule protobuf to the accumulated - read-modify rules on this :class:`Row`, but does not make an API - request. To actually send an API request (with the rules) to the - Google Cloud Bigtable API, call :meth:`commit_modifications`. - - :type column_family_id: str - :param column_family_id: The column family that contains the column. - Must be of the form - ``[_a-zA-Z0-9][-_.a-zA-Z0-9]*``. - - :type column: bytes - :param column: The column within the column family where the cell - is located. - - :type value: bytes - :param value: The value to append to the existing value in the cell. If - the targeted cell is unset, it will be treated as - containing the empty string. - """ - column = _to_bytes(column) - value = _to_bytes(value) - rule_pb = data_pb2.ReadModifyWriteRule(family_name=column_family_id, - column_qualifier=column, - append_value=value) - self._rule_pb_list.append(rule_pb) - - def increment_cell_value(self, column_family_id, column, int_value): - """Increments a value in an existing cell. + def set_cell(self, column_family_id, column, value, timestamp=None): + """Sets a value in this row. - Assumes the value in the cell is stored as a 64 bit integer - serialized to bytes. + The cell is determined by the ``row_key`` of this :class:`DirectRow` + and the ``column``. The ``column`` must be in an existing + :class:`.ColumnFamily` (as determined by ``column_family_id``). .. note:: - This method adds a read-modify rule protobuf to the accumulated - read-modify rules on this :class:`Row`, but does not make an API - request. To actually send an API request (with the rules) to the - Google Cloud Bigtable API, call :meth:`commit_modifications`. + This method adds a mutation to the accumulated mutations on this + row, but does not make an API request. To actually + send an API request (with the mutations) to the Google Cloud + Bigtable API, call :meth:`commit`. :type column_family_id: str :param column_family_id: The column family that contains the column. @@ -219,81 +179,49 @@ def increment_cell_value(self, column_family_id, column, int_value): :param column: The column within the column family where the cell is located. - :type int_value: int - :param int_value: The value to increment the existing value in the cell - by. If the targeted cell is unset, it will be treated - as containing a zero. Otherwise, the targeted cell - must contain an 8-byte value (interpreted as a 64-bit - big-endian signed integer), or the entire request - will fail. - """ - column = _to_bytes(column) - rule_pb = data_pb2.ReadModifyWriteRule(family_name=column_family_id, - column_qualifier=column, - increment_amount=int_value) - self._rule_pb_list.append(rule_pb) + :type value: bytes or :class:`int` + :param value: The value to set in the cell. If an integer is used, + will be interpreted as a 64-bit big-endian signed + integer (8 bytes). - def delete(self, state=None): - """Deletes this row from the table. + :type timestamp: :class:`datetime.datetime` + :param timestamp: (Optional) The timestamp of the operation. 
+ """ + self._set_cell(column_family_id, column, value, timestamp=timestamp, + state=None) - .. note:: + def _delete(self, state=None): + """Helper for :meth:`delete` - This method adds a mutation to the accumulated mutations on this - :class:`Row`, but does not make an API request. To actually - send an API request (with the mutations) to the Google Cloud - Bigtable API, call :meth:`commit`. + ``state`` is unused by :class:`DirectRow` but is used by + subclasses. :type state: bool - :param state: (Optional) The state that the mutation should be - applied in. Unset if the mutation is not conditional, - otherwise :data:`True` or :data:`False`. + :param state: (Optional) The state that is passed along to + :meth:`_get_mutations`. """ mutation_val = data_pb2.Mutation.DeleteFromRow() mutation_pb = data_pb2.Mutation(delete_from_row=mutation_val) self._get_mutations(state).append(mutation_pb) - def delete_cell(self, column_family_id, column, time_range=None, - state=None): - """Deletes cell in this row. + def delete(self): + """Deletes this row from the table. .. note:: This method adds a mutation to the accumulated mutations on this - :class:`Row`, but does not make an API request. To actually + row, but does not make an API request. To actually send an API request (with the mutations) to the Google Cloud Bigtable API, call :meth:`commit`. - - :type column_family_id: str - :param column_family_id: The column family that contains the column - or columns with cells being deleted. Must be - of the form ``[_a-zA-Z0-9][-_.a-zA-Z0-9]*``. - - :type column: bytes - :param column: The column within the column family that will have a - cell deleted. - - :type time_range: :class:`TimestampRange` - :param time_range: (Optional) The range of time within which cells - should be deleted. - - :type state: bool - :param state: (Optional) The state that the mutation should be - applied in. Unset if the mutation is not conditional, - otherwise :data:`True` or :data:`False`. """ - self.delete_cells(column_family_id, [column], time_range=time_range, - state=state) + self._delete(state=None) - def delete_cells(self, column_family_id, columns, time_range=None, - state=None): - """Deletes cells in this row. - - .. note:: + def _delete_cells(self, column_family_id, columns, time_range=None, + state=None): + """Helper for :meth:`delete_cell` and :meth:`delete_cells`. - This method adds a mutation to the accumulated mutations on this - :class:`Row`, but does not make an API request. To actually - send an API request (with the mutations) to the Google Cloud - Bigtable API, call :meth:`commit`. + ``state`` is unused by :class:`DirectRow` but is used by + subclasses. :type column_family_id: str :param column_family_id: The column family that contains the column @@ -303,7 +231,7 @@ def delete_cells(self, column_family_id, columns, time_range=None, :type columns: :class:`list` of :class:`str` / :func:`unicode `, or :class:`object` :param columns: The columns within the column family that will have - cells deleted. If :attr:`Row.ALL_COLUMNS` is used then + cells deleted. If :attr:`ALL_COLUMNS` is used then the entire column family will be deleted from the row. :type time_range: :class:`TimestampRange` @@ -311,9 +239,8 @@ def delete_cells(self, column_family_id, columns, time_range=None, should be deleted. :type state: bool - :param state: (Optional) The state that the mutation should be - applied in. Unset if the mutation is not conditional, - otherwise :data:`True` or :data:`False`. 
+ :param state: (Optional) The state that is passed along to + :meth:`_get_mutations`. """ mutations_list = self._get_mutations(state) if columns is self.ALL_COLUMNS: @@ -346,16 +273,76 @@ def delete_cells(self, column_family_id, columns, time_range=None, # processed without error. mutations_list.extend(to_append) - def _commit_mutate(self): + def delete_cell(self, column_family_id, column, time_range=None): + """Deletes cell in this row. + + .. note:: + + This method adds a mutation to the accumulated mutations on this + row, but does not make an API request. To actually + send an API request (with the mutations) to the Google Cloud + Bigtable API, call :meth:`commit`. + + :type column_family_id: str + :param column_family_id: The column family that contains the column + or columns with cells being deleted. Must be + of the form ``[_a-zA-Z0-9][-_.a-zA-Z0-9]*``. + + :type column: bytes + :param column: The column within the column family that will have a + cell deleted. + + :type time_range: :class:`TimestampRange` + :param time_range: (Optional) The range of time within which cells + should be deleted. + """ + self._delete_cells(column_family_id, [column], time_range=time_range, + state=None) + + def delete_cells(self, column_family_id, columns, time_range=None): + """Deletes cells in this row. + + .. note:: + + This method adds a mutation to the accumulated mutations on this + row, but does not make an API request. To actually + send an API request (with the mutations) to the Google Cloud + Bigtable API, call :meth:`commit`. + + :type column_family_id: str + :param column_family_id: The column family that contains the column + or columns with cells being deleted. Must be + of the form ``[_a-zA-Z0-9][-_.a-zA-Z0-9]*``. + + :type columns: :class:`list` of :class:`str` / + :func:`unicode `, or :class:`object` + :param columns: The columns within the column family that will have + cells deleted. If :attr:`ALL_COLUMNS` is used then + the entire column family will be deleted from the row. + + :type time_range: :class:`TimestampRange` + :param time_range: (Optional) The range of time within which cells + should be deleted. + """ + self._delete_cells(column_family_id, columns, time_range=time_range, + state=None) + + def commit(self): """Makes a ``MutateRow`` API request. - Assumes no filter is set on the :class:`Row` and is meant to be called - by :meth:`commit`. + If no mutations have been created in the row, no request is made. + + Mutations are applied atomically and in order, meaning that earlier + mutations can be masked / negated by later ones. Cells already present + in the row are left unchanged unless explicitly changed by a mutation. + + After committing the accumulated mutations, resets the local + mutations to an empty list. :raises: :class:`ValueError ` if the number of mutations exceeds the :data:`MAX_MUTATIONS`. """ - mutations_list = self._get_mutations() + mutations_list = self._get_mutations(None) num_mutations = len(mutations_list) if num_mutations == 0: return @@ -370,12 +357,93 @@ def _commit_mutate(self): # We expect a `google.protobuf.empty_pb2.Empty` client = self._table._cluster._client client._data_stub.MutateRow(request_pb, client.timeout_seconds) + self.clear() + + def clear(self): + """Removes all currently accumulated mutations on the current row.""" + del self._pb_mutations[:] + + +class ConditionalRow(DirectRow): + """Google Cloud Bigtable Row for sending mutations conditionally. + + Each mutation has an associated state: :data:`True` or :data:`False`. 
+ When :meth:`commit`-ed, the mutations for the :data:`True` + state will be applied if the filter matches any cells in + the row, otherwise the :data:`False` state will be applied. + + A :class:`ConditionalRow` accumulates mutations in the same way a + :class:`DirectRow` does: + + * :meth:`set_cell` + * :meth:`delete` + * :meth:`delete_cell` + * :meth:`delete_cells` + + with the only change the extra ``state`` parameter:: + + >>> row_cond = table.row(b'row-key2', filter_=row_filter) + >>> row_cond.set_cell(u'fam', b'col', b'cell-val', state=True) + >>> row_cond.delete_cell(u'fam', b'col', state=False) - def _commit_check_and_mutate(self): + .. note:: + + As with :class:`DirectRow`, to actually send these mutations to the + Google Cloud Bigtable API, you must call :meth:`commit`. + + :type row_key: bytes + :param row_key: The key for the current row. + + :type table: :class:`Table ` + :param table: The table that owns the row. + + :type filter_: :class:`RowFilter` + :param filter_: Filter to be used for conditional mutations. + """ + def __init__(self, row_key, table, filter_): + super(ConditionalRow, self).__init__(row_key, table) + self._filter = filter_ + # NOTE: We use self._pb_mutations to hold the state=True mutations. + self._false_pb_mutations = [] + + def _get_mutations(self, state): + """Gets the list of mutations for a given state. + + Over-ridden so that the state can be used in: + + * :meth:`set_cell` + * :meth:`delete` + * :meth:`delete_cell` + * :meth:`delete_cells` + + :type state: bool + :param state: The state that the mutation should be + applied in. + + :rtype: list + :returns: The list to add new mutations to (for the current state). + """ + if state: + return self._pb_mutations + else: + return self._false_pb_mutations + + def commit(self): """Makes a ``CheckAndMutateRow`` API request. - Assumes a filter is set on the :class:`Row` and is meant to be called - by :meth:`commit`. + If no mutations have been created in the row, no request is made. + + The mutations will be applied conditionally, based on whether the + filter matches any cells in the :class:`ConditionalRow` or not. (Each + method which adds a mutation has a ``state`` parameter for this + purpose.) + + Mutations are applied atomically and in order, meaning that earlier + mutations can be masked / negated by later ones. Cells already present + in the row are left unchanged unless explicitly changed by a mutation. + + After committing the accumulated mutations, resets the local + mutations. :rtype: bool :returns: Flag indicating if the filter was matched (which also @@ -407,70 +475,253 @@ def _commit_check_and_mutate(self): client = self._table._cluster._client resp = client._data_stub.CheckAndMutateRow( request_pb, client.timeout_seconds) + self.clear() return resp.predicate_matched - def clear_mutations(self): - """Removes all currently accumulated mutations on the current row.""" - if self._filter is None: - del self._pb_mutations[:] - else: - del self._true_pb_mutations[:] - del self._false_pb_mutations[:] + # pylint: disable=arguments-differ + def set_cell(self, column_family_id, column, value, timestamp=None, + state=True): + """Sets a value in this row. - def commit(self): - """Makes a ``MutateRow`` or ``CheckAndMutateRow`` API request. + The cell is determined by the ``row_key`` of this + :class:`ConditionalRow` and the ``column``. The ``column`` must be in + an existing :class:`.ColumnFamily` (as determined by + ``column_family_id``). 
- If no mutations have been created in the row, no request is made. + .. note:: - Mutations are applied atomically and in order, meaning that earlier - mutations can be masked / negated by later ones. Cells already present - in the row are left unchanged unless explicitly changed by a mutation. + This method adds a mutation to the accumulated mutations on this + row, but does not make an API request. To actually + send an API request (with the mutations) to the Google Cloud + Bigtable API, call :meth:`commit`. - After committing the accumulated mutations, resets the local - mutations to an empty list. + :type column_family_id: str + :param column_family_id: The column family that contains the column. + Must be of the form + ``[_a-zA-Z0-9][-_.a-zA-Z0-9]*``. - In the case that a filter is set on the :class:`Row`, the mutations - will be applied conditionally, based on whether the filter matches - any cells in the :class:`Row` or not. (Each method which adds a - mutation has a ``state`` parameter for this purpose.) + :type column: bytes + :param column: The column within the column family where the cell + is located. - :rtype: :class:`bool` or :data:`NoneType ` - :returns: :data:`None` if there is no filter, otherwise a flag - indicating if the filter was matched (which also - indicates which set of mutations were applied by the server). - :raises: :class:`ValueError ` if the number of - mutations exceeds the :data:`MAX_MUTATIONS`. + :type value: bytes or :class:`int` + :param value: The value to set in the cell. If an integer is used, + will be interpreted as a 64-bit big-endian signed + integer (8 bytes). + + :type timestamp: :class:`datetime.datetime` + :param timestamp: (Optional) The timestamp of the operation. + + :type state: bool + :param state: (Optional) The state that the mutation should be + applied in. Defaults to :data:`True`. """ - if self._filter is None: - result = self._commit_mutate() - else: - result = self._commit_check_and_mutate() + self._set_cell(column_family_id, column, value, timestamp=timestamp, + state=state) + + def delete(self, state=True): + """Deletes this row from the table. + + .. note:: + + This method adds a mutation to the accumulated mutations on this + row, but does not make an API request. To actually + send an API request (with the mutations) to the Google Cloud + Bigtable API, call :meth:`commit`. + + :type state: bool + :param state: (Optional) The state that the mutation should be + applied in. Defaults to :data:`True`. + """ + self._delete(state=state) + + def delete_cell(self, column_family_id, column, time_range=None, + state=True): + """Deletes cell in this row. + + .. note:: + + This method adds a mutation to the accumulated mutations on this + row, but does not make an API request. To actually + send an API request (with the mutations) to the Google Cloud + Bigtable API, call :meth:`commit`. + + :type column_family_id: str + :param column_family_id: The column family that contains the column + or columns with cells being deleted. Must be + of the form ``[_a-zA-Z0-9][-_.a-zA-Z0-9]*``. + + :type column: bytes + :param column: The column within the column family that will have a + cell deleted. + + :type time_range: :class:`TimestampRange` + :param time_range: (Optional) The range of time within which cells + should be deleted. + + :type state: bool + :param state: (Optional) The state that the mutation should be + applied in. Defaults to :data:`True`. 
+ """ + self._delete_cells(column_family_id, [column], time_range=time_range, + state=state) + + def delete_cells(self, column_family_id, columns, time_range=None, + state=True): + """Deletes cells in this row. + + .. note:: + + This method adds a mutation to the accumulated mutations on this + row, but does not make an API request. To actually + send an API request (with the mutations) to the Google Cloud + Bigtable API, call :meth:`commit`. + + :type column_family_id: str + :param column_family_id: The column family that contains the column + or columns with cells being deleted. Must be + of the form ``[_a-zA-Z0-9][-_.a-zA-Z0-9]*``. + + :type columns: :class:`list` of :class:`str` / + :func:`unicode `, or :class:`object` + :param columns: The columns within the column family that will have + cells deleted. If + :attr:`ALL_COLUMNS <.DirectRow.ALL_COLUMNS>` is used + then the entire column family will be deleted from + the row. + + :type time_range: :class:`TimestampRange` + :param time_range: (Optional) The range of time within which cells + should be deleted. + + :type state: bool + :param state: (Optional) The state that the mutation should be + applied in. Defaults to :data:`True`. + """ + self._delete_cells(column_family_id, columns, time_range=time_range, + state=state) + # pylint: enable=arguments-differ + + def clear(self): + """Removes all currently accumulated mutations on the current row.""" + del self._pb_mutations[:] + del self._false_pb_mutations[:] + + +class AppendRow(Row): + """Google Cloud Bigtable Row for sending append mutations. + + These mutations are intended to augment the value of an existing cell + and uses the methods: + + * :meth:`append_cell_value` + * :meth:`increment_cell_value` + + The first works by appending bytes and the second by incrementing an + integer (stored in the cell as 8 bytes). In either case, if the + cell is empty, assumes the default empty value (empty string for + bytes or and 0 for integer). - # Reset mutations after commit-ing request. - self.clear_mutations() + :type row_key: bytes + :param row_key: The key for the current row. - return result + :type table: :class:`Table ` + :param table: The table that owns the row. + """ + + def __init__(self, row_key, table): + super(AppendRow, self).__init__(row_key, table) + self._rule_pb_list = [] - def clear_modification_rules(self): + def clear(self): """Removes all currently accumulated modifications on current row.""" del self._rule_pb_list[:] - def commit_modifications(self): + def append_cell_value(self, column_family_id, column, value): + """Appends a value to an existing cell. + + .. note:: + + This method adds a read-modify rule protobuf to the accumulated + read-modify rules on this row, but does not make an API + request. To actually send an API request (with the rules) to the + Google Cloud Bigtable API, call :meth:`commit`. + + :type column_family_id: str + :param column_family_id: The column family that contains the column. + Must be of the form + ``[_a-zA-Z0-9][-_.a-zA-Z0-9]*``. + + :type column: bytes + :param column: The column within the column family where the cell + is located. + + :type value: bytes + :param value: The value to append to the existing value in the cell. If + the targeted cell is unset, it will be treated as + containing the empty string. 
+ """ + column = _to_bytes(column) + value = _to_bytes(value) + rule_pb = data_pb2.ReadModifyWriteRule(family_name=column_family_id, + column_qualifier=column, + append_value=value) + self._rule_pb_list.append(rule_pb) + + def increment_cell_value(self, column_family_id, column, int_value): + """Increments a value in an existing cell. + + Assumes the value in the cell is stored as a 64 bit integer + serialized to bytes. + + .. note:: + + This method adds a read-modify rule protobuf to the accumulated + read-modify rules on this row, but does not make an API + request. To actually send an API request (with the rules) to the + Google Cloud Bigtable API, call :meth:`commit`. + + :type column_family_id: str + :param column_family_id: The column family that contains the column. + Must be of the form + ``[_a-zA-Z0-9][-_.a-zA-Z0-9]*``. + + :type column: bytes + :param column: The column within the column family where the cell + is located. + + :type int_value: int + :param int_value: The value to increment the existing value in the cell + by. If the targeted cell is unset, it will be treated + as containing a zero. Otherwise, the targeted cell + must contain an 8-byte value (interpreted as a 64-bit + big-endian signed integer), or the entire request + will fail. + """ + column = _to_bytes(column) + rule_pb = data_pb2.ReadModifyWriteRule(family_name=column_family_id, + column_qualifier=column, + increment_amount=int_value) + self._rule_pb_list.append(rule_pb) + + def commit(self): """Makes a ``ReadModifyWriteRow`` API request. This commits modifications made by :meth:`append_cell_value` and :meth:`increment_cell_value`. If no modifications were made, makes no API request and just returns ``{}``. - Modifies a row atomically, reading the latest existing timestamp/value - from the specified columns and writing a new value by appending / - incrementing. The new cell created uses either the current server time - or the highest timestamp of a cell in that column (if it exceeds the - server time). + Modifies a row atomically, reading the latest existing + timestamp / value from the specified columns and writing a new value by + appending / incrementing. The new cell created uses either the current + server time or the highest timestamp of a cell in that column (if it + exceeds the server time). + + After committing the accumulated mutations, resets the local mutations. .. code:: python - >>> row.commit_modifications() + >>> append_row.commit() { u'col-fam-id': { b'col-name1': [ @@ -494,10 +745,15 @@ def commit_modifications(self): dictionary of columns. Each column contains a list of cells modified. Each cell is represented with a two-tuple with the value (in bytes) and the timestamp for the cell. - + :raises: :class:`ValueError ` if the number of + mutations exceeds the :data:`MAX_MUTATIONS`. """ - if len(self._rule_pb_list) == 0: + num_mutations = len(self._rule_pb_list) + if num_mutations == 0: return {} + if num_mutations > MAX_MUTATIONS: + raise ValueError('%d total append mutations exceed the maximum ' + 'allowable %d.' % (num_mutations, MAX_MUTATIONS)) request_pb = messages_pb2.ReadModifyWriteRowRequest( table_name=self._table.name, row_key=self._row_key, @@ -509,7 +765,7 @@ def commit_modifications(self): request_pb, client.timeout_seconds) # Reset modifications after commit-ing request. - self.clear_modification_rules() + self.clear() # NOTE: We expect row_response.key == self._row_key but don't check. 
return _parse_rmw_row_response(row_response) diff --git a/gcloud/bigtable/table.py b/gcloud/bigtable/table.py index 81e967218a00..5815086d7c00 100644 --- a/gcloud/bigtable/table.py +++ b/gcloud/bigtable/table.py @@ -23,7 +23,9 @@ bigtable_service_messages_pb2 as data_messages_pb2) from gcloud.bigtable.column_family import _gc_rule_from_pb from gcloud.bigtable.column_family import ColumnFamily -from gcloud.bigtable.row import Row +from gcloud.bigtable.row import AppendRow +from gcloud.bigtable.row import ConditionalRow +from gcloud.bigtable.row import DirectRow from gcloud.bigtable.row_data import PartialRowData from gcloud.bigtable.row_data import PartialRowsData @@ -95,20 +97,38 @@ def column_family(self, column_family_id, gc_rule=None): """ return ColumnFamily(column_family_id, self, gc_rule=gc_rule) - def row(self, row_key, filter_=None): + def row(self, row_key, filter_=None, append=False): """Factory to create a row associated with this table. + .. warning:: + + At most one of ``filter_`` and ``append`` can be used in a + :class:`Row`. + :type row_key: bytes :param row_key: The key for the row being created. :type filter_: :class:`.RowFilter` :param filter_: (Optional) Filter to be used for conditional mutations. - See :class:`.Row` for more details. + See :class:`.DirectRow` for more details. + + :type append: bool + :param append: (Optional) Flag to determine if the row should be used + for append mutations. - :rtype: :class:`.Row` + :rtype: :class:`.DirectRow` :returns: A row owned by this table. + :raises: :class:`ValueError ` if both + ``filter_`` and ``append`` are used. """ - return Row(row_key, self, filter_=filter_) + if append and filter_ is not None: + raise ValueError('At most one of filter_ and append can be set') + if append: + return AppendRow(row_key, self) + elif filter_ is not None: + return ConditionalRow(row_key, self, filter_=filter_) + else: + return DirectRow(row_key, self) def __eq__(self, other): if not isinstance(other, self.__class__): diff --git a/gcloud/bigtable/test_row.py b/gcloud/bigtable/test_row.py index b2385be2901c..4caa6668e2b2 100644 --- a/gcloud/bigtable/test_row.py +++ b/gcloud/bigtable/test_row.py @@ -16,11 +16,11 @@ import unittest2 -class TestRow(unittest2.TestCase): +class TestDirectRow(unittest2.TestCase): def _getTargetClass(self): - from gcloud.bigtable.row import Row - return Row + from gcloud.bigtable.row import DirectRow + return DirectRow def _makeOne(self, *args, **kwargs): return self._getTargetClass()(*args, **kwargs) @@ -28,12 +28,11 @@ def _makeOne(self, *args, **kwargs): def test_constructor(self): row_key = b'row_key' table = object() - filter_ = object() - row = self._makeOne(row_key, table, filter_=filter_) + row = self._makeOne(row_key, table) self.assertEqual(row._row_key, row_key) self.assertTrue(row._table is table) - self.assertTrue(row._filter is filter_) + self.assertEqual(row._pb_mutations, []) def test_constructor_with_unicode(self): row_key = u'row_key' @@ -49,45 +48,12 @@ def test_constructor_with_non_bytes(self): with self.assertRaises(TypeError): self._makeOne(row_key, None) - def _get_mutations_helper(self, filter_=None, state=None): + def test__get_mutations(self): row_key = b'row_key' - row = self._makeOne(row_key, None, filter_=filter_) - # Mock the mutations with unique objects so we can compare. 
- row._pb_mutations = no_bool = object() - row._true_pb_mutations = true_mutations = object() - row._false_pb_mutations = false_mutations = object() - - mutations = row._get_mutations(state) - return (no_bool, true_mutations, false_mutations), mutations - - def test__get_mutations_no_filter(self): - (no_bool, _, _), mutations = self._get_mutations_helper() - self.assertTrue(mutations is no_bool) + row = self._makeOne(row_key, None) - def test__get_mutations_no_filter_bad_state(self): - state = object() # State should be null when no filter. - with self.assertRaises(ValueError): - self._get_mutations_helper(state=state) - - def test__get_mutations_with_filter_true_state(self): - filter_ = object() - state = True - (_, true_filter, _), mutations = self._get_mutations_helper( - filter_=filter_, state=state) - self.assertTrue(mutations is true_filter) - - def test__get_mutations_with_filter_false_state(self): - filter_ = object() - state = False - (_, _, false_filter), mutations = self._get_mutations_helper( - filter_=filter_, state=state) - self.assertTrue(mutations is false_filter) - - def test__get_mutations_with_filter_bad_state(self): - filter_ = object() - state = None - with self.assertRaises(ValueError): - self._get_mutations_helper(filter_=filter_, state=state) + row._pb_mutations = mutations = object() + self.assertTrue(mutations is row._get_mutations(None)) def _set_cell_helper(self, column=None, column_bytes=None, value=b'foobar', timestamp=None, @@ -152,40 +118,6 @@ def test_set_cell_with_non_null_timestamp(self): self._set_cell_helper(timestamp=timestamp, timestamp_micros=millis_granularity) - def test_append_cell_value(self): - from gcloud.bigtable._generated import bigtable_data_pb2 as data_pb2 - - table = object() - row_key = b'row_key' - row = self._makeOne(row_key, table) - self.assertEqual(row._rule_pb_list, []) - - column = b'column' - column_family_id = u'column_family_id' - value = b'bytes-val' - row.append_cell_value(column_family_id, column, value) - expected_pb = data_pb2.ReadModifyWriteRule( - family_name=column_family_id, column_qualifier=column, - append_value=value) - self.assertEqual(row._rule_pb_list, [expected_pb]) - - def test_increment_cell_value(self): - from gcloud.bigtable._generated import bigtable_data_pb2 as data_pb2 - - table = object() - row_key = b'row_key' - row = self._makeOne(row_key, table) - self.assertEqual(row._rule_pb_list, []) - - column = b'column' - column_family_id = u'column_family_id' - int_value = 281330 - row.increment_cell_value(column_family_id, column, int_value) - expected_pb = data_pb2.ReadModifyWriteRule( - family_name=column_family_id, column_qualifier=column, - increment_amount=int_value) - self.assertEqual(row._rule_pb_list, [expected_pb]) - def test_delete(self): from gcloud.bigtable._generated import bigtable_data_pb2 as data_pb2 @@ -210,7 +142,7 @@ def __init__(self, *args, **kwargs): self._kwargs = [] # Replace the called method with one that logs arguments. - def delete_cells(self, *args, **kwargs): + def _delete_cells(self, *args, **kwargs): self._args.append(args) self._kwargs.append(kwargs) @@ -408,8 +340,6 @@ def test_commit(self): {}, )]) self.assertEqual(row._pb_mutations, []) - self.assertEqual(row._true_pb_mutations, None) - self.assertEqual(row._false_pb_mutations, None) def test_commit_too_many_mutations(self): from gcloud._testing import _Monkey @@ -442,7 +372,40 @@ def test_commit_no_mutations(self): # Make sure no request was sent. 
self.assertEqual(stub.method_calls, []) - def test_commit_with_filter(self): + +class TestConditionalRow(unittest2.TestCase): + + def _getTargetClass(self): + from gcloud.bigtable.row import ConditionalRow + return ConditionalRow + + def _makeOne(self, *args, **kwargs): + return self._getTargetClass()(*args, **kwargs) + + def test_constructor(self): + row_key = b'row_key' + table = object() + filter_ = object() + + row = self._makeOne(row_key, table, filter_=filter_) + self.assertEqual(row._row_key, row_key) + self.assertTrue(row._table is table) + self.assertTrue(row._filter is filter_) + self.assertEqual(row._pb_mutations, []) + self.assertEqual(row._false_pb_mutations, []) + + def test__get_mutations(self): + row_key = b'row_key' + filter_ = object() + row = self._makeOne(row_key, None, filter_=filter_) + + row._pb_mutations = true_mutations = object() + row._false_pb_mutations = false_mutations = object() + self.assertTrue(true_mutations is row._get_mutations(True)) + self.assertTrue(false_mutations is row._get_mutations(False)) + self.assertTrue(false_mutations is row._get_mutations(None)) + + def test_commit(self): from gcloud.bigtable._generated import bigtable_data_pb2 as data_pb2 from gcloud.bigtable._generated import ( bigtable_service_messages_pb2 as messages_pb2) @@ -451,8 +414,11 @@ def test_commit_with_filter(self): row_key = b'row_key' table_name = 'projects/more-stuff' - column_family_id = u'column_family_id' - column = b'column' + column_family_id1 = u'column_family_id1' + column_family_id2 = u'column_family_id2' + column_family_id3 = u'column_family_id3' + column1 = b'column1' + column2 = b'column2' timeout_seconds = 262 client = _Client(timeout_seconds=timeout_seconds) table = _Table(table_name, client=client) @@ -463,26 +429,31 @@ def test_commit_with_filter(self): value1 = b'bytes-value' mutation1 = data_pb2.Mutation( set_cell=data_pb2.Mutation.SetCell( - family_name=column_family_id, - column_qualifier=column, + family_name=column_family_id1, + column_qualifier=column1, timestamp_micros=-1, # Default value. value=value1, ), ) - value2 = b'other-bytes' mutation2 = data_pb2.Mutation( - set_cell=data_pb2.Mutation.SetCell( - family_name=column_family_id, - column_qualifier=column, - timestamp_micros=-1, # Default value. - value=value2, + delete_from_row=data_pb2.Mutation.DeleteFromRow(), + ) + mutation3 = data_pb2.Mutation( + delete_from_column=data_pb2.Mutation.DeleteFromColumn( + family_name=column_family_id2, + column_qualifier=column2, + ), + ) + mutation4 = data_pb2.Mutation( + delete_from_family=data_pb2.Mutation.DeleteFromFamily( + family_name=column_family_id3, ), ) request_pb = messages_pb2.CheckAndMutateRowRequest( table_name=table_name, row_key=row_key, predicate_filter=row_filter.to_pb(), - true_mutations=[mutation1], + true_mutations=[mutation1, mutation3, mutation4], false_mutations=[mutation2], ) @@ -498,8 +469,10 @@ def test_commit_with_filter(self): expected_result = predicate_matched # Perform the method and check the result. 
- row.set_cell(column_family_id, column, value1, state=True) - row.set_cell(column_family_id, column, value2, state=False) + row.set_cell(column_family_id1, column1, value1, state=True) + row.delete(state=False) + row.delete_cell(column_family_id2, column2, state=True) + row.delete_cells(column_family_id3, row.ALL_COLUMNS, state=True) result = row.commit() self.assertEqual(result, expected_result) self.assertEqual(stub.method_calls, [( @@ -507,11 +480,10 @@ def test_commit_with_filter(self): (request_pb, timeout_seconds), {}, )]) - self.assertEqual(row._pb_mutations, None) - self.assertEqual(row._true_pb_mutations, []) + self.assertEqual(row._pb_mutations, []) self.assertEqual(row._false_pb_mutations, []) - def test_commit_with_filter_too_many_mutations(self): + def test_commit_too_many_mutations(self): from gcloud._testing import _Monkey from gcloud.bigtable import row as MUT @@ -519,13 +491,13 @@ def test_commit_with_filter_too_many_mutations(self): table = object() filter_ = object() row = self._makeOne(row_key, table, filter_=filter_) - row._true_pb_mutations = [1, 2, 3] - num_mutations = len(row._true_pb_mutations) + row._pb_mutations = [1, 2, 3] + num_mutations = len(row._pb_mutations) with _Monkey(MUT, MAX_MUTATIONS=num_mutations - 1): with self.assertRaises(ValueError): row.commit() - def test_commit_with_filter_no_mutations(self): + def test_commit_no_mutations(self): from gcloud.bigtable._testing import _FakeStub row_key = b'row_key' @@ -533,7 +505,7 @@ def test_commit_with_filter_no_mutations(self): table = _Table(None, client=client) filter_ = object() row = self._makeOne(row_key, table, filter_=filter_) - self.assertEqual(row._true_pb_mutations, []) + self.assertEqual(row._pb_mutations, []) self.assertEqual(row._false_pb_mutations, []) # Patch the stub used by the API method. @@ -545,7 +517,68 @@ def test_commit_with_filter_no_mutations(self): # Make sure no request was sent. 
self.assertEqual(stub.method_calls, []) - def test_commit_modifications(self): + +class TestAppendRow(unittest2.TestCase): + + def _getTargetClass(self): + from gcloud.bigtable.row import AppendRow + return AppendRow + + def _makeOne(self, *args, **kwargs): + return self._getTargetClass()(*args, **kwargs) + + def test_constructor(self): + row_key = b'row_key' + table = object() + + row = self._makeOne(row_key, table) + self.assertEqual(row._row_key, row_key) + self.assertTrue(row._table is table) + self.assertEqual(row._rule_pb_list, []) + + def test_clear(self): + row_key = b'row_key' + table = object() + row = self._makeOne(row_key, table) + row._rule_pb_list = [1, 2, 3] + row.clear() + self.assertEqual(row._rule_pb_list, []) + + def test_append_cell_value(self): + from gcloud.bigtable._generated import bigtable_data_pb2 as data_pb2 + + table = object() + row_key = b'row_key' + row = self._makeOne(row_key, table) + self.assertEqual(row._rule_pb_list, []) + + column = b'column' + column_family_id = u'column_family_id' + value = b'bytes-val' + row.append_cell_value(column_family_id, column, value) + expected_pb = data_pb2.ReadModifyWriteRule( + family_name=column_family_id, column_qualifier=column, + append_value=value) + self.assertEqual(row._rule_pb_list, [expected_pb]) + + def test_increment_cell_value(self): + from gcloud.bigtable._generated import bigtable_data_pb2 as data_pb2 + + table = object() + row_key = b'row_key' + row = self._makeOne(row_key, table) + self.assertEqual(row._rule_pb_list, []) + + column = b'column' + column_family_id = u'column_family_id' + int_value = 281330 + row.increment_cell_value(column_family_id, column, int_value) + expected_pb = data_pb2.ReadModifyWriteRule( + family_name=column_family_id, column_qualifier=column, + increment_amount=int_value) + self.assertEqual(row._rule_pb_list, [expected_pb]) + + def test_commit(self): from gcloud._testing import _Monkey from gcloud.bigtable._generated import bigtable_data_pb2 as data_pb2 from gcloud.bigtable._generated import ( @@ -594,7 +627,7 @@ def mock_parse_rmw_row_response(row_response): # Perform the method and check the result. with _Monkey(MUT, _parse_rmw_row_response=mock_parse_rmw_row_response): row.append_cell_value(column_family_id, column, value) - result = row.commit_modifications() + result = row.commit() self.assertEqual(result, expected_result) self.assertEqual(stub.method_calls, [( @@ -602,14 +635,10 @@ def mock_parse_rmw_row_response(row_response): (request_pb, timeout_seconds), {}, )]) - self.assertEqual(row._pb_mutations, []) - self.assertEqual(row._true_pb_mutations, None) - self.assertEqual(row._false_pb_mutations, None) - self.assertEqual(row_responses, [response_pb]) self.assertEqual(row._rule_pb_list, []) - def test_commit_modifications_no_rules(self): + def test_commit_no_rules(self): from gcloud.bigtable._testing import _FakeStub row_key = b'row_key' @@ -622,11 +651,24 @@ def test_commit_modifications_no_rules(self): client._data_stub = stub = _FakeStub() # Perform the method and check the result. - result = row.commit_modifications() + result = row.commit() self.assertEqual(result, {}) # Make sure no request was sent. 
self.assertEqual(stub.method_calls, []) + def test_commit_too_many_mutations(self): + from gcloud._testing import _Monkey + from gcloud.bigtable import row as MUT + + row_key = b'row_key' + table = object() + row = self._makeOne(row_key, table) + row._rule_pb_list = [1, 2, 3] + num_mutations = len(row._rule_pb_list) + with _Monkey(MUT, MAX_MUTATIONS=num_mutations - 1): + with self.assertRaises(ValueError): + row.commit() + class Test_BoolFilter(unittest2.TestCase): diff --git a/gcloud/bigtable/test_table.py b/gcloud/bigtable/test_table.py index 882db22c4d51..65cfe0d87fe2 100644 --- a/gcloud/bigtable/test_table.py +++ b/gcloud/bigtable/test_table.py @@ -56,8 +56,20 @@ def test_column_family_factory(self): self.assertTrue(column_family.gc_rule is gc_rule) self.assertEqual(column_family._table, table) - def test_row_factory(self): - from gcloud.bigtable.row import Row + def test_row_factory_direct(self): + from gcloud.bigtable.row import DirectRow + + table_id = 'table-id' + table = self._makeOne(table_id, None) + row_key = b'row_key' + row = table.row(row_key) + + self.assertTrue(isinstance(row, DirectRow)) + self.assertEqual(row._row_key, row_key) + self.assertEqual(row._table, table) + + def test_row_factory_conditional(self): + from gcloud.bigtable.row import ConditionalRow table_id = 'table-id' table = self._makeOne(table_id, None) @@ -65,10 +77,27 @@ def test_row_factory(self): filter_ = object() row = table.row(row_key, filter_=filter_) - self.assertTrue(isinstance(row, Row)) + self.assertTrue(isinstance(row, ConditionalRow)) + self.assertEqual(row._row_key, row_key) + self.assertEqual(row._table, table) + + def test_row_factory_append(self): + from gcloud.bigtable.row import AppendRow + + table_id = 'table-id' + table = self._makeOne(table_id, None) + row_key = b'row_key' + row = table.row(row_key, append=True) + + self.assertTrue(isinstance(row, AppendRow)) self.assertEqual(row._row_key, row_key) self.assertEqual(row._table, table) - self.assertEqual(row._filter, filter_) + + def test_row_factory_failure(self): + table_id = 'table-id' + table = self._makeOne(table_id, None) + with self.assertRaises(ValueError): + table.row(b'row_key', filter_=object(), append=True) def test___eq__(self): table_id = 'table_id' diff --git a/scripts/pylintrc_default b/scripts/pylintrc_default index 9b99cdb0b5c5..dc4b50bed3c8 100644 --- a/scripts/pylintrc_default +++ b/scripts/pylintrc_default @@ -200,7 +200,7 @@ no-space-check = # Maximum number of lines in a module # DEFAULT: max-module-lines=1000 # RATIONALE: API-mapping -max-module-lines=1500 +max-module-lines=1593 # String used as indentation unit. This is usually " " (4 spaces) or "\t" (1 # tab). diff --git a/scripts/run_pylint.py b/scripts/run_pylint.py index 888d68a427dd..1e92422fc45c 100644 --- a/scripts/run_pylint.py +++ b/scripts/run_pylint.py @@ -66,7 +66,7 @@ } TEST_RC_REPLACEMENTS = { 'FORMAT': { - 'max-module-lines': 1800, + 'max-module-lines': 1825, }, } diff --git a/system_tests/bigtable.py b/system_tests/bigtable.py index 36ec696dad1d..5066e2e710de 100644 --- a/system_tests/bigtable.py +++ b/system_tests/bigtable.py @@ -332,7 +332,7 @@ def setUp(self): def tearDown(self): for row in self.rows_to_delete: - row.clear_mutations() + row.clear() row.delete() row.commit()
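
Taken together, the documentation and code changes above define one small, consistent surface for row mutations. The snippet below is a minimal usage sketch of that surface and is not part of the patch itself: it assumes a ``table`` (a ``gcloud.bigtable.table.Table``) and a ``row_filter`` (a ``RowFilter``) already exist, and the row keys, column family and column names are made up for illustration.

.. code:: python

    # Direct mutations (MutateRow): accumulate cell writes / deletes
    # locally, then send them in one batch with commit().
    row = table.row(b'row-key1')                        # returns a DirectRow
    row.set_cell(u'fam', b'col1', b'cell-val')
    row.delete_cell(u'fam', b'col2')
    row.commit()

    # Conditional mutations (CheckAndMutateRow): the same mutation
    # methods, plus a ``state`` flag tying each mutation to the filter
    # outcome.
    cond_row = table.row(b'row-key2', filter_=row_filter)   # ConditionalRow
    cond_row.set_cell(u'fam', b'col', b'cell-val', state=True)
    cond_row.delete(state=False)
    matched = cond_row.commit()    # True if the filter matched any cells

    # Append mutations (ReadModifyWriteRow): augment existing cells and
    # get the freshly written cells back from commit().
    append_row = table.row(b'row-key3', append=True)         # AppendRow
    append_row.append_cell_value(u'fam', b'col1', b'-suffix')
    append_row.increment_cell_value(u'fam', b'col2', 1)
    modified_cells = append_row.commit()

    # The factory accepts at most one of ``filter_`` and ``append``;
    # passing both raises ValueError.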
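The HappyBase ``counter_inc`` change above is the same pattern seen from the caller's side. A short sketch, assuming a table from this package's ``happybase`` layer (the row key and counter column are made up):

.. code:: python

    # counter_inc() now builds an AppendRow internally, via
    # table.row(..., append=True); callers still receive the
    # post-increment integer value.
    new_count = happybase_table.counter_inc(b'row-key', b'fam:visits', value=2)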