Managed backups #5
base: master
Conversation
…igtable into managed-backups
…oved to the ``Table`` class.
import re

from google.cloud._helpers import _datetime_to_pb_timestamp
Might need to import _pb_timestamp_to_datetime to update expire_time and/or create_time from the backup_pb message.
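A minimal sketch of that conversion, assuming the Backup object keeps datetimes on private attributes (the helper exists in google.cloud._helpers; the function and attribute names below are guesses, not the PR's code):

from google.cloud._helpers import _pb_timestamp_to_datetime

def _update_times_from_pb(backup, backup_pb):
    # Convert the protobuf Timestamp field into a datetime on the Backup object.
    backup._expire_time = _pb_timestamp_to_datetime(backup_pb.expire_time)
    # ...and similarly for a create/start timestamp field, if the message exposes one.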
)

@property
def cluster(self):
Nit: maybe call it cluster_id.
update_mask = {"paths": ["expire_time"]}
api = self._instance._client.table_admin_client
api.update_backup(backup_update, update_mask)
self._expire_time = new_expire_time
api.update_backup() returns a backup_pb message; is it reasonable here to update _expire_time from the returned backup_pb message?

updated_backup_pb = api.update_backup(backup_update, update_mask)
self._expire_time = _pb_timestamp_to_datetime(updated_backup_pb.expire_time)
result = []
for backup_pb in backup_list_pb:
    backup_id = backup_pb.name.split("/")[-1]
    backup_cluster_id = backup_pb.name.split("/")[-3]
    backup_expire_time = backup_pb.expire_time
    result.append(self.backup(
        backup_id,
        cluster_id=backup_cluster_id,
        expire_time=backup_expire_time
    ))

return result
Suggesting re-using the Backup.from_pb method here, which also verifies that backup_pb.name is as expected and sets the other values (start_time, end_time, size_bytes, state) as well.

return [Backup.from_pb(backup_pb, self) for backup_pb in backup_list_pb]
Also suggesting adding list_backups to the Instance class, to list the backups for a given instance. If so, this method should delegate to Instance.list_backups.
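A rough sketch of that delegation, with Table.list_backups forwarding to a proposed Instance.list_backups; the keyword arguments shown here are assumptions, not the PR's final signature:

def list_backups(self, cluster_id=None, filter_=None, order_by=None, page_size=0):
    # Table-level convenience: delegate to the instance-level method so the
    # two entry points share a single implementation.
    return self._instance.list_backups(
        cluster_id=cluster_id,
        filter_=filter_,
        order_by=order_by,
        page_size=page_size,
    )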
Scratch the suggestion above. Since gapic.list_backups returns a PageIterator, let's return an iterator to the user as well, without caching all the data.

Introduce a new method:
def _item_to_backup(self, iterator, backup_pb):
    # Convert a backup protobuf to a Backup object.
    return Backup.from_pb(backup_pb, self)
Use the above method to set the item_to_value property on the page iterator returned by gapic.list_backups, so it is applied on next():
backup_page_iter_pb = self._instance._client.table_admin_client.list_backups(
    self._instance.name + "/clusters/" + cluster_id,
    filter_=filter_,
    order_by=order_by,
    page_size=page_size,
)
backup_page_iter_pb.item_to_value = self._item_to_backup
return backup_page_iter_pb
WDYT?
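Hypothetical usage under that shape, with each Backup built lazily as the caller iterates (the Backup attribute names here are assumptions):

for backup in table.list_backups(cluster_id="my-cluster"):
    # Each item is converted on demand by _item_to_backup; nothing is cached up front.
    print(backup.name, backup.expire_time)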
google/cloud/bigtable/table.py
return result

def restore(
    self, table_id, cluster_id=None, backup_id=None, backup_name=None
Is table_id needed here? Can self.table_id be used instead?

Use case:

new_table = instance.table('non-existent-table')
op = new_table.restore(backup)
...
This must be different from self.table_id, but probably worth renaming it to new_table_id to avoid confusion.
This method returns a long-running operation and does not update self. There is no need to instantiate a Table object before restore(). How about making this method a @classmethod? WDYT?
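One possible shape for that, as a classmethod taking the instance and ids explicitly; the parameter names and the restore_table call are a sketch, not the PR's code:

@classmethod
def restore(cls, instance, new_table_id, cluster_id, backup_id):
    # Kick off the restore as a long-running operation; no Table object is
    # created or mutated here, the caller just gets the operation back.
    backup_name = "{}/clusters/{}/backups/{}".format(
        instance.name, cluster_id, backup_id
    )
    api = instance._client.table_admin_client
    return api.restore_table(
        parent=instance.name, table_id=new_table_id, backup=backup_name
    )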
google/cloud/bigtable/table.py
return result

def restore(
    self, table_id, cluster_id=None, backup_id=None, backup_name=None
Another suggestion: instead of cluster_id, backup_id, backup_name, use a param source which could be either the full backup name (projects/<project>/instances/<instance>/clusters/<cluster>/backups/<backup>) or a Backup instance:
from google.cloud.bigtable.backup import Backup, _BACKUP_NAME_RE

backup_name = ""
if not source:
    raise ValueError("")
if isinstance(source, Backup):
    backup_name = source.name
elif isinstance(source, str):
    match_backup_name = _BACKUP_NAME_RE.match(source)
    if match_backup_name is None:
        raise ValueError("")
    backup_name = source
else:
    raise ValueError("source should be ...")
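Hypothetical usage under that API shape (all names below are placeholders):

# Either form would be accepted:
op = table.restore("restored-table", source=some_backup)  # a Backup instance
op = table.restore(
    "restored-table",
    source="projects/my-project/instances/my-instance/clusters/my-cluster/backups/my-backup",
)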
feat: new `__eq__` and `__ne__` convenience methods