Commit 86ee275

Regen datalabeling. (googleapis#497)
* Regen datalabeling. Output from synthtool:

```
synthtool > Executing /Users/swast/src/google-cloud-python-private/datalabeling/synth.py.
synthtool > Ensuring dependencies.
synthtool > Pulling artman image.
latest: Pulling from googleapis/artman
Digest: sha256:70ba28fda87e032ae44e6df41b7fc342c1b0cce1ed90658c4890eb4f613038c2
Status: Image is up to date for googleapis/artman:latest
synthtool > Using local googleapis at /Users/swast/src/googleapis-private
synthtool > Running generator for google/cloud/datalabeling/artman_datalabeling_v1beta1.yaml.
synthtool > Generated code into /Users/swast/src/googleapis-private/artman-genfiles/python/datalabeling-v1beta1.
synthtool > Copy: /Users/swast/src/googleapis-private/google/cloud/datalabeling/v1beta1/operations.proto to /Users/swast/src/googleapis-private/artman-genfiles/python/datalabeling-v1beta1/google/cloud/datalabeling_v1beta1/proto/operations.proto
synthtool > Copy: /Users/swast/src/googleapis-private/google/cloud/datalabeling/v1beta1/dataset.proto to /Users/swast/src/googleapis-private/artman-genfiles/python/datalabeling-v1beta1/google/cloud/datalabeling_v1beta1/proto/dataset.proto
synthtool > Copy: /Users/swast/src/googleapis-private/google/cloud/datalabeling/v1beta1/annotation_spec_set.proto to /Users/swast/src/googleapis-private/artman-genfiles/python/datalabeling-v1beta1/google/cloud/datalabeling_v1beta1/proto/annotation_spec_set.proto
synthtool > Copy: /Users/swast/src/googleapis-private/google/cloud/datalabeling/v1beta1/human_annotation_config.proto to /Users/swast/src/googleapis-private/artman-genfiles/python/datalabeling-v1beta1/google/cloud/datalabeling_v1beta1/proto/human_annotation_config.proto
synthtool > Copy: /Users/swast/src/googleapis-private/google/cloud/datalabeling/v1beta1/annotation.proto to /Users/swast/src/googleapis-private/artman-genfiles/python/datalabeling-v1beta1/google/cloud/datalabeling_v1beta1/proto/annotation.proto
synthtool > Copy: /Users/swast/src/googleapis-private/google/cloud/datalabeling/v1beta1/data_labeling_service.proto to /Users/swast/src/googleapis-private/artman-genfiles/python/datalabeling-v1beta1/google/cloud/datalabeling_v1beta1/proto/data_labeling_service.proto
synthtool > Copy: /Users/swast/src/googleapis-private/google/cloud/datalabeling/v1beta1/instruction.proto to /Users/swast/src/googleapis-private/artman-genfiles/python/datalabeling-v1beta1/google/cloud/datalabeling_v1beta1/proto/instruction.proto
synthtool > Placed proto files into /Users/swast/src/googleapis-private/artman-genfiles/python/datalabeling-v1beta1/google/cloud/datalabeling_v1beta1/proto.
synthtool > Replaced 'operations_pb2.ImportDataOperationResponse' in google/cloud/datalabeling_v1beta1/gapic/data_labeling_service_client.py.
synthtool > Replaced 'operations_pb2.ImportDataOperationMetadata' in google/cloud/datalabeling_v1beta1/gapic/data_labeling_service_client.py.
synthtool > Replaced 'operations_pb2.Operation\\(' in tests/unit/gapic/v1beta1/test_data_labeling_service_client_v1beta1.py.
nox > Running session blacken
nox > Creating virtualenv using python3.6 in /Users/swast/src/google-cloud-python-private/datalabeling/.nox/blacken
nox > pip install --upgrade black
nox > black google tests docs
reformatted /Users/swast/src/google-cloud-python-private/datalabeling/google/cloud/__init__.py
reformatted /Users/swast/src/google-cloud-python-private/datalabeling/google/__init__.py
reformatted /Users/swast/src/google-cloud-python-private/datalabeling/google/cloud/datalabeling.py
reformatted /Users/swast/src/google-cloud-python-private/datalabeling/google/cloud/datalabeling_v1beta1/gapic/enums.py
reformatted /Users/swast/src/google-cloud-python-private/datalabeling/google/cloud/datalabeling_v1beta1/gapic/data_labeling_service_client_config.py
reformatted /Users/swast/src/google-cloud-python-private/datalabeling/google/cloud/datalabeling_v1beta1/proto/annotation_pb2_grpc.py
reformatted /Users/swast/src/google-cloud-python-private/datalabeling/google/cloud/datalabeling_v1beta1/gapic/transports/data_labeling_service_grpc_transport.py
reformatted /Users/swast/src/google-cloud-python-private/datalabeling/google/cloud/datalabeling_v1beta1/proto/annotation_spec_set_pb2_grpc.py
reformatted /Users/swast/src/google-cloud-python-private/datalabeling/google/cloud/datalabeling_v1beta1/proto/annotation_spec_set_pb2.py
reformatted /Users/swast/src/google-cloud-python-private/datalabeling/google/cloud/datalabeling_v1beta1/proto/data_labeling_service_pb2_grpc.py
reformatted /Users/swast/src/google-cloud-python-private/datalabeling/google/cloud/datalabeling_v1beta1/gapic/data_labeling_service_client.py
reformatted /Users/swast/src/google-cloud-python-private/datalabeling/google/cloud/datalabeling_v1beta1/proto/dataset_pb2_grpc.py
reformatted /Users/swast/src/google-cloud-python-private/datalabeling/google/cloud/datalabeling_v1beta1/proto/human_annotation_config_pb2.py
reformatted /Users/swast/src/google-cloud-python-private/datalabeling/google/cloud/datalabeling_v1beta1/proto/human_annotation_config_pb2_grpc.py
reformatted /Users/swast/src/google-cloud-python-private/datalabeling/google/cloud/datalabeling_v1beta1/proto/annotation_pb2.py
reformatted /Users/swast/src/google-cloud-python-private/datalabeling/google/cloud/datalabeling_v1beta1/proto/instruction_pb2_grpc.py
reformatted /Users/swast/src/google-cloud-python-private/datalabeling/google/cloud/datalabeling_v1beta1/proto/instruction_pb2.py
reformatted /Users/swast/src/google-cloud-python-private/datalabeling/google/cloud/datalabeling_v1beta1/proto/operations_pb2_grpc.py
reformatted /Users/swast/src/google-cloud-python-private/datalabeling/google/cloud/datalabeling_v1beta1/types.py
reformatted /Users/swast/src/google-cloud-python-private/datalabeling/google/cloud/datalabeling_v1beta1/proto/dataset_pb2.py
reformatted /Users/swast/src/google-cloud-python-private/datalabeling/google/cloud/datalabeling_v1beta1/proto/data_labeling_service_pb2.py
reformatted /Users/swast/src/google-cloud-python-private/datalabeling/tests/unit/gapic/v1beta1/test_data_labeling_service_client_v1beta1.py
reformatted /Users/swast/src/google-cloud-python-private/datalabeling/google/cloud/datalabeling_v1beta1/proto/operations_pb2.py
All done! ✨ 🍰 ✨
23 files reformatted, 5 files left unchanged.
nox > Session blacken was successful.
synthtool > Cleaned up 0 temporary directories.
synthtool > Wrote metadata to synth.metadata.
```

* Re-ran synthtool
1 parent 64fee9b commit 86ee275

File tree

12 files changed: +2745 −131 lines


datalabeling/google/cloud/datalabeling_v1beta1/gapic/data_labeling_service_client.py

Lines changed: 313 additions & 0 deletions
Large diffs are not rendered by default.

datalabeling/google/cloud/datalabeling_v1beta1/gapic/enums.py

Lines changed: 7 additions & 0 deletions
```diff
@@ -50,25 +50,32 @@ class AnnotationType(enum.IntEnum):
       ANNOTATION_TYPE_UNSPECIFIED (int)
       IMAGE_CLASSIFICATION_ANNOTATION (int): Classification annotations in an image.
       IMAGE_BOUNDING_BOX_ANNOTATION (int): Bounding box annotations in an image.
+      IMAGE_ORIENTED_BOUNDING_BOX_ANNOTATION (int): Oriented bounding box. The box does not have to be parallel to horizontal
+        line.
       IMAGE_BOUNDING_POLY_ANNOTATION (int): Bounding poly annotations in an image.
       IMAGE_POLYLINE_ANNOTATION (int): Polyline annotations in an image.
+      IMAGE_SEGMENTATION_ANNOTATION (int): Segmentation annotations in an image.
       VIDEO_SHOTS_CLASSIFICATION_ANNOTATION (int): Classification annotations in video shots.
       VIDEO_OBJECT_TRACKING_ANNOTATION (int): Video object tracking annotation.
       VIDEO_OBJECT_DETECTION_ANNOTATION (int): Video object detection annotation.
       VIDEO_EVENT_ANNOTATION (int): Video event annotation.
+      AUDIO_TRANSCRIPTION_ANNOTATION (int): Speech to text annotation.
       TEXT_CLASSIFICATION_ANNOTATION (int): Classification for text.
       TEXT_ENTITY_EXTRACTION_ANNOTATION (int): Entity extraction for text.
     """
 
     ANNOTATION_TYPE_UNSPECIFIED = 0
     IMAGE_CLASSIFICATION_ANNOTATION = 1
     IMAGE_BOUNDING_BOX_ANNOTATION = 2
+    IMAGE_ORIENTED_BOUNDING_BOX_ANNOTATION = 13
     IMAGE_BOUNDING_POLY_ANNOTATION = 10
     IMAGE_POLYLINE_ANNOTATION = 11
+    IMAGE_SEGMENTATION_ANNOTATION = 12
     VIDEO_SHOTS_CLASSIFICATION_ANNOTATION = 3
     VIDEO_OBJECT_TRACKING_ANNOTATION = 4
     VIDEO_OBJECT_DETECTION_ANNOTATION = 5
     VIDEO_EVENT_ANNOTATION = 6
+    AUDIO_TRANSCRIPTION_ANNOTATION = 7
     TEXT_CLASSIFICATION_ANNOTATION = 8
     TEXT_ENTITY_EXTRACTION_ANNOTATION = 9
```
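The three new members reuse previously unassigned integer slots rather than renumbering existing ones, so existing serialized values remain stable. A minimal standalone mirror of the regenerated values illustrates this (a sketch for this page only; the shipped class is `google.cloud.datalabeling_v1beta1.gapic.enums.AnnotationType`):

```python
from enum import IntEnum


class AnnotationType(IntEnum):
    """Mirror of the regenerated enum values, for illustration only."""

    ANNOTATION_TYPE_UNSPECIFIED = 0
    IMAGE_CLASSIFICATION_ANNOTATION = 1
    IMAGE_BOUNDING_BOX_ANNOTATION = 2
    IMAGE_ORIENTED_BOUNDING_BOX_ANNOTATION = 13
    IMAGE_BOUNDING_POLY_ANNOTATION = 10
    IMAGE_POLYLINE_ANNOTATION = 11
    IMAGE_SEGMENTATION_ANNOTATION = 12
    VIDEO_SHOTS_CLASSIFICATION_ANNOTATION = 3
    VIDEO_OBJECT_TRACKING_ANNOTATION = 4
    VIDEO_OBJECT_DETECTION_ANNOTATION = 5
    VIDEO_EVENT_ANNOTATION = 6
    AUDIO_TRANSCRIPTION_ANNOTATION = 7
    TEXT_CLASSIFICATION_ANNOTATION = 8
    TEXT_ENTITY_EXTRACTION_ANNOTATION = 9


# The additions in this commit pick up values 13, 12, and 7, leaving every
# pre-existing member's value untouched.
new_members = [
    AnnotationType.IMAGE_ORIENTED_BOUNDING_BOX_ANNOTATION,
    AnnotationType.IMAGE_SEGMENTATION_ANNOTATION,
    AnnotationType.AUDIO_TRANSCRIPTION_ANNOTATION,
]
print([m.value for m in new_members])  # [13, 12, 7]
```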
datalabeling/google/cloud/datalabeling_v1beta1/proto/annotation.proto

Lines changed: 337 additions & 0 deletions (new file)
```proto
// Copyright 2018 Google LLC.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//

syntax = "proto3";

package google.cloud.datalabeling.v1beta1;

import "google/cloud/datalabeling/v1beta1/annotation_spec_set.proto";
import "google/protobuf/duration.proto";
import "google/protobuf/struct.proto";
import "google/protobuf/timestamp.proto";
import "google/api/annotations.proto";

option go_package = "google.golang.org/genproto/googleapis/cloud/datalabeling/v1beta1;datalabeling";
option java_multiple_files = true;
option java_package = "com.google.cloud.datalabeling.v1beta1";

// Specifies where is the answer from.
enum AnnotationSource {
  ANNOTATION_SOURCE_UNSPECIFIED = 0;

  // Answer is provided by a human contributor.
  OPERATOR = 3;
}

enum AnnotationSentiment {
  ANNOTATION_SENTIMENT_UNSPECIFIED = 0;

  // This annotation describes negatively about the data.
  NEGATIVE = 1;

  // This label describes positively about the data.
  POSITIVE = 2;
}

enum AnnotationType {
  ANNOTATION_TYPE_UNSPECIFIED = 0;

  // Classification annotations in an image.
  IMAGE_CLASSIFICATION_ANNOTATION = 1;

  // Bounding box annotations in an image.
  IMAGE_BOUNDING_BOX_ANNOTATION = 2;

  // Oriented bounding box. The box does not have to be parallel to horizontal
  // line.
  IMAGE_ORIENTED_BOUNDING_BOX_ANNOTATION = 13;

  // Bounding poly annotations in an image.
  IMAGE_BOUNDING_POLY_ANNOTATION = 10;

  // Polyline annotations in an image.
  IMAGE_POLYLINE_ANNOTATION = 11;

  // Segmentation annotations in an image.
  IMAGE_SEGMENTATION_ANNOTATION = 12;

  // Classification annotations in video shots.
  VIDEO_SHOTS_CLASSIFICATION_ANNOTATION = 3;

  // Video object tracking annotation.
  VIDEO_OBJECT_TRACKING_ANNOTATION = 4;

  // Video object detection annotation.
  VIDEO_OBJECT_DETECTION_ANNOTATION = 5;

  // Video event annotation.
  VIDEO_EVENT_ANNOTATION = 6;

  // Speech to text annotation.
  AUDIO_TRANSCRIPTION_ANNOTATION = 7;

  // Classification for text.
  TEXT_CLASSIFICATION_ANNOTATION = 8;

  // Entity extraction for text.
  TEXT_ENTITY_EXTRACTION_ANNOTATION = 9;
}

// Annotation for Example. Each example may have one or more annotations. For
// example in image classification problem, each image might have one or more
// labels. We call labels binded with this image an Annotation.
message Annotation {
  // Output only. Unique name of this annotation, format is:
  //
  // projects/{project_id}/datasets/{dataset_id}/annotatedDatasets/{annotated_dataset}/examples/{example_id}/annotations/{annotation_id}
  string name = 1;

  // Output only. The source of the annotation.
  AnnotationSource annotation_source = 2;

  // Output only. This is the actual annotation value, e.g classification,
  // bounding box values are stored here.
  AnnotationValue annotation_value = 3;

  // Output only. Annotation metadata, including information like votes
  // for labels.
  AnnotationMetadata annotation_metadata = 4;

  // Output only. Sentiment for this annotation.
  AnnotationSentiment annotation_sentiment = 6;
}

// Annotation value for an example.
message AnnotationValue {
  oneof value_type {
    // Annotation value for image classification case.
    ImageClassificationAnnotation image_classification_annotation = 1;

    // Annotation value for image bounding box, oriented bounding box
    // and polygon cases.
    ImageBoundingPolyAnnotation image_bounding_poly_annotation = 2;

    // Annotation value for image polyline cases.
    // Polyline here is different from BoundingPoly. It is formed by
    // line segments connected to each other but not closed form(Bounding Poly).
    // The line segments can cross each other.
    ImagePolylineAnnotation image_polyline_annotation = 8;

    // Annotation value for image segmentation.
    ImageSegmentationAnnotation image_segmentation_annotation = 9;

    // Annotation value for text classification case.
    TextClassificationAnnotation text_classification_annotation = 3;

    // Annotation value for video classification case.
    VideoClassificationAnnotation video_classification_annotation = 4;

    // Annotation value for video object detection and tracking case.
    VideoObjectTrackingAnnotation video_object_tracking_annotation = 5;

    // Annotation value for video event case.
    VideoEventAnnotation video_event_annotation = 6;

    // Annotation value for speech audio recognition case.
    AudioRecognitionAnnotation audio_recognition_annotation = 7;
  }
}

// Image classification annotation definition.
message ImageClassificationAnnotation {
  // Label of image.
  AnnotationSpec annotation_spec = 1;
}

// A vertex represents a 2D point in the image.
// NOTE: the vertex coordinates are in the same scale as the original image.
message Vertex {
  // X coordinate.
  int32 x = 1;

  // Y coordinate.
  int32 y = 2;
}

// A vertex represents a 2D point in the image.
// NOTE: the normalized vertex coordinates are relative to the original image
// and range from 0 to 1.
message NormalizedVertex {
  // X coordinate.
  float x = 1;

  // Y coordinate.
  float y = 2;
}

// A bounding polygon in the image.
message BoundingPoly {
  // The bounding polygon vertices.
  repeated Vertex vertices = 1;
}

// Normalized bounding polygon.
message NormalizedBoundingPoly {
  // The bounding polygon normalized vertices.
  repeated NormalizedVertex normalized_vertices = 1;
}

// Image bounding poly annotation. It represents a polygon including
// bounding box in the image.
message ImageBoundingPolyAnnotation {
  // The region of the polygon. If it is a bounding box, it is guaranteed to be
  // four points.
  oneof bounded_area {
    BoundingPoly bounding_poly = 2;

    NormalizedBoundingPoly normalized_bounding_poly = 3;
  }

  // Label of object in this bounding polygon.
  AnnotationSpec annotation_spec = 1;
}

// A line with multiple line segments.
message Polyline {
  // The polyline vertices.
  repeated Vertex vertices = 1;
}

// Normalized polyline.
message NormalizedPolyline {
  // The normalized polyline vertices.
  repeated NormalizedVertex normalized_vertices = 1;
}

// A polyline for the image annotation.
message ImagePolylineAnnotation {
  oneof poly {
    Polyline polyline = 2;

    NormalizedPolyline normalized_polyline = 3;
  }

  // Label of this polyline.
  AnnotationSpec annotation_spec = 1;
}

// Image segmentation annotation.
message ImageSegmentationAnnotation {
  // The mapping between rgb color and annotation spec. The key is the rgb
  // color represented in format of rgb(0, 0, 0). The value is the
  // AnnotationSpec.
  map<string, AnnotationSpec> annotation_colors = 1;

  // Image format.
  string mime_type = 2;

  // A byte string of a full image's color map.
  bytes image_bytes = 3;
}

// Text classification annotation.
message TextClassificationAnnotation {
  // Label of the text.
  AnnotationSpec annotation_spec = 1;
}

// A time period inside of an example that has a time dimension (e.g. video).
message TimeSegment {
  // Start of the time segment (inclusive), represented as the duration since
  // the example start.
  google.protobuf.Duration start_time_offset = 1;

  // End of the time segment (exclusive), represented as the duration since the
  // example start.
  google.protobuf.Duration end_time_offset = 2;
}

// Video classification annotation.
message VideoClassificationAnnotation {
  // The time segment of the video to which the annotation applies.
  TimeSegment time_segment = 1;

  // Label of the segment specified by time_segment.
  AnnotationSpec annotation_spec = 2;
}

// Video frame level annotation for object detection and tracking.
message ObjectTrackingFrame {
  // The bounding box location of this object track for the frame.
  oneof bounded_area {
    BoundingPoly bounding_poly = 1;

    NormalizedBoundingPoly normalized_bounding_poly = 2;
  }

  // The time offset of this frame relative to the beginning of the video.
  google.protobuf.Duration time_offset = 3;
}

// Video object tracking annotation.
message VideoObjectTrackingAnnotation {
  // Label of the object tracked in this annotation.
  AnnotationSpec annotation_spec = 1;

  // The time segment of the video to which object tracking applies.
  TimeSegment time_segment = 2;

  // The list of frames where this object track appears.
  repeated ObjectTrackingFrame object_tracking_frames = 3;
}

// Video event annotation.
message VideoEventAnnotation {
  // Label of the event in this annotation.
  AnnotationSpec annotation_spec = 1;

  // The time segment of the video to which the annotation applies.
  TimeSegment time_segment = 2;
}

// Speech audio recognition.
message AudioRecognitionAnnotation {
  // Transcript text representing the words spoken.
  string transcript = 1;

  // Start position in audio file that the transcription corresponds to.
  google.protobuf.Duration start_offset = 2;

  // End position in audio file that the transcription corresponds to.
  google.protobuf.Duration end_offset = 3;
}

// Additional information associated with the annotation.
message AnnotationMetadata {
  // Metadata related to human labeling.
  OperatorMetadata operator_metadata = 2;
}

// General information useful for labels coming from contributors.
message OperatorMetadata {
  // Confidence score corresponding to a label. For example, if 3 contributors
  // have answered the question and 2 of them agree on the final label, the
  // confidence score will be 0.67 (2/3).
  float score = 1;

  // The total number of contributors that answer this question.
  int32 total_votes = 2;

  // The total number of contributors that choose this label.
  int32 label_votes = 3;

  repeated string comments = 4;
}
```
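The `OperatorMetadata` comment implies the confidence score is simply `label_votes / total_votes`. A small sketch of that relationship (the helper name is ours, not part of the generated library):

```python
def operator_score(label_votes: int, total_votes: int) -> float:
    """Confidence implied by OperatorMetadata: the fraction of contributors
    that chose the final label. Hypothetical helper for illustration only."""
    if total_votes <= 0:
        raise ValueError("total_votes must be positive")
    if not 0 <= label_votes <= total_votes:
        raise ValueError("label_votes must be between 0 and total_votes")
    return label_votes / total_votes


# The proto comment's example: 2 of 3 contributors agree on the final label.
print(round(operator_score(2, 3), 2))  # 0.67
```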

0 commit comments