Google Cloud Native v0.30.0, Apr 14 23
google-native.datalabeling/v1beta1.EvaluationJob
Creates an evaluation job. Auto-naming is currently not supported for this resource.
Create EvaluationJob Resource
new EvaluationJob(name: string, args: EvaluationJobArgs, opts?: CustomResourceOptions);
@overload
def EvaluationJob(resource_name: str,
                  opts: Optional[ResourceOptions] = None,
                  annotation_spec_set: Optional[str] = None,
                  description: Optional[str] = None,
                  evaluation_job_config: Optional[GoogleCloudDatalabelingV1beta1EvaluationJobConfigArgs] = None,
                  label_missing_ground_truth: Optional[bool] = None,
                  model_version: Optional[str] = None,
                  project: Optional[str] = None,
                  schedule: Optional[str] = None)
@overload
def EvaluationJob(resource_name: str,
                  args: EvaluationJobArgs,
                  opts: Optional[ResourceOptions] = None)
func NewEvaluationJob(ctx *Context, name string, args EvaluationJobArgs, opts ...ResourceOption) (*EvaluationJob, error)
public EvaluationJob(string name, EvaluationJobArgs args, CustomResourceOptions? opts = null)
public EvaluationJob(String name, EvaluationJobArgs args)
public EvaluationJob(String name, EvaluationJobArgs args, CustomResourceOptions options)
type: google-native:datalabeling/v1beta1:EvaluationJob
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Constructor parameters (names vary slightly between the SDKs shown above):
- name (resource_name in Python): The unique name of the resource.
- args (EvaluationJobArgs): The arguments to resource properties.
- opts (options in Java): Bag of options to control resource's behavior.
- ctx (Go only): Context object for the current deployment.
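For example, a Pulumi program in Python might create an evaluation job roughly as follows. This is a minimal sketch: it assumes the provider's usual Python module layout (pulumi_google_native.datalabeling.v1beta1) and the generated *Args input classes, and every project, model, dataset, and annotation spec set identifier is a placeholder.

import pulumi
import pulumi_google_native.datalabeling.v1beta1 as datalabeling

# Sketch only: all identifiers below are placeholders.
eval_job = datalabeling.EvaluationJob(
    "text-classifier-eval",
    description="Continuous evaluation of the text classifier",
    annotation_spec_set="projects/my-project/annotationSpecSets/my-spec-set",
    model_version="projects/my-project/models/my-model/versions/v1",
    schedule="every 1 day",  # crontab format also works; only the interval is used
    label_missing_ground_truth=False,  # ground truth is supplied in the BigQuery table
    evaluation_job_config=datalabeling.GoogleCloudDatalabelingV1beta1EvaluationJobConfigArgs(
        input_config=datalabeling.GoogleCloudDatalabelingV1beta1InputConfigArgs(
            data_type="TEXT",
            annotation_type="TEXT_CLASSIFICATION_ANNOTATION",
            classification_metadata=datalabeling.GoogleCloudDatalabelingV1beta1ClassificationMetadataArgs(
                is_multi_label=False,
            ),
            bigquery_source=datalabeling.GoogleCloudDatalabelingV1beta1BigQuerySourceArgs(
                input_uri="bq://my-project/my_dataset/predictions",
            ),
        ),
        # No bounding boxes to evaluate, so an empty evaluation config is enough.
        evaluation_config=datalabeling.GoogleCloudDatalabelingV1beta1EvaluationConfigArgs(),
        text_classification_config=datalabeling.GoogleCloudDatalabelingV1beta1TextClassificationConfigArgs(
            annotation_spec_set="projects/my-project/annotationSpecSets/my-spec-set",
            allow_multi_label=False,
        ),
        bigquery_import_keys={
            "data_json_key": "text",               # prediction input column (placeholder)
            "label_json_key": "predicted_label",   # prediction output label (placeholder)
            "label_score_json_key": "confidence",  # prediction output score (placeholder)
        },
        example_sample_percentage=0.1,  # sample 10% of served predictions each interval
    ),
)

pulumi.export("evaluation_job_name", eval_job.name)

Because label_missing_ground_truth is false, the job expects ground truth labels in the BigQuery table rather than human annotation, so no human_annotation_config is supplied.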
EvaluationJob Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
The EvaluationJob resource accepts the following input properties:
(Property names below use the canonical camelCase form; each language SDK exposes them in its own casing, for example annotation_spec_set in Python.)

- annotationSpecSet (string)
  Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: "projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}"
- description (string)
  Description of the job. The description can be up to 25,000 characters long.
- evaluationJobConfig (GoogleCloudDatalabelingV1beta1EvaluationJobConfig)
  Configuration details for the evaluation job.
- labelMissingGroundTruth (bool)
  Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to true. If you want to provide your own ground truth labels in the evaluation job's BigQuery table, set this to false.
- modelVersion (string)
  The AI Platform Prediction model version to be evaluated. Prediction input and output is sampled from this model version. When creating an evaluation job, specify the model version in the following format: "projects/{project_id}/models/{model_name}/versions/{version_name}" There can only be one evaluation job per model version.
- schedule (string)
  Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in crontab format or in an English-like format. Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.
- project (string)
Outputs
All input properties are implicitly available as output properties. Additionally, the EvaluationJob resource produces the following output properties:
- attempts (list of GoogleCloudDatalabelingV1beta1AttemptResponse)
  Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array.
- createTime (string)
  Timestamp of when this evaluation job was created.
- id (string)
  The provider-assigned unique ID for this managed resource.
- name (string)
  After you create a job, Data Labeling Service assigns a name to the job with the following format: "projects/{project_id}/evaluationJobs/{evaluation_job_id}"
- state (string)
  Describes the current state of the job.
Supporting Types
GoogleCloudDatalabelingV1beta1AttemptResponse
- attemptTime (string)
- partialFailures (list of GoogleRpcStatusResponse)
  Details of errors that occurred.
GoogleCloudDatalabelingV1beta1BigQuerySource
- inputUri (string)
  BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the correct schema. Provide the table URI in the following format: "bq://{your_project_id}/{your_dataset_name}/{your_table_name}" Learn more.
GoogleCloudDatalabelingV1beta1BigQuerySourceResponse
- inputUri (string)
  BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the correct schema. Provide the table URI in the following format: "bq://{your_project_id}/{your_dataset_name}/{your_table_name}" Learn more.
GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptions
- iouThreshold (number)
  Minimum intersection-over-union (IOU) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptionsResponse
- iouThreshold (number)
  Minimum intersection-over-union (IOU) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
GoogleCloudDatalabelingV1beta1BoundingPolyConfig
- annotationSpecSet (string)
  Annotation spec set resource name.
- instructionMessage (string)
  Optional. Instruction message shown on the contributors' UI.
GoogleCloudDatalabelingV1beta1BoundingPolyConfigResponse
- annotationSpecSet (string)
  Annotation spec set resource name.
- instructionMessage (string)
  Optional. Instruction message shown on the contributors' UI.
GoogleCloudDatalabelingV1beta1ClassificationMetadata
- isMultiLabel (bool)
  Whether the classification task is multi-label or not.
GoogleCloudDatalabelingV1beta1ClassificationMetadataResponse
- isMultiLabel (bool)
  Whether the classification task is multi-label or not.
GoogleCloudDatalabelingV1beta1EvaluationConfig
- boundingBoxEvaluationOptions (GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptions)
  Only specify this field if the related model performs image object detection (IMAGE_BOUNDING_BOX_ANNOTATION). Describes how to evaluate bounding boxes.
GoogleCloudDatalabelingV1beta1EvaluationConfigResponse
- boundingBoxEvaluationOptions (GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptionsResponse)
  Only specify this field if the related model performs image object detection (IMAGE_BOUNDING_BOX_ANNOTATION). Describes how to evaluate bounding boxes.
GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfig
- email (string)
  An email address to send alerts to.
- minAcceptableMeanAveragePrecision (number)
  A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.
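A hedged illustration of the alert configuration (same assumed Python module and *Args class naming as the earlier sketch; the email address and threshold are placeholders):

import pulumi_google_native.datalabeling.v1beta1 as datalabeling

# Sketch only: placeholder address and threshold.
alerts = datalabeling.GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigArgs(
    email="ml-oncall@example.com",               # where the alert emails go
    min_acceptable_mean_average_precision=0.75,  # alert when an interval's mAP drops below 0.75
)
# Pass this as evaluation_job_alert_config inside
# GoogleCloudDatalabelingV1beta1EvaluationJobConfigArgs (see the earlier sketch).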
GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigResponse
- email (string)
  An email address to send alerts to.
- minAcceptableMeanAveragePrecision (number)
  A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.
GoogleCloudDatalabelingV1beta1EvaluationJobConfig
- Bigquery
Import Dictionary<string, string>Keys Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: *
data_json_key
: the data key for prediction input. You must provide either this key orreference_json_key
. *reference_json_key
: the data reference key for prediction input. You must provide either this key ordata_json_key
. *label_json_key
: the label key for prediction output. Required. *label_score_json_key
: the score key for prediction output. Required. *bounding_box_json_key
: the bounding box key for prediction output. Required if your model version perform image object detection. Learn how to configure prediction keys.- Evaluation
Config Pulumi.Google Native. Data Labeling. V1Beta1. Inputs. Google Cloud Datalabeling V1beta1Evaluation Config Details for calculating evaluation metrics and creating Evaulations. If your model version performs image object detection, you must specify the
boundingBoxEvaluationOptions
field within this configuration. Otherwise, provide an empty object for this configuration.- Example
Count int The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides
example_sample_percentage
: even if the service has not sampled enough predictions to fulfillexample_sample_perecentage
during an interval, it stops sampling predictions when it meets this limit.- Example
Sample doublePercentage Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
- Bounding
Poly Pulumi.Config Google Native. Data Labeling. V1Beta1. Inputs. Google Cloud Datalabeling V1beta1Bounding Poly Config Specify this field if your model version performs image object detection (bounding box detection).
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.- Evaluation
Job Pulumi.Alert Config Google Native. Data Labeling. V1Beta1. Inputs. Google Cloud Datalabeling V1beta1Evaluation Job Alert Config Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
- Human
Annotation Pulumi.Config Google Native. Data Labeling. V1Beta1. Inputs. Google Cloud Datalabeling V1beta1Human Annotation Config Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to
true
for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in theinstruction
field within this configuration.- Image
Classification Pulumi.Config Google Native. Data Labeling. V1Beta1. Inputs. Google Cloud Datalabeling V1beta1Image Classification Config Specify this field if your model version performs image classification or general classification.
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.allowMultiLabel
in this configuration must matchclassificationMetadata.isMultiLabel
in input_config.- Input
Config Pulumi.Google Native. Data Labeling. V1Beta1. Inputs. Google Cloud Datalabeling V1beta1Input Config Rquired. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: *
dataType
must be one ofIMAGE
,TEXT
, orGENERAL_DATA
. *annotationType
must be one ofIMAGE_CLASSIFICATION_ANNOTATION
,TEXT_CLASSIFICATION_ANNOTATION
,GENERAL_CLASSIFICATION_ANNOTATION
, orIMAGE_BOUNDING_BOX_ANNOTATION
(image object detection). * If your machine learning model performs classification, you must specifyclassificationMetadata.isMultiLabel
. * You must specifybigquerySource
(notgcsSource
).- Text
Classification Pulumi.Config Google Native. Data Labeling. V1Beta1. Inputs. Google Cloud Datalabeling V1beta1Text Classification Config Specify this field if your model version performs text classification.
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.allowMultiLabel
in this configuration must matchclassificationMetadata.isMultiLabel
in input_config.
- Bigquery
Import map[string]stringKeys Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: *
data_json_key
: the data key for prediction input. You must provide either this key orreference_json_key
. *reference_json_key
: the data reference key for prediction input. You must provide either this key ordata_json_key
. *label_json_key
: the label key for prediction output. Required. *label_score_json_key
: the score key for prediction output. Required. *bounding_box_json_key
: the bounding box key for prediction output. Required if your model version perform image object detection. Learn how to configure prediction keys.- Evaluation
Config GoogleCloud Datalabeling V1beta1Evaluation Config Details for calculating evaluation metrics and creating Evaulations. If your model version performs image object detection, you must specify the
boundingBoxEvaluationOptions
field within this configuration. Otherwise, provide an empty object for this configuration.- Example
Count int The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides
example_sample_percentage
: even if the service has not sampled enough predictions to fulfillexample_sample_perecentage
during an interval, it stops sampling predictions when it meets this limit.- Example
Sample float64Percentage Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
- Bounding
Poly GoogleConfig Cloud Datalabeling V1beta1Bounding Poly Config Specify this field if your model version performs image object detection (bounding box detection).
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.- Evaluation
Job GoogleAlert Config Cloud Datalabeling V1beta1Evaluation Job Alert Config Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
- Human
Annotation GoogleConfig Cloud Datalabeling V1beta1Human Annotation Config Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to
true
for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in theinstruction
field within this configuration.- Image
Classification GoogleConfig Cloud Datalabeling V1beta1Image Classification Config Specify this field if your model version performs image classification or general classification.
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.allowMultiLabel
in this configuration must matchclassificationMetadata.isMultiLabel
in input_config.- Input
Config GoogleCloud Datalabeling V1beta1Input Config Rquired. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: *
dataType
must be one ofIMAGE
,TEXT
, orGENERAL_DATA
. *annotationType
must be one ofIMAGE_CLASSIFICATION_ANNOTATION
,TEXT_CLASSIFICATION_ANNOTATION
,GENERAL_CLASSIFICATION_ANNOTATION
, orIMAGE_BOUNDING_BOX_ANNOTATION
(image object detection). * If your machine learning model performs classification, you must specifyclassificationMetadata.isMultiLabel
. * You must specifybigquerySource
(notgcsSource
).- Text
Classification GoogleConfig Cloud Datalabeling V1beta1Text Classification Config Specify this field if your model version performs text classification.
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.allowMultiLabel
in this configuration must matchclassificationMetadata.isMultiLabel
in input_config.
- bigquery
Import Map<String,String>Keys Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: *
data_json_key
: the data key for prediction input. You must provide either this key orreference_json_key
. *reference_json_key
: the data reference key for prediction input. You must provide either this key ordata_json_key
. *label_json_key
: the label key for prediction output. Required. *label_score_json_key
: the score key for prediction output. Required. *bounding_box_json_key
: the bounding box key for prediction output. Required if your model version perform image object detection. Learn how to configure prediction keys.- evaluation
Config GoogleCloud Datalabeling V1beta1Evaluation Config Details for calculating evaluation metrics and creating Evaulations. If your model version performs image object detection, you must specify the
boundingBoxEvaluationOptions
field within this configuration. Otherwise, provide an empty object for this configuration.- example
Count Integer The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides
example_sample_percentage
: even if the service has not sampled enough predictions to fulfillexample_sample_perecentage
during an interval, it stops sampling predictions when it meets this limit.- example
Sample DoublePercentage Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
- bounding
Poly GoogleConfig Cloud Datalabeling V1beta1Bounding Poly Config Specify this field if your model version performs image object detection (bounding box detection).
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.- evaluation
Job GoogleAlert Config Cloud Datalabeling V1beta1Evaluation Job Alert Config Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
- human
Annotation GoogleConfig Cloud Datalabeling V1beta1Human Annotation Config Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to
true
for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in theinstruction
field within this configuration.- image
Classification GoogleConfig Cloud Datalabeling V1beta1Image Classification Config Specify this field if your model version performs image classification or general classification.
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.allowMultiLabel
in this configuration must matchclassificationMetadata.isMultiLabel
in input_config.- input
Config GoogleCloud Datalabeling V1beta1Input Config Rquired. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: *
dataType
must be one ofIMAGE
,TEXT
, orGENERAL_DATA
. *annotationType
must be one ofIMAGE_CLASSIFICATION_ANNOTATION
,TEXT_CLASSIFICATION_ANNOTATION
,GENERAL_CLASSIFICATION_ANNOTATION
, orIMAGE_BOUNDING_BOX_ANNOTATION
(image object detection). * If your machine learning model performs classification, you must specifyclassificationMetadata.isMultiLabel
. * You must specifybigquerySource
(notgcsSource
).- text
Classification GoogleConfig Cloud Datalabeling V1beta1Text Classification Config Specify this field if your model version performs text classification.
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.allowMultiLabel
in this configuration must matchclassificationMetadata.isMultiLabel
in input_config.
- bigquery
Import {[key: string]: string}Keys Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: *
data_json_key
: the data key for prediction input. You must provide either this key orreference_json_key
. *reference_json_key
: the data reference key for prediction input. You must provide either this key ordata_json_key
. *label_json_key
: the label key for prediction output. Required. *label_score_json_key
: the score key for prediction output. Required. *bounding_box_json_key
: the bounding box key for prediction output. Required if your model version perform image object detection. Learn how to configure prediction keys.- evaluation
Config GoogleCloud Datalabeling V1beta1Evaluation Config Details for calculating evaluation metrics and creating Evaulations. If your model version performs image object detection, you must specify the
boundingBoxEvaluationOptions
field within this configuration. Otherwise, provide an empty object for this configuration.- example
Count number The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides
example_sample_percentage
: even if the service has not sampled enough predictions to fulfillexample_sample_perecentage
during an interval, it stops sampling predictions when it meets this limit.- example
Sample numberPercentage Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
- bounding
Poly GoogleConfig Cloud Datalabeling V1beta1Bounding Poly Config Specify this field if your model version performs image object detection (bounding box detection).
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.- evaluation
Job GoogleAlert Config Cloud Datalabeling V1beta1Evaluation Job Alert Config Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
- human
Annotation GoogleConfig Cloud Datalabeling V1beta1Human Annotation Config Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to
true
for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in theinstruction
field within this configuration.- image
Classification GoogleConfig Cloud Datalabeling V1beta1Image Classification Config Specify this field if your model version performs image classification or general classification.
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.allowMultiLabel
in this configuration must matchclassificationMetadata.isMultiLabel
in input_config.- input
Config GoogleCloud Datalabeling V1beta1Input Config Rquired. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: *
dataType
must be one ofIMAGE
,TEXT
, orGENERAL_DATA
. *annotationType
must be one ofIMAGE_CLASSIFICATION_ANNOTATION
,TEXT_CLASSIFICATION_ANNOTATION
,GENERAL_CLASSIFICATION_ANNOTATION
, orIMAGE_BOUNDING_BOX_ANNOTATION
(image object detection). * If your machine learning model performs classification, you must specifyclassificationMetadata.isMultiLabel
. * You must specifybigquerySource
(notgcsSource
).- text
Classification GoogleConfig Cloud Datalabeling V1beta1Text Classification Config Specify this field if your model version performs text classification.
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.allowMultiLabel
in this configuration must matchclassificationMetadata.isMultiLabel
in input_config.
- bigquery_
import_ Mapping[str, str]keys Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: *
data_json_key
: the data key for prediction input. You must provide either this key orreference_json_key
. *reference_json_key
: the data reference key for prediction input. You must provide either this key ordata_json_key
. *label_json_key
: the label key for prediction output. Required. *label_score_json_key
: the score key for prediction output. Required. *bounding_box_json_key
: the bounding box key for prediction output. Required if your model version perform image object detection. Learn how to configure prediction keys.- evaluation_
config GoogleCloud Datalabeling V1beta1Evaluation Config Details for calculating evaluation metrics and creating Evaulations. If your model version performs image object detection, you must specify the
boundingBoxEvaluationOptions
field within this configuration. Otherwise, provide an empty object for this configuration.- example_
count int The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides
example_sample_percentage
: even if the service has not sampled enough predictions to fulfillexample_sample_perecentage
during an interval, it stops sampling predictions when it meets this limit.- example_
sample_ floatpercentage Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
- bounding_
poly_ Googleconfig Cloud Datalabeling V1beta1Bounding Poly Config Specify this field if your model version performs image object detection (bounding box detection).
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.- evaluation_
job_ Googlealert_ config Cloud Datalabeling V1beta1Evaluation Job Alert Config Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
- human_
annotation_ Googleconfig Cloud Datalabeling V1beta1Human Annotation Config Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to
true
for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in theinstruction
field within this configuration.- image_
classification_ Googleconfig Cloud Datalabeling V1beta1Image Classification Config Specify this field if your model version performs image classification or general classification.
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.allowMultiLabel
in this configuration must matchclassificationMetadata.isMultiLabel
in input_config.- input_
config GoogleCloud Datalabeling V1beta1Input Config Rquired. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: *
dataType
must be one ofIMAGE
,TEXT
, orGENERAL_DATA
. *annotationType
must be one ofIMAGE_CLASSIFICATION_ANNOTATION
,TEXT_CLASSIFICATION_ANNOTATION
,GENERAL_CLASSIFICATION_ANNOTATION
, orIMAGE_BOUNDING_BOX_ANNOTATION
(image object detection). * If your machine learning model performs classification, you must specifyclassificationMetadata.isMultiLabel
. * You must specifybigquerySource
(notgcsSource
).- text_
classification_ Googleconfig Cloud Datalabeling V1beta1Text Classification Config Specify this field if your model version performs text classification.
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.allowMultiLabel
in this configuration must matchclassificationMetadata.isMultiLabel
in input_config.
- bigquery
Import Map<String>Keys Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: *
data_json_key
: the data key for prediction input. You must provide either this key orreference_json_key
. *reference_json_key
: the data reference key for prediction input. You must provide either this key ordata_json_key
. *label_json_key
: the label key for prediction output. Required. *label_score_json_key
: the score key for prediction output. Required. *bounding_box_json_key
: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.- evaluation
Config Property Map Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the
boundingBoxEvaluationOptions
field within this configuration. Otherwise, provide an empty object for this configuration.- example
Count Number The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides
example_sample_percentage
: even if the service has not sampled enough predictions to fulfill example_sample_percentage
during an interval, it stops sampling predictions when it meets this limit.- example
Sample NumberPercentage Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
- bounding
Poly Property MapConfig Specify this field if your model version performs image object detection (bounding box detection).
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.- evaluation
Job Property MapAlert Config Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
- human
Annotation Property MapConfig Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to
true
for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in theinstruction
field within this configuration.- image
Classification Property MapConfig Specify this field if your model version performs image classification or general classification.
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.allowMultiLabel
in this configuration must matchclassificationMetadata.isMultiLabel
in input_config.- input
Config Property Map Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: *
dataType
must be one ofIMAGE
,TEXT
, orGENERAL_DATA
. *annotationType
must be one ofIMAGE_CLASSIFICATION_ANNOTATION
,TEXT_CLASSIFICATION_ANNOTATION
,GENERAL_CLASSIFICATION_ANNOTATION
, orIMAGE_BOUNDING_BOX_ANNOTATION
(image object detection). * If your machine learning model performs classification, you must specifyclassificationMetadata.isMultiLabel
. * You must specifybigquerySource
(notgcsSource
).- text
Classification Property MapConfig Specify this field if your model version performs text classification.
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.allowMultiLabel
in this configuration must matchclassificationMetadata.isMultiLabel
in input_config.
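To make the relationships between these fields concrete, here is a minimal TypeScript sketch of an evaluation job for a single-label image classification model. It is illustrative only: the project, annotation spec set, model version, schedule, and BigQuery table are placeholder values, and the inputUri field name inside bigquerySource is an assumption about the BigQuerySource shape rather than something documented on this page.
import * as google_native from "@pulumi/google-native";
// Placeholder names throughout; substitute your own project and resources.
const evaluationJob = new google_native.datalabeling.v1beta1.EvaluationJob("image-eval-job", {
    annotationSpecSet: "projects/my-project/annotationSpecSets/my-spec-set",
    description: "Continuous evaluation for the image classifier",
    labelMissingGroundTruth: false, // ground truth is supplied in the BigQuery table
    modelVersion: "projects/my-project/models/my-model/versions/v1",
    schedule: "every 24 hours",
    evaluationJobConfig: {
        // How to parse the JSON rows the service samples into BigQuery.
        bigqueryImportKeys: {
            data_json_key: "image_url",
            label_json_key: "label",
            label_score_json_key: "score",
        },
        // Sample 10% of served predictions, capped at 1000 per interval
        // (exampleCount overrides exampleSamplePercentage).
        exampleSamplePercentage: 0.1,
        exampleCount: 1000,
        // Not an object-detection model, so an empty evaluation config is enough.
        evaluationConfig: {},
        imageClassificationConfig: {
            annotationSpecSet: "projects/my-project/annotationSpecSets/my-spec-set",
            allowMultiLabel: false, // must match classificationMetadata.isMultiLabel below
        },
        inputConfig: {
            dataType: "IMAGE",
            annotationType: "IMAGE_CLASSIFICATION_ANNOTATION",
            classificationMetadata: { isMultiLabel: false },
            // bigquerySource (not gcsSource) is required for evaluation jobs;
            // inputUri and its "bq://" format are assumptions used for illustration.
            bigquerySource: { inputUri: "bq://my-project.my_dataset.my_table" },
        },
    },
});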
GoogleCloudDatalabelingV1beta1EvaluationJobConfigResponse
- Bigquery
Import Dictionary<string, string>Keys Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: *
data_json_key
: the data key for prediction input. You must provide either this key orreference_json_key
. *reference_json_key
: the data reference key for prediction input. You must provide either this key ordata_json_key
. *label_json_key
: the label key for prediction output. Required. *label_score_json_key
: the score key for prediction output. Required. *bounding_box_json_key
: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.- Bounding
Poly Pulumi.Config Google Native. Data Labeling. V1Beta1. Inputs. Google Cloud Datalabeling V1beta1Bounding Poly Config Response Specify this field if your model version performs image object detection (bounding box detection).
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.- Evaluation
Config Pulumi.Google Native. Data Labeling. V1Beta1. Inputs. Google Cloud Datalabeling V1beta1Evaluation Config Response Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the
boundingBoxEvaluationOptions
field within this configuration. Otherwise, provide an empty object for this configuration.- Evaluation
Job Pulumi.Alert Config Google Native. Data Labeling. V1Beta1. Inputs. Google Cloud Datalabeling V1beta1Evaluation Job Alert Config Response Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
- Example
Count int The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides
example_sample_percentage
: even if the service has not sampled enough predictions to fulfill example_sample_percentage
during an interval, it stops sampling predictions when it meets this limit.- Example
Sample doublePercentage Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
- Human
Annotation Pulumi.Config Google Native. Data Labeling. V1Beta1. Inputs. Google Cloud Datalabeling V1beta1Human Annotation Config Response Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to
true
for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in theinstruction
field within this configuration.- Image
Classification Pulumi.Config Google Native. Data Labeling. V1Beta1. Inputs. Google Cloud Datalabeling V1beta1Image Classification Config Response Specify this field if your model version performs image classification or general classification.
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.allowMultiLabel
in this configuration must matchclassificationMetadata.isMultiLabel
in input_config.- Input
Config Pulumi.Google Native. Data Labeling. V1Beta1. Inputs. Google Cloud Datalabeling V1beta1Input Config Response Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: *
dataType
must be one ofIMAGE
,TEXT
, orGENERAL_DATA
. *annotationType
must be one ofIMAGE_CLASSIFICATION_ANNOTATION
,TEXT_CLASSIFICATION_ANNOTATION
,GENERAL_CLASSIFICATION_ANNOTATION
, orIMAGE_BOUNDING_BOX_ANNOTATION
(image object detection). * If your machine learning model performs classification, you must specifyclassificationMetadata.isMultiLabel
. * You must specifybigquerySource
(notgcsSource
).- Text
Classification Pulumi.Config Google Native. Data Labeling. V1Beta1. Inputs. Google Cloud Datalabeling V1beta1Text Classification Config Response Specify this field if your model version performs text classification.
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.allowMultiLabel
in this configuration must matchclassificationMetadata.isMultiLabel
in input_config.
- Bigquery
Import map[string]stringKeys Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: *
data_json_key
: the data key for prediction input. You must provide either this key orreference_json_key
. *reference_json_key
: the data reference key for prediction input. You must provide either this key ordata_json_key
. *label_json_key
: the label key for prediction output. Required. *label_score_json_key
: the score key for prediction output. Required. *bounding_box_json_key
: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.- Bounding
Poly GoogleConfig Cloud Datalabeling V1beta1Bounding Poly Config Response Specify this field if your model version performs image object detection (bounding box detection).
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.- Evaluation
Config GoogleCloud Datalabeling V1beta1Evaluation Config Response Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the
boundingBoxEvaluationOptions
field within this configuration. Otherwise, provide an empty object for this configuration.- Evaluation
Job GoogleAlert Config Cloud Datalabeling V1beta1Evaluation Job Alert Config Response Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
- Example
Count int The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides
example_sample_percentage
: even if the service has not sampled enough predictions to fulfill example_sample_percentage
during an interval, it stops sampling predictions when it meets this limit.- Example
Sample float64Percentage Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
- Human
Annotation GoogleConfig Cloud Datalabeling V1beta1Human Annotation Config Response Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to
true
for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in theinstruction
field within this configuration.- Image
Classification GoogleConfig Cloud Datalabeling V1beta1Image Classification Config Response Specify this field if your model version performs image classification or general classification.
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.allowMultiLabel
in this configuration must matchclassificationMetadata.isMultiLabel
in input_config.- Input
Config GoogleCloud Datalabeling V1beta1Input Config Response Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: *
dataType
must be one ofIMAGE
,TEXT
, orGENERAL_DATA
. *annotationType
must be one ofIMAGE_CLASSIFICATION_ANNOTATION
,TEXT_CLASSIFICATION_ANNOTATION
,GENERAL_CLASSIFICATION_ANNOTATION
, orIMAGE_BOUNDING_BOX_ANNOTATION
(image object detection). * If your machine learning model performs classification, you must specifyclassificationMetadata.isMultiLabel
. * You must specifybigquerySource
(notgcsSource
).- Text
Classification GoogleConfig Cloud Datalabeling V1beta1Text Classification Config Response Specify this field if your model version performs text classification.
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.allowMultiLabel
in this configuration must matchclassificationMetadata.isMultiLabel
in input_config.
- bigquery
Import Map<String,String>Keys Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: *
data_json_key
: the data key for prediction input. You must provide either this key orreference_json_key
. *reference_json_key
: the data reference key for prediction input. You must provide either this key ordata_json_key
. *label_json_key
: the label key for prediction output. Required. *label_score_json_key
: the score key for prediction output. Required. *bounding_box_json_key
: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.- bounding
Poly GoogleConfig Cloud Datalabeling V1beta1Bounding Poly Config Response Specify this field if your model version performs image object detection (bounding box detection).
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.- evaluation
Config GoogleCloud Datalabeling V1beta1Evaluation Config Response Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the
boundingBoxEvaluationOptions
field within this configuration. Otherwise, provide an empty object for this configuration.- evaluation
Job GoogleAlert Config Cloud Datalabeling V1beta1Evaluation Job Alert Config Response Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
- example
Count Integer The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides
example_sample_percentage
: even if the service has not sampled enough predictions to fulfill example_sample_percentage
during an interval, it stops sampling predictions when it meets this limit.- example
Sample DoublePercentage Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
- human
Annotation GoogleConfig Cloud Datalabeling V1beta1Human Annotation Config Response Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to
true
for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in theinstruction
field within this configuration.- image
Classification GoogleConfig Cloud Datalabeling V1beta1Image Classification Config Response Specify this field if your model version performs image classification or general classification.
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.allowMultiLabel
in this configuration must matchclassificationMetadata.isMultiLabel
in input_config.- input
Config GoogleCloud Datalabeling V1beta1Input Config Response Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: *
dataType
must be one ofIMAGE
,TEXT
, orGENERAL_DATA
. *annotationType
must be one ofIMAGE_CLASSIFICATION_ANNOTATION
,TEXT_CLASSIFICATION_ANNOTATION
,GENERAL_CLASSIFICATION_ANNOTATION
, orIMAGE_BOUNDING_BOX_ANNOTATION
(image object detection). * If your machine learning model performs classification, you must specifyclassificationMetadata.isMultiLabel
. * You must specifybigquerySource
(notgcsSource
).- text
Classification GoogleConfig Cloud Datalabeling V1beta1Text Classification Config Response Specify this field if your model version performs text classification.
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.allowMultiLabel
in this configuration must matchclassificationMetadata.isMultiLabel
in input_config.
- bigquery
Import {[key: string]: string}Keys Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: *
data_json_key
: the data key for prediction input. You must provide either this key orreference_json_key
. *reference_json_key
: the data reference key for prediction input. You must provide either this key ordata_json_key
. *label_json_key
: the label key for prediction output. Required. *label_score_json_key
: the score key for prediction output. Required. *bounding_box_json_key
: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.- bounding
Poly GoogleConfig Cloud Datalabeling V1beta1Bounding Poly Config Response Specify this field if your model version performs image object detection (bounding box detection).
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.- evaluation
Config GoogleCloud Datalabeling V1beta1Evaluation Config Response Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the
boundingBoxEvaluationOptions
field within this configuration. Otherwise, provide an empty object for this configuration.- evaluation
Job GoogleAlert Config Cloud Datalabeling V1beta1Evaluation Job Alert Config Response Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
- example
Count number The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides
example_sample_percentage
: even if the service has not sampled enough predictions to fulfill example_sample_percentage
during an interval, it stops sampling predictions when it meets this limit.- example
Sample numberPercentage Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
- human
Annotation GoogleConfig Cloud Datalabeling V1beta1Human Annotation Config Response Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to
true
for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in theinstruction
field within this configuration.- image
Classification GoogleConfig Cloud Datalabeling V1beta1Image Classification Config Response Specify this field if your model version performs image classification or general classification.
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.allowMultiLabel
in this configuration must matchclassificationMetadata.isMultiLabel
in input_config.- input
Config GoogleCloud Datalabeling V1beta1Input Config Response Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: *
dataType
must be one ofIMAGE
,TEXT
, orGENERAL_DATA
. *annotationType
must be one ofIMAGE_CLASSIFICATION_ANNOTATION
,TEXT_CLASSIFICATION_ANNOTATION
,GENERAL_CLASSIFICATION_ANNOTATION
, orIMAGE_BOUNDING_BOX_ANNOTATION
(image object detection). * If your machine learning model performs classification, you must specifyclassificationMetadata.isMultiLabel
. * You must specifybigquerySource
(notgcsSource
).- text
Classification GoogleConfig Cloud Datalabeling V1beta1Text Classification Config Response Specify this field if your model version performs text classification.
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.allowMultiLabel
in this configuration must matchclassificationMetadata.isMultiLabel
in input_config.
- bigquery_
import_ Mapping[str, str]keys Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: *
data_json_key
: the data key for prediction input. You must provide either this key orreference_json_key
. *reference_json_key
: the data reference key for prediction input. You must provide either this key ordata_json_key
. *label_json_key
: the label key for prediction output. Required. *label_score_json_key
: the score key for prediction output. Required. *bounding_box_json_key
: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.- bounding_
poly_ Googleconfig Cloud Datalabeling V1beta1Bounding Poly Config Response Specify this field if your model version performs image object detection (bounding box detection).
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.- evaluation_
config GoogleCloud Datalabeling V1beta1Evaluation Config Response Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the
boundingBoxEvaluationOptions
field within this configuration. Otherwise, provide an empty object for this configuration.- evaluation_
job_ Googlealert_ config Cloud Datalabeling V1beta1Evaluation Job Alert Config Response Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
- example_
count int The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides
example_sample_percentage
: even if the service has not sampled enough predictions to fulfill example_sample_percentage
during an interval, it stops sampling predictions when it meets this limit.- example_
sample_ floatpercentage Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
- human_
annotation_ Googleconfig Cloud Datalabeling V1beta1Human Annotation Config Response Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to
true
for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in theinstruction
field within this configuration.- image_
classification_ Googleconfig Cloud Datalabeling V1beta1Image Classification Config Response Specify this field if your model version performs image classification or general classification.
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.allowMultiLabel
in this configuration must matchclassificationMetadata.isMultiLabel
in input_config.- input_
config GoogleCloud Datalabeling V1beta1Input Config Response Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: *
dataType
must be one ofIMAGE
,TEXT
, orGENERAL_DATA
. *annotationType
must be one ofIMAGE_CLASSIFICATION_ANNOTATION
,TEXT_CLASSIFICATION_ANNOTATION
,GENERAL_CLASSIFICATION_ANNOTATION
, orIMAGE_BOUNDING_BOX_ANNOTATION
(image object detection). * If your machine learning model performs classification, you must specifyclassificationMetadata.isMultiLabel
. * You must specifybigquerySource
(notgcsSource
).- text_
classification_ Googleconfig Cloud Datalabeling V1beta1Text Classification Config Response Specify this field if your model version performs text classification.
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.allowMultiLabel
in this configuration must matchclassificationMetadata.isMultiLabel
in input_config.
- bigquery
Import Map<String>Keys Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: *
data_json_key
: the data key for prediction input. You must provide either this key orreference_json_key
. *reference_json_key
: the data reference key for prediction input. You must provide either this key ordata_json_key
. *label_json_key
: the label key for prediction output. Required. *label_score_json_key
: the score key for prediction output. Required. *bounding_box_json_key
: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.- bounding
Poly Property MapConfig Specify this field if your model version performs image object detection (bounding box detection).
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.- evaluation
Config Property Map Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the
boundingBoxEvaluationOptions
field within this configuration. Otherwise, provide an empty object for this configuration.- evaluation
Job Property MapAlert Config Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
- example
Count Number The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides
example_sample_percentage
: even if the service has not sampled enough predictions to fulfill example_sample_percentage
during an interval, it stops sampling predictions when it meets this limit.- example
Sample NumberPercentage Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
- human
Annotation Property MapConfig Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to
true
for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in theinstruction
field within this configuration.- image
Classification Property MapConfig Specify this field if your model version performs image classification or general classification.
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.allowMultiLabel
in this configuration must matchclassificationMetadata.isMultiLabel
in input_config.- input
Config Property Map Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: *
dataType
must be one ofIMAGE
,TEXT
, orGENERAL_DATA
. *annotationType
must be one ofIMAGE_CLASSIFICATION_ANNOTATION
,TEXT_CLASSIFICATION_ANNOTATION
,GENERAL_CLASSIFICATION_ANNOTATION
, orIMAGE_BOUNDING_BOX_ANNOTATION
(image object detection). * If your machine learning model performs classification, you must specifyclassificationMetadata.isMultiLabel
. * You must specifybigquerySource
(notgcsSource
).- text
Classification Property MapConfig Specify this field if your model version performs text classification.
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.allowMultiLabel
in this configuration must matchclassificationMetadata.isMultiLabel
in input_config.
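The Response variants above are output shapes: they are read back from the resource after creation rather than supplied by you. As a small sketch, reusing the evaluationJob resource from the earlier example, the resolved sampling rate could be exported like this:
// evaluationJobConfig is an Output of the Response type documented above.
export const sampledPercentage = evaluationJob.evaluationJobConfig.apply(
    cfg => cfg.exampleSamplePercentage);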
GoogleCloudDatalabelingV1beta1GcsSource
GoogleCloudDatalabelingV1beta1GcsSourceResponse
GoogleCloudDatalabelingV1beta1HumanAnnotationConfig
- Annotated
Dataset stringDisplay Name A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters .
- Instruction string
Instruction resource name.
- Annotated
Dataset stringDescription Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
- Contributor
Emails List<string> Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in crowdcompute worker UI: https://crowd-compute.appspot.com/
- Label
Group string Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression
[a-zA-Z\\d_-]{0,128}
.- Language
Code string Optional. The Language of this question, as a BCP-47. Default value is en-US. Only need to set this when task is language related. For example, French text classification.
- Question
Duration string Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
- Replica
Count int Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, 5.
- User
Email stringAddress Email of the user who started the labeling task and should be notified by email. If empty no notification will be sent.
- Annotated
Dataset stringDisplay Name A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters .
- Instruction string
Instruction resource name.
- Annotated
Dataset stringDescription Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
- Contributor
Emails []string Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in crowdcompute worker UI: https://crowd-compute.appspot.com/
- Label
Group string Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression
[a-zA-Z\\d_-]{0,128}
.- Language
Code string Optional. The Language of this question, as a BCP-47. Default value is en-US. Only need to set this when task is language related. For example, French text classification.
- Question
Duration string Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
- Replica
Count int Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, 5.
- User
Email stringAddress Email of the user who started the labeling task and should be notified by email. If empty no notification will be sent.
- annotated
Dataset StringDisplay Name A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters .
- instruction String
Instruction resource name.
- annotated
Dataset StringDescription Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
- contributor
Emails List<String> Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in crowdcompute worker UI: https://crowd-compute.appspot.com/
- label
Group String Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression
[a-zA-Z\\d_-]{0,128}
.- language
Code String Optional. The Language of this question, as a BCP-47. Default value is en-US. Only need to set this when task is language related. For example, French text classification.
- question
Duration String Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
- replica
Count Integer Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, 5.
- user
Email StringAddress Email of the user who started the labeling task and should be notified by email. If empty no notification will be sent.
- annotated
Dataset stringDisplay Name A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters .
- instruction string
Instruction resource name.
- annotated
Dataset stringDescription Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
- contributor
Emails string[] Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in crowdcompute worker UI: https://crowd-compute.appspot.com/
- label
Group string Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression
[a-zA-Z\\d_-]{0,128}
.- language
Code string Optional. The Language of this question, as a BCP-47. Default value is en-US. Only need to set this when task is language related. For example, French text classification.
- question
Duration string Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
- replica
Count number Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, 5.
- user
Email stringAddress Email of the user who started the labeling task and should be notified by email. If empty no notification will be sent.
- annotated_
dataset_ strdisplay_ name A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters .
- instruction str
Instruction resource name.
- annotated_
dataset_ strdescription Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
- contributor_
emails Sequence[str] Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in crowdcompute worker UI: https://crowd-compute.appspot.com/
- label_
group str Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression
[a-zA-Z\\d_-]{0,128}
.- language_
code str Optional. The Language of this question, as a BCP-47. Default value is en-US. Only need to set this when task is language related. For example, French text classification.
- question_
duration str Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
- replica_
count int Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, 5.
- user_
email_ straddress Email of the user who started the labeling task and should be notified by email. If empty no notification will be sent.
- annotated
Dataset StringDisplay Name A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters .
- instruction String
Instruction resource name.
- annotated
Dataset StringDescription Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
- contributor
Emails List<String> Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in crowdcompute worker UI: https://crowd-compute.appspot.com/
- label
Group String Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression
[a-zA-Z\\d_-]{0,128}
.- language
Code String Optional. The Language of this question, as a BCP-47. Default value is en-US. Only need to set this when task is language related. For example, French text classification.
- question
Duration String Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
- replica
Count Number Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, 5.
- user
Email StringAddress Email of the user who started the labeling task and should be notified by email. If empty no notification will be sent.
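For jobs where labelMissingGroundTruth is true, the human annotation details above come together roughly as follows. This is a sketch only: the instruction resource name, email address, and the "3600s" duration format are assumptions, not values taken from this page.
// Passed as evaluationJobConfig.humanAnnotationConfig on the EvaluationJob.
const humanAnnotationConfig = {
    annotatedDatasetDisplayName: "eval-ground-truth",               // required, max 64 characters
    instruction: "projects/my-project/instructions/my-instruction", // required Instruction resource name
    annotatedDatasetDescription: "Ground-truth labels for continuous evaluation",
    languageCode: "en-us",
    replicaCount: 3,           // each question goes to up to 3 contributors
    questionDuration: "3600s", // assumed duration string format
    userEmailAddress: "ml-team@example.com",
};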
GoogleCloudDatalabelingV1beta1HumanAnnotationConfigResponse
- Annotated
Dataset stringDescription Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
- Annotated
Dataset stringDisplay Name A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters .
- Contributor
Emails List<string> Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in crowdcompute worker UI: https://crowd-compute.appspot.com/
- Instruction string
Instruction resource name.
- Label
Group string Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression
[a-zA-Z\\d_-]{0,128}
.- Language
Code string Optional. The Language of this question, as a BCP-47. Default value is en-US. Only need to set this when task is language related. For example, French text classification.
- Question
Duration string Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
- Replica
Count int Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, 5.
- User
Email stringAddress Email of the user who started the labeling task and should be notified by email. If empty no notification will be sent.
- Annotated
Dataset stringDescription Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
- Annotated
Dataset stringDisplay Name A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters .
- Contributor
Emails []string Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in crowdcompute worker UI: https://crowd-compute.appspot.com/
- Instruction string
Instruction resource name.
- Label
Group string Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression
[a-zA-Z\\d_-]{0,128}
.- Language
Code string Optional. The Language of this question, as a BCP-47. Default value is en-US. Only need to set this when task is language related. For example, French text classification.
- Question
Duration string Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
- Replica
Count int Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, 5.
- User
Email stringAddress Email of the user who started the labeling task and should be notified by email. If empty no notification will be sent.
- annotated
Dataset StringDescription Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
- annotated
Dataset StringDisplay Name A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters .
- contributor
Emails List<String> Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in crowdcompute worker UI: https://crowd-compute.appspot.com/
- instruction String
Instruction resource name.
- label
Group String Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression
[a-zA-Z\\d_-]{0,128}
.- language
Code String Optional. The Language of this question, as a BCP-47. Default value is en-US. Only need to set this when task is language related. For example, French text classification.
- question
Duration String Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
- replica
Count Integer Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, 5.
- user
Email StringAddress Email of the user who started the labeling task and should be notified by email. If empty no notification will be sent.
- annotated
Dataset stringDescription Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
- annotated
Dataset stringDisplay Name A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters .
- contributor
Emails string[] Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in crowdcompute worker UI: https://crowd-compute.appspot.com/
- instruction string
Instruction resource name.
- label
Group string Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression
[a-zA-Z\\d_-]{0,128}
.- language
Code string Optional. The Language of this question, as a BCP-47. Default value is en-US. Only need to set this when task is language related. For example, French text classification.
- question
Duration string Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
- replica
Count number Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, 5.
- user
Email stringAddress Email of the user who started the labeling task and should be notified by email. If empty no notification will be sent.
- annotated_
dataset_ strdescription Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
- annotated_
dataset_ strdisplay_ name A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters .
- contributor_
emails Sequence[str] Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in crowdcompute worker UI: https://crowd-compute.appspot.com/
- instruction str
Instruction resource name.
- label_
group str Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression
[a-zA-Z\\d_-]{0,128}
.- language_
code str Optional. The Language of this question, as a BCP-47. Default value is en-US. Only need to set this when task is language related. For example, French text classification.
- question_
duration str Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
- replica_
count int Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, 5.
- user_
email_ straddress Email of the user who started the labeling task and should be notified by email. If empty no notification will be sent.
- annotated
Dataset StringDescription Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
- annotated
Dataset StringDisplay Name A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters .
- contributor
Emails List<String> Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in crowdcompute worker UI: https://crowd-compute.appspot.com/
- instruction String
Instruction resource name.
- label
Group String Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression
[a-zA-Z\\d_-]{0,128}
.- language
Code String Optional. The Language of this question, as a BCP-47. Default value is en-US. Only need to set this when task is language related. For example, French text classification.
- question
Duration String Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
- replica
Count Number Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, 5.
- user
Email StringAddress Email of the user who started the labeling task and should be notified by email. If empty no notification will be sent.
GoogleCloudDatalabelingV1beta1ImageClassificationConfig
- Annotation
Spec stringSet Annotation spec set resource name.
- Allow
Multi boolLabel Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
- Answer
Aggregation Pulumi.Type Google Native. Data Labeling. V1Beta1. Google Cloud Datalabeling V1beta1Image Classification Config Answer Aggregation Type Optional. The type of how to aggregate answers.
- Annotation
Spec stringSet Annotation spec set resource name.
- Allow
Multi boolLabel Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
- Answer
Aggregation GoogleType Cloud Datalabeling V1beta1Image Classification Config Answer Aggregation Type Optional. The type of how to aggregate answers.
- annotation
Spec StringSet Annotation spec set resource name.
- allow
Multi BooleanLabel Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
- answer
Aggregation GoogleType Cloud Datalabeling V1beta1Image Classification Config Answer Aggregation Type Optional. The type of how to aggregate answers.
- annotation
Spec stringSet Annotation spec set resource name.
- allow
Multi booleanLabel Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
- answer
Aggregation GoogleType Cloud Datalabeling V1beta1Image Classification Config Answer Aggregation Type Optional. The type of how to aggregate answers.
- annotation_
spec_ strset Annotation spec set resource name.
- allow_
multi_ boollabel Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
- answer_
aggregation_ Googletype Cloud Datalabeling V1beta1Image Classification Config Answer Aggregation Type Optional. The type of how to aggregate answers.
- annotation
Spec StringSet Annotation spec set resource name.
- allow
Multi BooleanLabel Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
- answer
Aggregation "STRING_AGGREGATION_TYPE_UNSPECIFIED" | "MAJORITY_VOTE" | "UNANIMOUS_VOTE" | "NO_AGGREGATION"Type Optional. The type of how to aggregate answers.
GoogleCloudDatalabelingV1beta1ImageClassificationConfigAnswerAggregationType
- String
Aggregation Type Unspecified - STRING_AGGREGATION_TYPE_UNSPECIFIED
- Majority
Vote - MAJORITY_VOTE
Majority vote to aggregate answers.
- Unanimous
Vote - UNANIMOUS_VOTE
Unanimous answers will be adopted.
- No
Aggregation - NO_AGGREGATION
Preserve all answers by crowd compute.
- Google
Cloud Datalabeling V1beta1Image Classification Config Answer Aggregation Type String Aggregation Type Unspecified - STRING_AGGREGATION_TYPE_UNSPECIFIED
- Google
Cloud Datalabeling V1beta1Image Classification Config Answer Aggregation Type Majority Vote - MAJORITY_VOTE
Majority vote to aggregate answers.
- Google
Cloud Datalabeling V1beta1Image Classification Config Answer Aggregation Type Unanimous Vote - UNANIMOUS_VOTE
Unanimous answers will be adopted.
- Google
Cloud Datalabeling V1beta1Image Classification Config Answer Aggregation Type No Aggregation - NO_AGGREGATION
Preserve all answers by crowd compute.
- String
Aggregation Type Unspecified - STRING_AGGREGATION_TYPE_UNSPECIFIED
- Majority
Vote - MAJORITY_VOTE
Majority vote to aggregate answers.
- Unanimous
Vote - UNANIMOUS_VOTE
Unanimous answers will be adopted.
- No
Aggregation - NO_AGGREGATION
Preserve all answers by crowd compute.
- String
Aggregation Type Unspecified - STRING_AGGREGATION_TYPE_UNSPECIFIED
- Majority
Vote - MAJORITY_VOTE
Majority vote to aggregate answers.
- Unanimous
Vote - UNANIMOUS_VOTE
Unanimous answers will be adopted.
- No
Aggregation - NO_AGGREGATION
Preserve all answers by crowd compute.
- STRING_AGGREGATION_TYPE_UNSPECIFIED
- STRING_AGGREGATION_TYPE_UNSPECIFIED
- MAJORITY_VOTE
- MAJORITY_VOTE
Majority vote to aggregate answers.
- UNANIMOUS_VOTE
- UNANIMOUS_VOTE
Unanimous answers will be adopted.
- NO_AGGREGATION
- NO_AGGREGATION
Preserve all answers by crowd compute.
- "STRING_AGGREGATION_TYPE_UNSPECIFIED"
- STRING_AGGREGATION_TYPE_UNSPECIFIED
- "MAJORITY_VOTE"
- MAJORITY_VOTE
Majority vote to aggregate answers.
- "UNANIMOUS_VOTE"
- UNANIMOUS_VOTE
Unanimous answers will be adopted.
- "NO_AGGREGATION"
- NO_AGGREGATION
Preserve all answers by crowd compute.
GoogleCloudDatalabelingV1beta1ImageClassificationConfigResponse
- Allow
Multi boolLabel Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
- Annotation
Spec stringSet Annotation spec set resource name.
- Answer
Aggregation stringType Optional. The type of how to aggregate answers.
- Allow
Multi boolLabel Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
- Annotation
Spec stringSet Annotation spec set resource name.
- Answer
Aggregation stringType Optional. The type of how to aggregate answers.
- allow
Multi BooleanLabel Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
- annotation
Spec StringSet Annotation spec set resource name.
- answer
Aggregation StringType Optional. The type of how to aggregate answers.
- allow
Multi booleanLabel Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
- annotation
Spec stringSet Annotation spec set resource name.
- answer
Aggregation stringType Optional. The type of how to aggregate answers.
- allow_
multi_ boollabel Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
- annotation_
spec_ strset Annotation spec set resource name.
- answer_
aggregation_ strtype Optional. The type of how to aggregate answers.
- allow
Multi BooleanLabel Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
- annotation
Spec StringSet Annotation spec set resource name.
- answer
Aggregation StringType Optional. The type of how to aggregate answers.
GoogleCloudDatalabelingV1beta1InputConfig
- Data
Type Pulumi.Google Native. Data Labeling. V1Beta1. Google Cloud Datalabeling V1beta1Input Config Data Type Data type must be specified when a user tries to import data.
- Annotation
Type Pulumi.Google Native. Data Labeling. V1Beta1. Google Cloud Datalabeling V1beta1Input Config Annotation Type Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
- Bigquery
Source Pulumi.Google Native. Data Labeling. V1Beta1. Inputs. Google Cloud Datalabeling V1beta1Big Query Source Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
- Classification
Metadata Pulumi.Google Native. Data Labeling. V1Beta1. Inputs. Google Cloud Datalabeling V1beta1Classification Metadata Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
- Gcs
Source Pulumi.Google Native. Data Labeling. V1Beta1. Inputs. Google Cloud Datalabeling V1beta1Gcs Source Source located in Cloud Storage.
- Text
Metadata Pulumi.Google Native. Data Labeling. V1Beta1. Inputs. Google Cloud Datalabeling V1beta1Text Metadata Required for text import, as language code must be specified.
- Data
Type GoogleCloud Datalabeling V1beta1Input Config Data Type Data type must be specified when a user tries to import data.
- Annotation
Type GoogleCloud Datalabeling V1beta1Input Config Annotation Type Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
- Bigquery
Source GoogleCloud Datalabeling V1beta1Big Query Source Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
- Classification
Metadata GoogleCloud Datalabeling V1beta1Classification Metadata Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
- Gcs
Source GoogleCloud Datalabeling V1beta1Gcs Source Source located in Cloud Storage.
- Text
Metadata GoogleCloud Datalabeling V1beta1Text Metadata Required for text import, as language code must be specified.
- data
Type GoogleCloud Datalabeling V1beta1Input Config Data Type Data type must be specified when a user tries to import data.
- annotation
Type GoogleCloud Datalabeling V1beta1Input Config Annotation Type Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
- bigquery
Source GoogleCloud Datalabeling V1beta1Big Query Source Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
- classification
Metadata GoogleCloud Datalabeling V1beta1Classification Metadata Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
- gcs
Source GoogleCloud Datalabeling V1beta1Gcs Source Source located in Cloud Storage.
- text
Metadata GoogleCloud Datalabeling V1beta1Text Metadata Required for text import, as language code must be specified.
- data
Type GoogleCloud Datalabeling V1beta1Input Config Data Type Data type must be specifed when user tries to import data.
- annotation
Type GoogleCloud Datalabeling V1beta1Input Config Annotation Type Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
- bigquery
Source GoogleCloud Datalabeling V1beta1Big Query Source Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
- classification
Metadata GoogleCloud Datalabeling V1beta1Classification Metadata Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
- gcs
Source GoogleCloud Datalabeling V1beta1Gcs Source Source located in Cloud Storage.
- text
Metadata GoogleCloud Datalabeling V1beta1Text Metadata Required for text import, as language code must be specified.
- data_
type GoogleCloud Datalabeling V1beta1Input Config Data Type Data type must be specifed when user tries to import data.
- annotation_
type GoogleCloud Datalabeling V1beta1Input Config Annotation Type Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
- bigquery_
source GoogleCloud Datalabeling V1beta1Big Query Source Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
- classification_
metadata GoogleCloud Datalabeling V1beta1Classification Metadata Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
- gcs_
source GoogleCloud Datalabeling V1beta1Gcs Source Source located in Cloud Storage.
- text_
metadata GoogleCloud Datalabeling V1beta1Text Metadata Required for text import, as language code must be specified.
- data
Type "DATA_TYPE_UNSPECIFIED" | "IMAGE" | "VIDEO" | "TEXT" | "GENERAL_DATA" Data type must be specifed when user tries to import data.
- annotation
Type "ANNOTATION_TYPE_UNSPECIFIED" | "IMAGE_CLASSIFICATION_ANNOTATION" | "IMAGE_BOUNDING_BOX_ANNOTATION" | "IMAGE_ORIENTED_BOUNDING_BOX_ANNOTATION" | "IMAGE_BOUNDING_POLY_ANNOTATION" | "IMAGE_POLYLINE_ANNOTATION" | "IMAGE_SEGMENTATION_ANNOTATION" | "VIDEO_SHOTS_CLASSIFICATION_ANNOTATION" | "VIDEO_OBJECT_TRACKING_ANNOTATION" | "VIDEO_OBJECT_DETECTION_ANNOTATION" | "VIDEO_EVENT_ANNOTATION" | "TEXT_CLASSIFICATION_ANNOTATION" | "TEXT_ENTITY_EXTRACTION_ANNOTATION" | "GENERAL_CLASSIFICATION_ANNOTATION" Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
- bigquery
Source Property Map Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
- classification
Metadata Property Map Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
- gcs
Source Property Map Source located in Cloud Storage.
- text
Metadata Property Map Required for text import, as language code must be specified.
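To make the shape of this type concrete, here is a minimal sketch of an InputConfig for a text-classification evaluation job using the Python SDK. The *Args class names are assumed to follow the usual google-native pattern of appending Args to the types listed above, the module path and BigQuery URI are placeholders, and the rest of the evaluation job configuration (sampling, human annotation, and so on) is omitted.
# Minimal sketch of an InputConfig for a text-classification EvaluationJob.
# Assumptions: the module path and *Args class names follow the usual
# pulumi_google_native conventions; the BigQuery URI is a placeholder.
import pulumi_google_native.datalabeling.v1beta1 as datalabeling

input_config = datalabeling.GoogleCloudDatalabelingV1beta1InputConfigArgs(
    data_type="TEXT",
    annotation_type="TEXT_CLASSIFICATION_ANNOTATION",
    # BigQuery table holding the model version's prediction input and output;
    # input_uri is the BigQuery source's field name in the underlying API.
    bigquery_source=datalabeling.GoogleCloudDatalabelingV1beta1BigQuerySourceArgs(
        input_uri="bq://my-project.my_dataset.my_table",  # placeholder
    ),
    # Required when the model version performs classification.
    classification_metadata=datalabeling.GoogleCloudDatalabelingV1beta1ClassificationMetadataArgs(
        is_multi_label=False,
    ),
)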
GoogleCloudDatalabelingV1beta1InputConfigAnnotationType
- AnnotationTypeUnspecified - ANNOTATION_TYPE_UNSPECIFIED
- ImageClassificationAnnotation - IMAGE_CLASSIFICATION_ANNOTATION
Classification annotations in an image. Allowed for continuous evaluation.
- ImageBoundingBoxAnnotation - IMAGE_BOUNDING_BOX_ANNOTATION
Bounding box annotations in an image. A form of image object detection. Allowed for continuous evaluation.
- ImageOrientedBoundingBoxAnnotation - IMAGE_ORIENTED_BOUNDING_BOX_ANNOTATION
Oriented bounding box. The box does not have to be parallel to a horizontal line.
- ImageBoundingPolyAnnotation - IMAGE_BOUNDING_POLY_ANNOTATION
Bounding poly annotations in an image.
- ImagePolylineAnnotation - IMAGE_POLYLINE_ANNOTATION
Polyline annotations in an image.
- ImageSegmentationAnnotation - IMAGE_SEGMENTATION_ANNOTATION
Segmentation annotations in an image.
- VideoShotsClassificationAnnotation - VIDEO_SHOTS_CLASSIFICATION_ANNOTATION
Classification annotations in video shots.
- VideoObjectTrackingAnnotation - VIDEO_OBJECT_TRACKING_ANNOTATION
Video object tracking annotation.
- VideoObjectDetectionAnnotation - VIDEO_OBJECT_DETECTION_ANNOTATION
Video object detection annotation.
- VideoEventAnnotation - VIDEO_EVENT_ANNOTATION
Video event annotation.
- TextClassificationAnnotation - TEXT_CLASSIFICATION_ANNOTATION
Classification for text. Allowed for continuous evaluation.
- TextEntityExtractionAnnotation - TEXT_ENTITY_EXTRACTION_ANNOTATION
Entity extraction for text.
- GeneralClassificationAnnotation - GENERAL_CLASSIFICATION_ANNOTATION
General classification. Allowed for continuous evaluation.
Each value is shown as its short SDK member name followed by its wire value. Some SDKs prefix the members with the full type name (GoogleCloudDatalabelingV1beta1InputConfigAnnotationType...), Python exposes the UPPER_SNAKE constant names, and in Node.js and YAML the quoted string literals (for example "IMAGE_CLASSIFICATION_ANNOTATION") are accepted.
GoogleCloudDatalabelingV1beta1InputConfigDataType
- DataTypeUnspecified - DATA_TYPE_UNSPECIFIED
Data type is unspecified.
- Image - IMAGE
Image data type. Allowed for continuous evaluation.
- Video - VIDEO
Video data type.
- Text - TEXT
Text data type. Allowed for continuous evaluation.
- GeneralData - GENERAL_DATA
General data type. Allowed for continuous evaluation.
Per-SDK naming follows the same pattern as the annotation type enum above: prefixed members in some SDKs, UPPER_SNAKE constants in Python, and quoted string literals (for example "IMAGE") in Node.js and YAML. (A Python sketch showing both forms follows this list.)
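In practice you can pass these values either as the raw wire strings listed above or, in typed SDKs, as enum members. The sketch below shows both forms in Python; it assumes the enum classes are exported from the same v1beta1 module as the args classes, which is the usual google-native layout, while the raw strings always work.
# Two equivalent ways to set the data and annotation types in Python.
# Assumption: the enum classes below are exported from the v1beta1 module;
# the raw wire strings are always valid.
import pulumi_google_native.datalabeling.v1beta1 as datalabeling

# Raw wire values, exactly as listed above.
image_config = datalabeling.GoogleCloudDatalabelingV1beta1InputConfigArgs(
    data_type="IMAGE",
    annotation_type="IMAGE_CLASSIFICATION_ANNOTATION",
)

# The same configuration using the SDK's str-valued enum members.
image_config_typed = datalabeling.GoogleCloudDatalabelingV1beta1InputConfigArgs(
    data_type=datalabeling.GoogleCloudDatalabelingV1beta1InputConfigDataType.IMAGE,
    annotation_type=datalabeling.GoogleCloudDatalabelingV1beta1InputConfigAnnotationType.IMAGE_CLASSIFICATION_ANNOTATION,
)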
GoogleCloudDatalabelingV1beta1InputConfigResponse
Per-SDK property naming follows the conventions noted under GoogleCloudDatalabelingV1beta1InputConfig above; nested response objects are plain property maps in YAML.
- annotationType string
Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
- bigquerySource GoogleCloudDatalabelingV1beta1BigQuerySourceResponse
Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
- classificationMetadata GoogleCloudDatalabelingV1beta1ClassificationMetadataResponse
Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
- dataType string
The data type must be specified when importing data.
- gcsSource GoogleCloudDatalabelingV1beta1GcsSourceResponse
Source located in Cloud Storage.
- textMetadata GoogleCloudDatalabelingV1beta1TextMetadataResponse
Required for text import, as the language code must be specified.
GoogleCloudDatalabelingV1beta1SentimentConfig
- enableLabelSentimentSelection bool
If set to true, contributors have the option to select a sentiment for the label they chose, marking it as a positive or negative label. Default is false.
GoogleCloudDatalabelingV1beta1SentimentConfigResponse
- enableLabelSentimentSelection bool
If set to true, contributors have the option to select a sentiment for the label they chose, marking it as a positive or negative label. Default is false.
GoogleCloudDatalabelingV1beta1TextClassificationConfig
- annotationSpecSet string
Annotation spec set resource name.
- allowMultiLabel bool
Optional. If allow_multi_label is true, contributors can choose multiple labels for one text segment.
- sentimentConfig GoogleCloudDatalabelingV1beta1SentimentConfig
Optional. Configuration for sentiment selection. Sentiment analysis is deprecated on the Data Labeling side because it is incompatible with uCAIP.
(A Python usage sketch follows this list.)
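As a concrete illustration, here is a minimal sketch of this config in the Python SDK. The *Args class name is assumed from the standard google-native naming, the annotation spec set name is a placeholder, and the value would typically be referenced from the evaluation job config's text classification settings.
# Minimal sketch of a text classification config. Assumptions: the *Args
# class name follows the usual pattern; the spec set name is a placeholder.
import pulumi_google_native.datalabeling.v1beta1 as datalabeling

text_classification = datalabeling.GoogleCloudDatalabelingV1beta1TextClassificationConfigArgs(
    annotation_spec_set="projects/my-project/annotationSpecSets/my-spec-set",  # placeholder
    allow_multi_label=False,
    # sentiment_config omitted: sentiment selection is deprecated for this API.
)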
GoogleCloudDatalabelingV1beta1TextClassificationConfigResponse
- allowMultiLabel bool
Optional. If allow_multi_label is true, contributors can choose multiple labels for one text segment.
- annotationSpecSet string
Annotation spec set resource name.
- sentimentConfig GoogleCloudDatalabelingV1beta1SentimentConfigResponse
Optional. Configuration for sentiment selection. Sentiment analysis is deprecated on the Data Labeling side because it is incompatible with uCAIP.
GoogleCloudDatalabelingV1beta1TextMetadata
- languageCode string
The language of this text, as a BCP-47 language code. The default value is en-US.
(A Python usage sketch follows.)
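For text import this is the only field you set; a minimal Python sketch, assuming the usual *Args class naming, looks like this.
# Minimal sketch of TextMetadata for text import. "en-US" is the documented
# default, shown explicitly here; the *Args class name is assumed.
import pulumi_google_native.datalabeling.v1beta1 as datalabeling

text_metadata = datalabeling.GoogleCloudDatalabelingV1beta1TextMetadataArgs(
    language_code="en-US",  # BCP-47 language code
)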
GoogleCloudDatalabelingV1beta1TextMetadataResponse
- languageCode string
The language of this text, as a BCP-47 language code. The default value is en-US.
GoogleRpcStatusResponse
- code int
The status code, which should be an enum value of google.rpc.Code.
- details list of string-to-string maps
A list of messages that carry the error details. There is a common set of message types for APIs to use. (C#: List<ImmutableDictionary<string, string>>; Go: []map[string]string; Python: Sequence[Mapping[str, str]]; Node.js: {[key: string]: string}[].)
- message string
A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
Package Details
- Repository: Google Cloud Native pulumi/pulumi-google-native
- License: Apache-2.0