Google Native

Pulumi Official
Package maintained by Pulumi
v0.19.1 published on Tuesday, May 24, 2022 by Pulumi

getEvaluationJob

Gets an evaluation job by resource name.

Using getEvaluationJob

Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.

function getEvaluationJob(args: GetEvaluationJobArgs, opts?: InvokeOptions): Promise<GetEvaluationJobResult>
function getEvaluationJobOutput(args: GetEvaluationJobOutputArgs, opts?: InvokeOptions): Output<GetEvaluationJobResult>
def get_evaluation_job(evaluation_job_id: Optional[str] = None,
                       project: Optional[str] = None,
                       opts: Optional[InvokeOptions] = None) -> GetEvaluationJobResult
def get_evaluation_job_output(evaluation_job_id: Optional[pulumi.Input[str]] = None,
                       project: Optional[pulumi.Input[str]] = None,
                       opts: Optional[InvokeOptions] = None) -> Output[GetEvaluationJobResult]
func LookupEvaluationJob(ctx *Context, args *LookupEvaluationJobArgs, opts ...InvokeOption) (*LookupEvaluationJobResult, error)
func LookupEvaluationJobOutput(ctx *Context, args *LookupEvaluationJobOutputArgs, opts ...InvokeOption) LookupEvaluationJobResultOutput

> Note: This function is named LookupEvaluationJob in the Go SDK.

public static class GetEvaluationJob 
{
    public static Task<GetEvaluationJobResult> InvokeAsync(GetEvaluationJobArgs args, InvokeOptions? opts = null)
    public static Output<GetEvaluationJobResult> Invoke(GetEvaluationJobInvokeArgs args, InvokeOptions? opts = null)
}
public static CompletableFuture<GetEvaluationJobResult> getEvaluationJob(GetEvaluationJobArgs args, InvokeOptions options)
// Output-based functions aren't available in Java yet
Fn::Invoke:
  Function: google-native:datalabeling/v1beta1:getEvaluationJob
  Arguments:
    # Arguments dictionary
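For instance, a minimal Python sketch of the direct invocation form (the project and job ID values are placeholders; assumes the pulumi and pulumi-google-native packages are installed):

```python
import pulumi
import pulumi_google_native as google_native

# Look up an existing evaluation job by ID. "my-project" and
# "1234567890" are placeholder values, not real resources.
job = google_native.datalabeling.v1beta1.get_evaluation_job(
    evaluation_job_id="1234567890",
    project="my-project",
)

# Export one of the result properties documented below.
pulumi.export("evaluationJobState", job.state)
```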

The following arguments are supported:

EvaluationJobId string

Project string

getEvaluationJob Result

The following output properties are available:

AnnotationSpecSet string

Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: "projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}"

Attempts List<Pulumi.GoogleNative.DataLabeling.V1Beta1.Outputs.GoogleCloudDatalabelingV1beta1AttemptResponse>

Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array.

CreateTime string

Timestamp of when this evaluation job was created.

Description string

Description of the job. The description can be up to 25,000 characters long.

EvaluationJobConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Outputs.GoogleCloudDatalabelingV1beta1EvaluationJobConfigResponse

Configuration details for the evaluation job.

LabelMissingGroundTruth bool

Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to true. If you want to provide your own ground truth labels in the evaluation job's BigQuery table, set this to false.

ModelVersion string

The AI Platform Prediction model version to be evaluated. Prediction input and output is sampled from this model version. When creating an evaluation job, specify the model version in the following format: "projects/{project_id}/models/{model_name}/versions/{version_name}" There can only be one evaluation job per model version.

Name string

After you create a job, Data Labeling Service assigns a name to the job with the following format: "projects/{project_id}/evaluationJobs/{evaluation_job_id}"

Schedule string

Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in crontab format or in an English-like format. Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.

State string

Describes the current state of the job.

AnnotationSpecSet string

Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: "projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}"

Attempts []GoogleCloudDatalabelingV1beta1AttemptResponse

Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array.

CreateTime string

Timestamp of when this evaluation job was created.

Description string

Description of the job. The description can be up to 25,000 characters long.

EvaluationJobConfig GoogleCloudDatalabelingV1beta1EvaluationJobConfigResponse

Configuration details for the evaluation job.

LabelMissingGroundTruth bool

Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to true. If you want to provide your own ground truth labels in the evaluation job's BigQuery table, set this to false.

ModelVersion string

The AI Platform Prediction model version to be evaluated. Prediction input and output is sampled from this model version. When creating an evaluation job, specify the model version in the following format: "projects/{project_id}/models/{model_name}/versions/{version_name}" There can only be one evaluation job per model version.

Name string

After you create a job, Data Labeling Service assigns a name to the job with the following format: "projects/{project_id}/evaluationJobs/{evaluation_job_id}"

Schedule string

Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in crontab format or in an English-like format. Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.

State string

Describes the current state of the job.

annotationSpecSet String

Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: "projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}"

attempts List<GoogleCloudDatalabelingV1beta1AttemptResponse>

Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array.

createTime String

Timestamp of when this evaluation job was created.

description String

Description of the job. The description can be up to 25,000 characters long.

evaluationJobConfig GoogleCloudDatalabelingV1beta1EvaluationJobConfigResponse

Configuration details for the evaluation job.

labelMissingGroundTruth Boolean

Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to true. If you want to provide your own ground truth labels in the evaluation job's BigQuery table, set this to false.

modelVersion String

The AI Platform Prediction model version to be evaluated. Prediction input and output is sampled from this model version. When creating an evaluation job, specify the model version in the following format: "projects/{project_id}/models/{model_name}/versions/{version_name}" There can only be one evaluation job per model version.

name String

After you create a job, Data Labeling Service assigns a name to the job with the following format: "projects/{project_id}/evaluationJobs/{evaluation_job_id}"

schedule String

Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in crontab format or in an English-like format. Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.

state String

Describes the current state of the job.

annotationSpecSet string

Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: "projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}"

attempts GoogleCloudDatalabelingV1beta1AttemptResponse[]

Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array.

createTime string

Timestamp of when this evaluation job was created.

description string

Description of the job. The description can be up to 25,000 characters long.

evaluationJobConfig GoogleCloudDatalabelingV1beta1EvaluationJobConfigResponse

Configuration details for the evaluation job.

labelMissingGroundTruth boolean

Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to true. If you want to provide your own ground truth labels in the evaluation job's BigQuery table, set this to false.

modelVersion string

The AI Platform Prediction model version to be evaluated. Prediction input and output is sampled from this model version. When creating an evaluation job, specify the model version in the following format: "projects/{project_id}/models/{model_name}/versions/{version_name}" There can only be one evaluation job per model version.

name string

After you create a job, Data Labeling Service assigns a name to the job with the following format: "projects/{project_id}/evaluationJobs/{evaluation_job_id}"

schedule string

Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in crontab format or in an English-like format. Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.

state string

Describes the current state of the job.

annotation_spec_set str

Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: "projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}"

attempts Sequence[GoogleCloudDatalabelingV1beta1AttemptResponse]

Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array.

create_time str

Timestamp of when this evaluation job was created.

description str

Description of the job. The description can be up to 25,000 characters long.

evaluation_job_config GoogleCloudDatalabelingV1beta1EvaluationJobConfigResponse

Configuration details for the evaluation job.

label_missing_ground_truth bool

Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to true. If you want to provide your own ground truth labels in the evaluation job's BigQuery table, set this to false.

model_version str

The AI Platform Prediction model version to be evaluated. Prediction input and output is sampled from this model version. When creating an evaluation job, specify the model version in the following format: "projects/{project_id}/models/{model_name}/versions/{version_name}" There can only be one evaluation job per model version.

name str

After you create a job, Data Labeling Service assigns a name to the job with the following format: "projects/{project_id}/evaluationJobs/{evaluation_job_id}"

schedule str

Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in crontab format or in an English-like format. Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.

state str

Describes the current state of the job.

annotationSpecSet String

Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: "projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}"

attempts List<Property Map>

Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array.

createTime String

Timestamp of when this evaluation job was created.

description String

Description of the job. The description can be up to 25,000 characters long.

evaluationJobConfig Property Map

Configuration details for the evaluation job.

labelMissingGroundTruth Boolean

Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to true. If you want to provide your own ground truth labels in the evaluation job's BigQuery table, set this to false.

modelVersion String

The AI Platform Prediction model version to be evaluated. Prediction input and output is sampled from this model version. When creating an evaluation job, specify the model version in the following format: "projects/{project_id}/models/{model_name}/versions/{version_name}" There can only be one evaluation job per model version.

name String

After you create a job, Data Labeling Service assigns a name to the job with the following format: "projects/{project_id}/evaluationJobs/{evaluation_job_id}"

schedule String

Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in crontab format or in an English-like format. Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.

state String

Describes the current state of the job.
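The interval rounding described for schedule can be illustrated with a small helper (a hypothetical sketch, not part of the SDK):

```python
def effective_interval_days(hours: float) -> int:
    """Round a requested interval to the nearest whole day, minimum 1 day,
    mirroring how the service treats the schedule field."""
    return max(1, round(hours / 24))

# A 50-hour interval rounds to every 2 days, as the docs note.
print(effective_interval_days(50))  # 2
print(effective_interval_days(24))  # 1
```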

Supporting Types

GoogleCloudDatalabelingV1beta1AttemptResponse

AttemptTime string
PartialFailures []GoogleRpcStatusResponse

Details of errors that occurred.

attemptTime String
partialFailures List<GoogleRpcStatusResponse>

Details of errors that occurred.

attemptTime string
partialFailures GoogleRpcStatusResponse[]

Details of errors that occurred.

attemptTime String
partialFailures List<Property Map>

Details of errors that occurred.

GoogleCloudDatalabelingV1beta1BigQuerySourceResponse

InputUri string

BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the correct schema. Provide the table URI in the following format: "bq://{your_project_id}/{your_dataset_name}/{your_table_name}"

InputUri string

BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the correct schema. Provide the table URI in the following format: "bq://{your_project_id}/{your_dataset_name}/{your_table_name}"

inputUri String

BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the correct schema. Provide the table URI in the following format: "bq://{your_project_id}/{your_dataset_name}/{your_table_name}"

inputUri string

BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the correct schema. Provide the table URI in the following format: "bq://{your_project_id}/{your_dataset_name}/{your_table_name}"

input_uri str

BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the correct schema. Provide the table URI in the following format: "bq://{your_project_id}/{your_dataset_name}/{your_table_name}"

inputUri String

BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the correct schema. Provide the table URI in the following format: "bq://{your_project_id}/{your_dataset_name}/{your_table_name}"
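The expected URI shape can be sketched as a small formatter/validator (a hypothetical helper, not part of the SDK):

```python
import re

# bq://{project}/{dataset}/{table}, no empty path segments.
BQ_URI_PATTERN = re.compile(r"^bq://[^/]+/[^/]+/[^/]+$")

def bigquery_table_uri(project_id: str, dataset: str, table: str) -> str:
    """Build a table URI in the bq://{project}/{dataset}/{table} format,
    enforcing the documented 2,000-character limit."""
    uri = f"bq://{project_id}/{dataset}/{table}"
    if not BQ_URI_PATTERN.match(uri) or len(uri) > 2000:
        raise ValueError(f"invalid BigQuery table URI: {uri}")
    return uri

print(bigquery_table_uri("my-project", "my_dataset", "predictions"))
# bq://my-project/my_dataset/predictions
```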

GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptionsResponse

IouThreshold double

Minimum intersection-over-union (IOU) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.

IouThreshold float64

Minimum intersection-over-union (IOU) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.

iouThreshold Double

Minimum intersection-over-union (IOU) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.

iouThreshold number

Minimum intersection-over-union (IOU) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.

iou_threshold float

Minimum intersection-over-union (IOU) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.

iouThreshold Number

Minimum intersection-over-union (IOU) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
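Intersection-over-union for two axis-aligned boxes can be computed as follows (a standalone sketch; boxes are (x_min, y_min, x_max, y_max) tuples):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x_min, y_min, x_max, y_max); always a value between 0 and 1."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two 2x2 boxes overlapping in half of each: IOU = 2 / 6 = 1/3.
print(iou((0, 0, 2, 2), (1, 0, 3, 2)))
```

Whether a predicted box counts as matching a ground-truth box is then just `iou(pred, truth) >= iouThreshold`.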

GoogleCloudDatalabelingV1beta1BoundingPolyConfigResponse

AnnotationSpecSet string

Annotation spec set resource name.

InstructionMessage string

Optional. Instruction message shown on the contributors' UI.

AnnotationSpecSet string

Annotation spec set resource name.

InstructionMessage string

Optional. Instruction message shown on the contributors' UI.

annotationSpecSet String

Annotation spec set resource name.

instructionMessage String

Optional. Instruction message shown on the contributors' UI.

annotationSpecSet string

Annotation spec set resource name.

instructionMessage string

Optional. Instruction message shown on the contributors' UI.

annotation_spec_set str

Annotation spec set resource name.

instruction_message str

Optional. Instruction message shown on the contributors' UI.

annotationSpecSet String

Annotation spec set resource name.

instructionMessage String

Optional. Instruction message shown on the contributors' UI.

GoogleCloudDatalabelingV1beta1ClassificationMetadataResponse

IsMultiLabel bool

Whether the classification task is multi-label or not.

IsMultiLabel bool

Whether the classification task is multi-label or not.

isMultiLabel Boolean

Whether the classification task is multi-label or not.

isMultiLabel boolean

Whether the classification task is multi-label or not.

is_multi_label bool

Whether the classification task is multi-label or not.

isMultiLabel Boolean

Whether the classification task is multi-label or not.

GoogleCloudDatalabelingV1beta1EvaluationConfigResponse

BoundingBoxEvaluationOptions Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptionsResponse

Only specify this field if the related model performs image object detection (IMAGE_BOUNDING_BOX_ANNOTATION). Describes how to evaluate bounding boxes.

BoundingBoxEvaluationOptions GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptionsResponse

Only specify this field if the related model performs image object detection (IMAGE_BOUNDING_BOX_ANNOTATION). Describes how to evaluate bounding boxes.

boundingBoxEvaluationOptions GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptionsResponse

Only specify this field if the related model performs image object detection (IMAGE_BOUNDING_BOX_ANNOTATION). Describes how to evaluate bounding boxes.

boundingBoxEvaluationOptions GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptionsResponse

Only specify this field if the related model performs image object detection (IMAGE_BOUNDING_BOX_ANNOTATION). Describes how to evaluate bounding boxes.

bounding_box_evaluation_options GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptionsResponse

Only specify this field if the related model performs image object detection (IMAGE_BOUNDING_BOX_ANNOTATION). Describes how to evaluate bounding boxes.

boundingBoxEvaluationOptions Property Map

Only specify this field if the related model performs image object detection (IMAGE_BOUNDING_BOX_ANNOTATION). Describes how to evaluate bounding boxes.

GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigResponse

Email string

An email address to send alerts to.

MinAcceptableMeanAveragePrecision double

A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.

Email string

An email address to send alerts to.

MinAcceptableMeanAveragePrecision float64

A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.

email String

An email address to send alerts to.

minAcceptableMeanAveragePrecision Double

A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.

email string

An email address to send alerts to.

minAcceptableMeanAveragePrecision number

A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.

email str

An email address to send alerts to.

min_acceptable_mean_average_precision float

A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.

email String

An email address to send alerts to.

minAcceptableMeanAveragePrecision Number

A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.
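The alerting rule reduces to a simple threshold check per run; a hypothetical sketch (not part of the SDK):

```python
def should_alert(mean_average_precision: float, min_acceptable: float) -> bool:
    """True when a run's meanAveragePrecision falls below the configured
    minAcceptableMeanAveragePrecision, i.e. when an email alert is sent."""
    if not 0.0 <= min_acceptable <= 1.0:
        raise ValueError("minAcceptableMeanAveragePrecision must be between 0 and 1")
    return mean_average_precision < min_acceptable

print(should_alert(0.42, 0.5))  # True: below threshold, an alert is sent
print(should_alert(0.73, 0.5))  # False: no alert
```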

GoogleCloudDatalabelingV1beta1EvaluationJobConfigResponse

BigqueryImportKeys Dictionary<string, string>

Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.

BoundingPolyConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1BoundingPolyConfigResponse

Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.

EvaluationConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1EvaluationConfigResponse

Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.

EvaluationJobAlertConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigResponse

Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.

ExampleCount int

The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.

ExampleSamplePercentage double

Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.

HumanAnnotationConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1HumanAnnotationConfigResponse

Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.

ImageClassificationConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1ImageClassificationConfigResponse

Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.

InputConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1InputConfigResponse

Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).

TextClassificationConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1TextClassificationConfigResponse

Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.

BigqueryImportKeys map[string]string

Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.

BoundingPolyConfig GoogleCloudDatalabelingV1beta1BoundingPolyConfigResponse

Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.

EvaluationConfig GoogleCloudDatalabelingV1beta1EvaluationConfigResponse

Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.

EvaluationJobAlertConfig GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigResponse

Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.

ExampleCount int

The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.

ExampleSamplePercentage float64

Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.

HumanAnnotationConfig GoogleCloudDatalabelingV1beta1HumanAnnotationConfigResponse

Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.

ImageClassificationConfig GoogleCloudDatalabelingV1beta1ImageClassificationConfigResponse

Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.

InputConfig GoogleCloudDatalabelingV1beta1InputConfigResponse

Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).
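The inputConfig requirements above can be expressed as a simple pre-flight check before creating the job. This validator is a hypothetical sketch, not part of the SDK; it only encodes the constraints listed in the description.

```python
ALLOWED_DATA_TYPES = {"IMAGE", "TEXT", "GENERAL_DATA"}
ALLOWED_ANNOTATION_TYPES = {
    "IMAGE_CLASSIFICATION_ANNOTATION",
    "TEXT_CLASSIFICATION_ANNOTATION",
    "GENERAL_CLASSIFICATION_ANNOTATION",
    "IMAGE_BOUNDING_BOX_ANNOTATION",
}

def validate_input_config(cfg: dict) -> list:
    """Return a list of violations of the EvaluationJob inputConfig rules."""
    errors = []
    if cfg.get("dataType") not in ALLOWED_DATA_TYPES:
        errors.append("dataType must be IMAGE, TEXT, or GENERAL_DATA")
    if cfg.get("annotationType") not in ALLOWED_ANNOTATION_TYPES:
        errors.append("annotationType is not one of the four allowed values")
    if "CLASSIFICATION" in (cfg.get("annotationType") or ""):
        meta = cfg.get("classificationMetadata") or {}
        if "isMultiLabel" not in meta:
            errors.append("classification models must set classificationMetadata.isMultiLabel")
    if "bigquerySource" not in cfg or "gcsSource" in cfg:
        errors.append("specify bigquerySource (not gcsSource)")
    return errors

cfg = {
    "dataType": "IMAGE",
    "annotationType": "IMAGE_CLASSIFICATION_ANNOTATION",
    "classificationMetadata": {"isMultiLabel": False},
    "bigquerySource": {"inputUri": "bq://project.dataset.table"},
}
print(validate_input_config(cfg))  # []
```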

TextClassificationConfig GoogleCloudDatalabelingV1beta1TextClassificationConfigResponse

Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.

bigqueryImportKeys Map<String,String>

Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.
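Because sampled predictions are stored as JSON strings in BigQuery, the import keys describe where each piece lives inside that JSON. A rough illustration of the lookup; the row layout and the `extract_example` helper are assumptions for the sketch, not the service's actual parser.

```python
import json

def extract_example(row_json: str, import_keys: dict) -> dict:
    """Pull prediction input/output fields out of a sampled row
    using the configured bigqueryImportKeys."""
    row = json.loads(row_json)
    out = {}
    # Either data_json_key or reference_json_key identifies the input.
    for k in ("data_json_key", "reference_json_key"):
        if k in import_keys:
            out["input"] = row[import_keys[k]]
            break
    out["label"] = row[import_keys["label_json_key"]]        # required
    out["score"] = row[import_keys["label_score_json_key"]]  # required
    if "bounding_box_json_key" in import_keys:               # object detection only
        out["bounding_box"] = row[import_keys["bounding_box_json_key"]]
    return out

keys = {"data_json_key": "image_uri", "label_json_key": "label",
        "label_score_json_key": "score"}
row = json.dumps({"image_uri": "gs://bucket/img.png", "label": "cat", "score": 0.93})
print(extract_example(row, keys))
# {'input': 'gs://bucket/img.png', 'label': 'cat', 'score': 0.93}
```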

boundingPolyConfig GoogleCloudDatalabelingV1beta1BoundingPolyConfigResponse

Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.

evaluationConfig GoogleCloudDatalabelingV1beta1EvaluationConfigResponse

Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.

evaluationJobAlertConfig GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigResponse

Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.

exampleCount Integer

The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.

exampleSamplePercentage Double

Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.

humanAnnotationConfig GoogleCloudDatalabelingV1beta1HumanAnnotationConfigResponse

Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.

imageClassificationConfig GoogleCloudDatalabelingV1beta1ImageClassificationConfigResponse

Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.

inputConfig GoogleCloudDatalabelingV1beta1InputConfigResponse

Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).

textClassificationConfig GoogleCloudDatalabelingV1beta1TextClassificationConfigResponse

Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.

bigqueryImportKeys {[key: string]: string}

Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.

boundingPolyConfig GoogleCloudDatalabelingV1beta1BoundingPolyConfigResponse

Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.

evaluationConfig GoogleCloudDatalabelingV1beta1EvaluationConfigResponse

Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.

evaluationJobAlertConfig GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigResponse

Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.

exampleCount number

The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.

exampleSamplePercentage number

Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.

humanAnnotationConfig GoogleCloudDatalabelingV1beta1HumanAnnotationConfigResponse

Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.

imageClassificationConfig GoogleCloudDatalabelingV1beta1ImageClassificationConfigResponse

Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.

inputConfig GoogleCloudDatalabelingV1beta1InputConfigResponse

Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).

textClassificationConfig GoogleCloudDatalabelingV1beta1TextClassificationConfigResponse

Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.

bigquery_import_keys Mapping[str, str]

Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.

bounding_poly_config GoogleCloudDatalabelingV1beta1BoundingPolyConfigResponse

Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.

evaluation_config GoogleCloudDatalabelingV1beta1EvaluationConfigResponse

Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.

evaluation_job_alert_config GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigResponse

Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.

example_count int

The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.

example_sample_percentage float

Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.

human_annotation_config GoogleCloudDatalabelingV1beta1HumanAnnotationConfigResponse

Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.

image_classification_config GoogleCloudDatalabelingV1beta1ImageClassificationConfigResponse

Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.

input_config GoogleCloudDatalabelingV1beta1InputConfigResponse

Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).

text_classification_config GoogleCloudDatalabelingV1beta1TextClassificationConfigResponse

Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.

bigqueryImportKeys Map<String>

Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.

boundingPolyConfig Property Map

Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.

evaluationConfig Property Map

Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.

evaluationJobAlertConfig Property Map

Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.

exampleCount Number

The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.

exampleSamplePercentage Number

Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.

humanAnnotationConfig Property Map

Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.

imageClassificationConfig Property Map

Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.

inputConfig Property Map

Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).

textClassificationConfig Property Map

Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.

GoogleCloudDatalabelingV1beta1GcsSourceResponse

InputUri string

The input URI of the source file. This must be a Cloud Storage path (gs://...).

MimeType string

The format of the source file. Only "text/csv" is supported.
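Both GcsSource constraints are easy to check up front: the URI must be a gs:// path and the MIME type must be "text/csv". A hypothetical helper, not part of the SDK:

```python
def check_gcs_source(input_uri: str, mime_type: str) -> bool:
    """True if the source satisfies GcsSource's documented constraints."""
    return input_uri.startswith("gs://") and mime_type == "text/csv"

print(check_gcs_source("gs://my-bucket/data.csv", "text/csv"))   # True
print(check_gcs_source("s3://my-bucket/data.csv", "text/csv"))   # False
```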

InputUri string

The input URI of the source file. This must be a Cloud Storage path (gs://...).

MimeType string

The format of the source file. Only "text/csv" is supported.

inputUri String

The input URI of the source file. This must be a Cloud Storage path (gs://...).

mimeType String

The format of the source file. Only "text/csv" is supported.

inputUri string

The input URI of the source file. This must be a Cloud Storage path (gs://...).

mimeType string

The format of the source file. Only "text/csv" is supported.

input_uri str

The input URI of the source file. This must be a Cloud Storage path (gs://...).

mime_type str

The format of the source file. Only "text/csv" is supported.

inputUri String

The input URI of the source file. This must be a Cloud Storage path (gs://...).

mimeType String

The format of the source file. Only "text/csv" is supported.

GoogleCloudDatalabelingV1beta1HumanAnnotationConfigResponse

AnnotatedDatasetDescription string

Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.

AnnotatedDatasetDisplayName string

A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.

ContributorEmails List<string>

Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in crowdcompute worker UI: https://crowd-compute.appspot.com/

Instruction string

Instruction resource name.

LabelGroup string

Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression [a-zA-Z\\d_-]{0,128}.
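The labelGroup constraint is a plain regular expression, so it can be validated locally before creating the job. A small sketch (the helper name is an assumption; the pattern comes from the description above):

```python
import re

LABEL_GROUP_RE = re.compile(r"[a-zA-Z\d_-]{0,128}")

def is_valid_label_group(s: str) -> bool:
    """Check a labelGroup value against the documented pattern."""
    return LABEL_GROUP_RE.fullmatch(s) is not None

print(is_valid_label_group("nightly_eval-2022"))  # True
print(is_valid_label_group("has spaces"))         # False
```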

LanguageCode string

Optional. The language of this question, as a BCP-47 language code. Default value is en-US. Set this only when the task is language-related, for example, French text classification.

QuestionDuration string

Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
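QuestionDuration is a protobuf Duration rendered as a string such as "3600s" (the standard JSON encoding of Duration). A minimal parser for enforcing the 3600-second maximum; the helper itself is an illustrative assumption, not part of the SDK:

```python
def question_duration_ok(duration: str, max_seconds: int = 3600) -> bool:
    """Parse a Duration string like '1800s' and enforce the maximum."""
    if not duration.endswith("s"):
        return False
    try:
        seconds = float(duration[:-1])
    except ValueError:
        return False
    return 0 <= seconds <= max_seconds

print(question_duration_ok("1800s"))  # True
print(question_duration_ok("7200s"))  # False
```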

ReplicaCount int

Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image-related labeling, valid values are 1, 3, and 5.

UserEmailAddress string

Email of the user who started the labeling task and should be notified by email. If empty, no notification will be sent.

AnnotatedDatasetDescription string

Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.

AnnotatedDatasetDisplayName string

A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.

ContributorEmails []string

Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in crowdcompute worker UI: https://crowd-compute.appspot.com/

Instruction string

Instruction resource name.

LabelGroup string

Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression [a-zA-Z\\d_-]{0,128}.

LanguageCode string

Optional. The language of this question, as a BCP-47 language code. Default value is en-US. Set this only when the task is language-related, for example, French text classification.

QuestionDuration string

Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.

ReplicaCount int

Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image-related labeling, valid values are 1, 3, and 5.

UserEmailAddress string

Email of the user who started the labeling task and should be notified by email. If empty, no notification will be sent.

annotatedDatasetDescription String

Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.

annotatedDatasetDisplayName String

A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.

contributorEmails List<String>

Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in crowdcompute worker UI: https://crowd-compute.appspot.com/

instruction String

Instruction resource name.

labelGroup String

Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression [a-zA-Z\\d_-]{0,128}.

languageCode String

Optional. The language of this question, as a BCP-47 language code. Default value is en-US. Set this only when the task is language-related, for example, French text classification.

questionDuration String

Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.

replicaCount Integer

Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image-related labeling, valid values are 1, 3, and 5.

userEmailAddress String

Email of the user who started the labeling task and should be notified by email. If empty, no notification will be sent.

annotatedDatasetDescription string

Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.

annotatedDatasetDisplayName string

A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.

contributorEmails string[]

Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in crowdcompute worker UI: https://crowd-compute.appspot.com/

instruction string

Instruction resource name.

labelGroup string

Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression [a-zA-Z\\d_-]{0,128}.

languageCode string

Optional. The language of this question, as a BCP-47 language code. Default value is en-US. Set this only when the task is language-related, for example, French text classification.

questionDuration string

Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.

replicaCount number

Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image-related labeling, valid values are 1, 3, and 5.

userEmailAddress string

Email of the user who started the labeling task and should be notified by email. If empty, no notification will be sent.

annotated_dataset_description str

Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.

annotated_dataset_display_name str

A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.

contributor_emails Sequence[str]

Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in crowdcompute worker UI: https://crowd-compute.appspot.com/

instruction str

Instruction resource name.

label_group str

Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression [a-zA-Z\\d_-]{0,128}.

language_code str

Optional. The language of this question, as a BCP-47 language code. Default value is en-US. Set this only when the task is language-related, for example, French text classification.

question_duration str

Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.

replica_count int

Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image-related labeling, valid values are 1, 3, and 5.

user_email_address str

Email of the user who started the labeling task and should be notified by email. If empty, no notification will be sent.

annotatedDatasetDescription String

Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.

annotatedDatasetDisplayName String

A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.

contributorEmails List<String>

Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in crowdcompute worker UI: https://crowd-compute.appspot.com/

instruction String

Instruction resource name.

labelGroup String

Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression [a-zA-Z\\d_-]{0,128}.

languageCode String

Optional. The language of this question, as a BCP-47 language code. Default value is en-US. Set this only when the task is language-related, for example, French text classification.

questionDuration String

Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.

replicaCount Number

Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image-related labeling, valid values are 1, 3, and 5.

userEmailAddress String

Email of the user who started the labeling task and should be notified by email. If empty, no notification will be sent.

GoogleCloudDatalabelingV1beta1ImageClassificationConfigResponse

AllowMultiLabel bool

Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.

AnnotationSpecSet string

Annotation spec set resource name.

AnswerAggregationType string

Optional. How answers are aggregated.

AllowMultiLabel bool

Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.

AnnotationSpecSet string

Annotation spec set resource name.

AnswerAggregationType string

Optional. How answers are aggregated.

allowMultiLabel Boolean

Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.

annotationSpecSet String

Annotation spec set resource name.

answerAggregationType String

Optional. How answers are aggregated.

allowMultiLabel boolean

Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.

annotationSpecSet string

Annotation spec set resource name.

answerAggregationType string

Optional. How answers are aggregated.

allow_multi_label bool

Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.

annotation_spec_set str

Annotation spec set resource name.

answer_aggregation_type str

Optional. How answers are aggregated.

allowMultiLabel Boolean

Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.

annotationSpecSet String

Annotation spec set resource name.

answerAggregationType String

Optional. How answers are aggregated.

GoogleCloudDatalabelingV1beta1InputConfigResponse

AnnotationType string

Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.

BigquerySource Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1BigQuerySourceResponse

Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.

ClassificationMetadata Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1ClassificationMetadataResponse

Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.

DataType string

The data type must be specified when importing data.

GcsSource Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1GcsSourceResponse

Source located in Cloud Storage.

TextMetadata Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1TextMetadataResponse

Required for text import, as language code must be specified.

AnnotationType string

Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.

BigquerySource GoogleCloudDatalabelingV1beta1BigQuerySourceResponse

Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.

ClassificationMetadata GoogleCloudDatalabelingV1beta1ClassificationMetadataResponse

Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.

DataType string

The data type must be specified when importing data.

GcsSource GoogleCloudDatalabelingV1beta1GcsSourceResponse

Source located in Cloud Storage.

TextMetadata GoogleCloudDatalabelingV1beta1TextMetadataResponse

Required for text import, as language code must be specified.

annotationType String

Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.

bigquerySource GoogleCloudDatalabelingV1beta1BigQuerySourceResponse

Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.

classificationMetadata GoogleCloudDatalabelingV1beta1ClassificationMetadataResponse

Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.

dataType String

The data type must be specified when importing data.

gcsSource GoogleCloudDatalabelingV1beta1GcsSourceResponse

Source located in Cloud Storage.

textMetadata GoogleCloudDatalabelingV1beta1TextMetadataResponse

Required for text import, as language code must be specified.

annotationType string

Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.

bigquerySource GoogleCloudDatalabelingV1beta1BigQuerySourceResponse

Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.

classificationMetadata GoogleCloudDatalabelingV1beta1ClassificationMetadataResponse

Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.

dataType string

The data type must be specified when importing data.

gcsSource GoogleCloudDatalabelingV1beta1GcsSourceResponse

Source located in Cloud Storage.

textMetadata GoogleCloudDatalabelingV1beta1TextMetadataResponse

Required for text import, as language code must be specified.

annotation_type str

Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.

bigquery_source GoogleCloudDatalabelingV1beta1BigQuerySourceResponse

Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.

classification_metadata GoogleCloudDatalabelingV1beta1ClassificationMetadataResponse

Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.

data_type str

The data type must be specified when importing data.

gcs_source GoogleCloudDatalabelingV1beta1GcsSourceResponse

Source located in Cloud Storage.

text_metadata GoogleCloudDatalabelingV1beta1TextMetadataResponse

Required for text import, as language code must be specified.

annotationType String

Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.

bigquerySource Property Map

Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.

classificationMetadata Property Map

Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.

dataType String

The data type must be specified when importing data.

gcsSource Property Map

Source located in Cloud Storage.

textMetadata Property Map

Required for text import, as language code must be specified.
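The constraints stated in the InputConfig field descriptions above can be sketched as a small validation helper. This is a hypothetical illustration using plain Python dicts with the Python SDK's snake_case field names; the enum value, table URI, and language code shown are placeholders, not values taken from this page.

```python
def validate_input_config(cfg: dict) -> None:
    """Check the constraints the InputConfig field docs state for an EvaluationJob."""
    if not cfg.get("annotation_type"):
        raise ValueError("annotation_type must be specified for an EvaluationJob")
    if "bigquery_source" not in cfg and "gcs_source" not in cfg:
        raise ValueError("a bigquery_source or gcs_source must be specified")
    if not cfg.get("data_type"):
        raise ValueError("data_type must be specified when importing data")
    if cfg["data_type"] == "TEXT" and "text_metadata" not in cfg:
        raise ValueError("text_metadata (language code) is required for text import")


# Example config (placeholder values):
config = {
    "annotation_type": "TEXT_CLASSIFICATION_ANNOTATION",
    "data_type": "TEXT",
    "bigquery_source": {"input_uri": "bq://my-project.my_dataset.my_table"},
    "text_metadata": {"language_code": "en-US"},
}
validate_input_config(config)  # passes: all stated constraints satisfied
```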

GoogleCloudDatalabelingV1beta1SentimentConfigResponse

EnableLabelSentimentSelection bool

If set to true, contributors have the option to select the sentiment of the label they chose, marking it as a negative or positive label. Default is false.

EnableLabelSentimentSelection bool

If set to true, contributors have the option to select the sentiment of the label they chose, marking it as a negative or positive label. Default is false.

enableLabelSentimentSelection Boolean

If set to true, contributors have the option to select the sentiment of the label they chose, marking it as a negative or positive label. Default is false.

enableLabelSentimentSelection boolean

If set to true, contributors have the option to select the sentiment of the label they chose, marking it as a negative or positive label. Default is false.

enable_label_sentiment_selection bool

If set to true, contributors have the option to select the sentiment of the label they chose, marking it as a negative or positive label. Default is false.

enableLabelSentimentSelection Boolean

If set to true, contributors have the option to select the sentiment of the label they chose, marking it as a negative or positive label. Default is false.

GoogleCloudDatalabelingV1beta1TextClassificationConfigResponse

AllowMultiLabel bool

Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.

AnnotationSpecSet string

Annotation spec set resource name.

SentimentConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1SentimentConfigResponse

Optional. Configs for sentiment selection. Sentiment analysis is deprecated on the Data Labeling side because it is incompatible with uCAIP.

AllowMultiLabel bool

Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.

AnnotationSpecSet string

Annotation spec set resource name.

SentimentConfig GoogleCloudDatalabelingV1beta1SentimentConfigResponse

Optional. Configs for sentiment selection. Sentiment analysis is deprecated on the Data Labeling side because it is incompatible with uCAIP.

allowMultiLabel Boolean

Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.

annotationSpecSet String

Annotation spec set resource name.

sentimentConfig GoogleCloudDatalabelingV1beta1SentimentConfigResponse

Optional. Configs for sentiment selection. Sentiment analysis is deprecated on the Data Labeling side because it is incompatible with uCAIP.

allowMultiLabel boolean

Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.

annotationSpecSet string

Annotation spec set resource name.

sentimentConfig GoogleCloudDatalabelingV1beta1SentimentConfigResponse

Optional. Configs for sentiment selection. Sentiment analysis is deprecated on the Data Labeling side because it is incompatible with uCAIP.

allow_multi_label bool

Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.

annotation_spec_set str

Annotation spec set resource name.

sentiment_config GoogleCloudDatalabelingV1beta1SentimentConfigResponse

Optional. Configs for sentiment selection. Sentiment analysis is deprecated on the Data Labeling side because it is incompatible with uCAIP.

allowMultiLabel Boolean

Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.

annotationSpecSet String

Annotation spec set resource name.

sentimentConfig Property Map

Optional. Configs for sentiment selection. Sentiment analysis is deprecated on the Data Labeling side because it is incompatible with uCAIP.
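Putting the TextClassificationConfig fields above together, a minimal config can be sketched as a plain Python dict with the Python SDK's snake_case field names. The project and annotation spec set names are hypothetical placeholders.

```python
# Hypothetical TextClassificationConfig shape (placeholder resource names).
text_classification_config = {
    # Contributors may choose multiple labels for one text segment.
    "allow_multi_label": True,
    # Annotation spec set resource name.
    "annotation_spec_set": "projects/my-project/annotationSpecSets/my-spec-set",
    # sentiment_config is omitted: sentiment analysis is deprecated on the
    # Data Labeling side because it is incompatible with uCAIP.
}

assert "sentiment_config" not in text_classification_config
```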

GoogleCloudDatalabelingV1beta1TextMetadataResponse

LanguageCode string

The language of this text, as a BCP-47 language code. Default value is en-US.

LanguageCode string

The language of this text, as a BCP-47 language code. Default value is en-US.

languageCode String

The language of this text, as a BCP-47 language code. Default value is en-US.

languageCode string

The language of this text, as a BCP-47 language code. Default value is en-US.

language_code str

The language of this text, as a BCP-47 language code. Default value is en-US.

languageCode String

The language of this text, as a BCP-47 language code. Default value is en-US.

GoogleRpcStatusResponse

Code int

The status code, which should be an enum value of google.rpc.Code.

Details List<ImmutableDictionary<string, string>>

A list of messages that carry the error details. There is a common set of message types for APIs to use.

Message string

A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.

Code int

The status code, which should be an enum value of google.rpc.Code.

Details []map[string]string

A list of messages that carry the error details. There is a common set of message types for APIs to use.

Message string

A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.

code Integer

The status code, which should be an enum value of google.rpc.Code.

details List<Map<String,String>>

A list of messages that carry the error details. There is a common set of message types for APIs to use.

message String

A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.

code number

The status code, which should be an enum value of google.rpc.Code.

details {[key: string]: string}[]

A list of messages that carry the error details. There is a common set of message types for APIs to use.

message string

A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.

code int

The status code, which should be an enum value of google.rpc.Code.

details Sequence[Mapping[str, str]]

A list of messages that carry the error details. There is a common set of message types for APIs to use.

message str

A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.

code Number

The status code, which should be an enum value of google.rpc.Code.

details List<Map<String>>

A list of messages that carry the error details. There is a common set of message types for APIs to use.

message String

A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
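A GoogleRpcStatusResponse value can be interpreted as the following sketch shows. It uses a plain dict in the shape described above; the only external fact it relies on is the standard google.rpc convention that code 0 means OK. The helper name and example message are hypothetical.

```python
def format_status(status: dict) -> str:
    """Render a GoogleRpcStatusResponse-shaped dict for logging.

    Per google.rpc conventions, code 0 is OK. The message is a
    developer-facing English string, and details carries structured
    error messages.
    """
    if status.get("code", 0) == 0:
        return "OK"
    n = len(status.get("details", []))
    return f"error {status['code']}: {status.get('message', '')} ({n} detail message(s))"


print(format_status({"code": 0}))  # OK
print(format_status({"code": 5, "message": "evaluation job not found", "details": []}))
```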

Package Details

Repository
https://github.com/pulumi/pulumi-google-native
License
Apache-2.0