aws-native.sagemaker.DataQualityJobDefinition
Resource Type definition for AWS::SageMaker::DataQualityJobDefinition
Create DataQualityJobDefinition Resource
TypeScript:
new DataQualityJobDefinition(name: string, args: DataQualityJobDefinitionArgs, opts?: CustomResourceOptions);

Python:
@overload
def DataQualityJobDefinition(resource_name: str,
                             opts: Optional[ResourceOptions] = None,
                             data_quality_app_specification: Optional[DataQualityJobDefinitionDataQualityAppSpecificationArgs] = None,
                             data_quality_baseline_config: Optional[DataQualityJobDefinitionDataQualityBaselineConfigArgs] = None,
                             data_quality_job_input: Optional[DataQualityJobDefinitionDataQualityJobInputArgs] = None,
                             data_quality_job_output_config: Optional[DataQualityJobDefinitionMonitoringOutputConfigArgs] = None,
                             endpoint_name: Optional[str] = None,
                             job_definition_name: Optional[str] = None,
                             job_resources: Optional[DataQualityJobDefinitionMonitoringResourcesArgs] = None,
                             network_config: Optional[DataQualityJobDefinitionNetworkConfigArgs] = None,
                             role_arn: Optional[str] = None,
                             stopping_condition: Optional[DataQualityJobDefinitionStoppingConditionArgs] = None,
                             tags: Optional[Sequence[DataQualityJobDefinitionTagArgs]] = None)
@overload
def DataQualityJobDefinition(resource_name: str,
                             args: DataQualityJobDefinitionArgs,
                             opts: Optional[ResourceOptions] = None)

Go:
func NewDataQualityJobDefinition(ctx *Context, name string, args DataQualityJobDefinitionArgs, opts ...ResourceOption) (*DataQualityJobDefinition, error)

C#:
public DataQualityJobDefinition(string name, DataQualityJobDefinitionArgs args, CustomResourceOptions? opts = null)

Java:
public DataQualityJobDefinition(String name, DataQualityJobDefinitionArgs args)
public DataQualityJobDefinition(String name, DataQualityJobDefinitionArgs args, CustomResourceOptions options)

YAML:
type: aws-native:sagemaker:DataQualityJobDefinition
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Constructor parameters:

TypeScript:
- name (string): The unique name of the resource.
- args (DataQualityJobDefinitionArgs): The arguments to resource properties.
- opts (CustomResourceOptions): Bag of options to control resource's behavior.

Python:
- resource_name (str): The unique name of the resource.
- args (DataQualityJobDefinitionArgs): The arguments to resource properties.
- opts (ResourceOptions): Bag of options to control resource's behavior.

Go:
- ctx (Context): Context object for the current deployment.
- name (string): The unique name of the resource.
- args (DataQualityJobDefinitionArgs): The arguments to resource properties.
- opts (ResourceOption): Bag of options to control resource's behavior.

C#:
- name (string): The unique name of the resource.
- args (DataQualityJobDefinitionArgs): The arguments to resource properties.
- opts (CustomResourceOptions): Bag of options to control resource's behavior.

Java:
- name (String): The unique name of the resource.
- args (DataQualityJobDefinitionArgs): The arguments to resource properties.
- options (CustomResourceOptions): Bag of options to control resource's behavior.
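To make the shapes above concrete, here is a minimal sketch of an argument object for this resource in TypeScript. The endpoint name, role ARN, bucket, and image URI are hypothetical placeholders, and the nested property names follow the input types documented on this page; in a real program the object would be passed to the constructor from `@pulumi/aws-native`.

```typescript
// Hypothetical argument object for a DataQualityJobDefinition.
// All names (endpoint, role, bucket, image URI) are placeholders, not real resources.
const args = {
  dataQualityAppSpecification: {
    // The Model Monitor analyzer image URI is account- and region-specific.
    imageUri: "<model-monitor-image-uri>",
  },
  dataQualityJobInput: {
    endpointInput: {
      endpointName: "my-endpoint",            // placeholder endpoint
      localPath: "/opt/ml/processing/input",
    },
  },
  dataQualityJobOutputConfig: {
    monitoringOutputs: [{
      s3Output: {
        s3Uri: "s3://my-bucket/monitoring/output", // placeholder bucket/prefix
        localPath: "/opt/ml/processing/output",
      },
    }],
  },
  jobResources: {
    clusterConfig: {
      instanceCount: 1,
      instanceType: "ml.m5.xlarge",
      volumeSizeInGb: 20,
    },
  },
  roleArn: "arn:aws:iam::123456789012:role/my-monitoring-role", // placeholder role
};

// In a Pulumi program (assuming @pulumi/aws-native is installed):
// const jobDef = new aws_native.sagemaker.DataQualityJobDefinition("jobDef", args);
```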
DataQualityJobDefinition Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
The DataQualityJobDefinition resource accepts the following input properties:
Property names follow each language's conventions (for example, roleArn in TypeScript, RoleArn in C#, role_arn in Python); the canonical names are shown below.

- dataQualityAppSpecification (DataQualityJobDefinitionDataQualityAppSpecificationArgs, required)
- dataQualityJobInput (DataQualityJobDefinitionDataQualityJobInputArgs, required)
- dataQualityJobOutputConfig (DataQualityJobDefinitionMonitoringOutputConfigArgs, required)
- jobResources (DataQualityJobDefinitionMonitoringResourcesArgs, required)
- roleArn (string, required): The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume to perform tasks on your behalf.
- dataQualityBaselineConfig (DataQualityJobDefinitionDataQualityBaselineConfigArgs)
- endpointName (string)
- jobDefinitionName (string)
- networkConfig (DataQualityJobDefinitionNetworkConfigArgs)
- stoppingCondition (DataQualityJobDefinitionStoppingConditionArgs)
- tags (list of DataQualityJobDefinitionTagArgs): An array of key-value pairs to apply to this resource.
Outputs
All input properties are implicitly available as output properties. Additionally, the DataQualityJobDefinition resource produces the following output properties:
- creationTime (string): The time at which the job definition was created.
- id (string): The provider-assigned unique ID for this managed resource.
- jobDefinitionArn (string): The Amazon Resource Name (ARN) of the job definition.
Supporting Types
DataQualityJobDefinitionBatchTransformInput
- dataCapturedDestinationS3Uri (string, required): A URI that identifies the Amazon S3 storage location where the batch transform job captures data.
- datasetFormat (DataQualityJobDefinitionDatasetFormat, required)
- localPath (string, required): Path to the filesystem where the endpoint data is available to the container.
- s3DataDistributionType (DataQualityJobDefinitionBatchTransformInputS3DataDistributionType): Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.
- s3InputMode (DataQualityJobDefinitionBatchTransformInputS3InputMode): Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
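As an illustrative sketch, a batch transform input might look like this in TypeScript; the bucket name and paths are placeholders, and the enum strings come from the types listed on this page.

```typescript
// Hypothetical batch transform input; bucket name and paths are placeholders.
const batchTransformInput = {
  dataCapturedDestinationS3Uri: "s3://my-bucket/transform/captured",
  datasetFormat: { csv: { header: true } },   // CSV files with a header row
  localPath: "/opt/ml/processing/input",
  s3DataDistributionType: "FullyReplicated",  // the default
  s3InputMode: "File",                        // "Pipe" is recommended for large datasets
};
```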
DataQualityJobDefinitionBatchTransformInputS3DataDistributionType
Allowed values:
- FullyReplicated
- ShardedByS3Key
DataQualityJobDefinitionBatchTransformInputS3InputMode
Allowed values:
- Pipe
- File
DataQualityJobDefinitionClusterConfig
- instanceCount (integer): The number of ML compute instances to use in the model monitoring job. For distributed processing jobs, specify a value greater than 1. The default value is 1.
- instanceType (string): The ML compute instance type for the processing job.
- volumeSizeInGb (integer): The size of the ML storage volume, in gigabytes, that you want to provision. You must specify sufficient ML storage for your scenario.
- volumeKmsKeyId (string): The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the model monitoring job.
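A cluster configuration for jobResources could be sketched as follows; the instance type, volume size, and KMS key ARN are illustrative values only.

```typescript
// Hypothetical cluster configuration for the monitoring job's compute.
const clusterConfig = {
  instanceCount: 2,               // > 1 for distributed processing
  instanceType: "ml.m5.xlarge",
  volumeSizeInGb: 30,             // ML storage volume size in gigabytes
  // Optional: customer-managed KMS key for the attached storage volume (placeholder ARN).
  volumeKmsKeyId: "arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",
};
```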
DataQualityJobDefinitionConstraintsResource
- s3Uri (string): The Amazon S3 URI for the baseline constraints file in Amazon S3 that the current monitoring job should be validated against.
DataQualityJobDefinitionCsv
- header (boolean): A boolean flag indicating whether the given CSV file has a header.
DataQualityJobDefinitionDataQualityAppSpecification
- imageUri (string, required): The container image to be run by the monitoring job.
- containerArguments (list of string): An array of arguments for the container used to run the monitoring job.
- containerEntrypoint (list of string): Specifies the entrypoint for a container used to run the monitoring job.
- environment (object): Sets the environment variables in the Docker container.
- postAnalyticsProcessorSourceUri (string): An Amazon S3 URI to a script that is called after analysis has been performed. Applicable only for the built-in (first party) containers.
- recordPreprocessorSourceUri (string): An Amazon S3 URI to a script that is called per row prior to running analysis. It can base64-decode the payload and convert it into flattened JSON so that the built-in container can use the converted data. Applicable only for the built-in (first party) containers.
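A minimal app specification might be sketched as below; the image URI and script location are placeholders (the real analyzer image URI is account- and region-specific), and the environment variable shown is purely illustrative.

```typescript
// Hypothetical data quality app specification.
const appSpecification = {
  imageUri: "<model-monitor-analyzer-image-uri>", // placeholder; region-specific
  environment: { LOG_LEVEL: "INFO" },             // illustrative environment variable
  // Optional post-analysis script, applicable only to built-in containers (placeholder URI).
  postAnalyticsProcessorSourceUri: "s3://my-bucket/scripts/postprocess.py",
};
```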
DataQualityJobDefinitionDataQualityBaselineConfig
DataQualityJobDefinitionDataQualityJobInput
DataQualityJobDefinitionDatasetFormat
- csv (DataQualityJobDefinitionCsv)
- json (DataQualityJobDefinitionJson)
- parquet (boolean)
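The three format shapes can be sketched as plain objects (illustrative only; typically exactly one format is set):

```typescript
const csvFormat = { csv: { header: true } };   // CSV with a header row
const jsonFormat = { json: { line: true } };   // JSON Lines (one record per line)
const parquetFormat = { parquet: true };       // Parquet input
```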
DataQualityJobDefinitionEndpointInput
- endpointName (string, required)
- localPath (string, required): Path to the filesystem where the endpoint data is available to the container.
- s3DataDistributionType (DataQualityJobDefinitionEndpointInputS3DataDistributionType): Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.
- s3InputMode (DataQualityJobDefinitionEndpointInputS3InputMode): Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
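An endpoint input could be sketched like this in TypeScript; the endpoint name is a placeholder.

```typescript
// Hypothetical endpoint input for a real-time endpoint being monitored.
const endpointInput = {
  endpointName: "my-endpoint",            // placeholder
  localPath: "/opt/ml/processing/input",  // where the container sees the data
  s3DataDistributionType: "FullyReplicated",
  s3InputMode: "Pipe",                    // recommended for large datasets
};
```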
DataQualityJobDefinitionEndpointInputS3DataDistributionType
Allowed values:
- FullyReplicated
- ShardedByS3Key
DataQualityJobDefinitionEndpointInputS3InputMode
Allowed values:
- Pipe
- File
DataQualityJobDefinitionJson
- line (boolean): A boolean flag indicating whether the data is in JSON Lines format.
DataQualityJobDefinitionMonitoringOutput
DataQualityJobDefinitionMonitoringOutputConfig
- MonitoringOutputs List<DataQualityJobDefinitionMonitoringOutput>
Monitoring outputs for monitoring jobs. This is where the output of the periodic monitoring jobs is uploaded.
- KmsKeyId string
The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption.
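The shape of an output config is easiest to see as a literal. A plain-dict sketch using the camelCase key names of the YAML form shown at the top of this page; the bucket, prefix, and KMS key alias are placeholders, not real resources:

```python
# Plain-dict sketch of a dataQualityJobOutputConfig value. Each entry in
# monitoringOutputs wraps an s3Output describing where results land.
output_config = {
    "kmsKeyId": "alias/my-monitoring-key",  # hypothetical KMS key alias
    "monitoringOutputs": [
        {
            "s3Output": {
                "s3Uri": "s3://my-monitoring-bucket/data-quality/results",
                "localPath": "/opt/ml/processing/output",
                "s3UploadMode": "EndOfJob",
            }
        }
    ],
}
```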
DataQualityJobDefinitionMonitoringResources
DataQualityJobDefinitionNetworkConfig
- EnableInterContainerTrafficEncryption bool
Whether to encrypt all communications between distributed processing jobs. Choose True to encrypt communications. Encryption provides greater security for distributed processing jobs, but the processing might take longer.
- EnableNetworkIsolation bool
Whether to allow inbound and outbound network calls to and from the containers used for the processing job.
- VpcConfig DataQualityJobDefinitionVpcConfig
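A plain-dict sketch of a networkConfig value (camelCase keys as in the YAML form); the security-group and subnet IDs below are placeholders:

```python
# Sketch of a networkConfig argument. Turning on inter-container traffic
# encryption trades some job runtime for security; network isolation cuts
# the job containers off from the network entirely.
network_config = {
    "enableInterContainerTrafficEncryption": True,
    "enableNetworkIsolation": False,
    "vpcConfig": {
        "securityGroupIds": ["sg-0123456789abcdef0"],  # placeholder security group
        "subnets": ["subnet-0123456789abcdef0"],       # placeholder subnet
    },
}
```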
DataQualityJobDefinitionS3Output
- LocalPath string
The local path to the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job. LocalPath is an absolute path for the output data.
- S3Uri string
A URI that identifies the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job.
- S3UploadMode DataQualityJobDefinitionS3OutputS3UploadMode ("Continuous" | "EndOfJob")
Whether to upload the results of the monitoring job continuously or after the job completes.
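The constraints above (an `s3://` URI, an absolute local path, and one of two upload modes) can be checked up front. A hypothetical pre-flight helper, not part of any SDK:

```python
import os

def validate_s3_output(s3_uri: str, local_path: str, upload_mode: str) -> dict:
    """Sketch of a pre-flight check mirroring the S3Output constraints above."""
    if not s3_uri.startswith("s3://"):
        raise ValueError("s3Uri must be an S3 URI (s3://bucket/prefix)")
    if not os.path.isabs(local_path):
        raise ValueError("localPath must be an absolute path")
    if upload_mode not in ("Continuous", "EndOfJob"):
        raise ValueError("s3UploadMode must be 'Continuous' or 'EndOfJob'")
    return {"s3Uri": s3_uri, "localPath": local_path, "s3UploadMode": upload_mode}
```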
DataQualityJobDefinitionS3OutputS3UploadMode
- Continuous
- EndOfJob
(The same two values apply in every SDK; Python constants: CONTINUOUS, END_OF_JOB.)
DataQualityJobDefinitionStatisticsResource
- S3Uri string
The Amazon S3 URI for the baseline statistics file in Amazon S3 that the current monitoring job should be validated against.
DataQualityJobDefinitionStoppingCondition
- MaxRuntimeInSeconds int
The maximum runtime allowed in seconds.
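Since the resource takes a raw integer of seconds, it can be clearer to derive the value from a `timedelta` than to hard-code the arithmetic. A minimal sketch:

```python
from datetime import timedelta

# Express the stopping condition as a duration, then convert to the
# integer seconds the resource expects (1 hour -> 3600 seconds).
stopping_condition = {"maxRuntimeInSeconds": int(timedelta(hours=1).total_seconds())}
```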
DataQualityJobDefinitionTag
- Key string
The key name of the tag. You can specify a value that is 1 to 127 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
- Value string
The value for the tag. You can specify a value that is 1 to 255 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
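The length and prefix rules above are easy to encode as a check. A hypothetical validator (not part of any SDK) covering just the documented limits:

```python
def validate_tag(key: str, value: str) -> dict:
    """Sketch: enforce the tag length and aws: prefix rules described above."""
    if not 1 <= len(key) <= 127:
        raise ValueError("tag key must be 1 to 127 characters")
    if not 1 <= len(value) <= 255:
        raise ValueError("tag value must be 1 to 255 characters")
    if key.lower().startswith("aws:"):
        raise ValueError("tag keys cannot be prefixed with aws:")
    return {"key": key, "value": value}
```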
DataQualityJobDefinitionVpcConfig
- SecurityGroupIds List<string>
The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.
- Subnets List<string>
The IDs of the subnets in the VPC to which you want to connect your monitoring jobs.
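Putting the pieces together, here is a plain-dict sketch of a complete `properties` bag in the shape of the YAML form at the top of this page. Every concrete name below (job name, role ARN, endpoint, bucket, image URI) is a placeholder, and exact key spellings may differ slightly between SDKs:

```python
# Assembled sketch of the resource properties: app specification, job
# input, output config, compute resources, and stopping condition.
props = {
    "jobDefinitionName": "my-data-quality-job",  # hypothetical
    "roleArn": "arn:aws:iam::123456789012:role/SageMakerMonitoringRole",
    "dataQualityAppSpecification": {
        # Placeholder analyzer image; use the Model Monitor image for your region.
        "imageUri": "123456789012.dkr.ecr.us-east-1.amazonaws.com/sagemaker-model-monitor-analyzer",
    },
    "dataQualityJobInput": {
        "endpointInput": {
            "endpointName": "my-endpoint",
            "localPath": "/opt/ml/processing/input",
            "s3InputMode": "File",
            "s3DataDistributionType": "FullyReplicated",
        }
    },
    "dataQualityJobOutputConfig": {
        "monitoringOutputs": [
            {"s3Output": {
                "s3Uri": "s3://my-monitoring-bucket/results",
                "localPath": "/opt/ml/processing/output",
                "s3UploadMode": "EndOfJob",
            }}
        ]
    },
    "jobResources": {
        "clusterConfig": {
            "instanceCount": 1,
            "instanceType": "ml.m5.xlarge",
            "volumeSizeInGb": 20,
        }
    },
    "stoppingCondition": {"maxRuntimeInSeconds": 3600},
}
```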
Package Details
- Repository
- AWS Native pulumi/pulumi-aws-native
- License
- Apache-2.0