
google-native.dataplex/v1.DataScan

Google Cloud Native is in preview. Google Cloud Classic is fully supported.

Google Cloud Native v0.31.1 published on Thursday, Jul 20, 2023 by Pulumi

    Creates a DataScan resource. Auto-naming is currently not supported for this resource.

    Create DataScan Resource

    new DataScan(name: string, args: DataScanArgs, opts?: CustomResourceOptions);
    @overload
    def DataScan(resource_name: str,
                 opts: Optional[ResourceOptions] = None,
                 data: Optional[GoogleCloudDataplexV1DataSourceArgs] = None,
                 data_profile_spec: Optional[GoogleCloudDataplexV1DataProfileSpecArgs] = None,
                 data_quality_spec: Optional[GoogleCloudDataplexV1DataQualitySpecArgs] = None,
                 data_scan_id: Optional[str] = None,
                 description: Optional[str] = None,
                 display_name: Optional[str] = None,
                 execution_spec: Optional[GoogleCloudDataplexV1DataScanExecutionSpecArgs] = None,
                 labels: Optional[Mapping[str, str]] = None,
                 location: Optional[str] = None,
                 project: Optional[str] = None)
    @overload
    def DataScan(resource_name: str,
                 args: DataScanArgs,
                 opts: Optional[ResourceOptions] = None)
    func NewDataScan(ctx *Context, name string, args DataScanArgs, opts ...ResourceOption) (*DataScan, error)
    public DataScan(string name, DataScanArgs args, CustomResourceOptions? opts = null)
    public DataScan(String name, DataScanArgs args)
    public DataScan(String name, DataScanArgs args, CustomResourceOptions options)
    
    type: google-native:dataplex/v1:DataScan
    properties: # The arguments to resource properties.
    options: # Bag of options to control resource's behavior.
    
    
    name string
    The unique name of the resource.
    args DataScanArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    resource_name str
    The unique name of the resource.
    args DataScanArgs
    The arguments to resource properties.
    opts ResourceOptions
    Bag of options to control resource's behavior.
    ctx Context
    Context object for the current deployment.
    name string
    The unique name of the resource.
    args DataScanArgs
    The arguments to resource properties.
    opts ResourceOption
    Bag of options to control resource's behavior.
    name string
    The unique name of the resource.
    args DataScanArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    name String
    The unique name of the resource.
    args DataScanArgs
    The arguments to resource properties.
    options CustomResourceOptions
    Bag of options to control resource's behavior.
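
    Example (TypeScript). A minimal sketch of the constructor call above. The project, location, and BigQuery table path are placeholders, and the data source's resource field is an assumption about the GoogleCloudDataplexV1DataSource shape rather than something confirmed on this page.

    import * as google_native from "@pulumi/google-native";

    // Minimal DataScan: the required dataScanId plus a data source and a
    // default data profile spec. All names and paths below are illustrative.
    const scan = new google_native.dataplex.v1.DataScan("example-scan", {
        dataScanId: "example-scan",   // lowercase letters, numbers, hyphens; must start with a letter
        location: "us-central1",
        data: {
            // Assumed field: a DataSource typically points at a BigQuery resource or a Dataplex entity.
            resource: "//bigquery.googleapis.com/projects/my-project/datasets/my_dataset/tables/my_table",
        },
        dataProfileSpec: {},          // run a data profile scan with default settings
    });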

    DataScan Resource Properties

    To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.

    Inputs

    The DataScan resource accepts the following input properties:

    Data Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataSource

    The data source for DataScan.

    DataScanId string

    Required. DataScan identifier. Must contain only lowercase letters, numbers and hyphens. Must start with a letter. Must end with a number or a letter. Must be between 1-63 characters. Must be unique within the customer project / location.

    DataProfileSpec Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileSpec

    DataProfileScan related setting.

    DataQualitySpec Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualitySpec

    DataQualityScan related setting.

    Description string

    Optional. Description of the scan. Must be between 1-1024 characters.

    DisplayName string

    Optional. User-friendly display name. Must be between 1-256 characters.

    ExecutionSpec Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataScanExecutionSpec

    Optional. DataScan execution settings. If not specified, the fields in it will use their default values.

    Labels Dictionary<string, string>

    Optional. User-defined labels for the scan.

    Location string
    Project string
    Data GoogleCloudDataplexV1DataSourceArgs

    The data source for DataScan.

    DataScanId string

    Required. DataScan identifier. Must contain only lowercase letters, numbers and hyphens. Must start with a letter. Must end with a number or a letter. Must be between 1-63 characters. Must be unique within the customer project / location.

    DataProfileSpec GoogleCloudDataplexV1DataProfileSpecArgs

    DataProfileScan related setting.

    DataQualitySpec GoogleCloudDataplexV1DataQualitySpecArgs

    DataQualityScan related setting.

    Description string

    Optional. Description of the scan. Must be between 1-1024 characters.

    DisplayName string

    Optional. User-friendly display name. Must be between 1-256 characters.

    ExecutionSpec GoogleCloudDataplexV1DataScanExecutionSpecArgs

    Optional. DataScan execution settings. If not specified, the fields in it will use their default values.

    Labels map[string]string

    Optional. User-defined labels for the scan.

    Location string
    Project string
    data GoogleCloudDataplexV1DataSource

    The data source for DataScan.

    dataScanId String

    Required. DataScan identifier. Must contain only lowercase letters, numbers and hyphens. Must start with a letter. Must end with a number or a letter. Must be between 1-63 characters. Must be unique within the customer project / location.

    dataProfileSpec GoogleCloudDataplexV1DataProfileSpec

    DataProfileScan related setting.

    dataQualitySpec GoogleCloudDataplexV1DataQualitySpec

    DataQualityScan related setting.

    description String

    Optional. Description of the scan. Must be between 1-1024 characters.

    displayName String

    Optional. User-friendly display name. Must be between 1-256 characters.

    executionSpec GoogleCloudDataplexV1DataScanExecutionSpec

    Optional. DataScan execution settings. If not specified, the fields in it will use their default values.

    labels Map<String,String>

    Optional. User-defined labels for the scan.

    location String
    project String
    data GoogleCloudDataplexV1DataSource

    The data source for DataScan.

    dataScanId string

    Required. DataScan identifier. Must contain only lowercase letters, numbers and hyphens. Must start with a letter. Must end with a number or a letter. Must be between 1-63 characters. Must be unique within the customer project / location.

    dataProfileSpec GoogleCloudDataplexV1DataProfileSpec

    DataProfileScan related setting.

    dataQualitySpec GoogleCloudDataplexV1DataQualitySpec

    DataQualityScan related setting.

    description string

    Optional. Description of the scan. Must be between 1-1024 characters.

    displayName string

    Optional. User-friendly display name. Must be between 1-256 characters.

    executionSpec GoogleCloudDataplexV1DataScanExecutionSpec

    Optional. DataScan execution settings. If not specified, the fields in it will use their default values.

    labels {[key: string]: string}

    Optional. User-defined labels for the scan.

    location string
    project string
    data GoogleCloudDataplexV1DataSourceArgs

    The data source for DataScan.

    data_scan_id str

    Required. DataScan identifier. Must contain only lowercase letters, numbers and hyphens. Must start with a letter. Must end with a number or a letter. Must be between 1-63 characters. Must be unique within the customer project / location.

    data_profile_spec GoogleCloudDataplexV1DataProfileSpecArgs

    DataProfileScan related setting.

    data_quality_spec GoogleCloudDataplexV1DataQualitySpecArgs

    DataQualityScan related setting.

    description str

    Optional. Description of the scan. Must be between 1-1024 characters.

    display_name str

    Optional. User-friendly display name. Must be between 1-256 characters.

    execution_spec GoogleCloudDataplexV1DataScanExecutionSpecArgs

    Optional. DataScan execution settings. If not specified, the fields in it will use their default values.

    labels Mapping[str, str]

    Optional. User-defined labels for the scan.

    location str
    project str
    data Property Map

    The data source for DataScan.

    dataScanId String

    Required. DataScan identifier. Must contain only lowercase letters, numbers and hyphens. Must start with a letter. Must end with a number or a letter. Must be between 1-63 characters. Must be unique within the customer project / location.

    dataProfileSpec Property Map

    DataProfileScan related setting.

    dataQualitySpec Property Map

    DataQualityScan related setting.

    description String

    Optional. Description of the scan. Must be between 1-1024 characters.

    displayName String

    Optional. User-friendly display name. Must be between 1-256 characters.

    executionSpec Property Map

    Optional. DataScan execution settings. If not specified, the fields in it will use their default values.

    labels Map<String>

    Optional. User-defined labels for the scan.

    location String
    project String
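
    Example (TypeScript). A sketch combining the inputs above into a data quality scan. The rules field on dataQualitySpec and the empty nonNullExpectation body are assumptions based on the GoogleCloudDataplexV1DataQualityRule type shown under Supporting Types; the table path, labels, and threshold are illustrative only.

    import * as google_native from "@pulumi/google-native";

    // Data quality scan: checks that the order_id column is never null.
    const qualityScan = new google_native.dataplex.v1.DataScan("orders-quality-scan", {
        dataScanId: "orders-quality-scan",
        location: "us-central1",
        displayName: "Orders quality scan",
        description: "Checks completeness of the orders table",
        labels: { team: "data-platform" },
        data: {
            // Placeholder BigQuery table; the resource field is an assumed DataSource shape.
            resource: "//bigquery.googleapis.com/projects/my-project/datasets/sales/tables/orders",
        },
        dataQualitySpec: {
            rules: [{
                column: "order_id",
                dimension: "COMPLETENESS",
                nonNullExpectation: {},   // ColumnMap rule: the column value must not be null
                threshold: 1.0,           // every scanned row must pass
            }],
        },
    });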

    Outputs

    All input properties are implicitly available as output properties. Additionally, the DataScan resource produces the following output properties:

    CreateTime string

    The time when the scan was created.

    DataProfileResult Pulumi.GoogleNative.Dataplex.V1.Outputs.GoogleCloudDataplexV1DataProfileResultResponse

    The result of the data profile scan.

    DataQualityResult Pulumi.GoogleNative.Dataplex.V1.Outputs.GoogleCloudDataplexV1DataQualityResultResponse

    The result of the data quality scan.

    ExecutionStatus Pulumi.GoogleNative.Dataplex.V1.Outputs.GoogleCloudDataplexV1DataScanExecutionStatusResponse

    Status of the data scan execution.

    Id string

    The provider-assigned unique ID for this managed resource.

    Name string

    The relative resource name of the scan, of the form: projects/{project}/locations/{location_id}/dataScans/{datascan_id}, where project refers to a project_id or project_number and location_id refers to a GCP region.

    State string

    Current state of the DataScan.

    Type string

    The type of DataScan.

    Uid string

    System generated globally unique ID for the scan. This ID will be different if the scan is deleted and re-created with the same name.

    UpdateTime string

    The time when the scan was last updated.

    CreateTime string

    The time when the scan was created.

    DataProfileResult GoogleCloudDataplexV1DataProfileResultResponse

    The result of the data profile scan.

    DataQualityResult GoogleCloudDataplexV1DataQualityResultResponse

    The result of the data quality scan.

    ExecutionStatus GoogleCloudDataplexV1DataScanExecutionStatusResponse

    Status of the data scan execution.

    Id string

    The provider-assigned unique ID for this managed resource.

    Name string

    The relative resource name of the scan, of the form: projects/{project}/locations/{location_id}/dataScans/{datascan_id}, where project refers to a project_id or project_number and location_id refers to a GCP region.

    State string

    Current state of the DataScan.

    Type string

    The type of DataScan.

    Uid string

    System generated globally unique ID for the scan. This ID will be different if the scan is deleted and re-created with the same name.

    UpdateTime string

    The time when the scan was last updated.

    createTime String

    The time when the scan was created.

    dataProfileResult GoogleCloudDataplexV1DataProfileResultResponse

    The result of the data profile scan.

    dataQualityResult GoogleCloudDataplexV1DataQualityResultResponse

    The result of the data quality scan.

    executionStatus GoogleCloudDataplexV1DataScanExecutionStatusResponse

    Status of the data scan execution.

    id String

    The provider-assigned unique ID for this managed resource.

    name String

    The relative resource name of the scan, of the form: projects/{project}/locations/{location_id}/dataScans/{datascan_id}, where project refers to a project_id or project_number and location_id refers to a GCP region.

    state String

    Current state of the DataScan.

    type String

    The type of DataScan.

    uid String

    System generated globally unique ID for the scan. This ID will be different if the scan is deleted and re-created with the same name.

    updateTime String

    The time when the scan was last updated.

    createTime string

    The time when the scan was created.

    dataProfileResult GoogleCloudDataplexV1DataProfileResultResponse

    The result of the data profile scan.

    dataQualityResult GoogleCloudDataplexV1DataQualityResultResponse

    The result of the data quality scan.

    executionStatus GoogleCloudDataplexV1DataScanExecutionStatusResponse

    Status of the data scan execution.

    id string

    The provider-assigned unique ID for this managed resource.

    name string

    The relative resource name of the scan, of the form: projects/{project}/locations/{location_id}/dataScans/{datascan_id}, where project refers to a project_id or project_number and location_id refers to a GCP region.

    state string

    Current state of the DataScan.

    type string

    The type of DataScan.

    uid string

    System generated globally unique ID for the scan. This ID will be different if the scan is deleted and re-created with the same name.

    updateTime string

    The time when the scan was last updated.

    create_time str

    The time when the scan was created.

    data_profile_result GoogleCloudDataplexV1DataProfileResultResponse

    The result of the data profile scan.

    data_quality_result GoogleCloudDataplexV1DataQualityResultResponse

    The result of the data quality scan.

    execution_status GoogleCloudDataplexV1DataScanExecutionStatusResponse

    Status of the data scan execution.

    id str

    The provider-assigned unique ID for this managed resource.

    name str

    The relative resource name of the scan, of the form: projects/{project}/locations/{location_id}/dataScans/{datascan_id}, where project refers to a project_id or project_number and location_id refers to a GCP region.

    state str

    Current state of the DataScan.

    type str

    The type of DataScan.

    uid str

    System generated globally unique ID for the scan. This ID will be different if the scan is deleted and re-created with the same name.

    update_time str

    The time when the scan was last updated.

    createTime String

    The time when the scan was created.

    dataProfileResult Property Map

    The result of the data profile scan.

    dataQualityResult Property Map

    The result of the data quality scan.

    executionStatus Property Map

    Status of the data scan execution.

    id String

    The provider-assigned unique ID for this managed resource.

    name String

    The relative resource name of the scan, of the form: projects/{project}/locations/{location_id}/dataScans/{datascan_id}, where project refers to a project_id or project_number and location_id refers to a GCP region.

    state String

    Current state of the DataScan.

    type String

    The type of DataScan.

    uid String

    System generated globally unique ID for the scan. This ID will be different if the scan is deleted and re-created with the same name.

    updateTime String

    The time when the scan was last updated.
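
    Example (TypeScript). Output properties can be exported like any other Pulumi outputs; a short sketch assuming the scan resource declared in the earlier example.

    // Read-only outputs become available once the scan has been created.
    export const scanName = scan.name;             // projects/{project}/locations/{location_id}/dataScans/{datascan_id}
    export const scanState = scan.state;           // current state of the DataScan
    export const scanCreateTime = scan.createTime; // creation timestamp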

    Supporting Types

    GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoDoubleFieldInfoResponse, GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoDoubleFieldInfoResponseArgs

    Average double

    Average of non-null values in the scanned data. NaN, if the field has a NaN.

    Max double

    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.

    Min double

    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.

    Quartiles List<double>

    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.

    StandardDeviation double

    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.

    Average float64

    Average of non-null values in the scanned data. NaN, if the field has a NaN.

    Max float64

    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.

    Min float64

    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.

    Quartiles []float64

    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.

    StandardDeviation float64

    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.

    average Double

    Average of non-null values in the scanned data. NaN, if the field has a NaN.

    max Double

    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.

    min Double

    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.

    quartiles List<Double>

    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.

    standardDeviation Double

    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.

    average number

    Average of non-null values in the scanned data. NaN, if the field has a NaN.

    max number

    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.

    min number

    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.

    quartiles number[]

    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.

    standardDeviation number

    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.

    average float

    Average of non-null values in the scanned data. NaN, if the field has a NaN.

    max float

    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.

    min float

    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.

    quartiles Sequence[float]

    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.

    standard_deviation float

    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.

    average Number

    Average of non-null values in the scanned data. NaN, if the field has a NaN.

    max Number

    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.

    min Number

    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.

    quartiles List<Number>

    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.

    standardDeviation Number

    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.

    GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoIntegerFieldInfoResponse, GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoIntegerFieldInfoResponseArgs

    Average double

    Average of non-null values in the scanned data. NaN, if the field has a NaN.

    Max string

    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.

    Min string

    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.

    Quartiles List<string>

    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.

    StandardDeviation double

    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.

    Average float64

    Average of non-null values in the scanned data. NaN, if the field has a NaN.

    Max string

    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.

    Min string

    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.

    Quartiles []string

    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.

    StandardDeviation float64

    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.

    average Double

    Average of non-null values in the scanned data. NaN, if the field has a NaN.

    max String

    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.

    min String

    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.

    quartiles List<String>

    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.

    standardDeviation Double

    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.

    average number

    Average of non-null values in the scanned data. NaN, if the field has a NaN.

    max string

    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.

    min string

    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.

    quartiles string[]

    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.

    standardDeviation number

    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.

    average float

    Average of non-null values in the scanned data. NaN, if the field has a NaN.

    max str

    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.

    min str

    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.

    quartiles Sequence[str]

    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.

    standard_deviation float

    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.

    average Number

    Average of non-null values in the scanned data. NaN, if the field has a NaN.

    max String

    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.

    min String

    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.

    quartiles List<String>

    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.

    standardDeviation Number

    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.

    GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoResponse, GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoResponseArgs

    DistinctRatio double

    Ratio of rows with distinct values against total scanned rows. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.

    DoubleProfile Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoDoubleFieldInfoResponse

    Double type field information.

    IntegerProfile Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoIntegerFieldInfoResponse

    Integer type field information.

    NullRatio double

    Ratio of rows with null value against total scanned rows.

    StringProfile Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoStringFieldInfoResponse

    String type field information.

    TopNValues List<Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoTopNValueResponse>

    The list of top N non-null values and number of times they occur in the scanned data. N is 10 or equal to the number of distinct values in the field, whichever is smaller. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.

    DistinctRatio float64

    Ratio of rows with distinct values against total scanned rows. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.

    DoubleProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoDoubleFieldInfoResponse

    Double type field information.

    IntegerProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoIntegerFieldInfoResponse

    Integer type field information.

    NullRatio float64

    Ratio of rows with null value against total scanned rows.

    StringProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoStringFieldInfoResponse

    String type field information.

    TopNValues []GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoTopNValueResponse

    The list of top N non-null values and number of times they occur in the scanned data. N is 10 or equal to the number of distinct values in the field, whichever is smaller. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.

    distinctRatio Double

    Ratio of rows with distinct values against total scanned rows. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.

    doubleProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoDoubleFieldInfoResponse

    Double type field information.

    integerProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoIntegerFieldInfoResponse

    Integer type field information.

    nullRatio Double

    Ratio of rows with null value against total scanned rows.

    stringProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoStringFieldInfoResponse

    String type field information.

    topNValues List<GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoTopNValueResponse>

    The list of top N non-null values and number of times they occur in the scanned data. N is 10 or equal to the number of distinct values in the field, whichever is smaller. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.

    distinctRatio number

    Ratio of rows with distinct values against total scanned rows. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.

    doubleProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoDoubleFieldInfoResponse

    Double type field information.

    integerProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoIntegerFieldInfoResponse

    Integer type field information.

    nullRatio number

    Ratio of rows with null value against total scanned rows.

    stringProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoStringFieldInfoResponse

    String type field information.

    topNValues GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoTopNValueResponse[]

    The list of top N non-null values and number of times they occur in the scanned data. N is 10 or equal to the number of distinct values in the field, whichever is smaller. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.

    distinct_ratio float

    Ratio of rows with distinct values against total scanned rows. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.

    double_profile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoDoubleFieldInfoResponse

    Double type field information.

    integer_profile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoIntegerFieldInfoResponse

    Integer type field information.

    null_ratio float

    Ratio of rows with null value against total scanned rows.

    string_profile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoStringFieldInfoResponse

    String type field information.

    top_n_values Sequence[GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoTopNValueResponse]

    The list of top N non-null values and number of times they occur in the scanned data. N is 10 or equal to the number of distinct values in the field, whichever is smaller. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.

    distinctRatio Number

    Ratio of rows with distinct values against total scanned rows. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.

    doubleProfile Property Map

    Double type field information.

    integerProfile Property Map

    Integer type field information.

    nullRatio Number

    Ratio of rows with null value against total scanned rows.

    stringProfile Property Map

    String type field information.

    topNValues List<Property Map>

    The list of top N non-null values and number of times they occur in the scanned data. N is 10 or equal to the number of distinct values in the field, whichever is smaller. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.

    GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoStringFieldInfoResponse, GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoStringFieldInfoResponseArgs

    AverageLength double

    Average length of non-null values in the scanned data.

    MaxLength string

    Maximum length of non-null values in the scanned data.

    MinLength string

    Minimum length of non-null values in the scanned data.

    AverageLength float64

    Average length of non-null values in the scanned data.

    MaxLength string

    Maximum length of non-null values in the scanned data.

    MinLength string

    Minimum length of non-null values in the scanned data.

    averageLength Double

    Average length of non-null values in the scanned data.

    maxLength String

    Maximum length of non-null values in the scanned data.

    minLength String

    Minimum length of non-null values in the scanned data.

    averageLength number

    Average length of non-null values in the scanned data.

    maxLength string

    Maximum length of non-null values in the scanned data.

    minLength string

    Minimum length of non-null values in the scanned data.

    average_length float

    Average length of non-null values in the scanned data.

    max_length str

    Maximum length of non-null values in the scanned data.

    min_length str

    Minimum length of non-null values in the scanned data.

    averageLength Number

    Average length of non-null values in the scanned data.

    maxLength String

    Maximum length of non-null values in the scanned data.

    minLength String

    Minimum length of non-null values in the scanned data.

    GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoTopNValueResponse, GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoTopNValueResponseArgs

    Count string

    Count of the corresponding value in the scanned data.

    Value string

    String value of a top N non-null value.

    Count string

    Count of the corresponding value in the scanned data.

    Value string

    String value of a top N non-null value.

    count String

    Count of the corresponding value in the scanned data.

    value String

    String value of a top N non-null value.

    count string

    Count of the corresponding value in the scanned data.

    value string

    String value of a top N non-null value.

    count str

    Count of the corresponding value in the scanned data.

    value str

    String value of a top N non-null value.

    count String

    Count of the corresponding value in the scanned data.

    value String

    String value of a top N non-null value.

    GoogleCloudDataplexV1DataProfileResultProfileFieldResponse, GoogleCloudDataplexV1DataProfileResultProfileFieldResponseArgs

    Mode string

    The mode of the field. Possible values include: REQUIRED, if it is a required field. NULLABLE, if it is an optional field. REPEATED, if it is a repeated field.

    Name string

    The name of the field.

    Profile Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoResponse

    Profile information for the corresponding field.

    Type string

    The data type retrieved from the schema of the data source. For instance, for a BigQuery native table, it is the BigQuery Table Schema (https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#tablefieldschema). For a Dataplex Entity, it is the Entity Schema (https://cloud.google.com/dataplex/docs/reference/rpc/google.cloud.dataplex.v1#type_3).

    Mode string

    The mode of the field. Possible values include: REQUIRED, if it is a required field. NULLABLE, if it is an optional field. REPEATED, if it is a repeated field.

    Name string

    The name of the field.

    Profile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoResponse

    Profile information for the corresponding field.

    Type string

    The data type retrieved from the schema of the data source. For instance, for a BigQuery native table, it is the BigQuery Table Schema (https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#tablefieldschema). For a Dataplex Entity, it is the Entity Schema (https://cloud.google.com/dataplex/docs/reference/rpc/google.cloud.dataplex.v1#type_3).

    mode String

    The mode of the field. Possible values include: REQUIRED, if it is a required field. NULLABLE, if it is an optional field. REPEATED, if it is a repeated field.

    name String

    The name of the field.

    profile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoResponse

    Profile information for the corresponding field.

    type String

    The data type retrieved from the schema of the data source. For instance, for a BigQuery native table, it is the BigQuery Table Schema (https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#tablefieldschema). For a Dataplex Entity, it is the Entity Schema (https://cloud.google.com/dataplex/docs/reference/rpc/google.cloud.dataplex.v1#type_3).

    mode string

    The mode of the field. Possible values include: REQUIRED, if it is a required field. NULLABLE, if it is an optional field. REPEATED, if it is a repeated field.

    name string

    The name of the field.

    profile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoResponse

    Profile information for the corresponding field.

    type string

    The data type retrieved from the schema of the data source. For instance, for a BigQuery native table, it is the BigQuery Table Schema (https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#tablefieldschema). For a Dataplex Entity, it is the Entity Schema (https://cloud.google.com/dataplex/docs/reference/rpc/google.cloud.dataplex.v1#type_3).

    mode str

    The mode of the field. Possible values include: REQUIRED, if it is a required field. NULLABLE, if it is an optional field. REPEATED, if it is a repeated field.

    name str

    The name of the field.

    profile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoResponse

    Profile information for the corresponding field.

    type str

    The data type retrieved from the schema of the data source. For instance, for a BigQuery native table, it is the BigQuery Table Schema (https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#tablefieldschema). For a Dataplex Entity, it is the Entity Schema (https://cloud.google.com/dataplex/docs/reference/rpc/google.cloud.dataplex.v1#type_3).

    mode String

    The mode of the field. Possible values include: REQUIRED, if it is a required field. NULLABLE, if it is an optional field. REPEATED, if it is a repeated field.

    name String

    The name of the field.

    profile Property Map

    Profile information for the corresponding field.

    type String

    The data type retrieved from the schema of the data source. For instance, for a BigQuery native table, it is the BigQuery Table Schema (https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#tablefieldschema). For a Dataplex Entity, it is the Entity Schema (https://cloud.google.com/dataplex/docs/reference/rpc/google.cloud.dataplex.v1#type_3).

    GoogleCloudDataplexV1DataProfileResultProfileResponse, GoogleCloudDataplexV1DataProfileResultProfileResponseArgs

    Fields []GoogleCloudDataplexV1DataProfileResultProfileFieldResponse

    List of fields with structural and profile information for each field.

    fields List<GoogleCloudDataplexV1DataProfileResultProfileFieldResponse>

    List of fields with structural and profile information for each field.

    fields GoogleCloudDataplexV1DataProfileResultProfileFieldResponse[]

    List of fields with structural and profile information for each field.

    fields Sequence[GoogleCloudDataplexV1DataProfileResultProfileFieldResponse]

    List of fields with structural and profile information for each field.

    fields List<Property Map>

    List of fields with structural and profile information for each field.

    GoogleCloudDataplexV1DataProfileResultResponse, GoogleCloudDataplexV1DataProfileResultResponseArgs

    Profile GoogleCloudDataplexV1DataProfileResultProfileResponse

    The profile information per field.

    RowCount string

    The count of rows scanned.

    ScannedData GoogleCloudDataplexV1ScannedDataResponse

    The data scanned for this result.

    profile GoogleCloudDataplexV1DataProfileResultProfileResponse

    The profile information per field.

    rowCount String

    The count of rows scanned.

    scannedData GoogleCloudDataplexV1ScannedDataResponse

    The data scanned for this result.

    profile GoogleCloudDataplexV1DataProfileResultProfileResponse

    The profile information per field.

    rowCount string

    The count of rows scanned.

    scannedData GoogleCloudDataplexV1ScannedDataResponse

    The data scanned for this result.

    profile GoogleCloudDataplexV1DataProfileResultProfileResponse

    The profile information per field.

    row_count str

    The count of rows scanned.

    scanned_data GoogleCloudDataplexV1ScannedDataResponse

    The data scanned for this result.

    profile Property Map

    The profile information per field.

    rowCount String

    The count of rows scanned.

    scannedData Property Map

    The data scanned for this result.

    GoogleCloudDataplexV1DataProfileSpec, GoogleCloudDataplexV1DataProfileSpecArgs

    RowFilter string

    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10

    SamplingPercent double

    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.

    RowFilter string

    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10

    SamplingPercent float64

    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.

    rowFilter String

    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10

    samplingPercent Double

    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.

    rowFilter string

    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10

    samplingPercent number

    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.

    row_filter str

    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10

    sampling_percent float

    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.

    rowFilter String

    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10

    samplingPercent Number

    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
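
    Example (TypeScript). A sketch of a dataProfileSpec using the two fields above; the filter expression and sampling value are illustrative only.

    // Passed as the dataProfileSpec input of a DataScan.
    const dataProfileSpec = {
        // BigQuery standard SQL WHERE clause: only profile recent rows.
        rowFilter: "event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)",
        // Profile a 10% sample of the filtered rows; omit (or set 0/100) to scan everything.
        samplingPercent: 10,
    };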

    GoogleCloudDataplexV1DataProfileSpecResponse, GoogleCloudDataplexV1DataProfileSpecResponseArgs

    RowFilter string

    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10

    SamplingPercent double

    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.

    RowFilter string

    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10

    SamplingPercent float64

    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.

    rowFilter String

    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10

    samplingPercent Double

    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.

    rowFilter string

    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10

    samplingPercent number

    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.

    row_filter str

    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10

    sampling_percent float

    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.

    rowFilter String

    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10

    samplingPercent Number

    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.

    GoogleCloudDataplexV1DataQualityDimensionResultResponse, GoogleCloudDataplexV1DataQualityDimensionResultResponseArgs

    Passed bool

    Whether the dimension passed or failed.

    Passed bool

    Whether the dimension passed or failed.

    passed Boolean

    Whether the dimension passed or failed.

    passed boolean

    Whether the dimension passed or failed.

    passed bool

    Whether the dimension passed or failed.

    passed Boolean

    Whether the dimension passed or failed.

    GoogleCloudDataplexV1DataQualityResultResponse, GoogleCloudDataplexV1DataQualityResultResponseArgs

    Dimensions List<Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityDimensionResultResponse>

    A list of results at the dimension level.

    Passed bool

    Overall data quality result -- true if all rules passed.

    RowCount string

    The count of rows processed.

    Rules List<Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleResultResponse>

    A list of all the rules in a job, and their results.

    ScannedData Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1ScannedDataResponse

    The data scanned for this result.

    Dimensions []GoogleCloudDataplexV1DataQualityDimensionResultResponse

    A list of results at the dimension level.

    Passed bool

    Overall data quality result -- true if all rules passed.

    RowCount string

    The count of rows processed.

    Rules []GoogleCloudDataplexV1DataQualityRuleResultResponse

    A list of all the rules in a job, and their results.

    ScannedData GoogleCloudDataplexV1ScannedDataResponse

    The data scanned for this result.

    dimensions List<GoogleCloudDataplexV1DataQualityDimensionResultResponse>

    A list of results at the dimension level.

    passed Boolean

    Overall data quality result -- true if all rules passed.

    rowCount String

    The count of rows processed.

    rules List<GoogleCloudDataplexV1DataQualityRuleResultResponse>

    A list of all the rules in a job, and their results.

    scannedData GoogleCloudDataplexV1ScannedDataResponse

    The data scanned for this result.

    dimensions GoogleCloudDataplexV1DataQualityDimensionResultResponse[]

    A list of results at the dimension level.

    passed boolean

    Overall data quality result -- true if all rules passed.

    rowCount string

    The count of rows processed.

    rules GoogleCloudDataplexV1DataQualityRuleResultResponse[]

    A list of all the rules in a job, and their results.

    scannedData GoogleCloudDataplexV1ScannedDataResponse

    The data scanned for this result.

    dimensions Sequence[GoogleCloudDataplexV1DataQualityDimensionResultResponse]

    A list of results at the dimension level.

    passed bool

    Overall data quality result -- true if all rules passed.

    row_count str

    The count of rows processed.

    rules Sequence[GoogleCloudDataplexV1DataQualityRuleResultResponse]

    A list of all the rules in a job, and their results.

    scanned_data GoogleCloudDataplexV1ScannedDataResponse

    The data scanned for this result.

    dimensions List<Property Map>

    A list of results at the dimension level.

    passed Boolean

    Overall data quality result -- true if all rules passed.

    rowCount String

    The count of rows processed.

    rules List<Property Map>

    A list of all the rules in a job, and their results.

    scannedData Property Map

    The data scanned for this result.

    GoogleCloudDataplexV1DataQualityRule, GoogleCloudDataplexV1DataQualityRuleArgs

    Dimension string

    The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"

    Column string

    Optional. The unnested column which this rule is evaluated against.

    IgnoreNull bool

    Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. Only applicable to ColumnMap rules.

    NonNullExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleNonNullExpectation

    ColumnMap rule which evaluates whether each column value is null.

    RangeExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleRangeExpectation

    ColumnMap rule which evaluates whether each column value lies between a specified range.

    RegexExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleRegexExpectation

    ColumnMap rule which evaluates whether each column value matches a specified regex.

    RowConditionExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleRowConditionExpectation

    Table rule which evaluates whether each row passes the specified condition.

    SetExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleSetExpectation

    ColumnMap rule which evaluates whether each column value is contained by a specified set.

    StatisticRangeExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectation

    ColumnAggregate rule which evaluates whether the column aggregate statistic lies between a specified range.

    TableConditionExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleTableConditionExpectation

    Table rule which evaluates whether the provided expression is true.

    Threshold double

    Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. A value of 0 indicates the default (i.e. 1.0).

    UniquenessExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleUniquenessExpectation

    ColumnAggregate rule which evaluates whether the column has duplicates.

    Dimension string

    The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"

    Column string

    Optional. The unnested column which this rule is evaluated against.

    IgnoreNull bool

    Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. Only applicable to ColumnMap rules.

    NonNullExpectation GoogleCloudDataplexV1DataQualityRuleNonNullExpectation

    ColumnMap rule which evaluates whether each column value is null.

    RangeExpectation GoogleCloudDataplexV1DataQualityRuleRangeExpectation

    ColumnMap rule which evaluates whether each column value lies between a specified range.

    RegexExpectation GoogleCloudDataplexV1DataQualityRuleRegexExpectation

    ColumnMap rule which evaluates whether each column value matches a specified regex.

    RowConditionExpectation GoogleCloudDataplexV1DataQualityRuleRowConditionExpectation

    Table rule which evaluates whether each row passes the specified condition.

    SetExpectation GoogleCloudDataplexV1DataQualityRuleSetExpectation

    ColumnMap rule which evaluates whether each column value is contained by a specified set.

    StatisticRangeExpectation GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectation

    ColumnAggregate rule which evaluates whether the column aggregate statistic lies between a specified range.

    TableConditionExpectation GoogleCloudDataplexV1DataQualityRuleTableConditionExpectation

    Table rule which evaluates whether the provided expression is true.

    Threshold float64

    Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. A value of 0 indicates the default (i.e. 1.0).

    UniquenessExpectation GoogleCloudDataplexV1DataQualityRuleUniquenessExpectation

    ColumnAggregate rule which evaluates whether the column has duplicates.

    dimension String

    The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"

    column String

    Optional. The unnested column which this rule is evaluated against.

    ignoreNull Boolean

    Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. Only applicable to ColumnMap rules.

    nonNullExpectation GoogleCloudDataplexV1DataQualityRuleNonNullExpectation

    ColumnMap rule which evaluates whether each column value is null.

    rangeExpectation GoogleCloudDataplexV1DataQualityRuleRangeExpectation

    ColumnMap rule which evaluates whether each column value lies between a specified range.

    regexExpectation GoogleCloudDataplexV1DataQualityRuleRegexExpectation

    ColumnMap rule which evaluates whether each column value matches a specified regex.

    rowConditionExpectation GoogleCloudDataplexV1DataQualityRuleRowConditionExpectation

    Table rule which evaluates whether each row passes the specified condition.

    setExpectation GoogleCloudDataplexV1DataQualityRuleSetExpectation

    ColumnMap rule which evaluates whether each column value is contained by a specified set.

    statisticRangeExpectation GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectation

    ColumnAggregate rule which evaluates whether the column aggregate statistic lies between a specified range.

    tableConditionExpectation GoogleCloudDataplexV1DataQualityRuleTableConditionExpectation

    Table rule which evaluates whether the provided expression is true.

    threshold Double

    Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. A value of 0 indicates the default (i.e. 1.0).

    uniquenessExpectation GoogleCloudDataplexV1DataQualityRuleUniquenessExpectation

    ColumnAggregate rule which evaluates whether the column has duplicates.

    dimension string

    The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"

    column string

    Optional. The unnested column which this rule is evaluated against.

    ignoreNull boolean

    Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. Only applicable to ColumnMap rules.

    nonNullExpectation GoogleCloudDataplexV1DataQualityRuleNonNullExpectation

    ColumnMap rule which evaluates whether each column value is null.

    rangeExpectation GoogleCloudDataplexV1DataQualityRuleRangeExpectation

    ColumnMap rule which evaluates whether each column value lies between a specified range.

    regexExpectation GoogleCloudDataplexV1DataQualityRuleRegexExpectation

    ColumnMap rule which evaluates whether each column value matches a specified regex.

    rowConditionExpectation GoogleCloudDataplexV1DataQualityRuleRowConditionExpectation

    Table rule which evaluates whether each row passes the specified condition.

    setExpectation GoogleCloudDataplexV1DataQualityRuleSetExpectation

    ColumnMap rule which evaluates whether each column value is contained by a specified set.

    statisticRangeExpectation GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectation

    ColumnAggregate rule which evaluates whether the column aggregate statistic lies between a specified range.

    tableConditionExpectation GoogleCloudDataplexV1DataQualityRuleTableConditionExpectation

    Table rule which evaluates whether the provided expression is true.

    threshold number

    Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. A value of 0 indicates the default (i.e. 1.0).

    uniquenessExpectation GoogleCloudDataplexV1DataQualityRuleUniquenessExpectation

    ColumnAggregate rule which evaluates whether the column has duplicates.

    dimension str

    The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"

    column str

    Optional. The unnested column which this rule is evaluated against.

    ignore_null bool

    Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. Only applicable to ColumnMap rules.

    non_null_expectation GoogleCloudDataplexV1DataQualityRuleNonNullExpectation

    ColumnMap rule which evaluates whether each column value is null.

    range_expectation GoogleCloudDataplexV1DataQualityRuleRangeExpectation

    ColumnMap rule which evaluates whether each column value lies between a specified range.

    regex_expectation GoogleCloudDataplexV1DataQualityRuleRegexExpectation

    ColumnMap rule which evaluates whether each column value matches a specified regex.

    row_condition_expectation GoogleCloudDataplexV1DataQualityRuleRowConditionExpectation

    Table rule which evaluates whether each row passes the specified condition.

    set_expectation GoogleCloudDataplexV1DataQualityRuleSetExpectation

    ColumnMap rule which evaluates whether each column value is contained by a specified set.

    statistic_range_expectation GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectation

    ColumnAggregate rule which evaluates whether the column aggregate statistic lies between a specified range.

    table_condition_expectation GoogleCloudDataplexV1DataQualityRuleTableConditionExpectation

    Table rule which evaluates whether the provided expression is true.

    threshold float

    Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. A value of 0 indicates the default (i.e. 1.0).

    uniqueness_expectation GoogleCloudDataplexV1DataQualityRuleUniquenessExpectation

    ColumnAggregate rule which evaluates whether the column has duplicates.

    dimension String

    The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"

    column String

    Optional. The unnested column which this rule is evaluated against.

    ignoreNull Boolean

    Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. Only applicable to ColumnMap rules.

    nonNullExpectation Property Map

    ColumnMap rule which evaluates whether each column value is null.

    rangeExpectation Property Map

    ColumnMap rule which evaluates whether each column value lies between a specified range.

    regexExpectation Property Map

    ColumnMap rule which evaluates whether each column value matches a specified regex.

    rowConditionExpectation Property Map

    Table rule which evaluates whether each row passes the specified condition.

    setExpectation Property Map

    ColumnMap rule which evaluates whether each column value is contained by a specified set.

    statisticRangeExpectation Property Map

    ColumnAggregate rule which evaluates whether the column aggregate statistic lies between a specified range.

    tableConditionExpectation Property Map

    Table rule which evaluates whether the provided expression is true.

    threshold Number

    Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. A value of 0 indicates the default (i.e. 1.0).

    uniquenessExpectation Property Map

    ColumnAggregate rule which evaluates whether the column has duplicates.
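
    A rule combines exactly one expectation with the shared fields above. A minimal TypeScript sketch of one rules entry follows; the column name and numbers are illustrative placeholders, not values taken from this page:

    // One entry for dataQualitySpec.rules: 'amount' must be non-null in at least 99% of rows.
    const amountNotNullRule = {
        dimension: "COMPLETENESS",   // one of the supported dimension strings listed above
        column: "amount",            // hypothetical column name
        nonNullExpectation: {},      // this expectation carries no parameters of its own
        threshold: 0.99,             // passing_rows / total_rows must be at least 0.99
    };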

    GoogleCloudDataplexV1DataQualityRuleRangeExpectation, GoogleCloudDataplexV1DataQualityRuleRangeExpectationArgs

    MaxValue string

    Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.

    MinValue string

    Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.

    StrictMaxEnabled bool

    Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed.Only relevant if a max_value has been defined. Default = false.

    StrictMinEnabled bool

    Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed.Only relevant if a min_value has been defined. Default = false.

    MaxValue string

    Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.

    MinValue string

    Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.

    StrictMaxEnabled bool

    Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed.Only relevant if a max_value has been defined. Default = false.

    StrictMinEnabled bool

    Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed.Only relevant if a min_value has been defined. Default = false.

    maxValue String

    Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.

    minValue String

    Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.

    strictMaxEnabled Boolean

    Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed.Only relevant if a max_value has been defined. Default = false.

    strictMinEnabled Boolean

    Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed.Only relevant if a min_value has been defined. Default = false.

    maxValue string

    Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.

    minValue string

    Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.

    strictMaxEnabled boolean

    Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed.Only relevant if a max_value has been defined. Default = false.

    strictMinEnabled boolean

    Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed.Only relevant if a min_value has been defined. Default = false.

    max_value str

    Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.

    min_value str

    Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.

    strict_max_enabled bool

    Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed.Only relevant if a max_value has been defined. Default = false.

    strict_min_enabled bool

    Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed.Only relevant if a min_value has been defined. Default = false.

    maxValue String

    Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.

    minValue String

    Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.

    strictMaxEnabled Boolean

    Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed.Only relevant if a max_value has been defined. Default = false.

    strictMinEnabled Boolean

    Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed.Only relevant if a min_value has been defined. Default = false.
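
    A minimal sketch of a range expectation in TypeScript, assuming a hypothetical discount_pct column; note that min_value and max_value are strings even when the column is numeric:

    // Values must fall in [0, 100]; equality is allowed because the strict bounds stay false (the default).
    const discountRangeRule = {
        dimension: "VALIDITY",
        column: "discount_pct",      // hypothetical column
        ignoreNull: true,            // null rows count as passing for this ColumnMap rule
        rangeExpectation: {
            minValue: "0",
            maxValue: "100",
            strictMinEnabled: false,
            strictMaxEnabled: false,
        },
    };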

    GoogleCloudDataplexV1DataQualityRuleRangeExpectationResponse, GoogleCloudDataplexV1DataQualityRuleRangeExpectationResponseArgs

    MaxValue string

    Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.

    MinValue string

    Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.

    StrictMaxEnabled bool

    Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed.Only relevant if a max_value has been defined. Default = false.

    StrictMinEnabled bool

    Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed.Only relevant if a min_value has been defined. Default = false.

    MaxValue string

    Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.

    MinValue string

    Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.

    StrictMaxEnabled bool

    Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed.Only relevant if a max_value has been defined. Default = false.

    StrictMinEnabled bool

    Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed.Only relevant if a min_value has been defined. Default = false.

    maxValue String

    Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.

    minValue String

    Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.

    strictMaxEnabled Boolean

    Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed.Only relevant if a max_value has been defined. Default = false.

    strictMinEnabled Boolean

    Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed.Only relevant if a min_value has been defined. Default = false.

    maxValue string

    Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.

    minValue string

    Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.

    strictMaxEnabled boolean

    Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed.Only relevant if a max_value has been defined. Default = false.

    strictMinEnabled boolean

    Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed.Only relevant if a min_value has been defined. Default = false.

    max_value str

    Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.

    min_value str

    Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.

    strict_max_enabled bool

    Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed.Only relevant if a max_value has been defined. Default = false.

    strict_min_enabled bool

    Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed.Only relevant if a min_value has been defined. Default = false.

    maxValue String

    Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.

    minValue String

    Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.

    strictMaxEnabled Boolean

    Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed.Only relevant if a max_value has been defined. Default = false.

    strictMinEnabled Boolean

    Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed.Only relevant if a min_value has been defined. Default = false.

    GoogleCloudDataplexV1DataQualityRuleRegexExpectation, GoogleCloudDataplexV1DataQualityRuleRegexExpectationArgs

    Regex string

    A regular expression the column value is expected to match.

    Regex string

    A regular expression the column value is expected to match.

    regex String

    A regular expression the column value is expected to match.

    regex string

    A regular expression the column value is expected to match.

    regex str

    A regular expression the column value is expected to match.

    regex String

    A regular expression the column value is expected to match.
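
    A minimal sketch of a regex expectation, assuming a hypothetical email column; the pattern is illustrative only:

    // Each non-null 'email' value must match the pattern.
    const emailFormatRule = {
        dimension: "VALIDITY",
        column: "email",             // hypothetical column
        ignoreNull: true,
        regexExpectation: { regex: "^[^@\\s]+@[^@\\s]+\\.[^@\\s]+$" },
    };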

    GoogleCloudDataplexV1DataQualityRuleRegexExpectationResponse, GoogleCloudDataplexV1DataQualityRuleRegexExpectationResponseArgs

    Regex string

    A regular expression the column value is expected to match.

    Regex string

    A regular expression the column value is expected to match.

    regex String

    A regular expression the column value is expected to match.

    regex string

    A regular expression the column value is expected to match.

    regex str

    A regular expression the column value is expected to match.

    regex String

    A regular expression the column value is expected to match.

    GoogleCloudDataplexV1DataQualityRuleResponse, GoogleCloudDataplexV1DataQualityRuleResponseArgs

    Column string

    Optional. The unnested column which this rule is evaluated against.

    Dimension string

    The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"

    IgnoreNull bool

    Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. Only applicable to ColumnMap rules.

    NonNullExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleNonNullExpectationResponse

    ColumnMap rule which evaluates whether each column value is null.

    RangeExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleRangeExpectationResponse

    ColumnMap rule which evaluates whether each column value lies between a specified range.

    RegexExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleRegexExpectationResponse

    ColumnMap rule which evaluates whether each column value matches a specified regex.

    RowConditionExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationResponse

    Table rule which evaluates whether each row passes the specified condition.

    SetExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleSetExpectationResponse

    ColumnMap rule which evaluates whether each column value is contained by a specified set.

    StatisticRangeExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationResponse

    ColumnAggregate rule which evaluates whether the column aggregate statistic lies between a specified range.

    TableConditionExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationResponse

    Table rule which evaluates whether the provided expression is true.

    Threshold double

    Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. A value of 0 indicates the default (i.e. 1.0).

    UniquenessExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleUniquenessExpectationResponse

    ColumnAggregate rule which evaluates whether the column has duplicates.

    Column string

    Optional. The unnested column which this rule is evaluated against.

    Dimension string

    The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"

    IgnoreNull bool

    Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. Only applicable to ColumnMap rules.

    NonNullExpectation GoogleCloudDataplexV1DataQualityRuleNonNullExpectationResponse

    ColumnMap rule which evaluates whether each column value is null.

    RangeExpectation GoogleCloudDataplexV1DataQualityRuleRangeExpectationResponse

    ColumnMap rule which evaluates whether each column value lies between a specified range.

    RegexExpectation GoogleCloudDataplexV1DataQualityRuleRegexExpectationResponse

    ColumnMap rule which evaluates whether each column value matches a specified regex.

    RowConditionExpectation GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationResponse

    Table rule which evaluates whether each row passes the specified condition.

    SetExpectation GoogleCloudDataplexV1DataQualityRuleSetExpectationResponse

    ColumnMap rule which evaluates whether each column value is contained by a specified set.

    StatisticRangeExpectation GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationResponse

    ColumnAggregate rule which evaluates whether the column aggregate statistic lies between a specified range.

    TableConditionExpectation GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationResponse

    Table rule which evaluates whether the provided expression is true.

    Threshold float64

    Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. A value of 0 indicates the default (i.e. 1.0).

    UniquenessExpectation GoogleCloudDataplexV1DataQualityRuleUniquenessExpectationResponse

    ColumnAggregate rule which evaluates whether the column has duplicates.

    column String

    Optional. The unnested column which this rule is evaluated against.

    dimension String

    The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"

    ignoreNull Boolean

    Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. Only applicable to ColumnMap rules.

    nonNullExpectation GoogleCloudDataplexV1DataQualityRuleNonNullExpectationResponse

    ColumnMap rule which evaluates whether each column value is null.

    rangeExpectation GoogleCloudDataplexV1DataQualityRuleRangeExpectationResponse

    ColumnMap rule which evaluates whether each column value lies between a specified range.

    regexExpectation GoogleCloudDataplexV1DataQualityRuleRegexExpectationResponse

    ColumnMap rule which evaluates whether each column value matches a specified regex.

    rowConditionExpectation GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationResponse

    Table rule which evaluates whether each row passes the specified condition.

    setExpectation GoogleCloudDataplexV1DataQualityRuleSetExpectationResponse

    ColumnMap rule which evaluates whether each column value is contained by a specified set.

    statisticRangeExpectation GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationResponse

    ColumnAggregate rule which evaluates whether the column aggregate statistic lies between a specified range.

    tableConditionExpectation GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationResponse

    Table rule which evaluates whether the provided expression is true.

    threshold Double

    Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. A value of 0 indicates the default (i.e. 1.0).

    uniquenessExpectation GoogleCloudDataplexV1DataQualityRuleUniquenessExpectationResponse

    ColumnAggregate rule which evaluates whether the column has duplicates.

    column string

    Optional. The unnested column which this rule is evaluated against.

    dimension string

    The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"

    ignoreNull boolean

    Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. Only applicable to ColumnMap rules.

    nonNullExpectation GoogleCloudDataplexV1DataQualityRuleNonNullExpectationResponse

    ColumnMap rule which evaluates whether each column value is null.

    rangeExpectation GoogleCloudDataplexV1DataQualityRuleRangeExpectationResponse

    ColumnMap rule which evaluates whether each column value lies between a specified range.

    regexExpectation GoogleCloudDataplexV1DataQualityRuleRegexExpectationResponse

    ColumnMap rule which evaluates whether each column value matches a specified regex.

    rowConditionExpectation GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationResponse

    Table rule which evaluates whether each row passes the specified condition.

    setExpectation GoogleCloudDataplexV1DataQualityRuleSetExpectationResponse

    ColumnMap rule which evaluates whether each column value is contained by a specified set.

    statisticRangeExpectation GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationResponse

    ColumnAggregate rule which evaluates whether the column aggregate statistic lies between a specified range.

    tableConditionExpectation GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationResponse

    Table rule which evaluates whether the provided expression is true.

    threshold number

    Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. A value of 0 indicates the default (i.e. 1.0).

    uniquenessExpectation GoogleCloudDataplexV1DataQualityRuleUniquenessExpectationResponse

    ColumnAggregate rule which evaluates whether the column has duplicates.

    column str

    Optional. The unnested column which this rule is evaluated against.

    dimension str

    The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"

    ignore_null bool

    Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. Only applicable to ColumnMap rules.

    non_null_expectation GoogleCloudDataplexV1DataQualityRuleNonNullExpectationResponse

    ColumnMap rule which evaluates whether each column value is null.

    range_expectation GoogleCloudDataplexV1DataQualityRuleRangeExpectationResponse

    ColumnMap rule which evaluates whether each column value lies between a specified range.

    regex_expectation GoogleCloudDataplexV1DataQualityRuleRegexExpectationResponse

    ColumnMap rule which evaluates whether each column value matches a specified regex.

    row_condition_expectation GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationResponse

    Table rule which evaluates whether each row passes the specified condition.

    set_expectation GoogleCloudDataplexV1DataQualityRuleSetExpectationResponse

    ColumnMap rule which evaluates whether each column value is contained by a specified set.

    statistic_range_expectation GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationResponse

    ColumnAggregate rule which evaluates whether the column aggregate statistic lies between a specified range.

    table_condition_expectation GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationResponse

    Table rule which evaluates whether the provided expression is true.

    threshold float

    Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. A value of 0 indicates the default (i.e. 1.0).

    uniqueness_expectation GoogleCloudDataplexV1DataQualityRuleUniquenessExpectationResponse

    ColumnAggregate rule which evaluates whether the column has duplicates.

    column String

    Optional. The unnested column which this rule is evaluated against.

    dimension String

    The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"

    ignoreNull Boolean

    Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. Only applicable to ColumnMap rules.

    nonNullExpectation Property Map

    ColumnMap rule which evaluates whether each column value is null.

    rangeExpectation Property Map

    ColumnMap rule which evaluates whether each column value lies between a specified range.

    regexExpectation Property Map

    ColumnMap rule which evaluates whether each column value matches a specified regex.

    rowConditionExpectation Property Map

    Table rule which evaluates whether each row passes the specified condition.

    setExpectation Property Map

    ColumnMap rule which evaluates whether each column value is contained by a specified set.

    statisticRangeExpectation Property Map

    ColumnAggregate rule which evaluates whether the column aggregate statistic lies between a specified range.

    tableConditionExpectation Property Map

    Table rule which evaluates whether the provided expression is true.

    threshold Number

    Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. A value of 0 indicates the default (i.e. 1.0).

    uniquenessExpectation Property Map

    ColumnAggregate rule which evaluates whether the column has duplicates.

    GoogleCloudDataplexV1DataQualityRuleResultResponse, GoogleCloudDataplexV1DataQualityRuleResultResponseArgs

    EvaluatedCount string

    The number of rows a rule was evaluated against. This field is only valid for ColumnMap type rules. Evaluated count can be configured either to include all rows (default), with null rows automatically failing rule evaluation, or to exclude null rows from the evaluated_count by setting ignore_nulls = true.

    FailingRowsQuery string

    The query to find rows that did not pass this rule. Only applies to ColumnMap and RowCondition rules.

    NullCount string

    The number of rows with null values in the specified column.

    PassRatio double

    The ratio of passed_count / evaluated_count. This field is only valid for ColumnMap type rules.

    Passed bool

    Whether the rule passed or failed.

    PassedCount string

    The number of rows which passed a rule evaluation. This field is only valid for ColumnMap type rules.

    Rule Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleResponse

    The rule specified in the DataQualitySpec, as is.

    EvaluatedCount string

    The number of rows a rule was evaluated against. This field is only valid for ColumnMap type rules. Evaluated count can be configured either to include all rows (default), with null rows automatically failing rule evaluation, or to exclude null rows from the evaluated_count by setting ignore_nulls = true.

    FailingRowsQuery string

    The query to find rows that did not pass this rule. Only applies to ColumnMap and RowCondition rules.

    NullCount string

    The number of rows with null values in the specified column.

    PassRatio float64

    The ratio of passed_count / evaluated_count. This field is only valid for ColumnMap type rules.

    Passed bool

    Whether the rule passed or failed.

    PassedCount string

    The number of rows which passed a rule evaluation. This field is only valid for ColumnMap type rules.

    Rule GoogleCloudDataplexV1DataQualityRuleResponse

    The rule specified in the DataQualitySpec, as is.

    evaluatedCount String

    The number of rows a rule was evaluated against. This field is only valid for ColumnMap type rules. Evaluated count can be configured either to include all rows (default), with null rows automatically failing rule evaluation, or to exclude null rows from the evaluated_count by setting ignore_nulls = true.

    failingRowsQuery String

    The query to find rows that did not pass this rule. Only applies to ColumnMap and RowCondition rules.

    nullCount String

    The number of rows with null values in the specified column.

    passRatio Double

    The ratio of passed_count / evaluated_count. This field is only valid for ColumnMap type rules.

    passed Boolean

    Whether the rule passed or failed.

    passedCount String

    The number of rows which passed a rule evaluation. This field is only valid for ColumnMap type rules.

    rule GoogleCloudDataplexV1DataQualityRuleResponse

    The rule specified in the DataQualitySpec, as is.

    evaluatedCount string

    The number of rows a rule was evaluated against. This field is only valid for ColumnMap type rules. Evaluated count can be configured either to include all rows (default), with null rows automatically failing rule evaluation, or to exclude null rows from the evaluated_count by setting ignore_nulls = true.

    failingRowsQuery string

    The query to find rows that did not pass this rule. Only applies to ColumnMap and RowCondition rules.

    nullCount string

    The number of rows with null values in the specified column.

    passRatio number

    The ratio of passed_count / evaluated_count. This field is only valid for ColumnMap type rules.

    passed boolean

    Whether the rule passed or failed.

    passedCount string

    The number of rows which passed a rule evaluation. This field is only valid for ColumnMap type rules.

    rule GoogleCloudDataplexV1DataQualityRuleResponse

    The rule specified in the DataQualitySpec, as is.

    evaluated_count str

    The number of rows a rule was evaluated against. This field is only valid for ColumnMap type rules. Evaluated count can be configured either to include all rows (default), with null rows automatically failing rule evaluation, or to exclude null rows from the evaluated_count by setting ignore_nulls = true.

    failing_rows_query str

    The query to find rows that did not pass this rule. Only applies to ColumnMap and RowCondition rules.

    null_count str

    The number of rows with null values in the specified column.

    pass_ratio float

    The ratio of passed_count / evaluated_count. This field is only valid for ColumnMap type rules.

    passed bool

    Whether the rule passed or failed.

    passed_count str

    The number of rows which passed a rule evaluation. This field is only valid for ColumnMap type rules.

    rule GoogleCloudDataplexV1DataQualityRuleResponse

    The rule specified in the DataQualitySpec, as is.

    evaluatedCount String

    The number of rows a rule was evaluated against. This field is only valid for ColumnMap type rules. Evaluated count can be configured either to include all rows (default), with null rows automatically failing rule evaluation, or to exclude null rows from the evaluated_count by setting ignore_nulls = true.

    failingRowsQuery String

    The query to find rows that did not pass this rule. Only applies to ColumnMap and RowCondition rules.

    nullCount String

    The number of rows with null values in the specified column.

    passRatio Number

    The ratio of passed_count / evaluated_count. This field is only valid for ColumnMap type rules.

    passed Boolean

    Whether the rule passed or failed.

    passedCount String

    The number of rows which passed a rule evaluation. This field is only valid for ColumnMap type rules.

    rule Property Map

    The rule specified in the DataQualitySpec, as is.
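
    A small TypeScript helper sketch for these per-rule results: it collects failingRowsQuery from every rule that did not pass, so the failing rows can be inspected in BigQuery. The inline structural type names only the fields used here; it is not the SDK's generated response type:

    import * as pulumi from "@pulumi/pulumi";

    // Given a data quality result output (for example scan.dataQualityResult from the
    // sketch near the top of this section), return the debug queries of all failed rules.
    function failingRowQueries(
        result: pulumi.Output<{ rules?: { passed?: boolean; failingRowsQuery?: string }[] }>,
    ): pulumi.Output<string[]> {
        return result.apply(r =>
            (r.rules ?? [])
                .filter(rule => rule.passed === false && !!rule.failingRowsQuery)
                .map(rule => rule.failingRowsQuery as string),
        );
    }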

    GoogleCloudDataplexV1DataQualityRuleRowConditionExpectation, GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationArgs

    SqlExpression string

    The SQL expression.

    SqlExpression string

    The SQL expression.

    sqlExpression String

    The SQL expression.

    sqlExpression string

    The SQL expression.

    sql_expression str

    The SQL expression.

    sqlExpression String

    The SQL expression.
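
    A minimal sketch of a row condition rule; the boolean SQL expression is evaluated once per row, and the column names here are placeholders:

    // Each row must satisfy the condition.
    const shippedAfterOrderedRule = {
        dimension: "CONSISTENCY",
        rowConditionExpectation: { sqlExpression: "shipped_date >= ordered_date" },
    };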

    GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationResponse, GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationResponseArgs

    SqlExpression string

    The SQL expression.

    SqlExpression string

    The SQL expression.

    sqlExpression String

    The SQL expression.

    sqlExpression string

    The SQL expression.

    sql_expression str

    The SQL expression.

    sqlExpression String

    The SQL expression.

    GoogleCloudDataplexV1DataQualityRuleSetExpectation, GoogleCloudDataplexV1DataQualityRuleSetExpectationArgs

    Values List<string>

    Expected values for the column value.

    Values []string

    Expected values for the column value.

    values List<String>

    Expected values for the column value.

    values string[]

    Expected values for the column value.

    values Sequence[str]

    Expected values for the column value.

    values List<String>

    Expected values for the column value.
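
    A minimal sketch of a set expectation, assuming a hypothetical status column:

    // Every non-null 'status' value must be one of the listed strings.
    const statusInSetRule = {
        dimension: "VALIDITY",
        column: "status",            // hypothetical column
        ignoreNull: true,
        setExpectation: { values: ["NEW", "SHIPPED", "DELIVERED", "CANCELLED"] },
    };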

    GoogleCloudDataplexV1DataQualityRuleSetExpectationResponse, GoogleCloudDataplexV1DataQualityRuleSetExpectationResponseArgs

    Values List<string>

    Expected values for the column value.

    Values []string

    Expected values for the column value.

    values List<String>

    Expected values for the column value.

    values string[]

    Expected values for the column value.

    values Sequence[str]

    Expected values for the column value.

    values List<String>

    Expected values for the column value.

    GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectation, GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationArgs

    MaxValue string

    The maximum column statistic value allowed for a row to pass this validation.At least one of min_value and max_value need to be provided.

    MinValue string

    The minimum column statistic value allowed for a row to pass this validation.At least one of min_value and max_value need to be provided.

    Statistic Pulumi.GoogleNative.Dataplex.V1.GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatistic

    The aggregate metric to evaluate.

    StrictMaxEnabled bool

    Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed.Only relevant if a max_value has been defined. Default = false.

    StrictMinEnabled bool

    Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed.Only relevant if a min_value has been defined. Default = false.

    MaxValue string

    The maximum column statistic value allowed for a row to pass this validation.At least one of min_value and max_value need to be provided.

    MinValue string

    The minimum column statistic value allowed for a row to pass this validation.At least one of min_value and max_value need to be provided.

    Statistic GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatistic

    The aggregate metric to evaluate.

    StrictMaxEnabled bool

    Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed.Only relevant if a max_value has been defined. Default = false.

    StrictMinEnabled bool

    Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed.Only relevant if a min_value has been defined. Default = false.

    maxValue String

    The maximum column statistic value allowed for a row to pass this validation.At least one of min_value and max_value need to be provided.

    minValue String

    The minimum column statistic value allowed for a row to pass this validation.At least one of min_value and max_value need to be provided.

    statistic GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatistic

    The aggregate metric to evaluate.

    strictMaxEnabled Boolean

    Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed.Only relevant if a max_value has been defined. Default = false.

    strictMinEnabled Boolean

    Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed.Only relevant if a min_value has been defined. Default = false.

    maxValue string

    The maximum column statistic value allowed for a row to pass this validation.At least one of min_value and max_value need to be provided.

    minValue string

    The minimum column statistic value allowed for a row to pass this validation.At least one of min_value and max_value need to be provided.

    statistic GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatistic

    The aggregate metric to evaluate.

    strictMaxEnabled boolean

    Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed.Only relevant if a max_value has been defined. Default = false.

    strictMinEnabled boolean

    Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed.Only relevant if a min_value has been defined. Default = false.

    max_value str

    The maximum column statistic value allowed for a row to pass this validation.At least one of min_value and max_value need to be provided.

    min_value str

    The minimum column statistic value allowed for a row to pass this validation.At least one of min_value and max_value need to be provided.

    statistic GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatistic

    The aggregate metric to evaluate.

    strict_max_enabled bool

    Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed.Only relevant if a max_value has been defined. Default = false.

    strict_min_enabled bool

    Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed.Only relevant if a min_value has been defined. Default = false.

    maxValue String

    The maximum column statistic value allowed for a row to pass this validation.At least one of min_value and max_value need to be provided.

    minValue String

    The minimum column statistic value allowed for a row to pass this validation.At least one of min_value and max_value need to be provided.

    statistic "STATISTIC_UNDEFINED" | "MEAN" | "MIN" | "MAX"

    The aggregate metric to evaluate.

    strictMaxEnabled Boolean

    Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed.Only relevant if a max_value has been defined. Default = false.

    strictMinEnabled Boolean

    Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed.Only relevant if a min_value has been defined. Default = false.
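
    A minimal sketch of a statistic range expectation; unlike the per-row range rule above, this one constrains a column aggregate. The statistic field takes one of the enum values listed further below (MEAN, MIN, MAX); the column name and bound are placeholders:

    // The mean of 'latency_ms' must stay strictly below 250 (strict max, no lower bound).
    const meanLatencyRule = {
        dimension: "ACCURACY",
        column: "latency_ms",        // hypothetical column
        statisticRangeExpectation: {
            statistic: "MEAN",
            maxValue: "250",
            strictMaxEnabled: true,
        },
    };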

    GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationResponse, GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationResponseArgs

    MaxValue string

    The maximum column statistic value allowed for a row to pass this validation.At least one of min_value and max_value need to be provided.

    MinValue string

    The minimum column statistic value allowed for a row to pass this validation.At least one of min_value and max_value need to be provided.

    Statistic string

    The aggregate metric to evaluate.

    StrictMaxEnabled bool

    Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed.Only relevant if a max_value has been defined. Default = false.

    StrictMinEnabled bool

    Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed.Only relevant if a min_value has been defined. Default = false.

    MaxValue string

    The maximum column statistic value allowed for a row to pass this validation.At least one of min_value and max_value need to be provided.

    MinValue string

    The minimum column statistic value allowed for a row to pass this validation.At least one of min_value and max_value need to be provided.

    Statistic string

    The aggregate metric to evaluate.

    StrictMaxEnabled bool

    Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed.Only relevant if a max_value has been defined. Default = false.

    StrictMinEnabled bool

    Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed.Only relevant if a min_value has been defined. Default = false.

    maxValue String

    The maximum column statistic value allowed for a row to pass this validation.At least one of min_value and max_value need to be provided.

    minValue String

    The minimum column statistic value allowed for a row to pass this validation.At least one of min_value and max_value need to be provided.

    statistic String

    The aggregate metric to evaluate.

    strictMaxEnabled Boolean

    Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed.Only relevant if a max_value has been defined. Default = false.

    strictMinEnabled Boolean

    Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed.Only relevant if a min_value has been defined. Default = false.

    maxValue string

    The maximum column statistic value allowed for a row to pass this validation.At least one of min_value and max_value need to be provided.

    minValue string

    The minimum column statistic value allowed for a row to pass this validation.At least one of min_value and max_value need to be provided.

    statistic string

    The aggregate metric to evaluate.

    strictMaxEnabled boolean

    Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed.Only relevant if a max_value has been defined. Default = false.

    strictMinEnabled boolean

    Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed.Only relevant if a min_value has been defined. Default = false.

    max_value str

    The maximum column statistic value allowed for a row to pass this validation.At least one of min_value and max_value need to be provided.

    min_value str

    The minimum column statistic value allowed for a row to pass this validation.At least one of min_value and max_value need to be provided.

    statistic str

    The aggregate metric to evaluate.

    strict_max_enabled bool

    Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed.Only relevant if a max_value has been defined. Default = false.

    strict_min_enabled bool

    Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed.Only relevant if a min_value has been defined. Default = false.

    maxValue String

    The maximum column statistic value allowed for a row to pass this validation.At least one of min_value and max_value need to be provided.

    minValue String

    The minimum column statistic value allowed for a row to pass this validation.At least one of min_value and max_value need to be provided.

    statistic String

    The aggregate metric to evaluate.

    strictMaxEnabled Boolean

    Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed.Only relevant if a max_value has been defined. Default = false.

    strictMinEnabled Boolean

    Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed.Only relevant if a min_value has been defined. Default = false.

    GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatistic, GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatisticArgs

    StatisticUndefined
    STATISTIC_UNDEFINED

    Unspecified statistic type

    Mean
    MEAN

    Evaluate the column mean

    Min
    MIN

    Evaluate the column min

    Max
    MAX

    Evaluate the column max

    GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatisticStatisticUndefined
    STATISTIC_UNDEFINED

    Unspecified statistic type

    GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatisticMean
    MEAN

    Evaluate the column mean

    GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatisticMin
    MIN

    Evaluate the column min

    GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatisticMax
    MAX

    Evaluate the column max

    StatisticUndefined
    STATISTIC_UNDEFINED

    Unspecified statistic type

    Mean
    MEAN

    Evaluate the column mean

    Min
    MIN

    Evaluate the column min

    Max
    MAX

    Evaluate the column max

    StatisticUndefined
    STATISTIC_UNDEFINED

    Unspecified statistic type

    Mean
    MEAN

    Evaluate the column mean

    Min
    MIN

    Evaluate the column min

    Max
    MAX

    Evaluate the column max

    STATISTIC_UNDEFINED
    STATISTIC_UNDEFINED

    Unspecified statistic type

    MEAN
    MEAN

    Evaluate the column mean

    MIN
    MIN

    Evaluate the column min

    MAX
    MAX

    Evaluate the column max

    "STATISTIC_UNDEFINED"
    STATISTIC_UNDEFINED

    Unspecified statistic type

    "MEAN"
    MEAN

    Evaluate the column mean

    "MIN"
    MIN

    Evaluate the column min

    "MAX"
    MAX

    Evaluate the column max

    GoogleCloudDataplexV1DataQualityRuleTableConditionExpectation, GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationArgs

    SqlExpression string

    The SQL expression.

    SqlExpression string

    The SQL expression.

    sqlExpression String

    The SQL expression.

    sqlExpression string

    The SQL expression.

    sql_expression str

    The SQL expression.

    sqlExpression String

    The SQL expression.
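
    A minimal sketch of a table condition rule; the expression is evaluated once against the whole table rather than per row, and the SQL here is a placeholder:

    // The table passes only if the aggregate expression evaluates to true.
    const minimumVolumeRule = {
        dimension: "VALIDITY",
        tableConditionExpectation: { sqlExpression: "COUNT(*) > 1000" },
    };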

    GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationResponse, GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationResponseArgs

    SqlExpression string

    The SQL expression.

    SqlExpression string

    The SQL expression.

    sqlExpression String

    The SQL expression.

    sqlExpression string

    The SQL expression.

    sql_expression str

    The SQL expression.

    sqlExpression String

    The SQL expression.

    GoogleCloudDataplexV1DataQualitySpec, GoogleCloudDataplexV1DataQualitySpecArgs

    RowFilter string

    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10

    Rules List<Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRule>

    The list of rules to evaluate against a data source. At least one rule is required.

    SamplingPercent double

    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.

    RowFilter string

    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10

    Rules []GoogleCloudDataplexV1DataQualityRule

    The list of rules to evaluate against a data source. At least one rule is required.

    SamplingPercent float64

    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.

    rowFilter String

    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10

    rules List<GoogleCloudDataplexV1DataQualityRule>

    The list of rules to evaluate against a data source. At least one rule is required.

    samplingPercent Double

    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, or is set to 0 or 100.

    rowFilter string

    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10

    rules GoogleCloudDataplexV1DataQualityRule[]

    The list of rules to evaluate against a data source. At least one rule is required.

    samplingPercent number

    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, or is set to 0 or 100.

    row_filter str

    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10

    rules Sequence[GoogleCloudDataplexV1DataQualityRule]

    The list of rules to evaluate against a data source. At least one rule is required.

    sampling_percent float

    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, or is set to 0 or 100.

    rowFilter String

    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10

    rules List<Property Map>

    The list of rules to evaluate against a data source. At least one rule is required.

    samplingPercent Number

    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, or is set to 0 or 100.
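
    Putting the fields above together, a data quality spec might be sketched as follows in TypeScript (the filter, sampling percentage, and rule are illustrative; at least one rule is required):

    // Hypothetical data quality spec: sample 10% of the rows matching the
    // filter and evaluate a single non-null rule against them.
    const dataQualitySpec = {
        rowFilter: "col1 >= 0 AND col2 < 10",
        samplingPercent: 10,
        rules: [{
            column: "col1",
            dimension: "COMPLETENESS",
            nonNullExpectation: {},
        }],
    };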

    GoogleCloudDataplexV1DataQualitySpecResponse, GoogleCloudDataplexV1DataQualitySpecResponseArgs

    RowFilter string

    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10

    Rules List<Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleResponse>

    The list of rules to evaluate against a data source. At least one rule is required.

    SamplingPercent double

    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, or is set to 0 or 100.

    RowFilter string

    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10

    Rules []GoogleCloudDataplexV1DataQualityRuleResponse

    The list of rules to evaluate against a data source. At least one rule is required.

    SamplingPercent float64

    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, or is set to 0 or 100.

    rowFilter String

    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10

    rules List<GoogleCloudDataplexV1DataQualityRuleResponse>

    The list of rules to evaluate against a data source. At least one rule is required.

    samplingPercent Double

    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, or is set to 0 or 100.

    rowFilter string

    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10

    rules GoogleCloudDataplexV1DataQualityRuleResponse[]

    The list of rules to evaluate against a data source. At least one rule is required.

    samplingPercent number

    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, or is set to 0 or 100.

    row_filter str

    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10

    rules Sequence[GoogleCloudDataplexV1DataQualityRuleResponse]

    The list of rules to evaluate against a data source. At least one rule is required.

    sampling_percent float

    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, or is set to 0 or 100.

    rowFilter String

    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10

    rules List<Property Map>

    The list of rules to evaluate against a data source. At least one rule is required.

    samplingPercent Number

    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, or is set to 0 or 100.

    GoogleCloudDataplexV1DataScanExecutionSpec, GoogleCloudDataplexV1DataScanExecutionSpecArgs

    Field string

    Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.

    Trigger Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1Trigger

    Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.

    Field string

    Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.

    Trigger GoogleCloudDataplexV1Trigger

    Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.

    field String

    Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.

    trigger GoogleCloudDataplexV1Trigger

    Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.

    field string

    Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.

    trigger GoogleCloudDataplexV1Trigger

    Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.

    field str

    Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.

    trigger GoogleCloudDataplexV1Trigger

    Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.

    field String

    Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.

    trigger Property Map

    Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.
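
    A hedged TypeScript sketch of an execution spec that combines an incremental field with a scheduled trigger; the column name and cron expression are assumptions:

    // Hypothetical execution spec: incremental scans keyed on an "event_time"
    // column, triggered daily at 03:00 in the stated cron timezone.
    const executionSpec = {
        field: "event_time",
        trigger: {
            schedule: {
                cron: "CRON_TZ=America/New_York 0 3 * * *",
            },
        },
    };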

    GoogleCloudDataplexV1DataScanExecutionSpecResponse, GoogleCloudDataplexV1DataScanExecutionSpecResponseArgs

    Field string

    Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.

    Trigger Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TriggerResponse

    Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.

    Field string

    Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.

    Trigger GoogleCloudDataplexV1TriggerResponse

    Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.

    field String

    Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.

    trigger GoogleCloudDataplexV1TriggerResponse

    Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.

    field string

    Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.

    trigger GoogleCloudDataplexV1TriggerResponse

    Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.

    field str

    Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.

    trigger GoogleCloudDataplexV1TriggerResponse

    Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.

    field String

    Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.

    trigger Property Map

    Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.

    GoogleCloudDataplexV1DataScanExecutionStatusResponse, GoogleCloudDataplexV1DataScanExecutionStatusResponseArgs

    LatestJobEndTime string

    The time when the latest DataScanJob ended.

    LatestJobStartTime string

    The time when the latest DataScanJob started.

    LatestJobEndTime string

    The time when the latest DataScanJob ended.

    LatestJobStartTime string

    The time when the latest DataScanJob started.

    latestJobEndTime String

    The time when the latest DataScanJob ended.

    latestJobStartTime String

    The time when the latest DataScanJob started.

    latestJobEndTime string

    The time when the latest DataScanJob ended.

    latestJobStartTime string

    The time when the latest DataScanJob started.

    latest_job_end_time str

    The time when the latest DataScanJob ended.

    latest_job_start_time str

    The time when the latest DataScanJob started.

    latestJobEndTime String

    The time when the latest DataScanJob ended.

    latestJobStartTime String

    The time when the latest DataScanJob started.

    GoogleCloudDataplexV1DataSource, GoogleCloudDataplexV1DataSourceArgs

    Entity string

    Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.

    Resource string

    Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be a BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID

    Entity string

    Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.

    Resource string

    Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be a BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID

    entity String

    Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.

    resource String

    Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be a BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID

    entity string

    Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.

    resource string

    Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be a BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID

    entity str

    Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.

    resource str

    Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be a BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID

    entity String

    Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.

    resource String

    Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be a BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
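
    Since the data source is what points a DataScan at a concrete table, the sketch below wires one into the resource itself. It assumes the usual @pulumi/google-native TypeScript module layout (google_native.dataplex.v1.DataScan) and uses placeholder project, dataset, and table IDs; the quality rule and execution spec are illustrative only:

    import * as google_native from "@pulumi/google-native";

    // Hedged end-to-end sketch: the project, dataset, and table IDs are
    // placeholders, and the single rule shown is only an example.
    const scan = new google_native.dataplex.v1.DataScan("example-scan", {
        dataScanId: "example-scan",
        location: "us-central1",
        data: {
            resource: "//bigquery.googleapis.com/projects/my-project/datasets/my_dataset/tables/my_table",
        },
        dataQualitySpec: {
            rules: [{
                column: "col1",
                dimension: "COMPLETENESS",
                nonNullExpectation: {},
            }],
        },
        executionSpec: {
            trigger: { onDemand: {} },
        },
    });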

    GoogleCloudDataplexV1DataSourceResponse, GoogleCloudDataplexV1DataSourceResponseArgs

    Entity string

    Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.

    Resource string

    Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be a BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID

    Entity string

    Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.

    Resource string

    Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be a BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID

    entity String

    Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.

    resource String

    Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be a BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID

    entity string

    Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.

    resource string

    Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be a BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID

    entity str

    Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.

    resource str

    Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be a BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID

    entity String

    Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.

    resource String

    Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be a BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID

    GoogleCloudDataplexV1ScannedDataIncrementalFieldResponse, GoogleCloudDataplexV1ScannedDataIncrementalFieldResponseArgs

    End string

    Value that marks the end of the range.

    Field string

    The field that contains values which monotonically increase over time (e.g. a timestamp column).

    Start string

    Value that marks the start of the range.

    End string

    Value that marks the end of the range.

    Field string

    The field that contains values which monotonically increase over time (e.g. a timestamp column).

    Start string

    Value that marks the start of the range.

    end String

    Value that marks the end of the range.

    field String

    The field that contains values which monotonically increase over time (e.g. a timestamp column).

    start String

    Value that marks the start of the range.

    end string

    Value that marks the end of the range.

    field string

    The field that contains values which monotonically increase over time (e.g. a timestamp column).

    start string

    Value that marks the start of the range.

    end str

    Value that marks the end of the range.

    field str

    The field that contains values which monotonically increase over time (e.g. a timestamp column).

    start str

    Value that marks the start of the range.

    end String

    Value that marks the end of the range.

    field String

    The field that contains values which monotonically increase over time (e.g. a timestamp column).

    start String

    Value that marks the start of the range.

    GoogleCloudDataplexV1ScannedDataResponse, GoogleCloudDataplexV1ScannedDataResponseArgs

    IncrementalField GoogleCloudDataplexV1ScannedDataIncrementalFieldResponse

    The range denoted by values of an incremental field

    incrementalField GoogleCloudDataplexV1ScannedDataIncrementalFieldResponse

    The range denoted by values of an incremental field

    incrementalField GoogleCloudDataplexV1ScannedDataIncrementalFieldResponse

    The range denoted by values of an incremental field

    incremental_field GoogleCloudDataplexV1ScannedDataIncrementalFieldResponse

    The range denoted by values of an incremental field

    incrementalField Property Map

    The range denoted by values of an incremental field

    GoogleCloudDataplexV1Trigger, GoogleCloudDataplexV1TriggerArgs

    OnDemand GoogleCloudDataplexV1TriggerOnDemand

    The scan runs once via RunDataScan API.

    Schedule GoogleCloudDataplexV1TriggerSchedule

    The scan is scheduled to run periodically.

    onDemand GoogleCloudDataplexV1TriggerOnDemand

    The scan runs once via RunDataScan API.

    schedule GoogleCloudDataplexV1TriggerSchedule

    The scan is scheduled to run periodically.

    onDemand GoogleCloudDataplexV1TriggerOnDemand

    The scan runs once via RunDataScan API.

    schedule GoogleCloudDataplexV1TriggerSchedule

    The scan is scheduled to run periodically.

    on_demand GoogleCloudDataplexV1TriggerOnDemand

    The scan runs once via RunDataScan API.

    schedule GoogleCloudDataplexV1TriggerSchedule

    The scan is scheduled to run periodically.

    onDemand Property Map

    The scan runs once via RunDataScan API.

    schedule Property Map

    The scan is scheduled to run periodically.
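
    A trigger carries either an onDemand or a schedule block; two hedged TypeScript sketches (the cron value is illustrative):

    // Hypothetical triggers; set onDemand or schedule, not both.
    const onDemandTrigger = { onDemand: {} };
    const scheduledTrigger = {
        schedule: { cron: "TZ=Etc/UTC 0 */6 * * *" }, // illustrative: every 6 hours
    };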

    GoogleCloudDataplexV1TriggerResponse, GoogleCloudDataplexV1TriggerResponseArgs

    OnDemand GoogleCloudDataplexV1TriggerOnDemandResponse

    The scan runs once via RunDataScan API.

    Schedule GoogleCloudDataplexV1TriggerScheduleResponse

    The scan is scheduled to run periodically.

    onDemand GoogleCloudDataplexV1TriggerOnDemandResponse

    The scan runs once via RunDataScan API.

    schedule GoogleCloudDataplexV1TriggerScheduleResponse

    The scan is scheduled to run periodically.

    onDemand GoogleCloudDataplexV1TriggerOnDemandResponse

    The scan runs once via RunDataScan API.

    schedule GoogleCloudDataplexV1TriggerScheduleResponse

    The scan is scheduled to run periodically.

    on_demand GoogleCloudDataplexV1TriggerOnDemandResponse

    The scan runs once via RunDataScan API.

    schedule GoogleCloudDataplexV1TriggerScheduleResponse

    The scan is scheduled to run periodically.

    onDemand Property Map

    The scan runs once via RunDataScan API.

    schedule Property Map

    The scan is scheduled to run periodically.

    GoogleCloudDataplexV1TriggerSchedule, GoogleCloudDataplexV1TriggerScheduleArgs

    Cron string

    Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone in the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (wikipedia (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List)). For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.

    Cron string

    Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone in the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (wikipedia (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List)). For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.

    cron String

    Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone in the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (wikipedia (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List)). For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.

    cron string

    Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone in the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (wikipedia (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List)). For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.

    cron str

    Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone in the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (wikipedia (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List)). For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.

    cron String

    Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone in the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (wikipedia (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List)). For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
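
    For example, a schedule using the timezone prefix described above might look like this hedged TypeScript sketch:

    // Hypothetical schedule: the CRON_TZ prefix pins the timezone, the rest
    // is a standard five-field cron expression (minute 1 of every hour).
    const schedule = {
        cron: "CRON_TZ=America/New_York 1 * * * *",
    };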

    GoogleCloudDataplexV1TriggerScheduleResponse, GoogleCloudDataplexV1TriggerScheduleResponseArgs

    Cron string

    Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone in the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (wikipedia (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List)). For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.

    Cron string

    Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone in the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (wikipedia (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List)). For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.

    cron String

    Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone in the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (wikipedia (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List)). For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.

    cron string

    Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone in the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (wikipedia (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List)). For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.

    cron str

    Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone in the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (wikipedia (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List)). For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.

    cron String

    Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone in the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (wikipedia (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List)). For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.

    Package Details

    Repository
    Google Cloud Native pulumi/pulumi-google-native
    License
    Apache-2.0