
Google Cloud Native is in preview. Google Cloud Classic is fully supported.

Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi

google-native.dataplex/v1.getDataScan


    Gets a DataScan resource.

    Using getDataScan

    Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.

    function getDataScan(args: GetDataScanArgs, opts?: InvokeOptions): Promise<GetDataScanResult>
    function getDataScanOutput(args: GetDataScanOutputArgs, opts?: InvokeOptions): Output<GetDataScanResult>
    def get_data_scan(data_scan_id: Optional[str] = None,
                      location: Optional[str] = None,
                      project: Optional[str] = None,
                      view: Optional[str] = None,
                      opts: Optional[InvokeOptions] = None) -> GetDataScanResult
    def get_data_scan_output(data_scan_id: Optional[pulumi.Input[str]] = None,
                             location: Optional[pulumi.Input[str]] = None,
                             project: Optional[pulumi.Input[str]] = None,
                             view: Optional[pulumi.Input[str]] = None,
                             opts: Optional[InvokeOptions] = None) -> Output[GetDataScanResult]
    func LookupDataScan(ctx *Context, args *LookupDataScanArgs, opts ...InvokeOption) (*LookupDataScanResult, error)
    func LookupDataScanOutput(ctx *Context, args *LookupDataScanOutputArgs, opts ...InvokeOption) LookupDataScanResultOutput

    > Note: This function is named LookupDataScan in the Go SDK.

    public static class GetDataScan 
    {
        public static Task<GetDataScanResult> InvokeAsync(GetDataScanArgs args, InvokeOptions? opts = null)
        public static Output<GetDataScanResult> Invoke(GetDataScanInvokeArgs args, InvokeOptions? opts = null)
    }
    public static CompletableFuture<GetDataScanResult> getDataScan(GetDataScanArgs args, InvokeOptions options)
    // Output-based functions aren't available in Java yet
    
    fn::invoke:
      function: google-native:dataplex/v1:getDataScan
      arguments:
        # arguments dictionary
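
    As a usage illustration, here is a minimal TypeScript sketch of both invocation forms. The project, location, and scan ID values are placeholders, not real resources:

    import * as google_native from "@pulumi/google-native";

    // Direct form: plain arguments, Promise-wrapped result.
    const scanPromise = google_native.dataplex.v1.getDataScan({
        dataScanId: "my-scan",      // placeholder DataScan ID
        location: "us-central1",    // placeholder location
        project: "my-project",      // optional; omit to use the provider's default project
    });

    // Output form: Input-wrapped arguments, Output-wrapped result.
    const scan = google_native.dataplex.v1.getDataScanOutput({
        dataScanId: "my-scan",
        location: "us-central1",
    });

    // Top-level result properties can be exported directly from the Output.
    export const scanState = scan.state;
    export const scanType = scan.type;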

    The following arguments are supported:

    DataScanId string
    Location string
    Project string
    View string
    DataScanId string
    Location string
    Project string
    View string
    dataScanId String
    location String
    project String
    view String
    dataScanId string
    location string
    project string
    view string
    dataScanId String
    location String
    project String
    view String

    getDataScan Result

    The following output properties are available:

    CreateTime string
    The time when the scan was created.
    Data Pulumi.GoogleNative.Dataplex.V1.Outputs.GoogleCloudDataplexV1DataSourceResponse
    The data source for DataScan.
    DataProfileResult Pulumi.GoogleNative.Dataplex.V1.Outputs.GoogleCloudDataplexV1DataProfileResultResponse
    The result of the data profile scan.
    DataProfileSpec Pulumi.GoogleNative.Dataplex.V1.Outputs.GoogleCloudDataplexV1DataProfileSpecResponse
    DataProfileScan-related settings.
    DataQualityResult Pulumi.GoogleNative.Dataplex.V1.Outputs.GoogleCloudDataplexV1DataQualityResultResponse
    The result of the data quality scan.
    DataQualitySpec Pulumi.GoogleNative.Dataplex.V1.Outputs.GoogleCloudDataplexV1DataQualitySpecResponse
    DataQualityScan-related settings.
    Description string
    Optional. Description of the scan. Must be between 1-1024 characters.
    DisplayName string
    Optional. User-friendly display name. Must be between 1-256 characters.
    ExecutionSpec Pulumi.GoogleNative.Dataplex.V1.Outputs.GoogleCloudDataplexV1DataScanExecutionSpecResponse
    Optional. DataScan execution settings. If not specified, the fields in it will use their default values.
    ExecutionStatus Pulumi.GoogleNative.Dataplex.V1.Outputs.GoogleCloudDataplexV1DataScanExecutionStatusResponse
    Status of the data scan execution.
    Labels Dictionary<string, string>
    Optional. User-defined labels for the scan.
    Name string
    The relative resource name of the scan, of the form: projects/{project}/locations/{location_id}/dataScans/{datascan_id}, where project refers to a project_id or project_number and location_id refers to a GCP region.
    State string
    Current state of the DataScan.
    Type string
    The type of DataScan.
    Uid string
    System generated globally unique ID for the scan. This ID will be different if the scan is deleted and re-created with the same name.
    UpdateTime string
    The time when the scan was last updated.
    CreateTime string
    The time when the scan was created.
    Data GoogleCloudDataplexV1DataSourceResponse
    The data source for DataScan.
    DataProfileResult GoogleCloudDataplexV1DataProfileResultResponse
    The result of the data profile scan.
    DataProfileSpec GoogleCloudDataplexV1DataProfileSpecResponse
    DataProfileScan-related settings.
    DataQualityResult GoogleCloudDataplexV1DataQualityResultResponse
    The result of the data quality scan.
    DataQualitySpec GoogleCloudDataplexV1DataQualitySpecResponse
    DataQualityScan-related settings.
    Description string
    Optional. Description of the scan. Must be between 1-1024 characters.
    DisplayName string
    Optional. User-friendly display name. Must be between 1-256 characters.
    ExecutionSpec GoogleCloudDataplexV1DataScanExecutionSpecResponse
    Optional. DataScan execution settings. If not specified, the fields in it will use their default values.
    ExecutionStatus GoogleCloudDataplexV1DataScanExecutionStatusResponse
    Status of the data scan execution.
    Labels map[string]string
    Optional. User-defined labels for the scan.
    Name string
    The relative resource name of the scan, of the form: projects/{project}/locations/{location_id}/dataScans/{datascan_id}, where project refers to a project_id or project_number and location_id refers to a GCP region.
    State string
    Current state of the DataScan.
    Type string
    The type of DataScan.
    Uid string
    System generated globally unique ID for the scan. This ID will be different if the scan is deleted and re-created with the same name.
    UpdateTime string
    The time when the scan was last updated.
    createTime String
    The time when the scan was created.
    data GoogleCloudDataplexV1DataSourceResponse
    The data source for DataScan.
    dataProfileResult GoogleCloudDataplexV1DataProfileResultResponse
    The result of the data profile scan.
    dataProfileSpec GoogleCloudDataplexV1DataProfileSpecResponse
    DataProfileScan-related settings.
    dataQualityResult GoogleCloudDataplexV1DataQualityResultResponse
    The result of the data quality scan.
    dataQualitySpec GoogleCloudDataplexV1DataQualitySpecResponse
    DataQualityScan-related settings.
    description String
    Optional. Description of the scan. Must be between 1-1024 characters.
    displayName String
    Optional. User-friendly display name. Must be between 1-256 characters.
    executionSpec GoogleCloudDataplexV1DataScanExecutionSpecResponse
    Optional. DataScan execution settings. If not specified, the fields in it will use their default values.
    executionStatus GoogleCloudDataplexV1DataScanExecutionStatusResponse
    Status of the data scan execution.
    labels Map<String,String>
    Optional. User-defined labels for the scan.
    name String
    The relative resource name of the scan, of the form: projects/{project}/locations/{location_id}/dataScans/{datascan_id}, where project refers to a project_id or project_number and location_id refers to a GCP region.
    state String
    Current state of the DataScan.
    type String
    The type of DataScan.
    uid String
    System generated globally unique ID for the scan. This ID will be different if the scan is deleted and re-created with the same name.
    updateTime String
    The time when the scan was last updated.
    createTime string
    The time when the scan was created.
    data GoogleCloudDataplexV1DataSourceResponse
    The data source for DataScan.
    dataProfileResult GoogleCloudDataplexV1DataProfileResultResponse
    The result of the data profile scan.
    dataProfileSpec GoogleCloudDataplexV1DataProfileSpecResponse
    DataProfileScan-related settings.
    dataQualityResult GoogleCloudDataplexV1DataQualityResultResponse
    The result of the data quality scan.
    dataQualitySpec GoogleCloudDataplexV1DataQualitySpecResponse
    DataQualityScan-related settings.
    description string
    Optional. Description of the scan. Must be between 1-1024 characters.
    displayName string
    Optional. User-friendly display name. Must be between 1-256 characters.
    executionSpec GoogleCloudDataplexV1DataScanExecutionSpecResponse
    Optional. DataScan execution settings. If not specified, the fields in it will use their default values.
    executionStatus GoogleCloudDataplexV1DataScanExecutionStatusResponse
    Status of the data scan execution.
    labels {[key: string]: string}
    Optional. User-defined labels for the scan.
    name string
    The relative resource name of the scan, of the form: projects/{project}/locations/{location_id}/dataScans/{datascan_id}, where project refers to a project_id or project_number and location_id refers to a GCP region.
    state string
    Current state of the DataScan.
    type string
    The type of DataScan.
    uid string
    System generated globally unique ID for the scan. This ID will be different if the scan is deleted and re-created with the same name.
    updateTime string
    The time when the scan was last updated.
    create_time str
    The time when the scan was created.
    data GoogleCloudDataplexV1DataSourceResponse
    The data source for DataScan.
    data_profile_result GoogleCloudDataplexV1DataProfileResultResponse
    The result of the data profile scan.
    data_profile_spec GoogleCloudDataplexV1DataProfileSpecResponse
    DataProfileScan-related settings.
    data_quality_result GoogleCloudDataplexV1DataQualityResultResponse
    The result of the data quality scan.
    data_quality_spec GoogleCloudDataplexV1DataQualitySpecResponse
    DataQualityScan-related settings.
    description str
    Optional. Description of the scan. Must be between 1-1024 characters.
    display_name str
    Optional. User-friendly display name. Must be between 1-256 characters.
    execution_spec GoogleCloudDataplexV1DataScanExecutionSpecResponse
    Optional. DataScan execution settings. If not specified, the fields in it will use their default values.
    execution_status GoogleCloudDataplexV1DataScanExecutionStatusResponse
    Status of the data scan execution.
    labels Mapping[str, str]
    Optional. User-defined labels for the scan.
    name str
    The relative resource name of the scan, of the form: projects/{project}/locations/{location_id}/dataScans/{datascan_id}, where project refers to a project_id or project_number and location_id refers to a GCP region.
    state str
    Current state of the DataScan.
    type str
    The type of DataScan.
    uid str
    System generated globally unique ID for the scan. This ID will be different if the scan is deleted and re-created with the same name.
    update_time str
    The time when the scan was last updated.
    createTime String
    The time when the scan was created.
    data Property Map
    The data source for DataScan.
    dataProfileResult Property Map
    The result of the data profile scan.
    dataProfileSpec Property Map
    DataProfileScan-related settings.
    dataQualityResult Property Map
    The result of the data quality scan.
    dataQualitySpec Property Map
    DataQualityScan-related settings.
    description String
    Optional. Description of the scan. Must be between 1-1024 characters.
    displayName String
    Optional. User-friendly display name. Must be between 1-256 characters.
    executionSpec Property Map
    Optional. DataScan execution settings. If not specified, the fields in it will use their default values.
    executionStatus Property Map
    Status of the data scan execution.
    labels Map<String>
    Optional. User-defined labels for the scan.
    name String
    The relative resource name of the scan, of the form: projects/{project}/locations/{location_id}/dataScans/{datascan_id}, where project refers to a project_id or project_number and location_id refers to a GCP region.
    state String
    Current state of the DataScan.
    type String
    The type of DataScan.
    uid String
    System generated globally unique ID for the scan. This ID will be different if the scan is deleted and re-created with the same name.
    updateTime String
    The time when the scan was last updated.
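
    To make the nested result objects above concrete, here is a short TypeScript sketch using the output form; the IDs are placeholders, and which nested results are populated depends on the scan type and the requested view:

    import * as google_native from "@pulumi/google-native";

    const scan = google_native.dataplex.v1.getDataScanOutput({
        dataScanId: "my-scan",      // placeholder
        location: "us-central1",    // placeholder
        view: "FULL",               // assumption: request the full view so scan results are included
    });

    // Nested response properties are lifted onto the Output; absent values come through as undefined.
    export const scannedRowCount = scan.dataProfileResult.rowCount;
    export const columnResults = scan.dataQualityResult.columns;
    export const executionStatus = scan.executionStatus;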

    Supporting Types

    GoogleCloudDataplexV1DataProfileResultPostScanActionsResultBigQueryExportResultResponse

    Message string
    Additional information about the BigQuery export.
    State string
    Execution state for the BigQuery export.
    Message string
    Additional information about the BigQuery export.
    State string
    Execution state for the BigQuery export.
    message String
    Additional information about the BigQuery export.
    state String
    Execution state for the BigQuery export.
    message string
    Additional information about the BigQuery export.
    state string
    Execution state for the BigQuery export.
    message str
    Additional information about the BigQuery export.
    state str
    Execution state for the BigQuery export.
    message String
    Additional information about the BigQuery export.
    state String
    Execution state for the BigQuery export.

    GoogleCloudDataplexV1DataProfileResultPostScanActionsResultResponse

    bigqueryExportResult Property Map
    The result of BigQuery export post scan action.

    GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoDoubleFieldInfoResponse

    Average double
    Average of non-null values in the scanned data. NaN, if the field has a NaN.
    Max double
    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
    Min double
    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
    Quartiles List<double>
    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. The three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3 (see the sketch following this listing).
    StandardDeviation double
    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
    Average float64
    Average of non-null values in the scanned data. NaN, if the field has a NaN.
    Max float64
    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
    Min float64
    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
    Quartiles []float64
    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. The three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.
    StandardDeviation float64
    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
    average Double
    Average of non-null values in the scanned data. NaN, if the field has a NaN.
    max Double
    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
    min Double
    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
    quartiles List<Double>
    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. The three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.
    standardDeviation Double
    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
    average number
    Average of non-null values in the scanned data. NaN, if the field has a NaN.
    max number
    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
    min number
    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
    quartiles number[]
    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. The three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.
    standardDeviation number
    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
    average float
    Average of non-null values in the scanned data. NaN, if the field has a NaN.
    max float
    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
    min float
    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
    quartiles Sequence[float]
    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. The three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.
    standard_deviation float
    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
    average Number
    Average of non-null values in the scanned data. NaN, if the field has a NaN.
    max Number
    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
    min Number
    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
    quartiles List<Number>
    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. The three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.
    standardDeviation Number
    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
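
    For intuition, the following standalone TypeScript sketch shows how three quartile values relate to a sorted sample. It only illustrates the (Q1, median, Q3) ordering used by the quartiles field; it is not the algorithm the service uses:

    // Naive quartiles over a numeric sample, for illustration only.
    function quartiles(values: number[]): [number, number, number] {
        const sorted = [...values].sort((a, b) => a - b);
        const at = (p: number): number => {
            const idx = (sorted.length - 1) * p;
            const lo = Math.floor(idx);
            const hi = Math.ceil(idx);
            // Linear interpolation between the two nearest ranks.
            return sorted[lo] + (sorted[hi] - sorted[lo]) * (idx - lo);
        };
        return [at(0.25), at(0.5), at(0.75)];   // [Q1, median, Q3], matching the field's ordering
    }

    // quartiles([1, 2, 3, 4, 5, 6, 7, 8, 9]) returns [3, 5, 7]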

    GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoIntegerFieldInfoResponse

    Average double
    Average of non-null values in the scanned data. NaN, if the field has a NaN.
    Max string
    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
    Min string
    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
    Quartiles List<string>
    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. The three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of approximate quartile values for the scanned data, occurring in order Q1, median, Q3.
    StandardDeviation double
    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
    Average float64
    Average of non-null values in the scanned data. NaN, if the field has a NaN.
    Max string
    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
    Min string
    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
    Quartiles []string
    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. The three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of approximate quartile values for the scanned data, occurring in order Q1, median, Q3.
    StandardDeviation float64
    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
    average Double
    Average of non-null values in the scanned data. NaN, if the field has a NaN.
    max String
    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
    min String
    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
    quartiles List<String>
    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. The three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of approximate quartile values for the scanned data, occurring in order Q1, median, Q3.
    standardDeviation Double
    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
    average number
    Average of non-null values in the scanned data. NaN, if the field has a NaN.
    max string
    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
    min string
    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
    quartiles string[]
    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. The three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of approximate quartile values for the scanned data, occurring in order Q1, median, Q3.
    standardDeviation number
    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
    average float
    Average of non-null values in the scanned data. NaN, if the field has a NaN.
    max str
    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
    min str
    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
    quartiles Sequence[str]
    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. The three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of approximate quartile values for the scanned data, occurring in order Q1, median, Q3.
    standard_deviation float
    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
    average Number
    Average of non-null values in the scanned data. NaN, if the field has a NaN.
    max String
    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
    min String
    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
    quartiles List<String>
    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. The three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of approximate quartile values for the scanned data, occurring in order Q1, median, Q3.
    standardDeviation Number
    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.

    GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoResponse

    DistinctRatio double
    Ratio of rows with distinct values against total scanned rows. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
    DoubleProfile Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoDoubleFieldInfoResponse
    Double type field information.
    IntegerProfile Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoIntegerFieldInfoResponse
    Integer type field information.
    NullRatio double
    Ratio of rows with null value against total scanned rows.
    StringProfile Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoStringFieldInfoResponse
    String type field information.
    TopNValues List<Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoTopNValueResponse>
    The list of top N non-null values, frequency and ratio with which they occur in the scanned data. N is 10 or equal to the number of distinct values in the field, whichever is smaller. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
    DistinctRatio float64
    Ratio of rows with distinct values against total scanned rows. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
    DoubleProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoDoubleFieldInfoResponse
    Double type field information.
    IntegerProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoIntegerFieldInfoResponse
    Integer type field information.
    NullRatio float64
    Ratio of rows with null value against total scanned rows.
    StringProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoStringFieldInfoResponse
    String type field information.
    TopNValues []GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoTopNValueResponse
    The list of top N non-null values, frequency and ratio with which they occur in the scanned data. N is 10 or equal to the number of distinct values in the field, whichever is smaller. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
    distinctRatio Double
    Ratio of rows with distinct values against total scanned rows. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
    doubleProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoDoubleFieldInfoResponse
    Double type field information.
    integerProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoIntegerFieldInfoResponse
    Integer type field information.
    nullRatio Double
    Ratio of rows with null value against total scanned rows.
    stringProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoStringFieldInfoResponse
    String type field information.
    topNValues List<GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoTopNValueResponse>
    The list of top N non-null values, frequency and ratio with which they occur in the scanned data. N is 10 or equal to the number of distinct values in the field, whichever is smaller. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
    distinctRatio number
    Ratio of rows with distinct values against total scanned rows. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
    doubleProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoDoubleFieldInfoResponse
    Double type field information.
    integerProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoIntegerFieldInfoResponse
    Integer type field information.
    nullRatio number
    Ratio of rows with null value against total scanned rows.
    stringProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoStringFieldInfoResponse
    String type field information.
    topNValues GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoTopNValueResponse[]
    The list of top N non-null values, frequency and ratio with which they occur in the scanned data. N is 10 or equal to the number of distinct values in the field, whichever is smaller. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
    distinct_ratio float
    Ratio of rows with distinct values against total scanned rows. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
    double_profile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoDoubleFieldInfoResponse
    Double type field information.
    integer_profile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoIntegerFieldInfoResponse
    Integer type field information.
    null_ratio float
    Ratio of rows with null value against total scanned rows.
    string_profile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoStringFieldInfoResponse
    String type field information.
    top_n_values Sequence[GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoTopNValueResponse]
    The list of top N non-null values, frequency and ratio with which they occur in the scanned data. N is 10 or equal to the number of distinct values in the field, whichever is smaller. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
    distinctRatio Number
    Ratio of rows with distinct values against total scanned rows. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
    doubleProfile Property Map
    Double type field information.
    integerProfile Property Map
    Integer type field information.
    nullRatio Number
    Ratio of rows with null value against total scanned rows.
    stringProfile Property Map
    String type field information.
    topNValues List<Property Map>
    The list of top N non-null values, frequency and ratio with which they occur in the scanned data. N is 10 or equal to the number of distinct values in the field, whichever is smaller. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.

    GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoStringFieldInfoResponse

    AverageLength double
    Average length of non-null values in the scanned data.
    MaxLength string
    Maximum length of non-null values in the scanned data.
    MinLength string
    Minimum length of non-null values in the scanned data.
    AverageLength float64
    Average length of non-null values in the scanned data.
    MaxLength string
    Maximum length of non-null values in the scanned data.
    MinLength string
    Minimum length of non-null values in the scanned data.
    averageLength Double
    Average length of non-null values in the scanned data.
    maxLength String
    Maximum length of non-null values in the scanned data.
    minLength String
    Minimum length of non-null values in the scanned data.
    averageLength number
    Average length of non-null values in the scanned data.
    maxLength string
    Maximum length of non-null values in the scanned data.
    minLength string
    Minimum length of non-null values in the scanned data.
    average_length float
    Average length of non-null values in the scanned data.
    max_length str
    Maximum length of non-null values in the scanned data.
    min_length str
    Minimum length of non-null values in the scanned data.
    averageLength Number
    Average length of non-null values in the scanned data.
    maxLength String
    Maximum length of non-null values in the scanned data.
    minLength String
    Minimum length of non-null values in the scanned data.

    GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoTopNValueResponse

    Count string
    Count of the corresponding value in the scanned data.
    Ratio double
    Ratio of the corresponding value in the field against the total number of rows in the scanned data.
    Value string
    String value of a top N non-null value.
    Count string
    Count of the corresponding value in the scanned data.
    Ratio float64
    Ratio of the corresponding value in the field against the total number of rows in the scanned data.
    Value string
    String value of a top N non-null value.
    count String
    Count of the corresponding value in the scanned data.
    ratio Double
    Ratio of the corresponding value in the field against the total number of rows in the scanned data.
    value String
    String value of a top N non-null value.
    count string
    Count of the corresponding value in the scanned data.
    ratio number
    Ratio of the corresponding value in the field against the total number of rows in the scanned data.
    value string
    String value of a top N non-null value.
    count str
    Count of the corresponding value in the scanned data.
    ratio float
    Ratio of the corresponding value in the field against the total number of rows in the scanned data.
    value str
    String value of a top N non-null value.
    count String
    Count of the corresponding value in the scanned data.
    ratio Number
    Ratio of the corresponding value in the field against the total number of rows in the scanned data.
    value String
    String value of a top N non-null value.

    GoogleCloudDataplexV1DataProfileResultProfileFieldResponse

    Mode string
    The mode of the field. Possible values include: REQUIRED, if it is a required field. NULLABLE, if it is an optional field. REPEATED, if it is a repeated field.
    Name string
    The name of the field.
    Profile Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoResponse
    Profile information for the corresponding field.
    Type string
    The data type retrieved from the schema of the data source. For instance, for a BigQuery native table, it is the BigQuery Table Schema (https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#tablefieldschema). For a Dataplex Entity, it is the Entity Schema (https://cloud.google.com/dataplex/docs/reference/rpc/google.cloud.dataplex.v1#type_3).
    Mode string
    The mode of the field. Possible values include: REQUIRED, if it is a required field. NULLABLE, if it is an optional field. REPEATED, if it is a repeated field.
    Name string
    The name of the field.
    Profile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoResponse
    Profile information for the corresponding field.
    Type string
    The data type retrieved from the schema of the data source. For instance, for a BigQuery native table, it is the BigQuery Table Schema (https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#tablefieldschema). For a Dataplex Entity, it is the Entity Schema (https://cloud.google.com/dataplex/docs/reference/rpc/google.cloud.dataplex.v1#type_3).
    mode String
    The mode of the field. Possible values include: REQUIRED, if it is a required field. NULLABLE, if it is an optional field. REPEATED, if it is a repeated field.
    name String
    The name of the field.
    profile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoResponse
    Profile information for the corresponding field.
    type String
    The data type retrieved from the schema of the data source. For instance, for a BigQuery native table, it is the BigQuery Table Schema (https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#tablefieldschema). For a Dataplex Entity, it is the Entity Schema (https://cloud.google.com/dataplex/docs/reference/rpc/google.cloud.dataplex.v1#type_3).
    mode string
    The mode of the field. Possible values include: REQUIRED, if it is a required field. NULLABLE, if it is an optional field. REPEATED, if it is a repeated field.
    name string
    The name of the field.
    profile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoResponse
    Profile information for the corresponding field.
    type string
    The data type retrieved from the schema of the data source. For instance, for a BigQuery native table, it is the BigQuery Table Schema (https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#tablefieldschema). For a Dataplex Entity, it is the Entity Schema (https://cloud.google.com/dataplex/docs/reference/rpc/google.cloud.dataplex.v1#type_3).
    mode str
    The mode of the field. Possible values include: REQUIRED, if it is a required field. NULLABLE, if it is an optional field. REPEATED, if it is a repeated field.
    name str
    The name of the field.
    profile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoResponse
    Profile information for the corresponding field.
    type str
    The data type retrieved from the schema of the data source. For instance, for a BigQuery native table, it is the BigQuery Table Schema (https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#tablefieldschema). For a Dataplex Entity, it is the Entity Schema (https://cloud.google.com/dataplex/docs/reference/rpc/google.cloud.dataplex.v1#type_3).
    mode String
    The mode of the field. Possible values include: REQUIRED, if it is a required field. NULLABLE, if it is an optional field. REPEATED, if it is a repeated field.
    name String
    The name of the field.
    profile Property Map
    Profile information for the corresponding field.
    type String
    The data type retrieved from the schema of the data source. For instance, for a BigQuery native table, it is the BigQuery Table Schema (https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#tablefieldschema). For a Dataplex Entity, it is the Entity Schema (https://cloud.google.com/dataplex/docs/reference/rpc/google.cloud.dataplex.v1#type_3).

    GoogleCloudDataplexV1DataProfileResultProfileResponse

    Fields []GoogleCloudDataplexV1DataProfileResultProfileFieldResponse
    List of fields with structural and profile information for each field.
    fields List<GoogleCloudDataplexV1DataProfileResultProfileFieldResponse>
    List of fields with structural and profile information for each field.
    fields GoogleCloudDataplexV1DataProfileResultProfileFieldResponse[]
    List of fields with structural and profile information for each field.
    fields Sequence[GoogleCloudDataplexV1DataProfileResultProfileFieldResponse]
    List of fields with structural and profile information for each field.
    fields List<Property Map>
    List of fields with structural and profile information for each field.

    GoogleCloudDataplexV1DataProfileResultResponse

    postScanActionsResult Property Map
    The result of post scan actions.
    profile Property Map
    The profile information per field.
    rowCount String
    The count of rows scanned.
    scannedData Property Map
    The data scanned for this result.

    GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExportResponse

    ResultsTable string
    Optional. The BigQuery table to export DataProfileScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    ResultsTable string
    Optional. The BigQuery table to export DataProfileScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    resultsTable String
    Optional. The BigQuery table to export DataProfileScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    resultsTable string
    Optional. The BigQuery table to export DataProfileScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    results_table str
    Optional. The BigQuery table to export DataProfileScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    resultsTable String
    Optional. The BigQuery table to export DataProfileScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID

    GoogleCloudDataplexV1DataProfileSpecPostScanActionsResponse

    BigqueryExport GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExportResponse
    Optional. If set, results will be exported to the provided BigQuery table.
    bigqueryExport GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExportResponse
    Optional. If set, results will be exported to the provided BigQuery table.
    bigqueryExport GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExportResponse
    Optional. If set, results will be exported to the provided BigQuery table.
    bigquery_export GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExportResponse
    Optional. If set, results will be exported to the provided BigQuery table.
    bigqueryExport Property Map
    Optional. If set, results will be exported to the provided BigQuery table.

    GoogleCloudDataplexV1DataProfileSpecResponse

    ExcludeFields Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponse
    Optional. The fields to exclude from data profile. If specified, the fields will be excluded from data profile, regardless of include_fields value.
    IncludeFields Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponse
    Optional. The fields to include in data profile. If not specified, all fields at the time of profile scan job execution are included, except for ones listed in exclude_fields.
    PostScanActions Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileSpecPostScanActionsResponse
    Optional. Actions to take upon job completion.
    RowFilter string
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    SamplingPercent double
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
    ExcludeFields GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponse
    Optional. The fields to exclude from data profile. If specified, the fields will be excluded from data profile, regardless of include_fields value.
    IncludeFields GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponse
    Optional. The fields to include in data profile. If not specified, all fields at the time of profile scan job execution are included, except for ones listed in exclude_fields.
    PostScanActions GoogleCloudDataplexV1DataProfileSpecPostScanActionsResponse
    Optional. Actions to take upon job completion.
    RowFilter string
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    SamplingPercent float64
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
    excludeFields GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponse
    Optional. The fields to exclude from data profile. If specified, the fields will be excluded from data profile, regardless of include_fields value.
    includeFields GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponse
    Optional. The fields to include in data profile. If not specified, all fields at the time of profile scan job execution are included, except for ones listed in exclude_fields.
    postScanActions GoogleCloudDataplexV1DataProfileSpecPostScanActionsResponse
    Optional. Actions to take upon job completion.
    rowFilter String
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    samplingPercent Double
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
    excludeFields GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponse
    Optional. The fields to exclude from data profile. If specified, the fields will be excluded from data profile, regardless of include_fields value.
    includeFields GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponse
    Optional. The fields to include in data profile. If not specified, all fields at the time of profile scan job execution are included, except for ones listed in exclude_fields.
    postScanActions GoogleCloudDataplexV1DataProfileSpecPostScanActionsResponse
    Optional. Actions to take upon job completion.
    rowFilter string
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    samplingPercent number
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
    exclude_fields GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponse
    Optional. The fields to exclude from data profile. If specified, the fields will be excluded from data profile, regardless of include_fields value.
    include_fields GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponse
    Optional. The fields to include in data profile. If not specified, all fields at the time of profile scan job execution are included, except for ones listed in exclude_fields.
    post_scan_actions GoogleCloudDataplexV1DataProfileSpecPostScanActionsResponse
    Optional. Actions to take upon job completion.
    row_filter str
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    sampling_percent float
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
    excludeFields Property Map
    Optional. The fields to exclude from data profile. If specified, the fields will be excluded from data profile, regardless of include_fields value.
    includeFields Property Map
    Optional. The fields to include in data profile. If not specified, all fields at the time of profile scan job execution are included, except for ones listed in exclude_fields.
    postScanActions Property Map
    Optional. Actions to take upon job completion.
    rowFilter String
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    samplingPercent Number
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
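
    Putting these fields together, a hypothetical data profile spec could look like the following TypeScript object; the table, dataset, and column names are invented for illustration:

    // Hypothetical shape of a data profile spec, using the fields documented above.
    const exampleProfileSpec = {
        rowFilter: "station_id > 1000",                     // BigQuery standard SQL WHERE expression
        samplingPercent: 10,                                // profile a 10% sample of the rows
        includeFields: { fieldNames: ["station_id", "name", "capacity"] },
        excludeFields: { fieldNames: ["internal_notes"] },  // exclusion wins over include_fields
        postScanActions: {
            bigqueryExport: {
                resultsTable: "//bigquery.googleapis.com/projects/my-project/datasets/my_dataset/tables/profile_results",
            },
        },
    };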

    GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponse

    FieldNames List<string>
    Optional. Expected input is a list of fully qualified names of fields as in the schema. Only top-level field names for nested fields are supported. For instance, if 'x' is of nested field type, listing 'x' is supported but 'x.y.z' is not supported. Here 'y' and 'y.z' are nested fields of 'x'.
    FieldNames []string
    Optional. Expected input is a list of fully qualified names of fields as in the schema. Only top-level field names for nested fields are supported. For instance, if 'x' is of nested field type, listing 'x' is supported but 'x.y.z' is not supported. Here 'y' and 'y.z' are nested fields of 'x'.
    fieldNames List<String>
    Optional. Expected input is a list of fully qualified names of fields as in the schema. Only top-level field names for nested fields are supported. For instance, if 'x' is of nested field type, listing 'x' is supported but 'x.y.z' is not supported. Here 'y' and 'y.z' are nested fields of 'x'.
    fieldNames string[]
    Optional. Expected input is a list of fully qualified names of fields as in the schema. Only top-level field names for nested fields are supported. For instance, if 'x' is of nested field type, listing 'x' is supported but 'x.y.z' is not supported. Here 'y' and 'y.z' are nested fields of 'x'.
    field_names Sequence[str]
    Optional. Expected input is a list of fully qualified names of fields as in the schema. Only top-level field names for nested fields are supported. For instance, if 'x' is of nested field type, listing 'x' is supported but 'x.y.z' is not supported. Here 'y' and 'y.z' are nested fields of 'x'.
    fieldNames List<String>
    Optional. Expected input is a list of fully qualified names of fields as in the schema. Only top-level field names for nested fields are supported. For instance, if 'x' is of nested field type, listing 'x' is supported but 'x.y.z' is not supported. Here 'y' and 'y.z' are nested fields of 'x'.

    GoogleCloudDataplexV1DataQualityColumnResultResponse

    Column string
    The column specified in the DataQualityRule.
    Score double
    The column-level data quality score for this data scan job if and only if the 'column' field is set. The score ranges between 0 and 100 (up to two decimal points).
    Column string
    The column specified in the DataQualityRule.
    Score float64
    The column-level data quality score for this data scan job if and only if the 'column' field is set. The score ranges between 0 and 100 (up to two decimal points).
    column String
    The column specified in the DataQualityRule.
    score Double
    The column-level data quality score for this data scan job if and only if the 'column' field is set. The score ranges between 0 and 100 (up to two decimal points).
    column string
    The column specified in the DataQualityRule.
    score number
    The column-level data quality score for this data scan job if and only if the 'column' field is set. The score ranges between 0 and 100 (up to two decimal points).
    column str
    The column specified in the DataQualityRule.
    score float
    The column-level data quality score for this data scan job if and only if the 'column' field is set. The score ranges between 0 and 100 (up to two decimal points).
    column String
    The column specified in the DataQualityRule.
    score Number
    The column-level data quality score for this data scan job if and only if the 'column' field is set. The score ranges between 0 and 100 (up to two decimal points).

    GoogleCloudDataplexV1DataQualityDimensionResponse

    Name string
    The dimension name a rule belongs to. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
    Name string
    The dimension name a rule belongs to. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
    name String
    The dimension name a rule belongs to. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
    name string
    The dimension name a rule belongs to. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
    name str
    The dimension name a rule belongs to. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
    name String
    The dimension name a rule belongs to. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"

    GoogleCloudDataplexV1DataQualityDimensionResultResponse

    Dimension Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityDimensionResponse
    The dimension config specified in the DataQualitySpec, as is.
    Passed bool
    Whether the dimension passed or failed.
    Score double
    The dimension-level data quality score for this data scan job if and only if the 'dimension' field is set. The score ranges between 0 and 100 (up to two decimal points).
    Dimension GoogleCloudDataplexV1DataQualityDimensionResponse
    The dimension config specified in the DataQualitySpec, as is.
    Passed bool
    Whether the dimension passed or failed.
    Score float64
    The dimension-level data quality score for this data scan job if and only if the 'dimension' field is set. The score ranges between 0 and 100 (up to two decimal points).
    dimension GoogleCloudDataplexV1DataQualityDimensionResponse
    The dimension config specified in the DataQualitySpec, as is.
    passed Boolean
    Whether the dimension passed or failed.
    score Double
    The dimension-level data quality score for this data scan job if and only if the 'dimension' field is set. The score ranges between 0 and 100 (up to two decimal points).
    dimension GoogleCloudDataplexV1DataQualityDimensionResponse
    The dimension config specified in the DataQualitySpec, as is.
    passed boolean
    Whether the dimension passed or failed.
    score number
    The dimension-level data quality score for this data scan job if and only if the 'dimension' field is set. The score ranges between 0 and 100 (up to two decimal points).
    dimension GoogleCloudDataplexV1DataQualityDimensionResponse
    The dimension config specified in the DataQualitySpec, as is.
    passed bool
    Whether the dimension passed or failed.
    score float
    The dimension-level data quality score for this data scan job if and only if the 'dimension' field is set. The score ranges between 0 and 100 (up to two decimal points).
    dimension Property Map
    The dimension config specified in the DataQualitySpec, as is.
    passed Boolean
    Whether the dimension passed or failed.
    score Number
    The dimension-level data quality score for this data scan job if and only if the 'dimension' field is set. The score ranges between 0 and 100 (up to two decimal points).

    GoogleCloudDataplexV1DataQualityResultPostScanActionsResultBigQueryExportResultResponse

    Message string
    Additional information about the BigQuery exporting.
    State string
    Execution state for the BigQuery exporting.
    Message string
    Additional information about the BigQuery exporting.
    State string
    Execution state for the BigQuery exporting.
    message String
    Additional information about the BigQuery exporting.
    state String
    Execution state for the BigQuery exporting.
    message string
    Additional information about the BigQuery exporting.
    state string
    Execution state for the BigQuery exporting.
    message str
    Additional information about the BigQuery exporting.
    state str
    Execution state for the BigQuery exporting.
    message String
    Additional information about the BigQuery exporting.
    state String
    Execution state for the BigQuery exporting.

    GoogleCloudDataplexV1DataQualityResultPostScanActionsResultResponse

    bigqueryExportResult Property Map
    The result of BigQuery export post scan action.
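
    A hedged sketch of how this nested result is reached from a fetched scan; the identifiers are placeholders and the FULL view is assumed.

    import pulumi
    import pulumi_google_native as google_native

    scan = google_native.dataplex.v1.get_data_scan(
        data_scan_id="my-scan", location="us-central1", project="my-project", view="FULL")

    # Report the BigQuery export that ran after the scan, if one was configured.
    dq = scan.data_quality_result
    export_result = dq.post_scan_actions_result.bigquery_export_result if dq and dq.post_scan_actions_result else None
    if export_result:
        pulumi.export("bqExportState", export_result.state)
        pulumi.export("bqExportMessage", export_result.message)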

    GoogleCloudDataplexV1DataQualityResultResponse

    Columns List<Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityColumnResultResponse>
    A list of results at the column level. A column will have a corresponding DataQualityColumnResult if and only if there is at least one rule with the 'column' field set to it.
    Dimensions List<Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityDimensionResultResponse>
    A list of results at the dimension level. A dimension will have a corresponding DataQualityDimensionResult if and only if there is at least one rule with the 'dimension' field set to it.
    Passed bool
    Overall data quality result -- true if all rules passed.
    PostScanActionsResult Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityResultPostScanActionsResultResponse
    The result of post scan actions.
    RowCount string
    The count of rows processed.
    Rules List<Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleResultResponse>
    A list of all the rules in a job, and their results.
    ScannedData Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1ScannedDataResponse
    The data scanned for this result.
    Score double
    The overall data quality score. The score ranges between 0 and 100 (up to two decimal points).
    Columns []GoogleCloudDataplexV1DataQualityColumnResultResponse
    A list of results at the column level. A column will have a corresponding DataQualityColumnResult if and only if there is at least one rule with the 'column' field set to it.
    Dimensions []GoogleCloudDataplexV1DataQualityDimensionResultResponse
    A list of results at the dimension level. A dimension will have a corresponding DataQualityDimensionResult if and only if there is at least one rule with the 'dimension' field set to it.
    Passed bool
    Overall data quality result -- true if all rules passed.
    PostScanActionsResult GoogleCloudDataplexV1DataQualityResultPostScanActionsResultResponse
    The result of post scan actions.
    RowCount string
    The count of rows processed.
    Rules []GoogleCloudDataplexV1DataQualityRuleResultResponse
    A list of all the rules in a job, and their results.
    ScannedData GoogleCloudDataplexV1ScannedDataResponse
    The data scanned for this result.
    Score float64
    The overall data quality score. The score ranges between 0 and 100 (up to two decimal points).
    columns List<GoogleCloudDataplexV1DataQualityColumnResultResponse>
    A list of results at the column level. A column will have a corresponding DataQualityColumnResult if and only if there is at least one rule with the 'column' field set to it.
    dimensions List<GoogleCloudDataplexV1DataQualityDimensionResultResponse>
    A list of results at the dimension level. A dimension will have a corresponding DataQualityDimensionResult if and only if there is at least one rule with the 'dimension' field set to it.
    passed Boolean
    Overall data quality result -- true if all rules passed.
    postScanActionsResult GoogleCloudDataplexV1DataQualityResultPostScanActionsResultResponse
    The result of post scan actions.
    rowCount String
    The count of rows processed.
    rules List<GoogleCloudDataplexV1DataQualityRuleResultResponse>
    A list of all the rules in a job, and their results.
    scannedData GoogleCloudDataplexV1ScannedDataResponse
    The data scanned for this result.
    score Double
    The overall data quality score. The score ranges between 0 and 100 (up to two decimal points).
    columns GoogleCloudDataplexV1DataQualityColumnResultResponse[]
    A list of results at the column level. A column will have a corresponding DataQualityColumnResult if and only if there is at least one rule with the 'column' field set to it.
    dimensions GoogleCloudDataplexV1DataQualityDimensionResultResponse[]
    A list of results at the dimension level. A dimension will have a corresponding DataQualityDimensionResult if and only if there is at least one rule with the 'dimension' field set to it.
    passed boolean
    Overall data quality result -- true if all rules passed.
    postScanActionsResult GoogleCloudDataplexV1DataQualityResultPostScanActionsResultResponse
    The result of post scan actions.
    rowCount string
    The count of rows processed.
    rules GoogleCloudDataplexV1DataQualityRuleResultResponse[]
    A list of all the rules in a job, and their results.
    scannedData GoogleCloudDataplexV1ScannedDataResponse
    The data scanned for this result.
    score number
    The overall data quality score. The score ranges between 0 and 100 (up to two decimal points).
    columns Sequence[GoogleCloudDataplexV1DataQualityColumnResultResponse]
    A list of results at the column level. A column will have a corresponding DataQualityColumnResult if and only if there is at least one rule with the 'column' field set to it.
    dimensions Sequence[GoogleCloudDataplexV1DataQualityDimensionResultResponse]
    A list of results at the dimension level. A dimension will have a corresponding DataQualityDimensionResult if and only if there is at least one rule with the 'dimension' field set to it.
    passed bool
    Overall data quality result -- true if all rules passed.
    post_scan_actions_result GoogleCloudDataplexV1DataQualityResultPostScanActionsResultResponse
    The result of post scan actions.
    row_count str
    The count of rows processed.
    rules Sequence[GoogleCloudDataplexV1DataQualityRuleResultResponse]
    A list of all the rules in a job, and their results.
    scanned_data GoogleCloudDataplexV1ScannedDataResponse
    The data scanned for this result.
    score float
    The overall data quality score. The score ranges between 0 and 100 (up to two decimal points).
    columns List<Property Map>
    A list of results at the column level. A column will have a corresponding DataQualityColumnResult if and only if there is at least one rule with the 'column' field set to it.
    dimensions List<Property Map>
    A list of results at the dimension level. A dimension will have a corresponding DataQualityDimensionResult if and only if there is at least one rule with the 'dimension' field set to it.
    passed Boolean
    Overall data quality result -- true if all rules passed.
    postScanActionsResult Property Map
    The result of post scan actions.
    rowCount String
    The count of rows processed.
    rules List<Property Map>
    A list of all the rules in a job, and their results.
    scannedData Property Map
    The data scanned for this result.
    score Number
    The overall data quality score. The score ranges between 0 and 100 (up to two decimal points).
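
    Putting these fields together, the headline numbers of the latest data quality job can be surfaced as stack outputs. A minimal sketch with placeholder identifiers; the FULL view is assumed so the result is populated.

    import pulumi
    import pulumi_google_native as google_native

    scan = google_native.dataplex.v1.get_data_scan(
        data_scan_id="my-scan", location="us-central1", project="my-project", view="FULL")

    dq = scan.data_quality_result
    if dq:
        # Overall verdict, score, and number of rows the job processed.
        pulumi.export("dqPassed", dq.passed)
        pulumi.export("dqScore", dq.score)
        pulumi.export("dqRowCount", dq.row_count)
        # Dimension-level pass/fail, keyed by dimension name.
        for dim in dq.dimensions or []:
            if dim.dimension:
                pulumi.export(f"dimensionPassed-{dim.dimension.name}", dim.passed)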

    GoogleCloudDataplexV1DataQualityRuleRangeExpectationResponse

    MaxValue string
    Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    MinValue string
    Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    StrictMaxEnabled bool
    Optional. Whether each value needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    StrictMinEnabled bool
    Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
    MaxValue string
    Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    MinValue string
    Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    StrictMaxEnabled bool
    Optional. Whether each value needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    StrictMinEnabled bool
    Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
    maxValue String
    Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    minValue String
    Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    strictMaxEnabled Boolean
    Optional. Whether each value needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    strictMinEnabled Boolean
    Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
    maxValue string
    Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    minValue string
    Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    strictMaxEnabled boolean
    Optional. Whether each value needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    strictMinEnabled boolean
    Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
    max_value str
    Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    min_value str
    Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    strict_max_enabled bool
    Optional. Whether each value needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    strict_min_enabled bool
    Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
    maxValue String
    Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    minValue String
    Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    strictMaxEnabled Boolean
    Optional. Whether each value needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    strictMinEnabled Boolean
    Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
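
    These expectation objects come back inside each rule of the returned spec. As a hedged sketch (placeholder identifiers, FULL view assumed), the bounds of any range rules can be listed like this:

    import pulumi_google_native as google_native

    scan = google_native.dataplex.v1.get_data_scan(
        data_scan_id="my-scan", location="us-central1", project="my-project", view="FULL")

    # Print the configured bounds of every range rule in the returned DataQualitySpec.
    spec = scan.data_quality_spec
    for rule in (spec.rules if spec and spec.rules else []):
        rng = rule.range_expectation
        if rng:
            print(f"{rule.column}: [{rng.min_value}, {rng.max_value}], "
                  f"strict min/max = {rng.strict_min_enabled}/{rng.strict_max_enabled}")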

    GoogleCloudDataplexV1DataQualityRuleRegexExpectationResponse

    Regex string
    Optional. A regular expression the column value is expected to match.
    Regex string
    Optional. A regular expression the column value is expected to match.
    regex String
    Optional. A regular expression the column value is expected to match.
    regex string
    Optional. A regular expression the column value is expected to match.
    regex str
    Optional. A regular expression the column value is expected to match.
    regex String
    Optional. A regular expression the column value is expected to match.

    GoogleCloudDataplexV1DataQualityRuleResponse

    Column string
    Optional. The unnested column which this rule is evaluated against.
    Description string
    Optional. Description of the rule. The maximum length is 1,024 characters.
    Dimension string
    The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
    IgnoreNull bool
    Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. This field is only valid for row-level type rules.
    Name string
    Optional. A mutable name for the rule. The name must contain only letters (a-z, A-Z), numbers (0-9), or hyphens (-). The maximum length is 63 characters. Must start with a letter. Must end with a number or a letter.
    NonNullExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleNonNullExpectationResponse
    Row-level rule which evaluates whether each column value is null.
    RangeExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleRangeExpectationResponse
    Row-level rule which evaluates whether each column value lies between a specified range.
    RegexExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleRegexExpectationResponse
    Row-level rule which evaluates whether each column value matches a specified regex.
    RowConditionExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationResponse
    Row-level rule which evaluates whether each row in a table passes the specified condition.
    SetExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleSetExpectationResponse
    Row-level rule which evaluates whether each column value is contained by a specified set.
    StatisticRangeExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationResponse
    Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
    TableConditionExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationResponse
    Aggregate rule which evaluates whether the provided expression is true for a table.
    Threshold double
    Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates the default value (i.e. 1.0). This field is only valid for row-level type rules.
    UniquenessExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleUniquenessExpectationResponse
    Row-level rule which evaluates whether each column value is unique.
    Column string
    Optional. The unnested column which this rule is evaluated against.
    Description string
    Optional. Description of the rule. The maximum length is 1,024 characters.
    Dimension string
    The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
    IgnoreNull bool
    Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. This field is only valid for row-level type rules.
    Name string
    Optional. A mutable name for the rule. The name must contain only letters (a-z, A-Z), numbers (0-9), or hyphens (-). The maximum length is 63 characters. Must start with a letter. Must end with a number or a letter.
    NonNullExpectation GoogleCloudDataplexV1DataQualityRuleNonNullExpectationResponse
    Row-level rule which evaluates whether each column value is null.
    RangeExpectation GoogleCloudDataplexV1DataQualityRuleRangeExpectationResponse
    Row-level rule which evaluates whether each column value lies between a specified range.
    RegexExpectation GoogleCloudDataplexV1DataQualityRuleRegexExpectationResponse
    Row-level rule which evaluates whether each column value matches a specified regex.
    RowConditionExpectation GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationResponse
    Row-level rule which evaluates whether each row in a table passes the specified condition.
    SetExpectation GoogleCloudDataplexV1DataQualityRuleSetExpectationResponse
    Row-level rule which evaluates whether each column value is contained by a specified set.
    StatisticRangeExpectation GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationResponse
    Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
    TableConditionExpectation GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationResponse
    Aggregate rule which evaluates whether the provided expression is true for a table.
    Threshold float64
    Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates the default value (i.e. 1.0). This field is only valid for row-level type rules.
    UniquenessExpectation GoogleCloudDataplexV1DataQualityRuleUniquenessExpectationResponse
    Row-level rule which evaluates whether each column value is unique.
    column String
    Optional. The unnested column which this rule is evaluated against.
    description String
    Optional. Description of the rule. The maximum length is 1,024 characters.
    dimension String
    The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
    ignoreNull Boolean
    Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. This field is only valid for row-level type rules.
    name String
    Optional. A mutable name for the rule. The name must contain only letters (a-z, A-Z), numbers (0-9), or hyphens (-). The maximum length is 63 characters. Must start with a letter. Must end with a number or a letter.
    nonNullExpectation GoogleCloudDataplexV1DataQualityRuleNonNullExpectationResponse
    Row-level rule which evaluates whether each column value is null.
    rangeExpectation GoogleCloudDataplexV1DataQualityRuleRangeExpectationResponse
    Row-level rule which evaluates whether each column value lies between a specified range.
    regexExpectation GoogleCloudDataplexV1DataQualityRuleRegexExpectationResponse
    Row-level rule which evaluates whether each column value matches a specified regex.
    rowConditionExpectation GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationResponse
    Row-level rule which evaluates whether each row in a table passes the specified condition.
    setExpectation GoogleCloudDataplexV1DataQualityRuleSetExpectationResponse
    Row-level rule which evaluates whether each column value is contained by a specified set.
    statisticRangeExpectation GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationResponse
    Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
    tableConditionExpectation GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationResponse
    Aggregate rule which evaluates whether the provided expression is true for a table.
    threshold Double
    Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates the default value (i.e. 1.0). This field is only valid for row-level type rules.
    uniquenessExpectation GoogleCloudDataplexV1DataQualityRuleUniquenessExpectationResponse
    Row-level rule which evaluates whether each column value is unique.
    column string
    Optional. The unnested column which this rule is evaluated against.
    description string
    Optional. Description of the rule. The maximum length is 1,024 characters.
    dimension string
    The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
    ignoreNull boolean
    Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. This field is only valid for row-level type rules.
    name string
    Optional. A mutable name for the rule. The name must contain only letters (a-z, A-Z), numbers (0-9), or hyphens (-). The maximum length is 63 characters. Must start with a letter. Must end with a number or a letter.
    nonNullExpectation GoogleCloudDataplexV1DataQualityRuleNonNullExpectationResponse
    Row-level rule which evaluates whether each column value is null.
    rangeExpectation GoogleCloudDataplexV1DataQualityRuleRangeExpectationResponse
    Row-level rule which evaluates whether each column value lies between a specified range.
    regexExpectation GoogleCloudDataplexV1DataQualityRuleRegexExpectationResponse
    Row-level rule which evaluates whether each column value matches a specified regex.
    rowConditionExpectation GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationResponse
    Row-level rule which evaluates whether each row in a table passes the specified condition.
    setExpectation GoogleCloudDataplexV1DataQualityRuleSetExpectationResponse
    Row-level rule which evaluates whether each column value is contained by a specified set.
    statisticRangeExpectation GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationResponse
    Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
    tableConditionExpectation GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationResponse
    Aggregate rule which evaluates whether the provided expression is true for a table.
    threshold number
    Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates the default value (i.e. 1.0). This field is only valid for row-level type rules.
    uniquenessExpectation GoogleCloudDataplexV1DataQualityRuleUniquenessExpectationResponse
    Row-level rule which evaluates whether each column value is unique.
    column str
    Optional. The unnested column which this rule is evaluated against.
    description str
    Optional. Description of the rule. The maximum length is 1,024 characters.
    dimension str
    The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
    ignore_null bool
    Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. This field is only valid for row-level type rules.
    name str
    Optional. A mutable name for the rule. The name must contain only letters (a-z, A-Z), numbers (0-9), or hyphens (-). The maximum length is 63 characters. Must start with a letter. Must end with a number or a letter.
    non_null_expectation GoogleCloudDataplexV1DataQualityRuleNonNullExpectationResponse
    Row-level rule which evaluates whether each column value is null.
    range_expectation GoogleCloudDataplexV1DataQualityRuleRangeExpectationResponse
    Row-level rule which evaluates whether each column value lies between a specified range.
    regex_expectation GoogleCloudDataplexV1DataQualityRuleRegexExpectationResponse
    Row-level rule which evaluates whether each column value matches a specified regex.
    row_condition_expectation GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationResponse
    Row-level rule which evaluates whether each row in a table passes the specified condition.
    set_expectation GoogleCloudDataplexV1DataQualityRuleSetExpectationResponse
    Row-level rule which evaluates whether each column value is contained by a specified set.
    statistic_range_expectation GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationResponse
    Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
    table_condition_expectation GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationResponse
    Aggregate rule which evaluates whether the provided expression is true for a table.
    threshold float
    Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates the default value (i.e. 1.0). This field is only valid for row-level type rules.
    uniqueness_expectation GoogleCloudDataplexV1DataQualityRuleUniquenessExpectationResponse
    Row-level rule which evaluates whether each column value is unique.
    column String
    Optional. The unnested column which this rule is evaluated against.
    description String
    Optional. Description of the rule. The maximum length is 1,024 characters.
    dimension String
    The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
    ignoreNull Boolean
    Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. This field is only valid for row-level type rules.
    name String
    Optional. A mutable name for the rule. The name must contain only letters (a-z, A-Z), numbers (0-9), or hyphens (-). The maximum length is 63 characters. Must start with a letter. Must end with a number or a letter.
    nonNullExpectation Property Map
    Row-level rule which evaluates whether each column value is null.
    rangeExpectation Property Map
    Row-level rule which evaluates whether each column value lies between a specified range.
    regexExpectation Property Map
    Row-level rule which evaluates whether each column value matches a specified regex.
    rowConditionExpectation Property Map
    Row-level rule which evaluates whether each row in a table passes the specified condition.
    setExpectation Property Map
    Row-level rule which evaluates whether each column value is contained by a specified set.
    statisticRangeExpectation Property Map
    Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
    tableConditionExpectation Property Map
    Aggregate rule which evaluates whether the provided expression is true for a table.
    threshold Number
    Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates the default value (i.e. 1.0). This field is only valid for row-level type rules.
    uniquenessExpectation Property Map
    Row-level rule which evaluates whether each column value is unique.
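
    A short, illustrative sketch that summarizes the rules returned in the spec, using only the fields documented above (identifiers are placeholders; FULL view assumed):

    import pulumi_google_native as google_native

    scan = google_native.dataplex.v1.get_data_scan(
        data_scan_id="my-scan", location="us-central1", project="my-project", view="FULL")

    spec = scan.data_quality_spec
    for rule in (spec.rules if spec and spec.rules else []):
        # Each rule carries one expectation plus the common metadata shown here.
        print(f"rule={rule.name or '(unnamed)'} dimension={rule.dimension} "
              f"column={rule.column or '(table-level)'} threshold={rule.threshold}")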

    GoogleCloudDataplexV1DataQualityRuleResultResponse

    EvaluatedCount string
    The number of rows a rule was evaluated against. This field is only valid for row-level type rules. Evaluated count can be configured either to include all rows (default), with null rows automatically failing rule evaluation, or to exclude null rows from the evaluated_count by setting ignore_nulls = true.
    FailingRowsQuery string
    The query to find rows that did not pass this rule. This field is only valid for row-level type rules.
    NullCount string
    The number of rows with null values in the specified column.
    PassRatio double
    The ratio of passed_count / evaluated_count. This field is only valid for row-level type rules.
    Passed bool
    Whether the rule passed or failed.
    PassedCount string
    The number of rows which passed a rule evaluation. This field is only valid for row-level type rules.
    Rule Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleResponse
    The rule specified in the DataQualitySpec, as is.
    EvaluatedCount string
    The number of rows a rule was evaluated against. This field is only valid for row-level type rules. Evaluated count can be configured either to include all rows (default), with null rows automatically failing rule evaluation, or to exclude null rows from the evaluated_count by setting ignore_nulls = true.
    FailingRowsQuery string
    The query to find rows that did not pass this rule. This field is only valid for row-level type rules.
    NullCount string
    The number of rows with null values in the specified column.
    PassRatio float64
    The ratio of passed_count / evaluated_count. This field is only valid for row-level type rules.
    Passed bool
    Whether the rule passed or failed.
    PassedCount string
    The number of rows which passed a rule evaluation. This field is only valid for row-level type rules.
    Rule GoogleCloudDataplexV1DataQualityRuleResponse
    The rule specified in the DataQualitySpec, as is.
    evaluatedCount String
    The number of rows a rule was evaluated against. This field is only valid for row-level type rules. Evaluated count can be configured either to include all rows (default), with null rows automatically failing rule evaluation, or to exclude null rows from the evaluated_count by setting ignore_nulls = true.
    failingRowsQuery String
    The query to find rows that did not pass this rule. This field is only valid for row-level type rules.
    nullCount String
    The number of rows with null values in the specified column.
    passRatio Double
    The ratio of passed_count / evaluated_count. This field is only valid for row-level type rules.
    passed Boolean
    Whether the rule passed or failed.
    passedCount String
    The number of rows which passed a rule evaluation. This field is only valid for row-level type rules.
    rule GoogleCloudDataplexV1DataQualityRuleResponse
    The rule specified in the DataQualitySpec, as is.
    evaluatedCount string
    The number of rows a rule was evaluated against. This field is only valid for row-level type rules. Evaluated count can be configured either to include all rows (default), with null rows automatically failing rule evaluation, or to exclude null rows from the evaluated_count by setting ignore_nulls = true.
    failingRowsQuery string
    The query to find rows that did not pass this rule. This field is only valid for row-level type rules.
    nullCount string
    The number of rows with null values in the specified column.
    passRatio number
    The ratio of passed_count / evaluated_count. This field is only valid for row-level type rules.
    passed boolean
    Whether the rule passed or failed.
    passedCount string
    The number of rows which passed a rule evaluation. This field is only valid for row-level type rules.
    rule GoogleCloudDataplexV1DataQualityRuleResponse
    The rule specified in the DataQualitySpec, as is.
    evaluated_count str
    The number of rows a rule was evaluated against. This field is only valid for row-level type rules. Evaluated count can be configured either to include all rows (default), with null rows automatically failing rule evaluation, or to exclude null rows from the evaluated_count by setting ignore_nulls = true.
    failing_rows_query str
    The query to find rows that did not pass this rule. This field is only valid for row-level type rules.
    null_count str
    The number of rows with null values in the specified column.
    pass_ratio float
    The ratio of passed_count / evaluated_count. This field is only valid for row-level type rules.
    passed bool
    Whether the rule passed or failed.
    passed_count str
    The number of rows which passed a rule evaluation. This field is only valid for row-level type rules.
    rule GoogleCloudDataplexV1DataQualityRuleResponse
    The rule specified in the DataQualitySpec, as is.
    evaluatedCount String
    The number of rows a rule was evaluated against. This field is only valid for row-level type rules. Evaluated count can be configured either to include all rows (default), with null rows automatically failing rule evaluation, or to exclude null rows from the evaluated_count by setting ignore_nulls = true.
    failingRowsQuery String
    The query to find rows that did not pass this rule. This field is only valid for row-level type rules.
    nullCount String
    The number of rows with null values in the specified column.
    passRatio Number
    The ratio of passed_count / evaluated_count. This field is only valid for row-level type rules.
    passed Boolean
    Whether the rule passed or failed.
    passedCount String
    The number of rows which passed a rule evaluation. This field is only valid for row-level type rules.
    rule Property Map
    The rule specified in the DataQualitySpec, as is.
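
    For example, failing rules and the query that locates their failing rows can be pulled from the result. A minimal sketch with the same placeholder scan as above:

    import pulumi_google_native as google_native

    scan = google_native.dataplex.v1.get_data_scan(
        data_scan_id="my-scan", location="us-central1", project="my-project", view="FULL")

    dq = scan.data_quality_result
    for rule_result in (dq.rules if dq and dq.rules else []):
        if not rule_result.passed:
            name = rule_result.rule.name if rule_result.rule else "(unnamed)"
            # failing_rows_query is only populated for row-level rules.
            print(f"{name}: pass_ratio={rule_result.pass_ratio}")
            print(f"  inspect failing rows with: {rule_result.failing_rows_query}")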

    GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationResponse

    SqlExpression string
    Optional. The SQL expression.
    SqlExpression string
    Optional. The SQL expression.
    sqlExpression String
    Optional. The SQL expression.
    sqlExpression string
    Optional. The SQL expression.
    sql_expression str
    Optional. The SQL expression.
    sqlExpression String
    Optional. The SQL expression.

    GoogleCloudDataplexV1DataQualityRuleSetExpectationResponse

    Values List<string>
    Optional. Expected values for the column value.
    Values []string
    Optional. Expected values for the column value.
    values List<String>
    Optional. Expected values for the column value.
    values string[]
    Optional. Expected values for the column value.
    values Sequence[str]
    Optional. Expected values for the column value.
    values List<String>
    Optional. Expected values for the column value.

    GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationResponse

    MaxValue string
    Optional. The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    MinValue string
    Optional. The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    Statistic string
    Optional. The aggregate metric to evaluate.
    StrictMaxEnabled bool
    Optional. Whether the column statistic needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    StrictMinEnabled bool
    Optional. Whether the column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
    MaxValue string
    Optional. The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    MinValue string
    Optional. The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    Statistic string
    Optional. The aggregate metric to evaluate.
    StrictMaxEnabled bool
    Optional. Whether the column statistic needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    StrictMinEnabled bool
    Optional. Whether the column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
    maxValue String
    Optional. The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    minValue String
    Optional. The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    statistic String
    Optional. The aggregate metric to evaluate.
    strictMaxEnabled Boolean
    Optional. Whether the column statistic needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    strictMinEnabled Boolean
    Optional. Whether the column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
    maxValue string
    Optional. The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    minValue string
    Optional. The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    statistic string
    Optional. The aggregate metric to evaluate.
    strictMaxEnabled boolean
    Optional. Whether the column statistic needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    strictMinEnabled boolean
    Optional. Whether the column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
    max_value str
    Optional. The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    min_value str
    Optional. The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    statistic str
    Optional. The aggregate metric to evaluate.
    strict_max_enabled bool
    Optional. Whether the column statistic needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    strict_min_enabled bool
    Optional. Whether the column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
    maxValue String
    Optional. The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    minValue String
    Optional. The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    statistic String
    Optional. The aggregate metric to evaluate.
    strictMaxEnabled Boolean
    Optional. Whether the column statistic needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    strictMinEnabled Boolean
    Optional. Whether the column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.

    GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationResponse

    SqlExpression string
    Optional. The SQL expression.
    SqlExpression string
    Optional. The SQL expression.
    sqlExpression String
    Optional. The SQL expression.
    sqlExpression string
    Optional. The SQL expression.
    sql_expression str
    Optional. The SQL expression.
    sqlExpression String
    Optional. The SQL expression.

    GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExportResponse

    ResultsTable string
    Optional. The BigQuery table to export DataQualityScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    ResultsTable string
    Optional. The BigQuery table to export DataQualityScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    resultsTable String
    Optional. The BigQuery table to export DataQualityScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    resultsTable string
    Optional. The BigQuery table to export DataQualityScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    results_table str
    Optional. The BigQuery table to export DataQualityScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    resultsTable String
    Optional. The BigQuery table to export DataQualityScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID

    GoogleCloudDataplexV1DataQualitySpecPostScanActionsResponse

    BigqueryExport GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExportResponse
    Optional. If set, results will be exported to the provided BigQuery table.
    bigqueryExport GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExportResponse
    Optional. If set, results will be exported to the provided BigQuery table.
    bigqueryExport GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExportResponse
    Optional. If set, results will be exported to the provided BigQuery table.
    bigquery_export GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExportResponse
    Optional. If set, results will be exported to the provided BigQuery table.
    bigqueryExport Property Map
    Optional. If set, results will be exported to the provided BigQuery table.

    GoogleCloudDataplexV1DataQualitySpecResponse

    PostScanActions Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualitySpecPostScanActionsResponse
    Optional. Actions to take upon job completion.
    RowFilter string
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    Rules List<Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleResponse>
    The list of rules to evaluate against a data source. At least one rule is required.
    SamplingPercent double
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, or is 0 or 100.
    PostScanActions GoogleCloudDataplexV1DataQualitySpecPostScanActionsResponse
    Optional. Actions to take upon job completion.
    RowFilter string
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    Rules []GoogleCloudDataplexV1DataQualityRuleResponse
    The list of rules to evaluate against a data source. At least one rule is required.
    SamplingPercent float64
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, or is 0 or 100.
    postScanActions GoogleCloudDataplexV1DataQualitySpecPostScanActionsResponse
    Optional. Actions to take upon job completion.
    rowFilter String
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    rules List<GoogleCloudDataplexV1DataQualityRuleResponse>
    The list of rules to evaluate against a data source. At least one rule is required.
    samplingPercent Double
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, or is 0 or 100.
    postScanActions GoogleCloudDataplexV1DataQualitySpecPostScanActionsResponse
    Optional. Actions to take upon job completion.
    rowFilter string
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    rules GoogleCloudDataplexV1DataQualityRuleResponse[]
    The list of rules to evaluate against a data source. At least one rule is required.
    samplingPercent number
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, or is 0 or 100.
    post_scan_actions GoogleCloudDataplexV1DataQualitySpecPostScanActionsResponse
    Optional. Actions to take upon job completion.
    row_filter str
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    rules Sequence[GoogleCloudDataplexV1DataQualityRuleResponse]
    The list of rules to evaluate against a data source. At least one rule is required.
    sampling_percent float
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, or is 0 or 100.
    postScanActions Property Map
    Optional. Actions to take upon job completion.
    rowFilter String
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    rules List<Property Map>
    The list of rules to evaluate against a data source. At least one rule is required.
    samplingPercent Number
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, or is 0 or 100.
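
    A brief sketch that reads back the filtering and sampling settings of the spec (placeholder identifiers, FULL view assumed):

    import pulumi
    import pulumi_google_native as google_native

    scan = google_native.dataplex.v1.get_data_scan(
        data_scan_id="my-scan", location="us-central1", project="my-project", view="FULL")

    spec = scan.data_quality_spec
    if spec:
        # Both settings are optional; unset values mean all rows are scanned.
        pulumi.export("rowFilter", spec.row_filter)
        pulumi.export("samplingPercent", spec.sampling_percent)
        pulumi.export("ruleCount", len(spec.rules or []))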

    GoogleCloudDataplexV1DataScanExecutionSpecResponse

    Field string
    Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
    Trigger Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TriggerResponse
    Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.
    Field string
    Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
    Trigger GoogleCloudDataplexV1TriggerResponse
    Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.
    field String
    Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
    trigger GoogleCloudDataplexV1TriggerResponse
    Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.
    field string
    Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
    trigger GoogleCloudDataplexV1TriggerResponse
    Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.
    field str
    Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
    trigger GoogleCloudDataplexV1TriggerResponse
    Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.
    field String
    Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
    trigger Property Map
    Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.
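
    Assuming the getDataScan result exposes this spec as execution_spec (as the types above suggest), a hedged sketch for reporting how the scan is triggered might look like this; all identifiers are placeholders.

    import pulumi
    import pulumi_google_native as google_native

    scan = google_native.dataplex.v1.get_data_scan(
        data_scan_id="my-scan", location="us-central1", project="my-project")

    exec_spec = scan.execution_spec
    if exec_spec:
        pulumi.export("incrementalField", exec_spec.field)
        # A schedule with a cron string means periodic runs; otherwise the scan is on-demand.
        schedule = exec_spec.trigger.schedule if exec_spec.trigger else None
        pulumi.export("trigger", schedule.cron if schedule and schedule.cron else "on-demand")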

    GoogleCloudDataplexV1DataScanExecutionStatusResponse

    LatestJobEndTime string
    The time when the latest DataScanJob ended.
    LatestJobStartTime string
    The time when the latest DataScanJob started.
    LatestJobEndTime string
    The time when the latest DataScanJob ended.
    LatestJobStartTime string
    The time when the latest DataScanJob started.
    latestJobEndTime String
    The time when the latest DataScanJob ended.
    latestJobStartTime String
    The time when the latest DataScanJob started.
    latestJobEndTime string
    The time when the latest DataScanJob ended.
    latestJobStartTime string
    The time when the latest DataScanJob started.
    latest_job_end_time str
    The time when the latest DataScanJob ended.
    latest_job_start_time str
    The time when the latest DataScanJob started.
    latestJobEndTime String
    The time when the latest DataScanJob ended.
    latestJobStartTime String
    The time when the latest DataScanJob started.

    GoogleCloudDataplexV1DataSourceResponse

    Entity string
    Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
    Resource string
    Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be: BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    Entity string
    Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
    Resource string
    Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be: BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    entity String
    Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
    resource String
    Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be: BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    entity string
    Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
    resource string
    Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be: BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    entity str
    Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
    resource str
    Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be: BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    entity String
    Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
    resource String
    Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be: BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
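
    A tiny sketch that reports which source the scan targets, using whichever of the two fields is populated (placeholder identifiers):

    import pulumi
    import pulumi_google_native as google_native

    scan = google_native.dataplex.v1.get_data_scan(
        data_scan_id="my-scan", location="us-central1", project="my-project")

    source = scan.data
    if source:
        # Only one of entity / resource is expected to be set for a given scan.
        pulumi.export("scanTarget", source.entity or source.resource)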

    GoogleCloudDataplexV1ScannedDataIncrementalFieldResponse

    End string
    Value that marks the end of the range.
    Field string
    The field that contains values which monotonically increase over time (e.g. a timestamp column).
    Start string
    Value that marks the start of the range.
    End string
    Value that marks the end of the range.
    Field string
    The field that contains values which monotonically increase over time (e.g. a timestamp column).
    Start string
    Value that marks the start of the range.
    end String
    Value that marks the end of the range.
    field String
    The field that contains values which monotonically increase over time (e.g. a timestamp column).
    start String
    Value that marks the start of the range.
    end string
    Value that marks the end of the range.
    field string
    The field that contains values which monotonically increase over time (e.g. a timestamp column).
    start string
    Value that marks the start of the range.
    end str
    Value that marks the end of the range.
    field str
    The field that contains values which monotonically increase over time (e.g. a timestamp column).
    start str
    Value that marks the start of the range.
    end String
    Value that marks the end of the range.
    field String
    The field that contains values which monotonically increase over time (e.g. a timestamp column).
    start String
    Value that marks the start of the range.

    GoogleCloudDataplexV1ScannedDataResponse

    IncrementalField GoogleCloudDataplexV1ScannedDataIncrementalFieldResponse
    The range denoted by values of an incremental field.
    incrementalField GoogleCloudDataplexV1ScannedDataIncrementalFieldResponse
    The range denoted by values of an incremental field.
    incrementalField GoogleCloudDataplexV1ScannedDataIncrementalFieldResponse
    The range denoted by values of an incremental field.
    incremental_field GoogleCloudDataplexV1ScannedDataIncrementalFieldResponse
    The range denoted by values of an incremental field.
    incrementalField Property Map
    The range denoted by values of an incremental field.
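
    For an incremental scan, the start, end, and field values above describe the slice of data the job actually read. The sketch below surfaces that range; it assumes the range is reported under the profile result's scannedData (for a data quality scan, the equivalent would live under dataQualityResult) and that results are returned with the FULL view.

    import * as google_native from "@pulumi/google-native";

    const scan = google_native.dataplex.v1.getDataScanOutput({
        dataScanId: "my-scan",      // placeholder values for illustration
        location: "us-central1",
        project: "my-project",
        view: "FULL",               // assumption: scan results require the FULL view
    });

    // Assumption: the scanned range is exposed as dataProfileResult.scannedData.
    export const scannedRange = scan.apply(s => {
        const inc = s.dataProfileResult?.scannedData?.incrementalField;
        return inc
            ? `${inc.field}: ${inc.start} .. ${inc.end}`
            : "no incremental range reported (full scan)";
    });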

    GoogleCloudDataplexV1TriggerResponse

    OnDemand GoogleCloudDataplexV1TriggerOnDemandResponse
    The scan runs once via the RunDataScan API.
    Schedule GoogleCloudDataplexV1TriggerScheduleResponse
    The scan is scheduled to run periodically.
    onDemand GoogleCloudDataplexV1TriggerOnDemandResponse
    The scan runs once via the RunDataScan API.
    schedule GoogleCloudDataplexV1TriggerScheduleResponse
    The scan is scheduled to run periodically.
    onDemand GoogleCloudDataplexV1TriggerOnDemandResponse
    The scan runs once via the RunDataScan API.
    schedule GoogleCloudDataplexV1TriggerScheduleResponse
    The scan is scheduled to run periodically.
    on_demand GoogleCloudDataplexV1TriggerOnDemandResponse
    The scan runs once via the RunDataScan API.
    schedule GoogleCloudDataplexV1TriggerScheduleResponse
    The scan is scheduled to run periodically.
    onDemand Property Map
    The scan runs once via the RunDataScan API.
    schedule Property Map
    The scan is scheduled to run periodically.
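
    Whether a scan runs on demand or on a schedule can be read back from its trigger. The hedged sketch below assumes the trigger is surfaced through the result's executionSpec, mirroring the Dataplex DataScan API, and treats an empty schedule as meaning on-demand.

    import * as google_native from "@pulumi/google-native";

    const scan = google_native.dataplex.v1.getDataScanOutput({
        dataScanId: "my-scan",      // placeholder values for illustration
        location: "us-central1",
        project: "my-project",
    });

    // Assumption: the trigger lives under executionSpec.trigger; exactly one of
    // onDemand / schedule is expected to be populated.
    export const triggerKind = scan.apply(s => {
        const trigger = s.executionSpec?.trigger;
        return trigger?.schedule?.cron
            ? `scheduled: ${trigger.schedule.cron}`
            : "on demand (run via the RunDataScan API)";
    });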

    GoogleCloudDataplexV1TriggerScheduleResponse

    Cron string
    Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a time zone, prefix the cron expression with "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}", where ${IANA_TIME_ZONE} must be a valid name from the IANA time zone database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * * or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
    Cron string
    Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a time zone, prefix the cron expression with "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}", where ${IANA_TIME_ZONE} must be a valid name from the IANA time zone database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * * or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
    cron String
    Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a time zone, prefix the cron expression with "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}", where ${IANA_TIME_ZONE} must be a valid name from the IANA time zone database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * * or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
    cron string
    Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a time zone, prefix the cron expression with "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}", where ${IANA_TIME_ZONE} must be a valid name from the IANA time zone database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * * or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
    cron str
    Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a time zone, prefix the cron expression with "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}", where ${IANA_TIME_ZONE} must be a valid name from the IANA time zone database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * * or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
    cron String
    Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a time zone, prefix the cron expression with "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}", where ${IANA_TIME_ZONE} must be a valid name from the IANA time zone database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * * or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
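
    Because the cron value may carry a CRON_TZ= or TZ= prefix, downstream tooling usually has to separate the time zone from the cron expression before parsing it. Below is a small, illustrative TypeScript helper using the example strings from the description above; the helper itself is not part of the SDK.

    // Splits an optional "CRON_TZ=..." / "TZ=..." prefix off a Dataplex cron value.
    // Purely illustrative; the prefix convention is the one described in the field docs above.
    function splitCron(cron: string): { timeZone?: string; expression: string } {
        const match = cron.match(/^(?:CRON_TZ|TZ)=(\S+)\s+(.*)$/);
        return match ? { timeZone: match[1], expression: match[2] } : { expression: cron };
    }

    // Example values taken from the field description.
    console.log(splitCron("CRON_TZ=America/New_York 1 * * * *"));
    // -> { timeZone: "America/New_York", expression: "1 * * * *" }
    console.log(splitCron("1 * * * *"));
    // -> { expression: "1 * * * *" }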

    Package Details

    Repository
    Google Cloud Native pulumi/pulumi-google-native
    License
    Apache-2.0