Google Cloud Native is in preview. Google Cloud Classic is fully supported.
google-native.dataplex/v1.DataScan
Creates a DataScan resource. Auto-naming is currently not supported for this resource.
Create DataScan Resource
new DataScan(name: string, args: DataScanArgs, opts?: CustomResourceOptions);
@overload
def DataScan(resource_name: str,
opts: Optional[ResourceOptions] = None,
data: Optional[GoogleCloudDataplexV1DataSourceArgs] = None,
data_profile_spec: Optional[GoogleCloudDataplexV1DataProfileSpecArgs] = None,
data_quality_spec: Optional[GoogleCloudDataplexV1DataQualitySpecArgs] = None,
data_scan_id: Optional[str] = None,
description: Optional[str] = None,
display_name: Optional[str] = None,
execution_spec: Optional[GoogleCloudDataplexV1DataScanExecutionSpecArgs] = None,
labels: Optional[Mapping[str, str]] = None,
location: Optional[str] = None,
project: Optional[str] = None)
@overload
def DataScan(resource_name: str,
args: DataScanArgs,
opts: Optional[ResourceOptions] = None)
func NewDataScan(ctx *Context, name string, args DataScanArgs, opts ...ResourceOption) (*DataScan, error)
public DataScan(string name, DataScanArgs args, CustomResourceOptions? opts = null)
public DataScan(String name, DataScanArgs args)
public DataScan(String name, DataScanArgs args, CustomResourceOptions options)
type: google-native:dataplex/v1:DataScan
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args DataScanArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args DataScanArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args DataScanArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args DataScanArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args DataScanArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
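Putting the arguments together, a minimal DataScan definition in the YAML form shown above might look like the following sketch. The resource name, scan ID, location, and BigQuery table path are illustrative placeholders, and the inner shapes of `data` and `executionSpec` are assumptions based on the Dataplex v1 API rather than taken from this page:

```yaml
resources:
  # "exampleScan" and all values below are hypothetical placeholders.
  exampleScan:
    type: google-native:dataplex/v1:DataScan
    properties:
      dataScanId: example-profile-scan   # lowercase letters, numbers, hyphens; 1-63 chars
      location: us-central1
      data:
        # Assumed field: a service-qualified resource name of the table to scan.
        resource: //bigquery.googleapis.com/projects/my-project/datasets/my_dataset/tables/my_table
      dataProfileSpec: {}                # empty spec: run a profile scan with defaults
      executionSpec:
        trigger:
          onDemand: {}                   # assumed trigger shape; scans run when requested
```

Because auto-naming is not supported, `dataScanId` must always be supplied explicitly.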
DataScan Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
The DataScan resource accepts the following input properties:
- Data Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataSource
The data source for DataScan.
- DataScanId string
Required. DataScan identifier. Must contain only lowercase letters, numbers and hyphens. Must start with a letter. Must end with a number or a letter. Must be between 1-63 characters. Must be unique within the customer project / location.
- DataProfileSpec Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileSpec
DataProfileScan related setting.
- DataQualitySpec Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualitySpec
DataQualityScan related setting.
- Description string
Optional. Description of the scan. Must be between 1-1024 characters.
- DisplayName string
Optional. User friendly display name. Must be between 1-256 characters.
- ExecutionSpec Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataScanExecutionSpec
Optional. DataScan execution settings. If not specified, the fields in it will use their default values.
- Labels Dictionary<string, string>
Optional. User-defined labels for the scan.
- Location string
- Project string
- Data GoogleCloudDataplexV1DataSourceArgs
The data source for DataScan.
- DataScanId string
Required. DataScan identifier. Must contain only lowercase letters, numbers and hyphens. Must start with a letter. Must end with a number or a letter. Must be between 1-63 characters. Must be unique within the customer project / location.
- DataProfileSpec GoogleCloudDataplexV1DataProfileSpecArgs
DataProfileScan related setting.
- DataQualitySpec GoogleCloudDataplexV1DataQualitySpecArgs
DataQualityScan related setting.
- Description string
Optional. Description of the scan. Must be between 1-1024 characters.
- DisplayName string
Optional. User friendly display name. Must be between 1-256 characters.
- ExecutionSpec GoogleCloudDataplexV1DataScanExecutionSpecArgs
Optional. DataScan execution settings. If not specified, the fields in it will use their default values.
- Labels map[string]string
Optional. User-defined labels for the scan.
- Location string
- Project string
- data GoogleCloudDataplexV1DataSource
The data source for DataScan.
- dataScanId String
Required. DataScan identifier. Must contain only lowercase letters, numbers and hyphens. Must start with a letter. Must end with a number or a letter. Must be between 1-63 characters. Must be unique within the customer project / location.
- dataProfileSpec GoogleCloudDataplexV1DataProfileSpec
DataProfileScan related setting.
- dataQualitySpec GoogleCloudDataplexV1DataQualitySpec
DataQualityScan related setting.
- description String
Optional. Description of the scan. Must be between 1-1024 characters.
- displayName String
Optional. User friendly display name. Must be between 1-256 characters.
- executionSpec GoogleCloudDataplexV1DataScanExecutionSpec
Optional. DataScan execution settings. If not specified, the fields in it will use their default values.
- labels Map<String,String>
Optional. User-defined labels for the scan.
- location String
- project String
- data GoogleCloudDataplexV1DataSource
The data source for DataScan.
- dataScanId string
Required. DataScan identifier. Must contain only lowercase letters, numbers and hyphens. Must start with a letter. Must end with a number or a letter. Must be between 1-63 characters. Must be unique within the customer project / location.
- dataProfileSpec GoogleCloudDataplexV1DataProfileSpec
DataProfileScan related setting.
- dataQualitySpec GoogleCloudDataplexV1DataQualitySpec
DataQualityScan related setting.
- description string
Optional. Description of the scan. Must be between 1-1024 characters.
- displayName string
Optional. User friendly display name. Must be between 1-256 characters.
- executionSpec GoogleCloudDataplexV1DataScanExecutionSpec
Optional. DataScan execution settings. If not specified, the fields in it will use their default values.
- labels {[key: string]: string}
Optional. User-defined labels for the scan.
- location string
- project string
- data GoogleCloudDataplexV1DataSourceArgs
The data source for DataScan.
- data_scan_id str
Required. DataScan identifier. Must contain only lowercase letters, numbers and hyphens. Must start with a letter. Must end with a number or a letter. Must be between 1-63 characters. Must be unique within the customer project / location.
- data_profile_spec GoogleCloudDataplexV1DataProfileSpecArgs
DataProfileScan related setting.
- data_quality_spec GoogleCloudDataplexV1DataQualitySpecArgs
DataQualityScan related setting.
- description str
Optional. Description of the scan. Must be between 1-1024 characters.
- display_name str
Optional. User friendly display name. Must be between 1-256 characters.
- execution_spec GoogleCloudDataplexV1DataScanExecutionSpecArgs
Optional. DataScan execution settings. If not specified, the fields in it will use their default values.
- labels Mapping[str, str]
Optional. User-defined labels for the scan.
- location str
- project str
- data Property Map
The data source for DataScan.
- dataScanId String
Required. DataScan identifier. Must contain only lowercase letters, numbers and hyphens. Must start with a letter. Must end with a number or a letter. Must be between 1-63 characters. Must be unique within the customer project / location.
- dataProfileSpec Property Map
DataProfileScan related setting.
- dataQualitySpec Property Map
DataQualityScan related setting.
- description String
Optional. Description of the scan. Must be between 1-1024 characters.
- displayName String
Optional. User friendly display name. Must be between 1-256 characters.
- executionSpec Property Map
Optional. DataScan execution settings. If not specified, the fields in it will use their default values.
- labels Map<String>
Optional. User-defined labels for the scan.
- location String
- project String
Outputs
All input properties are implicitly available as output properties. Additionally, the DataScan resource produces the following output properties:
- CreateTime string
The time when the scan was created.
- DataProfileResult Pulumi.GoogleNative.Dataplex.V1.Outputs.GoogleCloudDataplexV1DataProfileResultResponse
The result of the data profile scan.
- DataQualityResult Pulumi.GoogleNative.Dataplex.V1.Outputs.GoogleCloudDataplexV1DataQualityResultResponse
The result of the data quality scan.
- ExecutionStatus Pulumi.GoogleNative.Dataplex.V1.Outputs.GoogleCloudDataplexV1DataScanExecutionStatusResponse
Status of the data scan execution.
- Id string
The provider-assigned unique ID for this managed resource.
- Name string
The relative resource name of the scan, of the form: projects/{project}/locations/{location_id}/dataScans/{datascan_id}, where project refers to a project_id or project_number and location_id refers to a GCP region.
- State string
Current state of the DataScan.
- Type string
The type of DataScan.
- Uid string
System generated globally unique ID for the scan. This ID will be different if the scan is deleted and re-created with the same name.
- UpdateTime string
The time when the scan was last updated.
- CreateTime string
The time when the scan was created.
- DataProfileResult GoogleCloudDataplexV1DataProfileResultResponse
The result of the data profile scan.
- DataQualityResult GoogleCloudDataplexV1DataQualityResultResponse
The result of the data quality scan.
- ExecutionStatus GoogleCloudDataplexV1DataScanExecutionStatusResponse
Status of the data scan execution.
- Id string
The provider-assigned unique ID for this managed resource.
- Name string
The relative resource name of the scan, of the form: projects/{project}/locations/{location_id}/dataScans/{datascan_id}, where project refers to a project_id or project_number and location_id refers to a GCP region.
- State string
Current state of the DataScan.
- Type string
The type of DataScan.
- Uid string
System generated globally unique ID for the scan. This ID will be different if the scan is deleted and re-created with the same name.
- UpdateTime string
The time when the scan was last updated.
- createTime String
The time when the scan was created.
- dataProfileResult GoogleCloudDataplexV1DataProfileResultResponse
The result of the data profile scan.
- dataQualityResult GoogleCloudDataplexV1DataQualityResultResponse
The result of the data quality scan.
- executionStatus GoogleCloudDataplexV1DataScanExecutionStatusResponse
Status of the data scan execution.
- id String
The provider-assigned unique ID for this managed resource.
- name String
The relative resource name of the scan, of the form: projects/{project}/locations/{location_id}/dataScans/{datascan_id}, where project refers to a project_id or project_number and location_id refers to a GCP region.
- state String
Current state of the DataScan.
- type String
The type of DataScan.
- uid String
System generated globally unique ID for the scan. This ID will be different if the scan is deleted and re-created with the same name.
- updateTime String
The time when the scan was last updated.
- createTime string
The time when the scan was created.
- dataProfileResult GoogleCloudDataplexV1DataProfileResultResponse
The result of the data profile scan.
- dataQualityResult GoogleCloudDataplexV1DataQualityResultResponse
The result of the data quality scan.
- executionStatus GoogleCloudDataplexV1DataScanExecutionStatusResponse
Status of the data scan execution.
- id string
The provider-assigned unique ID for this managed resource.
- name string
The relative resource name of the scan, of the form: projects/{project}/locations/{location_id}/dataScans/{datascan_id}, where project refers to a project_id or project_number and location_id refers to a GCP region.
- state string
Current state of the DataScan.
- type string
The type of DataScan.
- uid string
System generated globally unique ID for the scan. This ID will be different if the scan is deleted and re-created with the same name.
- updateTime string
The time when the scan was last updated.
- create_time str
The time when the scan was created.
- data_profile_result GoogleCloudDataplexV1DataProfileResultResponse
The result of the data profile scan.
- data_quality_result GoogleCloudDataplexV1DataQualityResultResponse
The result of the data quality scan.
- execution_status GoogleCloudDataplexV1DataScanExecutionStatusResponse
Status of the data scan execution.
- id str
The provider-assigned unique ID for this managed resource.
- name str
The relative resource name of the scan, of the form: projects/{project}/locations/{location_id}/dataScans/{datascan_id}, where project refers to a project_id or project_number and location_id refers to a GCP region.
- state str
Current state of the DataScan.
- type str
The type of DataScan.
- uid str
System generated globally unique ID for the scan. This ID will be different if the scan is deleted and re-created with the same name.
- update_time str
The time when the scan was last updated.
- createTime String
The time when the scan was created.
- dataProfileResult Property Map
The result of the data profile scan.
- dataQualityResult Property Map
The result of the data quality scan.
- executionStatus Property Map
Status of the data scan execution.
- id String
The provider-assigned unique ID for this managed resource.
- name String
The relative resource name of the scan, of the form: projects/{project}/locations/{location_id}/dataScans/{datascan_id}, where project refers to a project_id or project_number and location_id refers to a GCP region.
- state String
Current state of the DataScan.
- type String
The type of DataScan.
- uid String
System generated globally unique ID for the scan. This ID will be different if the scan is deleted and re-created with the same name.
- updateTime String
The time when the scan was last updated.
Supporting Types
GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoDoubleFieldInfoResponse, GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoDoubleFieldInfoResponseArgs
- Average double
Average of non-null values in the scanned data. NaN, if the field has a NaN.
- Max double
Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
- Min double
Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
- Quartiles List<double>
A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles is provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.
- StandardDeviation double
Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
- Average float64
Average of non-null values in the scanned data. NaN, if the field has a NaN.
- Max float64
Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
- Min float64
Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
- Quartiles []float64
A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles is provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.
- StandardDeviation float64
Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
- average Double
Average of non-null values in the scanned data. NaN, if the field has a NaN.
- max Double
Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
- min Double
Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
- quartiles List<Double>
A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles is provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.
- standardDeviation Double
Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
- average number
Average of non-null values in the scanned data. NaN, if the field has a NaN.
- max number
Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
- min number
Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
- quartiles number[]
A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles is provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.
- standardDeviation number
Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
- average float
Average of non-null values in the scanned data. NaN, if the field has a NaN.
- max float
Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
- min float
Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
- quartiles Sequence[float]
A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles is provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.
- standard_deviation float
Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
- average Number
Average of non-null values in the scanned data. NaN, if the field has a NaN.
- max Number
Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
- min Number
Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
- quartiles List<Number>
A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles is provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.
- standardDeviation Number
Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
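The quartiles field above is documented as an ordered list Q1, median, Q3. That ordering can be reproduced with Python's standard library; the sample values here are made up purely for illustration:

```python
import statistics

# Illustrative column values (not from any real scan result).
values = [1, 2, 3, 4, 5, 6, 7]

# statistics.quantiles with n=4 returns the three cut points in order
# Q1, Q2 (median), Q3 -- the same ordering the quartiles field uses.
q1, q2, q3 = statistics.quantiles(values, n=4)
print([q1, q2, q3])  # [2.0, 4.0, 6.0]
```

Here 25% of the data lies below Q1, 50% below the median, and 75% below Q3, matching the description of the quartiles field.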
GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoIntegerFieldInfoResponse, GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoIntegerFieldInfoResponseArgs
- Average double
Average of non-null values in the scanned data. NaN, if the field has a NaN.
- Max string
Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
- Min string
Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
- Quartiles List<string>
A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles is provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.
- StandardDeviation double
Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
- Average float64
Average of non-null values in the scanned data. NaN, if the field has a NaN.
- Max string
Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
- Min string
Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
- Quartiles []string
A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles is provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.
- StandardDeviation float64
Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
- average Double
Average of non-null values in the scanned data. NaN, if the field has a NaN.
- max String
Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
- min String
Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
- quartiles List<String>
A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles is provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.
- standardDeviation Double
Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
- average number
Average of non-null values in the scanned data. NaN, if the field has a NaN.
- max string
Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
- min string
Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
- quartiles string[]
A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles is provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.
- standardDeviation number
Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
- average float
Average of non-null values in the scanned data. NaN, if the field has a NaN.
- max str
Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
- min str
Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
- quartiles Sequence[str]
A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles is provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.
- standard_deviation float
Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
- average Number
Average of non-null values in the scanned data. NaN, if the field has a NaN.
- max String
Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
- min String
Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
- quartiles List<String>
A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles is provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.
- standardDeviation Number
Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoResponse, GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoResponseArgs
- DistinctRatio double
Ratio of rows with distinct values against total scanned rows. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
- DoubleProfile Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoDoubleFieldInfoResponse
Double type field information.
- IntegerProfile Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoIntegerFieldInfoResponse
Integer type field information.
- NullRatio double
Ratio of rows with null value against total scanned rows.
- StringProfile Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoStringFieldInfoResponse
String type field information.
- TopNValues List<Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoTopNValueResponse>
The list of top N non-null values and number of times they occur in the scanned data. N is 10 or equal to the number of distinct values in the field, whichever is smaller. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
- DistinctRatio float64
Ratio of rows with distinct values against total scanned rows. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
- DoubleProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoDoubleFieldInfoResponse
Double type field information.
- IntegerProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoIntegerFieldInfoResponse
Integer type field information.
- NullRatio float64
Ratio of rows with null value against total scanned rows.
- StringProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoStringFieldInfoResponse
String type field information.
- TopNValues []GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoTopNValueResponse
The list of top N non-null values and number of times they occur in the scanned data. N is 10 or equal to the number of distinct values in the field, whichever is smaller. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
- distinctRatio Double
Ratio of rows with distinct values against total scanned rows. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
- doubleProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoDoubleFieldInfoResponse
Double type field information.
- integerProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoIntegerFieldInfoResponse
Integer type field information.
- nullRatio Double
Ratio of rows with null value against total scanned rows.
- stringProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoStringFieldInfoResponse
String type field information.
- topNValues List<GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoTopNValueResponse>
The list of top N non-null values and number of times they occur in the scanned data. N is 10 or equal to the number of distinct values in the field, whichever is smaller. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
- distinct
Ratio number Ratio of rows with distinct values against total scanned rows. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
- double
Profile GoogleCloud Dataplex V1Data Profile Result Profile Field Profile Info Double Field Info Response Double type field information.
- integer
Profile GoogleCloud Dataplex V1Data Profile Result Profile Field Profile Info Integer Field Info Response Integer type field information.
- null
Ratio number Ratio of rows with null value against total scanned rows.
- string
Profile GoogleCloud Dataplex V1Data Profile Result Profile Field Profile Info String Field Info Response String type field information.
- top
NValues GoogleCloud Dataplex V1Data Profile Result Profile Field Profile Info Top NValue Response[] The list of top N non-null values and number of times they occur in the scanned data. N is 10 or equal to the number of distinct values in the field, whichever is smaller. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
- distinct_
ratio float Ratio of rows with distinct values against total scanned rows. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
- double_
profile GoogleCloud Dataplex V1Data Profile Result Profile Field Profile Info Double Field Info Response Double type field information.
- integer_
profile GoogleCloud Dataplex V1Data Profile Result Profile Field Profile Info Integer Field Info Response Integer type field information.
- null_
ratio float Ratio of rows with null value against total scanned rows.
- string_
profile GoogleCloud Dataplex V1Data Profile Result Profile Field Profile Info String Field Info Response String type field information.
- top_
n_ Sequence[Googlevalues Cloud Dataplex V1Data Profile Result Profile Field Profile Info Top NValue Response] The list of top N non-null values and number of times they occur in the scanned data. N is 10 or equal to the number of distinct values in the field, whichever is smaller. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
- distinct
Ratio Number Ratio of rows with distinct values against total scanned rows. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
- double
Profile Property Map Double type field information.
- integer
Profile Property Map Integer type field information.
- null
Ratio Number Ratio of rows with null value against total scanned rows.
- string
Profile Property Map String type field information.
- top
NValues List<Property Map> The list of top N non-null values and number of times they occur in the scanned data. N is 10 or equal to the number of distinct values in the field, whichever is smaller. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
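The field-level metrics above (null ratio, distinct ratio, top-N values) can be illustrated with a short sketch. This is not the service's implementation, only one plausible reading of the documented semantics; `profile_column` is a hypothetical helper name:

```python
from collections import Counter

def profile_column(values):
    """Compute FieldProfileInfo-style metrics for one column of scanned rows.

    Ratios are taken against the total scanned row count, as the field
    descriptions state; nulls are excluded from distinct and top-N counting.
    """
    total = len(values)
    non_null = [v for v in values if v is not None]
    null_ratio = (total - len(non_null)) / total
    distinct_ratio = len(set(non_null)) / total
    counts = Counter(non_null)
    # N is 10 or the number of distinct values, whichever is smaller.
    n = min(10, len(counts))
    top_n_values = [{"value": v, "count": c} for v, c in counts.most_common(n)]
    return {
        "null_ratio": null_ratio,
        "distinct_ratio": distinct_ratio,
        "top_n_values": top_n_values,
    }
```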
GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoStringFieldInfoResponse, GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoStringFieldInfoResponseArgs
- AverageLength double
Average length of non-null values in the scanned data.
- MaxLength string
Maximum length of non-null values in the scanned data.
- MinLength string
Minimum length of non-null values in the scanned data.
- AverageLength float64
Average length of non-null values in the scanned data.
- MaxLength string
Maximum length of non-null values in the scanned data.
- MinLength string
Minimum length of non-null values in the scanned data.
- averageLength Double
Average length of non-null values in the scanned data.
- maxLength String
Maximum length of non-null values in the scanned data.
- minLength String
Minimum length of non-null values in the scanned data.
- averageLength number
Average length of non-null values in the scanned data.
- maxLength string
Maximum length of non-null values in the scanned data.
- minLength string
Minimum length of non-null values in the scanned data.
- average_length float
Average length of non-null values in the scanned data.
- max_length str
Maximum length of non-null values in the scanned data.
- min_length str
Minimum length of non-null values in the scanned data.
- averageLength Number
Average length of non-null values in the scanned data.
- maxLength String
Maximum length of non-null values in the scanned data.
- minLength String
Minimum length of non-null values in the scanned data.
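The StringFieldInfo metrics are plain length statistics over non-null values. A minimal sketch of that semantics (`string_field_info` is a hypothetical helper, not part of the API):

```python
def string_field_info(values):
    """Min/max/average length of non-null string values, as in StringFieldInfo."""
    lengths = [len(v) for v in values if v is not None]
    return {
        "min_length": min(lengths),
        "max_length": max(lengths),
        "average_length": sum(lengths) / len(lengths),
    }
```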
GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoTopNValueResponse, GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoTopNValueResponseArgs
GoogleCloudDataplexV1DataProfileResultProfileFieldResponse, GoogleCloudDataplexV1DataProfileResultProfileFieldResponseArgs
- Mode string
The mode of the field. Possible values include: REQUIRED, if it is a required field. NULLABLE, if it is an optional field. REPEATED, if it is a repeated field.
- Name string
The name of the field.
- Profile Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoResponse
Profile information for the corresponding field.
- Type string
The data type retrieved from the schema of the data source. For instance, for a BigQuery native table, it is the BigQuery Table Schema (https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#tablefieldschema). For a Dataplex Entity, it is the Entity Schema (https://cloud.google.com/dataplex/docs/reference/rpc/google.cloud.dataplex.v1#type_3).
- Mode string
The mode of the field. Possible values include: REQUIRED, if it is a required field. NULLABLE, if it is an optional field. REPEATED, if it is a repeated field.
- Name string
The name of the field.
- Profile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoResponse
Profile information for the corresponding field.
- Type string
The data type retrieved from the schema of the data source. For instance, for a BigQuery native table, it is the BigQuery Table Schema (https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#tablefieldschema). For a Dataplex Entity, it is the Entity Schema (https://cloud.google.com/dataplex/docs/reference/rpc/google.cloud.dataplex.v1#type_3).
- mode String
The mode of the field. Possible values include: REQUIRED, if it is a required field. NULLABLE, if it is an optional field. REPEATED, if it is a repeated field.
- name String
The name of the field.
- profile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoResponse
Profile information for the corresponding field.
- type String
The data type retrieved from the schema of the data source. For instance, for a BigQuery native table, it is the BigQuery Table Schema (https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#tablefieldschema). For a Dataplex Entity, it is the Entity Schema (https://cloud.google.com/dataplex/docs/reference/rpc/google.cloud.dataplex.v1#type_3).
- mode string
The mode of the field. Possible values include: REQUIRED, if it is a required field. NULLABLE, if it is an optional field. REPEATED, if it is a repeated field.
- name string
The name of the field.
- profile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoResponse
Profile information for the corresponding field.
- type string
The data type retrieved from the schema of the data source. For instance, for a BigQuery native table, it is the BigQuery Table Schema (https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#tablefieldschema). For a Dataplex Entity, it is the Entity Schema (https://cloud.google.com/dataplex/docs/reference/rpc/google.cloud.dataplex.v1#type_3).
- mode str
The mode of the field. Possible values include: REQUIRED, if it is a required field. NULLABLE, if it is an optional field. REPEATED, if it is a repeated field.
- name str
The name of the field.
- profile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoResponse
Profile information for the corresponding field.
- type str
The data type retrieved from the schema of the data source. For instance, for a BigQuery native table, it is the BigQuery Table Schema (https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#tablefieldschema). For a Dataplex Entity, it is the Entity Schema (https://cloud.google.com/dataplex/docs/reference/rpc/google.cloud.dataplex.v1#type_3).
- mode String
The mode of the field. Possible values include: REQUIRED, if it is a required field. NULLABLE, if it is an optional field. REPEATED, if it is a repeated field.
- name String
The name of the field.
- profile Property Map
Profile information for the corresponding field.
- type String
The data type retrieved from the schema of the data source. For instance, for a BigQuery native table, it is the BigQuery Table Schema (https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#tablefieldschema). For a Dataplex Entity, it is the Entity Schema (https://cloud.google.com/dataplex/docs/reference/rpc/google.cloud.dataplex.v1#type_3).
GoogleCloudDataplexV1DataProfileResultProfileResponse, GoogleCloudDataplexV1DataProfileResultProfileResponseArgs
- Fields List<Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileResultProfileFieldResponse>
List of fields with structural and profile information for each field.
- Fields []GoogleCloudDataplexV1DataProfileResultProfileFieldResponse
List of fields with structural and profile information for each field.
- fields List<GoogleCloudDataplexV1DataProfileResultProfileFieldResponse>
List of fields with structural and profile information for each field.
- fields GoogleCloudDataplexV1DataProfileResultProfileFieldResponse[]
List of fields with structural and profile information for each field.
- fields Sequence[GoogleCloudDataplexV1DataProfileResultProfileFieldResponse]
List of fields with structural and profile information for each field.
- fields List<Property Map>
List of fields with structural and profile information for each field.
GoogleCloudDataplexV1DataProfileResultResponse, GoogleCloudDataplexV1DataProfileResultResponseArgs
- Profile Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileResultProfileResponse
The profile information per field.
- RowCount string
The count of rows scanned.
- ScannedData Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1ScannedDataResponse
The data scanned for this result.
- Profile GoogleCloudDataplexV1DataProfileResultProfileResponse
The profile information per field.
- RowCount string
The count of rows scanned.
- ScannedData GoogleCloudDataplexV1ScannedDataResponse
The data scanned for this result.
- profile GoogleCloudDataplexV1DataProfileResultProfileResponse
The profile information per field.
- rowCount String
The count of rows scanned.
- scannedData GoogleCloudDataplexV1ScannedDataResponse
The data scanned for this result.
- profile GoogleCloudDataplexV1DataProfileResultProfileResponse
The profile information per field.
- rowCount string
The count of rows scanned.
- scannedData GoogleCloudDataplexV1ScannedDataResponse
The data scanned for this result.
- profile GoogleCloudDataplexV1DataProfileResultProfileResponse
The profile information per field.
- row_count str
The count of rows scanned.
- scanned_data GoogleCloudDataplexV1ScannedDataResponse
The data scanned for this result.
- profile Property Map
The profile information per field.
- rowCount String
The count of rows scanned.
- scannedData Property Map
The data scanned for this result.
GoogleCloudDataplexV1DataProfileSpec, GoogleCloudDataplexV1DataProfileSpecArgs
- RowFilter string
Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- SamplingPercent double
Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- RowFilter string
Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- SamplingPercent float64
Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- rowFilter String
Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- samplingPercent Double
Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- rowFilter string
Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- samplingPercent number
Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- row_filter str
Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- sampling_percent float
Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- rowFilter String
Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- samplingPercent Number
Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
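The two DataProfileSpec fields interact in a documented way: sampling is skipped entirely when `sampling_percent` is unset, 0, or 100. A small sketch of that rule (the `effective_profile_spec` helper is hypothetical, shown only to make the semantics concrete):

```python
def effective_profile_spec(row_filter=None, sampling_percent=None):
    """Build a spec dict, dropping settings that the service would ignore.

    Mirrors the documented behavior: sampling is not applied if
    sampling_percent is not specified, 0, or 100.
    """
    spec = {}
    if row_filter:
        # A SQL expression for a WHERE clause in BigQuery standard SQL syntax.
        spec["row_filter"] = row_filter
    if sampling_percent not in (None, 0, 100):
        spec["sampling_percent"] = sampling_percent
    return spec
```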
GoogleCloudDataplexV1DataProfileSpecResponse, GoogleCloudDataplexV1DataProfileSpecResponseArgs
- RowFilter string
Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- SamplingPercent double
Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- RowFilter string
Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- SamplingPercent float64
Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- rowFilter String
Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- samplingPercent Double
Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- rowFilter string
Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- samplingPercent number
Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- row_filter str
Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- sampling_percent float
Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- rowFilter String
Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- samplingPercent Number
Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
GoogleCloudDataplexV1DataQualityDimensionResultResponse, GoogleCloudDataplexV1DataQualityDimensionResultResponseArgs
- Passed bool
Whether the dimension passed or failed.
- Passed bool
Whether the dimension passed or failed.
- passed Boolean
Whether the dimension passed or failed.
- passed boolean
Whether the dimension passed or failed.
- passed bool
Whether the dimension passed or failed.
- passed Boolean
Whether the dimension passed or failed.
GoogleCloudDataplexV1DataQualityResultResponse, GoogleCloudDataplexV1DataQualityResultResponseArgs
- Dimensions List<Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityDimensionResultResponse>
A list of results at the dimension level.
- Passed bool
Overall data quality result -- true if all rules passed.
- RowCount string
The count of rows processed.
- Rules List<Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleResultResponse>
A list of all the rules in a job, and their results.
- ScannedData Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1ScannedDataResponse
The data scanned for this result.
- Dimensions []GoogleCloudDataplexV1DataQualityDimensionResultResponse
A list of results at the dimension level.
- Passed bool
Overall data quality result -- true if all rules passed.
- RowCount string
The count of rows processed.
- Rules []GoogleCloudDataplexV1DataQualityRuleResultResponse
A list of all the rules in a job, and their results.
- ScannedData GoogleCloudDataplexV1ScannedDataResponse
The data scanned for this result.
- dimensions List<GoogleCloudDataplexV1DataQualityDimensionResultResponse>
A list of results at the dimension level.
- passed Boolean
Overall data quality result -- true if all rules passed.
- rowCount String
The count of rows processed.
- rules List<GoogleCloudDataplexV1DataQualityRuleResultResponse>
A list of all the rules in a job, and their results.
- scannedData GoogleCloudDataplexV1ScannedDataResponse
The data scanned for this result.
- dimensions GoogleCloudDataplexV1DataQualityDimensionResultResponse[]
A list of results at the dimension level.
- passed boolean
Overall data quality result -- true if all rules passed.
- rowCount string
The count of rows processed.
- rules GoogleCloudDataplexV1DataQualityRuleResultResponse[]
A list of all the rules in a job, and their results.
- scannedData GoogleCloudDataplexV1ScannedDataResponse
The data scanned for this result.
- dimensions Sequence[GoogleCloudDataplexV1DataQualityDimensionResultResponse]
A list of results at the dimension level.
- passed bool
Overall data quality result -- true if all rules passed.
- row_count str
The count of rows processed.
- rules Sequence[GoogleCloudDataplexV1DataQualityRuleResultResponse]
A list of all the rules in a job, and their results.
- scanned_data GoogleCloudDataplexV1ScannedDataResponse
The data scanned for this result.
- dimensions List<Property Map>
A list of results at the dimension level.
- passed Boolean
Overall data quality result -- true if all rules passed.
- rowCount String
The count of rows processed.
- rules List<Property Map>
A list of all the rules in a job, and their results.
- scannedData Property Map
The data scanned for this result.
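The aggregation behind the `passed` field above is simple: the overall result is true only if every rule result passed. A one-line sketch of that relationship (the helper name is hypothetical):

```python
def overall_passed(rule_results):
    """Overall data quality result: true only if all rule results passed."""
    return all(r["passed"] for r in rule_results)
```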
GoogleCloudDataplexV1DataQualityRule, GoogleCloudDataplexV1DataQualityRuleArgs
- Dimension string
The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
- Column string
Optional. The unnested column which this rule is evaluated against.
- Ignore
Null bool Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing.Only applicable to ColumnMap rules.
- Non
Null Pulumi.Expectation Google Native. Dataplex. V1. Inputs. Google Cloud Dataplex V1Data Quality Rule Non Null Expectation ColumnMap rule which evaluates whether each column value is null.
- Range
Expectation Pulumi.Google Native. Dataplex. V1. Inputs. Google Cloud Dataplex V1Data Quality Rule Range Expectation ColumnMap rule which evaluates whether each column value lies between a specified range.
- Regex
Expectation Pulumi.Google Native. Dataplex. V1. Inputs. Google Cloud Dataplex V1Data Quality Rule Regex Expectation ColumnMap rule which evaluates whether each column value matches a specified regex.
- Row
Condition Pulumi.Expectation Google Native. Dataplex. V1. Inputs. Google Cloud Dataplex V1Data Quality Rule Row Condition Expectation Table rule which evaluates whether each row passes the specified condition.
- Set
Expectation Pulumi.Google Native. Dataplex. V1. Inputs. Google Cloud Dataplex V1Data Quality Rule Set Expectation ColumnMap rule which evaluates whether each column value is contained by a specified set.
- Statistic
Range Pulumi.Expectation Google Native. Dataplex. V1. Inputs. Google Cloud Dataplex V1Data Quality Rule Statistic Range Expectation ColumnAggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
- Table
Condition Pulumi.Expectation Google Native. Dataplex. V1. Inputs. Google Cloud Dataplex V1Data Quality Rule Table Condition Expectation Table rule which evaluates whether the provided expression is true.
- Threshold double
Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of 0.0, 1.0.0 indicates default value (i.e. 1.0).
- Uniqueness
Expectation Pulumi.Google Native. Dataplex. V1. Inputs. Google Cloud Dataplex V1Data Quality Rule Uniqueness Expectation ColumnAggregate rule which evaluates whether the column has duplicates.
- Dimension string
The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
- Column string
Optional. The unnested column which this rule is evaluated against.
- Ignore
Null bool Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing.Only applicable to ColumnMap rules.
- Non
Null GoogleExpectation Cloud Dataplex V1Data Quality Rule Non Null Expectation ColumnMap rule which evaluates whether each column value is null.
- Range
Expectation GoogleCloud Dataplex V1Data Quality Rule Range Expectation ColumnMap rule which evaluates whether each column value lies between a specified range.
- Regex
Expectation GoogleCloud Dataplex V1Data Quality Rule Regex Expectation ColumnMap rule which evaluates whether each column value matches a specified regex.
- Row
Condition GoogleExpectation Cloud Dataplex V1Data Quality Rule Row Condition Expectation Table rule which evaluates whether each row passes the specified condition.
- Set
Expectation GoogleCloud Dataplex V1Data Quality Rule Set Expectation ColumnMap rule which evaluates whether each column value is contained by a specified set.
- Statistic
Range GoogleExpectation Cloud Dataplex V1Data Quality Rule Statistic Range Expectation ColumnAggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
- Table
Condition GoogleExpectation Cloud Dataplex V1Data Quality Rule Table Condition Expectation Table rule which evaluates whether the provided expression is true.
- Threshold float64
Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of 0.0, 1.0.0 indicates default value (i.e. 1.0).
- Uniqueness
Expectation GoogleCloud Dataplex V1Data Quality Rule Uniqueness Expectation ColumnAggregate rule which evaluates whether the column has duplicates.
- dimension String
The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
- column String
Optional. The unnested column which this rule is evaluated against.
- ignore
Null Boolean Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing.Only applicable to ColumnMap rules.
- non
Null GoogleExpectation Cloud Dataplex V1Data Quality Rule Non Null Expectation ColumnMap rule which evaluates whether each column value is null.
- range
Expectation GoogleCloud Dataplex V1Data Quality Rule Range Expectation ColumnMap rule which evaluates whether each column value lies between a specified range.
- regex
Expectation GoogleCloud Dataplex V1Data Quality Rule Regex Expectation ColumnMap rule which evaluates whether each column value matches a specified regex.
- row
Condition GoogleExpectation Cloud Dataplex V1Data Quality Rule Row Condition Expectation Table rule which evaluates whether each row passes the specified condition.
- set
Expectation GoogleCloud Dataplex V1Data Quality Rule Set Expectation ColumnMap rule which evaluates whether each column value is contained by a specified set.
- statistic
Range GoogleExpectation Cloud Dataplex V1Data Quality Rule Statistic Range Expectation ColumnAggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
- table
Condition GoogleExpectation Cloud Dataplex V1Data Quality Rule Table Condition Expectation Table rule which evaluates whether the provided expression is true.
- threshold Double
Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of 0.0, 1.0.0 indicates default value (i.e. 1.0).
- uniqueness
Expectation GoogleCloud Dataplex V1Data Quality Rule Uniqueness Expectation ColumnAggregate rule which evaluates whether the column has duplicates.
- dimension string
The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
- column string
Optional. The unnested column which this rule is evaluated against.
- ignore
Null boolean Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing.Only applicable to ColumnMap rules.
- non
Null GoogleExpectation Cloud Dataplex V1Data Quality Rule Non Null Expectation ColumnMap rule which evaluates whether each column value is null.
- range
Expectation GoogleCloud Dataplex V1Data Quality Rule Range Expectation ColumnMap rule which evaluates whether each column value lies between a specified range.
- regex
Expectation GoogleCloud Dataplex V1Data Quality Rule Regex Expectation ColumnMap rule which evaluates whether each column value matches a specified regex.
- row
Condition GoogleExpectation Cloud Dataplex V1Data Quality Rule Row Condition Expectation Table rule which evaluates whether each row passes the specified condition.
- set
Expectation GoogleCloud Dataplex V1Data Quality Rule Set Expectation ColumnMap rule which evaluates whether each column value is contained by a specified set.
- statistic
Range GoogleExpectation Cloud Dataplex V1Data Quality Rule Statistic Range Expectation ColumnAggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
- table
Condition GoogleExpectation Cloud Dataplex V1Data Quality Rule Table Condition Expectation Table rule which evaluates whether the provided expression is true.
- threshold number
Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of 0.0, 1.0.0 indicates default value (i.e. 1.0).
- uniqueness
Expectation GoogleCloud Dataplex V1Data Quality Rule Uniqueness Expectation ColumnAggregate rule which evaluates whether the column has duplicates.
- dimension str
The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
- column str
Optional. The unnested column which this rule is evaluated against.
- ignore_
null bool Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing.Only applicable to ColumnMap rules.
- non_
null_ Googleexpectation Cloud Dataplex V1Data Quality Rule Non Null Expectation ColumnMap rule which evaluates whether each column value is null.
- range_
expectation GoogleCloud Dataplex V1Data Quality Rule Range Expectation ColumnMap rule which evaluates whether each column value lies between a specified range.
- regex_
expectation GoogleCloud Dataplex V1Data Quality Rule Regex Expectation ColumnMap rule which evaluates whether each column value matches a specified regex.
- row_
condition_ Googleexpectation Cloud Dataplex V1Data Quality Rule Row Condition Expectation Table rule which evaluates whether each row passes the specified condition.
- set_
expectation GoogleCloud Dataplex V1Data Quality Rule Set Expectation ColumnMap rule which evaluates whether each column value is contained by a specified set.
- statistic_
range_ Googleexpectation Cloud Dataplex V1Data Quality Rule Statistic Range Expectation ColumnAggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
- table_
condition_ Googleexpectation Cloud Dataplex V1Data Quality Rule Table Condition Expectation Table rule which evaluates whether the provided expression is true.
- threshold float
Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of 0.0, 1.0.0 indicates default value (i.e. 1.0).
- uniqueness_
expectation GoogleCloud Dataplex V1Data Quality Rule Uniqueness Expectation ColumnAggregate rule which evaluates whether the column has duplicates.
- dimension String
The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
- column String
Optional. The unnested column which this rule is evaluated against.
- ignoreNull Boolean
Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. Only applicable to ColumnMap rules.
- nonNullExpectation Property Map
ColumnMap rule which evaluates whether each column value is null.
- rangeExpectation Property Map
ColumnMap rule which evaluates whether each column value lies between a specified range.
- regexExpectation Property Map
ColumnMap rule which evaluates whether each column value matches a specified regex.
- rowConditionExpectation Property Map
Table rule which evaluates whether each row passes the specified condition.
- setExpectation Property Map
ColumnMap rule which evaluates whether each column value is contained by a specified set.
- statisticRangeExpectation Property Map
ColumnAggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
- tableConditionExpectation Property Map
Table rule which evaluates whether the provided expression is true.
- threshold Number
Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates default value (i.e. 1.0).
- uniquenessExpectation Property Map
ColumnAggregate rule which evaluates whether the column has duplicates.
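The fields above combine into a single rule object: exactly one expectation field is set per rule, alongside the shared column, dimension, and threshold settings. As an illustrative sketch (the column name is hypothetical, not from the source), a non-null completeness rule can be expressed as the plain dict shape the Python SDK accepts for these arguments:

```python
# Hypothetical rule: the "email" column must be non-null in at least 95%
# of rows. Keys mirror the snake_case Python argument names listed above.
email_non_null_rule = {
    "column": "email",               # hypothetical unnested column
    "dimension": "COMPLETENESS",     # one of the supported dimensions
    "ignore_null": False,            # null rows count as failures
    "non_null_expectation": {},      # this expectation carries no fields
    "threshold": 0.95,               # min passing_rows / total_rows ratio
}
```

In a real program this dict would sit inside the `rules` list of the `data_quality_spec` argument when constructing the DataScan resource.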
GoogleCloudDataplexV1DataQualityRuleRangeExpectation, GoogleCloudDataplexV1DataQualityRuleRangeExpectationArgs
- MaxValue string
Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- MinValue string
Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- StrictMaxEnabled bool
Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- StrictMinEnabled bool
Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- MaxValue string
Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- MinValue string
Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- StrictMaxEnabled bool
Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- StrictMinEnabled bool
Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- maxValue String
Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- minValue String
Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- strictMaxEnabled Boolean
Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- strictMinEnabled Boolean
Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- maxValue string
Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- minValue string
Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- strictMaxEnabled boolean
Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- strictMinEnabled boolean
Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- max_value str
Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- min_value str
Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- strict_max_enabled bool
Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- strict_min_enabled bool
Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- maxValue String
Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- minValue String
Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- strictMaxEnabled Boolean
Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- strictMinEnabled Boolean
Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
GoogleCloudDataplexV1DataQualityRuleRangeExpectationResponse, GoogleCloudDataplexV1DataQualityRuleRangeExpectationResponseArgs
- MaxValue string
Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- MinValue string
Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- StrictMaxEnabled bool
Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- StrictMinEnabled bool
Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- MaxValue string
Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- MinValue string
Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- StrictMaxEnabled bool
Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- StrictMinEnabled bool
Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- maxValue String
Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- minValue String
Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- strictMaxEnabled Boolean
Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- strictMinEnabled Boolean
Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- maxValue string
Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- minValue string
Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- strictMaxEnabled boolean
Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- strictMinEnabled boolean
Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- max_value str
Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- min_value str
Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- strict_max_enabled bool
Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- strict_min_enabled bool
Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- maxValue String
Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- minValue String
Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- strictMaxEnabled Boolean
Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- strictMinEnabled Boolean
Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
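The strict_min_enabled and strict_max_enabled flags switch each bound between inclusive and strict comparison. A minimal sketch of that semantics (using numeric values for readability, although the API fields above are strings):

```python
def in_range(value, min_value=None, max_value=None,
             strict_min_enabled=False, strict_max_enabled=False):
    """Sketch of RangeExpectation semantics for a single column value."""
    if min_value is not None:
        # strict_min_enabled: value must be strictly greater than ('>') min
        ok = value > min_value if strict_min_enabled else value >= min_value
        if not ok:
            return False
    if max_value is not None:
        # strict_max_enabled: value must be strictly lesser than ('<') max
        ok = value < max_value if strict_max_enabled else value <= max_value
        if not ok:
            return False
    return True
```

With both flags left at their default of false, values equal to either bound still pass.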
GoogleCloudDataplexV1DataQualityRuleRegexExpectation, GoogleCloudDataplexV1DataQualityRuleRegexExpectationArgs
- Regex string
A regular expression the column value is expected to match.
- Regex string
A regular expression the column value is expected to match.
- regex String
A regular expression the column value is expected to match.
- regex string
A regular expression the column value is expected to match.
- regex str
A regular expression the column value is expected to match.
- regex String
A regular expression the column value is expected to match.
GoogleCloudDataplexV1DataQualityRuleRegexExpectationResponse, GoogleCloudDataplexV1DataQualityRuleRegexExpectationResponseArgs
- Regex string
A regular expression the column value is expected to match.
- Regex string
A regular expression the column value is expected to match.
- regex String
A regular expression the column value is expected to match.
- regex string
A regular expression the column value is expected to match.
- regex str
A regular expression the column value is expected to match.
- regex String
A regular expression the column value is expected to match.
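A RegexExpectation carries only the pattern; each column value is checked against it. Whether Dataplex anchors the pattern is not stated here, so this sketch assumes a full match:

```python
import re

def matches_regex(value, regex):
    # Sketch of RegexExpectation: passes when the column value matches
    # the pattern (full-match anchoring is an assumption of this sketch).
    return re.fullmatch(regex, value) is not None
```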
GoogleCloudDataplexV1DataQualityRuleResponse, GoogleCloudDataplexV1DataQualityRuleResponseArgs
- Column string
Optional. The unnested column which this rule is evaluated against.
- Dimension string
The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
- IgnoreNull bool
Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. Only applicable to ColumnMap rules.
- NonNullExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleNonNullExpectationResponse
ColumnMap rule which evaluates whether each column value is null.
- RangeExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleRangeExpectationResponse
ColumnMap rule which evaluates whether each column value lies between a specified range.
- RegexExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleRegexExpectationResponse
ColumnMap rule which evaluates whether each column value matches a specified regex.
- RowConditionExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationResponse
Table rule which evaluates whether each row passes the specified condition.
- SetExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleSetExpectationResponse
ColumnMap rule which evaluates whether each column value is contained by a specified set.
- StatisticRangeExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationResponse
ColumnAggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
- TableConditionExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationResponse
Table rule which evaluates whether the provided expression is true.
- Threshold double
Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates default value (i.e. 1.0).
- UniquenessExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleUniquenessExpectationResponse
ColumnAggregate rule which evaluates whether the column has duplicates.
- Column string
Optional. The unnested column which this rule is evaluated against.
- Dimension string
The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
- IgnoreNull bool
Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. Only applicable to ColumnMap rules.
- NonNullExpectation GoogleCloudDataplexV1DataQualityRuleNonNullExpectationResponse
ColumnMap rule which evaluates whether each column value is null.
- RangeExpectation GoogleCloudDataplexV1DataQualityRuleRangeExpectationResponse
ColumnMap rule which evaluates whether each column value lies between a specified range.
- RegexExpectation GoogleCloudDataplexV1DataQualityRuleRegexExpectationResponse
ColumnMap rule which evaluates whether each column value matches a specified regex.
- RowConditionExpectation GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationResponse
Table rule which evaluates whether each row passes the specified condition.
- SetExpectation GoogleCloudDataplexV1DataQualityRuleSetExpectationResponse
ColumnMap rule which evaluates whether each column value is contained by a specified set.
- StatisticRangeExpectation GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationResponse
ColumnAggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
- TableConditionExpectation GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationResponse
Table rule which evaluates whether the provided expression is true.
- Threshold float64
Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates default value (i.e. 1.0).
- UniquenessExpectation GoogleCloudDataplexV1DataQualityRuleUniquenessExpectationResponse
ColumnAggregate rule which evaluates whether the column has duplicates.
- column String
Optional. The unnested column which this rule is evaluated against.
- dimension String
The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
- ignoreNull Boolean
Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. Only applicable to ColumnMap rules.
- nonNullExpectation GoogleCloudDataplexV1DataQualityRuleNonNullExpectationResponse
ColumnMap rule which evaluates whether each column value is null.
- rangeExpectation GoogleCloudDataplexV1DataQualityRuleRangeExpectationResponse
ColumnMap rule which evaluates whether each column value lies between a specified range.
- regexExpectation GoogleCloudDataplexV1DataQualityRuleRegexExpectationResponse
ColumnMap rule which evaluates whether each column value matches a specified regex.
- rowConditionExpectation GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationResponse
Table rule which evaluates whether each row passes the specified condition.
- setExpectation GoogleCloudDataplexV1DataQualityRuleSetExpectationResponse
ColumnMap rule which evaluates whether each column value is contained by a specified set.
- statisticRangeExpectation GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationResponse
ColumnAggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
- tableConditionExpectation GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationResponse
Table rule which evaluates whether the provided expression is true.
- threshold Double
Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates default value (i.e. 1.0).
- uniquenessExpectation GoogleCloudDataplexV1DataQualityRuleUniquenessExpectationResponse
ColumnAggregate rule which evaluates whether the column has duplicates.
- column string
Optional. The unnested column which this rule is evaluated against.
- dimension string
The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
- ignoreNull boolean
Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. Only applicable to ColumnMap rules.
- nonNullExpectation GoogleCloudDataplexV1DataQualityRuleNonNullExpectationResponse
ColumnMap rule which evaluates whether each column value is null.
- rangeExpectation GoogleCloudDataplexV1DataQualityRuleRangeExpectationResponse
ColumnMap rule which evaluates whether each column value lies between a specified range.
- regexExpectation GoogleCloudDataplexV1DataQualityRuleRegexExpectationResponse
ColumnMap rule which evaluates whether each column value matches a specified regex.
- rowConditionExpectation GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationResponse
Table rule which evaluates whether each row passes the specified condition.
- setExpectation GoogleCloudDataplexV1DataQualityRuleSetExpectationResponse
ColumnMap rule which evaluates whether each column value is contained by a specified set.
- statisticRangeExpectation GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationResponse
ColumnAggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
- tableConditionExpectation GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationResponse
Table rule which evaluates whether the provided expression is true.
- threshold number
Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates default value (i.e. 1.0).
- uniquenessExpectation GoogleCloudDataplexV1DataQualityRuleUniquenessExpectationResponse
ColumnAggregate rule which evaluates whether the column has duplicates.
- column str
Optional. The unnested column which this rule is evaluated against.
- dimension str
The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
- ignore_null bool
Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. Only applicable to ColumnMap rules.
- non_null_expectation GoogleCloudDataplexV1DataQualityRuleNonNullExpectationResponse
ColumnMap rule which evaluates whether each column value is null.
- range_expectation GoogleCloudDataplexV1DataQualityRuleRangeExpectationResponse
ColumnMap rule which evaluates whether each column value lies between a specified range.
- regex_expectation GoogleCloudDataplexV1DataQualityRuleRegexExpectationResponse
ColumnMap rule which evaluates whether each column value matches a specified regex.
- row_condition_expectation GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationResponse
Table rule which evaluates whether each row passes the specified condition.
- set_expectation GoogleCloudDataplexV1DataQualityRuleSetExpectationResponse
ColumnMap rule which evaluates whether each column value is contained by a specified set.
- statistic_range_expectation GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationResponse
ColumnAggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
- table_condition_expectation GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationResponse
Table rule which evaluates whether the provided expression is true.
- threshold float
Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates default value (i.e. 1.0).
- uniqueness_expectation GoogleCloudDataplexV1DataQualityRuleUniquenessExpectationResponse
ColumnAggregate rule which evaluates whether the column has duplicates.
- column String
Optional. The unnested column which this rule is evaluated against.
- dimension String
The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
- ignoreNull Boolean
Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. Only applicable to ColumnMap rules.
- nonNullExpectation Property Map
ColumnMap rule which evaluates whether each column value is null.
- rangeExpectation Property Map
ColumnMap rule which evaluates whether each column value lies between a specified range.
- regexExpectation Property Map
ColumnMap rule which evaluates whether each column value matches a specified regex.
- rowConditionExpectation Property Map
Table rule which evaluates whether each row passes the specified condition.
- setExpectation Property Map
ColumnMap rule which evaluates whether each column value is contained by a specified set.
- statisticRangeExpectation Property Map
ColumnAggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
- tableConditionExpectation Property Map
Table rule which evaluates whether the provided expression is true.
- threshold Number
Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates default value (i.e. 1.0).
- uniquenessExpectation Property Map
ColumnAggregate rule which evaluates whether the column has duplicates.
GoogleCloudDataplexV1DataQualityRuleResultResponse, GoogleCloudDataplexV1DataQualityRuleResultResponseArgs
- EvaluatedCount string
The number of rows a rule was evaluated against. This field is only valid for ColumnMap type rules. Evaluated count can be configured to either include all rows (default) - with null rows automatically failing rule evaluation, or exclude null rows from the evaluated_count, by setting ignore_nulls = true.
- FailingRowsQuery string
The query to find rows that did not pass this rule. Only applies to ColumnMap and RowCondition rules.
- NullCount string
The number of rows with null values in the specified column.
- PassRatio double
The ratio of passed_count / evaluated_count. This field is only valid for ColumnMap type rules.
- Passed bool
Whether the rule passed or failed.
- PassedCount string
The number of rows which passed a rule evaluation. This field is only valid for ColumnMap type rules.
- Rule Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleResponse
The rule specified in the DataQualitySpec, as is.
- EvaluatedCount string
The number of rows a rule was evaluated against. This field is only valid for ColumnMap type rules. Evaluated count can be configured to either include all rows (default) - with null rows automatically failing rule evaluation, or exclude null rows from the evaluated_count, by setting ignore_nulls = true.
- FailingRowsQuery string
The query to find rows that did not pass this rule. Only applies to ColumnMap and RowCondition rules.
- NullCount string
The number of rows with null values in the specified column.
- PassRatio float64
The ratio of passed_count / evaluated_count. This field is only valid for ColumnMap type rules.
- Passed bool
Whether the rule passed or failed.
- PassedCount string
The number of rows which passed a rule evaluation. This field is only valid for ColumnMap type rules.
- Rule GoogleCloudDataplexV1DataQualityRuleResponse
The rule specified in the DataQualitySpec, as is.
- evaluatedCount String
The number of rows a rule was evaluated against. This field is only valid for ColumnMap type rules. Evaluated count can be configured to either include all rows (default) - with null rows automatically failing rule evaluation, or exclude null rows from the evaluated_count, by setting ignore_nulls = true.
- failingRowsQuery String
The query to find rows that did not pass this rule. Only applies to ColumnMap and RowCondition rules.
- nullCount String
The number of rows with null values in the specified column.
- passRatio Double
The ratio of passed_count / evaluated_count. This field is only valid for ColumnMap type rules.
- passed Boolean
Whether the rule passed or failed.
- passedCount String
The number of rows which passed a rule evaluation. This field is only valid for ColumnMap type rules.
- rule GoogleCloudDataplexV1DataQualityRuleResponse
The rule specified in the DataQualitySpec, as is.
- evaluatedCount string
The number of rows a rule was evaluated against. This field is only valid for ColumnMap type rules. Evaluated count can be configured to either include all rows (default) - with null rows automatically failing rule evaluation, or exclude null rows from the evaluated_count, by setting ignore_nulls = true.
- failingRowsQuery string
The query to find rows that did not pass this rule. Only applies to ColumnMap and RowCondition rules.
- nullCount string
The number of rows with null values in the specified column.
- passRatio number
The ratio of passed_count / evaluated_count. This field is only valid for ColumnMap type rules.
- passed boolean
Whether the rule passed or failed.
- passedCount string
The number of rows which passed a rule evaluation. This field is only valid for ColumnMap type rules.
- rule GoogleCloudDataplexV1DataQualityRuleResponse
The rule specified in the DataQualitySpec, as is.
- evaluated_count str
The number of rows a rule was evaluated against. This field is only valid for ColumnMap type rules. Evaluated count can be configured to either include all rows (default) - with null rows automatically failing rule evaluation, or exclude null rows from the evaluated_count, by setting ignore_nulls = true.
- failing_rows_query str
The query to find rows that did not pass this rule. Only applies to ColumnMap and RowCondition rules.
- null_count str
The number of rows with null values in the specified column.
- pass_ratio float
The ratio of passed_count / evaluated_count. This field is only valid for ColumnMap type rules.
- passed bool
Whether the rule passed or failed.
- passed_count str
The number of rows which passed a rule evaluation. This field is only valid for ColumnMap type rules.
- rule GoogleCloudDataplexV1DataQualityRuleResponse
The rule specified in the DataQualitySpec, as is.
- evaluatedCount String
The number of rows a rule was evaluated against. This field is only valid for ColumnMap type rules. Evaluated count can be configured to either include all rows (default) - with null rows automatically failing rule evaluation, or exclude null rows from the evaluated_count, by setting ignore_nulls = true.
- failingRowsQuery String
The query to find rows that did not pass this rule. Only applies to ColumnMap and RowCondition rules.
- nullCount String
The number of rows with null values in the specified column.
- passRatio Number
The ratio of passed_count / evaluated_count. This field is only valid for ColumnMap type rules.
- passed Boolean
Whether the rule passed or failed.
- passedCount String
The number of rows which passed a rule evaluation. This field is only valid for ColumnMap type rules.
- rule Property Map
The rule specified in the DataQualitySpec, as is.
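For ColumnMap rules, these result fields relate as pass_ratio = passed_count / evaluated_count, and a rule with a threshold of 0 falls back to the default of 1.0. A sketch of that relationship; whether the service compares the ratio with >= or > at the boundary is not stated here, so >= is an assumption:

```python
def summarize_rule_result(evaluated_count, passed_count, threshold):
    # pass_ratio as documented: passed_count / evaluated_count.
    pass_ratio = passed_count / evaluated_count if evaluated_count else 0.0
    # A threshold of 0 indicates the default value (i.e. 1.0).
    effective_threshold = threshold if threshold > 0 else 1.0
    passed = pass_ratio >= effective_threshold  # boundary semantics assumed
    return {"pass_ratio": pass_ratio, "passed": passed}
```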
GoogleCloudDataplexV1DataQualityRuleRowConditionExpectation, GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationArgs
- SqlExpression string
The SQL expression.
- SqlExpression string
The SQL expression.
- sqlExpression String
The SQL expression.
- sqlExpression string
The SQL expression.
- sql_expression str
The SQL expression.
- sqlExpression String
The SQL expression.
GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationResponse, GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationResponseArgs
- SqlExpression string
The SQL expression.
- SqlExpression string
The SQL expression.
- sqlExpression String
The SQL expression.
- sqlExpression string
The SQL expression.
- sql_expression str
The SQL expression.
- sqlExpression String
The SQL expression.
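A RowConditionExpectation holds a single boolean SQL expression that is evaluated per row. As a hypothetical example (the column name and expression are invented for illustration, not from the source), such a rule in the Python dict shape looks like:

```python
# Hypothetical Table rule: every row's discount must be a valid percentage.
row_condition_rule = {
    "dimension": "VALIDITY",
    "row_condition_expectation": {
        "sql_expression": "discount_pct BETWEEN 0 AND 100",
    },
}
```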
GoogleCloudDataplexV1DataQualityRuleSetExpectation, GoogleCloudDataplexV1DataQualityRuleSetExpectationArgs
- Values List<string>
Expected values for the column value.
- Values []string
Expected values for the column value.
- values List<String>
Expected values for the column value.
- values string[]
Expected values for the column value.
- values Sequence[str]
Expected values for the column value.
- values List<String>
Expected values for the column value.
GoogleCloudDataplexV1DataQualityRuleSetExpectationResponse, GoogleCloudDataplexV1DataQualityRuleSetExpectationResponseArgs
- Values List<string>
Expected values for the column value.
- Values []string
Expected values for the column value.
- values List<String>
Expected values for the column value.
- values string[]
Expected values for the column value.
- values Sequence[str]
Expected values for the column value.
- values List<String>
Expected values for the column value.
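A SetExpectation passes when the column value is contained in the listed values. A minimal sketch of that membership check, with a hypothetical set of country codes:

```python
ALLOWED_COUNTRY_CODES = ["US", "CA", "MX"]  # hypothetical expected values

def in_expected_set(value, values):
    # Sketch of SetExpectation: the column value must be contained
    # in the specified set of expected values.
    return value in set(values)
```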
GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectation, GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationArgs
- MaxValue string
The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- MinValue string
The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- Statistic Pulumi.GoogleNative.Dataplex.V1.GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatistic
The aggregate metric to evaluate.
- StrictMaxEnabled bool
Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- StrictMinEnabled bool
Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- MaxValue string
The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- MinValue string
The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- Statistic GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatistic
The aggregate metric to evaluate.
- StrictMaxEnabled bool
Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- StrictMinEnabled bool
Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- max
Value String The maximum column statistic value allowed for a row to pass this validation.At least one of min_value and max_value need to be provided.
- min
Value String The minimum column statistic value allowed for a row to pass this validation.At least one of min_value and max_value need to be provided.
- statistic
Google
Cloud Dataplex V1Data Quality Rule Statistic Range Expectation Statistic The aggregate metric to evaluate.
- strict
Max BooleanEnabled Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed.Only relevant if a max_value has been defined. Default = false.
- strict
Min BooleanEnabled Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed.Only relevant if a min_value has been defined. Default = false.
- max
Value string The maximum column statistic value allowed for a row to pass this validation.At least one of min_value and max_value need to be provided.
- min
Value string The minimum column statistic value allowed for a row to pass this validation.At least one of min_value and max_value need to be provided.
- statistic
Google
Cloud Dataplex V1Data Quality Rule Statistic Range Expectation Statistic The aggregate metric to evaluate.
- strict
Max booleanEnabled Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed.Only relevant if a max_value has been defined. Default = false.
- strict
Min booleanEnabled Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed.Only relevant if a min_value has been defined. Default = false.
- max_
value str The maximum column statistic value allowed for a row to pass this validation.At least one of min_value and max_value need to be provided.
- min_
value str The minimum column statistic value allowed for a row to pass this validation.At least one of min_value and max_value need to be provided.
- statistic
Google
Cloud Dataplex V1Data Quality Rule Statistic Range Expectation Statistic The aggregate metric to evaluate.
- strict_
max_ boolenabled Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed.Only relevant if a max_value has been defined. Default = false.
- strict_
min_ boolenabled Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed.Only relevant if a min_value has been defined. Default = false.
- max
Value String The maximum column statistic value allowed for a row to pass this validation.At least one of min_value and max_value need to be provided.
- min
Value String The minimum column statistic value allowed for a row to pass this validation.At least one of min_value and max_value need to be provided.
- statistic "STATISTIC_UNDEFINED" | "MEAN" | "MIN" | "MAX"
The aggregate metric to evaluate.
- strict
Max BooleanEnabled Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed.Only relevant if a max_value has been defined. Default = false.
- strict
Min BooleanEnabled Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed.Only relevant if a min_value has been defined. Default = false.
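The constraints described above (at least one of min_value/max_value must be set; the strict flags only matter when the matching bound is defined) can be sketched as a small validator. This is a hypothetical helper for building the args dict, not part of the Pulumi SDK.

```python
# Hypothetical builder for a statistic range expectation, enforcing the
# documented constraints: at least one bound must be provided, and each
# strict_*_enabled flag is only meaningful when its bound is defined.
def statistic_range_expectation(statistic, min_value=None, max_value=None,
                                strict_min_enabled=False, strict_max_enabled=False):
    if statistic not in ("STATISTIC_UNDEFINED", "MEAN", "MIN", "MAX"):
        raise ValueError(f"unknown statistic: {statistic}")
    if min_value is None and max_value is None:
        raise ValueError("at least one of min_value and max_value must be provided")
    args = {"statistic": statistic}
    if min_value is not None:
        args["min_value"] = min_value
        args["strict_min_enabled"] = strict_min_enabled  # default false
    if max_value is not None:
        args["max_value"] = max_value
        args["strict_max_enabled"] = strict_max_enabled  # default false
    return args
```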
GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationResponse, GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationResponseArgs
- MaxValue string
The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- MinValue string
The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- Statistic string
The aggregate metric to evaluate.
- StrictMaxEnabled bool
Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- StrictMinEnabled bool
Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- MaxValue string
The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- MinValue string
The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- Statistic string
The aggregate metric to evaluate.
- StrictMaxEnabled bool
Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- StrictMinEnabled bool
Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- maxValue String
The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- minValue String
The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- statistic String
The aggregate metric to evaluate.
- strictMaxEnabled Boolean
Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- strictMinEnabled Boolean
Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- maxValue string
The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- minValue string
The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- statistic string
The aggregate metric to evaluate.
- strictMaxEnabled boolean
Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- strictMinEnabled boolean
Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- max_value str
The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- min_value str
The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- statistic str
The aggregate metric to evaluate.
- strict_max_enabled bool
Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- strict_min_enabled bool
Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- maxValue String
The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- minValue String
The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- statistic String
The aggregate metric to evaluate.
- strictMaxEnabled Boolean
Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- strictMinEnabled Boolean
Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatistic, GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatisticArgs
- StatisticUndefined
- STATISTIC_UNDEFINED
Unspecified statistic type
- Mean
- MEAN
Evaluate the column mean
- Min
- MIN
Evaluate the column min
- Max
- MAX
Evaluate the column max
- GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatisticStatisticUndefined
- STATISTIC_UNDEFINED
Unspecified statistic type
- GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatisticMean
- MEAN
Evaluate the column mean
- GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatisticMin
- MIN
Evaluate the column min
- GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatisticMax
- MAX
Evaluate the column max
- StatisticUndefined
- STATISTIC_UNDEFINED
Unspecified statistic type
- Mean
- MEAN
Evaluate the column mean
- Min
- MIN
Evaluate the column min
- Max
- MAX
Evaluate the column max
- StatisticUndefined
- STATISTIC_UNDEFINED
Unspecified statistic type
- Mean
- MEAN
Evaluate the column mean
- Min
- MIN
Evaluate the column min
- Max
- MAX
Evaluate the column max
- STATISTIC_UNDEFINED
- STATISTIC_UNDEFINED
Unspecified statistic type
- MEAN
- MEAN
Evaluate the column mean
- MIN
- MIN
Evaluate the column min
- MAX
- MAX
Evaluate the column max
- "STATISTIC_UNDEFINED"
- STATISTIC_UNDEFINED
Unspecified statistic type
- "MEAN"
- MEAN
Evaluate the column mean
- "MIN"
- MIN
Evaluate the column min
- "MAX"
- MAX
Evaluate the column max
GoogleCloudDataplexV1DataQualityRuleTableConditionExpectation, GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationArgs
- SqlExpression string
The SQL expression.
- SqlExpression string
The SQL expression.
- sqlExpression String
The SQL expression.
- sqlExpression string
The SQL expression.
- sql_expression str
The SQL expression.
- sqlExpression String
The SQL expression.
GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationResponse, GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationResponseArgs
- SqlExpression string
The SQL expression.
- SqlExpression string
The SQL expression.
- sqlExpression String
The SQL expression.
- sqlExpression string
The SQL expression.
- sql_expression str
The SQL expression.
- sqlExpression String
The SQL expression.
GoogleCloudDataplexV1DataQualitySpec, GoogleCloudDataplexV1DataQualitySpecArgs
- RowFilter string
Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- Rules List<Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRule>
The list of rules to evaluate against a data source. At least one rule is required.
- SamplingPercent double
Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- RowFilter string
Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- Rules []GoogleCloudDataplexV1DataQualityRule
The list of rules to evaluate against a data source. At least one rule is required.
- SamplingPercent float64
Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- rowFilter String
Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- rules List<GoogleCloudDataplexV1DataQualityRule>
The list of rules to evaluate against a data source. At least one rule is required.
- samplingPercent Double
Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- rowFilter string
Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- rules GoogleCloudDataplexV1DataQualityRule[]
The list of rules to evaluate against a data source. At least one rule is required.
- samplingPercent number
Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- row_filter str
Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- rules Sequence[GoogleCloudDataplexV1DataQualityRule]
The list of rules to evaluate against a data source. At least one rule is required.
- sampling_percent float
Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- rowFilter String
Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- rules List<Property Map>
The list of rules to evaluate against a data source. At least one rule is required.
- samplingPercent Number
Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
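The DataQualitySpec constraints above (at least one rule; sampling_percent within 0.0–100.0, with 0, 100, or unset meaning no sampling) can be sketched as a small builder. This is an illustrative helper, not part of the Pulumi SDK; the rule shape shown is a set expectation from this page.

```python
# Hypothetical builder for a GoogleCloudDataplexV1DataQualitySpec args dict,
# enforcing the documented constraints.
def data_quality_spec(rules, row_filter=None, sampling_percent=None):
    if not rules:
        raise ValueError("at least one rule is required")
    # Per the docs, the value can range between 0.0 and 100.0; values of
    # 0, 100, or unset mean sampling is not applied.
    if sampling_percent is not None and not (0.0 <= sampling_percent <= 100.0):
        raise ValueError("sampling_percent must be between 0.0 and 100.0")
    spec = {"rules": list(rules)}
    if row_filter is not None:
        spec["row_filter"] = row_filter  # WHERE-clause expression, e.g. "col1 >= 0 AND col2 < 10"
    if sampling_percent is not None:
        spec["sampling_percent"] = sampling_percent
    return spec
```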
GoogleCloudDataplexV1DataQualitySpecResponse, GoogleCloudDataplexV1DataQualitySpecResponseArgs
- RowFilter string
Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- Rules List<Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleResponse>
The list of rules to evaluate against a data source. At least one rule is required.
- SamplingPercent double
Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- RowFilter string
Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- Rules []GoogleCloudDataplexV1DataQualityRuleResponse
The list of rules to evaluate against a data source. At least one rule is required.
- SamplingPercent float64
Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- rowFilter String
Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- rules List<GoogleCloudDataplexV1DataQualityRuleResponse>
The list of rules to evaluate against a data source. At least one rule is required.
- samplingPercent Double
Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- rowFilter string
Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- rules GoogleCloudDataplexV1DataQualityRuleResponse[]
The list of rules to evaluate against a data source. At least one rule is required.
- samplingPercent number
Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- row_filter str
Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- rules Sequence[GoogleCloudDataplexV1DataQualityRuleResponse]
The list of rules to evaluate against a data source. At least one rule is required.
- sampling_percent float
Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- rowFilter String
Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- rules List<Property Map>
The list of rules to evaluate against a data source. At least one rule is required.
- samplingPercent Number
Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
GoogleCloudDataplexV1DataScanExecutionSpec, GoogleCloudDataplexV1DataScanExecutionSpecArgs
- Field string
Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
- Trigger Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1Trigger
Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls RunDataScan API.
- Field string
Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
- Trigger GoogleCloudDataplexV1Trigger
Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls RunDataScan API.
- field String
Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
- trigger GoogleCloudDataplexV1Trigger
Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls RunDataScan API.
- field string
Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
- trigger GoogleCloudDataplexV1Trigger
Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls RunDataScan API.
- field str
Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
- trigger GoogleCloudDataplexV1Trigger
Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls RunDataScan API.
- field String
Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
- trigger Property Map
Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls RunDataScan API.
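The defaulting behavior described above (no trigger means on-demand; no field means a full-table scan rather than an incremental one) can be sketched as follows. This is an illustrative helper, not part of the Pulumi SDK.

```python
# Hypothetical builder for a GoogleCloudDataplexV1DataScanExecutionSpec
# args dict. When no trigger is given, the scan defaults to on-demand:
# it will not run until the RunDataScan API is called.
def execution_spec(field=None, trigger=None):
    spec = {"trigger": trigger if trigger is not None else {"on_demand": {}}}
    if field is not None:
        # Incremental field (of type Date or Timestamp) whose values
        # monotonically increase over time; omit it for a full-table scan.
        spec["field"] = field
    return spec
```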
GoogleCloudDataplexV1DataScanExecutionSpecResponse, GoogleCloudDataplexV1DataScanExecutionSpecResponseArgs
- Field string
Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
- Trigger Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TriggerResponse
Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls RunDataScan API.
- Field string
Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
- Trigger GoogleCloudDataplexV1TriggerResponse
Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls RunDataScan API.
- field String
Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
- trigger GoogleCloudDataplexV1TriggerResponse
Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls RunDataScan API.
- field string
Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
- trigger GoogleCloudDataplexV1TriggerResponse
Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls RunDataScan API.
- field str
Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
- trigger GoogleCloudDataplexV1TriggerResponse
Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls RunDataScan API.
- field String
Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
- trigger Property Map
Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls RunDataScan API.
GoogleCloudDataplexV1DataScanExecutionStatusResponse, GoogleCloudDataplexV1DataScanExecutionStatusResponseArgs
- LatestJobEndTime string
The time when the latest DataScanJob ended.
- LatestJobStartTime string
The time when the latest DataScanJob started.
- LatestJobEndTime string
The time when the latest DataScanJob ended.
- LatestJobStartTime string
The time when the latest DataScanJob started.
- latestJobEndTime String
The time when the latest DataScanJob ended.
- latestJobStartTime String
The time when the latest DataScanJob started.
- latestJobEndTime string
The time when the latest DataScanJob ended.
- latestJobStartTime string
The time when the latest DataScanJob started.
- latest_job_end_time str
The time when the latest DataScanJob ended.
- latest_job_start_time str
The time when the latest DataScanJob started.
- latestJobEndTime String
The time when the latest DataScanJob ended.
- latestJobStartTime String
The time when the latest DataScanJob started.
GoogleCloudDataplexV1DataSource, GoogleCloudDataplexV1DataSourceArgs
- Entity string
Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
- Resource string
Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be: BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- Entity string
Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
- Resource string
Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be: BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- entity String
Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
- resource String
Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be: BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- entity string
Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
- resource string
Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be: BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- entity str
Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
- resource str
Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be: BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- entity String
Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
- resource String
Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be: BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
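A DataSource points the scan at either a Dataplex entity or a service-qualified cloud resource such as a BigQuery table. The sketch below assumes the two fields are mutually exclusive and that a BigQuery resource name uses the `//bigquery.googleapis.com/...` prefix shown above; it is an illustrative helper, not part of the Pulumi SDK.

```python
# Hypothetical builder for a GoogleCloudDataplexV1DataSource args dict.
# Assumption (not stated verbatim in the docs): exactly one of entity or
# resource should be provided.
def data_source(entity=None, resource=None):
    if (entity is None) == (resource is None):
        raise ValueError("provide exactly one of entity or resource")
    if resource is not None and not resource.startswith("//bigquery.googleapis.com/"):
        # Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
        raise ValueError("resource must be a service-qualified full resource name")
    return {"entity": entity} if entity is not None else {"resource": resource}
```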
GoogleCloudDataplexV1DataSourceResponse, GoogleCloudDataplexV1DataSourceResponseArgs
- Entity string
Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
- Resource string
Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be: BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- Entity string
Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
- Resource string
Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be: BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- entity String
Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
- resource String
Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be: BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- entity string
Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
- resource string
Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be: BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- entity str
Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
- resource str
Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be: BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- entity String
Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
- resource String
Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be: BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
GoogleCloudDataplexV1ScannedDataIncrementalFieldResponse, GoogleCloudDataplexV1ScannedDataIncrementalFieldResponseArgs
GoogleCloudDataplexV1ScannedDataResponse, GoogleCloudDataplexV1ScannedDataResponseArgs
- IncrementalField Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1ScannedDataIncrementalFieldResponse
The range denoted by values of an incremental field
- IncrementalField GoogleCloudDataplexV1ScannedDataIncrementalFieldResponse
The range denoted by values of an incremental field
- incrementalField GoogleCloudDataplexV1ScannedDataIncrementalFieldResponse
The range denoted by values of an incremental field
- incrementalField GoogleCloudDataplexV1ScannedDataIncrementalFieldResponse
The range denoted by values of an incremental field
- incremental_field GoogleCloudDataplexV1ScannedDataIncrementalFieldResponse
The range denoted by values of an incremental field
- incrementalField Property Map
The range denoted by values of an incremental field
GoogleCloudDataplexV1Trigger, GoogleCloudDataplexV1TriggerArgs
- OnDemand Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TriggerOnDemand
The scan runs once via RunDataScan API.
- Schedule Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TriggerSchedule
The scan is scheduled to run periodically.
- OnDemand GoogleCloudDataplexV1TriggerOnDemand
The scan runs once via RunDataScan API.
- Schedule GoogleCloudDataplexV1TriggerSchedule
The scan is scheduled to run periodically.
- onDemand GoogleCloudDataplexV1TriggerOnDemand
The scan runs once via RunDataScan API.
- schedule GoogleCloudDataplexV1TriggerSchedule
The scan is scheduled to run periodically.
- onDemand GoogleCloudDataplexV1TriggerOnDemand
The scan runs once via RunDataScan API.
- schedule GoogleCloudDataplexV1TriggerSchedule
The scan is scheduled to run periodically.
- on_demand GoogleCloudDataplexV1TriggerOnDemand
The scan runs once via RunDataScan API.
- schedule GoogleCloudDataplexV1TriggerSchedule
The scan is scheduled to run periodically.
- onDemand Property Map
The scan runs once via RunDataScan API.
- schedule Property Map
The scan is scheduled to run periodically.
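The trigger is a union: a scan runs either on demand or on a schedule, so exactly one of the two fields should be set. An illustrative sketch (not the Pulumi SDK; a plain-dict stand-in for the trigger argument) of that constraint:

```python
def validate_trigger(trigger: dict) -> str:
    """Return which trigger variant is set; reject zero or both.

    Mirrors the on_demand/schedule union described above, using the
    Python SDK's snake_case field names.
    """
    variants = [k for k in ("on_demand", "schedule") if k in trigger]
    if len(variants) != 1:
        raise ValueError("trigger must set exactly one of on_demand or schedule")
    return variants[0]


# A scheduled trigger carries a cron expression; an on-demand one is empty.
print(validate_trigger({"schedule": {"cron": "CRON_TZ=America/New_York 1 * * * *"}}))
```

With `on_demand`, the scan only runs when invoked through the RunDataScan API; with `schedule`, the service runs it on the given cron cadence.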
GoogleCloudDataplexV1TriggerResponse, GoogleCloudDataplexV1TriggerResponseArgs
- OnDemand Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TriggerOnDemandResponse
The scan runs once via RunDataScan API.
- Schedule Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TriggerScheduleResponse
The scan is scheduled to run periodically.
- OnDemand GoogleCloudDataplexV1TriggerOnDemandResponse
The scan runs once via RunDataScan API.
- Schedule GoogleCloudDataplexV1TriggerScheduleResponse
The scan is scheduled to run periodically.
- onDemand GoogleCloudDataplexV1TriggerOnDemandResponse
The scan runs once via RunDataScan API.
- schedule GoogleCloudDataplexV1TriggerScheduleResponse
The scan is scheduled to run periodically.
- onDemand GoogleCloudDataplexV1TriggerOnDemandResponse
The scan runs once via RunDataScan API.
- schedule GoogleCloudDataplexV1TriggerScheduleResponse
The scan is scheduled to run periodically.
- on_demand GoogleCloudDataplexV1TriggerOnDemandResponse
The scan runs once via RunDataScan API.
- schedule GoogleCloudDataplexV1TriggerScheduleResponse
The scan is scheduled to run periodically.
- onDemand Property Map
The scan runs once via RunDataScan API.
- schedule Property Map
The scan is scheduled to run periodically.
GoogleCloudDataplexV1TriggerSchedule, GoogleCloudDataplexV1TriggerScheduleArgs
- Cron string
Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * * or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
- Cron string
Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * * or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
- cron String
Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * * or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
- cron string
Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * * or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
- cron str
Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * * or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
- cron String
Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * * or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
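Composing the prefixed cron expression described above is a simple string operation. A hedged sketch (helper name is ours, not the SDK's; the schedule is the documentation's own example):

```python
def cron_with_timezone(cron: str, iana_tz: str, prefix: str = "CRON_TZ") -> str:
    """Prepend the CRON_TZ= or TZ= timezone prefix to a cron expression."""
    if prefix not in ("CRON_TZ", "TZ"):
        raise ValueError("prefix must be CRON_TZ or TZ")
    return f"{prefix}={iana_tz} {cron}"


# The documentation's example: run at minute 1 of every hour, New York time.
print(cron_with_timezone("1 * * * *", "America/New_York"))
# CRON_TZ=America/New_York 1 * * * *
```

Without a prefix, the schedule is interpreted in the service's default timezone, so setting CRON_TZ explicitly is the safer choice for Schedule scans.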
GoogleCloudDataplexV1TriggerScheduleResponse, GoogleCloudDataplexV1TriggerScheduleResponseArgs
- Cron string
Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * * or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
- Cron string
Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * * or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
- cron String
Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * * or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
- cron string
Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * * or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
- cron str
Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * * or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
- cron String
Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * * or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0