google-native.bigquery/v2.getRoutine

Google Cloud Native is in preview. Google Cloud Classic is fully supported.

Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi

    Gets the specified routine resource by routine ID.

    Using getRoutine

    Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.

    function getRoutine(args: GetRoutineArgs, opts?: InvokeOptions): Promise<GetRoutineResult>
    function getRoutineOutput(args: GetRoutineOutputArgs, opts?: InvokeOptions): Output<GetRoutineResult>
    def get_routine(dataset_id: Optional[str] = None,
                    project: Optional[str] = None,
                    read_mask: Optional[str] = None,
                    routine_id: Optional[str] = None,
                    opts: Optional[InvokeOptions] = None) -> GetRoutineResult
    def get_routine_output(dataset_id: Optional[pulumi.Input[str]] = None,
                    project: Optional[pulumi.Input[str]] = None,
                    read_mask: Optional[pulumi.Input[str]] = None,
                    routine_id: Optional[pulumi.Input[str]] = None,
                    opts: Optional[InvokeOptions] = None) -> Output[GetRoutineResult]
    func LookupRoutine(ctx *Context, args *LookupRoutineArgs, opts ...InvokeOption) (*LookupRoutineResult, error)
    func LookupRoutineOutput(ctx *Context, args *LookupRoutineOutputArgs, opts ...InvokeOption) LookupRoutineResultOutput

    > Note: This function is named LookupRoutine in the Go SDK.

    public static class GetRoutine 
    {
        public static Task<GetRoutineResult> InvokeAsync(GetRoutineArgs args, InvokeOptions? opts = null)
        public static Output<GetRoutineResult> Invoke(GetRoutineInvokeArgs args, InvokeOptions? opts = null)
    }
    public static CompletableFuture<GetRoutineResult> getRoutine(GetRoutineArgs args, InvokeOptions options)
    // Output-based functions aren't available in Java yet
    
    fn::invoke:
      function: google-native:bigquery/v2:getRoutine
      arguments:
        # arguments dictionary
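
    As a minimal sketch of the two forms in TypeScript (the project, dataset, and routine IDs below are hypothetical placeholders):

    import * as google_native from "@pulumi/google-native";

    // Direct form: returns a Promise that resolves once the routine
    // metadata has been fetched.
    const routine = google_native.bigquery.v2.getRoutine({
        project: "my-project",     // hypothetical project ID
        datasetId: "my_dataset",   // hypothetical dataset ID
        routineId: "my_routine",   // hypothetical routine ID
    });
    routine.then(r => console.log(r.routineType));

    // Output form: accepts Input-wrapped arguments (for example, outputs of
    // other resources) and returns an Output-wrapped result.
    const routineOutput = google_native.bigquery.v2.getRoutineOutput({
        project: "my-project",
        datasetId: "my_dataset",
        routineId: "my_routine",
    });
    export const routineLanguage = routineOutput.language;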

    The following arguments are supported:

    DatasetId string
    RoutineId string
    Project string
    ReadMask string
    DatasetId string
    RoutineId string
    Project string
    ReadMask string
    datasetId String
    routineId String
    project String
    readMask String
    datasetId string
    routineId string
    project string
    readMask string
    datasetId String
    routineId String
    project String
    readMask String

    getRoutine Result

    The following output properties are available:

    Arguments List<Pulumi.GoogleNative.BigQuery.V2.Outputs.ArgumentResponse>
    Optional.
    CreationTime string
    The time when this routine was created, in milliseconds since the epoch.
    DataGovernanceType string
    Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
    DefinitionBody string
    The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement: CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y)) The definition_body is concat(x, "\n", y) (\n is not replaced with a line break). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement: CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n' The definition_body is return "\n";\n Note that both \n are replaced with line breaks.
    Description string
    Optional. The description of the routine, if defined.
    DeterminismLevel string
    Optional. The determinism level of the JavaScript UDF, if defined.
    Etag string
    A hash of this resource.
    ImportedLibraries List<string>
    Optional. If language = "JAVASCRIPT", this field stores the paths of the imported JavaScript libraries.
    Language string
    Optional. Defaults to "SQL" if the remote_function_options field is absent; otherwise it is not set.
    LastModifiedTime string
    The time when this routine was last modified, in milliseconds since the epoch.
    RemoteFunctionOptions Pulumi.GoogleNative.BigQuery.V2.Outputs.RemoteFunctionOptionsResponse
    Optional. Remote function specific options.
    ReturnTableType Pulumi.GoogleNative.BigQuery.V2.Outputs.StandardSqlTableTypeResponse
    Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast to match the column types specified in return table type, at query time.
    ReturnType Pulumi.GoogleNative.BigQuery.V2.Outputs.StandardSqlDataTypeResponse
    Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified returned type at query time. For example, for the functions created with the following statements: * CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y); * CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1)); * CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1)); The return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); Then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
    RoutineReference Pulumi.GoogleNative.BigQuery.V2.Outputs.RoutineReferenceResponse
    Reference describing the ID of this routine.
    RoutineType string
    The type of routine.
    SecurityMode string
    Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
    SparkOptions Pulumi.GoogleNative.BigQuery.V2.Outputs.SparkOptionsResponse
    Optional. Spark specific options.
    StrictMode bool
    Optional. Can be set for procedures only. If true (the default), the definition body is validated when the procedure is created or updated. For procedures with an argument of ANY TYPE, definition body validation is not supported at creation/update time, so this field must be set to false explicitly.
    Arguments []ArgumentResponse
    Optional.
    CreationTime string
    The time when this routine was created, in milliseconds since the epoch.
    DataGovernanceType string
    Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
    DefinitionBody string
    The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement: CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y)) The definition_body is concat(x, "\n", y) (\n is not replaced with a line break). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement: CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n' The definition_body is return "\n";\n Note that both \n are replaced with line breaks.
    Description string
    Optional. The description of the routine, if defined.
    DeterminismLevel string
    Optional. The determinism level of the JavaScript UDF, if defined.
    Etag string
    A hash of this resource.
    ImportedLibraries []string
    Optional. If language = "JAVASCRIPT", this field stores the paths of the imported JavaScript libraries.
    Language string
    Optional. Defaults to "SQL" if the remote_function_options field is absent; otherwise it is not set.
    LastModifiedTime string
    The time when this routine was last modified, in milliseconds since the epoch.
    RemoteFunctionOptions RemoteFunctionOptionsResponse
    Optional. Remote function specific options.
    ReturnTableType StandardSqlTableTypeResponse
    Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast to match the column types specified in return table type, at query time.
    ReturnType StandardSqlDataTypeResponse
    Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified returned type at query time. For example, for the functions created with the following statements: * CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y); * CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1)); * CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1)); The return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); Then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
    RoutineReference RoutineReferenceResponse
    Reference describing the ID of this routine.
    RoutineType string
    The type of routine.
    SecurityMode string
    Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
    SparkOptions SparkOptionsResponse
    Optional. Spark specific options.
    StrictMode bool
    Optional. Can be set for procedures only. If true (the default), the definition body is validated when the procedure is created or updated. For procedures with an argument of ANY TYPE, definition body validation is not supported at creation/update time, so this field must be set to false explicitly.
    arguments List<ArgumentResponse>
    Optional.
    creationTime String
    The time when this routine was created, in milliseconds since the epoch.
    dataGovernanceType String
    Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
    definitionBody String
    The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement: CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y)) The definition_body is concat(x, "\n", y) (\n is not replaced with a line break). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement: CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n' The definition_body is return "\n";\n Note that both \n are replaced with line breaks.
    description String
    Optional. The description of the routine, if defined.
    determinismLevel String
    Optional. The determinism level of the JavaScript UDF, if defined.
    etag String
    A hash of this resource.
    importedLibraries List<String>
    Optional. If language = "JAVASCRIPT", this field stores the paths of the imported JavaScript libraries.
    language String
    Optional. Defaults to "SQL" if the remote_function_options field is absent; otherwise it is not set.
    lastModifiedTime String
    The time when this routine was last modified, in milliseconds since the epoch.
    remoteFunctionOptions RemoteFunctionOptionsResponse
    Optional. Remote function specific options.
    returnTableType StandardSqlTableTypeResponse
    Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast to match the column types specified in return table type, at query time.
    returnType StandardSqlDataTypeResponse
    Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified returned type at query time. For example, for the functions created with the following statements: * CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y); * CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1)); * CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1)); The return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); Then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
    routineReference RoutineReferenceResponse
    Reference describing the ID of this routine.
    routineType String
    The type of routine.
    securityMode String
    Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
    sparkOptions SparkOptionsResponse
    Optional. Spark specific options.
    strictMode Boolean
    Optional. Can be set for procedures only. If true (the default), the definition body is validated when the procedure is created or updated. For procedures with an argument of ANY TYPE, definition body validation is not supported at creation/update time, so this field must be set to false explicitly.
    arguments ArgumentResponse[]
    Optional.
    creationTime string
    The time when this routine was created, in milliseconds since the epoch.
    dataGovernanceType string
    Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
    definitionBody string
    The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement: CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y)) The definition_body is concat(x, "\n", y) (\n is not replaced with a line break). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement: CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n' The definition_body is return "\n";\n Note that both \n are replaced with line breaks.
    description string
    Optional. The description of the routine, if defined.
    determinismLevel string
    Optional. The determinism level of the JavaScript UDF, if defined.
    etag string
    A hash of this resource.
    importedLibraries string[]
    Optional. If language = "JAVASCRIPT", this field stores the paths of the imported JavaScript libraries.
    language string
    Optional. Defaults to "SQL" if the remote_function_options field is absent; otherwise it is not set.
    lastModifiedTime string
    The time when this routine was last modified, in milliseconds since the epoch.
    remoteFunctionOptions RemoteFunctionOptionsResponse
    Optional. Remote function specific options.
    returnTableType StandardSqlTableTypeResponse
    Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast to match the column types specified in return table type, at query time.
    returnType StandardSqlDataTypeResponse
    Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified returned type at query time. For example, for the functions created with the following statements: * CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y); * CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1)); * CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1)); The return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); Then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
    routineReference RoutineReferenceResponse
    Reference describing the ID of this routine.
    routineType string
    The type of routine.
    securityMode string
    Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
    sparkOptions SparkOptionsResponse
    Optional. Spark specific options.
    strictMode boolean
    Optional. Can be set for procedures only. If true (the default), the definition body is validated when the procedure is created or updated. For procedures with an argument of ANY TYPE, definition body validation is not supported at creation/update time, so this field must be set to false explicitly.
    arguments Sequence[ArgumentResponse]
    Optional.
    creation_time str
    The time when this routine was created, in milliseconds since the epoch.
    data_governance_type str
    Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
    definition_body str
    The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement: CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y)) The definition_body is concat(x, "\n", y) (\n is not replaced with a line break). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement: CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n' The definition_body is return "\n";\n Note that both \n are replaced with line breaks.
    description str
    Optional. The description of the routine, if defined.
    determinism_level str
    Optional. The determinism level of the JavaScript UDF, if defined.
    etag str
    A hash of this resource.
    imported_libraries Sequence[str]
    Optional. If language = "JAVASCRIPT", this field stores the paths of the imported JavaScript libraries.
    language str
    Optional. Defaults to "SQL" if the remote_function_options field is absent; otherwise it is not set.
    last_modified_time str
    The time when this routine was last modified, in milliseconds since the epoch.
    remote_function_options RemoteFunctionOptionsResponse
    Optional. Remote function specific options.
    return_table_type StandardSqlTableTypeResponse
    Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast to match the column types specified in return table type, at query time.
    return_type StandardSqlDataTypeResponse
    Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified returned type at query time. For example, for the functions created with the following statements: * CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y); * CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1)); * CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1)); The return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); Then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
    routine_reference RoutineReferenceResponse
    Reference describing the ID of this routine.
    routine_type str
    The type of routine.
    security_mode str
    Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
    spark_options SparkOptionsResponse
    Optional. Spark specific options.
    strict_mode bool
    Optional. Can be set for procedures only. If true (the default), the definition body is validated when the procedure is created or updated. For procedures with an argument of ANY TYPE, definition body validation is not supported at creation/update time, so this field must be set to false explicitly.
    arguments List<Property Map>
    Optional.
    creationTime String
    The time when this routine was created, in milliseconds since the epoch.
    dataGovernanceType String
    Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
    definitionBody String
    The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement: CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y)) The definition_body is concat(x, "\n", y) (\n is not replaced with a line break). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement: CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n' The definition_body is return "\n";\n Note that both \n are replaced with line breaks.
    description String
    Optional. The description of the routine, if defined.
    determinismLevel String
    Optional. The determinism level of the JavaScript UDF, if defined.
    etag String
    A hash of this resource.
    importedLibraries List<String>
    Optional. If language = "JAVASCRIPT", this field stores the paths of the imported JavaScript libraries.
    language String
    Optional. Defaults to "SQL" if the remote_function_options field is absent; otherwise it is not set.
    lastModifiedTime String
    The time when this routine was last modified, in milliseconds since the epoch.
    remoteFunctionOptions Property Map
    Optional. Remote function specific options.
    returnTableType Property Map
    Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast to match the column types specified in return table type, at query time.
    returnType Property Map
    Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified returned type at query time. For example, for the functions created with the following statements: * CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y); * CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1)); * CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1)); The return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); Then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
    routineReference Property Map
    Reference describing the ID of this routine.
    routineType String
    The type of routine.
    securityMode String
    Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
    sparkOptions Property Map
    Optional. Spark specific options.
    strictMode Boolean
    Optional. Can be set for procedures only. If true (the default), the definition body is validated when the procedure is created or updated. For procedures with an argument of ANY TYPE, definition body validation is not supported at creation/update time, so this field must be set to false explicitly.
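
    As a follow-on sketch (assuming the routineOutput value from the earlier example), nested result objects such as routineReference are themselves Output-wrapped and support lifted property access in TypeScript:

    import * as pulumi from "@pulumi/pulumi";

    // Compose a fully qualified routine ID from the RoutineReferenceResponse
    // fields; pulumi.interpolate unwraps the Output-wrapped strings.
    const ref = routineOutput.routineReference;
    export const qualifiedRoutineId = pulumi.interpolate`${ref.project}.${ref.datasetId}.${ref.routineId}`;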

    Supporting Types

    ArgumentResponse

    ArgumentKind string
    Optional. Defaults to FIXED_TYPE.
    DataType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataTypeResponse
    Required unless argument_kind = ANY_TYPE.
    IsAggregate bool
    Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, setting this to false is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause.
    Mode string
    Optional. Specifies whether the argument is input or output. Can be set for procedures only.
    Name string
    Optional. The name of this argument. Can be absent for function return argument.
    ArgumentKind string
    Optional. Defaults to FIXED_TYPE.
    DataType StandardSqlDataTypeResponse
    Required unless argument_kind = ANY_TYPE.
    IsAggregate bool
    Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, setting this to false is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause.
    Mode string
    Optional. Specifies whether the argument is input or output. Can be set for procedures only.
    Name string
    Optional. The name of this argument. Can be absent for function return argument.
    argumentKind String
    Optional. Defaults to FIXED_TYPE.
    dataType StandardSqlDataTypeResponse
    Required unless argument_kind = ANY_TYPE.
    isAggregate Boolean
    Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, setting this to false is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause.
    mode String
    Optional. Specifies whether the argument is input or output. Can be set for procedures only.
    name String
    Optional. The name of this argument. Can be absent for function return argument.
    argumentKind string
    Optional. Defaults to FIXED_TYPE.
    dataType StandardSqlDataTypeResponse
    Required unless argument_kind = ANY_TYPE.
    isAggregate boolean
    Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, setting this to false is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause.
    mode string
    Optional. Specifies whether the argument is input or output. Can be set for procedures only.
    name string
    Optional. The name of this argument. Can be absent for function return argument.
    argument_kind str
    Optional. Defaults to FIXED_TYPE.
    data_type StandardSqlDataTypeResponse
    Required unless argument_kind = ANY_TYPE.
    is_aggregate bool
    Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, setting this to false is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause.
    mode str
    Optional. Specifies whether the argument is input or output. Can be set for procedures only.
    name str
    Optional. The name of this argument. Can be absent for function return argument.
    argumentKind String
    Optional. Defaults to FIXED_TYPE.
    dataType Property Map
    Required unless argument_kind = ANY_TYPE.
    isAggregate Boolean
    Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, setting this to false is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause.
    mode String
    Optional. Specifies whether the argument is input or output. Can be set for procedures only.
    name String
    Optional. The name of this argument. Can be absent for function return argument.
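
    Tying this to the Add example in the return_type description above, a function declared as CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y) would surface two argument entries shaped roughly as in this hedged TypeScript sketch (camelCase field names as rendered above):

    // Hypothetical illustration of the arguments array for Add(x, y);
    // not captured SDK output, just the documented fields arranged as objects.
    const addArguments = [
        { name: "x", argumentKind: "FIXED_TYPE", dataType: { typeKind: "FLOAT64" } },
        { name: "y", argumentKind: "FIXED_TYPE", dataType: { typeKind: "FLOAT64" } },
    ];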

    RemoteFunctionOptionsResponse

    Connection string
    Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
    Endpoint string
    Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
    MaxBatchingRows string
    Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
    UserDefinedContext Dictionary<string, string>
    User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
    Connection string
    Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
    Endpoint string
    Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
    MaxBatchingRows string
    Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
    UserDefinedContext map[string]string
    User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
    connection String
    Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
    endpoint String
    Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
    maxBatchingRows String
    Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
    userDefinedContext Map<String,String>
    User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
    connection string
    Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
    endpoint string
    Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
    maxBatchingRows string
    Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
    userDefinedContext {[key: string]: string}
    User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
    connection str
    Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
    endpoint str
    Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
    max_batching_rows str
    Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
    user_defined_context Mapping[str, str]
    User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
    connection String
    Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
    endpoint String
    Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
    maxBatchingRows String
    Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
    userDefinedContext Map<String>
    User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.

    RoutineReferenceResponse

    DatasetId string
    The ID of the dataset containing this routine.
    Project string
    The ID of the project containing this routine.
    RoutineId string
    The ID of the routine. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 256 characters.
    DatasetId string
    The ID of the dataset containing this routine.
    Project string
    The ID of the project containing this routine.
    RoutineId string
    The ID of the routine. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 256 characters.
    datasetId String
    The ID of the dataset containing this routine.
    project String
    The ID of the project containing this routine.
    routineId String
    The ID of the routine. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 256 characters.
    datasetId string
    The ID of the dataset containing this routine.
    project string
    The ID of the project containing this routine.
    routineId string
    The ID of the routine. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 256 characters.
    dataset_id str
    The ID of the dataset containing this routine.
    project str
    The ID of the project containing this routine.
    routine_id str
    The ID of the routine. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 256 characters.
    datasetId String
    The ID of the dataset containing this routine.
    project String
    The ID of the project containing this routine.
    routineId String
    The ID of the routine. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 256 characters.

    SparkOptionsResponse

    ArchiveUris List<string>
    Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
    Connection string
    Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
    ContainerImage string
    Custom container image for the runtime environment.
    FileUris List<string>
    Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
    JarUris List<string>
    JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
    MainClass string
    The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of the main_class and main_jar_uri fields should be set for the Java/Scala language type.
    MainFileUri string
    The main file/jar URI of the Spark application. Exactly one of the definition_body and main_file_uri fields must be set for Python. Exactly one of the main_class and main_file_uri fields should be set for the Java/Scala language type.
    Properties Dictionary<string, string>
    Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
    PyFileUris List<string>
    Python files to be placed on the PYTHONPATH for PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
    RuntimeVersion string
    Runtime version. If not specified, the default runtime version is used.
    ArchiveUris []string
    Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
    Connection string
    Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
    ContainerImage string
    Custom container image for the runtime environment.
    FileUris []string
    Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
    JarUris []string
    JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
    MainClass string
    The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of the main_class and main_jar_uri fields should be set for the Java/Scala language type.
    MainFileUri string
    The main file/jar URI of the Spark application. Exactly one of the definition_body and main_file_uri fields must be set for Python. Exactly one of the main_class and main_file_uri fields should be set for the Java/Scala language type.
    Properties map[string]string
    Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
    PyFileUris []string
    Python files to be placed on the PYTHONPATH for PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
    RuntimeVersion string
    Runtime version. If not specified, the default runtime version is used.
    archiveUris List<String>
    Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
    connection String
    Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
    containerImage String
    Custom container image for the runtime environment.
    fileUris List<String>
    Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
    jarUris List<String>
    JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
    mainClass String
    The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of the main_class and main_jar_uri fields should be set for the Java/Scala language type.
    mainFileUri String
    The main file/jar URI of the Spark application. Exactly one of the definition_body and main_file_uri fields must be set for Python. Exactly one of the main_class and main_file_uri fields should be set for the Java/Scala language type.
    properties Map<String,String>
    Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
    pyFileUris List<String>
    Python files to be placed on the PYTHONPATH for PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
    runtimeVersion String
    Runtime version. If not specified, the default runtime version is used.
    archiveUris string[]
    Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
    connection string
    Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
    containerImage string
    Custom container image for the runtime environment.
    fileUris string[]
    Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
    jarUris string[]
    JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
    mainClass string
    The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of the main_class and main_jar_uri fields should be set for the Java/Scala language type.
    mainFileUri string
    The main file/jar URI of the Spark application. Exactly one of the definition_body and main_file_uri fields must be set for Python. Exactly one of the main_class and main_file_uri fields should be set for the Java/Scala language type.
    properties {[key: string]: string}
    Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
    pyFileUris string[]
    Python files to be placed on the PYTHONPATH for PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
    runtimeVersion string
    Runtime version. If not specified, the default runtime version is used.
    archive_uris Sequence[str]
    Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
    connection str
    Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
    container_image str
    Custom container image for the runtime environment.
    file_uris Sequence[str]
    Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
    jar_uris Sequence[str]
    JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
    main_class str
    The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of the main_class and main_jar_uri fields should be set for the Java/Scala language type.
    main_file_uri str
    The main file/jar URI of the Spark application. Exactly one of the definition_body and main_file_uri fields must be set for Python. Exactly one of the main_class and main_file_uri fields should be set for the Java/Scala language type.
    properties Mapping[str, str]
    Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
    py_file_uris Sequence[str]
    Python files to be placed on the PYTHONPATH for PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
    runtime_version str
    Runtime version. If not specified, the default runtime version is used.
    archiveUris List<String>
    Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
    connection String
    Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
    containerImage String
    Custom container image for the runtime environment.
    fileUris List<String>
    Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
    jarUris List<String>
    JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
    mainClass String
    The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of the main_class and main_jar_uri fields should be set for the Java/Scala language type.
    mainFileUri String
    The main file/jar URI of the Spark application. Exactly one of the definition_body and main_file_uri fields must be set for Python. Exactly one of the main_class and main_file_uri fields should be set for the Java/Scala language type.
    properties Map<String>
    Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
    pyFileUris List<String>
    Python files to be placed on the PYTHONPATH for PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
    runtimeVersion String
    Runtime version. If not specified, the default runtime version is used.

    StandardSqlDataTypeResponse

    StructType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlStructTypeResponse
    The fields of this struct, in order, if type_kind = "STRUCT".
    TypeKind string
    The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
    ArrayElementType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataTypeResponse
    The type of the array's elements, if type_kind = "ARRAY".
    RangeElementType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataTypeResponse
    The type of the range's elements, if type_kind = "RANGE".
    StructType StandardSqlStructTypeResponse
    The fields of this struct, in order, if type_kind = "STRUCT".
    TypeKind string
    The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
    ArrayElementType StandardSqlDataTypeResponse
    The type of the array's elements, if type_kind = "ARRAY".
    RangeElementType StandardSqlDataTypeResponse
    The type of the range's elements, if type_kind = "RANGE".
    structType StandardSqlStructTypeResponse
    The fields of this struct, in order, if type_kind = "STRUCT".
    typeKind String
    The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
    arrayElementType StandardSqlDataTypeResponse
    The type of the array's elements, if type_kind = "ARRAY".
    rangeElementType StandardSqlDataTypeResponse
    The type of the range's elements, if type_kind = "RANGE".
    structType StandardSqlStructTypeResponse
    The fields of this struct, in order, if type_kind = "STRUCT".
    typeKind string
    The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
    arrayElementType StandardSqlDataTypeResponse
    The type of the array's elements, if type_kind = "ARRAY".
    rangeElementType StandardSqlDataTypeResponse
    The type of the range's elements, if type_kind = "RANGE".
    struct_type StandardSqlStructTypeResponse
    The fields of this struct, in order, if type_kind = "STRUCT".
    type_kind str
    The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
    array_element_type StandardSqlDataTypeResponse
    The type of the array's elements, if type_kind = "ARRAY".
    range_element_type StandardSqlDataTypeResponse
    The type of the range's elements, if type_kind = "RANGE".
    structType Property Map
    The fields of this struct, in order, if type_kind = "STRUCT".
    typeKind String
    The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
    arrayElementType Property Map
    The type of the array's elements, if type_kind = "ARRAY".
    rangeElementType Property Map
    The type of the range's elements, if type_kind = "RANGE".
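
    Because the type is recursive through arrayElementType, structType, and rangeElementType, a nested GoogleSQL type unfolds into nested objects (see StandardSqlStructTypeResponse and StandardSqlFieldResponse below). A hedged TypeScript illustration for ARRAY<STRUCT<name STRING, score INT64>>:

    // Hypothetical shape only; field names follow the camelCase rendering above.
    const arrayOfStruct = {
        typeKind: "ARRAY",
        arrayElementType: {
            typeKind: "STRUCT",
            structType: {
                fields: [
                    { name: "name", type: { typeKind: "STRING" } },
                    { name: "score", type: { typeKind: "INT64" } },
                ],
            },
        },
    };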

    StandardSqlFieldResponse

    Name string
    Optional. The name of this field. Can be absent for struct fields.
    Type Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataTypeResponse
    Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
    Name string
    Optional. The name of this field. Can be absent for struct fields.
    Type StandardSqlDataTypeResponse
    Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
    name String
    Optional. The name of this field. Can be absent for struct fields.
    type StandardSqlDataTypeResponse
    Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
    name string
    Optional. The name of this field. Can be absent for struct fields.
    type StandardSqlDataTypeResponse
    Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
    name str
    Optional. The name of this field. Can be absent for struct fields.
    type StandardSqlDataTypeResponse
    Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
    name String
    Optional. The name of this field. Can be absent for struct fields.
    type Property Map
    Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).

    StandardSqlStructTypeResponse

    Fields []StandardSqlFieldResponse
    Fields within the struct.
    fields List<StandardSqlFieldResponse>
    Fields within the struct.
    fields StandardSqlFieldResponse[]
    Fields within the struct.
    fields List<Property Map>
    Fields within the struct.

    StandardSqlTableTypeResponse

    Columns []StandardSqlFieldResponse
    The columns in this table type.
    columns List<StandardSqlFieldResponse>
    The columns in this table type.
    columns StandardSqlFieldResponse[]
    The columns in this table type.
    columns Sequence[StandardSqlFieldResponse]
    The columns in this table type.
    columns List<Property Map>
    The columns in this table type.

    Package Details

    Repository
    Google Cloud Native pulumi/pulumi-google-native
    License
    Apache-2.0