Google Cloud Native is in preview. Google Cloud Classic is fully supported.
google-native.bigquery/v2.Routine
Creates a new routine in the dataset. Auto-naming is currently not supported for this resource.
Create Routine Resource
new Routine(name: string, args: RoutineArgs, opts?: CustomResourceOptions);
@overload
def Routine(resource_name: str,
            opts: Optional[ResourceOptions] = None,
            arguments: Optional[Sequence[ArgumentArgs]] = None,
            data_governance_type: Optional[RoutineDataGovernanceType] = None,
            dataset_id: Optional[str] = None,
            definition_body: Optional[str] = None,
            description: Optional[str] = None,
            determinism_level: Optional[RoutineDeterminismLevel] = None,
            imported_libraries: Optional[Sequence[str]] = None,
            language: Optional[RoutineLanguage] = None,
            project: Optional[str] = None,
            remote_function_options: Optional[RemoteFunctionOptionsArgs] = None,
            return_table_type: Optional[StandardSqlTableTypeArgs] = None,
            return_type: Optional[StandardSqlDataTypeArgs] = None,
            routine_reference: Optional[RoutineReferenceArgs] = None,
            routine_type: Optional[RoutineRoutineType] = None,
            security_mode: Optional[RoutineSecurityMode] = None,
            spark_options: Optional[SparkOptionsArgs] = None,
            strict_mode: Optional[bool] = None)
@overload
def Routine(resource_name: str,
            args: RoutineArgs,
            opts: Optional[ResourceOptions] = None)
func NewRoutine(ctx *Context, name string, args RoutineArgs, opts ...ResourceOption) (*Routine, error)
public Routine(string name, RoutineArgs args, CustomResourceOptions? opts = null)
public Routine(String name, RoutineArgs args)
public Routine(String name, RoutineArgs args, CustomResourceOptions options)
type: google-native:bigquery/v2:Routine
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args RoutineArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args RoutineArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args RoutineArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args RoutineArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args RoutineArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
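Before the property reference, here is a minimal TypeScript sketch of creating a SQL scalar UDF with this resource. The project, dataset, and routine IDs are placeholders, and the dataset is assumed to already exist; enum-typed fields accept the plain string values listed under Supporting Types below.

```typescript
import * as google_native from "@pulumi/google-native";

// Minimal SQL scalar UDF: multiply(x, y). Placeholder IDs throughout;
// auto-naming is not supported, so routineReference must be set explicitly.
const multiply = new google_native.bigquery.v2.Routine("multiply", {
    datasetId: "my_dataset",
    routineReference: {
        project: "my-project",
        datasetId: "my_dataset",
        routineId: "multiply",
    },
    routineType: "SCALAR_FUNCTION",
    language: "SQL",
    // For SQL functions this is the expression inside the AS clause.
    definitionBody: "x * y",
    arguments: [
        { name: "x", dataType: { typeKind: "FLOAT64" } },
        { name: "y", dataType: { typeKind: "FLOAT64" } },
    ],
    returnType: { typeKind: "FLOAT64" },
});
```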
Routine Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
The Routine resource accepts the following input properties:
C#:
- DatasetId string
- DefinitionBody string
The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement:
CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y))
the definition_body is concat(x, "\n", y) (\n is not replaced with a linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement:
CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n'
the definition_body is return "\n";\n
Note that both \n are replaced with linebreaks.
- RoutineReference Pulumi.GoogleNative.BigQuery.V2.Inputs.RoutineReference
Reference describing the ID of this routine.
- RoutineType Pulumi.GoogleNative.BigQuery.V2.RoutineRoutineType
The type of routine.
- Arguments List<Pulumi.GoogleNative.BigQuery.V2.Inputs.Argument>
Optional.
- DataGovernanceType Pulumi.GoogleNative.BigQuery.V2.RoutineDataGovernanceType
Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
- Description string
Optional. The description of the routine, if defined.
- DeterminismLevel Pulumi.GoogleNative.BigQuery.V2.RoutineDeterminismLevel
Optional. The determinism level of the JavaScript UDF, if defined.
- ImportedLibraries List<string>
Optional. If language = "JAVASCRIPT", this field stores the path of the imported JAVASCRIPT libraries.
- Language Pulumi.GoogleNative.BigQuery.V2.RoutineLanguage
Optional. Defaults to "SQL" if remote_function_options field is absent, not set otherwise.
- Project string
- RemoteFunctionOptions Pulumi.GoogleNative.BigQuery.V2.Inputs.RemoteFunctionOptions
Optional. Remote function specific options.
- ReturnTableType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlTableType
Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast to match the column types specified in the return table type, at query time.
- ReturnType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataType
Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified returned type at query time. For example, for the functions created with the following statements:
* CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y);
* CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1));
* CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1));
the return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
- SecurityMode Pulumi.GoogleNative.BigQuery.V2.RoutineSecurityMode
Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
- SparkOptions Pulumi.GoogleNative.BigQuery.V2.Inputs.SparkOptions
Optional. Spark specific options.
- StrictMode bool
Optional. Can be set for procedures only. If true (default), the definition body will be validated in the creation and the updates of the procedure. For procedures with an argument of ANY TYPE, the definition body validation is not supported at creation/update time, and thus this field must be set to false explicitly.
Go:
- DatasetId string
- DefinitionBody string
The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement:
CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y))
the definition_body is concat(x, "\n", y) (\n is not replaced with a linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement:
CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n'
the definition_body is return "\n";\n
Note that both \n are replaced with linebreaks.
- RoutineReference RoutineReferenceArgs
Reference describing the ID of this routine.
- RoutineType RoutineRoutineType
The type of routine.
- Arguments []ArgumentArgs
Optional.
- DataGovernanceType RoutineDataGovernanceType
Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
- Description string
Optional. The description of the routine, if defined.
- DeterminismLevel RoutineDeterminismLevel
Optional. The determinism level of the JavaScript UDF, if defined.
- ImportedLibraries []string
Optional. If language = "JAVASCRIPT", this field stores the path of the imported JAVASCRIPT libraries.
- Language RoutineLanguage
Optional. Defaults to "SQL" if remote_function_options field is absent, not set otherwise.
- Project string
- RemoteFunctionOptions RemoteFunctionOptionsArgs
Optional. Remote function specific options.
- ReturnTableType StandardSqlTableTypeArgs
Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast to match the column types specified in the return table type, at query time.
- ReturnType StandardSqlDataTypeArgs
Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified returned type at query time. For example, for the functions created with the following statements:
* CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y);
* CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1));
* CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1));
the return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
- SecurityMode RoutineSecurityMode
Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
- SparkOptions SparkOptionsArgs
Optional. Spark specific options.
- StrictMode bool
Optional. Can be set for procedures only. If true (default), the definition body will be validated in the creation and the updates of the procedure. For procedures with an argument of ANY TYPE, the definition body validation is not supported at creation/update time, and thus this field must be set to false explicitly.
Java:
- datasetId String
- definitionBody String
The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement:
CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y))
the definition_body is concat(x, "\n", y) (\n is not replaced with a linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement:
CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n'
the definition_body is return "\n";\n
Note that both \n are replaced with linebreaks.
- routineReference RoutineReference
Reference describing the ID of this routine.
- routineType RoutineRoutineType
The type of routine.
- arguments List<Argument>
Optional.
- dataGovernanceType RoutineDataGovernanceType
Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
- description String
Optional. The description of the routine, if defined.
- determinismLevel RoutineDeterminismLevel
Optional. The determinism level of the JavaScript UDF, if defined.
- importedLibraries List<String>
Optional. If language = "JAVASCRIPT", this field stores the path of the imported JAVASCRIPT libraries.
- language RoutineLanguage
Optional. Defaults to "SQL" if remote_function_options field is absent, not set otherwise.
- project String
- remoteFunctionOptions RemoteFunctionOptions
Optional. Remote function specific options.
- returnTableType StandardSqlTableType
Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast to match the column types specified in the return table type, at query time.
- returnType StandardSqlDataType
Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified returned type at query time. For example, for the functions created with the following statements:
* CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y);
* CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1));
* CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1));
the return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
- securityMode RoutineSecurityMode
Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
- sparkOptions SparkOptions
Optional. Spark specific options.
- strictMode Boolean
Optional. Can be set for procedures only. If true (default), the definition body will be validated in the creation and the updates of the procedure. For procedures with an argument of ANY TYPE, the definition body validation is not supported at creation/update time, and thus this field must be set to false explicitly.
TypeScript:
- datasetId string
- definitionBody string
The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement:
CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y))
the definition_body is concat(x, "\n", y) (\n is not replaced with a linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement:
CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n'
the definition_body is return "\n";\n
Note that both \n are replaced with linebreaks.
- routineReference RoutineReference
Reference describing the ID of this routine.
- routineType RoutineRoutineType
The type of routine.
- arguments Argument[]
Optional.
- dataGovernanceType RoutineDataGovernanceType
Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
- description string
Optional. The description of the routine, if defined.
- determinismLevel RoutineDeterminismLevel
Optional. The determinism level of the JavaScript UDF, if defined.
- importedLibraries string[]
Optional. If language = "JAVASCRIPT", this field stores the path of the imported JAVASCRIPT libraries.
- language RoutineLanguage
Optional. Defaults to "SQL" if remote_function_options field is absent, not set otherwise.
- project string
- remoteFunctionOptions RemoteFunctionOptions
Optional. Remote function specific options.
- returnTableType StandardSqlTableType
Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast to match the column types specified in the return table type, at query time.
- returnType StandardSqlDataType
Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified returned type at query time. For example, for the functions created with the following statements:
* CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y);
* CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1));
* CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1));
the return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
- securityMode RoutineSecurityMode
Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
- sparkOptions SparkOptions
Optional. Spark specific options.
- strictMode boolean
Optional. Can be set for procedures only. If true (default), the definition body will be validated in the creation and the updates of the procedure. For procedures with an argument of ANY TYPE, the definition body validation is not supported at creation/update time, and thus this field must be set to false explicitly.
Python:
- dataset_id str
- definition_body str
The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement:
CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y))
the definition_body is concat(x, "\n", y) (\n is not replaced with a linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement:
CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n'
the definition_body is return "\n";\n
Note that both \n are replaced with linebreaks.
- routine_reference RoutineReferenceArgs
Reference describing the ID of this routine.
- routine_type RoutineRoutineType
The type of routine.
- arguments Sequence[ArgumentArgs]
Optional.
- data_governance_type RoutineDataGovernanceType
Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
- description str
Optional. The description of the routine, if defined.
- determinism_level RoutineDeterminismLevel
Optional. The determinism level of the JavaScript UDF, if defined.
- imported_libraries Sequence[str]
Optional. If language = "JAVASCRIPT", this field stores the path of the imported JAVASCRIPT libraries.
- language RoutineLanguage
Optional. Defaults to "SQL" if remote_function_options field is absent, not set otherwise.
- project str
- remote_function_options RemoteFunctionOptionsArgs
Optional. Remote function specific options.
- return_table_type StandardSqlTableTypeArgs
Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast to match the column types specified in the return table type, at query time.
- return_type StandardSqlDataTypeArgs
Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified returned type at query time. For example, for the functions created with the following statements:
* CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y);
* CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1));
* CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1));
the return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
- security_mode RoutineSecurityMode
Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
- spark_options SparkOptionsArgs
Optional. Spark specific options.
- strict_mode bool
Optional. Can be set for procedures only. If true (default), the definition body will be validated in the creation and the updates of the procedure. For procedures with an argument of ANY TYPE, the definition body validation is not supported at creation/update time, and thus this field must be set to false explicitly.
YAML:
- datasetId String
- definitionBody String
The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement:
CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y))
the definition_body is concat(x, "\n", y) (\n is not replaced with a linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement:
CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n'
the definition_body is return "\n";\n
Note that both \n are replaced with linebreaks.
- routineReference Property Map
Reference describing the ID of this routine.
- routineType "ROUTINE_TYPE_UNSPECIFIED" | "SCALAR_FUNCTION" | "PROCEDURE" | "TABLE_VALUED_FUNCTION" | "AGGREGATE_FUNCTION"
The type of routine.
- arguments List<Property Map>
Optional.
- dataGovernanceType "DATA_GOVERNANCE_TYPE_UNSPECIFIED" | "DATA_MASKING"
Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
- description String
Optional. The description of the routine, if defined.
- determinismLevel "DETERMINISM_LEVEL_UNSPECIFIED" | "DETERMINISTIC" | "NOT_DETERMINISTIC"
Optional. The determinism level of the JavaScript UDF, if defined.
- importedLibraries List<String>
Optional. If language = "JAVASCRIPT", this field stores the path of the imported JAVASCRIPT libraries.
- language "LANGUAGE_UNSPECIFIED" | "SQL" | "JAVASCRIPT" | "PYTHON" | "JAVA" | "SCALA"
Optional. Defaults to "SQL" if remote_function_options field is absent, not set otherwise.
- project String
- remoteFunctionOptions Property Map
Optional. Remote function specific options.
- returnTableType Property Map
Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast to match the column types specified in the return table type, at query time.
- returnType Property Map
Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified returned type at query time. For example, for the functions created with the following statements:
* CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y);
* CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1));
* CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1));
the return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
- securityMode "SECURITY_MODE_UNSPECIFIED" | "DEFINER" | "INVOKER"
Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
- sparkOptions Property Map
Optional. Spark specific options.
- strictMode Boolean
Optional. Can be set for procedures only. If true (default), the definition body will be validated in the creation and the updates of the procedure. For procedures with an argument of ANY TYPE, the definition body validation is not supported at creation/update time, and thus this field must be set to false explicitly.
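To make the language/returnType interplay above concrete, here is a hedged TypeScript sketch of a JavaScript UDF; because the language is not SQL, returnType must be declared explicitly. All IDs are placeholders.

```typescript
import * as google_native from "@pulumi/google-native";

// JavaScript UDF: for language=JAVASCRIPT, definitionBody is the evaluated
// string of the AS clause. returnType is required for non-SQL languages.
const greet = new google_native.bigquery.v2.Routine("greet", {
    datasetId: "my_dataset",
    routineReference: {
        project: "my-project",
        datasetId: "my_dataset",
        routineId: "greet",
    },
    routineType: "SCALAR_FUNCTION",
    language: "JAVASCRIPT",
    definitionBody: 'return "Hello, " + name + "!";',
    arguments: [{ name: "name", dataType: { typeKind: "STRING" } }],
    returnType: { typeKind: "STRING" },
    determinismLevel: "DETERMINISTIC",
});
```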
Outputs
All input properties are implicitly available as output properties. Additionally, the Routine resource produces the following output properties:
C#:
- CreationTime string
The time when this routine was created, in milliseconds since the epoch.
- Etag string
A hash of this resource.
- Id string
The provider-assigned unique ID for this managed resource.
- LastModifiedTime string
The time when this routine was last modified, in milliseconds since the epoch.
Go:
- CreationTime string
The time when this routine was created, in milliseconds since the epoch.
- Etag string
A hash of this resource.
- Id string
The provider-assigned unique ID for this managed resource.
- LastModifiedTime string
The time when this routine was last modified, in milliseconds since the epoch.
Java:
- creationTime String
The time when this routine was created, in milliseconds since the epoch.
- etag String
A hash of this resource.
- id String
The provider-assigned unique ID for this managed resource.
- lastModifiedTime String
The time when this routine was last modified, in milliseconds since the epoch.
TypeScript:
- creationTime string
The time when this routine was created, in milliseconds since the epoch.
- etag string
A hash of this resource.
- id string
The provider-assigned unique ID for this managed resource.
- lastModifiedTime string
The time when this routine was last modified, in milliseconds since the epoch.
Python:
- creation_time str
The time when this routine was created, in milliseconds since the epoch.
- etag str
A hash of this resource.
- id str
The provider-assigned unique ID for this managed resource.
- last_modified_time str
The time when this routine was last modified, in milliseconds since the epoch.
YAML:
- creationTime String
The time when this routine was created, in milliseconds since the epoch.
- etag String
A hash of this resource.
- id String
The provider-assigned unique ID for this managed resource.
- lastModifiedTime String
The time when this routine was last modified, in milliseconds since the epoch.
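As a sketch, these outputs can be exported from a program such as the `multiply` example above (the export names are placeholders):

```typescript
// Provider-populated outputs, available once the routine is deployed.
export const routineEtag = multiply.etag;              // hash of the resource
export const routineCreatedMs = multiply.creationTime; // ms since epoch, as a string
export const routineModifiedMs = multiply.lastModifiedTime;
```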
Supporting Types
Argument, ArgumentArgs
C#:
- ArgumentKind Pulumi.GoogleNative.BigQuery.V2.ArgumentArgumentKind
Optional. Defaults to FIXED_TYPE.
- DataType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataType
Required unless argument_kind = ANY_TYPE.
- IsAggregate bool
Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, if set to false, it is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- Mode Pulumi.GoogleNative.BigQuery.V2.ArgumentMode
Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- Name string
Optional. The name of this argument. Can be absent for function return argument.
Go:
- ArgumentKind ArgumentArgumentKind
Optional. Defaults to FIXED_TYPE.
- DataType StandardSqlDataType
Required unless argument_kind = ANY_TYPE.
- IsAggregate bool
Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, if set to false, it is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- Mode ArgumentMode
Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- Name string
Optional. The name of this argument. Can be absent for function return argument.
Java:
- argumentKind ArgumentArgumentKind
Optional. Defaults to FIXED_TYPE.
- dataType StandardSqlDataType
Required unless argument_kind = ANY_TYPE.
- isAggregate Boolean
Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, if set to false, it is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- mode ArgumentMode
Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- name String
Optional. The name of this argument. Can be absent for function return argument.
TypeScript:
- argumentKind ArgumentArgumentKind
Optional. Defaults to FIXED_TYPE.
- dataType StandardSqlDataType
Required unless argument_kind = ANY_TYPE.
- isAggregate boolean
Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, if set to false, it is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- mode ArgumentMode
Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- name string
Optional. The name of this argument. Can be absent for function return argument.
Python:
- argument_kind ArgumentArgumentKind
Optional. Defaults to FIXED_TYPE.
- data_type StandardSqlDataType
Required unless argument_kind = ANY_TYPE.
- is_aggregate bool
Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, if set to false, it is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- mode ArgumentMode
Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- name str
Optional. The name of this argument. Can be absent for function return argument.
YAML:
- argumentKind "ARGUMENT_KIND_UNSPECIFIED" | "FIXED_TYPE" | "ANY_TYPE"
Optional. Defaults to FIXED_TYPE.
- dataType Property Map
Required unless argument_kind = ANY_TYPE.
- isAggregate Boolean
Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, if set to false, it is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- mode "MODE_UNSPECIFIED" | "IN" | "OUT" | "INOUT"
Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- name String
Optional. The name of this argument. Can be absent for function return argument.
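For illustration, a hedged TypeScript sketch of an ANY_TYPE argument as it would appear in a routine's `arguments` list; `value` is a made-up name:

```typescript
// dataType may be omitted only because argumentKind is ANY_TYPE;
// FIXED_TYPE arguments (the default) require a dataType.
const anyArg = {
    name: "value",
    argumentKind: "ANY_TYPE" as const,
};
```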
ArgumentArgumentKind, ArgumentArgumentKindArgs
C#:
- ArgumentKindUnspecified - ARGUMENT_KIND_UNSPECIFIED
Default value.
- FixedType - FIXED_TYPE
The argument is a variable with fully specified type, which can be a struct or an array, but not a table.
- AnyType - ANY_TYPE
The argument is any type, including struct or array, but not a table. To be added: FIXED_TABLE, ANY_TABLE
Go:
- ArgumentArgumentKindArgumentKindUnspecified - ARGUMENT_KIND_UNSPECIFIED
Default value.
- ArgumentArgumentKindFixedType - FIXED_TYPE
The argument is a variable with fully specified type, which can be a struct or an array, but not a table.
- ArgumentArgumentKindAnyType - ANY_TYPE
The argument is any type, including struct or array, but not a table. To be added: FIXED_TABLE, ANY_TABLE
Java:
- ArgumentKindUnspecified - ARGUMENT_KIND_UNSPECIFIED
Default value.
- FixedType - FIXED_TYPE
The argument is a variable with fully specified type, which can be a struct or an array, but not a table.
- AnyType - ANY_TYPE
The argument is any type, including struct or array, but not a table. To be added: FIXED_TABLE, ANY_TABLE
TypeScript:
- ArgumentKindUnspecified - ARGUMENT_KIND_UNSPECIFIED
Default value.
- FixedType - FIXED_TYPE
The argument is a variable with fully specified type, which can be a struct or an array, but not a table.
- AnyType - ANY_TYPE
The argument is any type, including struct or array, but not a table. To be added: FIXED_TABLE, ANY_TABLE
Python:
- ARGUMENT_KIND_UNSPECIFIED - ARGUMENT_KIND_UNSPECIFIED
Default value.
- FIXED_TYPE - FIXED_TYPE
The argument is a variable with fully specified type, which can be a struct or an array, but not a table.
- ANY_TYPE - ANY_TYPE
The argument is any type, including struct or array, but not a table. To be added: FIXED_TABLE, ANY_TABLE
YAML:
- "ARGUMENT_KIND_UNSPECIFIED" - ARGUMENT_KIND_UNSPECIFIED
Default value.
- "FIXED_TYPE" - FIXED_TYPE
The argument is a variable with fully specified type, which can be a struct or an array, but not a table.
- "ANY_TYPE" - ANY_TYPE
The argument is any type, including struct or array, but not a table. To be added: FIXED_TABLE, ANY_TABLE
ArgumentMode, ArgumentModeArgs
C#:
- ModeUnspecified - MODE_UNSPECIFIED
Default value.
- In - IN
The argument is input-only.
- Out - OUT
The argument is output-only.
- Inout - INOUT
The argument is both an input and an output.
Go:
- ArgumentModeModeUnspecified - MODE_UNSPECIFIED
Default value.
- ArgumentModeIn - IN
The argument is input-only.
- ArgumentModeOut - OUT
The argument is output-only.
- ArgumentModeInout - INOUT
The argument is both an input and an output.
Java:
- ModeUnspecified - MODE_UNSPECIFIED
Default value.
- In - IN
The argument is input-only.
- Out - OUT
The argument is output-only.
- Inout - INOUT
The argument is both an input and an output.
TypeScript:
- ModeUnspecified - MODE_UNSPECIFIED
Default value.
- In - IN
The argument is input-only.
- Out - OUT
The argument is output-only.
- Inout - INOUT
The argument is both an input and an output.
Python:
- MODE_UNSPECIFIED - MODE_UNSPECIFIED
Default value.
- IN_ - IN
The argument is input-only.
- OUT - OUT
The argument is output-only.
- INOUT - INOUT
The argument is both an input and an output.
YAML:
- "MODE_UNSPECIFIED" - MODE_UNSPECIFIED
Default value.
- "IN" - IN
The argument is input-only.
- "OUT" - OUT
The argument is output-only.
- "INOUT" - INOUT
The argument is both an input and an output.
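Argument modes apply to procedures only. A hedged TypeScript sketch (placeholder project, dataset, and table) of a procedure that returns a row count through an OUT argument:

```typescript
import * as google_native from "@pulumi/google-native";

// Stored procedure with an OUT argument. definitionBody holds the
// procedure's SQL statements.
const countRows = new google_native.bigquery.v2.Routine("countRows", {
    datasetId: "my_dataset",
    routineReference: {
        project: "my-project",
        datasetId: "my_dataset",
        routineId: "count_rows",
    },
    routineType: "PROCEDURE",
    definitionBody:
        "SET total = (SELECT COUNT(*) FROM `my-project.my_dataset.my_table`);",
    arguments: [{
        name: "total",
        mode: "OUT",
        dataType: { typeKind: "INT64" },
    }],
});
```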
ArgumentResponse, ArgumentResponseArgs
C#:
- ArgumentKind string
Optional. Defaults to FIXED_TYPE.
- DataType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataTypeResponse
Required unless argument_kind = ANY_TYPE.
- IsAggregate bool
Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, if set to false, it is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- Mode string
Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- Name string
Optional. The name of this argument. Can be absent for function return argument.
Go:
- ArgumentKind string
Optional. Defaults to FIXED_TYPE.
- DataType StandardSqlDataTypeResponse
Required unless argument_kind = ANY_TYPE.
- IsAggregate bool
Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, if set to false, it is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- Mode string
Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- Name string
Optional. The name of this argument. Can be absent for function return argument.
Java:
- argumentKind String
Optional. Defaults to FIXED_TYPE.
- dataType StandardSqlDataTypeResponse
Required unless argument_kind = ANY_TYPE.
- isAggregate Boolean
Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, if set to false, it is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- mode String
Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- name String
Optional. The name of this argument. Can be absent for function return argument.
TypeScript:
- argumentKind string
Optional. Defaults to FIXED_TYPE.
- dataType StandardSqlDataTypeResponse
Required unless argument_kind = ANY_TYPE.
- isAggregate boolean
Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, if set to false, it is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- mode string
Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- name string
Optional. The name of this argument. Can be absent for function return argument.
Python:
- argument_kind str
Optional. Defaults to FIXED_TYPE.
- data_type StandardSqlDataTypeResponse
Required unless argument_kind = ANY_TYPE.
- is_aggregate bool
Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, if set to false, it is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- mode str
Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- name str
Optional. The name of this argument. Can be absent for function return argument.
YAML:
- argumentKind String
Optional. Defaults to FIXED_TYPE.
- dataType Property Map
Required unless argument_kind = ANY_TYPE.
- isAggregate Boolean
Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, if set to false, it is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- mode String
Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- name String
Optional. The name of this argument. Can be absent for function return argument.
RemoteFunctionOptions, RemoteFunctionOptionsArgs
C#:
- Connection string
Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- Endpoint string
Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- MaxBatchingRows string
Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- UserDefinedContext Dictionary<string, string>
User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
Go:
- Connection string
Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- Endpoint string
Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- MaxBatchingRows string
Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- UserDefinedContext map[string]string
User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
Java:
- connection String
Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- endpoint String
Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- maxBatchingRows String
Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- userDefinedContext Map<String,String>
User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
TypeScript:
- connection string
Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- endpoint string
Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- maxBatchingRows string
Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- userDefinedContext {[key: string]: string}
User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
Python:
- connection str
Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- endpoint str
Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- max_batching_rows str
Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- user_defined_context Mapping[str, str]
User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
YAML:
- connection String
Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- endpoint String
Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- maxBatchingRows String
Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- userDefinedContext Map<String>
User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
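A hedged TypeScript sketch of wiring these options into a routine; the connection and Cloud Functions endpoint are assumed to already exist, and all names are placeholders. The provider schema lists definitionBody as required, so it is set to an empty string here on the assumption that remote functions have no SQL body.

```typescript
import * as google_native from "@pulumi/google-native";

// Remote function backed by an HTTP endpoint reached through a BigQuery
// connection. Placeholder project/dataset/connection/endpoint throughout.
const remoteAdd = new google_native.bigquery.v2.Routine("remoteAdd", {
    datasetId: "my_dataset",
    routineReference: {
        project: "my-project",
        datasetId: "my_dataset",
        routineId: "remote_add",
    },
    routineType: "SCALAR_FUNCTION",
    definitionBody: "", // assumption: no body for remote functions
    remoteFunctionOptions: {
        connection: "projects/my-project/locations/us-east1/connections/my-conn",
        endpoint: "https://us-east1-my-project.cloudfunctions.net/remote_add",
        maxBatchingRows: "50", // string-typed in the API
        userDefinedContext: { mode: "add" },
    },
    arguments: [
        { name: "x", dataType: { typeKind: "INT64" } },
        { name: "y", dataType: { typeKind: "INT64" } },
    ],
    returnType: { typeKind: "INT64" },
});
```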
RemoteFunctionOptionsResponse, RemoteFunctionOptionsResponseArgs
C#:
- Connection string
Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- Endpoint string
Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- MaxBatchingRows string
Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- UserDefinedContext Dictionary<string, string>
User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
Go:
- Connection string
Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- Endpoint string
Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- MaxBatchingRows string
Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- UserDefinedContext map[string]string
User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
Java:
- connection String
Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- endpoint String
Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- maxBatchingRows String
Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- userDefinedContext Map<String,String>
User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
TypeScript:
- connection string
Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- endpoint string
Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- maxBatchingRows string
Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- userDefinedContext {[key: string]: string}
User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
Python:
- connection str
Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- endpoint str
Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- max_batching_rows str
Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- user_defined_context Mapping[str, str]
User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
YAML:
- connection String
Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- endpoint String
Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- maxBatchingRows String
Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- userDefinedContext Map<String>
User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
RoutineDataGovernanceType, RoutineDataGovernanceTypeArgs
C#:
- DataGovernanceTypeUnspecified - DATA_GOVERNANCE_TYPE_UNSPECIFIED
The data governance type is unspecified.
- DataMasking - DATA_MASKING
The data governance type is data masking.
Go:
- RoutineDataGovernanceTypeDataGovernanceTypeUnspecified - DATA_GOVERNANCE_TYPE_UNSPECIFIED
The data governance type is unspecified.
- RoutineDataGovernanceTypeDataMasking - DATA_MASKING
The data governance type is data masking.
Java:
- DataGovernanceTypeUnspecified - DATA_GOVERNANCE_TYPE_UNSPECIFIED
The data governance type is unspecified.
- DataMasking - DATA_MASKING
The data governance type is data masking.
TypeScript:
- DataGovernanceTypeUnspecified - DATA_GOVERNANCE_TYPE_UNSPECIFIED
The data governance type is unspecified.
- DataMasking - DATA_MASKING
The data governance type is data masking.
Python:
- DATA_GOVERNANCE_TYPE_UNSPECIFIED - DATA_GOVERNANCE_TYPE_UNSPECIFIED
The data governance type is unspecified.
- DATA_MASKING - DATA_MASKING
The data governance type is data masking.
YAML:
- "DATA_GOVERNANCE_TYPE_UNSPECIFIED" - DATA_GOVERNANCE_TYPE_UNSPECIFIED
The data governance type is unspecified.
- "DATA_MASKING" - DATA_MASKING
The data governance type is data masking.
RoutineDeterminismLevel, RoutineDeterminismLevelArgs
C#:
- DeterminismLevelUnspecified - DETERMINISM_LEVEL_UNSPECIFIED
The determinism of the UDF is unspecified.
- Deterministic - DETERMINISTIC
The UDF is deterministic, meaning that 2 function calls with the same inputs always produce the same result, even across 2 query runs.
- NotDeterministic - NOT_DETERMINISTIC
The UDF is not deterministic.
Go:
- RoutineDeterminismLevelDeterminismLevelUnspecified - DETERMINISM_LEVEL_UNSPECIFIED
The determinism of the UDF is unspecified.
- RoutineDeterminismLevelDeterministic - DETERMINISTIC
The UDF is deterministic, meaning that 2 function calls with the same inputs always produce the same result, even across 2 query runs.
- RoutineDeterminismLevelNotDeterministic - NOT_DETERMINISTIC
The UDF is not deterministic.
Java:
- DeterminismLevelUnspecified - DETERMINISM_LEVEL_UNSPECIFIED
The determinism of the UDF is unspecified.
- Deterministic - DETERMINISTIC
The UDF is deterministic, meaning that 2 function calls with the same inputs always produce the same result, even across 2 query runs.
- NotDeterministic - NOT_DETERMINISTIC
The UDF is not deterministic.
TypeScript:
- DeterminismLevelUnspecified - DETERMINISM_LEVEL_UNSPECIFIED
The determinism of the UDF is unspecified.
- Deterministic - DETERMINISTIC
The UDF is deterministic, meaning that 2 function calls with the same inputs always produce the same result, even across 2 query runs.
- NotDeterministic - NOT_DETERMINISTIC
The UDF is not deterministic.
Python:
- DETERMINISM_LEVEL_UNSPECIFIED - DETERMINISM_LEVEL_UNSPECIFIED
The determinism of the UDF is unspecified.
- DETERMINISTIC - DETERMINISTIC
The UDF is deterministic, meaning that 2 function calls with the same inputs always produce the same result, even across 2 query runs.
- NOT_DETERMINISTIC - NOT_DETERMINISTIC
The UDF is not deterministic.
YAML:
- "DETERMINISM_LEVEL_UNSPECIFIED" - DETERMINISM_LEVEL_UNSPECIFIED
The determinism of the UDF is unspecified.
- "DETERMINISTIC" - DETERMINISTIC
The UDF is deterministic, meaning that 2 function calls with the same inputs always produce the same result, even across 2 query runs.
- "NOT_DETERMINISTIC" - NOT_DETERMINISTIC
The UDF is not deterministic.
RoutineLanguage, RoutineLanguageArgs
C#:
- LanguageUnspecified - LANGUAGE_UNSPECIFIED
Default value.
- Sql - SQL
SQL language.
- Javascript - JAVASCRIPT
JavaScript language.
- Python - PYTHON
Python language.
- Java - JAVA
Java language.
- Scala - SCALA
Scala language.
Go:
- RoutineLanguageLanguageUnspecified - LANGUAGE_UNSPECIFIED
Default value.
- RoutineLanguageSql - SQL
SQL language.
- RoutineLanguageJavascript - JAVASCRIPT
JavaScript language.
- RoutineLanguagePython - PYTHON
Python language.
- RoutineLanguageJava - JAVA
Java language.
- RoutineLanguageScala - SCALA
Scala language.
Java:
- LanguageUnspecified - LANGUAGE_UNSPECIFIED
Default value.
- Sql - SQL
SQL language.
- Javascript - JAVASCRIPT
JavaScript language.
- Python - PYTHON
Python language.
- Java - JAVA
Java language.
- Scala - SCALA
Scala language.
TypeScript:
- LanguageUnspecified - LANGUAGE_UNSPECIFIED
Default value.
- Sql - SQL
SQL language.
- Javascript - JAVASCRIPT
JavaScript language.
- Python - PYTHON
Python language.
- Java - JAVA
Java language.
- Scala - SCALA
Scala language.
Python:
- LANGUAGE_UNSPECIFIED - LANGUAGE_UNSPECIFIED
Default value.
- SQL - SQL
SQL language.
- JAVASCRIPT - JAVASCRIPT
JavaScript language.
- PYTHON - PYTHON
Python language.
- JAVA - JAVA
Java language.
- SCALA - SCALA
Scala language.
YAML:
- "LANGUAGE_UNSPECIFIED" - LANGUAGE_UNSPECIFIED
Default value.
- "SQL" - SQL
SQL language.
- "JAVASCRIPT" - JAVASCRIPT
JavaScript language.
- "PYTHON" - PYTHON
Python language.
- "JAVA" - JAVA
Java language.
- "SCALA" - SCALA
Scala language.
RoutineReference, RoutineReferenceArgs
- dataset_id str
The ID of the dataset containing this routine.
- project str
The ID of the project containing this routine.
- routine_id str
The ID of the routine. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 256 characters.
RoutineReferenceResponse, RoutineReferenceResponseArgs
- dataset_id str
The ID of the dataset containing this routine.
- project str
The ID of the project containing this routine.
- routine_id str
The ID of the routine. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 256 characters.
RoutineRoutineType, RoutineRoutineTypeArgs
- Routine
Type Unspecified - ROUTINE_TYPE_UNSPECIFIED
Default value.
- Scalar
Function - SCALAR_FUNCTION
Non-built-in persistent scalar function.
- Procedure
- PROCEDURE
Stored procedure.
- Table
Valued Function - TABLE_VALUED_FUNCTION
Non-built-in persistent TVF.
- Aggregate
Function - AGGREGATE_FUNCTION
Non-built-in persistent aggregate function.
- Routine
Routine Type Routine Type Unspecified - ROUTINE_TYPE_UNSPECIFIED
Default value.
- Routine
Routine Type Scalar Function - SCALAR_FUNCTION
Non-built-in persistent scalar function.
- Routine
Routine Type Procedure - PROCEDURE
Stored procedure.
- Routine
Routine Type Table Valued Function - TABLE_VALUED_FUNCTION
Non-built-in persistent TVF.
- Routine
Routine Type Aggregate Function - AGGREGATE_FUNCTION
Non-built-in persistent aggregate function.
- Routine
Type Unspecified - ROUTINE_TYPE_UNSPECIFIED
Default value.
- Scalar
Function - SCALAR_FUNCTION
Non-built-in persistent scalar function.
- Procedure
- PROCEDURE
Stored procedure.
- Table
Valued Function - TABLE_VALUED_FUNCTION
Non-built-in persistent TVF.
- Aggregate
Function - AGGREGATE_FUNCTION
Non-built-in persistent aggregate function.
- Routine
Type Unspecified - ROUTINE_TYPE_UNSPECIFIED
Default value.
- Scalar
Function - SCALAR_FUNCTION
Non-built-in persistent scalar function.
- Procedure
- PROCEDURE
Stored procedure.
- Table
Valued Function - TABLE_VALUED_FUNCTION
Non-built-in persistent TVF.
- Aggregate
Function - AGGREGATE_FUNCTION
Non-built-in persistent aggregate function.
- ROUTINE_TYPE_UNSPECIFIED
- ROUTINE_TYPE_UNSPECIFIED
Default value.
- SCALAR_FUNCTION
- SCALAR_FUNCTION
Non-built-in persistent scalar function.
- PROCEDURE
- PROCEDURE
Stored procedure.
- TABLE_VALUED_FUNCTION
- TABLE_VALUED_FUNCTION
Non-built-in persistent TVF.
- AGGREGATE_FUNCTION
- AGGREGATE_FUNCTION
Non-built-in persistent aggregate function.
- "ROUTINE_TYPE_UNSPECIFIED"
- ROUTINE_TYPE_UNSPECIFIED
Default value.
- "SCALAR_FUNCTION"
- SCALAR_FUNCTION
Non-built-in persistent scalar function.
- "PROCEDURE"
- PROCEDURE
Stored procedure.
- "TABLE_VALUED_FUNCTION"
- TABLE_VALUED_FUNCTION
Non-built-in persistent TVF.
- "AGGREGATE_FUNCTION"
- AGGREGATE_FUNCTION
Non-built-in persistent aggregate function.
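For the PROCEDURE routine type, the body is a multi-statement GoogleSQL script and no return type is declared. A minimal sketch under the same placeholder identifiers:

import pulumi_google_native.bigquery.v2 as bigquery

# A stored procedure: definition_body is a GoogleSQL script rather
# than a single expression.
cleanup = bigquery.Routine(
    "cleanup-proc",
    project="my-project",
    dataset_id="my_dataset",
    routine_reference=bigquery.RoutineReferenceArgs(
        project="my-project",
        dataset_id="my_dataset",
        routine_id="cleanup_stale_rows",
    ),
    routine_type=bigquery.RoutineRoutineType.PROCEDURE,
    definition_body=(
        "DELETE FROM my_dataset.events "  # placeholder table
        "WHERE event_ts < TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 90 DAY);"
    ),
)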
RoutineSecurityMode, RoutineSecurityModeArgs
- Security Mode Unspecified
- SECURITY_MODE_UNSPECIFIED
The security mode of the routine is unspecified.
- Definer
- DEFINER
The routine is to be executed with the privileges of the user who defines it.
- Invoker
- INVOKER
The routine is to be executed with the privileges of the user who invokes it.
- Routine Security Mode Security Mode Unspecified
- SECURITY_MODE_UNSPECIFIED
The security mode of the routine is unspecified.
- Routine Security Mode Definer
- DEFINER
The routine is to be executed with the privileges of the user who defines it.
- Routine Security Mode Invoker
- INVOKER
The routine is to be executed with the privileges of the user who invokes it.
- Security Mode Unspecified
- SECURITY_MODE_UNSPECIFIED
The security mode of the routine is unspecified.
- Definer
- DEFINER
The routine is to be executed with the privileges of the user who defines it.
- Invoker
- INVOKER
The routine is to be executed with the privileges of the user who invokes it.
- SECURITY_MODE_UNSPECIFIED
- SECURITY_MODE_UNSPECIFIED
The security mode of the routine is unspecified.
- DEFINER
- DEFINER
The routine is to be executed with the privileges of the user who defines it.
- INVOKER
- INVOKER
The routine is to be executed with the privileges of the user who invokes it.
- "SECURITY_MODE_UNSPECIFIED"
- SECURITY_MODE_UNSPECIFIED
The security mode of the routine is unspecified.
- "DEFINER"
- DEFINER
The routine is to be executed with the privileges of the user who defines it.
- "INVOKER"
- INVOKER
The routine is to be executed with the privileges of the user who invokes it.
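The security mode is set directly on the resource. A sketch, assuming the Python enum members mirror the API values (placeholder identifiers as before):

import pulumi_google_native.bigquery.v2 as bigquery

# Execute with the caller's (invoker's) privileges instead of the
# definer's.
fn = bigquery.Routine(
    "add-one-invoker",
    project="my-project",
    dataset_id="my_dataset",
    routine_reference=bigquery.RoutineReferenceArgs(
        project="my-project",
        dataset_id="my_dataset",
        routine_id="add_one_invoker",
    ),
    routine_type=bigquery.RoutineRoutineType.SCALAR_FUNCTION,
    language=bigquery.RoutineLanguage.SQL,
    definition_body="x + 1",
    arguments=[bigquery.ArgumentArgs(
        name="x",
        data_type=bigquery.StandardSqlDataTypeArgs(
            type_kind=bigquery.StandardSqlDataTypeTypeKind.INT64,
        ),
    )],
    return_type=bigquery.StandardSqlDataTypeArgs(
        type_kind=bigquery.StandardSqlDataTypeTypeKind.INT64,
    ),
    security_mode=bigquery.RoutineSecurityMode.INVOKER,
)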
SparkOptions, SparkOptionsArgs
- Archive Uris List<string>
Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- Connection string
Fully qualified name of the user-provided Spark connection object. Format:
"projects/{project_id}/locations/{location_id}/connections/{connection_id}"
- Container Image string
Custom container image for the runtime environment.
- File Uris List<string>
Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- Jar Uris List<string>
JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
- Main Class string
The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of the main_class and main_jar_uri fields should be set for the Java/Scala language types.
- Main File Uri string
The main file/jar URI of the Spark application. Exactly one of the definition_body and main_file_uri fields must be set for Python. Exactly one of the main_class and main_file_uri fields should be set for the Java/Scala language types.
- Properties Dictionary<string, string>
Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
- Py File Uris List<string>
Python files to be placed on the PYTHONPATH for PySpark applications. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
- Runtime Version string
Runtime version. If not specified, the default runtime version is used.
- Archive Uris []string
Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- Connection string
Fully qualified name of the user-provided Spark connection object. Format:
"projects/{project_id}/locations/{location_id}/connections/{connection_id}"
- Container Image string
Custom container image for the runtime environment.
- File Uris []string
Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- Jar Uris []string
JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
- Main Class string
The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of the main_class and main_jar_uri fields should be set for the Java/Scala language types.
- Main File Uri string
The main file/jar URI of the Spark application. Exactly one of the definition_body and main_file_uri fields must be set for Python. Exactly one of the main_class and main_file_uri fields should be set for the Java/Scala language types.
- Properties map[string]string
Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
- Py File Uris []string
Python files to be placed on the PYTHONPATH for PySpark applications. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
- Runtime Version string
Runtime version. If not specified, the default runtime version is used.
- archive Uris List<String>
Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- connection String
Fully qualified name of the user-provided Spark connection object. Format:
"projects/{project_id}/locations/{location_id}/connections/{connection_id}"
- container Image String
Custom container image for the runtime environment.
- file Uris List<String>
Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- jar Uris List<String>
JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
- main Class String
The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of the main_class and main_jar_uri fields should be set for the Java/Scala language types.
- main File Uri String
The main file/jar URI of the Spark application. Exactly one of the definition_body and main_file_uri fields must be set for Python. Exactly one of the main_class and main_file_uri fields should be set for the Java/Scala language types.
- properties Map<String,String>
Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
- py File Uris List<String>
Python files to be placed on the PYTHONPATH for PySpark applications. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
- runtime Version String
Runtime version. If not specified, the default runtime version is used.
- archive Uris string[]
Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- connection string
Fully qualified name of the user-provided Spark connection object. Format:
"projects/{project_id}/locations/{location_id}/connections/{connection_id}"
- container Image string
Custom container image for the runtime environment.
- file Uris string[]
Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- jar Uris string[]
JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
- main Class string
The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of the main_class and main_jar_uri fields should be set for the Java/Scala language types.
- main File Uri string
The main file/jar URI of the Spark application. Exactly one of the definition_body and main_file_uri fields must be set for Python. Exactly one of the main_class and main_file_uri fields should be set for the Java/Scala language types.
- properties {[key: string]: string}
Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
- py File Uris string[]
Python files to be placed on the PYTHONPATH for PySpark applications. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
- runtime Version string
Runtime version. If not specified, the default runtime version is used.
- archive_uris Sequence[str]
Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- connection str
Fully qualified name of the user-provided Spark connection object. Format:
"projects/{project_id}/locations/{location_id}/connections/{connection_id}"
- container_image str
Custom container image for the runtime environment.
- file_uris Sequence[str]
Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- jar_uris Sequence[str]
JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
- main_class str
The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of the main_class and main_jar_uri fields should be set for the Java/Scala language types.
- main_file_uri str
The main file/jar URI of the Spark application. Exactly one of the definition_body and main_file_uri fields must be set for Python. Exactly one of the main_class and main_file_uri fields should be set for the Java/Scala language types.
- properties Mapping[str, str]
Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
- py_file_uris Sequence[str]
Python files to be placed on the PYTHONPATH for PySpark applications. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
- runtime_version str
Runtime version. If not specified, the default runtime version is used.
- archive Uris List<String>
Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- connection String
Fully qualified name of the user-provided Spark connection object. Format:
"projects/{project_id}/locations/{location_id}/connections/{connection_id}"
- container Image String
Custom container image for the runtime environment.
- file Uris List<String>
Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- jar Uris List<String>
JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
- main Class String
The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of the main_class and main_jar_uri fields should be set for the Java/Scala language types.
- main File Uri String
The main file/jar URI of the Spark application. Exactly one of the definition_body and main_file_uri fields must be set for Python. Exactly one of the main_class and main_file_uri fields should be set for the Java/Scala language types.
- properties Map<String>
Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
- py File Uris List<String>
Python files to be placed on the PYTHONPATH for PySpark applications. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
- runtime Version String
Runtime version. If not specified, the default runtime version is used.
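Putting SparkOptions to use: a sketch of a PySpark stored procedure whose Python body is supplied inline through definition_body, so main_file_uri is omitted (exactly one of the two may be set for Python). The connection name, runtime version, bucket, and property values are all placeholders:

import pulumi_google_native.bigquery.v2 as bigquery

spark_proc = bigquery.Routine(
    "spark-proc",
    project="my-project",
    dataset_id="my_dataset",
    routine_reference=bigquery.RoutineReferenceArgs(
        project="my-project",
        dataset_id="my_dataset",
        routine_id="spark_word_count",
    ),
    routine_type=bigquery.RoutineRoutineType.PROCEDURE,
    language=bigquery.RoutineLanguage.PYTHON,
    definition_body="""\
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("word-count").getOrCreate()
# ... job logic ...
""",
    spark_options=bigquery.SparkOptionsArgs(
        connection="projects/my-project/locations/us/connections/my-spark-conn",
        runtime_version="2.1",  # placeholder runtime version
        properties={"spark.executor.instances": "2"},  # placeholder property
        py_file_uris=["gs://my-bucket/helpers.py"],  # extra modules on PYTHONPATH
    ),
)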
SparkOptionsResponse, SparkOptionsResponseArgs
- Archive Uris List<string>
Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- Connection string
Fully qualified name of the user-provided Spark connection object. Format:
"projects/{project_id}/locations/{location_id}/connections/{connection_id}"
- Container Image string
Custom container image for the runtime environment.
- File Uris List<string>
Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- Jar Uris List<string>
JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
- Main Class string
The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of the main_class and main_jar_uri fields should be set for the Java/Scala language types.
- Main File Uri string
The main file/jar URI of the Spark application. Exactly one of the definition_body and main_file_uri fields must be set for Python. Exactly one of the main_class and main_file_uri fields should be set for the Java/Scala language types.
- Properties Dictionary<string, string>
Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
- Py File Uris List<string>
Python files to be placed on the PYTHONPATH for PySpark applications. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
- Runtime Version string
Runtime version. If not specified, the default runtime version is used.
- Archive Uris []string
Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- Connection string
Fully qualified name of the user-provided Spark connection object. Format:
"projects/{project_id}/locations/{location_id}/connections/{connection_id}"
- Container Image string
Custom container image for the runtime environment.
- File Uris []string
Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- Jar Uris []string
JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
- Main Class string
The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of the main_class and main_jar_uri fields should be set for the Java/Scala language types.
- Main File Uri string
The main file/jar URI of the Spark application. Exactly one of the definition_body and main_file_uri fields must be set for Python. Exactly one of the main_class and main_file_uri fields should be set for the Java/Scala language types.
- Properties map[string]string
Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
- Py File Uris []string
Python files to be placed on the PYTHONPATH for PySpark applications. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
- Runtime Version string
Runtime version. If not specified, the default runtime version is used.
- archive Uris List<String>
Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- connection String
Fully qualified name of the user-provided Spark connection object. Format:
"projects/{project_id}/locations/{location_id}/connections/{connection_id}"
- container Image String
Custom container image for the runtime environment.
- file Uris List<String>
Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- jar Uris List<String>
JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
- main Class String
The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of the main_class and main_jar_uri fields should be set for the Java/Scala language types.
- main File Uri String
The main file/jar URI of the Spark application. Exactly one of the definition_body and main_file_uri fields must be set for Python. Exactly one of the main_class and main_file_uri fields should be set for the Java/Scala language types.
- properties Map<String,String>
Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
- py File Uris List<String>
Python files to be placed on the PYTHONPATH for PySpark applications. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
- runtime Version String
Runtime version. If not specified, the default runtime version is used.
- archive Uris string[]
Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- connection string
Fully qualified name of the user-provided Spark connection object. Format:
"projects/{project_id}/locations/{location_id}/connections/{connection_id}"
- container Image string
Custom container image for the runtime environment.
- file Uris string[]
Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- jar Uris string[]
JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
- main Class string
The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of the main_class and main_jar_uri fields should be set for the Java/Scala language types.
- main File Uri string
The main file/jar URI of the Spark application. Exactly one of the definition_body and main_file_uri fields must be set for Python. Exactly one of the main_class and main_file_uri fields should be set for the Java/Scala language types.
- properties {[key: string]: string}
Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
- py File Uris string[]
Python files to be placed on the PYTHONPATH for PySpark applications. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
- runtime Version string
Runtime version. If not specified, the default runtime version is used.
- archive_uris Sequence[str]
Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- connection str
Fully qualified name of the user-provided Spark connection object. Format:
"projects/{project_id}/locations/{location_id}/connections/{connection_id}"
- container_image str
Custom container image for the runtime environment.
- file_uris Sequence[str]
Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- jar_uris Sequence[str]
JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
- main_class str
The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of the main_class and main_jar_uri fields should be set for the Java/Scala language types.
- main_file_uri str
The main file/jar URI of the Spark application. Exactly one of the definition_body and main_file_uri fields must be set for Python. Exactly one of the main_class and main_file_uri fields should be set for the Java/Scala language types.
- properties Mapping[str, str]
Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
- py_file_uris Sequence[str]
Python files to be placed on the PYTHONPATH for PySpark applications. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
- runtime_version str
Runtime version. If not specified, the default runtime version is used.
- archive Uris List<String>
Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- connection String
Fully qualified name of the user-provided Spark connection object. Format:
"projects/{project_id}/locations/{location_id}/connections/{connection_id}"
- container Image String
Custom container image for the runtime environment.
- file Uris List<String>
Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- jar Uris List<String>
JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
- main Class String
The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of the main_class and main_jar_uri fields should be set for the Java/Scala language types.
- main File Uri String
The main file/jar URI of the Spark application. Exactly one of the definition_body and main_file_uri fields must be set for Python. Exactly one of the main_class and main_file_uri fields should be set for the Java/Scala language types.
- properties Map<String>
Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
- py File Uris List<String>
Python files to be placed on the PYTHONPATH for PySpark applications. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
- runtime Version String
Runtime version. If not specified, the default runtime version is used.
StandardSqlDataType, StandardSqlDataTypeArgs
- Type Kind Pulumi.GoogleNative.BigQuery.V2.StandardSqlDataTypeTypeKind
The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- Array Element Type Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataType
The type of the array's elements, if type_kind = "ARRAY".
- Range Element Type Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataType
The type of the range's elements, if type_kind = "RANGE".
- Struct Type Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlStructType
The fields of this struct, in order, if type_kind = "STRUCT".
- Type Kind StandardSqlDataTypeTypeKind
The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- Array Element Type StandardSqlDataType
The type of the array's elements, if type_kind = "ARRAY".
- Range Element Type StandardSqlDataType
The type of the range's elements, if type_kind = "RANGE".
- Struct Type StandardSqlStructType
The fields of this struct, in order, if type_kind = "STRUCT".
- type Kind StandardSqlDataTypeTypeKind
The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- array Element Type StandardSqlDataType
The type of the array's elements, if type_kind = "ARRAY".
- range Element Type StandardSqlDataType
The type of the range's elements, if type_kind = "RANGE".
- struct Type StandardSqlStructType
The fields of this struct, in order, if type_kind = "STRUCT".
- type Kind StandardSqlDataTypeTypeKind
The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- array Element Type StandardSqlDataType
The type of the array's elements, if type_kind = "ARRAY".
- range Element Type StandardSqlDataType
The type of the range's elements, if type_kind = "RANGE".
- struct Type StandardSqlStructType
The fields of this struct, in order, if type_kind = "STRUCT".
- type_kind StandardSqlDataTypeTypeKind
The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- array_element_type StandardSqlDataType
The type of the array's elements, if type_kind = "ARRAY".
- range_element_type StandardSqlDataType
The type of the range's elements, if type_kind = "RANGE".
- struct_type StandardSqlStructType
The fields of this struct, in order, if type_kind = "STRUCT".
- type Kind "TYPE_KIND_UNSPECIFIED" | "INT64" | "BOOL" | "FLOAT64" | "STRING" | "BYTES" | "TIMESTAMP" | "DATE" | "TIME" | "DATETIME" | "INTERVAL" | "GEOGRAPHY" | "NUMERIC" | "BIGNUMERIC" | "JSON" | "ARRAY" | "STRUCT" | "RANGE"
The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- array Element Type Property Map
The type of the array's elements, if type_kind = "ARRAY".
- range Element Type Property Map
The type of the range's elements, if type_kind = "RANGE".
- struct Type Property Map
The fields of this struct, in order, if type_kind = "STRUCT".
StandardSqlDataTypeResponse, StandardSqlDataTypeResponseArgs
- Struct Type Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlStructTypeResponse
The fields of this struct, in order, if type_kind = "STRUCT".
- Type Kind string
The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- Array Element Type Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataTypeResponse
The type of the array's elements, if type_kind = "ARRAY".
- Range Element Type Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataTypeResponse
The type of the range's elements, if type_kind = "RANGE".
- Struct Type StandardSqlStructTypeResponse
The fields of this struct, in order, if type_kind = "STRUCT".
- Type Kind string
The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- Array Element Type StandardSqlDataTypeResponse
The type of the array's elements, if type_kind = "ARRAY".
- Range Element Type StandardSqlDataTypeResponse
The type of the range's elements, if type_kind = "RANGE".
- struct Type StandardSqlStructTypeResponse
The fields of this struct, in order, if type_kind = "STRUCT".
- type Kind String
The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- array Element Type StandardSqlDataTypeResponse
The type of the array's elements, if type_kind = "ARRAY".
- range Element Type StandardSqlDataTypeResponse
The type of the range's elements, if type_kind = "RANGE".
- struct Type StandardSqlStructTypeResponse
The fields of this struct, in order, if type_kind = "STRUCT".
- type Kind string
The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- array Element Type StandardSqlDataTypeResponse
The type of the array's elements, if type_kind = "ARRAY".
- range Element Type StandardSqlDataTypeResponse
The type of the range's elements, if type_kind = "RANGE".
- struct_type StandardSqlStructTypeResponse
The fields of this struct, in order, if type_kind = "STRUCT".
- type_kind str
The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- array_element_type StandardSqlDataTypeResponse
The type of the array's elements, if type_kind = "ARRAY".
- range_element_type StandardSqlDataTypeResponse
The type of the range's elements, if type_kind = "RANGE".
- struct Type Property Map
The fields of this struct, in order, if type_kind = "STRUCT".
- type Kind String
The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- array Element Type Property Map
The type of the array's elements, if type_kind = "ARRAY".
- range Element Type Property Map
The type of the range's elements, if type_kind = "RANGE".
StandardSqlDataTypeTypeKind, StandardSqlDataTypeTypeKindArgs
- Type Kind Unspecified
- TYPE_KIND_UNSPECIFIED
Invalid type.
- Int64
- INT64
Encoded as a string in decimal format.
- Bool
- BOOL
Encoded as a boolean "false" or "true".
- Float64
- FLOAT64
Encoded as a number, or string "NaN", "Infinity" or "-Infinity".
- String
- STRING
Encoded as a string value.
- Bytes
- BYTES
Encoded as a base64 string per RFC 4648, section 4.
- Timestamp
- TIMESTAMP
Encoded as an RFC 3339 timestamp with mandatory "Z" time zone string: 1985-04-12T23:20:50.52Z
- Date
- DATE
Encoded as RFC 3339 full-date format string: 1985-04-12
- Time
- TIME
Encoded as RFC 3339 partial-time format string: 23:20:50.52
- Datetime
- DATETIME
Encoded as RFC 3339 full-date "T" partial-time: 1985-04-12T23:20:50.52
- Interval
- INTERVAL
Encoded as fully qualified 3 part: 0-5 15 2:30:45.6
- Geography
- GEOGRAPHY
Encoded as WKT
- Numeric
- NUMERIC
Encoded as a decimal string.
- Bignumeric
- BIGNUMERIC
Encoded as a decimal string.
- Json
- JSON
Encoded as a string.
- Array
- ARRAY
Encoded as a list with types matching Type.array_type.
- Struct
- STRUCT
Encoded as a list with fields of type Type.struct_type[i]. List is used because a JSON object cannot have duplicate field names.
- Range
- RANGE
Encoded as a pair with types matching range_element_type. Pairs must begin with "[", end with ")", and be separated by ", ".
- Standard Sql Data Type Type Kind Type Kind Unspecified
- TYPE_KIND_UNSPECIFIED
Invalid type.
- Standard Sql Data Type Type Kind Int64
- INT64
Encoded as a string in decimal format.
- Standard Sql Data Type Type Kind Bool
- BOOL
Encoded as a boolean "false" or "true".
- Standard Sql Data Type Type Kind Float64
- FLOAT64
Encoded as a number, or string "NaN", "Infinity" or "-Infinity".
- Standard Sql Data Type Type Kind String
- STRING
Encoded as a string value.
- Standard Sql Data Type Type Kind Bytes
- BYTES
Encoded as a base64 string per RFC 4648, section 4.
- Standard Sql Data Type Type Kind Timestamp
- TIMESTAMP
Encoded as an RFC 3339 timestamp with mandatory "Z" time zone string: 1985-04-12T23:20:50.52Z
- Standard Sql Data Type Type Kind Date
- DATE
Encoded as RFC 3339 full-date format string: 1985-04-12
- Standard Sql Data Type Type Kind Time
- TIME
Encoded as RFC 3339 partial-time format string: 23:20:50.52
- Standard Sql Data Type Type Kind Datetime
- DATETIME
Encoded as RFC 3339 full-date "T" partial-time: 1985-04-12T23:20:50.52
- Standard Sql Data Type Type Kind Interval
- INTERVAL
Encoded as fully qualified 3 part: 0-5 15 2:30:45.6
- Standard Sql Data Type Type Kind Geography
- GEOGRAPHY
Encoded as WKT
- Standard Sql Data Type Type Kind Numeric
- NUMERIC
Encoded as a decimal string.
- Standard Sql Data Type Type Kind Bignumeric
- BIGNUMERIC
Encoded as a decimal string.
- Standard Sql Data Type Type Kind Json
- JSON
Encoded as a string.
- Standard Sql Data Type Type Kind Array
- ARRAY
Encoded as a list with types matching Type.array_type.
- Standard Sql Data Type Type Kind Struct
- STRUCT
Encoded as a list with fields of type Type.struct_type[i]. List is used because a JSON object cannot have duplicate field names.
- Standard Sql Data Type Type Kind Range
- RANGE
Encoded as a pair with types matching range_element_type. Pairs must begin with "[", end with ")", and be separated by ", ".
- Type Kind Unspecified
- TYPE_KIND_UNSPECIFIED
Invalid type.
- Int64
- INT64
Encoded as a string in decimal format.
- Bool
- BOOL
Encoded as a boolean "false" or "true".
- Float64
- FLOAT64
Encoded as a number, or string "NaN", "Infinity" or "-Infinity".
- String
- STRING
Encoded as a string value.
- Bytes
- BYTES
Encoded as a base64 string per RFC 4648, section 4.
- Timestamp
- TIMESTAMP
Encoded as an RFC 3339 timestamp with mandatory "Z" time zone string: 1985-04-12T23:20:50.52Z
- Date
- DATE
Encoded as RFC 3339 full-date format string: 1985-04-12
- Time
- TIME
Encoded as RFC 3339 partial-time format string: 23:20:50.52
- Datetime
- DATETIME
Encoded as RFC 3339 full-date "T" partial-time: 1985-04-12T23:20:50.52
- Interval
- INTERVAL
Encoded as fully qualified 3 part: 0-5 15 2:30:45.6
- Geography
- GEOGRAPHY
Encoded as WKT
- Numeric
- NUMERIC
Encoded as a decimal string.
- Bignumeric
- BIGNUMERIC
Encoded as a decimal string.
- Json
- JSON
Encoded as a string.
- Array
- ARRAY
Encoded as a list with types matching Type.array_type.
- Struct
- STRUCT
Encoded as a list with fields of type Type.struct_type[i]. List is used because a JSON object cannot have duplicate field names.
- Range
- RANGE
Encoded as a pair with types matching range_element_type. Pairs must begin with "[", end with ")", and be separated by ", ".
- TYPE_KIND_UNSPECIFIED
- TYPE_KIND_UNSPECIFIED
Invalid type.
- INT64
- INT64
Encoded as a string in decimal format.
- BOOL
- BOOL
Encoded as a boolean "false" or "true".
- FLOAT64
- FLOAT64
Encoded as a number, or string "NaN", "Infinity" or "-Infinity".
- STRING
- STRING
Encoded as a string value.
- BYTES
- BYTES
Encoded as a base64 string per RFC 4648, section 4.
- TIMESTAMP
- TIMESTAMP
Encoded as an RFC 3339 timestamp with mandatory "Z" time zone string: 1985-04-12T23:20:50.52Z
- DATE
- DATE
Encoded as RFC 3339 full-date format string: 1985-04-12
- TIME
- TIME
Encoded as RFC 3339 partial-time format string: 23:20:50.52
- DATETIME
- DATETIME
Encoded as RFC 3339 full-date "T" partial-time: 1985-04-12T23:20:50.52
- INTERVAL
- INTERVAL
Encoded as fully qualified 3 part: 0-5 15 2:30:45.6
- GEOGRAPHY
- GEOGRAPHY
Encoded as WKT
- NUMERIC
- NUMERIC
Encoded as a decimal string.
- BIGNUMERIC
- BIGNUMERIC
Encoded as a decimal string.
- JSON
- JSON
Encoded as a string.
- ARRAY
- ARRAY
Encoded as a list with types matching Type.array_type.
- STRUCT
- STRUCT
Encoded as a list with fields of type Type.struct_type[i]. List is used because a JSON object cannot have duplicate field names.
- RANGE
- RANGE
Encoded as a pair with types matching range_element_type. Pairs must begin with "[", end with ")", and be separated by ", ".
- "TYPE_KIND_UNSPECIFIED"
- TYPE_KIND_UNSPECIFIED
Invalid type.
- "INT64"
- INT64
Encoded as a string in decimal format.
- "BOOL"
- BOOL
Encoded as a boolean "false" or "true".
- "FLOAT64"
- FLOAT64
Encoded as a number, or string "NaN", "Infinity" or "-Infinity".
- "STRING"
- STRING
Encoded as a string value.
- "BYTES"
- BYTES
Encoded as a base64 string per RFC 4648, section 4.
- "TIMESTAMP"
- TIMESTAMP
Encoded as an RFC 3339 timestamp with mandatory "Z" time zone string: 1985-04-12T23:20:50.52Z
- "DATE"
- DATE
Encoded as RFC 3339 full-date format string: 1985-04-12
- "TIME"
- TIME
Encoded as RFC 3339 partial-time format string: 23:20:50.52
- "DATETIME"
- DATETIME
Encoded as RFC 3339 full-date "T" partial-time: 1985-04-12T23:20:50.52
- "INTERVAL"
- INTERVAL
Encoded as fully qualified 3 part: 0-5 15 2:30:45.6
- "GEOGRAPHY"
- GEOGRAPHY
Encoded as WKT
- "NUMERIC"
- NUMERIC
Encoded as a decimal string.
- "BIGNUMERIC"
- BIGNUMERIC
Encoded as a decimal string.
- "JSON"
- JSON
Encoded as a string.
- "ARRAY"
- ARRAY
Encoded as a list with types matching Type.array_type.
- "STRUCT"
- STRUCT
Encoded as a list with fields of type Type.struct_type[i]. List is used because a JSON object cannot have duplicate field names.
- "RANGE"
- RANGE
Encoded as a pair with types matching range_element_type. Pairs must begin with "[", end with ")", and be separated by ", ".
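Composite kinds nest: an ARRAY or RANGE carries its element type, and a STRUCT carries its field list. A sketch of ARRAY<INT64>, assuming the pulumi_google_native Python input types:

import pulumi_google_native.bigquery.v2 as bigquery

# ARRAY<INT64>: the outer type_kind is ARRAY; the element type is a
# nested StandardSqlDataTypeArgs.
array_of_int64 = bigquery.StandardSqlDataTypeArgs(
    type_kind=bigquery.StandardSqlDataTypeTypeKind.ARRAY,
    array_element_type=bigquery.StandardSqlDataTypeArgs(
        type_kind=bigquery.StandardSqlDataTypeTypeKind.INT64,
    ),
)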
StandardSqlField, StandardSqlFieldArgs
- Name string
Optional. The name of this field. Can be absent for struct fields.
- Type Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataType
Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- Name string
Optional. The name of this field. Can be absent for struct fields.
- Type StandardSqlDataType
Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- name String
Optional. The name of this field. Can be absent for struct fields.
- type StandardSqlDataType
Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- name string
Optional. The name of this field. Can be absent for struct fields.
- type StandardSqlDataType
Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- name str
Optional. The name of this field. Can be absent for struct fields.
- type StandardSqlDataType
Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- name String
Optional. The name of this field. Can be absent for struct fields.
- type Property Map
Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
StandardSqlFieldResponse, StandardSqlFieldResponseArgs
- Name string
Optional. The name of this field. Can be absent for struct fields.
- Type Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataTypeResponse
Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- Name string
Optional. The name of this field. Can be absent for struct fields.
- Type StandardSqlDataTypeResponse
Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- name String
Optional. The name of this field. Can be absent for struct fields.
- type StandardSqlDataTypeResponse
Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- name string
Optional. The name of this field. Can be absent for struct fields.
- type StandardSqlDataTypeResponse
Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- name str
Optional. The name of this field. Can be absent for struct fields.
- type StandardSqlDataTypeResponse
Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- name String
Optional. The name of this field. Can be absent for struct fields.
- type Property Map
Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
StandardSqlStructType, StandardSqlStructTypeArgs
- Fields List<Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlField>
Fields within the struct.
- Fields []StandardSqlField
Fields within the struct.
- fields List<StandardSqlField>
Fields within the struct.
- fields StandardSqlField[]
Fields within the struct.
- fields Sequence[StandardSqlField]
Fields within the struct.
- fields List<Property Map>
Fields within the struct.
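A struct type is just an ordered field list; a sketch of STRUCT<name STRING, score FLOAT64> with the pulumi_google_native Python input types:

import pulumi_google_native.bigquery.v2 as bigquery

person_struct = bigquery.StandardSqlDataTypeArgs(
    type_kind=bigquery.StandardSqlDataTypeTypeKind.STRUCT,
    struct_type=bigquery.StandardSqlStructTypeArgs(
        fields=[
            bigquery.StandardSqlFieldArgs(
                name="name",
                type=bigquery.StandardSqlDataTypeArgs(
                    type_kind=bigquery.StandardSqlDataTypeTypeKind.STRING,
                ),
            ),
            bigquery.StandardSqlFieldArgs(
                name="score",
                type=bigquery.StandardSqlDataTypeArgs(
                    type_kind=bigquery.StandardSqlDataTypeTypeKind.FLOAT64,
                ),
            ),
        ],
    ),
)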
StandardSqlStructTypeResponse, StandardSqlStructTypeResponseArgs
- Fields List<Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlFieldResponse>
Fields within the struct.
- Fields []StandardSqlFieldResponse
Fields within the struct.
- fields List<StandardSqlFieldResponse>
Fields within the struct.
- fields StandardSqlFieldResponse[]
Fields within the struct.
- fields Sequence[StandardSqlFieldResponse]
Fields within the struct.
- fields List<Property Map>
Fields within the struct.
StandardSqlTableType, StandardSqlTableTypeArgs
- Columns List<Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlField>
The columns in this table type.
- Columns []StandardSqlField
The columns in this table type.
- columns List<StandardSqlField>
The columns in this table type.
- columns StandardSqlField[]
The columns in this table type.
- columns Sequence[StandardSqlField]
The columns in this table type.
- columns List<Property Map>
The columns in this table type.
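The table type is what a table-valued function declares through return_table_type. A sketch, with placeholder identifiers and a placeholder source table:

import pulumi_google_native.bigquery.v2 as bigquery

top_events = bigquery.Routine(
    "top-events",
    project="my-project",
    dataset_id="my_dataset",
    routine_reference=bigquery.RoutineReferenceArgs(
        project="my-project",
        dataset_id="my_dataset",
        routine_id="top_events",
    ),
    routine_type=bigquery.RoutineRoutineType.TABLE_VALUED_FUNCTION,
    language=bigquery.RoutineLanguage.SQL,
    definition_body=(
        "SELECT name, score FROM my_dataset.events "
        "ORDER BY score DESC LIMIT 10"
    ),
    return_table_type=bigquery.StandardSqlTableTypeArgs(
        columns=[
            bigquery.StandardSqlFieldArgs(
                name="name",
                type=bigquery.StandardSqlDataTypeArgs(
                    type_kind=bigquery.StandardSqlDataTypeTypeKind.STRING,
                ),
            ),
            bigquery.StandardSqlFieldArgs(
                name="score",
                type=bigquery.StandardSqlDataTypeArgs(
                    type_kind=bigquery.StandardSqlDataTypeTypeKind.FLOAT64,
                ),
            ),
        ],
    ),
)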
StandardSqlTableTypeResponse, StandardSqlTableTypeResponseArgs
- Columns List<Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlFieldResponse>
The columns in this table type.
- Columns []StandardSqlFieldResponse
The columns in this table type.
- columns List<StandardSqlFieldResponse>
The columns in this table type.
- columns StandardSqlFieldResponse[]
The columns in this table type.
- columns Sequence[StandardSqlFieldResponse]
The columns in this table type.
- columns List<Property Map>
The columns in this table type.
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0