google-native.dataproc/v1.Job

Google Cloud Native is in preview. Google Cloud Classic is fully supported.

Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi

    Submits a job to a cluster. Auto-naming is currently not supported for this resource.
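
    For orientation, here is a minimal sketch of a real submission in TypeScript. It assumes an existing cluster and Cloud Storage objects; the cluster name, project, region, and gs:// paths are illustrative placeholders, not values mandated by this API.

    import * as google_native from "@pulumi/google-native";

    // Submit a hypothetical PySpark job to an existing Dataproc cluster.
    // "my-project", "us-central1", "my-cluster", and the gs:// URIs are placeholders.
    const wordcount = new google_native.dataproc.v1.Job("wordcount", {
        project: "my-project",
        region: "us-central1",
        placement: {
            clusterName: "my-cluster", // the cluster must already exist
        },
        pysparkJob: {
            mainPythonFileUri: "gs://my-bucket/wordcount.py",
            args: ["gs://my-bucket/input.txt"],
        },
        labels: {
            env: "dev",
        },
    });

    // Where the driver program's stdout lands once the job is created.
    export const driverOutput = wordcount.driverOutputResourceUri;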

    Create Job Resource

    Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.

    Constructor syntax

    new Job(name: string, args: JobArgs, opts?: CustomResourceOptions);
    @overload
    def Job(resource_name: str,
            args: JobArgs,
            opts: Optional[ResourceOptions] = None)
    
    @overload
    def Job(resource_name: str,
            opts: Optional[ResourceOptions] = None,
            placement: Optional[JobPlacementArgs] = None,
            region: Optional[str] = None,
            project: Optional[str] = None,
            pyspark_job: Optional[PySparkJobArgs] = None,
            labels: Optional[Mapping[str, str]] = None,
            pig_job: Optional[PigJobArgs] = None,
            hadoop_job: Optional[HadoopJobArgs] = None,
            presto_job: Optional[PrestoJobArgs] = None,
            driver_scheduling_config: Optional[DriverSchedulingConfigArgs] = None,
            hive_job: Optional[HiveJobArgs] = None,
            reference: Optional[JobReferenceArgs] = None,
            flink_job: Optional[FlinkJobArgs] = None,
            request_id: Optional[str] = None,
            scheduling: Optional[JobSchedulingArgs] = None,
            spark_job: Optional[SparkJobArgs] = None,
            spark_r_job: Optional[SparkRJobArgs] = None,
            spark_sql_job: Optional[SparkSqlJobArgs] = None,
            trino_job: Optional[TrinoJobArgs] = None)
    func NewJob(ctx *Context, name string, args JobArgs, opts ...ResourceOption) (*Job, error)
    public Job(string name, JobArgs args, CustomResourceOptions? opts = null)
    public Job(String name, JobArgs args)
    public Job(String name, JobArgs args, CustomResourceOptions options)
    
    type: google-native:dataproc/v1:Job
    properties: # The arguments to resource properties.
    options: # Bag of options to control resource's behavior.
    
    

    Parameters

    name string
    The unique name of the resource.
    args JobArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    resource_name str
    The unique name of the resource.
    args JobArgs
    The arguments to resource properties.
    opts ResourceOptions
    Bag of options to control resource's behavior.
    ctx Context
    Context object for the current deployment.
    name string
    The unique name of the resource.
    args JobArgs
    The arguments to resource properties.
    opts ResourceOption
    Bag of options to control resource's behavior.
    name string
    The unique name of the resource.
    args JobArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    name String
    The unique name of the resource.
    args JobArgs
    The arguments to resource properties.
    options CustomResourceOptions
    Bag of options to control resource's behavior.

    Example

    The following reference example uses placeholder values for all input properties. Note that it sets every job-type block (flinkJob, hadoopJob, hiveJob, pigJob, prestoJob, pysparkJob, sparkJob, sparkRJob, sparkSqlJob, trinoJob) purely for reference; in the Dataproc API these fields form a union, so a real job sets exactly one of them.

    var examplejobResourceResourceFromDataprocv1 = new GoogleNative.Dataproc.V1.Job("examplejobResourceResourceFromDataprocv1", new()
    {
        Placement = new GoogleNative.Dataproc.V1.Inputs.JobPlacementArgs
        {
            ClusterName = "string",
            ClusterLabels = 
            {
                { "string", "string" },
            },
        },
        Region = "string",
        Project = "string",
        PysparkJob = new GoogleNative.Dataproc.V1.Inputs.PySparkJobArgs
        {
            MainPythonFileUri = "string",
            ArchiveUris = new[]
            {
                "string",
            },
            Args = new[]
            {
                "string",
            },
            FileUris = new[]
            {
                "string",
            },
            JarFileUris = new[]
            {
                "string",
            },
            LoggingConfig = new GoogleNative.Dataproc.V1.Inputs.LoggingConfigArgs
            {
                DriverLogLevels = 
                {
                    { "string", "string" },
                },
            },
            Properties = 
            {
                { "string", "string" },
            },
            PythonFileUris = new[]
            {
                "string",
            },
        },
        Labels = 
        {
            { "string", "string" },
        },
        PigJob = new GoogleNative.Dataproc.V1.Inputs.PigJobArgs
        {
            ContinueOnFailure = false,
            JarFileUris = new[]
            {
                "string",
            },
            LoggingConfig = new GoogleNative.Dataproc.V1.Inputs.LoggingConfigArgs
            {
                DriverLogLevels = 
                {
                    { "string", "string" },
                },
            },
            Properties = 
            {
                { "string", "string" },
            },
            QueryFileUri = "string",
            QueryList = new GoogleNative.Dataproc.V1.Inputs.QueryListArgs
            {
                Queries = new[]
                {
                    "string",
                },
            },
            ScriptVariables = 
            {
                { "string", "string" },
            },
        },
        HadoopJob = new GoogleNative.Dataproc.V1.Inputs.HadoopJobArgs
        {
            ArchiveUris = new[]
            {
                "string",
            },
            Args = new[]
            {
                "string",
            },
            FileUris = new[]
            {
                "string",
            },
            JarFileUris = new[]
            {
                "string",
            },
            LoggingConfig = new GoogleNative.Dataproc.V1.Inputs.LoggingConfigArgs
            {
                DriverLogLevels = 
                {
                    { "string", "string" },
                },
            },
            MainClass = "string",
            MainJarFileUri = "string",
            Properties = 
            {
                { "string", "string" },
            },
        },
        PrestoJob = new GoogleNative.Dataproc.V1.Inputs.PrestoJobArgs
        {
            ClientTags = new[]
            {
                "string",
            },
            ContinueOnFailure = false,
            LoggingConfig = new GoogleNative.Dataproc.V1.Inputs.LoggingConfigArgs
            {
                DriverLogLevels = 
                {
                    { "string", "string" },
                },
            },
            OutputFormat = "string",
            Properties = 
            {
                { "string", "string" },
            },
            QueryFileUri = "string",
            QueryList = new GoogleNative.Dataproc.V1.Inputs.QueryListArgs
            {
                Queries = new[]
                {
                    "string",
                },
            },
        },
        DriverSchedulingConfig = new GoogleNative.Dataproc.V1.Inputs.DriverSchedulingConfigArgs
        {
            MemoryMb = 0,
            Vcores = 0,
        },
        HiveJob = new GoogleNative.Dataproc.V1.Inputs.HiveJobArgs
        {
            ContinueOnFailure = false,
            JarFileUris = new[]
            {
                "string",
            },
            Properties = 
            {
                { "string", "string" },
            },
            QueryFileUri = "string",
            QueryList = new GoogleNative.Dataproc.V1.Inputs.QueryListArgs
            {
                Queries = new[]
                {
                    "string",
                },
            },
            ScriptVariables = 
            {
                { "string", "string" },
            },
        },
        Reference = new GoogleNative.Dataproc.V1.Inputs.JobReferenceArgs
        {
            JobId = "string",
            Project = "string",
        },
        FlinkJob = new GoogleNative.Dataproc.V1.Inputs.FlinkJobArgs
        {
            Args = new[]
            {
                "string",
            },
            JarFileUris = new[]
            {
                "string",
            },
            LoggingConfig = new GoogleNative.Dataproc.V1.Inputs.LoggingConfigArgs
            {
                DriverLogLevels = 
                {
                    { "string", "string" },
                },
            },
            MainClass = "string",
            MainJarFileUri = "string",
            Properties = 
            {
                { "string", "string" },
            },
            SavepointUri = "string",
        },
        RequestId = "string",
        Scheduling = new GoogleNative.Dataproc.V1.Inputs.JobSchedulingArgs
        {
            MaxFailuresPerHour = 0,
            MaxFailuresTotal = 0,
        },
        SparkJob = new GoogleNative.Dataproc.V1.Inputs.SparkJobArgs
        {
            ArchiveUris = new[]
            {
                "string",
            },
            Args = new[]
            {
                "string",
            },
            FileUris = new[]
            {
                "string",
            },
            JarFileUris = new[]
            {
                "string",
            },
            LoggingConfig = new GoogleNative.Dataproc.V1.Inputs.LoggingConfigArgs
            {
                DriverLogLevels = 
                {
                    { "string", "string" },
                },
            },
            MainClass = "string",
            MainJarFileUri = "string",
            Properties = 
            {
                { "string", "string" },
            },
        },
        SparkRJob = new GoogleNative.Dataproc.V1.Inputs.SparkRJobArgs
        {
            MainRFileUri = "string",
            ArchiveUris = new[]
            {
                "string",
            },
            Args = new[]
            {
                "string",
            },
            FileUris = new[]
            {
                "string",
            },
            LoggingConfig = new GoogleNative.Dataproc.V1.Inputs.LoggingConfigArgs
            {
                DriverLogLevels = 
                {
                    { "string", "string" },
                },
            },
            Properties = 
            {
                { "string", "string" },
            },
        },
        SparkSqlJob = new GoogleNative.Dataproc.V1.Inputs.SparkSqlJobArgs
        {
            JarFileUris = new[]
            {
                "string",
            },
            LoggingConfig = new GoogleNative.Dataproc.V1.Inputs.LoggingConfigArgs
            {
                DriverLogLevels = 
                {
                    { "string", "string" },
                },
            },
            Properties = 
            {
                { "string", "string" },
            },
            QueryFileUri = "string",
            QueryList = new GoogleNative.Dataproc.V1.Inputs.QueryListArgs
            {
                Queries = new[]
                {
                    "string",
                },
            },
            ScriptVariables = 
            {
                { "string", "string" },
            },
        },
        TrinoJob = new GoogleNative.Dataproc.V1.Inputs.TrinoJobArgs
        {
            ClientTags = new[]
            {
                "string",
            },
            ContinueOnFailure = false,
            LoggingConfig = new GoogleNative.Dataproc.V1.Inputs.LoggingConfigArgs
            {
                DriverLogLevels = 
                {
                    { "string", "string" },
                },
            },
            OutputFormat = "string",
            Properties = 
            {
                { "string", "string" },
            },
            QueryFileUri = "string",
            QueryList = new GoogleNative.Dataproc.V1.Inputs.QueryListArgs
            {
                Queries = new[]
                {
                    "string",
                },
            },
        },
    });
    
    example, err := dataproc.NewJob(ctx, "examplejobResourceResourceFromDataprocv1", &dataproc.JobArgs{
        Placement: &dataproc.JobPlacementArgs{
            ClusterName: pulumi.String("string"),
            ClusterLabels: pulumi.StringMap{
                "string": pulumi.String("string"),
            },
        },
        Region:  pulumi.String("string"),
        Project: pulumi.String("string"),
        PysparkJob: &dataproc.PySparkJobArgs{
            MainPythonFileUri: pulumi.String("string"),
            ArchiveUris: pulumi.StringArray{
                pulumi.String("string"),
            },
            Args: pulumi.StringArray{
                pulumi.String("string"),
            },
            FileUris: pulumi.StringArray{
                pulumi.String("string"),
            },
            JarFileUris: pulumi.StringArray{
                pulumi.String("string"),
            },
            LoggingConfig: &dataproc.LoggingConfigArgs{
                DriverLogLevels: pulumi.StringMap{
                    "string": pulumi.String("string"),
                },
            },
            Properties: pulumi.StringMap{
                "string": pulumi.String("string"),
            },
            PythonFileUris: pulumi.StringArray{
                pulumi.String("string"),
            },
        },
        Labels: pulumi.StringMap{
            "string": pulumi.String("string"),
        },
        PigJob: &dataproc.PigJobArgs{
            ContinueOnFailure: pulumi.Bool(false),
            JarFileUris: pulumi.StringArray{
                pulumi.String("string"),
            },
            LoggingConfig: &dataproc.LoggingConfigArgs{
                DriverLogLevels: pulumi.StringMap{
                    "string": pulumi.String("string"),
                },
            },
            Properties: pulumi.StringMap{
                "string": pulumi.String("string"),
            },
            QueryFileUri: pulumi.String("string"),
            QueryList: &dataproc.QueryListArgs{
                Queries: pulumi.StringArray{
                    pulumi.String("string"),
                },
            },
            ScriptVariables: pulumi.StringMap{
                "string": pulumi.String("string"),
            },
        },
        HadoopJob: &dataproc.HadoopJobArgs{
            ArchiveUris: pulumi.StringArray{
                pulumi.String("string"),
            },
            Args: pulumi.StringArray{
                pulumi.String("string"),
            },
            FileUris: pulumi.StringArray{
                pulumi.String("string"),
            },
            JarFileUris: pulumi.StringArray{
                pulumi.String("string"),
            },
            LoggingConfig: &dataproc.LoggingConfigArgs{
                DriverLogLevels: pulumi.StringMap{
                    "string": pulumi.String("string"),
                },
            },
            MainClass:      pulumi.String("string"),
            MainJarFileUri: pulumi.String("string"),
            Properties: pulumi.StringMap{
                "string": pulumi.String("string"),
            },
        },
        PrestoJob: &dataproc.PrestoJobArgs{
            ClientTags: pulumi.StringArray{
                pulumi.String("string"),
            },
            ContinueOnFailure: pulumi.Bool(false),
            LoggingConfig: &dataproc.LoggingConfigArgs{
                DriverLogLevels: pulumi.StringMap{
                    "string": pulumi.String("string"),
                },
            },
            OutputFormat: pulumi.String("string"),
            Properties: pulumi.StringMap{
                "string": pulumi.String("string"),
            },
            QueryFileUri: pulumi.String("string"),
            QueryList: &dataproc.QueryListArgs{
                Queries: pulumi.StringArray{
                    pulumi.String("string"),
                },
            },
        },
        DriverSchedulingConfig: &dataproc.DriverSchedulingConfigArgs{
            MemoryMb: pulumi.Int(0),
            Vcores:   pulumi.Int(0),
        },
        HiveJob: &dataproc.HiveJobArgs{
            ContinueOnFailure: pulumi.Bool(false),
            JarFileUris: pulumi.StringArray{
                pulumi.String("string"),
            },
            Properties: pulumi.StringMap{
                "string": pulumi.String("string"),
            },
            QueryFileUri: pulumi.String("string"),
            QueryList: &dataproc.QueryListArgs{
                Queries: pulumi.StringArray{
                    pulumi.String("string"),
                },
            },
            ScriptVariables: pulumi.StringMap{
                "string": pulumi.String("string"),
            },
        },
        Reference: &dataproc.JobReferenceArgs{
            JobId:   pulumi.String("string"),
            Project: pulumi.String("string"),
        },
        FlinkJob: &dataproc.FlinkJobArgs{
            Args: pulumi.StringArray{
                pulumi.String("string"),
            },
            JarFileUris: pulumi.StringArray{
                pulumi.String("string"),
            },
            LoggingConfig: &dataproc.LoggingConfigArgs{
                DriverLogLevels: pulumi.StringMap{
                    "string": pulumi.String("string"),
                },
            },
            MainClass:      pulumi.String("string"),
            MainJarFileUri: pulumi.String("string"),
            Properties: pulumi.StringMap{
                "string": pulumi.String("string"),
            },
            SavepointUri: pulumi.String("string"),
        },
        RequestId: pulumi.String("string"),
        Scheduling: &dataproc.JobSchedulingArgs{
            MaxFailuresPerHour: pulumi.Int(0),
            MaxFailuresTotal:   pulumi.Int(0),
        },
        SparkJob: &dataproc.SparkJobArgs{
            ArchiveUris: pulumi.StringArray{
                pulumi.String("string"),
            },
            Args: pulumi.StringArray{
                pulumi.String("string"),
            },
            FileUris: pulumi.StringArray{
                pulumi.String("string"),
            },
            JarFileUris: pulumi.StringArray{
                pulumi.String("string"),
            },
            LoggingConfig: &dataproc.LoggingConfigArgs{
                DriverLogLevels: pulumi.StringMap{
                    "string": pulumi.String("string"),
                },
            },
            MainClass:      pulumi.String("string"),
            MainJarFileUri: pulumi.String("string"),
            Properties: pulumi.StringMap{
                "string": pulumi.String("string"),
            },
        },
        SparkRJob: &dataproc.SparkRJobArgs{
            MainRFileUri: pulumi.String("string"),
            ArchiveUris: pulumi.StringArray{
                pulumi.String("string"),
            },
            Args: pulumi.StringArray{
                pulumi.String("string"),
            },
            FileUris: pulumi.StringArray{
                pulumi.String("string"),
            },
            LoggingConfig: &dataproc.LoggingConfigArgs{
                DriverLogLevels: pulumi.StringMap{
                    "string": pulumi.String("string"),
                },
            },
            Properties: pulumi.StringMap{
                "string": pulumi.String("string"),
            },
        },
        SparkSqlJob: &dataproc.SparkSqlJobArgs{
            JarFileUris: pulumi.StringArray{
                pulumi.String("string"),
            },
            LoggingConfig: &dataproc.LoggingConfigArgs{
                DriverLogLevels: pulumi.StringMap{
                    "string": pulumi.String("string"),
                },
            },
            Properties: pulumi.StringMap{
                "string": pulumi.String("string"),
            },
            QueryFileUri: pulumi.String("string"),
            QueryList: &dataproc.QueryListArgs{
                Queries: pulumi.StringArray{
                    pulumi.String("string"),
                },
            },
            ScriptVariables: pulumi.StringMap{
                "string": pulumi.String("string"),
            },
        },
        TrinoJob: &dataproc.TrinoJobArgs{
            ClientTags: pulumi.StringArray{
                pulumi.String("string"),
            },
            ContinueOnFailure: pulumi.Bool(false),
            LoggingConfig: &dataproc.LoggingConfigArgs{
                DriverLogLevels: pulumi.StringMap{
                    "string": pulumi.String("string"),
                },
            },
            OutputFormat: pulumi.String("string"),
            Properties: pulumi.StringMap{
                "string": pulumi.String("string"),
            },
            QueryFileUri: pulumi.String("string"),
            QueryList: &dataproc.QueryListArgs{
                Queries: pulumi.StringArray{
                    pulumi.String("string"),
                },
            },
        },
    })
    
    var examplejobResourceResourceFromDataprocv1 = new Job("examplejobResourceResourceFromDataprocv1", JobArgs.builder()
        .placement(JobPlacementArgs.builder()
            .clusterName("string")
            .clusterLabels(Map.of("string", "string"))
            .build())
        .region("string")
        .project("string")
        .pysparkJob(PySparkJobArgs.builder()
            .mainPythonFileUri("string")
            .archiveUris("string")
            .args("string")
            .fileUris("string")
            .jarFileUris("string")
            .loggingConfig(LoggingConfigArgs.builder()
                .driverLogLevels(Map.of("string", "string"))
                .build())
            .properties(Map.of("string", "string"))
            .pythonFileUris("string")
            .build())
        .labels(Map.of("string", "string"))
        .pigJob(PigJobArgs.builder()
            .continueOnFailure(false)
            .jarFileUris("string")
            .loggingConfig(LoggingConfigArgs.builder()
                .driverLogLevels(Map.of("string", "string"))
                .build())
            .properties(Map.of("string", "string"))
            .queryFileUri("string")
            .queryList(QueryListArgs.builder()
                .queries("string")
                .build())
            .scriptVariables(Map.of("string", "string"))
            .build())
        .hadoopJob(HadoopJobArgs.builder()
            .archiveUris("string")
            .args("string")
            .fileUris("string")
            .jarFileUris("string")
            .loggingConfig(LoggingConfigArgs.builder()
                .driverLogLevels(Map.of("string", "string"))
                .build())
            .mainClass("string")
            .mainJarFileUri("string")
            .properties(Map.of("string", "string"))
            .build())
        .prestoJob(PrestoJobArgs.builder()
            .clientTags("string")
            .continueOnFailure(false)
            .loggingConfig(LoggingConfigArgs.builder()
                .driverLogLevels(Map.of("string", "string"))
                .build())
            .outputFormat("string")
            .properties(Map.of("string", "string"))
            .queryFileUri("string")
            .queryList(QueryListArgs.builder()
                .queries("string")
                .build())
            .build())
        .driverSchedulingConfig(DriverSchedulingConfigArgs.builder()
            .memoryMb(0)
            .vcores(0)
            .build())
        .hiveJob(HiveJobArgs.builder()
            .continueOnFailure(false)
            .jarFileUris("string")
            .properties(Map.of("string", "string"))
            .queryFileUri("string")
            .queryList(QueryListArgs.builder()
                .queries("string")
                .build())
            .scriptVariables(Map.of("string", "string"))
            .build())
        .reference(JobReferenceArgs.builder()
            .jobId("string")
            .project("string")
            .build())
        .flinkJob(FlinkJobArgs.builder()
            .args("string")
            .jarFileUris("string")
            .loggingConfig(LoggingConfigArgs.builder()
                .driverLogLevels(Map.of("string", "string"))
                .build())
            .mainClass("string")
            .mainJarFileUri("string")
            .properties(Map.of("string", "string"))
            .savepointUri("string")
            .build())
        .requestId("string")
        .scheduling(JobSchedulingArgs.builder()
            .maxFailuresPerHour(0)
            .maxFailuresTotal(0)
            .build())
        .sparkJob(SparkJobArgs.builder()
            .archiveUris("string")
            .args("string")
            .fileUris("string")
            .jarFileUris("string")
            .loggingConfig(LoggingConfigArgs.builder()
                .driverLogLevels(Map.of("string", "string"))
                .build())
            .mainClass("string")
            .mainJarFileUri("string")
            .properties(Map.of("string", "string"))
            .build())
        .sparkRJob(SparkRJobArgs.builder()
            .mainRFileUri("string")
            .archiveUris("string")
            .args("string")
            .fileUris("string")
            .loggingConfig(LoggingConfigArgs.builder()
                .driverLogLevels(Map.of("string", "string"))
                .build())
            .properties(Map.of("string", "string"))
            .build())
        .sparkSqlJob(SparkSqlJobArgs.builder()
            .jarFileUris("string")
            .loggingConfig(LoggingConfigArgs.builder()
                .driverLogLevels(Map.of("string", "string"))
                .build())
            .properties(Map.of("string", "string"))
            .queryFileUri("string")
            .queryList(QueryListArgs.builder()
                .queries("string")
                .build())
            .scriptVariables(Map.of("string", "string"))
            .build())
        .trinoJob(TrinoJobArgs.builder()
            .clientTags("string")
            .continueOnFailure(false)
            .loggingConfig(LoggingConfigArgs.builder()
                .driverLogLevels(Map.of("string", "string"))
                .build())
            .outputFormat("string")
            .properties(Map.of("string", "string"))
            .queryFileUri("string")
            .queryList(QueryListArgs.builder()
                .queries("string")
                .build())
            .build())
        .build());
    
    examplejob_resource_resource_from_dataprocv1 = google_native.dataproc.v1.Job("examplejobResourceResourceFromDataprocv1",
        placement=google_native.dataproc.v1.JobPlacementArgs(
            cluster_name="string",
            cluster_labels={
                "string": "string",
            },
        ),
        region="string",
        project="string",
        pyspark_job=google_native.dataproc.v1.PySparkJobArgs(
            main_python_file_uri="string",
            archive_uris=["string"],
            args=["string"],
            file_uris=["string"],
            jar_file_uris=["string"],
            logging_config=google_native.dataproc.v1.LoggingConfigArgs(
                driver_log_levels={
                    "string": "string",
                },
            ),
            properties={
                "string": "string",
            },
            python_file_uris=["string"],
        ),
        labels={
            "string": "string",
        },
        pig_job=google_native.dataproc.v1.PigJobArgs(
            continue_on_failure=False,
            jar_file_uris=["string"],
            logging_config=google_native.dataproc.v1.LoggingConfigArgs(
                driver_log_levels={
                    "string": "string",
                },
            ),
            properties={
                "string": "string",
            },
            query_file_uri="string",
            query_list=google_native.dataproc.v1.QueryListArgs(
                queries=["string"],
            ),
            script_variables={
                "string": "string",
            },
        ),
        hadoop_job=google_native.dataproc.v1.HadoopJobArgs(
            archive_uris=["string"],
            args=["string"],
            file_uris=["string"],
            jar_file_uris=["string"],
            logging_config=google_native.dataproc.v1.LoggingConfigArgs(
                driver_log_levels={
                    "string": "string",
                },
            ),
            main_class="string",
            main_jar_file_uri="string",
            properties={
                "string": "string",
            },
        ),
        presto_job=google_native.dataproc.v1.PrestoJobArgs(
            client_tags=["string"],
            continue_on_failure=False,
            logging_config=google_native.dataproc.v1.LoggingConfigArgs(
                driver_log_levels={
                    "string": "string",
                },
            ),
            output_format="string",
            properties={
                "string": "string",
            },
            query_file_uri="string",
            query_list=google_native.dataproc.v1.QueryListArgs(
                queries=["string"],
            ),
        ),
        driver_scheduling_config=google_native.dataproc.v1.DriverSchedulingConfigArgs(
            memory_mb=0,
            vcores=0,
        ),
        hive_job=google_native.dataproc.v1.HiveJobArgs(
            continue_on_failure=False,
            jar_file_uris=["string"],
            properties={
                "string": "string",
            },
            query_file_uri="string",
            query_list=google_native.dataproc.v1.QueryListArgs(
                queries=["string"],
            ),
            script_variables={
                "string": "string",
            },
        ),
        reference=google_native.dataproc.v1.JobReferenceArgs(
            job_id="string",
            project="string",
        ),
        flink_job=google_native.dataproc.v1.FlinkJobArgs(
            args=["string"],
            jar_file_uris=["string"],
            logging_config=google_native.dataproc.v1.LoggingConfigArgs(
                driver_log_levels={
                    "string": "string",
                },
            ),
            main_class="string",
            main_jar_file_uri="string",
            properties={
                "string": "string",
            },
            savepoint_uri="string",
        ),
        request_id="string",
        scheduling=google_native.dataproc.v1.JobSchedulingArgs(
            max_failures_per_hour=0,
            max_failures_total=0,
        ),
        spark_job=google_native.dataproc.v1.SparkJobArgs(
            archive_uris=["string"],
            args=["string"],
            file_uris=["string"],
            jar_file_uris=["string"],
            logging_config=google_native.dataproc.v1.LoggingConfigArgs(
                driver_log_levels={
                    "string": "string",
                },
            ),
            main_class="string",
            main_jar_file_uri="string",
            properties={
                "string": "string",
            },
        ),
        spark_r_job=google_native.dataproc.v1.SparkRJobArgs(
            main_r_file_uri="string",
            archive_uris=["string"],
            args=["string"],
            file_uris=["string"],
            logging_config=google_native.dataproc.v1.LoggingConfigArgs(
                driver_log_levels={
                    "string": "string",
                },
            ),
            properties={
                "string": "string",
            },
        ),
        spark_sql_job=google_native.dataproc.v1.SparkSqlJobArgs(
            jar_file_uris=["string"],
            logging_config=google_native.dataproc.v1.LoggingConfigArgs(
                driver_log_levels={
                    "string": "string",
                },
            ),
            properties={
                "string": "string",
            },
            query_file_uri="string",
            query_list=google_native.dataproc.v1.QueryListArgs(
                queries=["string"],
            ),
            script_variables={
                "string": "string",
            },
        ),
        trino_job=google_native.dataproc.v1.TrinoJobArgs(
            client_tags=["string"],
            continue_on_failure=False,
            logging_config=google_native.dataproc.v1.LoggingConfigArgs(
                driver_log_levels={
                    "string": "string",
                },
            ),
            output_format="string",
            properties={
                "string": "string",
            },
            query_file_uri="string",
            query_list=google_native.dataproc.v1.QueryListArgs(
                queries=["string"],
            ),
        ))
    
    const examplejobResourceResourceFromDataprocv1 = new google_native.dataproc.v1.Job("examplejobResourceResourceFromDataprocv1", {
        placement: {
            clusterName: "string",
            clusterLabels: {
                string: "string",
            },
        },
        region: "string",
        project: "string",
        pysparkJob: {
            mainPythonFileUri: "string",
            archiveUris: ["string"],
            args: ["string"],
            fileUris: ["string"],
            jarFileUris: ["string"],
            loggingConfig: {
                driverLogLevels: {
                    string: "string",
                },
            },
            properties: {
                string: "string",
            },
            pythonFileUris: ["string"],
        },
        labels: {
            string: "string",
        },
        pigJob: {
            continueOnFailure: false,
            jarFileUris: ["string"],
            loggingConfig: {
                driverLogLevels: {
                    string: "string",
                },
            },
            properties: {
                string: "string",
            },
            queryFileUri: "string",
            queryList: {
                queries: ["string"],
            },
            scriptVariables: {
                string: "string",
            },
        },
        hadoopJob: {
            archiveUris: ["string"],
            args: ["string"],
            fileUris: ["string"],
            jarFileUris: ["string"],
            loggingConfig: {
                driverLogLevels: {
                    string: "string",
                },
            },
            mainClass: "string",
            mainJarFileUri: "string",
            properties: {
                string: "string",
            },
        },
        prestoJob: {
            clientTags: ["string"],
            continueOnFailure: false,
            loggingConfig: {
                driverLogLevels: {
                    string: "string",
                },
            },
            outputFormat: "string",
            properties: {
                string: "string",
            },
            queryFileUri: "string",
            queryList: {
                queries: ["string"],
            },
        },
        driverSchedulingConfig: {
            memoryMb: 0,
            vcores: 0,
        },
        hiveJob: {
            continueOnFailure: false,
            jarFileUris: ["string"],
            properties: {
                string: "string",
            },
            queryFileUri: "string",
            queryList: {
                queries: ["string"],
            },
            scriptVariables: {
                string: "string",
            },
        },
        reference: {
            jobId: "string",
            project: "string",
        },
        flinkJob: {
            args: ["string"],
            jarFileUris: ["string"],
            loggingConfig: {
                driverLogLevels: {
                    string: "string",
                },
            },
            mainClass: "string",
            mainJarFileUri: "string",
            properties: {
                string: "string",
            },
            savepointUri: "string",
        },
        requestId: "string",
        scheduling: {
            maxFailuresPerHour: 0,
            maxFailuresTotal: 0,
        },
        sparkJob: {
            archiveUris: ["string"],
            args: ["string"],
            fileUris: ["string"],
            jarFileUris: ["string"],
            loggingConfig: {
                driverLogLevels: {
                    string: "string",
                },
            },
            mainClass: "string",
            mainJarFileUri: "string",
            properties: {
                string: "string",
            },
        },
        sparkRJob: {
            mainRFileUri: "string",
            archiveUris: ["string"],
            args: ["string"],
            fileUris: ["string"],
            loggingConfig: {
                driverLogLevels: {
                    string: "string",
                },
            },
            properties: {
                string: "string",
            },
        },
        sparkSqlJob: {
            jarFileUris: ["string"],
            loggingConfig: {
                driverLogLevels: {
                    string: "string",
                },
            },
            properties: {
                string: "string",
            },
            queryFileUri: "string",
            queryList: {
                queries: ["string"],
            },
            scriptVariables: {
                string: "string",
            },
        },
        trinoJob: {
            clientTags: ["string"],
            continueOnFailure: false,
            loggingConfig: {
                driverLogLevels: {
                    string: "string",
                },
            },
            outputFormat: "string",
            properties: {
                string: "string",
            },
            queryFileUri: "string",
            queryList: {
                queries: ["string"],
            },
        },
    });
    
    type: google-native:dataproc/v1:Job
    properties:
        driverSchedulingConfig:
            memoryMb: 0
            vcores: 0
        flinkJob:
            args:
                - string
            jarFileUris:
                - string
            loggingConfig:
                driverLogLevels:
                    string: string
            mainClass: string
            mainJarFileUri: string
            properties:
                string: string
            savepointUri: string
        hadoopJob:
            archiveUris:
                - string
            args:
                - string
            fileUris:
                - string
            jarFileUris:
                - string
            loggingConfig:
                driverLogLevels:
                    string: string
            mainClass: string
            mainJarFileUri: string
            properties:
                string: string
        hiveJob:
            continueOnFailure: false
            jarFileUris:
                - string
            properties:
                string: string
            queryFileUri: string
            queryList:
                queries:
                    - string
            scriptVariables:
                string: string
        labels:
            string: string
        pigJob:
            continueOnFailure: false
            jarFileUris:
                - string
            loggingConfig:
                driverLogLevels:
                    string: string
            properties:
                string: string
            queryFileUri: string
            queryList:
                queries:
                    - string
            scriptVariables:
                string: string
        placement:
            clusterLabels:
                string: string
            clusterName: string
        prestoJob:
            clientTags:
                - string
            continueOnFailure: false
            loggingConfig:
                driverLogLevels:
                    string: string
            outputFormat: string
            properties:
                string: string
            queryFileUri: string
            queryList:
                queries:
                    - string
        project: string
        pysparkJob:
            archiveUris:
                - string
            args:
                - string
            fileUris:
                - string
            jarFileUris:
                - string
            loggingConfig:
                driverLogLevels:
                    string: string
            mainPythonFileUri: string
            properties:
                string: string
            pythonFileUris:
                - string
        reference:
            jobId: string
            project: string
        region: string
        requestId: string
        scheduling:
            maxFailuresPerHour: 0
            maxFailuresTotal: 0
        sparkJob:
            archiveUris:
                - string
            args:
                - string
            fileUris:
                - string
            jarFileUris:
                - string
            loggingConfig:
                driverLogLevels:
                    string: string
            mainClass: string
            mainJarFileUri: string
            properties:
                string: string
        sparkRJob:
            archiveUris:
                - string
            args:
                - string
            fileUris:
                - string
            loggingConfig:
                driverLogLevels:
                    string: string
            mainRFileUri: string
            properties:
                string: string
        sparkSqlJob:
            jarFileUris:
                - string
            loggingConfig:
                driverLogLevels:
                    string: string
            properties:
                string: string
            queryFileUri: string
            queryList:
                queries:
                    - string
            scriptVariables:
                string: string
        trinoJob:
            clientTags:
                - string
            continueOnFailure: false
            loggingConfig:
                driverLogLevels:
                    string: string
            outputFormat: string
            properties:
                string: string
            queryFileUri: string
            queryList:
                queries:
                    - string
    

    Job Resource Properties

    To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.

    Inputs

    The Job resource accepts the following input properties:

    Placement Pulumi.GoogleNative.Dataproc.V1.Inputs.JobPlacement
    Job information, including how, when, and where to run the job.
    Region string
    DriverSchedulingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.DriverSchedulingConfig
    Optional. Driver scheduling configuration.
    FlinkJob Pulumi.GoogleNative.Dataproc.V1.Inputs.FlinkJob
    Optional. Job is a Flink job.
    HadoopJob Pulumi.GoogleNative.Dataproc.V1.Inputs.HadoopJob
    Optional. Job is a Hadoop job.
    HiveJob Pulumi.GoogleNative.Dataproc.V1.Inputs.HiveJob
    Optional. Job is a Hive job.
    Labels Dictionary<string, string>
    Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a job.
    PigJob Pulumi.GoogleNative.Dataproc.V1.Inputs.PigJob
    Optional. Job is a Pig job.
    PrestoJob Pulumi.GoogleNative.Dataproc.V1.Inputs.PrestoJob
    Optional. Job is a Presto job.
    Project string
    PysparkJob Pulumi.GoogleNative.Dataproc.V1.Inputs.PySparkJob
    Optional. Job is a PySpark job.
    Reference Pulumi.GoogleNative.Dataproc.V1.Inputs.JobReference
    Optional. The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id.
    RequestId string
    Optional. A unique id used to identify the request. If the server receives two SubmitJobRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.SubmitJobRequest) requests with the same id, the second request is ignored and the first Job created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
    Scheduling Pulumi.GoogleNative.Dataproc.V1.Inputs.JobScheduling
    Optional. Job scheduling configuration.
    SparkJob Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkJob
    Optional. Job is a Spark job.
    SparkRJob Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkRJob
    Optional. Job is a SparkR job.
    SparkSqlJob Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkSqlJob
    Optional. Job is a SparkSql job.
    TrinoJob Pulumi.GoogleNative.Dataproc.V1.Inputs.TrinoJob
    Optional. Job is a Trino job.
    Placement JobPlacementArgs
    Job information, including how, when, and where to run the job.
    Region string
    DriverSchedulingConfig DriverSchedulingConfigArgs
    Optional. Driver scheduling configuration.
    FlinkJob FlinkJobArgs
    Optional. Job is a Flink job.
    HadoopJob HadoopJobArgs
    Optional. Job is a Hadoop job.
    HiveJob HiveJobArgs
    Optional. Job is a Hive job.
    Labels map[string]string
    Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a job.
    PigJob PigJobArgs
    Optional. Job is a Pig job.
    PrestoJob PrestoJobArgs
    Optional. Job is a Presto job.
    Project string
    PysparkJob PySparkJobArgs
    Optional. Job is a PySpark job.
    Reference JobReferenceArgs
    Optional. The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id.
    RequestId string
    Optional. A unique id used to identify the request. If the server receives two SubmitJobRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.SubmitJobRequest) requests with the same id, the second request is ignored and the first Job created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
    Scheduling JobSchedulingArgs
    Optional. Job scheduling configuration.
    SparkJob SparkJobArgs
    Optional. Job is a Spark job.
    SparkRJob SparkRJobArgs
    Optional. Job is a SparkR job.
    SparkSqlJob SparkSqlJobArgs
    Optional. Job is a SparkSql job.
    TrinoJob TrinoJobArgs
    Optional. Job is a Trino job.
    placement JobPlacement
    Job information, including how, when, and where to run the job.
    region String
    driverSchedulingConfig DriverSchedulingConfig
    Optional. Driver scheduling configuration.
    flinkJob FlinkJob
    Optional. Job is a Flink job.
    hadoopJob HadoopJob
    Optional. Job is a Hadoop job.
    hiveJob HiveJob
    Optional. Job is a Hive job.
    labels Map<String,String>
    Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a job.
    pigJob PigJob
    Optional. Job is a Pig job.
    prestoJob PrestoJob
    Optional. Job is a Presto job.
    project String
    pysparkJob PySparkJob
    Optional. Job is a PySpark job.
    reference JobReference
    Optional. The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id.
    requestId String
    Optional. A unique id used to identify the request. If the server receives two SubmitJobRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.SubmitJobRequest) requests with the same id, the second request is ignored and the first Job created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
    scheduling JobScheduling
    Optional. Job scheduling configuration.
    sparkJob SparkJob
    Optional. Job is a Spark job.
    sparkRJob SparkRJob
    Optional. Job is a SparkR job.
    sparkSqlJob SparkSqlJob
    Optional. Job is a SparkSql job.
    trinoJob TrinoJob
    Optional. Job is a Trino job.
    placement JobPlacement
    Job information, including how, when, and where to run the job.
    region string
    driverSchedulingConfig DriverSchedulingConfig
    Optional. Driver scheduling configuration.
    flinkJob FlinkJob
    Optional. Job is a Flink job.
    hadoopJob HadoopJob
    Optional. Job is a Hadoop job.
    hiveJob HiveJob
    Optional. Job is a Hive job.
    labels {[key: string]: string}
    Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a job.
    pigJob PigJob
    Optional. Job is a Pig job.
    prestoJob PrestoJob
    Optional. Job is a Presto job.
    project string
    pysparkJob PySparkJob
    Optional. Job is a PySpark job.
    reference JobReference
    Optional. The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id.
    requestId string
    Optional. A unique id used to identify the request. If the server receives two SubmitJobRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.SubmitJobRequest) requests with the same id, the second request is ignored and the first Job created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
    scheduling JobScheduling
    Optional. Job scheduling configuration.
    sparkJob SparkJob
    Optional. Job is a Spark job.
    sparkRJob SparkRJob
    Optional. Job is a SparkR job.
    sparkSqlJob SparkSqlJob
    Optional. Job is a SparkSql job.
    trinoJob TrinoJob
    Optional. Job is a Trino job.
    placement JobPlacementArgs
    Job information, including how, when, and where to run the job.
    region str
    driver_scheduling_config DriverSchedulingConfigArgs
    Optional. Driver scheduling configuration.
    flink_job FlinkJobArgs
    Optional. Job is a Flink job.
    hadoop_job HadoopJobArgs
    Optional. Job is a Hadoop job.
    hive_job HiveJobArgs
    Optional. Job is a Hive job.
    labels Mapping[str, str]
    Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a job.
    pig_job PigJobArgs
    Optional. Job is a Pig job.
    presto_job PrestoJobArgs
    Optional. Job is a Presto job.
    project str
    pyspark_job PySparkJobArgs
    Optional. Job is a PySpark job.
    reference JobReferenceArgs
    Optional. The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id.
    request_id str
    Optional. A unique id used to identify the request. If the server receives two SubmitJobRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.SubmitJobRequest) requests with the same id, the second request is ignored and the first Job created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
    scheduling JobSchedulingArgs
    Optional. Job scheduling configuration.
    spark_job SparkJobArgs
    Optional. Job is a Spark job.
    spark_r_job SparkRJobArgs
    Optional. Job is a SparkR job.
    spark_sql_job SparkSqlJobArgs
    Optional. Job is a SparkSql job.
    trino_job TrinoJobArgs
    Optional. Job is a Trino job.
    placement Property Map
    Job information, including how, when, and where to run the job.
    region String
    driverSchedulingConfig Property Map
    Optional. Driver scheduling configuration.
    flinkJob Property Map
    Optional. Job is a Flink job.
    hadoopJob Property Map
    Optional. Job is a Hadoop job.
    hiveJob Property Map
    Optional. Job is a Hive job.
    labels Map<String>
    Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a job.
    pigJob Property Map
    Optional. Job is a Pig job.
    prestoJob Property Map
    Optional. Job is a Presto job.
    project String
    pysparkJob Property Map
    Optional. Job is a PySpark job.
    reference Property Map
    Optional. The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id.
    requestId String
    Optional. A unique id used to identify the request. If the server receives two SubmitJobRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.SubmitJobRequest) requests with the same id, the second request is ignored and the first Job created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters. A UUID-generation sketch follows this list.
    scheduling Property Map
    Optional. Job scheduling configuration.
    sparkJob Property Map
    Optional. Job is a Spark job.
    sparkRJob Property Map
    Optional. Job is a SparkR job.
    sparkSqlJob Property Map
    Optional. Job is a SparkSql job.
    trinoJob Property Map
    Optional. Job is a Trino job.
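
    As noted in the requestId description above, idempotent submission works best with a UUID. Here is a minimal sketch in TypeScript using Node's built-in crypto.randomUUID; the job arguments are placeholders, and in a long-lived Pulumi program you would persist the id (for example in stack config) so it does not change on every update:

    import { randomUUID } from "node:crypto";
    import * as google_native from "@pulumi/google-native";

    // A fresh UUID satisfies the documented constraints: letters, numbers,
    // underscores, hyphens, and at most 40 characters.
    const submissionId = randomUUID();

    const job = new google_native.dataproc.v1.Job("idempotent-job", {
        region: "us-central1",                    // placeholder
        placement: { clusterName: "my-cluster" }, // placeholder; cluster must exist
        sparkJob: {
            mainClass: "com.example.Main",        // placeholder
            jarFileUris: ["gs://my-bucket/app.jar"],
        },
        requestId: submissionId, // retried submissions with this id are de-duplicated
    });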

    Outputs

    All input properties are implicitly available as output properties. Additionally, the Job resource produces the following output properties:

    Done bool
    Indicates whether the job is completed. If the value is false, the job is still in progress. If true, the job is completed, and the status.state field indicates whether it was successful, failed, or cancelled.
    DriverControlFilesUri string
    If present, the location of miscellaneous control files which can be used as part of job setup and handling. If not present, control files might be placed in the same location as driver_output_uri.
    DriverOutputResourceUri string
    A URI pointing to the location of the stdout of the job's driver program.
    Id string
    The provider-assigned unique ID for this managed resource.
    JobUuid string
    A UUID that uniquely identifies a job within the project over time. This is in contrast to a user-settable reference.job_id that might be reused over time.
    Status Pulumi.GoogleNative.Dataproc.V1.Outputs.JobStatusResponse
    The job status. Additional application-specific status information might be contained in the type_job and yarn_applications fields.
    StatusHistory List<Pulumi.GoogleNative.Dataproc.V1.Outputs.JobStatusResponse>
    The previous job status.
    YarnApplications List<Pulumi.GoogleNative.Dataproc.V1.Outputs.YarnApplicationResponse>
    The collection of YARN applications spun up by this job. Beta Feature: This report is available for testing purposes only. It might be changed before final release.
    Done bool
    Indicates whether the job is completed. If the value is false, the job is still in progress. If true, the job is completed, and the status.state field indicates whether it was successful, failed, or cancelled.
    DriverControlFilesUri string
    If present, the location of miscellaneous control files which can be used as part of job setup and handling. If not present, control files might be placed in the same location as driver_output_uri.
    DriverOutputResourceUri string
    A URI pointing to the location of the stdout of the job's driver program.
    Id string
    The provider-assigned unique ID for this managed resource.
    JobUuid string
    A UUID that uniquely identifies a job within the project over time. This is in contrast to a user-settable reference.job_id that might be reused over time.
    Status JobStatusResponse
    The job status. Additional application-specific status information might be contained in the type_job and yarn_applications fields.
    StatusHistory []JobStatusResponse
    The previous job status.
    YarnApplications []YarnApplicationResponse
    The collection of YARN applications spun up by this job. Beta Feature: This report is available for testing purposes only. It might be changed before final release.
    done Boolean
    Indicates whether the job is completed. If the value is false, the job is still in progress. If true, the job is completed, and status.state field will indicate if it was successful, failed, or cancelled.
    driverControlFilesUri String
    If present, the location of miscellaneous control files which can be used as part of job setup and handling. If not present, control files might be placed in the same location as driver_output_uri.
    driverOutputResourceUri String
    A URI pointing to the location of the stdout of the job's driver program.
    id String
    The provider-assigned unique ID for this managed resource.
    jobUuid String
    A UUID that uniquely identifies a job within the project over time. This is in contrast to a user-settable reference.job_id that might be reused over time.
    status JobStatusResponse
    The job status. Additional application-specific status information might be contained in the type_job and yarn_applications fields.
    statusHistory List<JobStatusResponse>
    The previous job status.
    yarnApplications List<YarnApplicationResponse>
    The collection of YARN applications spun up by this job. Beta Feature: This report is available for testing purposes only. It might be changed before final release.
    done boolean
    Indicates whether the job is completed. If the value is false, the job is still in progress. If true, the job is completed, and status.state field will indicate if it was successful, failed, or cancelled.
    driverControlFilesUri string
    If present, the location of miscellaneous control files which can be used as part of job setup and handling. If not present, control files might be placed in the same location as driver_output_uri.
    driverOutputResourceUri string
    A URI pointing to the location of the stdout of the job's driver program.
    id string
    The provider-assigned unique ID for this managed resource.
    jobUuid string
    A UUID that uniquely identifies a job within the project over time. This is in contrast to a user-settable reference.job_id that might be reused over time.
    status JobStatusResponse
    The job status. Additional application-specific status information might be contained in the type_job and yarn_applications fields.
    statusHistory JobStatusResponse[]
    The previous job status.
    yarnApplications YarnApplicationResponse[]
    The collection of YARN applications spun up by this job. Beta Feature: This report is available for testing purposes only. It might be changed before final release.
    done bool
    Indicates whether the job is completed. If the value is false, the job is still in progress. If true, the job is completed, and status.state field will indicate if it was successful, failed, or cancelled.
    driver_control_files_uri str
    If present, the location of miscellaneous control files which can be used as part of job setup and handling. If not present, control files might be placed in the same location as driver_output_uri.
    driver_output_resource_uri str
    A URI pointing to the location of the stdout of the job's driver program.
    id str
    The provider-assigned unique ID for this managed resource.
    job_uuid str
    A UUID that uniquely identifies a job within the project over time. This is in contrast to a user-settable reference.job_id that might be reused over time.
    status JobStatusResponse
    The job status. Additional application-specific status information might be contained in the type_job and yarn_applications fields.
    status_history Sequence[JobStatusResponse]
    The previous job status.
    yarn_applications Sequence[YarnApplicationResponse]
    The collection of YARN applications spun up by this job. Beta Feature: This report is available for testing purposes only. It might be changed before final release.
    done Boolean
    Indicates whether the job is completed. If the value is false, the job is still in progress. If true, the job is completed, and status.state field will indicate if it was successful, failed, or cancelled.
    driverControlFilesUri String
    If present, the location of miscellaneous control files which can be used as part of job setup and handling. If not present, control files might be placed in the same location as driver_output_uri.
    driverOutputResourceUri String
    A URI pointing to the location of the stdout of the job's driver program.
    id String
    The provider-assigned unique ID for this managed resource.
    jobUuid String
    A UUID that uniquely identifies a job within the project over time. This is in contrast to a user-settable reference.job_id that might be reused over time.
    status Property Map
    The job status. Additional application-specific status information might be contained in the type_job and yarn_applications fields.
    statusHistory List<Property Map>
    The previous job status.
    yarnApplications List<Property Map>
    The collection of YARN applications spun up by this job. Beta Feature: This report is available for testing purposes only. It might be changed before final release.
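
    For example, these outputs can be exported from a program once a job resource has been declared. A minimal TypeScript sketch follows; the resource name, region, cluster name, and jar URI are illustrative, not values prescribed by this API:

    import * as google_native from "@pulumi/google-native";

    // Minimal sketch: names, region, and URIs are illustrative.
    const job = new google_native.dataproc.v1.Job("example-job", {
        region: "us-central1",
        placement: { clusterName: "example-cluster" },
        sparkJob: {
            mainClass: "org.apache.spark.examples.SparkPi",
            jarFileUris: ["file:///usr/lib/spark/examples/jars/spark-examples.jar"],
        },
    });

    // Output properties resolve once the job has been submitted.
    export const driverOutput = job.driverOutputResourceUri;
    export const jobState = job.status.apply(s => s.state);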

    Supporting Types

    DriverSchedulingConfig, DriverSchedulingConfigArgs

    MemoryMb int
    The amount of memory in MB the driver is requesting.
    Vcores int
    The number of vCPUs the driver is requesting.
    MemoryMb int
    The amount of memory in MB the driver is requesting.
    Vcores int
    The number of vCPUs the driver is requesting.
    memoryMb Integer
    The amount of memory in MB the driver is requesting.
    vcores Integer
    The number of vCPUs the driver is requesting.
    memoryMb number
    The amount of memory in MB the driver is requesting.
    vcores number
    The number of vCPUs the driver is requesting.
    memory_mb int
    The amount of memory in MB the driver is requesting.
    vcores int
    The number of vCPUs the driver is requesting.
    memoryMb Number
    The amount of memory in MB the driver is requesting.
    vcores Number
    The number of vCPUs the driver is requesting.
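
    As a minimal sketch of how this type is used (assuming a cluster configured with a driver node pool; the memory and vCPU values below are illustrative):

    import * as google_native from "@pulumi/google-native";

    // Minimal sketch: driverSchedulingConfig requests driver resources;
    // the cluster name and resource values are illustrative.
    const job = new google_native.dataproc.v1.Job("driver-pool-job", {
        region: "us-central1",
        placement: { clusterName: "example-cluster" },
        driverSchedulingConfig: {
            memoryMb: 2048, // driver memory request, in MB
            vcores: 2,      // driver vCPU request
        },
        sparkJob: {
            mainClass: "org.apache.spark.examples.SparkPi",
            jarFileUris: ["file:///usr/lib/spark/examples/jars/spark-examples.jar"],
        },
    });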

    DriverSchedulingConfigResponse, DriverSchedulingConfigResponseArgs

    MemoryMb int
    The amount of memory in MB the driver is requesting.
    Vcores int
    The number of vCPUs the driver is requesting.
    MemoryMb int
    The amount of memory in MB the driver is requesting.
    Vcores int
    The number of vCPUs the driver is requesting.
    memoryMb Integer
    The amount of memory in MB the driver is requesting.
    vcores Integer
    The number of vCPUs the driver is requesting.
    memoryMb number
    The amount of memory in MB the driver is requesting.
    vcores number
    The number of vCPUs the driver is requesting.
    memory_mb int
    The amount of memory in MB the driver is requesting.
    vcores int
    The number of vCPUs the driver is requesting.
    memoryMb Number
    The amount of memory in MB the driver is requesting.
    vcores Number
    The number of vCPUs the driver is requesting.

    FlinkJob, FlinkJobArgs

    Args List<string>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    JarFileUris List<string>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfig
    Optional. The runtime log config for job execution.
    MainClass string
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
    MainJarFileUri string
    The HCFS URI of the jar file that contains the main class.
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
    SavepointUri string
    Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
    Args []string
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    JarFileUris []string
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
    LoggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    MainClass string
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
    MainJarFileUri string
    The HCFS URI of the jar file that contains the main class.
    Properties map[string]string
    Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
    SavepointUri string
    Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    mainClass String
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
    mainJarFileUri String
    The HCFS URI of the jar file that contains the main class.
    properties Map<String,String>
    Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
    savepointUri String
    Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
    args string[]
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    jarFileUris string[]
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    mainClass string
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
    mainJarFileUri string
    The HCFS URI of the jar file that contains the main class.
    properties {[key: string]: string}
    Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
    savepointUri string
    Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
    args Sequence[str]
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    jar_file_uris Sequence[str]
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
    logging_config LoggingConfig
    Optional. The runtime log config for job execution.
    main_class str
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
    main_jar_file_uri str
    The HCFS URI of the jar file that contains the main class.
    properties Mapping[str, str]
    Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
    savepoint_uri str
    Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    mainClass String
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
    mainJarFileUri String
    The HCFS URI of the jar file that contains the main class.
    properties Map<String>
    Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
    savepointUri String
    Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
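
    A minimal TypeScript sketch of a Flink job, assuming a cluster with the Flink component enabled; the class name, arguments, property, and savepoint URI are illustrative:

    import * as google_native from "@pulumi/google-native";

    // Minimal sketch: cluster name and Flink settings are illustrative.
    const flinkJob = new google_native.dataproc.v1.Job("flink-example", {
        region: "us-central1",
        placement: { clusterName: "flink-cluster" },
        flinkJob: {
            // mainClass must be on the default CLASSPATH or provided via jarFileUris.
            mainClass: "org.apache.flink.examples.java.wordcount.WordCount",
            args: ["--input", "gs://example-bucket/input.txt"],
            properties: { "parallelism.default": "2" },
            savepointUri: "gs://example-bucket/savepoints/latest",
        },
    });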

    FlinkJobResponse, FlinkJobResponseArgs

    Args List<string>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    JarFileUris List<string>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfigResponse
    Optional. The runtime log config for job execution.
    MainClass string
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
    MainJarFileUri string
    The HCFS URI of the jar file that contains the main class.
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
    SavepointUri string
    Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
    Args []string
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    JarFileUris []string
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
    LoggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    MainClass string
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
    MainJarFileUri string
    The HCFS URI of the jar file that contains the main class.
    Properties map[string]string
    Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
    SavepointUri string
    Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    mainClass String
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
    mainJarFileUri String
    The HCFS URI of the jar file that contains the main class.
    properties Map<String,String>
    Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
    savepointUri String
    Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
    args string[]
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    jarFileUris string[]
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    mainClass string
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
    mainJarFileUri string
    The HCFS URI of the jar file that contains the main class.
    properties {[key: string]: string}
    Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
    savepointUri string
    Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
    args Sequence[str]
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    jar_file_uris Sequence[str]
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
    logging_config LoggingConfigResponse
    Optional. The runtime log config for job execution.
    main_class str
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
    main_jar_file_uri str
    The HCFS URI of the jar file that contains the main class.
    properties Mapping[str, str]
    Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
    savepoint_uri str
    Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    mainClass String
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
    mainJarFileUri String
    The HCFS URI of the jar file that contains the main class.
    properties Map<String>
    Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
    savepointUri String
    Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.

    HadoopJob, HadoopJobArgs

    ArchiveUris List<string>
    Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
    Args List<string>
    Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    FileUris List<string>
    Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
    JarFileUris List<string>
    Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfig
    Optional. The runtime log config for job execution.
    MainClass string
    The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
    MainJarFileUri string
    The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
    ArchiveUris []string
    Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
    Args []string
    Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    FileUris []string
    Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
    JarFileUris []string
    Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
    LoggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    MainClass string
    The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
    MainJarFileUri string
    The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
    Properties map[string]string
    Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    fileUris List<String>
    Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
    jarFileUris List<String>
    Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    mainClass String
    The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
    mainJarFileUri String
    The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
    properties Map<String,String>
    Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
    archiveUris string[]
    Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
    args string[]
    Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    fileUris string[]
    Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
    jarFileUris string[]
    Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    mainClass string
    The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
    mainJarFileUri string
    The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
    properties {[key: string]: string}
    Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
    archive_uris Sequence[str]
    Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
    args Sequence[str]
    Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    file_uris Sequence[str]
    Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
    jar_file_uris Sequence[str]
    Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
    logging_config LoggingConfig
    Optional. The runtime log config for job execution.
    main_class str
    The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
    main_jar_file_uri str
    The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
    properties Mapping[str, str]
    Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    fileUris List<String>
    Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
    jarFileUris List<String>
    Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    mainClass String
    The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
    mainJarFileUri String
    The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
    properties Map<String>
    Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
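
    A minimal TypeScript sketch of a Hadoop job. It reuses the example jar URI from the table above; the region, cluster, and bucket names are illustrative:

    import * as google_native from "@pulumi/google-native";

    // Minimal sketch: runs the wordcount example from the stock examples jar.
    const hadoopJob = new google_native.dataproc.v1.Job("hadoop-example", {
        region: "us-central1",
        placement: { clusterName: "example-cluster" },
        hadoopJob: {
            mainJarFileUri: "file:///usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar",
            args: ["wordcount", "gs://example-bucket/input/", "gs://example-bucket/output/"],
        },
    });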

    HadoopJobResponse, HadoopJobResponseArgs

    ArchiveUris List<string>
    Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
    Args List<string>
    Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    FileUris List<string>
    Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
    JarFileUris List<string>
    Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfigResponse
    Optional. The runtime log config for job execution.
    MainClass string
    The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
    MainJarFileUri string
    The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
    ArchiveUris []string
    Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
    Args []string
    Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    FileUris []string
    Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
    JarFileUris []string
    Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
    LoggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    MainClass string
    The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
    MainJarFileUri string
    The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
    Properties map[string]string
    Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    fileUris List<String>
    Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
    jarFileUris List<String>
    Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    mainClass String
    The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
    mainJarFileUri String
    The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
    properties Map<String,String>
    Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
    archiveUris string[]
    Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
    args string[]
    Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    fileUris string[]
    Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
    jarFileUris string[]
    Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    mainClass string
    The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
    mainJarFileUri string
    The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
    properties {[key: string]: string}
    Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
    archive_uris Sequence[str]
    Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
    args Sequence[str]
    Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    file_uris Sequence[str]
    Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
    jar_file_uris Sequence[str]
    Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
    logging_config LoggingConfigResponse
    Optional. The runtime log config for job execution.
    main_class str
    The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
    main_jar_file_uri str
    The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
    properties Mapping[str, str]
    Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    fileUris List<String>
    Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
    jarFileUris List<String>
    Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    mainClass String
    The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
    mainJarFileUri String
    The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
    properties Map<String>
    Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.

    HiveJob, HiveJobArgs

    ContinueOnFailure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    JarFileUris List<string>
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
    Properties Dictionary<string, string>
    Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
    QueryFileUri string
    The HCFS URI of the script that contains Hive queries.
    QueryList Pulumi.GoogleNative.Dataproc.V1.Inputs.QueryList
    A list of queries.
    ScriptVariables Dictionary<string, string>
    Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
    ContinueOnFailure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    JarFileUris []string
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
    Properties map[string]string
    Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
    QueryFileUri string
    The HCFS URI of the script that contains Hive queries.
    QueryList QueryList
    A list of queries.
    ScriptVariables map[string]string
    Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
    continueOnFailure Boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
    properties Map<String,String>
    Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
    queryFileUri String
    The HCFS URI of the script that contains Hive queries.
    queryList QueryList
    A list of queries.
    scriptVariables Map<String,String>
    Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
    continueOnFailure boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    jarFileUris string[]
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
    properties {[key: string]: string}
    Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
    queryFileUri string
    The HCFS URI of the script that contains Hive queries.
    queryList QueryList
    A list of queries.
    scriptVariables {[key: string]: string}
    Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
    continue_on_failure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    jar_file_uris Sequence[str]
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
    properties Mapping[str, str]
    Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
    query_file_uri str
    The HCFS URI of the script that contains Hive queries.
    query_list QueryList
    A list of queries.
    script_variables Mapping[str, str]
    Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
    continueOnFailure Boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
    properties Map<String>
    Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
    queryFileUri String
    The HCFS URI of the script that contains Hive queries.
    queryList Property Map
    A list of queries.
    scriptVariables Map<String>
    Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
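
    A minimal TypeScript sketch of an inline Hive job. The dataset and table names are illustrative, and the ${hiveconf:...} reference assumes the standard Hive convention for variables set via SET name="value";:

    import * as google_native from "@pulumi/google-native";

    // Minimal sketch: scriptVariables below is equivalent to
    // the Hive command: SET table="example_db.example_table";
    const hiveJob = new google_native.dataproc.v1.Job("hive-example", {
        region: "us-central1",
        placement: { clusterName: "example-cluster" },
        hiveJob: {
            queryList: {
                queries: ["SELECT COUNT(*) FROM ${hiveconf:table};"],
            },
            scriptVariables: { table: "example_db.example_table" },
            continueOnFailure: false,
        },
    });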

    HiveJobResponse, HiveJobResponseArgs

    ContinueOnFailure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    JarFileUris List<string>
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
    Properties Dictionary<string, string>
    Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
    QueryFileUri string
    The HCFS URI of the script that contains Hive queries.
    QueryList Pulumi.GoogleNative.Dataproc.V1.Inputs.QueryListResponse
    A list of queries.
    ScriptVariables Dictionary<string, string>
    Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
    ContinueOnFailure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    JarFileUris []string
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
    Properties map[string]string
    Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
    QueryFileUri string
    The HCFS URI of the script that contains Hive queries.
    QueryList QueryListResponse
    A list of queries.
    ScriptVariables map[string]string
    Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
    continueOnFailure Boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
    properties Map<String,String>
    Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
    queryFileUri String
    The HCFS URI of the script that contains Hive queries.
    queryList QueryListResponse
    A list of queries.
    scriptVariables Map<String,String>
    Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
    continueOnFailure boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    jarFileUris string[]
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
    properties {[key: string]: string}
    Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
    queryFileUri string
    The HCFS URI of the script that contains Hive queries.
    queryList QueryListResponse
    A list of queries.
    scriptVariables {[key: string]: string}
    Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
    continue_on_failure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    jar_file_uris Sequence[str]
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
    properties Mapping[str, str]
    Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
    query_file_uri str
    The HCFS URI of the script that contains Hive queries.
    query_list QueryListResponse
    A list of queries.
    script_variables Mapping[str, str]
    Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
    continueOnFailure Boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
    properties Map<String>
    Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
    queryFileUri String
    The HCFS URI of the script that contains Hive queries.
    queryList Property Map
    A list of queries.
    scriptVariables Map<String>
    Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).

    JobPlacement, JobPlacementArgs

    ClusterName string
    The name of the cluster where the job will be submitted.
    ClusterLabels Dictionary<string, string>
    Optional. Cluster labels to identify a cluster where the job will be submitted.
    ClusterName string
    The name of the cluster where the job will be submitted.
    ClusterLabels map[string]string
    Optional. Cluster labels to identify a cluster where the job will be submitted.
    clusterName String
    The name of the cluster where the job will be submitted.
    clusterLabels Map<String,String>
    Optional. Cluster labels to identify a cluster where the job will be submitted.
    clusterName string
    The name of the cluster where the job will be submitted.
    clusterLabels {[key: string]: string}
    Optional. Cluster labels to identify a cluster where the job will be submitted.
    cluster_name str
    The name of the cluster where the job will be submitted.
    cluster_labels Mapping[str, str]
    Optional. Cluster labels to identify a cluster where the job will be submitted.
    clusterName String
    The name of the cluster where the job will be submitted.
    clusterLabels Map<String>
    Optional. Cluster labels to identify a cluster where the job will be submitted.
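
    A minimal TypeScript sketch of job placement: clusterName names the target cluster, while the optional clusterLabels map additionally constrains selection. All values are illustrative:

    import * as google_native from "@pulumi/google-native";

    // Minimal sketch: placement by cluster name, with an optional label constraint.
    const placedJob = new google_native.dataproc.v1.Job("placed-job", {
        region: "us-central1",
        placement: {
            clusterName: "example-cluster",
            clusterLabels: { env: "dev" },
        },
        sparkSqlJob: {
            queryList: { queries: ["SHOW DATABASES;"] },
        },
    });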

    JobPlacementResponse, JobPlacementResponseArgs

    ClusterLabels Dictionary<string, string>
    Optional. Cluster labels to identify a cluster where the job will be submitted.
    ClusterName string
    The name of the cluster where the job will be submitted.
    ClusterUuid string
    A cluster UUID generated by the Dataproc service when the job is submitted.
    ClusterLabels map[string]string
    Optional. Cluster labels to identify a cluster where the job will be submitted.
    ClusterName string
    The name of the cluster where the job will be submitted.
    ClusterUuid string
    A cluster UUID generated by the Dataproc service when the job is submitted.
    clusterLabels Map<String,String>
    Optional. Cluster labels to identify a cluster where the job will be submitted.
    clusterName String
    The name of the cluster where the job will be submitted.
    clusterUuid String
    A cluster UUID generated by the Dataproc service when the job is submitted.
    clusterLabels {[key: string]: string}
    Optional. Cluster labels to identify a cluster where the job will be submitted.
    clusterName string
    The name of the cluster where the job will be submitted.
    clusterUuid string
    A cluster UUID generated by the Dataproc service when the job is submitted.
    cluster_labels Mapping[str, str]
    Optional. Cluster labels to identify a cluster where the job will be submitted.
    cluster_name str
    The name of the cluster where the job will be submitted.
    cluster_uuid str
    A cluster UUID generated by the Dataproc service when the job is submitted.
    clusterLabels Map<String>
    Optional. Cluster labels to identify a cluster where the job will be submitted.
    clusterName String
    The name of the cluster where the job will be submitted.
    clusterUuid String
    A cluster UUID generated by the Dataproc service when the job is submitted.

    JobReference, JobReferenceArgs

    JobId string
    Optional. The job ID, which must be unique within the project. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 100 characters. If not specified by the caller, the job ID will be provided by the server.
    Project string
    Optional. The ID of the Google Cloud Platform project that the job belongs to. If specified, must match the request project ID.
    JobId string
    Optional. The job ID, which must be unique within the project. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 100 characters. If not specified by the caller, the job ID will be provided by the server.
    Project string
    Optional. The ID of the Google Cloud Platform project that the job belongs to. If specified, must match the request project ID.
    jobId String
    Optional. The job ID, which must be unique within the project. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 100 characters. If not specified by the caller, the job ID will be provided by the server.
    project String
    Optional. The ID of the Google Cloud Platform project that the job belongs to. If specified, must match the request project ID.
    jobId string
    Optional. The job ID, which must be unique within the project. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 100 characters. If not specified by the caller, the job ID will be provided by the server.
    project string
    Optional. The ID of the Google Cloud Platform project that the job belongs to. If specified, must match the request project ID.
    job_id str
    Optional. The job ID, which must be unique within the project. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 100 characters. If not specified by the caller, the job ID will be provided by the server.
    project str
    Optional. The ID of the Google Cloud Platform project that the job belongs to. If specified, must match the request project ID.
    jobId String
    Optional. The job ID, which must be unique within the project. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 100 characters. If not specified by the caller, the job ID will be provided by the server.
    project String
    Optional. The ID of the Google Cloud Platform project that the job belongs to. If specified, must match the request project ID.
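
    A minimal TypeScript sketch of a caller-chosen job ID via reference; if reference is omitted, the server generates the ID. The ID shown and the other values are illustrative:

    import * as google_native from "@pulumi/google-native";

    // Minimal sketch: jobId must be unique within the project.
    const referencedJob = new google_native.dataproc.v1.Job("referenced-job", {
        region: "us-central1",
        placement: { clusterName: "example-cluster" },
        reference: {
            jobId: "nightly-etl-0001",
        },
        sparkJob: {
            mainClass: "org.apache.spark.examples.SparkPi",
            jarFileUris: ["file:///usr/lib/spark/examples/jars/spark-examples.jar"],
        },
    });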

    JobReferenceResponse, JobReferenceResponseArgs

    JobId string
    Optional. The job ID, which must be unique within the project. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 100 characters. If not specified by the caller, the job ID will be provided by the server.
    Project string
    Optional. The ID of the Google Cloud Platform project that the job belongs to. If specified, must match the request project ID.
    JobId string
    Optional. The job ID, which must be unique within the project. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 100 characters. If not specified by the caller, the job ID will be provided by the server.
    Project string
    Optional. The ID of the Google Cloud Platform project that the job belongs to. If specified, must match the request project ID.
    jobId String
    Optional. The job ID, which must be unique within the project. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 100 characters. If not specified by the caller, the job ID will be provided by the server.
    project String
    Optional. The ID of the Google Cloud Platform project that the job belongs to. If specified, must match the request project ID.
    jobId string
    Optional. The job ID, which must be unique within the project. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 100 characters. If not specified by the caller, the job ID will be provided by the server.
    project string
    Optional. The ID of the Google Cloud Platform project that the job belongs to. If specified, must match the request project ID.
    job_id str
    Optional. The job ID, which must be unique within the project. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 100 characters. If not specified by the caller, the job ID will be provided by the server.
    project str
    Optional. The ID of the Google Cloud Platform project that the job belongs to. If specified, must match the request project ID.
    jobId String
    Optional. The job ID, which must be unique within the project. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 100 characters. If not specified by the caller, the job ID will be provided by the server.
    project String
    Optional. The ID of the Google Cloud Platform project that the job belongs to. If specified, must match the request project ID.

    JobScheduling, JobSchedulingArgs

    MaxFailuresPerHour int
    Optional. Maximum number of times per hour a driver can be restarted as a result of the driver exiting with a non-zero code before the job is reported as failed. A job might be reported as thrashing if the driver exits with a non-zero code four times within a 10-minute window. Maximum value is 10. Note: This restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    MaxFailuresTotal int
    Optional. Maximum total number of times a driver can be restarted as a result of the driver exiting with a non-zero code. After the maximum number is reached, the job will be reported as failed. Maximum value is 240. Note: Currently, this restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    MaxFailuresPerHour int
    Optional. Maximum number of times per hour a driver can be restarted as a result of the driver exiting with a non-zero code before the job is reported as failed. A job might be reported as thrashing if the driver exits with a non-zero code four times within a 10-minute window. Maximum value is 10. Note: This restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    MaxFailuresTotal int
    Optional. Maximum total number of times a driver can be restarted as a result of the driver exiting with a non-zero code. After the maximum number is reached, the job will be reported as failed. Maximum value is 240. Note: Currently, this restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    maxFailuresPerHour Integer
    Optional. Maximum number of times per hour a driver can be restarted as a result of the driver exiting with a non-zero code before the job is reported as failed. A job might be reported as thrashing if the driver exits with a non-zero code four times within a 10-minute window. Maximum value is 10. Note: This restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    maxFailuresTotal Integer
    Optional. Maximum total number of times a driver can be restarted as a result of the driver exiting with a non-zero code. After the maximum number is reached, the job will be reported as failed. Maximum value is 240. Note: Currently, this restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    maxFailuresPerHour number
    Optional. Maximum number of times per hour a driver can be restarted as a result of the driver exiting with a non-zero code before the job is reported as failed. A job might be reported as thrashing if the driver exits with a non-zero code four times within a 10-minute window. Maximum value is 10. Note: This restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    maxFailuresTotal number
    Optional. Maximum total number of times a driver can be restarted as a result of the driver exiting with a non-zero code. After the maximum number is reached, the job will be reported as failed. Maximum value is 240. Note: Currently, this restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    max_failures_per_hour int
    Optional. Maximum number of times per hour a driver can be restarted as a result of the driver exiting with a non-zero code before the job is reported as failed. A job might be reported as thrashing if the driver exits with a non-zero code four times within a 10-minute window. Maximum value is 10. Note: This restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    max_failures_total int
    Optional. Maximum total number of times a driver can be restarted as a result of the driver exiting with a non-zero code. After the maximum number is reached, the job will be reported as failed. Maximum value is 240. Note: Currently, this restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    maxFailuresPerHour Number
    Optional. Maximum number of times per hour a driver can be restarted as a result of the driver exiting with a non-zero code before the job is reported as failed. A job might be reported as thrashing if the driver exits with a non-zero code four times within a 10-minute window. Maximum value is 10. Note: This restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    maxFailuresTotal Number
    Optional. Maximum total number of times a driver can be restarted as a result of the driver exiting with a non-zero code. After the maximum number is reached, the job will be reported as failed. Maximum value is 240. Note: Currently, this restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
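
    A hedged sketch of a restartable job (TypeScript): the driver may be restarted up to 5 times per hour and 20 times in total before the job is reported as failed. The region, cluster name, and URI are placeholders.

    import * as google_native from "@pulumi/google-native";

    const job = new google_native.dataproc.v1.Job("restartable-job", {
        region: "us-central1",
        placement: { clusterName: "my-cluster" },
        scheduling: {
            maxFailuresPerHour: 5, // must be <= 10
            maxFailuresTotal: 20,  // must be <= 240
        },
        pysparkJob: { mainPythonFileUri: "gs://my-bucket/main.py" },
    });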

    JobSchedulingResponse, JobSchedulingResponseArgs

    MaxFailuresPerHour int
    Optional. Maximum number of times per hour a driver can be restarted as a result of the driver exiting with a non-zero code before the job is reported as failed. A job might be reported as thrashing if the driver exits with a non-zero code four times within a 10-minute window. Maximum value is 10. Note: This restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    MaxFailuresTotal int
    Optional. Maximum total number of times a driver can be restarted as a result of the driver exiting with a non-zero code. After the maximum number is reached, the job will be reported as failed. Maximum value is 240. Note: Currently, this restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    MaxFailuresPerHour int
    Optional. Maximum number of times per hour a driver can be restarted as a result of the driver exiting with a non-zero code before the job is reported as failed. A job might be reported as thrashing if the driver exits with a non-zero code four times within a 10-minute window. Maximum value is 10. Note: This restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    MaxFailuresTotal int
    Optional. Maximum total number of times a driver can be restarted as a result of the driver exiting with a non-zero code. After the maximum number is reached, the job will be reported as failed. Maximum value is 240. Note: Currently, this restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    maxFailuresPerHour Integer
    Optional. Maximum number of times per hour a driver can be restarted as a result of the driver exiting with a non-zero code before the job is reported as failed. A job might be reported as thrashing if the driver exits with a non-zero code four times within a 10-minute window. Maximum value is 10. Note: This restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    maxFailuresTotal Integer
    Optional. Maximum total number of times a driver can be restarted as a result of the driver exiting with a non-zero code. After the maximum number is reached, the job will be reported as failed. Maximum value is 240. Note: Currently, this restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    maxFailuresPerHour number
    Optional. Maximum number of times per hour a driver can be restarted as a result of the driver exiting with a non-zero code before the job is reported as failed. A job might be reported as thrashing if the driver exits with a non-zero code four times within a 10-minute window. Maximum value is 10. Note: This restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    maxFailuresTotal number
    Optional. Maximum total number of times a driver can be restarted as a result of the driver exiting with a non-zero code. After the maximum number is reached, the job will be reported as failed. Maximum value is 240. Note: Currently, this restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    max_failures_per_hour int
    Optional. Maximum number of times per hour a driver can be restarted as a result of the driver exiting with a non-zero code before the job is reported as failed. A job might be reported as thrashing if the driver exits with a non-zero code four times within a 10-minute window. Maximum value is 10. Note: This restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    max_failures_total int
    Optional. Maximum total number of times a driver can be restarted as a result of the driver exiting with a non-zero code. After the maximum number is reached, the job will be reported as failed. Maximum value is 240. Note: Currently, this restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    maxFailuresPerHour Number
    Optional. Maximum number of times per hour a driver can be restarted as a result of the driver exiting with a non-zero code before the job is reported as failed. A job might be reported as thrashing if the driver exits with a non-zero code four times within a 10-minute window. Maximum value is 10. Note: This restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    maxFailuresTotal Number
    Optional. Maximum total number of times a driver can be restarted as a result of the driver exiting with a non-zero code. After the maximum number is reached, the job will be reported as failed. Maximum value is 240. Note: Currently, this restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).

    JobStatusResponse, JobStatusResponseArgs

    Details string
    Optional. Output only. Job state details, such as an error description if the state is ERROR.
    State string
    A state message specifying the overall job state.
    StateStartTime string
    The time when this state was entered.
    Substate string
    Additional state information, which includes status reported by the agent.
    Details string
    Optional. Output only. Job state details, such as an error description if the state is ERROR.
    State string
    A state message specifying the overall job state.
    StateStartTime string
    The time when this state was entered.
    Substate string
    Additional state information, which includes status reported by the agent.
    details String
    Optional. Output only. Job state details, such as an error description if the state is ERROR.
    state String
    A state message specifying the overall job state.
    stateStartTime String
    The time when this state was entered.
    substate String
    Additional state information, which includes status reported by the agent.
    details string
    Optional. Output only. Job state details, such as an error description if the state is ERROR.
    state string
    A state message specifying the overall job state.
    stateStartTime string
    The time when this state was entered.
    substate string
    Additional state information, which includes status reported by the agent.
    details str
    Optional. Output only. Job state details, such as an error description if the state is ERROR.
    state str
    A state message specifying the overall job state.
    state_start_time str
    The time when this state was entered.
    substate str
    Additional state information, which includes status reported by the agent.
    details String
    Optional. Output only. Job state details, such as an error description if the state is ERROR.
    state String
    A state message specifying the overall job state.
    stateStartTime String
    The time when this state was entered.
    substate String
    Additional state information, which includes status reported by the agent.
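
    JobStatusResponse is output-only. Assuming the resource surfaces it as the status output (google-native resources generally mirror the API's response fields), a sketch of reading the overall state in TypeScript; all names are placeholders:

    import * as google_native from "@pulumi/google-native";

    const job = new google_native.dataproc.v1.Job("status-demo", {
        region: "us-central1",
        placement: { clusterName: "my-cluster" },
        pysparkJob: { mainPythonFileUri: "gs://my-bucket/main.py" },
    });

    // status is populated by the service after submission; state is one of
    // the JobStatus enum values (e.g. PENDING, RUNNING, DONE, ERROR).
    export const jobState = job.status.apply(s => s.state);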

    LoggingConfig, LoggingConfigArgs

    DriverLogLevels Dictionary<string, string>
    The per-package log levels for the driver. This can include the "root" package name to configure the root logger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'.
    DriverLogLevels map[string]string
    The per-package log levels for the driver. This can include the "root" package name to configure the root logger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'.
    driverLogLevels Map<String,String>
    The per-package log levels for the driver. This can include the "root" package name to configure the root logger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'.
    driverLogLevels {[key: string]: string}
    The per-package log levels for the driver. This can include the "root" package name to configure the root logger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'.
    driver_log_levels Mapping[str, str]
    The per-package log levels for the driver. This can include the "root" package name to configure the root logger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'.
    driverLogLevels Map<String>
    The per-package log levels for the driver. This can include the "root" package name to configure the root logger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'.
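
    LoggingConfig is embedded in each job type rather than set on the Job itself. A minimal sketch (TypeScript) that quiets the root logger while turning up one package; the package names, cluster, and URI are illustrative placeholders:

    import * as google_native from "@pulumi/google-native";

    const job = new google_native.dataproc.v1.Job("logged-job", {
        region: "us-central1",
        placement: { clusterName: "my-cluster" },
        pysparkJob: {
            mainPythonFileUri: "gs://my-bucket/main.py", // placeholder URI
            loggingConfig: {
                driverLogLevels: {
                    "root": "WARN",              // tone down the root logger
                    "org.apache.spark": "DEBUG", // verbose output for one package
                },
            },
        },
    });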

    LoggingConfigResponse, LoggingConfigResponseArgs

    DriverLogLevels Dictionary<string, string>
    The per-package log levels for the driver. This can include "root" package name to configure rootLogger. Examples: - 'com.google = FATAL' - 'root = INFO' - 'org.apache = DEBUG'
    DriverLogLevels map[string]string
    The per-package log levels for the driver. This can include "root" package name to configure rootLogger. Examples: - 'com.google = FATAL' - 'root = INFO' - 'org.apache = DEBUG'
    driverLogLevels Map<String,String>
    The per-package log levels for the driver. This can include "root" package name to configure rootLogger. Examples: - 'com.google = FATAL' - 'root = INFO' - 'org.apache = DEBUG'
    driverLogLevels {[key: string]: string}
    The per-package log levels for the driver. This can include "root" package name to configure rootLogger. Examples: - 'com.google = FATAL' - 'root = INFO' - 'org.apache = DEBUG'
    driver_log_levels Mapping[str, str]
    The per-package log levels for the driver. This can include "root" package name to configure rootLogger. Examples: - 'com.google = FATAL' - 'root = INFO' - 'org.apache = DEBUG'
    driverLogLevels Map<String>
    The per-package log levels for the driver. This can include "root" package name to configure rootLogger. Examples: - 'com.google = FATAL' - 'root = INFO' - 'org.apache = DEBUG'

    PigJob, PigJobArgs

    ContinueOnFailure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    JarFileUris List<string>
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfig
    Optional. The runtime log config for job execution.
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
    QueryFileUri string
    The HCFS URI of the script that contains the Pig queries.
    QueryList Pulumi.GoogleNative.Dataproc.V1.Inputs.QueryList
    A list of queries.
    ScriptVariables Dictionary<string, string>
    Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
    ContinueOnFailure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    JarFileUris []string
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
    LoggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    Properties map[string]string
    Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
    QueryFileUri string
    The HCFS URI of the script that contains the Pig queries.
    QueryList QueryList
    A list of queries.
    ScriptVariables map[string]string
    Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
    continueOnFailure Boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    properties Map<String,String>
    Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
    queryFileUri String
    The HCFS URI of the script that contains the Pig queries.
    queryList QueryList
    A list of queries.
    scriptVariables Map<String,String>
    Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
    continueOnFailure boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    jarFileUris string[]
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    properties {[key: string]: string}
    Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
    queryFileUri string
    The HCFS URI of the script that contains the Pig queries.
    queryList QueryList
    A list of queries.
    scriptVariables {[key: string]: string}
    Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
    continue_on_failure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    jar_file_uris Sequence[str]
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
    logging_config LoggingConfig
    Optional. The runtime log config for job execution.
    properties Mapping[str, str]
    Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
    query_file_uri str
    The HCFS URI of the script that contains the Pig queries.
    query_list QueryList
    A list of queries.
    script_variables Mapping[str, str]
    Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
    continueOnFailure Boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    properties Map<String>
    Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
    queryFileUri String
    The HCFS URI of the script that contains the Pig queries.
    queryList Property Map
    A list of queries.
    scriptVariables Map<String>
    Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
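
    A hedged PigJob sketch (TypeScript): inline queries via queryList (queryFileUri is the alternative, and only one of the two should be set), plus a script variable equivalent to the Pig command name=[value]. Paths and names are placeholders.

    import * as google_native from "@pulumi/google-native";

    const job = new google_native.dataproc.v1.Job("pig-job", {
        region: "us-central1",
        placement: { clusterName: "my-cluster" },
        pigJob: {
            queryList: { queries: ["LINES = LOAD '$INPUT' AS (line:chararray);"] },
            scriptVariables: { INPUT: "gs://my-bucket/input" }, // => INPUT=[...]
            continueOnFailure: false, // default; stop at the first failed query
        },
    });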

    PigJobResponse, PigJobResponseArgs

    ContinueOnFailure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    JarFileUris List<string>
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfigResponse
    Optional. The runtime log config for job execution.
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
    QueryFileUri string
    The HCFS URI of the script that contains the Pig queries.
    QueryList Pulumi.GoogleNative.Dataproc.V1.Inputs.QueryListResponse
    A list of queries.
    ScriptVariables Dictionary<string, string>
    Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
    ContinueOnFailure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    JarFileUris []string
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
    LoggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    Properties map[string]string
    Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
    QueryFileUri string
    The HCFS URI of the script that contains the Pig queries.
    QueryList QueryListResponse
    A list of queries.
    ScriptVariables map[string]string
    Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
    continueOnFailure Boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    properties Map<String,String>
    Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
    queryFileUri String
    The HCFS URI of the script that contains the Pig queries.
    queryList QueryListResponse
    A list of queries.
    scriptVariables Map<String,String>
    Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
    continueOnFailure boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    jarFileUris string[]
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    properties {[key: string]: string}
    Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
    queryFileUri string
    The HCFS URI of the script that contains the Pig queries.
    queryList QueryListResponse
    A list of queries.
    scriptVariables {[key: string]: string}
    Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
    continue_on_failure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    jar_file_uris Sequence[str]
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
    logging_config LoggingConfigResponse
    Optional. The runtime log config for job execution.
    properties Mapping[str, str]
    Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
    query_file_uri str
    The HCFS URI of the script that contains the Pig queries.
    query_list QueryListResponse
    A list of queries.
    script_variables Mapping[str, str]
    Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
    continueOnFailure Boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    properties Map<String>
    Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
    queryFileUri String
    The HCFS URI of the script that contains the Pig queries.
    queryList Property Map
    A list of queries.
    scriptVariables Map<String>
    Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).

    PrestoJob, PrestoJobArgs

    ClientTags List<string>
    Optional. Presto client tags to attach to this query.
    ContinueOnFailure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfig
    Optional. The runtime log config for job execution.
    OutputFormat string
    Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats.
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI.
    QueryFileUri string
    The HCFS URI of the script that contains SQL queries.
    QueryList Pulumi.GoogleNative.Dataproc.V1.Inputs.QueryList
    A list of queries.
    ClientTags []string
    Optional. Presto client tags to attach to this query.
    ContinueOnFailure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    LoggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    OutputFormat string
    Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats.
    Properties map[string]string
    Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI.
    QueryFileUri string
    The HCFS URI of the script that contains SQL queries.
    QueryList QueryList
    A list of queries.
    clientTags List<String>
    Optional. Presto client tags to attach to this query.
    continueOnFailure Boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    outputFormat String
    Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats.
    properties Map<String,String>
    Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI.
    queryFileUri String
    The HCFS URI of the script that contains SQL queries.
    queryList QueryList
    A list of queries.
    clientTags string[]
    Optional. Presto client tags to attach to this query.
    continueOnFailure boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    outputFormat string
    Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats.
    properties {[key: string]: string}
    Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI.
    queryFileUri string
    The HCFS URI of the script that contains SQL queries.
    queryList QueryList
    A list of queries.
    client_tags Sequence[str]
    Optional. Presto client tags to attach to this query.
    continue_on_failure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    logging_config LoggingConfig
    Optional. The runtime log config for job execution.
    output_format str
    Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats.
    properties Mapping[str, str]
    Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI.
    query_file_uri str
    The HCFS URI of the script that contains SQL queries.
    query_list QueryList
    A list of queries.
    clientTags List<String>
    Optional. Presto client tags to attach to this query.
    continueOnFailure Boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    outputFormat String
    Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats.
    properties Map<String>
    Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI.
    queryFileUri String
    The HCFS URI of the script that contains SQL queries.
    queryList Property Map
    A list of queries.
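
    A hedged PrestoJob sketch (TypeScript), running a script from Cloud Storage with client tags and session properties; the URI, tags, and property values are placeholders:

    import * as google_native from "@pulumi/google-native";

    const job = new google_native.dataproc.v1.Job("presto-job", {
        region: "us-central1",
        placement: { clusterName: "my-presto-cluster" },
        prestoJob: {
            queryFileUri: "gs://my-bucket/report.sql",   // placeholder script URI
            clientTags: ["nightly", "reporting"],        // placeholder tags
            outputFormat: "CSV",                         // a Presto CLI output format
            properties: { "query_max_run_time": "30m" }, // like --session in the CLI
        },
    });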

    PrestoJobResponse, PrestoJobResponseArgs

    ClientTags List<string>
    Optional. Presto client tags to attach to this query.
    ContinueOnFailure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfigResponse
    Optional. The runtime log config for job execution.
    OutputFormat string
    Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats.
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI.
    QueryFileUri string
    The HCFS URI of the script that contains SQL queries.
    QueryList Pulumi.GoogleNative.Dataproc.V1.Inputs.QueryListResponse
    A list of queries.
    ClientTags []string
    Optional. Presto client tags to attach to this query.
    ContinueOnFailure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    LoggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    OutputFormat string
    Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats.
    Properties map[string]string
    Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI.
    QueryFileUri string
    The HCFS URI of the script that contains SQL queries.
    QueryList QueryListResponse
    A list of queries.
    clientTags List<String>
    Optional. Presto client tags to attach to this query.
    continueOnFailure Boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    outputFormat String
    Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats.
    properties Map<String,String>
    Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI.
    queryFileUri String
    The HCFS URI of the script that contains SQL queries.
    queryList QueryListResponse
    A list of queries.
    clientTags string[]
    Optional. Presto client tags to attach to this query.
    continueOnFailure boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    outputFormat string
    Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats.
    properties {[key: string]: string}
    Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI.
    queryFileUri string
    The HCFS URI of the script that contains SQL queries.
    queryList QueryListResponse
    A list of queries.
    client_tags Sequence[str]
    Optional. Presto client tags to attach to this query.
    continue_on_failure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    logging_config LoggingConfigResponse
    Optional. The runtime log config for job execution.
    output_format str
    Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats.
    properties Mapping[str, str]
    Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI.
    query_file_uri str
    The HCFS URI of the script that contains SQL queries.
    query_list QueryListResponse
    A list of queries.
    clientTags List<String>
    Optional. Presto client tags to attach to this query.
    continueOnFailure Boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    outputFormat String
    Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats.
    properties Map<String>
    Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI.
    queryFileUri String
    The HCFS URI of the script that contains SQL queries.
    queryList Property Map
    A list of queries.

    PySparkJob, PySparkJobArgs

    MainPythonFileUri string
    The HCFS URI of the main Python file to use as the driver. Must be a .py file.
    ArchiveUris List<string>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    Args List<string>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    FileUris List<string>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    JarFileUris List<string>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfig
    Optional. The runtime log config for job execution.
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    PythonFileUris List<string>
    Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
    MainPythonFileUri string
    The HCFS URI of the main Python file to use as the driver. Must be a .py file.
    ArchiveUris []string
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    Args []string
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    FileUris []string
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    JarFileUris []string
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
    LoggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    Properties map[string]string
    Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    PythonFileUris []string
    Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
    mainPythonFileUri String
    The HCFS URI of the main Python file to use as the driver. Must be a .py file.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris List<String>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    properties Map<String,String>
    Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    pythonFileUris List<String>
    Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
    mainPythonFileUri string
    The HCFS URI of the main Python file to use as the driver. Must be a .py file.
    archiveUris string[]
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args string[]
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris string[]
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    jarFileUris string[]
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    properties {[key: string]: string}
    Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    pythonFileUris string[]
    Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
    main_python_file_uri str
    The HCFS URI of the main Python file to use as the driver. Must be a .py file.
    archive_uris Sequence[str]
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args Sequence[str]
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    file_uris Sequence[str]
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    jar_file_uris Sequence[str]
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
    logging_config LoggingConfig
    Optional. The runtime log config for job execution.
    properties Mapping[str, str]
    Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    python_file_uris Sequence[str]
    Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
    mainPythonFileUri String
    The HCFS URI of the main Python file to use as the driver. Must be a .py file.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris List<String>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    properties Map<String>
    Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    pythonFileUris List<String>
    Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
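
    A hedged PySparkJob sketch (TypeScript) covering the driver script, supporting modules, and Spark properties; every URI is a placeholder. Note that --conf-style settings belong in properties, not args.

    import * as google_native from "@pulumi/google-native";

    const job = new google_native.dataproc.v1.Job("pyspark-job", {
        region: "us-central1",
        placement: { clusterName: "my-cluster" },
        pysparkJob: {
            mainPythonFileUri: "gs://my-bucket/main.py",    // must be a .py file
            pythonFileUris: ["gs://my-bucket/helpers.zip"], // .py, .egg, or .zip
            args: ["--date", "2023-11-29"],                 // plain driver args only
            properties: { "spark.executor.memory": "4g" },  // instead of --conf
        },
    });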

    PySparkJobResponse, PySparkJobResponseArgs

    ArchiveUris List<string>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    Args List<string>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    FileUris List<string>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    JarFileUris List<string>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfigResponse
    Optional. The runtime log config for job execution.
    MainPythonFileUri string
    The HCFS URI of the main Python file to use as the driver. Must be a .py file.
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    PythonFileUris List<string>
    Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
    ArchiveUris []string
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    Args []string
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    FileUris []string
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    JarFileUris []string
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
    LoggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    MainPythonFileUri string
    The HCFS URI of the main Python file to use as the driver. Must be a .py file.
    Properties map[string]string
    Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    PythonFileUris []string
    Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris List<String>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    mainPythonFileUri String
    The HCFS URI of the main Python file to use as the driver. Must be a .py file.
    properties Map<String,String>
    Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    pythonFileUris List<String>
    Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
    archiveUris string[]
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args string[]
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris string[]
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    jarFileUris string[]
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    mainPythonFileUri string
    The HCFS URI of the main Python file to use as the driver. Must be a .py file.
    properties {[key: string]: string}
    Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    pythonFileUris string[]
    Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
    archive_uris Sequence[str]
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args Sequence[str]
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    file_uris Sequence[str]
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    jar_file_uris Sequence[str]
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
    logging_config LoggingConfigResponse
    Optional. The runtime log config for job execution.
    main_python_file_uri str
    The HCFS URI of the main Python file to use as the driver. Must be a .py file.
    properties Mapping[str, str]
    Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    python_file_uris Sequence[str]
    Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris List<String>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    mainPythonFileUri String
    The HCFS URI of the main Python file to use as the driver. Must be a .py file.
    properties Map<String>
    Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    pythonFileUris List<String>
    Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
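
    The response properties above mirror the pysparkJob input type. As a point of reference, a minimal TypeScript sketch of a PySpark submission follows; the cluster name, region, and gs:// URIs are placeholders, not values taken from this reference.

    import * as google_native from "@pulumi/google-native";

    // Minimal PySpark job sketch. "example-cluster" and the gs:// URIs are
    // placeholders; substitute resources that exist in your project.
    const pysparkJob = new google_native.dataproc.v1.Job("pyspark-example", {
        region: "us-central1",
        placement: { clusterName: "example-cluster" },
        pysparkJob: {
            mainPythonFileUri: "gs://example-bucket/jobs/main.py", // must be a .py file
            pythonFileUris: ["gs://example-bucket/jobs/helpers.zip"],
            args: ["--input", "gs://example-bucket/data/"],
            properties: { "spark.executor.memory": "4g" },
        },
    });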

    QueryList, QueryListArgs

    Queries List<string>
    The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4" ] } }
    Queries []string
    The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4" ] } }
    queries List<String>
    The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4" ] } }
    queries string[]
    The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4" ] } }
    queries Sequence[str]
    The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4" ] } }
    queries List<String>
    The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4" ] } }
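
    The same QueryList shape can be supplied through Pulumi. A minimal TypeScript sketch, using a hypothetical cluster and the queries from the snippet above (note that "query3;query4" is a single list entry containing two queries):

    import * as google_native from "@pulumi/google-native";

    // Inline QueryList on a Hive job; cluster name and region are placeholders.
    const hiveJob = new google_native.dataproc.v1.Job("hive-example", {
        region: "us-central1",
        placement: { clusterName: "example-cluster" },
        hiveJob: {
            queryList: {
                queries: ["query1", "query2", "query3;query4"],
            },
        },
    });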

    QueryListResponse, QueryListResponseArgs

    Queries List<string>
    The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4" ] } }
    Queries []string
    The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4" ] } }
    queries List<String>
    The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4" ] } }
    queries string[]
    The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4" ] } }
    queries Sequence[str]
    The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4" ] } }
    queries List<String>
    The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4" ] } }

    SparkJob, SparkJobArgs

    ArchiveUris List<string>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    Args List<string>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    FileUris List<string>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    JarFileUris List<string>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfig
    Optional. The runtime log config for job execution.
    MainClass string
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in SparkJob.jar_file_uris.
    MainJarFileUri string
    The HCFS URI of the jar file that contains the main class.
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    ArchiveUris []string
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    Args []string
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    FileUris []string
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    JarFileUris []string
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
    LoggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    MainClass string
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in SparkJob.jar_file_uris.
    MainJarFileUri string
    The HCFS URI of the jar file that contains the main class.
    Properties map[string]string
    Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris List<String>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    mainClass String
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in SparkJob.jar_file_uris.
    mainJarFileUri String
    The HCFS URI of the jar file that contains the main class.
    properties Map<String,String>
    Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    archiveUris string[]
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args string[]
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris string[]
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    jarFileUris string[]
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    mainClass string
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in SparkJob.jar_file_uris.
    mainJarFileUri string
    The HCFS URI of the jar file that contains the main class.
    properties {[key: string]: string}
    Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    archive_uris Sequence[str]
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args Sequence[str]
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    file_uris Sequence[str]
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    jar_file_uris Sequence[str]
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
    logging_config LoggingConfig
    Optional. The runtime log config for job execution.
    main_class str
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in SparkJob.jar_file_uris.
    main_jar_file_uri str
    The HCFS URI of the jar file that contains the main class.
    properties Mapping[str, str]
    Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris List<String>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    mainClass String
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in SparkJob.jar_file_uris.
    mainJarFileUri String
    The HCFS URI of the jar file that contains the main class.
    properties Map<String>
    Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
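
    A minimal TypeScript sketch of a Spark job follows. The jar URI, main class, cluster name, and region are placeholders; per the descriptions above, the entry point is identified by either mainClass (with its jar available via jarFileUris or the default CLASSPATH) or mainJarFileUri.

    import * as google_native from "@pulumi/google-native";

    // Spark job sketch; all identifiers here are illustrative placeholders.
    const sparkJob = new google_native.dataproc.v1.Job("spark-example", {
        region: "us-central1",
        placement: { clusterName: "example-cluster" },
        sparkJob: {
            mainClass: "com.example.SparkApp",                       // driver entry point
            jarFileUris: ["gs://example-bucket/jars/spark-app.jar"], // jar containing the class
            args: ["1000"],
            properties: { "spark.executor.cores": "2" },
        },
    });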

    SparkJobResponse, SparkJobResponseArgs

    ArchiveUris List<string>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    Args List<string>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    FileUris List<string>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    JarFileUris List<string>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfigResponse
    Optional. The runtime log config for job execution.
    MainClass string
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in SparkJob.jar_file_uris.
    MainJarFileUri string
    The HCFS URI of the jar file that contains the main class.
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    ArchiveUris []string
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    Args []string
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    FileUris []string
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    JarFileUris []string
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
    LoggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    MainClass string
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in SparkJob.jar_file_uris.
    MainJarFileUri string
    The HCFS URI of the jar file that contains the main class.
    Properties map[string]string
    Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris List<String>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    mainClass String
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in SparkJob.jar_file_uris.
    mainJarFileUri String
    The HCFS URI of the jar file that contains the main class.
    properties Map<String,String>
    Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    archiveUris string[]
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args string[]
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris string[]
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    jarFileUris string[]
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    mainClass string
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in SparkJob.jar_file_uris.
    mainJarFileUri string
    The HCFS URI of the jar file that contains the main class.
    properties {[key: string]: string}
    Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    archive_uris Sequence[str]
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args Sequence[str]
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    file_uris Sequence[str]
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    jar_file_uris Sequence[str]
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
    logging_config LoggingConfigResponse
    Optional. The runtime log config for job execution.
    main_class str
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in SparkJob.jar_file_uris.
    main_jar_file_uri str
    The HCFS URI of the jar file that contains the main class.
    properties Mapping[str, str]
    Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris List<String>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    mainClass String
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in SparkJob.jar_file_uris.
    mainJarFileUri String
    The HCFS URI of the jar file that contains the main class.
    properties Map<String>
    Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.

    SparkRJob, SparkRJobArgs

    MainRFileUri string
    The HCFS URI of the main R file to use as the driver. Must be a .R file.
    ArchiveUris List<string>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    Args List<string>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    FileUris List<string>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfig
    Optional. The runtime log config for job execution.
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    MainRFileUri string
    The HCFS URI of the main R file to use as the driver. Must be a .R file.
    ArchiveUris []string
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    Args []string
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    FileUris []string
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    LoggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    Properties map[string]string
    Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    mainRFileUri String
    The HCFS URI of the main R file to use as the driver. Must be a .R file.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris List<String>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    properties Map<String,String>
    Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    mainRFileUri string
    The HCFS URI of the main R file to use as the driver. Must be a .R file.
    archiveUris string[]
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args string[]
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris string[]
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    properties {[key: string]: string}
    Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    main_r_file_uri str
    The HCFS URI of the main R file to use as the driver. Must be a .R file.
    archive_uris Sequence[str]
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args Sequence[str]
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    file_uris Sequence[str]
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    logging_config LoggingConfig
    Optional. The runtime log config for job execution.
    properties Mapping[str, str]
    Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    mainRFileUri String
    The HCFS URI of the main R file to use as the driver. Must be a .R file.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris List<String>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    properties Map<String>
    Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
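
    A corresponding TypeScript sketch for a SparkR job, again with placeholder names:

    import * as google_native from "@pulumi/google-native";

    // SparkR job sketch; the .R file URI and cluster name are placeholders.
    const sparkRJob = new google_native.dataproc.v1.Job("sparkr-example", {
        region: "us-central1",
        placement: { clusterName: "example-cluster" },
        sparkRJob: {
            mainRFileUri: "gs://example-bucket/jobs/analysis.R", // must be a .R file
            args: ["--iterations", "10"],
        },
    });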

    SparkRJobResponse, SparkRJobResponseArgs

    ArchiveUris List<string>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    Args List<string>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    FileUris List<string>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfigResponse
    Optional. The runtime log config for job execution.
    MainRFileUri string
    The HCFS URI of the main R file to use as the driver. Must be a .R file.
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    ArchiveUris []string
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    Args []string
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    FileUris []string
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    LoggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    MainRFileUri string
    The HCFS URI of the main R file to use as the driver. Must be a .R file.
    Properties map[string]string
    Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris List<String>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    mainRFileUri String
    The HCFS URI of the main R file to use as the driver. Must be a .R file.
    properties Map<String,String>
    Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    archiveUris string[]
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args string[]
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris string[]
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    mainRFileUri string
    The HCFS URI of the main R file to use as the driver. Must be a .R file.
    properties {[key: string]: string}
    Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    archive_uris Sequence[str]
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args Sequence[str]
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    file_uris Sequence[str]
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    logging_config LoggingConfigResponse
    Optional. The runtime log config for job execution.
    main_r_file_uri str
    The HCFS URI of the main R file to use as the driver. Must be a .R file.
    properties Mapping[str, str]
    Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris List<String>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    mainRFileUri String
    The HCFS URI of the main R file to use as the driver. Must be a .R file.
    properties Map<String>
    Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.

    SparkSqlJob, SparkSqlJobArgs

    JarFileUris List<string>
    Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfig
    Optional. The runtime log config for job execution.
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API might be overwritten.
    QueryFileUri string
    The HCFS URI of the script that contains SQL queries.
    QueryList Pulumi.GoogleNative.Dataproc.V1.Inputs.QueryList
    A list of queries.
    ScriptVariables Dictionary<string, string>
    Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
    JarFileUris []string
    Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
    LoggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    Properties map[string]string
    Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API might be overwritten.
    QueryFileUri string
    The HCFS URI of the script that contains SQL queries.
    QueryList QueryList
    A list of queries.
    ScriptVariables map[string]string
    Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    properties Map<String,String>
    Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API might be overwritten.
    queryFileUri String
    The HCFS URI of the script that contains SQL queries.
    queryList QueryList
    A list of queries.
    scriptVariables Map<String,String>
    Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
    jarFileUris string[]
    Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    properties {[key: string]: string}
    Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API might be overwritten.
    queryFileUri string
    The HCFS URI of the script that contains SQL queries.
    queryList QueryList
    A list of queries.
    scriptVariables {[key: string]: string}
    Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
    jar_file_uris Sequence[str]
    Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
    logging_config LoggingConfig
    Optional. The runtime log config for job execution.
    properties Mapping[str, str]
    Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API might be overwritten.
    query_file_uri str
    The HCFS URI of the script that contains SQL queries.
    query_list QueryList
    A list of queries.
    script_variables Mapping[str, str]
    Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    properties Map<String>
    Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API might be overwritten.
    queryFileUri String
    The HCFS URI of the script that contains SQL queries.
    queryList Property Map
    A list of queries.
    scriptVariables Map<String>
    Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
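
    A TypeScript sketch of a Spark SQL job that parameterizes a script with scriptVariables; the script URI, variable name, and value are placeholders:

    import * as google_native from "@pulumi/google-native";

    // Spark SQL job sketch; queryFileUri points at a hypothetical script.
    const sparkSqlJob = new google_native.dataproc.v1.Job("sparksql-example", {
        region: "us-central1",
        placement: { clusterName: "example-cluster" },
        sparkSqlJob: {
            queryFileUri: "gs://example-bucket/queries/report.sql",
            scriptVariables: { run_date: "2023-11-29" }, // equivalent to: SET run_date="2023-11-29";
        },
    });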

    SparkSqlJobResponse, SparkSqlJobResponseArgs

    JarFileUris List<string>
    Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfigResponse
    Optional. The runtime log config for job execution.
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API might be overwritten.
    QueryFileUri string
    The HCFS URI of the script that contains SQL queries.
    QueryList Pulumi.GoogleNative.Dataproc.V1.Inputs.QueryListResponse
    A list of queries.
    ScriptVariables Dictionary<string, string>
    Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
    JarFileUris []string
    Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
    LoggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    Properties map[string]string
    Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API might be overwritten.
    QueryFileUri string
    The HCFS URI of the script that contains SQL queries.
    QueryList QueryListResponse
    A list of queries.
    ScriptVariables map[string]string
    Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    properties Map<String,String>
    Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API might be overwritten.
    queryFileUri String
    The HCFS URI of the script that contains SQL queries.
    queryList QueryListResponse
    A list of queries.
    scriptVariables Map<String,String>
    Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
    jarFileUris string[]
    Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    properties {[key: string]: string}
    Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API might be overwritten.
    queryFileUri string
    The HCFS URI of the script that contains SQL queries.
    queryList QueryListResponse
    A list of queries.
    scriptVariables {[key: string]: string}
    Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
    jar_file_uris Sequence[str]
    Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
    logging_config LoggingConfigResponse
    Optional. The runtime log config for job execution.
    properties Mapping[str, str]
    Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API might be overwritten.
    query_file_uri str
    The HCFS URI of the script that contains SQL queries.
    query_list QueryListResponse
    A list of queries.
    script_variables Mapping[str, str]
    Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    properties Map<String>
    Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API might be overwritten.
    queryFileUri String
    The HCFS URI of the script that contains SQL queries.
    queryList Property Map
    A list of queries.
    scriptVariables Map<String>
    Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).

    TrinoJob, TrinoJobArgs

    ClientTags List<string>
    Optional. Trino client tags to attach to this query.
    ContinueOnFailure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfig
    Optional. The runtime log config for job execution.
    OutputFormat string
    Optional. The format in which query output will be displayed. See the Trino documentation for supported output formats.
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values. Used to set Trino session properties (https://trino.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Trino CLI.
    QueryFileUri string
    The HCFS URI of the script that contains SQL queries.
    QueryList Pulumi.GoogleNative.Dataproc.V1.Inputs.QueryList
    A list of queries.
    ClientTags []string
    Optional. Trino client tags to attach to this query.
    ContinueOnFailure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    LoggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    OutputFormat string
    Optional. The format in which query output will be displayed. See the Trino documentation for supported output formats.
    Properties map[string]string
    Optional. A mapping of property names to values. Used to set Trino session properties (https://trino.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Trino CLI.
    QueryFileUri string
    The HCFS URI of the script that contains SQL queries.
    QueryList QueryList
    A list of queries.
    clientTags List<String>
    Optional. Trino client tags to attach to this query.
    continueOnFailure Boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    outputFormat String
    Optional. The format in which query output will be displayed. See the Trino documentation for supported output formats.
    properties Map<String,String>
    Optional. A mapping of property names to values. Used to set Trino session properties (https://trino.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Trino CLI.
    queryFileUri String
    The HCFS URI of the script that contains SQL queries.
    queryList QueryList
    A list of queries.
    clientTags string[]
    Optional. Trino client tags to attach to this query.
    continueOnFailure boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    outputFormat string
    Optional. The format in which query output will be displayed. See the Trino documentation for supported output formats.
    properties {[key: string]: string}
    Optional. A mapping of property names to values. Used to set Trino session properties (https://trino.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Trino CLI.
    queryFileUri string
    The HCFS URI of the script that contains SQL queries.
    queryList QueryList
    A list of queries.
    client_tags Sequence[str]
    Optional. Trino client tags to attach to this query.
    continue_on_failure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    logging_config LoggingConfig
    Optional. The runtime log config for job execution.
    output_format str
    Optional. The format in which query output will be displayed. See the Trino documentation for supported output formats.
    properties Mapping[str, str]
    Optional. A mapping of property names to values. Used to set Trino session properties (https://trino.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Trino CLI.
    query_file_uri str
    The HCFS URI of the script that contains SQL queries.
    query_list QueryList
    A list of queries.
    clientTags List<String>
    Optional. Trino client tags to attach to this query.
    continueOnFailure Boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    outputFormat String
    Optional. The format in which query output will be displayed. See the Trino documentation for supported output formats.
    properties Map<String>
    Optional. A mapping of property names to values. Used to set Trino session properties (https://trino.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Trino CLI.
    queryFileUri String
    The HCFS URI of the script that contains SQL queries.
    queryList Property Map
    A list of queries.
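
    A TypeScript sketch of a Trino job; the tags, session property, and queries are illustrative, and the target cluster is assumed to have the Trino optional component enabled:

    import * as google_native from "@pulumi/google-native";

    // Trino job sketch; all values below are placeholders.
    const trinoJob = new google_native.dataproc.v1.Job("trino-example", {
        region: "us-central1",
        placement: { clusterName: "example-cluster" },
        trinoJob: {
            clientTags: ["reporting"],
            continueOnFailure: true, // keep executing independent queries after a failure
            outputFormat: "CSV",     // one of the Trino CLI output formats
            properties: { "query_max_run_time": "30m" }, // a Trino session property
            queryList: { queries: ["SELECT 1", "SELECT 2"] },
        },
    });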

    TrinoJobResponse, TrinoJobResponseArgs

    ClientTags List<string>
    Optional. Trino client tags to attach to this query.
    ContinueOnFailure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfigResponse
    Optional. The runtime log config for job execution.
    OutputFormat string
    Optional. The format in which query output will be displayed. See the Trino documentation for supported output formats.
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values. Used to set Trino session properties (https://trino.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Trino CLI.
    QueryFileUri string
    The HCFS URI of the script that contains SQL queries.
    QueryList Pulumi.GoogleNative.Dataproc.V1.Inputs.QueryListResponse
    A list of queries.
    ClientTags []string
    Optional. Trino client tags to attach to this query.
    ContinueOnFailure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    LoggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    OutputFormat string
    Optional. The format in which query output will be displayed. See the Trino documentation for supported output formats.
    Properties map[string]string
    Optional. A mapping of property names to values. Used to set Trino session properties (https://trino.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Trino CLI.
    QueryFileUri string
    The HCFS URI of the script that contains SQL queries.
    QueryList QueryListResponse
    A list of queries.
    clientTags List<String>
    Optional. Trino client tags to attach to this query.
    continueOnFailure Boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    outputFormat String
    Optional. The format in which query output will be displayed. See the Trino documentation for supported output formats.
    properties Map<String,String>
    Optional. A mapping of property names to values. Used to set Trino session properties (https://trino.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Trino CLI.
    queryFileUri String
    The HCFS URI of the script that contains SQL queries.
    queryList QueryListResponse
    A list of queries.
    clientTags string[]
    Optional. Trino client tags to attach to this query.
    continueOnFailure boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    outputFormat string
    Optional. The format in which query output will be displayed. See the Trino documentation for supported output formats.
    properties {[key: string]: string}
    Optional. A mapping of property names to values. Used to set Trino session properties (https://trino.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Trino CLI.
    queryFileUri string
    The HCFS URI of the script that contains SQL queries.
    queryList QueryListResponse
    A list of queries.
    client_tags Sequence[str]
    Optional. Trino client tags to attach to this query.
    continue_on_failure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    logging_config LoggingConfigResponse
    Optional. The runtime log config for job execution.
    output_format str
    Optional. The format in which query output will be displayed. See the Trino documentation for supported output formats.
    properties Mapping[str, str]
    Optional. A mapping of property names to values. Used to set Trino session properties (https://trino.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Trino CLI.
    query_file_uri str
    The HCFS URI of the script that contains SQL queries.
    query_list QueryListResponse
    A list of queries.
    clientTags List<String>
    Optional. Trino client tags to attach to this query.
    continueOnFailure Boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    outputFormat String
    Optional. The format in which query output will be displayed. See the Trino documentation for supported output formats.
    properties Map<String>
    Optional. A mapping of property names to values. Used to set Trino session properties (https://trino.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Trino CLI.
    queryFileUri String
    The HCFS URI of the script that contains SQL queries.
    queryList Property Map
    A list of queries.

    YarnApplicationResponse, YarnApplicationResponseArgs

    Name string
    The application name.
    Progress double
    The numerical progress of the application, from 1 to 100.
    State string
    The application state.
    TrackingUrl string
    Optional. The HTTP URL of the ApplicationMaster, HistoryServer, or TimelineServer that provides application-specific information. The URL uses the internal hostname, and requires a proxy server for resolution and, possibly, access.
    Name string
    The application name.
    Progress float64
    The numerical progress of the application, from 1 to 100.
    State string
    The application state.
    TrackingUrl string
    Optional. The HTTP URL of the ApplicationMaster, HistoryServer, or TimelineServer that provides application-specific information. The URL uses the internal hostname, and requires a proxy server for resolution and, possibly, access.
    name String
    The application name.
    progress Double
    The numerical progress of the application, from 1 to 100.
    state String
    The application state.
    trackingUrl String
    Optional. The HTTP URL of the ApplicationMaster, HistoryServer, or TimelineServer that provides application-specific information. The URL uses the internal hostname, and requires a proxy server for resolution and, possibly, access.
    name string
    The application name.
    progress number
    The numerical progress of the application, from 1 to 100.
    state string
    The application state.
    trackingUrl string
    Optional. The HTTP URL of the ApplicationMaster, HistoryServer, or TimelineServer that provides application-specific information. The URL uses the internal hostname, and requires a proxy server for resolution and, possibly, access.
    name str
    The application name.
    progress float
    The numerical progress of the application, from 1 to 100.
    state str
    The application state.
    tracking_url str
    Optional. The HTTP URL of the ApplicationMaster, HistoryServer, or TimelineServer that provides application-specific information. The URL uses the internal hostname, and requires a proxy server for resolution and, possibly, access.
    name String
    The application name.
    progress Number
    The numerical progress of the application, from 1 to 100.
    state String
    The application state.
    trackingUrl String
    Optional. The HTTP URL of the ApplicationMaster, HistoryServer, or TimelineServer that provides application-specific information. The URL uses the internal hostname, and requires a proxy server for resolution and, possibly, access.
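
    These values are output-only and surface on the Job resource once it has been submitted. A sketch of reading them, assuming the sparkJob resource from the Spark example above and that the provider exposes the field as yarnApplications:

    // Export the YARN tracking URLs; output-only fields resolve after submission.
    export const yarnTrackingUrls = sparkJob.yarnApplications.apply(
        apps => (apps ?? []).map(a => a.trackingUrl),
    );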

    Package Details

    Repository
    Google Cloud Native pulumi/pulumi-google-native
    License
    Apache-2.0