Deploy AWS EMR Serverless Applications

The aws:emrserverless/application:Application resource, part of the Pulumi AWS provider, defines an EMR Serverless application: the engine type, release version, and resource limits that jobs run against. This guide focuses on four capabilities: minimal application setup, capacity planning, observability integration, and runtime tuning.

EMR Serverless applications are standalone but may reference CloudWatch log groups or Prometheus endpoints for monitoring. The examples are intentionally small. Combine them with your own job submission logic and monitoring infrastructure.

Create a minimal application for Hive workloads

Most deployments start with a minimal application that specifies the engine type and EMR release version, creating the environment that jobs are submitted to.

TypeScript

import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

const example = new aws.emrserverless.Application("example", {
    name: "example",
    releaseLabel: "emr-6.6.0",
    type: "hive",
});

Python

import pulumi
import pulumi_aws as aws

example = aws.emrserverless.Application("example",
    name="example",
    release_label="emr-6.6.0",
    type="hive")

Go

package main

import (
	"github.com/pulumi/pulumi-aws/sdk/v7/go/aws/emrserverless"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		_, err := emrserverless.NewApplication(ctx, "example", &emrserverless.ApplicationArgs{
			Name:         pulumi.String("example"),
			ReleaseLabel: pulumi.String("emr-6.6.0"),
			Type:         pulumi.String("hive"),
		})
		if err != nil {
			return err
		}
		return nil
	})
}

C#

using System.Collections.Generic;
using System.Linq;
using Pulumi;
using Aws = Pulumi.Aws;

return await Deployment.RunAsync(() => 
{
    var example = new Aws.EmrServerless.Application("example", new()
    {
        Name = "example",
        ReleaseLabel = "emr-6.6.0",
        Type = "hive",
    });

});

Java

package generated_program;

import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.aws.emrserverless.Application;
import com.pulumi.aws.emrserverless.ApplicationArgs;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;

public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        var example = new Application("example", ApplicationArgs.builder()
            .name("example")
            .releaseLabel("emr-6.6.0")
            .type("hive")
            .build());

    }
}

YAML

resources:
  example:
    type: aws:emrserverless:Application
    properties:
      name: example
      releaseLabel: emr-6.6.0
      type: hive

The type property selects the engine (spark or hive). The releaseLabel pins the EMR version. The name identifies the application for job submissions. Without capacity configuration, EMR Serverless allocates resources dynamically as jobs arrive.
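
Job submissions reference the application by its ID. A small addition to the TypeScript example above (a sketch; the output names are arbitrary) exports the ID and ARN as stack outputs for job-submission tooling:

// Expose the application's ID and ARN as stack outputs.
export const applicationId = example.id;
export const applicationArn = example.arn;

The aws emr-serverless start-job-run CLI command, for example, takes this ID as its application-id argument.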

Pre-warm workers with initial capacity

Applications that need predictable startup times can pre-allocate workers before jobs arrive, reducing cold-start latency.

TypeScript

import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

const example = new aws.emrserverless.Application("example", {
    name: "example",
    releaseLabel: "emr-6.6.0",
    type: "hive",
    initialCapacities: [{
        initialCapacityType: "HiveDriver",
        initialCapacityConfig: {
            workerCount: 1,
            workerConfiguration: {
                cpu: "2 vCPU",
                memory: "10 GB",
            },
        },
    }],
});

Python

import pulumi
import pulumi_aws as aws

example = aws.emrserverless.Application("example",
    name="example",
    release_label="emr-6.6.0",
    type="hive",
    initial_capacities=[{
        "initial_capacity_type": "HiveDriver",
        "initial_capacity_config": {
            "worker_count": 1,
            "worker_configuration": {
                "cpu": "2 vCPU",
                "memory": "10 GB",
            },
        },
    }])

Go

package main

import (
	"github.com/pulumi/pulumi-aws/sdk/v7/go/aws/emrserverless"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		_, err := emrserverless.NewApplication(ctx, "example", &emrserverless.ApplicationArgs{
			Name:         pulumi.String("example"),
			ReleaseLabel: pulumi.String("emr-6.6.0"),
			Type:         pulumi.String("hive"),
			InitialCapacities: emrserverless.ApplicationInitialCapacityArray{
				&emrserverless.ApplicationInitialCapacityArgs{
					InitialCapacityType: pulumi.String("HiveDriver"),
					InitialCapacityConfig: &emrserverless.ApplicationInitialCapacityInitialCapacityConfigArgs{
						WorkerCount: pulumi.Int(1),
						WorkerConfiguration: &emrserverless.ApplicationInitialCapacityInitialCapacityConfigWorkerConfigurationArgs{
							Cpu:    pulumi.String("2 vCPU"),
							Memory: pulumi.String("10 GB"),
						},
					},
				},
			},
		})
		if err != nil {
			return err
		}
		return nil
	})
}

C#

using System.Collections.Generic;
using System.Linq;
using Pulumi;
using Aws = Pulumi.Aws;

return await Deployment.RunAsync(() => 
{
    var example = new Aws.EmrServerless.Application("example", new()
    {
        Name = "example",
        ReleaseLabel = "emr-6.6.0",
        Type = "hive",
        InitialCapacities = new[]
        {
            new Aws.EmrServerless.Inputs.ApplicationInitialCapacityArgs
            {
                InitialCapacityType = "HiveDriver",
                InitialCapacityConfig = new Aws.EmrServerless.Inputs.ApplicationInitialCapacityInitialCapacityConfigArgs
                {
                    WorkerCount = 1,
                    WorkerConfiguration = new Aws.EmrServerless.Inputs.ApplicationInitialCapacityInitialCapacityConfigWorkerConfigurationArgs
                    {
                        Cpu = "2 vCPU",
                        Memory = "10 GB",
                    },
                },
            },
        },
    });

});

Java

package generated_program;

import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.aws.emrserverless.Application;
import com.pulumi.aws.emrserverless.ApplicationArgs;
import com.pulumi.aws.emrserverless.inputs.ApplicationInitialCapacityArgs;
import com.pulumi.aws.emrserverless.inputs.ApplicationInitialCapacityInitialCapacityConfigArgs;
import com.pulumi.aws.emrserverless.inputs.ApplicationInitialCapacityInitialCapacityConfigWorkerConfigurationArgs;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;

public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        var example = new Application("example", ApplicationArgs.builder()
            .name("example")
            .releaseLabel("emr-6.6.0")
            .type("hive")
            .initialCapacities(ApplicationInitialCapacityArgs.builder()
                .initialCapacityType("HiveDriver")
                .initialCapacityConfig(ApplicationInitialCapacityInitialCapacityConfigArgs.builder()
                    .workerCount(1)
                    .workerConfiguration(ApplicationInitialCapacityInitialCapacityConfigWorkerConfigurationArgs.builder()
                        .cpu("2 vCPU")
                        .memory("10 GB")
                        .build())
                    .build())
                .build())
            .build());

    }
}

YAML

resources:
  example:
    type: aws:emrserverless:Application
    properties:
      name: example
      releaseLabel: emr-6.6.0
      type: hive
      initialCapacities:
        - initialCapacityType: HiveDriver
          initialCapacityConfig:
            workerCount: 1
            workerConfiguration:
              cpu: 2 vCPU
              memory: 10 GB

The initialCapacities property pre-warms workers of a specific type. The initialCapacityType identifies the worker role (HiveDriver or TezTask for Hive applications; Driver or Executor for Spark). The workerConfiguration sets CPU and memory per worker. EMR keeps these workers running until the application stops, ensuring the first job starts immediately.
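
For Spark applications, the same pattern uses the Spark worker types. A TypeScript sketch (the application name, worker counts, and sizes shown are illustrative assumptions):

const sparkPrewarmed = new aws.emrserverless.Application("spark-prewarmed", {
    name: "spark-prewarmed",
    releaseLabel: "emr-6.6.0",
    type: "spark",
    initialCapacities: [
        {
            // One pre-warmed driver so job startup is not gated on provisioning.
            initialCapacityType: "Driver",
            initialCapacityConfig: {
                workerCount: 1,
                workerConfiguration: { cpu: "2 vCPU", memory: "10 GB" },
            },
        },
        {
            // A small pool of pre-warmed executors for the first tasks.
            initialCapacityType: "Executor",
            initialCapacityConfig: {
                workerCount: 2,
                workerConfiguration: { cpu: "4 vCPU", memory: "16 GB" },
            },
        },
    ],
});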

Cap resource consumption with maximum capacity

Cost-conscious teams set upper bounds on CPU and memory to prevent runaway resource usage when jobs scale out.

TypeScript

import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

const example = new aws.emrserverless.Application("example", {
    name: "example",
    releaseLabel: "emr-6.6.0",
    type: "hive",
    maximumCapacity: {
        cpu: "2 vCPU",
        memory: "10 GB",
    },
});

Python

import pulumi
import pulumi_aws as aws

example = aws.emrserverless.Application("example",
    name="example",
    release_label="emr-6.6.0",
    type="hive",
    maximum_capacity={
        "cpu": "2 vCPU",
        "memory": "10 GB",
    })

Go

package main

import (
	"github.com/pulumi/pulumi-aws/sdk/v7/go/aws/emrserverless"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		_, err := emrserverless.NewApplication(ctx, "example", &emrserverless.ApplicationArgs{
			Name:         pulumi.String("example"),
			ReleaseLabel: pulumi.String("emr-6.6.0"),
			Type:         pulumi.String("hive"),
			MaximumCapacity: &emrserverless.ApplicationMaximumCapacityArgs{
				Cpu:    pulumi.String("2 vCPU"),
				Memory: pulumi.String("10 GB"),
			},
		})
		if err != nil {
			return err
		}
		return nil
	})
}

C#

using System.Collections.Generic;
using System.Linq;
using Pulumi;
using Aws = Pulumi.Aws;

return await Deployment.RunAsync(() => 
{
    var example = new Aws.EmrServerless.Application("example", new()
    {
        Name = "example",
        ReleaseLabel = "emr-6.6.0",
        Type = "hive",
        MaximumCapacity = new Aws.EmrServerless.Inputs.ApplicationMaximumCapacityArgs
        {
            Cpu = "2 vCPU",
            Memory = "10 GB",
        },
    });

});

Java

package generated_program;

import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.aws.emrserverless.Application;
import com.pulumi.aws.emrserverless.ApplicationArgs;
import com.pulumi.aws.emrserverless.inputs.ApplicationMaximumCapacityArgs;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;

public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        var example = new Application("example", ApplicationArgs.builder()
            .name("example")
            .releaseLabel("emr-6.6.0")
            .type("hive")
            .maximumCapacity(ApplicationMaximumCapacityArgs.builder()
                .cpu("2 vCPU")
                .memory("10 GB")
                .build())
            .build());

    }
}

YAML

resources:
  example:
    type: aws:emrserverless:Application
    properties:
      name: example
      releaseLabel: emr-6.6.0
      type: hive
      maximumCapacity:
        cpu: 2 vCPU
        memory: 10 GB

The maximumCapacity property limits total resources across all workers. Once the application hits either the CPU or memory limit, no new workers start. This prevents unexpected costs from large jobs but may cause jobs to queue if capacity is exhausted.
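
Initial and maximum capacity are often set together: pre-warmed workers absorb the first job while the ceiling bounds scale-out. A TypeScript sketch combining the two (the limits shown are illustrative; the maximum must be at least as large as the pre-allocated total):

const bounded = new aws.emrserverless.Application("bounded", {
    name: "bounded",
    releaseLabel: "emr-6.6.0",
    type: "hive",
    // Keep one warm driver for fast starts...
    initialCapacities: [{
        initialCapacityType: "HiveDriver",
        initialCapacityConfig: {
            workerCount: 1,
            workerConfiguration: { cpu: "2 vCPU", memory: "10 GB" },
        },
    }],
    // ...but never exceed 16 vCPU / 80 GB across all workers.
    maximumCapacity: {
        cpu: "16 vCPU",
        memory: "80 GB",
    },
});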

Stream logs and metrics to observability systems

Production applications route driver and executor logs to CloudWatch, enable managed persistence for Spark UI access, and push metrics to Prometheus.

TypeScript

import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

const example = new aws.emrserverless.Application("example", {
    name: "example",
    releaseLabel: "emr-7.1.0",
    type: "spark",
    monitoringConfiguration: {
        cloudwatchLoggingConfiguration: {
            enabled: true,
            logGroupName: "/aws/emr-serverless/example",
            logStreamNamePrefix: "spark-logs",
            logTypes: [
                {
                    name: "SPARK_DRIVER",
                    values: [
                        "STDOUT",
                        "STDERR",
                    ],
                },
                {
                    name: "SPARK_EXECUTOR",
                    values: ["STDOUT"],
                },
            ],
        },
        managedPersistenceMonitoringConfiguration: {
            enabled: true,
        },
        prometheusMonitoringConfiguration: {
            remoteWriteUrl: "https://prometheus-remote-write-endpoint.example.com/api/v1/write",
        },
    },
});

Python

import pulumi
import pulumi_aws as aws

example = aws.emrserverless.Application("example",
    name="example",
    release_label="emr-7.1.0",
    type="spark",
    monitoring_configuration={
        "cloudwatch_logging_configuration": {
            "enabled": True,
            "log_group_name": "/aws/emr-serverless/example",
            "log_stream_name_prefix": "spark-logs",
            "log_types": [
                {
                    "name": "SPARK_DRIVER",
                    "values": [
                        "STDOUT",
                        "STDERR",
                    ],
                },
                {
                    "name": "SPARK_EXECUTOR",
                    "values": ["STDOUT"],
                },
            ],
        },
        "managed_persistence_monitoring_configuration": {
            "enabled": True,
        },
        "prometheus_monitoring_configuration": {
            "remote_write_url": "https://prometheus-remote-write-endpoint.example.com/api/v1/write",
        },
    })

Go

package main

import (
	"github.com/pulumi/pulumi-aws/sdk/v7/go/aws/emrserverless"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		_, err := emrserverless.NewApplication(ctx, "example", &emrserverless.ApplicationArgs{
			Name:         pulumi.String("example"),
			ReleaseLabel: pulumi.String("emr-7.1.0"),
			Type:         pulumi.String("spark"),
			MonitoringConfiguration: &emrserverless.ApplicationMonitoringConfigurationArgs{
				CloudwatchLoggingConfiguration: &emrserverless.ApplicationMonitoringConfigurationCloudwatchLoggingConfigurationArgs{
					Enabled:             pulumi.Bool(true),
					LogGroupName:        pulumi.String("/aws/emr-serverless/example"),
					LogStreamNamePrefix: pulumi.String("spark-logs"),
					LogTypes: emrserverless.ApplicationMonitoringConfigurationCloudwatchLoggingConfigurationLogTypeArray{
						&emrserverless.ApplicationMonitoringConfigurationCloudwatchLoggingConfigurationLogTypeArgs{
							Name: pulumi.String("SPARK_DRIVER"),
							Values: pulumi.StringArray{
								pulumi.String("STDOUT"),
								pulumi.String("STDERR"),
							},
						},
						&emrserverless.ApplicationMonitoringConfigurationCloudwatchLoggingConfigurationLogTypeArgs{
							Name: pulumi.String("SPARK_EXECUTOR"),
							Values: pulumi.StringArray{
								pulumi.String("STDOUT"),
							},
						},
					},
				},
				ManagedPersistenceMonitoringConfiguration: &emrserverless.ApplicationMonitoringConfigurationManagedPersistenceMonitoringConfigurationArgs{
					Enabled: pulumi.Bool(true),
				},
				PrometheusMonitoringConfiguration: &emrserverless.ApplicationMonitoringConfigurationPrometheusMonitoringConfigurationArgs{
					RemoteWriteUrl: pulumi.String("https://prometheus-remote-write-endpoint.example.com/api/v1/write"),
				},
			},
		})
		if err != nil {
			return err
		}
		return nil
	})
}

C#

using System.Collections.Generic;
using System.Linq;
using Pulumi;
using Aws = Pulumi.Aws;

return await Deployment.RunAsync(() => 
{
    var example = new Aws.EmrServerless.Application("example", new()
    {
        Name = "example",
        ReleaseLabel = "emr-7.1.0",
        Type = "spark",
        MonitoringConfiguration = new Aws.EmrServerless.Inputs.ApplicationMonitoringConfigurationArgs
        {
            CloudwatchLoggingConfiguration = new Aws.EmrServerless.Inputs.ApplicationMonitoringConfigurationCloudwatchLoggingConfigurationArgs
            {
                Enabled = true,
                LogGroupName = "/aws/emr-serverless/example",
                LogStreamNamePrefix = "spark-logs",
                LogTypes = new[]
                {
                    new Aws.EmrServerless.Inputs.ApplicationMonitoringConfigurationCloudwatchLoggingConfigurationLogTypeArgs
                    {
                        Name = "SPARK_DRIVER",
                        Values = new[]
                        {
                            "STDOUT",
                            "STDERR",
                        },
                    },
                    new Aws.EmrServerless.Inputs.ApplicationMonitoringConfigurationCloudwatchLoggingConfigurationLogTypeArgs
                    {
                        Name = "SPARK_EXECUTOR",
                        Values = new[]
                        {
                            "STDOUT",
                        },
                    },
                },
            },
            ManagedPersistenceMonitoringConfiguration = new Aws.EmrServerless.Inputs.ApplicationMonitoringConfigurationManagedPersistenceMonitoringConfigurationArgs
            {
                Enabled = true,
            },
            PrometheusMonitoringConfiguration = new Aws.EmrServerless.Inputs.ApplicationMonitoringConfigurationPrometheusMonitoringConfigurationArgs
            {
                RemoteWriteUrl = "https://prometheus-remote-write-endpoint.example.com/api/v1/write",
            },
        },
    });

});

Java

package generated_program;

import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.aws.emrserverless.Application;
import com.pulumi.aws.emrserverless.ApplicationArgs;
import com.pulumi.aws.emrserverless.inputs.ApplicationMonitoringConfigurationArgs;
import com.pulumi.aws.emrserverless.inputs.ApplicationMonitoringConfigurationCloudwatchLoggingConfigurationArgs;
import com.pulumi.aws.emrserverless.inputs.ApplicationMonitoringConfigurationManagedPersistenceMonitoringConfigurationArgs;
import com.pulumi.aws.emrserverless.inputs.ApplicationMonitoringConfigurationPrometheusMonitoringConfigurationArgs;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;

public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        var example = new Application("example", ApplicationArgs.builder()
            .name("example")
            .releaseLabel("emr-7.1.0")
            .type("spark")
            .monitoringConfiguration(ApplicationMonitoringConfigurationArgs.builder()
                .cloudwatchLoggingConfiguration(ApplicationMonitoringConfigurationCloudwatchLoggingConfigurationArgs.builder()
                    .enabled(true)
                    .logGroupName("/aws/emr-serverless/example")
                    .logStreamNamePrefix("spark-logs")
                    .logTypes(                    
                        ApplicationMonitoringConfigurationCloudwatchLoggingConfigurationLogTypeArgs.builder()
                            .name("SPARK_DRIVER")
                            .values(                            
                                "STDOUT",
                                "STDERR")
                            .build(),
                        ApplicationMonitoringConfigurationCloudwatchLoggingConfigurationLogTypeArgs.builder()
                            .name("SPARK_EXECUTOR")
                            .values("STDOUT")
                            .build())
                    .build())
                .managedPersistenceMonitoringConfiguration(ApplicationMonitoringConfigurationManagedPersistenceMonitoringConfigurationArgs.builder()
                    .enabled(true)
                    .build())
                .prometheusMonitoringConfiguration(ApplicationMonitoringConfigurationPrometheusMonitoringConfigurationArgs.builder()
                    .remoteWriteUrl("https://prometheus-remote-write-endpoint.example.com/api/v1/write")
                    .build())
                .build())
            .build());

    }
}

YAML

resources:
  example:
    type: aws:emrserverless:Application
    properties:
      name: example
      releaseLabel: emr-7.1.0
      type: spark
      monitoringConfiguration:
        cloudwatchLoggingConfiguration:
          enabled: true
          logGroupName: /aws/emr-serverless/example
          logStreamNamePrefix: spark-logs
          logTypes:
            - name: SPARK_DRIVER
              values:
                - STDOUT
                - STDERR
            - name: SPARK_EXECUTOR
              values:
                - STDOUT
        managedPersistenceMonitoringConfiguration:
          enabled: true
        prometheusMonitoringConfiguration:
          remoteWriteUrl: https://prometheus-remote-write-endpoint.example.com/api/v1/write

The monitoringConfiguration property wires up three observability integrations. The cloudwatchLoggingConfiguration sends driver and executor logs to a CloudWatch log group, with logTypes filtering which streams are captured. The managedPersistenceMonitoringConfiguration stores logs in EMR-managed storage, keeping the Spark UI accessible after jobs complete. The prometheusMonitoringConfiguration pushes metrics to a Prometheus remote write endpoint for centralized monitoring.
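
The log group named in cloudwatchLoggingConfiguration must already exist for logs to be delivered. If the same stack manages it, a TypeScript sketch (the retention period is an assumption):

// Log group matching the name referenced by the monitoring configuration.
const logGroup = new aws.cloudwatch.LogGroup("emr-serverless-logs", {
    name: "/aws/emr-serverless/example",
    retentionInDays: 14, // assumption: two weeks of logs is sufficient
});

Because the application references the log group by name rather than by resource output, you can pass { dependsOn: [logGroup] } in the application's resource options to guarantee creation order.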

Tune Spark settings with runtime configurations

Spark applications often need custom log levels, executor memory, or core counts that differ from EMR defaults.

TypeScript

import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

const example = new aws.emrserverless.Application("example", {
    name: "example",
    releaseLabel: "emr-6.8.0",
    type: "spark",
    runtimeConfigurations: [
        {
            classification: "spark-executor-log4j2",
            properties: {
                "rootLogger.level": "error",
                "logger.IdentifierForClass.name": "classpathForSettingLogger",
                "logger.IdentifierForClass.level": "info",
            },
        },
        {
            classification: "spark-defaults",
            properties: {
                "spark.executor.memory": "1g",
                "spark.executor.cores": "1",
            },
        },
    ],
});

Python

import pulumi
import pulumi_aws as aws

example = aws.emrserverless.Application("example",
    name="example",
    release_label="emr-6.8.0",
    type="spark",
    runtime_configurations=[
        {
            "classification": "spark-executor-log4j2",
            "properties": {
                "rootLogger.level": "error",
                "logger.IdentifierForClass.name": "classpathForSettingLogger",
                "logger.IdentifierForClass.level": "info",
            },
        },
        {
            "classification": "spark-defaults",
            "properties": {
                "spark.executor.memory": "1g",
                "spark.executor.cores": "1",
            },
        },
    ])

Go

package main

import (
	"github.com/pulumi/pulumi-aws/sdk/v7/go/aws/emrserverless"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		_, err := emrserverless.NewApplication(ctx, "example", &emrserverless.ApplicationArgs{
			Name:         pulumi.String("example"),
			ReleaseLabel: pulumi.String("emr-6.8.0"),
			Type:         pulumi.String("spark"),
			RuntimeConfigurations: emrserverless.ApplicationRuntimeConfigurationArray{
				&emrserverless.ApplicationRuntimeConfigurationArgs{
					Classification: pulumi.String("spark-executor-log4j2"),
					Properties: pulumi.StringMap{
						"rootLogger.level":                pulumi.String("error"),
						"logger.IdentifierForClass.name":  pulumi.String("classpathForSettingLogger"),
						"logger.IdentifierForClass.level": pulumi.String("info"),
					},
				},
				&emrserverless.ApplicationRuntimeConfigurationArgs{
					Classification: pulumi.String("spark-defaults"),
					Properties: pulumi.StringMap{
						"spark.executor.memory": pulumi.String("1g"),
						"spark.executor.cores":  pulumi.String("1"),
					},
				},
			},
		})
		if err != nil {
			return err
		}
		return nil
	})
}

C#

using System.Collections.Generic;
using System.Linq;
using Pulumi;
using Aws = Pulumi.Aws;

return await Deployment.RunAsync(() => 
{
    var example = new Aws.EmrServerless.Application("example", new()
    {
        Name = "example",
        ReleaseLabel = "emr-6.8.0",
        Type = "spark",
        RuntimeConfigurations = new[]
        {
            new Aws.EmrServerless.Inputs.ApplicationRuntimeConfigurationArgs
            {
                Classification = "spark-executor-log4j2",
                Properties = 
                {
                    { "rootLogger.level", "error" },
                    { "logger.IdentifierForClass.name", "classpathForSettingLogger" },
                    { "logger.IdentifierForClass.level", "info" },
                },
            },
            new Aws.EmrServerless.Inputs.ApplicationRuntimeConfigurationArgs
            {
                Classification = "spark-defaults",
                Properties = 
                {
                    { "spark.executor.memory", "1g" },
                    { "spark.executor.cores", "1" },
                },
            },
        },
    });

});

Java

package generated_program;

import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.aws.emrserverless.Application;
import com.pulumi.aws.emrserverless.ApplicationArgs;
import com.pulumi.aws.emrserverless.inputs.ApplicationRuntimeConfigurationArgs;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;

public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        var example = new Application("example", ApplicationArgs.builder()
            .name("example")
            .releaseLabel("emr-6.8.0")
            .type("spark")
            .runtimeConfigurations(            
                ApplicationRuntimeConfigurationArgs.builder()
                    .classification("spark-executor-log4j2")
                    .properties(Map.ofEntries(
                        Map.entry("rootLogger.level", "error"),
                        Map.entry("logger.IdentifierForClass.name", "classpathForSettingLogger"),
                        Map.entry("logger.IdentifierForClass.level", "info")
                    ))
                    .build(),
                ApplicationRuntimeConfigurationArgs.builder()
                    .classification("spark-defaults")
                    .properties(Map.ofEntries(
                        Map.entry("spark.executor.memory", "1g"),
                        Map.entry("spark.executor.cores", "1")
                    ))
                    .build())
            .build());

    }
}

YAML

resources:
  example:
    type: aws:emrserverless:Application
    properties:
      name: example
      releaseLabel: emr-6.8.0
      type: spark
      runtimeConfigurations:
        - classification: spark-executor-log4j2
          properties:
            rootLogger.level: error
            logger.IdentifierForClass.name: classpathForSettingLogger
            logger.IdentifierForClass.level: info
        - classification: spark-defaults
          properties:
            spark.executor.memory: 1g
            spark.executor.cores: '1'

The runtimeConfigurations property overrides Spark settings at the application level. Each configuration targets a classification (spark-defaults, spark-executor-log4j2) and supplies key-value properties. These settings apply to all jobs submitted to the application, avoiding per-job configuration.
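
Since runtime configurations apply stack-wide, per-environment tuning is often driven by Pulumi config rather than hard-coded values. A TypeScript sketch (the config keys and defaults are assumptions):

const cfg = new pulumi.Config();
// Hypothetical keys, set per stack with: pulumi config set executorMemory 4g
const executorMemory = cfg.get("executorMemory") ?? "1g";
const executorCores = cfg.get("executorCores") ?? "1";

const tuned = new aws.emrserverless.Application("tuned", {
    name: "tuned",
    releaseLabel: "emr-6.8.0",
    type: "spark",
    runtimeConfigurations: [{
        classification: "spark-defaults",
        properties: {
            "spark.executor.memory": executorMemory,
            "spark.executor.cores": executorCores,
        },
    }],
});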

Beyond these examples

These snippets focus on specific application-level features: application creation and engine selection, capacity planning, observability integration, and runtime tuning. They’re intentionally minimal rather than full EMR deployments.

The examples may reference pre-existing infrastructure, such as the CloudWatch log group and Prometheus remote write endpoint in the monitoring example. They focus on configuring the application rather than provisioning the surrounding infrastructure.

To keep things focused, common application patterns are omitted, including:

  • Auto-start and auto-stop configuration
  • VPC networking (networkConfiguration)
  • Custom container images (imageConfiguration)
  • Interactive session configuration
  • Scheduler configuration for batch/streaming jobs

These omissions are intentional: the goal is to illustrate how each application feature is wired, not to provide drop-in EMR modules. See the EMR Serverless Application resource reference for all available configuration options.

Frequently Asked Questions

Application Configuration & Immutability
What properties can't I change after creating an application?
The name and type properties are immutable and require resource replacement if changed.
What application types are supported?
EMR Serverless supports spark and hive application types.
What's the default CPU architecture?
The default architecture is X86_64. You can also specify ARM64.
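
A minimal sketch of an ARM64 application in TypeScript (assumes the chosen release and region support Graviton workers):

const armExample = new aws.emrserverless.Application("arm-example", {
    name: "arm-example",
    releaseLabel: "emr-6.8.0",
    type: "hive",
    architecture: "ARM64", // defaults to X86_64 when omitted
});
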
Capacity Management
What's the difference between initial capacity and maximum capacity?
initialCapacities sets the capacity when the application is created, while maximumCapacity defines cumulative limits across all workers at any point in time. Maximum capacity prevents new resources from being created once any defined limit is hit.
How do I specify CPU and memory for workers?
Use string format with units: cpu: "2 vCPU" and memory: "10 GB".
How do I set initial worker capacity?
Configure initialCapacities with initialCapacityType (e.g., HiveDriver), workerCount, and workerConfiguration specifying cpu and memory.
Monitoring & Logging
What monitoring options are available?
You can configure CloudWatch logging, Prometheus remote write, and managed persistence monitoring through monitoringConfiguration.
How do I enable CloudWatch logging for Spark applications?
Set monitoringConfiguration.cloudwatchLoggingConfiguration with enabled: true, logGroupName, and logTypes specifying SPARK_DRIVER or SPARK_EXECUTOR with STDOUT/STDERR values.
Can I send metrics to Prometheus?
Yes, configure prometheusMonitoringConfiguration.remoteWriteUrl with your Prometheus remote write endpoint.
Runtime & Scheduler Configuration
How do I configure Spark runtime properties?
Use runtimeConfigurations with classification (e.g., spark-defaults or spark-executor-log4j2) and properties containing key-value pairs like spark.executor.memory: "1g".
What EMR versions support scheduler configuration?
Scheduler configuration requires release labels emr-7.0.0 and above.
