
Reference AWS Resources Across Stacks

In this tutorial, you will learn how to work with stack outputs, specifically how to export values from a stack and how to reference those values from another stack. You will do this by creating a simple AWS Lambda function that writes a file to an S3 bucket. You will also create an EventBridge Scheduler resource in a new stack that runs the Lambda function from the first stack on a scheduled basis.

In this tutorial, you'll learn:

  • How to create a stack output
  • How to view the stack output
  • How to reference the output from a different stack

Prerequisites:

  • The Pulumi CLI, logged in to your Pulumi Cloud account
  • An AWS account, with credentials configured for both the Pulumi CLI and the AWS CLI
  • The AWS CLI, which you will use to inspect your bucket and invoke your Lambda function
  • A runtime for your chosen language (for example, Node.js or Python 3)

Understanding stack outputs

Every Pulumi resource has outputs, which are properties of that resource whose values are generated during deployment. You can export these values as stack outputs, and they can be used for important values like resource IDs, computed IP addresses, and DNS names.

These outputs are shown during an update, can be easily retrieved with the Pulumi CLI, and are displayed in Pulumi Cloud. For the purposes of this tutorial, you will primarily be working from the CLI.
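
For example, a minimal Python program might create an S3 bucket and export the bucket's auto-generated name as a stack output. This is only a sketch (the resource and output names are illustrative); the full tutorial code follows in the next sections:

import pulumi
import pulumi_aws as aws

# Create a bucket and export its auto-generated name as a stack output
# named "bucketName". After deployment, you can read it back with:
#   pulumi stack output bucketName
bucket = aws.s3.Bucket("example-bucket")
pulumi.export("bucketName", bucket.id)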

Create a new project

To start, log in to the Pulumi CLI and ensure it is configured to use your AWS account. Next, create a new project, then use the following code snippet to scaffold your project with the required imports and the overall program structure that you will fill in as you go along:

TypeScript:

import * as aws from "@pulumi/aws";
import * as pulumi from "@pulumi/pulumi";

// [Step 1: Create an S3 bucket.]

// [Step 2: Create a Lambda function.]

// [Step 3: Create an export.]

Python:

import pulumi
import pulumi_aws as aws

# [Step 1: Create an S3 bucket.]

# [Step 2: Create a Lambda function.]

# [Step 3: Create an export.]

YAML:

name: s3-writer
runtime: yaml
description: A program to create a Lambda write to S3 workflow on AWS

resources:
  # [Step 1: Create an S3 bucket.]

  # [Step 2: Create a Lambda function.]

  # [Step 3: Create an export.]

Create project resources

The first resource you will define in your project is a simple S3 bucket as shown below.

TypeScript:

import * as aws from "@pulumi/aws";
import * as pulumi from "@pulumi/pulumi";

// [Step 1: Create an S3 bucket.]
const bucket = new aws.s3.Bucket("my-bucket");

// [Step 2: Create a Lambda function.]

// [Step 3: Create an export.]

Python:

import pulumi
import pulumi_aws as aws

# [Step 1: Create an S3 bucket.]
bucket = aws.s3.Bucket('my-bucket')

# [Step 2: Create a Lambda function.]

# [Step 3: Create an export.]

YAML:

name: s3-writer
runtime: yaml
description: A program to create a Lambda write to S3 workflow on AWS

resources:
  # [Step 1: Create an S3 bucket.]
  my-bucket:
    type: aws:s3:Bucket

  # [Step 2: Create a Lambda function.]

  # [Step 3: Create an export.]

The next resource you will add is a Lambda function with function code that will write a simple .txt file to your S3 bucket. You will also add an IAM role that will grant your Lambda function permission to access AWS services and resources.

To start, let’s create a new folder in your project named s3_writer. Inside of this folder, you’ll create a file named lambda_function.py and populate it with code that will write a simple .txt file to your bucket.

import json
import os
import boto3
import datetime

s3 = boto3.resource('s3')

BUCKET_NAME = os.environ['BUCKET_NAME']

# Lambda entry point: writes a small "Hello Pulumi!" text file to the bucket,
# using the current UTC timestamp to build a unique object key.
def lambda_handler(event, context):

    current_time = str(datetime.datetime.utcnow()).replace(" ", "")
    data_string = "Hello Pulumi!"

    object = s3.Object(
        bucket_name=BUCKET_NAME,
        key=f'{current_time}_test_file.txt'
    )

    object.put(Body=data_string)

Now, you can add the Lambda function resource definition and its corresponding IAM role to your main project file.

TypeScript:

import * as aws from "@pulumi/aws";
import * as pulumi from "@pulumi/pulumi";

// [Step 1: Create an S3 bucket.]
const bucket = new aws.s3.Bucket("my-bucket");

// [Step 2: Create a Lambda function.]
const lambdaRole = new aws.iam.Role("s3-writer-role", {
    assumeRolePolicy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [
            {
                Action: "sts:AssumeRole",
                Effect: "Allow",
                Principal: {
                    Service: "lambda.amazonaws.com",
                },
            },
        ],
    }),
});

const lambdaFunction = new aws.lambda.Function("s3-writer-lambda-function", {
    role: lambdaRole.arn,
    runtime: "python3.10",
    handler: "lambda_function.lambda_handler",
    code: new pulumi.asset.FileArchive("./s3_writer"),
    timeout: 15,
    memorySize: 128,
    environment: {
        variables: {
            BUCKET_NAME: bucket.id,
        },
    },
});


// [Step 3: Create an export.]

Python:

import pulumi
import pulumi_aws as aws

# [Step 1: Create an S3 bucket.]
bucket = aws.s3.Bucket('my-bucket')

# [Step 2: Create a Lambda function.]
lambda_role = aws.iam.Role("s3-writer-role",
    assume_role_policy="""{
        "Version": "2012-10-17",
        "Statement": [
            {
                "Action": "sts:AssumeRole",
                "Principal": {
                    "Service": "lambda.amazonaws.com"
                },
                "Effect": "Allow",
                "Sid": ""
            }
        ]
    }""",
    managed_policy_arns=[
        "arn:aws:iam::aws:policy/AmazonS3FullAccess",
    ]
)

lambda_function = aws.lambda_.Function(
    resource_name='s3-writer-lambda-function',
    role=lambda_role.arn,
    runtime="python3.10",
    handler="lambda_function.lambda_handler",
    code=pulumi.AssetArchive({
        '.': pulumi.FileArchive('./s3_writer')
    }),
    timeout=15,
    memory_size=128,
    environment= { 
        "variables": {
            "BUCKET_NAME": bucket.id
        }
    }
)

# [Step 3: Create an export.]

YAML:

name: s3-writer
runtime: yaml
description: A program to create a Lambda write to S3 workflow on AWS

resources:
  # [Step 1: Create an S3 bucket.]
  my-bucket:
    type: aws:s3:Bucket

  # [Step 2: Create a Lambda function.]
  lambda-role:
    type: aws:iam:Role
    properties:
      assumeRolePolicy: |
        { 
          "Version": "2012-10-17",
          "Statement": [
            {
              "Action": "sts:AssumeRole",
              "Principal": {
                "Service": "lambda.amazonaws.com"
              },
              "Effect": "Allow"
            }
          ]
        }        

  s3-role-policy-attachment:
    type: aws:iam:RolePolicyAttachment
    properties:
      role: ${lambda-role}
      policyArn: "arn:aws:iam::aws:policy/AmazonS3FullAccess"

  lambda-function:
    type: aws:lambda:Function
    properties:
      role: ${lambda-role.arn}
      runtime: python3.10
      handler: lambda_function.lambda_handler
      code:
        fn::fileArchive: "./s3_writer"
      timeout: 15
      memorySize: 128
      environment:
        variables:
          BUCKET_NAME: ${my-bucket.id}

  # [Step 3: Create an export.]

Export resource values

Now that you have your project resources defined, you can export the values of various resource properties from your program. When defining these exports, you’ll need to provide two arguments:

  • Output name: the name you will use as the identifier of your output value
  • Output value: the actual value of your output

To demonstrate how this works, let’s export the names of your Lambda function and S3 bucket. The Pulumi documentation provides more information about what properties are available to export for each resource.

You can reference both your Lambda function name and bucket name via their id property, and you’ll update your code to reflect that as shown below:

TypeScript:

import * as aws from "@pulumi/aws";
import * as pulumi from "@pulumi/pulumi";

// [Step 1: Create an S3 bucket.]
const bucket = new aws.s3.Bucket("my-bucket");

// [Step 2: Create a Lambda function.]
const lambdaRole = new aws.iam.Role("s3-writer-role", {
    assumeRolePolicy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [
            {
                Action: "sts:AssumeRole",
                Effect: "Allow",
                Principal: {
                    Service: "lambda.amazonaws.com",
                },
            },
        ],
    }),
});

const lambdaFunction = new aws.lambda.Function("s3-writer-lambda-function", {
    role: lambdaRole.arn,
    runtime: "python3.10",
    handler: "lambda_function.lambda_handler",
    code: new pulumi.asset.FileArchive("./s3_writer"),
    timeout: 15,
    memorySize: 128,
    environment: {
        variables: {
            BUCKET_NAME: bucket.id,
        },
    },
});


// [Step 3: Create an export.]
export const lambdaName = lambdaFunction.id;
export const bucketName = bucket.id;

Python:

import pulumi
import pulumi_aws as aws

# [Step 1: Create an S3 bucket.]
bucket = aws.s3.Bucket('my-bucket')

# [Step 2: Create a Lambda function.]
lambda_role = aws.iam.Role("s3-writer-role",
    assume_role_policy="""{
        "Version": "2012-10-17",
        "Statement": [
            {
                "Action": "sts:AssumeRole",
                "Principal": {
                    "Service": "lambda.amazonaws.com"
                },
                "Effect": "Allow",
                "Sid": ""
            }
        ]
    }""",
    managed_policy_arns=[
        "arn:aws:iam::aws:policy/AmazonS3FullAccess",
    ]
)

lambda_function = aws.lambda_.Function(
    resource_name='s3-writer-lambda-function',
    role=lambda_role.arn,
    runtime="python3.10",
    handler="lambda_function.lambda_handler",
    code=pulumi.AssetArchive({
        '.': pulumi.FileArchive('./s3_writer')
    }),
    timeout=15,
    memory_size=128,
    environment= { 
        "variables": {
            "BUCKET_NAME": bucket.id
        }
    }
)

# [Step 3: Create an export.]
pulumi.export("lambdaName", lambda_function.id)
pulumi.export("bucketName", bucket.id)

YAML:

name: s3-writer
runtime: yaml
description: A program to create a Lambda write to S3 workflow on AWS

resources:
  # [Step 1: Create an S3 bucket.]
  my-bucket:
    type: aws:s3:Bucket

  # [Step 2: Create a Lambda function.]
  lambda-role:
    type: aws:iam:Role
    properties:
      assumeRolePolicy: |
        { 
          "Version": "2012-10-17",
          "Statement": [
            {
              "Action": "sts:AssumeRole",
              "Principal": {
                "Service": "lambda.amazonaws.com"
              },
              "Effect": "Allow"
            }
          ]
        }        

  s3-role-policy-attachment:
    type: aws:iam:RolePolicyAttachment
    properties:
      role: ${lambda-role}
      policyArn: "arn:aws:iam::aws:policy/AmazonS3FullAccess"

  lambda-function:
    type: aws:lambda:Function
    properties:
      role: ${lambda-role.arn}
      runtime: python3.10
      handler: lambda_function.lambda_handler
      code:
        fn::fileArchive: "./s3_writer"
      timeout: 15
      memorySize: 128
      environment:
        variables:
          BUCKET_NAME: ${my-bucket.id}

# [Step 3: Create an export.]
outputs:
  lambdaName: ${lambda-function.id}
  bucketName: ${my-bucket.id}

Deploy your project resources

Now let’s save your file and run the pulumi up command to preview and deploy the resources you’ve just defined in your project.

Previewing update (dev):

     Type                      Name                     Plan
 +   pulumi:pulumi:Stack     my-first-app-dev           create
 +   ├─ aws:iam:Role         s3-writer-role             create
 +   ├─ aws:s3:Bucket        my-bucket                  create
 +   └─ aws:lambda:Function  s3-writer-lambda-function  create

Outputs:
    lambdaName: output<string>
    bucketName: output<string>

Resources:
    + 4 to create

Do you want to perform this update? yes
Updating (dev):

     Type                      Name                     Status
 +   pulumi:pulumi:Stack     my-first-app-dev           created (18s)
 +   ├─ aws:iam:Role         s3-writer-role             created (1s)
 +   ├─ aws:s3:Bucket        my-bucket                  created (1s)
 +   └─ aws:lambda:Function  s3-writer-lambda-function  created (13s)

Outputs:
    lambdaName: "s3-writer-lambda-function-981d4fa"
    bucketName: "my-bucket-4fb1589"

Resources:
    + 4 created

Duration: 20s

You can see that the outputs you’ve created have been provided as a part of the update details. You’ll access these outputs via the CLI in the next steps of the tutorial.

Access outputs via the CLI

Now that your resources are deployed, let’s kick off your Lambda to S3 file writing process.

The first thing you will do is validate that your S3 bucket is empty. You can use the following AWS CLI command to list all of the objects in your bucket:

aws s3api list-objects-v2 --bucket <bucket_name>

By default, the AWS CLI runs commands using the default profile. If you would like to run commands with a different profile, take a look at the relevant AWS documentation to learn how.

You will want to replace <bucket_name> with the actual name of your S3 bucket. While you can manually provide the name of your bucket, you can also programmatically reference your bucket name via the stack outputs.

You’ll do this by using the pulumi stack output command and providing the name of your desired output, as shown below:

aws s3api list-objects-v2 --bucket $(pulumi stack output bucketName)

Right now, your bucket is empty, so the response of this command should look like the following:

{
    "RequestCharged": null
}

Now, let’s trigger your Lambda function so that it will write a new file to the bucket. You will use the aws lambda invoke command and pass your Lambda function name to the --function-name option as shown below:

aws lambda invoke \
    --function-name $(pulumi stack output lambdaName) \
    --invocation-type Event \
    --cli-binary-format raw-in-base64-out \
    --payload '{ "test": "test" }' \
    response.json

You can verify the outcome of this function execution by running the same list-objects-v2 command from before to check the contents of your S3 bucket. This time, you should see output similar to the following:

{
    "Contents": [
        {
            "Key": "2023-08-3107:53:28.137776_test_file.txt",
            "LastModified": "2023-08-31T07:53:29+00:00",
            "ETag": "\"f794802bfd4a70851294ba192d382c11\"",
            "Size": 13,
            "StorageClass": "STANDARD"
        }
    ],
    "RequestCharged": null
}

The .txt file in the Key field of the response object indicates that your Lambda function ran successfully.

You have seen how you can reference your output values from the CLI. Now let’s take a look at how you can do the same from within another stack.

Using stack references

Stack references allow you to access the outputs of one stack from another stack. This enables developers to create resources even when there are inter-stack dependencies.

For this section, you are going to create a new Pulumi program that will access the stack output values from your existing program.

Reference the name of the Lambda function

Let’s start by making a new Pulumi project in a new directory. In this new program, you need to add the code that will reference the values from your first program.

This can be done using Pulumi’s Stack Reference functionality. You’ll need to pass in the fully qualified name of the stack as an argument. This name is composed of the organization, project, and stack names in the format <organization>/<project>/<stack>.

For example, if the name of your organization is my-org, the name of your first program is my-first-program, and the name of your stack is dev, then your fully qualified name will be my-org/my-first-program/dev.

A stack reference will look like the following in your code:

TypeScript:

import * as pulumi from "@pulumi/pulumi";

const stackRef = new pulumi.StackReference("my-org/my-first-program/dev");

Python:

import pulumi

stack_ref = pulumi.StackReference("my-org/my-first-program/dev")

YAML:

name: event-scheduler
runtime: yaml
description: A program to create a Stack Reference

resources:
  stack-ref:
    type: pulumi:pulumi:StackReference
    properties:
      name: my-org/my-first-program/dev

Make sure that the fully qualified name in the example above is populated with the values that are specific to your Pulumi organization, project, and stack.

You will now create an export that will output the value of the Lambda function name from your first program. This is to demonstrate how to retrieve output values from another stack for use in your program.

To retrieve the value for your export, call the getOutput() function on your stack reference variable, passing the name of the output from your first program.

Update your code with the following:

TypeScript:

import * as pulumi from "@pulumi/pulumi";

const stackRef = new pulumi.StackReference("my-org/my-first-program/dev");

// Add an export to output the value of the Lambda function name using the stack reference above
export const firstProgramLambdaName = stackRef.getOutput("lambdaName");

Python:

import pulumi

stack_ref = pulumi.StackReference("my-org/my-first-program/dev")

# Add an export to output the value of the Lambda function name using the stack reference above
pulumi.export("firstProgramLambdaName", stack_ref.get_output("lambdaName"))

YAML:

name: event-scheduler
runtime: yaml
description: A program to create a Stack Reference

resources:
  stack-ref:
    type: pulumi:pulumi:StackReference
    properties:
      name: my-org/my-first-program/dev

# Add an export to output the value of the Lambda function name using the stack reference above
outputs:
  firstProgramLambdaName: ${stack-ref.outputs["lambdaName"]}
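
The getOutput() call returns an output value that resolves during deployment. In the Python SDK there is also require_output(), which fails the update if the referenced stack has no output with the given name. Here is a minimal sketch, reusing the hypothetical stack name from above:

import pulumi

stack_ref = pulumi.StackReference("my-org/my-first-program/dev")

# require_output behaves like get_output, but the update fails if the
# referenced stack does not define an output named "lambdaName".
pulumi.export("firstProgramLambdaName", stack_ref.require_output("lambdaName"))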

To check that your stack reference is working, let’s run pulumi up.

Previewing update (dev):

     Type                      Name                     Plan
 +   pulumi:pulumi:Stack  my-second-app-dev             create

Outputs:
    firstProgramLambdaName: output<string>

Resources:
    + 1 to create

Do you want to perform this update? yes
Updating (dev):

     Type                      Name                     Status
 +   pulumi:pulumi:Stack  my-second-app-dev             created (2s)

Outputs:
    firstProgramLambdaName: "s3-writer-lambda-function-981d4fa"

Resources:
    + 1 created

Duration: 3s

You can see the name of your Lambda function from your first program successfully outputted in the update details of your second program.

Run the Lambda function on a schedule

In this section, you will use the Pulumi documentation to create an EventBridge Scheduler resource in one stack that will trigger the Lambda function in another stack on a scheduled basis.

The scheduler should trigger the Lambda function to run once every minute.

An updated version of the Lambda function project code has been provided below as a starting point.

TypeScript:

import * as aws from "@pulumi/aws";
import * as pulumi from "@pulumi/pulumi";

const bucket = new aws.s3.Bucket("my-bucket");

const lambdaRole = new aws.iam.Role("s3-writer-role", {
    assumeRolePolicy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [
            {
                Action: "sts:AssumeRole",
                Effect: "Allow",
                Principal: {
                    Service: "lambda.amazonaws.com",
                },
            },
        ],
    }),
    managedPolicyArns: ["arn:aws:iam::aws:policy/AmazonS3FullAccess", "arn:aws:iam::aws:policy/CloudWatchLogsFullAccess"],
});

const lambdaFunction = new aws.lambda.Function("s3-writer-lambda-function", {
    role: lambdaRole.arn,
    runtime: "python3.10",
    handler: "lambda_function.lambda_handler",
    code: new pulumi.asset.FileArchive("./s3_writer"),
    timeout: 15,
    memorySize: 128,
    environment: {
        variables: {
            BUCKET_NAME: bucket.id,
        },
    },
});

// Gives the EventBridge service permissions to invoke the Lambda function
const lambdaEvent = new aws.lambda.Permission("lambda-trigger-event", {
    action: "lambda:InvokeFunction",
    principal: "events.amazonaws.com",
    function: lambdaFunction.arn,
});


// [Step 3: Create an export.]
// TO-DO

Python:

import pulumi
import pulumi_aws as aws

bucket = aws.s3.Bucket('my-bucket')

lambda_role = aws.iam.Role("s3-writer-role",
    assume_role_policy="""{
        "Version": "2012-10-17",
        "Statement": [
            {
                "Action": "sts:AssumeRole",
                "Principal": {
                    "Service": "lambda.amazonaws.com"
                },
                "Effect": "Allow",
                "Sid": ""
            }
        ]
    }""",
    managed_policy_arns=[
        "arn:aws:iam::aws:policy/AmazonS3FullAccess",
        "arn:aws:iam::aws:policy/CloudWatchLogsFullAccess"
    ]
)

lambda_function = aws.lambda_.Function(
    resource_name='s3-writer-lambda-function',
    role=lambda_role.arn,
    runtime="python3.10",
    handler="lambda_function.lambda_handler",
    code=pulumi.AssetArchive({
        '.': pulumi.FileArchive('./s3_writer')
    }),
    timeout=15,
    memory_size=128,
    environment= { 
        "variables": {
            "BUCKET_NAME": bucket.id
        }
    }
)

# Gives the EventBridge service permissions to invoke the Lambda function
lambda_event = aws.lambda_.Permission("lambda_trigger_event",
    action="lambda:InvokeFunction",
    principal="events.amazonaws.com",
    function=lambda_function.arn
)

# [Step 3: Create an export.]
# TO-DO

YAML:

name: s3-writer
runtime: yaml
description: A program to create a Lambda write to S3 workflow on AWS

resources:
  my-bucket:
    type: aws:s3:Bucket

  lambda-role:
    type: aws:iam:Role
    properties:
      assumeRolePolicy: |
        { 
          "Version": "2012-10-17",
          "Statement": [
            {
              "Action": "sts:AssumeRole",
              "Principal": {
                "Service": "lambda.amazonaws.com"
              },
              "Effect": "Allow"
            }
          ]
        }        

  s3-role-policy-attachment:
    type: aws:iam:RolePolicyAttachment
    properties:
      role: ${lambda-role}
      policyArn: "arn:aws:iam::aws:policy/AmazonS3FullAccess"

  cloudwatch-role-policy-attachment:
    type: aws:iam:RolePolicyAttachment
    properties:
      role: ${lambda-role}
      policyArn: "arn:aws:iam::aws:policy/CloudWatchLogsFullAccess"

  lambda-function:
    type: aws:lambda:Function
    properties:
      role: ${lambda-role.arn}
      runtime: python3.10
      handler: lambda_function.lambda_handler
      code:
        fn::fileArchive: "./s3_writer"
      timeout: 15
      memorySize: 128
      environment:
        variables:
          BUCKET_NAME: ${my-bucket.id}

  # Gives the EventBridge service permissions to invoke the Lambda function
  lambda-trigger-event:
    type: aws:lambda:Permission
    properties:
      action: lambda:InvokeFunction
      principal: events.amazonaws.com
      function: ${lambda-function.id}

outputs:
# [Step 3: Create an export.]
# TO-DO

Some baseline code for the Scheduler project has also been provided below:

TypeScript:

import * as aws from "@pulumi/aws";
import * as pulumi from "@pulumi/pulumi";

const stackRef = // TO-DO

const lambdaArn = // TO-DO

const schedulerRole = new aws.iam.Role("scheduler-role", {
    assumeRolePolicy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Action: "sts:AssumeRole",
            Effect: "Allow",
            Principal: {
                Service: "scheduler.amazonaws.com",
            },
        }],
    }),
    inlinePolicies: [
        {
            name: "my_inline_policy",
            policy: JSON.stringify({
                Version: "2012-10-17",
                Statement: [{
                    Action: ["lambda:*"],
                    Effect: "Allow",
                    Resource: "*",
                }],
            }),
        }
    ],
});

const scheduler = // TO-DO

Python:

import pulumi
import pulumi_aws as aws
import json

stack_ref = # TO-DO

lambda_arn = # TO-DO

# Gives the EventBridge Scheduler the ability to execute the Lambda function
scheduler_role = aws.iam.Role("scheduler_role",
    assume_role_policy="""{
        "Version": "2012-10-17",
        "Statement": [
            {
                "Action": "sts:AssumeRole",
                "Principal": {
                    "Service": "scheduler.amazonaws.com"
                },
                "Effect": "Allow",
                "Sid": ""
            }
        ]
    }""",
    inline_policies=[
        aws.iam.RoleInlinePolicyArgs(
            name="my_inline_policy",
            policy=json.dumps({
                "Version": "2012-10-17",
                "Statement": [{
                    "Action": ["lambda:*"],
                    "Effect": "Allow",
                    "Resource": "*" ,
                }],
            })
        )
    ]
)

scheduler = # TO-DO

YAML:

name: event-scheduler
runtime: yaml
description: A program to create an EventBridge Scheduler in AWS.

resources:
  stack-ref: # TO-DO

  scheduler-role:
    type: aws:iam:Role
    properties:
      assumeRolePolicy: |
        { 
          "Version": "2012-10-17",
          "Statement": [
            {
              "Action": "sts:AssumeRole",
              "Principal": {
                "Service": "scheduler.amazonaws.com"
              },
              "Effect": "Allow"
            }
          ]
        }        
      inlinePolicies:
        - name: "my-inline-policy"
          policy:
            fn::toJSON:
              Version: 2012-10-17
              Statement:
                - Action:
                    - lambda:*
                  Effect: Allow
                  Resource: "*"

  scheduler: # TO-DO

Use the following steps as a guide for adding the scheduling functionality:

  • Export the Lambda function ARN from the Lambda project
  • Create a stack reference for the Lambda function ARN in your Scheduler project code
  • Navigate to the AWS Registry documentation page
  • Search for the Scheduler Schedule resource
  • Define the Schedule resource in your Scheduler project code
  • Configure the Schedule to trigger the Lambda function once every minute
  • Preview and deploy your updated project code

Once you have completed these steps, wait a few minutes and then run the following command again:

aws s3api list-objects-v2 --bucket $(pulumi stack output bucketName)

You should then see a number of .txt files in your S3 bucket.

Complete project code

Here is the complete code for exporting the Lambda function ARN:

TypeScript:

import * as aws from "@pulumi/aws";
import * as pulumi from "@pulumi/pulumi";

const bucket = new aws.s3.Bucket("my-bucket");

const lambdaRole = new aws.iam.Role("s3-writer-role", {
    assumeRolePolicy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [
            {
                Action: "sts:AssumeRole",
                Effect: "Allow",
                Principal: {
                    Service: "lambda.amazonaws.com",
                },
            },
        ],
    }),
    managedPolicyArns: ["arn:aws:iam::aws:policy/AmazonS3FullAccess", "arn:aws:iam::aws:policy/CloudWatchLogsFullAccess"],
});

const lambdaFunction = new aws.lambda.Function("s3-writer-lambda-function", {
    role: lambdaRole.arn,
    runtime: "python3.10",
    handler: "lambda_function.lambda_handler",
    code: new pulumi.asset.FileArchive("./s3_writer"),
    timeout: 15,
    memorySize: 128,
    environment: {
        variables: {
            BUCKET_NAME: bucket.id,
        },
    },
});

// Gives the EventBridge service permissions to invoke the Lambda function
const lambdaEvent = new aws.lambda.Permission("lambda-trigger-event", {
    action: "lambda:InvokeFunction",
    principal: "events.amazonaws.com",
    function: lambdaFunction.arn,
});

export const lambdaArn = lambdaFunction.arn;

Python:

import pulumi
import pulumi_aws as aws

bucket = aws.s3.Bucket('my-bucket')

lambda_role = aws.iam.Role("s3-writer-role",
    assume_role_policy="""{
        "Version": "2012-10-17",
        "Statement": [
            {
                "Action": "sts:AssumeRole",
                "Principal": {
                    "Service": "lambda.amazonaws.com"
                },
                "Effect": "Allow",
                "Sid": ""
            }
        ]
    }""",
    managed_policy_arns=[
        "arn:aws:iam::aws:policy/AmazonS3FullAccess",
        "arn:aws:iam::aws:policy/CloudWatchLogsFullAccess"
    ]
)

lambda_function = aws.lambda_.Function(
    resource_name='s3-writer-lambda-function',
    role=lambda_role.arn,
    runtime="python3.10",
    handler="lambda_function.lambda_handler",
    code=pulumi.AssetArchive({
        '.': pulumi.FileArchive('./s3_writer')
    }),
    timeout=15,
    memory_size=128,
    environment= { 
        "variables": {
            "BUCKET_NAME": bucket.id
        }
    }
)

# Gives the EventBridge service permissions to invoke the Lambda function
lambda_event = aws.lambda_.Permission("lambda_trigger_event",
    action="lambda:InvokeFunction",
    principal="events.amazonaws.com",
    function=lambda_function.arn
)

pulumi.export("lambdaArn", lambda_function.arn)

YAML:

name: s3-writer
runtime: yaml
description: A program to create a Lambda write to S3 workflow on AWS

resources:
  my-bucket:
    type: aws:s3:Bucket

  lambda-role:
    type: aws:iam:Role
    properties:
      assumeRolePolicy: |
        { 
          "Version": "2012-10-17",
          "Statement": [
            {
              "Action": "sts:AssumeRole",
              "Principal": {
                "Service": "lambda.amazonaws.com"
              },
              "Effect": "Allow"
            }
          ]
        }        

  s3-role-policy-attachment:
    type: aws:iam:RolePolicyAttachment
    properties:
      role: ${lambda-role}
      policyArn: "arn:aws:iam::aws:policy/AmazonS3FullAccess"

  cloudwatch-role-policy-attachment:
    type: aws:iam:RolePolicyAttachment
    properties:
      role: ${lambda-role}
      policyArn: "arn:aws:iam::aws:policy/CloudWatchLogsFullAccess"

  lambda-function:
    type: aws:lambda:Function
    properties:
      role: ${lambda-role.arn}
      runtime: python3.10
      handler: lambda_function.lambda_handler
      code:
        fn::fileArchive: "./s3_writer"
      timeout: 15
      memorySize: 128
      environment:
        variables:
          BUCKET_NAME: ${my-bucket.id}

  # Gives the EventBridge service permissions to invoke the Lambda function
  lambda-trigger-event:
    type: aws:lambda:Permission
    properties:
      action: lambda:InvokeFunction
      principal: events.amazonaws.com
      function: ${lambda-function.id}

outputs:
  lambdaArn: ${lambda-function.arn}

Here is the complete code for creating the EventBridge Scheduler:

TypeScript:

import * as aws from "@pulumi/aws";
import * as pulumi from "@pulumi/pulumi";

const stackRef = new pulumi.StackReference("my-org/my-first-program/dev");

const lambdaArn = stackRef.getOutput("lambdaArn");

const schedulerRole = new aws.iam.Role("scheduler-role", {
    assumeRolePolicy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Action: "sts:AssumeRole",
            Effect: "Allow",
            Principal: {
                Service: "scheduler.amazonaws.com",
            },
        }],
    }),
    inlinePolicies: [
        {
            name: "my_inline_policy",
            policy: JSON.stringify({
                Version: "2012-10-17",
                Statement: [{
                    Action: ["lambda:*"],
                    Effect: "Allow",
                    Resource: "*",
                }],
            }),
        }
    ],
});

const scheduler = new aws.scheduler.Schedule("scheduler", {
    flexibleTimeWindow: {
        mode: "OFF",
    },
    scheduleExpression: "rate(1 minutes)",
    target: {
        arn: lambdaArn,
        roleArn: schedulerRole.arn,
    },
});

Python:

import pulumi
import pulumi_aws as aws
import json

stack_ref = pulumi.StackReference("my-org/my-first-program/dev")
lambda_arn = stack_ref.get_output("lambdaArn")

scheduler_role = aws.iam.Role("scheduler_role",
    assume_role_policy="""{
        "Version": "2012-10-17",
        "Statement": [
            {
                "Action": "sts:AssumeRole",
                "Principal": {
                    "Service": "scheduler.amazonaws.com"
                },
                "Effect": "Allow",
                "Sid": ""
            }
        ]
    }""",
    inline_policies=[
        aws.iam.RoleInlinePolicyArgs(
            name="my_inline_policy",
            policy=json.dumps({
                "Version": "2012-10-17",
                "Statement": [{
                    "Action": ["lambda:*"],
                    "Effect": "Allow",
                    "Resource": "*" ,
                }],
            })
        )
    ]
)

scheduler = aws.scheduler.Schedule("scheduler",
    flexible_time_window=aws.scheduler.ScheduleFlexibleTimeWindowArgs(
        mode="OFF",
    ),
    schedule_expression="rate(1 minutes)",
    target=aws.scheduler.ScheduleTargetArgs(
        arn=lambda_arn,
        role_arn=scheduler_role.arn,
    )
)

YAML:

name: event-scheduler
runtime: yaml
description: A program to create an EventBridge Scheduler in AWS.

resources:
  stack-ref:
    type: pulumi:pulumi:StackReference
    properties:
      name: my-org/my-first-program/dev

  scheduler-role:
    type: aws:iam:Role
    properties:
      assumeRolePolicy: |
        { 
          "Version": "2012-10-17",
          "Statement": [
            {
              "Action": "sts:AssumeRole",
              "Principal": {
                "Service": "scheduler.amazonaws.com"
              },
              "Effect": "Allow"
            }
          ]
        }        
      inlinePolicies:
        - name: "my-inline-policy"
          policy:
            fn::toJSON:
              Version: 2012-10-17
              Statement:
                - Action:
                    - lambda:*
                  Effect: Allow
                  Resource: "*"

  scheduler:
    type: aws:scheduler:Schedule
    properties:
      flexibleTimeWindow:
        mode: "OFF"
      scheduleExpression: rate(1 minutes)
      target:
        arn: ${stack-ref.outputs["lambdaArn"]}
        roleArn: ${scheduler-role.arn}

Clean up

Before moving on, tear down the resources that are part of your stack to avoid incurring any charges.

  1. Run pulumi destroy to tear down all resources. You'll be prompted to make sure you really want to delete these resources. A destroy operation may take some time, since Pulumi waits for the resources to finish shutting down before it considers the destroy operation to be complete.
  2. To delete the stack itself, run pulumi stack rm. Note that this command deletes all deployment history from Pulumi Cloud.

Make sure to empty the contents of your S3 bucket before destroying your resources, or the deletion will fail.
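
If you would rather have Pulumi empty the bucket for you, one option is to set the bucket's forceDestroy property when you define it. Here is a minimal Python sketch; the same property is available in the other languages:

import pulumi_aws as aws

# With force_destroy set, `pulumi destroy` removes any objects remaining in
# the bucket before deleting the bucket itself. Use with care.
bucket = aws.s3.Bucket("my-bucket", force_destroy=True)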

Next steps

In this tutorial, you created a Lambda function that writes a file to an S3 bucket. You also referenced the documentation to create an EventBridge Scheduler that would run the Lambda function on a scheduled basis.

You exported Lambda properties into stack outputs, and referenced those outputs across stacks using stack references.

To learn more about creating and managing resources in Pulumi, take a look at the following resources: