Upcoming Workshops and Events
It’s a new year, and it’s time to level up your cloud engineering skills. Pulumi is here to help you get started on your cloud engineering journey with workshops and technical sessions.
When AWS Lambda launched in 2014, it pioneered the concept of Function-as-a-Service. Developers could write a function in one of the supported programming languages, upload it to AWS, and Lambda would execute it on every invocation.
Ever since then, a zip archive of application code or binaries has been the only supported deployment option. Even AWS Lambda Layers—reusable components automatically merged into the application code—used the zip packaging format.
Today, AWS announced that AWS Lambda now supports packaging serverless functions as container images. This means that you can deploy a custom Docker or OCI image as an AWS Lambda function.
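To make this concrete, here is a minimal Pulumi TypeScript sketch of a container-image-packaged function. It assumes an image has already been pushed to ECR; the image URI, role, and resource names below are placeholders, not values from the announcement.

```typescript
import * as aws from "@pulumi/aws";

// Execution role the function assumes at runtime.
const role = new aws.iam.Role("container-fn-role", {
    assumeRolePolicy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Action: "sts:AssumeRole",
            Effect: "Allow",
            Principal: { Service: "lambda.amazonaws.com" },
        }],
    }),
});

// Basic CloudWatch Logs permissions for the function.
new aws.iam.RolePolicyAttachment("container-fn-logs", {
    role: role.name,
    policyArn: "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole",
});

// Package the function as a container image instead of a zip archive.
const fn = new aws.lambda.Function("container-fn", {
    packageType: "Image",
    imageUri: "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest", // placeholder image URI
    role: role.arn,
});
```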
Ever since AWS Lambda was released in 2015, users have wanted persistent file storage beyond the small 512 MB /tmp disk allocated to each Lambda function. The following year, Amazon launched EFS, offering a simple managed file system service for AWS, but initially only available to mount onto Amazon EC2 instances. Over the last few months, AWS has been extending EFS access to all of its modern compute offerings: first to EKS for Kubernetes, then to ECS and Fargate for containers. Today, AWS announced that EFS is now also supported in Lambda, providing easy access to network file systems from your serverless functions.
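As a rough illustration, the Pulumi TypeScript sketch below wires an EFS access point into a Lambda function. The subnet, security group, role ARN, and code path are placeholders; the function must run inside a VPC that can reach the file system’s mount target.

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

const fs = new aws.efs.FileSystem("shared-fs");

// A mount target makes the file system reachable from the function's subnet.
const mountTarget = new aws.efs.MountTarget("fs-mount", {
    fileSystemId: fs.id,
    subnetId: "subnet-0123456789abcdef0",        // placeholder subnet
    securityGroups: ["sg-0123456789abcdef0"],    // placeholder security group
});

// Lambda connects through an EFS access point rather than the file system directly.
const accessPoint = new aws.efs.AccessPoint("fs-ap", {
    fileSystemId: fs.id,
    posixUser: { uid: 1000, gid: 1000 },
    rootDirectory: {
        path: "/lambda",
        creationInfo: { ownerUid: 1000, ownerGid: 1000, permissions: "755" },
    },
});

const fn = new aws.lambda.Function("efs-fn", {
    runtime: "nodejs18.x",
    handler: "index.handler",
    code: new pulumi.asset.FileArchive("./app"),             // placeholder path to function code
    role: "arn:aws:iam::123456789012:role/lambda-efs-role",  // placeholder role ARN
    vpcConfig: {
        subnetIds: ["subnet-0123456789abcdef0"],             // must match the mount target's subnet
        securityGroupIds: ["sg-0123456789abcdef0"],
    },
    fileSystemConfig: {
        arn: accessPoint.arn,
        localMountPath: "/mnt/data",                         // mount paths must start with /mnt/
    },
}, { dependsOn: [mountTarget] });
```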
In this fourth installment of the Architecture as Code series, we’ll take a look at serverless, an architectural pattern that has quickly gained popularity among cloud practitioners. There are two reasons why serverless usage has proliferated: a cost-saving, pay-as-you-go model, and elasticity that scales from zero to as many instances as needed to complete the task, all without managing servers.
Due to the nature of the product we build, the Pulumi team needs access to several cloud providers to develop and test it. Supporting an increasing number of cloud providers comes with an ever-increasing cost.
Abstraction is key to building resilient systems because it encapsulates behavior and decouples code, letting each component perform its function independently. The same principles apply to infrastructure, where we want to declare behavior or state and not implementation details. As an industry, we’ve moved away from monolithic applications to distributed systems such as serverless, microservices, Kubernetes, and virtual machine deployments. In this article, we’ll take a closer look at the characteristics of these architectures and how Pulumi can abstract the components that comprise these systems.
Scheduling events has long been an essential part of automation; many tasks need to run at specific times or intervals. You could be checking StackOverflow for new questions every 20 minutes or compiling a report that is emailed every other Friday at 4:00 pm. Today, many of these tasks can be accomplished efficiently in the cloud. While each cloud has its own flavor of scheduled functions, this post steps you through an example using AWS CloudWatch with the help of Pulumi.
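As a preview of the approach, here is a Pulumi TypeScript sketch that runs an inline callback function on a CloudWatch schedule. The resource names and report logic are illustrative; note that CloudWatch cron expressions cannot express “every other Friday” directly, so the rule below fires every Friday at 16:00 UTC instead.

```typescript
import * as aws from "@pulumi/aws";

// A Lambda function created from an inline callback.
const reportFn = new aws.lambda.CallbackFunction("report-fn", {
    callback: async () => {
        console.log("compiling the report...");
    },
});

// Run every Friday at 16:00 UTC (an approximation of "every other Friday").
const schedule = new aws.cloudwatch.EventRule("friday-report", {
    scheduleExpression: "cron(0 16 ? * FRI *)",
});

// Point the schedule at the function.
new aws.cloudwatch.EventTarget("friday-report-target", {
    rule: schedule.name,
    arn: reportFn.arn,
});

// Allow CloudWatch Events to invoke the function.
new aws.lambda.Permission("friday-report-permission", {
    action: "lambda:InvokeFunction",
    function: reportFn.name,
    principal: "events.amazonaws.com",
    sourceArn: schedule.arn,
});
```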
AWS Step Functions lets you build applications by connecting AWS services. Daisy-chaining steps into a workflow simplifies application development, and the resulting state machine diagram shows how the services in your application connect to each other. We’ll go into the details of creating a Lambda function, IAM roles and policies, and the workflow itself. Once we have the example deployed, we’ll walk through the process of adding another function and step to the workflow. Along the way, we’ll discuss one aspect of the Pulumi programming model. The goal of this article is to provide a foundation for building your applications with serverless workflows.
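For orientation, here is a minimal Pulumi TypeScript sketch of a one-step state machine that invokes a Lambda function. The callback body and resource names are illustrative, not the exact example built in the post.

```typescript
import * as aws from "@pulumi/aws";

// The Lambda function invoked by the workflow.
const helloFn = new aws.lambda.CallbackFunction("hello-fn", {
    callback: async (input: any) => ({ greeting: `Hello, ${input?.name ?? "world"}!` }),
});

// IAM role that Step Functions assumes to invoke the function.
const sfnRole = new aws.iam.Role("sfn-role", {
    assumeRolePolicy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Action: "sts:AssumeRole",
            Effect: "Allow",
            Principal: { Service: "states.amazonaws.com" },
        }],
    }),
});

// Grant the state machine permission to invoke the function.
new aws.iam.RolePolicy("sfn-invoke", {
    role: sfnRole.name,
    policy: helloFn.arn.apply(arn => JSON.stringify({
        Version: "2012-10-17",
        Statement: [{ Action: "lambda:InvokeFunction", Effect: "Allow", Resource: arn }],
    })),
});

// The state machine definition: a single Task state that calls the function.
new aws.sfn.StateMachine("hello-workflow", {
    roleArn: sfnRole.arn,
    definition: helloFn.arn.apply(arn => JSON.stringify({
        StartAt: "Hello",
        States: {
            Hello: { Type: "Task", Resource: arn, End: true },
        },
    })),
});
```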
Google Cloud Run is the latest addition to the serverless compute family. While it may look similar to existing public cloud services, its feature set makes Cloud Run unique:
Cloud Run is targeted very specifically at stateless web applications. It uses ephemeral containers, and each execution is limited to 15 minutes.
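To give a feel for what deploying to Cloud Run looks like with Pulumi, here is a minimal TypeScript sketch using @pulumi/gcp. It deploys Google’s public “hello” sample image and opens the service to unauthenticated invocations; the region and resource names are placeholders.

```typescript
import * as gcp from "@pulumi/gcp";

// A Cloud Run service running a stateless container.
const service = new gcp.cloudrun.Service("hello", {
    location: "us-central1",
    template: {
        spec: {
            containers: [{ image: "gcr.io/cloudrun/hello" }], // Google's public sample image
        },
    },
});

// Allow unauthenticated invocations so the service is publicly reachable.
new gcp.cloudrun.IamMember("hello-invoker", {
    service: service.name,
    location: service.location,
    role: "roles/run.invoker",
    member: "allUsers",
});

// Export the service URL.
export const url = service.statuses.apply(s => s[0].url);
```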
Whether it’s an IoT installation, a website, or a mobile app, modern software systems generate a trove of usage and performance data. While it can be daunting to collect and manage, surfacing data empowers the business to make informed product investments. In this article, we’ll explore the following:
If you’d like to follow along, you can clone and run the reference implementation. If you’re new to Pulumi, you can follow this guide to get started.