Serverless on AWS with Pulumi: Simple, Event-based Functions

Cyrus Najmabadi

One of Pulumi’s goals is to provide the simplest possible way to do serverless programming on AWS by enabling you to create cloud infrastructure with the familiar programming languages you are already using today. We believe that the constructs already present in these languages, like flow control, inheritance, and composition, provide the right abstractions to effectively build up infrastructure in a simple and familiar way.
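As a small illustration of that idea, here is a minimal sketch (the team list and resource names are made up for the example) in which an ordinary function and an ordinary loop stamp out per-team S3 buckets with Pulumi’s TypeScript SDK:

import * as aws from "@pulumi/aws";

// Composition: a helper function encapsulates the bucket settings we want to repeat.
function createLogsBucket(team: string): aws.s3.Bucket {
    return new aws.s3.Bucket(`${team}-logs`, {
        versioning: { enabled: true },
    });
}

// Flow control: a plain loop stamps out one bucket per team.
for (const team of ["payments", "search", "mobile"]) {
    createLogsBucket(team);
}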

In a previous post, we focused on how Pulumi lets you create an AWS Lambda directly from your own JavaScript function. While this was much easier than having to build a Lambda Deployment Package yourself, it could still be overly complex to integrate these Lambdas into a complete serverless application.
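For reference, turning a plain callback into a Lambda looks roughly like this; a minimal sketch assuming the CallbackFunction helper in @pulumi/aws, with illustrative names:

import * as aws from "@pulumi/aws";

// A JavaScript/TypeScript callback becomes the body of an AWS Lambda; Pulumi
// builds and uploads the deployment package during `pulumi up`.
const fn = new aws.lambda.CallbackFunction("hello-fn", {
    callback: async (event: unknown) => {
        console.log("received event:", JSON.stringify(event));
        return { statusCode: 200, body: "hello from a Pulumi-managed Lambda" };
    },
});

export const lambdaArn = fn.arn;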

Read more →

Upcoming AWS + Pulumi Webinar on Feb 5

Erin Xue

Pulumi is hosting a webinar with AWS Fargate on February 5 at 10AM PST (register here). We’ll be chatting about how to implement cloud native infrastructure across your organization using AWS and Pulumi: general purpose programming languages to deliver everything from VMs to Kubernetes to serverless.

Read more →

2018 Year at a Glance

Joe Duffy

As we close out 2018 and enter a new year, I was reflecting on our progress here at Pulumi this past year and wanted to share some thoughts. It’s been an incredible year, and we are hugely thankful to our passionate community, customers, and partners.

Read more →

Connecting multiple identities to Pulumi

Praneet Loke

Hot on the heels of our GitLab sign-in support, we’ve just released support for multiple identities for a single Pulumi account in the Pulumi Service. Previously, you could only sign up for a new Pulumi account using a GitHub or GitLab identity. Starting today, you can connect your Pulumi account with additional identities beyond the one you first signed up with.

Read more →

Delivering Cloud Native Infrastructure as Code

Marc Holmes

Enterprise software has undergone a slow shift from containerless servers to serverless containers. The evolution of the cloud, combined with the shift to increasingly ephemeral infrastructure and the convergence of application code and infrastructure code, demands a different view of cloud development and DevOps. To a first approximation, all developers are cloud developers, all applications are cloud native, and all operations are cloud-first. Yet there is no consistent approach to delivering cloud native applications and infrastructure.

Read more →

Epsagon: Define, Deploy and Monitor Serverless Applications

Luke Hoban

Pulumi makes it incredibly easy to use serverless functions within your cloud infrastructure and applications - creating an AWS Lambda is as simple as writing a JavaScript lambda!

import * as aws from "@pulumi/aws";

const bucket = new aws.s3.Bucket("my-bucket");
bucket.onObjectCreated("onNewObject", async (ev) => console.log(ev));

By making it so easy to introduce serverless functions into cloud infrastructure, Pulumi programs often incorporate many Lambdas, all wired together as part of a larger set of infrastructure and application code.
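To make that concrete, here is a minimal sketch of two Lambdas living in one program, each attached to a different event source; the resource names are illustrative, and onObjectCreated/onEvent are assumed to be the event-handler conveniences in @pulumi/aws:

import * as aws from "@pulumi/aws";

const uploads = new aws.s3.Bucket("uploads");
const done = new aws.sns.Topic("processing-done");

// Lambda #1: runs for every object written to the bucket.
uploads.onObjectCreated("process-upload", async (ev) => {
    for (const rec of ev.Records ?? []) {
        console.log("processing", rec.s3.object.key);
    }
});

// Lambda #2: runs whenever a message is published to the topic.
done.onEvent("notify", async (ev) => {
    for (const rec of ev.Records ?? []) {
        console.log("finished:", rec.Sns.Message);
    }
});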

Read more →

Meet the Pulumi team at AWS re:Invent

Marc Holmes

Heading to AWS re:Invent? Concerned about how you’ll manage to get that much YAML into your carry-on bag? Or maybe you just like purple.

Whatever the reason, the Pulumi team will be there all week at **Booth 316, Startup Central, Aria Quad**, and we’d love to chat with you about AWS and Pulumi.

Catch up with us on serverless functions, containers and Kubernetes, managed services, and any other cloud native infrastructure as code, and see how you can more productively manage your AWS cloud resources with general purpose programming languages. We can even help you migrate your CloudFormation to Pulumi.

If you want to grab a specific time to talk through your needs, use this link; otherwise, we’ll just see you at the booth!

Read more →

Reusable CI/CD components with CircleCI Orbs for Pulumi

Chris Smith

This morning CircleCI announced the launch of CircleCI Orbs, which let you create reusable components for CircleCI workflows. Orbs enable you to simplify your CI/CD configuration by reusing existing orb jobs or commands, in much the same way Pulumi enables you to simplify the delivery of your cloud native infrastructure by sharing and reusing existing components.

Pulumi is proud to be a CircleCI technology partner, and we were excited to get a head start on seeing how orbs could make it easier to take Pulumi into production within CircleCI. The Pulumi Orbs for CircleCI are available today for you to start using.

Read more →

Data science on demand: spinning up a Wallaroo cluster

Marc Holmes and Simon Zelazny

This guest post is from Simon Zelazny of Wallaroo Labs. Find out how Wallaroo powered their cluster provisioning with Pulumi, for data science on demand.

Last month, we took a long-running pandas classifier and made it run faster by leveraging Wallaroo’s parallelization capabilities. This time around, we’d like to kick it up a notch and see if we can keep scaling out to meet higher demand. We’d also like to be as economical as possible: provision infrastructure as needed and de-provision it when we’re done processing.

If you don’t feel like reading the post linked above, here’s a short summary of the situation: there’s a batch job that you’re running every hour, on the hour. This job receives a CSV file and classifies each row of the file using a Pandas-based algorithm. The runtime of the job is nearing the one-hour mark, and there’s concern that the pipeline will break down once the input data grows past a particular point.

In the blog post, we show how to split the input data into smaller dataframes and distribute them among workers in an ad-hoc Wallaroo cluster running on one physical machine. Parallelizing the work in this manner buys us a lot of time, and the batch job can continue processing increasing amounts of data.
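As a flavor of what the provisioning step can look like with Pulumi, here is a minimal sketch (the AMI, instance size, and config key are placeholders) that reads a worker count from stack configuration and creates that many EC2 instances, so scaling out or tearing the cluster back down is a one-line change followed by `pulumi up` or `pulumi destroy`:

import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

// The worker count comes from stack config, so each run can size the cluster to the job.
const config = new pulumi.Config();
const workerCount = config.getNumber("workerCount") ?? 3;

const workers: aws.ec2.Instance[] = [];
for (let i = 0; i < workerCount; i++) {
    workers.push(new aws.ec2.Instance(`wallaroo-worker-${i}`, {
        ami: "ami-0123456789abcdef0", // placeholder: an image with Wallaroo preinstalled
        instanceType: "c5.xlarge",    // placeholder: size to the workload
    }));
}

export const workerIps = workers.map(w => w.publicIp);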

Read more →