Meet the Pulumi team at AWS re:Invent

Marc Holmes

Heading to AWS re:Invent? Concerned about how you’ll manage to get that much YAML into your carry-on bag? Or maybe you just like purple.

Whatever the reason, the Pulumi team will be there all week at **Booth 316, Startup Central, Aria Quad**, and we’d love to chat with you about AWS and Pulumi.

Catch up with us on serverless functions, containers and Kubernetes, managed services, and any other cloud native infrastructure as code, and see how you can manage your AWS cloud resources more productively with general purpose programming languages. We can even help you migrate your CloudFormation to Pulumi.

If you want to grab a specific time to talk through your needs, then use this link, otherwise we’ll just see you at the booth!

Read more →

Reusable CI/CD components with CircleCI Orbs for Pulumi

Chris Smith

This morning CircleCI announced the launch of CircleCI Orbs, which let you create reusable components for CircleCI workflows. Orbs simplify your CI/CD configuration by reusing existing orb jobs and commands, much as Pulumi simplifies the delivery of your cloud native infrastructure by sharing and reusing existing components.
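To make the analogy concrete, here is a minimal sketch of the Pulumi side: a reusable component that packages up a small piece of infrastructure so other projects can import and reuse it, much as an orb packages up workflow steps. The component type, names, and resources are illustrative and are not part of the orb announcement itself.

```typescript
import * as aws from "@pulumi/aws";
import * as pulumi from "@pulumi/pulumi";

// A reusable Pulumi component: package it once, reuse it across projects.
export class StaticSite extends pulumi.ComponentResource {
    public readonly bucketName: pulumi.Output<string>;

    constructor(name: string, opts?: pulumi.ComponentResourceOptions) {
        super("examples:web:StaticSite", name, {}, opts);

        // The component encapsulates the resources it creates.
        const bucket = new aws.s3.Bucket(`${name}-bucket`, {
            website: { indexDocument: "index.html" },
        }, { parent: this });

        this.bucketName = bucket.bucket;
        this.registerOutputs({ bucketName: this.bucketName });
    }
}
```

A consuming project would simply import `StaticSite` and instantiate it, in much the same way a CircleCI config references a published orb.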

Pulumi is proud to be a CircleCI technology partner, and we were excited to get a head start on seeing how orbs could make it easier to take Pulumi into production within CircleCI. The Pulumi Orbs for CircleCI are available today for you to start using.

Read more →

Data science on demand: spinning up a Wallaroo cluster

Marc Holmes, Simon Zelazny

This guest post is from Simon Zelazny of Wallaroo Labs. Find out how Wallaroo powered their cluster provisioning with Pulumi, for data science on demand.

Last month, we took a long-running pandas classifier and made it run faster by leveraging Wallaroo’s parallelization capabilities. This time around, we’d like to kick it up a notch and see if we can keep scaling out to meet higher demand. We’d also like to be as economical as possible: provision infrastructure as needed and de-provision it when we’re done processing.
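As a rough sketch of what that on-demand provisioning can look like with Pulumi (the worker count, AMI filter, and instance size below are illustrative, not Wallaroo’s actual setup), a few lines of TypeScript stand up a small worker fleet that `pulumi destroy` tears down once the batch is finished:

```typescript
import * as aws from "@pulumi/aws";

// Illustrative worker count: scale this up when demand grows,
// then run `pulumi destroy` when the batch job is done.
const workerCount = 3;

// Look up a recent Amazon Linux 2 AMI rather than hard-coding an ID.
const ami = aws.getAmi({
    mostRecent: true,
    owners: ["amazon"],
    filters: [{ name: "name", values: ["amzn2-ami-hvm-*-x86_64-gp2"] }],
});

const workers = [];
for (let i = 0; i < workerCount; i++) {
    workers.push(new aws.ec2.Instance(`wallaroo-worker-${i}`, {
        instanceType: "c5.large",      // illustrative instance size
        ami: ami.then(a => a.id),
        tags: { role: "wallaroo-worker" },
    }));
}

// Export the private IPs so cluster nodes can find each other.
export const workerIps = workers.map(w => w.privateIp);
```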

If you don’t feel like reading the post linked above, here’s a short summary of the situation: there’s a batch job that you’re running every hour, on the hour. This job receives a CSV file and classifies each row of the file using a pandas-based algorithm. The job’s runtime is nearing the one-hour mark, and there’s concern that the pipeline will break down once the input data grows past a particular point.

In the blog post, we show how to split up the input data into smaller dataframes, and distribute them among workers in an ad-hoc Wallaroo cluster, running on one physical machine. Parallelizing the work in this manner buys us a lot of time, and the batch job can continue processing increasing amounts of data.

Read more →

From Terraform to Infrastructure as Software

Pat Gavlin

Here at Pulumi, we love programming the cloud using infrastructure as code. From the project’s outset, we’ve been inspired by technologies like Terraform, AWS CloudFormation, and Helm, and in fact we leverage the Terraform Providers ecosystem to support a broad range of clouds, including AWS, Azure, and Google Cloud.

Just recently, we extended this with first-class support for Kubernetes. Pulumi delivers the same infrastructure as code workflows, only using general purpose languages like JavaScript, TypeScript, Python, and Go, extending robust infrastructure provisioning with abstraction and reuse, highly productive tooling, and access to all the other things we already know and love about programming languages.

In this article, we will convert existing Terraform configuration to Pulumi TypeScript. By doing so, we’ll see how using general purpose programming languages can help you create simpler, more flexible infrastructure as code, with greater productivity and less repetition. The infrastructure we’ll be working with is a load-balanced web server, with one AWS EC2 instance per availability zone and an option to allow SSH access. Of course, these same benefits would also accrue were we to target Azure, Google Cloud, or Kubernetes instead.
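For a flavor of what that conversion looks like, here is a minimal Pulumi TypeScript sketch of the same shape of infrastructure: a security group whose SSH rule is toggled by an ordinary `if`, and a plain loop that creates one instance per availability zone in place of Terraform’s `count`. The config key, AMI filter, zone list, and instance size are illustrative, and the load balancer is omitted for brevity; see the full article for the real conversion.

```typescript
import * as aws from "@pulumi/aws";
import * as pulumi from "@pulumi/pulumi";

const config = new pulumi.Config();
// Hypothetical config flag standing in for the Terraform "allow SSH" variable.
const allowSsh = config.getBoolean("allowSsh") || false;

// HTTP is always open; SSH is opened only when requested.
const ingress = [{ protocol: "tcp", fromPort: 80, toPort: 80, cidrBlocks: ["0.0.0.0/0"] }];
if (allowSsh) {
    ingress.push({ protocol: "tcp", fromPort: 22, toPort: 22, cidrBlocks: ["0.0.0.0/0"] });
}
const webSg = new aws.ec2.SecurityGroup("web-sg", { ingress });

// Look up an AMI instead of hard-coding an ID.
const ami = aws.getAmi({
    mostRecent: true,
    owners: ["amazon"],
    filters: [{ name: "name", values: ["amzn2-ami-hvm-*-x86_64-gp2"] }],
});

// One web server per availability zone: an ordinary loop replaces `count`.
const zones = ["us-west-2a", "us-west-2b", "us-west-2c"];  // illustrative
const servers = zones.map((az, i) =>
    new aws.ec2.Instance(`web-${i}`, {
        instanceType: "t2.micro",
        ami: ami.then(a => a.id),
        availabilityZone: az,
        securityGroups: [webSg.name],
    }));

export const publicIps = servers.map(s => s.publicIp);
```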

Read more →

Using Helm and Pulumi to define cloud native infrastructure

Alex Clemmer

The Helm community is one of the brightest spots in the infrastructure ecosystem: collectively, it has accumulated person-decades of operational expertise to produce Kubernetes manifests that “just work.”

But for many users, it is not feasible to run everything in Kubernetes, and the community is just starting to develop answers to questions like: what happens when a Helm Chart needs to interface with, for example, a managed database like AWS RDS or Azure CosmosDB?

Pulumi is a cloud native development platform designed to express any cloud native infrastructure as code in a natural, intentional manner using familiar languages. The most natural way to solve this challenge would be to stand up an instance of AWS RDS, populate a Kubernetes Secret with the connection details, and then simply let your application use these newly available resources. Pulumi gives users the primitives they need to accomplish tasks like this effectively.
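A minimal sketch of that workflow in Pulumi TypeScript, assuming a Postgres RDS instance and a config key named `dbPassword` (both illustrative): provision the managed database, then publish its connection details into the cluster as a Secret that a Helm chart or any other workload can reference.

```typescript
import * as aws from "@pulumi/aws";
import * as k8s from "@pulumi/kubernetes";
import * as pulumi from "@pulumi/pulumi";

const config = new pulumi.Config();
const dbPassword = config.requireSecret("dbPassword");  // hypothetical config key

// 1. Stand up a managed database outside the cluster.
const db = new aws.rds.Instance("app-db", {
    engine: "postgres",
    instanceClass: "db.t3.micro",      // illustrative size
    allocatedStorage: 20,
    username: "app",
    password: dbPassword,
    skipFinalSnapshot: true,
});

// 2. Publish the connection details into the cluster as a Secret,
//    where a Helm chart (or any workload) can consume them.
const dbSecret = new k8s.core.v1.Secret("app-db-conn", {
    stringData: {
        host: db.address,
        port: db.port.apply(p => `${p}`),
        username: "app",
        password: dbPassword,
    },
});
```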

Read more →

Building a future of cloud engineering

Joe Duffy

We founded Pulumi because of a deeply held belief that the cloud promises to change all aspects of software development and that there remains an incredible opportunity to reimagine the entire experience, from idea to creation to delivery to management, with one person in mind: you, the engineer.

Read more →

Continuous Delivery to Any Cloud using GitHub Actions

Joe Duffy

Today we announced our partnership with GitHub on the new GitHub Actions feature. We are super excited about this bold and innovative technology, especially as it relates to Pulumi, and CI/CD more broadly. We truly believe that Pulumi plus GitHub Actions delivers the easiest, most capable, and friction-free way to achieve continuous delivery of cloud applications and infrastructure, no matter your cloud – AWS, Azure, Google Cloud, Kubernetes, or even on-premises. In this post, we’ll dig deeper to see why, and how to get up and running. It’s refreshingly easy!

Read more →

Lambdas as Lambdas: The magic of simple serverless Functions

Cyrus Najmabadi

Pulumi’s approach to infrastructure as code uses familiar languages instead of YAML or DSLs. One major advantage of this approach is that AWS Lambdas, Azure Functions, Google Cloud Functions, et al. can just be real lambdas in your favorite language, offering a flexible and simple path to serverless. Such functions behave as normal functions, allowing you to treat serverless code as part of your application instead of separate “infrastructure” that needs to be configured, managed, and versioned manually. In this post, we’ll examine this capability in JavaScript, which is already very function- and callback-oriented, making serverless feel like a natural extension of the language we already know and love.

While Functions as a Service (FaaS) systems have become more popular, getting up and running can still feel overly complex compared to normal application development. FaaS offerings today divide the development experience between “infrastructure” – doing all the work to configure the Lambda runtime itself (e.g. how much memory to use, what environment variables should be present, etc.) – and writing and maintaining the code that will execute in the function itself when triggered. Most developers just want to focus on the latter, write some code, and have it work.
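Here is a minimal sketch of that experience using Pulumi’s AWS event handlers (the bucket and handler names are illustrative): the callback is an ordinary JavaScript/TypeScript function, and Pulumi provisions it as an AWS Lambda wired to the bucket’s events.

```typescript
import * as aws from "@pulumi/aws";

const bucket = new aws.s3.Bucket("uploads");

// The callback below is a normal function in code, yet Pulumi packages it
// up and deploys it as an AWS Lambda triggered on every new S3 object.
bucket.onObjectCreated("logNewObject", async (event) => {
    for (const record of event.Records || []) {
        console.log(`new object: ${record.s3.object.key}`);
    }
});

export const bucketName = bucket.bucket;
```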

Read more →

How do Kubernetes Deployments work?

Alex Clemmer

This post is part 3 in a series on the Kubernetes API. Earlier, Part 1 focused on the lifecycle of a Pod and Part 2 focused on the lifecycle of a Service.

What is happening when a Deployment rolls out a change to your app? What does it actually do when a Pod crashes or is killed? What happens when a Pod is relabeled so that it’s no longer targeted by the Deployment?

Deployment is probably the most complex resource type in Kubernetes core. It specifies how changes should be rolled out over ReplicaSets, which in turn specify how Pods should be replicated in a cluster.
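To ground that relationship, here is a minimal Deployment written with Pulumi’s Kubernetes provider (the image and replica count are illustrative): the selector ties the Deployment to Pods carrying matching labels, and the ReplicaSet it creates keeps that many such Pods running, replacing any Pod that crashes or drifts out of the selector.

```typescript
import * as k8s from "@pulumi/kubernetes";

const appLabels = { app: "nginx" };

// A minimal Deployment: the selector matches Pods with these labels, and the
// underlying ReplicaSet keeps `replicas` such Pods running at all times.
const deployment = new k8s.apps.v1.Deployment("nginx", {
    spec: {
        replicas: 3,
        selector: { matchLabels: appLabels },
        template: {
            metadata: { labels: appLabels },
            spec: { containers: [{ name: "nginx", image: "nginx:1.15" }] },
        },
    },
});
```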

In this post we continue our exploration of the Kubernetes API, cracking Deployment open using kubespy, a small tool we developed to observe Kubernetes resources in real time.

Read more →

Running a Serverless Node.js HTTP Server on AWS and Azure

Cyrus Najmabadi

The newly introduced cloud.HttpServer in Pulumi makes it easy to serve a standard Node.js HTTP server as a serverless API on any cloud platform. This new API brings together the flexibility and rich ecosystem of Node.js HTTP servers, the cost and operational simplicity of serverless APIs, and the multi-cloud authoring and deployment of Pulumi. In this post, we walk through some of the background on why we introduced this new API and how it fits into the Node.js HTTP ecosystem.
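For flavor, here is a short sketch along the lines the post describes, assuming `cloud.HttpServer` takes a name plus a factory that returns a standard Node.js request listener (an Express app in this case); consult the post and the `@pulumi/cloud` docs for the exact signature.

```typescript
import * as cloud from "@pulumi/cloud";
import * as express from "express";

// The factory returns an ordinary Node.js request listener; Pulumi deploys
// it as a serverless API on whichever cloud provider is configured.
const server = new cloud.HttpServer("web", () => {
    const app = express();
    app.get("/", (_req, res) => res.json({ hello: "world" }));
    return app;
});

// The resulting endpoint URL, known once the deployment completes.
export const url = server.url;
```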

Read more →