Enterprise software has undergone a slow shift from containerless servers to serverless containers. The evolution of the cloud, the shift to increasingly ephemeral infrastructure, and the convergence of application code and infrastructure code all demand a different view of cloud development and DevOps. To a first approximation, all developers are cloud developers, all applications are cloud native, and all operations are cloud-first. Yet there is no consistent approach to delivering cloud native applications and infrastructure.
Heading to AWS re:Invent? Concerned about how you’ll fit that much YAML into your carry-on bag? Or maybe you just like purple.
Whatever the reason, the Pulumi team will be there all week at Booth 316, Startup Central, Aria Quad, and we’d love to chat with you about AWS and Pulumi.
Catch up with us on serverless functions, containers and Kubernetes, managed services, and any other cloud native infrastructure as code, and see how you can manage your AWS cloud resources more productively with general-purpose programming languages. We can even help you migrate your CloudFormation templates to Pulumi.
If you’d like to book a specific time to talk through your needs, use this link; otherwise, we’ll see you at the booth!
This guest post is from Simon Zelazny of Wallaroo Labs. Find out how Wallaroo powered their cluster provisioning with Pulumi, for data science on demand.
Last month, we took a long-running pandas classifier and made it run faster by leveraging Wallaroo’s parallelization capabilities. This time around, we’d like to kick it up a notch and see if we can keep scaling out to meet higher demand. We’d also like to be as economical as possible: provision infrastructure as needed and de-provision it when we’re done processing.
If you don’t feel like reading the post linked above, here’s a short summary of the situation: there’s a batch job that you’re running every hour, on the hour. This job receives a CSV file and classifies each row of the file, using a Pandas-based algorithm. The run-time of the job is starting to near the one-hour mark, and there’s concern that the pipeline will break down once the input data grows past a particular point.
In the blog post, we show how to split up the input data into smaller dataframes, and distribute them among workers in an ad-hoc Wallaroo cluster, running on one physical machine. Parallelizing the work in this manner buys us a lot of time, and the batch job can continue processing increasing amounts of data.
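To make the splitting step concrete, here is a minimal Python sketch (not Wallaroo’s actual code; the function name and toy data are illustrative) of chunking one large dataframe so that each worker in the cluster receives a roughly equal share of rows:

```python
import numpy as np
import pandas as pd

def split_for_workers(df: pd.DataFrame, n_workers: int) -> list:
    """Split df into n_workers roughly equal chunks, preserving row order.

    np.array_split tolerates a row count that isn't evenly divisible,
    so the last chunks may be one row shorter than the first.
    """
    return np.array_split(df, n_workers)

# Toy input standing in for the hourly CSV batch.
df = pd.DataFrame({"feature": range(10)})
chunks = split_for_workers(df, 3)
print([len(c) for c in chunks])  # → [4, 3, 3]
```

Each chunk can then be handed to a separate worker, and the per-row classification proceeds independently on each one.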
When you’re able to build an app for any cloud using familiar languages, the obvious question is “Where to start?”. We hear you, and so we’ve built some new features to help you scaffold your app and program the cloud even faster than before.
In this post, we’ll look at how to use pulumi new and our selection of templates to build your Pulumi app:
- write code just like an Express app… but end up with a fully deployable serverless app
- lambdas are… just lambdas
- no YAML required… freedom from indentation
- all the features of the V8 runtime… async await ahoy
- all the behaviors of immutable infrastructure as code tools… but we really mean ‘as code’
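To give the points above some shape: Pulumi programs can be written in several languages, and the sketch below is a minimal Python program of the kind a template scaffolds for you (resource names are illustrative, and it assumes the pulumi and pulumi_aws packages plus an AWS-backed stack). It isn’t run directly; the Pulumi CLI executes it on `pulumi up` to provision the declared resources — no YAML involved.

```python
import pulumi
import pulumi_aws as aws

# Declaring infrastructure is just constructing objects in ordinary code.
bucket = aws.s3.Bucket("my-app-bucket")

# Stack outputs act like return values for your infrastructure.
pulumi.export("bucket_name", bucket.id)
```

Because it is plain code, you get the usual tools of the language — functions, loops, packages — while the Pulumi engine still delivers the repeatable, immutable-infrastructure workflow.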
Pulumi also supports containers (including Kubernetes), managed services, infrastructure, and everything in between that you might need for building cloud applications. Better still, you can combine them all in the same program.