Posts Tagged Customer

Mapbox IoT-as-code with Pulumi Crosswalk for AWS


Guest Author: Chris Toomey, Solution Architect Lead @ Mapbox

With 8 billion+ connected IoT devices and 2 billion GPS-equipped smartphones already online, logistics businesses are tracking assets at every step in the supply chain. At this scale and complexity, it is imperative to have a flexible way to ingest, process, and act upon this data, without sacrificing security or best practices.

To meet this need, Mapbox has created an Asset Tracking Solution built with Pulumi’s open source JavaScript libraries (AWS, AWSX), available with multi-language support through Pulumi Crosswalk for AWS. Pulumi Crosswalk for AWS is an open source framework that streamlines the creation, deployment, and management of AWS services, applying built-in AWS best practices with minimal lines of code in common programming languages.

In this blog, we will show snippets of the JavaScript code that uses Pulumi to program AWS service APIs and build the Mapbox solution. To see the full architecture in action with a live bike race across America, please refer to this webinar, recorded on June 13th, 2019, and the Mapbox whitepaper. Also see this blog post on the Race Across America that was showcased live during the webinar.
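To give a flavor of the approach, here is a minimal, hypothetical Crosswalk sketch (not code from the Mapbox solution itself): it stands up an API Gateway endpoint backed by a Lambda that could receive device location updates. The `/location` route and the `assetId`/`lat`/`lng` payload fields are assumptions for illustration.

```javascript
"use strict";
const awsx = require("@pulumi/awsx");

// Hypothetical ingestion endpoint: devices POST location updates here.
const api = new awsx.apigateway.API("asset-ingest", {
    routes: [{
        path: "/location",
        method: "POST",
        eventHandler: async (event) => {
            // API Gateway may base64-encode the request body.
            const raw = event.isBase64Encoded
                ? Buffer.from(event.body, "base64").toString()
                : event.body;
            const update = JSON.parse(raw);
            console.log(`asset ${update.assetId} at ${update.lng},${update.lat}`);
            return { statusCode: 200, body: JSON.stringify({ ok: true }) };
        },
    }],
});

// URL for devices (or a test client) to target.
exports.url = api.url;
```

A single `pulumi up` wires the gateway, the Lambda, and the IAM permissions together; that is the kind of boilerplate Crosswalk’s defaults absorb.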

Read more →

Announcing Per User Pricing and Unlimited Stacks for Teams

Since our launch last year, thousands of users and hundreds of companies, from startups to Fortune 500 enterprises, have chosen Pulumi to deliver cloud applications and infrastructure across AWS, Azure, Google Cloud, and Kubernetes. Today we are announcing important changes to better align our product and pricing with how we’ve heard you want to use Pulumi in production. We’re optimistic that these changes will help companies of all sizes choose Pulumi, enabling their teams to deliver cloud applications and infrastructure faster and more reliably.

Read more →

Data science on demand: spinning up a Wallaroo cluster is easy with Pulumi


This guest post is from Simon Zelazny of Wallaroo Labs, and previously appeared on the Wallaroo Labs blog. Find out how Wallaroo powered their cluster provisioning with Pulumi, for data science on demand.

Oh no, more data!

Last month, we took a long-running pandas classifier and made it run faster by leveraging Wallaroo’s parallelization capabilities. This time around, we’d like to kick it up a notch and see if we can keep scaling out to meet higher demand. We’d also like to be as economical as possible: provision infrastructure as needed and de-provision it when we’re done processing.

If you don’t feel like reading the post linked above, here’s a short summary of the situation: there’s a batch job that runs every hour, on the hour. The job receives a CSV file and classifies each row using a pandas-based algorithm. The job’s run time is approaching the one-hour mark, and there’s concern that the pipeline will break down once the input data grows past a certain point.

In that post, we showed how to split the input data into smaller dataframes and distribute them among workers in an ad-hoc Wallaroo cluster running on a single physical machine. Parallelizing the work in this manner buys us a lot of time, and the batch job can continue processing increasing amounts of data.
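The new post takes the next step: declare that cluster as on-demand cloud infrastructure with Pulumi, spin it up for the batch window, and tear it down afterwards. As a rough, hypothetical sketch (not Wallaroo Labs’ actual configuration; the worker count, instance type, and Ubuntu AMI lookup are assumptions), provisioning a handful of worker instances might look like this:

```javascript
"use strict";
const aws = require("@pulumi/aws");
const pulumi = require("@pulumi/pulumi");

// Number of Wallaroo workers to provision; tune per run via stack config.
const config = new pulumi.Config();
const workerCount = config.getNumber("workerCount") || 3;

// Look up a recent Ubuntu 18.04 AMI (assumed base image for the workers).
const ubuntu = aws.getAmi({
    mostRecent: true,
    owners: ["099720109477"], // Canonical
    filters: [{
        name: "name",
        values: ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"],
    }],
});

// One EC2 instance per worker.
const workers = [];
for (let i = 0; i < workerCount; i++) {
    workers.push(new aws.ec2.Instance(`wallaroo-worker-${i}`, {
        instanceType: "c5.large",
        ami: ubuntu.then(a => a.id),
        tags: { role: "wallaroo-worker" },
    }));
}

exports.workerIps = workers.map(w => w.publicIp);
```

Running `pulumi up` before the batch and `pulumi destroy` once it finishes keeps the cluster ephemeral, which is exactly the economy the post is after.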

Read more →