Serverless platforms are supposed to make life easier for developers, and by integrating Pulumi we can simplify life for operators too.
In this talk, you'll learn how to use Pulumi with Google Cloud (GKE and Cloud Run) to easily deploy a serverless platform and its dependencies.
Presenters
- Jason (Jay) Smith, App Modernization Specialist, Google
Transcript
Hi and welcome to the Cloud Engineering Summit. My name is Jason Smith, but you can call me Jay, and I am an app modernization specialist at Google Cloud. Today we're going to be talking about standing up a serverless platform using Pulumi, Kubernetes, Knative, and a few other tools. I want to start by talking a little bit about Kubernetes, and since I think everybody who works in the cloud today knows what it is, we'll try to make this quick.
Kubernetes is the de facto platform for running containers. Don't believe me? Look at all these people. This chart of the Kubernetes ecosystem isn't exhaustive and might be a little dated; the CNCF has actually released a new chart that is way larger, but for the sake of saving your eyes from a lot of color we're going to use the smaller one. Trust me, the real thing is bigger. And of course it makes sense that a lot of people want to use it, because it abstracts away infrastructure. If we are trying to move to the cloud, it only makes sense to make the infrastructure as easy as possible: easy to provision nodes, provision networks, provision all of that stuff we need. In the old days you had to have SSH access, a bastion server, script after script after script.
I was in the data center world years ago and we relied heavily on Perl scripts, and I'm sure I just gave a few people horror flashbacks when I mentioned Perl scripts. Kubernetes makes it so much easier. Why? Kubernetes provides us with a declarative API that allows us to observe, compare, and act: see what's happening, compare it to what we expect to happen, act on it, and iterate again and again. And of course, that API is extensible. We can write custom API types; we aren't stuck with a specific platform or a specific set of rules. We are allowed to extend beyond that. If you look at that ecosystem chart we talked about, a lot of those companies have created custom resource definitions to extend what Kubernetes is capable of doing and offer you services you never thought of before.
It's so easy anybody can do it, but there's always a catch. Kubernetes really isn't for developers, at least not out of the box. It's not the right abstraction for the end-developer experience. It's great if you want to build a platform; it makes that so much easier. But it's not for building apps. If you don't believe me, take a look at this. Anybody who's used Kubernetes will tell you that if you want to deploy an application, these are all the steps you have to take, and these are just the basic steps. There are additional steps too: exposing the app to the internet can also mean setting up Istio, standing up Ambassador, NGINX, all of that fun stuff. What do developers actually care about? Writing code. That's their job. They just want to write code. That's what they're best at.
Why not let them focus on what they're best at? This brings us to serverless. You might have already been thinking of that when I mentioned making things easier for developers. You might be saying, haven't we heard of this before, isn't this called serverless? And I'd say you are absolutely right. So let's talk a little bit about serverless. Why is it so popular? We see two models within the serverless realm, as you can see here. From a programming standpoint, developers love the idea of writing service-based applications, which usually means they can be decoupled and can run in a stateless environment.
Because of that, they don't have to hard-code or imperatively code any kind of setup. And from the operational model standpoint, we don't want to handle a lot of ops to scale up as our application becomes popular and our customer base grows, but we also want to know that everything is being taken care of. We want to tell somebody else: you manage the security, you make sure nobody hacks into the servers, you make sure the servers are up. Oh, and on top of that, I only want to pay for my usage. I don't want to have to pay for idle workers.
That makes perfect sense, and it's kind of why a lot of people moved to the cloud. Back in the day, if you wanted spare resources just in case of a spike on, say, Black Friday, you had to have servers on standby. But what happens in an off-period? Those things are just gathering dust; maybe you can find some use for them. So the serverless philosophy is: efficient developers and efficient operators. One way to think of it is, we want to give people the ability to focus on what they are good at. We don't want developers to have to be operators.
We don't necessarily want operators to have to be developers. Granted, we're seeing a lot more operators function as developers, and a lot of developers function as operators; that's where the whole full-stack developer, DevOps idea came from. But realistically, if we can have people focus on what matters to them and what they are best at, that's how we bring the best value to our projects. So while we're talking about developers, what do they care about? Velocity and reproducibility. They don't care about the infrastructure. At the end of the day, they just want to know that their app works, their app scales, their app does what it's supposed to do. That's it.
If there is a load balancer issue, they don't really care about it, at least in terms of their persona. If somebody gives them that duty, then they care about it, but now it's taking away from their other work. So I've laid out a kind of serverless platform. Usually the serverless paradigm, if you will, is build, deploy, and consume, but thanks to my friends at Pulumi I've learned that there are really four steps: stage, build, deploy, and consume. So, staging with Pulumi. I'm sure you've heard a lot of talks about Pulumi; you're at this conference, so you've probably heard a little bit about it.
But let's step back and talk about what we have here. Infrastructure management is now orchestrated by definition files, not hardware tooling. This brings us to infrastructure as code. I'm sure you've heard of every tool that exists out there, whether it be Terraform, CloudFormation, Chef, Puppet; the list goes on and on. And it's great, because when the cloud became a thing, it made it so much easier to deploy my application while also standing up the environment with just code, rather than physically putting servers somewhere and running some startup script. We all used to do that back in the day. Infrastructure as code does not come without its own burdens, though.
We often see custom languages, various DSLs such as HCL, and a lot of them tend to be bespoke. They are very unique to a specific tool set or a specific platform, and you find yourself having to work around that; maybe a tool doesn't work as well on all platforms, so you're using one tool here, another tool there, always trying to find new ways. You have to manage state files, which tend to be saved in a directory or in the cloud somewhere to record what your infrastructure looked like after the last push. Configuration management becomes difficult.
Where do we save all of our files? Where do we save all of our recipes, our definitions? This all becomes very difficult. And all of this tends to exist outside of our code base, so we have this entirely separate box just to stand up our application before we can deploy code. For the most part it did make things easier and we just worked around it, but that doesn't have to be the case anymore, because with Pulumi I find that I don't have to write YAML or JSON cookbooks or definition files.
I don't have to use any kind of DSL, for that matter. I can use the code that I use to write my regular application to deploy a serverless application on Kubernetes. Now, you might be saying, Kubernetes, that's not exactly serverless. Bear with me and we'll get to that, but from a developer standpoint, I can stand up my infrastructure using nothing but code: regular code in my regular coding cycle in my regular CI/CD pipeline. I can create a definition file in TypeScript, in Python, in Go. Less copy-and-paste, more productivity.
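For illustration, here's a minimal sketch of what such a definition might look like in TypeScript; the cluster name, region, and machine type are illustrative assumptions, not values from the demo repo:

```typescript
import * as gcp from "@pulumi/gcp";

// A minimal GKE cluster, defined in ordinary TypeScript rather than a DSL.
const cluster = new gcp.container.Cluster("serverless-platform", {
    location: "us-central1",           // assumed region
    initialNodeCount: 3,
    nodeConfig: {
        machineType: "e2-standard-4",  // assumed machine type
        oauthScopes: ["https://www.googleapis.com/auth/cloud-platform"],
    },
});

// Export the name so later steps (Knative install, app deploys) can target it.
export const clusterName = cluster.name;
```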
It's just in my normal workflow. As a developer who writes in Python or Ruby or whatever your language of choice is, this fits right into my normal workflow. It doesn't feel like additional work, so to speak, because I can write it into my normal loop, and it can be put into CI/CD pipelines as part of the build stage. So let's talk a little bit about CI/CD pipelines; we're going to jump into the build portion of the serverless platform. Many of you may have heard of Tekton, or maybe not. It is an open source tool governed by the CD Foundation. If you're not familiar with the CD Foundation, it is something of a spin-off of the CNCF. What they're trying to solve is a way to make cloud native, declarative CI/CD pipelines. So Tekton uses Kubernetes-native components.
What does that mean? It means everything extends from the Kubernetes API. Everything is a Kubernetes object. Every step runs in a container on a pod. So everything is Kubernetes. It can live in your cluster, which a lot of people like: if you're running a large cluster, say on-prem, and you don't want to have to call out to the outside world in order to trigger your pipeline, this is perfect. Tekton also offers catalogs.
For a lot of the common tooling you use, like pulling from GitHub, pushing to GitLab, or standing up a Google Kubernetes Engine cluster or a kind cluster, very common actions, there are catalogs of reusable Tasks and Pipelines, so you can just download one, plug in the specifics of your environment, and run. It also integrates with other products out there, such as Jenkins X, and with Knative, which we'll talk about shortly, and more. As more people join the CD Foundation, we're seeing more and more companies adopt Tekton, and I really think it's going to become the gold standard of cloud native pipelines. This gives you a quick overview of what a pipeline is.
You've probably seen something like this before. You can create a trigger in Tekton so that whenever you push something to a specific branch, or with a specific tag in your Git repository, it triggers the pipeline. Each pipeline has a variety of steps; each little box here can be seen as a step. You can add logic that says: based on this criteria, execute this step; that's the branching you see here. As a step completes, Tekton spins up another pod for the next step, and the next, until things are done. So, all in the cloud, you can automate the entire CI/CD pipeline using code. Now, let's take a step back to Pulumi.
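Since Tekton objects are just Kubernetes objects, they can be stood up with Pulumi like everything else. A hedged sketch, with illustrative task names and registry paths (the demo's actual Tekton code lives in the repo):

```typescript
import * as k8s from "@pulumi/kubernetes";

// A single-step Tekton Task; the step itself runs in a container (Kaniko here).
const buildTask = new k8s.apiextensions.CustomResource("build-task", {
    apiVersion: "tekton.dev/v1beta1",
    kind: "Task",
    metadata: { name: "build-image" },
    spec: {
        steps: [{
            name: "build",
            image: "gcr.io/kaniko-project/executor:latest",
            args: ["--destination=gcr.io/my-project/my-app"], // assumed image path
        }],
    },
});

// A Pipeline that runs the Task; a trigger on a branch or tag kicks this off.
const pipeline = new k8s.apiextensions.CustomResource("build-pipeline", {
    apiVersion: "tekton.dev/v1beta1",
    kind: "Pipeline",
    metadata: { name: "build-and-deploy" },
    spec: {
        tasks: [{ name: "build", taskRef: { name: "build-image" } }],
    },
});
```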
What we have is the opportunity to create a pipeline for the code that builds clusters. That's actually pretty interesting when you think about it: just like you would create a pipeline for a service that does machine learning or anything like that. Now we're going to jump into Knative. What is Knative? I don't like to say it's a serverless platform or a serverless framework, because it's more like the components for building a serverless framework.
We don't try to define specifically what a serverless framework is, as much as we want to give you the ability to fulfill that serverless paradigm I mentioned earlier: being developer-focused and not focusing so much on the infra or the deploy process of your application building. Knative is an open-source project; it was open-sourced by Google back in 2018 at Google Next. It is 100% open source. We have a variety of companies involved in maintaining it, and of course Google is 100% committed to it as well, so you have this huge brain trust building it. It provides a set of building blocks so you can create your own FaaS or PaaS.
So, as I mentioned earlier, we're not trying to tell you in an opinionated way that serverless is functions or serverless is PaaS. What we are saying is that serverless abstracts Kubernetes tasks away from the user, and how you stand that up is up to you. It's an abstraction on top of Kubernetes that automates a lot of the Kubernetes deployment. So if you want to move it up to a higher level where it acts as functions as a service, with, say, OpenFaaS, you can do that.
If you want to go lower-level and make it more like a platform as a service based on containers, you can do that as well; it runs on containers at the end of the day. I do want to emphasize: it is not a Google product. It is an open-source project that Google open-sourced and contributes to. You do not have to pay a license fee to download it. You can go to GitHub right now, pull it down, use it, and do whatever you want. And since it's open source, you can contribute and you can extend it; we encourage contributing. And like I said, it's not a FaaS; we're not talking about functions.
You can build a functions-as-a-service framework on top of Knative, but it isn't functions in and of itself. So what can you do? From a developer perspective, you can directly deploy code. I won't say it's easy, but it works great; I try to avoid telling people we make anything easy, because easy is kind of subjective. It depends on who you are: some people think working in the CLI is easy, whereas other people prefer a UI. What we do is simplify the deployment process so developers don't have to focus as much on that tedious task. Operators love it because it puts a level of abstraction between the devs and Kubernetes. If you're an operator, you have a lot of stuff to do already.
You don't want to have to do deployment work on top of that; you want to focus on what you need to, let the developers focus on what they need to, and enable them to deploy without hassle. And your platform architects can define what the platform looks like, because Knative is not super opinionated. It's not saying you have to use functions. It's saying: we are abstracting Kubernetes, and you can build whatever you want on top of this abstraction. Out of the box, I would describe it as closer to a PaaS, but we have seen people install other tooling on top of it to make it more FaaS-like, removing a lot of the containerization if you will. So let's talk about what that stack looks like.
Kubernetes is the platform, and we'll build that out later. The primitives Knative offers are Serving and Eventing, and, well, I put Build up there too, which is a funny story. Build was originally part of the Knative components, but the developers thought: this is such a great product, it shouldn't be strictly for Knative, it should be for anything cloud native CI/CD. So Build spun out and became Tekton, and since about version 0.8 it has been deprecated from the Knative stack. I still like to mention it in case somebody dives into old documentation; again, Knative launched in summer 2018, so most documentation is relatively recent. And on top of these primitives, as you can see, you can install a bunch of different products.
At Google we actually have Cloud Run, which is a managed version of Knative Serving, but you can see there are a lot of other tools built on top of these Knative primitives. Let's talk about the components, starting with Knative Serving. What makes this easier? Knative Serving is what actually handles the deployments. When you deploy a new version, it automates the revision handling, the traffic splitting, and the autoscaling. What does that mean? It means it's seamless to scale up and down, and seamless to split traffic between revisions.
If you want to do canary tests, A/B tests, whatever. It integrates directly with a service mesh; I wouldn't say out of the box, but originally it supported just Istio, and now it supports Contour, Gloo, Ambassador, and a few others, depending on your needs. And it's easy to reason about. Again, it is extensible because it's built on top of Kubernetes objects: if you want to use your own autoscaler or your own monitoring platform, you're absolutely allowed to. You're not boxed in. And here's a quick look at how it works. Where you see Service, my function here, that's what I've deployed: the application, in a container.
The Configuration then handles the revisions of the different versions: I push a version, and a day later I push another version, and it deploys the next one; the Route is what routes the traffic. A quick comparison: plain Kubernetes, without Knative, does memory- and CPU-based scaling, while Knative scales based on requests. Scale to zero? Kubernetes can't do it; with Knative your applications absolutely can scale to zero, and there is a way to set a minimum of one pod if you want warm start-ups instead of cold starts. It can scale to zero because the Knative Serving components are what's actually listening for traffic coming from the outside or inside world.
When traffic arrives, it wakes up the application, saying: we need X pods of this application, and routes the traffic there. So you're able to scale down to zero if there's no traffic. The load balancing is much easier to set up, it's based on requests, and you can do simple traffic splitting. Let's actually look at what Kubernetes looks like without Knative. Anybody who's deployed a Kubernetes app has seen something like this. This is a simple hello-world app, but look at all that text (and by the way, this is two files, or you can stack them in one). Is there any way to make this easier? With Knative, I don't really need to set replicas, because Serving already does that for me. I don't really need to set these labels either. I only need a few of these lines: the name, the fact that it's a Service, what container I'm using, and maybe some limits. A lot of these lines aren't really necessary.
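For a concrete sense of "all that text," here's a hedged reconstruction of that hello-world pair in Pulumi TypeScript; the image and labels are illustrative:

```typescript
import * as k8s from "@pulumi/kubernetes";

// The verbose version: a Deployment plus a Service to expose it.
const labels = { app: "helloworld" };

const deployment = new k8s.apps.v1.Deployment("helloworld", {
    spec: {
        replicas: 2,                    // replica management is on you
        selector: { matchLabels: labels },
        template: {
            metadata: { labels: labels },
            spec: {
                containers: [{
                    name: "helloworld",
                    image: "gcr.io/my-project/helloworld:v1", // assumed image
                    ports: [{ containerPort: 8080 }],
                }],
            },
        },
    },
});

const service = new k8s.core.v1.Service("helloworld", {
    spec: {
        type: "LoadBalancer",
        selector: labels,
        ports: [{ port: 80, targetPort: 8080 }],
    },
});
```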
Instead, I can write a simple Knative Service using the Knative API, and with that one file I can deploy the exact same application with just these few lines. Same exact thing. I also want to mention Cloud Run for Anthos, which is a Google-managed Knative offering for Kubernetes; we have a fully managed version as well. Both are Knative Serving API compliant, but running on top of different things. If you don't care about Kubernetes and just want pure serverless, fully managed Cloud Run takes care of it for you. If you want to extend it and you want more freedom, Cloud Run for Anthos is for you, because it runs on a regular Kubernetes cluster.
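To make the comparison concrete, here's a sketch of that same hello-world as a Knative Service in Pulumi TypeScript; the minScale annotation is the optional warm-pod knob mentioned earlier, and the image name is still illustrative:

```typescript
import * as k8s from "@pulumi/kubernetes";

// One object: revisions, routing, and request-based autoscaling come for free.
const knativeService = new k8s.apiextensions.CustomResource("helloworld", {
    apiVersion: "serving.knative.dev/v1",
    kind: "Service",
    metadata: { name: "helloworld" },
    spec: {
        template: {
            metadata: {
                // "0" allows scale-to-zero; set "1" for warm start-ups instead.
                annotations: { "autoscaling.knative.dev/minScale": "0" },
            },
            spec: {
                containers: [{ image: "gcr.io/my-project/helloworld:v1" }],
            },
        },
    },
});
```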
Now, let's talk about Eventing. What is Eventing? I would encourage you to go to serverlesseventing.com, because I write a lot about it, but we'll touch on it here. Anybody who has had to write an application that connects to a Kafka bus or some other message queue knows that you have to imperatively bind your code to it. That doesn't make much sense in the world of microservices, because the whole idea of microservices is a bunch of decoupled services; we don't want to imperatively bind them to anything specific. What if we could declaratively bind them instead? Knative Eventing creates that abstraction between your application and whatever your messaging queue is: instead of writing an application that connects directly to the queue, you just write an application that handles egress or ingress.
Knative Eventing then handles that traffic: where to route it, what topic to subscribe to, how to authenticate with TLS and mutual TLS. You can create your own pipelines, you can view events and live streams, and it connects to your existing systems. We're not saying you have to throw away everything you have today to use Knative Eventing. You can keep using whatever you use today; we support a lot of systems, Kafka, NATS, Pub/Sub, the list goes on, and if you go to knative.dev you can see them all. This gives you a quick idea of what Knative Eventing looks like.
Obviously this can change, since it's open source and still maturing, but we have two basic paradigms when it comes to delivery. First, simple delivery: something hits a source, say our Kafka topic, and we just want it to go straight to the service, simple as that. All that service has to do is be able to receive a POST request and it's good to go; it doesn't have to connect directly to anything. Now maybe you have a more advanced topology and want to add some intelligence. You can create a channel, which operates under a subscription model: you create various subscriptions to the channel, and based on the traffic that comes in, or other parameters, it can route a message to a different service or a different channel, as you can see.
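A sketch of that channel-and-subscription model in Pulumi TypeScript; the channel and subscriber names are illustrative assumptions:

```typescript
import * as k8s from "@pulumi/kubernetes";

// Events flow into the Channel; each Subscription fans them out to a subscriber.
const channel = new k8s.apiextensions.CustomResource("finance-channel", {
    apiVersion: "messaging.knative.dev/v1",
    kind: "Channel",
    metadata: { name: "finance-channel" },
});

const subscription = new k8s.apiextensions.CustomResource("viewer-sub", {
    apiVersion: "messaging.knative.dev/v1",
    kind: "Subscription",
    metadata: { name: "viewer-sub" },
    spec: {
        channel: {
            apiVersion: "messaging.knative.dev/v1",
            kind: "Channel",
            name: "finance-channel",
        },
        subscriber: {
            ref: {
                apiVersion: "serving.knative.dev/v1",
                kind: "Service",
                name: "event-viewer",   // assumed subscriber service
            },
        },
    },
});
```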
So you can do some really advanced routing too, which is great when you're scaling out and building larger apps. Why don't we jump into a demo and I'll show you how we can do this. I'm not going to belabor this part, because I'm sure you've seen plenty of Pulumi demos today, but I did want to point out some of the basics. We have some TypeScript, and what it's going to do is provision a Kubernetes cluster for us. We also have a few other pieces here: we're going to pull down Knative, and what we have here are the Istio CRDs.
Istio was an original requirement for Knative, I should say; Knative now supports alternatives such as Ambassador and Gloo, and a variety of other service meshes and ingress controllers. For the sake of this demo we're going to use Istio, since it was the original. So we're going to install that, install some required Istio components for Knative, then go ahead and install the Knative Eventing and Knative Serving components. The beautiful thing is, the Knative team has recently created an operator, so you don't have to install the components and their CRDs individually; you can just install it as one thing.
So we're going to install that operator. Back in the day you had to install everything separately, and honestly, sometimes I still do that, but I'm getting used to the operator since it's newer and easier to use. We also have some streaming, so we're going to install the Strimzi operator. If you're not familiar, Strimzi is an open-source CNCF project; it's essentially a way to run Kafka easily on a Kubernetes cluster without having to do a lot of ZooKeeper provisioning and whatnot. And we set up some utils as well, for role bindings and all that good stuff.
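Once the Strimzi operator is in the cluster, standing up Kafka is one custom resource. A demo-scale sketch (single replicas, ephemeral storage; the API version may differ from the demo's):

```typescript
import * as k8s from "@pulumi/kubernetes";

// A tiny Kafka cluster managed by the Strimzi operator; not production settings.
const kafka = new k8s.apiextensions.CustomResource("my-cluster", {
    apiVersion: "kafka.strimzi.io/v1beta2",
    kind: "Kafka",
    metadata: { name: "my-cluster" },
    spec: {
        kafka: {
            replicas: 1,
            listeners: [{ name: "plain", port: 9092, type: "internal", tls: false }],
            storage: { type: "ephemeral" },
        },
        zookeeper: {
            replicas: 1,
            storage: { type: "ephemeral" },
        },
        entityOperator: { topicOperator: {} }, // lets us create KafkaTopic objects
    },
});
```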
We have some Tekton code too; we're not going to show Tekton today, but the code is there, and I encourage people to go play with it and figure out the best way to run it. We also have a sample application, and this is going to be the interesting part. It's a simple application that pings the Alpha Vantage API. I really like using the Alpha Vantage API because, one, it's free, and I think it allows up to 500 requests a day.
But also, if you're building streaming software and you want a demo, I can't think of a better example of streaming data than financial data, since it seems to change every second, almost every microsecond really. So we pull some currency information; we're going to get the exchange rate of Japanese yen per U.S. dollar. That's that part. We also have a producer, which is going to receive data from our event source and send it to the Kafka cluster. So basically our event source's egress goes to the Kafka producer, which then writes to Kafka. Now you might be asking yourself: well, this code doesn't actually say to connect to a specific service.
What I have up here is a variable called K_SINK, and K_SINK is essentially the event sink URL. Now how does it know what the event sink is? That's a very good question. What we have here is a SinkBinding, another Kubernetes object that tells Knative Eventing: things coming from this subject, our currency source, should go to the Kafka producer. So when I mentioned earlier that you just worry about egress and ingress, that's exactly what we're doing right here.
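A sketch of such a SinkBinding in Pulumi TypeScript; Knative injects the sink's URL into the subject's pods as the K_SINK environment variable (the resource names here mirror the demo loosely and are assumptions):

```typescript
import * as k8s from "@pulumi/kubernetes";

// Bind the currency source (subject) to the Kafka producer (sink).
const binding = new k8s.apiextensions.CustomResource("currency-binding", {
    apiVersion: "sources.knative.dev/v1",
    kind: "SinkBinding",
    metadata: { name: "currency-binding" },
    spec: {
        subject: {                      // the workload emitting events
            apiVersion: "apps/v1",
            kind: "Deployment",
            name: "currency-source",
        },
        sink: {                         // where K_SINK will point
            ref: {
                apiVersion: "serving.knative.dev/v1",
                kind: "Service",
                name: "kafka-producer",
            },
        },
    },
});
```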
The source is just sending a POST request to whatever our sink URL, K_SINK, is; the receiving side simply accepts any POST requests coming in. Simple as that. Also, speaking of Strimzi, that's how easy it is to deploy a Kafka cluster with Strimzi once you have the operator installed. This is also how we created a service called the Kafka consumer: if we have something writing to a topic, we need something to consume said topic. This is what's going to consume the topic, and you can see it uses the same K_SINK idea: it sends to an event viewer, and we have an event-viewer YAML.
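The event viewer itself can be tiny. A hedged sketch of what such a service might look like in TypeScript (the port and response codes are assumptions; the demo's actual code is in the repo):

```typescript
import * as http from "http";

// Log whatever the eventing layer POSTs to us; no queue connection needed.
http.createServer((req, res) => {
    if (req.method !== "POST") {
        res.writeHead(405).end();
        return;
    }
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
        console.log("received event:", body);  // the "display" step of the demo
        res.writeHead(202).end();
    });
}).listen(8080);
```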
If you actually want to see the code, we have it right here; it simply displays whatever comes to it through that POST. I also want to point out one more thing: we just create a topic on our cluster called finance. Simple enough. Alright, let's see what we've got here. I actually had these running a while ago, but let me go ahead and delete them so you can see it fresh. We're going to delete the Kafka producer. As you'll see in the README, you're able to replace the values with your own project ID, so when pushing up the code I just created a separate file called use and ignored it, in case you're curious. And we're going to just delete these resources really quickly.
Alright, simple as that, so let's go ahead and stand up the producer first. The way SinkBinding works, the sink has to be set up before the source: there has to be something catching the data before you create the thing that's sending the data. So let's go ahead and do that. Our source is called currency-source; the container is creating. I wrote some code into the currency source that also outputs what it fetches, so we can see, okay, this is the currency that's coming out. So let's do it this way. Alright. And let's take a look here... or, I'm sorry, it's actually in the producer.
That's alright; if we see nothing in the currency source, that means it's working. Alright, so here is our currency exchange rate. Now ideally, what we're going to do is set up our event viewer. This is just a simple proof of concept, if you will. If this was a real app, it might very well be something displaying a front end, or maybe a machine learning pod running some kind of process against the data that's coming in. There are various things we could do here. Let me look at the user container. Oh, look at that. In real time, too. Because if we go back here to the producer, we should see a new one, 715. Alright. Yeah.
So it's pretty neat. It's going to take a second because I have a low-level container, but we can come up here into GKE in the Google Cloud console. If you look through my example, and we'll put my GitHub in the notes so you can test around with this, we have a secret Alpha Vantage key that makes the API call; we're able to pull that data, run it, do pretty much everything we need to do, and in real time we're able to stream a financial application. Now, why is this important? If you look at what we have here, we have stood up these clusters using nothing but code, and as you can see this is just standard TypeScript.
This isn't a special language we need just to create a definition. This is standard code I can put into my standard pipeline. And on top of that we have more code, and with that code we write the application itself. All I needed to do to deploy was create a simple Dockerfile, and then, as you can see, I pushed the application using these very simple YAML files: give it a name, declare the kind, and say where the image is hosted. Simple enough. Once that happens, we have the eventing portion.
As you can see here, as the developer (this might be a slightly different example, from the consumer perspective, excuse me), there's very little actual connecting to anything. My event viewer is just ingesting the information, as we can see here. Rather than connecting to anything specific, it's the Knative components that handle the eventing connections. So from a developer perspective, I am able to build the entire application from the ground up as code, as true code, not some hard-to-maintain third-party thing in a special language. It is all simple code that I use every single day.
I was able to literally be a full-stack developer. I built the infrastructure, I wrote my code, I deployed it, and I didn't have to do a lot of configuration. It's all running on top of Kubernetes; as you can see here, this is a Kubernetes cluster at the end of the day. This is, I would say, the future of cloud native, full-stack development, and it's all thanks to Pulumi, Knative, and Kubernetes.
So wasn't that easy? I was able to stage my environment, build my code, deploy it, and use it, all with a code layer. I didn't have to do much at all on the infrastructure side; I was able to just use the languages I use on a daily basis. So, that was standing up a serverless platform. I really hope you enjoyed it. I encourage you to tweet at me; I am usually pretty responsive on Twitter.
So, yeah, please message me. You can also check out my LinkedIn, and please check out serverlesseventing.com. And check out what Google Cloud has to offer; we work with Pulumi all the time, so I recommend giving us all a look. Thank you and have a great day.
Get Started with Pulumi
- Create an AWS S3 Bucket then modify the bucket to host a static website.
- Create an Azure Resource Group and Storage Account, then export the storage account’s connection string.
- Create a Google Cloud Storage Bucket and apply labels to that bucket.
- Create a Kubernetes NGINX deployment and add a config value for minikube deployments.