Advanced Infrastructure as Code - Workshop with Luke Hoban


In this workshop, Pulumi experts cover advanced infrastructure as code topics including authoring components, multi-stack architectures, and testing. You’ll also learn how to apply infrastructure as code to Kubernetes - both for provisioning managed Kubernetes clusters and deploying Kubernetes applications and services on top of existing clusters. Get started:

Video transcript:
All right. Hi, everyone, and welcome to part two of our infrastructure as code workshop, this one on advanced infrastructure as code. My name is Luke Hoban, and I'm going to be walking us through the workshop today. We're going to give folks a few minutes to stream in here, so we'll probably start in about three minutes. So first, just to make this a little bit interactive right off the gun, I want to ask folks to fill out this poll. You can respond at pollev.com/lukehoban275. I just want to know how many people here attended part one of the workshop last week, or checked it out on YouTube, or have generally already used Pulumi a fair bit. All right, we'll wait for a few more to come in and see how much input we can get here. We'll have a few more of these polls throughout as well, so for folks who do get this set up, it'll be easier to answer some of the next polls too. All right, great. So a decent split; a lot of folks did attend, so that's good. I'll give a really quick recap of part one, but I am going to dive into some meaningfully more advanced concepts and material throughout this, so I definitely hope that folks have had a chance to check out Pulumi prior to this. And if not, feel free to ask questions as we go, and we'll try to address any topics you want covered as background. OK. So, just as a recap of last time, I think we talked about a few key things. Let me get rid of this presentation bar down here. That's right. OK. So as a recap, we talked about a few key things.
So one is we talked about modern infrastructure as code, and Pulumi enabling us to do infrastructure as code for the modern cloud, that being things like containers and serverless, and not just for the compute parts of that, but for all of the infrastructure we need to develop. So that could be the compute, whether it's the VMs or the serverless functions or the containers, or it could be the core infrastructure layers, the networking and security we need to set up, or it could be the data stores, the object stores with S3 that we demoed and walked through in the workshop last time, or RDS databases, or what have you. And then finally the application layer, and how we deploy the application components themselves onto that compute. So we looked at what that modern infrastructure as code looks like, and how that shift into the cloud infrastructure world is impacting the way we think about needing more expressive ways to describe our cloud infrastructure, instead of just pointing and clicking or scripting our infrastructure deployments. And part of that was really the idea of enabling developers and infrastructure engineers to collaborate. One of the key things with Pulumi is trying to bring those two worlds a bit closer together. As we move quicker with our infrastructure, we need the development and ops teams to be working in unison, and often actually collaborating together closely. And so one of the things Pulumi really enables, both with modern infrastructure and with using real programming languages, is a closer ability for development and infrastructure engineers to work together.
And then we talked about the key thing that makes Pulumi different from some of the other infrastructure as code tools folks may have worked with in the past, whether it's CloudFormation or Terraform or others in the space. And that is that Pulumi lets you use real programming languages. So it lets you use Python or JavaScript or Go or .NET. And this brings some basic things that are just nice to have, like loops and conditionals and functions and classes. We saw some of these in the workshop last time, so if you haven't yet seen that, you can go back and see what it looks like to really use loops, to use packages, and that sort of thing. But the more important piece is that this lets us share and reuse components of infrastructure in the same way we do when we build application software, instead of just copying and pasting blocks of YAML around all over our code base. As things get more complicated, this becomes really important. And then finally, we really emphasized and demoed a lot about how, even though we're using these fully expressive programming languages, Pulumi is still a desired state infrastructure as code tool. So the program you write, even though it's imperative, will run to create a desired state, and then the Pulumi engine will drive our infrastructure to that desired state. This means you get the best of both worlds: the expressiveness of real programming languages plus the desired state, declarative model of infrastructure as code. And finally, by using existing languages, we get all these nice benefits around our end-to-end application development life cycle.
So we get IDE support and linters, we get to use test frameworks, we get all of the communities and libraries and packages around our language of choice, whether it's Python or JavaScript or what have you. And so we get to bring to bear all these software engineering concepts and use them now in our infrastructure as well as in our applications. So again, if any folks are interested in going deep on this or have questions, feel free to raise your hand or drop a note and we can answer that now; otherwise I definitely encourage you to go check out part one of the workshop. OK. So in terms of what we're going to cover today, a couple of key things. First, we're going to talk about some concepts. The three more advanced infrastructure as code concepts I wanted to touch on today start with components. This really speaks to that point on the last slide: instead of copy-pasting, we want to think about how to create reusable blocks of infrastructure that we can apply throughout our code base and treat like software artifacts, like new APIs and new packages. So we'll talk a little bit about components and how we can use them. We'll also talk about multi-stack architectures. As your infrastructure grows beyond the complexity of a single deployment unit, you want to have multiple different things being deployed that are maybe related to each other. How do you think about that? How do you break that up? And how do you structure that within your infrastructure as code? And then finally, time permitting, we'll touch on testing, and how we can take testing practices we might be used to in our existing languages and application frameworks and apply those to our infrastructure as well.
In particular, how we can test these components of functionality as we develop them, so we can have confidence that they accomplish what we want them to accomplish. And then we'll get very hands-on and build up some real infrastructure. This time we're going to move on from the very simple infrastructure we looked at last time, standing up an S3 bucket and a couple of EC2 instances. This time, we'll stand up a lot more infrastructure. We'll stand up Kubernetes clusters, which involve several different resources all working together in interesting ways, and then we'll stand up applications and services running inside those clusters. So we'll use Pulumi infrastructure as code both for managing cloud infrastructure and for managing Kubernetes, which is a pattern we've seen a lot of teams trying to approach. OK. So I see a question about whether part one was recorded and whether there's a link. Yeah, it was recorded; I believe it's up on YouTube, and somebody else on the call can probably drop a link in here in parallel to me moving on with the discussion. But I definitely encourage folks to check that out as well, and we'll make sure that gets shared here. All right. Any other questions before we dive in? OK, great. I'll keep going then. So, the first lab we're going to do. Before I go into any of the conceptual things, just because it's going to take a while to stand up our cluster, I'm going to walk through the first lab here, and then we'll step back and talk about some of the concepts we touched on in a little more detail. But just to give ourselves time to deploy this cluster, I'm going to do this now. For folks who do want to follow along, this is optional; you don't have to do this.
If you have access to some existing cluster, whether it's Docker for Desktop or something else you have within your organization, you'll be able to follow along with the next lab where we go and work with resources. But if you don't already have a cluster, feel free to follow along with this lab. So let me go ahead and jump into this lab. The lab I'm going to be working through, for folks who are following along, was linked on that slide as well, but it's in the Pulumi infrastructure as code workshop on GitHub. I'm going to be doing the AWS labs, and I'm going to do this in TypeScript for the first part; I'll switch back to Python for the second part. We're going to be doing lab four, which is about deploying a Kubernetes cluster. So feel free to follow along, or just do it yourself afterwards. OK, so let's get started. I'm just going to start in an empty directory, create a folder for this workshop, and then open up an IDE here so we can start working with this. First, I'm going to create two folders, because I'm going to work with the cluster here first, and then we'll do the app work later. Let me just open up the terminal so I can write some code here. Just as in the last one, I'm going to do pulumi new; I'm going to go into that cluster folder, and as I mentioned, I'm going to use TypeScript, so I'm going to use the aws-typescript template, since I'll be working with some AWS TypeScript resources. I'm just accepting some of the defaults here, and then I'm going to run this in us-west-2, just so it's a little bit quicker for me, working here in Seattle. OK. So we've now got our basic project. We talked through what all these files are last time, so I won't go into too much detail there. But what I will do is start with a clean slate here.
The one thing I'm going to use this time, as well as some of the base libraries like the AWS library we used last time, is a package we have called Pulumi EKS. So I'm going to do npm install @pulumi/eks, and this is going to bring down a package with some additional higher-level components that make it really easy to work with EKS. Now that I've done that, I can say import * as eks, and as before, because we're in a real programming language, when I start typing something like new eks.Cluster you can see we get IntelliSense and IDE support, and if we mistype one of these things, we get squiggles. So all of the IDE features you'd expect from using any kind of library, we get here. In this case, we have the ability to create an EKS cluster, and this cluster involves many different resources in AWS. It involves standing up the cluster itself, some security groups and node groups, an auto scaling group behind the scenes, many different things. But we get to think about it as a very simple abstract concept: we just want an EKS cluster with some high-level configuration. So I'll just call this "cluster". You can see there are a bunch of different things I can set on here, lots of capabilities this component exposes for how I can configure it. But the only one I'm going to set is deployDashboard: false, which says I don't want a dashboard associated with this. Now, the other thing I can do, as well as creating this component, is create my own custom VPC, for example. So I can say const vpc = new awsx.ec2.Vpc. This is a similar kind of component where I have a high-level ability to create a VPC.
Now I can set some things like the number of availability zones I want it to be exposed in, and what have you. Each of these components will create lots of different resources that capture this higher-level capability: a whole VPC configured with good defaults, or a whole cluster configured with defaults. To keep things simple, I'll just do it like this. I'm also going to export something as well. Like we saw last time, we can export something from our stack so it's available to other pieces of code that run outside of this deployment. In particular, the thing I want to export is the kubeconfig that would let me access this cluster. So I can say export const kubeconfig = cluster.kubeconfig. OK, so this is a very simple program. In fact, I can even get rid of all of these extra lines, so it's just seven lines or so. Now I can come over here and say pulumi up. I'm actually going to make this a lot bigger so we can really see what this looks like. Some of the things we see here: we're going to get that preview, as we saw before, of all the different resources that will be created as part of this deployment. You can see it's actually going to create 28 resources for us. So even though we only wrote effectively one line of code here, we're going to get all of these different resources, and like I mentioned, that's a whole lot of different things in AWS that enable us to have a working EKS system. I could go look at the details and understand exactly what all of these look like, but one of the real values of doing this is that I don't have to. So I'm going to go ahead and do that update, and this is going to go out to AWS and actually provision all of the different resources I need. It's going to do this in parallel where it can, and many of these resources are actually independent.
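The program described in this part of the lab, reconstructed as a short sketch. The option names follow the @pulumi/eks and @pulumi/awsx packages as best I know them and are an approximation of what's shown on screen, not a verbatim copy of the lab code:

```typescript
import * as awsx from "@pulumi/awsx";
import * as eks from "@pulumi/eks";

// Optionally build our own VPC with good defaults rather than
// using the account's default VPC.
const vpc = new awsx.ec2.Vpc("vpc", {
    numberOfAvailabilityZones: 2,
});

// Effectively one line of code, ~28 underlying AWS resources:
// the EKS cluster itself, security groups, node groups, an auto
// scaling group, and so on.
const cluster = new eks.Cluster("cluster", {
    vpcId: vpc.id,
    subnetIds: vpc.publicSubnetIds,
    deployDashboard: false, // skip the Kubernetes dashboard
});

// Export the kubeconfig so tooling (and other stacks) can reach
// the cluster after deployment.
export const kubeconfig = cluster.kubeconfig;
```

Running `pulumi up` against a program like this produces the preview of all the child resources before anything is created, exactly as in the demo.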
So we're going to go out and deploy all of the resources we need to have a working EKS setup. That's going to take 10 to 12 minutes, so I'm going to go back and talk about some of the concepts here, and we'll come back and look at this cluster in a second and start looking at what we ended up creating. All right. So let me talk for a second about components. Components are really reusable building blocks for cloud infrastructure. Just like any other API you might use in your language of choice, you frequently don't work with just the OS primitives themselves; you use libraries or packages that have been provided either by the designers of the language itself or by third-party package maintainers, things you find on PyPI or npm or what have you. Most developers are very used to working with abstractions and libraries that make it easier to work in the domain they're targeting than it would be if they had to drop down to the raw operating system primitives that are available. And it's the same idea with infrastructure. Pulumi wants to provide these higher-level components for you, like the VPC and the EKS cluster we talked about, but you as a developer also have the ability to create your own components. There are a couple of interesting things here. These components can end up looking almost identical to a normal resource: just like you can create an aws.ec2.Instance, you can create an awsx.ec2.Vpc. So components and custom resources can feel and look very similar, and you can raise the abstraction level without really changing the way consumers within your application domain work.
When you're using these components, there are a couple of interesting things that are Pulumi-specific. One is that, for folks who have used abstractions over cloud infrastructure before, you may be wondering: what if that abstraction doesn't accomplish exactly what I want? What if there's one little knob somewhere on one of the resources that I want to tweak in a way that isn't supported by that component? This is something we've found is more true with infrastructure than with many other things in application development. So we have tools for this, for example transformations, which let you say: I want to use this component, but I want to make a few edits to its behavior, a bit like aspect-oriented programming. This, we've found, is a nice escape hatch. It means that for any component you use, even if you need something slightly different, you don't have to fork the whole component and copy its code base into yours; you can still use the component reliably and tweak it if you want to. Similarly, maybe you're trying to refactor from raw resources into a component, because you've decided there's a group of things you now want to call a component, or you've taken a component and decided, hey, I actually want to manage those resources myself. How can you refactor your code reliably? This is also interestingly different in the infrastructure world, where we have to make sure not just that the code does the same thing, but that the identities of the different resources involved stay the same. So we have an ability called aliases, which lets you do that as well. So it's very easy to tweak things, override, refactor, and those kinds of things.
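As a rough sketch of what a transformation looks like in TypeScript: the resource type token and the tag being added here are illustrative assumptions, not details from the workshop. The point is only the shape of the escape hatch, i.e. tweaking a component's children without forking it:

```typescript
import * as eks from "@pulumi/eks";

// Use the eks.Cluster component as-is, but adjust resources it
// creates internally, without copying the component's code.
const cluster = new eks.Cluster("cluster", { deployDashboard: false }, {
    transformations: [args => {
        // Illustrative tweak: tag every security group the
        // component creates under the hood.
        if (args.type === "aws:ec2/securityGroup:SecurityGroup") {
            return {
                props: { ...args.props, tags: { ...(args.props.tags ?? {}), Team: "platform" } },
                opts: args.opts,
            };
        }
        return undefined; // leave every other child resource untouched
    }],
});
```

Each transformation runs over every resource created under the component, so one small function can adjust a knob that the component's own options don't expose.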
So all the patterns you expect from software hygiene and software development, we can bring to bear on our infrastructure here. And of course, as we highlighted in the last demo, we can use off-the-shelf libraries like Pulumi AWSX and Pulumi KX; we have a handful of these libraries that we've provided, and there are some out there in the community that third parties have provided. And we've found that many of the teams we talk to about the use of Pulumi are actually building their own. They love seeing that we have the AWSX library, but the library they want to use for their internal deployments actually has some other custom things that are related to their team's or their organization's best practices, and they want to provide those components for use within the organization. That ability to create your own components is, I think, really the thing that makes this particularly exciting. In terms of writing your own components, we have an example just over here, and we'll probably show one live a little bit later, but it's very easy. It's just a class, a component in the language. MyComponent extends ComponentResource and basically takes three things: a name, the arguments, and the options, and makes a super call, which actually constructs the component. Then within its body, it can create any child resources it wants and register them as children of itself. So it's just what you'd expect from a class-based API in any of these programming languages. OK. So that's a little bit about components. I'll pause here for just a second; I suspect my deployment is still not done, so I'll answer any questions folks have now before we move on. All right, no questions, going once, going twice. OK.
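A minimal sketch of that class-based component pattern. The `acme:web:StaticSite` type token and the StaticSite abstraction are hypothetical names chosen for illustration, not the example from the slide:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

interface StaticSiteArgs {
    indexDocument?: string;
}

class StaticSite extends pulumi.ComponentResource {
    public readonly bucket: aws.s3.Bucket;

    // The three things every component takes: a name, the args, and options.
    constructor(name: string, args: StaticSiteArgs, opts?: pulumi.ComponentResourceOptions) {
        // The super call registers the component itself under a type token.
        super("acme:web:StaticSite", name, {}, opts);

        // Child resources are parented to the component, so they show up
        // nested under it in the Pulumi console's resource view.
        this.bucket = new aws.s3.Bucket(`${name}-bucket`, {
            website: { indexDocument: args.indexDocument ?? "index.html" },
        }, { parent: this });

        this.registerOutputs({ bucket: this.bucket });
    }
}

// Consumers use it just like any built-in resource:
const site = new StaticSite("docs", { indexDocument: "index.html" });
```

From the consumer's point of view this looks identical to creating a raw resource, which is exactly the point made above about raising the abstraction level.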
So, a question: what's Pulumi's story for configuration, and is the TypeScript EC2 provisioners example still the best way to set up post-provisioning? I think the question here is about in-guest provisioning for VMs in particular: if I have a VM-based setup, how do I go and run some configuration scripts, whether it's bash scripts, or Chef or Puppet, or what have you. In the first demo here we showed using user data to bootstrap, and that's one thing you can do today. The example the person who asked the question linked to is an approach where you can use a feature we call dynamic providers to build the ability to do that provisioning, SSH into the instance, and all of that yourself. That's a very flexible mechanism for running any custom code you want in the life cycle of the deployment. But it's also something we're looking at over the next few months; it's a scenario we plan on investing more in, to provide some first-class capabilities around injecting code into that life cycle, especially for the use case of kicking off provisioning scripts related to EC2 instances. Now, one thing I'd say is that for some of the modern cloud workloads, like the serverless ones we're going to be spending some time with today, that kind of post-provisioning tends not to be as important, because the cloud resources themselves manage that provisioning life cycle, whether it's booting up the container or deploying the package for the serverless function.
So we find this is a really important use case for EC2 and instance provisioning, but as folks move into some of the other areas here, we've found the provisioning piece is not as important. At the same time, the broader idea of being able to inject code into the life cycle is a general capability we're very excited about investing in over the coming months. It's a great question. OK, so let's keep going. Let's jump back to our code and see where we're at. OK, so it's still creating. Just to not slow ourselves down, I'm going to jump into another cluster I have, and we'll use that one to show off some aspects of what we're doing here. Let me just open up this window over here. OK, so this one is a cluster that's already been stood up. You can see it's similar to what I just demoed, except I created my own VPC and passed some of that VPC's outputs into the cluster. So a little bit more configuration, but effectively the same thing. And I'll show a couple of things here. The first is that when I type pulumi stack, as I showed last time, I get this link into the Pulumi console, where I can get that view of what this looks like and what all the details are. I can see, for example, here's my kubeconfig, here are the settings I provided. But what's really interesting is that I can see all the resources under management here. I can see there are quite a few resources in AWS, and even a few in Kubernetes itself, that are being managed here. For example, if I look for the config map, this is actually a Kubernetes resource, not an AWS resource. So we're actually managing both Kubernetes and AWS resources within the same deployment. I can also see a visualization, to get a better bird's-eye view of this.
In this case, if I zoom out a bit, there are two key things: there's a cluster, which is that EKS cluster I created, and there's a VPC, which is the VPC I created. I can also come over here and see some of the shape of this. The VPC itself is a component, but it's also built up out of a bunch of components. This allows us to nest how we reuse infrastructure components: we only had to write this way of building subnets once in our code, and then we applied it four times to build the four different subnets here. So we're seeing reuse at many different levels, and those software engineering concepts being applied. This gives us a view of all the resources currently deployed as part of the stack, and as we saw, we could even dive into AWS to go look at some of these resources and see what they do. What we're going to do now is say pulumi stack output kubeconfig, and you can see that gives us a kubeconfig for this cluster. So I'm going to take that and redirect it into a kubeconfig.json file in this directory. Now I can use that to query what's actually in my cluster: I export the KUBECONFIG environment variable to point at that file, and now if I run kubectl cluster-info, I can actually see what Kubernetes thinks is running inside this cluster. So I'm dropping back into some of my operational tools here: I've done the deployment with Pulumi, and I'm using kubectl to go look at what's running inside this cluster. Indeed, we can see we're connected to that cluster running in EKS in the Oregon region. And then we have some things running, and we can also go look at what's going on inside.
So we can say kubectl get nodes to see that we have two EC2 nodes running inside this cluster, and kubectl get pods -A to get all the pods. We have a bunch of system components running inside this cluster: the nodes themselves, CoreDNS, and kube-proxy. A lot of these resources were already running and bootstrapped as part of this. It turns out this cluster already had an app deployed into it, so you also see that running here; that wouldn't be in the new cluster I just spun up, but we will run an app inside this cluster very soon. All right. So that's it for standing up a cluster and what we get in terms of how the cluster works. The key thing is that, because we have these components, it's very simple to stand these up; we can make the process of standing up a cluster effectively one line of code. But as we need more complexity, as we need to tweak more of the settings, we can do that both using the parameters of the cluster itself and by dropping into the raw AWS concepts, tweaking and working with those directly. All right. OK, so some people have already started answering the next question. For the next topic, I wanted to go into multi-stack architectures, and then we'll see how that relates to the demos we're doing here in a second. But I'm curious, if folks want to jump in: I think a few folks joined since our last poll, so if you haven't yet, go to pollev.com/lukehoban275 and you can vote on this poll. I'm curious how many independent cloud infrastructure deployments you, or your organization, have. No one has yet said thousands; that's probably good. All right, I'll give folks just a couple of minutes and see what kind of answers we get. All right, great.
So, a decent mix over the different scales involved; in fact now, wow, a perfect mix. That's good. And somebody does have thousands. Great. So it's an interesting thing that the number of different independent cloud infrastructure deployments can vary a lot between organizations, and even for the same total amount of complexity, there are lots of different ways you can break that complexity up, between the monolithic deployments you might do and a microservices kind of deployment. We aim to have good support for any of the different structures you may want to use there. But let me talk a little bit about how some of those work. The key thing we think about, when we talk to folks who are doing Pulumi deployments, is that it makes sense to break up infrastructure into multiple different stacks where the infrastructure really versions independently. Infrastructure can version independently for a few reasons. One is just because it fundamentally changes at a very different pace than another set of infrastructure. For example, the core security primitives for your AWS accounts probably don't change very often, but your application infrastructure might change very often. In fact, if you're doing serverless, you might be changing infrastructure every time you want to deploy a new version of your function, so you may actually be churning that multiple times a day, or even multiple times an hour. There are very different rates of iteration there, and you may want to separate those things just because they're deploying and versioning at different rates. Another reason why things can version differently is just because they're owned by different teams.
If you have two independently operating parts of your organization, they may want to independently own, version, and drive the life cycle of their infrastructure. The application development team may want to deploy infrastructure related to the application with a life cycle that matches the application's deployments, whereas the core platform or infrastructure team may want to deploy things with a life cycle that matches the cadence of delivery for the platform itself. There are lots of good reasons to break things up and draw boundaries where it makes sense, based on organizational or velocity reasons, within your deployments. But I'd say there are a couple of best practices we see. The first is generally starting with one stack, or few stacks. If you don't have a good reason to have multiple, starting with fewer is always going to be a bit easier, for all the same reasons that having fewer things is generally easier: it means you don't have to define the boundaries and interfaces between layers as cleanly and crisply until you've discovered what the right boundaries really are, and what the APIs on those boundaries should be. But once you do discover those boundaries, you can start breaking things up and building the contracts: what are the outputs from this layer, what are the inputs to this layer, and how can I minimally couple those two things together to achieve what I want to achieve? The next thing is really the idea of stack references. Pulumi has first-class support for this idea of breaking up your infrastructure into multiple stacks, and that's this notion of stack references. From a higher-level stack, like an application deployment stack, you can refer to the outputs of a lower-level stack.
We've been using exports and outputs here in all the demos that we've done, both last time and so far today, and those outputs were useful for being able to script against the stacks. So we were able to do "pulumi stack output kubeconfig" or "pulumi stack output url" and use that to build scripts that work with things. But the other reason they're really useful is that they give us access to the outputs for other stacks that want to build on top of this. By exporting the kubeconfig from our Kubernetes cluster stack, we're now able to build new stacks that use that kubeconfig and reference the underlying stack. And if there are changes to that stack, those changes will get picked up in the higher-level application stacks as well. The other key thing here is that while Pulumi has really great support for this when both layers are Pulumi, there are actually also ways to do this when one or both layers is in another system. When another layer is in CloudFormation or Azure Resource Manager or Terraform, you can still refer to those resources in those external systems. Maybe you've already deployed your VPC and networking layer using CloudFormation and you just want to build your application layer using Pulumi. Well, you can just reference the resources that are exported from a CloudFormation stack and use those. So you can still draw a line in your infrastructure and bring Pulumi in just for the piece where it might make the most sense, without having to go and rewrite a bunch of existing infrastructure and do all that work up front. It's very easy to coexist with some of those tools as well and draw boundaries between these components, even when it crosses different infrastructure as code tools.
One of the general guides I have around this is that a lot of the themes here are very similar to the monoliths versus microservices debates, or lines of thinking, in how people think about service architectures. Just like monoliths can be a lot simpler, so can monolithic stacks and single deployments. But just like microservices can make sense once you have multiple independently operating teams and you want to define clear contracts between them, similarly with Pulumi, with infrastructure, it can make a lot of sense to break things up as your needs grow. And finally, we have this diagram at the bottom. I won't go through it in too much detail, but just to give a sense: in a typical architecture we might have one organization that's working with infrastructure, and that organization may have multiple projects. For instance, it might have our core identity layer, it might have a core cluster kind of layer, and then it might have various application layers. So we have each of these three layers as independent projects, which are different code bases that evolve at different paces. But then for each of those, we might have different environments, and these would each be different stacks. We'd have a dev environment for our identity stack, a dev environment for our cluster stack, and a dev environment for our app stack, and we might have a prod environment as well. So Pulumi really supports this idea of both projects and stacks, and a matrix between them. In this case, I end up having six different deployments, which might all be driven through my CI/CD process or whatever. But I can use those stack references. You'll see these lines between the layers here are actually using stack references to refer to other pieces of code. OK.
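To make that projects-and-stacks matrix concrete, here is a small sketch that enumerates the six fully qualified stack names Pulumi uses, in org/project/stack form. The org name "acme" and these particular project names are made up for illustration, not taken from the demo:

```python
# Sketch: enumerate the six deployments in the projects x environments matrix.
# The org and project names here are hypothetical.
org = "acme"
projects = ["identity", "cluster", "app"]   # three independent code bases
environments = ["dev", "prod"]              # one stack per project per environment

stack_names = [
    f"{org}/{project}/{env}"
    for project in projects
    for env in environments
]

for name in stack_names:
    print(name)
```

Each of these six names is something a CI/CD pipeline could deploy independently, and something a higher-level stack could point at with a stack reference.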
So before I dive into the demo of this, let me take one more question; if folks have more questions as well, feel free to drop those in right now and I'll address them. But let me answer the one question that we've got here. The question was: does Pulumi support policies on stack output changes? We found stack references by themselves are not good enough, since they do not enforce deployment order across multiple stacks; sometimes we only have backward-compatible changes, but sometimes a change requires multiple stacks to be deployed in a specific order. Yeah, this is a really great question. And it's true today: with stack references, you do still have to coordinate the deployments such that you have compatibility between layers once you've built a stack reference. We have several things we do want to do here. I think there are a lot of interesting things we can do as folks build more and more complex multi-stack architectures with Pulumi. We can imagine supporting orchestration of deployments across stacks, so that if you update something in a lower-level stack which is going to cause some resource to get replaced, we can cause that replace to cascade through to the higher-level stacks and then ultimately back into the lower-level stack. We see a lot of opportunity to do even richer things around orchestration here with Pulumi, because we do have such a good understanding of all the different layers here. But it is true today that if you need to make breaking changes in a lower-level stack, you will have to coordinate that, just like you would in a microservices deployment. If you want to make breaking changes in any service that you're running in your microservices world, you do have to do those in staged ways, where you introduce the new capability in parallel and adopt it in other parts of your service infrastructure.
And then only when the other services running within your service infrastructure have adapted to that change do you take away the previous capability. Pulumi provides lots of tools for doing that, but it definitely is more work, and this is one of the reasons why multi-stack architectures do require a bit more work and build-out effort: they require you to think about the contracts you've created between different layers. And like I said, I think it's an area where Pulumi has a lot of opportunity to keep going further and enable even richer things in this life cycle. Great, I think that's all the questions I see right now, so I'll keep moving. OK, I'll come back to that question in just a second. So what we're going to do now is create another stack; let's just see. OK, perfect. Our previous stack did deploy, and I'll go and show this as well, just to show you the new stack. This one has slightly fewer resources because it doesn't have that VPC. It took 14 minutes ultimately to deploy, and we can see all the different resources involved here; you'll see it doesn't have that VPC component, so it's a little bit simpler, but there are still a lot of capabilities inside here. OK, so now we have the cluster. Actually, first let me do some of the same things we did at the end of last time, just so we're able to use this stack. So we're going to say "pulumi stack output kubeconfig", put it down here, and then we'll export it. Let's just make sure I've got that set up right. Hmm, not sure why that isn't working correctly. Ah, because I typed it wrong. Yeah, that makes sense. Let me just come back to this and do it again: "pulumi stack output kubeconfig". OK, now let me make sure I can connect to this thing. Perfect. OK, sorry about that.
But yeah, now we've got it working. We're connected to our new cluster over here, and we'll use this for the next part. So let me just bump out of here and make our app. What I'm now going to do is go through another part of the workshop, so let me bring that up. OK, we're going to go through deploying containers to a Kubernetes cluster. Again, you can do this in any Pulumi-supported language: Python or TypeScript or Go or .NET. We've got labs, AWS, Python, lab 05, and I'm going to walk through this and show you how to deploy some Kubernetes resources into our cluster. As before, I'm going to say "pulumi new", and this time I'm going to do Python. I'm going to go ahead and click that, and, OK, just to make this a little easier, I'm actually going to create a new window that's just opened here; that'll let VS Code give me a little bit more room. I'm going to run the same three commands that I suggested: create the virtual environment, activate it, and install those dependencies. There we go. And let me just pick the local virtual environment, so we get all the tooling and things from Visual Studio Code here. OK, so we've set up our basic Python project to deploy things, and just like we can deploy resources to AWS and Azure and GCP and our other cloud providers, we can do the same for deploying into Kubernetes: we can connect to any cluster we want and deploy resources into it. Let me do a couple of things here. I'm going to say "from pulumi import ..."; I need a couple of things, so I'm going to import export, StackReference, Output, and ResourceOptions. Those are a few things that I'm going to use throughout. Then we also have the Pulumi Kubernetes library, so I can import Provider. Finally, I'm just going to import pulumi itself so I can use things there.
One of the things you'll notice is that I can do all the different things, all the different APIs I might expect from within Kubernetes. If I'm used to kubernetes.core.v1, for example, every API that's available as part of Kubernetes is available within Pulumi inside my program. So pods or services or config maps or anything I might want to use is available to me to work with. In the same way that we project the entire surface area of AWS, we project the entire surface area of the Kubernetes API, so I can work with those resources here. The first thing I want to do, though: I created that kubeconfig file in my local file system, but I don't want to have to dump it out to the file system and figure out how to get it back in here. What I want to do instead is use that stack reference capability we just talked about in the previous slide. To do that, I'm going to do a couple of things. Actually, I'm going to make this a little simpler than what the folks in the workshop walk through; they can do the full thing there, but I'm going to do something very simple, which is to say infra = StackReference(...). A stack reference lets me point at some stack that I have, and in my case I want to point at the stack that had all those other resources. So I can come over here, and let me go back into the cluster and just see what its name was. You see its name was lukehoban, slash the cluster project name, slash dev: the organization, the project name, and then the stack. So if I grab all that and just put it in here, now I can say kubeconfig = infra-dot-something, and now I can get an output from that stack.
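A stack reference name always has that same three-part shape, org/project/stack. As a quick plain-Python sketch of that naming convention (the example name is hypothetical; no Pulumi SDK is needed here), splitting such a name back into its parts looks like:

```python
def parse_stack_name(fully_qualified: str):
    """Split an 'org/project/stack' reference name into its three parts."""
    org, project, stack = fully_qualified.split("/")
    return org, project, stack

# Hypothetical example; in the demo this would be the cluster stack's real name.
org, project, stack = parse_stack_name("lukehoban/cluster/dev")
```

In the actual program, that fully qualified string is what you would pass to pulumi.StackReference.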
Here I know that the output was called kubeconfig. Now, it turns out that the kubeconfig is actually a JSON object, and what I need is a string, so I'm just going to use an apply, which we talked about last time, to take that config and json.dumps it. OK, so now I've got my kubeconfig as a string, and I can set up the thing that lets me talk to this cluster: the Kubernetes provider. Like I said, it's k8s_provider = Provider(...); I'll just give it a name, and I can set the kubeconfig to that kubeconfig. What this has done is configure my Kubernetes provider to talk not to whatever is ambiently set up in my environment, but to be configured dynamically, based on inputs that I captured programmatically. And this is a key thing: we can use this with anything, with AWS, with Azure. If we want to, from a single program, deploy into multiple different AWS accounts, for example, or multiple AWS regions, we can use this same ability, a provider instance, to construct programmatically a way to talk to a particular account or use a particular set of credentials. But here we're using it to figure out, programmatically, based on the output of this stack, which cluster we want to talk to. OK, so now let's create at least one resource, just so we can finally deploy something into our cluster. Let me just write this all out: I'll say, from pulumi_kubernetes.apps.v1 import Deployment, and from core.v1 I want to import Service and Namespace. Folks, feel free, in whatever language you're using, to use whatever style you want here; you can type the full paths out every time or you can do these named imports. Just to keep the code a little simpler, I'll do it like this. So I'll just create a namespace; we'll call it the app namespace.
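The transformation applied to the kubeconfig output is just "JSON object in, string out". Here is a sketch of that helper in plain Python, testable outside Pulumi; in the real program you would pass it to the output's apply, something along the lines of infra.get_output("kubeconfig").apply(to_kubeconfig_string):

```python
import json

def to_kubeconfig_string(config_obj) -> str:
    """Serialize the kubeconfig object exported by the cluster stack
    into the string form the Kubernetes Provider expects."""
    if isinstance(config_obj, str):
        return config_obj  # already a string, pass it through unchanged
    return json.dumps(config_obj)

# Tiny stand-in for a real kubeconfig object:
sample = {"apiVersion": "v1", "kind": "Config", "clusters": []}
kubeconfig_str = to_kubeconfig_string(sample)
```

The apply is what bridges the two worlds: the raw JSON object lives inside a Pulumi Output, and the function runs once the value is known.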
Then we're going to set a couple of standard Kubernetes properties on this. First we're going to set the metadata, with the name, and then we're going to set opts. Opts is something we touched on briefly in the last session; it's the set of general options we want to configure a resource with. In this case, we're going to set the provider option, and we're going to use that provider that we created. What that means is: don't deploy this with the ambiently available provider; instead, use this specific provider to deploy this resource. OK, so let's go ahead and deploy this, just to make sure things are working correctly, and what we should see is that this actually deploys a... oops, I didn't import json; you see we got "json is not defined". Let me import json. Not sure why my linter didn't catch that. Ah, OK, we picked up too recent a version of something; I'm not sure where that problem is actually coming from. Let me just try one thing real quick. Actually, just to make this a little simpler, I'm going to go back to a variant of this that we have here, which has this filled out a bit more and set up the way I'd expect. Let me open this up... that's not the one. Let me start with this one, which I set up beforehand and which has the environment set up correctly. OK, so I'm going to create a new stack here, and then we'll just take the code that we were working with there and bring it over. OK, let's try this again, and we'll just do "pulumi up". OK, great. We see a couple of things here. We see that we're creating that Kubernetes provider; this is the provider which knows how to connect to this cluster. And we've got the namespace. If we go look at this as well, we can see that we ended up using that kubeconfig that we specified.
So it got that from the other stack's output, and it's using it here. And then the namespace here has been created, it has the name that I specified, and it's been created with the provider that we asked for. So if I say yes, this should actually go and deploy that into our cluster. I could look at what namespaces are deployed in our cluster, but I'll skip that for now just so we can keep moving. OK, I'll go quickly through this next part; for folks who know Kubernetes well, this should look very familiar, and for folks who haven't worked with Kubernetes as much, definitely feel free to go and spend time looking through all of it. But we can start doing a few more things now. We can create a deployment as well. In Kubernetes, deployments are the key way to deploy a set of pods out into the cluster, and in this case I can specify all the exact same things I'd expect from the Kubernetes API: I can provide the image, I can provide the labels, that sort of thing. You'll see a couple of nice Pulumi-specific things here, though. Instead of doing this in raw YAML, we can do things like refer to the namespace's name. We've created this namespace object, so instead of embedding a magic string all over the place and making sure that string is the same everywhere, we can say: hey, this namespace I created, just grab whatever name we used for it, refer to it by object reference, and embed that in here. So if we want to change this namespace later, everything else will get fixed up automatically. Similarly, for very simple things like some labels here that we're going to replicate a bunch of times, we can just create a variable that holds those labels and then use it in all the places, so we can reduce some simple boilerplate as well. After the deployment, I'm going to create a service as well; this will give me an exposed endpoint.
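To show that boilerplate-reduction point concretely (shared labels defined once, namespace referenced once), here is the shape of the deployment arguments sketched as a plain dict. In the real program these same fields are passed to pulumi_kubernetes.apps.v1.Deployment, and the namespace value comes from the Namespace object's name rather than a literal; the image name and label values below are made up for illustration:

```python
# Sketch of Deployment arguments with a shared labels variable.
app_labels = {"app": "my-app"}      # hypothetical label set, reused in three places
namespace_name = "app-ns"           # in the real program: the Namespace object's name

def make_deployment_args(image: str, replicas: int) -> dict:
    """Build the Deployment spec, reusing app_labels everywhere Kubernetes
    requires the selector and pod template labels to match."""
    return {
        "metadata": {"namespace": namespace_name, "labels": app_labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": app_labels},
            "template": {
                "metadata": {"labels": app_labels},
                "spec": {"containers": [{"name": "app", "image": image}]},
            },
        },
    }

args = make_deployment_args("example/bootcamp:v1", 1)
```

Because the labels live in one variable, the selector and the pod template can never drift apart, which is exactly the class of YAML copy-paste bug this style avoids.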
So we're going to map port 8080 in that container image to port 80 on a load balancer; again, just standard Kubernetes. And then the very last thing: as with all these stacks, we want to export an endpoint. In this case, the service has an HTTP endpoint that it exposes, so we'll just go ahead and get that. Again, we can read, off of the service, its status, the load balancer, an ingress, and the host name; all of these are things that Kubernetes provides as outputs on its resources. We can grab that host name and port as the outputs and then construct the URL to access them. Let me run "pulumi up" again; I'll save my file and run it. OK, here we see three unchanged: those existing resources, the namespace and whatnot, don't have to change. But our service and deployment are now going to get created using all those settings that we provided. One of the nice things with Kubernetes that we didn't see with AWS is that as these things are deploying, we get rich status updates on what's going on. So for that deployment, you saw that it was actually waiting for it to roll out, and the service was saying: hey, I'm waiting for there to be pods to redirect traffic to. Pulumi actually has baked into it not just the ability to deploy resources to Kubernetes, but also the ability to understand when those resources are ready. It will wait for the deployment to complete, and only indicate that it has completed successfully when those resources have met a set of readiness criteria defined by Kubernetes. This is a really valuable thing, because it means we can do orchestration of different pieces of our infrastructure, even at the Kubernetes layer, using a notion of doneness: when is this available?
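The exported endpoint is just host plus port read off the service's load-balancer status. A sketch of that last step as a plain function; in the real program the hostname and port are Outputs read from service.status, combined with an apply (or Output.all), and the hostname below is invented for illustration:

```python
def make_endpoint_url(hostname: str, port: int) -> str:
    """Construct the HTTP URL exported from the stack, given the load
    balancer hostname and the service port from the service's status."""
    return f"http://{hostname}:{port}"

# Hypothetical ELB hostname, for illustration only.
url = make_endpoint_url("abc123.elb.amazonaws.com", 80)
```

Exporting the result of this function is what makes "pulumi stack output url" work, and what lets a higher-level stack consume the endpoint through a stack reference.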
So we can then go and script some other piece of infrastructure that depends on it. It's a really handy thing to have. Now, unfortunately, there's one thing: when we're running these things in AWS, it does take a while. It says that the load balancer is fully done, but sometimes the load balancer in AWS itself is not done, so it's going to take a second for this endpoint to actually be available. We'll wait for that to be ready and come back and check on it in a second. Before I do, let me just see if there are any questions on what we've covered so far; if anyone has more, feel free to drop them in. I see there's one more here; I'll take a look at it and answer it right now. So: isn't there a better way to get the kubeconfig into the file system? You have a lot of different options for how to do this. Your program itself could, instead of using a stack export, write the kubeconfig to the file system. It could even emit a bash script that you can run locally to set the kubeconfig and run kubectl. But as we saw in the demo, I'd say an even better thing to do is to use stack references, so you don't have to have it on the file system at all if you don't want to; all of your Pulumi deployments can work through real objects and real references, without managing it through the file system or through any environment variables. Great. All right, any other questions before we keep moving? No. OK. Yeah, so we see here that this is running now: a very simple little image that says "hello boot camp". So we've now got a load-balanced container running inside Kubernetes. And I see a question about: what about OpenShift?
Yeah, so OpenShift, I think, has the Kubernetes core and then a bunch of pieces on top of that. Those pieces tend to be described by Kubernetes CRDs, and Pulumi lets you work with CRDs. You can use, I think it's called CustomResource inside the Pulumi Kubernetes API. Now, to get really strongly typed support for those, in the same way that you get the rich experience elsewhere: if I come over here and look at my service, you noticed I had metadata and spec, and I get strong typing over some of that stuff. For the CRDs, you're not going to get that by default, because Pulumi doesn't know what all the different resources are. But you can build those layers yourself: you can easily build your own wrapper around CustomResource that has a strongly typed API, either for some of the CRDs in OpenShift or for whatever other platform-as-a-service offering you might be using or building on top of. So you can definitely get both there: you get the raw access to work with those CRDs and deploy, so you can deploy on OpenShift, but if you want that really rich experience, you'll want to build a little layer on top that projects a nice API. OK. Yeah, so this was deployed. Let's go ahead and try to make a change and see what that looks like. We're going to make two changes, actually. The first is this one: we deployed one replica, so we're going to change it to three. And the second is that we're going to change the image that we use, so let me just grab a new image to deploy to this cluster; I'll go ahead and do that right there. OK, so now if we do "pulumi up", we again see that Pulumi gives us the ability to see what's going to change.
This deployment is going to change; the spec has changed, and we can look at the details of that. You can see the two changes we expect: the replicas changing from 1 to 3, and the image changing from this bootcamp v1 to this bootcamp v2. So go ahead and say yes, and again we'll see some status for what happens as we're making this change. I see it's waiting for the replica set to be marked as ready; in fact, if we come over here and start refreshing, it looks like we haven't yet rolled out the new ones. OK, the deployment is done. I may actually try to curl that endpoint; maybe we're getting some browser caching here. There we go. Yeah, so we get v2 now. Let me see if our browser is doing it... still getting caching there. So we can see that now we're hitting that second version that's running. We did update our deployment; we could even run a for loop here. OK, we'll see that they're all running v2 now, and in fact we're seeing them running on different pods. Because we scaled this out to three pods, we should be seeing three; we're only seeing two here for some reason... oh no, there's the third. So the internal load balancing is spreading load between the three different instances of this pod that are running. OK, so that was doing updates to our Pulumi infrastructure. We could do that to update to new versions of our app; we could do that to change things in our configuration, to add config maps, to add environment variables, anything we wanted to do there. OK, I see one other question; I'll go ahead and answer that. The question was: how are errors meant to be handled with Pulumi during incremental changes to a given stack?
Yeah, so it's a reasonably expected thing that as you're making changes to infrastructure, you'll try to make a change that isn't possible, either because you made a mistake or because something went wrong in the cloud provider. Those are expected things; failure is part of the development process here. The deployment will fail as soon as it sees any problem: as soon as the cloud provider says it's not able to accomplish what you asked for, Pulumi will fail the deployment. It will recognize what has been changed so far and record that, so that when you make a change to your specification and deploy again, it will actually start from where you left off; all the changes that you had already made will still be there. If you do want to roll back yourself, you can change your code back to the state it was in before and deploy again, and that will drive the partially deployed state you're in back to the state you were in beforehand. But generally, if you have a failure, we're going to stop the deployment there, report the error back to you that we got from the cloud provider, and you'll be able to make the changes you need and go from there. The next question is actually about integration testing; give me one second, because in the next section I'm going to be talking about testing, so I'll address that as part of looking at it more broadly. OK, great. So there we go: we built some infrastructure and deployed it. Let's jump back into the slides and talk a little bit about testing. OK, good. Some folks already started answering this question. Before we dive into Pulumi testing, I'm curious how many folks are testing infrastructure as code today.
I'm actually super impressed that so many people are saying yes. All right, let's give it a second. Yeah, so testing with infrastructure is something... I think if my question had been "who wants to be testing infrastructure?", everyone would have said yes. In general, when we see something as complex and as important as our cloud infrastructure, we immediately think: hey, how can I inject quality and assurance into my infrastructure deployments, and front-load any risk associated with those things? So we say, hey, testing this is really valuable. Today, though, it's hard. It's hard to test our infrastructure; it doesn't feel quite the same as our application software in terms of the tools available to us for testing. So let's look at how Pulumi can help with that. When we think about testing in Pulumi, and really with infrastructure as code generally, we think of three categories of testing. Really, there's a continuum, and there are lots of different things folks can do, but I like to think about it as three different areas worth considering as you plan out how to test the infrastructure that you've built. The first is unit testing. Unit testing, in all kinds of application development, is really valuable, in large part because it's very focused, very fast, very targeted. So we can write a lot of tests, we can do TDD kinds of things with our testing approach here, we can validate very fine-grained criteria, and we can write a lot of tests that run very quickly to gain that validation. We can do that in the inner loop of our development: just as we're typing our code, we can be testing in the background.
So unit testing can be really valuable. The question is: how do I unit test, and what kinds of unit tests make sense for my infrastructure? The key thing that enables us to make this kind of testing fast is that, of course, we can test the logic of our Pulumi applications, but we can't test the cloud providers themselves, because it's going to take minutes, eight minutes or longer, to deploy a whole set of infrastructure. So when we think about unit tests, we really think about mocking out the infrastructure we're actually deploying, and just validating that the imperative code we've written is correct and is doing what we want. This is particularly important when we think about components. We talked in the previous section about building these reusable components, like EKS and like the VPC component. Having tests that validate that all the logic about how we're wiring up pieces of a component is being done correctly, so we can gain confidence in the correctness of the component, is really valuable. And we can actually do that using these unit tests without having to deploy a whole set of infrastructure every time we want to run our validation. So now we really can have these tests running in a background worker and reporting any time we fail, and we can use all of our standard test practices to get this really fast inner loop of unit testing, just like in application development. But that, of course, isn't going to test everything, right? That's just going to test the logic of our Pulumi code. It's not going to test whether that logic was the correct way of configuring those resources in AWS or Kubernetes. So we do need to actually validate things about the resources we're really creating, and that's where the next two categories come in.
The first of those is property tests, which really run resource-level assertions while the infrastructure is being provisioned. You can think of these as policies: things I want to be true about my infrastructure that I don't want violated. I want to make sure that certain ports are not opened up on any of my load balancers, that no instance is directly exposed to the internet, that the only things exposed to the internet are my load balancers or my DNS records or whatever. I can set all these things up as policies that I enforce, and then run them as resource-level assertions during the deployment process. Any time I deploy my resources, I'm going to run a bunch of tests right before the deployment to catch whether it is violating those policies. So this is a nice way to ensure confidence and to enforce compliance criteria. Pulumi has this thing called Policy as Code that lets us write, in our programming languages, in Python or JavaScript, assertions about the correctness of our resources, and then run those as part of any deployment that we happen to do. And then the third category is integration tests. These are the ability to go and stand up a real set of infrastructure based on the code that you wrote, validate that the infrastructure behaves the way you want it to, like maybe hitting an endpoint on the infrastructure, or running your application-level unit tests against it, and then, when we're done running those tests, tear it all down. This kind of ephemeral infrastructure testing can be really powerful, because now we can really validate the full correctness of the infrastructure. Of course, it's a lot more expensive, because I have to stand up real infrastructure.
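As a sketch of what one of those resource-level assertions checks, here is the predicate logic for a "no SSH open to the world" style policy in plain Python. This is only the check itself, with made-up dict field names mimicking a security group's inputs; it is not the actual Pulumi CrossGuard policy API, which wraps predicates like this in policy pack declarations:

```python
def violates_open_ssh(security_group: dict) -> bool:
    """Return True if any ingress rule opens port 22 to 0.0.0.0/0.
    The dict shape here loosely mimics a security group resource's inputs."""
    for rule in security_group.get("ingress", []):
        opens_ssh = rule.get("from_port", 0) <= 22 <= rule.get("to_port", 0)
        world_open = "0.0.0.0/0" in rule.get("cidr_blocks", [])
        if opens_ssh and world_open:
            return True
    return False

# Two hypothetical security groups to exercise the check:
bad = {"ingress": [{"from_port": 22, "to_port": 22, "cidr_blocks": ["0.0.0.0/0"]}]}
good = {"ingress": [{"from_port": 443, "to_port": 443, "cidr_blocks": ["0.0.0.0/0"]}]}
```

Run during deployment, a predicate like this fails the update before the offending resource is ever created, which is the whole point of the property-testing category.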
It's going to take longer — it may take tens of minutes to run a test. But there are a few things that can be really important here. One: if I have lots of different tests to run, I can often run them in parallel. My long pole may still be tens of minutes, but I may only need to pay that long pole once, because I can parallelize everything else I do in my integration testing. The second: we talked about the importance of components, and if I'm building infrastructure components, I can integration-test those components as individual units without having to integration-test my entire application all at once. For example, for that EKS cluster component we built, we actually have a library of 40 or 50 integration tests that we run on every commit that changes that code base. It validates that, with a whole bunch of different configurations of that EKS cluster, we can stand up a cluster, the behavior of that cluster is what we expect, and we can tear it down. Running those tests is actually reasonably cheap. It does take 30 minutes or so to run that battery of tests on each commit, but the total cost — because the infrastructure is only stood up for a matter of minutes — is only cents, because we don't have any long-lived infrastructure; every one of these deployments is an ephemeral environment for that component. This lets us gain confidence that the component behaves as we expect. Then we don't have to test it as heavily as part of our full application — we can trust that it behaves as expected, and focus more on testing the application behavior we build on top of it. OK. And this example down here is a very simple example of that unit test category. 
It shows that I can write what is effectively a standard Python unit test. I can put an annotation — `pulumi.runtime.test` — on a test function, and inside that test I can grab some things, like my server's URN and tags, and run assertions against them: I can assert that the tags are not None, and that "Name" is one of the tags. So this is logic I might want to inject and run in tests, and it will validate that if I ever create an instance in my code, it must have a Name tag — and I get that feedback immediately as I'm going.

All right. Before I dive into the last demo section, I'm curious whether folks have any other questions on testing or components, or anything generally. I'll give it 30 seconds in case anyone throws something up. All right, great. Well, let's keep going then. For this one we don't have a workshop written up, but folks can feel free to follow along with what I'm going to show on screen. I'm going to show a couple of things to take this infrastructure and test it. So let me bring up the code I want to use here. OK. This is all in a single main.py file today, but to really test it, I want to split this up a little and organize my code. Since we're using a real programming language here, we're of course capable of reorganizing our code in whatever way we want. So let me put this in an app.py file — I already have an app.py file — and take all of this code I have here, almost all of it, and move it into app.py. 
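Before the refactoring, here is a rough sketch of the tag-checking test just described. The assertion logic is pulled out as a plain function so it runs without the Pulumi SDK (in the real test it would run under the `@pulumi.runtime.test` decorator against a mocked resource; the URN string and tag values here are my own illustrative placeholders):

```python
# Plain-Python sketch of the assertion logic from the tag-checking unit test.
# In a real Pulumi test, urn and tags would come from the mocked resource's
# outputs; here they are passed in directly so the logic runs anywhere.
def check_server_tags(urn, tags):
    assert tags is not None, f"server {urn} must have tags"
    assert "Name" in tags, f"server {urn} must have a Name tag"

# Passes silently: this instance carries a Name tag.
check_server_tags("urn:pulumi:dev::demo::aws:ec2/instance:Instance::server",
                  {"Name": "web-server"})
```

Back to splitting up the demo code.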
This is the same code we had before; I'm just putting it in app.py. Then here, in main.py, instead of all of this, I'll leave the imports — even though I won't use them all — and just import the app module. OK. So I haven't really made any changes; I've just moved this code around. If I go ahead and try to deploy this, we should see — if I've done this correctly — ah, yes, I need to reference the service through the app module now, since these services are defined within the app. Go ahead and do that. OK — five resources, unchanged. So I reorganized my code, but it still does the same thing; I just put it in a separate file.

So now, instead of running these entry points, I want to run some tests that also use what's in this app file, but just test its behavior without actually provisioning any infrastructure. I'll take this test.py file and write this part from scratch. The first thing is to import `unittest` and `pulumi` — `unittest` is the standard Python unit testing framework. The way the API works is that I create a class, `MyMocks`, which is my way of mocking Pulumi's resource providers. I create a class that inherits from `pulumi.runtime.Mocks`, and there are two APIs I have to implement. The first is `new_resource`, the mock that gets called whenever I try to create a resource. So instead of going out to AWS or Kubernetes to actually create the resource, we're going to create it with a fake identity: its ID is going to be whatever the name of the resource was, plus an ID suffix, and the outputs are going to be just the same as the inputs. And you can see here the docstring says this function should return the physical identifier and the outputs for the resource being constructed. 
So that's what we're doing here: the physical identifier and the outputs. As we need to mock more and more functionality, we can add special cases in here that handle other kinds of resources. The other method we need is `call`. We won't actually be using it this time, but if I made some invoke — to look something up, for example — that would hit this `call` endpoint, and instead of looking it up in AWS, I would just mock out whatever I want it to return. OK. Now that we've done that, we can use those mocks: we tell Pulumi, instead of using the default connection to the cloud provider, go ahead and use these mocks to run the application. And now, once I've done that, I can import my application code — and when I do this import, it will use these mocks instead of what's built into Pulumi. Then, finally, I can create a unit test: a testing class, and using that `pulumi.runtime.test` decorator I can write an actual test. For this example it's a very simple one. It takes the app's deployment — I think it's called `app_deployment`, yes — takes the metadata of that `app.app_deployment`, gets its namespace, and asserts that the namespace is equal to the value I configured. So I'm validating that I correctly set the namespace in the example there. OK. So I've written a little unit test against this that can run without deploying any infrastructure. Now I can just run it with Python's unittest runner. Oops — OK. So what I tried to do here is run this unit test with just the simple mocks written here, and we see that we actually get an error. 
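As a sketch of the shape of those mocks, here's a minimal stand-in that shows the two methods the real `pulumi.runtime.Mocks` class asks you to implement. It deliberately avoids importing the Pulumi SDK so it runs anywhere; with the SDK installed, you would subclass `pulumi.runtime.Mocks` and register it with `pulumi.runtime.set_mocks(MyMocks())` before importing your app code (method signatures here follow the pattern described in the demo, not the SDK's exact current API):

```python
# Stand-in for pulumi.runtime.Mocks: new_resource fakes resource creation,
# call fakes invokes/lookups. No cloud provider is ever contacted.
class MyMocks:
    def new_resource(self, type_, name, inputs, provider, id_):
        # Return (physical id, outputs): fake the id from the resource name,
        # and echo the inputs back as the outputs.
        return name + "_id", inputs

    def call(self, token, args, provider):
        # Called for invokes (data lookups); return whatever the code expects.
        return {}

mocks = MyMocks()
res_id, outputs = mocks.new_resource(
    "aws:ec2/instance:Instance", "server", {"tags": {"Name": "web"}}, "", "")
print(res_id)   # server_id
```

Back to the error we just hit.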
And it's actually an error within the implementation of stack references — in stack_reference.py. The reason is that the stack reference itself — which, if you recall, points at a Pulumi stack here — is a resource. It's a special kind of resource in the `pulumi:pulumi` namespace, and because I'm not actually connected to an engine, this resource isn't going to get its outputs. So we need to mock out that stack reference too, so that we can run our code without having to connect to the Pulumi backend service or anything like that. Let me show you what it looks like to add a real mock for it. What we'll do here is say: if the type of the resource is `pulumi:pulumi:StackReference`, like we saw down there, then I want to use a different set of outputs. I'll pass all the inputs through, but I also want to pass an output called `outputs`, and that output should have a kubeconfig, and that kubeconfig will be this cluster's. OK. Now when I run this, my stack reference will have outputs even in my unit test, so I'll be able to get those outputs, pull the kubeconfig off, and set all that up. So if I run the Python unit test — OK, there we go. Slightly anticlimactic, but we got a passing test. And if I change it so it shouldn't pass — hmm, that's not failing the way I expected; I won't go look into that now, but we did run the unit test in this case, and I'll go look into why it wasn't behaving correctly when I made that update. The other key thing is that this runs in about one second. So we're able to get very quick feedback on the correctness of our infrastructure just by running these tests — we could even run them in a watcher process as we develop. 
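The stack-reference special case just described might look roughly like this, again as a plain-Python stand-in rather than the real `pulumi.runtime.Mocks` subclass (the kubeconfig string and the stack name are made-up placeholders — a real mock would return whatever shape your test needs):

```python
# Stand-in mock extended with a special case for stack references. A
# StackReference is itself a resource of type "pulumi:pulumi:StackReference",
# so without an engine we have to fake its "outputs" property too.
class MyMocks:
    def new_resource(self, type_, name, inputs, provider, id_):
        if type_ == "pulumi:pulumi:StackReference":
            outputs = dict(inputs)
            # Fake the referenced stack's outputs; placeholder kubeconfig.
            outputs["outputs"] = {"kubeconfig": "fake-cluster-kubeconfig"}
            return name + "_id", outputs
        # Default case: fake id, echo inputs as outputs.
        return name + "_id", inputs

    def call(self, token, args, provider):
        return {}

ref_id, ref_outputs = MyMocks().new_resource(
    "pulumi:pulumi:StackReference", "cluster-ref",
    {"name": "myorg/cluster/dev"}, "", "")
print(ref_outputs["outputs"]["kubeconfig"])   # fake-cluster-kubeconfig
```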
And that's one example of testing with unit tests. Ultimately, once you've got a little bit of this mocking framework set up, you can really easily add tests with a few lines of code to validate more and more criteria about the correctness of your software. OK. One last thing I want to touch on — I didn't have a chance to demo this, but I talked about how testing is particularly useful along with components. The other thing we could consider doing here is: there's a lot of boilerplate, right? We've got this deployment and this service, and there's a lot of repetition. What we could imagine doing is creating our own component — say, a `ServiceDeployment` component — which packages both of these resources up into a new component and lets me specify, effectively, what namespace to put it into, what image to deploy, optionally how many replicas I want, and what ports. With those few pieces of information, that's all I need to set up a standard configuration here. So I could introduce that component, validate that component's correctness independently of this specific usage, and then just create and use it within my application itself. That ability to refactor things into components and test those components independently is just standard software engineering best practice — and it's really easy to apply here. OK. So with that, we're wrapped up on demos. I see a couple of questions here, so let me answer those, and then we'll go into a very quick wrap-up for the session. The first question was: it's still not clear whether I can achieve the same as Terratest with Pulumi. 
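(Before the Q&A: here's a rough sketch of that `ServiceDeployment` idea. To stay runnable without any SDK, this version only builds the Kubernetes spec dicts; a real Pulumi component would subclass `pulumi.ComponentResource` and create `kubernetes.apps.v1.Deployment` and `kubernetes.core.v1.Service` resources from these. The class name and parameters follow the description above; the image and namespace values are illustrative.)

```python
# Sketch of a ServiceDeployment component: one class capturing the
# boilerplate of a Kubernetes Deployment plus a Service for the same app.
class ServiceDeployment:
    def __init__(self, name, namespace, image, replicas=1, port=80):
        labels = {"app": name}
        # Deployment spec: run `replicas` copies of the container image.
        self.deployment = {
            "metadata": {"name": name, "namespace": namespace},
            "spec": {
                "replicas": replicas,
                "selector": {"matchLabels": labels},
                "template": {
                    "metadata": {"labels": labels},
                    "spec": {"containers": [
                        {"name": name, "image": image,
                         "ports": [{"containerPort": port}]}]},
                },
            },
        }
        # Service spec: route traffic on `port` to the labeled pods.
        self.service = {
            "metadata": {"name": name, "namespace": namespace},
            "spec": {"selector": labels,
                     "ports": [{"port": port, "targetPort": port}]},
        }

svc = ServiceDeployment("myapp", "dev", "nginx:1.25", replicas=3)
print(svc.deployment["spec"]["replicas"])   # 3
```

Now, back to the Terratest question.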
So, Terratest is a tool that's used with Terraform — and actually with a number of other infrastructure tools. I believe Terratest itself does not yet support Pulumi; I think we're actually talking to some of the folks who build Terratest about potentially adding support directly to it. But at the same time, the integration testing category I talked about on the slide is really the same category as what Terratest does: Terratest lets you stand up a set of infrastructure, run validation against it, and tear it down. It offers a number of other capabilities, but that's roughly the main way it gets used. We do have frameworks like that for Pulumi: we ourselves have a Go-based integration test framework, similar to Terratest, that scripts Pulumi deployments. We've also seen a number of folks in the Pulumi community build tools — there's a tool called pitfall in the Python world that does the same kind of thing — and we've seen folks internally build their own test frameworks that work with the infrastructure they have, to deploy infrastructure in the same way Terratest does. So it's very much possible; that's this integration testing category, and we find it to be probably one of the most valuable kinds of testing to do. But if you also want to augment that with really fast tests, the unit testing is a super valuable way to get upfront validation.

OK, next question I see: is there any way of developing locally, without the development environment having to be in the cloud? So yes — this is actually part of what's great about the unit testing and the mocks we just showed: I can do that completely independently, without touching my cloud provider. 
And so I can really quickly develop and test there. Now, of course, there's a lot I don't get right if I'm not actually touching my cloud provider — a lot of the correctness of my application is not being validated. Whether or not my instance is actually going to run with the compute I specified, I won't really know for sure until I ask AWS to create it. So if you're building cloud infrastructure, you do ultimately need to be able to deploy and validate against the cloud itself. But the unit testing capability lets you do a lot of the inner loop of development without touching the cloud provider, so you can make more progress without it.

There's a follow-up question on the testing frameworks: is the testing framework you mentioned available in other supported programming languages? So, we definitely want to expand out some of the integration testing frameworks we provide. The unit testing I mentioned here is available, I think, in all the languages we support. There's actually a great guide for this: go to the Pulumi docs and you'll find a testing guide. It has a description of these different categories, the trade-offs, and how you work with each of them, plus examples for unit testing in each of these languages. For integration testing there are examples with Go — that's the only language that supports that framework right now, though you can use it to test programs written in any language — and for property testing there are examples in TypeScript. So we're working to expand the matrix of where these things are supported, but you can already do the unit testing in any of the languages. All right. 
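As a brief aside on the property-test category mentioned earlier: a resource-level assertion is conceptually just a function over a resource's properties that reports violations. Here is a plain-Python sketch of that idea — this is not Pulumi's actual policy-as-code API (which lives in the `pulumi-policy` package), and the rule-dict shape and function name are my own illustration:

```python
# Hypothetical resource-level assertion in the spirit of policy as code:
# flag any security-group ingress rule that opens a non-HTTPS port to the
# whole internet.
def validate_security_group(ingress_rules):
    """Return a list of human-readable violations for the given rules."""
    violations = []
    for rule in ingress_rules:
        open_to_world = "0.0.0.0/0" in rule.get("cidr_blocks", [])
        if open_to_world and rule.get("from_port") != 443:
            violations.append(
                f"port {rule.get('from_port')} must not be open to the internet")
    return violations

rules = [
    {"from_port": 443, "cidr_blocks": ["0.0.0.0/0"]},   # HTTPS: allowed
    {"from_port": 22, "cidr_blocks": ["0.0.0.0/0"]},    # SSH: violation
]
print(validate_security_group(rules))
```

In the real feature, checks like this run automatically against every resource during each deployment.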
And then one last question I see here: will your PR for multi-language Pulumi component libraries allow languages transpiled to JavaScript to be used for coding infrastructure with Pulumi? So I think this is referring to some ongoing work we're doing to allow components to be written in one language but used from other languages. Things like that EKS component I showed — that's a component available today for folks using Pulumi's TypeScript SDK, and we really want people to be able to use it from Python and C# and Go as well, so we're working on the infrastructure to do that. So yes, we definitely want to enable any Pulumi library to ultimately be usable from any language that works with Pulumi, and that's in progress right now. In terms of the specific question about languages transpiled to JavaScript: those can actually already be used. Any language you can transpile to JavaScript, you can run with Pulumi, because Pulumi has support for Node.js — so anything you can run on Node.js, which these days is almost anything, you can already use with Pulumi. Now, that often doesn't provide the best experience for those languages, but if there is a compiler for the language you like, and you know how to integrate it with Node, you can use it with Pulumi today. So I definitely encourage folks to check that out if that's the path they want.

All right, let me switch back to the slides and we'll wrap things up. OK. We covered a lot of different topics today, from components to multi-stack architectures to testing — and there's a lot more that we haven't been able to cover here yet. 
If you're interested in these topics, I encourage you to check out the great documentation on the Pulumi website, and the tons of examples out there for all the different pieces, which you can use to see how Pulumi works with the scenarios you're actually working with. Just a couple I'll call out in case they pique your interest as something to follow up on. One is secrets. We didn't really talk much about secrets here; we talked about config and the ability to configure inputs to my stack in the last session. Secrets are an extension of configuration where we can say that some of our configuration is secret, so it will be encrypted — it won't be stored in plain text in my configuration, and when I actually do my deployment, it will always be serialized in encrypted form in the Pulumi state file and in any place the state file gets sent. You can then plug in your own encryption backends: you can use the Pulumi service, or you can work with KMS or Key Vault or whatever you want, to do your encryption. This gives you end-to-end encryption where you don't have to trust Pulumi: you can have any secrets you want encrypted with keys that you control, and the only time they'll be decrypted is in memory, inside the Pulumi deployment running within your environment. So you get complete control over that — really interesting support there. Secrets are a very important part of any infrastructure deployment, so I definitely encourage folks to look at that if they're interested. The second one is importing and adopting existing infrastructure. 
We know that many teams today already have existing infrastructure — whether it was deployed with point-and-click, or CloudFormation, or Terraform, or what have you — and Pulumi has great support for adopting that infrastructure, or coexisting with it. So if you have existing infrastructure, that should not be a barrier in any sense to trying out and using Pulumi; Pulumi offers a bunch of different ways to work with that existing infrastructure, and even move it over when you're ready. That includes Terraform: if you have stuff in Terraform, it's actually very easy to convert it over to Pulumi — there's a tool called tf2pulumi. So if you do have existing Terraform code and are looking for real programming languages and some of the software engineering benefits, we do have tools to help with that. I mentioned policy as code very briefly; if you're looking to enforce compliance and security policies, Pulumi has a whole other capability around how we author and write those policies in a really rich way — a whole other thing to check out. And finally: we focused on the development process here, demoing the developer loop, but when you're taking this into production and trying to do it in a more robust way, you're almost certainly going to move your deployments into a CI/CD pipeline, potentially triggered by Git flows — and Pulumi has great support for that, with guides on the website for integration with a dozen or so different CI/CD providers. 
And in general, we see that pretty much everyone, once they hit a certain level of maturity with Pulumi and are really starting to use it in production, tends to move into a CI/CD system — so definitely check that out. And finally, we touched on two providers today, AWS and Kubernetes, but there are 50 or so other providers available with Pulumi. So no matter what you're working with, there probably is a provider for it out there. One of the great things about Pulumi is that you don't have to build for just one platform: you can do AWS plus other clouds plus Kubernetes, all together, and coordinate your deployments across all of them. So that's another thing to check out — see whether the platforms you work with are available, and if they're not, drop us a note and let us know, and we'll see what we can do about that.

All right, so just to wrap up — and I'll answer Q&A until we're done after this — our docs are a great place to go to learn: pulumi.com/docs. You can follow us at @PulumiCorp on Twitter, or me on Twitter. We have great examples in github.com/pulumi/examples — I think it's 120 or so examples now, so a lot of different things are covered there if you're looking to get started on something. And of course there's our Slack channel, with a great community of folks helping people get started and answering questions about Pulumi — join us there to ask any questions you have. Great. So that's it for me today. I'll answer questions while we have them, but thank you everyone for coming, and we'll see you next time. Bye. OK. 
So there's a question — a bit off topic — about any updates on supporting the use of the Pulumi CLI within custom scripts. So yes, today we definitely support this, in the sense that you can shell out to Pulumi. We have `--json` flags for many of the commands, so you can write scripts that use `pulumi up`. In fact, many of the heaviest users of Pulumi that we talk to are scripting it in various ways — either scripting it into their CI/CD workflow, or into programmatic tools they've built to provision and deprovision infrastructure. It's very common to build scripts around Pulumi today, and that's done by invoking the CLI. But I suspect the issue this question references is one where we're looking at whether we can expose this in a more API-driven way: exposing APIs in JavaScript and Python and Go that let you coordinate the Pulumi engine directly, without having to shell out to a CLI command. That's something we're looking at heavily — we've seen this use case come up a ton and are very excited about what we can do there. It's really aligned with the idea that, alongside using your programming languages for infrastructure, you get the ability to programmatically control the deployment workflows of your infrastructure as well. So definitely something we're thinking a lot about.

The next question: how do I advocate for adding a new provider that already exists in Terraform but not yet in Pulumi? I'll tell you the biggest thing: drop into the Slack channel — there's a #contribute channel in there. Drop a note saying, hey, here's a provider I use in Terraform that I'd love to have available in Pulumi as well, and someone from the team, I'm sure, will jump on and give you guidance on how you could build that. 
It's actually very easy to do that — we have some bootstrap repos that you can clone and update a few things in. And then, once you have something working, we tend to be able to take that over, publish it, test it, and that kind of thing, so we can make it available to all users as well. So yes, definitely drop in there and let us know, and feel free — as with anything — to open an issue in the open source project and let other folks upvote it. That gives us a great sense of what the community at large thinks is most important here. But it's also quite possible we already have it, because we have quite a few of these providers now. Great. And then the last question I see here is: will this recorded webinar be posted to YouTube? Yes, it will — this should be up on YouTube soon, so feel free to forward it on to other folks who may have missed it. And of course we have other webinars, covering some of the same material as well as some new material, coming up in the next few weeks — feel free to jump onto pulumi.com/webinars to check out the upcoming webinars we've got on deck. All right — well, thanks again, everyone, for joining us. Have a great rest of your day, and we'll talk to you soon. Thanks. Bye.
