# Multiple Azure Kubernetes Service (AKS) Clusters
This example demonstrates creating multiple Azure Kubernetes Service (AKS) clusters in different regions and with different node counts. See the [AKS documentation](https://docs.microsoft.com/en-us/azure/aks/) for more information about AKS.
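The core of the program is a list of per-cluster settings and a loop that creates one cluster per entry. The sketch below shows that shape under assumed names (`cluster_settings`, the regions, and the node counts are illustrative, not the example's exact code); the SSH and service-principal wiring is covered later in this README:

```python
from pulumi_azure_native import containerservice, resources

# Illustrative per-cluster settings: the example varies region and node count.
cluster_settings = [
    {"name": "east", "location": "eastus", "node_count": 2},
    {"name": "west", "location": "westus", "node_count": 5},
]

resource_group = resources.ResourceGroup("aks")

clusters = []
for settings in cluster_settings:
    cluster = containerservice.ManagedCluster(
        f"akscluster-{settings['name']}",
        resource_group_name=resource_group.name,
        location=settings["location"],
        dns_prefix=f"akscluster-{settings['name']}",
        agent_pool_profiles=[containerservice.ManagedClusterAgentPoolProfileArgs(
            name="agentpool",
            mode="System",
            count=settings["node_count"],
            vm_size="Standard_B2s",
        )],
        # linux_profile and service_principal_profile are omitted here;
        # they are built from the config values and the service principal
        # shown under "Running the Example" below.
    )
    clusters.append(cluster)
```

Keeping the per-region differences in a plain list of dictionaries means adding a third cluster is a one-entry change.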
Ensure you have downloaded and installed the Pulumi CLI.
## Running the Example
**Note**: Due to an issue in the Azure AD Terraform provider (https://github.com/hashicorp/terraform-provider-azuread/issues/4), creation of the Azure Service Principal needed to create the Kubernetes clusters (see `main.py`) is delayed, and the principal may not yet be available when the clusters are created. If you get a "Service Principal not found" error, run `pulumi up` again as a workaround; by then the Service Principal should have been created.
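For context, the service principal the note refers to is created in `main.py` with the `pulumi-azuread` provider, roughly along these lines (the resource names here are illustrative, and exact argument names differ between `pulumi-azuread` major versions):

```python
import pulumi_azuread as azuread

# An Azure AD application plus a service principal for the AKS clusters
# to authenticate as. Some pulumi-azuread versions name the service
# principal argument `client_id` instead of `application_id`.
app = azuread.Application("aks-app", display_name="aks-multicluster")
service_principal = azuread.ServicePrincipal(
    "aks-sp",
    application_id=app.application_id,
)
sp_password = azuread.ServicePrincipalPassword(
    "aks-sp-password",
    service_principal_id=service_principal.id,
    end_date="2099-01-01T00:00:00Z",
)
```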
After cloning this repo, `cd` into it and run these commands.
Create a new stack, which is an isolated deployment target for this example:
```bash
$ pulumi stack init
```
Set the required configuration variables for this program:
```bash
$ pulumi config set azure-native:environment public
$ pulumi config set password --secret [your-cluster-password-here]
$ ssh-keygen -t rsa -f key.rsa
$ pulumi config set sshPublicKey < key.rsa.pub
```
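On the program side, `main.py` reads these values through Pulumi's config API. A minimal sketch (the variable names are assumptions):

```python
import pulumi

config = pulumi.Config()
# `password` was set with --secret, so read it as a secret value.
password = config.require_secret("password")
ssh_public_key = config.require("sshPublicKey")
```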
Deploy everything with the `pulumi up` command. This provisions all the Azure resources necessary, including an Active Directory service principal and the AKS clusters:
```bash
$ pulumi up
```
After a couple of minutes, your AKS clusters will be ready. The cluster names will be printed as output variables once `pulumi up` completes:
```
$ pulumi up
...
Outputs:
  + aksClusterNames: [
  +     [0]: "akscluster-east513be264"
  +     [1]: "akscluster-westece285c7"
    ]
...
```
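Those names come from a `pulumi.export` at the end of the program; assuming the `clusters` list from the sketch at the top of this README, it looks something like:

```python
import pulumi

# `clusters` is the list of ManagedCluster resources built earlier.
pulumi.export("aksClusterNames", [cluster.name for cluster in clusters])
```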
At this point, you have multiple AKS clusters running in different regions. Feel free to modify your program, and run `pulumi up` to redeploy your changes. The Pulumi CLI automatically detects what has changed and makes the minimal edits necessary to apply them.
Once you are done, you can destroy all of the resources and the stack:
```bash
$ pulumi destroy
$ pulumi stack rm
```