Posts Tagged eks

Create Amazon EKS clusters in your favorite language

Pulumi’s infrastructure as code tooling combines the programming languages and tools you already know with the full power of cloud infrastructure. But until now, some Pulumi components, like our popular EKS package for Amazon’s Elastic Kubernetes Service, were only available in a subset of the languages Pulumi supports.

Now, you can use the EKS package, previously available only for TypeScript, in all four Pulumi languages: TypeScript, Python, .NET, and Go. Starting with the v0.22.0 release, you can manage EKS clusters with Pulumi regardless of the language you choose. Check out our Modern Infrastructure Wednesday video to see it in action.
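To give a taste, here’s a minimal sketch in TypeScript (the cluster name and sizes are illustrative) of standing up a cluster with the EKS package:

```typescript
import * as eks from "@pulumi/eks";

// Provision an EKS cluster; the package wires up the VPC configuration,
// IAM roles, and a default worker node group on our behalf.
const cluster = new eks.Cluster("my-cluster", {
    desiredCapacity: 2,
    minSize: 1,
    maxSize: 3,
});

// Export the kubeconfig so the cluster is reachable with kubectl.
export const kubeconfig = cluster.kubeconfig;
```

The same program maps almost one-to-one onto the Python, .NET, and Go SDKs.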

Read more →

Getting Started with Amazon EKS Distro & Pulumi

As Kubernetes grows in popularity, the number of options for Kubernetes users continues to increase. Providers of managed Kubernetes offerings often learn lessons about operating large numbers of clusters at scale, and it’s increasingly common for them to contribute this knowledge back to the ecosystem, allowing organizations that need more control and flexibility to reap the benefits.

With the announcement of the Amazon EKS Distro at AWS re:Invent, the Amazon EKS team has contributed back to the cloud-native community in a big way. In this post, we’ll take a brief look at what the Amazon EKS Distro is, explore why you might choose it over existing managed service offerings, and show how you can get started with the Amazon EKS Distro on day 1 using Pulumi.

Read more →

Access Control for Pods on Amazon EKS

Amazon EKS clusters can use IAM roles and policies to grant Pods fine-grained access to AWS services. These AWS IAM entities map into Kubernetes RBAC to configure the permissions of Pods that work with AWS services.

Together, AWS IAM and Kubernetes RBAC enable least-privileged access for your apps, scoped to the appropriate policies and user requirements.
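As a rough sketch of the pattern in TypeScript, assuming a cluster created with the Pulumi EKS package and an OIDC provider enabled (the role and service account names are illustrative):

```typescript
import * as aws from "@pulumi/aws";
import * as eks from "@pulumi/eks";
import * as k8s from "@pulumi/kubernetes";
import * as pulumi from "@pulumi/pulumi";

// A cluster with an OIDC provider, which IAM uses to federate trust
// to Kubernetes service accounts.
const cluster = new eks.Cluster("irsa-demo", { createOidcProvider: true });
const oidc = cluster.core.oidcProvider!;

// An IAM role that only the "s3-reader" service account in the "default"
// namespace may assume.
const role = new aws.iam.Role("s3-reader", {
    assumeRolePolicy: pulumi.all([oidc.arn, oidc.url]).apply(([arn, url]) =>
        JSON.stringify({
            Version: "2012-10-17",
            Statement: [{
                Effect: "Allow",
                Principal: { Federated: arn },
                Action: "sts:AssumeRoleWithWebIdentity",
                Condition: {
                    StringEquals: { [`${url}:sub`]: "system:serviceaccount:default:s3-reader" },
                },
            }],
        })),
});

// Scope the role to a specific AWS-managed policy.
new aws.iam.RolePolicyAttachment("s3-read-only", {
    role: role,
    policyArn: "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
});

// Pods that use this service account receive the role's permissions.
new k8s.core.v1.ServiceAccount("s3-reader", {
    metadata: {
        name: "s3-reader",
        annotations: { "eks.amazonaws.com/role-arn": role.arn },
    },
}, { provider: cluster.provider });
```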

Read more →

AWS EKS - How to Scale Your Cluster

Amazon Elastic Kubernetes Service (EKS) provides a range of compute options for dynamically scaling your Kubernetes clusters, each trading control for convenience: Managed Node Groups, Fargate, and manually managed node groups on EC2. In this post, we’ll see how to use each of these compute options, and when to prefer one over the others to maximize productivity, flexibility, and control based on your needs.
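As a sketch of the managed option in TypeScript, here’s roughly what attaching a Managed Node Group to a Pulumi EKS cluster can look like (the names, sizes, and policies shown are illustrative):

```typescript
import * as aws from "@pulumi/aws";
import * as eks from "@pulumi/eks";

// IAM role that the managed node group's EC2 instances assume.
const nodeRole = new aws.iam.Role("node-role", {
    assumeRolePolicy: aws.iam.assumeRolePolicyForPrincipal({ Service: "ec2.amazonaws.com" }),
    managedPolicyArns: [
        "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
        "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy",
        "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
    ],
});

// Skip the default node group; a managed group is attached below.
const cluster = new eks.Cluster("scaling-demo", {
    skipDefaultNodeGroup: true,
    instanceRoles: [nodeRole],
});

// A managed node group that EKS keeps patched and scales between 1 and 5 nodes.
new eks.ManagedNodeGroup("managed-workers", {
    cluster: cluster,
    nodeRole: nodeRole,
    scalingConfig: { desiredSize: 2, minSize: 1, maxSize: 5 },
});
```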

Read more →

Multicloud Kubernetes: Running Apps Across EKS, AKS, and GKE

Managed Kubernetes clusters from AWS Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE) all vary in configuration, management, and resource properties. This variance creates unnecessary complexity in cluster provisioning and application deployments, as well as in CI/CD and testing.

Additionally, if you want to deploy the same app across multiple clusters for specific use cases or test scenarios across providers, subtleties such as LoadBalancer outputs and cluster connection settings can be a nuisance to manage.

In this post, we’ll see how to use Pulumi to deploy the kuard app across EKS, AKS, GKE, and a local Kubernetes cluster, such as Docker Desktop or a self-managed cluster. We’ll spin up the clusters in each provider, launch the app, and manage both cluster and app using the TypeScript programming language.
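To give a flavor of the approach, here’s a hedged sketch that deploys the same kuard Deployment to several clusters; it assumes each cluster’s kubeconfig is supplied as stack configuration (the config keys are illustrative):

```typescript
import * as k8s from "@pulumi/kubernetes";
import * as pulumi from "@pulumi/pulumi";

// Deploy an identical kuard Deployment to any cluster, given its provider.
function deployKuard(name: string, provider: k8s.Provider): k8s.apps.v1.Deployment {
    const labels = { app: "kuard" };
    return new k8s.apps.v1.Deployment(`kuard-${name}`, {
        spec: {
            selector: { matchLabels: labels },
            replicas: 1,
            template: {
                metadata: { labels: labels },
                spec: {
                    containers: [{
                        name: "kuard",
                        image: "gcr.io/kuar-demo/kuard-amd64:blue",
                        ports: [{ containerPort: 8080 }],
                    }],
                },
            },
        },
    }, { provider });
}

// One Kubernetes provider per cluster, each driven by its own kubeconfig.
const config = new pulumi.Config();
const providers = {
    eks: new k8s.Provider("eks", { kubeconfig: config.require("eksKubeconfig") }),
    aks: new k8s.Provider("aks", { kubeconfig: config.require("aksKubeconfig") }),
    gke: new k8s.Provider("gke", { kubeconfig: config.require("gkeKubeconfig") }),
};

for (const [name, provider] of Object.entries(providers)) {
    deployKuard(name, provider);
}
```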

Read more →

Day 2 Kubernetes: Migrate EKS Node Groups with Zero Downtime

Managed Kubernetes offerings greatly reduce the overhead of administering Kubernetes. However, the cluster itself is only one of the components under management; app lifecycles remain self-driven tasks that vary by workload.

In Kubernetes, node groups are a useful mechanism for creating pools of resources that can enforce scheduling requirements. They also provide a utility for shifting workloads around during cluster management and updates.

In this post, we’ll see how to use Pulumi for Day 2 Kubernetes administration. We’ll spin up a new EKS cluster with two node groups and a given workload. Then we’ll add one more node group with an updated configuration, and migrate the workload over to it with zero downtime using code and kubectl.
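In outline, the migration amounts to running two node groups side by side; here is a sketch in TypeScript, with illustrative names and instance types:

```typescript
import * as aws from "@pulumi/aws";
import * as eks from "@pulumi/eks";

// Shared worker role and instance profile for both node groups.
const role = new aws.iam.Role("worker-role", {
    assumeRolePolicy: aws.iam.assumeRolePolicyForPrincipal({ Service: "ec2.amazonaws.com" }),
    managedPolicyArns: [
        "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
        "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy",
        "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
    ],
});
const instanceProfile = new aws.iam.InstanceProfile("worker-profile", { role });

const cluster = new eks.Cluster("migration-demo", {
    skipDefaultNodeGroup: true,
    instanceRoles: [role],
});

// Existing group: stays up while its replacement is provisioned.
cluster.createNodeGroup("standard-v1", {
    instanceType: "t3.medium",
    desiredCapacity: 3,
    minSize: 3,
    maxSize: 3,
    labels: { group: "standard-v1" },
    instanceProfile: instanceProfile,
});

// Replacement group with the updated configuration. Once its nodes are
// Ready, shift Pods off the old nodes with `kubectl cordon` and
// `kubectl drain`, then remove the old group from the program.
cluster.createNodeGroup("standard-v2", {
    instanceType: "t3.large",
    desiredCapacity: 3,
    minSize: 3,
    maxSize: 3,
    labels: { group: "standard-v2" },
    instanceProfile: instanceProfile,
});
```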

Read more →

Persisting Kubernetes workloads with Amazon EFS CSI volumes

Warning: Some parts of this blog post are out of date. As an alternative, please refer to the EFS CSI Helm Chart and Pulumi’s support for deploying Helm charts.

The Amazon Elastic File System Container Storage Interface (CSI) Driver implements the CSI specification for container orchestrators to manage the lifecycle of Amazon EFS filesystems. The CSI specification defines an interface, along with minimum operational and packaging recommendations, for a storage provider to implement a CSI-compatible plugin.
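Following the warning’s recommendation, a minimal sketch of deploying the EFS CSI driver via its Helm chart with Pulumi might look like this in TypeScript (chart version pinning and EFS filesystem wiring are left out):

```typescript
import * as k8s from "@pulumi/kubernetes";

// Deploy the aws-efs-csi-driver Helm chart into kube-system.
const efsCsiDriver = new k8s.helm.v3.Chart("aws-efs-csi-driver", {
    chart: "aws-efs-csi-driver",
    namespace: "kube-system",
    fetchOpts: { repo: "https://kubernetes-sigs.github.io/aws-efs-csi-driver/" },
});

// A StorageClass that workloads can reference for statically provisioned
// EFS-backed PersistentVolumes.
new k8s.storage.v1.StorageClass("efs-sc", {
    metadata: { name: "efs-sc" },
    provisioner: "efs.csi.aws.com",
});
```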

Read more →

Kubernetes Ingress with AWS ALB Ingress Controller

Kubernetes Ingress is an API object that allows you to manage external or internal HTTP(S) access to Kubernetes services running in a cluster. Amazon Elastic Load Balancing’s Application Load Balancer (ALB) is a popular AWS service that load balances incoming traffic at the application layer across multiple targets, such as Amazon EC2 instances, in a region. ALB supports multiple features, including host- or path-based routing, TLS (Transport Layer Security) termination, WebSockets, HTTP/2, AWS WAF (Web Application Firewall) integration, integrated access logs, and health checks.

The AWS ALB Ingress Controller is a Kubernetes SIG-AWS subproject; it was the second subproject added to SIG-AWS, after the aws-authenticator subproject. The ALB Ingress Controller triggers the creation of an ALB and the necessary supporting AWS resources whenever a Kubernetes user declares an Ingress resource in the cluster. TargetGroups are created for each backend specified in the Ingress resource. Listeners are created for every port specified as an Ingress resource annotation; when no port is specified, sensible defaults (80 or 443) are used. Rules are created for each path specified in the Ingress resource, ensuring that traffic to a specific path is routed to the correct TargetGroup.

In this post, we will work through a simple example of running ALB-based Kubernetes Ingresses with the Pulumi EKS, AWS, and AWSX packages.
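As a preview, here’s a hedged sketch of the kind of Ingress the controller acts on, written with Pulumi’s Kubernetes SDK (the echoserver Service is assumed to exist already):

```typescript
import * as k8s from "@pulumi/kubernetes";

// An Ingress the ALB Ingress Controller turns into an internet-facing ALB,
// with a TargetGroup for the echoserver backend and a listener on port 80.
const ingress = new k8s.networking.v1.Ingress("echoserver", {
    metadata: {
        annotations: {
            "kubernetes.io/ingress.class": "alb",
            "alb.ingress.kubernetes.io/scheme": "internet-facing",
        },
    },
    spec: {
        rules: [{
            http: {
                paths: [{
                    path: "/",
                    pathType: "Prefix",
                    backend: {
                        service: { name: "echoserver", port: { number: 80 } },
                    },
                }],
            },
        }],
    },
});

// The ALB's DNS name shows up in the Ingress status once provisioned.
export const albAddress = ingress.status.loadBalancer.ingress[0].hostname;
```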

Read more →