
Updating Kubernetes worker nodes

    Updating the worker nodes is a multi-step process that requires properly managing both the nodes themselves and the apps running on them. Kubernetes is best equipped to update stateless apps easily, but it can also manage stateful apps using StatefulSets with user- and ops-driven assistance.

    Apps must comply with the Pod termination lifecycle to terminate properly, and should leverage capabilities such as node selectors, affinity, and probes to guarantee expected scheduling and readiness during updates.
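    As a minimal sketch of these settings, the Deployment below (written with @pulumi/kubernetes in TypeScript; the app name, image, labels, and probe endpoint are illustrative) sets a termination grace period, a node selector, and a readiness probe:

```typescript
import * as k8s from "@pulumi/kubernetes";

// Hypothetical app Deployment showing termination and scheduling settings.
const app = new k8s.apps.v1.Deployment("my-app", {
    spec: {
        replicas: 3,
        selector: { matchLabels: { app: "my-app" } },
        template: {
            metadata: { labels: { app: "my-app" } },
            spec: {
                // Give Pods time to drain in-flight work after SIGTERM.
                terminationGracePeriodSeconds: 30,
                // Schedule Pods onto the intended node group (label is illustrative).
                nodeSelector: { "workload-type": "general" },
                containers: [{
                    name: "my-app",
                    image: "nginx:1.25",
                    // Gate traffic until the container is actually ready to serve.
                    readinessProbe: {
                        httpGet: { path: "/", port: 80 },
                        initialDelaySeconds: 5,
                        periodSeconds: 10,
                    },
                }],
            },
        },
    },
});
```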

    The full code for this stack is on GitHub.

    Overview

    We’ll examine how to:

    - Update an existing node group.
    - Migrate to a new node group.

    Update an Existing Node Group

    On AWS (EKS), updating an existing node group can be trivial for basic property changes (a Pulumi sketch follows these steps):

    1. Verify that enough capacity is available in the cluster to handle workload spillover when the desired node group is scaled down.

    2. Edit the desiredCapacity and minSize of the node group to scale down to a value of 0.

    3. Run an update with pulumi up.

    4. Update the desired node group properties, such as the instanceType or amiId.

    5. Scale the node group up to the desired value.

    6. Run an update with pulumi up.

      Note: Keep the node group workers within a couple of minor Kubernetes versions of the control plane.

    See the official AWS docs for more details.
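    As a rough sketch, assuming the default node group of an @pulumi/eks Cluster (the cluster name and sizes are illustrative), the properties referenced above map to a Pulumi TypeScript program like this:

```typescript
import * as eks from "@pulumi/eks";

// Hypothetical EKS cluster; edit these values between `pulumi up` runs.
const cluster = new eks.Cluster("my-cluster", {
    instanceType: "t3.medium", // step 4: change, e.g., to "t3.large"
    desiredCapacity: 3,        // step 2: set to 0; step 5: scale back up
    minSize: 1,                // step 2: set to 0 alongside desiredCapacity
    maxSize: 5,
});
```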

    On Azure (AKS), updating an existing node pool can be trivial for basic property changes (a Pulumi sketch follows these steps):

    1. Verify that enough capacity is available in the cluster to handle workload spillover when the desired node pool is scaled down.

    2. Scale the node pool down to 0 through its VMSS in the Azure portal; node pools cannot currently be scaled down to 0 through the API, though support for this is planned.

    3. Update the desired node pool properties, such as the vmSize or kubernetesVersion.

    4. Scale the node pool up to the desired value.

    5. Run an update with pulumi up.

      Note: Keep the node pool workers within a couple of minor Kubernetes versions of the control plane.

    See the official AKS docs for more details.
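    As a rough sketch, assuming a cluster defined with @pulumi/azure (the resource names, sizes, and version are illustrative), the properties referenced above map to a Pulumi TypeScript program like this:

```typescript
import * as azure from "@pulumi/azure";

const resourceGroup = new azure.core.ResourceGroup("rg", { location: "WestUS" });

// Hypothetical AKS cluster; edit these values between `pulumi up` runs.
const cluster = new azure.containerservice.KubernetesCluster("my-cluster", {
    resourceGroupName: resourceGroup.name,
    location: resourceGroup.location,
    dnsPrefix: "myk8s",
    kubernetesVersion: "1.28.5",       // step 3: bump to the target version
    defaultNodePool: {
        name: "default",
        nodeCount: 3,                  // steps 2 and 4: scale down (via the portal), then back up
        vmSize: "Standard_DS2_v2",     // step 3: change, e.g., to "Standard_DS3_v2"
    },
    identity: { type: "SystemAssigned" },
});
```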

    By default, GKE clusters and node pools have their versions upgraded automatically.

    On GKE, manually updating an existing node pool can be trivial for basic property changes (a Pulumi sketch follows these steps):

    1. Verify that enough capacity is available in the cluster to handle workload spillover when the desired node pool is scaled down.

    2. Edit the initialNodeCount of the node pool to a value of 0.

    3. Run an update with pulumi up.

    4. Update the desired node pool properties, such as the machineType or version.

    5. Scale the node pool up to the desired value.

    6. Run an update with pulumi up.

      Note: Keep the node pool workers within a couple of minor Kubernetes versions of the control plane.

    See the official GKE docs and the cluster autoscaler docs for more details.
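    As a rough sketch, assuming a node pool defined with @pulumi/gcp (the cluster name, version, and machine type are illustrative), the properties referenced above map to a Pulumi TypeScript program like this:

```typescript
import * as gcp from "@pulumi/gcp";

// Hypothetical GKE node pool; edit these values between `pulumi up` runs.
const nodePool = new gcp.container.NodePool("my-node-pool", {
    cluster: "my-gke-cluster",        // assumes an existing GKE cluster with this name
    initialNodeCount: 3,              // step 2: set to 0; step 5: scale back up
    version: "1.28.7-gke.1000",       // step 4: bump to the target node version
    nodeConfig: {
        machineType: "n1-standard-2", // step 4: change, e.g., to "n1-standard-4"
    },
    // Opt out of automatic upgrades to control the rollout manually.
    management: { autoUpgrade: false, autoRepair: true },
});
```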

    Migrate to a New Node Group

    For an example of migrating node groups, see the general steps outlined in Migrating Node Groups with Zero Downtime.
