kubeadm upgrade – v1.32.x to v1.33.x

Upgrading a kubeadm cluster involves careful planning. This guide walks through the upgrade from v1.32.x to v1.33.x, including patch-level bumps within those minor releases. It covers control-plane upgrades, node upgrades, volume lifecycle, CSI changes, quotas, security contexts, snapshot handling, and event-driven workflows.


TL;DR

  • Plan control-plane upgrade with kubeadm upgrade plan and apply commands.
  • Cordon, drain, and upgrade worker nodes one at a time.
  • Validate CSI drivers, volume snapshots, PVC lifecycle and storage class changes.
  • Review ResourceQuota and PodSecurityContext impacts.
  • Test event-driven triggers using Kubernetes events and controllers.
  • Follow version skew policy: no skipping minor versions.

Kubeadm Upgrade Overview

kubeadm upgrade pulls new control-plane component images and rewrites the static Pod manifests under /etc/kubernetes/manifests. It does not upgrade the kubelet binary; you upgrade the kubelet separately on each node with your package manager. The kubelet must never be newer than the kube-apiserver; the version skew policy allows it to lag by up to three minor versions, but keeping the gap to one minor release keeps kubeadm upgrades simple.
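
The static Pod manifests it rewrites are ordinary files on the control-plane node, so you can inspect them before and after the upgrade; on a typical stacked control plane they look like this:

  ls /etc/kubernetes/manifests
  # etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml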


kubeadm upgrade Prerequisites

  • Ensure root or sudo access on control plane nodes.
  • Backup etcd cluster.
  • Check the current client and server versions: kubectl version (the --short flag has been removed from recent kubectl releases; short output is now the default).
  • Validate cluster health: kubectl get nodes,pods -A.
  • Confirm available upgrades: kubeadm upgrade plan shows target versions, channel info, and recommended kubelet versions. Example pre-flight commands follow this list.
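
A minimal pre-flight sketch, assuming a stacked etcd with the default kubeadm certificate paths and a writable backup directory (adjust the endpoint and paths if your etcd is external):

  # Back up etcd before touching anything
  sudo ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-pre-1.33.db \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key

  # Confirm cluster health and current versions
  kubectl get nodes -o wide
  kubectl get pods -A   # look for anything not Running or Completed

  # See which versions kubeadm can upgrade to
  sudo kubeadm upgrade plan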

Control-Plane Upgrade Steps
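
On the first control-plane node, a typical sequence looks like the following sketch (apt shown; 1.33.x is a placeholder for the exact patch release reported by kubeadm upgrade plan):

  # Upgrade kubeadm itself first
  # if the packages are held, apt-mark unhold them before installing
  sudo apt-get update && sudo apt-get install -y kubeadm=1.33.x-*

  # Review and apply the control-plane upgrade
  sudo kubeadm upgrade plan
  sudo kubeadm upgrade apply v1.33.x

  # On the remaining control-plane nodes, run instead:
  sudo kubeadm upgrade node

  # Then upgrade kubelet and kubectl on each control-plane node and restart the kubelet
  sudo apt-get install -y kubelet=1.33.x-* kubectl=1.33.x-*
  sudo systemctl daemon-reload && sudo systemctl restart kubelet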

After kubeadm upgrade apply completes, verify that the static Pods under /etc/kubernetes/manifests were rotated. Wait until all control-plane components report Ready.
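
To confirm the rotation, a quick check might look like:

  # Manifests should reference the new component image tags
  grep 'image:' /etc/kubernetes/manifests/kube-apiserver.yaml

  # Control-plane Pods should be Running at the new version
  kubectl get pods -n kube-system -o wide
  kubectl get nodes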


Worker Node Upgrade in kubeadm upgrade

On each worker, cordon and drain before upgrading; a per-node sketch is shown below (apt commands, with <node> and the exact patch version as placeholders).
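
  # From a machine with kubectl access: evict workloads (drain also cordons)
  kubectl drain <node> --ignore-daemonsets --delete-emptydir-data

  # On the worker itself: upgrade kubeadm, apply the node upgrade, then the kubelet
  sudo apt-get install -y kubeadm=1.33.x-*
  sudo kubeadm upgrade node
  sudo apt-get install -y kubelet=1.33.x-* kubectl=1.33.x-*
  sudo systemctl daemon-reload && sudo systemctl restart kubelet

  # Allow scheduling again
  kubectl uncordon <node>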

Repeat per node. Always upgrade one node at a time to maintain availability.


Storage Volumes and CSI Considerations

Upgrades can affect volume lifecycles. Validate storage classes, CSI drivers, and snapshot controllers.

Volume types: emptyDir, hostPath, NFS, iSCSI, CSI-based. See full list at Kubernetes concepts docs.

Lifecycle: PVC → binding → Pod usage → cleanup. Dynamic provisioning uses StorageClass parameters. During upgrade, ensure CSI pods remain ready.
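
A quick way to confirm that provisioning keeps working while nodes are cycling (the CSI driver namespace and Pod names vary by vendor, so the grep below is only a rough filter):

  kubectl get storageclass
  kubectl get csidrivers
  kubectl get csinodes
  # CSI controller and node Pods usually live in kube-system or a vendor namespace
  kubectl get pods -A | grep -i csi
  kubectl get pvc -A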

Volume snapshots: ensure VolumeSnapshot CRDs and snapshot controller match cluster version. Upgrade controller first, then CSI drivers.
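
To verify the snapshot stack before and after the upgrade (the component names assume the upstream external-snapshotter deployment; adjust to your installation):

  # Snapshot CRDs must be present and at a compatible version
  kubectl get crd | grep snapshot.storage.k8s.io

  kubectl get volumesnapshotclasses
  kubectl get volumesnapshots -A
  kubectl get volumesnapshotcontents

  # The snapshot controller typically runs in kube-system
  kubectl get pods -n kube-system | grep -i snapshot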


ResourceQuota and PodSecurityContext

Newer releases tighten default PodSecurityContext expectations. Check your Pod specs for fsGroup, runAsNonRoot, and seccompProfile, and adjust them if cluster-level Pod Security admission settings would otherwise block workloads after the upgrade.
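
One way to spot workloads that would trip stricter enforcement, assuming Pod Security admission is enabled (baseline is only an example level):

  # Show current Pod Security labels per namespace
  kubectl get namespaces --show-labels | grep -i pod-security

  # A server-side dry run reports Pods that would violate the chosen level
  kubectl label --dry-run=server --overwrite namespace --all \
    pod-security.kubernetes.io/enforce=baseline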

ResourceQuota changes in v1.33 enforce stricter counting of CSI volumes and snapshots. Verify that kubectl describe quota shows headroom for persistentvolumeclaims, volume snapshot counts, and total requested storage.
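
A quick check of quota headroom before creating new PVCs or snapshots during the upgrade (the quota and namespace names are placeholders):

  kubectl get resourcequota -A
  kubectl describe quota <quota-name> -n <namespace>

  # Entries worth checking:
  #   persistentvolumeclaims
  #   requests.storage
  #   count/volumesnapshots.snapshot.storage.k8s.io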


Example Upgrade Workflows

Use Kubernetes events to trigger automation. A sample controller watches for NodeUpgraded events and updates an external registry.
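
A minimal shell sketch of the same idea, rather than a full controller; the NodeUpgraded reason and the registry URL are assumptions taken from this example, not standard Kubernetes names:

  kubectl get events -A --watch --no-headers \
    --field-selector reason=NodeUpgraded \
    -o custom-columns=NODE:.involvedObject.name,MSG:.message |
  while read -r node msg; do
    # Notify a hypothetical external registry about the upgraded node
    curl -fsS -X POST "https://registry.example.com/nodes/${node}/upgraded" \
      -d "{\"message\": \"${msg}\"}"
  done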

This pattern treats upgrade as an event-driven architecture. You can emit custom events in pre- and post-upgrade phases.
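
Emitting a custom event from a pre- or post-upgrade hook can be as simple as creating a small core/v1 Event object; the manifest file, node name, and reason below are placeholders, and events for cluster-scoped objects such as Nodes live in the default namespace:

  # node-upgraded-event.yaml: a core/v1 Event whose involvedObject is the Node
  # and whose reason matches what the watcher above selects on (NodeUpgraded)
  kubectl create -f node-upgraded-event.yaml

  # Confirm it is visible to watchers
  kubectl get events -n default --field-selector reason=NodeUpgraded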


Validation and Testing After kubeadm upgrade

After all nodes are at v1.33, run conformance tests and smoke tests on workloads. Confirm volumes mount correctly and snapshots restore data.
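
A post-upgrade smoke check with plain kubectl might look like this (conformance suites such as Sonobuoy are a separate, optional step):

  # Every node should be Ready and report the new kubelet version
  kubectl get nodes -o wide

  # Nothing should be stuck outside Running or Succeeded
  kubectl get pods -A --field-selector=status.phase!=Running,status.phase!=Succeeded

  # Volumes and snapshots survived the upgrade
  kubectl get pvc -A
  kubectl get volumesnapshots -A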

Check network policies and Ingress controllers for version mismatches.
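
For the network side, a few quick listings help (the ingress-nginx namespace is only an example; use whatever namespace your controller runs in):

  kubectl get networkpolicies -A
  kubectl get ingressclasses
  kubectl get ingress -A
  kubectl get pods -n ingress-nginx -o wide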


Rollback Strategies

kubeadm does not support automatic rollback. Use etcd backup to restore control plane state. For workloads, rely on replicaset history and image tags.
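
Restoring etcd from the pre-upgrade snapshot is the core of a control-plane rollback. A minimal sketch, assuming a stacked etcd and the snapshot taken earlier (paths are placeholders, and the data-directory swap should happen while the etcd static Pod is stopped):

  # Restore the snapshot into a fresh data directory
  sudo ETCDCTL_API=3 etcdctl snapshot restore /var/backups/etcd-pre-1.33.db \
    --data-dir=/var/lib/etcd-restored

  # Point the etcd static Pod's hostPath volume in
  # /etc/kubernetes/manifests/etcd.yaml at the restored directory,
  # then let the control plane come back up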


