Kubernetes clusters use multiple proxies to connect clients, services, and nodes. Each proxy serves distinct roles. This article explains kubectl proxy, apiserver proxy, and kube-proxy. You will learn how they route traffic, enforce security, and integrate with your cluster architecture.
TL;DR
- kubectl proxy exposes the Kubernetes API on localhost for tools and scripts.
- The apiserver proxy, built into the API server, routes external traffic to Nodes, Pods, or Services.
- kube-proxy runs on each Node to maintain network rules and load balance Service traffic.
- Secure communication uses TLS, authentication headers, and RBAC policies.
- Implement proxies via CLI flags, DaemonSet manifests, and API server configuration.
- A Mermaid diagram illustrates end-to-end proxy flow within a cluster.
Kubernetes Proxies Overview
Kubernetes proxies act as intermediaries. They route requests, enforce security, and balance loads. The cluster uses three main proxies: kubectl proxy, apiserver proxy, and kube-proxy. Understanding their roles helps you troubleshoot networking and secure your cluster at scale.
kubectl Proxy Explained
The kubectl proxy runs on a user’s machine or inside a pod. It listens on a localhost port and forwards HTTP requests to the Kubernetes API server. It adds authentication headers automatically. Use cases include debugging, port forwarding for dashboards, and custom scripts.
# Start kubectl proxy on port 8001
echo "Starting kubectl proxy..."
kubectl proxy --port=8001 --accept-hosts="^*$" --accept-paths="^/api/.*$"
The proxy translates HTTP requests to HTTPS. It locates the apiserver using the current kubeconfig context and injects the bearer token from your local credentials. You can secure it further with TLS flags if it runs inside a pod.
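To make the request rewriting concrete, here is a minimal Python sketch of what kubectl proxy does for each call: it maps the localhost URL onto the apiserver address from kubeconfig and injects the bearer token. All names and addresses here are illustrative, not the real implementation.

```python
def forward_request(local_path: str, apiserver: str, token: str) -> dict:
    """Return the upstream request kubectl proxy would issue for a local call."""
    return {
        # HTTP on localhost becomes HTTPS against the apiserver address
        "url": apiserver.rstrip("/") + local_path,
        # credentials from the kubeconfig context are injected automatically
        "headers": {"Authorization": f"Bearer {token}"},
    }

req = forward_request("/api/v1/namespaces/default/pods",
                      "https://10.0.0.1:6443", "abc123")
print(req["url"])  # https://10.0.0.1:6443/api/v1/namespaces/default/pods
```

This is why scripts talking to `http://127.0.0.1:8001` need no credentials of their own: the proxy supplies them on every forwarded request.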
Apiserver Proxy Fundamentals
The apiserver proxy acts as a bastion. It runs inside the API server process. It lets users outside the cluster reach cluster-internal IPs. You call it via:
# Access a pod on port 8080 via the apiserver proxy
APISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
curl -k \
  --header "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
  "$APISERVER/api/v1/namespaces/default/pods/my-pod:8080/proxy/"
This request uses HTTPS to connect to the apiserver, which then proxies to the target Pod or Service. It can also handle HTTP upgrades for WebSockets. The apiserver proxy applies the same authentication and authorization checks as any other API call.
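The proxy path format shown in the curl example generalizes to Services as well. A small Python helper, hypothetical but following that URL scheme, builds such paths:

```python
def proxy_path(resource: str, namespace: str, name: str,
               port: str = "", subpath: str = "") -> str:
    """Build an apiserver proxy path for a pod or service.

    resource is "pods" or "services"; port and subpath are optional.
    """
    target = f"{name}:{port}" if port else name
    return f"/api/v1/namespaces/{namespace}/{resource}/{target}/proxy/{subpath}"

print(proxy_path("pods", "default", "my-pod", port="8080"))
# /api/v1/namespaces/default/pods/my-pod:8080/proxy/
```

Appending such a path to the apiserver URL reaches the in-cluster target without any direct network route from the client.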
Kube-Proxy Mechanics
kube-proxy runs as a DaemonSet on every Node. It watches Service and Endpoint objects. It programs iptables or IPVS rules to direct traffic to healthy Pods. It also load balances across backend endpoints.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kube-proxy
  template:
    metadata:
      labels:
        k8s-app: kube-proxy
    spec:
      hostNetwork: true
      containers:
      - name: kube-proxy
        image: registry.k8s.io/kube-proxy:v1.25.0
        command:
        - kube-proxy
        - --config=/var/lib/kube-proxy/config.conf
        - --proxy-mode=iptables
        volumeMounts:
        - mountPath: /var/lib/kube-proxy
          name: config-volume
      volumes:
      - name: config-volume
        configMap:
          name: kube-proxy-config
The proxy-mode flag selects between iptables and IPVS. IPVS offers better performance for large clusters. kube-proxy also supports userspace mode, but this is deprecated.
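The balancing decision that kube-proxy encodes in iptables or IPVS rules can be illustrated with a short Python sketch: traffic addressed to a Service is spread across its healthy endpoints. The endpoint addresses below are made up for illustration.

```python
import random

def pick_endpoint(endpoints: list[str], healthy: set[str]) -> str:
    """Choose one healthy backend, as an iptables random-match rule would."""
    candidates = [ep for ep in endpoints if ep in healthy]
    if not candidates:
        raise RuntimeError("no healthy endpoints for this Service")
    return random.choice(candidates)

eps = ["10.1.0.4:8080", "10.1.0.5:8080", "10.1.0.6:8080"]
# The unhealthy endpoint 10.1.0.5 is never selected
print(pick_endpoint(eps, healthy={"10.1.0.4:8080", "10.1.0.6:8080"}))
```

In the real data path this selection happens in the kernel per connection, not per packet, so an established connection keeps hitting the same backend.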
Kubernetes Proxies Security and Controls
All proxies enforce TLS encryption. kubectl proxy trusts your local kubeconfig TLS certs. The apiserver proxy uses server certificates and validates client tokens. kube-proxy does not expose ports by default; it relies on the Node’s API server and kubelet TLS configuration.
RBAC rules govern proxy operations. You grant permissions via a ClusterRole and ClusterRoleBinding. For example:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: proxy-access
rules:
- apiGroups: [""]
  resources: ["pods/proxy", "services/proxy"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: proxy-access-binding
subjects:
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: proxy-access
  apiGroup: rbac.authorization.k8s.io
Use NetworkPolicies to restrict pod-to-pod traffic if your cluster uses a CNI plugin that implements them. Configure connection timeouts and keepalive settings in the kube-proxy configuration to avoid stale sessions.
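A sketch of such a kube-proxy configuration file, using the KubeProxyConfiguration API; the timeout values here are illustrative, not recommendations:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
conntrack:
  # Expire tracking entries for established TCP connections (illustrative value)
  tcpEstablishedTimeout: 24h0m0s
  # Reap half-closed connections sooner to avoid stale sessions
  tcpCloseWaitTimeout: 1h0m0s
```

This is the file referenced by the --config flag in the DaemonSet manifest above.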
Use Cases for Kubernetes Proxies
- Debugging: Forward Pod logs or debug ports to localhost via kubectl proxy.
- Private clusters: Use apiserver proxy to access internal-only Services without VPN.
- Service mesh integration: Direct sidecar traffic using kube-proxy rules.
- Edge scenarios: Run kube-proxy in IPVS mode on resource-constrained Nodes for performance.
- Health checks: Probe services through apiserver proxy for centralized monitoring.
Implementing Kubernetes Proxies
Below is a diagram showing proxy flow from a client through the API server to Pods and Services.
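A Mermaid sketch of that flow; component names and edge labels are illustrative:

```mermaid
flowchart LR
    Client[Client / kubectl] -->|localhost HTTP| KP[kubectl proxy]
    KP -->|HTTPS + bearer token| API[API server]
    Client -->|HTTPS| API
    API -->|apiserver proxy| Target[Pod / Service]
    API --> Node[Node]
    Node --> KProxy[kube-proxy]
    KProxy -->|iptables / IPVS rules| Target
```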

To enable apiserver proxy routing in your cluster, ensure your API server flags include --enable-aggregator-routing=true. For a secure kubectl proxy inside a pod, mount the ServiceAccount token and CA certificate into the container.