Kubernetes v1.36: Mixed Version Proxy Moves to Beta
The Mixed Version Proxy (MVP) solves a real problem during Kubernetes cluster upgrades. While an upgrade is in flight, API servers run mixed versions: a request can land on an older API server that does not recognize a resource type introduced in the newer release, and the client gets a spurious 404 even though another API server in the cluster could have served it. MVP addresses this by routing such requests from the older API server to a newer peer that can handle them, reducing downtime and confusion during upgrades.
Here's how it works: when a client requests a resource from API server A and A cannot serve it, A checks its discovery cache for a capable peer. If it finds one, A proxies the request to API server B, adding the x-kubernetes-peer-proxied header so the request is not proxied a second time. B processes the request and returns the response to A, which forwards it to the client. The result is a consistent view of the cluster's APIs no matter which API server a client happens to reach.
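The routing decision described above can be sketched as follows. This is an illustrative simulation, not actual kube-apiserver code; the function and data shapes are assumptions, though the x-kubernetes-peer-proxied header name comes from the article:

```python
# Sketch of the Mixed Version Proxy routing decision (illustrative only;
# the real implementation lives inside kube-apiserver).

HEADER = "x-kubernetes-peer-proxied"

def route_request(resource, local_apis, peer_discovery_cache, headers=None):
    """Return (server, headers) describing where the request ends up.

    local_apis: set of resources this API server can serve itself.
    peer_discovery_cache: dict mapping peer id -> set of resources it serves.
    """
    headers = dict(headers or {})
    if resource in local_apis:
        return "local", headers          # serve directly, no proxying
    if headers.get(HEADER) == "true":
        return "404", headers            # already proxied once; avoid loops
    for peer, apis in peer_discovery_cache.items():
        if resource in apis:
            headers[HEADER] = "true"     # mark the hop before forwarding
            return peer, headers
    return "404", headers                # no capable peer found

# Example: an old server proxies a newer resource to an upgraded peer.
server, hdrs = route_request(
    "widgets.v2.example.io",
    local_apis={"pods.v1"},
    peer_discovery_cache={"apiserver-b": {"widgets.v2.example.io", "pods.v1"}},
)
print(server, hdrs)  # apiserver-b {'x-kubernetes-peer-proxied': 'true'}
```

Note how a request that already carries the header is answered with a 404 rather than proxied again; that is the loop-prevention role the header plays in the flow above.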
In production, make sure the `--peer-ca-file` flag is set correctly. This CA bundle authenticates the serving certificates of the destination peer API servers; without it, proxying fails with TLS verification errors. Starting with Kubernetes v1.36, MVP is enabled by default, so configure your API servers accordingly to take advantage of it.
Key takeaways
- Understand that MVP prevents 404 errors during cluster upgrades by routing requests to capable peer API servers.
- Set the `--peer-ca-file` flag to authenticate peer API servers and avoid TLS verification errors.
- The feature is controlled by the UnknownVersionInteroperabilityProxy feature gate, which defaults to true in v1.36; use `--feature-gates` only if you need to toggle it.
- Configure `--peer-advertise-ip` and `--peer-advertise-port` if your network setup requires specific addresses.
- Leverage Peer-Aggregated Discovery for a unified view of all APIs available across your cluster.
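The last point above is easy to picture as a merge of discovery documents. A minimal sketch, assuming simplified data shapes (the real response uses the apidiscovery.k8s.io types, not plain dicts):

```python
# Illustrative sketch of what Peer-Aggregated Discovery produces: a merged
# view of API group/versions across the local server and its peers.

def aggregate_discovery(local, peers):
    """Union the group -> versions maps from the local server and each peer."""
    merged = {group: set(versions) for group, versions in local.items()}
    for peer_doc in peers:
        for group, versions in peer_doc.items():
            merged.setdefault(group, set()).update(versions)
    # Sort versions for a stable, readable result.
    return {group: sorted(versions) for group, versions in merged.items()}

# An old server only knows apps/v1; its upgraded peer also serves a new group.
old_server = {"apps": {"v1"}}
new_peer = {"apps": {"v1"}, "widgets.example.io": {"v1alpha1"}}
print(aggregate_discovery(old_server, [new_peer]))
```

The client sees one combined API surface, regardless of which API server answered the discovery request.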
Why it matters
In production, MVP can drastically reduce the risk of service disruptions during upgrades. By ensuring that requests are routed correctly, you maintain a seamless experience for users and reduce the operational overhead associated with managing API versioning.
Code examples
```yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  extraArgs:
    # In the v1beta4 kubeadm API, extraArgs is a list of name/value pairs.
    - name: peer-ca-file
      value: /etc/kubernetes/pki/ca.crt
    # Add peer-advertise-ip and peer-advertise-port here if needed.
```

Clients can also request a discovery document without peer aggregation by adding the `profile=nopeer` parameter to the aggregated discovery Accept header:

```http
Accept: application/json;g=apidiscovery.k8s.io;v=v2;as=APIGroupDiscoveryList;profile=nopeer
```

When NOT to use this
The official docs don't call out specific anti-patterns here. Use your judgment based on your scale and requirements.
Want the complete reference?
Read the official docs.