Unlocking Mutable PersistentVolume Node Affinity in Kubernetes v1.35
Kubernetes v1.35 brings a significant change: mutable PersistentVolume node affinity. Previously, once set, a PV's node affinity was immutable, which could lead to inefficiencies when the underlying infrastructure changed. Now you can modify the node affinity to reflect real-time conditions, keeping scheduling aligned with where your storage is actually reachable.
To use this feature, enable the MutablePVNodeAffinity feature gate on the API server; it is disabled by default. Once enabled, you can change the PV node affinity using the VolumeAttributesClass API. For example, you might loosen the node affinity from a specific zone to a broader region, allowing for more flexible scheduling. Here's how you might define the node affinity in your PV specification:
```yaml
spec:
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values:
          - us-east1-b
```

However, there are important considerations to keep in mind. Changing the PV node affinity does not alter the actual accessibility of the underlying volume. There is also a race condition when tightening node affinity: the scheduler might still place a Pod on an old node that can no longer access the volume. If you update the PV and immediately start new Pods in a script, you may encounter unexpected behavior. Always ensure the underlying volume is updated in the storage provider before making changes to the PV affinity.
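As a concrete sketch of the loosening case, you could drive the change with a merge patch against the PV object. This is illustrative only: the PV name data-pv and the file name loosen-affinity.yaml are hypothetical, and it assumes the nodeAffinity field can be patched directly once the feature gate is on, for example with kubectl patch pv data-pv --type merge --patch-file loosen-affinity.yaml.

```yaml
# loosen-affinity.yaml -- illustrative merge-patch body only.
# Assumes the underlying volume has already been made reachable from the
# whole us-east1 region by the storage provider, and that "data-pv" is the
# (hypothetical) PV being updated.
spec:
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.kubernetes.io/region
          operator: In
          values:
          - us-east1
```

A merge patch replaces the whole nodeSelectorTerms list, so double-check that the new terms still match every node that should be able to mount the volume.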
This feature is currently in alpha, so be cautious about its stability and potential edge cases in production environments.
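If you want to experiment in a test cluster, the gate is turned on through the API server's arguments. The fragment below is a minimal sketch assuming a kubeadm-style control plane where kube-apiserver runs as a static Pod; only the gate name comes from this feature, the rest is the standard --feature-gates flag.

```yaml
# Fragment of the kube-apiserver static Pod manifest
# (typically /etc/kubernetes/manifests/kube-apiserver.yaml on a control-plane node).
# The kubelet recreates the Pod when this file changes, restarting the API
# server with the alpha gate enabled.
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --feature-gates=MutablePVNodeAffinity=true
    # ...existing flags stay as they are...
```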
Key takeaways
- Enable the MutablePVNodeAffinity feature gate on the API server to use this functionality.
- Adjust PV node affinity dynamically to optimize resource allocation and scheduling.
- Remember that changing PV node affinity does not change the volume's accessibility.
- Be aware of race conditions when tightening node affinity; the scheduler may not reflect changes immediately.
- Update the underlying volume in the storage provider before modifying PV affinity.
Why it matters
This feature allows for more dynamic and efficient resource management in Kubernetes, which can lead to better application performance and reduced downtime during infrastructure changes.
Code examples
Restricting the PV to a single zone:

```yaml
spec:
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values:
          - us-east1-b
```

Loosening to a whole region:

```yaml
spec:
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.kubernetes.io/region
          operator: In
          values:
          - us-east1
```

Selecting on a custom provider label:

```yaml
spec:
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: provider.com/disktype.gen1
          operator: In
          values:
          - available
```
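For the custom-label variant to schedule anywhere, nodes that can reach the volume need a matching label. A minimal sketch, with the node name worker-1 purely hypothetical:

```yaml
# Fragment of a Node object carrying the label the PV affinity above selects on.
# Equivalent imperative form:
#   kubectl label node worker-1 provider.com/disktype.gen1=available
apiVersion: v1
kind: Node
metadata:
  name: worker-1
  labels:
    provider.com/disktype.gen1: available
```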
When NOT to use this
The official docs don't call out specific anti-patterns here. Use your judgment based on your scale and requirements.
Want the complete reference?
Read the official docs.