Mastering EKS Container Network Observability for Inter-AZ Traffic
In a cloud-native world, visibility into network traffic is paramount. EKS Container Network Observability provides the tools you need to track inter-AZ and NAT gateway traffic, ensuring your applications run smoothly across availability zones. This capability is vital for diagnosing issues, optimizing performance, and maintaining high availability in your Kubernetes clusters.
At the core of this observability is the Network Flow Monitor Agent, an eBPF-based daemon that collects and monitors network traffic at the pod level. This lets you see how traffic flows between your services, including insights into latency and throughput. You can also control traffic distribution using the `trafficDistribution` field in your Kubernetes Service spec, which guides kube-proxy on how to route traffic to service endpoints. For example, setting `trafficDistribution: PreferSameZone` keeps traffic within the same availability zone where possible, reducing cross-AZ latency and data-transfer costs.
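To make the routing semantics concrete, here is an illustrative model of the same-zone preference, written as a small Python sketch. This is not kube-proxy's actual implementation; the function name, addresses, and zones are made up for illustration. The key behavior it captures is the fallback: same-zone endpoints are preferred when any exist, but traffic is never black-holed when the local zone has no endpoints.

```python
# Illustrative model (NOT kube-proxy's real code) of how
# trafficDistribution: PreferSameZone biases endpoint selection.

def pick_endpoints(client_zone, endpoints):
    """Return the candidate endpoints for a client in client_zone.

    endpoints: list of (address, zone) tuples.
    Prefers endpoints in the client's zone; falls back to all
    endpoints when the local zone has none.
    """
    same_zone = [ep for ep in endpoints if ep[1] == client_zone]
    return same_zone if same_zone else endpoints

eps = [("10.0.1.5", "us-east-1a"), ("10.0.2.7", "us-east-1b")]
print(pick_endpoints("us-east-1a", eps))  # only the us-east-1a endpoint
print(pick_endpoints("us-east-1c", eps))  # no local endpoint -> all endpoints
```

The fallback case is why `PreferSameZone` is a preference rather than a constraint: a zone-local outage degrades to cross-AZ traffic instead of dropped requests.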
In production, be aware of the Kubernetes version requirement: this feature is available from version 1.35 onward. And while traffic distribution is beneficial, pinning workloads to a single AZ is generally not recommended for critical applications that require fault tolerance. Always weigh the availability implications of your architecture alongside its performance gains.
Key takeaways
- Leverage the Network Flow Monitor Agent to gain pod-level insights into network traffic.
- Use the `trafficDistribution` field to control how traffic is routed to service endpoints.
- Ensure your EKS cluster runs Kubernetes version 1.35 or later to utilize these features.
- Avoid moving workloads to a single AZ to maintain high availability and fault tolerance.
Why it matters
In production, understanding and optimizing network traffic can significantly enhance application performance and reliability, directly impacting user experience and operational efficiency.
Code examples
A Service that prefers same-zone routing:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: receiver
spec:
  trafficDistribution: PreferSameZone
  selector:
    app: receiver
  ports:
    - port: 80
```

Verify which nodes and zones the pods landed on:

```shell
kubectl get pods -l 'app in (sender,receiver)' -o custom-columns='NAME:.metadata.name,APP:.metadata.labels.app,NODE:.spec.nodeName,AZ:.metadata.labels.topology\.kubernetes\.io/zone' --sort-by='.metadata.labels.app'
```

A fuller Service example with an explicit protocol and target port:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  trafficDistribution: PreferSameZone
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
```

When NOT to use this
Moving your workload to a single AZ: this eliminates inter-AZ traffic entirely, but it is generally not recommended for critical workloads because it sacrifices the high availability and fault tolerance that multi-AZ deployments provide.
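Rather than pinning to one AZ, the standard Kubernetes alternative is to keep replicas spread across zones with `topologySpreadConstraints` while letting `trafficDistribution` keep requests local. The sketch below uses placeholder names and images; it is a generic Kubernetes pattern, not something specific to this EKS feature.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: receiver            # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: receiver
  template:
    metadata:
      labels:
        app: receiver
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway   # soft spreading across AZs
          labelSelector:
            matchLabels:
              app: receiver
      containers:
        - name: app
          image: nginx      # placeholder image
```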
Want the complete reference?
Read the official docs.