Streamline Your Hybrid Kubernetes Networking with EKS Hybrid Nodes Gateway
In today's hybrid cloud landscape, integrating on-premises infrastructure with cloud resources can be a daunting task. The Amazon EKS Hybrid Nodes gateway addresses this challenge by simplifying networking between your on-premises and AWS environments. It enables you to treat your on-premises nodes as remote nodes in your EKS cluster, effectively bridging the gap between different infrastructures.
The gateway utilizes Cilium's Container Network Interface (CNI) VXLAN Tunnel Endpoint (VTEP) feature to create VXLAN tunnels between EC2-based gateway nodes in your VPC and Cilium-managed hybrid nodes in your on-premises environment. This setup automatically maintains VPC route table entries, directing hybrid pod traffic to the correct gateway instance. Additionally, Cilium on hybrid nodes encapsulates VPC-bound traffic and forwards it through the VXLAN tunnel to the remote VTEP device, ensuring efficient and secure communication.
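On the on-premises side, the VTEP behavior described above maps to Cilium's VXLAN Tunnel Endpoint integration, which is configured through Helm values. A rough sketch of what enabling it might look like follows; the endpoint IPs, CIDRs, and MAC addresses are hypothetical placeholders, and the `vtep.*` value names follow Cilium's Helm chart, so check your Cilium version's documentation before relying on them:

```shell
# Sketch: enable Cilium's VTEP integration on hybrid nodes.
# All addresses below are placeholders -- substitute the VTEP IPs,
# CIDRs, and MACs of your EC2-based gateway instances.
helm upgrade cilium cilium/cilium \
  --namespace kube-system \
  --reuse-values \
  --set vtep.enabled=true \
  --set vtep.endpoint="10.0.1.10 10.0.2.10" \
  --set vtep.cidr="10.0.1.0/24 10.0.2.0/24" \
  --set vtep.mask="255.255.255.0" \
  --set vtep.mac="aa:bb:cc:dd:ee:01 aa:bb:cc:dd:ee:02"
```

Each space-separated entry pairs one gateway endpoint with its CIDR and MAC, which is how Cilium knows where to forward VPC-bound traffic.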
To implement this, you need to configure several parameters, such as enabling EKS Auto Mode by setting the `enabled` field of the `autoModeConfig` parameter to `true` (Auto Mode is opt-in, not on by default). You also need to define your `remoteNetworkConfig`, which includes CIDR blocks for your on-premises networks. Make sure your environment meets the prerequisites: a VPC with public and private subnets, and bi-directional communication between your on-premises network and AWS. This setup can significantly enhance your hybrid cloud operations, but be mindful of the complexities involved in managing network configurations and security groups.
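For the bi-directional communication prerequisite, the on-premises node and pod CIDRs must be able to reach the cluster's control plane security group (and vice versa). A minimal sketch using the AWS CLI, where the security group ID and CIDR are placeholders you would replace with your own values:

```shell
# Sketch: allow HTTPS from the on-premises node network to the
# additional control plane security group.
# sg-0123456789abcdef0 and 192.168.100.0/24 are placeholders.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 443 \
  --cidr 192.168.100.0/24
```

You would typically add a matching rule for each remote node and pod CIDR defined in `remoteNetworkConfig`.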
Key takeaways
- Utilize the Cilium VXLAN Tunnel Endpoint feature for seamless hybrid networking.
- Configure the `autoModeConfig` parameter to enable EKS Auto Mode.
- Define your `remoteNetworkConfig` with appropriate CIDR blocks for on-premises connectivity.
- Ensure bi-directional communication between your on-premises network and AWS for effective operation.
- Set up a VPC with the required public and private subnets across two availability zones.
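Once the cluster is up, one way to confirm that the remote network settings took effect is to read them back from the EKS API. A sketch, assuming a hypothetical cluster name:

```shell
# Sketch: inspect the remote node and pod networks registered with
# the cluster. "my-hybrid-cluster" is a placeholder name.
aws eks describe-cluster \
  --name my-hybrid-cluster \
  --query 'cluster.remoteNetworkConfig' \
  --output json
```

The output should echo back the `remoteNodeNetworks` and `remotePodNetworks` CIDRs you configured.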
Why it matters
This solution can drastically reduce the complexity of managing hybrid environments, allowing teams to focus on application development rather than networking issues. By simplifying connectivity, it enhances performance and reliability for distributed applications.
Code examples
```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: <CLUSTER_NAME>
  region: <CLUSTER_REGION>
  version: <KUBERNETES_VERSION>

# Disable default networking add-ons, as EKS Auto Mode
# comes integrated with VPC CNI, kube-proxy, and CoreDNS
addonsConfig:
  disableDefaultAddons: true

vpc:
  subnets:
    public:
      public-one: { id: "PUBLIC_SUBNET_ID_1" }
      public-two: { id: "PUBLIC_SUBNET_ID_2" }
    private:
      private-one: { id: "PRIVATE_SUBNET_ID_1" }
      private-two: { id: "PRIVATE_SUBNET_ID_2" }

  controlPlaneSubnetIDs: ["PRIVATE_SUBNET_ID_1", "PRIVATE_SUBNET_ID_2"]
  controlPlaneSecurityGroupIDs: ["ADDITIONAL_CONTROL_PLANE_SECURITY_GROUP_ID"]

autoModeConfig:
  enabled: true
  nodePools: ["system", "general-purpose"]

remoteNetworkConfig:
  # Either ssm or ira (IAM Roles Anywhere)
  iam:
    provider: ssm
  # Required
  remoteNodeNetworks:
    - cidrs: ["192.168.100.0/24"]
  # Optional
  remotePodNetworks:
    - cidrs: ["192.168.32.0/23"]
```

Deploy the EKS cluster using the ClusterConfig file.

```shell
eksctl create cluster -f cluster-configuration.yaml
```

Wait for the cluster status to become `Active`.

```shell
aws eks describe-cluster \
  --name <CLUSTER_NAME> \
  --output json \
  --query 'cluster.status'
```

When NOT to use this
The official docs don't call out specific anti-patterns here. Use your judgment based on your scale and requirements.
Want the complete reference?
Read official docs