Migrating from ingress-nginx to Envoy Gateway: A Practical Guide
In the world of Kubernetes, managing ingress traffic efficiently is crucial for application performance and reliability. Migrating from ingress-nginx to Envoy Gateway can significantly enhance your traffic management capabilities. Envoy Gateway is an open-source CNCF project for managing Envoy Proxy, whether standalone or within Kubernetes. The migration enables dynamic provisioning and configuration of Envoy proxies using Gateway API resources, which can simplify your architecture and improve your service mesh capabilities.
The migration process involves configuring Envoy Gateway to use a reserved IP address and shifting all traffic at once. This is achieved by creating a LoadBalancer service for each Gateway object. A critical parameter here is externalTrafficPolicy, which determines how traffic is routed to the Envoy pods. Setting it to Cluster is essential: with the Local setting, the cloud load balancer's health checks fail on any node that is not running an Envoy pod, which can lead to connection failures. The reserved IP address itself is wired in through the EnvoyProxy configuration, ensuring seamless traffic flow.
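For the EnvoyProxy configuration in the code examples to take effect, it has to be attached to the GatewayClass that the Gateway objects reference. A minimal sketch, assuming the class is named envoy and points at the ha-envoy-proxy resource from the examples:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: envoy
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
  parametersRef:
    group: gateway.envoyproxy.io
    kind: EnvoyProxy
    name: ha-envoy-proxy
    namespace: envoy-gateway
```

Every Gateway with gatewayClassName: envoy then inherits the service settings (externalTrafficPolicy, reserved IP, fixed NodePorts) from this EnvoyProxy resource.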
In production, be aware of the potential pitfalls. The major gotcha is the externalTrafficPolicy setting: if you overlook it and keep Local, health checks can fail and mark all backends as unhealthy, causing connection failures. Always ensure your configuration is aligned with your load balancer's health check strategy. The migration is straightforward, but it requires careful attention to detail to avoid service disruptions.
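One way to double-check the setting is to inspect the Service that Envoy Gateway generates for a Gateway. A sketch, assuming the owning-gateway label Envoy Gateway applies to its generated Services and a placeholder Gateway name of eg:

```shell
# Print each generated Envoy service and its traffic policy;
# expect "Cluster" after the migration configuration is applied.
kubectl get svc -n envoy-gateway \
  -l gateway.envoyproxy.io/owning-gateway-name=eg \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.externalTrafficPolicy}{"\n"}{end}'
```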
Key takeaways
- Configure externalTrafficPolicy to Cluster to avoid health check failures.
- Use a reserved loadBalancerIP for consistent traffic routing.
- Create a LoadBalancer service for each Gateway object for proper traffic management.
- Monitor health checks closely during migration to catch issues early.
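Before moving DNS, the new data path can be smoke-tested directly against the reserved IP. A sketch using curl's --resolve flag, where the hostname app.cncf.io is a placeholder matching the wildcard listener in the examples:

```shell
# Force curl to resolve the hostname to the reserved IP, bypassing DNS,
# so TLS/SNI and routing are exercised exactly as they will be after cutover.
curl -sSv --resolve app.cncf.io:443:146.235.214.235 https://app.cncf.io/ -o /dev/null
```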
Why it matters
Migrating to Envoy Gateway can enhance your Kubernetes ingress management, leading to improved traffic handling and reduced downtime. This transition can significantly impact the reliability of your services in production environments.
Code examples
```yaml
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: ha-envoy-proxy
  namespace: envoy-gateway
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyService:
        externalTrafficPolicy: Cluster
        type: LoadBalancer
        patch:
          type: StrategicMerge
          value:
            spec:
              loadBalancerIP: "146.235.214.235" # Reserved IP address on the cloud provider
              ports:
                - name: https-443
                  port: 443
                  targetPort: 10443
                  protocol: TCP
                  nodePort: 32050 # Fixed NodePort for external LB backend and firewall configuration
```

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
...
spec:
  gatewayClassName: envoy
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      hostname: "*.cncf.io"
      tls:
        mode: Terminate
        certificateRefs:
          - name: guac-tls
            namespace: guac
            kind: Secret
            group: ""
          - name: auth-dex-tls
            namespace: auth
            kind: Secret
            group: ""
```

```shell
kubectl get certificate -A -o json | jq -r '.items[] | select(.metadata.ownerReferences[]? | .kind == "Ingress") | "\(.metadata.namespace) \(.metadata.name)"' | while read NS NAME
do
  kubectl patch certificate "$NAME" -n "$NS" --type=json \
    -p=...
done
```

When NOT to use this
The official docs don't call out specific anti-patterns here. Use your judgment based on your scale and requirements.