Understanding the Request Journey
When a user accesses your web application hosted on Kubernetes, the request travels through multiple layers of infrastructure—from the internet to DNS, load balancers, ingress controllers, services, and finally to your application pods. Understanding this flow is crucial for troubleshooting, security, and performance optimization.
Example: User visits https://myapp.example.com/api/users
User / Client
Browser, Mobile App, API Client
The journey begins when a user types a URL or clicks a link in their browser.
- User enters `https://myapp.example.com` in the browser
- Browser initiates an HTTPS connection
- Request includes headers, cookies, and authentication tokens
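Before any of the layers below are involved, the URL itself already encodes which layer will need which part. A small sketch using Python's standard library, applied to the example URL from above:

```python
from urllib.parse import urlparse

# Break the example URL into the parts each later layer cares about.
url = "https://myapp.example.com/api/users"
parts = urlparse(url)

print(parts.scheme)    # "https" -> tells the browser to use TLS on port 443
print(parts.hostname)  # "myapp.example.com" -> what DNS must resolve
print(parts.path)      # "/api/users" -> what the Ingress Controller routes on
```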
DNS Resolution
Azure DNS, Route 53, Cloud DNS, Cloudflare
The domain name is resolved to the public IP address of your load balancer.
- Browser queries the DNS server for `myapp.example.com`
- DNS returns the public IP address (e.g., `203.0.113.10`)
- Browser establishes a TCP connection to this IP on port 443 (HTTPS)
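The lookup itself is a simple name-to-address mapping. A toy sketch with an in-memory table standing in for the real DNS provider (the record and IP are the examples from above):

```python
# A toy DNS table standing in for Azure DNS / Route 53 / Cloudflare.
# 203.0.113.10 is the documentation IP used in the example above.
DNS_RECORDS = {"myapp.example.com": "203.0.113.10"}

def resolve(hostname: str) -> str:
    """Return the public IP the browser will connect to on port 443."""
    try:
        return DNS_RECORDS[hostname]
    except KeyError:
        # Mirrors an NXDOMAIN answer for an unknown name.
        raise LookupError(f"NXDOMAIN: {hostname}") from None

print(resolve("myapp.example.com"))  # 203.0.113.10
```

In a real resolver chain the browser, OS cache, and recursive resolver each get a turn before the authoritative server is asked.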
Cloud Load Balancer (Layer 4)
Azure Load Balancer, AWS NLB, GCP Load Balancer
The external L4 load balancer receives the traffic and forwards it to the Kubernetes cluster nodes.
- Receives TCP/HTTPS traffic on public IP
- Performs SSL/TLS termination (optional)
- Distributes traffic across multiple Kubernetes worker nodes
- Health checks ensure only healthy nodes receive traffic
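The last two bullets combine into one behavior: rotate connections across the node pool, skipping anything that failed its health check. A minimal sketch with hypothetical node IPs:

```python
from itertools import cycle

# Hypothetical worker-node pool; only healthy nodes stay in rotation.
nodes = [
    {"ip": "10.0.1.4", "healthy": True},
    {"ip": "10.0.1.5", "healthy": False},  # failed its health check
    {"ip": "10.0.1.6", "healthy": True},
]

# Round-robin over the nodes that passed their health checks.
healthy = cycle([n["ip"] for n in nodes if n["healthy"]])

# Four incoming connections are spread across the two healthy nodes.
for _ in range(4):
    print(next(healthy))  # 10.0.1.4, 10.0.1.6, 10.0.1.4, 10.0.1.6
```

Real load balancers also track connection counts and re-probe unhealthy nodes, but the skip-and-rotate core is the same.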
Ingress Controller (Layer 7)
NGINX Ingress, Traefik, Istio Gateway, Ambassador
The Ingress Controller routes HTTP/HTTPS traffic based on hostnames, paths, and rules.
- Inspects the HTTP request (hostname, path, headers)
- Matches request against Ingress rules
- Performs SSL/TLS termination if not done at Load Balancer
- Routes `/api/*` to the backend service and `/web/*` to the frontend service
- Can apply rate limiting, authentication, and URL rewriting
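The host-and-path matching above boils down to a small decision function. A sketch of that logic (the service names mirror the example rules; they are illustrative, not a real Ingress API):

```python
def route(host: str, path: str) -> str:
    """Pick a backend the way an Ingress rule would (names are illustrative)."""
    if host != "myapp.example.com":
        return "default-backend"      # no rule matched this hostname
    if path.startswith("/api/"):
        return "backend-service"      # /api/* -> backend
    if path.startswith("/web/"):
        return "frontend-service"     # /web/* -> frontend
    return "default-backend"

print(route("myapp.example.com", "/api/users"))  # backend-service
```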
Kubernetes Service
ClusterIP, NodePort, LoadBalancer
The Service provides a stable virtual IP and load-balances traffic across pod replicas.
- Ingress forwards the request to the Service (e.g., `backend-service:8080`)
- Service uses label selectors to find matching pods
- kube-proxy or the CNI plugin handles load balancing across pod endpoints
- Distributes requests using round-robin or session affinity
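Label selection is just set-of-key/value matching: a pod is an endpoint if its labels contain every pair in the selector. A sketch with hypothetical pods and IPs:

```python
# Hypothetical pods with labels, as the Service's selector would see them.
pods = [
    {"name": "backend-7f9c-abc12", "labels": {"app": "backend"}, "ip": "10.244.1.7"},
    {"name": "frontend-5d8b-def34", "labels": {"app": "frontend"}, "ip": "10.244.2.3"},
    {"name": "backend-7f9c-xyz99", "labels": {"app": "backend"}, "ip": "10.244.1.9"},
]

selector = {"app": "backend"}

# Endpoints: pods whose labels match every key/value pair in the selector.
endpoints = [p["ip"] for p in pods
             if all(p["labels"].get(k) == v for k, v in selector.items())]
print(endpoints)  # ['10.244.1.7', '10.244.1.9']
```

kube-proxy then programs iptables/IPVS rules so that traffic to the Service's virtual IP is spread across exactly this endpoint list.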
Application Pod
Container running your application (Node.js, Python, Java, Go)
The request reaches your application container, which processes the business logic.
- Pod receives the request on the container port (e.g., `8080`)
- Application code processes the request
- May call database, cache (Redis), or other microservices
- Generates response (JSON, HTML, etc.)
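At this layer the infrastructure disappears and only a handler remains: path in, status plus body out. A toy handler for the `/api/users` endpoint (the user data is made up for illustration):

```python
import json

def handle(path: str) -> tuple[int, dict, str]:
    """Toy business logic for the /api/users endpoint (data is illustrative)."""
    if path == "/api/users":
        body = json.dumps({"users": ["alice", "bob"]})
        return 200, {"Content-Type": "application/json"}, body
    return 404, {"Content-Type": "application/json"}, json.dumps({"error": "not found"})

status, headers, body = handle("/api/users")
print(status, body)  # 200 {"users": ["alice", "bob"]}
```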
Backend Services (Optional)
Database, Cache, Message Queue, External APIs
If needed, the application connects to databases, caches, or other microservices.
- Query database (PostgreSQL, MongoDB) for data
- Check cache (Redis, Memcached) for faster lookups
- Call other microservices via internal Service DNS
- Publish events to message queues (Kafka, RabbitMQ)
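The "check cache for faster lookups" bullet is the classic cache-aside pattern. A sketch with in-memory dicts standing in for Redis and the database (key and value are illustrative):

```python
# In-memory stand-ins for Redis and PostgreSQL (names/keys are illustrative).
cache: dict = {}
database = {"user:42": "alice"}

def get_user(key: str) -> str:
    """Cache-aside: try the cache first, fall back to the database."""
    if key in cache:
        return cache[key]     # fast path: cache hit
    value = database[key]     # slow path: query the database
    cache[key] = value        # populate the cache for next time
    return value

print(get_user("user:42"))  # first call: miss, reads the database
print(get_user("user:42"))  # second call: hit, served from cache
```

A production version would also set a TTL on the cached entry and invalidate it on writes.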
Application Generates Response
Pod sends the response back
After processing, the pod generates an HTTP response with a status code, headers, and a body.
- Application builds response (e.g., JSON data, HTML page)
- Sets HTTP status code (200 OK, 404 Not Found, etc.)
- Adds response headers (Content-Type, Cache-Control, etc.)
- Sends response back through the same path
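On the wire, those three pieces serialize into a status line, header lines, and a body separated by a blank line. A sketch of that framing (hypothetical helper, not a real framework API):

```python
def build_http_response(status: int, reason: str, headers: dict, body: str) -> str:
    """Serialize status, headers, and body into HTTP/1.1 wire format."""
    lines = [f"HTTP/1.1 {status} {reason}"]               # status line
    lines += [f"{k}: {v}" for k, v in headers.items()]    # header lines
    return "\r\n".join(lines) + "\r\n\r\n" + body         # blank line, then body

resp = build_http_response(200, "OK",
                           {"Content-Type": "application/json"},
                           '{"ok": true}')
print(resp)
```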
Service → Ingress Controller
Response flows back through the Service
The response travels back through the Kubernetes Service to the Ingress Controller.
Ingress Controller → Load Balancer
Ingress adds headers, applies policies
The Ingress Controller may add security headers, compress the response, and forward it to the Load Balancer.
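Header injection and compression on the way out can be sketched as a small transform over the response (the header names are common security headers; the function itself is illustrative):

```python
import gzip

def apply_ingress_policies(headers: dict, body: bytes) -> tuple:
    """Add security headers and gzip the body, as an Ingress Controller might."""
    headers = {**headers,
               "Strict-Transport-Security": "max-age=31536000",
               "X-Content-Type-Options": "nosniff",
               "Content-Encoding": "gzip"}
    return headers, gzip.compress(body)

hdrs, compressed = apply_ingress_policies(
    {"Content-Type": "application/json"}, b'{"users": []}')
print(sorted(hdrs))
```

With NGINX Ingress, the equivalent behavior is configured declaratively (annotations and ConfigMap options) rather than in code.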
Load Balancer → Internet
Response exits the cluster
The Cloud Load Balancer forwards the response back to the internet.
User Receives Response
Browser renders the page
The response arrives at the user's browser, which renders the content.
- Browser receives HTTP response
- Parses HTML/JSON and renders UI
- Executes JavaScript if needed
- User sees the final result!