Kubernetes Networking

Services, DNS, Ingress, and NetworkPolicy.

Pods come and go. Their IPs come and go with them. Kubernetes networking exists to give you stable names and addresses in front of a shifting set of pods — and to let you control who can talk to whom.

Three primitives do most of the work: Service, Ingress, and NetworkPolicy.

Analogy

Think of a big taxi rank at an airport. Individual drivers (pods) clock in and out all day, but the sign at the front of the rank ("Yellow Cabs — 555-0199") never changes — customers just call that number and whoever is next in line picks them up. The Service is that fixed number. Ingress is the terminal's main road sign that directs arriving traffic to the right rank depending on the destination. NetworkPolicy is the airport security perimeter saying which cabs are allowed to pick up from the international terminal.

The Service

A Service is a stable virtual IP and DNS name that load-balances traffic to a selected set of pods. It does not know about pods directly — it knows about Endpoints (or EndpointSlices), which the control plane keeps in sync with the pods matching the Service's selector.

apiVersion: v1
kind: Service
metadata: { name: api }
spec:
  selector: { app: api }
  ports: [{ port: 8080, targetPort: 8080 }]

Inside the cluster, any pod can reach this Service as api.<namespace>.svc.cluster.local:8080 or just api:8080 when the caller is in the same namespace.
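
To see which pod IPs currently stand behind the Service, you can list its EndpointSlices; the label below is the standard one the control plane stamps on the slices it manages:

kubectl get endpointslices -l kubernetes.io/service-name=api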

Service types

Type                         Reachable from                         Provisioning
ClusterIP (default)          Inside the cluster only                Virtual IP from the service CIDR
NodePort                     A fixed high port on every node        Port in the 30000–32767 range
LoadBalancer                 Public internet                        Cloud LB (AWS NLB, GCP LB, Azure LB)
ExternalName                 In-cluster DNS name only               DNS CNAME to an external hostname; no proxying
Headless (clusterIP: None)   In-cluster DNS, one A record per pod   No virtual IP; used for StatefulSet peer discovery

On bare metal you either run NodePort + your own external LB, or install MetalLB to make LoadBalancer work without a cloud.
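
For reference, a headless Service is just a Service with clusterIP: None. A minimal sketch matching the db-headless name used in the DNS examples below (the port is an assumed database port, not from the original):

apiVersion: v1
kind: Service
metadata: { name: db-headless }
spec:
  clusterIP: None                   # headless: DNS returns the pod IPs directly
  selector: { app: db }
  ports: [{ port: 5432 }]           # assumed database port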

Cluster DNS

Every Service gets a DNS name. Every pod's resolver is pre-configured with a search list: the pod's own namespace first (<namespace>.svc.cluster.local), then svc.cluster.local, then cluster.local, so short names like api resolve without the full suffix.

api.default.svc.cluster.local       # ClusterIP Service
db-0.db-headless.default.svc.cluster.local  # one specific pod of a StatefulSet

kubectl exec into a pod and dig api +search will walk the search list. If api resolves, Services are working; if it doesn't, CoreDNS is down or a NetworkPolicy is blocking DNS.
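
You can see that search list directly from any pod. A quick check (the deploy/api name and the 10.96.0.10 nameserver shown in the expected output are typical values, not guarantees):

kubectl exec deploy/api -- cat /etc/resolv.conf
# search default.svc.cluster.local svc.cluster.local cluster.local
# nameserver 10.96.0.10            # the cluster DNS (CoreDNS) Service IP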

Ingress

A Service of type LoadBalancer per app gets expensive fast. Ingress is the L7 HTTP(S) router that sits in front of many Services:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata: { name: public-routes }
spec:
  ingressClassName: nginx          # must name an installed controller class
  rules:
    - host: api.example.com
      http: { paths: [{ path: /, pathType: Prefix, backend: { service: { name: api, port: { number: 8080 }}}}]}
    - host: app.example.com
      http: { paths: [{ path: /, pathType: Prefix, backend: { service: { name: web, port: { number: 80 }}}}]}

An Ingress object is inert until an Ingress controller (ingress-nginx, Traefik, Contour, cloud-native ones) sees it and configures itself to match. The controller is what actually terminates TLS and forwards to Services.
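
If you have no controller yet, ingress-nginx is a common choice. A sketch of one install path, using the Helm repo the project publishes:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace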

The newer Gateway API is the evolving successor to Ingress: richer routing and a multi-team-friendly split between infrastructure and application roles. Ingress remains the baseline; Gateway is worth evaluating if you're choosing a routing layer today.
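
For a taste of the difference, the api.example.com rule above expressed as a Gateway API HTTPRoute might look like this — a sketch assuming the gateway.networking.k8s.io/v1 CRDs are installed and a Gateway named shared-gateway already exists in the same namespace:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata: { name: api-route }
spec:
  parentRefs: [{ name: shared-gateway }]   # the Gateway this route attaches to (assumed)
  hostnames: [api.example.com]
  rules:
    - backendRefs: [{ name: api, port: 8080 }]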

NetworkPolicy

By default every pod can talk to every other pod. This is usually not what you want in production. NetworkPolicy is the in-cluster firewall.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata: { name: payments-allow-checkout, namespace: payments }
spec:
  podSelector: {}                 # applies to every pod in the namespace
  policyTypes: [Ingress]
  ingress:
    - from:
        - podSelector: { matchLabels: { app: checkout } }

Critical rules:

  • Additive allow-list: the moment any NetworkPolicy selects a pod for Ingress, only the listed from: sources can reach it; everything else is denied. Going from zero policies to one flips the pod's default from allow-all to deny-all-except-allowed.
  • CNI-dependent: NetworkPolicy is enforced by the network plugin (Calico, Cilium, Canal, …). Some older CNIs don't enforce it at all.
  • Egress is separate from ingress — lock down outbound with policyTypes: [Egress] rules (see the default-deny sketch after this list).
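
A common starting point, as referenced above: an explicit default-deny for the namespace, plus an egress allowance for DNS so pods can still resolve names. A sketch, not a drop-in — the kube-system selector relies on the automatic kubernetes.io/metadata.name namespace label:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata: { name: default-deny, namespace: payments }
spec:
  podSelector: {}                   # every pod in the namespace
  policyTypes: [Ingress, Egress]    # no rules listed → deny all, both directions
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata: { name: allow-dns, namespace: payments }
spec:
  podSelector: {}
  policyTypes: [Egress]
  egress:
    - to:
        - namespaceSelector: { matchLabels: { kubernetes.io/metadata.name: kube-system } }
      ports:
        - { protocol: UDP, port: 53 }
        - { protocol: TCP, port: 53 }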

A practical mental model

  • Want two pods to talk inside the cluster? → ClusterIP Service.
  • Public web traffic? → Ingress in front of ClusterIP Services, with a single LoadBalancer for the ingress controller.
  • StatefulSet peer discovery? → Headless Service (clusterIP: None).
  • Point at an external database? → ExternalName Service (sketched after this list), or just resolve the hostname directly.
  • Restrict who can call whom? → NetworkPolicy, default-deny, add explicit allows.
  • On bare metal, no cloud LB? → NodePort + external LB, or MetalLB.
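
The ExternalName case from the list above, sketched (the Service name and target hostname are placeholders):

apiVersion: v1
kind: Service
metadata: { name: billing-db }
spec:
  type: ExternalName
  externalName: db.example.com      # in-cluster clients resolving billing-db get this CNAME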

Debugging

kubectl get svc                   # Service → ClusterIP, external IP, ports
kubectl get endpoints api         # which pod IPs are behind the Service
kubectl describe ingress          # rules and controller status
kubectl exec deploy/api -- wget -qO- http://other-svc.ns:8080
kubectl run -it --rm dns --image=busybox -- nslookup api.default

If endpoints is empty, the selector doesn't match any ready pods. If DNS works but connections hang, suspect NetworkPolicy.