Kubernetes

SWE-literacy set. Enough to answer infra-ish questions in a backend interview without panicking. Not a platform-engineering reference.

Core resources

Pod

Smallest deployable unit — one or more containers that share network and storage.

Pods are disposable. You rarely create a Pod directly; a Deployment / Job / DaemonSet creates them for you and replaces them when they die. IP addresses change on restart — never refer to a pod by IP.
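For orientation, a minimal Pod manifest — name and image are placeholders, and in practice this spec would live inside a Deployment's pod template rather than be applied directly:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app            # placeholder name
spec:
  containers:
    - name: app
      image: my-app:1.0   # placeholder image
      ports:
        - containerPort: 8080
```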

Deployment

Declarative management of replicated stateless pods.

You describe "I want N replicas of this pod template"; the controller makes it true. Rolling updates, pause/resume, and one-command rollback come free (`kubectl rollout undo`).
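A sketch of that "N replicas of this template" shape — names and image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3               # "I want 3 replicas"; the controller makes it true
  selector:
    matchLabels:
      app: my-app           # must match the template labels below
  template:                 # the pod template the controller stamps out
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my-app:1.0   # placeholder image
```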

Service

Stable virtual IP + DNS name in front of a changing set of pods.

Four types:
  • ClusterIP — in-cluster only (the default)
  • NodePort — exposed on every node at a high static port (default range 30000–32767)
  • LoadBalancer — cloud-provisioned external LB
  • ExternalName — CNAME to an external host; useful for abstracting external dependencies
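The pods behind a Service are selected by label, never by IP — which is what makes the pod churn invisible to callers. A minimal ClusterIP sketch (names are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: ClusterIP        # default; can be omitted
  selector:
    app: my-app          # matches pod labels, not pod IPs
  ports:
    - port: 80           # the Service's stable port
      targetPort: 8080   # the container's port
```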

Ingress

HTTP(S) routing to services based on hostname / path.

Not a resource the cluster implements on its own — requires an Ingress Controller (nginx-ingress, Traefik, cloud-native). The Ingress object is the spec; the Controller turns it into real load-balancer rules.
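A host/path routing sketch — hostname, class name, and backend Service are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: nginx        # tells which controller should act on this spec
  rules:
    - host: app.example.com      # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app     # placeholder Service name
                port:
                  number: 80
```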

ConfigMap

Non-sensitive key/value config, decoupled from the image.

Consume as env vars (`envFrom` / `valueFrom`) or mount as files under a volume. Changes do NOT auto-restart pods — either roll the Deployment or use a tool like reloader. A ConfigMap is the right place for feature flags, environment names, tuning knobs.
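A sketch of a ConfigMap plus a pod consuming every key as an env var — names, keys, and image are placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
data:
  LOG_LEVEL: "info"              # values are plain strings — not for secrets
  FEATURE_NEW_CHECKOUT: "true"
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: my-app:1.0          # placeholder image
      envFrom:
        - configMapRef:
            name: my-app-config  # each key becomes an env var
```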

Secret

Same as ConfigMap but for sensitive values.

Secrets are NOT encrypted by default — only base64-encoded. Enable encryption-at-rest in etcd, restrict who can read Secrets via RBAC, and consider an external secrets operator for real key management.
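A minimal sketch — name, key, and value are placeholders. `stringData` lets you write plain text; the API server base64-encodes it on storage:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:              # plain text here; stored base64-encoded (NOT encrypted) in etcd
  DB_PASSWORD: s3cr3t    # placeholder value — never commit real secrets to git
```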

Probes

The most-asked K8s interview topic. Each probe answers a different question — mixing them up is the classic mistake.

Liveness — answers "Is this container wedged? Should I kill it?" On failure the kubelet restarts the container. Use for deadlock detection — the process is running but making no progress. Keep it cheap; a flaky liveness probe causes restart loops.

Readiness — answers "Is this container ready to serve traffic?" On failure the pod is removed from Service endpoints until it recovers. Use for warm-up, transient downstream outages, circuit-breaker integration. Failing a readiness probe is NOT destructive — the pod keeps running.

Startup — answers "Has the app finished starting?" It disables liveness / readiness until it passes; on timeout the container is killed. Use for slow-starting apps (JVM warm-up, large caches) — lets liveness keep a tight timeout in steady state without triggering during startup.

Senior insight

A liveness probe that calls the same HTTP endpoint as readiness is usually wrong — liveness should be a process-health check (can the main loop run?), not a dependency-health check. If your DB is down and liveness hits an endpoint that queries the DB, the pod restarts forever instead of going out of the service rotation while the DB recovers.
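A container fragment putting the three probes together — paths, port, and timings are assumptions, not a recommendation:

```yaml
containers:
  - name: app
    image: my-app:1.0              # placeholder image
    startupProbe:                  # gates liveness/readiness during warm-up
      httpGet: { path: /healthz, port: 8080 }
      failureThreshold: 30         # up to 30 * 5s = 150s to finish starting
      periodSeconds: 5
    livenessProbe:                 # cheap process-health check — no dependency calls
      httpGet: { path: /healthz, port: 8080 }
      periodSeconds: 10
    readinessProbe:                # may check downstream dependencies
      httpGet: { path: /ready, port: 8080 }
      periodSeconds: 5
```

Note the separate endpoints: `/healthz` answers "can the main loop run?", while `/ready` is allowed to fail when a dependency is down — exactly the split the insight above describes.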

Requests vs limits vs QoS class

Requests are what the scheduler reserves when placing a pod; limits are the runtime ceiling — CPU above the limit is throttled, memory above the limit gets the container OOM-killed. The QoS class falls out of them: Guaranteed (requests == limits for CPU and memory on every container), Burstable (anything in between), BestEffort (nothing set) — and it drives eviction order under node memory pressure.

Senior insight

Guaranteed QoS requires requests == limits for both CPU and memory, so memory alone is not enough — but setting memory requests = memory limits is still the simplest protection against surprise memory-pressure evictions for critical workloads. CPU limits are more subtle: they throttle even when spare CPU is available. For most workloads, set CPU requests but leave CPU limits unset.
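A container `resources` fragment following that advice — the values are placeholders:

```yaml
resources:
  requests:
    cpu: "250m"        # scheduler reserves this for placement
    memory: "512Mi"
  limits:
    memory: "512Mi"    # == request: usage can't exceed the reservation,
                       # so the pod isn't an early memory-eviction target
    # no cpu limit: avoids throttling when spare CPU is available
    # (this yields Burstable, not Guaranteed, QoS)
```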

Rolling updates

Apply a manifest change

kubectl apply -f deployment.yaml

Watch the rollout progress

kubectl rollout status deployment/my-app

Pause mid-rollout (hold current progress)

kubectl rollout pause deployment/my-app

Resume

kubectl rollout resume deployment/my-app

Rollback to previous revision

kubectl rollout undo deployment/my-app

Rollback to a specific revision

kubectl rollout undo deployment/my-app --to-revision=3

See rollout history

kubectl rollout history deployment/my-app
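The pacing of a rolling update is configured on the Deployment itself, not on the kubectl command — a sketch with assumed values:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 extra pod above the desired count
      maxUnavailable: 0    # never drop below the desired count during the roll
```

With `maxUnavailable: 0` each new pod must pass its readiness probe before an old one is terminated, which is how a rollout stays zero-downtime.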

kubectl essentials

List pods in the current namespace

kubectl get pods

Watch pods live

kubectl get pods -w

All resources in a namespace

kubectl get all -n my-namespace

Wide output (IPs, node)

kubectl get pods -o wide

Full YAML of a resource

kubectl get pod my-pod -o yaml

Events + config + status on a single resource

kubectl describe pod my-pod

Tail logs

kubectl logs -f my-pod

Logs from previous (crashed) container

kubectl logs my-pod --previous

Multi-container pod — pick a container

kubectl logs my-pod -c my-container

Interactive shell into a container

kubectl exec -it my-pod -- sh

Port-forward to your local machine

kubectl port-forward my-pod 8080:80

Resource usage (requires metrics-server)

kubectl top pods

Context + namespace at a glance

kubectl config current-context && kubectl config view --minify | grep namespace

Out of scope for a SWE interview

These topics exist and matter — but they sit in platform-engineering territory. A software engineer is rarely asked to design them from scratch in a loop.

  • StatefulSets (used for databases, message brokers — stateful workloads)
  • DaemonSets (one pod per node — monitoring agents, log shippers)
  • Jobs and CronJobs (batch workloads)
  • HPA / VPA (horizontal / vertical pod autoscaling)
  • Operators and CRDs (extending the API)
  • Helm charts internals, templating, upgrade semantics
  • Network policies, service mesh (Istio, Linkerd, Cilium)
  • Admission controllers, pod security standards, RBAC deep-dive