This section covers the controllers that manage Pods for you: ReplicaSets, Deployments and rolling updates, DaemonSets, StatefulSets, Jobs, and CronJobs.
You rarely create Pods directly. If a bare Pod is deleted, evicted, or its node dies, nothing recreates it; that Pod is gone for good. Controllers solve this by managing the Pod lifecycle for you.
Every controller in Kubernetes follows the same pattern:
┌────────────────────────────────────────────┐
│ RECONCILIATION LOOP │
│ │
│ 1. Read DESIRED state (your YAML spec) │
│ │ │
│ ▼ │
│ 2. OBSERVE actual state (running Pods) │
│ │ │
│ ▼ │
│ 3. ACT to make actual match desired │
│ │ │
│ └──────── loop forever ─────► │
└────────────────────────────────────────────┘
Example: You declare replicas: 3
- Controller sees 2 Pods running → creates 1 more
- Controller sees 4 Pods running → deletes 1
- Controller sees 3 Pods running → does nothing
This "desired state vs actual state" model is fundamental. You never say "start a Pod." You say "I want 3 Pods running" and the controller makes it so, continuously.
Tip: This reconciliation pattern is the reason Kubernetes is called a declarative system. You declare the end state; controllers figure out how to get there. This is also how self-healing works: if a Pod dies, the controller notices the drift and creates a replacement.
A ReplicaSet ensures a specified number of identical Pods are running at any time. It is the basic building block behind Deployments.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: web
labels:
app: web
spec:
replicas: 3
selector:
matchLabels:
app: web # ← must match template labels
template: # ← Pod template (same as a Pod spec)
metadata:
labels:
app: web # ← must match selector above
spec:
containers:
- name: nginx
image: nginx:1.25
ports:
- containerPort: 80
The selector and template labels must match. The selector tells the ReplicaSet which Pods to count as "mine." The template defines what new Pods should look like.
kubectl apply -f replicaset.yaml
kubectl get rs
# NAME DESIRED CURRENT READY AGE
# web 3 3 3 10s
kubectl get pods -l app=web
# NAME READY STATUS RESTARTS AGE
# web-7k2xq 1/1 Running 0 10s
# web-b8m4t 1/1 Running 0 10s
# web-dn5rz 1/1 Running 0 10s
# Delete a Pod — the ReplicaSet immediately creates a replacement
kubectl delete pod web-7k2xq
kubectl get pods -l app=web
# NAME READY STATUS RESTARTS AGE
# web-b8m4t 1/1 Running 0 45s
# web-dn5rz 1/1 Running 0 45s
# web-pq8nv 1/1 Running 0 3s ← replacement
Gotcha: You almost never create ReplicaSets directly. Deployments manage ReplicaSets for you and add rolling update capabilities. If an interviewer asks "when would you create a ReplicaSet directly?", the answer is almost always "I wouldn't — I'd use a Deployment."
Deployments are the workhorse of Kubernetes. They manage ReplicaSets, which in turn manage Pods. This three-layer hierarchy enables rolling updates and rollbacks.
┌─────────────────────────────────────────────────────────────┐
│ DEPLOYMENT │
│ name: web │
│ replicas: 3 │
│ image: nginx:1.25 │
│ │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ REPLICASET (managed) │ │
│ │ web-7d4b8c6f5 │ │
│ │ replicas: 3 │ │
│ │ │ │
│ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ │ │
│ │ │ Pod │ │ Pod │ │ Pod │ │ │
│ │ │ web- │ │ web- │ │ web- │ │ │
│ │ │ 7d4b- │ │ 7d4b- │ │ 7d4b- │ │ │
│ │ │ abc12 │ │ def34 │ │ ghi56 │ │ │
│ │ └──────────┘ └──────────┘ └──────────┘ │ │
│ └──────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
Deployment → creates/manages → ReplicaSet → creates/manages → Pods
The Pod names follow the pattern: <deployment>-<replicaset-hash>-<random>. When you see web-7d4b8c6f5-abc12, the 7d4b8c6f5 is the ReplicaSet template hash.
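You can see this hash directly as a label on the Pods, and the ownerReferences chain confirms who manages what. A quick check (output abbreviated; the hash and Pod names are illustrative):
kubectl get pods -l app=web --show-labels
# NAME                  READY   STATUS    RESTARTS   AGE   LABELS
# web-7d4b8c6f5-abc12   1/1     Running   0          15s   app=web,pod-template-hash=7d4b8c6f5
# The ReplicaSet itself is owned by the Deployment
kubectl get rs web-7d4b8c6f5 -o jsonpath='{.metadata.ownerReferences[0].kind}'
# Deployment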
apiVersion: apps/v1
kind: Deployment
metadata:
name: web
labels:
app: web
spec:
replicas: 3
selector:
matchLabels:
app: web
strategy:
type: RollingUpdate # default
rollingUpdate:
maxSurge: 1 # max Pods over desired count during update
maxUnavailable: 0 # max Pods that can be unavailable during update
revisionHistoryLimit: 10 # keep 10 old ReplicaSets for rollback (default)
minReadySeconds: 5 # wait 5s after Pod is Ready before considering it Available
template:
metadata:
labels:
app: web
spec:
containers:
- name: nginx
image: nginx:1.25
ports:
- containerPort: 80
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 250m
memory: 256Mi
# Imperative creation
kubectl create deployment web --image=nginx:1.25 --replicas=3
# deployment.apps/web created
# Or declaratively, from the manifest above
kubectl apply -f deployment.yaml
# deployment.apps/web created
kubectl get deploy
# NAME READY UP-TO-DATE AVAILABLE AGE
# web 3/3 3 3 15s
kubectl get rs
# NAME DESIRED CURRENT READY AGE
# web-7d4b8c6f5 3 3 3 15s
kubectl get pods
# NAME READY STATUS RESTARTS AGE
# web-7d4b8c6f5-abc12 1/1 Running 0 15s
# web-7d4b8c6f5-def34 1/1 Running 0 15s
# web-7d4b8c6f5-ghi56 1/1 Running 0 15s
# Imperative scaling
kubectl scale deployment web --replicas=5
# deployment.apps/web scaled
kubectl get deploy web
# NAME READY UP-TO-DATE AVAILABLE AGE
# web 5/5 5 5 2m
# Or edit the YAML and re-apply
# spec:
# replicas: 5
kubectl apply -f deployment.yaml
Scaling doesn't create a new ReplicaSet. The existing ReplicaSet simply adjusts its Pod count.
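You can verify this after scaling: the same ReplicaSet (same template hash) simply reports a higher count (output is illustrative):
kubectl get rs
# NAME            DESIRED   CURRENT   READY   AGE
# web-7d4b8c6f5   5         5         5       3m   ← same ReplicaSet, higher count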
Rolling updates are the default strategy. When you change the Pod template (image, env, resources, etc.), the Deployment creates a new ReplicaSet and gradually shifts Pods from old to new.
Rolling Update: nginx:1.25 → nginx:1.26
Step 1: New RS created, starts scaling up
OLD RS (web-7d4b8c6f5): ████ ████ ████ 3 Pods
NEW RS (web-59d6c8b449): 0 Pods
Step 2: New RS scales up, old RS scales down
OLD RS (web-7d4b8c6f5): ████ ████ 2 Pods
NEW RS (web-59d6c8b449): ████ ████ 2 Pods
Step 3: Continues until complete
OLD RS (web-7d4b8c6f5): ████ 1 Pod
NEW RS (web-59d6c8b449): ████ ████ ████ 3 Pods
Step 4: Complete
OLD RS (web-7d4b8c6f5): 0 Pods (kept for rollback)
NEW RS (web-59d6c8b449): ████ ████ ████ 3 Pods
# Change the image
kubectl set image deployment/web nginx=nginx:1.26
# deployment.apps/web image updated
# Watch the rollout
kubectl rollout status deployment web
# Waiting for deployment "web" rollout to finish: 1 out of 3 new replicas have been updated...
# Waiting for deployment "web" rollout to finish: 2 out of 3 new replicas have been updated...
# Waiting for deployment "web" rollout to finish: 2 of 3 updated replicas are available...
# deployment "web" successfully rolled out
# Now you'll see two ReplicaSets
kubectl get rs
# NAME DESIRED CURRENT READY AGE
# web-59d6c8b449 3 3 3 30s ← new (nginx:1.26)
# web-7d4b8c6f5 0 0 0 5m ← old (kept for rollback)
These two parameters control the speed and safety of rolling updates:
| Parameter | Meaning | Example (3 replicas) |
|---|---|---|
| maxSurge | Max extra Pods allowed above desired count | 1 means up to 4 Pods total |
| maxUnavailable | Max Pods that can be down during update | 1 means at least 2 must be ready |
Common configurations:
# Default — balanced speed and availability
strategy:
rollingUpdate:
maxSurge: 25% # rounds up: ceil(3 * 0.25) = 1
    maxUnavailable: 25% # rounds down: floor(3 * 0.25) = 0 → all 3 must stay available
# Zero-downtime — always have full capacity
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0 # never go below 3 running Pods
# Fast update — allow temporary reduced capacity
strategy:
rollingUpdate:
maxSurge: 2
maxUnavailable: 1 # can drop to 2 Pods briefly
Tip: For production, use maxUnavailable: 0 with maxSurge: 1. This means you always have at least the desired number of Pods running. The trade-off: you need one extra Pod's worth of cluster capacity during updates.
Every time a Deployment's Pod template changes, a new revision is recorded. You can roll back to any previous revision.
# View revision history
kubectl rollout history deployment web
# deployment.apps/web
# REVISION CHANGE-CAUSE
# 1 <none>
# 2 <none>
# See details of a specific revision
kubectl rollout history deployment web --revision=1
# deployment.apps/web with revision #1
# Pod Template:
# Labels: app=web
# pod-template-hash=7d4b8c6f5
# Containers:
# nginx:
# Image: nginx:1.25
# Roll back to the previous revision
kubectl rollout undo deployment web
# deployment.apps/web rolled back
# Roll back to a specific revision
kubectl rollout undo deployment web --to-revision=1
# deployment.apps/web rolled back
# Verify
kubectl get deploy web -o jsonpath='{.spec.template.spec.containers[0].image}'
# nginx:1.25
Tip: To make CHANGE-CAUSE useful, annotate your Deployments when updating:
kubectl annotate deployment web kubernetes.io/change-cause="Update nginx to 1.26"
Or use kubectl apply -f deployment.yaml with --record (deprecated but still works in older clusters).
The revisionHistoryLimit field controls how many old ReplicaSets are kept. Default is 10. Set it lower if you have many Deployments and want to save resources (each old ReplicaSet is a Kubernetes object, even with 0 replicas).
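For example, to keep only the three most recent ReplicaSets (a minimal sketch; pick a value that matches how far back you realistically roll back):
spec:
  revisionHistoryLimit: 3   # ReplicaSets beyond this limit are garbage collected, and their revisions can no longer be rolled back to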
RollingUpdate (the default) gradually replaces Pods. Zero downtime when configured correctly. Suitable for most stateless applications.
Recreate kills all existing Pods before creating new ones. There will be downtime.
spec:
strategy:
type: Recreate # no rollingUpdate settings needed
When to use Recreate:
- The old and new versions cannot run side by side (incompatible schemas, data formats, or protocols).
- The workload holds an exclusive resource, such as a ReadWriteOnce volume that only one Pod can mount.
- Brief downtime is acceptable (dev/test environments, internal tools).
You can pause a rollout partway through to test the new version with a fraction of traffic (a simple canary technique):
# Start an update
kubectl set image deployment/web nginx=nginx:1.26
# Immediately pause — only some Pods will have updated
kubectl rollout pause deployment web
# deployment.apps/web paused
# Check the current state
kubectl get rs
# NAME DESIRED CURRENT READY AGE
# web-59d6c8b449 1 1 1 5s ← new, partially rolled out
# web-7d4b8c6f5 2 2 2 10m ← old, still running
# Test the new version, check metrics, verify logs...
# If everything looks good, resume
kubectl rollout resume deployment web
# deployment.apps/web resumed
# If something is wrong, undo instead
kubectl rollout undo deployment web
A DaemonSet ensures that one Pod runs on every node (or a subset of nodes). When a new node joins the cluster, the DaemonSet automatically schedules a Pod on it. When a node is removed, the Pod is garbage collected.
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd
namespace: kube-system
labels:
app: fluentd
spec:
selector:
matchLabels:
app: fluentd
updateStrategy:
type: RollingUpdate # or OnDelete
rollingUpdate:
maxUnavailable: 1 # update one node at a time
template:
metadata:
labels:
app: fluentd
spec:
tolerations:
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule # run on control-plane nodes too
containers:
- name: fluentd
image: fluentd:v1.16
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: varlog
mountPath: /var/log
volumes:
- name: varlog
hostPath:
path: /var/log
kubectl apply -f daemonset.yaml
kubectl get ds -n kube-system
# NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
# fluentd 3 3 3 3 3 <none> 10s
# One Pod per node
kubectl get pods -n kube-system -l app=fluentd -o wide
# NAME READY STATUS RESTARTS AGE IP NODE
# fluentd-abc12 1/1 Running 0 10s 10.1.0.5 node-1
# fluentd-def34 1/1 Running 0 10s 10.1.1.8 node-2
# fluentd-ghi56 1/1 Running 0 10s 10.1.2.3 node-3
Use nodeSelector or nodeAffinity to target specific nodes:
spec:
template:
spec:
nodeSelector:
disk: ssd # only nodes labeled disk=ssd
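The label must actually exist on the nodes you want to target. A quick sketch (the node name and label are hypothetical):
# Label the nodes that should run the DaemonSet Pod
kubectl label node node-1 disk=ssd
# node/node-1 labeled
# Nodes without the label get no Pod; removing the label evicts the DaemonSet Pod from that node
kubectl label node node-1 disk-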
| Strategy | Behavior |
|---|---|
| RollingUpdate (default) | Updates Pods one node at a time. maxUnavailable controls the pace. |
| OnDelete | Only updates a Pod when you manually delete it. Gives you full control over when each node updates. |
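With OnDelete, pushing a new image changes nothing until you delete Pods yourself, which lets you update node by node on your own schedule. A sketch (the new image tag and Pod name are illustrative):
# Switch the DaemonSet to manual updates
kubectl patch ds fluentd -n kube-system -p '{"spec":{"updateStrategy":{"type":"OnDelete"}}}'
# Roll out the new image definition — existing Pods keep running the old version
kubectl set image daemonset/fluentd fluentd=fluentd:v1.17 -n kube-system
# Update one node by deleting its Pod; the DaemonSet recreates it with the new image
kubectl delete pod fluentd-abc12 -n kube-system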
Gotcha: DaemonSet rolling updates don't surge by default (maxSurge defaults to 0) — each node runs exactly one DaemonSet Pod, so during a rolling update the old Pod on a node is killed before the new one starts there.
StatefulSets are for workloads that need stable identity and persistent storage. Unlike Deployments, which treat all Pods as interchangeable, StatefulSets give each Pod a unique, predictable identity.
Each Pod gets a stable, ordinal name: <statefulset>-0, <statefulset>-1, <statefulset>-2, etc. These names never change.
Deployment Pods:          StatefulSet Pods:
(interchangeable)         (unique identity)
web-7d4b-abc12            mysql-0  ← always the primary
web-7d4b-def34            mysql-1  ← always a replica
web-7d4b-ghi56            mysql-2  ← always a replica
Random names,             Predictable names,
shared storage,           per-Pod storage,
any order                 strict ordering
StatefulSets require a headless Service (clusterIP: None) to give each Pod a stable DNS name:
Normal Service: mysql.default.svc.cluster.local → random Pod IP
Headless Service: mysql-0.mysql.default.svc.cluster.local → Pod 0's IP
mysql-1.mysql.default.svc.cluster.local → Pod 1's IP
mysql-2.mysql.default.svc.cluster.local → Pod 2's IP
apiVersion: v1
kind: Service
metadata:
name: mysql
labels:
app: mysql
spec:
clusterIP: None # ← headless Service
selector:
app: mysql
ports:
- port: 3306
name: mysql
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mysql
spec:
serviceName: mysql # ← must reference the headless Service
replicas: 3
selector:
matchLabels:
app: mysql
podManagementPolicy: OrderedReady # default: create 0, wait, create 1, wait, create 2
updateStrategy:
type: RollingUpdate
rollingUpdate:
partition: 0 # update all Pods (set higher for canary)
template:
metadata:
labels:
app: mysql
spec:
containers:
- name: mysql
image: mysql:8.0
ports:
- containerPort: 3306
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secret
key: password
volumeMounts:
- name: data
mountPath: /var/lib/mysql
volumeClaimTemplates: # ← per-Pod persistent storage
- metadata:
name: data
spec:
accessModes: ["ReadWriteOnce"]
storageClassName: standard
resources:
requests:
storage: 10Gi
kubectl apply -f statefulset.yaml
# Pods are created in order
kubectl get pods -w
# NAME READY STATUS RESTARTS AGE
# mysql-0 1/1 Running 0 30s ← created first
# mysql-1 1/1 Running 0 20s ← created after mysql-0 is Ready
# mysql-2 1/1 Running 0 10s ← created after mysql-1 is Ready
# Each Pod has its own PVC
kubectl get pvc
# NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
# data-mysql-0 Bound pv-abc123 10Gi RWO standard 30s
# data-mysql-1 Bound pv-def456 10Gi RWO standard 20s
# data-mysql-2 Bound pv-ghi789 10Gi RWO standard 10s
# Stable DNS names via the headless Service
kubectl run test --image=busybox --rm -it -- nslookup mysql-0.mysql
# Name: mysql-0.mysql.default.svc.cluster.local
# Address: 10.1.0.50
| Policy | Behavior |
|---|---|
| OrderedReady (default) | Create Pods sequentially (0 → 1 → 2). Wait for each to be Ready before creating the next. Delete in reverse order (2 → 1 → 0). |
| Parallel | Create and delete all Pods simultaneously. Use when ordering doesn't matter but you still need stable identity and storage. |
Ordering and identity guarantees:
- Scaling up: mysql-3 is created only after mysql-2 is Ready.
- Scaling down: mysql-2 is deleted first, then mysql-1 (reverse ordinal order).
- If mysql-1 dies, it is recreated with the same name and reattached to the same PVC (data-mysql-1).
- kubectl delete statefulset mysql deletes the StatefulSet but does not delete the PVCs. Data is preserved. You must delete PVCs manually.
Gotcha: volumeClaimTemplates create PVCs that persist after the StatefulSet is deleted. This is intentional — it protects data. But it also means you can accumulate orphaned PVCs. Always clean up PVCs when decommissioning a StatefulSet: kubectl delete pvc -l app=mysql.
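The partition field in the updateStrategy above enables a canary-style StatefulSet update: only Pods with an ordinal greater than or equal to the partition receive the new template. A sketch (the image tag is illustrative):
# Only update mysql-2 (ordinals >= 2); mysql-0 and mysql-1 keep the old template
kubectl patch statefulset mysql -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'
kubectl set image statefulset/mysql mysql=mysql:8.0.36
# Once the canary looks healthy, lower the partition to roll out the rest
kubectl patch statefulset mysql -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'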
A Job creates Pods that run to completion. Unlike Deployments (which restart Pods forever), a Job's Pods exit when done and are not restarted.
apiVersion: batch/v1
kind: Job
metadata:
name: data-migration
spec:
completions: 1 # how many Pods must succeed (default: 1)
parallelism: 1 # how many Pods run at once (default: 1)
backoffLimit: 3 # retry count before marking as Failed
activeDeadlineSeconds: 600 # timeout: kill after 10 minutes
template:
spec:
restartPolicy: Never # required: Never or OnFailure
containers:
- name: migrate
image: myapp/migrate:v2
command: ["python", "migrate.py"]
env:
- name: DATABASE_URL
valueFrom:
secretKeyRef:
name: db-secret
key: url
kubectl apply -f job.yaml
kubectl get jobs
# NAME COMPLETIONS DURATION AGE
# data-migration 1/1 25s 30s
kubectl get pods
# NAME READY STATUS RESTARTS AGE
# data-migration-7k2xq 0/1 Completed 0 30s
# Check logs
kubectl logs data-migration-7k2xq
# Running migration v2...
# Applied 15 migrations.
# Done.
# Single task (default)
completions: 1
parallelism: 1
# → 1 Pod runs to completion
# Fixed completion count — process exactly 5 items
completions: 5
parallelism: 2
# → 2 Pods run at a time, until 5 total succeed
# Work queue — Pods determine when to stop themselves
completions: null # or omit
parallelism: 3
# → 3 Pods run simultaneously, each pulls work from a queue
# → once a Pod succeeds, no new Pods are started; the Job completes when the remaining Pods finish
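As a concrete sketch of the fixed-completion-count pattern, this hypothetical manifest processes five work items, two at a time (the name, image, and command are placeholders):
apiVersion: batch/v1
kind: Job
metadata:
  name: image-resize          # hypothetical batch task
spec:
  completions: 5              # five Pods must succeed in total
  parallelism: 2              # at most two run at the same time
  backoffLimit: 4
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: myapp/resize:v1       # hypothetical image
        command: ["python", "resize_batch.py"]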
| Policy | Behavior |
|---|---|
| Never | If the Pod fails, the Job creates a new Pod. Old Pods are kept (for log inspection). You may see multiple failed Pods. |
| OnFailure | If the container fails, kubelet restarts it in the same Pod. Cleaner, but you lose logs from previous attempts. |
Tip: Use restartPolicy: Never during development (you can inspect failed Pods). Use OnFailure in production (fewer leftover Pods to clean up).
activeDeadlineSeconds is a hard timeout for the entire Job. If the Job hasn't completed in that many seconds, Kubernetes kills all running Pods and marks the Job as Failed:
spec:
activeDeadlineSeconds: 3600 # fail after 1 hour
backoffLimit: 3 # also fail after 3 retries
Both limits apply. Whichever triggers first marks the Job as Failed.
A CronJob creates Jobs on a schedule, using standard cron syntax.
apiVersion: batch/v1
kind: CronJob
metadata:
name: db-backup
spec:
schedule: "0 2 * * *" # daily at 2:00 AM
concurrencyPolicy: Forbid # don't start new if previous is still running
startingDeadlineSeconds: 300 # if missed by 5 min, skip this run
successfulJobsHistoryLimit: 3 # keep last 3 successful Jobs
failedJobsHistoryLimit: 1 # keep last 1 failed Job
suspend: false # set true to pause the schedule
jobTemplate:
spec:
backoffLimit: 2
activeDeadlineSeconds: 3600
template:
spec:
restartPolicy: OnFailure
containers:
- name: backup
image: myapp/db-backup:latest
command: ["/bin/sh", "-c", "pg_dump $DATABASE_URL | gzip > /backups/$(date +%F).sql.gz"]
env:
- name: DATABASE_URL
valueFrom:
secretKeyRef:
name: db-secret
key: url
kubectl apply -f cronjob.yaml
kubectl get cronjobs
# NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
# db-backup 0 2 * * * False 0 <none> 5s
# Manually trigger a run (for testing)
kubectl create job db-backup-manual --from=cronjob/db-backup
# job.batch/db-backup-manual created
# Check the Job
kubectl get jobs
# NAME COMPLETIONS DURATION AGE
# db-backup-manual 1/1 12s 15s
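The suspend field shown in the manifest can be toggled without deleting the CronJob, which is handy during maintenance windows (already-running Jobs are not affected):
# Pause the schedule
kubectl patch cronjob db-backup -p '{"spec":{"suspend":true}}'
# cronjob.batch/db-backup patched
# Resume it later
kubectl patch cronjob db-backup -p '{"spec":{"suspend":false}}'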
┌───────────── minute (0 - 59)
│ ┌───────────── hour (0 - 23)
│ │ ┌───────────── day of month (1 - 31)
│ │ │ ┌───────────── month (1 - 12)
│ │ │ │ ┌───────────── day of week (0 - 6, Sun = 0)
│ │ │ │ │
* * * * *
| Expression | Meaning |
|---|---|
| */5 * * * * | Every 5 minutes |
| 0 * * * * | Every hour, on the hour |
| 0 2 * * * | Daily at 2:00 AM |
| 0 0 * * 0 | Weekly, Sunday at midnight |
| 0 0 1 * * | Monthly, 1st day at midnight |
| 30 8 * * 1-5 | Weekdays at 8:30 AM |
| 0 0,12 * * * | Twice daily at midnight and noon |
concurrencyPolicy controls what happens when a new scheduled run fires while the previous Job is still running:
| Policy | Behavior |
|---|---|
| Allow (default) | Multiple Jobs can run concurrently. Use for independent tasks. |
| Forbid | Skip the new run if the previous is still active. Use for tasks that shouldn't overlap (DB backups). |
| Replace | Kill the running Job and start a new one. Use for tasks where only the latest run matters. |
Gotcha: If startingDeadlineSeconds is not set and the CronJob controller misses more than 100 consecutive schedules (e.g., the controller was down), it will stop scheduling entirely. Always set startingDeadlineSeconds to avoid this edge case.
| Controller | Replicas | Identity | Storage | Use Case |
|---|---|---|---|---|
| ReplicaSet | N identical | Random names | Shared | (Managed by Deployments) |
| Deployment | N identical | Random names | Shared | Stateless apps: web servers, APIs |
| DaemonSet | 1 per node | Per-node | Shared/host | Node agents: logs, monitoring, networking |
| StatefulSet | N ordered | Stable names (0,1,2) | Per-Pod PVC | Databases, distributed systems |
| Job | Run-to-completion | Random names | Optional | Migrations, batch processing |
| CronJob | Scheduled Jobs | Per-Job | Optional | Backups, reports, periodic cleanup |
Decision flow:
Does the workload run forever or to completion?
RUN FOREVER:
Need one Pod per node? ──────────────► DaemonSet
Need stable identity/storage? ───────► StatefulSet
Stateless? ──────────────────────────► Deployment
RUN TO COMPLETION:
Run on a schedule? ──────────────────► CronJob
Run once (or fixed count)? ──────────► Job
Let's walk through the complete lifecycle of a Deployment with a rolling update and rollback.
# Create a Deployment with 3 replicas running nginx:1.24
kubectl create deployment webapp --image=nginx:1.24 --replicas=3
# deployment.apps/webapp created
kubectl rollout status deployment webapp
# deployment "webapp" successfully rolled out
kubectl get deploy webapp
# NAME READY UP-TO-DATE AVAILABLE AGE
# webapp 3/3 3 3 15s
kubectl get rs
# NAME DESIRED CURRENT READY AGE
# webapp-6b7c4f8d9 3 3 3 15s
# Update the image
kubectl set image deployment/webapp nginx=nginx:1.25
# deployment.apps/webapp image updated
# Watch the rollout
kubectl rollout status deployment webapp
# Waiting for deployment "webapp" rollout to finish: 1 out of 3 new replicas have been updated...
# Waiting for deployment "webapp" rollout to finish: 2 out of 3 new replicas have been updated...
# deployment "webapp" successfully rolled out
# Two ReplicaSets now exist
kubectl get rs
# NAME DESIRED CURRENT READY AGE
# webapp-6b7c4f8d9 0 0 0 2m ← old (nginx:1.24)
# webapp-85b4d7f69 3 3 3 30s ← new (nginx:1.25)
kubectl set image deployment/webapp nginx=nginx:1.26
kubectl rollout status deployment webapp
# deployment "webapp" successfully rolled out
kubectl get rs
# NAME DESIRED CURRENT READY AGE
# webapp-6b7c4f8d9 0 0 0 4m ← revision 1 (nginx:1.24)
# webapp-85b4d7f69 0 0 0 2m ← revision 2 (nginx:1.25)
# webapp-c4e6a1b32 3 3 3 30s ← revision 3 (nginx:1.26)
kubectl rollout history deployment webapp
# deployment.apps/webapp
# REVISION CHANGE-CAUSE
# 1 <none>
# 2 <none>
# 3 <none>
# Inspect revision 1
kubectl rollout history deployment webapp --revision=1
# Pod Template:
# Containers:
# nginx:
# Image: nginx:1.24
# Inspect revision 2
kubectl rollout history deployment webapp --revision=2
# Pod Template:
# Containers:
# nginx:
# Image: nginx:1.25
kubectl rollout undo deployment webapp --to-revision=2
# deployment.apps/webapp rolled back
# Verify the image
kubectl get deploy webapp -o jsonpath='{.spec.template.spec.containers[0].image}'
# nginx:1.25
# The old ReplicaSet for nginx:1.25 scales back up
kubectl get rs
# NAME DESIRED CURRENT READY AGE
# webapp-6b7c4f8d9 0 0 0 6m ← nginx:1.24
# webapp-85b4d7f69 3 3 3 4m ← nginx:1.25 (rolled back to)
# webapp-c4e6a1b32 0 0 0 2m ← nginx:1.26
# Note: the rollback creates a NEW revision (4), which reuses the old RS
kubectl rollout history deployment webapp
# REVISION CHANGE-CAUSE
# 1 <none>
# 3 <none>
# 4 <none> ← revision 2 became revision 4 after rollback
kubectl delete deployment webapp
# deployment.apps/webapp deleted
- Rolling updates are tuned with maxSurge and maxUnavailable
- Roll back with kubectl rollout undo — old ReplicaSets are kept (up to revisionHistoryLimit) for instant rollback
- kubectl rollout pause/resume enables canary-style partial rollouts
- StatefulSets provide stable Pod names (pod-0, pod-1), persistent per-Pod storage, and ordered operations — use for databases and distributed systems
- StatefulSets require a headless Service (clusterIP: None) and use volumeClaimTemplates for storage
- In Jobs, completions and parallelism control how many Pods run and how many must succeed
- concurrencyPolicy (Allow, Forbid, Replace) controls overlapping CronJob runs