
CKA: Certified Kubernetes Administrator — Study Guide

Complete study guide for the CKA (Certified Kubernetes Administrator) exam. Covers cluster setup, workloads, networking, storage, security, and troubleshooting in a hands-on, performance-based format.



Exam Overview

| Detail | Info |
| --- | --- |
| Exam code | CKA |
| Duration | 2 hours |
| Format | Hands-on, performance-based (real Kubernetes clusters in browser) |
| Passing score | 66% |
| Cost | $395 USD (includes 1 free retake) |
| Validity | 3 years |
| Kubernetes version | Changes; check the Linux Foundation site for the current version |

Warning

CKA is entirely hands-on — no multiple choice. You must perform tasks in real Kubernetes clusters under time pressure. Speed with kubectl is essential. Practice daily.

Domain Weightings

| Domain | Weight |
| --- | --- |
| Cluster Architecture, Installation & Configuration | 25% |
| Workloads & Scheduling | 15% |
| Services & Networking | 20% |
| Storage | 10% |
| Troubleshooting | 30% |

Exam Environment Tips

  • You get access to the Kubernetes documentation (kubernetes.io/docs) during the exam — know how to navigate it quickly.
  • Use kubectl aliases: alias k=kubectl and export do="--dry-run=client -o yaml".
  • Set your context at the start of each question: kubectl config use-context <context>.
  • Use kubectl explain <resource> for field reference.
  • Time management: skip hard questions, come back. Each question shows its weight.
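
A paste-ready version of the setup above (one possible snippet for `~/.bashrc` or the exam terminal; the completion lines are an optional extra):

```shell
# Run once at the start of the exam
alias k=kubectl
export do="--dry-run=client -o yaml"   # e.g. k run nginx --image=nginx $do > pod.yaml

# Optional: make tab completion work for the alias too
source <(kubectl completion bash)
complete -o default -F __start_kubectl k
```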

Domain 1: Cluster Architecture, Installation & Configuration (25%)

kubeadm cluster setup

# Initialize control plane
kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=<master-ip>

# Configure kubectl for the admin user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install CNI (Flannel example)
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

# Join a worker node (token from init output)
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
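
If the original token has expired (bootstrap tokens default to a 24-hour TTL), print a fresh join command on the control plane:

```shell
# Generates a new token and prints the full 'kubeadm join ...' command
kubeadm token create --print-join-command
```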

etcd backup and restore

# Backup etcd (know the certs path from /etc/kubernetes/manifests/etcd.yaml)
ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Verify snapshot
ETCDCTL_API=3 etcdctl snapshot status /backup/etcd-snapshot.db --write-out=table

# Restore
ETCDCTL_API=3 etcdctl snapshot restore /backup/etcd-snapshot.db \
  --data-dir=/var/lib/etcd-restore

# Update etcd manifest to point to new data dir
vim /etc/kubernetes/manifests/etcd.yaml
# Change --data-dir to /var/lib/etcd-restore

Cluster upgrade

# Upgrade kubeadm on control plane first
apt-mark unhold kubeadm && apt-get install -y kubeadm=1.29.x-00 && apt-mark hold kubeadm

# Plan and apply upgrade
kubeadm upgrade plan
kubeadm upgrade apply v1.29.x

# Upgrade kubelet and kubectl on control plane
apt-mark unhold kubelet kubectl
apt-get install -y kubelet=1.29.x-00 kubectl=1.29.x-00
apt-mark hold kubelet kubectl
systemctl daemon-reload && systemctl restart kubelet

# Drain + upgrade + uncordon each worker node
kubectl drain <worker-node> --ignore-daemonsets
# SSH to worker, upgrade kubeadm, kubelet, kubectl
kubectl uncordon <worker-node>
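
The worker-side steps can be sketched as follows (run on the worker over SSH after draining; the version is a placeholder, as above):

```shell
# Upgrade kubeadm first, then the node config, then kubelet/kubectl
apt-mark unhold kubeadm && apt-get install -y kubeadm=1.29.x-00 && apt-mark hold kubeadm
kubeadm upgrade node            # workers use 'upgrade node', not 'upgrade apply'
apt-mark unhold kubelet kubectl
apt-get install -y kubelet=1.29.x-00 kubectl=1.29.x-00
apt-mark hold kubelet kubectl
systemctl daemon-reload && systemctl restart kubelet
```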

RBAC configuration

# Create service account
kubectl create serviceaccount pod-reader -n default

# Create a Role
kubectl create role pod-reader --verb=get,list,watch --resource=pods -n default

# Bind role to service account
kubectl create rolebinding pod-reader-binding \
  --role=pod-reader \
  --serviceaccount=default:pod-reader \
  -n default

# Cluster-wide: ClusterRole + ClusterRoleBinding
kubectl create clusterrole node-reader --verb=get,list,watch --resource=nodes
kubectl create clusterrolebinding node-reader-binding \
  --clusterrole=node-reader --user=jane
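
Bindings can be verified with `kubectl auth can-i`, which is worth knowing for the exam (expected answers assume the objects created above exist):

```shell
# Impersonate the service account / user to confirm each binding works
kubectl auth can-i list pods --as=system:serviceaccount:default:pod-reader -n default   # yes
kubectl auth can-i delete pods --as=system:serviceaccount:default:pod-reader -n default # no
kubectl auth can-i get nodes --as=jane                                                  # yes
```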

Domain 2: Workloads & Scheduling (15%)

Deployment management

# Create deployment
kubectl create deployment nginx --image=nginx:1.21 --replicas=3 $do > deploy.yaml
kubectl apply -f deploy.yaml

# Scale
kubectl scale deployment nginx --replicas=5

# Update image (rolling update)
kubectl set image deployment/nginx nginx=nginx:1.22

# Rollback
kubectl rollout undo deployment/nginx
kubectl rollout history deployment/nginx
kubectl rollout status deployment/nginx

Pod scheduling controls

# Node affinity — prefer nodes with label zone=east
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
            - key: zone
              operator: In
              values: [east]

# Taints and tolerations
# Taint a node: kubectl taint nodes node1 key=value:NoSchedule
tolerations:
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"

# Resource requests and limits
resources:
  requests:
    memory: "64Mi"
    cpu: "250m"
  limits:
    memory: "128Mi"
    cpu: "500m"

ConfigMaps and Secrets

# Create ConfigMap
kubectl create configmap app-config --from-literal=DB_HOST=mydb --from-file=config.properties

# Create Secret
kubectl create secret generic db-creds --from-literal=password=mysecret

# Use in pod
kubectl run pod --image=nginx $do > pod.yaml
# Edit: add envFrom: configMapRef / secretRef or volumeMount
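
One way the edited pod spec might look (a sketch; `envFrom` injects every key of the ConfigMap and Secret as an environment variable):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx
      envFrom:
        - configMapRef:
            name: app-config
        - secretRef:
            name: db-creds
```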

Domain 3: Services & Networking (20%)

Service types

| Type | Exposes | Use case |
| --- | --- | --- |
| ClusterIP | Internal VIP (cluster only) | Default; inter-service communication |
| NodePort | Each node's IP:port | Dev/test; direct node access |
| LoadBalancer | Cloud LB with public IP | Production internet-facing |
| ExternalName | DNS CNAME alias | Point to external service |

# Expose a deployment
kubectl expose deployment nginx --port=80 --target-port=80 --type=ClusterIP
kubectl expose deployment nginx --port=80 --type=NodePort

Ingress

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 8080
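
A quick way to test the rule before DNS is set up is to send the Host header by hand (`<ingress-ip>` stands for wherever the ingress controller is reachable):

```shell
# Should be routed to api-service:8080 by the rule above
curl -H "Host: app.example.com" http://<ingress-ip>/api
```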

NetworkPolicy

# Allow only pods with label app=frontend to reach app=backend on port 3000
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - port: 3000
  policyTypes:
  - Ingress
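
NetworkPolicies are additive, so a common exam pattern is to start from a default-deny policy in the namespace and then layer allowances like the one above on top (a sketch):

```yaml
# Deny all ingress to every pod in the namespace (empty podSelector = all pods)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```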

CoreDNS

  • Service DNS format: <service>.<namespace>.svc.cluster.local
  • Pod DNS: <pod-ip-dashes>.<namespace>.pod.cluster.local
  • Debug: kubectl run tmp --image=busybox --rm -it -- nslookup kubernetes

Domain 4: Storage (10%)

Persistent Volumes

# PersistentVolume (admin creates)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-log
spec:
  capacity:
    storage: 100Mi
  accessModes: [ReadWriteMany]
  hostPath:
    path: /pv/log
---
# PersistentVolumeClaim (developer requests)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: log-claim
spec:
  resources:
    requests:
      storage: 50Mi
  accessModes: [ReadWriteMany]
# Mount in pod
volumes:
  - name: log-volume
    persistentVolumeClaim:
      claimName: log-claim
containers:
  - name: app
    image: busybox
    volumeMounts:
      - mountPath: /log
        name: log-volume

Storage Classes

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
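
A PVC that requests this class might look like the following sketch; with `WaitForFirstConsumer`, binding is deferred until a pod using the claim is scheduled:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fast-claim
spec:
  storageClassName: fast
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
```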

Domain 5: Troubleshooting (30%)

This is the highest-weight domain. Master these debugging flows:

Pod not running

kubectl describe pod <pod> -n <ns>   # look at Events section
kubectl logs <pod> -n <ns>
kubectl logs <pod> -n <ns> --previous  # logs from crashed container
kubectl get events -n <ns> --sort-by='.lastTimestamp'

Node not ready

kubectl describe node <node>    # conditions + events
ssh <node>
systemctl status kubelet
journalctl -u kubelet -f        # kubelet logs
# Common: disk pressure, memory pressure, network plugin issue

Service not reachable

# Check endpoints
kubectl get endpoints <service>    # should show pod IPs
# If empty: label selector mismatch between Service and Pods

# Test DNS
kubectl run test --image=busybox --rm -it -- nslookup <service>.<ns>

# Test connectivity
kubectl run test --image=busybox --rm -it -- wget -O- http://<service>:<port>
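
To pin down a selector mismatch, compare the Service's selector with the pods' labels directly:

```shell
# Selector the Service uses
kubectl get svc <service> -o jsonpath='{.spec.selector}'
# Labels actually on the pods
kubectl get pods --show-labels
# Endpoints repopulate once labels and selector match
kubectl get endpoints <service>
```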

Control plane troubleshooting

# Static pod manifest issues
ls /etc/kubernetes/manifests/
# Check: kube-apiserver.yaml, etcd.yaml, kube-scheduler.yaml, kube-controller-manager.yaml

# View static pod logs (they run as containers)
crictl ps | grep apiserver
crictl logs <container-id>
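
If an edit to a static pod manifest does not seem to take effect, confirm the path the kubelet actually watches (on kubeadm clusters the kubelet config usually lives at `/var/lib/kubelet/config.yaml`):

```shell
# The kubelet picks up manifests from staticPodPath (usually /etc/kubernetes/manifests)
grep staticPodPath /var/lib/kubelet/config.yaml
# The kubelet re-creates the pod when the manifest changes;
# restarting it forces a re-read if needed
systemctl restart kubelet
```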

Study Plan (8–10 Weeks)

| Weeks | Focus |
| --- | --- |
| 1–2 | Core concepts — pods, deployments, services, namespaces |
| 3 | Cluster setup with kubeadm; etcd backup/restore |
| 4 | Workloads — DaemonSets, StatefulSets, Jobs, CronJobs; scheduling |
| 5 | Networking — Services, Ingress, NetworkPolicy, CoreDNS |
| 6 | Storage — PV, PVC, StorageClass, ConfigMaps, Secrets |
| 7 | Security — RBAC, ServiceAccounts, SecurityContext, NetworkPolicy |
| 8 | Cluster upgrade + troubleshooting drills |
| 9–10 | Timed mock exams (killer.sh) + pure kubectl speed practice |

Key Resources

| Resource | Notes |
| --- | --- |
| killer.sh | Comes with CKA purchase; hardest practice environment — use it |
| KodeKloud CKA Course | Best hands-on labs and mock exams |
| Mumshad Mannambeth (Udemy) | Most popular CKA course |
| kubernetes.io/docs | Learn to navigate quickly — it's open during the exam |
| kubectl cheat sheet | In the Kubernetes docs; memorize the common one-liners |