DevOps Kubernetes Fundamentals
Kubernetes (often shortened to K8s) is an open-source system for automating the deployment, scaling, and management of containerized applications. Docker creates containers — Kubernetes orchestrates them at scale.
Imagine running 50 Docker containers across 10 servers. Starting containers, monitoring health, restarting failed ones, and distributing traffic by hand quickly becomes unmanageable. Kubernetes automates all of it.
Why Kubernetes?
- Self-healing: Restarts crashed containers automatically.
- Scaling: Adds or removes container instances based on demand.
- Load balancing: Distributes traffic evenly across containers.
- Rolling updates: Deploys new versions without downtime.
- Rollback: Reverts to the previous version instantly if something goes wrong.
- Service discovery: Containers find each other by name, not IP address.
Kubernetes Architecture
A Kubernetes cluster consists of two types of machines:
Control Plane (Master Node)
The brain of the cluster. It manages the desired state of the cluster.
- API Server: The main entry point. All commands (kubectl, dashboards, CI/CD) talk to this.
- Scheduler: Decides which worker node runs each new container.
- Controller Manager: Watches the cluster state and makes corrections (restarts failed pods, etc.).
- etcd: A distributed key-value store that holds the cluster's configuration and state.
Worker Nodes
The machines that actually run application containers.
- kubelet: An agent that runs on each worker node and communicates with the control plane.
- kube-proxy: Manages network rules so containers can communicate.
- Container Runtime: The software that actually runs the containers, typically containerd or CRI-O. (Kubernetes removed its Docker-specific integration, dockershim, in version 1.24; images built with Docker still run fine, since they follow the OCI standard.)
Core Kubernetes Objects
Pod
A Pod is the smallest deployable unit in Kubernetes. It contains one or more containers that share the same network and storage. Usually, one Pod runs one container.
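A minimal Pod manifest is a useful reference point, even though in practice Pods are almost always created by a Deployment rather than directly (the names and image here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod          # Pod name, assumed for illustration
  labels:
    app: my-app
spec:
  containers:
    - name: web
      image: nginx:1.25   # any OCI container image works here
      ports:
        - containerPort: 80
```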
Deployment
A Deployment manages Pods. It defines how many replicas (copies) of a Pod should run and handles rolling updates and rollbacks.
Service
A Service exposes a set of Pods as a stable network endpoint. Even if Pods restart and get new IP addresses, the Service IP stays constant.
Namespace
A Namespace is a logical partition within a cluster. Teams use namespaces to separate environments (dev, staging, production) or applications on the same cluster.
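Namespaces are themselves ordinary resources; a sketch (the name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: staging
```

Running `kubectl create namespace staging` achieves the same thing imperatively, and `-n staging` then scopes later commands to that namespace.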
ConfigMap and Secret
A ConfigMap stores non-sensitive configuration data (like app settings). A Secret stores sensitive data (like passwords and API keys) in base64-encoded form. Note that encoding is not encryption: Secrets should additionally be protected with RBAC and, ideally, encryption at rest.
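As a sketch, a ConfigMap and Secret for the webapp might look like this (names and values are illustrative; Secret values in a manifest must be base64-encoded):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: webapp-config
data:
  LOG_LEVEL: "info"           # plain text, non-sensitive
---
apiVersion: v1
kind: Secret
metadata:
  name: webapp-secret
type: Opaque
data:
  DB_PASSWORD: cGFzc3dvcmQ=   # base64 for "password" (illustration only)
```

A container can then pull both in as environment variables with `envFrom`, referencing the ConfigMap and Secret by name.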
Ingress
An Ingress manages external HTTP/HTTPS access to services inside the cluster. It acts like a smart router — directing traffic to different services based on URL path or hostname.
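A sketch of an Ingress that routes by URL path (the hostname and the second service name are assumptions, and an ingress controller such as ingress-nginx must be installed in the cluster for the rules to take effect):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
spec:
  rules:
    - host: example.com            # assumed hostname
      http:
        paths:
          - path: /                # everything else goes to the web frontend
            pathType: Prefix
            backend:
              service:
                name: webapp-service
                port:
                  number: 80
          - path: /api             # API traffic goes to a separate service
            pathType: Prefix
            backend:
              service:
                name: api-service  # assumed second service
                port:
                  number: 80
```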
Kubernetes YAML – Defining Resources
Kubernetes resources are typically defined in YAML files. Here is a Deployment for a Node.js app:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: myrepo/webapp:1.5
          ports:
            - containerPort: 3000
          env:
            - name: DB_HOST
              value: "mysql-service"
          resources:
            requests:
              memory: "128Mi"
              cpu: "250m"
            limits:
              memory: "256Mi"
              cpu: "500m"
```

And a Service to expose it:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  selector:
    app: webapp
  ports:
    - port: 80
      targetPort: 3000
  type: ClusterIP
```

Essential kubectl Commands
kubectl is the command-line tool for interacting with a Kubernetes cluster.
```bash
# Apply a YAML file to the cluster
kubectl apply -f deployment.yaml

# List all pods in the default namespace
kubectl get pods

# List pods in a specific namespace
kubectl get pods -n production

# Describe a pod for detailed information
kubectl describe pod webapp-deployment-abc123

# View logs from a running pod
kubectl logs webapp-deployment-abc123

# Open a shell inside a pod
kubectl exec -it webapp-deployment-abc123 -- sh

# Scale a deployment to 5 replicas
kubectl scale deployment webapp-deployment --replicas=5

# Check the rollout status
kubectl rollout status deployment/webapp-deployment

# Roll back to the previous version
kubectl rollout undo deployment/webapp-deployment

# Delete a deployment
kubectl delete deployment webapp-deployment
```

Rolling Updates and Rollbacks
Kubernetes deploys new versions without downtime using a rolling update strategy. New Pods start before old ones stop. Traffic shifts gradually from old to new.
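The pace of a rolling update is tunable in the Deployment spec; a sketch of the relevant fields (the values are illustrative, not defaults):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 extra Pod above the desired replica count
      maxUnavailable: 0    # never drop below the desired count during the update
```

With these settings, Kubernetes brings up one new Pod, waits for it to become ready, then removes one old Pod, repeating until the rollout completes.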
Updating an Image
```bash
kubectl set image deployment/webapp-deployment webapp=myrepo/webapp:1.6
```

Kubernetes replaces old Pods one by one. If the new version has a problem, one command restores the previous state:
```bash
kubectl rollout undo deployment/webapp-deployment
```

Scaling Applications
Horizontal scaling adds more Pod replicas to handle more traffic. Kubernetes also supports Horizontal Pod Autoscaler (HPA), which automatically scales based on CPU or memory usage.
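Autoscaling can also be declared as a resource rather than created with a command; a sketch of an HPA manifest targeting the earlier Deployment (autoscaling/v2 API, thresholds illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webapp-deployment
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60   # target average CPU utilization across Pods
```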
```bash
# Manual scaling
kubectl scale deployment webapp-deployment --replicas=10

# Create an autoscaler (scale between 3 and 20 based on CPU)
kubectl autoscale deployment webapp-deployment --min=3 --max=20 --cpu-percent=60
```

Persistent Storage in Kubernetes
Pods are temporary. When a Pod restarts, its local storage is gone. Kubernetes uses Persistent Volumes (PV) and Persistent Volume Claims (PVC) to provide durable storage.
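A Pod references a claim by name through `volumes` and `volumeMounts`; a minimal sketch (the Pod, image, and mount path are assumptions, with the claim name matching the PVC manifest that follows):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
    - name: mysql
      image: mysql:8.0                # illustrative image
      volumeMounts:
        - name: data
          mountPath: /var/lib/mysql   # this directory survives Pod restarts
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: mysql-storage      # binds to the PVC by name
```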
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

Kubernetes in a DevOps Pipeline
A complete DevOps workflow with Kubernetes:
- Developer pushes code to GitHub.
- GitHub Actions builds a Docker image and pushes it to AWS ECR.
- The pipeline updates the Kubernetes Deployment YAML with the new image tag.
- The pipeline runs kubectl apply to trigger a rolling update.
- Kubernetes gradually replaces old Pods with new ones.
- Monitoring tools (Prometheus, Grafana) track the new version's health.
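Steps 2 through 4 above might look roughly like this in a GitHub Actions workflow (the registry URL, image name, and namespace are placeholders, and the ECR login and kubeconfig setup steps are omitted, so this is a sketch rather than a drop-in config):

```yaml
# .github/workflows/deploy.yml (sketch; all names are placeholders)
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      ECR_REGISTRY: 123456789.dkr.ecr.us-east-1.amazonaws.com   # placeholder
    steps:
      - uses: actions/checkout@v4
      - name: Build and push image
        run: |
          docker build -t $ECR_REGISTRY/webapp:$GITHUB_SHA .
          docker push $ECR_REGISTRY/webapp:$GITHUB_SHA
      - name: Trigger rolling update
        run: |
          kubectl set image deployment/webapp-deployment \
            webapp=$ECR_REGISTRY/webapp:$GITHUB_SHA -n production
```

Tagging the image with the commit SHA makes every deployment traceable back to the exact code it was built from.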
Managed Kubernetes Services
Setting up Kubernetes from scratch is complex. Cloud providers offer managed services that handle the control plane automatically:
| Provider | Service Name |
|---|---|
| Amazon Web Services | EKS (Elastic Kubernetes Service) |
| Microsoft Azure | AKS (Azure Kubernetes Service) |
| Google Cloud | GKE (Google Kubernetes Engine) |
Summary
- Kubernetes orchestrates containers at scale — handling deployment, scaling, and self-healing automatically.
- The cluster has a Control Plane (brain) and Worker Nodes (muscle).
- Key objects include Pods, Deployments, Services, ConfigMaps, and Ingress.
- YAML files define the desired state — Kubernetes constantly works to match it.
- Rolling updates and rollbacks ensure zero-downtime deployments.
- Managed Kubernetes services from AWS, Azure, and Google Cloud simplify cluster operations.
