The Journey of Pod Creation in Kubernetes

Introduction

I'll be honest — until recently, Kubernetes was just this mystical black box to me. I knew it was something developers used to manage containers, but the actual mechanics? Total mystery. Then I started digging into how Pods are created, and wow, what a rabbit hole of fascinating complexity!

What I thought was a simple "create a container" process is actually an intricate dance involving multiple Kubernetes components, each playing a crucial role. Let me walk you through what I've learned about the incredible journey of a Pod from definition to running container.

The Initial Step: Submitting a Pod Definition

Let's start with a simple Pod definition:


    apiVersion: v1
    kind: Pod
    metadata:
      name: my-pod
    spec:
      containers:
        - name: web
          image: nginx
          ports:
            - name: web
              containerPort: 80

When you run kubectl apply -f pod.yaml, you're triggering a chain of events that most developers never see.
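
You can watch that chain unfold from the command line. A quick sketch, assuming the manifest above is saved as pod.yaml:

    # Submit the Pod and watch it move through Pending -> ContainerCreating -> Running
    kubectl apply -f pod.yaml
    kubectl get pod my-pod --watch

    # The recorded events show each component doing its part
    kubectl describe pod my-pod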

The API Server and Scheduler: First Line of Processing

  1. API Server Intake: The Kubernetes API receives your Pod definition and stores it in etcd, the cluster's distributed database. This is the first checkpoint in your Pod's lifecycle.
  2. Scheduling Magic: The Scheduler then takes center stage. It doesn't just randomly place your Pod—it's a sophisticated matchmaker:
    • Analyses the Pod's resource requirements
    • Evaluates available nodes
    • Runs filtering and scoring steps (historically called "predicates and priorities") to find the best node for your Pod

At this point, the Pod is marked as assigned to a node in etcd, but nothing is actually running yet: the Pod exists only as a record in the database!
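
You can both inspect and influence that scheduling decision from the Pod spec itself. A minimal sketch of fields the scheduler evaluates (the resource values and the disktype label are illustrative additions, not part of the manifest above):

    # Hypothetical additions to the Pod's spec
    spec:
      nodeSelector:
        disktype: ssd        # only nodes carrying this label survive filtering
      containers:
        - name: web
          image: nginx
          resources:
            requests:
              cpu: 250m      # nodes without this much spare CPU are filtered out
              memory: 128Mi

Once a node is chosen, the decision is recorded on the Pod object; kubectl get pod my-pod -o jsonpath='{.spec.nodeName}' prints the winner's name.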

The Kubelet: Your Cluster's Diligent Worker

Enter the kubelet, the workhorse of each node. It watches the control plane, in effect constantly asking: "Any new Pods assigned to me?"

When a Pod is assigned to its node, the kubelet doesn't create the Pod alone. Instead, it orchestrates three critical interfaces:

1. Container Runtime Interface (CRI)

Responsible for creating containers—similar to running docker run -d <image>. It brings your container to life.
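
If you have shell access to the node, and crictl is installed there (an assumption, not a given on every cluster), you can see the CRI's view of the world directly:

    # List the Pod sandboxes and containers the runtime knows about on this node
    crictl pods
    crictl ps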

2. Container Network Interface (CNI)

This is where networking magic happens:

  • Generates a unique IP address for the Pod
  • Connects the container to the cluster network
  • Handles complex networking scenarios (IPv4, IPv6, multiple IPs)
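
You can check the result of this step straight from the Pod's status:

    # The IP (or IPs, on a dual-stack cluster) that the CNI assigned
    kubectl get pod my-pod -o jsonpath='{.status.podIP}'
    kubectl get pod my-pod -o jsonpath='{.status.podIPs}'

    # Or, more casually:
    kubectl get pod my-pod -o wide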

3. Container Storage Interface (CSI)

Manages volume mounting, ensuring your containers have the right storage attached.
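
As a hypothetical sketch of what that looks like in practice (the PersistentVolumeClaim named web-data is an assumption, not part of the earlier example), a Pod that needs persistent storage simply references a claim and lets the CSI driver do the attaching and mounting:

    apiVersion: v1
    kind: Pod
    metadata:
      name: my-pod-with-storage
    spec:
      containers:
        - name: web
          image: nginx
          volumeMounts:
            - name: data                  # mounted into the container by the kubelet
              mountPath: /usr/share/nginx/html
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: web-data           # the CSI driver behind this claim handles attach/mount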

The Control Plane Update

Here's a crucial detail: after the CNI assigns an IP, the kubelet reports the Pod's status, including that IP address, back to the control plane. This step is vital: it's how the rest of the cluster learns that the Pod exists and where to reach it.
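
You can see exactly what gets reported by inspecting the Pod's status:

    # Conditions such as PodScheduled, Initialized, ContainersReady and Ready
    # are written back to the API server as the Pod progresses
    kubectl get pod my-pod -o jsonpath='{.status.conditions}'

    # The full status, including podIP and per-container states
    kubectl get pod my-pod -o yaml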

Services and Endpoints: The Traffic Directors

When your Pod is part of a Service, another layer emerges:


    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      ports:
        - port: 80
          targetPort: 3000
      selector:
        name: app

When you submit the Service to the cluster with kubectl apply, Kubernetes finds every Pod whose labels match the selector (name: app) and collects their IP addresses, but only from Pods that have passed their readiness probe. It then pairs each IP address with the targetPort: if the IP address is 10.0.0.3 and the targetPort is 3000, the resulting pair is 10.0.0.3:3000, and Kubernetes calls it an endpoint.
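
You can list those endpoints yourself; the address in the sample output below is illustrative:

    kubectl get endpoints my-service
    # NAME         ENDPOINTS       AGE
    # my-service   10.0.0.3:3000   12s

    # Newer clusters expose the same data as EndpointSlices
    kubectl get endpointslices -l kubernetes.io/service-name=my-service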

The Endpoint Lifecycle

Endpoints are dynamic! They update when:

  • Pods are created
  • Pods are deleted
  • Pod labels change
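
Because only ready Pods make the list, a readiness probe on the container directly controls when its address appears or disappears. A minimal sketch, with an assumed probe path and timing:

    containers:
      - name: web
        image: nginx
        ports:
          - containerPort: 80
        readinessProbe:
          httpGet:
            path: /              # the kubelet probes this before marking the Pod Ready
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10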

Beyond Services: Ingress and kube-proxy

Endpoints aren't just for show:

  • kube-proxy uses them to set up iptables rules
  • Ingress controllers use endpoints to route external traffic directly to Pods
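
With node access you can spot those rules yourself, assuming kube-proxy runs in its default iptables mode:

    # Service-level rules land in the KUBE-SERVICES chain...
    sudo iptables -t nat -L KUBE-SERVICES -n | grep my-service

    # ...and the per-endpoint DNAT rules live in KUBE-SEP-* chains
    sudo iptables -t nat -L -n | grep KUBE-SEP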

Conclusion

  1. Pod creation is a multi-step, collaborative process
  2. Multiple Kubernetes components work in conjunction with each other
  3. The system is designed to be dynamic and responsive

Next time I run kubectl apply, I'll take a moment to appreciate the sophisticated process happening behind the scenes!