Kubernetes is built around a reconciliation loop. Every component in the system watches for a gap between the desired state (what the API server has recorded) and the actual state (what is running on the nodes). When a gap appears, whether because a Pod crashes, a new Deployment is created, or a node goes offline, the relevant controller creates, deletes, or adjusts workloads until the actual state matches the desired state again.
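The comparison at the heart of that loop can be sketched in a few lines of Go. The types and the single replica-count field below are illustrative stand-ins, not Kubernetes API objects; a real controller is driven by watch events and issues API calls rather than returning a string describing the action.

```go
package main

import "fmt"

// DesiredState and ActualState are illustrative stand-ins, not Kubernetes API
// types: desired is what the API server records, actual is what runs on nodes.
type DesiredState struct{ Replicas int }
type ActualState struct{ Replicas int }

// reconcile compares the two states and returns the corrective action a
// controller would take to close the gap.
func reconcile(desired DesiredState, actual ActualState) string {
	switch {
	case actual.Replicas < desired.Replicas:
		return fmt.Sprintf("create %d replica(s)", desired.Replicas-actual.Replicas)
	case actual.Replicas > desired.Replicas:
		return fmt.Sprintf("delete %d replica(s)", actual.Replicas-desired.Replicas)
	default:
		return "no action: actual state matches desired state"
	}
}

func main() {
	desired := DesiredState{Replicas: 3}
	actual := ActualState{Replicas: 2} // one Pod has just crashed

	fmt.Println(reconcile(desired, actual)) // create 1 replica(s)
}
```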
The control plane stores all desired state in etcd, a distributed key-value store. The API server is the single entry point for all reads and writes to that state. Controllers run in a loop, reading state from the API server and writing their decisions back to it; the kubelet on each worker node then watches the API server and runs the Pods assigned to its node. This separation of concerns (declaring intent through the API, executing it on nodes) is what makes Kubernetes both powerful and complex to reason about.
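To make "the API server is the single entry point" concrete, the following Go sketch uses client-go to list Pods in the default namespace; the kubeconfig location and namespace are assumptions, and error handling is reduced to the bare minimum.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (~/.kube/config); a controller
	// running inside the cluster would use its in-cluster config instead.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Every read goes through the API server; the client never talks to etcd
	// or to the kubelets directly.
	pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		fmt.Printf("%s\t%s\n", pod.Name, pod.Status.Phase)
	}
}
```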
A Deployment is a good example of how this works in practice. You declare 'I want three replicas of this container image running at all times.' The Deployment controller creates a ReplicaSet, which in turn creates three Pods. If one Pod crashes, the ReplicaSet controller notices there are only two actual replicas and creates a new one. If you update the image version, the Deployment controller performs a rolling update, creating a new ReplicaSet for the new image and scaling it up while scaling the old one down in a controlled sequence.
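A rough sketch of that declaration using the typed Go API might look like the following; the names web and example.com/web:1.2.0 are placeholders, and the program only prints the manifest rather than submitting it to a cluster (which you would do with kubectl apply or a client-go create call).

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	replicas := int32(3)

	// The Deployment declares intent only: three replicas of this image.
	// It says nothing about which nodes run them or how failures are handled.
	deployment := appsv1.Deployment{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "Deployment"},
		ObjectMeta: metav1.ObjectMeta{Name: "web"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "web"}},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "web"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{
						{Name: "web", Image: "example.com/web:1.2.0"},
					},
				},
			},
		},
	}

	// Print the manifest as YAML; handing it to the API server is what puts
	// the Deployment controller and ReplicaSet controller to work.
	out, err := yaml.Marshal(deployment)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```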
This model makes environments more reproducible and recovery more automatic, but it also means that the cluster depends on clear, disciplined object definitions. Workloads that drift from their declared state, that use manually patched configurations, or that depend on cluster state outside version control are harder to operate and harder to debug when something goes wrong.