"Demystifying Kubernetes: A Comprehensive Guide to Container Orchestration"
Welcome back! In this article we will discuss Kubernetes. Even if you know nothing about Kubernetes right now, after reading this blog you will have a clear understanding of what Kubernetes is and what we mean by "container orchestration". Let us begin!
Before starting with Kubernetes, we need to understand a few things.
What is Configuration management?
Configuration is all the information related to an application: everything the application needs in order to run.
Previously, if you wanted to make changes to a running container, you could simply edit it in place. Now you cannot change a container's configuration while it is running, because containers support "immutability".
Immutability:
You cannot change a container's configuration while it is running. If you want to make changes, stop the running container, apply your changes, and start it again. That is the immutable nature of containers. Immutability gives us reliability and is good for security.
Chef & Puppet:
Puppet and Chef can both be used for configuration management. They help you automate aspects of your infrastructure management, like machine provisioning (standing up a virtual machine, laying down the operating system, etc.) and enforcing compliance.
Using Puppet is like writing configuration files, whereas using Chef is like programming the control of your nodes. We will talk about nodes later in this blog.
Managing Containers:
I already talked about containers in my last blog; you can check it out. Let me explain what I mean by managing containers with an example: suppose you are running multiple services on your server, and each service may be running a different version. Who takes care of managing all of those containers? That is the problem container management solves.
Monolithic App. vs Microservices!
Monolithic Applications:
Previously we had monolithic applications. For example, consider a web application built from various components like the frontend, backend, chat, database, and networking; all of those together are considered a single application.
The disadvantage of a monolithic application is that if you want to scale only the frontend, all the other components get scaled with it! So now we use microservices.
Microservices :
In microservices, we treat each service as a separate application. We deploy each application individually, and each one runs in its own container.
Orchestration:
Orchestration is done by Orchestrators.
Orchestrators:
Orchestrators are used for dynamically deploying and managing containers with zero downtime.
Kubernetes and cloud-native applications help with that. Huge shout-out to the CNCF!
Kubernetes (K8s):
History:
From the Origins to Dominance: A Brief History of Kubernetes
Introduction: Kubernetes, often abbreviated as K8s, has emerged as the de facto standard for container orchestration in the modern era of cloud computing. Its journey from a Google internal project to an open-source powerhouse has been nothing short of remarkable. In this article, we will take a chronological journey through the history of Kubernetes, exploring its origins, key milestones, and the factors that have led to its widespread adoption.
Inception at Google (2003-2014): Kubernetes traces its roots back to Google's early experiments with managing containers. The company was already running applications in containers, but the need for a more efficient, automated, and scalable system became evident as the infrastructure grew. In 2003, Google started developing an internal platform called Borg, which served as the precursor to Kubernetes. Borg allowed Google engineers to manage containerized workloads at scale, providing features like automated deployment, scaling, and self-healing.
Birth of Kubernetes (2014-2015): In 2014, Google decided to share its container management knowledge with the world and open-sourced the Kubernetes project. In 2015, the technology was donated to the newly formed Cloud Native Computing Foundation (CNCF), a Linux Foundation project that aims to foster cloud-native computing, and Kubernetes 1.0 was released in July of that year. Kubernetes quickly gained traction as developers recognized its potential to simplify container orchestration and streamline cloud-native application deployment.
Rising Popularity and Community Growth (2016-2017): As Kubernetes entered the open-source arena, its development rapidly accelerated, and its community grew exponentially. Major cloud providers, including Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), started offering Kubernetes-based services, making it accessible to a broader audience. The community actively contributed to Kubernetes, enhancing its features, performance, and stability.
Maturity and Enterprise Adoption (2018-2019): In March 2018, Kubernetes became the first project to graduate from the CNCF, a signal of its maturity, stability, and readiness for production workloads. This marked a turning point as enterprises began adopting Kubernetes broadly for managing their containerized applications. The ecosystem around Kubernetes expanded, with various tools and services supporting and extending its capabilities.
Cloud-Native Revolution and Standardization (2020-2021): The cloud-native movement gained momentum, and Kubernetes emerged as a critical component of cloud-native architectures. Its ability to facilitate microservices, continuous delivery, and hybrid/multi-cloud deployments solidified its position in the tech industry, and the CNCF continued to foster its growth.
Kubernetes Today and Future Prospects (2022 and Beyond): As of 2022, Kubernetes remains the go-to solution for container orchestration, powering a vast number of production workloads worldwide. Its ecosystem has expanded, encompassing a rich array of tools, libraries, and platforms built around Kubernetes. The community continues to innovate, enhancing security, scalability, and ease of use. Looking ahead, Kubernetes is poised to play a vital role in the ongoing evolution of cloud-native technologies and remains a cornerstone of modern infrastructure.
Why K8s? Advantages :
Kubernetes offers a wide range of advantages that have contributed to its widespread adoption as the leading container orchestration platform. Some of the major advantages of Kubernetes include:
Automated Container Orchestration: Kubernetes automates the deployment, scaling, and management of containerized applications. It handles the distribution of containers across a cluster of nodes, ensuring that the right number of replicas are running at all times, and automatically replaces failed containers.
High Scalability: Kubernetes allows applications to scale easily and seamlessly. It can scale both vertically (increasing resources like CPU and memory for individual containers) and horizontally (adding more instances of containers) based on demand, traffic, or custom metrics.
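As a sketch of horizontal scaling, here is a hypothetical HorizontalPodAutoscaler manifest (the target Deployment name "web", the replica bounds, and the CPU threshold are all example values, not anything from this article):

```yaml
# Hypothetical autoscaler: scales the "web" Deployment between 2 and 10
# replicas, targeting 80% average CPU utilization across its pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```

With this applied, Kubernetes adds replicas when average CPU climbs above the target and removes them when load drops, within the declared bounds.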
Self-Healing and Fault Tolerance: Kubernetes continuously monitors the health of containers and nodes. If a container or node fails, Kubernetes automatically reschedules or restarts them, ensuring that the application remains available and resilient to failures.
Service Discovery and Load Balancing: Kubernetes provides built-in service discovery and load balancing mechanisms. This allows applications to be easily accessed through a stable DNS name or IP address, and incoming traffic is evenly distributed among the available instances of the application.
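A minimal sketch of that idea: a Service gives a set of pods one stable name and spreads traffic across them. The names and ports below are illustrative assumptions:

```yaml
# Hypothetical Service: gives every pod labeled app=web a single stable
# DNS name (web-svc) and load-balances traffic across them, forwarding
# the Service's port 80 to each pod's port 8080.
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```

Other pods in the same namespace can then simply reach the application at "web-svc", no matter how many replicas exist or which nodes they land on.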
Rolling Updates and Rollbacks: Kubernetes allows seamless rolling updates of applications, enabling new versions to be deployed without downtime. If an update introduces issues, Kubernetes supports easy rollbacks to the previous stable version.
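To make this concrete, here is a hypothetical fragment of a Deployment spec (not a complete manifest) showing how a zero-downtime rollout can be declared:

```yaml
# Fragment of a hypothetical Deployment spec: during an update, at most
# one extra pod is created (maxSurge) and no pod is taken down before
# its replacement is ready (maxUnavailable: 0), so the rollout happens
# with no downtime.
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
```

If the new version misbehaves, `kubectl rollout undo deployment/<name>` reverts to the previous revision.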
Secrets Management: Kubernetes provides a secure way to manage sensitive data, such as passwords and API keys, through its secrets management feature. Secrets are stored securely and can be made available to the containers that need them without exposing the actual values in plain text.
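As an illustration, a hypothetical Secret might be declared like this (the name and value are made up for the example):

```yaml
# Hypothetical Secret holding a database password. Values given under
# stringData are plain text here for convenience; the API server stores
# them base64-encoded under "data".
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  DB_PASSWORD: s3cr3t-example
```

A container can then consume it through an environment variable (via secretKeyRef) or a mounted volume, so the raw value never needs to appear in the pod manifest itself.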
Declarative Configuration: Kubernetes uses a declarative approach to define the desired state of the system through YAML or JSON files. This approach makes it easy to manage complex applications and infrastructure as code, promoting consistency and reducing the risk of configuration drift.
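Here is a minimal sketch of such a declarative YAML file, a hypothetical Deployment describing the desired state (the name, label, and image are example values):

```yaml
# Hypothetical Deployment declaring the desired state: three replicas
# of an nginx container. Kubernetes continuously reconciles the cluster
# to match this description.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```

You apply it with `kubectl apply -f deployment.yaml`; edit the file and apply it again, and Kubernetes works out the difference between the declared and current state for you.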
Multi-Cloud and Hybrid Cloud Support: Kubernetes is cloud-agnostic, which means it can run on various cloud providers or even on-premises infrastructure. This flexibility allows organizations to avoid vendor lock-in and implement multi-cloud or hybrid cloud strategies.
Rich Ecosystem and Community Support: Kubernetes has a vibrant and active open-source community. As a result, there is an extensive ecosystem of tools, extensions, and plugins that integrate seamlessly with Kubernetes, providing solutions for various use cases.
Cost-Effectiveness: By efficiently utilizing resources and automating scaling and management, Kubernetes optimizes resource allocation, leading to cost savings for organizations running containerized applications.
Support for Stateful and Stateful Applications: Kubernetes has evolved to support stateful applications through features like StatefulSets and Persistent Volumes, enabling the management of databases, key-value stores, and other stateful workloads in a containerized environment.
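For stateful workloads, durable storage is requested declaratively too. A hypothetical PersistentVolumeClaim (name and size are example values) looks like this:

```yaml
# Hypothetical PersistentVolumeClaim: requests 1Gi of storage that
# outlives any individual pod, mountable by one node at a time.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

A StatefulSet's pods can mount such claims so that a database pod, when rescheduled, finds its data exactly where it left it.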
Overall, Kubernetes simplifies the deployment and management of containerized applications, enhances resource utilization, and improves application availability, making it an essential tool for modern cloud-native infrastructures. That, in a nutshell, is Kubernetes.
Terminologies in K8s :
Kubernetes Cluster:
A Kubernetes cluster contains a control plane and worker nodes.
Control plane: the control plane was previously known as the master node. Because it controls all the other nodes we used to call it the master node, but by convention we now call it the control plane.
Worker nodes: worker nodes are the nodes where our applications actually run. You can think of them as servers.
Let us dive deeper into the cluster:
Above is the architecture of the cluster.
Control Plane:
The control plane is a combination of various components, and together those components keep our cluster healthy.
Kubectl: kubectl is a command-line tool that interacts with the control plane through the API server. It can do so in two ways, the declarative way and the imperative way:
Declarative way: we write YAML files describing the desired state and apply them to the cluster (the recommended way).
Imperative way: we tell the cluster what to do step by step using individual commands.
Flow:
Applications --->
are split into microservices --->
which run on worker nodes --->
which are managed by the control plane --->
and the control plane is driven by kubectl.
Controller manager: the controller manager runs the various controllers, and those controllers in turn manage the worker nodes.
Controller: a controller works to reach the desired state and meet the user's requirements. It knows the current state of the application; when you declare a change, the controller reconciles the cluster toward the new desired state.
API server: we communicate with the control plane through the API server; it is the front door for all communication with the cluster.
etcd: etcd is a key-value database that holds all the information about our cluster. When the API server needs information about the cluster, it asks etcd.
Scheduler: the scheduler is the scheduling unit of the cluster. It watches for newly created pods and picks a worker node with free capacity to run them. (Managed cloud offerings also run a cloud controller manager of their own.)
Node Components
Node components run on every node, maintaining running pods and providing the Kubernetes runtime environment.
kubelet
An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The Kubelet doesn't manage containers that were not created by Kubernetes.
kube-proxy
kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
kube-proxy uses the operating system packet filtering layer if there is one and it's available. Otherwise, kube-proxy forwards the traffic itself.
Container runtime
The container runtime is the software that is responsible for running containers.
Kubernetes supports container runtimes such as containerd, CRI-O, and any other implementation of the Kubernetes CRI (Container Runtime Interface).
Addons
Addons use Kubernetes resources (DaemonSet, Deployment, etc) to implement cluster features.
Selected addons are described below; for an extended list, see the Addons page in the Kubernetes documentation.
DNS
While the other addons are not strictly required, all Kubernetes clusters should have cluster DNS, as many examples rely on it.
Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, which serves DNS records for Kubernetes services.
Containers started by Kubernetes automatically include this DNS server in their DNS searches.
Web UI (Dashboard)
The dashboard is a general-purpose, web-based UI for Kubernetes clusters. It allows users to manage and troubleshoot applications running in the cluster, as well as the cluster itself.
Container Resource Monitoring
Container Resource Monitoring records generic time-series metrics about containers in a central database and provides a UI for browsing that data.
Cluster-level Logging
A cluster-level logging mechanism is responsible for saving container logs to a central log store with a search/browsing interface.
Network Plugins
Network plugins are software components that implement the container network interface (CNI) specification.
PODS: the smallest execution unit in Kubernetes!
A pod is the smallest execution unit in Kubernetes. A pod encapsulates one or more applications. Pods are ephemeral by nature; if a pod (or the node it runs on) fails, Kubernetes can automatically create a new replica of that pod to continue operations.
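A minimal sketch of a Pod manifest (the name and image are example values):

```yaml
# Minimal hypothetical Pod: the smallest deployable unit in Kubernetes,
# here wrapping a single nginx container.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
    - name: hello
      image: nginx:1.25
```

In practice you rarely create bare Pods like this; you let a Deployment create and replace them for you, which is what makes failed pods self-heal.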
Where does POD lie in the cluster?
A Pod lives inside a Deployment, and containers live inside the Pod: Deployment -> Pod -> Container.
Conclusion:
As we conclude this guide, it's evident that Kubernetes has become the backbone of modern cloud-native infrastructures, revolutionizing the way applications are developed, deployed, and managed. While the journey into the world of Kubernetes may seem daunting at first, the rewards it offers in terms of scalability, reliability, and efficiency are well worth the investment.
So, whether you're just starting your Kubernetes journey or looking to enhance your existing expertise, remember that learning and embracing Kubernetes is an investment in the future of cloud computing. As the technology continues to evolve, Kubernetes will remain at the forefront, shaping the way we build and run applications in the dynamic world of cloud-native computing.
Embrace Kubernetes, and unlock the full potential of container orchestration to propel your applications into the future of cloud-native success!
Thank you, Cloud Native Computing Foundation, for bringing Kubernetes to us! And thanks to Kunal Kushwaha for introducing me to such a beautiful Kube world.