What is Kubernetes?
Kubernetes is an open-source, portable, extensible platform for managing containerized applications and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem; Kubernetes services, support, and tools are widely available.
The name Kubernetes is of Greek origin, meaning helmsman or pilot. Google open-sourced Kubernetes in 2014. Kubernetes builds on a decade and a half of Google's experience running production workloads at scale, combined with ideas and best practices from the community.
Let's take a look at why Kubernetes is so useful by going back in time.
Traditional deployment era: Initially, applications ran on physical servers. There was no way to define resource boundaries for applications on a physical server, which caused resource allocation problems. For example, if multiple applications ran on one physical server, one application could take up most of the resources and the others would underperform as a result. One solution was to run each application on its own physical server, but this was not optimal: resources were underutilized, and maintaining many physical servers was expensive for organizations.
Virtualized deployment era: As a solution, virtualization was introduced. It allows you to run multiple Virtual Machines (VMs) on a single physical server's CPU. Virtualization isolates applications from one another in separate VMs and provides a level of security, because one application's information cannot be freely accessed by another application.
Virtualization allows better utilization of resources within a physical server and better scalability, because applications can be added or updated easily; it also reduces hardware costs, among other benefits. With virtualization, you can present a set of physical resources as a cluster of disposable virtual machines.
Each VM is a computer that runs all components, including its own operating system, on top of virtualized hardware.
Container deployment era: Containers are similar to VMs, but they have relaxed isolation properties so that applications share the Operating System (OS). Containers are therefore considered lightweight. Like a VM, a container has its own filesystem, share of CPU, memory, process space, and more. Because they are decoupled from the underlying infrastructure, they are portable across clouds and OS distributions.
Containers have become popular because they have many added benefits, such as:
Agile application creation and deployment: container images are easier and more efficient to create than VM images.
Continuous development, integration, and deployment: provides for reliable and frequent container image builds and deployments, with quick and easy rollbacks.
Dev and Ops separation of concerns: application container images are created at build/release time rather than deployment time, decoupling applications from infrastructure.
Observability: surfaces not only OS-level information and metrics, but also application health and other signals.
Environmental consistency across development, testing, and production: runs the same on a laptop as it does in the cloud.
Portability across clouds and OS distributions: runs on Ubuntu, RHEL, CoreOS, on-premises, on Google Kubernetes Engine, and anywhere else.
Application-centric management: raises the level of abstraction from running an OS on virtual hardware to running an application on an OS using logical resources.
Loosely coupled, distributed, elastic microservices: applications are broken into smaller, independent pieces that can be deployed and managed dynamically, rather than as one monolithic app.
Resource isolation: predictable application performance.
Resource utilization: high efficiency and density.
Containers are a good way to package and run your applications. In a production environment, you need to manage the containers that run the applications and ensure there is no downtime. For example, if one container goes down, another container needs to be started. This is much easier when a system handles that behavior for you.
That's where Kubernetes comes in. Kubernetes gives you a framework for running distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more. For example, Kubernetes can easily manage a canary deployment for your system.
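To make the canary example concrete, here is a minimal sketch (all names, labels, and image tags are illustrative, not from this article): a small second Deployment runs the candidate version alongside the stable one, and because both sets of Pods share the app: my-app label, any Service selecting that label sends roughly a quarter of the traffic (1 Pod in 4) to the canary.

```yaml
# Stable track: three replicas of the current version.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-stable
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
      track: stable
  template:
    metadata:
      labels:
        app: my-app
        track: stable
    spec:
      containers:
        - name: app
          image: example.com/my-app:1.2.0   # current version (placeholder image)
---
# Canary track: a single replica of the candidate version.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
      track: canary
  template:
    metadata:
      labels:
        app: my-app
        track: canary
    spec:
      containers:
        - name: app
          image: example.com/my-app:1.3.0   # candidate version (placeholder image)
```

If the canary behaves well, you scale it up and scale the stable track down; if not, you delete it and all traffic returns to the stable Pods.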
Kubernetes gives you:
Service discovery and load balancing: Kubernetes can expose a container using a DNS name or its own IP address. If traffic to a container is high, Kubernetes can load balance and distribute the network traffic so that the deployment stays stable.
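As a sketch of what that looks like in practice (the names are placeholders, not from this article): a Service gives the application a stable DNS name inside the cluster and spreads traffic across every Pod matching its label selector.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app          # becomes the in-cluster DNS name "my-app"
spec:
  selector:
    app: my-app         # traffic is balanced across Pods carrying this label
  ports:
    - port: 80          # port the Service exposes
      targetPort: 8080  # port the containers actually listen on
```

Other Pods in the same namespace can now reach the application at http://my-app, no matter how many Pods back it or which nodes they run on.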
Storage orchestration: Kubernetes allows you to automatically mount a storage system of your choice, such as local storage, public cloud providers, and more.
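A minimal PersistentVolumeClaim sketch, assuming the cluster offers a storage class named standard (both names are illustrative): the application asks for storage abstractly, and Kubernetes binds the claim to whatever backend the cluster provides, local disk, a cloud volume, NFS, and so on.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data
spec:
  accessModes:
    - ReadWriteOnce             # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi              # capacity requested from the backend
  storageClassName: standard    # assumption: a class named "standard" exists
```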
Automated rollouts and rollbacks: you can describe the desired state of your deployed containers, and Kubernetes changes the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers, and adopt all of their resources into the new containers.
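A minimal Deployment sketch of that desired-state model (names and image are placeholders): bumping the image tag and re-applying the file triggers a controlled rolling update, and kubectl rollout undo deployment/my-app rolls it back.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one Pod down at any moment during the rollout
      maxSurge: 1         # at most one extra Pod created temporarily
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: example.com/my-app:1.2.0   # change this tag to trigger a rollout
```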
Automatic bin packing: you provide Kubernetes with a cluster of nodes that it can use to run containerized tasks, and you tell it how much CPU and memory (RAM) each container needs. Kubernetes fits containers onto your nodes to make the best use of your resources.
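A sketch of the hints that bin packing relies on (the values are illustrative): requests tell the scheduler how much room a container needs when picking a node, and limits cap what it may actually consume.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
spec:
  containers:
    - name: app
      image: example.com/my-app:1.2.0   # placeholder image
      resources:
        requests:
          cpu: "250m"       # scheduler reserves a quarter of a CPU core...
          memory: "256Mi"   # ...and 256 MiB of RAM on the chosen node
        limits:
          cpu: "500m"       # hard ceilings enforced at runtime
          memory: "512Mi"
```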
Self-healing: Kubernetes restarts containers that fail, replaces containers, kills containers that don't respond to your user-defined health checks, and doesn't advertise them to clients until they are ready to serve.
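A sketch of those user-defined health checks (the /healthz and /ready paths are assumptions about the application): the kubelet restarts the container whenever the liveness probe fails, and the Pod only receives Service traffic while its readiness probe passes.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
spec:
  containers:
    - name: app
      image: example.com/my-app:1.2.0   # placeholder image
      livenessProbe:
        httpGet:
          path: /healthz    # assumed health endpoint; failure => restart
          port: 8080
        periodSeconds: 10
      readinessProbe:
        httpGet:
          path: /ready      # assumed readiness endpoint; failure => no traffic
          port: 8080
        periodSeconds: 5
```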
Secret and configuration management: Kubernetes lets you store and manage sensitive information such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.
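A minimal sketch (names and the value are placeholders): the password lives in a Secret object and is injected into the container as an environment variable, so it is neither baked into the image nor written into the Pod spec.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
stringData:
  password: change-me   # stringData accepts plain text; the API stores it base64-encoded
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
spec:
  containers:
    - name: app
      image: example.com/my-app:1.2.0   # placeholder image
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials      # pulls the value from the Secret above
              key: password
```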
Kubernetes is not a traditional, all-inclusive PaaS (Platform as a Service) system. Because Kubernetes operates at the container level rather than at the hardware level, it provides some features common to PaaS offerings, such as deployment, scaling, load balancing, logging, and monitoring. However, Kubernetes is not monolithic, and these default solutions are optional and pluggable.
Kubernetes:
Does not limit the types of applications supported. Kubernetes aims to support an extremely diverse variety of workloads, including stateless, stateful, and data-processing workloads. If an application can run in a container, it should run great on Kubernetes.
Does not deploy source code and does not build your application. Continuous integration, delivery, and deployment (CI/CD) workflows are determined by organizational cultures and preferences as well as technical requirements.
Does not provide application-level services, such as middleware (for example, message buses), data-processing frameworks (for example, Spark), databases (for example, MySQL), caches, or cluster storage systems (for example, Ceph) as built-in services. Such components can run on Kubernetes and/or can be accessed by applications running on Kubernetes through portable mechanisms, such as the Open Service Broker.
Does not dictate logging, monitoring, or alerting solutions. It provides some integrations as proof of concept, and mechanisms to collect and export metrics.
Does not provide or mandate a configuration language/system (for example, Jsonnet). It provides a declarative API that may be targeted by arbitrary forms of declarative specifications.
Does not provide or adopt any comprehensive machine configuration, maintenance, management, or self-healing systems.
Additionally, Kubernetes is not a mere orchestration system. In fact, it eliminates the need for orchestration. The technical definition of orchestration is the execution of a defined workflow: first do A, then B, then C. In contrast, Kubernetes comprises a set of independent, composable control processes that continuously drive the current state toward the provided desired state. It shouldn't matter how you get from A to C, and centralized control is not required. This results in a system that is easier to use and more powerful, robust, resilient, and extensible.
Kubernetes (k8s) is used when you need to manage and deploy a complex system with many containers or microservices. K8s is a good choice for projects that need:
Scalability: K8s allows you to easily and flexibly increase or decrease the number of containers in the system (see the autoscaler sketch after this list).
Availability: K8s provides features such as replication, self-healing, automatic rollbacks, and horizontal scaling to keep the system robust and uninterrupted.
Continuous Deployment/Delivery: K8s lets you roll out and update new versions of your application quickly and easily.
Portability: K8s helps you deploy and manage applications consistently and reliably, and makes it easy to migrate between cloud providers or on-premises environments.
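As promised above, a minimal HorizontalPodAutoscaler sketch for the scalability point (names and thresholds are illustrative): it resizes a my-app Deployment between 2 and 10 replicas to hold average CPU utilization near 70%.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:             # the workload being resized
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add Pods above ~70% average CPU, remove below
```

For one-off manual scaling, kubectl scale deployment my-app --replicas=5 does the same job by hand.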
In short, if you are building a complex system with many containers or microservices, using k8s can help you manage and deploy the system more easily.
Kubernetes (k8s) is a very powerful container management and microservices deployment tool, but it also has some limitations and disadvantages:
Complexity: K8s offers a great many features and options for deploying and managing a system, but they are complex to configure and operate, and require a high level of expertise and significant resources to set up and manage.
Resource requirements: K8s itself consumes a lot of server resources, especially when you manage many nodes.
Networking challenges: K8s offers many features for network management, but configuring and using them can also complicate a deployment.
Cost: K8s can increase server and network resource costs, especially when you run many nodes or use managed cloud services such as Amazon EKS, GKE on Google Cloud, and AKS on Azure.
Security: K8s offers many security features, but configuring and using them correctly can also be complicated.
Some concepts to know in Kubernetes Architecture
A K8s cluster is a set of machines, called nodes, used to run containerized applications.
A K8s cluster has two core components: the control plane and the worker nodes.
Control plane
The control plane is responsible for managing the state of the cluster. In production environments, the control plane usually runs on multiple nodes spread across several data centers.
The control plane includes several core components:
API Server: the API server is one of the most important components of the control plane. It exposes the RESTful API used to manage and control objects in K8s such as pods, deployments, and services, and through it the cluster supports updates, scaling, and lifecycle orchestration for different types of applications.
etcd: a distributed key-value store used by the control plane. It holds the state of the cluster, including configuration, the state of K8s objects, and the status of the nodes in the cluster.
Scheduler: the control-plane component responsible for assigning (scheduling) pods to nodes in the cluster. The scheduler uses information about the cluster's state, such as resources, capacity, and the constraints declared in pod requests, to decide which node will host each pod.
Controller manager: the control-plane component responsible for controlling K8s objects such as ReplicationControllers, Deployments, and StatefulSets. It watches the state of these objects stored in etcd and adjusts the actual state until it reaches the desired state.
Pod: a Pod is the smallest deployable unit you can create and manage in Kubernetes. It represents one or more containers running together on a node in the cluster. The Pod is the unified unit of management for containers, which makes containers easier to manage and deploy.
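A minimal Pod manifest as a sketch (the names and image are placeholders): applying it with kubectl apply -f pod.yaml sends the object to the API server, the scheduler assigns it to a node, and the kubelet on that node starts the container.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello          # labels let Services and controllers find this Pod
spec:
  containers:
    - name: hello
      image: nginx:1.25   # any container image works here
```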
Worker nodes
The core K8s components running on worker nodes are the kubelet, the container runtime, and kube-proxy.
Kubelet: a daemon running on each worker node, responsible for communicating with the control plane. It receives instructions from the control plane about which pods should run on the node and ensures that the state of those pods is maintained.
Container runtime: the node component whose main task is to run the containers handed to it by the kubelet. Examples of container runtimes include Docker, CRI-O, and containerd.
Kube-proxy: runs on each node in the cluster and forwards network requests between Pods and Services. Kube-proxy acts as an automatic load balancer for Services, forwarding requests addressed to a Service's IP to one or more Pods matching the Service's selector.
Kubernetes and Docker are often misunderstood as direct competitors, but that isn't true: they are different technologies that complement each other to run containerized applications.
You can refer to this article to better understand Docker.
Kubernetes may or may not use Docker as its container runtime.
Kubernetes can use Docker to deploy, manage, and scale containerized applications.
Docker is not a replacement for Kubernetes; rather, the point is to use Kubernetes alongside Docker to run containerized applications at scale.