What is Kubernetes?

suyog shinde
7 min read · Dec 26, 2020

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.

Why Kubernetes, and what is its power?

Containers are a good way to package and run your applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime.

That is where Kubernetes comes to the rescue! Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more. For example, Kubernetes can easily manage a canary deployment for your system.

The Power of Kubernetes:

1. Service discovery and load balancing: Kubernetes can expose a container using a DNS name or its own IP address. If traffic to a container is high, Kubernetes can load balance and distribute the network traffic so that the deployment stays stable.

2. Storage orchestration: Kubernetes allows you to automatically mount a storage system of your choice, such as local storage, public cloud providers, and more.

3. Automated rollouts and rollbacks: You can describe the desired state for your deployed containers using Kubernetes, and it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers, and adopt all of their resources into the new containers.

4. Automatic bin packing: You give Kubernetes a cluster of nodes that it can use to run containerized tasks. You tell Kubernetes how much CPU and memory (RAM) each container needs, and Kubernetes can fit containers onto your nodes to make the best use of your resources.

5. Self-healing: Kubernetes restarts containers that fail, replaces containers, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve.

6. Secret and configuration management: Kubernetes lets you store and manage sensitive information such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.
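As an illustration of the last point, here is a minimal Secret manifest; the name and value are hypothetical, chosen only for this sketch:

```yaml
# A minimal Secret manifest (name and value are illustrative only).
apiVersion: v1
kind: Secret
metadata:
  name: app-credentials
type: Opaque
data:
  # Values under "data" are base64-encoded, not encrypted.
  db-password: cGFzc3dvcmQxMjM=   # base64 of "password123"
```

A container can then consume this value as an environment variable or a mounted file, so the password never has to be baked into the container image.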

What Kubernetes is not

Kubernetes is not a traditional, all-inclusive PaaS (Platform as a Service) system. Since Kubernetes operates at the container level rather than at the hardware level, it provides some generally applicable features common to PaaS offerings, such as deployment, scaling, and load balancing, and lets users integrate their own logging, monitoring, and alerting solutions. However, Kubernetes is not monolithic, and these default solutions are optional and pluggable. Kubernetes provides the building blocks for building developer platforms, but preserves user choice and flexibility where it matters.

Kubernetes architecture

Enter Kubernetes, a container orchestration system: a way to manage the lifecycle of containerized applications across an entire fleet. It is a kind of meta-process that makes it possible to automate the deployment and scaling of several containers at once. Several containers running the same application are grouped together; these containers act as replicas and serve to load balance incoming requests. A container orchestrator then supervises these groups, ensuring that they operate correctly.

Kubernetes Terminology and Architecture

1. Pods
Pods are the smallest, most basic deployable objects in Kubernetes. A Pod represents a single instance of a running process in your cluster.

Pods contain one or more containers, such as Docker containers. When a Pod runs multiple containers, the containers are managed as a single entity and share the Pod’s resources. Generally, running multiple containers in a single Pod is an advanced use case.
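A minimal single-container Pod manifest might look like the following; the names and image are illustrative only:

```yaml
# pod.yaml: a single-container Pod (name, labels, and image are illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.21
    ports:
    - containerPort: 80
```

Applying this with `kubectl apply -f pod.yaml` asks Kubernetes to schedule one instance of this Pod onto a node in the cluster.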

2. Deployments

Kubernetes deployments define the scale at which you want to run your application by letting you set the details of how you would like pods replicated on your Kubernetes nodes. Deployments describe the number of desired identical pod replicas to run and the preferred update strategy used when updating the deployment. Kubernetes will track pod health and will remove or add pods as needed to bring your application deployment to the desired state.
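A sketch of a Deployment that runs three replicas of the hypothetical pod above might look like this:

```yaml
# deployment.yaml: three identical replicas with a rolling update strategy
# (all names and the image are illustrative).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3            # desired number of identical pod replicas
  strategy:
    type: RollingUpdate  # preferred update strategy when the deployment changes
  selector:
    matchLabels:
      app: nginx
  template:              # pod template: what each replica looks like
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
```

If a pod crashes or a node disappears, Kubernetes notices that fewer than three replicas are running and creates new pods to restore the desired state.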

3. Services

A service is an abstraction over the pods, and essentially, the only interface the various application consumers interact with. As pods are replaced, their internal names and IPs might change. A service exposes a single machine name or IP address mapped to pods whose underlying names and numbers are unreliable. A service ensures that, to the outside network, everything appears to be unchanged.
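A minimal Service manifest, reusing the hypothetical `app: nginx` label from the examples above, could look like this:

```yaml
# service.yaml: a stable endpoint in front of label-matched pods (illustrative names).
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx        # routes traffic to any pod carrying this label
  ports:
  - port: 80          # stable port exposed by the service
    targetPort: 80    # port the pods actually listen on
```

Because the service matches pods by label rather than by name or IP, consumers keep talking to `nginx-service` even as the underlying pods are replaced.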

4. Nodes

A Kubernetes node manages and runs pods; it’s the machine (whether virtualized or physical) that performs the given work. Just as pods collect individual containers that operate together, a node collects entire pods that function together. When you’re operating at scale, you want to be able to hand work over to a node whose pods are free to take it.

5. Master Server

This is the main entry point for administrators and users to manage the various nodes. Operations are issued to it either through HTTP calls or by connecting to the machine and running command-line tools.

6. Cluster

A cluster is all of the above components put together as a single unit.

Master Server Components
— API Server
— Scheduler
— Controller-Manager

Understanding Kubernetes limits and requests

Requests and limits are the mechanisms Kubernetes uses to control resources such as CPU and memory. If a container requests a resource, Kubernetes will only schedule it on a node that can give it that resource. Limits, on the other hand, make sure a container never goes above a certain value.
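In a manifest, requests and limits are set per container. A minimal sketch, with illustrative names and values:

```yaml
# Requests and limits on a single container (name, image, and values are illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: app
    image: nginx:1.21
    resources:
      requests:
        cpu: "250m"      # scheduler only picks a node with this much spare CPU
        memory: "64Mi"
      limits:
        cpu: "500m"      # container is throttled if it tries to use more CPU
        memory: "128Mi"  # container is killed (OOM) if it exceeds this memory
```

Here `250m` means a quarter of a CPU core, and `Mi` denotes mebibytes.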

How is Kubernetes changing the DevOps space?

— Master runs the API for the entire cluster
— Nodes, physical or virtual machines within the cluster
— Pods, the basic building blocks which can run a set of containers
— A replication controller ensures the requested number of pods are running at all times
— Services, a dynamic load balancer for a given number of pods

The replication controller allows the Kubernetes cluster to self-heal. It will restart containers that fail, kill unresponsive containers, and replace and reschedule containers if a node within the cluster goes offline.

And if you need to roll out changes to your application or tweak the configuration, Kubernetes handles the process progressively. It will monitor the application's health to maintain availability throughout the update process. Moreover, should an issue arise, it will roll back the changes to restore working order.

But the best bit: Kubernetes is an open-source solution. This means you have the freedom to use it whether in your on-premise cluster, a hybrid environment, or a public cloud.

Who uses Kubernetes?

2253 companies reportedly use Kubernetes in their tech stacks, including Google, Shopify, and Slack.

Google
Shopify
Slack
Robinhood
StackShare
Delivery Hero
OpenAI
Nubank

Why does OpenAI use Kubernetes?

OpenAI is a non-profit AI research company dedicated to “safe artificial general intelligence”. Basically, they want to prevent a Terminator scenario.

The work they do is meant to be shared and distributed. Unlike many of the other companies we’ve mentioned who are running applications on k8s clusters, OpenAI is running deep learning experiments on a large scale. They primarily make use of Kubernetes for batch scheduling and autoscaling their experiments with low latency.

The team at OpenAI began using Kubernetes in 2016 after seeking a low-cost, highly portable solution. It allows them to run GPU-intensive experiments on-premises, CPU-intensive experiments in the cloud, and some experiments on whichever cluster has enough capacity.

Today, they’ve scaled their largest Kubernetes cluster to over 2500 nodes on Azure. Experiments that previously took months to deploy can now be done in weeks.

⭐Hope you enjoyed the article.⭐
Keep Learning !! Keep Sharing !!