Once you’re deep into the Kubernetes ecosystem, Kubernetes becomes as defined by its boundaries as it is by its benefits.
Kubernetes is a platform that, through a central API server, allows controllers to watch and adjust the state of the cluster. The API server interacts with all the nodes to perform basic tasks like starting containers and passing along configuration items, such as the URI of the persistent storage a container requires.
Knowing what is or isn’t absolutely required to run a Kubernetes cluster can be beneficial when it comes to troubleshooting and scaling the components, especially as the organization manages a larger number of containers.
What is Vanilla Kubernetes?
The core Kubernetes project only contains six individual running pieces. They are as follows: 1) kube-apiserver, 2) kube-scheduler, 3) kube-controller-manager, 4) cloud-controller-manager, 5) kubelet, and 6) kube-proxy. That’s it.
1) The kube-apiserver is the core. Everything talks to this service in order to get anything done. The kube-apiserver is stateless by design.
2) The kube-scheduler matches new requests for pods to the nodes they’ll be run on. This decision can be influenced by various factors, including labels, resource requirements, affinity rules, and data locality – if local volumes are used, for example.
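As an illustration, a pod specification like the following gives the scheduler both resource requirements and a node label to match (the names and values here are hypothetical, purely for demonstration):

```yaml
# Sketch of a pod spec illustrating inputs the kube-scheduler considers.
apiVersion: v1
kind: Pod
metadata:
  name: web-frontend   # example name, not from a real cluster
spec:
  nodeSelector:
    disktype: ssd      # only schedule onto nodes labeled disktype=ssd
  containers:
  - name: web
    image: nginx:1.25
    resources:
      requests:
        cpu: "250m"    # the scheduler will only place this pod on a node
        memory: 128Mi  # with this much unreserved CPU and memory
```

The scheduler filters out nodes that fail these constraints, then scores the remainder to pick a placement.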
3) The kube-controller-manager contains four separate controllers that are bundled together for easier deployments. They include:
The node controller, which watches the health of the nodes in the cluster and reacts accordingly.
The replication controller, which matches the instances of each pod running with their desired state and requests starts/stops as required to meet the specification.
The service account and token controller, which creates default service accounts and API access tokens for new namespaces.
The endpoint controller, which maintains the mappings of pods to services (Endpoints objects) that components like cluster DNS rely on for service discovery.
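For example, given a Service with a label selector (the names below are illustrative), the endpoint controller continuously records the IPs of matching pods in an Endpoints object of the same name:

```yaml
# Illustrative Service: the endpoint controller tracks pods whose labels
# match the selector and stores their IPs in a matching Endpoints object,
# which service discovery components (e.g. CoreDNS) then resolve against.
apiVersion: v1
kind: Service
metadata:
  name: web-frontend
spec:
  selector:
    app: web          # pods labeled app=web become this Service's endpoints
  ports:
  - port: 80
    targetPort: 8080
```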
4) The final piece of the master control plane is the cloud-controller-manager. The cloud-controller-manager mirrors the structure of the kube-controller-manager but knows how to delegate to cloud-specific sub-components to request services and nodes from the configured cloud. Provisioning an Elastic Load Balancer on AWS when a Service of type LoadBalancer is created, or attaching EBS volumes to nodes, are a couple of examples of this.
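A sketch of a Service that would trigger that delegation – on AWS this typically results in a cloud load balancer being provisioned and wired to the cluster’s nodes (the name and ports are assumptions for illustration):

```yaml
# Hypothetical Service the cloud-controller-manager would act on.
apiVersion: v1
kind: Service
metadata:
  name: public-web
spec:
  type: LoadBalancer  # the cloud-specific trigger
  selector:
    app: web
  ports:
  - port: 443
    targetPort: 8443
```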
The two components that go on each and every node from the masters to the workers are the kubelet and kube-proxy processes.
5) The kubelet process is the supervisor on each node that interacts with the container runtime to manage the state of containers based on their specification. The specification can be passed along from a controller or through local manifests, as done on the master control plane nodes for pieces like the kube-apiserver.
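The static manifest mechanism is how a control plane can bootstrap itself: any pod manifest dropped into the kubelet’s manifest directory (commonly /etc/kubernetes/manifests/, though the path is configurable) is started by the kubelet directly, with no controller involved. An abbreviated, hypothetical sketch:

```yaml
# Abbreviated static pod manifest; image tag and flags are examples only.
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: registry.k8s.io/kube-apiserver:v1.28.0  # example version
    command:
    - kube-apiserver
    - --etcd-servers=https://127.0.0.1:2379        # illustrative flag
```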
6) The kube-proxy maintains the network rules on each node – typically via iptables or IPVS – that route traffic destined for Services to the correct pods, whether that traffic originates inside or outside the cluster.
Prerequisites for Kubernetes
The most commonly used products to fill in the prerequisites are Ubuntu for the Linux base, containerd (the runtime underneath Docker) for the container runtime, CoreDNS for service discovery (DNS), a CNI plugin to handle the networking layer, and etcd for the configuration store. This is the most common base configuration and is even used as part of the official Certified Kubernetes Administrator (CKA) exam.
Single node deployment of Kubernetes
If your objective is to run a minimalistic version of Kubernetes due to limited resources or you just need a place to test your application in a container, or test the deployment, pod, or service specifications you’ve written, then Kubernetes has an official mini distribution available called minikube. It can be started and stopped with a single command and contains the core features of Kubernetes in a developer-friendly option. This version is usually run on the developer’s actual workstation.
There are other community-based single machine distributions like minishift (Red Hat) and microk8s (Canonical). But, those distributions are vendor-specific spins on Kubernetes and drift away from what would be considered a Vanilla install.
To get started using minikube, there are installation instructions on the Kubernetes.io site for macOS, Windows, and Linux. These steps on a Windows desktop demonstrate the ease of getting started:
1) Download and run the installer from: https://storage.googleapis.com/minikube/releases/latest/minikube-installer.exe
2) Have Hyper-V or Virtualbox installed (to check for Hyper-V use “systeminfo” command)
3) Start it up:
minikube start --vm-driver=virtualbox
4) (Optional) Verify it’s running by listing all pods
kubectl get po -A
Multi-node deployment of Kubernetes
As most Kubernetes clusters running in the wild today are multi-node, there are benefits to knowing the bare minimum deployment required to set up such a cluster. As with anything in the technology world, there are multiple ways to accomplish this feat. One way is to follow the 14 hands-on labs that are part of Kubernetes the Hard Way by Kelsey Hightower, who is a Staff Developer Advocate at Google and a long-time Kubernetes evangelist.
This is designed to run on Google Cloud but walks through all the steps – from nothing to a barebones working cluster and then tearing it down again. This is as close to a Vanilla Kubernetes deployment as you can get.
Another way is to follow the Kubernetes documentation for using kubeadm to check the prerequisites and deploy a cluster. Using the kubeadm method still uses a lot of the same kubectl commands found in Kubernetes the Hard Way. But, its value comes through its simplification of numerous routine tasks, like adding and removing nodes and creating authentication tokens.
The complete set of steps are:
Prepare the servers running Linux (Windows is possible, but not common)
Install etcd, containerd, and the Kubernetes command line tools
Generate the certificates for etcd, cluster config, and cluster authentication
Configure and run etcd (can be a single node but 3, 5, or 7 nodes are the recommended best practice, and are often co-located on the master nodes in most clusters)
Configure and run the master control plane. (It’s possible to have only one, but three or more are required for high availability; three is the most common initial configuration for clusters)
Configure and run worker nodes. (Two is the recommended minimum, but it can be any number, including zero if you’re brave enough to run workloads on the master nodes)
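With kubeadm, the steps above collapse into a handful of commands. A sketch of the flow – the pod network CIDR is an example value, and the join token and hash are placeholders unique to each cluster, printed by kubeadm init:

```shell
# On the first master: initialize the control plane.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Copy the admin kubeconfig so kubectl can reach the new cluster.
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config

# On each worker node: join using the token from `kubeadm init`
# (the values below are placeholders, not real ones).
sudo kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```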
At this point you’ll have a Vanilla Kubernetes install.
If you want to make the cluster useful to developers, you’ll need to configure additional components, including routing and DNS services, which together allow for container-to-container communication and service discovery.
Next steps after Vanilla Kubernetes
With the 100+ certified Kubernetes distributions and hosted offerings available on the market today, there’s really no reason for the average organization to deploy Vanilla Kubernetes on servers beyond training purposes or very specific use cases, like developing plugins. For most purposes, minikube is adequate when just the basic building blocks of Kubernetes are needed. And, as demonstrated above, Kubernetes is never actually run pure: for reasons like enterprise networking capabilities or monitoring and alerting, a cluster quickly moves away from being a Vanilla install.
Kubernetes is a stable enough platform now and can often be considered table stakes in any container management offering. The value comes from what else is distributed with Kubernetes and pre-configured to run with it. By only using certified offerings, you can be confident in knowing that every Kubernetes offering has the same stable core.
The future of observability and real-time incident response in any environment, including Vanilla Kubernetes, includes VictorOps. Check out a 14-day free trial or request a personalized demo to learn how we make alerting, on-call scheduling and incident response suck less for modern DevOps and IT teams.
About the author
Vince Power is a Solution Architect who has a focus on cloud adoption and technology implementations using open source-based technologies. He has extensive experience with core computing and networking (IaaS), identity and access management (IAM), application platforms (PaaS), and continuous delivery.