Published on 24.12.2020
The original article in German: DATACENTER INSIDER
Kubernetes has gained more and more momentum over the last few years and appears to be stealing the thunder from other technologies. Hence the question arises whether Kubernetes threatens the existence of OpenStack.
On this question: Julian Fischer, CEO of anynines GmbH, Berlin, with his insider's viewpoint.
Beyond any doubt, OpenStack has problems of its own. Quite a few experts have underestimated the difficulty of transforming a given data center into a modern virtualized infrastructure. Many such projects failed for technical or financial reasons; lack of stability or of economic efficiency are just two of the many reasons why they may have been terminated at an early stage.
Moreover, virtual infrastructures are not easy to deal with even once they have become reliable. Their operation is just as challenging as their use.
The orchestration of virtual machines has brought immense progress to the operation of complex, distributed applications. Ever since, the initial setup of multipart application systems can be described through automation.
Ephemeral virtual machines (VMs) and persistent disks have evolved into essential automation paradigms.
Within this paradigm, one forgoes increasing the availability of any single VM. Every infrastructure should operate within the range of an optimal price-performance ratio, since beyond that point the value of each invested euro starts to decline rapidly, and software operation must be sustainable with the resulting availability.
Higher availability is demanded for persistent data, such as the data of a database. Such data is stored on a highly available storage server that provides its capacity as virtual network drives. A VM mounts these network drives to store data that is supposed to survive the demise of the VM. This procedure gave birth to the idea of self-healing systems:
A failed VM, whether due to an error or to scheduled maintenance of its host system, is simply recreated on another infrastructure host, and the corresponding network drive is remounted. This reduces repair times to the range of minutes.
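This repair loop can be sketched in a few lines of code; the `VM`, `Disk` and `recover` names below are illustrative assumptions, not any real infrastructure API:

```python
# Minimal sketch of the self-healing pattern: an ephemeral VM dies,
# a replacement is created on another host, and the persistent
# network drive is remounted. All names here are hypothetical.
import itertools

_vm_ids = itertools.count(1)

class Disk:
    """A persistent network drive that outlives any single VM."""
    def __init__(self, data):
        self.data = data  # survives VM failures

class VM:
    """An ephemeral VM; its local state is lost when it fails."""
    def __init__(self, host, disk):
        self.id = next(_vm_ids)
        self.host = host
        self.disk = disk   # mounted network drive
        self.alive = True

def recover(vm, healthy_hosts):
    """Recreate a failed VM on another host and remount its disk."""
    if vm.alive:
        return vm
    new_host = next(h for h in healthy_hosts if h != vm.host)
    return VM(new_host, vm.disk)  # same disk, fresh ephemeral VM

# Example: host-a fails, the database VM is rebuilt on host-b.
db = VM("host-a", Disk(data={"orders": 42}))
db.alive = False                       # host failure
db = recover(db, ["host-a", "host-b"])
assert db.host == "host-b" and db.disk.data["orders"] == 42
```

The essential point is that the `Disk` object is passed to the replacement VM unchanged, which is exactly why only persistent data, not the VM itself, needs to be highly available.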
This method bears one disadvantage: the enormous overhead of the VMs involved. For a clear separation of responsibilities, a mid-sized application system easily requires several dozen of them. Moreover, each VM runs its own operating system, or at least its own kernel, which quickly adds up to several hundred megabytes of memory per VM.
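A back-of-the-envelope calculation makes the waste tangible; the figures below are illustrative assumptions, not measurements:

```python
# Rough memory-overhead comparison (illustrative numbers): every VM
# carries its own OS/kernel footprint, while containers on one host
# share a single kernel.
instances = 36                  # a few dozen instances for a mid-sized system
vm_os_overhead_mb = 300         # assumed OS/kernel footprint per VM
container_overhead_mb = 5       # assumed runtime footprint per container

vm_total = instances * vm_os_overhead_mb             # 10800 MB
container_total = instances * container_overhead_mb  # 180 MB
print(f"overhead: {vm_total} MB as VMs vs {container_total} MB as containers")
```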
It didn't take long until the tech community recognized this waste and reacted with application platforms such as Heroku and, later, Cloud Foundry.
Application platforms directly target the application developer.
The tasks of system administrators then split into two major roles: first, the application developer with operational know-how, and second, the platform operator with basic knowledge of application development. The containerization of applications reduces the memory and CPU overhead from that of VMs to that of containers.
The more applications are operated, the more noticeable this effect becomes.
The container movement, often associated with Docker, has once again thrilled the technology community. A standardized format for building, storing, and distributing container images is a fundamental feature of this movement.
Application developers now get the chance to package applications locally but operate them in remote environments.
This procedure forms an alternative to the use of so-called buildpacks, known from platforms such as Heroku and Cloud Foundry. With buildpacks, application developers are responsible only for their application code and leave its transformation into a runnable application to the buildpack.
In contrast, developers who want to create their own container images must handle that transformation themselves. By now, it is even possible to use buildpacks to create container images.
The rising container trend has, like the earlier trend toward virtual machines, created an orchestration task. Just as virtual infrastructures have to orchestrate the scheduling of virtual machines, container platforms must orchestrate containers.
Today it has become obvious that this contest, often referred to as the 'Container Orchestration War', was won hands down by Kubernetes.
This fact has extensive consequences for many adjacent technologies.
Application platforms such as Cloud Foundry adapt, for example, by replacing their container orchestrator, the Diego subsystem, with Kubernetes. Moreover, these application platforms often leave a deep infrastructure footprint.
Their operation requires dozens of virtual machines and their orchestration; not counting user applications, the application platform alone demands many hundreds of gigabytes of memory and a large number of virtual CPUs.
Consequently, application platforms are challenged by their economies of scale: many applications must be operated on them before economies of scale make them efficient.
But once this point is reached, these platforms turn out to be enormously efficient. A moderately sized team can then easily manage many thousands of application instances.
The need for more efficient and user-friendly application operation is apparent, and examples can be found in all the technologies mentioned above. Here, too, Kubernetes seems to be gaining momentum.
Kubernetes offers a declarative language for describing distributed workloads. Unlike pure application platforms such as Cloud Foundry, which only allow the operation of stateless applications, Kubernetes, by means of StatefulSets, also allows the operation of stateful services such as databases, although the weaker isolation of input/output loads can pose challenges.
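The stable identity a StatefulSet gives its pods is what makes stateful services workable: replica i is always named `<set>-<i>` and is always paired with the same persistent volume claim, so a restarted pod reattaches to its old data. A toy sketch of that naming scheme (not the actual Kubernetes implementation):

```python
# Toy sketch of StatefulSet-style stable identity: replica i is always
# "<name>-<i>" and is always paired with the same persistent volume
# claim, so a restarted pod finds its old data again.
def stateful_set_pods(name, replicas):
    pods = []
    for i in range(replicas):
        pod = f"{name}-{i}"
        pvc = f"data-{name}-{i}"   # one claim per ordinal, reused on restart
        pods.append((pod, pvc))
    return pods

pods = stateful_set_pods("postgres", 3)
# → [('postgres-0', 'data-postgres-0'), ('postgres-1', 'data-postgres-1'),
#    ('postgres-2', 'data-postgres-2')]
```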
No wonder, then, that this inherent flexibility makes Kubernetes not only poach on the territory of application platforms but also push into the realm of virtual infrastructures. The assumption that Kubernetes, given its rapid spread, will soon be found in every data center leads to interesting implications for the automation of workloads typically packaged in virtual machines.
Anyone who has ever had to automate large, complex application systems on a global scale, and therefore across infrastructures, knows the pain involved in that venture.
There are plenty of public and on-premises infrastructure providers, and each of them offers its own, mostly proprietary, application programming interface (API) for remote control.
It is then up to the automation developer to build an infrastructure abstraction, for which there are many ideas and tools, but which in any case causes considerable extra expense.
And that, in turn, entails enormous waste.
For that very reason, the fortunes of platforms like Cloud Foundry depend, for one, on their ability to grant this infrastructure independence and, for another, on their ability to minimize operating expenses. Cloud Foundry has solved this task with an infrastructure automation tool called BOSH.
If Kubernetes is available on every infrastructure, and Kubernetes is able to operate heavyweight stateful application systems, the obvious conclusion is to operate any application on Kubernetes. That is why developers in many organizations nowadays ask for Kubernetes first and hardly want to deal with the underlying infrastructures.
It is convenient to use a single language to describe application workloads, and that language is the one Kubernetes speaks.
It therefore surprises no one that Cloud Foundry's (CF) container orchestrator was replaced by Kubernetes as a first step, and that all further CF components are being moved there in a second step.
Considering the average on-premises cloud stack, there is a hardware layer containing the physical network, the VM hosts, storage servers and the like. Above it, the virtualization layer may consist of infrastructures such as VMware vSphere or OpenStack. On this virtual infrastructure, one or more Kubernetes clusters are operated, realized with VMs. Application pods thus run in Kubernetes node VMs.
At this point Kubernetes has already devoured a great portion of the virtual infrastructure: the interaction with the user. The infrastructure merely serves as a vicarious agent and has lost its dominance over the APIs.
Still, the conquest of infrastructure territory is not yet complete. Again, the expert's eye spots optimization potential, as the orchestration of virtual machines resembles that of containers: both require a resource request that must be served from a pool of existing hosts or nodes in order to place either a VM or a pod.
The description of application workloads sometimes bears a problem that stems less from the design of Kubernetes than from the implementation of common container runtimes: the weak isolation of containers. A container is mostly a file system separated from the operating system, together with a defined number of CPU shares.
Even where the use of disk capacity is additionally limited, limits on input/output operations against disks and networks are often missing completely. Hence, a container can affect the performance of other containers on the same host, or even on the same network, which then becomes more and more unpredictable.
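The asymmetry can be made concrete with a simplified model: CPU shares (in the spirit of cgroup `cpu.shares`) divide CPU time proportionally under contention, while unconstrained disk I/O is simply first come, first served. All numbers are illustrative:

```python
# Simplified noisy-neighbour model: CPU is divided proportionally by
# shares (cgroup cpu.shares semantics), while unlimited disk I/O is
# first come, first served.
def cpu_fraction(shares, all_shares):
    """Fraction of CPU a container gets under full contention."""
    return shares / sum(all_shares)

shares = [1024, 1024, 2048]            # three containers on one host
print([round(cpu_fraction(s, shares), 2) for s in shares])  # [0.25, 0.25, 0.5]

# Disk bandwidth with no I/O limit: one greedy container can take
# almost everything, leaving the rest to starve unpredictably.
disk_bandwidth_mb_s = 500
greedy_io = 480                        # one container's unconstrained I/O
left_for_others = disk_bandwidth_mb_s - greedy_io   # 20 MB/s for everyone else
```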
It is possible to isolate I/O-critical workloads on dedicated Kubernetes nodes using affinity and anti-affinity rules, but this intervention in Kubernetes' placement algorithm shifts responsibility onto the respective developer. It could be avoided if containers were isolated more effectively.
What is more, placing automatically provisioned critical workloads in a Kubernetes cluster requires free Kubernetes nodes on which to isolate them. The capacity boundary of a Kubernetes node is used as a means to obtain the advantages of a virtual machine. But this is, at best, a workaround.
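The workaround amounts to a placement rule: an I/O-critical pod may only land on a node that hosts no other critical pod, which is roughly what a pod anti-affinity rule expresses. A toy scheduler under an assumed data model (this is not the real Kubernetes scheduler):

```python
# Toy anti-affinity placement: a critical pod must go to a node that
# currently hosts no other critical pod, mimicking a pod anti-affinity
# rule. The data model is an illustrative assumption.
def place_critical(pod, nodes):
    """Return the first node free of critical pods, or None."""
    for node, pods in nodes.items():
        if not any(p.endswith(":critical") for p in pods):
            pods.append(f"{pod}:critical")
            return node
    return None  # no free node: the workload stays unscheduled

nodes = {
    "node-1": ["web:normal", "etcd:critical"],
    "node-2": ["web:normal"],
}
print(place_critical("postgres", nodes))  # → node-2
print(place_critical("redis", nodes))     # → None (no free node left)
```

The second call fails precisely because isolation is bought with whole nodes, which is the capacity problem described above.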
Teaching Kubernetes to create highly isolated containers would certainly be the more elegant solution. Since highly isolated containers take much longer to start and cause much more overhead, such a solution is only useful for critical workloads.
Thus, there is a growing need for multiple container classes in Kubernetes, one providing lightweight containers and another providing the highly isolated variety. With 'Project Pacific', VMware offers the possibility of creating, via the Kubernetes API, VMs that play the part of highly isolated containers.
The increased fixed cost of a virtual machine compared to a container is one issue in recent development. What is wanted is a technology that grants higher isolation on the one hand, yet comes with little memory overhead and starts almost instantly on the other.
Such technologies already exist, for example 'Amazon Firecracker', software for creating lightweight VMs. Accordingly, there are also first Kubernetes distributions that use Firecracker to provide well-isolated containers.
Another area for recent Kubernetes distributions is the installation of Kubernetes directly on hardware (bare metal). Here, Kubernetes takes over the role of the virtual infrastructure. These are the very scenarios that require higher container isolation in order to avoid the aforementioned noisy-neighbour problems. Installing a Kubernetes distribution directly on hardware with standard containers is rightly viewed with scepticism.
To sum up, Kubernetes doubtlessly has a strong impact on the makers of infrastructures. OpenStack is one example, but VMware and public infrastructures are affected as well. It must be kept in mind that multiple hypotheses are being tested at once.
Current attempts to react to the powerful Kubernetes trend range from layering Kubernetes clusters on top of VMs, to embedding the Kubernetes API in virtualization products, to the complete replacement of traditional virtualization. If all these varieties find their customers, a decent minimum life span can be expected for each of them.
More than likely, Kubernetes clusters will be packed into VMs for years, and the investments in existing infrastructures will be written off in due form. But little by little, the wheat will be separated from the chaff: some approaches will disappear, while others will remain and grow in popularity.
For those ready to gamble, one could posit that too many technology layers piled on top of each other will one day collapse under their own weight. This, of course, would favour approaches that keep the Kubernetes API and blend various virtualization techniques into a nearly seamless transition from VMs to containers.
Developers will encounter Kubernetes on small edge devices, in on-premises data centers and on public infrastructures. Notions such as operating system, server architecture and hypervisor will fade from the everyday language of developers.
Finally, the focus will shift to the most essential aspect: describing workloads in terms of their business benefits. Technology recedes into the background.
Presumably, this trend will not cause the death of OpenStack or comparable technologies, but it will push their development towards commodity status, which will only be feasible through stronger integration with Kubernetes.