Published on 02.12.2022
Aaand… KubeCon & CloudNativeCon NA 2022 is already over. We had the privilege of attending the event in Detroit from the 24th to the 29th of October. There, we spoke with many people from the cloud industry and watched a lot of talks on the exciting work the community is doing.
As we did for the previous edition of the event in Valencia, we distilled the information we gathered into a list of the major trends and noteworthy projects we spotted.
It’s interesting how some of the trends we identified in Valencia were confirmed, while others reversed entirely!
Without further ado, here are our highlights of KubeCon NA 2022.
After the KubeCon in Valencia, we were surprised to see how many people and companies were writing their own custom K8s controllers and Operators. At this KubeCon, the phenomenon was even more apparent: many people and companies we spoke to had written their own Operator or were in the process of writing one.
This is also reflected in the talk lineup: the number of sessions centered on writing and operating controllers was much higher than at the previous edition!
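All of these controllers share one core pattern: repeatedly compare the desired state (a resource’s spec) with the observed state of the world, and act to converge the two. Here is a minimal, framework-free sketch of that idea in Python — a toy with in-memory dicts standing in for the cluster, not the real client-go or controller-runtime API:

```python
# Toy illustration of the reconcile pattern behind Kubernetes controllers.
# All names are hypothetical; a real controller talks to the API server
# via watches and issues API calls instead of returning action strings.

def reconcile(desired: dict, observed: dict) -> list[str]:
    """Compute the actions needed to converge observed state to desired."""
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(f"create {name}")
        elif observed[name] != spec:
            actions.append(f"update {name}")
    for name in observed:
        if name not in desired:
            actions.append(f"delete {name}")
    return actions

desired = {"db-1": {"replicas": 3}, "db-2": {"replicas": 1}}
observed = {"db-1": {"replicas": 2}, "db-3": {"replicas": 1}}
print(sorted(reconcile(desired, observed)))
# -> ['create db-2', 'delete db-3', 'update db-1']
```

In a real controller this loop runs on every watch event, and each action would be an API call (for example, scaling a StatefulSet).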
In vanilla Kubernetes usage scenarios, an individual application runs within a single Kubernetes cluster. As organizations use Kubernetes more pervasively and their number of users grows, confining a workload to a single cluster can become too limiting. In these cases, they sometimes turn to spreading replicas of the same (stateless or stateful) application across multiple clusters, potentially in different cloud regions. One reason to do so is to reduce latency by placing the application closer to users in different geographical areas; another is increased availability.
Indeed, from our conversations at KubeCon, we sense that this use case is on the rise.
Of course, going multi-cluster comes with its own caveats, especially around networking. To learn more, check out our CEO Julian Fischer’s talk on this topic at the co-located event Data on Kubernetes.
Nowadays, the most common way to extend a Kubernetes cluster is to deploy an Operator (together with its CRDs) in it. The implication is that applications that want to consume the software provided by the Operator need to run in the same cluster as the Operator, which means that the Operator needs to be installed AND lifecycle-managed in that cluster.
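As a reminder, a CRD is itself just another API object registered in the cluster. A minimal, purely hypothetical example (group, names, and schema are illustrative) might look like this:

```yaml
# Hypothetical CRD; group, names, and schema are illustrative.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: postgresclusters.example.org
spec:
  group: example.org
  scope: Namespaced
  names:
    plural: postgresclusters
    singular: postgrescluster
    kind: PostgresCluster
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:
                  type: integer
```

The CRD only registers the API; the Operator watching it does the actual work, and that Operator has to be installed and kept up to date in every cluster where the CRD is used.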
This costs time and money, especially when organizations have many clusters, because the Operator(s) must then be installed and managed in more than just one cluster. And often, that’s completely unwanted: consumers of the CRDs would gladly use the software an Operator provides without having to manage the Operator itself.
The situation has been summarized in this tweet by Dr. Stefan Schimanski from Red Hat, which points out that nowadays, Operators are often incompatible with Software as a Service (SaaS).
Lo and behold, Dr. Stefan Schimanski and a few other engineers didn’t just complain but also prototyped a solution: kube-bind. It’s a new project that aims to make Operators, CRDs, and SaaS compatible.
The idea is very simple: there’s a “provider” cluster where the Operators run. In that cluster, Operators can “export” CRDs, which means that the CRDs can be made accessible in other clusters. Applications that want to consume the CRDs run in other clusters, which we’ll call “consumer” clusters here.
To make CRDs that the provider exports available in a consumer cluster, the CRD needs to be explicitly imported from within the consumer cluster. Then, custom resources can be created in the consumer cluster from the CRD. But the actual software and resources that back the CRDs run in the provider cluster, not the consumer one, just like the Operator.
The only thing that runs in the consumer cluster is a small agent that syncs custom resources back and forth between the consumer and provider clusters: the spec flows from the consumer cluster to the provider’s, so that the Operator can reconcile the custom resource; the status flows in the opposite direction, so that users learn whether their resources are running as intended.
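Conceptually, the agent’s job is a two-way sync. Here is a heavily simplified sketch of that idea, with in-memory dicts standing in for the two clusters’ API servers — this is not the real kube-bind API, which of course works through Kubernetes watches:

```python
# Toy model of a kube-bind-style sync agent: spec flows consumer -> provider,
# status flows provider -> consumer. Clusters are modeled as plain dicts;
# all names are illustrative, not the real kube-bind API.

def sync(consumer: dict, provider: dict) -> None:
    # Downstream: copy the desired state (spec) to the provider cluster,
    # where the Operator actually reconciles it.
    for name, obj in consumer.items():
        provider.setdefault(name, {})["spec"] = obj["spec"]
    # Upstream: copy the observed state (status) back to the consumer,
    # so users can see whether their resources are healthy.
    for name, obj in provider.items():
        if name in consumer and "status" in obj:
            consumer[name]["status"] = obj["status"]

consumer = {"my-db": {"spec": {"replicas": 3}}}
provider = {}
sync(consumer, provider)                        # spec propagated down
provider["my-db"]["status"] = {"ready": True}   # Operator reconciles there
sync(consumer, provider)                        # status propagated back
print(consumer["my-db"]["status"])              # -> {'ready': True}
```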
If you want to learn more, check out this talk!
In our recap of the KubeCon in Valencia, we wrote that people were giving up on having multi-tenancy within a single Kubernetes cluster and had made peace with the idea of giving separate tenants separate clusters. It turns out we were probably wrong.
We’ve met many people who are still actively looking for ways to implement multi-tenancy within the same cluster (for example, to save money). But most importantly, a project that might accomplish this was presented: kcp, a Kubernetes-like control plane.
kcp allows creating and deleting virtual Kubernetes clusters, called “workspaces”, extremely quickly and cheaply. A key idea behind this is that, out of the box, a workspace lacks most of the API resources found in normal Kubernetes clusters, such as Pods, Deployments, and so on (although you can re-add them).
The idea is that workspace users will only install the APIs (native or custom, such as CRDs) that they actually need. Each workspace still contains resources such as namespaces and those required to implement RBAC.
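To make the idea concrete, you can think of a workspace as a cheap logical control plane that starts with only a minimal API surface and grows on demand. The following toy model is purely illustrative and has nothing to do with kcp’s actual implementation, which shares storage and serving infrastructure to keep workspaces cheap:

```python
# Toy model of kcp-style workspaces: each workspace is a lightweight
# logical cluster that starts with only a minimal set of API resources.
# Purely illustrative; names and the minimal API set are assumptions.

MINIMAL_APIS = {"namespaces", "roles", "rolebindings"}  # namespaces + RBAC

class Workspace:
    def __init__(self, name: str):
        self.name = name
        # No Pods, Deployments, etc. by default -- workspaces start small.
        self.apis = set(MINIMAL_APIS)

    def add_api(self, api: str) -> None:
        """Opt in to an extra API (native or a CRD)."""
        self.apis.add(api)

ws = Workspace("tenant-a")
assert "pods" not in ws.apis     # workload APIs are absent out of the box
ws.add_api("postgresclusters")   # a tenant imports just the APIs it needs
```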
This project opens up a new world of possibilities for multi-tenancy, geo-replicated workloads, or blazing-fast provisioning of clusters for testing applications as part of a CI/CD workflow, to name a few.
Moreover, its full potential can be leveraged in conjunction with kube-bind to provide tenants with the functionality of Operators without running the Operators in the tenants’ clusters.
If you’re curious about it, check out this video on the project!
Sadly, something that hasn’t changed since the last KubeCon is the shortage of contributors in Kubernetes and most CNCF projects. However, an interesting initiative to onboard new contributors was launched at this KubeCon: ContribFest.
ContribFests are per-project, in-person sessions at KubeCon where attendees are split into groups and guided by the maintainers through a small contribution to the project.
If you’d like to become a contributor, register for a ContribFest at the next KubeCon!
Traditionally, the Kubernetes ecosystem has been great for stateless applications but lacking for stateful ones. The times are changing, however.
At this KubeCon there was a lot of talk about running stateful workloads and especially data-intensive ones on Kubernetes. This is a much-needed use case, and many organizations are working actively on it. This finding is consistent with the 2022 Data On Kubernetes Report, according to which “Kubernetes is on its way to becoming the industry standard for managing cloud-native data applications.”
We hope you found our list interesting. Stay tuned for our recap of the next KubeCon, which will take place in Amsterdam in April 2023.
Also, we’re hiring, so if you want to help us build the next generation of Kubernetes-native automation for data services, check our career page!
© anynines GmbH 2023