KubeCon & CloudNativeCon Detroit 2022 – The Recap

Aaand… KubeCon & CloudNativeCon NA 2022 is already over. We had the privilege of attending the event in Detroit from the 24th to the 29th of October. There, we spoke with many people from the cloud industry and watched many talks on the exciting work the community is doing.

As we did for the previous edition of the event in Valencia, we used the information we gathered to distill a list of the major trends and noteworthy projects we spotted.

It’s interesting how some of the trends we identified in Valencia were confirmed, while others reversed entirely!

Without further ado, here are our highlights of KubeCon NA 2022.

K8s Operators and Controllers Are Still Going Strong

After the KubeCon in Valencia, we were surprised to see how many people and companies were writing their own custom K8s controllers and Operators. At this KubeCon, the phenomenon was even more apparent: many of the people and companies we spoke to had written their own Operator or were in the process of doing so.

This was also reflected in the conference program: the number of sessions centered on writing and operating controllers was much higher than at the previous edition!
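
For readers who haven’t written one yet, here is a minimal sketch of what such a controller can look like when built with the popular controller-runtime library. The ConfigMapReconciler below is purely illustrative and not taken from any talk at the event (real Operators usually watch their own CRDs rather than ConfigMaps):

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/log"
)

// ConfigMapReconciler is a toy controller that is invoked whenever a
// ConfigMap changes.
type ConfigMapReconciler struct {
	client.Client
}

func (r *ConfigMapReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	logger := log.FromContext(ctx)

	var cm corev1.ConfigMap
	if err := r.Get(ctx, req.NamespacedName, &cm); err != nil {
		// The object may have been deleted in the meantime; nothing to do.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// A real controller would compare the desired state (spec) with the
	// observed state here and create, update, or delete resources accordingly.
	logger.Info("observed ConfigMap", "name", cm.GetName())
	return ctrl.Result{}, nil
}

func main() {
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
	if err != nil {
		panic(err)
	}
	if err := ctrl.NewControllerManagedBy(mgr).
		For(&corev1.ConfigMap{}).
		Complete(&ConfigMapReconciler{Client: mgr.GetClient()}); err != nil {
		panic(err)
	}
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}
```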

Multi-Cluster is on the Rise

In vanilla Kubernetes usage scenarios, an individual application runs within a single Kubernetes cluster. As organizations use Kubernetes more and more (and more pervasively) and their number of users increases, confining a workload within a single cluster can become too limiting. In these cases, they sometimes turn to spreading replicas of the same (stateless or stateful) application across multiple clusters, potentially in different cloud regions. One reason to do that is to reduce latency by placing the application closer to users in different geographical areas. Another one is increased availability.

Indeed, from our conversations at KubeCon, we sensed that this use case is on the rise.
Of course, going multi-cluster comes with its own caveats, especially around networking. To learn more, check out our CEO Julian Fischer’s talk on this topic at the co-located Data on Kubernetes event.

K8s and SaaS: United at Last?

Nowadays, the most common way to extend a Kubernetes cluster is to deploy an Operator (together with its CRDs) in it. The implication is that applications that want to consume the software provided by the Operator need to run in the same cluster as the Operator, which means that the Operator needs to be installed AND lifecycle-managed in that cluster.

This costs time and money, especially when organizations have many clusters and the Operator(s) must therefore be installed and managed in more than one of them. But often, that’s not wanted at all: consumers of the CRDs would gladly use the software an Operator provides without having to manage the Operator itself.

The situation has been summarized in this tweet by Dr. Stefan Schimanski from Red Hat, which points out how, nowadays, Operators are often incompatible with Software as a Service (SaaS).

Lo and behold, Dr. Stefan Schimanski and a few other engineers didn’t just complain but also prototyped a solution: kube-bind. It’s a new project that aims to make Operators, CRDs, and SaaS compatible.

The idea is very simple: there’s a “provider” cluster where the Operators run. In that cluster, Operators can “export” CRDs, which means that the CRDs can be made accessible in other clusters. Applications that want to consume the CRDs run in other clusters, which we’ll call “consumer” clusters here.

To make CRDs that the provider exports available in a consumer cluster, the CRD needs to be explicitly imported from within the consumer cluster. Then, custom resources can be created in the consumer cluster out of the CRD. But the actual software or resources that back the CRDs run in the provider cluster, not the consumer one, just like the Operator.

The only thing that runs in the consumer cluster is a small agent that syncs custom resources back and forth between the consumer and provider cluster: the spec flows from the consumer cluster to the provider’s, so that the Operator can reconcile the custom resource, while the status flows in the opposite direction, so that the user learns whether their resources are running as intended.
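
To make the flow a bit more concrete, here is a minimal sketch of what such a sync conceptually does, written against controller-runtime’s generic client. This is not kube-bind’s actual implementation (which is informer-driven and handles many more cases); the function name and clients are our own illustration.

```go
package sync

import (
	"context"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// syncOnce copies the spec of one custom resource from the consumer cluster
// to the provider cluster and the status back; this is, in essence, what the
// kube-bind agent automates continuously.
func syncOnce(ctx context.Context, consumer, provider client.Client,
	gvk schema.GroupVersionKind, key client.ObjectKey) error {

	consumerObj := &unstructured.Unstructured{}
	consumerObj.SetGroupVersionKind(gvk)
	if err := consumer.Get(ctx, key, consumerObj); err != nil {
		return err
	}

	providerObj := &unstructured.Unstructured{}
	providerObj.SetGroupVersionKind(gvk)
	if err := provider.Get(ctx, key, providerObj); err != nil {
		return err
	}

	// Spec: consumer -> provider, so that the Operator in the provider
	// cluster can reconcile the resource.
	providerObj.Object["spec"] = consumerObj.Object["spec"]
	if err := provider.Update(ctx, providerObj); err != nil {
		return err
	}

	// Status: provider -> consumer, so that users see whether their
	// resource is running as intended.
	consumerObj.Object["status"] = providerObj.Object["status"]
	return consumer.Status().Update(ctx, consumerObj)
}
```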

If you want to learn more, check out this talk!

KCP: Time for Multi-Tenancy on K8s?

In our recap of the KubeCon in Valencia, we wrote that people were giving up on having multi-tenancy within a single Kubernetes cluster and had made their peace with the idea of giving separate tenants separate clusters. It seems we were wrong.

We’ve met many people who are still actively looking for ways to implement multi-tenancy within the same cluster (for example, to save money). But most importantly, a project that might accomplish this was presented: kcp – a Kubernetes-like control plane.

kcp allows you to create and delete virtual Kubernetes clusters called “workspaces” extremely quickly and cheaply. A key idea behind this is that, out of the box, a workspace lacks most of the Kubernetes API resources found in normal Kubernetes clusters, such as Pods, Deployments, and so on (although you can add them back).

The idea is that workspace users will only install the APIs (native or custom, such as CRDs) that they actually need. Each workspace still contains resources such as Namespaces and those required to implement RBAC.

This project opens up a new world of possibilities for multi-tenancy, geo-replicated workloads, or blazing-fast provisioning of clusters for testing applications as part of a CI/CD workflow, to name a few.

Moreover, kcp’s full potential can be leveraged in conjunction with kube-bind to provide tenants with the functionality of Operators without running the Operators in their workspaces.
If you’re curious about it, check out this video on the project!

K8s Still Needs Contributors

Sadly, something that hasn’t changed since the last KubeCon is the shortage of contributors to Kubernetes and most CNCF projects. However, an interesting initiative to onboard new contributors was launched at this KubeCon: ContribFest.

ContribFests are per-project, in-person sessions at KubeCon, where attendees are split into groups and guided by the maintainers through a small contribution to the project.
If you’d like to become a contributor, register for a ContribFest at the next KubeCon!

Data on Kubernetes Wanted

Traditionally, the Kubernetes ecosystem has been great for stateless applications but lacking for stateful ones. The times are changing, however.

At this KubeCon there was a lot of talk about running stateful workloads, and especially data-intensive ones, on Kubernetes. This is a much-needed use case, and many organizations are actively working on it. This finding is consistent with the 2022 Data on Kubernetes Report, according to which “Kubernetes is on its way to becoming the industry standard for managing cloud-native data applications.”

We hope that you found our list interesting. Stay tuned for our recap of the next KubeCon, which will take place in Amsterdam in April 2023.

Also, we’re hiring, so if you want to help us build the next generation of Kubernetes-native automation for data services, check our career page!

Error Handling in GO – Part 3

Chapter 3 – Writing a Custom Error Type

In the first chapter, we learned the basics of errors in Go: how to create an error with a particular message. Then we looked into exposing different types of errors from our packages so that consumers of our code can handle them differently, if appropriate.

In the second chapter of our series, we learned how to enhance our error messages to give the user better information: we added additional information to our error messages while retaining the ability to distinguish between different error cases. Whoever receives the error message can use this additional information to, hopefully, resolve their issue. In our example of a configuration provider, we return a syntax error if the provided configuration has invalid syntax.

We then learned that we can wrap errors with additional information, such as the line number of the syntax error. However, this additional information is not easily machine-readable, so we cannot use it to refine our automated error handling, such as retry logic; we can only differentiate based on the error we wrapped with the additional information.
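
As a quick refresher, the wrapping we ended up with looks roughly like this (ErrInvalidSyntax and parseConfig are simplified stand-ins for the configuration provider used in the previous chapters):

```go
package config

import (
	"errors"
	"fmt"
)

// ErrInvalidSyntax is the sentinel error exposed to consumers of this package.
var ErrInvalidSyntax = errors.New("invalid configuration syntax")

func parseConfig(raw string) error {
	line := 3 // pretend the parser failed on line 3
	// Wrapping with %w keeps ErrInvalidSyntax detectable via errors.Is,
	// but the line number only lives inside the message string.
	return fmt.Errorf("parsing configuration: line %d: %w", line, ErrInvalidSyntax)
}

func handle() {
	err := parseConfig("not: [valid")
	if errors.Is(err, ErrInvalidSyntax) {
		// We can branch on the sentinel, but we cannot programmatically
		// extract the line number from the wrapped error.
		fmt.Println(err)
	}
}
```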

In this part, you will learn how to create your own error types, which can have any behavior you find helpful. We will use this to resolve an issue we found in the last chapter: how to decorate an error returned from a library our application uses, decide how to handle it based on our sentinel errors, and yet still have access to the original error. Once you know how to implement custom error types, we will show you some examples of where this may be useful.
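
To give a taste of where this chapter is headed, here is a minimal sketch of such a custom error type. The names (ConfigError, ErrInvalidSyntax) are our own simplified examples, not necessarily the ones used in the rest of the article:

```go
package config

import (
	"errors"
	"fmt"
)

// ErrInvalidSyntax is the sentinel error that consumers compare against.
var ErrInvalidSyntax = errors.New("invalid configuration syntax")

// ConfigError decorates an error returned by an underlying library with the
// line number and the sentinel it should be treated as.
type ConfigError struct {
	Line     int
	Sentinel error // which sentinel this error corresponds to
	Cause    error // the original error from the library
}

func (e *ConfigError) Error() string {
	return fmt.Sprintf("line %d: %v", e.Line, e.Cause)
}

// Unwrap exposes the original library error to errors.Is and errors.As.
func (e *ConfigError) Unwrap() error { return e.Cause }

// Is makes errors.Is(err, ErrInvalidSyntax) succeed as well.
func (e *ConfigError) Is(target error) bool { return target == e.Sentinel }

func example(libErr error) {
	err := &ConfigError{Line: 3, Sentinel: ErrInvalidSyntax, Cause: libErr}

	fmt.Println(errors.Is(err, ErrInvalidSyntax)) // true, via the Is method
	fmt.Println(errors.Is(err, libErr))           // true, via Unwrap
	fmt.Println(err.Line)                         // the line number is now machine-readable
}
```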

(more…)

Error Handling in GO – Part 2

Chapter 2: Using fmt.Errorf to Wrap Sentinel Errors of the Interface

We finished the previous chapter of this series by returning sentinel error values and noticed that we may want to provide additional information to the user. We learned that sentinel errors are values of type error that you can expose within your Go project by making them public variables. This allows users of your project to compare these values with the errors returned from your project’s functions and handle different kinds of errors gracefully.
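
In code, the sentinel pattern described above, together with the fmt.Errorf wrapping this chapter adds to it, looks roughly like this (ErrNotFound and LoadConfig are illustrative names, not necessarily the ones used in the series):

```go
package config

import (
	"errors"
	"fmt"
)

// ErrNotFound is a sentinel error: a public variable that callers can
// compare against the errors returned by this package.
var ErrNotFound = errors.New("configuration file not found")

func LoadConfig(path string) error {
	// fmt.Errorf with the %w verb wraps the sentinel, adding context to the
	// message while keeping the sentinel detectable via errors.Is.
	return fmt.Errorf("loading %q: %w", path, ErrNotFound)
}

func handle() {
	err := LoadConfig("/etc/app/config.yaml")
	if errors.Is(err, ErrNotFound) {
		fmt.Println("falling back to default configuration:", err)
	}
}
```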

(more…)