For anyone who works with serverless computing or cloud-based application development, Jacques Chester says Knative frees up a lot of a developer’s time and mindshare. Developers already have to think about many different moving parts when building out a cloud system or application. However, as a Senior Software Engineer at VMware, Jacques believes that there are some decisions that developers shouldn’t have to worry about making.
In this article, Jacques explains why Knative could be the answer to those stressors. He outlines how Knative removes some of the complexities of serverless computing, how that enables developers to work faster, and which use cases suit Knative best.
If Knative is new to you, here’s the elevator pitch: It’s a serverless computing system and collection of components that build on the existing Kubernetes system. As an open source project made by developers from companies like Google and Pivotal, Knative provides several tools that integrate natively with Kubernetes, and enable other middleware components for building various applications anywhere - ‘on prem’, in the cloud, or even in third-party data centers.
This automates a lot of the work you’d otherwise have to do manually - things like building containers or deploying containerised code into a serverless environment within a Kubernetes-based pipeline. What really excites Jacques about Knative, though, is that it allows developers like himself to build more complex serverless systems faster and more easily by abstracting away many of the complexities.
Jacques has been working with Knative for a number of years, and is also busy writing a book about it. What drew him to it most, though, is how it reduces the mindshare it takes to set up a serverless system. It does this by separating the ‘accidental complexities’ from the ‘core (or essential) complexities’:
- Accidental complexities are the things you have to figure out as you go, and problems you have to solve for, that don’t directly impact your mission. This could include having to choose which components to install, installing them, and making sure they connect.
- Core/essential complexities, on the other hand, are the things that define, make, and break your mission directly. Jacques explains that he wants to focus on the core complexity of the thing so he can say things like, “I have my application running, and I want to read the logs. Where are they? I’m running Knative, and I want to change the distribution of traffic between two revisions of my software. So do this. I don’t want to have to sweat the details.”
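The traffic-distribution example Jacques gives maps to a short declarative stanza in a Knative Service manifest. A minimal sketch - the service name, image, and revision names here are hypothetical:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                         # hypothetical service name
spec:
  template:
    spec:
      containers:
        - image: example.com/hello:v2 # hypothetical image
  traffic:
    # Keep 80% of requests on the old revision, shift 20% to the new one
    - revisionName: hello-00001
      percent: 80
    - revisionName: hello-00002
      percent: 20
```

The developer states the desired split; Knative’s routing layer handles the details of actually dividing the requests.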
By removing the ‘accidental complexities’ from a developer’s to-do list, Knative preempts many of Jacques’ decisions. This lets him work more productively and more efficiently: Instead of writing systems from scratch and worrying about how to integrate them or what to install, his team can trust that their systems will have the same ‘baseline’ installed, and so can focus purely on the ‘core complexities’ within their individual scopes:
“If I’m building an application that helps a water charity keep track of the wells it needs to repair in a developing country - that’s not relevant to what I’m doing. It’s relevant as a necessity, but it’s an accidental complexity; it’s not the core complexity of the problem. The effort I’m investing there should be about better understanding the domain model of drilling for water, or the time it takes to repair something.”
Below, Jacques discusses:
- How Knative frees up his mindshare
- Use cases he’s found are most effective for Knative
How Knative frees up mindshare
As mentioned above, setting up a serverless computing system can come with many ‘accidental complexities’, which take up a lot of a developer’s free mindshare:
“There’s a lot of cognitive overhead in having to learn 20 different things just to see which five things are relevant for what you’re doing”, Jacques explains. In Kubernetes, for example, a developer might have to learn about Kubernetes’ services and ingresses, pick an ingress controller to use, decide on naming conventions, decide how services will be put together, and configure the ingress controller. “These things pile up,” he says.
Instead of making both kinds of complexity a developer’s problem, Knative removes the details of the accidental complexities, which takes away the cost of reinvention and ‘drag’ that normally slows a developer down:
“The thing that is easy to see when you’re building your own platform is what you want it to be. That doesn’t have many details. The thing that gets you is everything else; there are so many details to consider. I have become a fan of not thinking about details, and letting somebody else do it.”
Two ways in which Knative preempts and takes care of those details are:
- It auto-installs key components a developer would otherwise have to worry about, and
- It connects event-driven systems across an organisation for easier communication between teams, and between developers and operators
Auto-installing key components
Normally, setting up a system from scratch means choosing all the components needed, and then installing them. “If I want all these features, like logging, metrics, tracing, ingress controls, and so on, these things don’t happen by themselves; we have to install them.” Knative installs many of those things automatically, like Istio, which itself sets up a few of its own components.
“Sometimes,” he explains, “it’s better to accept the limits that are imposed by taking an outside tool, than to embrace the ultimate customisability - because customisability is advantageous in the areas where you need it, but it’s a cost everywhere else.”
In Jacques’ experience, too much customisability when setting up a system from scratch means that even a simple idea can quickly become very complex. With Knative, he doesn’t have to worry about what gets installed; he only needs to know where to find the things that get installed, or how to switch off what he doesn’t need. This saves him the time cost of first figuring out what to install, and then actually installing it:
“Knative liberates me from having to think about details that are not relevant to my direct mission. As more people start to use Kubernetes, it becomes less viable - just on practicality - to do everything yourself.”
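To make the contrast concrete: under Knative, a single manifest like the hypothetical one below is roughly all a developer writes. The routing, revision, and autoscaling plumbing that would otherwise mean hand-writing a Deployment, a Service, an Ingress, and an autoscaler is derived by the components Knative installs (including the ingress layer, such as Istio).

```yaml
# A complete Knative application manifest; name and image are hypothetical.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: well-tracker
spec:
  template:
    spec:
      containers:
        - image: example.com/well-tracker:latest
# From this one document, Knative's installed components create the
# revision, the route, the autoscaler, and the external endpoint.
```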
Connecting event-driven systems across an organisation
When many different developers are setting up many different systems from scratch across the organisation, Jacques has found it hard to connect them all later. In his experience, communication between systems is often an afterthought, which means that when things break, it’s hard to find out where the breakages are.
In this sense, Knative can serve as a ‘dab of glue’, or connective tissue, between many different systems. This makes it easier for developers to:
- Work from a language-agnostic workflow,
- Communicate with each other about system updates, and
- Connect systems together cleanly, and in a way that makes it easy to surface things when they break
“Historically,” Jacques recalls, “people did this in all sorts of ways. You’d have little shell scripts hanging out in strange places, for example, being run by Cron. You would have an endpoint sprouting on a web service somewhere that didn’t really belong there, but was somewhere where they could deploy it.”
Instead, Knative builds all of that complexity into a consistent structure: “In Knative, all these relationships are defined initially with YAML. It’s a declarative document that you can check into a repository, and it’s all in a consistent structure. If I’m looking for the dabs of glue, and I exercise a little bit of development discipline, I can find them all in one place, and all in the same format.”
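As a sketch of what one of these ‘dabs of glue’ looks like in practice, a Knative Eventing Trigger declares, in a single document you can check into a repository, which events flow to which service. The broker, event type, and service names below are hypothetical:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: order-created-trigger
spec:
  broker: default
  filter:
    attributes:
      # Only deliver events of this (hypothetical) CloudEvents type
      type: com.example.order.created
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-handler
```

Because every such connection lives in the same declarative format, finding the glue later means searching the repository rather than hunting for shell scripts and stray endpoints.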
Best use cases for using Knative
Although Knative has many benefits for developers, Jacques says there are two situations in which it makes the most sense to use:
When you aren’t worried about the intricacies of Kubernetes
Knative automates a lot of the ‘DIY’ aspects of Kubernetes. So if you’re a junior developer keen to build a deep understanding of Kubernetes services and systems, or an ‘operator’ (someone whose job it is to install, configure, and monitor a Kubernetes system), Knative will probably take away the kind of customisation you’d want in that context.
If none of those cases applies to your context, Knative carries the load of customisation that would otherwise risk overcomplicating both your workflow and your system. It’ll give you back the cognitive bandwidth to focus on the things you care about.
When your workload is cost-sensitive, but not necessarily latency-sensitive
The tradeoff of latency-sensitivity for cost-sensitivity is, as Jacques explains, “something where you want to be efficient, but you don’t necessarily mind if somebody, once in a while, has to wait a few seconds to get a response.” Knative is still under active development, so although latency has been cut down a lot, Jacques says developers should keep that consideration in mind when implementing Knative.
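That tradeoff is tunable per service through Knative’s standard autoscaling annotations on the revision template. A sketch - the service name and image are hypothetical:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: report-generator
spec:
  template:
    metadata:
      annotations:
        # Allow scale-to-zero: cheapest to run, but an idle service
        # pays a cold-start delay on the next request.
        # Setting this to "1" instead keeps a warm instance:
        # a higher baseline cost, but no cold-start wait.
        autoscaling.knative.dev/min-scale: "0"
    spec:
      containers:
        - image: example.com/report-generator:latest
```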
For developers, the time and cognitive costs that Knative saves enable deeper focus on what really matters in setting up serverless systems. Abstracting away the accidental complexities helps constrain the scope of a system, which gives developers more time and more cognitive bandwidth for figuring out the core complexities of their workloads and their systems.