eBPF or not, sidecars are the future of the service mesh

eBPF is a hot topic in the Kubernetes world, and the idea of using it to build a “sidecar-free service mesh” has been getting a lot of attention recently. Proponents of this idea claim that eBPF lets them reduce the complexity of the service mesh by removing sidecars. What is left unsaid is that this model simply replaces sidecar proxies with per-host proxies – a significant step backward for both security and operability that increases, not decreases, complexity.

The sidecar model represents a tremendous breakthrough for the industry. Sidecars enable the dynamic injection of functionality into the application at runtime, while retaining – critically – all the isolation guarantees achieved by containers. Moving from sidecars to multitenant, shared proxies loses this critical isolation and causes significant security and operability regressions.

In fact, the service mesh market has seen this firsthand: the first service mesh, Linkerd 1.0, offered a “sidecar-free” service mesh circa 2017 using this same per-host proxy model, and the resulting challenges in operations, management, and security led directly to a Linkerd 2.0 built on sidecars.

eBPF and sidecars are not an either-or choice, and the claim that eBPF must replace sidecars is a marketing construct, not an actual requirement. eBPF has a future in the service mesh, but it will be as eBPF and sidecars, not eBPF or sidecars.

eBPF in a nutshell

To understand why, we must first understand eBPF. eBPF is a powerful feature of the Linux kernel that allows applications to dynamically load and execute code directly in the kernel. This can provide a substantial performance boost: rather than continually moving data between kernel and user space for processing, we can do the processing in the kernel itself. That boost means eBPF opens up a whole class of applications that were previously unfeasible, especially in areas such as network observability.
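To make this concrete, here is a minimal sketch of what such a kernel-loaded program looks like, written against the common XDP hook using libbpf conventions. It is purely illustrative, not from the article, and the file and function names are invented:

    /* xdp_pass.c -- a minimal eBPF program (illustrative sketch).
     * Build: clang -O2 -g -target bpf -c xdp_pass.c -o xdp_pass.o
     * Load:  ip link set dev eth0 xdpgeneric obj xdp_pass.o sec xdp
     */
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("xdp")
    int xdp_pass(struct xdp_md *ctx)
    {
        /* This body runs inside the kernel for every packet received on
         * the interface, with no copy into userspace -- which is exactly
         * where eBPF's performance win comes from. */
        return XDP_PASS;
    }

    char LICENSE[] SEC("license") = "GPL";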

But eBPF is not a silver bullet. eBPF programs are very limited, and for good reason: running code in the kernel is dangerous. To keep bad actors at bay, the kernel imposes significant constraints on eBPF code, not the least of which is the “verifier”. Before an eBPF program is allowed to run, the verifier performs a series of rigorous static analysis checks on it.

Automatically verifying arbitrary code is difficult, and the consequences of errors are asymmetric: rejecting a perfectly safe program is an annoyance for developers, but allowing a dangerous program to run would be a major kernel security vulnerability. Because of this, eBPF programs are severely limited: they cannot block, cannot contain unbounded loops, and cannot even exceed a predefined size. The verifier must evaluate every possible execution path, which means the overall complexity of an eBPF program is bounded.
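As a sketch of what that means in practice (illustrative, not from the article; it assumes a reasonably recent kernel that supports bounded loops), compare a loop the verifier can prove safe with one it cannot:

    /* verifier_demo.c -- illustrative sketch of the verifier's limits. */
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("xdp")
    int bounded_scan(struct xdp_md *ctx)
    {
        unsigned char *data = (void *)(long)ctx->data;
        unsigned char *data_end = (void *)(long)ctx->data_end;

        /* Accepted: the bound is a compile-time constant and every access
         * is checked against the packet bounds, so the verifier can prove
         * both termination and memory safety. (The scan itself is
         * placeholder logic.) */
        for (int i = 0; i < 64; i++) {
            if (data + i + 1 > data_end)
                break;
            if (data[i] == 0xFF)
                return XDP_DROP;
        }

        /* By contrast, a loop like `while (!done) { ... }`, whose bound
         * depends on data the verifier cannot reason about, is rejected
         * at load time -- even if it happens to be safe. */
        return XDP_PASS;
    }

    char LICENSE[] SEC("license") = "GPL";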

Thus, eBPF is only suitable for certain kinds of work. For example, functions that require limited state, such as “count the number of network packets matching an IP address and a port”, are relatively simple to implement in eBPF. Programs that require accumulating state in a non-trivial way – for example, “parse this HTTP/2 stream and do a regular expression match against a user-supplied configuration”, or even “negotiate this TLS handshake” – are either outright impossible to implement in eBPF or require Rube Goldberg levels of contortion.
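The packet-counting example is simple enough to sketch in full. The following is an illustrative XDP program (the address, port, and names are hypothetical, not from the article) that increments a map-held counter for matching TCP packets; userspace reads the counter out of the map whenever it likes:

    /* count_packets.c -- sketch of "count packets matching an IP and port". */
    #include <linux/bpf.h>
    #include <linux/if_ether.h>
    #include <linux/in.h>
    #include <linux/ip.h>
    #include <linux/tcp.h>
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_endian.h>

    #define TARGET_ADDR 0x0A000001 /* 10.0.0.1 -- hypothetical address */
    #define TARGET_PORT 8080       /* hypothetical port */

    /* A single-slot array map holds the counter. */
    struct {
        __uint(type, BPF_MAP_TYPE_ARRAY);
        __uint(max_entries, 1);
        __type(key, __u32);
        __type(value, __u64);
    } pkt_count SEC(".maps");

    SEC("xdp")
    int count_matching(struct xdp_md *ctx)
    {
        void *data = (void *)(long)ctx->data;
        void *data_end = (void *)(long)ctx->data_end;

        /* Walk the headers, bounds-checking each step for the verifier. */
        struct ethhdr *eth = data;
        if ((void *)(eth + 1) > data_end || eth->h_proto != bpf_htons(ETH_P_IP))
            return XDP_PASS;

        struct iphdr *ip = (void *)(eth + 1);
        if ((void *)(ip + 1) > data_end || ip->protocol != IPPROTO_TCP)
            return XDP_PASS;

        struct tcphdr *tcp = (void *)ip + ip->ihl * 4;
        if ((void *)(tcp + 1) > data_end)
            return XDP_PASS;

        if (ip->daddr == bpf_htonl(TARGET_ADDR) &&
            tcp->dest == bpf_htons(TARGET_PORT)) {
            __u32 key = 0;
            __u64 *count = bpf_map_lookup_elem(&pkt_count, &key);
            if (count)
                __sync_fetch_and_add(count, 1);
        }
        return XDP_PASS;
    }

    char LICENSE[] SEC("license") = "GPL";

Note how little state is involved: one integer in one map. That is exactly the shape of problem eBPF is good at.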

eBPF and the Service Mesh

Now let’s move on to service meshes. Can we replace our sidecars with eBPF?

Predictably, given the limitations of eBPF, the answer is no – what the service mesh does is far beyond what pure eBPF is capable of. Service meshes handle all the complexities of modern cloud native networking. Linkerd, for example, initiates and terminates mutual TLS; retries requests on transient failures; transparently upgrades connections from HTTP/1.x to HTTP/2; enforces authorization policies based on the workload’s cryptographic identity; and much more.

Like most service meshes, Linkerd does this by inserting a proxy into each application pod – the proverbial sidecar. In Linkerd’s case, it’s the ultra-lightweight Linkerd2-proxy “micro proxy”, written in Rust and designed to consume as few system resources as possible. This proxy intercepts and augments all TCP communications to and from the pod and is ultimately responsible for implementing the full feature set of the service mesh.

Some of what this proxy does could be implemented in eBPF. For example, sometimes the sidecar’s job is simply to proxy a TCP connection to a destination, with no parsing or L7 logic involved; this could be offloaded to the kernel using eBPF, as sketched below. But the majority of what the sidecar does requires significant state and is impossible, or at best infeasible, to implement in eBPF.
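For that state-light case, here is a sketch of what such an offload can look like (illustrative only; this is not Linkerd’s implementation, and the port-based keying is a deliberate simplification): the userspace proxy accepts the two TCP connections and registers them as peers in a sockhash map, after which an sk_msg program splices bytes between them inside the kernel:

    /* sock_splice.c -- sketch of kernel-level TCP splicing via sockmap. */
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    /* The userspace proxy inserts each accepted socket here, keyed (in
     * this simplified sketch) by the connection's local port and pointing
     * at its peer socket. */
    struct {
        __uint(type, BPF_MAP_TYPE_SOCKHASH);
        __uint(max_entries, 65536);
        __type(key, __u32);
        __type(value, __u64);
    } peer_socks SEC(".maps");

    SEC("sk_msg")
    int splice_to_peer(struct sk_msg_md *msg)
    {
        __u32 key = msg->local_port;

        /* Hand the payload straight to the peer socket's transmit path,
         * skipping the read()/write() round trip through the userspace
         * proxy. If no peer is registered, the helper fails and SK_PASS
         * lets the data flow through the proxy as usual. */
        bpf_msg_redirect_hash(msg, &peer_socks, &key, 0);
        return SK_PASS;
    }

    char LICENSE[] SEC("license") = "GPL";

Everything with real protocol state – mTLS, retries, HTTP/2 multiplexing, policy – stays in the proxy.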

So even with eBPF, the service mesh still needs userspace proxies.

The case for sidecars

If we are designing a service mesh, it is up to us to decide where to place the proxies. Architecturally, we could put them at the sidecar level, at the host level, at the cluster level – or somewhere else entirely. But from an operational and security standpoint, there is really only one answer: compared to all the alternatives, sidecars offer substantial, concrete advantages in security, maintainability, and operability.

A sidecar proxy handles all traffic to a single application instance. In effect, it operates as part of the application. This translates into significant advantages:

  • Sidecar proxy resource consumption scales with traffic load to the application, so Kubernetes resource limits and requests are directly applicable.
  • The “blast radius” of a sidecar failure is limited to the pod, so Kubernetes’ pod lifecycle controls are directly applicable.
  • Upgrading sidecar proxies is handled the same way as upgrading application code, e.g. via rolling deployments.
  • The security boundary of a sidecar proxy is clearly delineated and tightly bounded: the sidecar proxy contains only secret material relating to that pod and acts as an enforcement point for the pod. This granular enforcement is at the heart of zero-trust approaches to network security.

In contrast, per-host proxies (and other forms of multitenancy, such as cluster-wide proxies) handle traffic to whatever arbitrary set of Kubernetes pods happens to be scheduled on the host. This means all the operational and security advantages of sidecars are lost:

  • Per-host proxy resource consumption is unpredictable: it is a function of Kubernetes’ dynamic scheduling decisions, which means resource limits and requests are no longer useful – you cannot know in advance how many system resources the proxy will require.
  • Per-host proxies must ensure fairness and QoS, or applications risk starvation. This is a non-trivial requirement, and no popular proxy is designed to handle this form of “concurrent multitenancy”.
  • The blast radius for per-host proxies is large and continually changing. A failure in a single per-host proxy affects an arbitrary set of pods from arbitrary applications scheduled on that host. Likewise, upgrading a per-host proxy impacts arbitrary applications to arbitrary degrees, depending on which pods are scheduled on the machine.
  • The security story is… messy. A per-host proxy must hold the key material for every pod scheduled on that host and must perform enforcement on behalf of all of those applications. This turns the proxy into a new attack vector vulnerable to the confused deputy problem, and any CVE or flaw in the proxy now has a significantly greater security impact.

In short, sidecar proxies build on the isolation guarantees achieved through containers, allowing Kubernetes and the kernel to enforce security and fairness. Per-host proxies escape those guarantees entirely, introducing significant complexity in operations, security, and maintenance.

So where do we go from here?

eBPF is a big step forward for networking and can optimize some service mesh work by moving it into the kernel. But even with eBPF, the service mesh still requires userspace proxies. Given that, the right approach is to combine eBPF and sidecars, not to avoid sidecars.

Offering a sidecar-free service mesh with eBPF means putting the marketing cart before the engineering horse. Sure, “incrementally improving sidecars with eBPF” doesn’t quite have the same buzz factor as “goodbye, sidecars”, but from the user’s perspective it’s the right decision.

The sidecar model is a tremendous breakthrough for the industry. It’s not without its challenges, but it is by far the best approach we have for managing all of cloud native networking while maintaining the isolation guarantees gained by adopting containers in the first place. eBPF can complement this model, but it cannot replace it.
