Every eBPF tutorial ends the same way. You compile your program, load it with bpftool or a few lines of library code, attach it to a hook point, and watch it work. The tutorial ends there.

What happens in production, when multiple teams want to attach programs to the same hook, when you need to distribute programs across hundreds of nodes, when you can't hand out CAP_BPF to every process that asks? That part is left as an exercise for the reader.

bpfman exists to answer those questions.

The problems nobody talks about

eBPF has a well-documented loading story for a single program on a single machine. It has almost no story for anything more complex. Three problems in particular kept coming up.

Access control. Loading an eBPF program requires CAP_BPF, and usually CAP_NET_ADMIN or CAP_PERFMON on top depending on the program type (historically, CAP_SYS_ADMIN covered all of it). These are privileged capabilities. The naive approach is to give them to every process that needs to load programs - your CNI plugin, your observability agent, your security tool, whatever else wants to attach to the kernel. That's a large and growing attack surface. What you want is a single privileged daemon that loads programs on behalf of unprivileged callers, acting as a gatekeeper with its own access policy.

Distribution. eBPF bytecode is kernel-version sensitive. A program compiled for one kernel may not verify on another. The build-and-distribute cycle for eBPF programs is painful: you need to manage kernel headers, BTF data, and compilation outputs for every target kernel in your fleet. There's no standard packaging format, no registry, no equivalent of docker pull. Every project that ships eBPF programs rolls its own approach.

Multi-tenancy. An XDP hook point can only have one program attached at a time - or so it seems. (TC technically allows multiple filters, but nothing coordinates their ordering across independent tools.) In practice, you have a CNI plugin wanting XDP, a DDoS mitigation tool wanting XDP, an observability agent wanting XDP, all on the same interface. Without coordination, they'll overwrite each other. The usual workaround is "whoever loads last wins," which is not a policy.

Distribution: Docker for eBPF

The distribution problem has an obvious solution once you notice that the container ecosystem already solved it: OCI images. An eBPF program can be packaged as an OCI image and pushed to any container registry. bpfman pulls images the same way a container runtime does - same tooling, same infrastructure, same security scanning pipelines.

The image contains the compiled eBPF bytecode plus bpfman-specific metadata describing how the program should be loaded and attached. You get versioning, signing, and distribution for free, because the container ecosystem already built all of that. The cycle time for deploying a new or updated eBPF program drops from "rebuild, copy binaries, restart daemon" to "push image, update configuration."
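To make that concrete, here's a minimal Go sketch of the "pull it like a container runtime" half of the story, using the go-containerregistry library. The image reference and label names are hypothetical stand-ins, and bpfman itself (written in Rust) does the equivalent internally - this just shows that ordinary registry tooling is all you need:

```go
// Sketch: inspect a bpfman-style bytecode image with go-containerregistry.
// Assumptions: the image reference is made up, and the metadata lives in
// OCI config labels - the label schema here is illustrative, not bpfman's.
package main

import (
	"fmt"
	"log"

	"github.com/google/go-containerregistry/pkg/crane"
)

func main() {
	// Pull the image exactly as a container runtime would - same registry,
	// same auth, same content-addressed layers.
	img, err := crane.Pull("quay.io/example/xdp-counter:v0.1.0") // hypothetical ref
	if err != nil {
		log.Fatalf("pull: %v", err)
	}

	cfg, err := img.ConfigFile()
	if err != nil {
		log.Fatalf("config: %v", err)
	}

	// A loader reads "how to load this" metadata from the labels and the
	// compiled bytecode itself from an image layer.
	for k, v := range cfg.Config.Labels {
		fmt.Printf("%s=%s\n", k, v) // e.g. program type, entry function name
	}
}
```

Because it's a normal OCI artifact, everything that already handles images - CI jobs, vulnerability scanners, registry mirrors - handles eBPF bytecode with zero new machinery.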

Multi-tenancy: the dispatcher protocol

The multi-tenancy problem is harder. bpfman's approach is based on a protocol defined by libxdp for XDP: rather than attaching programs directly to a hook point, bpfman loads a dispatcher program that calls each registered program in sequence. Programs register with a priority and a set of return values on which the chain should continue (bpfman calls this proceed-on); the dispatcher calls them in priority order and stops as soon as a program returns anything outside its proceed-on set.

bpfman extended this protocol to TC (traffic control) hook points - libxdp only defined it for XDP. More importantly, bpfman acts as the orchestrator that determines ordering based on user intent. You declare what you want; bpfman figures out the priority ordering. Multiple independent programs can share a hook point without knowing about each other.
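The semantics are easier to see in miniature. The sketch below models the chain in plain Go - the real dispatcher is an eBPF program running in the kernel, and the names, priorities, and return values here are invented for illustration:

```go
// Toy model of the dispatcher protocol: priority ordering plus proceed-on.
package main

import (
	"fmt"
	"sort"
)

// XDP return codes (subset of the kernel's values).
const (
	XDP_DROP = 1
	XDP_PASS = 2
)

type slot struct {
	name      string
	priority  int          // lower value = runs earlier
	proceedOn map[int]bool // return codes that let the chain continue
	run       func() int   // stand-in for the attached eBPF program
}

func dispatch(slots []slot) int {
	sort.Slice(slots, func(i, j int) bool { return slots[i].priority < slots[j].priority })
	action := XDP_PASS
	for _, s := range slots {
		action = s.run()
		fmt.Printf("%s returned %d\n", s.name, action)
		if !s.proceedOn[action] {
			return action // e.g. a filter returning XDP_DROP ends the chain early
		}
	}
	return action
}

func main() {
	final := dispatch([]slot{
		{"observability", 20, map[int]bool{XDP_PASS: true}, func() int { return XDP_PASS }},
		{"ddos-filter", 10, map[int]bool{XDP_PASS: true}, func() int { return XDP_PASS }},
	})
	fmt.Println("final action:", final)
}
```

Note that neither "program" knows the other exists: each declares only its own priority and proceed-on set, and the dispatcher owns the composition.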

Access control: one privileged daemon

The access control solution follows naturally from the architecture. bpfman is a daemon that holds CAP_BPF. Everything else talks to it over a gRPC API. Callers don't need elevated privileges - they just need permission to talk to the daemon. The daemon enforces policy about who can load what.
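Here's a caller sketch in Go, under the assumption that the daemon listens on a local Unix socket (the path is illustrative - check your deployment). Note what's absent: no capabilities, no bpf() syscalls, just an RPC connection:

```go
// Sketch: an unprivileged process talking to the bpfman daemon over gRPC.
package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// The caller needs no kernel capabilities - only filesystem permission
	// to open the daemon's Unix socket (path shown is illustrative).
	conn, err := grpc.Dial("unix:///run/bpfman-sock/bpfman.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// From here you'd use bpfman's generated client stubs to issue a load
	// request; the daemon checks the caller against its policy, loads the
	// program using its own CAP_BPF, and hands back a reference. (Stubs
	// omitted - see the bpfman repo for the actual proto definitions.)
	log.Println("connected:", conn.Target())
}
```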

Kubernetes integration

On Kubernetes, bpfman runs as a DaemonSet - one instance per node, holding CAP_BPF on that node. eBPF programs are declared as custom resources. You kubectl apply an eBPF program the same way you'd apply a Deployment or a ConfigMap. The bpfman operator watches for those resources and instructs the per-node daemon to load them.
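As a sketch of what that looks like programmatically, here's the same declaration made from Go with client-go's dynamic client. The XdpProgram kind and its spec fields follow the bpfman-operator's v1alpha1 API as I understand it, but treat the exact names as illustrative and consult the installed CRDs:

```go
// Sketch: declaring an eBPF program as a Kubernetes custom resource.
// Assumptions: bpfman-operator is installed; group/kind/field names are
// illustrative - verify against the operator's CRDs.
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	gvr := schema.GroupVersionResource{Group: "bpfman.io", Version: "v1alpha1", Resource: "xdpprograms"}
	prog := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "bpfman.io/v1alpha1",
		"kind":       "XdpProgram",
		"metadata":   map[string]interface{}{"name": "xdp-counter"},
		"spec": map[string]interface{}{
			"bpffunctionname": "xdp_stats", // entry point in the bytecode
			"bytecode": map[string]interface{}{
				"image": map[string]interface{}{"url": "quay.io/example/xdp-counter:v0.1.0"},
			},
			"interfaceselector": map[string]interface{}{"primarynodeinterface": true},
			"priority":          55, // position in the dispatcher chain
		},
	}}

	// Equivalent to kubectl apply-ing the same manifest: the operator sees
	// the resource and instructs bpfman on each selected node to load it.
	if _, err := dyn.Resource(gvr).Create(context.TODO(), prog, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
}
```

In practice you'd keep this as a YAML manifest in Git and let your GitOps tooling apply it; the point is that the program, its source image, and its attachment intent are all declared in one place.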

The result is that eBPF programs become first-class Kubernetes workloads. They have lifecycle management, version history, and can be managed with standard GitOps tooling. A platform team can define policy about what programs are allowed to run, using standard Kubernetes RBAC.

Why this needed to exist

The container ecosystem took years to develop the infrastructure that makes containers operationally tractable at scale: images, registries, orchestration, access control. eBPF was arriving at the same destination - general-purpose programmability at the kernel level - without any of that infrastructure.

bpfman is an attempt to shortcut that journey. Take the lessons from containers, apply them to eBPF, and give platform teams something they can actually operate.

The next post goes into the architecture in more detail - how the daemon, the API, and the Kubernetes operator fit together.