What is eBPF? A Practical Guide for Kubernetes and DevOps Engineers

February 16, 2026

What is eBPF?

eBPF (extended Berkeley Packet Filter) is a technology that lets you run sandboxed programs inside the Linux kernel without changing kernel source code or loading traditional kernel modules. Originally designed for packet filtering (the “Berkeley Packet Filter” part), eBPF has evolved into a general-purpose in-kernel virtual machine that is now used for observability, networking, and security across production infrastructure.

For Kubernetes and DevOps engineers, eBPF matters because it powers a growing number of critical tools: Cilium (the CNI plugin used by major cloud providers), Falco and Tetragon (runtime security), and Pixie (auto-instrumented application observability). Understanding the fundamentals of eBPF helps you make better decisions about these tools and troubleshoot issues when they arise.

At a high level, eBPF works like this: you write a small program, the kernel’s eBPF verifier checks it for safety, the JIT compiler translates it to native machine code, and it runs at a specific hook point inside the kernel — all without the risks of a traditional kernel module.

How eBPF Works Under the Hood

To understand why eBPF is significant, you need to understand the problem it solves. Traditionally, if you wanted to add functionality to the Linux kernel — a new network filter, a tracing capability, a security check — you had two options:

  1. Modify the kernel source and recompile. This is slow, impractical for production systems, and requires waiting for upstream acceptance if you want your changes distributed.
  2. Write a kernel module. Faster to iterate on, but kernel modules run with full kernel privileges. A bug in a kernel module can crash the entire system. Modules also need to be recompiled for each kernel version.

eBPF provides a third option: write a small program that the kernel verifies for safety before running it. This program is sandboxed, cannot crash the kernel, and runs with near-native performance thanks to JIT compilation.

The eBPF Pipeline

The lifecycle of an eBPF program follows these steps:

  1. Write the program — eBPF programs are typically written in a restricted subset of C, then compiled to eBPF bytecode using LLVM/Clang. Higher-level tools like bpftrace let you write programs in a simpler scripting language.
  2. Load and verify — When the program is loaded into the kernel, the eBPF verifier statically analyzes it to ensure safety. The verifier checks that the program terminates (no unbounded loops), never accesses memory out of bounds, and only performs operations and helper calls permitted for its program type and privilege level.
  3. JIT compile — After verification, the JIT compiler translates the eBPF bytecode to native machine instructions (x86_64, ARM64, etc.) for maximum performance.
  4. Attach to a hook point — The compiled program is attached to a specific kernel hook point where it will execute.
  5. Communicate via maps — eBPF programs communicate with user-space applications and with each other through eBPF maps — key-value data structures that live in the kernel and are accessible from both kernel and user space.
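
For a concrete feel of this pipeline, here is a minimal command-line sketch using clang and bpftool; hello.bpf.c is a hypothetical source file, and real loaders such as libbpf automate most of these steps.

# Step 1: compile restricted C (hypothetical hello.bpf.c) to eBPF bytecode
clang -O2 -g -target bpf -c hello.bpf.c -o hello.bpf.o

# Steps 2-3: load into the kernel (the verifier and JIT run here) and pin the program to bpffs
sudo bpftool prog load hello.bpf.o /sys/fs/bpf/hello

# Step 5: inspect the loaded program and any maps it created; attaching (step 4)
# is normally handled by the loader library or front-end tool for the chosen hook point
sudo bpftool prog show pinned /sys/fs/bpf/hello
sudo bpftool map show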

Hook Points: Where eBPF Programs Run

eBPF programs attach to specific hook points in the kernel. The most important ones are:

  • kprobes / kretprobes — Attach to any kernel function entry or return. You can trace virtually any kernel function, though kprobe attachment points are not part of the stable kernel ABI and may change between kernel versions.
  • tracepoints — Stable, well-defined instrumentation points in the kernel (e.g., tracepoint:syscalls:sys_enter_openat). These are the preferred attachment points for production use because they are part of the kernel’s stable ABI.
  • XDP (eXpress Data Path) — Attaches at the earliest point in the network receive path, before the kernel allocates a socket buffer. This enables extremely high-performance packet processing (millions of packets per second). Used for DDoS mitigation, load balancing, and packet filtering.
  • TC (Traffic Control) — Attaches to the Linux traffic control layer, providing a hook for both ingress and egress network traffic. Used by Cilium for Kubernetes network policy enforcement.
  • uprobes — Similar to kprobes but for user-space functions. You can trace function calls in any compiled binary (Go, C, Rust) without modifying the application.
  • LSM (Linux Security Module) — Attach to security-relevant decision points in the kernel. Used by tools like Tetragon for runtime security enforcement.
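
To get a sense of what is available on a given node, bpftrace can list the probes it can attach to at each of these hook points (output varies by kernel version and installed binaries):

# List stable syscall tracepoints
sudo bpftrace -l 'tracepoint:syscalls:*'

# List kernel functions matching tcp_* that kprobes can attach to
sudo bpftrace -l 'kprobe:tcp_*'

# List user-space functions in a binary that uprobes can attach to
sudo bpftrace -l 'uprobe:/bin/bash:*'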

eBPF vs Traditional Kernel Modules

πŸ”” Never Miss a Breaking Change

Monthly release roundup β€” breaking changes, security patches, and upgrade guides across your stack.

βœ… You're in! Check your inbox for confirmation.

The comparison between eBPF programs and traditional kernel modules highlights why eBPF has seen such rapid adoption:

  • Safety: eBPF programs are verified before execution and cannot crash the kernel. Kernel modules run with full privileges and a bug can cause a kernel panic.
  • Portability: With CO-RE (Compile Once, Run Everywhere), eBPF programs compiled against one kernel version can run on other versions without recompilation. Kernel modules must be compiled for each specific kernel version.
  • No reboot required: eBPF programs can be loaded and unloaded dynamically at runtime. Kernel modules can also be loaded at runtime, but unloading a module that is in use is risky, and a faulty module can force a reboot of the node.
  • Performance: eBPF programs are JIT-compiled to native code and run with minimal overhead. The verifier ensures that programs terminate quickly, preventing performance degradation.
  • Iteration speed: You can write, load, test, and unload eBPF programs in seconds. Kernel module development cycles are significantly longer.

Practical eBPF: bpftrace Examples

bpftrace is a high-level tracing language for eBPF, similar in spirit to AWK or DTrace. It lets you write powerful tracing programs as one-liners or short scripts. Here are practical examples you can run on any Linux system with bpftrace installed.

Trace File Opens

See every file being opened on the system, along with the process name and path:

# Trace all openat() syscalls and print the filename
bpftrace -e 'tracepoint:syscalls:sys_enter_openat { printf("%s %s\n", comm, str(args.filename)); }'

Example output:

nginx /etc/nginx/nginx.conf
postgres /var/lib/postgresql/16/main/base/16384/1247
kubelet /var/lib/kubelet/config.yaml
containerd /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1234/fs

This is useful for understanding what files a process is accessing, debugging permission issues, or auditing file access patterns.

Count Syscalls by Process

Count the total number of syscalls made by each process over a 5-second window:

# Count syscalls per process name, print after 5 seconds
bpftrace -e 'tracepoint:raw_syscalls:sys_enter { @[comm] = count(); } interval:s:5 { print(@); clear(@); }'

Example output:

@[kubelet]: 4521
@[containerd]: 3892
@[etcd]: 2103
@[kube-apiserver]: 1847
@[nginx]: 892
@[postgres]: 456

This helps you identify the noisiest processes on a node, which is valuable for performance debugging and capacity planning.

Latency Histogram for Read Syscalls

Measure the latency distribution of read() syscalls and display it as a histogram:

# Histogram of read() latency in microseconds
bpftrace -e 'tracepoint:syscalls:sys_enter_read { @start[tid] = nsecs; }
tracepoint:syscalls:sys_exit_read /@start[tid]/ {
  @usecs = hist((nsecs - @start[tid]) / 1000);
  delete(@start[tid]);
}'

Example output:

@usecs:
[0]                  512 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[1]                  238 |@@@@@@@@@@@@@@@@@@@@@@@@                            |
[2, 4)               187 |@@@@@@@@@@@@@@@@@@@                                |
[4, 8)                93 |@@@@@@@@@                                          |
[8, 16)               42 |@@@@                                               |
[16, 32)              18 |@@                                                 |
[32, 64)               7 |@                                                  |
[64, 128)              3 |                                                   |

This is a powerful technique for understanding I/O latency characteristics on your nodes.

The eBPF Toolchain

Beyond bpftrace, several tools and libraries make up the eBPF ecosystem:

BCC (BPF Compiler Collection)

BCC is a toolkit for creating eBPF programs using Python or Lua frontends with C for the eBPF code. It includes dozens of ready-to-use tools for performance analysis:

  • execsnoop — Trace new process executions.
  • opensnoop — Trace file opens.
  • biolatency — Block I/O latency histogram.
  • tcpconnect — Trace TCP active connections (connect()).
  • runqlat — CPU scheduler run queue latency.

BCC is excellent for one-off debugging sessions and for learning eBPF concepts. However, it requires kernel headers to be installed on the target system, which can be inconvenient in container environments.
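
A few of these tools in action; exact command names vary by distribution (Ubuntu and Debian append a -bpfcc suffix, while other distributions typically install the tools under /usr/share/bcc/tools):

# Trace every new process execution, with arguments (Ubuntu/Debian naming shown)
sudo execsnoop-bpfcc

# Print a block I/O latency histogram every 5 seconds
sudo biolatency-bpfcc 5

# Trace active TCP connections using the unsuffixed install path
sudo /usr/share/bcc/tools/tcpconnect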

libbpf and CO-RE

libbpf is the canonical C library for loading and interacting with eBPF programs. Combined with CO-RE (Compile Once, Run Everywhere), it enables eBPF programs that are compiled once and can run on different kernel versions without recompilation.

CO-RE works through BTF (BPF Type Format), which describes the layout of the kernel’s data structures; kernels built with CONFIG_DEBUG_INFO_BTF expose this type information at runtime. When an eBPF program is loaded, libbpf uses BTF to adjust field offsets in the program to match the running kernel’s data structures. This solved one of the biggest practical barriers to eBPF adoption in production: the need to recompile programs for each kernel version.
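
You can check whether a node exposes BTF, and peek at the type information CO-RE relies on, with commands like these (the /sys/kernel/btf/vmlinux path exists on kernels built with CONFIG_DEBUG_INFO_BTF):

# The kernel's own type information, if built with CONFIG_DEBUG_INFO_BTF
ls -lh /sys/kernel/btf/vmlinux

# Dump kernel types as C definitions; this is the data libbpf uses to relocate field offsets
bpftool btf dump file /sys/kernel/btf/vmlinux format c | head -n 20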

Most modern eBPF-based tools (Cilium, Tetragon, Falco) use libbpf and CO-RE internally.

eBPF in Kubernetes

eBPF’s impact on Kubernetes has been transformative, particularly in networking, observability, and security. Here are the major projects.

Cilium: eBPF-Powered CNI and Service Mesh

Cilium is the most prominent eBPF-based project in the Kubernetes ecosystem. It serves as a CNI (Container Network Interface) plugin, replacing iptables-based networking with eBPF programs attached at the TC and XDP hook points.

Cilium provides:

  • Kubernetes networking — Pod-to-pod, pod-to-service, and pod-to-external networking, all implemented in eBPF rather than iptables. This eliminates the performance degradation that iptables-based solutions suffer at high rule counts.
  • Network policy enforcement — Kubernetes NetworkPolicy and Cilium’s own extended network policies (CiliumNetworkPolicy), enforced at the kernel level using eBPF.
  • Load balancing — Kubernetes service load balancing (replacing kube-proxy) using eBPF, with support for Direct Server Return (DSR) and Maglev consistent hashing.
  • Service mesh — Cilium Service Mesh provides L7 traffic management without sidecar proxies, using eBPF for data plane processing and Envoy only where L7 protocol parsing is required.
  • Transparent encryption — WireGuard-based or IPsec encryption for pod-to-pod traffic, managed automatically by Cilium.
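
As a sketch of what eBPF-enforced network policy looks like in practice, the hypothetical manifest below allows only pods labeled app=frontend to reach pods labeled app=backend on TCP port 8080 (the policy name and labels are illustrative):

# Hypothetical CiliumNetworkPolicy: only frontend pods may reach backend pods on TCP/8080
kubectl apply -f - <<'EOF'
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
EOF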

Cilium underpins GKE Dataplane V2, is the default CNI for Amazon EKS Anywhere, and is available on Azure AKS as Azure CNI Powered by Cilium, which speaks to its maturity and production readiness. For more on Kubernetes adoption trends, see our Kubernetes statistics and adoption report.

Hubble: Network Observability

Hubble is Cilium’s observability layer. It uses eBPF to provide deep visibility into network flows between pods, services, and external endpoints. Hubble can show you:

  • Which pods are communicating with which services.
  • DNS query and response details.
  • HTTP request/response metadata (method, path, status code, latency).
  • Network policy verdict decisions (allowed vs. denied flows).

Hubble provides both a CLI (hubble observe) and a web UI for visualizing network flows. This level of visibility was previously only available through sidecar-based service meshes, but eBPF enables it without sidecars and with lower overhead.
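
A few illustrative hubble observe invocations (the namespace name is hypothetical):

# Follow live flows for a namespace
hubble observe --namespace payments --follow

# Show only flows dropped by network policy
hubble observe --verdict DROPPED

# Show parsed L7 HTTP flows, including method, path, and status code
hubble observe --protocol http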

Tetragon: Security Observability and Runtime Enforcement

Tetragon (by Isovalent, now part of Cisco) uses eBPF to provide security observability and runtime enforcement in Kubernetes. It can monitor and enforce policies on:

  • Process execution — Detect and optionally block unauthorized process executions inside containers (e.g., a shell spawning inside a production container).
  • File access — Monitor and enforce which files and directories a container can access.
  • Network connections — Track and control outbound network connections from pods.
  • Kernel capabilities — Monitor the use of privileged kernel capabilities.

Tetragon’s key advantage over user-space security tools is that it operates at the kernel level, making it extremely difficult to evade. A compromised container cannot disable or circumvent eBPF-based security monitoring because the monitoring runs in kernel space, outside the container’s reach.

Here is an example Tetragon TracingPolicy that detects shell executions in any pod:

apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: detect-shells
spec:
  kprobes:
  - call: "security_bprm_check"
    syscall: false
    args:
    - index: 0
      type: "linux_binprm"
    selectors:
    - matchBinaries:
      - operator: "In"
        values:
        - "/bin/sh"
        - "/bin/bash"
        - "/bin/dash"
        - "/usr/bin/zsh"

Falco: Runtime Threat Detection

Falco (a CNCF graduated project) uses eBPF (among other drivers) to detect abnormal behavior in containers and Kubernetes environments. Falco ships with a comprehensive set of default rules covering common threats:

  • Container escape attempts.
  • Unexpected network connections from pods.
  • Sensitive file access (reading /etc/shadow, writing to /etc/).
  • Privilege escalation attempts.
  • Cryptocurrency mining detection.

Falco initially used a kernel module driver but has migrated to eBPF as the preferred driver for modern kernels, gaining the safety and portability benefits of eBPF.
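
The driver is selectable in configuration; on recent Falco versions the modern eBPF probe can be chosen via the engine.kind setting, shown here as a command-line override (setting name per Falco 0.35+; adjust for your version):

# Run Falco with the modern eBPF probe instead of the legacy kernel module
sudo falco -o engine.kind=modern_ebpf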

Pixie: Auto-Instrumented Application Debugging

Pixie (a CNCF sandbox project, created by Pixie Labs and contributed to the CNCF by New Relic) uses eBPF to automatically capture application-level telemetry without any code changes or sidecar containers. Pixie can capture:

  • Full HTTP/gRPC request and response bodies.
  • Database queries (MySQL, PostgreSQL, Cassandra, Redis).
  • DNS requests and responses.
  • CPU and memory profiling data (continuous profiling via CPU flame graphs).

Pixie achieves this by using eBPF uprobes and kprobes to intercept protocol parsing at the kernel and library level. The “auto-instrumented” aspect means you deploy Pixie and immediately get visibility without modifying your applications — a significant advantage for debugging in production Kubernetes clusters.
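
With the px CLI, deploying Pixie and running a first query takes a couple of commands; this sketch assumes a configured Kubernetes context and a Pixie Cloud account, and px/http_data is one of the bundled PxL scripts:

# Deploy Pixie to the current Kubernetes context
px deploy

# Run a bundled PxL script that lists recent HTTP requests captured via eBPF
px run px/http_data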

The State of eBPF in 2026

eBPF development continues to accelerate, with several notable trends shaping its evolution.

Linux Kernel Improvements

The Linux 6.x kernel series has brought substantial eBPF enhancements:

  • BPF Arena and user-space memory mapping — eBPF programs can now work with user-space-mapped memory regions, enabling more complex data sharing between kernel and user space.
  • kfuncs — Kernel functions that eBPF programs can call directly. The kfunc interface has expanded significantly, providing eBPF programs with access to more kernel functionality in a safe, versioned manner.
  • BPF tokens and delegation — New security primitives that allow container runtimes to grant specific eBPF capabilities to unprivileged processes, improving the security model for tools like Pixie that run inside containers.
  • Improved verifier performance — The verifier has been optimized to handle larger and more complex programs, removing practical limits that constrained earlier eBPF implementations.
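
To see which eBPF program types, map types, and helper functions your running kernel actually supports, bpftool can probe the system directly:

# Probe the running kernel for supported eBPF features
sudo bpftool feature probe kernel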

eBPF on Windows

Microsoft has been developing eBPF for Windows, bringing the eBPF programming model to the Windows kernel. While less mature than eBPF on Linux, this effort signals that eBPF is becoming a cross-platform standard for kernel programmability. For Kubernetes engineers managing mixed Linux/Windows clusters, this will eventually enable consistent eBPF-based tooling across both operating systems.

The eBPF Foundation

The eBPF Foundation, hosted by the Linux Foundation, was established to coordinate eBPF development across projects and ensure that eBPF remains vendor-neutral. Members include Meta, Google, Microsoft, Netflix, and Isovalent. The foundation maintains the ebpf.io website, which serves as the central resource for eBPF documentation and community coordination.

Getting Started with eBPF on Kubernetes

If you want to start using eBPF-based tools in your Kubernetes clusters, here is a practical path:

  1. Start with Cilium. If you are provisioning new clusters, use Cilium as your CNI. It is the most mature eBPF-based Kubernetes tool and provides immediate benefits (better networking performance, network policies without iptables, service mesh without sidecars). Managed Kubernetes services from GKE, EKS, and AKS all support Cilium.
  2. Add Hubble for network visibility. Once Cilium is running, enable Hubble to get network flow visibility. This replaces the need for packet captures and tcpdump in many debugging scenarios.
  3. Deploy Tetragon or Falco for security. Choose based on your requirements: Tetragon for deep kernel-level enforcement and observability, Falco for a broader rule-based threat detection approach with a large community rule library.
  4. Explore bpftrace for ad-hoc debugging. Install bpftrace on your nodes (or run it as a privileged pod) for one-off performance investigations. The one-liners shown earlier in this article are a good starting point.
  5. Check kernel compatibility. Most eBPF features require Linux kernel 4.15 or later, with significant improvements in 5.x and 6.x kernels. Check the specific kernel version requirements for the tools you plan to use. Managed Kubernetes services typically run kernels new enough for full eBPF support.
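
A minimal sketch of the first steps on a new cluster, using the cilium CLI (commands assume cluster-admin access; pin versions in real rollouts):

# Confirm the node kernel is recent enough and exposes BTF
uname -r
ls /sys/kernel/btf/vmlinux

# Install Cilium as the CNI and wait for it to become ready
cilium install
cilium status --wait

# Enable Hubble (and its UI) for network flow visibility
cilium hubble enable --ui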

When planning cluster upgrades to take advantage of newer eBPF features, our Kubernetes upgrade checklist can help ensure you do not miss critical steps.

Summary

eBPF has fundamentally changed how observability, networking, and security work in Linux and Kubernetes. By allowing sandboxed programs to run inside the kernel — verified for safety, JIT-compiled for performance, and attached to precise hook points — eBPF eliminates the trade-off between deep kernel-level visibility and operational safety.

For Kubernetes engineers, the practical impact comes through the tools built on eBPF: Cilium replaces iptables-based networking with faster, more scalable eBPF programs. Hubble provides network observability without sidecars. Tetragon and Falco enforce runtime security at the kernel level. Pixie delivers auto-instrumented application debugging without code changes.

The eBPF ecosystem is maturing rapidly, with continued Linux kernel improvements, progress on Windows support, and growing adoption as the default networking layer across major managed Kubernetes services. Understanding eBPF — even at a conceptual level — is becoming essential knowledge for anyone operating Kubernetes infrastructure in production.

πŸ› οΈ Try These Free Tools

⚠️ K8s Manifest Deprecation Checker

Paste your Kubernetes YAML to detect deprecated APIs before upgrading.

πŸ—ΊοΈ Upgrade Path Planner

Plan your upgrade path with breaking change warnings and step-by-step guidance.

πŸ—οΈ Terraform Provider Freshness Check

Paste your Terraform lock file to check provider versions.

See all free tools β†’