* [PATCH bpf-next 0/4] bpf: tailcall: Eliminate max_entries and bpf_func access at runtime
@ 2026-01-02 15:00 Leon Hwang
From: Leon Hwang @ 2026-01-02 15:00 UTC
  To: bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
	Puranjay Mohan, Xu Kuohai, Catalin Marinas, Will Deacon,
	David S . Miller, David Ahern, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H . Peter Anvin, Andrew Morton,
	linux-arm-kernel, linux-kernel, netdev, kernel-patches-bot,
	Leon Hwang

This patch series optimizes BPF tail calls on x86_64 and arm64 by
eliminating the runtime loads of max_entries and 'prog->bpf_func' when
the prog array map is known at verification time.

Currently, every tail call requires:
  1. Loading max_entries from the prog array map
  2. Dereferencing 'prog->bpf_func' to get the target address
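
In C terms, that per-call work looks roughly like the sketch below
(illustrative, not the emitted assembly; 'fallthrough' stands in for the
first instruction after the tail call, and prologue_offset for the
per-arch constant the jump already skips today):

  if (index >= array->map.max_entries)       /* 1. load max_entries */
          goto fallthrough;
  if (tail_call_cnt++ >= MAX_TAIL_CALL_CNT)  /* tail call limit     */
          goto fallthrough;
  prog = array->ptrs[index];
  if (!prog)
          goto fallthrough;
  /* 2. load prog->bpf_func, then jump past the callee's prologue */
  goto *((u8 *)prog->bpf_func + prologue_offset);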

This series introduces a mechanism to precompute and cache the tail call
target addresses (bpf_func + prologue_offset) in the prog array itself:
  array->ptrs[max_entries + index] = prog->bpf_func + prologue_offset

When a program is added to or removed from the prog array, the cached
target is atomically updated via xchg().
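
Roughly, the update path could look like the sketch below. The function
name is invented for illustration; bpf_arch_tail_call_prologue_offset()
is the per-arch helper introduced in patch 1, and its exact signature
may differ:

  static void prog_array_update_cached_target(struct bpf_array *array,
                                              u32 index,
                                              struct bpf_prog *prog)
  {
          void *target = NULL;

          if (prog)
                  target = (u8 *)prog->bpf_func +
                           bpf_arch_tail_call_prologue_offset(prog);

          /* xchg() publishes the new target atomically, so a racing
           * tail call sees either the old or the new address, never
           * a torn pointer.
           */
          xchg(&array->ptrs[array->map.max_entries + index], target);
  }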

The verifier now encodes additional information in the tail call
instruction's imm field:
  - bits 0-7:   map index in used_maps[]
  - bits 8-15:  dynamic array flag (1 if map pointer is poisoned)
  - bits 16-31: poke table index + 1 for direct tail calls
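
Decoding that layout is mechanical; the macro names below are invented
for illustration, only the bit positions come from this series:

  #define TC_MAP_IDX(imm)   ((u32)(imm) & 0xff)         /* bits 0-7   */
  #define TC_DYN_ARRAY(imm) (((u32)(imm) >> 8) & 0xff)  /* bits 8-15  */
  #define TC_POKE_IDX(imm)  ((u32)(imm) >> 16)          /* bits 16-31,
                                                           0 = no poke
                                                           table entry */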

For static tail calls (map known at verification time):
  - max_entries is embedded as an immediate in the comparison instruction
  - The cached target from array->ptrs[max_entries + index] is used
    directly, avoiding the 'prog->bpf_func' dereference
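
The JITed fast path then reduces to roughly the following C (again a
sketch; MAX_ENTRIES is the constant the JIT embeds, and the computed
goto stands in for the generated indirect jump):

  if (index >= MAX_ENTRIES)                  /* immediate compare,   */
          goto fallthrough;                  /* no max_entries load  */
  if (tail_call_cnt++ >= MAX_TAIL_CALL_CNT)
          goto fallthrough;
  target = array->ptrs[MAX_ENTRIES + index]; /* cached bpf_func +
                                                prologue_offset      */
  if (!target)
          goto fallthrough;
  goto *target;                              /* no bpf_func load     */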

For dynamic tail calls (map pointer poisoned):
  - Fall back to runtime lookup of max_entries and prog->bpf_func

This reduces cache misses and improves tail call performance for the
common case where the prog array is statically known.

Leon Hwang (4):
  bpf: tailcall: Introduce bpf_arch_tail_call_prologue_offset
  bpf, x64: tailcall: Eliminate max_entries and bpf_func access at
    runtime
  bpf, arm64: tailcall: Eliminate max_entries and bpf_func access at
    runtime
  bpf, lib/test_bpf: Fix broken tailcall tests

 arch/arm64/net/bpf_jit_comp.c | 71 +++++++++++++++++++++++++----------
 arch/x86/net/bpf_jit_comp.c   | 51 ++++++++++++++++++-------
 include/linux/bpf.h           |  1 +
 kernel/bpf/arraymap.c         | 27 ++++++++++++-
 kernel/bpf/verifier.c         | 30 ++++++++++++++-
 lib/test_bpf.c                | 39 ++++++++++++++++---
 6 files changed, 178 insertions(+), 41 deletions(-)

--
2.52.0

