From: Andrii Nakryiko <andrii@kernel.org>
To: bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net,
martin.lau@kernel.org
Cc: andrii@kernel.org, kernel-team@meta.com
Subject: [PATCH v2 bpf-next 0/4] Add internal-only BPF per-CPU instruction
Date: Mon, 1 Apr 2024 19:13:01 -0700
Message-ID: <20240402021307.1012571-1-andrii@kernel.org>

Add a new BPF instruction for resolving per-CPU memory addresses.
The new instruction is a special form of BPF_ALU64 | BPF_MOV | BPF_X, with
insn->off set to BPF_ADDR_PERCPU (== -1). It resolves the provided per-CPU
offset to the absolute address where the per-CPU data resides for "this" CPU.
This patch set implements support for it in x86-64 BPF JIT only.
Using the new instruction, we also implement inlining for three cases:
- bpf_get_smp_processor_id(), which avoids an unnecessary trivial function
call, saving a bit of performance and also not polluting LBR records with
unnecessary function call/return records;
- PERCPU_ARRAY's bpf_map_lookup_elem() is completely inlined, bringing its
performance on par with implementing per-CPU data structures using global
variables in BPF (which is an awesome improvement, see benchmarks below);
- PERCPU_HASH's bpf_map_lookup_elem() is partially inlined, just like for
the non-PERCPU HASH map; this still saves a bit of overhead.
To validate performance benefits, I hacked together a tiny benchmark doing
only bpf_map_lookup_elem() and incrementing the value by 1 for PERCPU_ARRAY
(arr-inc benchmark below) and PERCPU_HASH (hash-inc benchmark below) maps. To
establish a baseline, I also implemented similar logic on top of a global
variable array, using bpf_get_smp_processor_id() to index the array for the
current CPU (glob-arr-inc benchmark below).
BEFORE
======
glob-arr-inc : 163.685 ± 0.092M/s
arr-inc : 138.096 ± 0.160M/s
hash-inc : 66.855 ± 0.123M/s
AFTER
=====
glob-arr-inc : 173.921 ± 0.039M/s (+6%)
arr-inc : 170.729 ± 0.210M/s (+23.7%)
hash-inc : 68.673 ± 0.070M/s (+2.7%)
As can be seen, PERCPU_HASH gets a modest +2.7% improvement, while global
array-based gets a nice +6% due to inlining of bpf_get_smp_processor_id().
But what's really important is that arr-inc benchmark basically catches up
with glob-arr-inc, resulting in +23.7% improvement. This means that in
practice it won't be necessary to avoid PERCPU_ARRAY anymore if performance is
critical (e.g., high-frequency stats collection, which is often a practical
use case for PERCPU_ARRAY today).
v1->v2:
- use BPF_ALU64 | BPF_MOV instruction instead of LDX (Alexei);
- dropped the direct per-CPU memory read instruction, it can always be added
back, if necessary;
- guarded bpf_get_smp_processor_id() behind x86-64 check (Alexei);
- switched all per-cpu addr casts to (unsigned long) to avoid sparse
warnings.
Andrii Nakryiko (4):
bpf: add special internal-only MOV instruction to resolve per-CPU
addrs
bpf: inline bpf_get_smp_processor_id() helper
bpf: inline bpf_map_lookup_elem() for PERCPU_ARRAY maps
bpf: inline bpf_map_lookup_elem() helper for PERCPU_HASH map
arch/x86/net/bpf_jit_comp.c | 16 ++++++++++++++++
include/linux/filter.h | 20 ++++++++++++++++++++
kernel/bpf/arraymap.c | 33 +++++++++++++++++++++++++++++++++
kernel/bpf/core.c | 5 +++++
kernel/bpf/disasm.c | 14 ++++++++++++++
kernel/bpf/hashtab.c | 21 +++++++++++++++++++++
kernel/bpf/verifier.c | 24 ++++++++++++++++++++++++
7 files changed, 133 insertions(+)
--
2.43.0
Thread overview: 24+ messages
2024-04-02 2:13 Andrii Nakryiko [this message]
2024-04-02 2:13 ` [PATCH v2 bpf-next 1/4] bpf: add special internal-only MOV instruction to resolve per-CPU addrs Andrii Nakryiko
2024-04-02 4:35 ` John Fastabend
2024-04-02 2:13 ` [PATCH v2 bpf-next 2/4] bpf: inline bpf_get_smp_processor_id() helper Andrii Nakryiko
2024-04-02 4:41 ` John Fastabend
2024-04-02 2:13 ` [PATCH v2 bpf-next 3/4] bpf: inline bpf_map_lookup_elem() for PERCPU_ARRAY maps Andrii Nakryiko
2024-04-02 5:02 ` John Fastabend
2024-04-02 16:20 ` Andrii Nakryiko
2024-04-02 2:13 ` [PATCH v2 bpf-next 4/4] bpf: inline bpf_map_lookup_elem() helper for PERCPU_HASH map Andrii Nakryiko
2024-04-02 5:04 ` John Fastabend
2024-04-02 4:57 ` [PATCH v2 bpf-next 0/4] Add internal-only BPF per-CPU instruction John Fastabend
2024-04-02 16:20 ` Andrii Nakryiko
2024-04-03 22:01 ` John Fastabend
2024-04-03 17:40 ` patchwork-bot+netdevbpf
2024-04-03 17:41 ` Alexei Starovoitov
2024-04-03 18:03 ` Andrii Nakryiko
2024-04-03 18:14 ` Alexei Starovoitov
2024-04-04 16:12 ` Andrii Nakryiko
2024-04-04 16:16 ` Puranjay Mohan
2024-04-04 16:18 ` Andrii Nakryiko
2024-04-05 12:48 ` Puranjay Mohan
2024-04-05 13:19 ` Pu Lehui
2024-04-04 21:02 ` Puranjay Mohan
2024-04-04 21:21 ` Andrii Nakryiko