From: John Fastabend <john.fastabend@gmail.com>
To: Andrii Nakryiko <andrii@kernel.org>,
bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net,
martin.lau@kernel.org
Cc: andrii@kernel.org, kernel-team@meta.com
Subject: RE: [PATCH bpf-next 1/4] bpf: add internal-only per-CPU LDX instructions
Date: Mon, 01 Apr 2024 18:12:44 -0700 [thread overview]
Message-ID: <660b5b8cc7162_801520889@john.notmuch> (raw)
In-Reply-To: <20240329184740.4084786-2-andrii@kernel.org>

Andrii Nakryiko wrote:
> Add BPF instructions for working with per-CPU data. These instructions
> are internal-only and users are not allowed to use them directly. They
> will only be used for internal inlining optimizations for now.
>
> Two different instructions are added. One, with BPF_MEM_PERCPU opcode,
> performs memory dereferencing of a per-CPU "address" (which is actually
> an offset). This one is useful when inlined logic needs to load data
> stored in per-CPU storage (bpf_get_smp_processor_id() is one such
> example).
>
> Another, with BPF_ADDR_PERCPU opcode, performs resolution of a per-CPU
> address (offset) stored in a register. This one is useful anywhere that
> per-CPU data is not read directly, but instead its absolute raw memory
> pointer is returned to the user (e.g., in bpf_map_lookup_elem() helper
> inlinings).
>
> The BPF disassembler is also taught to recognize these instructions, to
> support dumping final BPF assembly code (the non-JIT'ed version).
>
> Add an arch-specific way for BPF JITs to mark support for these
> instructions.
>
> This patch also adds support for these instructions in x86-64 BPF JIT.
>
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> ---
> arch/x86/net/bpf_jit_comp.c | 29 +++++++++++++++++++++++++++++
> include/linux/filter.h | 27 +++++++++++++++++++++++++++
> kernel/bpf/core.c | 5 +++++
> kernel/bpf/disasm.c | 33 ++++++++++++++++++++++++++-------
> 4 files changed, 87 insertions(+), 7 deletions(-)
>
> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> index 3b639d6f2f54..610bbedaae70 100644
> --- a/arch/x86/net/bpf_jit_comp.c
> +++ b/arch/x86/net/bpf_jit_comp.c
> @@ -1910,6 +1910,30 @@ st: if (is_imm8(insn->off))
> }
> break;
>
> + /* internal-only per-cpu zero-extending memory load */
> + case BPF_LDX | BPF_MEM_PERCPU | BPF_B:
> + case BPF_LDX | BPF_MEM_PERCPU | BPF_H:
> + case BPF_LDX | BPF_MEM_PERCPU | BPF_W:
> + case BPF_LDX | BPF_MEM_PERCPU | BPF_DW:
> + insn_off = insn->off;
> + EMIT1(0x65); /* gs segment modifier */
> + emit_ldx(&prog, BPF_SIZE(insn->code), dst_reg, src_reg, insn_off);
> + break;
> +
> + /* internal-only load-effective-address-of per-cpu offset */
> + case BPF_LDX | BPF_ADDR_PERCPU | BPF_DW: {
> + u32 off = (u32)(void *)&this_cpu_off;
> +
> + /* mov <dst>, <src> (if necessary) */
> + EMIT_mov(dst_reg, src_reg);
> +
> + /* add <dst>, gs:[<off>] */
> + EMIT2(0x65, add_1mod(0x48, dst_reg));
> + EMIT3(0x03, add_1reg(0x04, dst_reg), 0x25);
> + EMIT(off, 4);
> +
> + break;
> + }
> case BPF_STX | BPF_ATOMIC | BPF_W:
> case BPF_STX | BPF_ATOMIC | BPF_DW:
> if (insn->imm == (BPF_AND | BPF_FETCH) ||
[..]
> +/* Per-CPU zero-extending memory load (internal-only) */
> +#define BPF_LDX_MEM_PERCPU(SIZE, DST, SRC, OFF) \
> + ((struct bpf_insn) { \
> + .code = BPF_LDX | BPF_SIZE(SIZE) | BPF_MEM_PERCPU,\
> + .dst_reg = DST, \
> + .src_reg = SRC, \
> + .off = OFF, \
> + .imm = 0 })
> +
> +/* Load effective address of a given per-CPU offset */
> +#define BPF_LDX_ADDR_PERCPU(DST, SRC, OFF) \
Do you need OFF here? It seems the above is using &this_cpu_off.
> + ((struct bpf_insn) { \
> + .code = BPF_LDX | BPF_DW | BPF_ADDR_PERCPU, \
> + .dst_reg = DST, \
> + .src_reg = SRC, \
> + .off = OFF, \
> + .imm = 0 })
> +
Thread overview: 23+ messages
2024-03-29 18:47 [PATCH bpf-next 0/4] Add internal-only BPF per-CPU instructions Andrii Nakryiko
2024-03-29 18:47 ` [PATCH bpf-next 1/4] bpf: add internal-only per-CPU LDX instructions Andrii Nakryiko
2024-03-30 0:26 ` Stanislav Fomichev
2024-03-30 5:22 ` Andrii Nakryiko
2024-03-30 10:10 ` kernel test robot
2024-04-02 1:12 ` John Fastabend [this message]
2024-04-02 1:47 ` Andrii Nakryiko
2024-03-29 18:47 ` [PATCH bpf-next 2/4] bpf: inline bpf_get_smp_processor_id() helper Andrii Nakryiko
2024-03-29 20:27 ` Andrii Nakryiko
2024-03-29 23:41 ` Alexei Starovoitov
2024-03-30 5:16 ` Andrii Nakryiko
2024-03-30 9:37 ` kernel test robot
2024-03-30 10:53 ` kernel test robot
2024-03-30 20:49 ` kernel test robot
2024-03-29 18:47 ` [PATCH bpf-next 3/4] bpf: inline bpf_map_lookup_elem() for PERCPU_ARRAY maps Andrii Nakryiko
2024-03-29 18:47 ` [PATCH bpf-next 4/4] bpf: inline bpf_map_lookup_elem() helper for PERCPU_HASH map Andrii Nakryiko
2024-03-29 23:52 ` Alexei Starovoitov
2024-03-30 5:22 ` Andrii Nakryiko
2024-03-29 23:47 ` [PATCH bpf-next 0/4] Add internal-only BPF per-CPU instructions Alexei Starovoitov
2024-03-30 5:18 ` Andrii Nakryiko
2024-04-01 16:28 ` Eduard Zingerman
2024-04-01 22:54 ` Andrii Nakryiko
2024-04-02 9:13 ` Eduard Zingerman