* [PATCH bpf-next v3 0/2] powerpc64/bpf: Inline helper in powerpc JIT
@ 2025-12-05 13:40 Saket Kumar Bhaskar
2025-12-05 13:40 ` [PATCH bpf-next v3 1/2] powerpc64/bpf: Support internal-only MOV instruction to resolve per-CPU addrs Saket Kumar Bhaskar
2025-12-05 13:40 ` [PATCH bpf-next v3 2/2] powerpc64/bpf: Inline bpf_get_smp_processor_id() and bpf_get_current_task/_btf() Saket Kumar Bhaskar
0 siblings, 2 replies; 4+ messages in thread
From: Saket Kumar Bhaskar @ 2025-12-05 13:40 UTC (permalink / raw)
To: bpf, linuxppc-dev, linux-kernel
Cc: hbathini, sachinpb, venkat88, andrii, eddyz87, ast, daniel,
martin.lau, song, yonghong.song, john.fastabend, kpsingh, sdf,
haoluo, jolsa, christophe.leroy, naveen, maddy, mpe, npiggin
This series adds support for the internal-only per-CPU MOV instruction
and inlines the bpf_get_smp_processor_id() and bpf_get_current_task/_btf()
helper calls in the powerpc BPF JIT.
Changes since v2:
* Collected Reviewed-by tag.
* Inlined bpf_get_current_task/_btf().
* Fixed addressing of src_reg and BPF_REG_0. (Christophe)
* Fixed the condition for the non-SMP case, as suggested by Christophe.
v2: https://lore.kernel.org/all/cover.1762422548.git.skb99@linux.ibm.com/
Changes since v1:
* Addressed Christophe's comments.
* Inlined bpf_get_current_task() as well.
v1: https://lore.kernel.org/all/20250311160955.825647-1-skb99@linux.ibm.com/
Saket Kumar Bhaskar (2):
powerpc64/bpf: Support internal-only MOV instruction to resolve
per-CPU addrs
powerpc64/bpf: Inline bpf_get_smp_processor_id() and
bpf_get_current_task/_btf()
arch/powerpc/net/bpf_jit_comp.c | 17 +++++++++++++++++
arch/powerpc/net/bpf_jit_comp64.c | 20 ++++++++++++++++++++
2 files changed, 37 insertions(+)
--
2.51.0
^ permalink raw reply	[flat|nested] 4+ messages in thread

* [PATCH bpf-next v3 1/2] powerpc64/bpf: Support internal-only MOV instruction to resolve per-CPU addrs
  2025-12-05 13:40 [PATCH bpf-next v3 0/2] powerpc64/bpf: Inline helper in powerpc JIT Saket Kumar Bhaskar
@ 2025-12-05 13:40 ` Saket Kumar Bhaskar
  2025-12-05 14:01   ` bot+bpf-ci
  2025-12-05 13:40 ` [PATCH bpf-next v3 2/2] powerpc64/bpf: Inline bpf_get_smp_processor_id() and bpf_get_current_task/_btf() Saket Kumar Bhaskar
  1 sibling, 1 reply; 4+ messages in thread
From: Saket Kumar Bhaskar @ 2025-12-05 13:40 UTC (permalink / raw)
  To: bpf, linuxppc-dev, linux-kernel
  Cc: hbathini, sachinpb, venkat88, andrii, eddyz87, ast, daniel,
	martin.lau, song, yonghong.song, john.fastabend, kpsingh, sdf,
	haoluo, jolsa, christophe.leroy, naveen, maddy, mpe, npiggin

With the introduction of commit 7bdbf7446305 ("bpf: add special
internal-only MOV instruction to resolve per-CPU addrs"), a new BPF
instruction, BPF_MOV64_PERCPU_REG, has been added to resolve absolute
addresses of per-CPU data from their per-CPU offsets. This update
requires enabling support for this instruction in the powerpc JIT
compiler.

As of commit 7a0268fa1a36 ("[PATCH] powerpc/64: per cpu data
optimisations"), the per-CPU data offset for the CPU is stored in the
paca.

To support this BPF instruction in the powerpc JIT, the following
powerpc instructions are emitted:

if (IS_ENABLED(CONFIG_SMP))
	ld tmp1_reg, 48(13)		// Load per-CPU data offset from paca (r13) into tmp1_reg.
	add dst_reg, src_reg, tmp1_reg	// Add the per-CPU offset to the dst.
else if (src_reg != dst_reg)
	mr dst_reg, src_reg		// Move src_reg to dst_reg, if src_reg != dst_reg.

To evaluate the performance improvements introduced by this change,
the benchmark described in [1] was employed.

Before Change:
glob-arr-inc   :   41.580 ± 0.034M/s
arr-inc        :   39.592 ± 0.055M/s
hash-inc       :   25.873 ± 0.012M/s

After Change:
glob-arr-inc   :   42.024 ± 0.049M/s
arr-inc        :   55.447 ± 0.031M/s
hash-inc       :   26.565 ± 0.014M/s

[1] https://github.com/anakryiko/linux/commit/8dec900975ef

Reviewed-by: Puranjay Mohan <puranjay@kernel.org>
Signed-off-by: Saket Kumar Bhaskar <skb99@linux.ibm.com>
---
 arch/powerpc/net/bpf_jit_comp.c   | 5 +++++
 arch/powerpc/net/bpf_jit_comp64.c | 9 +++++++++
 2 files changed, 14 insertions(+)

diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
index 5e976730b2f5..d53e9cd7563f 100644
--- a/arch/powerpc/net/bpf_jit_comp.c
+++ b/arch/powerpc/net/bpf_jit_comp.c
@@ -466,6 +466,11 @@ bool bpf_jit_supports_insn(struct bpf_insn *insn, bool in_arena)
 	return true;
 }
 
+bool bpf_jit_supports_percpu_insn(void)
+{
+	return IS_ENABLED(CONFIG_PPC64);
+}
+
 void *arch_alloc_bpf_trampoline(unsigned int size)
 {
 	return bpf_prog_pack_alloc(size, bpf_jit_fill_ill_insns);
diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 1fe37128c876..816f9d7d9e5d 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -918,6 +918,15 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
 		case BPF_ALU | BPF_MOV | BPF_X: /* (u32) dst = src */
 		case BPF_ALU64 | BPF_MOV | BPF_X: /* dst = src */
 
+			if (insn_is_mov_percpu_addr(&insn[i])) {
+				if (IS_ENABLED(CONFIG_SMP)) {
+					EMIT(PPC_RAW_LD(tmp1_reg, _R13, offsetof(struct paca_struct, data_offset)));
+					EMIT(PPC_RAW_ADD(dst_reg, src_reg, tmp1_reg));
+				} else if (src_reg != dst_reg) {
+					EMIT(PPC_RAW_MR(dst_reg, src_reg));
+				}
+			}
+
 			if (insn_is_cast_user(&insn[i])) {
 				EMIT(PPC_RAW_RLDICL_DOT(tmp1_reg, src_reg, 0, 32));
 				PPC_LI64(dst_reg, (ctx->user_vm_start & 0xffffffff00000000UL));
-- 
2.51.0

^ permalink raw reply related	[flat|nested] 4+ messages in thread
* Re: [PATCH bpf-next v3 1/2] powerpc64/bpf: Support internal-only MOV instruction to resolve per-CPU addrs
  2025-12-05 13:40 ` [PATCH bpf-next v3 1/2] powerpc64/bpf: Support internal-only MOV instruction to resolve per-CPU addrs Saket Kumar Bhaskar
@ 2025-12-05 14:01   ` bot+bpf-ci
  0 siblings, 0 replies; 4+ messages in thread
From: bot+bpf-ci @ 2025-12-05 14:01 UTC (permalink / raw)
  To: skb99, bpf, linuxppc-dev, linux-kernel
  Cc: hbathini, sachinpb, venkat88, andrii, eddyz87, ast, daniel,
	martin.lau, song, yonghong.song, john.fastabend, kpsingh, sdf,
	haoluo, jolsa, christophe.leroy, naveen, maddy, mpe, npiggin,
	ast, andrii, daniel, martin.lau, eddyz87, yonghong.song, clm,
	ihor.solodrai

[-- Attachment #1: Type: text/plain, Size: 2721 bytes --]

> diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
> index 5e976730b2f5..d53e9cd7563f 100644
> --- a/arch/powerpc/net/bpf_jit_comp.c
> +++ b/arch/powerpc/net/bpf_jit_comp.c

[ ... ]

> diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
> index 1fe37128c876..816f9d7d9e5d 100644
> --- a/arch/powerpc/net/bpf_jit_comp64.c
> +++ b/arch/powerpc/net/bpf_jit_comp64.c
> @@ -918,6 +918,15 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
>  		case BPF_ALU | BPF_MOV | BPF_X: /* (u32) dst = src */
>  		case BPF_ALU64 | BPF_MOV | BPF_X: /* dst = src */
>
> +			if (insn_is_mov_percpu_addr(&insn[i])) {
> +				if (IS_ENABLED(CONFIG_SMP)) {
> +					EMIT(PPC_RAW_LD(tmp1_reg, _R13, offsetof(struct paca_struct, data_offset)));
> +					EMIT(PPC_RAW_ADD(dst_reg, src_reg, tmp1_reg));
> +				} else if (src_reg != dst_reg) {
> +					EMIT(PPC_RAW_MR(dst_reg, src_reg));
> +				}
> +			}
> +

Does this code need a break statement here? After handling the per-CPU
MOV instruction, execution falls through to the subsequent MOV handling
code.

In the common case where dst_reg != src_reg with CONFIG_SMP enabled, the
code first emits ADD to compute the per-CPU address in dst_reg, but then
continues to the final else-if in this case statement, which emits
MR dst_reg, src_reg, overwriting the computed address.

Other architectures (x86 and arm64) use break after their per-CPU
instruction handling in bpf_jit_build_body()->BPF_MOV case.

>  			if (insn_is_cast_user(&insn[i])) {
>  				EMIT(PPC_RAW_RLDICL_DOT(tmp1_reg, src_reg, 0, 32));
>  				PPC_LI64(dst_reg, (ctx->user_vm_start & 0xffffffff00000000UL));
>  				PPC_BCC_SHORT(COND_EQ, (ctx->idx + 2) * 4);
>  				EMIT(PPC_RAW_OR(tmp1_reg, dst_reg, tmp1_reg));
>  				EMIT(PPC_RAW_MR(dst_reg, tmp1_reg));
>  				break;
>  			}
>
>  			if (imm == 1) {
>  				/* special mov32 for zext */
>  				EMIT(PPC_RAW_RLWINM(dst_reg, dst_reg, 0, 0, 31));
>  				break;
>  			} else if (off == 8) {
>  				EMIT(PPC_RAW_EXTSB(dst_reg, src_reg));
>  			} else if (off == 16) {
>  				EMIT(PPC_RAW_EXTSH(dst_reg, src_reg));
>  			} else if (off == 32) {
>  				EMIT(PPC_RAW_EXTSW(dst_reg, src_reg));
>  			} else if (dst_reg != src_reg)
                               ^^^^
This condition can be true for per-CPU instructions (which have
off == -1), causing the computed per-CPU address to be overwritten with
src_reg.

>  				EMIT(PPC_RAW_MR(dst_reg, src_reg));
>  			goto bpf_alu32_trunc;

---

AI reviewed your patch. Please fix the bug or email reply why it's not
a bug. See:
https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary:
https://github.com/kernel-patches/bpf/actions/runs/19964922727

^ permalink raw reply	[flat|nested] 4+ messages in thread
* [PATCH bpf-next v3 2/2] powerpc64/bpf: Inline bpf_get_smp_processor_id() and bpf_get_current_task/_btf()
  2025-12-05 13:40 [PATCH bpf-next v3 0/2] powerpc64/bpf: Inline helper in powerpc JIT Saket Kumar Bhaskar
  2025-12-05 13:40 ` [PATCH bpf-next v3 1/2] powerpc64/bpf: Support internal-only MOV instruction to resolve per-CPU addrs Saket Kumar Bhaskar
@ 2025-12-05 13:40 ` Saket Kumar Bhaskar
  1 sibling, 0 replies; 4+ messages in thread
From: Saket Kumar Bhaskar @ 2025-12-05 13:40 UTC (permalink / raw)
  To: bpf, linuxppc-dev, linux-kernel
  Cc: hbathini, sachinpb, venkat88, andrii, eddyz87, ast, daniel,
	martin.lau, song, yonghong.song, john.fastabend, kpsingh, sdf,
	haoluo, jolsa, christophe.leroy, naveen, maddy, mpe, npiggin

Inline the calls to bpf_get_smp_processor_id() and
bpf_get_current_task/_btf() in the powerpc BPF JIT.

powerpc saves the logical processor number (paca_index) and the pointer
to the current task (__current) in the paca.

Here is how the powerpc JITed assembly changes after this commit:

Before:

cpu = bpf_get_smp_processor_id();

addis 12, 2, -517
addi 12, 12, -29456
mtctr 12
bctrl
mr 8, 3

After:

cpu = bpf_get_smp_processor_id();

lhz 8, 8(13)

To evaluate the performance improvements introduced by this change,
the benchmark described in [1] was employed.

+---------------+-------------------+-------------------+--------------+
| Name          | Before            | After             | % change     |
|---------------+-------------------+-------------------+--------------|
| glob-arr-inc  | 40.701 ± 0.008M/s | 55.207 ± 0.021M/s | + 35.64%     |
| arr-inc       | 39.401 ± 0.007M/s | 56.275 ± 0.023M/s | + 42.42%     |
| hash-inc      | 24.944 ± 0.004M/s | 26.212 ± 0.003M/s | +  5.08%     |
+---------------+-------------------+-------------------+--------------+

[1] https://github.com/anakryiko/linux/commit/8dec900975ef

Reviewed-by: Puranjay Mohan <puranjay@kernel.org>
Signed-off-by: Saket Kumar Bhaskar <skb99@linux.ibm.com>
---
 arch/powerpc/net/bpf_jit_comp.c   | 12 ++++++++++++
 arch/powerpc/net/bpf_jit_comp64.c | 11 +++++++++++
 2 files changed, 23 insertions(+)

diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
index d53e9cd7563f..b243ee205885 100644
--- a/arch/powerpc/net/bpf_jit_comp.c
+++ b/arch/powerpc/net/bpf_jit_comp.c
@@ -471,6 +471,18 @@ bool bpf_jit_supports_percpu_insn(void)
 	return IS_ENABLED(CONFIG_PPC64);
 }
 
+bool bpf_jit_inlines_helper_call(s32 imm)
+{
+	switch (imm) {
+	case BPF_FUNC_get_smp_processor_id:
+	case BPF_FUNC_get_current_task:
+	case BPF_FUNC_get_current_task_btf:
+		return true;
+	default:
+		return false;
+	}
+}
+
 void *arch_alloc_bpf_trampoline(unsigned int size)
 {
 	return bpf_prog_pack_alloc(size, bpf_jit_fill_ill_insns);
diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 816f9d7d9e5d..76a44f9ad7d2 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -1399,6 +1399,17 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
 		case BPF_JMP | BPF_CALL:
 			ctx->seen |= SEEN_FUNC;
 
+			if (src_reg == bpf_to_ppc(BPF_REG_0)) {
+				if (imm == BPF_FUNC_get_smp_processor_id) {
+					EMIT(PPC_RAW_LHZ(src_reg, _R13, offsetof(struct paca_struct, paca_index)));
+					break;
+				} else if (imm == BPF_FUNC_get_current_task ||
+					   imm == BPF_FUNC_get_current_task_btf) {
+					EMIT(PPC_RAW_LD(src_reg, _R13, offsetof(struct paca_struct, __current)));
+					break;
+				}
+			}
+
 			ret = bpf_jit_get_func_addr(fp, &insn[i], extra_pass,
 						    &func_addr, &func_addr_fixed);
 			if (ret < 0)
-- 
2.51.0

^ permalink raw reply related	[flat|nested] 4+ messages in thread
end of thread, other threads:[~2025-12-05 14:01 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed
-- links below jump to the message on this page --
2025-12-05 13:40 [PATCH bpf-next v3 0/2] powerpc64/bpf: Inline helper in powerpc JIT Saket Kumar Bhaskar
2025-12-05 13:40 ` [PATCH bpf-next v3 1/2] powerpc64/bpf: Support internal-only MOV instruction to resolve per-CPU addrs Saket Kumar Bhaskar
2025-12-05 14:01   ` bot+bpf-ci
2025-12-05 13:40 ` [PATCH bpf-next v3 2/2] powerpc64/bpf: Inline bpf_get_smp_processor_id() and bpf_get_current_task/_btf() Saket Kumar Bhaskar
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox