* [PATCH bpf-next v2 0/2] bpf: Inline helper in powerpc JIT
@ 2025-11-17 6:52 Saket Kumar Bhaskar
2025-11-17 6:52 ` [PATCH bpf-next v2 1/2] powerpc64/bpf: Support internal-only MOV instruction to resolve per-CPU addrs Saket Kumar Bhaskar
From: Saket Kumar Bhaskar @ 2025-11-17 6:52 UTC (permalink / raw)
To: bpf, linuxppc-dev, linux-kernel
Cc: hbathini, sachinpb, venkat88, andrii, eddyz87, ast, daniel,
martin.lau, song, yonghong.song, john.fastabend, kpsingh, sdf,
haoluo, jolsa, christophe.leroy, naveen, maddy, mpe, npiggin
This series adds support for the internal-only per-CPU MOV instruction
and inlines the bpf_get_smp_processor_id() and bpf_get_current_task()
helper calls in the powerpc BPF JIT.
Changes since v1:
* Addressed Christophe's comments.
* Inlined bpf_get_current_task() as well.
v1: https://lore.kernel.org/all/20250311160955.825647-1-skb99@linux.ibm.com/
Saket Kumar Bhaskar (2):
powerpc64/bpf: Support internal-only MOV instruction to resolve
per-CPU addrs
powerpc64/bpf: Inline bpf_get_smp_processor_id() and
bpf_get_current_task()
arch/powerpc/net/bpf_jit_comp.c | 16 ++++++++++++++++
arch/powerpc/net/bpf_jit_comp64.c | 19 +++++++++++++++++++
2 files changed, 35 insertions(+)
--
2.51.0
* [PATCH bpf-next v2 1/2] powerpc64/bpf: Support internal-only MOV instruction to resolve per-CPU addrs
2025-11-17 6:52 [PATCH bpf-next v2 0/2] bpf: Inline helper in powerpc JIT Saket Kumar Bhaskar
@ 2025-11-17 6:52 ` Saket Kumar Bhaskar
2025-11-25 10:02 ` Christophe Leroy (CS GROUP)
2025-11-17 6:52 ` [PATCH bpf-next v2 2/2] powerpc64/bpf: Inline bpf_get_smp_processor_id() and bpf_get_current_task() Saket Kumar Bhaskar
2025-11-18 12:16 ` [PATCH bpf-next v2 0/2] bpf: Inline helper in powerpc JIT Puranjay Mohan
From: Saket Kumar Bhaskar @ 2025-11-17 6:52 UTC (permalink / raw)
To: bpf, linuxppc-dev, linux-kernel
Cc: hbathini, sachinpb, venkat88, andrii, eddyz87, ast, daniel,
martin.lau, song, yonghong.song, john.fastabend, kpsingh, sdf,
haoluo, jolsa, christophe.leroy, naveen, maddy, mpe, npiggin
With the introduction of commit 7bdbf7446305 ("bpf: add special
internal-only MOV instruction to resolve per-CPU addrs"),
a new BPF instruction BPF_MOV64_PERCPU_REG has been added to
resolve absolute addresses of per-CPU data from their per-CPU
offsets. This update requires enabling support for this
instruction in the powerpc JIT compiler.
As of commit 7a0268fa1a36 ("[PATCH] powerpc/64: per cpu data
optimisations"), the per-CPU data offset for the CPU is stored in
the paca.
To support this BPF instruction in the powerpc JIT, the following
powerpc instructions are emitted:
ld tmp1_reg, 48(13) //Load per-CPU data offset from paca(r13) in tmp1_reg.
add dst_reg, src_reg, tmp1_reg //Add the per cpu offset to the dst.
mr dst_reg, src_reg //Move src_reg to dst_reg, if src_reg != dst_reg
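For clarity, a commented sketch of the emission logic (this mirrors the
hunk below; the ld/add pair is the CONFIG_SMP path, the mr is the
!CONFIG_SMP fallback, and tmp1_reg is the JIT's scratch register):

	if (insn_is_mov_percpu_addr(&insn[i])) {
		if (IS_ENABLED(CONFIG_SMP)) {
			/* Load this CPU's per-CPU offset from the paca (r13)... */
			EMIT(PPC_RAW_LD(tmp1_reg, _R13, offsetof(struct paca_struct, data_offset)));
			/* ...and add it to the per-CPU offset held in src_reg. */
			EMIT(PPC_RAW_ADD(dst_reg, src_reg, tmp1_reg));
		} else {
			/* !SMP: the per-CPU offset is 0, a plain move is enough. */
			EMIT(PPC_RAW_MR(dst_reg, src_reg));
		}
	}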
To evaluate the performance improvements introduced by this change,
the benchmark described in [1] was employed.
Before Change:
glob-arr-inc : 41.580 ± 0.034M/s
arr-inc : 39.592 ± 0.055M/s
hash-inc : 25.873 ± 0.012M/s
After Change:
glob-arr-inc : 42.024 ± 0.049M/s
arr-inc : 55.447 ± 0.031M/s
hash-inc : 26.565 ± 0.014M/s
[1] https://github.com/anakryiko/linux/commit/8dec900975ef
Signed-off-by: Saket Kumar Bhaskar <skb99@linux.ibm.com>
---
arch/powerpc/net/bpf_jit_comp.c | 5 +++++
arch/powerpc/net/bpf_jit_comp64.c | 9 +++++++++
2 files changed, 14 insertions(+)
diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
index 88ad5ba7b87f..2f2230ae2145 100644
--- a/arch/powerpc/net/bpf_jit_comp.c
+++ b/arch/powerpc/net/bpf_jit_comp.c
@@ -466,6 +466,11 @@ bool bpf_jit_supports_insn(struct bpf_insn *insn, bool in_arena)
return true;
}
+bool bpf_jit_supports_percpu_insn(void)
+{
+ return IS_ENABLED(CONFIG_PPC64);
+}
+
void *arch_alloc_bpf_trampoline(unsigned int size)
{
return bpf_prog_pack_alloc(size, bpf_jit_fill_ill_insns);
diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 1fe37128c876..21486706b5ea 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -918,6 +918,15 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
case BPF_ALU | BPF_MOV | BPF_X: /* (u32) dst = src */
case BPF_ALU64 | BPF_MOV | BPF_X: /* dst = src */
+ if (insn_is_mov_percpu_addr(&insn[i])) {
+ if (IS_ENABLED(CONFIG_SMP)) {
+ EMIT(PPC_RAW_LD(tmp1_reg, _R13, offsetof(struct paca_struct, data_offset)));
+ EMIT(PPC_RAW_ADD(dst_reg, src_reg, tmp1_reg));
+ } else {
+ EMIT(PPC_RAW_MR(dst_reg, src_reg));
+ }
+ }
+
if (insn_is_cast_user(&insn[i])) {
EMIT(PPC_RAW_RLDICL_DOT(tmp1_reg, src_reg, 0, 32));
PPC_LI64(dst_reg, (ctx->user_vm_start & 0xffffffff00000000UL));
--
2.51.0
* [PATCH bpf-next v2 2/2] powerpc64/bpf: Inline bpf_get_smp_processor_id() and bpf_get_current_task()
2025-11-17 6:52 [PATCH bpf-next v2 0/2] bpf: Inline helper in powerpc JIT Saket Kumar Bhaskar
2025-11-17 6:52 ` [PATCH bpf-next v2 1/2] powerpc64/bpf: Support internal-only MOV instruction to resolve per-CPU addrs Saket Kumar Bhaskar
@ 2025-11-17 6:52 ` Saket Kumar Bhaskar
2025-11-25 10:15 ` Christophe Leroy (CS GROUP)
2025-11-18 12:16 ` [PATCH bpf-next v2 0/2] bpf: Inline helper in powerpc JIT Puranjay Mohan
From: Saket Kumar Bhaskar @ 2025-11-17 6:52 UTC (permalink / raw)
To: bpf, linuxppc-dev, linux-kernel
Cc: hbathini, sachinpb, venkat88, andrii, eddyz87, ast, daniel,
martin.lau, song, yonghong.song, john.fastabend, kpsingh, sdf,
haoluo, jolsa, christophe.leroy, naveen, maddy, mpe, npiggin
Inline the calls to bpf_get_smp_processor_id()/bpf_get_current_task()
in the powerpc bpf jit.
powerpc saves the logical processor number (paca_index) and the pointer
to the current task (__current) in the paca.
Here is how the powerpc JITed assembly changes after this commit:
Before:
cpu = bpf_get_smp_processor_id();
addis 12, 2, -517
addi 12, 12, -29456
mtctr 12
bctrl
mr 8, 3
After:
cpu = bpf_get_smp_processor_id();
lhz 8, 8(13)
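For illustration, a minimal sketch of the corresponding emission in the
JIT (dst below is a placeholder for the powerpc register backing BPF R0;
both fields are loaded straight from the paca in r13):

	if (imm == BPF_FUNC_get_smp_processor_id) {
		/* cpu = local_paca->paca_index (logical CPU number, halfword load) */
		EMIT(PPC_RAW_LHZ(dst, _R13, offsetof(struct paca_struct, paca_index)));
	} else if (imm == BPF_FUNC_get_current_task) {
		/* task = local_paca->__current */
		EMIT(PPC_RAW_LD(dst, _R13, offsetof(struct paca_struct, __current)));
	}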
To evaluate the performance improvements introduced by this change,
the benchmark described in [1] was employed.
+---------------+-------------------+-------------------+--------------+
| Name | Before | After | % change |
|---------------+-------------------+-------------------+--------------|
| glob-arr-inc | 40.701 ± 0.008M/s | 55.207 ± 0.021M/s | + 35.64% |
| arr-inc | 39.401 ± 0.007M/s | 56.275 ± 0.023M/s | + 42.42% |
| hash-inc | 24.944 ± 0.004M/s | 26.212 ± 0.003M/s | + 5.08% |
+---------------+-------------------+-------------------+--------------+
[1] https://github.com/anakryiko/linux/commit/8dec900975ef
Signed-off-by: Saket Kumar Bhaskar <skb99@linux.ibm.com>
---
arch/powerpc/net/bpf_jit_comp.c | 11 +++++++++++
arch/powerpc/net/bpf_jit_comp64.c | 10 ++++++++++
2 files changed, 21 insertions(+)
diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
index 2f2230ae2145..c88dfa1418ec 100644
--- a/arch/powerpc/net/bpf_jit_comp.c
+++ b/arch/powerpc/net/bpf_jit_comp.c
@@ -471,6 +471,17 @@ bool bpf_jit_supports_percpu_insn(void)
return IS_ENABLED(CONFIG_PPC64);
}
+bool bpf_jit_inlines_helper_call(s32 imm)
+{
+ switch (imm) {
+ case BPF_FUNC_get_smp_processor_id:
+ case BPF_FUNC_get_current_task:
+ return true;
+ default:
+ return false;
+ }
+}
+
void *arch_alloc_bpf_trampoline(unsigned int size)
{
return bpf_prog_pack_alloc(size, bpf_jit_fill_ill_insns);
diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 21486706b5ea..4e1643422370 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -1399,6 +1399,16 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
case BPF_JMP | BPF_CALL:
ctx->seen |= SEEN_FUNC;
+ if (insn[i].src_reg == BPF_REG_0) {
+ if (imm == BPF_FUNC_get_smp_processor_id) {
+ EMIT(PPC_RAW_LHZ(insn[i].src_reg, _R13, offsetof(struct paca_struct, paca_index)));
+ break;
+ } else if (imm == BPF_FUNC_get_current_task) {
+ EMIT(PPC_RAW_LD(insn[i].src_reg, _R13, offsetof(struct paca_struct, __current)));
+ break;
+ }
+ }
+
ret = bpf_jit_get_func_addr(fp, &insn[i], extra_pass,
&func_addr, &func_addr_fixed);
if (ret < 0)
--
2.51.0
* Re: [PATCH bpf-next v2 0/2] bpf: Inline helper in powerpc JIT
2025-11-17 6:52 [PATCH bpf-next v2 0/2] bpf: Inline helper in powerpc JIT Saket Kumar Bhaskar
2025-11-17 6:52 ` [PATCH bpf-next v2 1/2] powerpc64/bpf: Support internal-only MOV instruction to resolve per-CPU addrs Saket Kumar Bhaskar
2025-11-17 6:52 ` [PATCH bpf-next v2 2/2] powerpc64/bpf: Inline bpf_get_smp_processor_id() and bpf_get_current_task() Saket Kumar Bhaskar
@ 2025-11-18 12:16 ` Puranjay Mohan
From: Puranjay Mohan @ 2025-11-18 12:16 UTC (permalink / raw)
To: Saket Kumar Bhaskar, bpf, linuxppc-dev, linux-kernel
Cc: hbathini, sachinpb, venkat88, andrii, eddyz87, ast, daniel,
martin.lau, song, yonghong.song, john.fastabend, kpsingh, sdf,
haoluo, jolsa, christophe.leroy, naveen, maddy, mpe, npiggin
Saket Kumar Bhaskar <skb99@linux.ibm.com> writes:
> This series adds support for the internal-only per-CPU MOV instruction
> and inlines the bpf_get_smp_processor_id() and bpf_get_current_task()
> helper calls in the powerpc BPF JIT.
>
> Changes since v1:
> * Addressed Christophe's comments.
> * Inlined bpf_get_current_task() as well.
>
> v1: https://lore.kernel.org/all/20250311160955.825647-1-skb99@linux.ibm.com/
>
> Saket Kumar Bhaskar (2):
> powerpc64/bpf: Support internal-only MOV instruction to resolve
> per-CPU addrs
> powerpc64/bpf: Inline bpf_get_smp_processor_id() and
> bpf_get_current_task()
>
> arch/powerpc/net/bpf_jit_comp.c | 16 ++++++++++++++++
> arch/powerpc/net/bpf_jit_comp64.c | 19 +++++++++++++++++++
> 2 files changed, 35 insertions(+)
For both patches:
Reviewed-by: Puranjay Mohan <puranjay@kernel.org>
* Re: [PATCH bpf-next v2 1/2] powerpc64/bpf: Support internal-only MOV instruction to resolve per-CPU addrs
2025-11-17 6:52 ` [PATCH bpf-next v2 1/2] powerpc64/bpf: Support internal-only MOV instruction to resolve per-CPU addrs Saket Kumar Bhaskar
@ 2025-11-25 10:02 ` Christophe Leroy (CS GROUP)
From: Christophe Leroy (CS GROUP) @ 2025-11-25 10:02 UTC (permalink / raw)
To: Saket Kumar Bhaskar, bpf, linuxppc-dev, linux-kernel
Cc: hbathini, sachinpb, venkat88, andrii, eddyz87, ast, daniel,
martin.lau, song, yonghong.song, john.fastabend, kpsingh, sdf,
haoluo, jolsa, naveen, maddy, mpe, npiggin
On 17/11/2025 at 07:52, Saket Kumar Bhaskar wrote:
> With the introduction of commit 7bdbf7446305 ("bpf: add special
> internal-only MOV instruction to resolve per-CPU addrs"),
> a new BPF instruction BPF_MOV64_PERCPU_REG has been added to
> resolve absolute addresses of per-CPU data from their per-CPU
> offsets. This update requires enabling support for this
> instruction in the powerpc JIT compiler.
>
> As of commit 7a0268fa1a36 ("[PATCH] powerpc/64: per cpu data
> optimisations"), the per-CPU data offset for the CPU is stored in
> the paca.
>
> To support this BPF instruction in the powerpc JIT, the following
> powerpc instructions are emitted:
>
> ld tmp1_reg, 48(13) //Load per-CPU data offset from paca(r13) in tmp1_reg.
> add dst_reg, src_reg, tmp1_reg //Add the per cpu offset to the dst.
> mr dst_reg, src_reg //Move src_reg to dst_reg, if src_reg != dst_reg
Something must be wrong here. The 'add' was done into dst_reg, so here
you overwrite the result of the addition with the source register.
>
> To evaluate the performance improvements introduced by this change,
> the benchmark described in [1] was employed.
>
> Before Change:
> glob-arr-inc : 41.580 ± 0.034M/s
> arr-inc : 39.592 ± 0.055M/s
> hash-inc : 25.873 ± 0.012M/s
>
> After Change:
> glob-arr-inc : 42.024 ± 0.049M/s
> arr-inc : 55.447 ± 0.031M/s
> hash-inc : 26.565 ± 0.014M/s
>
> [1] https://github.com/anakryiko/linux/commit/8dec900975ef
>
> Signed-off-by: Saket Kumar Bhaskar <skb99@linux.ibm.com>
> ---
> arch/powerpc/net/bpf_jit_comp.c | 5 +++++
> arch/powerpc/net/bpf_jit_comp64.c | 9 +++++++++
> 2 files changed, 14 insertions(+)
>
> diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
> index 88ad5ba7b87f..2f2230ae2145 100644
> --- a/arch/powerpc/net/bpf_jit_comp.c
> +++ b/arch/powerpc/net/bpf_jit_comp.c
> @@ -466,6 +466,11 @@ bool bpf_jit_supports_insn(struct bpf_insn *insn, bool in_arena)
> return true;
> }
>
> +bool bpf_jit_supports_percpu_insn(void)
> +{
> + return IS_ENABLED(CONFIG_PPC64);
> +}
> +
> void *arch_alloc_bpf_trampoline(unsigned int size)
> {
> return bpf_prog_pack_alloc(size, bpf_jit_fill_ill_insns);
> diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
> index 1fe37128c876..21486706b5ea 100644
> --- a/arch/powerpc/net/bpf_jit_comp64.c
> +++ b/arch/powerpc/net/bpf_jit_comp64.c
> @@ -918,6 +918,15 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
> case BPF_ALU | BPF_MOV | BPF_X: /* (u32) dst = src */
> case BPF_ALU64 | BPF_MOV | BPF_X: /* dst = src */
>
> + if (insn_is_mov_percpu_addr(&insn[i])) {
> + if (IS_ENABLED(CONFIG_SMP)) {
> + EMIT(PPC_RAW_LD(tmp1_reg, _R13, offsetof(struct paca_struct, data_offset)));
> + EMIT(PPC_RAW_ADD(dst_reg, src_reg, tmp1_reg));
> + } else {
> + EMIT(PPC_RAW_MR(dst_reg, src_reg));
You should make sure dst_reg is different from src_reg before emitting
this; otherwise you may generate one of the following instructions,
which change the thread priority:
#define HMT_very_low() asm volatile("or 31, 31, 31 # very low priority")
#define HMT_low() asm volatile("or 1, 1, 1 # low priority")
#define HMT_medium_low() asm volatile("or 6, 6, 6 # medium low priority")
#define HMT_medium() asm volatile("or 2, 2, 2 # medium priority")
#define HMT_medium_high() asm volatile("or 5, 5, 5 # medium high priority")
#define HMT_high() asm volatile("or 3, 3, 3 # high priority")
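A minimal sketch of the suggested guard:

	} else {
		/* Only emit the move when the registers differ: 'mr rX,rX'
		 * encodes 'or rX,rX,rX', which for some register numbers is
		 * an HMT priority hint rather than a no-op. */
		if (dst_reg != src_reg)
			EMIT(PPC_RAW_MR(dst_reg, src_reg));
	}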
> + }
> + }
> +
> if (insn_is_cast_user(&insn[i])) {
> EMIT(PPC_RAW_RLDICL_DOT(tmp1_reg, src_reg, 0, 32));
> PPC_LI64(dst_reg, (ctx->user_vm_start & 0xffffffff00000000UL));
* Re: [PATCH bpf-next v2 2/2] powerpc64/bpf: Inline bpf_get_smp_processor_id() and bpf_get_current_task()
2025-11-17 6:52 ` [PATCH bpf-next v2 2/2] powerpc64/bpf: Inline bpf_get_smp_processor_id() and bpf_get_current_task() Saket Kumar Bhaskar
@ 2025-11-25 10:15 ` Christophe Leroy (CS GROUP)
From: Christophe Leroy (CS GROUP) @ 2025-11-25 10:15 UTC (permalink / raw)
To: Saket Kumar Bhaskar, bpf, linuxppc-dev, linux-kernel
Cc: hbathini, sachinpb, venkat88, andrii, eddyz87, ast, daniel,
martin.lau, song, yonghong.song, john.fastabend, kpsingh, sdf,
haoluo, jolsa, naveen, maddy, mpe, npiggin
On 17/11/2025 at 07:52, Saket Kumar Bhaskar wrote:
> Inline the calls to bpf_get_smp_processor_id()/bpf_get_current_task()
> in the powerpc bpf jit.
>
> powerpc saves the Logical processor number (paca_index) and pointer
> to current task (__current) in paca.
>
> Here is how the powerpc JITed assembly changes after this commit:
>
> Before:
>
> cpu = bpf_get_smp_processor_id();
>
> addis 12, 2, -517
> addi 12, 12, -29456
> mtctr 12
> bctrl
> mr 8, 3
>
> After:
>
> cpu = bpf_get_smp_processor_id();
>
> lhz 8, 8(13)
>
> To evaluate the performance improvements introduced by this change,
> the benchmark described in [1] was employed.
>
> +---------------+-------------------+-------------------+--------------+
> | Name | Before | After | % change |
> |---------------+-------------------+-------------------+--------------|
> | glob-arr-inc | 40.701 ± 0.008M/s | 55.207 ± 0.021M/s | + 35.64% |
> | arr-inc | 39.401 ± 0.007M/s | 56.275 ± 0.023M/s | + 42.42% |
> | hash-inc | 24.944 ± 0.004M/s | 26.212 ± 0.003M/s | + 5.08% |
> +---------------+-------------------+-------------------+--------------+
>
> [1] https://github.com/anakryiko/linux/commit/8dec900975ef
>
> Signed-off-by: Saket Kumar Bhaskar <skb99@linux.ibm.com>
> ---
> arch/powerpc/net/bpf_jit_comp.c | 11 +++++++++++
> arch/powerpc/net/bpf_jit_comp64.c | 10 ++++++++++
> 2 files changed, 21 insertions(+)
>
> diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
> index 2f2230ae2145..c88dfa1418ec 100644
> --- a/arch/powerpc/net/bpf_jit_comp.c
> +++ b/arch/powerpc/net/bpf_jit_comp.c
> @@ -471,6 +471,17 @@ bool bpf_jit_supports_percpu_insn(void)
> return IS_ENABLED(CONFIG_PPC64);
> }
>
> +bool bpf_jit_inlines_helper_call(s32 imm)
> +{
> + switch (imm) {
> + case BPF_FUNC_get_smp_processor_id:
> + case BPF_FUNC_get_current_task:
What about BPF_FUNC_get_current_task_btf?
> + return true;
> + default:
> + return false;
> + }
> +}
> +
> void *arch_alloc_bpf_trampoline(unsigned int size)
> {
> return bpf_prog_pack_alloc(size, bpf_jit_fill_ill_insns);
> diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
> index 21486706b5ea..4e1643422370 100644
> --- a/arch/powerpc/net/bpf_jit_comp64.c
> +++ b/arch/powerpc/net/bpf_jit_comp64.c
> @@ -1399,6 +1399,16 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
> case BPF_JMP | BPF_CALL:
> ctx->seen |= SEEN_FUNC;
>
> + if (insn[i].src_reg == BPF_REG_0) {
Are you sure you want to use BPF_REG_0 here? Is it the correct meaning?
I see RISC-V and ARM64 use 0 instead.
If you keep BPF_REG_0, I would prefer:
if (src_reg == bpf_to_ppc(BPF_REG_0))
> + if (imm == BPF_FUNC_get_smp_processor_id) {
> + EMIT(PPC_RAW_LHZ(insn[i].src_reg, _R13, offsetof(struct paca_struct, paca_index)));
This looks wrong: you can't use insn[i].src_reg to emit powerpc
instructions; you must use the local src_reg, which converts the register
ID with bpf_to_ppc().
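For illustration, a sketch with the converted register:

	EMIT(PPC_RAW_LHZ(src_reg, _R13, offsetof(struct paca_struct, paca_index)));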
> + break;
> + } else if (imm == BPF_FUNC_get_current_task) {
> + EMIT(PPC_RAW_LD(insn[i].src_reg, _R13, offsetof(struct paca_struct, __current)));
Same here.
> + break;
> + }
> + }
> +
> ret = bpf_jit_get_func_addr(fp, &insn[i], extra_pass,
> &func_addr, &func_addr_fixed);
> if (ret < 0)