linuxppc-dev.lists.ozlabs.org archive mirror
* [PATCH 0/2] bpf: Inline helper in powerpc JIT
@ 2025-03-11 16:09 Saket Kumar Bhaskar
  2025-03-11 16:09 ` [PATCH 1/2] powerpc, bpf: Support internal-only MOV instruction to resolve per-CPU addrs Saket Kumar Bhaskar
  2025-03-11 16:09 ` [PATCH 2/2] powerpc, bpf: Inline bpf_get_smp_processor_id() Saket Kumar Bhaskar
  0 siblings, 2 replies; 7+ messages in thread
From: Saket Kumar Bhaskar @ 2025-03-11 16:09 UTC (permalink / raw)
  To: bpf, linuxppc-dev, linux-kernel
  Cc: ast, hbathini, andrii, daniel, martin.lau, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa,
	christophe.leroy, naveen, maddy, mpe, npiggin

This series adds support for the internal-only per-CPU MOV instruction
and inlines the bpf_get_smp_processor_id() helper call in the powerpc
BPF JIT.


Saket Kumar Bhaskar (2):
  powerpc, bpf: Support internal-only MOV instruction to resolve per-CPU
    addrs
  powerpc, bpf: Inline bpf_get_smp_processor_id()

 arch/powerpc/net/bpf_jit_comp.c   | 15 +++++++++++++++
 arch/powerpc/net/bpf_jit_comp64.c | 13 +++++++++++++
 2 files changed, 28 insertions(+)

-- 
2.43.5




* [PATCH 1/2] powerpc, bpf: Support internal-only MOV instruction to resolve per-CPU addrs
  2025-03-11 16:09 [PATCH 0/2] bpf: Inline helper in powerpc JIT Saket Kumar Bhaskar
@ 2025-03-11 16:09 ` Saket Kumar Bhaskar
  2025-03-11 17:38   ` Christophe Leroy
  2025-03-11 16:09 ` [PATCH 2/2] powerpc, bpf: Inline bpf_get_smp_processor_id() Saket Kumar Bhaskar
  1 sibling, 1 reply; 7+ messages in thread
From: Saket Kumar Bhaskar @ 2025-03-11 16:09 UTC (permalink / raw)
  To: bpf, linuxppc-dev, linux-kernel
  Cc: ast, hbathini, andrii, daniel, martin.lau, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa,
	christophe.leroy, naveen, maddy, mpe, npiggin

With the introduction of commit 7bdbf7446305 ("bpf: add special
internal-only MOV instruction to resolve per-CPU addrs"),
a new BPF instruction BPF_MOV64_PERCPU_REG has been added to
resolve absolute addresses of per-CPU data from their per-CPU
offsets. This update requires enabling support for this
instruction in the powerpc JIT compiler.
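
For reference, the generic BPF side encodes and recognizes this
instruction roughly as below (a simplified sketch; BPF_MOV64_PERCPU_REG
comes from commit 7bdbf7446305 and insn_is_mov_percpu_addr() is the
generic helper used here to detect it; refer to the generic BPF headers
for the authoritative definitions):

	/* Special internal-only form of mov:
	 *   dst_reg = src_reg + <this CPU's per-CPU offset>
	 * flagged by a reserved insn->off value.
	 */
	#define BPF_ADDR_PERCPU	(-1)

	#define BPF_MOV64_PERCPU_REG(DST, SRC)				\
		((struct bpf_insn) {					\
			.code  = BPF_ALU64 | BPF_MOV | BPF_X,		\
			.dst_reg = DST,					\
			.src_reg = SRC,					\
			.off   = BPF_ADDR_PERCPU,			\
			.imm   = 0 })

	static inline bool insn_is_mov_percpu_addr(const struct bpf_insn *insn)
	{
		return insn->code == (BPF_ALU64 | BPF_MOV | BPF_X) &&
		       insn->off == BPF_ADDR_PERCPU;
	}

The JIT hooks into the existing BPF_ALU64 | BPF_MOV | BPF_X case and
checks this helper before emitting the per-CPU address computation.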

As of commit 7a0268fa1a36 ("[PATCH] powerpc/64: per cpu data
optimisations"), the per-CPU data offset for the CPU is stored in
the paca.

To support this BPF instruction in the powerpc JIT, the following
powerpc instructions are emitted:

mr dst_reg, src_reg		// Move src_reg to dst_reg (emitted only if src_reg != dst_reg)
ld tmp1_reg, 48(13)		// Load the per-CPU data offset from the paca (r13) into tmp1_reg
add dst_reg, dst_reg, tmp1_reg	// Add the per-CPU offset to dst_reg

To evaluate the performance improvements introduced by this change,
the benchmark described in [1] was employed.

Before Change:
glob-arr-inc   :   41.580 ± 0.034M/s
arr-inc        :   39.592 ± 0.055M/s
hash-inc       :   25.873 ± 0.012M/s

After Change:
glob-arr-inc   :   42.024 ± 0.049M/s
arr-inc        :   55.447 ± 0.031M/s
hash-inc       :   26.565 ± 0.014M/s

[1] https://github.com/anakryiko/linux/commit/8dec900975ef

Signed-off-by: Saket Kumar Bhaskar <skb99@linux.ibm.com>
---
 arch/powerpc/net/bpf_jit_comp.c   | 5 +++++
 arch/powerpc/net/bpf_jit_comp64.c | 8 ++++++++
 2 files changed, 13 insertions(+)

diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
index 2991bb171a9b..3d4bd45a9a22 100644
--- a/arch/powerpc/net/bpf_jit_comp.c
+++ b/arch/powerpc/net/bpf_jit_comp.c
@@ -440,6 +440,11 @@ bool bpf_jit_supports_far_kfunc_call(void)
 	return IS_ENABLED(CONFIG_PPC64);
 }
 
+bool bpf_jit_supports_percpu_insn(void)
+{
+	return true;
+}
+
 void *arch_alloc_bpf_trampoline(unsigned int size)
 {
 	return bpf_prog_pack_alloc(size, bpf_jit_fill_ill_insns);
diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 233703b06d7c..06f06770ceea 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -679,6 +679,14 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
 		 */
 		case BPF_ALU | BPF_MOV | BPF_X: /* (u32) dst = src */
 		case BPF_ALU64 | BPF_MOV | BPF_X: /* dst = src */
+			if (insn_is_mov_percpu_addr(&insn[i])) {
+				if (dst_reg != src_reg)
+					EMIT(PPC_RAW_MR(dst_reg, src_reg));
+#ifdef CONFIG_SMP
+				EMIT(PPC_RAW_LD(tmp1_reg, _R13, offsetof(struct paca_struct, data_offset)));
+				EMIT(PPC_RAW_ADD(dst_reg, dst_reg, tmp1_reg));
+#endif
+			}
 			if (imm == 1) {
 				/* special mov32 for zext */
 				EMIT(PPC_RAW_RLWINM(dst_reg, dst_reg, 0, 0, 31));
-- 
2.43.5




* [PATCH 2/2] powerpc, bpf: Inline bpf_get_smp_processor_id()
  2025-03-11 16:09 [PATCH 0/2] bpf: Inline helper in powerpc JIT Saket Kumar Bhaskar
  2025-03-11 16:09 ` [PATCH 1/2] powerpc, bpf: Support internal-only MOV instruction to resolve per-CPU addrs Saket Kumar Bhaskar
@ 2025-03-11 16:09 ` Saket Kumar Bhaskar
  2025-03-11 17:51   ` Christophe Leroy
  1 sibling, 1 reply; 7+ messages in thread
From: Saket Kumar Bhaskar @ 2025-03-11 16:09 UTC (permalink / raw)
  To: bpf, linuxppc-dev, linux-kernel
  Cc: ast, hbathini, andrii, daniel, martin.lau, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa,
	christophe.leroy, naveen, maddy, mpe, npiggin

Inline calls to bpf_get_smp_processor_id() in the powerpc BPF JIT.

On powerpc, the logical processor number (paca_index) is stored in the
paca.
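
For context, paca_index is a u16 and is the same field the kernel's own
raw_smp_processor_id() reads on ppc64, which is why a single lhz from
the paca (r13) is sufficient. A sketch of the assumed definitions,
abridged from the powerpc headers:

	/* arch/powerpc/include/asm/paca.h (abridged) */
	struct paca_struct {
		/* ... */
		u16 paca_index;		/* Logical processor number */
		/* ... */
	};

	/* arch/powerpc/include/asm/smp.h, CONFIG_PPC64 */
	#define raw_smp_processor_id()	(local_paca->paca_index)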

Here is how the powerpc JITed assembly changes after this commit:

Before:

cpu = bpf_get_smp_processor_id();

addis 12, 2, -517
addi 12, 12, -29456
mtctr 12
bctrl
mr	8, 3

After:

cpu = bpf_get_smp_processor_id();

lhz 8, 8(13)

To evaluate the performance improvements introduced by this change,
the benchmark described in [1] was employed.

+---------------+-------------------+-------------------+--------------+
|      Name     |      Before       |        After      |   % change   |
|---------------+-------------------+-------------------+--------------|
| glob-arr-inc  | 41.580 ± 0.034M/s | 54.137 ± 0.019M/s |   + 30.20%   |
| arr-inc       | 39.592 ± 0.055M/s | 54.000 ± 0.026M/s |   + 36.39%   |
| hash-inc      | 25.873 ± 0.012M/s | 26.334 ± 0.058M/s |   + 1.78%    |
+---------------+-------------------+-------------------+--------------+

[1] https://github.com/anakryiko/linux/commit/8dec900975ef

Signed-off-by: Saket Kumar Bhaskar <skb99@linux.ibm.com>
---
 arch/powerpc/net/bpf_jit_comp.c   | 10 ++++++++++
 arch/powerpc/net/bpf_jit_comp64.c |  5 +++++
 2 files changed, 15 insertions(+)

diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
index 3d4bd45a9a22..4b79b2d95469 100644
--- a/arch/powerpc/net/bpf_jit_comp.c
+++ b/arch/powerpc/net/bpf_jit_comp.c
@@ -445,6 +445,16 @@ bool bpf_jit_supports_percpu_insn(void)
 	return true;
 }
 
+bool bpf_jit_inlines_helper_call(s32 imm)
+{
+	switch (imm) {
+	case BPF_FUNC_get_smp_processor_id:
+		return true;
+	default:
+		return false;
+	}
+}
+
 void *arch_alloc_bpf_trampoline(unsigned int size)
 {
 	return bpf_prog_pack_alloc(size, bpf_jit_fill_ill_insns);
diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 06f06770ceea..a8de12c026da 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -1087,6 +1087,11 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
 		case BPF_JMP | BPF_CALL:
 			ctx->seen |= SEEN_FUNC;
 
+			if (insn[i].src_reg == 0 && imm == BPF_FUNC_get_smp_processor_id) {
+				EMIT(PPC_RAW_LHZ(bpf_to_ppc(BPF_REG_0), _R13, offsetof(struct paca_struct, paca_index)));
+				break;
+			}
+
 			ret = bpf_jit_get_func_addr(fp, &insn[i], extra_pass,
 						    &func_addr, &func_addr_fixed);
 			if (ret < 0)
-- 
2.43.5




* Re: [PATCH 1/2] powerpc, bpf: Support internal-only MOV instruction to resolve per-CPU addrs
  2025-03-11 16:09 ` [PATCH 1/2] powerpc, bpf: Support internal-only MOV instruction to resolve per-CPU addrs Saket Kumar Bhaskar
@ 2025-03-11 17:38   ` Christophe Leroy
  2025-04-29 16:59     ` Saket Kumar Bhaskar
  0 siblings, 1 reply; 7+ messages in thread
From: Christophe Leroy @ 2025-03-11 17:38 UTC (permalink / raw)
  To: Saket Kumar Bhaskar, bpf, linuxppc-dev, linux-kernel
  Cc: ast, hbathini, andrii, daniel, martin.lau, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa,
	naveen, maddy, mpe, npiggin



On 11/03/2025 at 17:09, Saket Kumar Bhaskar wrote:
> 
> With the introduction of commit 7bdbf7446305 ("bpf: add special
> internal-only MOV instruction to resolve per-CPU addrs"),
> a new BPF instruction BPF_MOV64_PERCPU_REG has been added to
> resolve absolute addresses of per-CPU data from their per-CPU
> offsets. This update requires enabling support for this
> instruction in the powerpc JIT compiler.
> 
> As of commit 7a0268fa1a36 ("[PATCH] powerpc/64: per cpu data
> optimisations"), the per-CPU data offset for the CPU is stored in
> the paca.
> 
> To support this BPF instruction in the powerpc JIT, the following
> powerpc instructions are emitted:
> 
> mr dst_reg, src_reg             //Move src_reg to dst_reg, if src_reg != dst_reg
> ld tmp1_reg, 48(13)             //Load per-CPU data offset from paca(r13) in tmp1_reg.
> add dst_reg, dst_reg, tmp1_reg  //Add the per cpu offset to the dst.

Why not do:

   add dst_reg, src_reg, tmp1_reg

instead of a combination of 'mr' and 'add' ?


> 
> To evaluate the performance improvements introduced by this change,
> the benchmark described in [1] was employed.
> 
> Before Change:
> glob-arr-inc   :   41.580 ± 0.034M/s
> arr-inc        :   39.592 ± 0.055M/s
> hash-inc       :   25.873 ± 0.012M/s
> 
> After Change:
> glob-arr-inc   :   42.024 ± 0.049M/s
> arr-inc        :   55.447 ± 0.031M/s
> hash-inc       :   26.565 ± 0.014M/s
> 
> [1] https://github.com/anakryiko/linux/commit/8dec900975ef
> 
> Signed-off-by: Saket Kumar Bhaskar <skb99@linux.ibm.com>
> ---
>   arch/powerpc/net/bpf_jit_comp.c   | 5 +++++
>   arch/powerpc/net/bpf_jit_comp64.c | 8 ++++++++
>   2 files changed, 13 insertions(+)
> 
> diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
> index 2991bb171a9b..3d4bd45a9a22 100644
> --- a/arch/powerpc/net/bpf_jit_comp.c
> +++ b/arch/powerpc/net/bpf_jit_comp.c
> @@ -440,6 +440,11 @@ bool bpf_jit_supports_far_kfunc_call(void)
>          return IS_ENABLED(CONFIG_PPC64);
>   }
> 
> +bool bpf_jit_supports_percpu_insn(void)
> +{
> +       return true;
> +}
> +

What about PPC32 ?

>   void *arch_alloc_bpf_trampoline(unsigned int size)
>   {
>          return bpf_prog_pack_alloc(size, bpf_jit_fill_ill_insns);
> diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
> index 233703b06d7c..06f06770ceea 100644
> --- a/arch/powerpc/net/bpf_jit_comp64.c
> +++ b/arch/powerpc/net/bpf_jit_comp64.c
> @@ -679,6 +679,14 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
>                   */
>                  case BPF_ALU | BPF_MOV | BPF_X: /* (u32) dst = src */
>                  case BPF_ALU64 | BPF_MOV | BPF_X: /* dst = src */
> +                       if (insn_is_mov_percpu_addr(&insn[i])) {
> +                               if (dst_reg != src_reg)
> +                                       EMIT(PPC_RAW_MR(dst_reg, src_reg));

Shouldn't be needed except for the non-SMP case maybe.

> +#ifdef CONFIG_SMP
> +                               EMIT(PPC_RAW_LD(tmp1_reg, _R13, offsetof(struct paca_struct, data_offset)));
> +                               EMIT(PPC_RAW_ADD(dst_reg, dst_reg, tmp1_reg));

Can use src_reg as first operand instead of dst_reg

> +#endif

data_offset always exists in paca_struct, please use 
IS_ENABLED(CONFIG_SMP) instead of #ifdef
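
Putting the three remarks together, the emit could look something like
this (untested sketch):

	if (insn_is_mov_percpu_addr(&insn[i])) {
		if (IS_ENABLED(CONFIG_SMP)) {
			/* dst = src + this CPU's per-CPU offset from the paca */
			EMIT(PPC_RAW_LD(tmp1_reg, _R13,
					offsetof(struct paca_struct, data_offset)));
			EMIT(PPC_RAW_ADD(dst_reg, src_reg, tmp1_reg));
		} else if (dst_reg != src_reg) {
			/* !SMP: no per-CPU offset to add, a plain move is enough */
			EMIT(PPC_RAW_MR(dst_reg, src_reg));
		}
	}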

> +                       }
>                          if (imm == 1) {
>                                  /* special mov32 for zext */
>                                  EMIT(PPC_RAW_RLWINM(dst_reg, dst_reg, 0, 0, 31));
> --
> 2.43.5
> 




* Re: [PATCH 2/2] powerpc, bpf: Inline bpf_get_smp_processor_id()
  2025-03-11 16:09 ` [PATCH 2/2] powerpc, bpf: Inline bpf_get_smp_processor_id() Saket Kumar Bhaskar
@ 2025-03-11 17:51   ` Christophe Leroy
  2025-04-29 17:02     ` Saket Kumar Bhaskar
  0 siblings, 1 reply; 7+ messages in thread
From: Christophe Leroy @ 2025-03-11 17:51 UTC (permalink / raw)
  To: Saket Kumar Bhaskar, bpf, linuxppc-dev, linux-kernel
  Cc: ast, hbathini, andrii, daniel, martin.lau, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa,
	naveen, maddy, mpe, npiggin



On 11/03/2025 at 17:09, Saket Kumar Bhaskar wrote:
> 
> Inline the calls to bpf_get_smp_processor_id() in the powerpc bpf jit.
> 
> powerpc saves the Logical processor number (paca_index) in paca.
> 
> Here is how the powerpc JITed assembly changes after this commit:
> 
> Before:
> 
> cpu = bpf_get_smp_processor_id();
> 
> addis 12, 2, -517
> addi 12, 12, -29456
> mtctr 12
> bctrl
> mr      8, 3
> 
> After:
> 
> cpu = bpf_get_smp_processor_id();
> 
> lhz 8, 8(13)
> 
> To evaluate the performance improvements introduced by this change,
> the benchmark described in [1] was employed.
> 
> +---------------+-------------------+-------------------+--------------+
> |      Name     |      Before       |        After      |   % change   |
> |---------------+-------------------+-------------------+--------------|
> | glob-arr-inc  | 41.580 ± 0.034M/s | 54.137 ± 0.019M/s |   + 30.20%   |
> | arr-inc       | 39.592 ± 0.055M/s | 54.000 ± 0.026M/s |   + 36.39%   |
> | hash-inc      | 25.873 ± 0.012M/s | 26.334 ± 0.058M/s |   + 1.78%    |
> +---------------+-------------------+-------------------+--------------+
> 

Nice improvement.

I see that bpf_get_current_task() could be inlined as well: on PPC32 it
is in r2, on PPC64 it is in the paca.
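
For PPC64 that would be roughly the following (untested sketch; it
assumes the current task pointer is paca->__current and that
bpf_jit_inlines_helper_call() is taught about BPF_FUNC_get_current_task
as well):

	if (insn[i].src_reg == 0 && imm == BPF_FUNC_get_current_task) {
		/* r13 points to the paca on PPC64; on PPC32, current
		 * lives in r2, so a single mr from _R2 would do there.
		 */
		EMIT(PPC_RAW_LD(bpf_to_ppc(BPF_REG_0), _R13,
				offsetof(struct paca_struct, __current)));
		break;
	}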

> [1] https://github.com/anakryiko/linux/commit/8dec900975ef
> 
> Signed-off-by: Saket Kumar Bhaskar <skb99@linux.ibm.com>
> ---
>   arch/powerpc/net/bpf_jit_comp.c   | 10 ++++++++++
>   arch/powerpc/net/bpf_jit_comp64.c |  5 +++++
>   2 files changed, 15 insertions(+)
> 
> diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
> index 3d4bd45a9a22..4b79b2d95469 100644
> --- a/arch/powerpc/net/bpf_jit_comp.c
> +++ b/arch/powerpc/net/bpf_jit_comp.c
> @@ -445,6 +445,16 @@ bool bpf_jit_supports_percpu_insn(void)
>          return true;
>   }
> 
> +bool bpf_jit_inlines_helper_call(s32 imm)
> +{
> +       switch (imm) {
> +       case BPF_FUNC_get_smp_processor_id:
> +               return true;
> +       default:
> +               return false;
> +       }
> +}

What about PPC32 ?


> +
>   void *arch_alloc_bpf_trampoline(unsigned int size)
>   {
>          return bpf_prog_pack_alloc(size, bpf_jit_fill_ill_insns);
> diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
> index 06f06770ceea..a8de12c026da 100644
> --- a/arch/powerpc/net/bpf_jit_comp64.c
> +++ b/arch/powerpc/net/bpf_jit_comp64.c
> @@ -1087,6 +1087,11 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
>                  case BPF_JMP | BPF_CALL:
>                          ctx->seen |= SEEN_FUNC;
> 
> +                       if (insn[i].src_reg == 0 && imm == BPF_FUNC_get_smp_processor_id) {

Please use BPF_REG_0 instead of just 0.

> +                               EMIT(PPC_RAW_LHZ(bpf_to_ppc(BPF_REG_0), _R13, offsetof(struct paca_struct, paca_index)));

Can just use 'src_reg' instead of 'bpf_to_ppc(BPF_REG_0)'

> +                               break;
> +                       }
> +
>                          ret = bpf_jit_get_func_addr(fp, &insn[i], extra_pass,
>                                                      &func_addr, &func_addr_fixed);
>                          if (ret < 0)
> --
> 2.43.5
> 




* Re: [PATCH 1/2] powerpc, bpf: Support internal-only MOV instruction to resolve per-CPU addrs
  2025-03-11 17:38   ` Christophe Leroy
@ 2025-04-29 16:59     ` Saket Kumar Bhaskar
  0 siblings, 0 replies; 7+ messages in thread
From: Saket Kumar Bhaskar @ 2025-04-29 16:59 UTC (permalink / raw)
  To: Christophe Leroy
  Cc: bpf, linuxppc-dev, linux-kernel, ast, hbathini, andrii, daniel,
	martin.lau, eddyz87, song, yonghong.song, john.fastabend, kpsingh,
	sdf, haoluo, jolsa, naveen, maddy, mpe, npiggin

On Tue, Mar 11, 2025 at 06:38:23PM +0100, Christophe Leroy wrote:
> 
> 
> On 11/03/2025 at 17:09, Saket Kumar Bhaskar wrote:
> > 
> > With the introduction of commit 7bdbf7446305 ("bpf: add special
> > internal-only MOV instruction to resolve per-CPU addrs"),
> > a new BPF instruction BPF_MOV64_PERCPU_REG has been added to
> > resolve absolute addresses of per-CPU data from their per-CPU
> > offsets. This update requires enabling support for this
> > instruction in the powerpc JIT compiler.
> > 
> > As of commit 7a0268fa1a36 ("[PATCH] powerpc/64: per cpu data
> > optimisations"), the per-CPU data offset for the CPU is stored in
> > the paca.
> > 
> > To support this BPF instruction in the powerpc JIT, the following
> > powerpc instructions are emitted:
> > 
> > mr dst_reg, src_reg             //Move src_reg to dst_reg, if src_reg != dst_reg
> > ld tmp1_reg, 48(13)             //Load per-CPU data offset from paca(r13) in tmp1_reg.
> > add dst_reg, dst_reg, tmp1_reg  //Add the per cpu offset to the dst.
> 
> Why not do:
> 
>   add dst_reg, src_reg, tmp1_reg
> 
> instead of a combination of 'mr' and 'add' ?
> 
Will do it in v2. 
> > 
> > To evaluate the performance improvements introduced by this change,
> > the benchmark described in [1] was employed.
> > 
> > Before Change:
> > glob-arr-inc   :   41.580 ± 0.034M/s
> > arr-inc        :   39.592 ± 0.055M/s
> > hash-inc       :   25.873 ± 0.012M/s
> > 
> > After Change:
> > glob-arr-inc   :   42.024 ± 0.049M/s
> > arr-inc        :   55.447 ± 0.031M/s
> > hash-inc       :   26.565 ± 0.014M/s
> > 
> > [1] https://github.com/anakryiko/linux/commit/8dec900975ef
> > 
> > Signed-off-by: Saket Kumar Bhaskar <skb99@linux.ibm.com>
> > ---
> >   arch/powerpc/net/bpf_jit_comp.c   | 5 +++++
> >   arch/powerpc/net/bpf_jit_comp64.c | 8 ++++++++
> >   2 files changed, 13 insertions(+)
> > 
> > diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
> > index 2991bb171a9b..3d4bd45a9a22 100644
> > --- a/arch/powerpc/net/bpf_jit_comp.c
> > +++ b/arch/powerpc/net/bpf_jit_comp.c
> > @@ -440,6 +440,11 @@ bool bpf_jit_supports_far_kfunc_call(void)
> >          return IS_ENABLED(CONFIG_PPC64);
> >   }
> > 
> > +bool bpf_jit_supports_percpu_insn(void)
> > +{
> > +       return true;
> > +}
> > +
> 
> What about PPC32 ?
> 
Right now we will enable it only for PPC64, so the return statement
will be modified accordingly.
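
Something like this (sketch, mirroring bpf_jit_supports_far_kfunc_call()
above):

	bool bpf_jit_supports_percpu_insn(void)
	{
		return IS_ENABLED(CONFIG_PPC64);
	}
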
> >   void *arch_alloc_bpf_trampoline(unsigned int size)
> >   {
> >          return bpf_prog_pack_alloc(size, bpf_jit_fill_ill_insns);
> > diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
> > index 233703b06d7c..06f06770ceea 100644
> > --- a/arch/powerpc/net/bpf_jit_comp64.c
> > +++ b/arch/powerpc/net/bpf_jit_comp64.c
> > @@ -679,6 +679,14 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
> >                   */
> >                  case BPF_ALU | BPF_MOV | BPF_X: /* (u32) dst = src */
> >                  case BPF_ALU64 | BPF_MOV | BPF_X: /* dst = src */
> > +                       if (insn_is_mov_percpu_addr(&insn[i])) {
> > +                               if (dst_reg != src_reg)
> > +                                       EMIT(PPC_RAW_MR(dst_reg, src_reg));
> 
> Shouldn't be needed except for the non-SMP case maybe.
> 
Acknowledged.
> > +#ifdef CONFIG_SMP
> > +                               EMIT(PPC_RAW_LD(tmp1_reg, _R13, offsetof(struct paca_struct, data_offset)));
> > +                               EMIT(PPC_RAW_ADD(dst_reg, dst_reg, tmp1_reg));
> 
> Can use src_reg as first operand instead of dst_reg
> 
Will include this in v2.
> > +#endif
> 
> data_offset always exists in paca_struct, please use IS_ENABLED(CONFIG_SMP)
> instead of #ifdef
> 
> > +                       }
> >                          if (imm == 1) {
> >                                  /* special mov32 for zext */
> >                                  EMIT(PPC_RAW_RLWINM(dst_reg, dst_reg, 0, 0, 31));
> > --
> > 2.43.5
> > 
> 
Thanks for reviewing, Christophe.



* Re: [PATCH 2/2] powerpc, bpf: Inline bpf_get_smp_processor_id()
  2025-03-11 17:51   ` Christophe Leroy
@ 2025-04-29 17:02     ` Saket Kumar Bhaskar
  0 siblings, 0 replies; 7+ messages in thread
From: Saket Kumar Bhaskar @ 2025-04-29 17:02 UTC (permalink / raw)
  To: Christophe Leroy
  Cc: bpf, linuxppc-dev, linux-kernel, ast, hbathini, andrii, daniel,
	martin.lau, eddyz87, song, yonghong.song, john.fastabend, kpsingh,
	sdf, haoluo, jolsa, naveen, maddy, mpe, npiggin

On Tue, Mar 11, 2025 at 06:51:28PM +0100, Christophe Leroy wrote:
> 
> 
> On 11/03/2025 at 17:09, Saket Kumar Bhaskar wrote:
> > 
> > Inline the calls to bpf_get_smp_processor_id() in the powerpc bpf jit.
> > 
> > powerpc saves the Logical processor number (paca_index) in paca.
> > 
> > Here is how the powerpc JITed assembly changes after this commit:
> > 
> > Before:
> > 
> > cpu = bpf_get_smp_processor_id();
> > 
> > addis 12, 2, -517
> > addi 12, 12, -29456
> > mtctr 12
> > bctrl
> > mr      8, 3
> > 
> > After:
> > 
> > cpu = bpf_get_smp_processor_id();
> > 
> > lhz 8, 8(13)
> > 
> > To evaluate the performance improvements introduced by this change,
> > the benchmark described in [1] was employed.
> > 
> > +---------------+-------------------+-------------------+--------------+
> > |      Name     |      Before       |        After      |   % change   |
> > |---------------+-------------------+-------------------+--------------|
> > | glob-arr-inc  | 41.580 ± 0.034M/s | 54.137 ± 0.019M/s |   + 30.20%   |
> > | arr-inc       | 39.592 ± 0.055M/s | 54.000 ± 0.026M/s |   + 36.39%   |
> > | hash-inc      | 25.873 ± 0.012M/s | 26.334 ± 0.058M/s |   + 1.78%    |
> > +---------------+-------------------+-------------------+--------------+
> > 
> 
> Nice improvement.
> 
> I see that bpf_get_current_task() could be inlined as well, on PPC32 it is
> in r2, on PPC64 it is in paca.
> 
Working on inlining bpf_get_current_task() as well; it will be included in v2.
> > [1] https://github.com/anakryiko/linux/commit/8dec900975ef
> > 
> > Signed-off-by: Saket Kumar Bhaskar <skb99@linux.ibm.com>
> > ---
> >   arch/powerpc/net/bpf_jit_comp.c   | 10 ++++++++++
> >   arch/powerpc/net/bpf_jit_comp64.c |  5 +++++
> >   2 files changed, 15 insertions(+)
> > 
> > diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
> > index 3d4bd45a9a22..4b79b2d95469 100644
> > --- a/arch/powerpc/net/bpf_jit_comp.c
> > +++ b/arch/powerpc/net/bpf_jit_comp.c
> > @@ -445,6 +445,16 @@ bool bpf_jit_supports_percpu_insn(void)
> >          return true;
> >   }
> > 
> > +bool bpf_jit_inlines_helper_call(s32 imm)
> > +{
> > +       switch (imm) {
> > +       case BPF_FUNC_get_smp_processor_id:
> > +               return true;
> > +       default:
> > +               return false;
> > +       }
> > +}
> 
> What about PPC32 ?
> 
For now, v2 will enable it only for PPC64.
> 
> > +
> >   void *arch_alloc_bpf_trampoline(unsigned int size)
> >   {
> >          return bpf_prog_pack_alloc(size, bpf_jit_fill_ill_insns);
> > diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
> > index 06f06770ceea..a8de12c026da 100644
> > --- a/arch/powerpc/net/bpf_jit_comp64.c
> > +++ b/arch/powerpc/net/bpf_jit_comp64.c
> > @@ -1087,6 +1087,11 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
> >                  case BPF_JMP | BPF_CALL:
> >                          ctx->seen |= SEEN_FUNC;
> > 
> > +                       if (insn[i].src_reg == 0 && imm == BPF_FUNC_get_smp_processor_id) {
> 
> Please use BPF_REG_0 instead of just 0.
> 
Acknowledged
> > +                               EMIT(PPC_RAW_LHZ(bpf_to_ppc(BPF_REG_0), _R13, offsetof(struct paca_struct, paca_index)));
> 
> Can just use 'src_reg' instead of 'bpf_to_ppc(BPF_REG_0)'
> 
Will include this in v2.
> > +                               break;
> > +                       }
> > +
> >                          ret = bpf_jit_get_func_addr(fp, &insn[i], extra_pass,
> >                                                      &func_addr, &func_addr_fixed);
> >                          if (ret < 0)
> > --
> > 2.43.5
> > 
> 
Thanks for reviewing, Christophe.



Thread overview: 7+ messages
2025-03-11 16:09 [PATCH 0/2] bpf: Inline helper in powerpc JIT Saket Kumar Bhaskar
2025-03-11 16:09 ` [PATCH 1/2] powerpc, bpf: Support internal-only MOV instruction to resolve per-CPU addrs Saket Kumar Bhaskar
2025-03-11 17:38   ` Christophe Leroy
2025-04-29 16:59     ` Saket Kumar Bhaskar
2025-03-11 16:09 ` [PATCH 2/2] powerpc, bpf: Inline bpf_get_smp_processor_id() Saket Kumar Bhaskar
2025-03-11 17:51   ` Christophe Leroy
2025-04-29 17:02     ` Saket Kumar Bhaskar
