* [PATCH 1/5] powerpc64/bpf: jit support for 32bit offset jmp instruction
2024-05-17 7:56 [PATCH 0/5] powerpc64/bpf: jit support for cpuv4 instructions Artem Savkov
@ 2024-05-17 7:56 ` Artem Savkov
2024-05-17 7:56 ` [PATCH 2/5] powerpc64/bpf: jit support for unconditional byte swap Artem Savkov
` (4 subsequent siblings)
5 siblings, 0 replies; 8+ messages in thread
From: Artem Savkov @ 2024-05-17 7:56 UTC (permalink / raw)
To: Michael Ellerman, Nicholas Piggin, Christophe Leroy,
Naveen N. Rao, linuxppc-dev
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf, netdev,
linux-kernel, Artem Savkov
Add jit support for JMP32_JA instruction. Tested using test_bpf module.
Signed-off-by: Artem Savkov <asavkov@redhat.com>
---
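Notes (not part of the commit message, for context only): cpuv4's
BPF_JMP32 | BPF_JA ("gotol") carries its branch offset in the 32-bit imm
field rather than the 16-bit off field, which is why the new case computes
the target as addrs[i + 1 + imm]. A rough sketch of such an instruction as
the JIT sees it (field values made up for illustration; struct bpf_insn and
the BPF_* macros come from include/uapi/linux/bpf.h):

        /* long jump, roughly "goto +70000 instructions" */
        struct bpf_insn insn = {
                .code = BPF_JMP32 | BPF_JA,
                .imm  = 70000,  /* target index = i + 1 + imm */
        };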
arch/powerpc/net/bpf_jit_comp64.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 8afc14a4a1258..3071205782b15 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -1053,6 +1053,9 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
case BPF_JMP | BPF_JA:
PPC_JMP(addrs[i + 1 + off]);
break;
+ case BPF_JMP32 | BPF_JA:
+ PPC_JMP(addrs[i + 1 + imm]);
+ break;
case BPF_JMP | BPF_JGT | BPF_K:
case BPF_JMP | BPF_JGT | BPF_X:
--
2.45.0
* [PATCH 2/5] powerpc64/bpf: jit support for unconditional byte swap
2024-05-17 7:56 [PATCH 0/5] powerpc64/bpf: jit support for cpuv4 instructions Artem Savkov
2024-05-17 7:56 ` [PATCH 1/5] powerpc64/bpf: jit support for 32bit offset jmp instruction Artem Savkov
@ 2024-05-17 7:56 ` Artem Savkov
2024-05-22 11:37 ` Hari Bathini
2024-05-17 7:56 ` [PATCH 3/5] powerpc64/bpf: jit support for sign extended load Artem Savkov
` (3 subsequent siblings)
5 siblings, 1 reply; 8+ messages in thread
From: Artem Savkov @ 2024-05-17 7:56 UTC (permalink / raw)
To: Michael Ellerman, Nicholas Piggin, Christophe Leroy,
Naveen N. Rao, linuxppc-dev
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf, netdev,
linux-kernel, Artem Savkov
Add jit support for unconditional byte swap. Tested using BSWAP tests
from test_bpf module.
Signed-off-by: Artem Savkov <asavkov@redhat.com>
---
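Notes (not part of the commit message, for context only): cpuv4 reuses the
BPF_END encoding with class BPF_ALU64 to mean "swap unconditionally",
independent of host endianness, with imm selecting 16, 32 or 64 bits. A
rough sketch (values made up for illustration; macros from
include/uapi/linux/bpf.h):

        /* bswap32: dst = swab32(dst) on both LE and BE hosts */
        struct bpf_insn insn = {
                .code    = BPF_ALU64 | BPF_END | BPF_FROM_LE,
                .dst_reg = BPF_REG_1,
                .imm     = 32,
        };

This is also why the little-endian emit_clear shortcut now additionally
checks BPF_CLASS(code) == BPF_ALU: only the legacy endianness-conditional
form may be reduced to a zero-extension or a no-op.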
arch/powerpc/net/bpf_jit_comp64.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 3071205782b15..97191cf091bbf 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -699,11 +699,12 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
*/
case BPF_ALU | BPF_END | BPF_FROM_LE:
case BPF_ALU | BPF_END | BPF_FROM_BE:
+ case BPF_ALU64 | BPF_END | BPF_FROM_LE:
#ifdef __BIG_ENDIAN__
if (BPF_SRC(code) == BPF_FROM_BE)
goto emit_clear;
#else /* !__BIG_ENDIAN__ */
- if (BPF_SRC(code) == BPF_FROM_LE)
+ if (BPF_CLASS(code) == BPF_ALU && BPF_SRC(code) == BPF_FROM_LE)
goto emit_clear;
#endif
switch (imm) {
--
2.45.0
* Re: [PATCH 2/5] powerpc64/bpf: jit support for unconditional byte swap
2024-05-17 7:56 ` [PATCH 2/5] powerpc64/bpf: jit support for unconditional byte swap Artem Savkov
@ 2024-05-22 11:37 ` Hari Bathini
0 siblings, 0 replies; 8+ messages in thread
From: Hari Bathini @ 2024-05-22 11:37 UTC (permalink / raw)
To: Artem Savkov, Michael Ellerman, Nicholas Piggin, Christophe Leroy,
Naveen N. Rao, linuxppc-dev
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf, netdev,
linux-kernel
On 17/05/24 1:26 pm, Artem Savkov wrote:
> Add jit support for unconditional byte swap. Tested using BSWAP tests
> from test_bpf module.
>
> Signed-off-by: Artem Savkov <asavkov@redhat.com>
> ---
> arch/powerpc/net/bpf_jit_comp64.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
> index 3071205782b15..97191cf091bbf 100644
> --- a/arch/powerpc/net/bpf_jit_comp64.c
> +++ b/arch/powerpc/net/bpf_jit_comp64.c
> @@ -699,11 +699,12 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
> */
> case BPF_ALU | BPF_END | BPF_FROM_LE:
> case BPF_ALU | BPF_END | BPF_FROM_BE:
> + case BPF_ALU64 | BPF_END | BPF_FROM_LE:
A comment here indicating that this case performs an unconditional swap
could improve readability.
Other than this minor nit, the patchset looks good to me.
Also, tested the changes with test_bpf module and selftests.
For the series:
Reviewed-by: Hari Bathini <hbathini@linux.ibm.com>
> #ifdef __BIG_ENDIAN__
> if (BPF_SRC(code) == BPF_FROM_BE)
> goto emit_clear;
> #else /* !__BIG_ENDIAN__ */
> - if (BPF_SRC(code) == BPF_FROM_LE)
> + if (BPF_CLASS(code) == BPF_ALU && BPF_SRC(code) == BPF_FROM_LE)
> goto emit_clear;
> #endif
> switch (imm) {
* [PATCH 3/5] powerpc64/bpf: jit support for sign extended load
2024-05-17 7:56 [PATCH 0/5] powerpc64/bpf: jit support for cpuv4 instructions Artem Savkov
2024-05-17 7:56 ` [PATCH 1/5] powerpc64/bpf: jit support for 32bit offset jmp instruction Artem Savkov
2024-05-17 7:56 ` [PATCH 2/5] powerpc64/bpf: jit support for unconditional byte swap Artem Savkov
@ 2024-05-17 7:56 ` Artem Savkov
2024-05-17 7:56 ` [PATCH 4/5] powerpc64/bpf: jit support for sign extended mov Artem Savkov
` (2 subsequent siblings)
5 siblings, 0 replies; 8+ messages in thread
From: Artem Savkov @ 2024-05-17 7:56 UTC (permalink / raw)
To: Michael Ellerman, Nicholas Piggin, Christophe Leroy,
Naveen N. Rao, linuxppc-dev
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf, netdev,
linux-kernel, Artem Savkov
Add jit support for sign extended load. Tested using test_bpf module.
Signed-off-by: Artem Savkov <asavkov@redhat.com>
---
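Notes (not part of the commit message, for context only): BPF_MEMSX loads
sign-extend the loaded value into the 64-bit destination register. In C
terms, with s64 dst and void *src (a rough illustration; the ppc64 mappings
are the ones emitted by the patch below):

        dst = *(s8  *)(src + off);      /* BPF_LDX | BPF_MEMSX | BPF_B: lbz + extsb */
        dst = *(s16 *)(src + off);      /* BPF_LDX | BPF_MEMSX | BPF_H: lha */
        dst = *(s32 *)(src + off);      /* BPF_LDX | BPF_MEMSX | BPF_W: lwa */

The BPF_PROBE_MEMSX variants emit the same loads, guarded by the existing
kernel-address check.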
arch/powerpc/include/asm/ppc-opcode.h | 1 +
arch/powerpc/net/bpf_jit_comp64.c | 61 ++++++++++++++++++---------
2 files changed, 43 insertions(+), 19 deletions(-)
diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
index 076ae60b4a55d..76cc9a2d82065 100644
--- a/arch/powerpc/include/asm/ppc-opcode.h
+++ b/arch/powerpc/include/asm/ppc-opcode.h
@@ -471,6 +471,7 @@
#define PPC_RAW_VCMPEQUB_RC(vrt, vra, vrb) \
(0x10000006 | ___PPC_RT(vrt) | ___PPC_RA(vra) | ___PPC_RB(vrb) | __PPC_RC21)
#define PPC_RAW_LD(r, base, i) (0xe8000000 | ___PPC_RT(r) | ___PPC_RA(base) | IMM_DS(i))
+#define PPC_RAW_LWA(r, base, i) (0xe8000002 | ___PPC_RT(r) | ___PPC_RA(base) | IMM_DS(i))
#define PPC_RAW_LWZ(r, base, i) (0x80000000 | ___PPC_RT(r) | ___PPC_RA(base) | IMM_L(i))
#define PPC_RAW_LWZX(t, a, b) (0x7c00002e | ___PPC_RT(t) | ___PPC_RA(a) | ___PPC_RB(b))
#define PPC_RAW_STD(r, base, i) (0xf8000000 | ___PPC_RS(r) | ___PPC_RA(base) | IMM_DS(i))
diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 97191cf091bbf..b9f47398b311d 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -925,13 +925,19 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
*/
/* dst = *(u8 *)(ul) (src + off) */
case BPF_LDX | BPF_MEM | BPF_B:
+ case BPF_LDX | BPF_MEMSX | BPF_B:
case BPF_LDX | BPF_PROBE_MEM | BPF_B:
+ case BPF_LDX | BPF_PROBE_MEMSX | BPF_B:
/* dst = *(u16 *)(ul) (src + off) */
case BPF_LDX | BPF_MEM | BPF_H:
+ case BPF_LDX | BPF_MEMSX | BPF_H:
case BPF_LDX | BPF_PROBE_MEM | BPF_H:
+ case BPF_LDX | BPF_PROBE_MEMSX | BPF_H:
/* dst = *(u32 *)(ul) (src + off) */
case BPF_LDX | BPF_MEM | BPF_W:
+ case BPF_LDX | BPF_MEMSX | BPF_W:
case BPF_LDX | BPF_PROBE_MEM | BPF_W:
+ case BPF_LDX | BPF_PROBE_MEMSX | BPF_W:
/* dst = *(u64 *)(ul) (src + off) */
case BPF_LDX | BPF_MEM | BPF_DW:
case BPF_LDX | BPF_PROBE_MEM | BPF_DW:
@@ -941,7 +947,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
* load only if addr is kernel address (see is_kernel_addr()), otherwise
* set dst_reg=0 and move on.
*/
- if (BPF_MODE(code) == BPF_PROBE_MEM) {
+ if (BPF_MODE(code) == BPF_PROBE_MEM || BPF_MODE(code) == BPF_PROBE_MEMSX) {
EMIT(PPC_RAW_ADDI(tmp1_reg, src_reg, off));
if (IS_ENABLED(CONFIG_PPC_BOOK3E_64))
PPC_LI64(tmp2_reg, 0x8000000000000000ul);
@@ -954,30 +960,47 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
* Check if 'off' is word aligned for BPF_DW, because
* we might generate two instructions.
*/
- if (BPF_SIZE(code) == BPF_DW && (off & 3))
+ if ((BPF_SIZE(code) == BPF_DW ||
+ (BPF_SIZE(code) == BPF_B && BPF_MODE(code) == BPF_PROBE_MEMSX)) &&
+ (off & 3))
PPC_JMP((ctx->idx + 3) * 4);
else
PPC_JMP((ctx->idx + 2) * 4);
}
- switch (size) {
- case BPF_B:
- EMIT(PPC_RAW_LBZ(dst_reg, src_reg, off));
- break;
- case BPF_H:
- EMIT(PPC_RAW_LHZ(dst_reg, src_reg, off));
- break;
- case BPF_W:
- EMIT(PPC_RAW_LWZ(dst_reg, src_reg, off));
- break;
- case BPF_DW:
- if (off % 4) {
- EMIT(PPC_RAW_LI(tmp1_reg, off));
- EMIT(PPC_RAW_LDX(dst_reg, src_reg, tmp1_reg));
- } else {
- EMIT(PPC_RAW_LD(dst_reg, src_reg, off));
+ if (BPF_MODE(code) == BPF_MEMSX || BPF_MODE(code) == BPF_PROBE_MEMSX) {
+ switch (size) {
+ case BPF_B:
+ EMIT(PPC_RAW_LBZ(dst_reg, src_reg, off));
+ EMIT(PPC_RAW_EXTSB(dst_reg, dst_reg));
+ break;
+ case BPF_H:
+ EMIT(PPC_RAW_LHA(dst_reg, src_reg, off));
+ break;
+ case BPF_W:
+ EMIT(PPC_RAW_LWA(dst_reg, src_reg, off));
+ break;
+ }
+ } else {
+ switch (size) {
+ case BPF_B:
+ EMIT(PPC_RAW_LBZ(dst_reg, src_reg, off));
+ break;
+ case BPF_H:
+ EMIT(PPC_RAW_LHZ(dst_reg, src_reg, off));
+ break;
+ case BPF_W:
+ EMIT(PPC_RAW_LWZ(dst_reg, src_reg, off));
+ break;
+ case BPF_DW:
+ if (off % 4) {
+ EMIT(PPC_RAW_LI(tmp1_reg, off));
+ EMIT(PPC_RAW_LDX(dst_reg, src_reg, tmp1_reg));
+ } else {
+ EMIT(PPC_RAW_LD(dst_reg, src_reg, off));
+ }
+ break;
}
- break;
}
if (size != BPF_DW && insn_is_zext(&insn[i + 1]))
--
2.45.0
* [PATCH 4/5] powerpc64/bpf: jit support for sign extended mov
2024-05-17 7:56 [PATCH 0/5] powerpc64/bpf: jit support for cpuv4 instructions Artem Savkov
` (2 preceding siblings ...)
2024-05-17 7:56 ` [PATCH 3/5] powerpc64/bpf: jit support for sign extended load Artem Savkov
@ 2024-05-17 7:56 ` Artem Savkov
2024-05-17 7:56 ` [PATCH 5/5] powerpc64/bpf: jit support for signed division and modulo Artem Savkov
2024-07-12 12:53 ` [PATCH 0/5] powerpc64/bpf: jit support for cpuv4 instructions Michael Ellerman
5 siblings, 0 replies; 8+ messages in thread
From: Artem Savkov @ 2024-05-17 7:56 UTC (permalink / raw)
To: Michael Ellerman, Nicholas Piggin, Christophe Leroy,
Naveen N. Rao, linuxppc-dev
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf, netdev,
linux-kernel, Artem Savkov
Add jit support for sign extended mov. Tested using test_bpf module.
Signed-off-by: Artem Savkov <asavkov@redhat.com>
---
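Notes (not part of the commit message, for context only): cpuv4 encodes the
sign-extending mov by placing the source width in insn->off, which the new
branches map onto the ppc64 extend instructions. Roughly, with s64 dst and
src (illustration only):

        dst = (s8)  src;        /* off == 8  -> extsb */
        dst = (s16) src;        /* off == 16 -> extsh */
        dst = (s32) src;        /* off == 32 -> extsw */

For the 32-bit BPF_ALU form the result is then truncated again by the
existing bpf_alu32_trunc path.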
arch/powerpc/net/bpf_jit_comp64.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index b9f47398b311d..811775cfd3a1b 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -676,8 +676,14 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
/* special mov32 for zext */
EMIT(PPC_RAW_RLWINM(dst_reg, dst_reg, 0, 0, 31));
break;
- }
- EMIT(PPC_RAW_MR(dst_reg, src_reg));
+ } else if (off == 8) {
+ EMIT(PPC_RAW_EXTSB(dst_reg, src_reg));
+ } else if (off == 16) {
+ EMIT(PPC_RAW_EXTSH(dst_reg, src_reg));
+ } else if (off == 32) {
+ EMIT(PPC_RAW_EXTSW(dst_reg, src_reg));
+ } else if (dst_reg != src_reg)
+ EMIT(PPC_RAW_MR(dst_reg, src_reg));
goto bpf_alu32_trunc;
case BPF_ALU | BPF_MOV | BPF_K: /* (u32) dst = imm */
case BPF_ALU64 | BPF_MOV | BPF_K: /* dst = (s64) imm */
--
2.45.0
* [PATCH 5/5] powerpc64/bpf: jit support for signed division and modulo
2024-05-17 7:56 [PATCH 0/5] powerpc64/bpf: jit support for cpuv4 instructions Artem Savkov
` (3 preceding siblings ...)
2024-05-17 7:56 ` [PATCH 4/5] powerpc64/bpf: jit support for sign extended mov Artem Savkov
@ 2024-05-17 7:56 ` Artem Savkov
2024-07-12 12:53 ` [PATCH 0/5] powerpc64/bpf: jit support for cpuv4 instructions Michael Ellerman
5 siblings, 0 replies; 8+ messages in thread
From: Artem Savkov @ 2024-05-17 7:56 UTC (permalink / raw)
To: Michael Ellerman, Nicholas Piggin, Christophe Leroy,
Naveen N. Rao, linuxppc-dev
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf, netdev,
linux-kernel, Artem Savkov
Add jit support for signed division and modulo. Tested using test_bpf
module.
Signed-off-by: Artem Savkov <asavkov@redhat.com>
---
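Notes (not part of the commit message, for context only): cpuv4 marks the
signed variants of the existing DIV/MOD opcodes with insn->off == 1
(off == 0 keeps the unsigned behaviour), which is what the new off checks
key on. Roughly:

        /* BPF_ALU64 | BPF_DIV | BPF_X, off == 1: dst = (s64)dst / (s64)src -> divd */
        /* BPF_ALU64 | BPF_MOD | BPF_X, off == 1: dst = (s64)dst % (s64)src -> divd, mulld, sub */
        /* BPF_ALU   | BPF_DIV | BPF_X, off == 1: 32-bit signed divide      -> divw */

The modulo result keeps C's truncated-division semantics because it is
computed as dst - (dst / src) * src.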
arch/powerpc/include/asm/ppc-opcode.h | 1 +
arch/powerpc/net/bpf_jit_comp64.c | 41 +++++++++++++++++++++------
2 files changed, 34 insertions(+), 8 deletions(-)
diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
index 76cc9a2d82065..b98a9e982c03b 100644
--- a/arch/powerpc/include/asm/ppc-opcode.h
+++ b/arch/powerpc/include/asm/ppc-opcode.h
@@ -536,6 +536,7 @@
#define PPC_RAW_MULI(d, a, i) (0x1c000000 | ___PPC_RT(d) | ___PPC_RA(a) | IMM_L(i))
#define PPC_RAW_DIVW(d, a, b) (0x7c0003d6 | ___PPC_RT(d) | ___PPC_RA(a) | ___PPC_RB(b))
#define PPC_RAW_DIVWU(d, a, b) (0x7c000396 | ___PPC_RT(d) | ___PPC_RA(a) | ___PPC_RB(b))
+#define PPC_RAW_DIVD(d, a, b) (0x7c0003d2 | ___PPC_RT(d) | ___PPC_RA(a) | ___PPC_RB(b))
#define PPC_RAW_DIVDU(d, a, b) (0x7c000392 | ___PPC_RT(d) | ___PPC_RA(a) | ___PPC_RB(b))
#define PPC_RAW_DIVDE(t, a, b) (0x7c000352 | ___PPC_RT(t) | ___PPC_RA(a) | ___PPC_RB(b))
#define PPC_RAW_DIVDE_DOT(t, a, b) (0x7c000352 | ___PPC_RT(t) | ___PPC_RA(a) | ___PPC_RB(b) | 0x1)
diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 811775cfd3a1b..1f5f93926e424 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -510,20 +510,33 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
case BPF_ALU | BPF_DIV | BPF_X: /* (u32) dst /= (u32) src */
case BPF_ALU | BPF_MOD | BPF_X: /* (u32) dst %= (u32) src */
if (BPF_OP(code) == BPF_MOD) {
- EMIT(PPC_RAW_DIVWU(tmp1_reg, dst_reg, src_reg));
+ if (off)
+ EMIT(PPC_RAW_DIVW(tmp1_reg, dst_reg, src_reg));
+ else
+ EMIT(PPC_RAW_DIVWU(tmp1_reg, dst_reg, src_reg));
+
EMIT(PPC_RAW_MULW(tmp1_reg, src_reg, tmp1_reg));
EMIT(PPC_RAW_SUB(dst_reg, dst_reg, tmp1_reg));
} else
- EMIT(PPC_RAW_DIVWU(dst_reg, dst_reg, src_reg));
+ if (off)
+ EMIT(PPC_RAW_DIVW(dst_reg, dst_reg, src_reg));
+ else
+ EMIT(PPC_RAW_DIVWU(dst_reg, dst_reg, src_reg));
goto bpf_alu32_trunc;
case BPF_ALU64 | BPF_DIV | BPF_X: /* dst /= src */
case BPF_ALU64 | BPF_MOD | BPF_X: /* dst %= src */
if (BPF_OP(code) == BPF_MOD) {
- EMIT(PPC_RAW_DIVDU(tmp1_reg, dst_reg, src_reg));
+ if (off)
+ EMIT(PPC_RAW_DIVD(tmp1_reg, dst_reg, src_reg));
+ else
+ EMIT(PPC_RAW_DIVDU(tmp1_reg, dst_reg, src_reg));
EMIT(PPC_RAW_MULD(tmp1_reg, src_reg, tmp1_reg));
EMIT(PPC_RAW_SUB(dst_reg, dst_reg, tmp1_reg));
} else
- EMIT(PPC_RAW_DIVDU(dst_reg, dst_reg, src_reg));
+ if (off)
+ EMIT(PPC_RAW_DIVD(dst_reg, dst_reg, src_reg));
+ else
+ EMIT(PPC_RAW_DIVDU(dst_reg, dst_reg, src_reg));
break;
case BPF_ALU | BPF_MOD | BPF_K: /* (u32) dst %= (u32) imm */
case BPF_ALU | BPF_DIV | BPF_K: /* (u32) dst /= (u32) imm */
@@ -544,19 +557,31 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
switch (BPF_CLASS(code)) {
case BPF_ALU:
if (BPF_OP(code) == BPF_MOD) {
- EMIT(PPC_RAW_DIVWU(tmp2_reg, dst_reg, tmp1_reg));
+ if (off)
+ EMIT(PPC_RAW_DIVW(tmp2_reg, dst_reg, tmp1_reg));
+ else
+ EMIT(PPC_RAW_DIVWU(tmp2_reg, dst_reg, tmp1_reg));
EMIT(PPC_RAW_MULW(tmp1_reg, tmp1_reg, tmp2_reg));
EMIT(PPC_RAW_SUB(dst_reg, dst_reg, tmp1_reg));
} else
- EMIT(PPC_RAW_DIVWU(dst_reg, dst_reg, tmp1_reg));
+ if (off)
+ EMIT(PPC_RAW_DIVW(dst_reg, dst_reg, tmp1_reg));
+ else
+ EMIT(PPC_RAW_DIVWU(dst_reg, dst_reg, tmp1_reg));
break;
case BPF_ALU64:
if (BPF_OP(code) == BPF_MOD) {
- EMIT(PPC_RAW_DIVDU(tmp2_reg, dst_reg, tmp1_reg));
+ if (off)
+ EMIT(PPC_RAW_DIVD(tmp2_reg, dst_reg, tmp1_reg));
+ else
+ EMIT(PPC_RAW_DIVDU(tmp2_reg, dst_reg, tmp1_reg));
EMIT(PPC_RAW_MULD(tmp1_reg, tmp1_reg, tmp2_reg));
EMIT(PPC_RAW_SUB(dst_reg, dst_reg, tmp1_reg));
} else
- EMIT(PPC_RAW_DIVDU(dst_reg, dst_reg, tmp1_reg));
+ if (off)
+ EMIT(PPC_RAW_DIVD(dst_reg, dst_reg, tmp1_reg));
+ else
+ EMIT(PPC_RAW_DIVDU(dst_reg, dst_reg, tmp1_reg));
break;
}
goto bpf_alu32_trunc;
--
2.45.0
* Re: [PATCH 0/5] powerpc64/bpf: jit support for cpuv4 instructions
2024-05-17 7:56 [PATCH 0/5] powerpc64/bpf: jit support for cpuv4 instructions Artem Savkov
` (4 preceding siblings ...)
2024-05-17 7:56 ` [PATCH 5/5] powerpc64/bpf: jit support for signed division and modulo Artem Savkov
@ 2024-07-12 12:53 ` Michael Ellerman
5 siblings, 0 replies; 8+ messages in thread
From: Michael Ellerman @ 2024-07-12 12:53 UTC (permalink / raw)
To: Michael Ellerman, Nicholas Piggin, Christophe Leroy,
Naveen N. Rao, linuxppc-dev, Artem Savkov
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf, netdev,
linux-kernel
On Fri, 17 May 2024 09:56:45 +0200, Artem Savkov wrote:
> Add support for recently added cpuv4 instructions fixing test_bpf module
> failures. This is mostly based on 8ecf3c1dab1c6 (powerpc/bpf/32: Fix
> failing test_bpf tests, 2024-03-05)
>
> Artem Savkov (5):
> powerpc64/bpf: jit support for 32bit offset jmp instruction
> powerpc64/bpf: jit support for unconditional byte swap
> powerpc64/bpf: jit support for sign extended load
> powerpc64/bpf: jit support for sign extended mov
> powerpc64/bpf: jit support for signed division and modulo
>
> [...]
Applied to powerpc/next.
[1/5] powerpc64/bpf: jit support for 32bit offset jmp instruction
https://git.kernel.org/powerpc/c/3c086ce222cefcf16d412faa10d456161d076796
[2/5] powerpc64/bpf: jit support for unconditional byte swap
https://git.kernel.org/powerpc/c/a71c0b09a14db72d59c48a8cda7a73032f4d418b
[3/5] powerpc64/bpf: jit support for sign extended load
https://git.kernel.org/powerpc/c/717756c9c8ddad9f28389185bfb161d4d88e01a4
[4/5] powerpc64/bpf: jit support for sign extended mov
https://git.kernel.org/powerpc/c/597b1710982d10b8629697e4a548b30d0d93eeed
[5/5] powerpc64/bpf: jit support for signed division and modulo
https://git.kernel.org/powerpc/c/fde318326daa48a4bb3ca8ee229bac4d14b5bc2a
cheers