* Re: [PATCH v2 2/2] selftests/bpf: Enable private stack tests for powerpc64
2026-02-25 15:39 ` [PATCH v2 2/2] selftests/bpf: Enable private stack tests for powerpc64 adubey
@ 2026-02-25 11:12 ` bot+bpf-ci
0 siblings, 0 replies; 5+ messages in thread
From: bot+bpf-ci @ 2026-02-25 11:12 UTC (permalink / raw)
To: adubey, linuxppc-dev
Cc: bpf, hbathini, ast, daniel, andrii, maddy, adubey, ast, andrii,
daniel, martin.lau, eddyz87, yonghong.song, clm, ihor.solodrai
[-- Attachment #1: Type: text/plain, Size: 1934 bytes --]
> diff --git a/tools/testing/selftests/bpf/progs/struct_ops_private_stack.c b/tools/testing/selftests/bpf/progs/struct_ops_private_stack.c
> --- a/tools/testing/selftests/bpf/progs/struct_ops_private_stack.c
> +++ b/tools/testing/selftests/bpf/progs/struct_ops_private_stack.c
> @@ -7,7 +7,8 @@
>
> char _license[] SEC("license") = "GPL";
>
> -#if defined(__TARGET_ARCH_x86) || defined(__TARGET_ARCH_arm64)
> +#if defined(__TARGET_ARCH_x86) || defined(__TARGET_ARCH_arm64) \
> + || defined(__TARGET_ARCH_powerpc)
As Christophe Leroy noted in the v1 review, __TARGET_ARCH_powerpc
matches both powerpc32 and powerpc64.
bpf_jit_supports_private_stack() in bpf_jit_comp.c returns
IS_ENABLED(CONFIG_PPC64), so private stack is only available on
powerpc64. On powerpc32, skip would be false, so the tests would
attempt to load programs with combined stack depths exceeding
512 bytes, and the verifier would reject them.
Was this intended to be addressed in a follow-up, or should the
guard be narrowed to powerpc64 only?
https://lore.kernel.org/bpf/1956c912-c880-4214-9bc3-e205902459b5@kernel.org/
> [ ... ]
The same change appears in struct_ops_private_stack_fail.c and
struct_ops_private_stack_recur.c with the same concern.
---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/22393518231
AI-authorship-score: low
AI-authorship-explanation: Straightforward mechanical change with copy-pasted test output, typical of a human developer enabling tests for a newly supported architecture.
issues-found: 1
issue-severity-score: low
issue-severity-explanation: Unaddressed review comment from the PowerPC BPF JIT co-maintainer about __TARGET_ARCH_powerpc matching both PPC32 and PPC64, causing test failures on powerpc32 where private stack is not supported.
^ permalink raw reply [flat|nested] 5+ messages in thread
* Re: [PATCH v2 1/2] powerpc64/bpf: Implement JIT support for private stack
2026-02-25 15:39 [PATCH v2 1/2] powerpc64/bpf: Implement JIT support for private stack adubey
@ 2026-02-25 11:32 ` bot+bpf-ci
2026-02-25 15:39 ` [PATCH v2 2/2] selftests/bpf: Enable private stack tests for powerpc64 adubey
2026-02-25 17:13 ` [PATCH v2 1/2] powerpc64/bpf: Implement JIT support for private stack kernel test robot
2 siblings, 0 replies; 5+ messages in thread
From: bot+bpf-ci @ 2026-02-25 11:32 UTC (permalink / raw)
To: adubey, linuxppc-dev
Cc: bpf, hbathini, ast, daniel, andrii, maddy, adubey, ast, andrii,
daniel, martin.lau, eddyz87, yonghong.song, clm, ihor.solodrai
[-- Attachment #1: Type: text/plain, Size: 4494 bytes --]
> diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
> --- a/arch/powerpc/net/bpf_jit_comp.c
> +++ b/arch/powerpc/net/bpf_jit_comp.c
[ ... ]
> +static void priv_stack_check_guard(void __percpu *priv_stack_ptr, int alloc_size,
> + struct bpf_prog *fp)
> +{
> + int cpu, underflow_idx = (alloc_size - PRIV_STACK_GUARD_SZ) >> 3;
> + u64 *stack_ptr;
> +
> + for_each_possible_cpu(cpu) {
> + stack_ptr = per_cpu_ptr(priv_stack_ptr, cpu);
> + if (stack_ptr[0] != PRIV_STACK_GUARD_VAL ||
> + stack_ptr[1] != PRIV_STACK_GUARD_VAL ||
> + stack_ptr[underflow_idx] != PRIV_STACK_GUARD_VAL ||
> + stack_ptr[underflow_idx + 1] != PRIV_STACK_GUARD_VAL) {
> + pr_err("BPF private stack overflow/underflow detected for prog %sx\n",
> + bpf_jit_get_prog_name(fp));
Should "%sx" be "%s"? The trailing 'x' would be appended to the
program name in the error output.
> + break;
> + }
> + }
> +}
[ ... ]
> + priv_stack_ptr = fp->aux->priv_stack_ptr;
> + if (!priv_stack_ptr && fp->aux->jits_use_priv_stack) {
[ ... ]
> + priv_stack_ptr = __alloc_percpu_gfp(priv_stack_alloc_size, 16, GFP_KERNEL);
> + if (!priv_stack_ptr) {
> + fp = org_fp;
> + goto out_priv_stack;
> + }
[ ... ]
> out_addrs:
> + if (!image && priv_stack_ptr) {
> + fp->aux->priv_stack_ptr = NULL;
> + free_percpu(priv_stack_ptr);
> + }
> +out_priv_stack:
> kfree(addrs);
When __alloc_percpu_gfp() fails above, goto out_priv_stack jumps
here. At that point, addrs is declared as:
u32 *addrs;
without initialization and has not yet been assigned (the assignment
addrs = jit_data->addrs comes after the goto). This passes a garbage
pointer to kfree().
The x86 JIT has kvfree(addrs) at the out_addrs label (before
out_priv_stack), so the goto to out_priv_stack skips it. Should the
label ordering here be adjusted similarly, or should addrs be
initialized to NULL?
> diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
> --- a/arch/powerpc/net/bpf_jit_comp64.c
> +++ b/arch/powerpc/net/bpf_jit_comp64.c
[ ... ]
> EMIT(PPC_RAW_STDU(_R1, _R1,
> - -(bpf_jit_stack_size(ctx) + ctx->stack_size)));
> + -(bpf_jit_stack_size(ctx) + (ctx->priv_sp ? 0 : ctx->stack_size))));
> }
The STDU frame size is now bpf_jit_stack_size(ctx) when priv_sp is
set, excluding ctx->stack_size. However, bpf_jit_stack_offsetof()
still computes NVR save offsets including ctx->stack_size:
bpf_jit_stack_offsetof() {
	...
	return (bpf_has_stack_frame(ctx) ?
		(frame_nvr_size + ctx->stack_size) : 0)
		- (8 * (32 - reg)) - BPF_PPC_TAILCALL;
}
Similarly, bpf_jit_stack_local() returns
STACK_FRAME_MIN_SIZE + ctx->stack_size.
Since the STDU frame no longer reserves space for ctx->stack_size
when using private stack, these offsets point ctx->stack_size bytes
beyond the allocated frame. This would cause NVR saves in the
prologue and restores in the epilogue to write past the frame
boundary into the caller's stack.
The x86 JIT handles this by zeroing stack_depth in do_jit() when
private stack is active, which keeps all offset calculations
consistent. Would a similar approach work here, either zeroing
ctx->stack_size after setting up priv_sp, or updating the offset
functions to account for it?
[ ... ]
> /* Tear down our stack frame */
> if (bpf_has_stack_frame(ctx)) {
> - EMIT(PPC_RAW_ADDI(_R1, _R1, bpf_jit_stack_size(ctx) + ctx->stack_size));
> + EMIT(PPC_RAW_ADDI(_R1, _R1,
> + bpf_jit_stack_size(ctx) + (ctx->priv_sp ? 0 : ctx->stack_size)));
Same issue here -- the ADDI restore is correct, but the NVR restores
emitted by bpf_jit_emit_common_epilogue() use
bpf_jit_stack_offsetof(), which still includes ctx->stack_size in
the offset and would load from the wrong location.
---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/22393518231
AI-authorship-score: low
AI-authorship-explanation: The code shows typical human porting errors and organic review iteration across v1/v2, inconsistent with AI-generated code patterns.
issues-found: 3
issue-severity-score: high
issue-severity-explanation: Stack corruption from mismatched NVR save offsets affects every PPC64 BPF program using private stack, and kfree of an uninitialized pointer can crash or corrupt the slab allocator under memory pressure.
^ permalink raw reply [flat|nested] 5+ messages in thread
* [PATCH v2 1/2] powerpc64/bpf: Implement JIT support for private stack
@ 2026-02-25 15:39 adubey
2026-02-25 11:32 ` bot+bpf-ci
` (2 more replies)
0 siblings, 3 replies; 5+ messages in thread
From: adubey @ 2026-02-25 15:39 UTC (permalink / raw)
To: linuxppc-dev; +Cc: bpf, hbathini, ast, daniel, andrii, maddy, Abhishek Dubey
From: Abhishek Dubey <adubey@linux.ibm.com>
Provision the private stack as a per-CPU allocation during
bpf_int_jit_compile(). Align the stack to 16 bytes and place guard
regions at both ends to detect runtime stack overflow and underflow.
Round the private stack size up to the nearest 16-byte boundary.
Make each guard region 16 bytes to preserve the required overall
16-byte alignment. When the private stack is in use, skip BPF stack
size accounting on the kernel stack.
BPF has no stack pointer register; stack references in the JITed
code go through the frame pointer (BPF_REG_FP), calculated as:
BPF frame pointer = Priv stack allocation start address +
                    Overflow guard +
                    Actual stack size defined by verifier
Memory layout:

High Addr          +--------------------------------------------------+
                   |                                                  |
                   | 16 bytes Underflow guard (0xEB9F12345678eb9fULL) |
                   |                                                  |
BPF FP ->          +--------------------------------------------------+
                   |                                                  |
                   |      Private stack - determined by verifier      |
                   |                 16-bytes aligned                 |
                   |                                                  |
                   +--------------------------------------------------+
                   |                                                  |
Lower Addr         | 16 byte Overflow guard (0xEB9F12345678eb9fULL)   |
                   |                                                  |
Priv stack alloc ->+--------------------------------------------------+
start
Update BPF_REG_FP to point to the calculated offset within the
allocated private stack buffer. BPF stack accesses now reference
the allocated private stack.
This patch is rebased on top of fixes by Hari:
https://lore.kernel.org/bpf/20260220063933.196141-1-hbathini@linux.ibm.com/
v1->v2:
Fix ci-bot warning for percpu pointer casting
Minor refactoring
[v1]: https://lore.kernel.org/bpf/20260216152234.36632-1-adubey@linux.ibm.com
Signed-off-by: Abhishek Dubey <adubey@linux.ibm.com>
---
arch/powerpc/net/bpf_jit.h | 5 +++
arch/powerpc/net/bpf_jit_comp.c | 75 +++++++++++++++++++++++++++++++
arch/powerpc/net/bpf_jit_comp64.c | 34 +++++++++++---
3 files changed, 109 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
index 7354e1d72f79..eb0a400b5a98 100644
--- a/arch/powerpc/net/bpf_jit.h
+++ b/arch/powerpc/net/bpf_jit.h
@@ -178,8 +178,13 @@ struct codegen_context {
bool is_subprog;
bool exception_boundary;
bool exception_cb;
+ void __percpu *priv_sp;
};
+/* Memory size & magic-value to detect private stack overflow/underflow */
+#define PRIV_STACK_GUARD_SZ 16
+#define PRIV_STACK_GUARD_VAL 0xEB9F12345678eb9fULL
+
#define bpf_to_ppc(r) (ctx->b2p[r])
#ifdef CONFIG_PPC32
diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
index 278e09b57560..ebd21c75ce47 100644
--- a/arch/powerpc/net/bpf_jit_comp.c
+++ b/arch/powerpc/net/bpf_jit_comp.c
@@ -129,6 +129,39 @@ bool bpf_jit_needs_zext(void)
return true;
}
+static void priv_stack_init_guard(void __percpu *priv_stack_ptr, int alloc_size)
+{
+ int cpu, underflow_idx = (alloc_size - PRIV_STACK_GUARD_SZ) >> 3;
+ u64 *stack_ptr;
+
+ for_each_possible_cpu(cpu) {
+ stack_ptr = per_cpu_ptr(priv_stack_ptr, cpu);
+ stack_ptr[0] = PRIV_STACK_GUARD_VAL;
+ stack_ptr[1] = PRIV_STACK_GUARD_VAL;
+ stack_ptr[underflow_idx] = PRIV_STACK_GUARD_VAL;
+ stack_ptr[underflow_idx + 1] = PRIV_STACK_GUARD_VAL;
+ }
+}
+
+static void priv_stack_check_guard(void __percpu *priv_stack_ptr, int alloc_size,
+ struct bpf_prog *fp)
+{
+ int cpu, underflow_idx = (alloc_size - PRIV_STACK_GUARD_SZ) >> 3;
+ u64 *stack_ptr;
+
+ for_each_possible_cpu(cpu) {
+ stack_ptr = per_cpu_ptr(priv_stack_ptr, cpu);
+ if (stack_ptr[0] != PRIV_STACK_GUARD_VAL ||
+ stack_ptr[1] != PRIV_STACK_GUARD_VAL ||
+ stack_ptr[underflow_idx] != PRIV_STACK_GUARD_VAL ||
+ stack_ptr[underflow_idx + 1] != PRIV_STACK_GUARD_VAL) {
+ pr_err("BPF private stack overflow/underflow detected for prog %sx\n",
+ bpf_jit_get_prog_name(fp));
+ break;
+ }
+ }
+}
+
struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
{
u32 proglen;
@@ -140,6 +173,8 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
struct codegen_context cgctx;
int pass;
int flen;
+ int priv_stack_alloc_size;
+ void __percpu *priv_stack_ptr = NULL;
struct bpf_binary_header *fhdr = NULL;
struct bpf_binary_header *hdr = NULL;
struct bpf_prog *org_fp = fp;
@@ -173,6 +208,26 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
fp->aux->jit_data = jit_data;
}
+ priv_stack_ptr = fp->aux->priv_stack_ptr;
+ if (!priv_stack_ptr && fp->aux->jits_use_priv_stack) {
+ /*
+ * Allocate private stack of size equivalent to
+ * verifier-calculated stack size plus two memory
+ * guard regions to detect private stack overflow
+ * and underflow.
+ */
+ priv_stack_alloc_size = round_up(fp->aux->stack_depth, 16) +
+ 2 * PRIV_STACK_GUARD_SZ;
+ priv_stack_ptr = __alloc_percpu_gfp(priv_stack_alloc_size, 16, GFP_KERNEL);
+ if (!priv_stack_ptr) {
+ fp = org_fp;
+ goto out_priv_stack;
+ }
+
+ priv_stack_init_guard(priv_stack_ptr, priv_stack_alloc_size);
+ fp->aux->priv_stack_ptr = priv_stack_ptr;
+ }
+
flen = fp->len;
addrs = jit_data->addrs;
if (addrs) {
@@ -209,6 +264,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
cgctx.is_subprog = bpf_is_subprog(fp);
cgctx.exception_boundary = fp->aux->exception_boundary;
cgctx.exception_cb = fp->aux->exception_cb;
+ cgctx.priv_sp = priv_stack_ptr;
/* Scouting faux-generate pass 0 */
if (bpf_jit_build_body(fp, NULL, NULL, &cgctx, addrs, 0, false)) {
@@ -306,6 +362,11 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
}
bpf_prog_fill_jited_linfo(fp, addrs);
out_addrs:
+ if (!image && priv_stack_ptr) {
+ fp->aux->priv_stack_ptr = NULL;
+ free_percpu(priv_stack_ptr);
+ }
+out_priv_stack:
kfree(addrs);
kfree(jit_data);
fp->aux->jit_data = NULL;
@@ -419,6 +480,8 @@ void bpf_jit_free(struct bpf_prog *fp)
if (fp->jited) {
struct powerpc_jit_data *jit_data = fp->aux->jit_data;
struct bpf_binary_header *hdr;
+ void __percpu *priv_stack_ptr;
+ int priv_stack_alloc_size;
/*
* If we fail the final pass of JIT (from jit_subprogs),
@@ -432,6 +495,13 @@ void bpf_jit_free(struct bpf_prog *fp)
}
hdr = bpf_jit_binary_pack_hdr(fp);
bpf_jit_binary_pack_free(hdr, NULL);
+ priv_stack_ptr = fp->aux->priv_stack_ptr;
+ if (priv_stack_ptr) {
+ priv_stack_alloc_size = round_up(fp->aux->stack_depth, 16) +
+ 2 * PRIV_STACK_GUARD_SZ;
+ priv_stack_check_guard(priv_stack_ptr, priv_stack_alloc_size, fp);
+ free_percpu(priv_stack_ptr);
+ }
WARN_ON_ONCE(!bpf_prog_kallsyms_verify_off(fp));
}
@@ -453,6 +523,11 @@ bool bpf_jit_supports_kfunc_call(void)
return true;
}
+bool bpf_jit_supports_private_stack(void)
+{
+ return IS_ENABLED(CONFIG_PPC64);
+}
+
bool bpf_jit_supports_arena(void)
{
return IS_ENABLED(CONFIG_PPC64);
diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 640b84409687..d026cff30d1d 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -183,6 +183,22 @@ void bpf_jit_realloc_regs(struct codegen_context *ctx)
{
}
+static void emit_fp_priv_stack(u32 *image, struct codegen_context *ctx)
+{
+ /* Load percpu data offset */
+ EMIT(PPC_RAW_LD(bpf_to_ppc(TMP_REG_1), _R13,
+ offsetof(struct paca_struct, data_offset)));
+ PPC_LI64(bpf_to_ppc(BPF_REG_FP), (__force long)ctx->priv_sp);
+ /*
+ * Load base percpu pointer of private stack allocation.
+ * Runtime per-cpu address = (base + data_offset) + (guard + stack_size)
+ */
+ EMIT(PPC_RAW_ADD(bpf_to_ppc(BPF_REG_FP),
+ bpf_to_ppc(TMP_REG_1), bpf_to_ppc(BPF_REG_FP)));
+ EMIT(PPC_RAW_ADDI(bpf_to_ppc(BPF_REG_FP), bpf_to_ppc(BPF_REG_FP),
+ PRIV_STACK_GUARD_SZ + round_up(ctx->stack_size, 16)));
+}
+
/*
* For exception boundary & exception_cb progs:
* return increased size to accommodate additional NVRs.
@@ -251,7 +267,7 @@ void bpf_jit_build_prologue(u32 *image, struct codegen_context *ctx)
}
EMIT(PPC_RAW_STDU(_R1, _R1,
- -(bpf_jit_stack_size(ctx) + ctx->stack_size)));
+ -(bpf_jit_stack_size(ctx) + (ctx->priv_sp ? 0 : ctx->stack_size))));
}
/*
@@ -307,9 +323,16 @@ void bpf_jit_build_prologue(u32 *image, struct codegen_context *ctx)
* Exception_cb not restricted from using stack area or arena.
* Setup frame pointer to point to the bpf stack area
*/
- if (bpf_is_seen_register(ctx, bpf_to_ppc(BPF_REG_FP)))
- EMIT(PPC_RAW_ADDI(bpf_to_ppc(BPF_REG_FP), _R1,
- STACK_FRAME_MIN_SIZE + ctx->stack_size));
+ if (bpf_is_seen_register(ctx, bpf_to_ppc(BPF_REG_FP))) {
+ if (ctx->priv_sp) {
+ /* Set up fp in private stack */
+ emit_fp_priv_stack(image, ctx);
+ } else {
+ /* Setup frame pointer to point to the bpf stack area */
+ EMIT(PPC_RAW_ADDI(bpf_to_ppc(BPF_REG_FP), _R1,
+ STACK_FRAME_MIN_SIZE + ctx->stack_size));
+ }
+ }
if (ctx->arena_vm_start)
PPC_LI64(bpf_to_ppc(ARENA_VM_START), ctx->arena_vm_start);
@@ -339,7 +362,8 @@ static void bpf_jit_emit_common_epilogue(u32 *image, struct codegen_context *ctx
/* Tear down our stack frame */
if (bpf_has_stack_frame(ctx)) {
- EMIT(PPC_RAW_ADDI(_R1, _R1, bpf_jit_stack_size(ctx) + ctx->stack_size));
+ EMIT(PPC_RAW_ADDI(_R1, _R1,
+ bpf_jit_stack_size(ctx) + (ctx->priv_sp ? 0 : ctx->stack_size)));
if (ctx->seen & SEEN_FUNC || ctx->exception_cb) {
EMIT(PPC_RAW_LD(_R0, _R1, PPC_LR_STKOFF));
--
2.52.0
^ permalink raw reply related [flat|nested] 5+ messages in thread
* [PATCH v2 2/2] selftests/bpf: Enable private stack tests for powerpc64
2026-02-25 15:39 [PATCH v2 1/2] powerpc64/bpf: Implement JIT support for private stack adubey
2026-02-25 11:32 ` bot+bpf-ci
@ 2026-02-25 15:39 ` adubey
2026-02-25 11:12 ` bot+bpf-ci
2026-02-25 17:13 ` [PATCH v2 1/2] powerpc64/bpf: Implement JIT support for private stack kernel test robot
2 siblings, 1 reply; 5+ messages in thread
From: adubey @ 2026-02-25 15:39 UTC (permalink / raw)
To: linuxppc-dev; +Cc: bpf, hbathini, ast, daniel, andrii, maddy, Abhishek Dubey
From: Abhishek Dubey <adubey@linux.ibm.com>
With private stack support in place, the relevant tests now pass
on powerpc64:
#./test_progs -t struct_ops_private_stack
#434/1 struct_ops_private_stack/private_stack:OK
#434/2 struct_ops_private_stack/private_stack_fail:OK
#434/3 struct_ops_private_stack/private_stack_recur:OK
#434 struct_ops_private_stack:OK
Summary: 1/3 PASSED, 0 SKIPPED, 0 FAILED
Signed-off-by: Abhishek Dubey <adubey@linux.ibm.com>
---
tools/testing/selftests/bpf/progs/struct_ops_private_stack.c | 3 ++-
.../selftests/bpf/progs/struct_ops_private_stack_fail.c | 3 ++-
.../selftests/bpf/progs/struct_ops_private_stack_recur.c | 3 ++-
3 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/tools/testing/selftests/bpf/progs/struct_ops_private_stack.c b/tools/testing/selftests/bpf/progs/struct_ops_private_stack.c
index dbe646013811..1df1111cd029 100644
--- a/tools/testing/selftests/bpf/progs/struct_ops_private_stack.c
+++ b/tools/testing/selftests/bpf/progs/struct_ops_private_stack.c
@@ -7,7 +7,8 @@
char _license[] SEC("license") = "GPL";
-#if defined(__TARGET_ARCH_x86) || defined(__TARGET_ARCH_arm64)
+#if defined(__TARGET_ARCH_x86) || defined(__TARGET_ARCH_arm64) \
+ || defined(__TARGET_ARCH_powerpc)
bool skip __attribute((__section__(".data"))) = false;
#else
bool skip = true;
diff --git a/tools/testing/selftests/bpf/progs/struct_ops_private_stack_fail.c b/tools/testing/selftests/bpf/progs/struct_ops_private_stack_fail.c
index 3d89ad7cbe2a..e09c1a8782b4 100644
--- a/tools/testing/selftests/bpf/progs/struct_ops_private_stack_fail.c
+++ b/tools/testing/selftests/bpf/progs/struct_ops_private_stack_fail.c
@@ -7,7 +7,8 @@
char _license[] SEC("license") = "GPL";
-#if defined(__TARGET_ARCH_x86) || defined(__TARGET_ARCH_arm64)
+#if defined(__TARGET_ARCH_x86) || defined(__TARGET_ARCH_arm64) \
+ || defined(__TARGET_ARCH_powerpc)
bool skip __attribute((__section__(".data"))) = false;
#else
bool skip = true;
diff --git a/tools/testing/selftests/bpf/progs/struct_ops_private_stack_recur.c b/tools/testing/selftests/bpf/progs/struct_ops_private_stack_recur.c
index b1f6d7e5a8e5..791800835673 100644
--- a/tools/testing/selftests/bpf/progs/struct_ops_private_stack_recur.c
+++ b/tools/testing/selftests/bpf/progs/struct_ops_private_stack_recur.c
@@ -7,7 +7,8 @@
char _license[] SEC("license") = "GPL";
-#if defined(__TARGET_ARCH_x86) || defined(__TARGET_ARCH_arm64)
+#if defined(__TARGET_ARCH_x86) || defined(__TARGET_ARCH_arm64) \
+ || defined(__TARGET_ARCH_powerpc)
bool skip __attribute((__section__(".data"))) = false;
#else
bool skip = true;
--
2.52.0
^ permalink raw reply related [flat|nested] 5+ messages in thread
* Re: [PATCH v2 1/2] powerpc64/bpf: Implement JIT support for private stack
2026-02-25 15:39 [PATCH v2 1/2] powerpc64/bpf: Implement JIT support for private stack adubey
2026-02-25 11:32 ` bot+bpf-ci
2026-02-25 15:39 ` [PATCH v2 2/2] selftests/bpf: Enable private stack tests for powerpc64 adubey
@ 2026-02-25 17:13 ` kernel test robot
2 siblings, 0 replies; 5+ messages in thread
From: kernel test robot @ 2026-02-25 17:13 UTC (permalink / raw)
To: adubey, linuxppc-dev
Cc: oe-kbuild-all, bpf, hbathini, ast, daniel, andrii, maddy,
Abhishek Dubey
Hi,
kernel test robot noticed the following build warnings:
[auto build test WARNING on bpf-next/master]
[also build test WARNING on bpf/master powerpc/next linus/master v7.0-rc1 next-20260224]
[cannot apply to bpf-next/net powerpc/fixes]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/adubey-linux-ibm-com/selftests-bpf-Enable-private-stack-tests-for-powerpc64/20260225-184532
base: https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link: https://lore.kernel.org/r/20260225153950.15331-1-adubey%40linux.ibm.com
patch subject: [PATCH v2 1/2] powerpc64/bpf: Implement JIT support for private stack
config: powerpc-randconfig-r123-20260225 (https://download.01.org/0day-ci/archive/20260226/202602260116.5ljOxwdH-lkp@intel.com/config)
compiler: clang version 23.0.0git (https://github.com/llvm/llvm-project 9a109fbb6e184ec9bcce10615949f598f4c974a9)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260226/202602260116.5ljOxwdH-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202602260116.5ljOxwdH-lkp@intel.com/
All warnings (new ones prefixed by >>):
>> arch/powerpc/net/bpf_jit_comp.c:222:7: warning: variable 'addrs' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
222 | if (!priv_stack_ptr) {
| ^~~~~~~~~~~~~~~
arch/powerpc/net/bpf_jit_comp.c:370:9: note: uninitialized use occurs here
370 | kfree(addrs);
| ^~~~~
arch/powerpc/net/bpf_jit_comp.c:222:3: note: remove the 'if' if its condition is always false
222 | if (!priv_stack_ptr) {
| ^~~~~~~~~~~~~~~~~~~~~~
223 | fp = org_fp;
| ~~~~~~~~~~~~
224 | goto out_priv_stack;
| ~~~~~~~~~~~~~~~~~~~~
225 | }
| ~
arch/powerpc/net/bpf_jit_comp.c:171:12: note: initialize the variable 'addrs' to silence this warning
171 | u32 *addrs;
| ^
| = NULL
1 warning generated.
vim +222 arch/powerpc/net/bpf_jit_comp.c
164
165 struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
166 {
167 u32 proglen;
168 u32 alloclen;
169 u8 *image = NULL;
170 u32 *code_base;
171 u32 *addrs;
172 struct powerpc_jit_data *jit_data;
173 struct codegen_context cgctx;
174 int pass;
175 int flen;
176 int priv_stack_alloc_size;
177 void __percpu *priv_stack_ptr = NULL;
178 struct bpf_binary_header *fhdr = NULL;
179 struct bpf_binary_header *hdr = NULL;
180 struct bpf_prog *org_fp = fp;
181 struct bpf_prog *tmp_fp;
182 bool bpf_blinded = false;
183 bool extra_pass = false;
184 u8 *fimage = NULL;
185 u32 *fcode_base;
186 u32 extable_len;
187 u32 fixup_len;
188
189 if (!fp->jit_requested)
190 return org_fp;
191
192 tmp_fp = bpf_jit_blind_constants(org_fp);
193 if (IS_ERR(tmp_fp))
194 return org_fp;
195
196 if (tmp_fp != org_fp) {
197 bpf_blinded = true;
198 fp = tmp_fp;
199 }
200
201 jit_data = fp->aux->jit_data;
202 if (!jit_data) {
203 jit_data = kzalloc_obj(*jit_data);
204 if (!jit_data) {
205 fp = org_fp;
206 goto out;
207 }
208 fp->aux->jit_data = jit_data;
209 }
210
211 priv_stack_ptr = fp->aux->priv_stack_ptr;
212 if (!priv_stack_ptr && fp->aux->jits_use_priv_stack) {
213 /*
214 * Allocate private stack of size equivalent to
215 * verifier-calculated stack size plus two memory
216 * guard regions to detect private stack overflow
217 * and underflow.
218 */
219 priv_stack_alloc_size = round_up(fp->aux->stack_depth, 16) +
220 2 * PRIV_STACK_GUARD_SZ;
221 priv_stack_ptr = __alloc_percpu_gfp(priv_stack_alloc_size, 16, GFP_KERNEL);
> 222 if (!priv_stack_ptr) {
223 fp = org_fp;
224 goto out_priv_stack;
225 }
226
227 priv_stack_init_guard(priv_stack_ptr, priv_stack_alloc_size);
228 fp->aux->priv_stack_ptr = priv_stack_ptr;
229 }
230
231 flen = fp->len;
232 addrs = jit_data->addrs;
233 if (addrs) {
234 cgctx = jit_data->ctx;
235 /*
236 * JIT compiled to a writable location (image/code_base) first.
237 * It is then moved to the readonly final location (fimage/fcode_base)
238 * using instruction patching.
239 */
240 fimage = jit_data->fimage;
241 fhdr = jit_data->fhdr;
242 proglen = jit_data->proglen;
243 hdr = jit_data->hdr;
244 image = (void *)hdr + ((void *)fimage - (void *)fhdr);
245 extra_pass = true;
246 /* During extra pass, ensure index is reset before repopulating extable entries */
247 cgctx.exentry_idx = 0;
248 goto skip_init_ctx;
249 }
250
251 addrs = kcalloc(flen + 1, sizeof(*addrs), GFP_KERNEL);
252 if (addrs == NULL) {
253 fp = org_fp;
254 goto out_addrs;
255 }
256
257 memset(&cgctx, 0, sizeof(struct codegen_context));
258 bpf_jit_init_reg_mapping(&cgctx);
259
260 /* Make sure that the stack is quadword aligned. */
261 cgctx.stack_size = round_up(fp->aux->stack_depth, 16);
262 cgctx.arena_vm_start = bpf_arena_get_kern_vm_start(fp->aux->arena);
263 cgctx.user_vm_start = bpf_arena_get_user_vm_start(fp->aux->arena);
264 cgctx.is_subprog = bpf_is_subprog(fp);
265 cgctx.exception_boundary = fp->aux->exception_boundary;
266 cgctx.exception_cb = fp->aux->exception_cb;
267 cgctx.priv_sp = priv_stack_ptr;
268
269 /* Scouting faux-generate pass 0 */
270 if (bpf_jit_build_body(fp, NULL, NULL, &cgctx, addrs, 0, false)) {
271 /* We hit something illegal or unsupported. */
272 fp = org_fp;
273 goto out_addrs;
274 }
275
276 /*
277 * If we have seen a tail call, we need a second pass.
278 * This is because bpf_jit_emit_common_epilogue() is called
279 * from bpf_jit_emit_tail_call() with a not yet stable ctx->seen.
280 * We also need a second pass if we ended up with too large
281 * a program so as to ensure BPF_EXIT branches are in range.
282 */
283 if (cgctx.seen & SEEN_TAILCALL || !is_offset_in_branch_range((long)cgctx.idx * 4)) {
284 cgctx.idx = 0;
285 if (bpf_jit_build_body(fp, NULL, NULL, &cgctx, addrs, 0, false)) {
286 fp = org_fp;
287 goto out_addrs;
288 }
289 }
290
291 bpf_jit_realloc_regs(&cgctx);
292 /*
293 * Pretend to build prologue, given the features we've seen. This will
294 * update ctgtx.idx as it pretends to output instructions, then we can
295 * calculate total size from idx.
296 */
297 bpf_jit_build_prologue(NULL, &cgctx);
298 addrs[fp->len] = cgctx.idx * 4;
299 bpf_jit_build_epilogue(NULL, &cgctx);
300
301 fixup_len = fp->aux->num_exentries * BPF_FIXUP_LEN * 4;
302 extable_len = fp->aux->num_exentries * sizeof(struct exception_table_entry);
303
304 proglen = cgctx.idx * 4;
305 alloclen = proglen + FUNCTION_DESCR_SIZE + fixup_len + extable_len;
306
307 fhdr = bpf_jit_binary_pack_alloc(alloclen, &fimage, 4, &hdr, &image,
308 bpf_jit_fill_ill_insns);
309 if (!fhdr) {
310 fp = org_fp;
311 goto out_addrs;
312 }
313
314 if (extable_len)
315 fp->aux->extable = (void *)fimage + FUNCTION_DESCR_SIZE + proglen + fixup_len;
316
317 skip_init_ctx:
318 code_base = (u32 *)(image + FUNCTION_DESCR_SIZE);
319 fcode_base = (u32 *)(fimage + FUNCTION_DESCR_SIZE);
320
321 /* Code generation passes 1-2 */
322 for (pass = 1; pass < 3; pass++) {
323 /* Now build the prologue, body code & epilogue for real. */
324 cgctx.idx = 0;
325 cgctx.alt_exit_addr = 0;
326 bpf_jit_build_prologue(code_base, &cgctx);
327 if (bpf_jit_build_body(fp, code_base, fcode_base, &cgctx, addrs, pass,
328 extra_pass)) {
329 bpf_arch_text_copy(&fhdr->size, &hdr->size, sizeof(hdr->size));
330 bpf_jit_binary_pack_free(fhdr, hdr);
331 fp = org_fp;
332 goto out_addrs;
333 }
334 bpf_jit_build_epilogue(code_base, &cgctx);
335
336 if (bpf_jit_enable > 1)
337 pr_info("Pass %d: shrink = %d, seen = 0x%x\n", pass,
338 proglen - (cgctx.idx * 4), cgctx.seen);
339 }
340
341 if (bpf_jit_enable > 1)
342 /*
343 * Note that we output the base address of the code_base
344 * rather than image, since opcodes are in code_base.
345 */
346 bpf_jit_dump(flen, proglen, pass, code_base);
347
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 5+ messages in thread
end of thread, other threads:[~2026-02-25 17:14 UTC | newest]
Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-02-25 15:39 [PATCH v2 1/2] powerpc64/bpf: Implement JIT support for private stack adubey
2026-02-25 11:32 ` bot+bpf-ci
2026-02-25 15:39 ` [PATCH v2 2/2] selftests/bpf: Enable private stack tests for powerpc64 adubey
2026-02-25 11:12 ` bot+bpf-ci
2026-02-25 17:13 ` [PATCH v2 1/2] powerpc64/bpf: Implement JIT support for private stack kernel test robot
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox