From: sashiko-bot@kernel.org
To: "Kuan-Wei Chiu" <visitorckw@gmail.com>
Cc: bpf@vger.kernel.org
Subject: Re: [PATCH bpf-next v3] m68k, bpf: Add initial BPF JIT compiler support
Date: Wed, 13 May 2026 01:03:39 +0000
Message-ID: <20260513010339.81EDEC2BCB0@smtp.kernel.org>
In-Reply-To: <20260511212756.3163955-1-visitorckw@gmail.com>
Thank you for your contribution! Sashiko AI review found 3 potential issues to consider:
- [High] BPF-to-BPF calls (BPF_PSEUDO_CALL) are broken, corrupting arguments, return values, and tail call tracking.
- [High] Missing protection against signed division overflow (`S32_MIN / -1` and `S64_MIN / -1`), leading to undefined behavior and potential kernel panics.
- [Medium] `kcalloc` is used for a potentially large array allocation, which can fail under memory fragmentation and needlessly abort JIT compilation.
--
commit 1a2c23f18e30a4b931f962e5e35eed83ff42f6a2
Author: Kuan-Wei Chiu <visitorckw@gmail.com>
m68k, bpf: Add initial BPF JIT compiler support
Add a BPF JIT compiler for the m68k architecture.
> diff --git a/arch/m68k/net/bpf_jit_comp.c b/arch/m68k/net/bpf_jit_comp.c
> new file mode 100644
> index 0000000000000..1531b321410ae
> --- /dev/null
> +++ b/arch/m68k/net/bpf_jit_comp.c
[ ... ]
> +static noinline s32 jit_sdiv32(s32 a, s32 b) { return b ? a / b : 0; }
> +static noinline s32 jit_smod32(s32 a, s32 b) { return b ? a % b : a; }
> +
> +static noinline u64 jit_mul64(u64 a, u64 b) { return a * b; }
> +static noinline u64 jit_div64(u64 a, u64 b) { return b ? div64_u64(a, b) : 0; }
> +static noinline s64 jit_sdiv64(s64 a, s64 b) { return b ? div64_s64(a, b) : 0; }
What happens when signed division or modulo is performed with a divisor of -1
and the minimum representable dividend (e.g., S32_MIN / -1)?
In C, evaluating INT_MIN / -1 is signed integer overflow (undefined behavior),
which can trigger a CPU division exception on m68k and crash the kernel.
Should explicit bounds checks be added for these edge cases, returning the
dividend for division and 0 for modulo, as the eBPF specification requires?
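A minimal sketch of such guards for the 32-bit helpers (assuming S32_MIN from
include/linux/limits.h; the 64-bit helpers would want the analogous S64_MIN
check before calling div64_s64()):

static noinline s32 jit_sdiv32(s32 a, s32 b)
{
	if (!b)
		return 0;
	if (a == S32_MIN && b == -1)
		return a;	/* would overflow; spec result is the dividend */
	return a / b;
}

static noinline s32 jit_smod32(s32 a, s32 b)
{
	if (!b)
		return a;
	if (a == S32_MIN && b == -1)
		return 0;	/* would overflow; spec result is 0 */
	return a % b;
}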
[ ... ]
> +static int emit_call(const struct bpf_insn *insn, struct jit_ctx *ctx)
> +{
> + const s8 arg_regs[] = { BPF_REG_5, BPF_REG_4, BPF_REG_3, BPF_REG_2, BPF_REG_1 };
> + const s8 *tmp1 = bpf2m68k[TMP_REG_1];
> + bool extra_pass = ctx->target != NULL;
> + u64 func_addr;
> + bool fixed;
> + int i, err;
> +
> + err = bpf_jit_get_func_addr(ctx->prog, insn, extra_pass, &func_addr, &fixed);
> + if (err)
> + return err;
> +
> + for (i = 0; i < 5; i++) {
> + const s8 *reg = bpf2m68k[arg_regs[i]];
> + s8 d_lo = bpf_get_reg32(reg[1], tmp1[1], ctx);
> + s8 d_hi = bpf_get_reg32(reg[0], tmp1[0], ctx);
> +
> + emit_16(ctx, 0x2f00 | d_lo); /* move.l d_lo, -(%sp) */
> + emit_16(ctx, 0x2f00 | d_hi); /* move.l d_hi, -(%sp) */
> + }
> +
> + emit_16(ctx, 0x207c); /* movea.l #func_addr, %a0 */
> + emit_32(ctx, (u32)func_addr);
> +
> + emit_16(ctx, 0x4e90); /* jsr (%a0) */
> +
> + emit_16(ctx, 0x4fef); /* lea 40(%sp), %sp */
> + emit_16(ctx, 40);
> +
> + emit_16(ctx, 0xc340); /* exg %d0, %d1 */
Does this code break BPF-to-BPF calls (BPF_PSEUDO_CALL)?
It treats a subprogram exactly like a standard C helper: it pushes BPF
registers R1-R5 onto the stack and then executes exg %d0, %d1 to swap the
64-bit return value into place.
However, the subprogram's JIT-compiled code unconditionally uses the standard
build_prologue(), which never checks ctx->prog->is_func.
Consequently, doesn't the subprogram behave like a root BPF program,
initializing BPF_REG_1 from 8(%fp), zeroing its upper 32 bits, and completely
ignoring the stacked R2-R5 arguments?
Additionally, the tail call counter (BPF_TC_LO) appears to be reset to 33
(MAX_TAIL_CALL_CNT) on every subprogram entry, and the return value of a BPF
subprogram, which already populates BPF_REG_0 correctly, is corrupted by the
swap.
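For the return-value half of this, one possible shape (a sketch only, reusing
the patch's emit_16() helper; the core distinguishes subprogram calls from
helper calls via insn->src_reg):

	/* BPF subprograms already leave BPF_REG_0 in the JIT's own
	 * register convention; only external C helpers return the
	 * 64-bit result in %d0:%d1 ABI order and need the swap.
	 */
	if (insn->src_reg != BPF_PSEUDO_CALL)
		emit_16(ctx, 0xc340);	/* exg %d0, %d1 */

And the prologue could branch on the subprogram case, as other 32-bit JITs do
(the branch bodies here are placeholders for whatever frame layout the JIT
adopts):

	if (!bpf_is_subprog(ctx->prog)) {
		/* Root program: load BPF_REG_1 from 8(%fp), zero its
		 * upper 32 bits, initialize the tail call counter.
		 */
	} else {
		/* Subprogram: R1-R5 were pushed by emit_call(); load
		 * them from the caller's frame and leave the caller's
		 * tail call counter untouched.
		 */
	}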
[ ... ]
> +struct bpf_prog *bpf_int_jit_compile(struct bpf_verifier_env *env, struct bpf_prog *prog)
> +{
> + struct bpf_binary_header *header;
> + struct jit_ctx ctx;
> + unsigned int image_size;
> + u8 *image_ptr;
> +
> + if (!prog->jit_requested)
> + return prog;
> +
> + memset(&ctx, 0, sizeof(ctx));
> + ctx.prog = prog;
> +
> + ctx.offsets = kcalloc(prog->len + 1, sizeof(int), GFP_KERNEL);
Is it safe to use kcalloc() here for a potentially large array allocation?
Since the BPF verifier allows up to 1,000,000 instructions per program, this
can request up to ~4MB of physically contiguous memory.
On a system with fragmented memory, an allocation that large is prone to
fail, needlessly aborting JIT compilation.
Would it be better to use kvcalloc() instead, consistent with other BPF JITs?
> + if (!ctx.offsets)
> + return prog;
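A sketch of the suggested substitution (kvcalloc() transparently falls back to
vmalloc-backed memory when contiguous pages are unavailable; the matching free
site, not visible in this hunk, would need to become kvfree()):

	ctx.offsets = kvcalloc(prog->len + 1, sizeof(int), GFP_KERNEL);
	if (!ctx.offsets)
		return prog;
	...
	kvfree(ctx.offsets);	/* in the cleanup/exit paths */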
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260511212756.3163955-1-visitorckw@gmail.com?part=1