public inbox for bpf@vger.kernel.org
From: sashiko-bot@kernel.org
To: "Jiri Olsa" <jolsa@kernel.org>
Cc: bpf@vger.kernel.org
Subject: Re: [PATCHv5 bpf-next 04/28] bpf: Use mutex lock pool for bpf trampolines
Date: Sat, 18 Apr 2026 03:52:26 +0000	[thread overview]
Message-ID: <20260418035227.168AEC19424@smtp.kernel.org> (raw)
In-Reply-To: <20260417192502.194548-5-jolsa@kernel.org>

Sashiko AI review found 2 potential issues:
- [High] Lock-bucket collisions in the shared mutex pool cause spurious `mutex_trylock` failures, leading to random `-EAGAIN` errors and possible trampoline state corruption.
- [Medium] The `trampoline_locks` mutex pool is initialized in a `late_initcall`, creating an uninitialized-lock hazard for early-boot BPF usage.
--

Note: the standard inline-quoted patch format could not be used for this
reply; the findings are instead summarized below in free form, without
direct code quotes.

commit 8bee21cb8aa45a6b2275c87c16fbbd2fd1953e75
Author: Jiri Olsa <jolsa@kernel.org>

bpf: Use mutex lock pool for bpf trampolines

This commit replaces the per-trampoline mutex with a 32-bucket static mutex
pool. This allows upcoming tracing_multi features to lock all involved
trampolines simultaneously without exceeding the maximum lock depth.


In bpf_tramp_ftrace_ops_func(), mutex_trylock() is used on the selected
trampoline pool lock to avoid deadlocks with direct_mutex. Because multiple
trampolines now share the same lock, could this lead to spurious trylock
failures if an unrelated trampoline happens to hash to the same bucket?

If the trylock fails, the function returns -EAGAIN. Does this mean
concurrent ftrace or kprobe registrations might randomly fail simply
because an unrelated trampoline holds the shared bucket lock?

Also, when the upcoming tracing_multi feature locks all 32 mutexes at once,
wouldn't that deterministically abort any concurrent ftrace IPMODIFY
registrations system-wide?

Finally, if this trylock failure happens during unregistration, could ftrace
cleanup routines ignore the -EAGAIN, potentially leaving the
BPF_TRAMP_F_SHARE_IPMODIFY flag permanently set and corrupting the
trampoline state?


Regarding the pool initialization, the new trampoline_locks array is
initialized in init_trampolines(), which runs as a late_initcall.

Since the static array resides in the .bss section, it is merely
zero-filled until that phase. What happens if early-boot BPF components
(such as early BPF LSM policies) attach to or modify a trampoline before
the late_initcall runs?

Will this result in calling mutex_lock() on an uninitialized mutex, leading
to crashes or lockdep warnings when CONFIG_DEBUG_MUTEXES is enabled?

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260417192502.194548-5-jolsa@kernel.org?part=1


Thread overview: 69+ messages
2026-04-17 19:24 [PATCHv5 bpf-next 00/28] bpf: tracing_multi link Jiri Olsa
2026-04-17 19:24 ` [PATCHv5 bpf-next 01/28] ftrace: Add ftrace_hash_count function Jiri Olsa
2026-04-18  0:58   ` sashiko-bot
2026-04-21  8:54     ` Jiri Olsa
2026-04-18  6:10   ` bot+bpf-ci
2026-04-17 19:24 ` [PATCHv5 bpf-next 02/28] ftrace: Add ftrace_hash_remove function Jiri Olsa
2026-04-18  6:10   ` bot+bpf-ci
2026-04-21  8:54     ` Jiri Olsa
2026-04-17 19:24 ` [PATCHv5 bpf-next 03/28] ftrace: Add add_ftrace_hash_entry function Jiri Olsa
2026-04-17 19:24 ` [PATCHv5 bpf-next 04/28] bpf: Use mutex lock pool for bpf trampolines Jiri Olsa
2026-04-17 20:10   ` bot+bpf-ci
2026-04-21  8:54     ` Jiri Olsa
2026-04-18  3:52   ` sashiko-bot [this message]
2026-04-21  8:55     ` Jiri Olsa
2026-04-18  6:49   ` bot+bpf-ci
2026-04-17 19:24 ` [PATCHv5 bpf-next 05/28] bpf: Add struct bpf_trampoline_ops object Jiri Olsa
2026-04-17 19:24 ` [PATCHv5 bpf-next 06/28] bpf: Move trampoline image setup into bpf_trampoline_ops callbacks Jiri Olsa
2026-04-17 20:10   ` bot+bpf-ci
2026-04-21  8:55     ` Jiri Olsa
2026-04-18  6:10   ` bot+bpf-ci
2026-04-17 19:24 ` [PATCHv5 bpf-next 07/28] bpf: Add bpf_trampoline_add/remove_prog functions Jiri Olsa
2026-04-17 19:24 ` [PATCHv5 bpf-next 08/28] bpf: Add struct bpf_tramp_node object Jiri Olsa
2026-04-17 20:22   ` bot+bpf-ci
2026-04-18  6:10   ` bot+bpf-ci
2026-04-21  8:55     ` Jiri Olsa
2026-04-17 19:24 ` [PATCHv5 bpf-next 09/28] bpf: Factor fsession link to use struct bpf_tramp_node Jiri Olsa
2026-04-17 19:24 ` [PATCHv5 bpf-next 10/28] bpf: Add multi tracing attach types Jiri Olsa
2026-04-17 20:22   ` bot+bpf-ci
2026-04-21  8:55     ` Jiri Olsa
2026-04-18  4:09   ` sashiko-bot
2026-04-21  8:55     ` Jiri Olsa
2026-04-18  6:49   ` bot+bpf-ci
2026-04-21  8:56     ` Jiri Olsa
2026-04-17 19:24 ` [PATCHv5 bpf-next 11/28] bpf: Move sleepable verification code to btf_id_allow_sleepable Jiri Olsa
2026-04-17 19:24 ` [PATCHv5 bpf-next 12/28] bpf: Add bpf_trampoline_multi_attach/detach functions Jiri Olsa
2026-04-17 20:22   ` bot+bpf-ci
2026-04-21  8:56     ` Jiri Olsa
2026-04-18  6:10   ` bot+bpf-ci
2026-04-21  8:56     ` Jiri Olsa
2026-04-17 19:24 ` [PATCHv5 bpf-next 13/28] bpf: Add support for tracing multi link Jiri Olsa
2026-04-18  8:58   ` sashiko-bot
2026-04-21  8:56     ` Jiri Olsa
2026-04-17 19:24 ` [PATCHv5 bpf-next 14/28] bpf: Add support for tracing_multi link cookies Jiri Olsa
2026-04-17 19:24 ` [PATCHv5 bpf-next 15/28] bpf: Add support for tracing_multi link session Jiri Olsa
2026-04-17 19:24 ` [PATCHv5 bpf-next 16/28] bpf: Add support for tracing_multi link fdinfo Jiri Olsa
2026-04-17 19:24 ` [PATCHv5 bpf-next 17/28] libbpf: Add bpf_object_cleanup_btf function Jiri Olsa
2026-04-17 19:24 ` [PATCHv5 bpf-next 18/28] libbpf: Add bpf_link_create support for tracing_multi link Jiri Olsa
2026-04-18  3:50   ` sashiko-bot
2026-04-21  8:56     ` Jiri Olsa
2026-04-17 19:24 ` [PATCHv5 bpf-next 19/28] libbpf: Add btf_type_is_traceable_func function Jiri Olsa
2026-04-18  3:40   ` sashiko-bot
2026-04-21  8:56     ` Jiri Olsa
2026-04-18  5:59   ` bot+bpf-ci
2026-04-17 19:24 ` [PATCHv5 bpf-next 20/28] libbpf: Add support to create tracing multi link Jiri Olsa
2026-04-18  6:10   ` bot+bpf-ci
2026-04-21  8:57     ` Jiri Olsa
2026-04-17 19:24 ` [PATCHv5 bpf-next 21/28] selftests/bpf: Add tracing multi skel/pattern/ids attach tests Jiri Olsa
2026-04-17 20:10   ` bot+bpf-ci
2026-04-21  8:54     ` Jiri Olsa
2026-04-18  3:34   ` sashiko-bot
2026-04-21  8:57     ` Jiri Olsa
2026-04-18  6:10   ` bot+bpf-ci
2026-04-17 19:24 ` [PATCHv5 bpf-next 22/28] selftests/bpf: Add tracing multi skel/pattern/ids module " Jiri Olsa
2026-04-17 19:24 ` [PATCHv5 bpf-next 23/28] selftests/bpf: Add tracing multi intersect tests Jiri Olsa
2026-04-17 19:24 ` [PATCHv5 bpf-next 24/28] selftests/bpf: Add tracing multi cookies test Jiri Olsa
2026-04-17 19:24 ` [PATCHv5 bpf-next 25/28] selftests/bpf: Add tracing multi session test Jiri Olsa
2026-04-17 19:25 ` [PATCHv5 bpf-next 26/28] selftests/bpf: Add tracing multi attach fails test Jiri Olsa
2026-04-17 19:25 ` [PATCHv5 bpf-next 27/28] selftests/bpf: Add tracing multi attach benchmark test Jiri Olsa
2026-04-17 19:25 ` [PATCHv5 bpf-next 28/28] selftests/bpf: Add tracing multi attach rollback tests Jiri Olsa
