From: Menglong Dong <menglong.dong@linux.dev>
To: menglong8.dong@gmail.com, ast@kernel.org, rostedt@goodmis.org,
bot+bpf-ci@kernel.org
Cc: daniel@iogearbox.net, john.fastabend@gmail.com,
andrii@kernel.org, martin.lau@linux.dev, eddyz87@gmail.com,
song@kernel.org, yonghong.song@linux.dev, kpsingh@kernel.org,
sdf@fomichev.me, haoluo@google.com, jolsa@kernel.org,
mhiramat@kernel.org, mark.rutland@arm.com,
mathieu.desnoyers@efficios.com, bpf@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
ast@kernel.org, andrii@kernel.org, daniel@iogearbox.net,
martin.lau@kernel.org, eddyz87@gmail.com,
yonghong.song@linux.dev, clm@meta.com, ihor.solodrai@linux.dev
Subject: Re: [PATCH RFC bpf-next 1/7] ftrace: introduce FTRACE_OPS_FL_JMP
Date: Fri, 14 Nov 2025 18:57:39 +0800 [thread overview]
Message-ID: <1946607.CQOukoFCf9@7950hx> (raw)
In-Reply-To: <aa9a271384d9850c42f9331b03d736b2fd1dd7c00446460f0ecc19cb32e9f117@mail.kernel.org>
On 2025/11/14 18:20, bot+bpf-ci@kernel.org wrote:
> > diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
> > index 7ded7df6e..14705dec1 100644
> > --- a/include/linux/ftrace.h
> > +++ b/include/linux/ftrace.h
> > @@ -351,6 +351,7 @@ enum {
> > FTRACE_OPS_FL_DIRECT = BIT(17),
> > FTRACE_OPS_FL_SUBOP = BIT(18),
> > FTRACE_OPS_FL_GRAPH = BIT(19),
> > + FTRACE_OPS_FL_JMP = BIT(20),
> > };
> >
> > #ifndef CONFIG_DYNAMIC_FTRACE_WITH_ARGS
>
> [ ... ]
>
> > +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_JMP
> > +static inline bool ftrace_is_jmp(unsigned long addr)
> > +{
> > + return addr & 1;
> > +}
> > +
> > +static inline unsigned long ftrace_jmp_set(unsigned long addr)
> > +{
> > + return addr | 1UL;
> > +}
> > +
> > +static inline unsigned long ftrace_jmp_get(unsigned long addr)
> > +{
> > + return addr & ~1UL;
> > +}
>
> [ ... ]
>
> > diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> > index 42bd2ba68..b143a79e6 100644
> > --- a/kernel/trace/ftrace.c
> > +++ b/kernel/trace/ftrace.c
> > @@ -5938,7 +5938,8 @@ static void remove_direct_functions_hash(struct ftrace_hash *hash, unsigned long
> > for (i = 0; i < size; i++) {
> > hlist_for_each_entry(entry, &hash->buckets[i], hlist) {
> > del = __ftrace_lookup_ip(direct_functions, entry->ip);
> > - if (del && del->direct == addr) {
> > + if (del && ftrace_jmp_get(del->direct) ==
> > + ftrace_jmp_get(addr)) {
> > remove_hash_entry(direct_functions, del);
> > kfree(del);
> > }
>
> [ ... ]
>
> > @@ -5994,6 +5995,9 @@ int register_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
> >
> > mutex_lock(&direct_mutex);
> >
> > + if (ops->flags & FTRACE_OPS_FL_JMP)
> > + addr = ftrace_jmp_set(addr);
> > +
>
> When FTRACE_OPS_FL_JMP is set, this encodes the flag into the last bit
> of addr. The modified addr is then stored in entry->direct and
> ops->direct_call. However, this flagged address later flows through
> ftrace_find_rec_direct()->ftrace_get_addr_new()->ftrace_make_call()->
> ftrace_call_replace() without being masked.
>
> Can ftrace_call_replace() generate correct CALL instructions when addr
> has the LSB set? On x86, ftrace_call_replace() calls text_gen_insn()
> with the raw address to generate a relative CALL instruction. If addr
> has bit 0 set to 1, this would generate a CALL to an odd address,
> causing execution to skip the first instruction byte at the trampoline.
If DYNAMIC_FTRACE_WITH_JMP is not enabled, ftrace_jmp_set() returns
the addr unchanged, so that case is fine. When
DYNAMIC_FTRACE_WITH_JMP is enabled (taking x86_64 in the next patch
as the example), ftrace_call_replace() recovers the real address
with ftrace_jmp_get().
The arch-specific changes are split out into the next patch.
>
> Similarly, at runtime call_direct_funcs() reads ops->direct_call and
> passes it to arch_ftrace_set_direct_caller() without masking. The x86
> assembly then does RET to this address, which would jump to addr+1 if
> the LSB is set.
>
> The commit message says "we can tell if we should use 'jmp' for the
> callback in ftrace_call_replace()" but ftrace_call_replace() isn't
> modified to check or mask the bit. Are there missing changes to handle
> the encoded address in these paths?
It is modified in the next patch. And to reduce the risk, we can
wrap the code here in CONFIG_DYNAMIC_FTRACE_WITH_JMP.
>
> > /* Make sure requested entries are not already registered.. */
> > size = 1 << hash->size_bits;
> > for (i = 0; i < size; i++) {
>
> [ ... ]
>
> > @@ -6117,6 +6121,9 @@ __modify_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
> >
> > lockdep_assert_held_once(&direct_mutex);
> >
> > + if (ops->flags & FTRACE_OPS_FL_JMP)
> > + addr = ftrace_jmp_set(addr);
> > +
>
> Same issue here - the flagged addr is stored but downstream consumers
> don't mask it before using as a jump target.
>
>
> ---
> AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
> See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
>
> CI run summary: https://github.com/kernel-patches/bpf/actions/runs/19360353328
>
Thread overview: 24+ messages
2025-11-14 9:24 [PATCH RFC bpf-next 0/7] bpf trampoline support "jmp" mode Menglong Dong
2025-11-14 9:24 ` [PATCH RFC bpf-next 1/7] ftrace: introduce FTRACE_OPS_FL_JMP Menglong Dong
2025-11-14 10:20 ` bot+bpf-ci
2025-11-14 10:57 ` Menglong Dong [this message]
2025-11-14 9:24 ` [PATCH RFC bpf-next 2/7] x86/ftrace: implement DYNAMIC_FTRACE_WITH_JMP Menglong Dong
2025-11-14 16:39 ` Steven Rostedt
2025-11-15 2:12 ` Menglong Dong
2025-11-14 9:24 ` [PATCH RFC bpf-next 3/7] bpf: fix the usage of BPF_TRAMP_F_SKIP_FRAME Menglong Dong
2025-11-14 18:23 ` Alexei Starovoitov
2025-11-14 9:24 ` [PATCH RFC bpf-next 4/7] bpf,x86: adjust the "jmp" mode for bpf trampoline Menglong Dong
2025-11-14 18:22 ` Alexei Starovoitov
2025-11-15 2:14 ` Menglong Dong
2025-11-14 9:24 ` [PATCH RFC bpf-next 5/7] bpf: introduce bpf_arch_text_poke_type Menglong Dong
2025-11-14 10:20 ` bot+bpf-ci
2025-11-14 18:41 ` Alexei Starovoitov
2025-11-15 2:26 ` Menglong Dong
2025-11-14 9:24 ` [PATCH RFC bpf-next 6/7] bpf,x86: implement bpf_arch_text_poke_type for x86_64 Menglong Dong
2025-11-14 9:24 ` [PATCH RFC bpf-next 7/7] bpf: implement "jmp" mode for trampoline Menglong Dong
2025-11-14 18:50 ` Alexei Starovoitov
2025-11-15 2:39 ` Menglong Dong
2025-11-15 2:42 ` Alexei Starovoitov
2025-11-14 13:38 ` [PATCH RFC bpf-next 0/7] bpf trampoline support "jmp" mode Steven Rostedt
2025-11-14 13:58 ` Menglong Dong
2025-11-14 16:28 ` Steven Rostedt