From: Jiri Olsa <olsajiri@gmail.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Oleg Nesterov <oleg@redhat.com>,
Andrii Nakryiko <andrii@kernel.org>,
bpf@vger.kernel.org, Song Liu <songliubraving@fb.com>,
Yonghong Song <yhs@fb.com>,
John Fastabend <john.fastabend@gmail.com>,
Hao Luo <haoluo@google.com>, Steven Rostedt <rostedt@goodmis.org>,
Masami Hiramatsu <mhiramat@kernel.org>,
Alan Maguire <alan.maguire@oracle.com>,
linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org
Subject: Re: [PATCH bpf-next 08/13] uprobes/x86: Add support to optimize uprobes
Date: Fri, 13 Dec 2024 14:06:51 +0100 [thread overview]
Message-ID: <Z1wxa8yYrwt5Oz9z@krava> (raw)
In-Reply-To: <20241213104907.GA35539@noisy.programming.kicks-ass.net>
On Fri, Dec 13, 2024 at 11:49:07AM +0100, Peter Zijlstra wrote:
> On Wed, Dec 11, 2024 at 02:33:57PM +0100, Jiri Olsa wrote:
> > diff --git a/arch/x86/kernel/uprobes.c b/arch/x86/kernel/uprobes.c
> > index cdea97f8cd39..b2420eeee23a 100644
> > --- a/arch/x86/kernel/uprobes.c
> > +++ b/arch/x86/kernel/uprobes.c
>
> > @@ -1306,3 +1339,132 @@ bool arch_uretprobe_is_alive(struct return_instance *ret, enum rp_check ctx,
> > else
> > return regs->sp <= ret->stack;
> > }
> > +
> > +int arch_uprobe_verify_opcode(struct arch_uprobe *auprobe, struct page *page,
> > + unsigned long vaddr, uprobe_opcode_t *new_opcode,
> > + int nbytes)
> > +{
> > + uprobe_opcode_t old_opcode[5];
> > + bool is_call, is_swbp, is_nop5;
> > +
> > + if (!test_bit(ARCH_UPROBE_FLAG_CAN_OPTIMIZE, &auprobe->flags))
> > + return uprobe_verify_opcode(page, vaddr, new_opcode);
> > +
> > + /*
> > + * The ARCH_UPROBE_FLAG_CAN_OPTIMIZE flag guarantees that the
> > + * following 5-byte read won't cross a page boundary.
> > + */
> > + uprobe_copy_from_page(page, vaddr, (uprobe_opcode_t *) &old_opcode, 5);
> > + is_call = is_call_insn((uprobe_opcode_t *) &old_opcode);
> > + is_swbp = is_swbp_insn((uprobe_opcode_t *) &old_opcode);
> > + is_nop5 = is_nop5_insn((uprobe_opcode_t *) &old_opcode);
> > +
> > + /*
> > + * We allow the following transitions for optimized uprobes:
> > + *
> > + * nop5 -> swbp -> call
> > + * || | |
> > + * |'--<---' |
> > + * '---<-----------'
> > + *
> > + * We return 1 to ack the write, 0 to do nothing, -1 to fail the write.
> > + *
> > + * If the current opcode (old_opcode) already has the desired value,
> > + * we do nothing, because we are racing with another thread doing
> > + * the update.
> > + */
> > + switch (nbytes) {
> > + case 5:
> > + if (is_call_insn(new_opcode)) {
> > + if (is_swbp)
> > + return 1;
> > + if (is_call && !memcmp(new_opcode, &old_opcode, 5))
> > + return 0;
> > + } else {
> > + if (is_call || is_swbp)
> > + return 1;
> > + if (is_nop5)
> > + return 0;
> > + }
> > + break;
> > + case 1:
> > + if (is_swbp_insn(new_opcode)) {
> > + if (is_nop5)
> > + return 1;
> > + if (is_swbp || is_call)
> > + return 0;
> > + } else {
> > + if (is_swbp || is_call)
> > + return 1;
> > + if (is_nop5)
> > + return 0;
> > + }
> > + }
> > + return -1;
> > +}
> > +
> > +bool arch_uprobe_is_register(uprobe_opcode_t *insn, int nbytes)
> > +{
> > + return nbytes == 5 ? is_call_insn(insn) : is_swbp_insn(insn);
> > +}
> > +
> > +static void __arch_uprobe_optimize(struct arch_uprobe *auprobe, struct mm_struct *mm,
> > + unsigned long vaddr)
> > +{
> > + struct uprobe_trampoline *tramp;
> > + char call[5];
> > +
> > + tramp = uprobe_trampoline_get(vaddr);
> > + if (!tramp)
> > + goto fail;
> > +
> > + relative_call(call, (void *) vaddr, (void *) tramp->vaddr);
> > + if (uprobe_write_opcode(auprobe, mm, vaddr, call, 5))
> > + goto fail;
> > +
> > + set_bit(ARCH_UPROBE_FLAG_OPTIMIZED, &auprobe->flags);
> > + return;
> > +
> > +fail:
> > + /* Once we fail we never try again. */
> > + clear_bit(ARCH_UPROBE_FLAG_CAN_OPTIMIZE, &auprobe->flags);
> > + uprobe_trampoline_put(tramp);
> > +}
> > +
> > +static bool should_optimize(struct arch_uprobe *auprobe)
> > +{
> > + if (!test_bit(ARCH_UPROBE_FLAG_CAN_OPTIMIZE, &auprobe->flags))
> > + return false;
> > + if (test_bit(ARCH_UPROBE_FLAG_OPTIMIZED, &auprobe->flags))
> > + return false;
> > + return true;
> > +}
> > +
> > +void arch_uprobe_optimize(struct arch_uprobe *auprobe, unsigned long vaddr)
> > +{
> > + struct mm_struct *mm = current->mm;
> > +
> > + if (!should_optimize(auprobe))
> > + return;
> > +
> > + mmap_write_lock(mm);
> > + if (should_optimize(auprobe))
> > + __arch_uprobe_optimize(auprobe, mm, vaddr);
> > + mmap_write_unlock(mm);
> > +}
> > +
> > +int set_orig_insn(struct arch_uprobe *auprobe, struct mm_struct *mm, unsigned long vaddr)
> > +{
> > + uprobe_opcode_t *insn = (uprobe_opcode_t *) auprobe->insn;
> > +
> > + if (test_bit(ARCH_UPROBE_FLAG_OPTIMIZED, &auprobe->flags))
> > + return uprobe_write_opcode(auprobe, mm, vaddr, insn, 5);
> > +
> > + return uprobe_write_opcode(auprobe, mm, vaddr, insn, UPROBE_SWBP_INSN_SIZE);
> > +}
> > +
> > +bool arch_uprobe_is_callable(unsigned long vtramp, unsigned long vaddr)
> > +{
> > + long delta = (long)(vaddr + 5 - vtramp);
> > + return delta >= INT_MIN && delta <= INT_MAX;
> > +}
>
> All this code is useless on 32bit, right?
yes, there's a user_64bit_mode check in arch_uprobe_trampoline_mapping
when getting the trampoline, so the code above won't get executed in
practice.. but I think we should make it more obvious and put the check
directly into arch_uprobe_optimize, something like the sketch below
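
rough idea (untested, and assuming task_pt_regs(current) is the right
way to check the task's mode at this point):

void arch_uprobe_optimize(struct arch_uprobe *auprobe, unsigned long vaddr)
{
	struct mm_struct *mm = current->mm;

	/*
	 * The rel32 call optimization only makes sense for 64-bit tasks;
	 * bail out early instead of relying on the trampoline lookup
	 * failing later on.
	 */
	if (!user_64bit_mode(task_pt_regs(current)))
		return;

	if (!should_optimize(auprobe))
		return;

	mmap_write_lock(mm);
	if (should_optimize(auprobe))
		__arch_uprobe_optimize(auprobe, mm, vaddr);
	mmap_write_unlock(mm);
}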
jirka