From: Masami Hiramatsu <mhiramat@kernel.org>
To: Peter Zijlstra <peterz@infradead.org>
Cc: x86@kernel.org, linux-kernel@vger.kernel.org,
rostedt@goodmis.org, mhiramat@kernel.org, bristot@redhat.com,
jbaron@akamai.com, torvalds@linux-foundation.org,
tglx@linutronix.de, mingo@kernel.org, namit@vmware.com,
hpa@zytor.com, luto@kernel.org, ard.biesheuvel@linaro.org,
jpoimboe@redhat.com, jeyu@kernel.org, paulmck@kernel.org,
mathieu.desnoyers@efficios.com
Subject: Re: [PATCH v4 12/16] x86/kprobes: Fix ordering
Date: Tue, 22 Oct 2019 10:35:21 +0900
Message-ID: <20191022103521.3015bc5e128cd68fa645013c@kernel.org>
In-Reply-To: <20191018074634.629386219@infradead.org>
On Fri, 18 Oct 2019 09:35:37 +0200
Peter Zijlstra <peterz@infradead.org> wrote:
> Kprobes does something like:
>
> register:
> arch_arm_kprobe()
> text_poke(INT3)
> /* guarantees nothing, INT3 will become visible at some point, maybe */
>
> kprobe_optimizer()
> /* guarantees the bytes after INT3 are unused */
> synchronize_rcu_tasks();
> text_poke_bp(JMP32);
> /* implies IPI-sync, kprobe really is enabled */
>
>
> unregister:
> __disarm_kprobe()
> unoptimize_kprobe()
> text_poke_bp(INT3 + tail);
> /* implies IPI-sync, so tail is guaranteed visible */
> arch_disarm_kprobe()
> text_poke(old);
> /* guarantees nothing, old will maybe become visible */
>
> synchronize_rcu()
>
> free-stuff
Note that this is only the case for optimized kprobes.
(Some probe points cannot be optimized.)
>
> Now the problem is that on register, the synchronize_rcu_tasks() is
> not sufficient to guarantee all CPUs have already observed the INT3
> (although in practise this is exceedingly unlikely not to have
> happened) (similar to how MEMBARRIER_CMD_PRIVATE_EXPEDITED does not
> imply MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE).
OK, so the sync_core() after int3 is needed to guarantee the probe
is enabled on each core.
>
> Worse, even if it did, we'd have to do 2 synchronize calls to provide
> the guarantee we're looking for, the first to ensure INT3 is visible,
> the second to guarantee nobody is then still using the instruction
> bytes after INT3.
I think this 2nd guarantee is provided by synchronize_rcu() if we
put sync_core() after the int3. synchronize_rcu() ensures that all
cores have been scheduled at least once and all interrupts have
completed.
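So with the sync added, the register path becomes something like the
following (a sketch only, not literal kernel code; text_poke_sync()
stands for the IPI-broadcast sync_core() this patch introduces):

```c
/* Sketch of the fixed register-side ordering. */

arch_arm_kprobe(p);
	/* text_poke(p->addr, INT3, 1):  write the breakpoint byte   */
	/* text_poke_sync():  IPI, every CPU runs sync_core() and is */
	/*                    now guaranteed to observe the INT3     */

synchronize_rcu_tasks();
	/* every task has been scheduled at least once, so nobody    */
	/* can still be executing the stale bytes after the INT3     */

text_poke_bp(p->addr, JMP32, ...);
	/* install the optimized jump; implies another IPI-sync      */
```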
>
> Similar on unregister; the synchronize_rcu() between
> __unregister_kprobe_top() and __unregister_kprobe_bottom() does not
> guarantee all CPUs are free of the INT3 (and observe the old text).
I agree with putting sync_core() after putting/removing INT3.
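That is, the unregister path would then look like this (again just a
sketch of the ordering, with the IPI-sync calls the patch adds):

```c
/* Sketch of the fixed unregister-side ordering. */

arch_unoptimize_kprobe(op);
	/* arch_arm_kprobe():  INT3 back in place + IPI-sync         */
	/* text_poke() of the tail + text_poke_sync():               */
	/*                     tail bytes visible on every CPU       */

arch_disarm_kprobe(p);
	/* text_poke(p->addr, &p->opcode, 1):  restore original byte */
	/* text_poke_sync():  IPI-sync, all CPUs observe the old text*/

synchronize_rcu();
	/* no CPU can still be inside the INT3 handler for this probe*/

/* now it is safe to free the kprobe resources */
```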
>
> Therefore, sprinkle some IPI-sync love around. This guarantees that
> all CPUs agree on the text and RCU once again provides the required
> guarantee.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Cc: hpa@zytor.com
> Cc: paulmck@kernel.org
> Cc: mathieu.desnoyers@efficios.com
> ---
> arch/x86/include/asm/text-patching.h | 1 +
> arch/x86/kernel/alternative.c | 11 ++++++++---
> arch/x86/kernel/kprobes/core.c | 2 ++
> arch/x86/kernel/kprobes/opt.c | 12 ++++--------
> 4 files changed, 15 insertions(+), 11 deletions(-)
>
> --- a/arch/x86/include/asm/text-patching.h
> +++ b/arch/x86/include/asm/text-patching.h
> @@ -42,6 +42,7 @@ extern void text_poke_early(void *addr,
> * an inconsistent instruction while you patch.
> */
> extern void *text_poke(void *addr, const void *opcode, size_t len);
> +extern void text_poke_sync(void);
> extern void *text_poke_kgdb(void *addr, const void *opcode, size_t len);
> extern int poke_int3_handler(struct pt_regs *regs);
> extern void text_poke_bp(void *addr, const void *opcode, size_t len, const void *emulate);
> --- a/arch/x86/kernel/alternative.c
> +++ b/arch/x86/kernel/alternative.c
> @@ -936,6 +936,11 @@ static void do_sync_core(void *info)
> sync_core();
> }
>
> +void text_poke_sync(void)
> +{
> + on_each_cpu(do_sync_core, NULL, 1);
> +}
> +
> struct text_poke_loc {
> s32 rel_addr; /* addr := _stext + rel_addr */
> s32 rel32;
> @@ -1085,7 +1090,7 @@ static void text_poke_bp_batch(struct te
> for (i = 0; i < nr_entries; i++)
> text_poke(text_poke_addr(&tp[i]), &int3, sizeof(int3));
>
> - on_each_cpu(do_sync_core, NULL, 1);
> + text_poke_sync();
>
> /*
> * Second step: update all but the first byte of the patched range.
> @@ -1107,7 +1112,7 @@ static void text_poke_bp_batch(struct te
> * not necessary and we'd be safe even without it. But
> * better safe than sorry (plus there's not only Intel).
> */
> - on_each_cpu(do_sync_core, NULL, 1);
> + text_poke_sync();
> }
>
> /*
> @@ -1123,7 +1128,7 @@ static void text_poke_bp_batch(struct te
> }
>
> if (do_sync)
> - on_each_cpu(do_sync_core, NULL, 1);
> + text_poke_sync();
>
> /*
> * sync_core() implies an smp_mb() and orders this store against
> --- a/arch/x86/kernel/kprobes/core.c
> +++ b/arch/x86/kernel/kprobes/core.c
> @@ -502,11 +502,13 @@ int arch_prepare_kprobe(struct kprobe *p
> void arch_arm_kprobe(struct kprobe *p)
> {
> text_poke(p->addr, ((unsigned char []){INT3_INSN_OPCODE}), 1);
> + text_poke_sync();
> }
>
> void arch_disarm_kprobe(struct kprobe *p)
> {
> text_poke(p->addr, &p->opcode, 1);
> + text_poke_sync();
> }
This looks good to me.
>
> void arch_remove_kprobe(struct kprobe *p)
> --- a/arch/x86/kernel/kprobes/opt.c
> +++ b/arch/x86/kernel/kprobes/opt.c
> @@ -444,14 +444,10 @@ void arch_optimize_kprobes(struct list_h
> /* Replace a relative jump with a breakpoint (int3). */
> void arch_unoptimize_kprobe(struct optimized_kprobe *op)
> {
> - u8 insn_buff[JMP32_INSN_SIZE];
> -
> - /* Set int3 to first byte for kprobes */
> - insn_buff[0] = INT3_INSN_OPCODE;
> - memcpy(insn_buff + 1, op->optinsn.copied_insn, DISP32_SIZE);
> -
> - text_poke_bp(op->kp.addr, insn_buff, JMP32_INSN_SIZE,
> - text_gen_insn(JMP32_INSN_OPCODE, op->kp.addr, op->optinsn.insn));
> + arch_arm_kprobe(&op->kp);
> + text_poke(op->kp.addr + INT3_INSN_SIZE,
> + op->optinsn.copied_insn, DISP32_SIZE);
> + text_poke_sync();
> }
For this part, I thought it was the same as what text_poke_bp() does.
But indeed, this looks better (simpler & lighter) than using
text_poke_bp()...
So, in total, this looks good to me.
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Thank you,
--
Masami Hiramatsu <mhiramat@kernel.org>