* [PATCH -tip v3 0/2] kprobes/x86: Another way to make insn buffer RO and cleanup
@ 2017-08-18 8:22 Masami Hiramatsu
2017-08-18 8:24 ` [PATCH -tip v3 1/2] kprobes/x86: Make insn buffer always ROX and use text_poke Masami Hiramatsu
` (2 more replies)
0 siblings, 3 replies; 7+ messages in thread
From: Masami Hiramatsu @ 2017-08-18 8:22 UTC (permalink / raw)
To: Ingo Molnar
Cc: Ingo Molnar, H . Peter Anvin, x86, Masami Hiramatsu,
Ananth N Mavinakayanahalli, Anil S Keshavamurthy,
David S . Miller, linux-kernel
Hi,
This series modifies how the RO insn buffer is handled and
cleans up address-of operators.
The 1st patch changes the RO insn buffer handling: instead
of using set_memory_ro/rw to modify the buffer, it prepares
the new instructions in another buffer and writes them with
text_poke(), as suggested by Ingo Molnar (Thanks!).
Since text_poke() safely modifies code by mapping alias
pages, it can write to RO pages.
This also overrides alloc_insn_page() so that it returns
a ROX page directly.
The 2nd one is unchanged. It is a cleanup patch that removes
address-of operators ("&") since they are no longer needed.
V3 has just the following update:
 - [1/2] Instead of just adding set_memory_ro(), introduce a new
   patch that changes the way RO pages are handled.
Thanks,
---
Masami Hiramatsu (2):
kprobes/x86: Make insn buffer always ROX and use text_poke
kprobes/x86: Remove addressof operators
arch/x86/include/asm/kprobes.h | 4 +-
arch/x86/kernel/kprobes/common.h | 6 ++-
arch/x86/kernel/kprobes/core.c | 61 +++++++++++++++++++++------------
arch/x86/kernel/kprobes/opt.c | 71 +++++++++++++++++++++-----------------
kernel/kprobes.c | 2 +
5 files changed, 86 insertions(+), 58 deletions(-)
--
Masami Hiramatsu <mhiramat@kernel.org>
^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH -tip v3 1/2] kprobes/x86: Make insn buffer always ROX and use text_poke
  2017-08-18  8:22 [PATCH -tip v3 0/2] kprobes/x86: Another way to make insn buffer RO and cleanup Masami Hiramatsu
@ 2017-08-18  8:24 ` Masami Hiramatsu
  2017-09-28 10:51   ` [tip:perf/core] kprobes/x86: Make insn buffer always ROX and use text_poke() tip-bot for Masami Hiramatsu
  2017-08-18  8:25 ` [PATCH -tip v3 2/2] kprobes/x86: Remove addressof operators Masami Hiramatsu
  2017-08-28  2:28 ` [PATCH -tip v3 0/2] kprobes/x86: Another way to make insn buffer RO and cleanup Masami Hiramatsu
  2 siblings, 1 reply; 7+ messages in thread
From: Masami Hiramatsu @ 2017-08-18  8:24 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Ingo Molnar, H . Peter Anvin, x86, Masami Hiramatsu,
	Ananth N Mavinakayanahalli, Anil S Keshavamurthy,
	David S . Miller, linux-kernel

Make insn buffer always ROX and use text_poke() to write
the copied instructions instead of set_memory_*().
This makes instruction buffer stronger against other
kernel subsystems because there is no window time
to modify the buffer.

Suggested-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
---
 arch/x86/kernel/kprobes/common.h |  6 ++--
 arch/x86/kernel/kprobes/core.c   | 61 +++++++++++++++++++++++-------------
 arch/x86/kernel/kprobes/opt.c    | 65 ++++++++++++++++++++++----------------
 kernel/kprobes.c                 |  2 +
 4 files changed, 81 insertions(+), 53 deletions(-)

diff --git a/arch/x86/kernel/kprobes/common.h b/arch/x86/kernel/kprobes/common.h
index db2182d63ed0..e2c2a1970869 100644
--- a/arch/x86/kernel/kprobes/common.h
+++ b/arch/x86/kernel/kprobes/common.h
@@ -75,11 +75,11 @@ extern unsigned long recover_probed_instruction(kprobe_opcode_t *buf,
  * Copy an instruction and adjust the displacement if the instruction
  * uses the %rip-relative addressing mode.
  */
-extern int __copy_instruction(u8 *dest, u8 *src, struct insn *insn);
+extern int __copy_instruction(u8 *dest, u8 *src, u8 *real, struct insn *insn);
 
 /* Generate a relative-jump/call instruction */
-extern void synthesize_reljump(void *from, void *to);
-extern void synthesize_relcall(void *from, void *to);
+extern void synthesize_reljump(void *dest, void *from, void *to);
+extern void synthesize_relcall(void *dest, void *from, void *to);
 
 #ifdef CONFIG_OPTPROBES
 extern int setup_detour_execution(struct kprobe *p, struct pt_regs *regs,
				  int reenter);
diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index f0153714ddac..b48e0efd668e 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -119,29 +119,29 @@ struct kretprobe_blackpoint kretprobe_blacklist[] = {
 const int kretprobe_blacklist_size = ARRAY_SIZE(kretprobe_blacklist);
 
 static nokprobe_inline void
-__synthesize_relative_insn(void *from, void *to, u8 op)
+__synthesize_relative_insn(void *dest, void *from, void *to, u8 op)
 {
	struct __arch_relative_insn {
		u8 op;
		s32 raddr;
	} __packed *insn;
 
-	insn = (struct __arch_relative_insn *)from;
+	insn = (struct __arch_relative_insn *)dest;
	insn->raddr = (s32)((long)(to) - ((long)(from) + 5));
	insn->op = op;
 }
 
 /* Insert a jump instruction at address 'from', which jumps to address 'to'.*/
-void synthesize_reljump(void *from, void *to)
+void synthesize_reljump(void *dest, void *from, void *to)
 {
-	__synthesize_relative_insn(from, to, RELATIVEJUMP_OPCODE);
+	__synthesize_relative_insn(dest, from, to, RELATIVEJUMP_OPCODE);
 }
 NOKPROBE_SYMBOL(synthesize_reljump);
 
 /* Insert a call instruction at address 'from', which calls address 'to'.*/
-void synthesize_relcall(void *from, void *to)
+void synthesize_relcall(void *dest, void *from, void *to)
 {
-	__synthesize_relative_insn(from, to, RELATIVECALL_OPCODE);
+	__synthesize_relative_insn(dest, from, to, RELATIVECALL_OPCODE);
 }
 NOKPROBE_SYMBOL(synthesize_relcall);
 
@@ -346,10 +346,11 @@ static int is_IF_modifier(kprobe_opcode_t *insn)
 /*
  * Copy an instruction with recovering modified instruction by kprobes
  * and adjust the displacement if the instruction uses the %rip-relative
- * addressing mode.
+ * addressing mode. Note that since @real will be the final place of copied
+ * instruction, displacement must be adjust by @real, not @dest.
  * This returns the length of copied instruction, or 0 if it has an error.
  */
-int __copy_instruction(u8 *dest, u8 *src, struct insn *insn)
+int __copy_instruction(u8 *dest, u8 *src, u8 *real, struct insn *insn)
 {
	kprobe_opcode_t buf[MAX_INSN_SIZE];
	unsigned long recovered_insn =
@@ -387,11 +388,11 @@ int __copy_instruction(u8 *dest, u8 *src, struct insn *insn)
		 * have given.
		 */
		newdisp = (u8 *) src + (s64) insn->displacement.value
-			  - (u8 *) dest;
+			  - (u8 *) real;
		if ((s64) (s32) newdisp != newdisp) {
			pr_err("Kprobes error: new displacement does not fit into s32 (%llx)\n", newdisp);
			pr_err("\tSrc: %p, Dest: %p, old disp: %x\n",
-				src, dest, insn->displacement.value);
+				src, real, insn->displacement.value);
			return 0;
		}
		disp = (u8 *) dest + insn_offset_displacement(insn);
@@ -402,20 +403,38 @@ int __copy_instruction(u8 *dest, u8 *src, struct insn *insn)
 }
 
 /* Prepare reljump right after instruction to boost */
-static void prepare_boost(struct kprobe *p, struct insn *insn)
+static int prepare_boost(kprobe_opcode_t *buf, struct kprobe *p,
+			 struct insn *insn)
 {
+	int len = insn->length;
+
	if (can_boost(insn, p->addr) &&
-	    MAX_INSN_SIZE - insn->length >= RELATIVEJUMP_SIZE) {
+	    MAX_INSN_SIZE - len >= RELATIVEJUMP_SIZE) {
		/*
		 * These instructions can be executed directly if it
		 * jumps back to correct address.
		 */
-		synthesize_reljump(p->ainsn.insn + insn->length,
+		synthesize_reljump(buf + len, p->ainsn.insn + len,
				   p->addr + insn->length);
+		len += RELATIVEJUMP_SIZE;
		p->ainsn.boostable = true;
	} else {
		p->ainsn.boostable = false;
	}
+
+	return len;
+}
+
+/* Make page to RO mode when allocate it */
+void *alloc_insn_page(void)
+{
+	void *page;
+
+	page = module_alloc(PAGE_SIZE);
+	if (page)
+		set_memory_ro((unsigned long)page & PAGE_MASK, 1);
+
+	return page;
 }
 
 /* Recover page to RW mode before releasing it */
@@ -429,12 +448,11 @@ void free_insn_page(void *page)
 static int arch_copy_kprobe(struct kprobe *p)
 {
	struct insn insn;
+	kprobe_opcode_t buf[MAX_INSN_SIZE];
	int len;
 
-	set_memory_rw((unsigned long)p->ainsn.insn & PAGE_MASK, 1);
-
	/* Copy an instruction with recovering if other optprobe modifies it.*/
-	len = __copy_instruction(p->ainsn.insn, p->addr, &insn);
+	len = __copy_instruction(buf, p->addr, p->ainsn.insn, &insn);
	if (!len)
		return -EINVAL;
 
@@ -442,15 +460,16 @@ static int arch_copy_kprobe(struct kprobe *p)
	 * __copy_instruction can modify the displacement of the instruction,
	 * but it doesn't affect boostable check.
	 */
-	prepare_boost(p, &insn);
-
-	set_memory_ro((unsigned long)p->ainsn.insn & PAGE_MASK, 1);
+	len = prepare_boost(buf, p, &insn);
 
	/* Check whether the instruction modifies Interrupt Flag or not */
-	p->ainsn.if_modifier = is_IF_modifier(p->ainsn.insn);
+	p->ainsn.if_modifier = is_IF_modifier(buf);
 
	/* Also, displacement change doesn't affect the first byte */
-	p->opcode = p->ainsn.insn[0];
+	p->opcode = buf[0];
+
+	/* OK, write back the instruction(s) into ROX insn buffer */
+	text_poke(p->ainsn.insn, buf, len);
 
	return 0;
 }
diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index 4f98aad38237..22e65f0b8b34 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -184,13 +184,13 @@ optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
 }
 NOKPROBE_SYMBOL(optimized_callback);
 
-static int copy_optimized_instructions(u8 *dest, u8 *src)
+static int copy_optimized_instructions(u8 *dest, u8 *src, u8 *real)
 {
	struct insn insn;
	int len = 0, ret;
 
	while (len < RELATIVEJUMP_SIZE) {
-		ret = __copy_instruction(dest + len, src + len, &insn);
+		ret = __copy_instruction(dest + len, src + len, real, &insn);
		if (!ret || !can_boost(&insn, src + len))
			return -EINVAL;
		len += ret;
@@ -343,57 +343,66 @@ void arch_remove_optimized_kprobe(struct optimized_kprobe *op)
 int arch_prepare_optimized_kprobe(struct optimized_kprobe *op,
				  struct kprobe *__unused)
 {
-	u8 *buf;
-	int ret;
+	u8 *buf = NULL, *slot;
+	int ret, len;
	long rel;
 
	if (!can_optimize((unsigned long)op->kp.addr))
		return -EILSEQ;
 
-	op->optinsn.insn = get_optinsn_slot();
-	if (!op->optinsn.insn)
+	buf = kzalloc(MAX_OPTINSN_SIZE, GFP_KERNEL);
+	if (!buf)
		return -ENOMEM;
 
+	op->optinsn.insn = slot = get_optinsn_slot();
+	if (!slot) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
	/*
	 * Verify if the address gap is in 2GB range, because this uses
	 * a relative jump.
	 */
-	rel = (long)op->optinsn.insn - (long)op->kp.addr + RELATIVEJUMP_SIZE;
+	rel = (long)slot - (long)op->kp.addr + RELATIVEJUMP_SIZE;
	if (abs(rel) > 0x7fffffff) {
-		__arch_remove_optimized_kprobe(op, 0);
-		return -ERANGE;
+		ret = -ERANGE;
+		goto err;
	}
 
-	buf = (u8 *)op->optinsn.insn;
-	set_memory_rw((unsigned long)buf & PAGE_MASK, 1);
+	/* Copy arch-dep-instance from template */
+	memcpy(buf, &optprobe_template_entry, TMPL_END_IDX);
 
	/* Copy instructions into the out-of-line buffer */
-	ret = copy_optimized_instructions(buf + TMPL_END_IDX, op->kp.addr);
-	if (ret < 0) {
-		__arch_remove_optimized_kprobe(op, 0);
-		return ret;
-	}
+	ret = copy_optimized_instructions(buf + TMPL_END_IDX, op->kp.addr,
					  slot + TMPL_END_IDX);
+	if (ret < 0)
+		goto err;
	op->optinsn.size = ret;
-
-	/* Copy arch-dep-instance from template */
-	memcpy(buf, &optprobe_template_entry, TMPL_END_IDX);
+	len = TMPL_END_IDX + op->optinsn.size;
 
	/* Set probe information */
	synthesize_set_arg1(buf + TMPL_MOVE_IDX, (unsigned long)op);
 
	/* Set probe function call */
-	synthesize_relcall(buf + TMPL_CALL_IDX, optimized_callback);
+	synthesize_relcall(buf + TMPL_CALL_IDX,
+			   slot + TMPL_CALL_IDX, optimized_callback);
 
	/* Set returning jmp instruction at the tail of out-of-line buffer */
-	synthesize_reljump(buf + TMPL_END_IDX + op->optinsn.size,
+	synthesize_reljump(buf + len, slot + len,
			   (u8 *)op->kp.addr + op->optinsn.size);
-
-	set_memory_ro((unsigned long)buf & PAGE_MASK, 1);
-
-	flush_icache_range((unsigned long) buf,
-			   (unsigned long) buf + TMPL_END_IDX +
-			   op->optinsn.size + RELATIVEJUMP_SIZE);
-	return 0;
+	len += RELATIVEJUMP_SIZE;
+
+	/* We have to use text_poke for instuction buffer because it is RO */
+	text_poke(slot, buf, len);
+	ret = 0;
+out:
+	kfree(buf);
+	return ret;
+
+err:
+	__arch_remove_optimized_kprobe(op, 0);
+	goto out;
 }
 
 /*
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index a1606a4224e1..15fba7fe57c8 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -117,7 +117,7 @@ enum kprobe_slot_state {
	SLOT_USED = 2,
 };
 
-static void *alloc_insn_page(void)
+void __weak *alloc_insn_page(void)
 {
	return module_alloc(PAGE_SIZE);
 }

^ permalink raw reply related	[flat|nested] 7+ messages in thread
* [tip:perf/core] kprobes/x86: Make insn buffer always ROX and use text_poke()
  2017-08-18  8:24 ` [PATCH -tip v3 1/2] kprobes/x86: Make insn buffer always ROX and use text_poke Masami Hiramatsu
@ 2017-09-28 10:51   ` tip-bot for Masami Hiramatsu
  0 siblings, 0 replies; 7+ messages in thread
From: tip-bot for Masami Hiramatsu @ 2017-09-28 10:51 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: mhiramat, tglx, mingo, hpa, torvalds, davem, ananth,
	anil.s.keshavamurthy, linux-kernel, peterz

Commit-ID:  63fef14fc98a8b4fad777fd3bef4d068802b3f14
Gitweb:     https://git.kernel.org/tip/63fef14fc98a8b4fad777fd3bef4d068802b3f14
Author:     Masami Hiramatsu <mhiramat@kernel.org>
AuthorDate: Fri, 18 Aug 2017 17:24:00 +0900
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Thu, 28 Sep 2017 09:23:03 +0200

kprobes/x86: Make insn buffer always ROX and use text_poke()

Make insn buffer always ROX and use text_poke() to write
the copied instructions instead of set_memory_*().
This makes instruction buffer stronger against other
kernel subsystems because there is no window time
to modify the buffer.

Suggested-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: David S . Miller <davem@davemloft.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/150304463032.17009.14195368040691676813.stgit@devbox
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/kernel/kprobes/common.h |  6 ++--
 arch/x86/kernel/kprobes/core.c   | 61 ++++++++++++++++++++++++-------------
 arch/x86/kernel/kprobes/opt.c    | 65 +++++++++++++++++++++++-----------------
 kernel/kprobes.c                 |  2 +-
 4 files changed, 81 insertions(+), 53 deletions(-)

diff --git a/arch/x86/kernel/kprobes/common.h b/arch/x86/kernel/kprobes/common.h
index db2182d..e2c2a19 100644
--- a/arch/x86/kernel/kprobes/common.h
+++ b/arch/x86/kernel/kprobes/common.h
@@ -75,11 +75,11 @@ extern unsigned long recover_probed_instruction(kprobe_opcode_t *buf,
  * Copy an instruction and adjust the displacement if the instruction
  * uses the %rip-relative addressing mode.
  */
-extern int __copy_instruction(u8 *dest, u8 *src, struct insn *insn);
+extern int __copy_instruction(u8 *dest, u8 *src, u8 *real, struct insn *insn);
 
 /* Generate a relative-jump/call instruction */
-extern void synthesize_reljump(void *from, void *to);
-extern void synthesize_relcall(void *from, void *to);
+extern void synthesize_reljump(void *dest, void *from, void *to);
+extern void synthesize_relcall(void *dest, void *from, void *to);
 
 #ifdef CONFIG_OPTPROBES
 extern int setup_detour_execution(struct kprobe *p, struct pt_regs *regs,
				  int reenter);
diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index f015371..b48e0ef 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -119,29 +119,29 @@ struct kretprobe_blackpoint kretprobe_blacklist[] = {
 const int kretprobe_blacklist_size = ARRAY_SIZE(kretprobe_blacklist);
 
 static nokprobe_inline void
-__synthesize_relative_insn(void *from, void *to, u8 op)
+__synthesize_relative_insn(void *dest, void *from, void *to, u8 op)
 {
	struct __arch_relative_insn {
		u8 op;
		s32 raddr;
	} __packed *insn;
 
-	insn = (struct __arch_relative_insn *)from;
+	insn = (struct __arch_relative_insn *)dest;
	insn->raddr = (s32)((long)(to) - ((long)(from) + 5));
	insn->op = op;
 }
 
 /* Insert a jump instruction at address 'from', which jumps to address 'to'.*/
-void synthesize_reljump(void *from, void *to)
+void synthesize_reljump(void *dest, void *from, void *to)
 {
-	__synthesize_relative_insn(from, to, RELATIVEJUMP_OPCODE);
+	__synthesize_relative_insn(dest, from, to, RELATIVEJUMP_OPCODE);
 }
 NOKPROBE_SYMBOL(synthesize_reljump);
 
 /* Insert a call instruction at address 'from', which calls address 'to'.*/
-void synthesize_relcall(void *from, void *to)
+void synthesize_relcall(void *dest, void *from, void *to)
 {
-	__synthesize_relative_insn(from, to, RELATIVECALL_OPCODE);
+	__synthesize_relative_insn(dest, from, to, RELATIVECALL_OPCODE);
 }
 NOKPROBE_SYMBOL(synthesize_relcall);
 
@@ -346,10 +346,11 @@ static int is_IF_modifier(kprobe_opcode_t *insn)
 /*
  * Copy an instruction with recovering modified instruction by kprobes
  * and adjust the displacement if the instruction uses the %rip-relative
- * addressing mode.
+ * addressing mode. Note that since @real will be the final place of copied
+ * instruction, displacement must be adjust by @real, not @dest.
  * This returns the length of copied instruction, or 0 if it has an error.
  */
-int __copy_instruction(u8 *dest, u8 *src, struct insn *insn)
+int __copy_instruction(u8 *dest, u8 *src, u8 *real, struct insn *insn)
 {
	kprobe_opcode_t buf[MAX_INSN_SIZE];
	unsigned long recovered_insn =
@@ -387,11 +388,11 @@ int __copy_instruction(u8 *dest, u8 *src, struct insn *insn)
		 * have given.
		 */
		newdisp = (u8 *) src + (s64) insn->displacement.value
-			  - (u8 *) dest;
+			  - (u8 *) real;
		if ((s64) (s32) newdisp != newdisp) {
			pr_err("Kprobes error: new displacement does not fit into s32 (%llx)\n", newdisp);
			pr_err("\tSrc: %p, Dest: %p, old disp: %x\n",
-				src, dest, insn->displacement.value);
+				src, real, insn->displacement.value);
			return 0;
		}
		disp = (u8 *) dest + insn_offset_displacement(insn);
@@ -402,20 +403,38 @@ int __copy_instruction(u8 *dest, u8 *src, struct insn *insn)
 }
 
 /* Prepare reljump right after instruction to boost */
-static void prepare_boost(struct kprobe *p, struct insn *insn)
+static int prepare_boost(kprobe_opcode_t *buf, struct kprobe *p,
+			 struct insn *insn)
 {
+	int len = insn->length;
+
	if (can_boost(insn, p->addr) &&
-	    MAX_INSN_SIZE - insn->length >= RELATIVEJUMP_SIZE) {
+	    MAX_INSN_SIZE - len >= RELATIVEJUMP_SIZE) {
		/*
		 * These instructions can be executed directly if it
		 * jumps back to correct address.
		 */
-		synthesize_reljump(p->ainsn.insn + insn->length,
+		synthesize_reljump(buf + len, p->ainsn.insn + len,
				   p->addr + insn->length);
+		len += RELATIVEJUMP_SIZE;
		p->ainsn.boostable = true;
	} else {
		p->ainsn.boostable = false;
	}
+
+	return len;
+}
+
+/* Make page to RO mode when allocate it */
+void *alloc_insn_page(void)
+{
+	void *page;
+
+	page = module_alloc(PAGE_SIZE);
+	if (page)
+		set_memory_ro((unsigned long)page & PAGE_MASK, 1);
+
+	return page;
 }
 
 /* Recover page to RW mode before releasing it */
@@ -429,12 +448,11 @@ void free_insn_page(void *page)
 static int arch_copy_kprobe(struct kprobe *p)
 {
	struct insn insn;
+	kprobe_opcode_t buf[MAX_INSN_SIZE];
	int len;
 
-	set_memory_rw((unsigned long)p->ainsn.insn & PAGE_MASK, 1);
-
	/* Copy an instruction with recovering if other optprobe modifies it.*/
-	len = __copy_instruction(p->ainsn.insn, p->addr, &insn);
+	len = __copy_instruction(buf, p->addr, p->ainsn.insn, &insn);
	if (!len)
		return -EINVAL;
 
@@ -442,15 +460,16 @@ static int arch_copy_kprobe(struct kprobe *p)
	 * __copy_instruction can modify the displacement of the instruction,
	 * but it doesn't affect boostable check.
	 */
-	prepare_boost(p, &insn);
-
-	set_memory_ro((unsigned long)p->ainsn.insn & PAGE_MASK, 1);
+	len = prepare_boost(buf, p, &insn);
 
	/* Check whether the instruction modifies Interrupt Flag or not */
-	p->ainsn.if_modifier = is_IF_modifier(p->ainsn.insn);
+	p->ainsn.if_modifier = is_IF_modifier(buf);
 
	/* Also, displacement change doesn't affect the first byte */
-	p->opcode = p->ainsn.insn[0];
+	p->opcode = buf[0];
+
+	/* OK, write back the instruction(s) into ROX insn buffer */
+	text_poke(p->ainsn.insn, buf, len);
 
	return 0;
 }
diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index 4f98aad..22e65f0 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -184,13 +184,13 @@ optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
 }
 NOKPROBE_SYMBOL(optimized_callback);
 
-static int copy_optimized_instructions(u8 *dest, u8 *src)
+static int copy_optimized_instructions(u8 *dest, u8 *src, u8 *real)
 {
	struct insn insn;
	int len = 0, ret;
 
	while (len < RELATIVEJUMP_SIZE) {
-		ret = __copy_instruction(dest + len, src + len, &insn);
+		ret = __copy_instruction(dest + len, src + len, real, &insn);
		if (!ret || !can_boost(&insn, src + len))
			return -EINVAL;
		len += ret;
@@ -343,57 +343,66 @@ void arch_remove_optimized_kprobe(struct optimized_kprobe *op)
 int arch_prepare_optimized_kprobe(struct optimized_kprobe *op,
				  struct kprobe *__unused)
 {
-	u8 *buf;
-	int ret;
+	u8 *buf = NULL, *slot;
+	int ret, len;
	long rel;
 
	if (!can_optimize((unsigned long)op->kp.addr))
		return -EILSEQ;
 
-	op->optinsn.insn = get_optinsn_slot();
-	if (!op->optinsn.insn)
+	buf = kzalloc(MAX_OPTINSN_SIZE, GFP_KERNEL);
+	if (!buf)
		return -ENOMEM;
 
+	op->optinsn.insn = slot = get_optinsn_slot();
+	if (!slot) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
	/*
	 * Verify if the address gap is in 2GB range, because this uses
	 * a relative jump.
	 */
-	rel = (long)op->optinsn.insn - (long)op->kp.addr + RELATIVEJUMP_SIZE;
+	rel = (long)slot - (long)op->kp.addr + RELATIVEJUMP_SIZE;
	if (abs(rel) > 0x7fffffff) {
-		__arch_remove_optimized_kprobe(op, 0);
-		return -ERANGE;
+		ret = -ERANGE;
+		goto err;
	}
 
-	buf = (u8 *)op->optinsn.insn;
-	set_memory_rw((unsigned long)buf & PAGE_MASK, 1);
+	/* Copy arch-dep-instance from template */
+	memcpy(buf, &optprobe_template_entry, TMPL_END_IDX);
 
	/* Copy instructions into the out-of-line buffer */
-	ret = copy_optimized_instructions(buf + TMPL_END_IDX, op->kp.addr);
-	if (ret < 0) {
-		__arch_remove_optimized_kprobe(op, 0);
-		return ret;
-	}
+	ret = copy_optimized_instructions(buf + TMPL_END_IDX, op->kp.addr,
					  slot + TMPL_END_IDX);
+	if (ret < 0)
+		goto err;
	op->optinsn.size = ret;
-
-	/* Copy arch-dep-instance from template */
-	memcpy(buf, &optprobe_template_entry, TMPL_END_IDX);
+	len = TMPL_END_IDX + op->optinsn.size;
 
	/* Set probe information */
	synthesize_set_arg1(buf + TMPL_MOVE_IDX, (unsigned long)op);
 
	/* Set probe function call */
-	synthesize_relcall(buf + TMPL_CALL_IDX, optimized_callback);
+	synthesize_relcall(buf + TMPL_CALL_IDX,
+			   slot + TMPL_CALL_IDX, optimized_callback);
 
	/* Set returning jmp instruction at the tail of out-of-line buffer */
-	synthesize_reljump(buf + TMPL_END_IDX + op->optinsn.size,
+	synthesize_reljump(buf + len, slot + len,
			   (u8 *)op->kp.addr + op->optinsn.size);
-
-	set_memory_ro((unsigned long)buf & PAGE_MASK, 1);
-
-	flush_icache_range((unsigned long) buf,
-			   (unsigned long) buf + TMPL_END_IDX +
-			   op->optinsn.size + RELATIVEJUMP_SIZE);
-	return 0;
+	len += RELATIVEJUMP_SIZE;
+
+	/* We have to use text_poke for instuction buffer because it is RO */
+	text_poke(slot, buf, len);
+	ret = 0;
+out:
+	kfree(buf);
+	return ret;
+
+err:
+	__arch_remove_optimized_kprobe(op, 0);
+	goto out;
 }
 
 /*
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index a1606a4..15fba7f 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -117,7 +117,7 @@ enum kprobe_slot_state {
	SLOT_USED = 2,
 };
 
-static void *alloc_insn_page(void)
+void __weak *alloc_insn_page(void)
 {
	return module_alloc(PAGE_SIZE);
 }

^ permalink raw reply related	[flat|nested] 7+ messages in thread
* [PATCH -tip v3 2/2] kprobes/x86: Remove addressof operators
  2017-08-18  8:22 [PATCH -tip v3 0/2] kprobes/x86: Another way to make insn buffer RO and cleanup Masami Hiramatsu
  2017-08-18  8:24 ` [PATCH -tip v3 1/2] kprobes/x86: Make insn buffer always ROX and use text_poke Masami Hiramatsu
@ 2017-08-18  8:25 ` Masami Hiramatsu
  2017-09-28 10:52   ` [tip:perf/core] kprobes/x86: Remove addressof() operators tip-bot for Masami Hiramatsu
  2017-08-28  2:28 ` [PATCH -tip v3 0/2] kprobes/x86: Another way to make insn buffer RO and cleanup Masami Hiramatsu
  2 siblings, 1 reply; 7+ messages in thread
From: Masami Hiramatsu @ 2017-08-18  8:25 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Ingo Molnar, H . Peter Anvin, x86, Masami Hiramatsu,
	Ananth N Mavinakayanahalli, Anil S Keshavamurthy,
	David S . Miller, linux-kernel

Since commit 54a7d50b9205 ("x86: mark kprobe templates as character
arrays, not single characters") changes optprobe_template_* to
arrays, we can remove addressof operators from those symbols.

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
---
 arch/x86/include/asm/kprobes.h |  4 ++--
 arch/x86/kernel/kprobes/opt.c  |  8 ++++----
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/kprobes.h b/arch/x86/include/asm/kprobes.h
index 6cf65437b5e5..9f2e3102e0bb 100644
--- a/arch/x86/include/asm/kprobes.h
+++ b/arch/x86/include/asm/kprobes.h
@@ -58,8 +58,8 @@ extern __visible kprobe_opcode_t optprobe_template_call[];
 extern __visible kprobe_opcode_t optprobe_template_end[];
 #define MAX_OPTIMIZED_LENGTH (MAX_INSN_SIZE + RELATIVE_ADDR_SIZE)
 #define MAX_OPTINSN_SIZE				\
-	(((unsigned long)&optprobe_template_end -	\
-	  (unsigned long)&optprobe_template_entry) +	\
+	(((unsigned long)optprobe_template_end -	\
+	  (unsigned long)optprobe_template_entry) +	\
	 MAX_OPTIMIZED_LENGTH + RELATIVEJUMP_SIZE)
 
 extern const int kretprobe_blacklist_size;
diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index 22e65f0b8b34..0cae7c0f32ec 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -142,11 +142,11 @@ void optprobe_template_func(void);
 STACK_FRAME_NON_STANDARD(optprobe_template_func);
 
 #define TMPL_MOVE_IDX \
-	((long)&optprobe_template_val - (long)&optprobe_template_entry)
+	((long)optprobe_template_val - (long)optprobe_template_entry)
 #define TMPL_CALL_IDX \
-	((long)&optprobe_template_call - (long)&optprobe_template_entry)
+	((long)optprobe_template_call - (long)optprobe_template_entry)
 #define TMPL_END_IDX \
-	((long)&optprobe_template_end - (long)&optprobe_template_entry)
+	((long)optprobe_template_end - (long)optprobe_template_entry)
 
 #define INT3_SIZE sizeof(kprobe_opcode_t)
 
@@ -371,7 +371,7 @@ int arch_prepare_optimized_kprobe(struct optimized_kprobe *op,
	}
 
	/* Copy arch-dep-instance from template */
-	memcpy(buf, &optprobe_template_entry, TMPL_END_IDX);
+	memcpy(buf, optprobe_template_entry, TMPL_END_IDX);
 
	/* Copy instructions into the out-of-line buffer */
	ret = copy_optimized_instructions(buf + TMPL_END_IDX, op->kp.addr,

^ permalink raw reply related	[flat|nested] 7+ messages in thread
* [tip:perf/core] kprobes/x86: Remove addressof() operators
  2017-08-18  8:25 ` [PATCH -tip v3 2/2] kprobes/x86: Remove addressof operators Masami Hiramatsu
@ 2017-09-28 10:52   ` tip-bot for Masami Hiramatsu
  0 siblings, 0 replies; 7+ messages in thread
From: tip-bot for Masami Hiramatsu @ 2017-09-28 10:52 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: peterz, mingo, torvalds, ananth, mhiramat, davem, tglx, hpa,
	linux-kernel, anil.s.keshavamurthy

Commit-ID:  a8976fc84b644e3b567ea2bafad3b53b21ed6b6c
Gitweb:     https://git.kernel.org/tip/a8976fc84b644e3b567ea2bafad3b53b21ed6b6c
Author:     Masami Hiramatsu <mhiramat@kernel.org>
AuthorDate: Fri, 18 Aug 2017 17:25:08 +0900
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Thu, 28 Sep 2017 09:23:03 +0200

kprobes/x86: Remove addressof() operators

The following commit:

  54a7d50b9205 ("x86: mark kprobe templates as character arrays, not single characters")

changed optprobe_template_* to arrays, so we can remove the addressof()
operators from those symbols.

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: David S . Miller <davem@davemloft.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/150304469798.17009.15886717935027472863.stgit@devbox
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/include/asm/kprobes.h | 4 ++--
 arch/x86/kernel/kprobes/opt.c  | 8 ++++----
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/kprobes.h b/arch/x86/include/asm/kprobes.h
index 6cf6543..9f2e310 100644
--- a/arch/x86/include/asm/kprobes.h
+++ b/arch/x86/include/asm/kprobes.h
@@ -58,8 +58,8 @@ extern __visible kprobe_opcode_t optprobe_template_call[];
 extern __visible kprobe_opcode_t optprobe_template_end[];
 #define MAX_OPTIMIZED_LENGTH (MAX_INSN_SIZE + RELATIVE_ADDR_SIZE)
 #define MAX_OPTINSN_SIZE				\
-	(((unsigned long)&optprobe_template_end -	\
-	  (unsigned long)&optprobe_template_entry) +	\
+	(((unsigned long)optprobe_template_end -	\
+	  (unsigned long)optprobe_template_entry) +	\
	 MAX_OPTIMIZED_LENGTH + RELATIVEJUMP_SIZE)
 
 extern const int kretprobe_blacklist_size;
diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index 22e65f0..0cae7c0 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -142,11 +142,11 @@ void optprobe_template_func(void);
 STACK_FRAME_NON_STANDARD(optprobe_template_func);
 
 #define TMPL_MOVE_IDX \
-	((long)&optprobe_template_val - (long)&optprobe_template_entry)
+	((long)optprobe_template_val - (long)optprobe_template_entry)
 #define TMPL_CALL_IDX \
-	((long)&optprobe_template_call - (long)&optprobe_template_entry)
+	((long)optprobe_template_call - (long)optprobe_template_entry)
 #define TMPL_END_IDX \
-	((long)&optprobe_template_end - (long)&optprobe_template_entry)
+	((long)optprobe_template_end - (long)optprobe_template_entry)
 
 #define INT3_SIZE sizeof(kprobe_opcode_t)
 
@@ -371,7 +371,7 @@ int arch_prepare_optimized_kprobe(struct optimized_kprobe *op,
	}
 
	/* Copy arch-dep-instance from template */
-	memcpy(buf, &optprobe_template_entry, TMPL_END_IDX);
+	memcpy(buf, optprobe_template_entry, TMPL_END_IDX);
 
	/* Copy instructions into the out-of-line buffer */
	ret = copy_optimized_instructions(buf + TMPL_END_IDX, op->kp.addr,

^ permalink raw reply related	[flat|nested] 7+ messages in thread
* Re: [PATCH -tip v3 0/2] kprobes/x86: Another way to make insn buffer RO and cleanup
  2017-08-18  8:22 [PATCH -tip v3 0/2] kprobes/x86: Another way to make insn buffer RO and cleanup Masami Hiramatsu
  2017-08-18  8:24 ` [PATCH -tip v3 1/2] kprobes/x86: Make insn buffer always ROX and use text_poke Masami Hiramatsu
  2017-08-18  8:25 ` [PATCH -tip v3 2/2] kprobes/x86: Remove addressof operators Masami Hiramatsu
@ 2017-08-28  2:28 ` Masami Hiramatsu
  2017-09-11 10:28   ` Masami Hiramatsu
  2 siblings, 1 reply; 7+ messages in thread
From: Masami Hiramatsu @ 2017-08-28  2:28 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Ingo Molnar, Ingo Molnar, H . Peter Anvin, x86,
	Ananth N Mavinakayanahalli, Anil S Keshavamurthy,
	David S . Miller, linux-kernel

Hi Ingo,

What would you think about fixing this way?
This makes the instruction buffer always RO and
poke it via text_poke.

Thank you,

On Fri, 18 Aug 2017 17:22:54 +0900
Masami Hiramatsu <mhiramat@kernel.org> wrote:

> Hi,
>
> This series modifies how to handle RO insn buffer and
> cleans up addressof operators.
>
> The 1st patch changes the RO insn buffer handling: instead
> of using set_memory_ro/rw to modify the buffer, it prepares
> new instructions in another buffer and write it with
> text_poke() as suggested by Ingo Molnar (Thanks!).
> Since the text_poke() is safely modifying code by
> mapping alias pages, it can write RO pages.
> This also override alloc_insn_page() so that it returns
> ROX page directly.
>
> The 2nd one is not changed. It is a cleanup patch
> to remove addressof operators ("&") since
> it is meaningless anymore.
>
> V3 has just a following update:
>  - [1/2] Not to just add set_memory_ro(), introduce new
>    patch to change the way to handle RO pages.
>
> Thanks,
>
> ---
>
> Masami Hiramatsu (2):
>       kprobes/x86: Make insn buffer always ROX and use text_poke
>       kprobes/x86: Remove addressof operators
>
>
>  arch/x86/include/asm/kprobes.h   |  4 +-
>  arch/x86/kernel/kprobes/common.h |  6 ++-
>  arch/x86/kernel/kprobes/core.c   | 61 +++++++++++++++++++++------------
>  arch/x86/kernel/kprobes/opt.c    | 71 +++++++++++++++++++++-----------------
>  kernel/kprobes.c                 |  2 +
>  5 files changed, 86 insertions(+), 58 deletions(-)
>
> --
> Masami Hiramatsu <mhiramat@kernel.org>

--
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 7+ messages in thread
* Re: [PATCH -tip v3 0/2] kprobes/x86: Another way to make insn buffer RO and cleanup
  2017-08-28  2:28 ` [PATCH -tip v3 0/2] kprobes/x86: Another way to make insn buffer RO and cleanup Masami Hiramatsu
@ 2017-09-11 10:28   ` Masami Hiramatsu
  0 siblings, 0 replies; 7+ messages in thread
From: Masami Hiramatsu @ 2017-09-11 10:28 UTC (permalink / raw)
To: Ingo Molnar
Cc: Masami Hiramatsu, Ingo Molnar, H . Peter Anvin, x86,
	Ananth N Mavinakayanahalli, Anil S Keshavamurthy,
	David S . Miller, linux-kernel

Hello Ingo,

Ping?

Could you give me a comment?
Since current code has an obvious bug, it should be fixed anyway.

Thank you,

On Mon, 28 Aug 2017 11:28:04 +0900
Masami Hiramatsu <mhiramat@kernel.org> wrote:

> Hi Ingo,
> 
> What would you think about fixing this way?
> This makes the instruction buffer always RO and
> poke it via text_poke.
> 
> Thank you,
> 
> On Fri, 18 Aug 2017 17:22:54 +0900
> Masami Hiramatsu <mhiramat@kernel.org> wrote:
> 
> > Hi,
> > 
> > This series modifies how to handle RO insn buffer and
> > cleans up addressof operators.
> > 
> > The 1st patch changes the RO insn buffer handling: instead
> > of using set_memory_ro/rw to modify the buffer, it prepares
> > new instructions in another buffer and write it with
> > text_poke() as suggested by Ingo Molnar (Thanks!).
> > Since the text_poke() is safely modifying code by
> > mapping alias pages, it can write RO pages.
> > This also override alloc_insn_page() so that it returns
> > ROX page directly.
> > 
> > The 2nd one is not changed. It is a cleanup patch
> > to remove addressof operators ("&") since
> > it is meaningless anymore.
> > 
> > V3 has just a following update:
> > - [1/2] Not to just add set_memory_ro(), introduce new
> >   patch to change the way to handle RO pages.
> > 
> > Thanks,
> > 
> > ---
> > 
> > Masami Hiramatsu (2):
> >       kprobes/x86: Make insn buffer always ROX and use text_poke
> >       kprobes/x86: Remove addressof operators
> > 
> > 
> >  arch/x86/include/asm/kprobes.h   |    4 +-
> >  arch/x86/kernel/kprobes/common.h |    6 ++-
> >  arch/x86/kernel/kprobes/core.c   |   61 +++++++++++++++++++++------------
> >  arch/x86/kernel/kprobes/opt.c    |   71 +++++++++++++++++++++-----------------
> >  kernel/kprobes.c                 |    2 +
> >  5 files changed, 86 insertions(+), 58 deletions(-)
> > 
> > -- 
> > Masami Hiramatsu <mhiramat@kernel.org>
> 
> 
> -- 
> Masami Hiramatsu <mhiramat@kernel.org>

-- 
Masami Hiramatsu <mhiramat@kernel.org>
end of thread, other threads: [~2017-09-28 10:55 UTC | newest]

Thread overview: 7+ messages -- links below jump to the message on this page:
2017-08-18  8:22 [PATCH -tip v3 0/2] kprobes/x86: Another way to make insn buffer RO and cleanup Masami Hiramatsu
2017-08-18  8:24 ` [PATCH -tip v3 1/2] kprobes/x86: Make insn buffer always ROX and use text_poke Masami Hiramatsu
2017-09-28 10:51   ` [tip:perf/core] kprobes/x86: Make insn buffer always ROX and use text_poke() tip-bot for Masami Hiramatsu
2017-08-18  8:25 ` [PATCH -tip v3 2/2] kprobes/x86: Remove addressof operators Masami Hiramatsu
2017-09-28 10:52   ` [tip:perf/core] kprobes/x86: Remove addressof() operators tip-bot for Masami Hiramatsu
2017-08-28  2:28 ` [PATCH -tip v3 0/2] kprobes/x86: Another way to make insn buffer RO and cleanup Masami Hiramatsu
2017-09-11 10:28   ` Masami Hiramatsu