From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <54F571C4.5050309@huawei.com>
Date: Tue, 3 Mar 2015 16:33:08 +0800
From: Wang Nan
To: Masami Hiramatsu
CC: , , , ,
Subject: Re: [PATCH] kprobes: x86: cleanup __recover_probed_insn().
References: <1425364776-5833-1-git-send-email-wangnan0@huawei.com> <54F56F02.2070505@hitachi.com>
In-Reply-To: <54F56F02.2070505@hitachi.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2015/3/3 16:21, Masami Hiramatsu wrote:
> (2015/03/03 15:39), Wang Nan wrote:
>> Since the kernel kconfig forbids turning off KPROBES_ON_FTRACE for x86, we
>> don't need to consider the situation of a kprobe probing at an ftrace
>> location. The only exception should be an early kprobe with
>> KPROBES_ON_FTRACE enabled. However, it is still impossible for it to get
>> tainted by ftrace if it is registered before ftrace is ready.
>>
>> Thus this patch removes the unneeded logic to make the code simpler.
>
> Nak.
> Please check why this was introduced (and try to verify by reproducing it).
> https://lkml.org/lkml/2015/2/20/208
>
> Thank you,
>

Thank you for your response.
I didn't realize this was newly introduced code. It breaks my early kprobes
on ftrace code. I posted a fix for it: https://lkml.org/lkml/2015/3/3/4
Do you have any suggestions on it?

Thank you.

>>
>> Signed-off-by: Wang Nan
>> ---
>>  arch/x86/kernel/kprobes/core.c | 62 ++++++++----------------------------
>>  1 file changed, 12 insertions(+), 50 deletions(-)
>>
>> diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
>> index 4e3d5a9..88a99c0 100644
>> --- a/arch/x86/kernel/kprobes/core.c
>> +++ b/arch/x86/kernel/kprobes/core.c
>> @@ -219,55 +219,6 @@ retry:
>>  	}
>>  }
>>
>> -static unsigned long
>> -__recover_probed_insn(kprobe_opcode_t *buf, unsigned long addr)
>> -{
>> -	struct kprobe *kp;
>> -	unsigned long faddr;
>> -
>> -	kp = get_kprobe((void *)addr);
>> -	faddr = ftrace_location(addr);
>> -	/*
>> -	 * Addresses inside the ftrace location are refused by
>> -	 * arch_check_ftrace_location(). Something went terribly wrong
>> -	 * if such an address is checked here.
>> -	 */
>> -	if (WARN_ON(faddr && faddr != addr))
>> -		return 0UL;
>> -	/*
>> -	 * Use the current code if it is not modified by Kprobe
>> -	 * and it cannot be modified by ftrace.
>> -	 */
>> -	if (!kp && !faddr)
>> -		return addr;
>> -
>> -	/*
>> -	 * Basically, kp->ainsn.insn has the original instruction.
>> -	 * However, a RIP-relative instruction cannot be single-stepped
>> -	 * at a different place, so __copy_instruction() tweaks the
>> -	 * displacement of that instruction. In that case, we can't
>> -	 * recover the instruction from kp->ainsn.insn.
>> -	 *
>> -	 * On the other hand, in case of a normal Kprobe, kp->opcode has
>> -	 * a copy of the first byte of the probed instruction, which is
>> -	 * overwritten by int3. Since the instruction at kp->addr is not
>> -	 * modified by kprobes except for the first byte, we can recover
>> -	 * the original instruction from it and kp->opcode.
>> -	 *
>> -	 * In case of Kprobes using ftrace, we do not have a copy of
>> -	 * the original instruction.
>> -	 * In fact, the ftrace location might
>> -	 * be modified at any time and could even be in an inconsistent
>> -	 * state. Fortunately, we know that the original code is the
>> -	 * ideal 5-byte long NOP.
>> -	 */
>> -	memcpy(buf, (void *)addr, MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
>> -	if (faddr)
>> -		memcpy(buf, ideal_nops[NOP_ATOMIC5], 5);
>> -	else
>> -		buf[0] = kp->opcode;
>> -	return (unsigned long)buf;
>> -}
>> -
>>  /*
>>   * Recover the probed instruction at addr for further analysis.
>>   * Caller must lock kprobes by kprobe_mutex, or disable preemption
>> @@ -282,7 +233,18 @@ unsigned long recover_probed_instruction(kprobe_opcode_t *buf, unsigned long addr)
>>  	if (__addr != addr)
>>  		return __addr;
>>
>> -	return __recover_probed_insn(buf, addr);
>> +	/*
>> +	 * If KPROBES_ON_FTRACE is off, we are not allowed to probe at
>> +	 * an ftrace location. If it is on, we should use
>> +	 * arm_kprobe_ftrace() and never get here. As a result, there
>> +	 * is no need to worry about conflicts between kprobes and
>> +	 * ftrace. The only exception should be early kprobes. However,
>> +	 * for such kprobes registered before ftrace is ready, it is
>> +	 * impossible to get a tainted instruction; such kprobes
>> +	 * registered after ftrace is ready will use
>> +	 * arm_kprobe_ftrace() and won't get here.
>> +	 */
>> +	return addr;
>>  }
>>
>>  /* Check if paddr is at an instruction boundary */
>>
>
>