From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Masami Hiramatsu,
	Thomas Gleixner, David Woodhouse, Andi Kleen, Peter Zijlstra,
	Ananth N Mavinakayanahalli, Arjan van de Ven
Subject: [PATCH 4.14 84/89] kprobes/x86: Disable optimizing on the function jumps to indirect thunk
Date: Mon, 22 Jan 2018 09:46:04 +0100
Message-Id: <20180122084002.911688066@linuxfoundation.org>
X-Mailer: git-send-email 2.16.0
In-Reply-To: <20180122083954.683903493@linuxfoundation.org>
References: <20180122083954.683903493@linuxfoundation.org>
User-Agent: quilt/0.65
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

4.14-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Masami Hiramatsu

commit c86a32c09f8ced67971a2310e3b0dda4d1749007 upstream.

Since indirect jump instructions will be replaced by jumps to
__x86_indirect_thunk_*, those jmp instructions must be treated as
indirect jumps. Since optprobe prohibits optimizing probes in functions
that use an indirect jump, it also needs to find functions that jump to
__x86_indirect_thunk_* and disable optimization for them.

Add a check that the jump target address is between
__indirect_thunk_start/end when optimizing a kprobe.
Signed-off-by: Masami Hiramatsu
Signed-off-by: Thomas Gleixner
Acked-by: David Woodhouse
Cc: Andi Kleen
Cc: Peter Zijlstra
Cc: Ananth N Mavinakayanahalli
Cc: Arjan van de Ven
Cc: Greg Kroah-Hartman
Link: https://lkml.kernel.org/r/151629212062.10241.6991266100233002273.stgit@devbox
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/kernel/kprobes/opt.c |   23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)

--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -40,6 +40,7 @@
 #include <asm/debugreg.h>
 #include <asm/set_memory.h>
 #include <asm/sections.h>
+#include <asm/nospec-branch.h>
 
 #include "common.h"
 
@@ -205,7 +206,7 @@ static int copy_optimized_instructions(u
 }
 
 /* Check whether insn is indirect jump */
-static int insn_is_indirect_jump(struct insn *insn)
+static int __insn_is_indirect_jump(struct insn *insn)
 {
 	return ((insn->opcode.bytes[0] == 0xff &&
 		(X86_MODRM_REG(insn->modrm.value) & 6) == 4) || /* Jump */
@@ -239,6 +240,26 @@ static int insn_jump_into_range(struct i
 	return (start <= target && target <= start + len);
 }
 
+static int insn_is_indirect_jump(struct insn *insn)
+{
+	int ret = __insn_is_indirect_jump(insn);
+
+#ifdef CONFIG_RETPOLINE
+	/*
+	 * Jump to x86_indirect_thunk_* is treated as an indirect jump.
+	 * Note that even with CONFIG_RETPOLINE=y, the kernel compiled with
+	 * older gcc may use indirect jump. So we add this check instead of
+	 * replace indirect-jump check.
+	 */
+	if (!ret)
+		ret = insn_jump_into_range(insn,
+				(unsigned long)__indirect_thunk_start,
+				(unsigned long)__indirect_thunk_end -
+				(unsigned long)__indirect_thunk_start);
+#endif
+	return ret;
+}
+
 /* Decode whole function to ensure any instructions don't jump into target */
 static int can_optimize(unsigned long paddr)
 {
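
To make the added check easier to follow, here is a minimal standalone
sketch (plain user-space C, not kernel code) of the same idea: the target
of a near relative jmp is its end address plus the signed 32-bit
displacement, and optimization is refused when that target lands inside
the retpoline thunk section. The thunk_start/thunk_end bounds, the
instruction address and the displacement below are made-up stand-ins for
__indirect_thunk_start/__indirect_thunk_end and for a decoded struct insn;
in the kernel the real values come from the linker script and the x86
instruction decoder.

	#include <stdio.h>
	#include <stdint.h>

	/* Made-up bounds standing in for __indirect_thunk_start/end. */
	static const unsigned long thunk_start = 0xffffffff81c02000UL;
	static const unsigned long thunk_end   = 0xffffffff81c02100UL;

	/* Range test mirroring insn_jump_into_range(): target in [start, start + len]? */
	static int jump_into_range(unsigned long target, unsigned long start,
				   unsigned long len)
	{
		return start <= target && target <= start + len;
	}

	int main(void)
	{
		/* A hypothetical 5-byte "jmp rel32" and its displacement. */
		unsigned long insn_addr = 0xffffffff81000000UL;
		unsigned long insn_len  = 5;
		int32_t disp            = 0x00c02020;

		/* Relative jumps are resolved from the end of the instruction. */
		unsigned long target = insn_addr + insn_len + (long)disp;

		if (jump_into_range(target, thunk_start, thunk_end - thunk_start))
			printf("%#lx hits the thunk range: treat as indirect jump\n",
			       target);
		else
			printf("%#lx is outside the thunk range\n", target);
		return 0;
	}

As the comment in the patch notes, this range test is added on top of the
existing opcode/ModRM test rather than replacing it, because a kernel built
with an older gcc can still contain plain indirect jumps even when
CONFIG_RETPOLINE=y.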