From mboxrd@z Thu Jan  1 00:00:00 1970
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Masami Hiramatsu,
 Thomas Gleixner, David Woodhouse, Andi Kleen, Peter Zijlstra,
 Ananth N Mavinakayanahalli, Arjan van de Ven, Greg Kroah-Hartman
Subject: [PATCH 4.4 51/53] kprobes/x86: Disable optimizing on the function jumps to indirect thunk
Date: Mon, 22 Jan 2018 09:40:43 +0100
Message-Id: <20180122083912.814825288@linuxfoundation.org>
X-Mailer: git-send-email 2.16.0
In-Reply-To: <20180122083910.299610926@linuxfoundation.org>
References: <20180122083910.299610926@linuxfoundation.org>
User-Agent: quilt/0.65
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
X-Mailing-List: linux-kernel@vger.kernel.org

4.4-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Masami Hiramatsu

commit c86a32c09f8ced67971a2310e3b0dda4d1749007 upstream.

Since indirect jump instructions will be replaced by jumps to
__x86_indirect_thunk_*, those jmp instructions must be treated as
indirect jumps. Since optprobe prohibits optimizing probes in functions
that use an indirect jump, it also needs to find functions that jump to
__x86_indirect_thunk_* and disable optimization for them.

Add a check that the jump target address is between
__indirect_thunk_start/end when optimizing a kprobe.
Signed-off-by: Masami Hiramatsu
Signed-off-by: Thomas Gleixner
Acked-by: David Woodhouse
Cc: Andi Kleen
Cc: Peter Zijlstra
Cc: Ananth N Mavinakayanahalli
Cc: Arjan van de Ven
Cc: Greg Kroah-Hartman
Link: https://lkml.kernel.org/r/151629212062.10241.6991266100233002273.stgit@devbox
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/kernel/kprobes/opt.c |   23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)

--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -36,6 +36,7 @@
 #include <asm/alternative.h>
 #include <asm/insn.h>
 #include <asm/debugreg.h>
+#include <asm/nospec-branch.h>
 
 #include "common.h"
 
@@ -191,7 +192,7 @@ static int copy_optimized_instructions(u
 }
 
 /* Check whether insn is indirect jump */
-static int insn_is_indirect_jump(struct insn *insn)
+static int __insn_is_indirect_jump(struct insn *insn)
 {
 	return ((insn->opcode.bytes[0] == 0xff &&
 		(X86_MODRM_REG(insn->modrm.value) & 6) == 4) || /* Jump */
@@ -225,6 +226,26 @@ static int insn_jump_into_range(struct i
 	return (start <= target && target <= start + len);
 }
 
+static int insn_is_indirect_jump(struct insn *insn)
+{
+	int ret = __insn_is_indirect_jump(insn);
+
+#ifdef CONFIG_RETPOLINE
+	/*
+	 * Jump to x86_indirect_thunk_* is treated as an indirect jump.
+	 * Note that even with CONFIG_RETPOLINE=y, the kernel compiled with
+	 * older gcc may use indirect jump. So we add this check instead of
+	 * replace indirect-jump check.
+	 */
+	if (!ret)
+		ret = insn_jump_into_range(insn,
+				(unsigned long)__indirect_thunk_start,
+				(unsigned long)__indirect_thunk_end -
+				(unsigned long)__indirect_thunk_start);
+#endif
+	return ret;
+}
+
 /* Decode whole function to ensure any instructions don't jump into target */
 static int can_optimize(unsigned long paddr)
 {