From mboxrd@z Thu Jan  1 00:00:00 1970
From: Andi Kleen
To: tglx@linutronix.de
Cc: dwmw@amazon.co.uk, thomas.lendacky@amd.com, peterz@infradead.org,
    torvalds@linux-foundation.org, luto@amacapital.net, x86@kernel.org,
    gregkh@linux-foundation.org, linux-kernel@vger.kernel.org, Andi Kleen
Subject: [PATCH 2/2]
 x86/retpoline: Fill return buffer on interrupt return to kernel
Date: Tue, 23 Jan 2018 14:39:16 -0800
Message-Id: <20180123223916.26904-3-andi@firstfloor.org>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20180123223916.26904-1-andi@firstfloor.org>
References: <20180123223916.26904-1-andi@firstfloor.org>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Andi Kleen

Interrupts can have rather deep call chains on top of the original
call chain.

Fill the return buffer on Skylake when returning from an interrupt
or NMI to the kernel, to avoid return buffer underflows later.

This only needs to be done when returning to the kernel, so
interrupts that interrupt user space are not affected.

The patch also does the same for returns from exceptions to the
kernel. This is useful because get_user page faults can also have
deep call chains; both cases are unified in the same code path.

This patch changes the code for both 32-bit and 64-bit.

Signed-off-by: Andi Kleen
---
 arch/x86/entry/entry_32.S | 15 ++++++++++++---
 arch/x86/entry/entry_64.S | 20 ++++++++++++++++++++
 2 files changed, 32 insertions(+), 3 deletions(-)

diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index 60c4c342316c..77cf739d018e 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -65,7 +65,6 @@
 # define preempt_stop(clobbers)	DISABLE_INTERRUPTS(clobbers); TRACE_IRQS_OFF
 #else
 # define preempt_stop(clobbers)
-# define resume_kernel		restore_all
 #endif
 
 .macro TRACE_IRQS_IRET
@@ -346,8 +345,16 @@ ENTRY(resume_userspace)
 	jmp	restore_all
 END(ret_from_exception)
 
-#ifdef CONFIG_PREEMPT
 ENTRY(resume_kernel)
+	/*
+	 * Interrupts/faults could cause the return buffer of the CPU
+	 * to overflow, which would lead to an underflow later,
+	 * which may lead to an uncontrolled indirect branch.
+	 * Fill the return buffer when returning to the kernel.
+	 */
+	FILL_RETURN_BUFFER %eax, RSB_FILL_LOOPS, X86_FEATURE_RSB_UNDERFLOW
+
+#ifdef CONFIG_PREEMPT
 	DISABLE_INTERRUPTS(CLBR_ANY)
 .Lneed_resched:
 	cmpl	$0, PER_CPU_VAR(__preempt_count)
@@ -356,8 +363,10 @@ ENTRY(resume_kernel)
 	jz	restore_all
 	call	preempt_schedule_irq
 	jmp	.Lneed_resched
-END(resume_kernel)
+#else
+	jmp	restore_all
 #endif
+END(resume_kernel)
 
 GLOBAL(__begin_SYSENTER_singlestep_region)
 /*
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 63f4320602a3..ec36af4b0b36 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -789,6 +789,14 @@ retint_kernel:
 	TRACE_IRQS_IRETQ
 
 GLOBAL(restore_regs_and_return_to_kernel)
+	/*
+	 * Interrupts/faults could cause the return buffer of the CPU
+	 * to overflow, which would lead to an underflow later,
+	 * which may lead to an uncontrolled indirect branch.
+	 * Fill the return buffer when returning to the kernel.
+	 */
+	FILL_RETURN_BUFFER %rax, RSB_FILL_LOOPS, X86_FEATURE_RSB_UNDERFLOW
+
 #ifdef CONFIG_DEBUG_ENTRY
 	/* Assert that pt_regs indicates kernel mode. */
 	testb	$3, CS(%rsp)
@@ -1657,6 +1665,10 @@ nested_nmi:
 nested_nmi_out:
 	popq	%rdx
 
+	/*
+	 * No need to clear the return buffer here because the outer NMI
+	 * will do it, and we assume two NMIs will not overflow the return buffer.
+	 */
 	/* We are returning to kernel mode, so this cannot result in a fault. */
 	iretq
 
@@ -1754,6 +1766,14 @@ end_repeat_nmi:
 nmi_swapgs:
 	SWAPGS_UNSAFE_STACK
 nmi_restore:
+	/*
+	 * NMI could cause the return buffer of the CPU
+	 * to overflow, which would lead to an underflow later,
+	 * which may lead to an uncontrolled indirect branch.
+	 * Fill the return buffer when returning to the kernel.
+	 */
+	FILL_RETURN_BUFFER %rax, RSB_FILL_LOOPS, X86_FEATURE_RSB_UNDERFLOW
+
 	POP_EXTRA_REGS
 	POP_C_REGS
-- 
2.14.3