From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: from mail.linutronix.de (146.0.238.70:993) by
	crypto-ml.lab.linutronix.de with IMAP4-SSL for ; 19 Feb 2019 13:38:17 -0000
Received: from localhost ([127.0.0.1] helo=nanos.tec.linutronix.de) by
	Galois.linutronix.de with esmtp (Exim 4.80) (envelope-from ) id
	1gw5b1-0004AL-Nc for speck@linutronix.de; Tue, 19 Feb 2019 14:38:03 +0100
Message-Id: <20190219125346.141295571@linutronix.de>
Date: Tue, 19 Feb 2019 13:44:10 +0100
From: Thomas Gleixner 
References: <20190219124406.449727187@linutronix.de>
MIME-Version: 1.0
Subject: [patch 4/8] MDS basics 4
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
To: speck@linutronix.de
List-ID: 

Subject: [patch 4/8] x86/speculation/mds: Conditionally clear CPU buffers on idle entry
From: Thomas Gleixner 

Add a static key which controls the invocation of the CPU buffer clear
mechanism on idle entry. This is independent of other MDS mitigations
because the idle entry invocation to mitigate the potential leakage due to
store buffer repartitioning is only necessary on SMT systems.

Add the actual invocations to the different halt/mwait variants which
cover all usage sites. mwaitx is not patched as it's not available on
Intel CPUs.
Signed-off-by: Thomas Gleixner 
---
 arch/x86/include/asm/irqflags.h      |    4 ++++
 arch/x86/include/asm/mwait.h         |    7 +++++++
 arch/x86/include/asm/nospec-branch.h |    1 +
 arch/x86/kernel/cpu/bugs.c           |    2 ++
 4 files changed, 14 insertions(+)

--- a/arch/x86/include/asm/irqflags.h
+++ b/arch/x86/include/asm/irqflags.h
@@ -6,6 +6,8 @@
 
 #ifndef __ASSEMBLY__
 
+#include <asm/nospec-branch.h>
+
 /* Provide __cpuidle; we can't safely include <linux/cpu.h> */
 #define __cpuidle __attribute__((__section__(".cpuidle.text")))
 
@@ -54,11 +56,13 @@ static inline void native_irq_enable(voi
 
 static inline __cpuidle void native_safe_halt(void)
 {
+	mds_clear_cpu_buffers(&idle_mds_clear_cpu_buffers);
 	asm volatile("sti; hlt": : :"memory");
 }
 
 static inline __cpuidle void native_halt(void)
 {
+	mds_clear_cpu_buffers(&idle_mds_clear_cpu_buffers);
 	asm volatile("hlt": : :"memory");
 }
 
--- a/arch/x86/include/asm/mwait.h
+++ b/arch/x86/include/asm/mwait.h
@@ -6,6 +6,7 @@
 #include <linux/sched/idle.h>
 
 #include <asm/cpufeature.h>
+#include <asm/nospec-branch.h>
 
 #define MWAIT_SUBSTATE_MASK		0xf
 #define MWAIT_CSTATE_MASK		0xf
@@ -40,6 +41,8 @@ static inline void __monitorx(const void
 
 static inline void __mwait(unsigned long eax, unsigned long ecx)
 {
+	mds_clear_cpu_buffers(&idle_mds_clear_cpu_buffers);
+
 	/* "mwait %eax, %ecx;" */
 	asm volatile(".byte 0x0f, 0x01, 0xc9;"
 		     :: "a" (eax), "c" (ecx));
@@ -74,6 +77,8 @@ static inline void __mwait(unsigned long
 static inline void __mwaitx(unsigned long eax, unsigned long ebx,
 			    unsigned long ecx)
 {
+	/* No MDS buffer clear as this is AMD/HYGON only */
+
 	/* "mwaitx %eax, %ebx, %ecx;" */
 	asm volatile(".byte 0x0f, 0x01, 0xfb;"
 		     :: "a" (eax), "b" (ebx), "c" (ecx));
@@ -81,6 +86,8 @@ static inline void __mwaitx(unsigned lon
 
 static inline void __sti_mwait(unsigned long eax, unsigned long ecx)
 {
+	mds_clear_cpu_buffers(&idle_mds_clear_cpu_buffers);
+
 	trace_hardirqs_on();
 	/* "mwait %eax, %ecx;" */
 	asm volatile("sti; .byte 0x0f, 0x01, 0xc9;"
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -338,6 +338,7 @@ DECLARE_STATIC_KEY_FALSE(switch_mm_cond_
 DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
 
 DECLARE_STATIC_KEY_FALSE(user_mds_clear_cpu_buffers);
+DECLARE_STATIC_KEY_FALSE(idle_mds_clear_cpu_buffers);
 
 #endif /* __ASSEMBLY__ */
 
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -65,6 +65,8 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_always
 
 /* Control MDS CPU buffer clear before returning to user space */
 DEFINE_STATIC_KEY_FALSE(user_mds_clear_cpu_buffers);
+/* Control MDS CPU buffer clear before idling (halt, mwait) */
+DEFINE_STATIC_KEY_FALSE(idle_mds_clear_cpu_buffers);
 
 void __init check_bugs(void)
 {