Message-Id: <20190220151400.787843957@linutronix.de>
Date: Wed, 20 Feb 2019 16:08:02 +0100
From: Thomas Gleixner
References: <20190220150753.665964899@linutronix.de>
Subject: [patch V2 09/10] MDS basics+ 9
To: speck@linutronix.de

To avoid the expensive CPU buffer flushing on every transition from kernel to
user space, it is desirable to provide a conditional mitigation mode. Provide
the infrastructure which is required to implement this:

 - A static key to enable conditional mode CPU buffer flushing.

 - A per-CPU variable which indicates that CPU buffers need to be flushed on
   return to user space. The variable is defined next to __preempt_count so it
   ends up in a cache line which is required on return to user space anyway.

 - The conditional flush mechanics on return to user space.

 - A helper function to set the flush request. It lives in processor.h for now
   to avoid include hell, but might move to a separate header.
Signed-off-by: Thomas Gleixner
---
 arch/x86/entry/common.c              |    6 ++++++
 arch/x86/include/asm/nospec-branch.h |    3 +++
 arch/x86/include/asm/processor.h     |   13 +++++++++++++
 arch/x86/kernel/cpu/bugs.c           |    1 +
 arch/x86/kernel/cpu/common.c         |    7 +++++++
 5 files changed, 30 insertions(+)

--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -183,6 +183,12 @@ static void exit_to_usermode_loop(struct

 static inline void mds_user_clear_cpu_buffers(void)
 {
+	if (static_branch_likely(&mds_user_clear_cond)) {
+		if (__this_cpu_read(mds_cond_clear)) {
+			__this_cpu_write(mds_cond_clear, 0);
+			mds_clear_cpu_buffers();
+		}
+	}
 	if (static_branch_likely(&mds_user_clear_always))
 		mds_clear_cpu_buffers();
 }
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include

 /*
  * Fill the CPU return stack buffer.
@@ -319,7 +320,9 @@ DECLARE_STATIC_KEY_FALSE(switch_mm_cond_
 DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);

 DECLARE_STATIC_KEY_FALSE(mds_user_clear_always);
+DECLARE_STATIC_KEY_FALSE(mds_user_clear_cond);
 DECLARE_STATIC_KEY_FALSE(mds_idle_clear);
+DECLARE_PER_CPU(unsigned int, mds_cond_clear);

 #include
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -24,6 +24,7 @@ struct vm86;
 #include
 #include
 #include
+#include
 #include
 #include

@@ -998,4 +999,16 @@ enum mds_mitigations {
 	MDS_MITIGATION_HOPE,
 };

+/**
+ * mds_request_buffer_clear - Set the request to clear CPU buffers
+ *
+ * This is invoked from contexts which identify a necessity to clear CPU
+ * buffers on the next return to user space.
+ */
+static inline void mds_request_buffer_clear(void)
+{
+	if (static_branch_likely(&mds_user_clear_cond))
+		this_cpu_write(mds_cond_clear, 1);
+}
+
 #endif /* _ASM_X86_PROCESSOR_H */
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -66,6 +66,7 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_always

 /* Control MDS CPU buffer clear before returning to user space */
 DEFINE_STATIC_KEY_FALSE(mds_user_clear_always);
+DEFINE_STATIC_KEY_FALSE(mds_user_clear_cond);
 /* Control MDS CPU buffer clear before idling (halt, mwait) */
 DEFINE_STATIC_KEY_FALSE(mds_idle_clear);
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -1544,6 +1545,9 @@ DEFINE_PER_CPU(unsigned int, irq_count)
 DEFINE_PER_CPU(int, __preempt_count) = INIT_PREEMPT_COUNT;
 EXPORT_PER_CPU_SYMBOL(__preempt_count);

+/* Indicator for return to user space or VMENTER to clear CPU buffers */
+DEFINE_PER_CPU(unsigned int, mds_cond_clear);
+
 /* May not be marked __init: used by software suspend */
 void syscall_init(void)
 {
@@ -1617,6 +1621,9 @@ EXPORT_PER_CPU_SYMBOL(current_task);
 DEFINE_PER_CPU(int, __preempt_count) = INIT_PREEMPT_COUNT;
 EXPORT_PER_CPU_SYMBOL(__preempt_count);

+/* Indicator for return to user space or VMENTER to clear CPU buffers */
+DEFINE_PER_CPU(unsigned int, mds_cond_clear);
+
 /*
  * On x86_32, vm86 modifies tss.sp0, so sp0 isn't a reliable way to find
 * the top of the kernel stack. Use an extra percpu variable to track the