From mboxrd@z Thu Jan  1 00:00:00 1970
Subject: [MODERATED] Re: [patch V2 03/10] MDS basics+ 3
References: <20190220150753.665964899@linutronix.de>
 <20190220151400.217101404@linutronix.de>
From: Andrew Cooper
Message-ID: <1edb2eec-d17f-eefd-4c96-3c5c3eb69d09@citrix.com>
Date: Thu, 21 Feb 2019 02:12:19 +0000
In-Reply-To: <20190220151400.217101404@linutronix.de>
To: speck@linutronix.de
List-ID:

On 20/02/2019 15:07, speck for Thomas Gleixner wrote:
> --- a/arch/x86/include/asm/nospec-branch.h
> +++ b/arch/x86/include/asm/nospec-branch.h
> @@ -318,6 +318,26 @@ DECLARE_STATIC_KEY_FALSE(switch_to_cond_
>  DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
>  DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
>  
> +#include <asm/segment.h>
> +
> +/**
> + * mds_clear_cpu_buffers - Mitigation for MDS vulnerability
> + *
> + * This uses the otherwise unused and obsolete VERW instruction in
> + * combination with microcode which triggers a CPU buffer flush when the
> + * instruction is executed.
> + */
> +static inline void mds_clear_cpu_buffers(void)
> +{
> +	static const u16 ds = __KERNEL_DS;

In Xen, I've added a note justifying the choice of selector, in the
expectation that people probably won't remember exactly why in 6 months'
time.
For least latency (allegedly to avoid a static prediction stall in
microcode), it should be a writeable data segment which is hot in the
cache, and being adjacent to __KERNEL_CS is a pretty good bet.

> +
> +	/*
> +	 * Has to be memory form, don't modify to use a register. VERW
> +	 * modifies ZF.

I don't understand why everyone is so concerned about VERW modifying
ZF.  It's not as if this fact is relevant anywhere that the mitigation
is liable to be used.

> +	 */
> +	asm volatile("verw %[ds]" : : "i" (0), [ds] "m" (ds) : "cc");

The "i" (0) isn't referenced in the assembly, and can be dropped.

On a tangent, have GCC or Clang made any indication that they're going
to stop assuming that all asm() statements clobber flags, and start
making the "cc" clobber necessary on x86 targets?

~Andrew