From mboxrd@z Thu Jan  1 00:00:00 1970
From: Greg Kroah-Hartman
Subject: Re: [PATCH v4.9 STABLE] x86/fpu: Disable bottom halves while loading FPU registers
Date: Fri, 21 Dec 2018 17:29:05 +0100
Message-ID: <20181221162905.GB7797@kroah.com>
References: <20181204103726.750894136@linuxfoundation.org> <20181204103731.697870447@linuxfoundation.org> <5C07FC30.43601034@users.sourceforge.net> <20181205190020.GO29510@zn.tnic> <20181206105407.GH19891@kroah.com> <20181221162338.dir7z3c76kucm6uh@linutronix.de>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Cc: Borislav Petkov, Jari Ruusu, linux-kernel@vger.kernel.org, stable@vger.kernel.org, Ingo Molnar, Thomas Gleixner, Andy Lutomirski, Dave Hansen, "H. Peter Anvin", "Jason A. Donenfeld", kvm ML, Paolo Bonzini, Radim Krčmář, Rik van Riel, x86-ml
To: Sebastian Andrzej Siewior
Return-path:
Content-Disposition: inline
In-Reply-To: <20181221162338.dir7z3c76kucm6uh@linutronix.de>
Sender: stable-owner@vger.kernel.org
List-Id: kvm.vger.kernel.org

On Fri, Dec 21, 2018 at 05:23:38PM +0100, Sebastian Andrzej Siewior wrote:
> The sequence
> 
>   fpu->initialized = 1;  /* step A */
>   preempt_disable();     /* step B */
>   fpu__restore(fpu);
>   preempt_enable();
> 
> in __fpu__restore_sig() is racy with regard to a context switch.
> 
> For 32-bit frames, __fpu__restore_sig() prepares the FPU state within
> fpu->state. To ensure that a context switch (switch_fpu_prepare() in
> particular) does not modify fpu->state, it uses fpu__drop(), which sets
> fpu->initialized to 0.
> 
> After fpu->initialized is cleared, the CPU's FPU state is not saved
> to fpu->state during a context switch. The new state is loaded via
> fpu__restore(): it is copied into fpu->state from userland and
> sanity-checked. fpu->initialized is then set to 1 so that
> fpu__initialize(), which is part of fpu__restore(), does not
> overwrite the new state.
> 
> A context switch between step A and step B above would save the CPU's
> current FPU registers to fpu->state and overwrite the newly prepared
> state. This looks like a tiny race window, but the Kernel Test Robot
> reported it back in 2016, while the kernel still had lazy FPU support.
> Borislav Petkov made the link between that report and another patch
> that had been posted. Since the removal of lazy FPU support, this race
> goes unnoticed because the warning has been removed.
> 
> Disable bottom halves around the restore sequence to avoid the race.
> BH must be disabled because softirqs are allowed to run even with
> preemption disabled and might invoke kernel_fpu_begin(), e.g. when
> doing IPsec.
> 
> [ bp: massage commit message a bit. ]
> 
> Signed-off-by: Sebastian Andrzej Siewior
> Signed-off-by: Borislav Petkov
> Acked-by: Ingo Molnar
> Acked-by: Thomas Gleixner
> Cc: Andy Lutomirski
> Cc: Dave Hansen
> Cc: "H. Peter Anvin"
> Cc: "Jason A. Donenfeld"
> Cc: kvm ML
> Cc: Paolo Bonzini
> Cc: Radim Krčmář
> Cc: Rik van Riel
> Cc: stable@vger.kernel.org
> Cc: x86-ml
> Link: http://lkml.kernel.org/r/20181120102635.ddv3fvavxajjlfqk@linutronix.de
> Link: https://lkml.kernel.org/r/20160226074940.GA28911@pd.tnic
> Signed-off-by: Sebastian Andrzej Siewior
> ---
>  arch/x86/kernel/fpu/signal.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)

What is the git commit id of this patch upstream?

thanks,

greg k-h