* Re: [cip-dev] [PATCH 4.9 18/22] x86/fpu: Disable bottom halves while loading FPU registers
[not found] ` <20181228113127.414176417@linuxfoundation.org>
@ 2020-07-24 17:07 ` Jan Kiszka
2020-07-24 17:44 ` Greg Kroah-Hartman
0 siblings, 1 reply; 3+ messages in thread
From: Jan Kiszka @ 2020-07-24 17:07 UTC (permalink / raw)
To: Greg Kroah-Hartman, linux-kernel, Sebastian Andrzej Siewior
Cc: stable, Borislav Petkov, Ingo Molnar, Thomas Gleixner,
Andy Lutomirski, Dave Hansen, H. Peter Anvin, Jason A. Donenfeld,
kvm ML, Paolo Bonzini, Radim Krčmář, Rik van Riel,
x86-ml, cip-dev
On 28.12.18 12:52, Greg Kroah-Hartman wrote:
> 4.9-stable review patch. If anyone has any objections, please let me know.
>
> ------------------
>
> From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
>
> commit 68239654acafe6aad5a3c1dc7237e60accfebc03 upstream.
>
> The sequence
>
> fpu->initialized = 1; /* step A */
> preempt_disable(); /* step B */
> fpu__restore(fpu);
> preempt_enable();
>
> in __fpu__restore_sig() is racy in regard to a context switch.
>
> For 32bit frames, __fpu__restore_sig() prepares the FPU state within
> fpu->state. To ensure that a context switch (switch_fpu_prepare() in
> particular) does not modify fpu->state it uses fpu__drop() which sets
> fpu->initialized to 0.
>
> After fpu->initialized is cleared, the CPU's FPU state is not saved
> to fpu->state during a context switch. The new state is loaded via
> fpu__restore(): it is copied into fpu->state from userland and
> verified to be sane. fpu->initialized is then set to 1 so that
> fpu__initialize(), which is called as part of fpu__restore(), does
> not overwrite the new state.
>
> A context switch between step A and B above would save CPU's current FPU
> registers to fpu->state and overwrite the newly prepared state. This
> looks like a tiny race window but the Kernel Test Robot reported this
> back in 2016 while we had lazy FPU support. Borislav Petkov made the
> link between that report and another patch that has been posted. Since
> the removal of the lazy FPU support, this race goes unnoticed because
> the warning has been removed.
>
> Disable bottom halves around the restore sequence to avoid the race. BHs
> need to be disabled because a BH is allowed to run (even with preemption
> disabled) and might invoke kernel_fpu_begin(), e.g. when doing IPsec.
>
> [ bp: massage commit message a bit. ]
>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Signed-off-by: Borislav Petkov <bp@suse.de>
> Acked-by: Ingo Molnar <mingo@kernel.org>
> Acked-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: "Jason A. Donenfeld" <Jason@zx2c4.com>
> Cc: kvm ML <kvm@vger.kernel.org>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Radim Krčmář <rkrcmar@redhat.com>
> Cc: Rik van Riel <riel@surriel.com>
> Cc: stable@vger.kernel.org
> Cc: x86-ml <x86@kernel.org>
> Link: http://lkml.kernel.org/r/20181120102635.ddv3fvavxajjlfqk@linutronix.de
> Link: https://lkml.kernel.org/r/20160226074940.GA28911@pd.tnic
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> ---
> arch/x86/kernel/fpu/signal.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> --- a/arch/x86/kernel/fpu/signal.c
> +++ b/arch/x86/kernel/fpu/signal.c
> @@ -342,10 +342,10 @@ static int __fpu__restore_sig(void __use
> sanitize_restored_xstate(tsk, &env, xfeatures, fx_only);
> }
>
> + local_bh_disable();
> fpu->fpstate_active = 1;
> - preempt_disable();
> fpu__restore(fpu);
> - preempt_enable();
> + local_bh_enable();
>
> return err;
> } else {
>
>
Any reason why the backport stopped at 4.9 back then? I just debugged
this out of a 4.4 kernel, and it is needed there as well. I'm happy to
propose a backport; I would just appreciate a hint on whether the BH
protection is also needed there (my case was without BH).
Thanks,
Jan
--
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#5010): https://lists.cip-project.org/g/cip-dev/message/5010
Mute This Topic: https://lists.cip-project.org/mt/75770466/4520388
Group Owner: cip-dev+owner@lists.cip-project.org
Unsubscribe: https://lists.cip-project.org/g/cip-dev/leave/8129055/727948398/xyzzy [cip-dev@archiver.kernel.org]
-=-=-=-=-=-=-=-=-=-=-=-
* Re: [cip-dev] [PATCH 4.9 18/22] x86/fpu: Disable bottom halves while loading FPU registers
2020-07-24 17:07 ` [cip-dev] [PATCH 4.9 18/22] x86/fpu: Disable bottom halves while loading FPU registers Jan Kiszka
@ 2020-07-24 17:44 ` Greg Kroah-Hartman
2020-07-24 18:12 ` Sasha Levin
0 siblings, 1 reply; 3+ messages in thread
From: Greg Kroah-Hartman @ 2020-07-24 17:44 UTC (permalink / raw)
To: Jan Kiszka
Cc: linux-kernel, Sebastian Andrzej Siewior, stable, Borislav Petkov,
Ingo Molnar, Thomas Gleixner, Andy Lutomirski, Dave Hansen,
H. Peter Anvin, Jason A. Donenfeld, kvm ML, Paolo Bonzini,
Radim Krčmář, Rik van Riel, x86-ml, cip-dev
On Fri, Jul 24, 2020 at 07:07:06PM +0200, Jan Kiszka wrote:
> Any reason why the backport stopped at 4.9 back then? I just debugged this
> out of a 4.4 kernel, and it is needed there as well. I'm happy to propose a
> backport; I would just appreciate a hint on whether the BH protection is also
> needed there (my case was without BH).
You are asking about something we did back in 2018. I can't remember
what I did last week :)
If you provide a backport that works, I'll be glad to take it. The
current patch does not apply cleanly there at all.
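[Editor's illustration] Whether a patch applies cleanly to an older tree can be checked without touching it via `git apply --check`. The sketch below is a self-contained stand-in, not the real stable workflow: the repository contents, file layout, and patch file name are all invented.

```shell
# Build a throwaway repo that mimics the 4.4-era sequence, then dry-run the fix.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q repo
cd repo
mkdir -p arch/x86/kernel/fpu
cat > arch/x86/kernel/fpu/signal.c <<'EOF'
fpu->fpstate_active = 1;
preempt_disable();
fpu__restore(fpu);
preempt_enable();
EOF
git add -A
git -c user.email=you@example.com -c user.name=you commit -qm base
cat > fix.patch <<'EOF'
--- a/arch/x86/kernel/fpu/signal.c
+++ b/arch/x86/kernel/fpu/signal.c
@@ -1,4 +1,4 @@
+local_bh_disable();
 fpu->fpstate_active = 1;
-preempt_disable();
 fpu__restore(fpu);
-preempt_enable();
+local_bh_enable();
EOF
# --check reports success or failure without modifying the tree
git apply --check fix.patch && echo "applies cleanly"
```

If the context lines differ in the target tree, `git apply --check` fails and the patch needs a manual backport, which is exactly the situation described here.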
thanks,
greg k-h
* Re: [cip-dev] [PATCH 4.9 18/22] x86/fpu: Disable bottom halves while loading FPU registers
2020-07-24 17:44 ` Greg Kroah-Hartman
@ 2020-07-24 18:12 ` Sasha Levin
0 siblings, 0 replies; 3+ messages in thread
From: Sasha Levin @ 2020-07-24 18:12 UTC (permalink / raw)
To: Greg Kroah-Hartman
Cc: Jan Kiszka, linux-kernel, Sebastian Andrzej Siewior, stable,
Borislav Petkov, Ingo Molnar, Thomas Gleixner, Andy Lutomirski,
Dave Hansen, H. Peter Anvin, Jason A. Donenfeld, kvm ML,
Paolo Bonzini, Radim Krčmář, Rik van Riel, x86-ml,
cip-dev
On Fri, Jul 24, 2020 at 07:44:37PM +0200, Greg Kroah-Hartman wrote:
>On Fri, Jul 24, 2020 at 07:07:06PM +0200, Jan Kiszka wrote:
>> Any reason why the backport stopped at 4.9 back then? I just debugged this
>> out of a 4.4 kernel, and it is needed there as well. I'm happy to propose a
>> backport; I would just appreciate a hint on whether the BH protection is also
>> needed there (my case was without BH).
>
>You are asking about something we did back in 2018. I can't remember
>what I did last week :)
>
>If you provide a backport that works, I'll be glad to take it. The
>current patch does not apply cleanly there at all.
The conflict was due to a rename that 4.4 is missing: e4a81bfcaae1
("x86/fpu: Rename fpu::fpstate_active to fpu::initialized").
I've fixed up the patch and queued it for 4.4. Thanks for pointing it
out, Jan!
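[Editor's illustration] When the only conflict is a field rename like fpu::initialized vs. fpu::fpstate_active, a mechanical rewrite of the upstream patch often suffices. A minimal sketch, with invented file names and only a fragment of the hunk, not the actual stable-queue tooling:

```shell
# Fragment of the upstream (post-rename) patch, as an invented example file.
cat > upstream.patch <<'EOF'
+local_bh_disable();
 fpu->initialized = 1;
-preempt_disable();
EOF
# Rewrite the renamed field back to its pre-rename name for the older tree.
sed 's/fpu->initialized/fpu->fpstate_active/g' upstream.patch > backport.patch
grep -q 'fpu->fpstate_active = 1' backport.patch && echo ok
```

The resulting patch still has to be build-tested against the target tree; the rename is only the textual half of the backport.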
--
Thanks,
Sasha