From: Jens Axboe <axboe@kernel.dk>
To: Timothy Pearson <tpearson@raptorengineering.com>,
regressions <regressions@lists.linux.dev>,
Michael Ellerman <mpe@ellerman.id.au>,
npiggin <npiggin@gmail.com>,
christophe leroy <christophe.leroy@csgroup.eu>,
linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
Subject: Re: [PATCH] powerpc: Don't clobber fr0/vs0 during fp|altivec register save
Date: Sun, 19 Nov 2023 06:14:50 -0700
Message-ID: <42b9fdd7-2939-4ffc-8e18-4996948b19f7@kernel.dk>
In-Reply-To: <1105090647.48374193.1700351103830.JavaMail.zimbra@raptorengineeringinc.com>
On 11/18/23 4:45 PM, Timothy Pearson wrote:
> During floating point and vector save to thread data, fr0/vs0 are clobbered
> by the FPSCR/VSCR store routine. This leads to userspace register corruption
> and application data corruption / crash under the following rare conditions:
>
> * A userspace thread is executing with VSX/FP mode enabled
> * The userspace thread is making active use of fr0 and/or vs0
> * An IPI is taken in kernel mode, forcing the userspace thread to reschedule
> * The userspace thread is interrupted by the IPI before accessing data it
> previously stored in fr0/vs0
> * The thread being switched in by the IPI has a pending signal
>
> If these exact criteria are met, then the following sequence happens:
>
> * The existing thread FP storage is still valid before the IPI, due to a
> prior call to save_fpu() or store_fp_state(). Note that the current
> fr0/vs0 registers have been clobbered, so the FP/VSX state in registers
> is now invalid pending a call to restore_fp()/restore_altivec().
> * IPI -- FP/VSX register state remains invalid
> * interrupt_exit_user_prepare_main() calls do_notify_resume(),
> due to the pending signal
> * do_notify_resume() eventually calls save_fpu() via giveup_fpu(), which
> merrily reads and saves the invalid FP/VSX state to thread local storage.
> * interrupt_exit_user_prepare_main() calls restore_math(), writing the invalid
> FP/VSX state back to registers.
> * Execution is released to userspace, and the application crashes or corrupts
> data.
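In other words, the failure reduces to a stale-scratch-register pattern. Below is a minimal userspace model of the sequence above -- a simplified sketch only, since the real save path is powerpc assembly in arch/powerpc/kernel/fpu.S; the struct, variables, and function names here are illustrative stand-ins, not kernel APIs, and the comments map each step to the behavior the description above implies:

#include <stdio.h>

static double fr0;          /* stand-in for the live fr0 register */
static double fpscr = 0.5;  /* stand-in for the FPSCR status register */

struct fp_state_model {
	double fpr0;   /* saved copy of fr0 */
	double fpscr;  /* saved copy of FPSCR */
};

/* Models the buggy save path: the FP registers are stored to thread
 * data, then fr0 is used as scratch for the FPSCR store, clobbering
 * the live register while the saved copy stays valid. */
static void save_fpu_model(struct fp_state_model *t)
{
	t->fpr0 = fr0;   /* fr0..fr31 saved -- the copy is correct */
	fr0 = fpscr;     /* FPSCR read via fr0 -- live fr0 clobbered */
	t->fpscr = fr0;  /* FPSCR stored to thread data */
}

int main(void)
{
	struct fp_state_model t;

	fr0 = 42.0;          /* userspace data live in fr0 */
	save_fpu_model(&t);  /* prior save: t.fpr0 == 42.0, still fine */

	/* IPI with a pending signal: do_notify_resume() ends up in
	 * save_fpu() again, before any restore_fp() has reloaded fr0,
	 * so the scratch value is what gets written out... */
	save_fpu_model(&t);  /* t.fpr0 == 0.5: userspace data lost */

	/* ...and restore_math() hands the corrupted value back. */
	printf("fr0 after resume: %g (expected 42)\n", t.fpr0);
	return 0;
}

The fix implied by the subject line is for the save path to preserve fr0/vs0 around that scratch use (save the live value before reading FPSCR/VSCR and reload it afterwards), so that save_fpu()/save_altivec() no longer invalidate the register state they leave behind.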
What an epic bug hunt! Hats off to you for seeing it through and getting
to the bottom of it. Particularly difficult, as the commit that made the
bug easier to trigger was in no way related to where the actual bug was.

I ran this on the VM I have access to, and it survived 2x500 iterations.
Happy to call that good:
Tested-by: Jens Axboe <axboe@kernel.dk>
--
Jens Axboe
Thread overview: 7+ messages
2023-11-18 23:45 [PATCH] powerpc: Don't clobber fr0/vs0 during fp|altivec register save Timothy Pearson
2023-11-19 0:12 ` Timothy Pearson
2023-11-19 6:08 ` Linux regression tracking (Thorsten Leemhuis)
2023-11-19 7:02 ` Gabriel Paubert
2023-11-19 13:14 ` Jens Axboe [this message]
2023-11-29 10:30 ` Salvatore Bonaccorso
2023-11-29 10:47 ` Christophe Leroy