* [PATCH 1/1] Set prev_flush_start and flush_bio in an atomic way
@ 2020-12-10 6:33 Xiao Ni
From: Xiao Ni @ 2020-12-10 6:33 UTC (permalink / raw)
To: songliubraving, linux-raid; +Cc: ncroxon, heinzm, djeffery
A customer reported a crash caused by a flush request. A warning is triggered
before the crash.
/* new request after previous flush is completed */
if (ktime_after(req_start, mddev->prev_flush_start)) {
WARN_ON(mddev->flush_bio);
mddev->flush_bio = bio;
bio = NULL;
}
The WARN_ON is triggered. We use a spin lock to protect prev_flush_start and flush_bio
in md_flush_request, but there is no lock protection in md_submit_flush_data. Because
the compiler may reorder the write instructions, flush_bio can be set to NULL first.
For example, flush bio1 sets flush_bio to NULL first in md_submit_flush_data. An
interrupt, or VMware causing an extended stall, happens between the updates of
flush_bio and prev_flush_start. Because flush_bio is NULL, flush bio2 can take the
lock and submit to the underlying disks. Then flush bio1 updates prev_flush_start
after the interrupt or extended stall.
Then flush bio3 enters md_flush_request. Its start time req_start is later than
prev_flush_start, and flush_bio is not NULL (flush bio2 hasn't finished), so the
WARN_ON is triggered. INIT_WORK is then called again. INIT_WORK() re-initializes
the list pointers in the work_struct, which can result in a corrupted work list
with the work_struct queued a second time. With the work list corrupted, invalid
work items can be used, causing a crash in process_one_work.
We need to make sure only one flush bio can be handled at a time. So add a
spin lock in md_submit_flush_data to protect prev_flush_start and flush_bio
so that they are updated atomically.
Reviewed-by: David Jeffery <djeffery@redhat.com>
Signed-off-by: Xiao Ni <xni@redhat.com>
---
drivers/md/md.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/md/md.c b/drivers/md/md.c
index c42af46..2746d5c 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -639,8 +639,10 @@ static void md_submit_flush_data(struct work_struct *ws)
* could wait for this and below md_handle_request could wait for those
* bios because of suspend check
*/
+ spin_lock_irq(&mddev->lock);
mddev->prev_flush_start = mddev->start_flush;
mddev->flush_bio = NULL;
+ spin_unlock_irq(&mddev->lock);
wake_up(&mddev->sb_wait);
if (bio->bi_iter.bi_size == 0) {
--
2.7.5
* Re: [PATCH 1/1] Set prev_flush_start and flush_bio in an atomic way
From: Xiao Ni @ 2021-01-12 9:24 UTC (permalink / raw)
To: songliubraving, linux-raid; +Cc: ncroxon, heinzm, djeffery
Hi Song
Could you review this patch?
Regards
Xiao
On 12/10/2020 02:33 PM, Xiao Ni wrote:
> A customer reported a crash caused by a flush request. A warning is triggered
> before the crash.
>
> /* new request after previous flush is completed */
> if (ktime_after(req_start, mddev->prev_flush_start)) {
> WARN_ON(mddev->flush_bio);
> mddev->flush_bio = bio;
> bio = NULL;
> }
>
> The WARN_ON is triggered. We use a spin lock to protect prev_flush_start and flush_bio
> in md_flush_request, but there is no lock protection in md_submit_flush_data. Because
> the compiler may reorder the write instructions, flush_bio can be set to NULL first.
>
> For example, flush bio1 sets flush_bio to NULL first in md_submit_flush_data. An
> interrupt, or VMware causing an extended stall, happens between the updates of
> flush_bio and prev_flush_start. Because flush_bio is NULL, flush bio2 can take the
> lock and submit to the underlying disks. Then flush bio1 updates prev_flush_start
> after the interrupt or extended stall.
>
> Then flush bio3 enters md_flush_request. Its start time req_start is later than
> prev_flush_start, and flush_bio is not NULL (flush bio2 hasn't finished), so the
> WARN_ON is triggered. INIT_WORK is then called again. INIT_WORK() re-initializes
> the list pointers in the work_struct, which can result in a corrupted work list
> with the work_struct queued a second time. With the work list corrupted, invalid
> work items can be used, causing a crash in process_one_work.
>
> We need to make sure only one flush bio can be handled at a time. So add a
> spin lock in md_submit_flush_data to protect prev_flush_start and flush_bio
> so that they are updated atomically.
>
> Reviewed-by: David Jeffery <djeffery@redhat.com>
> Signed-off-by: Xiao Ni <xni@redhat.com>
> ---
> drivers/md/md.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/drivers/md/md.c b/drivers/md/md.c
> index c42af46..2746d5c 100644
> --- a/drivers/md/md.c
> +++ b/drivers/md/md.c
> @@ -639,8 +639,10 @@ static void md_submit_flush_data(struct work_struct *ws)
> * could wait for this and below md_handle_request could wait for those
> * bios because of suspend check
> */
> + spin_lock_irq(&mddev->lock);
> mddev->prev_flush_start = mddev->start_flush;
> mddev->flush_bio = NULL;
> + spin_unlock_irq(&mddev->lock);
> wake_up(&mddev->sb_wait);
>
> if (bio->bi_iter.bi_size == 0) {
* Re: [PATCH 1/1] Set prev_flush_start and flush_bio in an atomic way
From: Song Liu @ 2021-01-12 21:36 UTC (permalink / raw)
To: Xiao Ni
Cc: linux-raid@vger.kernel.org, ncroxon@redhat.com, heinzm@redhat.com,
djeffery@redhat.com
> On Jan 12, 2021, at 1:24 AM, Xiao Ni <xni@redhat.com> wrote:
>
> Hi Song
>
> Could you review this patch
>
I am sorry for the delay. I just applied it to md-fixes.
I made some changes to it:
1. Added an "md:" prefix to the commit log subject;
2. Adjusted the width of the commit log to 75 characters.
For future patches, please run ./scripts/checkpatch.pl on them first.
Thanks,
Song
> Regards
> Xiao
>
> On 12/10/2020 02:33 PM, Xiao Ni wrote:
>> A customer reported a crash caused by a flush request. A warning is triggered
>> before the crash.
>>
>> /* new request after previous flush is completed */
>> if (ktime_after(req_start, mddev->prev_flush_start)) {
>> WARN_ON(mddev->flush_bio);
>> mddev->flush_bio = bio;
>> bio = NULL;
>> }
>>
>> The WARN_ON is triggered. We use a spin lock to protect prev_flush_start and flush_bio
>> in md_flush_request, but there is no lock protection in md_submit_flush_data. Because
>> the compiler may reorder the write instructions, flush_bio can be set to NULL first.
>>
>> For example, flush bio1 sets flush_bio to NULL first in md_submit_flush_data. An
>> interrupt, or VMware causing an extended stall, happens between the updates of
>> flush_bio and prev_flush_start. Because flush_bio is NULL, flush bio2 can take the
>> lock and submit to the underlying disks. Then flush bio1 updates prev_flush_start
>> after the interrupt or extended stall.
>>
>> Then flush bio3 enters md_flush_request. Its start time req_start is later than
>> prev_flush_start, and flush_bio is not NULL (flush bio2 hasn't finished), so the
>> WARN_ON is triggered. INIT_WORK is then called again. INIT_WORK() re-initializes
>> the list pointers in the work_struct, which can result in a corrupted work list
>> with the work_struct queued a second time. With the work list corrupted, invalid
>> work items can be used, causing a crash in process_one_work.
>>
>> We need to make sure only one flush bio can be handled at a time. So add a
>> spin lock in md_submit_flush_data to protect prev_flush_start and flush_bio
>> so that they are updated atomically.
>>
>> Reviewed-by: David Jeffery <djeffery@redhat.com>
>> Signed-off-by: Xiao Ni <xni@redhat.com>
>> ---
>> drivers/md/md.c | 2 ++
>> 1 file changed, 2 insertions(+)
>>
>> diff --git a/drivers/md/md.c b/drivers/md/md.c
>> index c42af46..2746d5c 100644
>> --- a/drivers/md/md.c
>> +++ b/drivers/md/md.c
>> @@ -639,8 +639,10 @@ static void md_submit_flush_data(struct work_struct *ws)
>> * could wait for this and below md_handle_request could wait for those
>> * bios because of suspend check
>> */
>> + spin_lock_irq(&mddev->lock);
>> mddev->prev_flush_start = mddev->start_flush;
>> mddev->flush_bio = NULL;
>> + spin_unlock_irq(&mddev->lock);
>> wake_up(&mddev->sb_wait);
>> if (bio->bi_iter.bi_size == 0) {
>