* PATCH: md/raid1: sync_request_write() may complete r1_bio without rescheduling
@ 2012-07-16 14:55 Alexander Lyakas
From: Alexander Lyakas @ 2012-07-16 14:55 UTC (permalink / raw)
To: NeilBrown; +Cc: linux-raid
Hi Neil,
this is yet another issue I encountered; it is indirectly related to
the bad-blocks code, but I think it can be hit even when bad-blocks
logging is disabled.
Scenario:
- RAID1 with one device A, one device missing
- mdadm --manage /dev/mdX --add /dev/B (fresh device B added)
- recovery of B starts
Now at some point, end_sync_write() on B completes with an error. Then
the following can happen:
In sync_request_write() we do:
1/
/*
 * schedule writes
 */
atomic_set(&r1_bio->remaining, 1);
2/ then we schedule WRITEs, so for each WRITE scheduled we do:
atomic_inc(&r1_bio->remaining);
3/ then we do:
if (atomic_dec_and_test(&r1_bio->remaining)) {
	/* if we're here, all write(s) have completed, so clean up */
	md_done_sync(mddev, r1_bio->sectors, 1);
	put_buf(r1_bio);
}
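For completeness, the tail of end_sync_write() looks roughly like this
(a simplified sketch from my reading of raid1.c, with the error
bookkeeping elided; not the verbatim source):

static void end_sync_write(struct bio *bio, int error)
{
	int uptodate = test_bit(BIO_UPTODATE, &bio->bi_flags);
	struct r1bio *r1_bio = bio->bi_private;
	struct mddev *mddev = r1_bio->mddev;

	if (!uptodate) {
		/* ... bitmap bookkeeping elided ... */
		set_bit(R1BIO_WriteError, &r1_bio->state);
	}
	/* ... else R1BIO_MadeGood may be set if a previously-bad
	 * block was successfully over-written ... */

	if (atomic_dec_and_test(&r1_bio->remaining)) {
		int s = r1_bio->sectors;
		if (test_bit(R1BIO_MadeGood, &r1_bio->state) ||
		    test_bit(R1BIO_WriteError, &r1_bio->state))
			reschedule_retry(r1_bio);
		else {
			put_buf(r1_bio);
			md_done_sync(mddev, s, 1);
		}
	}
	/* if remaining did not reach zero we simply return: the retry
	 * decision is left to whoever drops the last reference */
}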
So assume that end_sync_write() completed with an error before we got
to 3/. Then in end_sync_write() we set R1BIO_WriteError, and then we
decrement r1_bio->remaining, so it becomes 1, so we bail out and don't
call reschedule_retry().
Then in 3/ we decrement r1_bio->remaining again, see that it is now 0,
and complete the r1_bio... without marking a bad block or failing the
device. So we think that this region is in-sync, while it is not,
because we hit an IO error on B.
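To spell out the bad interleaving (assuming a single WRITE was
scheduled; bracketed numbers are the value of r1_bio->remaining after
each step):

/*
 * sync_request_write()               end_sync_write() on B (fails)
 * --------------------               -----------------------------
 * atomic_set(&remaining, 1)   [1]
 * atomic_inc(&remaining)      [2]
 * generic_make_request(wbio)
 *                                    set_bit(R1BIO_WriteError)
 *                                    atomic_dec_and_test()     [1]
 *                                    -> not zero, so return
 *                                       without reschedule_retry()
 * atomic_dec_and_test()       [0]
 * -> md_done_sync(..., 1) + put_buf():
 *    the range is reported in-sync, the write error is forgotten
 */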
I checked against 2.6 versions, and such behavior makes sense there,
because the R1BIO_WriteError and R1BIO_MadeGood cases are not present
(no bad-blocks functionality). But now we must call reschedule_retry()
in both places (if needed). Does this make sense?
I tested the following patch, which seems to work ok:
$ diff -U 20 c:/work/code_backups/psp13/bad_sectors/raid1.c ubuntu-precise/source/drivers/md/raid1.c
--- c:/work/code_backups/psp13/bad_sectors/raid1.c Mon Jul 16 17:10:24 2012
+++ ubuntu-precise/source/drivers/md/raid1.c Mon Jul 16 17:45:13 2012
@@ -1793,42 +1793,47 @@
 	 */
 	atomic_set(&r1_bio->remaining, 1);
 	for (i = 0; i < disks ; i++) {
 		wbio = r1_bio->bios[i];
 		if (wbio->bi_end_io == NULL ||
 		    (wbio->bi_end_io == end_sync_read &&
 		     (i == r1_bio->read_disk ||
 		      !test_bit(MD_RECOVERY_SYNC, &mddev->recovery))))
 			continue;

 		wbio->bi_rw = WRITE;
 		wbio->bi_end_io = end_sync_write;
 		atomic_inc(&r1_bio->remaining);
 		md_sync_acct(conf->mirrors[i].rdev->bdev, wbio->bi_size >> 9);

 		generic_make_request(wbio);
 	}

 	if (atomic_dec_and_test(&r1_bio->remaining)) {
 		/* if we're here, all write(s) have completed, so clean up */
-		md_done_sync(mddev, r1_bio->sectors, 1);
-		put_buf(r1_bio);
+		if (test_bit(R1BIO_MadeGood, &r1_bio->state) ||
+		    test_bit(R1BIO_WriteError, &r1_bio->state))
+			reschedule_retry(r1_bio);
+		else {
+			md_done_sync(mddev, r1_bio->sectors, 1);
+			put_buf(r1_bio);
+		}
 	}
 }

 /*
  * This is a kernel thread which:
  *
  *	1.	Retries failed read operations on working mirrors.
  *	2.	Updates the raid superblock when problems encounter.
  *	3.	Performs writes following reads for array synchronising.
  */

 static void fix_read_error(struct r1conf *conf, int read_disk,
 			   sector_t sect, int sectors)
 {
 	struct mddev *mddev = conf->mddev;
 	while(sectors) {
 		int s = sectors;
 		int d = read_disk;
 		int success = 0;
 		int start;
Thanks,
Alex.
* Re: PATCH: md/raid1: sync_request_write() may complete r1_bio without rescheduling
@ 2012-07-17 1:45 NeilBrown
From: NeilBrown @ 2012-07-17 1:45 UTC (permalink / raw)
To: Alexander Lyakas; +Cc: linux-raid
On Mon, 16 Jul 2012 17:55:25 +0300 Alexander Lyakas <alex.bolshoy@gmail.com>
wrote:
> Hi Neil,
> this is yet another issue I encountered; it is indirectly related to
> the bad-blocks code, but I think it can be hit even when bad-blocks
> logging is disabled.
>
> Scenario:
> - RAID1 with one device A, one device missing
> - mdadm --manage /dev/mdX --add /dev/B (fresh device B added)
> - recovery of B starts
>
> Now at some point, end_sync_write() on B completes with an error. Then
> the following can happen:
> In sync_request_write() we do:
> 1/
> /*
>  * schedule writes
>  */
> atomic_set(&r1_bio->remaining, 1);
>
> 2/ then we schedule WRITEs, so for each WRITE scheduled we do:
> atomic_inc(&r1_bio->remaining);
>
> 3/ then we do:
> if (atomic_dec_and_test(&r1_bio->remaining)) {
> 	/* if we're here, all write(s) have completed, so clean up */
> 	md_done_sync(mddev, r1_bio->sectors, 1);
> 	put_buf(r1_bio);
> }
>
> So assume that end_sync_write() completed with an error before we got
> to 3/. Then in end_sync_write() we set R1BIO_WriteError, and then we
> decrement r1_bio->remaining, so it becomes 1, so we bail out and don't
> call reschedule_retry().
> Then in 3/ we decrement r1_bio->remaining again, see that it is now 0,
> and complete the r1_bio... without marking a bad block or failing the
> device. So we think that this region is in-sync, while it is not,
> because we hit an IO error on B.
>
> I checked against 2.6 versions, and such behavior makes sense there,
> because the R1BIO_WriteError and R1BIO_MadeGood cases are not present
> (no bad-blocks functionality). But now we must call reschedule_retry()
> in both places (if needed). Does this make sense?
>
> I tested the following patch, which seems to work ok:
Thanks. I agree with your analysis.
I've made a small change to fix another problem with that code.
Thanks,
NeilBrown
From af671b264f271563d343249886db16155a3130e0 Mon Sep 17 00:00:00 2001
From: NeilBrown <neilb@suse.de>
Date: Tue, 17 Jul 2012 11:43:47 +1000
Subject: [PATCH] commit 4367af556133723d0f443e14ca8170d9447317cb md/raid1:
clear bad-block record when write succeeds.
Added a 'reschedule_retry' call possibility at the end of
end_sync_write, but didn't add matching code at the end of
sync_request_write. So if the writes complete very quickly, or
scheduling makes it seem that way, then we can miss rescheduling
the request and the resync could hang.
Also commit 73d5c38a9536142e062c35997b044e89166e063b
    md: avoid races when stopping resync.
fixed a race condition in this same code in end_sync_write but didn't
make the matching change in sync_request_write.
This patch updates sync_request_write to fix both of those.
Patch is suitable for 3.1 and later kernels.
Reported-by: Alexander Lyakas <alex.bolshoy@gmail.com>
Original-version-by: Alexander Lyakas <alex.bolshoy@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: NeilBrown <neilb@suse.de>
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index e2e6ec2..506d055 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -1892,8 +1892,14 @@ static void sync_request_write(struct mddev *mddev, struct r1bio *r1_bio)
 	if (atomic_dec_and_test(&r1_bio->remaining)) {
 		/* if we're here, all write(s) have completed, so clean up */
-		md_done_sync(mddev, r1_bio->sectors, 1);
-		put_buf(r1_bio);
+		int s = r1_bio->sectors;
+		if (test_bit(R1BIO_MadeGood, &r1_bio->state) ||
+		    test_bit(R1BIO_WriteError, &r1_bio->state))
+			reschedule_retry(r1_bio);
+		else {
+			put_buf(r1_bio);
+			md_done_sync(mddev, s, 1);
+		}
 	}
 }
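Annotated, the resulting completion path is as follows (the comments
are mine, not from the source; the reason for doing put_buf() before
md_done_sync() is, as I read commit 73d5c38a, that reporting the last
sectors done can let the resync finish and the r1buf_pool be torn down
while the buffer is still outstanding):

	if (atomic_dec_and_test(&r1_bio->remaining)) {
		/* all scheduled writes (if any) have completed */
		int s = r1_bio->sectors;

		if (test_bit(R1BIO_MadeGood, &r1_bio->state) ||
		    test_bit(R1BIO_WriteError, &r1_bio->state))
			/* hand the r1_bio to raid1d so it can record or
			 * clear bad blocks, or fail the device, instead
			 * of silently declaring the range in-sync */
			reschedule_retry(r1_bio);
		else {
			/* return the buffer before reporting the sectors
			 * done, hence r1_bio->sectors is saved in 's'
			 * above */
			put_buf(r1_bio);
			md_done_sync(mddev, s, 1);
		}
	}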