From: Paul Clements <Paul.Clements@SteelEye.com>
To: Neil Brown <neilb@cse.unsw.edu.au>
Cc: linux-raid@vger.kernel.org
Subject: [PATCH] raid1: abort resync if there are no spare drives
Date: Wed, 25 Feb 2004 16:39:10 -0500
Message-ID: <403D15FE.CAACDC4A@SteelEye.com>
In-Reply-To: <403A30F9.D54AAC68@SteelEye.com>

[-- Attachment #1: Type: text/plain, Size: 622 bytes --]

The attached patch makes sure that resync/recovery is aborted (and its
status recorded correctly) when there are no spare drives left to write
to (due to the failure of a spare during a resync). Previously, if a
spare failed during a resync, the resync would continue to completion
and would appear to have been successful.
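
For illustration only, here is a minimal userspace sketch (not kernel
code; the struct and function names below are made up) of the same
"assume failure until a write target is found" pattern that the patch
applies in sync_request_write():

/* Userspace sketch of the patch's logic: clear the success flag before
 * scanning for write targets and set it only when a healthy mirror
 * (other than the read source) is found, so the completion path can
 * report failure when every target is gone. All names are hypothetical. */
#include <stdio.h>
#include <stdbool.h>

struct mirror_state {
	bool present;	/* device exists */
	bool faulty;	/* device has failed */
};

/* Returns true only if at least one healthy mirror other than the one
 * we read from would accept the write; analogous to R1BIO_Uptodate,
 * which the patch then passes as the success flag to md_done_sync(). */
static bool sync_one_chunk(const struct mirror_state *mirrors, int disks,
			   int read_disk)
{
	bool uptodate = false;	/* assume failure until a target is found */

	for (int i = 0; i < disks; i++) {
		if (!mirrors[i].present || mirrors[i].faulty)
			continue;
		if (i == read_disk)	/* we read from here, no need to write */
			continue;
		/* ... issue the write here ... */
		uptodate = true;	/* found a drive to write to */
	}
	return uptodate;
}

int main(void)
{
	struct mirror_state mirrors[2] = {
		{ .present = true, .faulty = false },	/* read source */
		{ .present = true, .faulty = true },	/* failed spare */
	};

	bool ok = sync_one_chunk(mirrors, 2, 0);
	printf("resync chunk %s\n", ok ? "written" : "failed (no targets)");
	return 0;
}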

Also, there was an erroneous use of the master_bio->bi_bdev field (which
is not always properly set); it has been changed to the simpler
comparison against r1_bio->read_disk (as is done in raid1_end_request).
I think Neil has already addressed this issue in another patch that has
been pushed to Andrew...

Thanks,
Paul

[-- Attachment #2: raid1_abort_sync_no_targets.diff --]
[-- Type: text/x-patch, Size: 1240 bytes --]

--- raid1.c.PRISTINE	Tue Feb 24 16:10:26 2004
+++ raid1.c	Wed Feb 25 16:22:34 2004
@@ -818,6 +818,8 @@ static void sync_request_write(mddev_t *
 		put_buf(r1_bio);
 		return;
 	}
+	/* assume failure until we find a drive to write this to */
+	clear_bit(R1BIO_Uptodate, &r1_bio->state);
 
 	spin_lock_irq(&conf->device_lock);
 	for (i = 0; i < disks ; i++) {
@@ -825,7 +827,7 @@ static void sync_request_write(mddev_t *
 		if (!conf->mirrors[i].rdev || 
 		    conf->mirrors[i].rdev->faulty)
 			continue;
-		if (conf->mirrors[i].rdev->bdev == bio->bi_bdev)
+		if (i == r1_bio->read_disk)
 			/*
 			 * we read from here, no need to write
 			 */
@@ -838,6 +840,8 @@ static void sync_request_write(mddev_t *
 			continue;
 		atomic_inc(&conf->mirrors[i].rdev->nr_pending);
 		r1_bio->write_bios[i] = bio;
+		/* we found a drive to write to */
+		set_bit(R1BIO_Uptodate, &r1_bio->state);
 	}
 	spin_unlock_irq(&conf->device_lock);
 
@@ -859,7 +863,8 @@ static void sync_request_write(mddev_t *
 	}
 
 	if (atomic_dec_and_test(&r1_bio->remaining)) {
-		md_done_sync(mddev, r1_bio->master_bio->bi_size >> 9, 1);
+		md_done_sync(mddev, r1_bio->master_bio->bi_size >> 9,
+				test_bit(R1BIO_Uptodate, &r1_bio->state));
 		put_buf(r1_bio);
 	}
 }

Thread overview: 11+ messages
2004-02-23  2:41 SW RAID5 + high memory support freezes 2.6.3 kernel Pavol Luptak
2004-02-23  4:27 ` Neil Brown
2004-02-23  5:30   ` Andrew Morton
2004-02-23 13:35     ` Pavol Luptak
2004-02-23 14:05       ` syrius.ml
2004-02-23 16:57   ` [PATCH] md: fix device size calculation with non-persistent superblock Paul Clements
2004-02-24  1:13     ` Neil Brown
2004-02-24 15:27       ` Paul Clements
2004-02-25 21:39     ` Paul Clements [this message]
2004-03-03  0:21       ` [PATCH] raid1: abort resync if there are no spare drives Neil Brown
2004-03-03  2:47         ` Paul Clements
