* [PATCH/RFC] Fix resync hang after surprise removal
@ 2011-06-15 16:02 Jim Paradis
From: Jim Paradis @ 2011-06-15 16:02 UTC (permalink / raw)
To: linux-raid; +Cc: Jim Paradis
We ran into a situation where surprise removal of a non-boot 2-disk raid1
array with I/O running can result in a tight loop in which md claims to be
resyncing the array.
It appears that remove_and_add_spares() in md.c contains two sets of
conditions used to determine whether a spare is available. The disk that was
pulled has been marked Faulty in rdev->flags, but its raid_disk value is
still >= 0. Since it is neither In_sync nor Blocked, spares gets incremented,
so md thinks there is a spare when in fact there is not.
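To make the failure mode concrete, here is a minimal stand-alone user-space
model of that spare-counting test (the struct, flag bits and values below are
invented purely for illustration; only the shape of the condition mirrors
md.c):

/* Stand-alone model of the spare-counting test in remove_and_add_spares().
 * The rdev struct and flag bits are made up for the example; only the
 * logic of the condition follows the kernel code. */
#include <stdio.h>

#define IN_SYNC  (1 << 0)
#define BLOCKED  (1 << 1)
#define FAULTY   (1 << 2)

struct fake_rdev {
	int raid_disk;          /* slot in the array, -1 if unassigned */
	unsigned long flags;    /* bit mask of the states above */
};

int main(void)
{
	/* The pulled disk: still occupies slot 1, marked Faulty,
	 * no longer In_sync, not Blocked. */
	struct fake_rdev pulled = { .raid_disk = 1, .flags = FAULTY };
	int spares = 0;

	/* Same shape as the existing test: Faulty is never checked. */
	if (pulled.raid_disk >= 0 &&
	    !(pulled.flags & IN_SYNC) &&
	    !(pulled.flags & BLOCKED))
		spares++;

	/* Prints "spares = 1" even though the only candidate is Faulty. */
	printf("spares = %d\n", spares);
	return 0;
}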
One of my colleagues at Stratus proposed this patch, which rearranges the
order of the tests and makes them mutually exclusive. Running with this
patch resolves the problem in our lab: we were able to run stress tests
with surprise removals without incident.
Since neither of us is an md expert, we'd like feedback as to whether
this patch is reasonable and whether it can be pushed upstream.
Jim Paradis
Onsite Red Hat Partner Engineer
Stratus Technologies, Inc.
md.c | 17 ++++++++++-------
1 file changed, 10 insertions(+), 7 deletions(-)
Signed-off-by: Jim Paradis <james.paradis@stratus.com>
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 4332fc2..cdc5276 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -7086,10 +7086,6 @@ static int remove_and_add_spares(mddev_t *mddev)
 
 	if (mddev->degraded && !mddev->recovery_disabled) {
 		list_for_each_entry(rdev, &mddev->disks, same_set) {
-			if (rdev->raid_disk >= 0 &&
-			    !test_bit(In_sync, &rdev->flags) &&
-			    !test_bit(Blocked, &rdev->flags))
-				spares++;
 			if (rdev->raid_disk < 0
 			    && !test_bit(Faulty, &rdev->flags)) {
 				rdev->recovery_offset = 0;
@@ -7100,11 +7096,18 @@ static int remove_and_add_spares(mddev_t *mddev)
 					if (sysfs_create_link(&mddev->kobj,
 							      &rdev->kobj, nm))
 						/* failure here is OK */;
-					spares++;
+					spares++;
 					md_new_event(mddev);
 					set_bit(MD_CHANGE_DEVS, &mddev->flags);
-				} else
-					break;
+				}
+			}
+			else if (rdev->raid_disk >= 0 &&
+				 !test_bit(In_sync, &rdev->flags) &&
+				 !test_bit(Blocked, &rdev->flags)) {
+				spares++;
+			}
+			else {
+				break;
 			}
 		}
 	}
* Re: [PATCH/RFC] Fix resync hang after surprise removal
@ 2011-06-16 1:36 ` NeilBrown
From: NeilBrown @ 2011-06-16 1:36 UTC (permalink / raw)
To: Jim Paradis; +Cc: linux-raid
On Wed, 15 Jun 2011 12:02:15 -0400 Jim Paradis <jparadis@redhat.com> wrote:
> We ran into a situation where surprise removal of a non-boot 2-disk raid1
> array with I/O running can result in a tight loop in which md claims to be
> resyncing the array.
>
> It appears that remove_and_add_spares() in md.c contains two sets of
> conditions used to determine whether a spare is available. The disk that
> was pulled has been marked Faulty in rdev->flags, but its raid_disk value
> is still >= 0. Since it is neither In_sync nor Blocked, spares gets
> incremented, so md thinks there is a spare when in fact there is not.
>
> One of my colleagues at Stratus proposed this patch, which rearranges the
> order of the tests and makes them mutually exclusive. Running with this
> patch resolves the problem in our lab: we were able to run stress tests
> with surprise removals without incident.
>
> Since neither of us is an md expert, we'd like feedback as to whether
> this patch is reasonable and whether it can be pushed upstream.
Hi,
thanks for the report and the patch.
However I don't think the patch really does what you want.
The two tests are already mutually exclusive as one begins with
raid_disk >= 0
and the other with
raid_disk < 0
and neither changes raid_disk.
The reason the patch has an effect is the 'break' that has been added.
i.e. as soon as you find a normal working device you break out of the loop
and stop looking for spares.
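To illustrate that side effect, here is a minimal stand-alone model of the
loop as the proposed patch rearranges it (the device list, names and flag
bits are invented for the example; only the control flow follows the patch):

/* Stand-alone model of the rearranged loop with the added 'break'.
 * Structs, flags and the device list are made up for illustration. */
#include <stdio.h>

#define IN_SYNC  (1 << 0)
#define BLOCKED  (1 << 1)
#define FAULTY   (1 << 2)

struct fake_rdev {
	const char *name;
	int raid_disk;          /* slot in the array, -1 if unassigned */
	unsigned long flags;
};

int main(void)
{
	/* A healthy array member followed by a fresh, unassigned spare. */
	struct fake_rdev disks[] = {
		{ "sda", 0, IN_SYNC },  /* normal working device */
		{ "sdc", -1, 0 },       /* would-be spare        */
	};
	int spares = 0;

	for (int i = 0; i < 2; i++) {
		struct fake_rdev *rdev = &disks[i];

		if (rdev->raid_disk < 0 && !(rdev->flags & FAULTY)) {
			printf("would hot-add %s\n", rdev->name);
			spares++;
		} else if (rdev->raid_disk >= 0 &&
			   !(rdev->flags & IN_SYNC) &&
			   !(rdev->flags & BLOCKED)) {
			spares++;
		} else {
			/* The added break: the healthy member ends the scan,
			 * so sdc is never even examined. */
			break;
		}
	}

	printf("spares = %d\n", spares);   /* prints 0; sdc never added */
	return 0;
}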
I think the correct fix is simply:
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 4332fc2..91e31e2 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -7088,6 +7088,7 @@ static int remove_and_add_spares(mddev_t *mddev)
 		list_for_each_entry(rdev, &mddev->disks, same_set) {
 			if (rdev->raid_disk >= 0 &&
 			    !test_bit(In_sync, &rdev->flags) &&
+			    !test_bit(Faulty, &rdev->flags) &&
 			    !test_bit(Blocked, &rdev->flags))
 				spares++;
 			if (rdev->raid_disk < 0
i.e. never consider a Faulty device to be a spare.
It looks like this bug was introduced by commit dfc70645000616777
in 2.6.26 when we allowed partially recovered devices to remain in the array
when a different device fails.
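A stand-alone illustration of the corrected test (again, the struct and flag
bits are invented for the example; only the shape of the condition follows
the diff above):

/* With the extra Faulty check, the pulled-but-still-slotted disk is no
 * longer counted as a spare. Everything here is made up for illustration. */
#include <stdio.h>

#define IN_SYNC  (1 << 0)
#define BLOCKED  (1 << 1)
#define FAULTY   (1 << 2)

struct fake_rdev {
	int raid_disk;
	unsigned long flags;
};

int main(void)
{
	struct fake_rdev pulled = { .raid_disk = 1, .flags = FAULTY };
	int spares = 0;

	if (pulled.raid_disk >= 0 &&
	    !(pulled.flags & IN_SYNC) &&
	    !(pulled.flags & FAULTY) &&
	    !(pulled.flags & BLOCKED))
		spares++;

	printf("spares = %d\n", spares);   /* now prints 0 */
	return 0;
}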
Can you please confirm that this patch removes your symptom?
Thanks,
NeilBrown
* Re: [PATCH/RFC] Fix resync hang after surprise removal
@ 2011-06-17 15:42 ` James Paradis
From: James Paradis @ 2011-06-17 15:42 UTC (permalink / raw)
To: NeilBrown; +Cc: linux-raid
> NeilBrown <neilb@suse.de> wrote:
> Hi,
> thanks for the report and the patch.
>
> However I don't think the patch really does what you want.
>
> The two tests are already mutually exclusive as one begins with
> raid_disk >= 0
> and the other with
> raid_disk < 0
> and neither changes raid_disk.
>
> The reason the patch has an effect is the 'break' that has been added.
> i.e. as soon as you find a normal working device you break out of the
> loop and stop looking for spares.
>
> I think the correct fix is simply:
>
> diff --git a/drivers/md/md.c b/drivers/md/md.c
> index 4332fc2..91e31e2 100644
> --- a/drivers/md/md.c
> +++ b/drivers/md/md.c
> @@ -7088,6 +7088,7 @@ static int remove_and_add_spares(mddev_t *mddev)
>  		list_for_each_entry(rdev, &mddev->disks, same_set) {
>  			if (rdev->raid_disk >= 0 &&
>  			    !test_bit(In_sync, &rdev->flags) &&
> +			    !test_bit(Faulty, &rdev->flags) &&
>  			    !test_bit(Blocked, &rdev->flags))
>  				spares++;
>  			if (rdev->raid_disk < 0
>
>
> i.e. never consider a Faulty device to be a spare.
>
> It looks like this bug was introduced by commit dfc70645000616777
> in 2.6.26 when we allowed partially recovered devices to remain in the
> array when a different device fails.
>
> Can you please confirm that this patch removes your symptom?
>
> Thanks,
> NeilBrown
This patch does indeed fix the problem! Thanks!
--jim