* md/raid5: fresh drive rebuild always requires a fullsync if interrupted
@ 2013-09-11 18:08 Alexander Lyakas
From: Alexander Lyakas @ 2013-09-11 18:08 UTC
  To: NeilBrown, linux-raid

Hi Neil,

Please consider the following scenario:
# degraded raid5 with 3 drives (A,B,C) and one missing
# a fresh drive D is added and starts rebuilding
# drive D fails
# after some time drive D is re-added

What happens is the following flow:
# super_1_validate() does not set the In_sync flag, because
MD_FEATURE_RECOVERY_OFFSET is set in the superblock:
    if ((le32_to_cpu(sb->feature_map) &
         MD_FEATURE_RECOVERY_OFFSET))
        rdev->recovery_offset = le64_to_cpu(sb->recovery_offset);
    else
        set_bit(In_sync, &rdev->flags);
    rdev->raid_disk = role;
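
For reference, this feature bit is written by the counterpart code in
super_1_sync() whenever the device is still rebuilding (paraphrasing
the 3.8-era md.c here, so details may differ slightly):

    /* a device that is not yet In_sync has its rebuild progress
     * recorded in the superblock; this is what the validate path
     * above picks up when the device is re-added */
    if (rdev->raid_disk >= 0 &&
        !test_bit(In_sync, &rdev->flags)) {
        sb->feature_map |=
            cpu_to_le32(MD_FEATURE_RECOVERY_OFFSET);
        sb->recovery_offset =
            cpu_to_le64(rdev->recovery_offset);
    }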

# As a result, add_new_disk() does not preserve the old slot in saved_raid_disk:
    if (test_bit(In_sync, &rdev->flags))
        rdev->saved_raid_disk = rdev->raid_disk;
    else
        rdev->saved_raid_disk = -1;

# Then add_new_disk() unconditionally does:
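    /* the slot is deliberately left unset here; it is chosen later by
     * the personality's hot_add_disk(), called from
     * remove_and_add_spares() below */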
    rdev->raid_disk = -1;

# Later remove_and_add_spares() resets rdev->recovery_offset and calls
the personality:
    if (rdev->raid_disk < 0 && !test_bit(Faulty, &rdev->flags)) {
        rdev->recovery_offset = 0;
        if (mddev->pers->hot_add_disk(mddev, rdev) == 0) {

# And then raid5_add_disk() does:
        if (rdev->saved_raid_disk != disk)
            conf->fullsync = 1;

which results in a full sync, even though part of the rebuild had
already completed. This is on kernel 3.8.13, but I believe your
current for-linus branch has the same issue.
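
For contrast, this is the shortcut that gets lost: sync_request() in
raid5 can skip regions that are clean in the write-intent bitmap only
while conf->fullsync is clear (again paraphrasing the 3.8-era
raid5.c):

    /* with conf->fullsync set, this bitmap shortcut is bypassed and
     * every stripe is reconstructed from parity */
    if (!bitmap_start_sync(mddev->bitmap, sector_nr, &sync_blocks, 1) &&
        !conf->fullsync && sync_blocks >= STRIPE_SECTORS) {
        /* we can skip this block, and probably more */
        sync_blocks /= STRIPE_SECTORS;
        *skipped = 1;
        return sync_blocks * STRIPE_SECTORS;
    }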

Is this reasonable behavior?

Also, I see that recovery_offset is basically not used at all during
the re-add flow. We cannot simply resume the rebuild from
recovery_offset, because while the drive was out of the array, data
may have been written to regions below recovery_offset, correct? Is
that why it is not used?

Thanks,
Alex.


* Re: md/raid5: fresh drive rebuild always requires a fullsync if interrupted
@ 2013-09-12  5:52 NeilBrown
From: NeilBrown @ 2013-09-12  5:52 UTC
  To: Alexander Lyakas; +Cc: linux-raid


On Wed, 11 Sep 2013 21:08:11 +0300 Alexander Lyakas <alex.bolshoy@gmail.com>
wrote:

> Hi Neil,
> 
> Please consider the following scenario:
> # degraded raid5 with 3 drives (A,B,C) and one missing
> # a fresh drive D is added and starts rebuilding
> # drive D fails
> # after some time drive D is re-added
> 
> What happens is the following flow:
> # super_1_validate() does not set the In_sync flag, because
> MD_FEATURE_RECOVERY_OFFSET is set in the superblock:
>     if ((le32_to_cpu(sb->feature_map) &
>          MD_FEATURE_RECOVERY_OFFSET))
>         rdev->recovery_offset = le64_to_cpu(sb->recovery_offset);
>     else
>         set_bit(In_sync, &rdev->flags);
>     rdev->raid_disk = role;
> 
> # As a result, add_new_disk() does not preserve the old slot in saved_raid_disk:
>     if (test_bit(In_sync, &rdev->flags))
>         rdev->saved_raid_disk = rdev->raid_disk;
>     else
>         rdev->saved_raid_disk = -1;
> 
> # Then add_new_disk() unconditionally does:
>     rdev->raid_disk = -1;
> 
> # Later remove_and_add_spares() resets rdev->recovery_offset and calls
> the personality:
>     if (rdev->raid_disk < 0 && !test_bit(Faulty, &rdev->flags)) {
>         rdev->recovery_offset = 0;
>         if (mddev->pers->hot_add_disk(mddev, rdev) == 0) {
> 
> # And then raid5_add_disk() does:
>         if (rdev->saved_raid_disk != disk)
>             conf->fullsync = 1;
> 
> which results in a full sync, even though part of the rebuild had
> already completed. This is on kernel 3.8.13, but I believe your
> current for-linus branch has the same issue.
> 
> Is this reasonable behavior?

Reasonable, but maybe not ideal.

> 
> Also, I see that recovery_offset is basically not used at all during
> the re-add flow. We cannot simply resume the rebuild from
> recovery_offset, because while the drive was out of the array, data
> may have been written to regions below recovery_offset, correct? Is
> that why it is not used?

I suspect it isn't used because I never thought to use it.
It is probably reasonable to set 'saved_raid_disk' if recovery_offset
holds an interesting value.  You would need to make sure that value is
preserved by the code that uses 'saved_raid_disk'.
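
Something along these lines in super_1_validate(), perhaps -- an
untested sketch with 3.8-era field names, not a real patch, and only
safe when a write-intent bitmap has recorded every write made while
the device was out of the array, including writes below
recovery_offset:

    if ((le32_to_cpu(sb->feature_map) &
         MD_FEATURE_RECOVERY_OFFSET)) {
        rdev->recovery_offset = le64_to_cpu(sb->recovery_offset);
        /* new: remember the old slot, so that raid5_add_disk() sees
         * saved_raid_disk == disk and does not set conf->fullsync */
        rdev->saved_raid_disk = role;
    } else {
        set_bit(In_sync, &rdev->flags);
    }
    rdev->raid_disk = role;

add_new_disk() would then also need to stop overwriting
saved_raid_disk for devices that are not In_sync.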

Patches welcome....

NeilBrown


