From: Eric Mei <meijia@gmail.com>
To: NeilBrown <neilb@suse.de>
Cc: linux-raid@vger.kernel.org, eric.mei@seagate.com
Subject: Re: [PATCH] [md] raid5: check faulty flag for array status during recovery.
Date: Thu, 19 Feb 2015 15:49:32 -0700	[thread overview]
Message-ID: <54E6687C.6020408@gmail.com> (raw)
In-Reply-To: <20150220085147.03bb2247@notabene.brown>

Hi Neil,

You are absolutely right that we need the RCU lock for this. Thank you
so much!

Eric

On 2015-02-19 2:51 PM, NeilBrown wrote:
> On Tue, 6 Jan 2015 15:24:24 -0700 Eric Mei <meijia@gmail.com> wrote:
>
>> Hi Neil,
>>
>> In an MDRAID-derived work we found and fixed a data corruption bug. We think this also affects vanilla MDRAID, although we did not directly prove that by constructing a test that shows the corruption. The following is the theoretical analysis; please kindly review it and see if I missed something.
>>
>> To rebuild a stripe, MD checks whether the array will be optimal after the rebuild completes; if so, it marks the write-intent bitmap (WIB) bits to be cleared, which is what enables "incremental rebuild". The code section looks like this:
>>
>> 	/* Need to check if array will still be degraded after recovery/resync
>> 	 * We don't need to check the 'failed' flag as when that gets set,
>> 	 * recovery aborts.
>> 	 */
>> 	for (i = 0; i < conf->raid_disks; i++)
>> 		if (conf->disks[i].rdev == NULL)
>> 			still_degraded = 1;
>>
>> The problem is that checking only rdev == NULL may not be enough. Suppose two drives, D0 and D1, fail and are marked Faulty. D0 is removed from the array immediately, but because of some lingering IO on D1 it remains in the array with the Faulty flag set. A new drive is pulled in and a rebuild against D0's slot starts. Now, because no rdev is NULL, MD thinks the array will be optimal after the rebuild. If any writes happen before the rebuild reaches their region, their dirty bits in the WIB are cleared. When D1 is later added back into the array, rebuilding those stripes is skipped, and the result is data corruption.
>>
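
To make the failure concrete, here is a minimal stand-alone model of the
slot state described above (plain user-space C with made-up names; the
fake_rdev type is only a hypothetical stand-in for struct md_rdev). Slot 0
holds the fresh spare being rebuilt in D0's place and slot 1 holds the
lingering Faulty D1; the NULL-only test reports the array as optimal,
while a Faulty-aware test reports it as still degraded:

#include <stdio.h>
#include <stdbool.h>

struct fake_rdev { bool faulty; };	/* hypothetical stand-in for struct md_rdev */

int main(void)
{
	struct fake_rdev spare = { .faulty = false };	/* new drive rebuilding in D0's slot   */
	struct fake_rdev d1    = { .faulty = true  };	/* D1: failed, Faulty, not yet removed */
	struct fake_rdev *disks[2] = { &spare, &d1 };
	int null_only = 0, faulty_aware = 0;
	int i;

	for (i = 0; i < 2; i++) {
		if (disks[i] == NULL)				/* the existing check     */
			null_only = 1;
		if (disks[i] == NULL || disks[i]->faulty)	/* check including Faulty */
			faulty_aware = 1;
	}

	/* Prints 0 then 1: the NULL-only check misses the degraded state. */
	printf("NULL-only:    still_degraded = %d\n", null_only);
	printf("Faulty-aware: still_degraded = %d\n", faulty_aware);
	return 0;
}

With still_degraded left at 0, bitmap_start_sync() is told the array is
clean, which is why the write-intent bits for those regions are cleared
prematurely.
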
>> The attached patch (against 3.18.0-rc6) is supposed to fix this issue.
>>
>> Thanks
>> Eric
>>
> Hi Eric,
>   sorry for the delay, and thanks for the reminder...
>
> The issue you described could only affect RAID6 as it requires the array to
> continue with two failed drives.
>
> However in the RAID6 case I think you are correct - there is a chance of
> corruption if there is a double failure and a delay in removing one device.
>
> Your patch isn't quite safe as conf->disks[i].rdev can become NULL at any
> moment, so it could become NULL between testing and de-referencing.
> So I've modified it as follows.
>
> Thanks,
> NeilBrown
>
>
>
> Author: Eric Mei <eric.mei@seagate.com>
> Date:   Tue Jan 6 09:35:02 2015 -0800
>
>      raid5: check faulty flag for array status during recovery.
>      
>      When we have more than one drive failure, it's possible that we
>      start rebuilding one drive while leaving another faulty drive in
>      the array. To determine whether the array will be optimal after
>      the rebuild, the current code only checks whether a drive is
>      missing, which could potentially lead to data corruption. This
>      patch adds a check of the Faulty flag as well.
>      
>      Signed-off-by: NeilBrown <neilb@suse.de>
>
> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
> index bc6d7595ad76..022a0d99e110 100644
> --- a/drivers/md/raid5.c
> +++ b/drivers/md/raid5.c
> @@ -5120,12 +5120,17 @@ static inline sector_t sync_request(struct mddev *mddev, sector_t sector_nr, int
>   		schedule_timeout_uninterruptible(1);
>   	}
>   	/* Need to check if array will still be degraded after recovery/resync
> -	 * We don't need to check the 'failed' flag as when that gets set,
> -	 * recovery aborts.
> +	 * Note in case of > 1 drive failures it's possible we're rebuilding
> +	 * one drive while leaving another faulty drive in array.
>   	 */
> -	for (i = 0; i < conf->raid_disks; i++)
> -		if (conf->disks[i].rdev == NULL)
> +	rcu_read_lock();
> +	for (i = 0; i < conf->raid_disks; i++) {
> +		struct md_rdev *rdev = ACCESS_ONCE(conf->disks[i].rdev);
> +
> +		if (rdev == NULL || test_bit(Faulty, &rdev->flags))
>   			still_degraded = 1;
> +	}
> +	rcu_read_unlock();
>   
>   	bitmap_start_sync(mddev->bitmap, sector_nr, &sync_blocks, still_degraded);
>   
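
The hazard Neil describes, and the way the patch above avoids it, can be
sketched schematically (kernel-style C; the first, racy form is only a
guess at the shape of the original attachment, which is not reproduced
here):

/* Racy shape: the shared pointer is loaded twice, so a concurrent
 * hot-remove can set conf->disks[i].rdev to NULL (and eventually free
 * the rdev) between the NULL test and the flags dereference.
 */
if (conf->disks[i].rdev == NULL ||
    test_bit(Faulty, &conf->disks[i].rdev->flags))
	still_degraded = 1;

/* Safe shape, as in the patch: read the pointer exactly once with
 * ACCESS_ONCE() into a local, and do so under rcu_read_lock() so the
 * rdev cannot be freed while its flags are being tested.
 */
struct md_rdev *rdev;

rcu_read_lock();
rdev = ACCESS_ONCE(conf->disks[i].rdev);
if (rdev == NULL || test_bit(Faulty, &rdev->flags))
	still_degraded = 1;
rcu_read_unlock();
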
