public inbox for linux-kernel@vger.kernel.org
From: NeilBrown <neilb@suse.de>
To: Lukasz Dorau <lukasz.dorau@intel.com>
Cc: linux-raid@vger.kernel.org, pawel.baldysiak@intel.com,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH] md: Fix skipping recovery for read-only arrays.
Date: Wed, 16 Oct 2013 14:49:54 +1100	[thread overview]
Message-ID: <20131016144954.5eb8a689@notabene.brown> (raw)
In-Reply-To: <20131007142551.14867.36809.stgit@gklab-154-244.igk.intel.com>


On Mon, 07 Oct 2013 16:25:51 +0200 Lukasz Dorau <lukasz.dorau@intel.com>
wrote:

> Since:
>         commit 7ceb17e87bde79d285a8b988cfed9eaeebe60b86
>         md: Allow devices to be re-added to a read-only array.
> 
> spares are activated on a read-only array. For the raid1 and raid10
> personalities this causes not-in-sync devices to be marked in-sync
> without checking whether recovery has finished.
> 
> If a read-only array is degraded and one of its devices is not in-sync
> (because the array has been only partially recovered), recovery will be skipped.
> 
> This patch adds a check that recovery has finished before marking a device
> in-sync for the raid1 and raid10 personalities. The raid5 personality
> already contains such a check (at raid5.c:6029).
> 
> The bug was introduced in 3.10 and causes data corruption.
> 
> Cc: stable@vger.kernel.org
> Signed-off-by: Pawel Baldysiak <pawel.baldysiak@intel.com>
> Signed-off-by: Lukasz Dorau <lukasz.dorau@intel.com>
> ---
>  drivers/md/raid1.c  |    1 +
>  drivers/md/raid10.c |    1 +
>  2 files changed, 2 insertions(+)
> 
> diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
> index d60412c..aacf6bf 100644
> --- a/drivers/md/raid1.c
> +++ b/drivers/md/raid1.c
> @@ -1479,6 +1479,7 @@ static int raid1_spare_active(struct mddev *mddev)
>  			}
>  		}
>  		if (rdev
> +		    && rdev->recovery_offset == MaxSector
>  		    && !test_bit(Faulty, &rdev->flags)
>  		    && !test_and_set_bit(In_sync, &rdev->flags)) {
>  			count++;
> diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
> index df7b0a0..73dc8a3 100644
> --- a/drivers/md/raid10.c
> +++ b/drivers/md/raid10.c
> @@ -1782,6 +1782,7 @@ static int raid10_spare_active(struct mddev *mddev)
>  			}
>  			sysfs_notify_dirent_safe(tmp->replacement->sysfs_state);
>  		} else if (tmp->rdev
> +			   && tmp->rdev->recovery_offset == MaxSector
>  			   && !test_bit(Faulty, &tmp->rdev->flags)
>  			   && !test_and_set_bit(In_sync, &tmp->rdev->flags)) {
>  			count++;

Applied - thanks.

I'll forward it to Linus and -stable shortly.

NeilBrown


Thread overview: 3+ messages
2013-10-07 14:25 [PATCH] md: Fix skipping recovery for read-only arrays Lukasz Dorau
2013-10-16  3:49 ` NeilBrown [this message]
2013-10-16  7:43   ` Dorau, Lukasz
