From: NeilBrown <neilb@suse.de>
To: Albert Pauw <albert.pauw@gmail.com>
Cc: linux-raid@vger.kernel.org
Subject: Re: Error in rebuild of two "layered" md devices in container
Date: Wed, 15 Aug 2012 09:43:52 +1000	[thread overview]
Message-ID: <20120815094352.38550670@notabene.brown> (raw)
In-Reply-To: <CAGkViCEBCMgPXbByGPe8MgTKsSkn1Dc=_XLgCCDCfSrEGwEs_w@mail.gmail.com>

On Wed, 1 Aug 2012 19:52:51 +0200 Albert Pauw <albert.pauw@gmail.com> wrote:

> Hi Neil,
> 
> found another bug.
> 
> - Created a container with six disks
> - Created two md devices in it:
> 
> mdadm -CR /dev/md0 -l 6 -n 6 -z 50M
> mdadm -CR /dev/md1 -l 5 -n 6 -z 50M
> 
> The md devices are "layered" in the container across all disks.
> 
> They both get built and are online.
> 
> - Fail one disk, both md devices are affected
> - Remove disk
> - Clear superblock of removed disk
> - Add disk again (in essence, I just added a spare disk)
> 
> Now comes the error:
> 
> - md0 is rebuilt
> - md1 is NOT rebuilt
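
For reference, the sequence above corresponds roughly to the commands below.
This is only a sketch: the container name /dev/md/ddf, the disk names
/dev/sd[b-g] and the exact remove/re-add steps are assumptions, so adjust
them for your own setup.

  # create a DDF container over six disks
  mdadm -CR /dev/md/ddf -e ddf -l container -n 6 /dev/sd[b-g]

  # create two member arrays inside the container, layered across all disks
  mdadm -CR /dev/md0 -l 6 -n 6 -z 50M /dev/md/ddf
  mdadm -CR /dev/md1 -l 5 -n 6 -z 50M /dev/md/ddf

  # fail one disk; both member arrays are affected
  mdadm /dev/md0 --fail /dev/sdg

  # remove it, wipe its metadata, then add it back as a spare
  mdadm /dev/md/ddf --remove /dev/sdg
  mdadm --zero-superblock /dev/sdg
  mdadm /dev/md/ddf --add /dev/sdg

  # result: md0 starts rebuilding, md1 stays degraded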

The reason for this is somewhat messy.
mdadm will currently only add a 'spare' device to an array which needs a
replacement device.
In DDF the whole device is either 'active' or 'spare'.  There isn't a concept
of 'partly active, partly spare'.
So when mdadm adds part of the disk to one array, the disk stops being spare
and starts being active.  When mdadm then looks for a spare to add to the
second array, there are no spare devices left.
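
You can watch this happen in the metadata.  Something along these lines
(again with made-up device names) shows the whole-disk state that DDF
records, which is what ddf_activate_spare() looks at:

  # the DDF metadata on any member lists every physical disk with its state;
  # once md0 has claimed the re-added disk it shows up as active, not spare
  mdadm --examine /dev/sdb

  # so the second member array never finds a spare and stays degraded
  mdadm --detail /dev/md1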

I can hack around it by allowing any non-failed device to be considered as a
spare, but I need to find a better solution.  That might take a while.  I've
made a note on my to-do list, but it is a rather long list.

Thanks,
NeilBrown

diff --git a/super-ddf.c b/super-ddf.c
index d006a04..11b98f7 100644
--- a/super-ddf.c
+++ b/super-ddf.c
@@ -2616,7 +2616,7 @@ static int validate_geometry_ddf(struct supertype *st,
 	if (chunk && *chunk == UnSet)
 		*chunk = DEFAULT_CHUNK;
 
-
+	if (level == LEVEL_NONE) level = LEVEL_CONTAINER;
 	if (level == LEVEL_CONTAINER) {
 		/* Must be a fresh device to add to a container */
 		return validate_geometry_ddf_container(st, level, layout,
@@ -3701,6 +3701,10 @@ static struct mdinfo *ddf_activate_spare(struct active_array *a,
 			} else if (ddf->phys->entries[dl->pdnum].type &
 				   __cpu_to_be16(DDF_Global_Spare)) {
 				is_global = 1;
+			} else if (!(ddf->phys->entries[dl->pdnum].state &
+				     __cpu_to_be16(DDF_Failed))) {
+				/* we can possibly use some of this */
+				is_global = 1;
 			}
 			if ( ! (is_dedicated ||
 				(is_global && global_ok))) {
