linux-raid.vger.kernel.org archive mirror
* Strange / inconsistent behavior with mdadm -I -R
@ 2013-03-14 20:03 Martin Wilck
  2013-03-14 20:11 ` [PATCH] RAID10: Allow skipping recovery when clean arrays are assembled mwilck
  2013-03-17 23:35 ` Strange / inconsistent behavior with mdadm -I -R NeilBrown
  0 siblings, 2 replies; 11+ messages in thread
From: Martin Wilck @ 2013-03-14 20:03 UTC (permalink / raw)
  To: linux-raid, NeilBrown

Hello Neil,

for my DDF/RAID10 work, I have been trying to figure out how mdadm -I -R
is supposed to behave, and I have found some inconsistencies I'd like to
clarify, lest I make a mistake in my DDF/RAID10 code.

My test case is incremental assembly of a clean array running mdadm -I
-R by hand for each array device in turn.
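For reference, the procedure can be sketched as a small shell loop. The
device names are hypothetical, and with RUN unset the commands are only
printed, so the sketch is safe to paste as-is:

```shell
#!/bin/sh
# Incremental assembly by hand, one member at a time (sketch only).
# Device names are examples. By default the commands are merely printed;
# set RUN=1 on a scratch machine to execute them for real.
run() { if [ "${RUN:-0}" = 1 ]; then "$@"; else echo "$@"; fi; }

for dev in /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3; do
    run mdadm -I -R "$dev"   # -I: incremental add; -R: start as soon as possible
    run cat /proc/mdstat     # observe the array state after each step
done
```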

1) native md and containers behave differently for RAID 1

Both native and container RAID 1 are started in auto-read-only mode when
the 1st disk is added. When the 2nd disk is added, the native md
switches to "active" and starts a recovery which finishes immediately.
Container arrays (tested: DDF), on the other hand, do not switch to
"active" until a write attempt is made on the array. The problem is in
the native case: after the switch to "active", no further disks can be
added ("can only add $DISK as a spare").

IMO the container behavior makes more sense and matches the man page
better than the native behavior. Do you agree? Would it be hard to fix that?
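The mode switch is visible in sysfs as the array_state attribute; here
is a small helper I could imagine using to check it (the array name
md127 and the interpretation strings are my own; "read-auto" and
"active" are the kernel's sysfs values):

```shell
#!/bin/sh
# Report what a given md array_state means for further "mdadm -I" adds.
# The interpretation strings are mine; the state names are the kernel's.
array_mode() {
    case "$(cat "$1" 2>/dev/null || echo unknown)" in
        read-auto) echo "auto-read-only: more members can still be -I added" ;;
        active)    echo "active: further members are only accepted as spares" ;;
        *)         echo "other/unknown state" ;;
    esac
}

# Example invocation (md127 is a hypothetical array name):
array_mode /sys/block/md127/md/array_state
```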

2) RAID1 skips recovery for clean arrays, RAID10 does not

Native RAID 10 behaves similarly to RAID 1 as described above. As soon
as the array can be started, it starts in auto-read-only mode. When the
next disk is added after that, recovery starts, the array switches to
"active", and further disks can't be added the "simple way" any more.
There's one important difference: in the RAID 10 case, the recovery
doesn't finish immediately. Rather, md does a full recovery of the added
disk although it was clean. This is wrong; I have come up with a patch
for this which I will send in a follow-up email.
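Whether the add triggered a real resync shows up as a recovery progress
line in /proc/mdstat. A hedged illustration, with fabricated sample
output (the recovery-line format is md's; the device names and numbers
are made up):

```shell
#!/bin/sh
# Distinguish "recovery finished immediately" from a full recovery by
# looking for a recovery progress line. The sample below is fabricated
# to illustrate the RAID10 misbehavior; on a live system, read
# /proc/mdstat instead of the sample string.
mdstat_sample='md0 : active raid10 loop1[1] loop0[0]
      1047552 blocks super 1.2 512K chunks 2 near-copies [2/2] [UU]
      [=>...................]  recovery =  5.0% (52352/1047552) finish=0.3min'

if printf '%s\n' "$mdstat_sample" | grep -q 'recovery ='; then
    echo "full recovery in progress (unexpected for a clean member)"
fi
```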

I tested this behavior with kernels 2.6.32, 3.0, and 3.8, with the same
result, using mdadm from the git tree.

Regards
Martin


Thread overview: 11+ messages
2013-03-14 20:03 Strange / inconsistent behavior with mdadm -I -R Martin Wilck
2013-03-14 20:11 ` [PATCH] RAID10: Allow skipping recovery when clean arrays are assembled mwilck
2013-03-17 23:06   ` NeilBrown
2013-03-17 23:35 ` Strange / inconsistent behavior with mdadm -I -R NeilBrown
2013-03-19 18:29   ` Martin Wilck
2013-03-21  1:22     ` NeilBrown
2013-03-27  5:53       ` NeilBrown
2013-03-27 20:29         ` Martin Wilck
2013-04-16 19:09         ` Martin Wilck
2013-03-19 18:35   ` [PATCH] RAID10: Allow skipping recovery when clean arrays are assembled mwilck
2013-03-19 23:46     ` NeilBrown
