linux-raid.vger.kernel.org archive mirror
* Re: More ddf container woes
@ 2011-03-23 19:18 Albert Pauw
  2011-03-23 22:08 ` NeilBrown
  0 siblings, 1 reply; 13+ messages in thread
From: Albert Pauw @ 2011-03-23 19:18 UTC (permalink / raw)
  To: Neil Brown, linux-raid

  Hi Neil,

I noticed on your 3.1.5 announcement that there were ddf fixes as well.

I tried the stuff I mentioned before (see below), but those issues 
weren't fixed.

I hope you will have some time to look into this.

Regards,

Albert Pauw

-------- Original Message --------

  Hi Neil,

I updated to the git version (devel) and tried my "old" tricks:

- Created a container with 5 disks
- Created two raid sets (raid 1 md0 and raid 5 md1) in this container

mdadm -E /dev/md127 shows all disks active/Online

- Failed one disk in md0

mdadm -E /dev/md127 shows this disk as active/Offline, Failed

- Failed one disk in md1

mdadm -E /dev/md127 shows this disk as active/Offline, Failed

- Added a new spare disk to the container

mdadm -E /dev/md127 shows this new disk as active/Online, Rebuilding

This looks good, but although the container has six disks, the last failed
disk is missing: mdadm -E /dev/md127 only shows five disks (including
the rebuilding one).

This time, however, only one of the failed raid sets is rebuilding, so
that fix is OK.
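For reference, the sequence above can be reproduced with something like
the following (the device names /dev/sd[b-f] and /dev/sdg, the array
sizes, and the container name are assumptions, not taken from the
original report; run as root):

```shell
# Assumed device names; adjust to your setup.
# Create a 5-disk DDF container and two member arrays inside it.
mdadm -C /dev/md/ddf -e ddf -l container -n 5 /dev/sd[b-f]
mdadm -C /dev/md0 -l raid1 -n 2 /dev/md/ddf
mdadm -C /dev/md1 -l raid5 -n 3 /dev/md/ddf

# Fail one disk in each member array.
mdadm -f /dev/md0 /dev/sdb
mdadm -f /dev/md1 /dev/sdc

# Add a new spare to the container and inspect the DDF metadata.
mdadm -a /dev/md127 /dev/sdg
mdadm -E /dev/md127
```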

Here is another scenario with strange implications:

- Created a container with 6 disks

mdadm -E /dev/md127 shows all 6 disks as Global-Spare/Online

- Removed one of the disks, as I only needed 5

This time mdadm -E /dev/md127 shows six physical disks, one of which has
no device file

- Created two raid sets (raid 1 md0 and raid 5 md1) in this container

mdadm -E /dev/md127 shows all disks active/Online, except the "empty
entry", which stays Global-Spare/Online

- I fail two disks, one in each raid array

mdadm -E /dev/md127 shows these two disks as active/Offline, Failed

- I add back the disk I removed earlier; it should fit into the empty
slot shown by mdadm -E

mdadm -E /dev/md127 shows something very strange, namely:
->  All disks are set to Global-Spare/Online
->  All device files are removed from the slots in mdadm -E, except the
newly added one, which shows the correct device
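The second scenario, sketched the same way (again, the device names and
array sizes are assumptions; the point is the order of operations):

```shell
# Create a 6-disk container, then drop one disk before building arrays.
mdadm -C /dev/md/ddf -e ddf -l container -n 6 /dev/sd[b-g]
mdadm -r /dev/md127 /dev/sdg

# Build the two member arrays on the remaining five disks.
mdadm -C /dev/md0 -l raid1 -n 2 /dev/md/ddf
mdadm -C /dev/md1 -l raid5 -n 3 /dev/md/ddf

# Fail one disk in each array, then re-add the disk removed earlier;
# it should take the empty physical-disk slot in the DDF metadata.
mdadm -f /dev/md0 /dev/sdb
mdadm -f /dev/md1 /dev/sdc
mdadm -a /dev/md127 /dev/sdg
mdadm -E /dev/md127
```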

Albert




* mdadm ddf questions
@ 2011-02-19 11:13 Albert Pauw
  2011-02-22  7:41 ` Albert Pauw
  0 siblings, 1 reply; 13+ messages in thread
From: Albert Pauw @ 2011-02-19 11:13 UTC (permalink / raw)
  To: linux-raid

I have dabbled a bit with the standard raid1/raid5 sets and am just
diving into this whole ddf container stuff,
to see how I can fail, remove and add a disk.

Here is what I have: Fedora 14, five 1 GB SATA disks (they are virtual
disks under VirtualBox, but it all seems to work well with the standard
raid stuff). For mdadm I am using the latest git version, version 3.1.4.

I created a ddf container:

mdadm -C /dev/md/container -e ddf -l container -n 5 /dev/sd[b-f]

I now create a raid 5 set in this container:

mdadm -C /dev/md1 -l raid5 -n 5 /dev/md/container

This all seems to work. I also noticed that after a stop and start of
both the container and the raidset, the container has been renamed to
/dev/md/ddf0, which points to /dev/md127.

I now fail one disk in the raidset:

mdadm -f /dev/md1 /dev/sdc

I noticed that it is removed from the md1 raidset and marked
Online, Failed in the container. So far so good. However, when I stop
the md1 array and start it again, it comes back with all 5 disks, clean,
no failure, although in the container the disk is still marked failed.
I then remove it from the container:

mdadm -r /dev/md127 /dev/sdc

I clean the disk with mdadm --zero-superblock /dev/sdc and add it again.

But how do I add this disk again to the md1 raidset?

I see in the container that /dev/sdc is back, with status
"active/Online, Failed", and a new disk is added with no device file
and status "Global-Spare/Online".

I am confused now.

So my question: how do I replace a faulty disk in a raid set that is in
a ddf container?
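The sequence I would have expected to work, assembled from the steps
above (assuming mdmon is running for the container; whether the DDF
metadata is updated correctly is exactly what is in question here), is:

```shell
# A sketch of the expected replacement flow, not a confirmed procedure.
mdadm -f /dev/md1 /dev/sdc          # fail the disk in the member array
mdadm -r /dev/md127 /dev/sdc        # remove it from the container
mdadm --zero-superblock /dev/sdc    # wipe the old DDF metadata
mdadm -a /dev/md127 /dev/sdc        # add it back as a spare; mdmon should
                                    # assign it to the degraded array
```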

Thanks, and bear with me, I am relatively new to all this.

Albert


Thread overview: 13+ messages
2011-03-23 19:18 More ddf container woes Albert Pauw
2011-03-23 22:08 ` NeilBrown
2012-07-28 11:46   ` Version 3.2.5 and ddf issues (bugreport) Albert Pauw
2012-07-31  6:11     ` NeilBrown
2012-07-31  8:46       ` Albert Pauw
2012-08-02  0:05         ` NeilBrown
2012-08-14 23:31         ` NeilBrown
  -- strict thread matches above, loose matches on Subject: below --
2011-02-19 11:13 mdadm ddf questions Albert Pauw
2011-02-22  7:41 ` Albert Pauw
2011-02-23  6:17   ` NeilBrown
2011-02-25 17:53     ` Albert Pauw
2011-03-02 22:31       ` NeilBrown
2011-03-10  8:34         ` More ddf container woes Albert Pauw
2011-03-11 11:50           ` Albert Pauw
2011-03-14  8:02             ` NeilBrown
2011-03-14  9:00               ` Albert Pauw
2011-03-15  4:43                 ` NeilBrown
2011-03-15 19:07                   ` Albert Pauw
