linux-raid.vger.kernel.org archive mirror
* Re: More ddf container woes
@ 2011-03-23 19:18 Albert Pauw
  2011-03-23 22:08 ` NeilBrown
  0 siblings, 1 reply; 8+ messages in thread
From: Albert Pauw @ 2011-03-23 19:18 UTC (permalink / raw)
  To: Neil Brown, linux-raid

  Hi Neil,

I noticed in your 3.1.5 announcement that there were ddf fixes as well.

I tried the stuff I mentioned before (see below), but those issues 
weren't fixed.

I hope you will have some time to look into this.

Regards,

Albert Pauw

-------- Original Message --------

  Hi Neil,

I updated to the git version (devel) and tried my "old" tricks:

- Created a container with 5 disks
- Created two raid sets (raid 1 md0 and raid 5 md1) in this container

mdadm -E  /dev/md127 shows all disks active/Online

- Failed one disk in md0

mdadm -E /dev/md127 shows this disk as active/Offline, Failed

- Failed one disk in md1

mdadm -E /dev/md127 shows this disk as active/Offline, Failed

- Added a new spare disk to the container

mdadm -E /dev/md127 shows this new disk as active/Online, Rebuilding

This looks good, but although the container has six disks, the most recently
failed disk is missing: mdadm -E /dev/md127 only shows five disks (including
the rebuilding one).

This time, however, only one of the failed raid sets is rebuilding, so
that fix is OK.
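
For reference, the steps above correspond roughly to a command sequence
like this (the device names are illustrative, not the exact ones I used):

   mdadm -C /dev/md127 -e ddf -l container /dev/sd[b-f]
   mdadm -C /dev/md0 -l 1 -n 2 /dev/md127
   mdadm -C /dev/md1 -l 5 -n 3 /dev/md127
   mdadm /dev/md0 -f /dev/sdb
   mdadm /dev/md1 -f /dev/sdd
   mdadm /dev/md127 -a /dev/sdg
   mdadm -E /dev/md127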

Here is another scenario with strange implications:

- Created a container with 6 disks

mdadm -E /dev/md127 shows all 6 disks as Global-Spare/Online

- Removed one of the disks, as I only needed 5

This time mdadm -E /dev/md127 shows six physical disks, one of which has
no device file listed.

- Created two raid sets (raid 1 md0 and raid 5 md1) in this container

mdadm -E /dev/md127 shows all disks as active/Online, except the "empty
entry", which stays Global-Spare/Online

- I fail two disks, one in each raid array

mdadm -E /dev/md127 shows these two disks as active/Offline, Failed

- I add back the disk I removed earlier; it should fit into the empty
slot shown by mdadm -E

mdadm -E /dev/md127 shows something very strange, namely:
->  All disks are set to Global-Spare/Online
->  All device files are removed from the slots in mdadm -E, except the
newly added one, which shows the correct device
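
For this scenario the differing steps were roughly (device names again
illustrative):

   mdadm -C /dev/md127 -e ddf -l container /dev/sd[b-g]
   mdadm /dev/md127 -r /dev/sdg      (drop the sixth disk again)
   ... create md0 and md1 and fail one disk in each, as above ...
   mdadm /dev/md127 -a /dev/sdg      (add the removed disk back)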

Albert





* Re: More ddf container woes
  2011-03-23 19:18 More ddf container woes Albert Pauw
@ 2011-03-23 22:08 ` NeilBrown
  2012-07-28 11:46   ` Version 3.2.5 and ddf issues (bugreport) Albert Pauw
  0 siblings, 1 reply; 8+ messages in thread
From: NeilBrown @ 2011-03-23 22:08 UTC (permalink / raw)
  To: Albert Pauw; +Cc: linux-raid

On Wed, 23 Mar 2011 20:18:24 +0100 Albert Pauw <albert.pauw@gmail.com> wrote:

>   Hi Neil,
> 
> I noticed on your 3.1.5 announcement that there were ddf fixes as well.

Yes, they are mainly the fixes that you helped me find, so you already
knew about them.

> 
> I tried the stuff I mentioned before (see below), but those issues 
> weren't fixed.
> 
> I hope you will have some time to look into this.

Hopefully, but probably not for a few weeks - I need to take a break from
mdadm and catch up with some other stuff.

I'll keep this message to refer back to when I do get back to mdadm.

Thanks for your help,

NeilBrown


> 
> Regards,
> 
> Albert Pauw
> 
> -------- Original Message --------
> 
>   Hi Neil,
> 
> I updated to the git version (devel) and tried my "old" tricks:
> 
> - Create a container with 5 disks
> - Created two raid sets (raid 1 md0 and raid 5 md1) in this container
> 
> mdadm -E  /dev/md127 shows all disks active/Online
> 
> - Failed one disk in md0
> 
> mdadm -E /dev/md127 shows this disk as active/Offline, Failed
> 
> - Failed one disk in md1
> 
> mdadm -E /dev/md127 shows this disk as active/Offline, Failed
> 
> - Added a new spare disk to the container
> 
> mdadm -E /dev/md127 shows this new disk as active/Online, Rebuilding
> 
> this looks good, but although the container has six disks, the lastly failed
> disk is missing, mdadm -E /dev/md127 only shows five disks (including
> the rebuilding one).
> 
> This time however, only one of the failed raid sets is rebuilding, so
> that fix is ok.
> 
> Here is another scenario with strange implications:
> 
> - Created a container with 6 disks
> 
> mdadm -E /dev/md127 shows all 6 disks as Global-Spare/Online
> 
> - Removed one of the disks, as I only needed 5
> 
> This time mdadm -e /dev/md127 shows six physical disks, one of which has
> no device
> 
> - Created two raid sets (raid 1 md0 and raid 5 md1) in this container
> 
> mdadm -E  /dev/md127 shows all disks active/Online, except the "empty
> entry" which stays
> Global-Spare/Online
> 
> - I fail two disks, one in each raid array
> 
> mdadm -E /dev/md127 shows these two disks as active/Offline, Failed
> 
> - I add back the disk I removed earlier, it should fit into the empty
> slot of mdadm -E
> 
> mdadm -E /dev/md127 shows something very strange, namely
> ->  All disks are set to Global-Spare/Online
> ->  All device files are removed from the slots in mdadm -E, except the
> newly added one,
> which shows the correct device
> 
> Albert
> 
> 
> 

* Version 3.2.5 and ddf issues (bugreport)
  2011-03-23 22:08 ` NeilBrown
@ 2012-07-28 11:46   ` Albert Pauw
  2012-07-31  6:11     ` NeilBrown
  0 siblings, 1 reply; 8+ messages in thread
From: Albert Pauw @ 2012-07-28 11:46 UTC (permalink / raw)
  To: linux-raid; +Cc: NeilBrown

Hi Neil,

After a hiatus of 1.5 years (busy with all sorts of things) I am back and
have tried the ddf code again to see how things have improved.

I built a VM CentOS 6.3 system with 6 extra 1 GB disks for testing.
I found several ddf-related issues in the standard installed 3.2.3 version
of mdadm, but installed the 3.2.5 version in order to work with more recent
code.

However, while version 3.2.3 is able to create a ddf container with
raid sets in it, I found a problem with the 3.2.5 version.

After initially creating the container:

mdadm -C /dev/md127 -e ddf -l container /dev/sd[b-g]

which worked, I created a raid set (raid 1 or 5, it doesn't matter in this
case) in it:

mdadm -C /dev/md0 -l raid5 -n 3 /dev/md127

However, it stays on resync=PENDING and readonly, and doesn't get built.

So I tried to set it to readwrite:

mdadm --readwrite  /dev/md0

Unfortunately, it stays readonly and doesn't get built.
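
(For completeness, I watch the state with nothing more exotic than e.g.

   cat /proc/mdstat
   mdadm -D /dev/md0

the array simply stays read-only with the resync pending.)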

As said before, this did work in 3.2.3.

Are you already looking into this problem?

Kind regards,

Albert

P.S. Since the ddf code is not that widely used, my idea was to create
a test/validation script for ddf.




* Version 3.2.5 and ddf issues (bugreport)
@ 2012-07-28 11:54 Albert Pauw
  0 siblings, 0 replies; 8+ messages in thread
From: Albert Pauw @ 2012-07-28 11:54 UTC (permalink / raw)
  To: linux-raid

Hi Neil,

After a hiatus of 1.5 years (busy with all sorts of things) I am back and
have tried the ddf code again to see how things have improved.

I built a VM CentOS 6.3 system with 6 extra 1 GB disks for testing.
I found several ddf-related issues in the standard installed 3.2.3 version
of mdadm, but installed the 3.2.5 version in order to work with more recent
code.

However, while version 3.2.3 is able to create a ddf container with
raid sets in it, I found a problem with the 3.2.5 version.

After initially creating the container:

mdadm -C /dev/md127 -e ddf -l container /dev/sd[b-g]

which worked, I created a raid set (raid 1 or 5, it doesn't matter in this
case) in it:

mdadm -C /dev/md0 -l raid5 -n 3 /dev/md127

However, it stays on resync=PENDING and readonly, and doesn't get built.

So I tried to set it to readwrite:

mdadm --readwrite  /dev/md0

Unfortunately, it stays readonly and doesn't get built.

As said before, this did work in 3.2.3.

Are you already looking into this problem?

Kind regards,

Albert

P.S. Since the ddf code is not that widely used, my idea was to create
a test/validation script for ddf.





* Re: Version 3.2.5 and ddf issues (bugreport)
  2012-07-28 11:46   ` Version 3.2.5 and ddf issues (bugreport) Albert Pauw
@ 2012-07-31  6:11     ` NeilBrown
  2012-07-31  8:46       ` Albert Pauw
  0 siblings, 1 reply; 8+ messages in thread
From: NeilBrown @ 2012-07-31  6:11 UTC (permalink / raw)
  To: Albert Pauw; +Cc: linux-raid


On Sat, 28 Jul 2012 13:46:06 +0200 Albert Pauw <albert.pauw@gmail.com> wrote:

> Hi Neil,
> 
> After a hiatus of 1.5 year (busy with all sorts) I am back and tried the 
> ddf code to see how things improved.

Thanks!

> 
> I build a VM Centos 6.3 system with 6 extra 1GB disks for testing.
> I found several issues in the standard installed 3.2.3 version of mdadm 
> relating to ddf, but installed the
> 3.2.5 version in order to work with recent code.
> 
> However, while version 3.2.3 is able to create a ddf container with 
> raidsets in it, I found a problem with the 3.2.5 version.
> 
> After initially creating the container:
> 
> mdadm -C /dev/md127 -e ddf -l container /dev/sd[b-g]
> 
> which worked, I created a raid (1 or 5 it doesn't matter in this case) 
> in it:
> 
> mdadm -C /dev/md0 -l raid5 -n 3 /dev/md127
> 
> However, it stays on resync=PENDING and readonly, and doesn't get build.
> 
> So I tried to set it to readwrite:
> 
> mdadm --readwrite  /dev/md0
> 
> Unfortunately, it stays on readonly and doesn't get build.
> 
> As said before, this did work in 3.2.3.
> 
> Are you already on this problem?

It sounds like a problem with 'mdmon'.  mdmon needs to be running before the
array can become read-write.  mdadm should start mdmon automatically but
maybe it isn't.  Maybe it cannot find mdmon?

Could you check whether mdmon is running?  If it isn't, run
   mdmon /dev/md127 &
and see if the array starts working.
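
(Something like "ps -C mdmon", plus a glance at /proc/mdstat, should be
enough to tell whether mdmon is there and whether the resync has started.)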

> 
> Kind regards,
> 
> Albert
> 
> P.S. My idea was, because the ddf code is not that much used, to create 
> a test/validate script for ddf.
> 
There is one 'ddf' test in my test suite and I think it is working.  I'm
happy to add more tests if you suggest or write some.

NeilBrown



* Re: Version 3.2.5 and ddf issues (bugreport)
  2012-07-31  6:11     ` NeilBrown
@ 2012-07-31  8:46       ` Albert Pauw
  2012-08-02  0:05         ` NeilBrown
  2012-08-14 23:31         ` NeilBrown
  0 siblings, 2 replies; 8+ messages in thread
From: Albert Pauw @ 2012-07-31  8:46 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid

On 07/31/2012 08:11 AM, NeilBrown wrote:
> On Sat, 28 Jul 2012 13:46:06 +0200 Albert Pauw <albert.pauw@gmail.com> wrote:
>
>> Hi Neil,
>>
>> After a hiatus of 1.5 year (busy with all sorts) I am back and tried the
>> ddf code to see how things improved.
> Thanks!
>
>> I build a VM Centos 6.3 system with 6 extra 1GB disks for testing.
>> I found several issues in the standard installed 3.2.3 version of mdadm
>> relating to ddf, but installed the
>> 3.2.5 version in order to work with recent code.
>>
>> However, while version 3.2.3 is able to create a ddf container with
>> raidsets in it, I found a problem with the 3.2.5 version.
>>
>> After initially creating the container:
>>
>> mdadm -C /dev/md127 -e ddf -l container /dev/sd[b-g]
>>
>> which worked, I created a raid (1 or 5 it doesn't matter in this case)
>> in it:
>>
>> mdadm -C /dev/md0 -l raid5 -n 3 /dev/md127
>>
>> However, it stays on resync=PENDING and readonly, and doesn't get build.
>>
>> So I tried to set it to readwrite:
>>
>> mdadm --readwrite  /dev/md0
>>
>> Unfortunately, it stays on readonly and doesn't get build.
>>
>> As said before, this did work in 3.2.3.
>>
>> Are you already on this problem?
> It sounds like a problem with 'mdmon'.  mdmon needs to be running before the
> array can become read-write.  mdadm should start mdmon automatically but
> maybe it isn't.  Maybe it cannot find mdmon?
>
> could you check if mdadm is running?  If it isn't run
>     mdmon /dev/md127 &
> and see if it starts working.
Hi Neil,

Thanks for your reply. Yes, mdmon wasn't running. I couldn't get it
running with a recompiled 3.2.5; the standard one that came with CentOS
(3.2.3) works fine, so I assume they made some changes to the code? Anyway,
I moved to my own laptop running Fedora 16, pulled mdadm from git and
recompiled. That works. I also used loop devices as disks.

Here is the first of my findings:

I created a container with six disks; disks 1-2 form a raid 1 device and
disks 3-6 a raid 6 device.
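
Roughly like this; the image sizes and loop-device names are just what I
happened to use, treat them as an illustration:

   for i in 1 2 3 4 5 6; do
       dd if=/dev/zero of=/tmp/ddf$i.img bs=1M count=500
       losetup /dev/loop$i /tmp/ddf$i.img
   done
   mdadm -C /dev/md127 -e ddf -l container /dev/loop[1-6]
   mdadm -C /dev/md0 -l 1 -n 2 /dev/md127
   mdadm -C /dev/md1 -l 6 -n 4 /dev/md127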

Here is the table shown at the end of the mdadm -E command for the 
container:

  Physical Disks : 6
       Number    RefNo      Size       Device      Type/State
          0    06a5f547    479232K /dev/loop2      active/Online
          1    47564acc    479232K /dev/loop3      active/Online
          2    bf30692c    479232K /dev/loop5      active/Online
          3    275d02f5    479232K /dev/loop4      active/Online
          4    b0916b3f    479232K /dev/loop6      active/Online
          5    65956a72    479232K /dev/loop1      active/Online

I now fail a disk (disk 0) and I get:

  Physical Disks : 6
       Number    RefNo      Size       Device      Type/State
          0    06a5f547    479232K /dev/loop2      active/Online
          1    47564acc    479232K /dev/loop3      active/Online
          2    bf30692c    479232K /dev/loop5      active/Online
          3    275d02f5    479232K /dev/loop4      active/Online
          4    b0916b3f    479232K /dev/loop6      active/Online
          5    65956a72    479232K /dev/loop1      active/Offline, Failed

Then I removed the disk from the container:

  Physical Disks : 6
       Number    RefNo      Size       Device      Type/State
          0    06a5f547    479232K /dev/loop2      active/Online
          1    47564acc    479232K /dev/loop3      active/Online
          2    bf30692c    479232K /dev/loop5      active/Online
          3    275d02f5    479232K /dev/loop4      active/Online
          4    b0916b3f    479232K /dev/loop6      active/Online
          5    65956a72    479232K                 active/Offline, Failed, Missing

Notice the active/Offline status; is this correct?

I added the disk back into the container, NO zero-superblock:

  Physical Disks : 6
       Number    RefNo      Size       Device      Type/State
          0    06a5f547    479232K /dev/loop2      active/Online
          1    47564acc    479232K /dev/loop3      active/Online
          2    bf30692c    479232K /dev/loop5      active/Online
          3    275d02f5    479232K /dev/loop4      active/Online
          4    b0916b3f    479232K /dev/loop6      active/Online
          5    65956a72    479232K /dev/loop1      active/Offline, Failed, Missing

It stays active/Offline (this is now correct, I assume) and Failed (again
correct, since it had failed before), but it is also still Missing.

I remove the disk again, do a zero-superblock and add it again:

Physical Disks : 6
       Number    RefNo      Size       Device      Type/State
          0    06a5f547    479232K /dev/loop2      active/Online
          1    47564acc    479232K /dev/loop3      active/Online
          2    bf30692c    479232K /dev/loop5      active/Online
          3    275d02f5    479232K /dev/loop4      active/Online
          4    b0916b3f    479232K /dev/loop6      active/Online
          5    ede51ba3    479232K /dev/loop1      active/Online, Rebuilding

This is correct, the disk is seen as a new disk and rebuilding starts.
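
(The sequence for these last steps was essentially:

   mdadm /dev/md0 -f /dev/loop1          (fail it)
   mdadm /dev/md127 -r /dev/loop1        (remove it from the container)
   mdadm --zero-superblock /dev/loop1
   mdadm /dev/md127 -a /dev/loop1        (add it back)

with /dev/md0 standing in for whichever member array the disk belonged to.)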


Regards,

Albert


* Re: Version 3.2.5 and ddf issues (bugreport)
  2012-07-31  8:46       ` Albert Pauw
@ 2012-08-02  0:05         ` NeilBrown
  2012-08-14 23:31         ` NeilBrown
  1 sibling, 0 replies; 8+ messages in thread
From: NeilBrown @ 2012-08-02  0:05 UTC (permalink / raw)
  To: Albert Pauw; +Cc: linux-raid


On Tue, 31 Jul 2012 10:46:26 +0200 Albert Pauw <albert.pauw@gmail.com> wrote:

> On 07/31/2012 08:11 AM, NeilBrown wrote:
> > On Sat, 28 Jul 2012 13:46:06 +0200 Albert Pauw <albert.pauw@gmail.com> wrote:
> >
> >> Hi Neil,
> >>
> >> After a hiatus of 1.5 year (busy with all sorts) I am back and tried the
> >> ddf code to see how things improved.
> > Thanks!
> >
> >> I build a VM Centos 6.3 system with 6 extra 1GB disks for testing.
> >> I found several issues in the standard installed 3.2.3 version of mdadm
> >> relating to ddf, but installed the
> >> 3.2.5 version in order to work with recent code.
> >>
> >> However, while version 3.2.3 is able to create a ddf container with
> >> raidsets in it, I found a problem with the 3.2.5 version.
> >>
> >> After initially creating the container:
> >>
> >> mdadm -C /dev/md127 -e ddf -l container /dev/sd[b-g]
> >>
> >> which worked, I created a raid (1 or 5 it doesn't matter in this case)
> >> in it:
> >>
> >> mdadm -C /dev/md0 -l raid5 -n 3 /dev/md127
> >>
> >> However, it stays on resync=PENDING and readonly, and doesn't get build.
> >>
> >> So I tried to set it to readwrite:
> >>
> >> mdadm --readwrite  /dev/md0
> >>
> >> Unfortunately, it stays on readonly and doesn't get build.
> >>
> >> As said before, this did work in 3.2.3.
> >>
> >> Are you already on this problem?
> > It sounds like a problem with 'mdmon'.  mdmon needs to be running before the
> > array can become read-write.  mdadm should start mdmon automatically but
> > maybe it isn't.  Maybe it cannot find mdmon?
> >
> > could you check if mdadm is running?  If it isn't run
> >     mdmon /dev/md127 &
> > and see if it starts working.
> Hi Neil,
> 
> thanks for your reply. Yes, mdmon wasn't running. Couldn't get it 
> running with a recompiled 3.2.5, the standard one which came with Centos 
> (3.2.3) works fine, I assume the made some changes to the code? Anyway, 
> I moved to my own laptop, running Fedora 16 and pulled mdadm frm git and 
> recompiled. That works. I also used loop devices as disks.
> 
> Here is the first of my findings:
> 
> I created a container with six disks, disk 1-2 is a raid 1 device, disk 
> 3-6 are a raid 6 device.
> 
> Here is the table shown at the end of the mdadm -E command for the 
> container:
> 
>   Physical Disks : 6
>        Number    RefNo      Size       Device      Type/State
>           0    06a5f547    479232K /dev/loop2      active/Online
>           1    47564acc    479232K /dev/loop3      active/Online
>           2    bf30692c    479232K /dev/loop5      active/Online
>           3    275d02f5    479232K /dev/loop4      active/Online
>           4    b0916b3f    479232K /dev/loop6      active/Online
>           5    65956a72    479232K /dev/loop1      active/Online
> 
> I now fail a disk (disk 0) and I get:
> 
>   Physical Disks : 6
>        Number    RefNo      Size       Device      Type/State
>           0    06a5f547    479232K /dev/loop2      active/Online
>           1    47564acc    479232K /dev/loop3      active/Online
>           2    bf30692c    479232K /dev/loop5      active/Online
>           3    275d02f5    479232K /dev/loop4      active/Online
>           4    b0916b3f    479232K /dev/loop6      active/Online
>           5    65956a72    479232K /dev/loop1      active/Offline, Failed
> 
> Then I removed the disk from the container:
> 
>   Physical Disks : 6
>        Number    RefNo      Size       Device      Type/State
>           0    06a5f547    479232K /dev/loop2      active/Online
>           1    47564acc    479232K /dev/loop3      active/Online
>           2    bf30692c    479232K /dev/loop5      active/Online
>           3    275d02f5    479232K /dev/loop4      active/Online
>           4    b0916b3f    479232K /dev/loop6      active/Online
>           5    65956a72    479232K                 active/Offline, 
> Failed, Missing
> 
> Notice the active/Offline status, is this correct?
> 
> I added the disk back into the container, NO zero-superblock:
> 
>   Physical Disks : 6
>        Number    RefNo      Size       Device      Type/State
>           0    06a5f547    479232K /dev/loop2      active/Online
>           1    47564acc    479232K /dev/loop3      active/Online
>           2    bf30692c    479232K /dev/loop5      active/Online
>           3    275d02f5    479232K /dev/loop4      active/Online
>           4    b0916b3f    479232K /dev/loop6      active/Online
>           5    65956a72    479232K /dev/loop1      active/Offline, 
> Failed, Missing
> 
> It stays active/Offline (this is now correct I assume), Failed (again 
> correct if had failed before), but also still missing.
> 
> I remove the disk again, do a zero-superblock and add it again:
> 
> Physical Disks : 6
>        Number    RefNo      Size       Device      Type/State
>           0    06a5f547    479232K /dev/loop2      active/Online
>           1    47564acc    479232K /dev/loop3      active/Online
>           2    bf30692c    479232K /dev/loop5      active/Online
>           3    275d02f5    479232K /dev/loop4      active/Online
>           4    b0916b3f    479232K /dev/loop6      active/Online
>           5    ede51ba3    479232K /dev/loop1      active/Online, Rebuilding
> 
> This is correct, the disk is seen as a new disk and rebuilding starts.
> 
> 
> Regards,
> 
> Albert

Hi Albert,

thanks for this and your other reports.
I won't be able to look at them for a while, but hopefully will get back to
you some time next week.

NeilBrown


* Re: Version 3.2.5 and ddf issues (bugreport)
  2012-07-31  8:46       ` Albert Pauw
  2012-08-02  0:05         ` NeilBrown
@ 2012-08-14 23:31         ` NeilBrown
  1 sibling, 0 replies; 8+ messages in thread
From: NeilBrown @ 2012-08-14 23:31 UTC (permalink / raw)
  To: Albert Pauw; +Cc: linux-raid


On Tue, 31 Jul 2012 10:46:26 +0200 Albert Pauw <albert.pauw@gmail.com> wrote:

> On 07/31/2012 08:11 AM, NeilBrown wrote:
> > On Sat, 28 Jul 2012 13:46:06 +0200 Albert Pauw <albert.pauw@gmail.com> wrote:
> >
> >> Hi Neil,
> >>
> >> After a hiatus of 1.5 year (busy with all sorts) I am back and tried the
> >> ddf code to see how things improved.
> > Thanks!
> >
> >> I build a VM Centos 6.3 system with 6 extra 1GB disks for testing.
> >> I found several issues in the standard installed 3.2.3 version of mdadm
> >> relating to ddf, but installed the
> >> 3.2.5 version in order to work with recent code.
> >>
> >> However, while version 3.2.3 is able to create a ddf container with
> >> raidsets in it, I found a problem with the 3.2.5 version.
> >>
> >> After initially creating the container:
> >>
> >> mdadm -C /dev/md127 -e ddf -l container /dev/sd[b-g]
> >>
> >> which worked, I created a raid (1 or 5 it doesn't matter in this case)
> >> in it:
> >>
> >> mdadm -C /dev/md0 -l raid5 -n 3 /dev/md127
> >>
> >> However, it stays on resync=PENDING and readonly, and doesn't get build.
> >>
> >> So I tried to set it to readwrite:
> >>
> >> mdadm --readwrite  /dev/md0
> >>
> >> Unfortunately, it stays on readonly and doesn't get build.
> >>
> >> As said before, this did work in 3.2.3.
> >>
> >> Are you already on this problem?
> > It sounds like a problem with 'mdmon'.  mdmon needs to be running before the
> > array can become read-write.  mdadm should start mdmon automatically but
> > maybe it isn't.  Maybe it cannot find mdmon?
> >
> > could you check if mdadm is running?  If it isn't run
> >     mdmon /dev/md127 &
> > and see if it starts working.
> Hi Neil,
> 
> thanks for your reply. Yes, mdmon wasn't running. Couldn't get it 
> running with a recompiled 3.2.5, the standard one which came with Centos 
> (3.2.3) works fine, I assume the made some changes to the code? Anyway, 
> I moved to my own laptop, running Fedora 16 and pulled mdadm frm git and 
> recompiled. That works. I also used loop devices as disks.
> 
> Here is the first of my findings:
> 
> I created a container with six disks, disk 1-2 is a raid 1 device, disk 
> 3-6 are a raid 6 device.
> 
> Here is the table shown at the end of the mdadm -E command for the 
> container:
> 
>   Physical Disks : 6
>        Number    RefNo      Size       Device      Type/State
>           0    06a5f547    479232K /dev/loop2      active/Online
>           1    47564acc    479232K /dev/loop3      active/Online
>           2    bf30692c    479232K /dev/loop5      active/Online
>           3    275d02f5    479232K /dev/loop4      active/Online
>           4    b0916b3f    479232K /dev/loop6      active/Online
>           5    65956a72    479232K /dev/loop1      active/Online
> 
> I now fail a disk (disk 0) and I get:
> 
>   Physical Disks : 6
>        Number    RefNo      Size       Device      Type/State
>           0    06a5f547    479232K /dev/loop2      active/Online
>           1    47564acc    479232K /dev/loop3      active/Online
>           2    bf30692c    479232K /dev/loop5      active/Online
>           3    275d02f5    479232K /dev/loop4      active/Online
>           4    b0916b3f    479232K /dev/loop6      active/Online
>           5    65956a72    479232K /dev/loop1      active/Offline, Failed
> 
> Then I removed the disk from the container:
> 
>   Physical Disks : 6
>        Number    RefNo      Size       Device      Type/State
>           0    06a5f547    479232K /dev/loop2      active/Online
>           1    47564acc    479232K /dev/loop3      active/Online
>           2    bf30692c    479232K /dev/loop5      active/Online
>           3    275d02f5    479232K /dev/loop4      active/Online
>           4    b0916b3f    479232K /dev/loop6      active/Online
>           5    65956a72    479232K                 active/Offline, 
> Failed, Missing
> 
> Notice the active/Offline status, is this correct?

To be honest, I don't know.  The DDF spec doesn't really go into that sort of
detail, or at least I didn't find it.
Given that the device is Missing, it hardly seems to matter whether it is
Active or Spare or Foreign or Legacy.
I guess if it re-appears we want to know what it was ... maybe.

> 
> I added the disk back into the container, NO zero-superblock:
> 
>   Physical Disks : 6
>        Number    RefNo      Size       Device      Type/State
>           0    06a5f547    479232K /dev/loop2      active/Online
>           1    47564acc    479232K /dev/loop3      active/Online
>           2    bf30692c    479232K /dev/loop5      active/Online
>           3    275d02f5    479232K /dev/loop4      active/Online
>           4    b0916b3f    479232K /dev/loop6      active/Online
>           5    65956a72    479232K /dev/loop1      active/Offline, 
> Failed, Missing
> 
> It stays active/Offline (this is now correct I assume), Failed (again 
> correct if had failed before), but also still missing.

I found why this happens.  When I added code to support incremental assembly
of DDF arrays, I broke the ability to hot-add a device which happened to have
reasonably good looking metadata on it.  The best approach for now is to
--zero the device first.  I'll push out a patch which does just that.


> 
> I remove the disk again, do a zero-superblock and add it again:
> 
> Physical Disks : 6
>        Number    RefNo      Size       Device      Type/State
>           0    06a5f547    479232K /dev/loop2      active/Online
>           1    47564acc    479232K /dev/loop3      active/Online
>           2    bf30692c    479232K /dev/loop5      active/Online
>           3    275d02f5    479232K /dev/loop4      active/Online
>           4    b0916b3f    479232K /dev/loop6      active/Online
>           5    ede51ba3    479232K /dev/loop1      active/Online, Rebuilding
> 
> This is correct, the disk is seen as a new disk and rebuilding starts.
> 
> 
> Regards,
> 
> Albert

diff --git a/Manage.c b/Manage.c
index f83af65..7f27f74 100644
--- a/Manage.c
+++ b/Manage.c
@@ -786,6 +786,7 @@ int Manage_add(int fd, int tfd, struct mddev_dev *dv,
 			return -1;
 		}
 
+		Kill(dv->devname, NULL, 0, -1, 0);
 		dfd = dev_open(dv->devname, O_RDWR | O_EXCL|O_DIRECT);
 		if (mdmon_running(tst->container_dev))
 			tst->update_tail = &tst->updates;



Thanks,
NeilBrown



Thread overview: 8+ messages
2011-03-23 19:18 More ddf container woes Albert Pauw
2011-03-23 22:08 ` NeilBrown
2012-07-28 11:46   ` Version 3.2.5 and ddf issues (bugreport) Albert Pauw
2012-07-31  6:11     ` NeilBrown
2012-07-31  8:46       ` Albert Pauw
2012-08-02  0:05         ` NeilBrown
2012-08-14 23:31         ` NeilBrown
  -- strict thread matches above, loose matches on Subject: below --
2012-07-28 11:54 Albert Pauw
