linux-raid.vger.kernel.org archive mirror
* All spares in Raid 5, default chunk? version of mdadm significant?
       [not found] <CAK-d5dYwa69wcSGX-u97WmJbMceEX2D1wiXRea-mS+X+i9Dyyw@mail.gmail.com>
@ 2012-06-22 12:24 ` Anshuman Aggarwal
  2012-06-25  6:50 ` Anshuman Aggarwal
  1 sibling, 0 replies; 4+ messages in thread
From: Anshuman Aggarwal @ 2012-06-22 12:24 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: Type: text/plain, Size: 1836 bytes --]

Hi,
 I have a 6-device RAID 5 array in which one disk went bad; then, due to a
power outage, the machine shut down, and when it came back up all the disks
were showing up as spares, with the following mdadm -E output (a sample for
one device is given below; the full output for all devices is attached).

This md device was part of a Physical Volume for an LVM Volume Group.

I am trying to recreate the array with mdadm --create --assume-clean,
specifying one device as missing. I am checking whether the array is being
recreated correctly by comparing the UUID of the created device, which should
match the original if the creation is correct.
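
For illustration, the kind of attempt I have been making looks roughly like
this (the device order, chunk size and md device name are only guesses, and
comparing the LVM PV UUID with pvs is just one way of doing the check):

 mdadm --create /dev/md5 --assume-clean --metadata=1.2 \
     --level=5 --raid-devices=6 --chunk=64 \
     /dev/sda5 /dev/sdb5 /dev/sdc5 /dev/sdd5 /dev/sde5 missing
 # check whether LVM sees the original Physical Volume UUID afterwards
 pvs -o pv_name,pv_uuid /dev/md5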

I have tried a few combinations of the disk order that I believe is right;
however, I think I'm getting tripped up by the fact that the mdadm I used to
create this md device was some 2.x release and we are now on 3.x (and I may
have accepted some of the defaults originally, which I don't remember).
Which 'defaults' have changed across versions, so I can try those? For
example, the chunk size? Can the super/data offset be configured manually?
Are those significant when we do an mdadm --create --assume-clean?

Thanks,
Anshu

/dev/sda5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : b480fe0c:c9e29256:0fcf1b0c:1f8c762c
           Name : GATEWAY:RAID5_500G
  Creation Time : Wed Apr 28 16:10:43 2010
     Raid Level : -unknown-
   Raid Devices : 0

 Avail Dev Size : 976768002 (465.76 GiB 500.11 GB)
  Used Dev Size : 976765954 (465.76 GiB 500.10 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : active
    Device UUID : a8499a91:628ddde8:1cc8f4b9:749136f9

    Update Time : Sat May 19 23:04:23 2012
       Checksum : 9950883c - correct
         Events : 1


   Device Role : spare
   Array State :  ('A' == active, '.' == missing)

[-- Attachment #2: md5.txt --]
[-- Type: text/plain, Size: 3251 bytes --]

/dev/sda5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : b480fe0c:c9e29256:0fcf1b0c:1f8c762c
           Name : GATEWAY:RAID5_500G
  Creation Time : Wed Apr 28 16:10:43 2010
     Raid Level : -unknown-
   Raid Devices : 0

 Avail Dev Size : 976768002 (465.76 GiB 500.11 GB)
  Used Dev Size : 976765954 (465.76 GiB 500.10 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : active
    Device UUID : a8499a91:628ddde8:1cc8f4b9:749136f9

    Update Time : Sat May 19 23:04:23 2012
       Checksum : 9950883c - correct
         Events : 1


   Device Role : spare
   Array State :  ('A' == active, '.' == missing)
/dev/sdb5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : b480fe0c:c9e29256:0fcf1b0c:1f8c762c
           Name : GATEWAY:RAID5_500G
  Creation Time : Wed Apr 28 16:10:43 2010
     Raid Level : -unknown-
   Raid Devices : 0

 Avail Dev Size : 976765954 (465.76 GiB 500.10 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : active
    Device UUID : 6a8b21d9:db53ca34:5c9a1baa:5e5782ae

    Update Time : Sat May 19 23:04:23 2012
       Checksum : d237209e - correct
         Events : 1


   Device Role : spare
   Array State :  ('A' == active, '.' == missing)
/dev/sdc5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : b480fe0c:c9e29256:0fcf1b0c:1f8c762c
           Name : GATEWAY:RAID5_500G
  Creation Time : Wed Apr 28 16:10:43 2010
     Raid Level : -unknown-
   Raid Devices : 0

 Avail Dev Size : 976767730 (465.76 GiB 500.11 GB)
  Used Dev Size : 976765954 (465.76 GiB 500.10 GB)
    Data Offset : 272 sectors
   Super Offset : 8 sectors
          State : active
    Device UUID : f6a10896:453db503:6ee797f6:c3e82660

    Update Time : Sat May 19 23:04:23 2012
       Checksum : 5c29ff09 - correct
         Events : 1


   Device Role : spare
   Array State :  ('A' == active, '.' == missing)
/dev/sdd5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : b480fe0c:c9e29256:0fcf1b0c:1f8c762c
           Name : GATEWAY:RAID5_500G
  Creation Time : Wed Apr 28 16:10:43 2010
     Raid Level : -unknown-
   Raid Devices : 0

 Avail Dev Size : 976765954 (465.76 GiB 500.10 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : active
    Device UUID : 556895ef:d6837868:2cca2c17:419725e9

    Update Time : Sat May 19 23:04:23 2012
       Checksum : c40d9d33 - correct
         Events : 1


   Device Role : spare
   Array State :  ('A' == active, '.' == missing)
/dev/sde5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : b480fe0c:c9e29256:0fcf1b0c:1f8c762c
           Name : GATEWAY:RAID5_500G
  Creation Time : Wed Apr 28 16:10:43 2010
     Raid Level : -unknown-
   Raid Devices : 0

 Avail Dev Size : 976765954 (465.76 GiB 500.10 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : active
    Device UUID : e5e08b73:78b52879:09aa765e:17005a82

    Update Time : Sat May 19 23:04:23 2012
       Checksum : 3932901d - correct
         Events : 1


   Device Role : spare
   Array State :  ('A' == active, '.' == missing)


* Re: All spares in Raid 5, default chunk? version of mdadm significant?
       [not found] <CAK-d5dYwa69wcSGX-u97WmJbMceEX2D1wiXRea-mS+X+i9Dyyw@mail.gmail.com>
  2012-06-22 12:24 ` All spares in Raid 5, default chunk? version of mdadm significant? Anshuman Aggarwal
@ 2012-06-25  6:50 ` Anshuman Aggarwal
  2012-06-25  7:01   ` What was the default chunk size in previous versions of mdadm? Is there a way to set data/super offset? Trying to recreate a raid 5 md with all spares Anshuman Aggarwal
  2012-06-25 23:33   ` All spares in Raid 5, default chunk? version of mdadm significant? NeilBrown
  1 sibling, 2 replies; 4+ messages in thread
From: Anshuman Aggarwal @ 2012-06-25  6:50 UTC (permalink / raw)
  To: linux-raid

Hi all,
 I hate to bump a thread, but have I missed any information that would help me get help? :)

Neil, 
 as far as the default chunk sizes and mdadm versions are concerned, I am guessing you may be among the few who would know that conclusively. Also, could the data offset and super offset be significant?

Thanks,
Anshu

On 22-Jun-2012, at 5:50 PM, Anshuman Aggarwal wrote:

> Hi,
>  I have a 6-device RAID 5 array in which one disk went bad; then, due to a power outage, the machine shut down, and when it came back up all the disks were showing up as spares, with the following mdadm -E output (a sample for one device is given below; the full output for all devices is attached).
> 
> This md device was part of a Physical Volume for an LVM Volume Group.
> 
> I am trying to recreate the array with mdadm --create --assume-clean, specifying one device as missing. I am checking whether the array is being recreated correctly by comparing the UUID of the created device, which should match the original if the creation is correct.
> 
> I have tried a few combinations of the disk order that I believe is right; however, I think I'm getting tripped up by the fact that the mdadm I used to create this md device was some 2.x release and we are now on 3.x (and I may have accepted some of the defaults originally, which I don't remember).
> Which 'defaults' have changed across versions, so I can try those? For example, the chunk size? Can the super/data offset be configured manually? Are those significant when we do an mdadm --create --assume-clean?
> 
> Thanks,
> Anshu
> 
> /dev/sda5:
>           Magic : a92b4efc
>         Version : 1.2
>     Feature Map : 0x0
>      Array UUID : b480fe0c:c9e29256:0fcf1b0c:1f8c762c
>            Name : GATEWAY:RAID5_500G
>   Creation Time : Wed Apr 28 16:10:43 2010
>      Raid Level : -unknown-
>    Raid Devices : 0
> 
>  Avail Dev Size : 976768002 (465.76 GiB 500.11 GB)
>   Used Dev Size : 976765954 (465.76 GiB 500.10 GB)
>     Data Offset : 2048 sectors
>    Super Offset : 8 sectors
>           State : active
>     Device UUID : a8499a91:628ddde8:1cc8f4b9:749136f9
> 
>     Update Time : Sat May 19 23:04:23 2012
>        Checksum : 9950883c - correct
>          Events : 1
> 
> 
>    Device Role : spare
>    Array State :  ('A' == active, '.' == missing)
> 
> <md5.txt>



* What was the default chunk size in previous versions of mdadm? Is there a way to set data/super offset? Trying to recreate a raid 5 md with all spares
  2012-06-25  6:50 ` Anshuman Aggarwal
@ 2012-06-25  7:01   ` Anshuman Aggarwal
  2012-06-25 23:33   ` All spares in Raid 5, default chunk? version of mdadm significant? NeilBrown
  1 sibling, 0 replies; 4+ messages in thread
From: Anshuman Aggarwal @ 2012-06-25  7:01 UTC (permalink / raw)
  To: linux-raid

--- I hope the subject is clearer now, in line with the mailing list's expectations ---

Hi,
I have a 6-device RAID 5 array in which one disk went bad; then, due to a power outage, the machine shut down, and when it came back up all the disks were showing up as spares, with the following mdadm -E output (a sample for one device is given below; the full output for all devices is attached).

This md device was part of a Physical Volume for an LVM Volume Group.

I am trying to recreate the array with mdadm --create --assume-clean, specifying one device as missing. I am checking whether the array is being recreated correctly by comparing the UUID of the created device, which should match the original if the creation is correct.

I have tried a few combinations of the disk order that I believe is right; however, I think I'm getting tripped up by the fact that the mdadm I used to create this md device was some 2.x release and we are now on 3.x (and I may have accepted some of the defaults originally, which I don't remember).
Which 'defaults' have changed across versions, so I can try those? For example, the chunk size? Can the super/data offset be configured manually? Are those significant when we do an mdadm --create --assume-clean?

Thanks,
Anshu

/dev/sda5:
         Magic : a92b4efc
       Version : 1.2
   Feature Map : 0x0
    Array UUID : b480fe0c:c9e29256:0fcf1b0c:1f8c762c
          Name : GATEWAY:RAID5_500G
 Creation Time : Wed Apr 28 16:10:43 2010
    Raid Level : -unknown-
  Raid Devices : 0

Avail Dev Size : 976768002 (465.76 GiB 500.11 GB)
 Used Dev Size : 976765954 (465.76 GiB 500.10 GB)
   Data Offset : 2048 sectors
  Super Offset : 8 sectors
         State : active
   Device UUID : a8499a91:628ddde8:1cc8f4b9:749136f9

   Update Time : Sat May 19 23:04:23 2012
      Checksum : 9950883c - correct
        Events : 1


  Device Role : spare
  Array State :  ('A' == active, '.' == missing)

<md5.txt>




* Re: All spares in Raid 5, default chunk? version of mdadm significant?
  2012-06-25  6:50 ` Anshuman Aggarwal
  2012-06-25  7:01   ` What was the default chunk size in previous versions of mdadm? Is there a way to set data/super offset? Trying to recreate a raid 5 md with all spares Anshuman Aggarwal
@ 2012-06-25 23:33   ` NeilBrown
  1 sibling, 0 replies; 4+ messages in thread
From: NeilBrown @ 2012-06-25 23:33 UTC (permalink / raw)
  To: Anshuman Aggarwal; +Cc: linux-raid

[-- Attachment #1: Type: text/plain, Size: 5073 bytes --]

On Mon, 25 Jun 2012 12:20:06 +0530 Anshuman Aggarwal
<anshuman.aggarwal@gmail.com> wrote:

> Hi all,
>  I hate to bump a thread, but have I missed any information that would help me get help? :)

I would generally recommend waiting one week before re-posting.  Things
certainly do get dropped, so resending is important, but sometimes people go
away for the weekend or are otherwise busy. :-(

> 
> Neil, 
>  as far as the default chunk sizes and mdadm versions are concerned, I am guessing you may be among the few who would know that conclusively. Also, could the data offset and super offset be significant?

Well ... it is all in the source code in git.
A bit of hunting suggests the default chunk size changed from 64K to 512K
just after 3.1...
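
One rough way to do that hunting yourself (the search patterns below are only
examples, not pointers to the exact commits):

 git clone git://neil.brown.name/mdadm
 cd mdadm
 # commits whose messages mention the chunk size
 git log --oneline -i --grep='chunk'
 # or pickaxe-search for where a 512 default was introduced
 git log --oneline -S'512' -- '*.c' '*.h'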


> 
> Thanks,
> Anshu
> 
> On 22-Jun-2012, at 5:50 PM, Anshuman Aggarwal wrote:
> 
> > Hi,
> >  I have a 6-device RAID 5 array in which one disk went bad; then, due to a power outage, the machine shut down, and when it came back up all the disks were showing up as spares, with the following mdadm -E output (a sample for one device is given below; the full output for all devices is attached).

Yes ... sorry about that ... you've probably seen
http://neil.brown.name/blog/20120615073245

> > 
> > This md device was part of a Physical Volume for an LVM Volume Group.
> > 
> > I am trying to recreate the array with mdadm --create --assume-clean, specifying one device as missing. I am checking whether the array is being recreated correctly by comparing the UUID of the created device, which should match the original if the creation is correct.
> > 
> > I have tried a few combinations of the disk order that I believe is right; however, I think I'm getting tripped up by the fact that the mdadm I used to create this md device was some 2.x release and we are now on 3.x (and I may have accepted some of the defaults originally, which I don't remember).
> > Which 'defaults' have changed across versions, so I can try those? For example, the chunk size? Can the super/data offset be configured manually? Are those significant when we do an mdadm --create --assume-clean?

The chunk size was 64K with 2.x.  However, your array was created in April
2010, which is after 3.1.1 was released, so you might have been using 3.1.x?
The default metadata switched to 1.2 in February 2010 with mdadm-3.1.2.

You were definitely using 1.2 metadata, as that info wasn't destroyed
(which implies a super offset of 4K, i.e. 8 sectors).
So maybe you created the array with mdadm-3.1.2 or later, implying a default
chunk size of 512K.

The default data offset has changed a couple of times, and when you add a
spare it might get a different data offset than the rest of the array.
However, you appear to have the data offsets recorded in your metadata:

$ grep -E '(/dev/|Data Offset)' /tmp/md5.txt
/dev/sda5:
    Data Offset : 2048 sectors
/dev/sdb5:
    Data Offset : 2048 sectors
/dev/sdc5:
    Data Offset : 272 sectors
/dev/sdd5:
    Data Offset : 2048 sectors
/dev/sde5:
    Data Offset : 2048 sectors


'272' was the old default; 2048 is newer.

As you have differing data offsets, you cannot recreate the array with any
released version of mdadm.
If you run:

 git clone git://neil.brown.name/mdadm -b data_offset
 cd mdadm
 make

then try things like:

 ./mdadm -C /dev/md/RAID5_500G -l5 -n6 -c 512 --assume-clean \
     /dev/sda5:2048s /dev/sdb5:2048s missing /dev/sdc5:272s /dev/sdd5:2048s \
     /dev/sde5:2048s

then try to assemble the volume group and check the filesystem.
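
For example (the VG and LV names below are placeholders for whatever your
volume group actually contains; keep fsck read-only while you are still
guessing at the layout):

 pvscan
 vgchange -ay yourvg
 # read-only check - do not let fsck write anything yet
 fsck -n /dev/yourvg/yourlv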

I'm just guessing at the order and where to put the 'missing' device.  You
might know better - or might have a "RAID Conf printout" in recent kernel
logs which gives more hints.
The ":2048s" or ":272s" is specific to this "data_offset" version of mdadm
and tells it to set the data offset for that device to that many sectors.

You might need to try different permutations or different chunk sizes until
the VG assembles properly and 'fsck' reports everything is OK.

Good luck,

NeilBrown


> > 
> > Thanks,
> > Anshu
> > 
> > /dev/sda5:
> >           Magic : a92b4efc
> >         Version : 1.2
> >     Feature Map : 0x0
> >      Array UUID : b480fe0c:c9e29256:0fcf1b0c:1f8c762c
> >            Name : GATEWAY:RAID5_500G
> >   Creation Time : Wed Apr 28 16:10:43 2010
> >      Raid Level : -unknown-
> >    Raid Devices : 0
> > 
> >  Avail Dev Size : 976768002 (465.76 GiB 500.11 GB)
> >   Used Dev Size : 976765954 (465.76 GiB 500.10 GB)
> >     Data Offset : 2048 sectors
> >    Super Offset : 8 sectors
> >           State : active
> >     Device UUID : a8499a91:628ddde8:1cc8f4b9:749136f9
> > 
> >     Update Time : Sat May 19 23:04:23 2012
> >        Checksum : 9950883c - correct
> >          Events : 1
> > 
> > 
> >    Device Role : spare
> >    Array State :  ('A' == active, '.' == missing)
> > 
> > <md5.txt>
> 



