linux-raid.vger.kernel.org archive mirror
* raid6 issues
@ 2011-06-16 20:28 Chad Walker
  2011-06-18 19:48 ` Chad Walker
  0 siblings, 1 reply; 28+ messages in thread
From: Chad Walker @ 2011-06-16 20:28 UTC (permalink / raw)
  To: linux-raid

I have 15 drives in a raid6 plus a spare. I returned home after being
gone for 12 days and one of the drives was marked as faulty. The load
on the machine was crazy, and mdadm stopped responding. I should've done
an strace, sorry. Likewise cat'ing /proc/mdstat was blocking. I
rebooted and mdadm started recovering, but onto the faulty drive. I
checked in on /proc/mdstat periodically over the 35-hour recovery.
When it was down to the last bit, /proc/mdstat and mdadm stopped
responding again. I gave it 28 hours, and then when I still couldn't
get any insight into it I rebooted again. Now /proc/mdstat says it's
inactive. And I don't appear to be able to assemble it. I issued
--examine on each of the 16 drives and they all agreed with each other
except for the faulty drive. I popped the faulty drive out and
rebooted again, still no luck assembling.

This is what my /proc/mdstat looks like:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
[raid4] [raid10]
md1 : inactive sdd1[12](S) sdm1[6](S) sdf1[0](S) sdh1[2](S) sdi1[7](S)
sdb1[14](S) sdo1[4](S) sdg1[1](S) sdl1[8](S) sdk1[9](S) sdc1[13](S)
sdn1[3](S) sdj1[10](S) sdp1[15](S) sde1[11](S)
      29302715520 blocks

unused devices: <none>

This is what the --examine for /dev/sd[b-o]1 and /dev/sdq1 look like:
/dev/sdb1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 78e3f473:48bbfc34:0e051622:5c30970b
  Creation Time : Wed Mar 30 14:48:46 2011
     Raid Level : raid6
  Used Dev Size : 1953514368 (1863.02 GiB 2000.40 GB)
     Array Size : 25395686784 (24219.21 GiB 26005.18 GB)
   Raid Devices : 15
  Total Devices : 16
Preferred Minor : 1

    Update Time : Wed Jun 15 07:45:12 2011
          State : active
 Active Devices : 14
Working Devices : 15
 Failed Devices : 1
  Spare Devices : 1
       Checksum : e4ff038f - correct
         Events : 38452

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this    14       8       17       14      active sync   /dev/sdb1

   0     0       8       81        0      active sync   /dev/sdf1
   1     1       8       97        1      active sync   /dev/sdg1
   2     2       8      113        2      active sync   /dev/sdh1
   3     3       8      209        3      active sync   /dev/sdn1
   4     4       8      225        4      active sync   /dev/sdo1
   5     5       0        0        5      faulty removed
   6     6       8      193        6      active sync   /dev/sdm1
   7     7       8      129        7      active sync   /dev/sdi1
   8     8       8      177        8      active sync   /dev/sdl1
   9     9       8      161        9      active sync   /dev/sdk1
  10    10       8      145       10      active sync   /dev/sdj1
  11    11       8       65       11      active sync   /dev/sde1
  12    12       8       49       12      active sync   /dev/sdd1
  13    13       8       33       13      active sync   /dev/sdc1
  14    14       8       17       14      active sync   /dev/sdb1
  15    15      65        1       15      spare   /dev/sdq1

And this is what --examine for /dev/sdp1 looked like:
/dev/sdp1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 78e3f473:48bbfc34:0e051622:5c30970b
  Creation Time : Wed Mar 30 14:48:46 2011
     Raid Level : raid6
  Used Dev Size : 1953514368 (1863.02 GiB 2000.40 GB)
     Array Size : 25395686784 (24219.21 GiB 26005.18 GB)
   Raid Devices : 15
  Total Devices : 16
Preferred Minor : 1

    Update Time : Tue Jun 14 07:35:56 2011
          State : active
 Active Devices : 15
Working Devices : 16
 Failed Devices : 0
  Spare Devices : 1
       Checksum : e4fdb07b - correct
         Events : 38433

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     5       8      241        5      active sync   /dev/sdp1

   0     0       8       81        0      active sync   /dev/sdf1
   1     1       8       97        1      active sync   /dev/sdg1
   2     2       8      113        2      active sync   /dev/sdh1
   3     3       8      209        3      active sync   /dev/sdn1
   4     4       8      225        4      active sync   /dev/sdo1
   5     5       8      241        5      active sync   /dev/sdp1
   6     6       8      193        6      active sync   /dev/sdm1
   7     7       8      129        7      active sync   /dev/sdi1
   8     8       8      177        8      active sync   /dev/sdl1
   9     9       8      161        9      active sync   /dev/sdk1
  10    10       8      145       10      active sync   /dev/sdj1
  11    11       8       65       11      active sync   /dev/sde1
  12    12       8       49       12      active sync   /dev/sdd1
  13    13       8       33       13      active sync   /dev/sdc1
  14    14       8       17       14      active sync   /dev/sdb1
  15    15      65        1       15      spare   /dev/sdq1

I was scared to run mdadm --build --level=6 --raid-devices=15 /dev/md1
/dev/sdf1 /dev/sdg1....

system information:
Ubuntu 11.04, kernel 2.6.38, x86_64, mdadm version 3.1.4, 3ware 9650SE

Any advice? There's about 1TB of data on these drives that would cause
my wife to kill me (and about 9TB of data that would just irritate her to
lose).

-chad
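
For reference, mdadm --build is meant for legacy arrays that have no
per-device superblocks and it never writes metadata, so it is not the
right tool for members that already carry 0.90 superblocks like the
ones above. A gentler first step, assuming the member names from the
mdstat listing above are still current, is to stop the inactive array
and retry a forced assemble:

  mdadm --stop /dev/md1
  mdadm --assemble --force --verbose /dev/md1 /dev/sd[b-p]1

Unlike --create or --build, --assemble only brings an existing array
back together and updates the superblocks as needed; this is
essentially what is recommended later in the thread.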


* Re: raid6 issues
  2011-06-16 20:28 raid6 issues Chad Walker
@ 2011-06-18 19:48 ` Chad Walker
  2011-06-18 19:55   ` Chad Walker
  0 siblings, 1 reply; 28+ messages in thread
From: Chad Walker @ 2011-06-18 19:48 UTC (permalink / raw)
  To: linux-raid

Anyone? Please help. I've been searching for answers for the last five
days. The (S) after the drives in the /proc/mdstat means that it
thinks they are all spares? I've seen some mention of an
'--assume-clean' option but I can't find any documentation on it. I'm
running 3.1.4 (what apt-get got), but I see on Neil Brown's site that
in the release for 3.1.5 there are 'Fixes for "--assemble --force" in
various unusual cases' and 'Allow "--assemble --update=no-bitmap" so
an array with a corrupt bitmap can still be assembled', would either
of these be applicable in my case? I will build 3.1.5 and see if it
helps.

-chad
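
A side note on the two questions above: devices that belong to an
inactive array are listed in /proc/mdstat with "(S)" simply because
they have not been activated into their slots yet, so that output on
its own does not mean the superblocks record them as spares. And
--assume-clean is an option to --create/--build (it skips the initial
resync), not an assembly option. A quick way to compare what the
members themselves record, assuming the same /dev/sd[b-q]1 names as
above, is:

  mdadm --examine /dev/sd[b-q]1 | grep -E 'Events|Update Time|State'

Members whose event counts are close together can normally be brought
back with --assemble --force.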




On Thu, Jun 16, 2011 at 1:28 PM, Chad Walker
<chad@chad-cat-lore-eddie.com> wrote:
> I have 15 drives in a raid6 plus a spare. I returned home after being
> gone for 12 days and one of the drives was marked as faulty. The load
> on the machine was crazy, and mdadm stopped responding. I should've done
> an strace, sorry. Likewise cat'ing /proc/mdstat was blocking. I
> rebooted and mdadm started recovering, but onto the faulty drive. I
> checked in on /proc/mdstat periodically over the 35-hour recovery.
> When it was down to the last bit, /proc/mdstat and mdadm stopped
> responding again. I gave it 28 hours, and then when I still couldn't
> get any insight into it I rebooted again. Now /proc/mdstat says it's
> inactive. And I don't appear to be able to assemble it. I issued
> --examine on each of the 16 drives and they all agreed with each other
> except for the faulty drive. I popped the faulty drive out and
> rebooted again, still no luck assembling.
>
> This is what my /proc/mdstat looks like:
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
> [raid4] [raid10]
> md1 : inactive sdd1[12](S) sdm1[6](S) sdf1[0](S) sdh1[2](S) sdi1[7](S)
> sdb1[14](S) sdo1[4](S) sdg1[1](S) sdl1[8](S) sdk1[9](S) sdc1[13](S)
> sdn1[3](S) sdj1[10](S) sdp1[15](S) sde1[11](S)
>      29302715520 blocks
>
> unused devices: <none>
>
> This is what the --examine for /dev/sd[b-o]1 and /dev/sdq1 look like:
> /dev/sdb1:
>          Magic : a92b4efc
>        Version : 0.90.00
>           UUID : 78e3f473:48bbfc34:0e051622:5c30970b
>  Creation Time : Wed Mar 30 14:48:46 2011
>     Raid Level : raid6
>  Used Dev Size : 1953514368 (1863.02 GiB 2000.40 GB)
>     Array Size : 25395686784 (24219.21 GiB 26005.18 GB)
>   Raid Devices : 15
>  Total Devices : 16
> Preferred Minor : 1
>
>    Update Time : Wed Jun 15 07:45:12 2011
>          State : active
>  Active Devices : 14
> Working Devices : 15
>  Failed Devices : 1
>  Spare Devices : 1
>       Checksum : e4ff038f - correct
>         Events : 38452
>
>         Layout : left-symmetric
>     Chunk Size : 64K
>
>      Number   Major   Minor   RaidDevice State
> this    14       8       17       14      active sync   /dev/sdb1
>
>   0     0       8       81        0      active sync   /dev/sdf1
>   1     1       8       97        1      active sync   /dev/sdg1
>   2     2       8      113        2      active sync   /dev/sdh1
>   3     3       8      209        3      active sync   /dev/sdn1
>   4     4       8      225        4      active sync   /dev/sdo1
>   5     5       0        0        5      faulty removed
>   6     6       8      193        6      active sync   /dev/sdm1
>   7     7       8      129        7      active sync   /dev/sdi1
>   8     8       8      177        8      active sync   /dev/sdl1
>   9     9       8      161        9      active sync   /dev/sdk1
>  10    10       8      145       10      active sync   /dev/sdj1
>  11    11       8       65       11      active sync   /dev/sde1
>  12    12       8       49       12      active sync   /dev/sdd1
>  13    13       8       33       13      active sync   /dev/sdc1
>  14    14       8       17       14      active sync   /dev/sdb1
>  15    15      65        1       15      spare   /dev/sdq1
>
> And this is what --examine for /dev/sdp1 looked like:
> /dev/sdp1:
>          Magic : a92b4efc
>        Version : 0.90.00
>           UUID : 78e3f473:48bbfc34:0e051622:5c30970b
>  Creation Time : Wed Mar 30 14:48:46 2011
>     Raid Level : raid6
>  Used Dev Size : 1953514368 (1863.02 GiB 2000.40 GB)
>     Array Size : 25395686784 (24219.21 GiB 26005.18 GB)
>   Raid Devices : 15
>  Total Devices : 16
> Preferred Minor : 1
>
>    Update Time : Tue Jun 14 07:35:56 2011
>          State : active
>  Active Devices : 15
> Working Devices : 16
>  Failed Devices : 0
>  Spare Devices : 1
>       Checksum : e4fdb07b - correct
>         Events : 38433
>
>         Layout : left-symmetric
>     Chunk Size : 64K
>
>      Number   Major   Minor   RaidDevice State
> this     5       8      241        5      active sync   /dev/sdp1
>
>   0     0       8       81        0      active sync   /dev/sdf1
>   1     1       8       97        1      active sync   /dev/sdg1
>   2     2       8      113        2      active sync   /dev/sdh1
>   3     3       8      209        3      active sync   /dev/sdn1
>   4     4       8      225        4      active sync   /dev/sdo1
>   5     5       8      241        5      active sync   /dev/sdp1
>   6     6       8      193        6      active sync   /dev/sdm1
>   7     7       8      129        7      active sync   /dev/sdi1
>   8     8       8      177        8      active sync   /dev/sdl1
>   9     9       8      161        9      active sync   /dev/sdk1
>  10    10       8      145       10      active sync   /dev/sdj1
>  11    11       8       65       11      active sync   /dev/sde1
>  12    12       8       49       12      active sync   /dev/sdd1
>  13    13       8       33       13      active sync   /dev/sdc1
>  14    14       8       17       14      active sync   /dev/sdb1
>  15    15      65        1       15      spare   /dev/sdq1
>
> I was scared to run mdadm --build --level=6 --raid-devices=15 /dev/md1
> /dev/sdf1 /dev/sdg1....
>
> system information:
> Ubuntu 11.04, kernel 2.6.38, x86_64, mdadm version 3.1.4, 3ware 9650SE
>
> Any advice? There's about 1TB of data on these drives that would cause
> my wife to kill me (and about 9TB of data that would just irritate her to
> lose).
>
> -chad
>


* Re: raid6 issues
  2011-06-18 19:48 ` Chad Walker
@ 2011-06-18 19:55   ` Chad Walker
  2011-06-18 23:01     ` NeilBrown
  0 siblings, 1 reply; 28+ messages in thread
From: Chad Walker @ 2011-06-18 19:55 UTC (permalink / raw)
  To: linux-raid

also output from "mdadm --assemble --scan --verbose"

mdadm: looking for devices for /dev/md1
mdadm: cannot open device /dev/sdp1: Device or resource busy
mdadm: /dev/sdp1 has wrong uuid.
mdadm: cannot open device /dev/sdo1: Device or resource busy
mdadm: /dev/sdo1 has wrong uuid.
mdadm: cannot open device /dev/sdn1: Device or resource busy
mdadm: /dev/sdn1 has wrong uuid.
mdadm: cannot open device /dev/sdm1: Device or resource busy
mdadm: /dev/sdm1 has wrong uuid.
mdadm: cannot open device /dev/sdl1: Device or resource busy
mdadm: /dev/sdl1 has wrong uuid.
mdadm: cannot open device /dev/sdk1: Device or resource busy
mdadm: /dev/sdk1 has wrong uuid.
mdadm: cannot open device /dev/sdj1: Device or resource busy
mdadm: /dev/sdj1 has wrong uuid.
mdadm: cannot open device /dev/sdi1: Device or resource busy
mdadm: /dev/sdi1 has wrong uuid.
mdadm: cannot open device /dev/sdh1: Device or resource busy
mdadm: /dev/sdh1 has wrong uuid.
mdadm: cannot open device /dev/sdg1: Device or resource busy
mdadm: /dev/sdg1 has wrong uuid.
mdadm: cannot open device /dev/sdf1: Device or resource busy
mdadm: /dev/sdf1 has wrong uuid.
mdadm: cannot open device /dev/sde1: Device or resource busy
mdadm: /dev/sde1 has wrong uuid.
mdadm: cannot open device /dev/sdd1: Device or resource busy
mdadm: /dev/sdd1 has wrong uuid.
mdadm: cannot open device /dev/sdc1: Device or resource busy
mdadm: /dev/sdc1 has wrong uuid.
mdadm: cannot open device /dev/sdb1: Device or resource busy
mdadm: /dev/sdb1 has wrong uuid.


-chad




On Sat, Jun 18, 2011 at 12:48 PM, Chad Walker
<chad@chad-cat-lore-eddie.com> wrote:
> Anyone? Please help. I've been searching for answers for the last five
> days. The (S) after the drives in the /proc/mdstat means that it
> thinks they are all spares? I've seen some mention of an
> '--assume-clean' option but I can't find any documentation on it. I'm
> running 3.1.4 (what apt-get got), but I see on Neil Brown's site that
> in the release for 3.1.5 there are 'Fixes for "--assemble --force" in
> various unusual cases' and 'Allow "--assemble --update=no-bitmap" so
> an array with a corrupt bitmap can still be assembled', would either
> of these be applicable in my case? I will build 3.1.5 and see if it
> helps.
>
> -chad
>
>
>
>
> On Thu, Jun 16, 2011 at 1:28 PM, Chad Walker
> <chad@chad-cat-lore-eddie.com> wrote:
>> I have 15 drives in a raid6 plus a spare. I returned home after being
>> gone for 12 days and one of the drives was marked as faulty. The load
>> on the machine was crazy, and mdadm stopped responding. I should've done
>> an strace, sorry. Likewise cat'ing /proc/mdstat was blocking. I
>> rebooted and mdadm started recovering, but onto the faulty drive. I
>> checked in on /proc/mdstat periodically over the 35-hour recovery.
>> When it was down to the last bit, /proc/mdstat and mdadm stopped
>> responding again. I gave it 28 hours, and then when I still couldn't
>> get any insight into it I rebooted again. Now /proc/mdstat says it's
>> inactive. And I don't appear to be able to assemble it. I issued
>> --examine on each of the 16 drives and they all agreed with each other
>> except for the faulty drive. I popped the faulty drive out and
>> rebooted again, still no luck assembling.
>>
>> This is what my /proc/mdstat looks like:
>> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
>> [raid4] [raid10]
>> md1 : inactive sdd1[12](S) sdm1[6](S) sdf1[0](S) sdh1[2](S) sdi1[7](S)
>> sdb1[14](S) sdo1[4](S) sdg1[1](S) sdl1[8](S) sdk1[9](S) sdc1[13](S)
>> sdn1[3](S) sdj1[10](S) sdp1[15](S) sde1[11](S)
>>      29302715520 blocks
>>
>> unused devices: <none>
>>
>> This is what the --examine for /dev/sd[b-o]1 and /dev/sdq1 look like:
>> /dev/sdb1:
>>          Magic : a92b4efc
>>        Version : 0.90.00
>>           UUID : 78e3f473:48bbfc34:0e051622:5c30970b
>>  Creation Time : Wed Mar 30 14:48:46 2011
>>     Raid Level : raid6
>>  Used Dev Size : 1953514368 (1863.02 GiB 2000.40 GB)
>>     Array Size : 25395686784 (24219.21 GiB 26005.18 GB)
>>   Raid Devices : 15
>>  Total Devices : 16
>> Preferred Minor : 1
>>
>>    Update Time : Wed Jun 15 07:45:12 2011
>>          State : active
>>  Active Devices : 14
>> Working Devices : 15
>>  Failed Devices : 1
>>  Spare Devices : 1
>>       Checksum : e4ff038f - correct
>>         Events : 38452
>>
>>         Layout : left-symmetric
>>     Chunk Size : 64K
>>
>>      Number   Major   Minor   RaidDevice State
>> this    14       8       17       14      active sync   /dev/sdb1
>>
>>   0     0       8       81        0      active sync   /dev/sdf1
>>   1     1       8       97        1      active sync   /dev/sdg1
>>   2     2       8      113        2      active sync   /dev/sdh1
>>   3     3       8      209        3      active sync   /dev/sdn1
>>   4     4       8      225        4      active sync   /dev/sdo1
>>   5     5       0        0        5      faulty removed
>>   6     6       8      193        6      active sync   /dev/sdm1
>>   7     7       8      129        7      active sync   /dev/sdi1
>>   8     8       8      177        8      active sync   /dev/sdl1
>>   9     9       8      161        9      active sync   /dev/sdk1
>>  10    10       8      145       10      active sync   /dev/sdj1
>>  11    11       8       65       11      active sync   /dev/sde1
>>  12    12       8       49       12      active sync   /dev/sdd1
>>  13    13       8       33       13      active sync   /dev/sdc1
>>  14    14       8       17       14      active sync   /dev/sdb1
>>  15    15      65        1       15      spare   /dev/sdq1
>>
>> And this is what --examine for /dev/sdp1 looked like:
>> /dev/sdp1:
>>          Magic : a92b4efc
>>        Version : 0.90.00
>>           UUID : 78e3f473:48bbfc34:0e051622:5c30970b
>>  Creation Time : Wed Mar 30 14:48:46 2011
>>     Raid Level : raid6
>>  Used Dev Size : 1953514368 (1863.02 GiB 2000.40 GB)
>>     Array Size : 25395686784 (24219.21 GiB 26005.18 GB)
>>   Raid Devices : 15
>>  Total Devices : 16
>> Preferred Minor : 1
>>
>>    Update Time : Tue Jun 14 07:35:56 2011
>>          State : active
>>  Active Devices : 15
>> Working Devices : 16
>>  Failed Devices : 0
>>  Spare Devices : 1
>>       Checksum : e4fdb07b - correct
>>         Events : 38433
>>
>>         Layout : left-symmetric
>>     Chunk Size : 64K
>>
>>      Number   Major   Minor   RaidDevice State
>> this     5       8      241        5      active sync   /dev/sdp1
>>
>>   0     0       8       81        0      active sync   /dev/sdf1
>>   1     1       8       97        1      active sync   /dev/sdg1
>>   2     2       8      113        2      active sync   /dev/sdh1
>>   3     3       8      209        3      active sync   /dev/sdn1
>>   4     4       8      225        4      active sync   /dev/sdo1
>>   5     5       8      241        5      active sync   /dev/sdp1
>>   6     6       8      193        6      active sync   /dev/sdm1
>>   7     7       8      129        7      active sync   /dev/sdi1
>>   8     8       8      177        8      active sync   /dev/sdl1
>>   9     9       8      161        9      active sync   /dev/sdk1
>>  10    10       8      145       10      active sync   /dev/sdj1
>>  11    11       8       65       11      active sync   /dev/sde1
>>  12    12       8       49       12      active sync   /dev/sdd1
>>  13    13       8       33       13      active sync   /dev/sdc1
>>  14    14       8       17       14      active sync   /dev/sdb1
>>  15    15      65        1       15      spare   /dev/sdq1
>>
>> I was scared to run mdadm --build --level=6 --raid-devices=15 /dev/md1
>> /dev/sdf1 /dev/sdg1....
>>
>> system information:
>> Ubuntu 11.04, kernel 2.6.38, x86_64, mdadm version 3.1.4, 3ware 9650SE
>>
>> Any advice? There's about 1TB of data on these drives that would cause
>> my wife to kill me (and about 9TB of data that would just irritate her to
>> lose).
>>
>> -chad
>>
>


* Re: raid6 issues
  2011-06-18 19:55   ` Chad Walker
@ 2011-06-18 23:01     ` NeilBrown
  2011-06-18 23:14       ` Chad Walker
  0 siblings, 1 reply; 28+ messages in thread
From: NeilBrown @ 2011-06-18 23:01 UTC (permalink / raw)
  To: Chad Walker; +Cc: linux-raid

On Sat, 18 Jun 2011 12:55:12 -0700 Chad Walker <chad@chad-cat-lore-eddie.com>
wrote:

> also output from "mdadm --assemble --scan --verbose"
> 
> mdadm: looking for devices for /dev/md1
> mdadm: cannot open device /dev/sdp1: Device or resource busy
> mdadm: /dev/sdp1 has wrong uuid.
> mdadm: cannot open device /dev/sdo1: Device or resource busy
> mdadm: /dev/sdo1 has wrong uuid.
> mdadm: cannot open device /dev/sdn1: Device or resource busy
> mdadm: /dev/sdn1 has wrong uuid.
> mdadm: cannot open device /dev/sdm1: Device or resource busy
> mdadm: /dev/sdm1 has wrong uuid.
> mdadm: cannot open device /dev/sdl1: Device or resource busy
> mdadm: /dev/sdl1 has wrong uuid.
> mdadm: cannot open device /dev/sdk1: Device or resource busy
> mdadm: /dev/sdk1 has wrong uuid.
> mdadm: cannot open device /dev/sdj1: Device or resource busy
> mdadm: /dev/sdj1 has wrong uuid.
> mdadm: cannot open device /dev/sdi1: Device or resource busy
> mdadm: /dev/sdi1 has wrong uuid.
> mdadm: cannot open device /dev/sdh1: Device or resource busy
> mdadm: /dev/sdh1 has wrong uuid.
> mdadm: cannot open device /dev/sdg1: Device or resource busy
> mdadm: /dev/sdg1 has wrong uuid.
> mdadm: cannot open device /dev/sdf1: Device or resource busy
> mdadm: /dev/sdf1 has wrong uuid.
> mdadm: cannot open device /dev/sde1: Device or resource busy
> mdadm: /dev/sde1 has wrong uuid.
> mdadm: cannot open device /dev/sdd1: Device or resource busy
> mdadm: /dev/sdd1 has wrong uuid.
> mdadm: cannot open device /dev/sdc1: Device or resource busy
> mdadm: /dev/sdc1 has wrong uuid.
> mdadm: cannot open device /dev/sdb1: Device or resource busy
> mdadm: /dev/sdb1 has wrong uuid.
> 
> 

Try

  mdadm -Ss
then the above --assemble with --force added.

NeilBrown
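
Spelled out, and assuming the same device names as earlier in the
thread, the suggestion amounts to:

  mdadm --stop --scan                # same as 'mdadm -Ss'
  mdadm --assemble --scan --verbose --force

Stopping the half-assembled, inactive /dev/md1 releases the member
devices (the cause of the "Device or resource busy" errors above),
and --force lets mdadm accept the one member whose event count fell
slightly behind (38433 vs 38452 in the --examine output).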


* Re: raid6 issues
  2011-06-18 23:01     ` NeilBrown
@ 2011-06-18 23:14       ` Chad Walker
  0 siblings, 0 replies; 28+ messages in thread
From: Chad Walker @ 2011-06-18 23:14 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid

Oh thank you! I guess I figured since the array was inactive, it was
stopped... rebuilding onto the spare now.

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
[raid4] [raid10]
md1 : active raid6 sdp1[15] sdf1[0] sdb1[14] sdc1[13] sdd1[12]
sde1[11] sdj1[10] sdk1[9] sdl1[8] sdi1[7] sdm1[6] sdo1[4] sdn1[3]
sdh1[2] sdg1[1]
      25395686784 blocks level 6, 64k chunk, algorithm 2 [15/14]
[UUUUU_UUUUUUUUU]
      [>....................]  recovery =  0.1% (2962644/1953514368)
finish=1351.7min speed=24049K/sec

unused devices: <none>

-chad
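
If it is useful, the rebuild can be watched with either of the
following (assuming the array is /dev/md1):

  watch -n 60 cat /proc/mdstat
  mdadm --detail /dev/md1

When recovery finishes, the status line above should change from
[15/14] [UUUUU_UUUUUUUUU] to [15/15] [UUUUUUUUUUUUUUU].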




On Sat, Jun 18, 2011 at 6:01 PM, NeilBrown <neilb@suse.de> wrote:
> On Sat, 18 Jun 2011 12:55:12 -0700 Chad Walker <chad@chad-cat-lore-eddie.com>
> wrote:
>
>> also output from "mdadm --assemble --scan --verbose"
>>
>> mdadm: looking for devices for /dev/md1
>> mdadm: cannot open device /dev/sdp1: Device or resource busy
>> mdadm: /dev/sdp1 has wrong uuid.
>> mdadm: cannot open device /dev/sdo1: Device or resource busy
>> mdadm: /dev/sdo1 has wrong uuid.
>> mdadm: cannot open device /dev/sdn1: Device or resource busy
>> mdadm: /dev/sdn1 has wrong uuid.
>> mdadm: cannot open device /dev/sdm1: Device or resource busy
>> mdadm: /dev/sdm1 has wrong uuid.
>> mdadm: cannot open device /dev/sdl1: Device or resource busy
>> mdadm: /dev/sdl1 has wrong uuid.
>> mdadm: cannot open device /dev/sdk1: Device or resource busy
>> mdadm: /dev/sdk1 has wrong uuid.
>> mdadm: cannot open device /dev/sdj1: Device or resource busy
>> mdadm: /dev/sdj1 has wrong uuid.
>> mdadm: cannot open device /dev/sdi1: Device or resource busy
>> mdadm: /dev/sdi1 has wrong uuid.
>> mdadm: cannot open device /dev/sdh1: Device or resource busy
>> mdadm: /dev/sdh1 has wrong uuid.
>> mdadm: cannot open device /dev/sdg1: Device or resource busy
>> mdadm: /dev/sdg1 has wrong uuid.
>> mdadm: cannot open device /dev/sdf1: Device or resource busy
>> mdadm: /dev/sdf1 has wrong uuid.
>> mdadm: cannot open device /dev/sde1: Device or resource busy
>> mdadm: /dev/sde1 has wrong uuid.
>> mdadm: cannot open device /dev/sdd1: Device or resource busy
>> mdadm: /dev/sdd1 has wrong uuid.
>> mdadm: cannot open device /dev/sdc1: Device or resource busy
>> mdadm: /dev/sdc1 has wrong uuid.
>> mdadm: cannot open device /dev/sdb1: Device or resource busy
>> mdadm: /dev/sdb1 has wrong uuid.
>>
>>
>
> Try
>
>  mdadm -Ss
> then the above --assemble with --force added.
>
> NeilBrown
>


* RAID6 issues
@ 2011-09-13  6:14 Andriano
  2011-09-13  6:25 ` NeilBrown
  2011-09-27 18:46 ` Thomas Fjellstrom
  0 siblings, 2 replies; 28+ messages in thread
From: Andriano @ 2011-09-13  6:14 UTC (permalink / raw)
  To: linux-raid

Hello Linux-RAID mailing list,

I have an issue with my RAID6 array.
Here goes a short description of the system:

opensuse 11.4
Linux 3.0.4-2-desktop #1 SMP PREEMPT Wed Aug 31 09:30:44 UTC 2011
(a432f18) x86_64 x86_64 x86_64 GNU/Linux
Gigabyte EP35C-DS3 motherboard with 8 SATA ports + SuperMicro
AOC-SASLP-MV8 based on Marvel 6480, firmware updated to 3.1.0.21
running mdadm 3.2.2, single array consists of 10 2T disks, 8 of them
connected to the HBA, 2 - motherboard ports

I had some issues with one of the onboard connected disks, so tried to
plug it to different ports, just to eliminate possibly faulty port.
After reboot, suddenly other drives got kicked out from the array.
Re-assembling them gives weird errors.

--- some output ---
[3:0:0:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdb
[5:0:0:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdc
[8:0:0:0]    disk    ATA      ST32000542AS     CC34  /dev/sdd
[8:0:1:0]    disk    ATA      ST32000542AS     CC34  /dev/sde
[8:0:2:0]    disk    ATA      ST32000542AS     CC34  /dev/sdf
[8:0:3:0]    disk    ATA      ST32000542AS     CC34  /dev/sdg
[8:0:4:0]    disk    ATA      ST32000542AS     CC34  /dev/sdh
[8:0:5:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdi
[8:0:6:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdj
[8:0:7:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdk

#more /etc/mdadm.conf
DEVICE partitions
ARRAY /dev/md0 level=raid6 UUID=82ac7386:a854194d:81b795d1:76c9c9ff

#mdadm --assemble --force --scan /dev/md0
mdadm: failed to add /dev/sdc to /dev/md0: Invalid argument
mdadm: failed to add /dev/sdb to /dev/md0: Invalid argument
mdadm: failed to add /dev/sdh to /dev/md0: Invalid argument
mdadm: /dev/md0 assembled from 7 drives - not enough to start the array.

dmesg:
[ 8215.651860] md: sdc does not have a valid v1.2 superblock, not importing!
[ 8215.651865] md: md_import_device returned -22
[ 8215.652384] md: sdb does not have a valid v1.2 superblock, not importing!
[ 8215.652388] md: md_import_device returned -22
[ 8215.653177] md: sdh does not have a valid v1.2 superblock, not importing!
[ 8215.653182] md: md_import_device returned -22

mdadm -E /dev/sd[b..k] gives exactly the same Magic number and Array
UUID for every disk, all checksums are correct,
the only difference is -  Avail Dev Size : 3907028896 is the same for
9 disks, and 3907028864 for sdc

mdadm --assemble --force --update summaries /dev/sd.. - didn't improve anything


I would really appreciate if someone could point me to the right direction.

thanks

Andrew
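
A note on the dmesg lines above: error -22 is EINVAL, i.e. the kernel
refused the superblock at import time, and a common reason for that
with v1.2 metadata is a device that now looks smaller than the sizes
recorded in its superblock. Comparing the size-related fields across
members is a cheap first check (the /dev/sd[b-k] names are taken from
the listing above):

  mdadm -E /dev/sd[b-k] | grep -E '^/dev/|Avail Dev Size|Data Offset|Events'

The full per-device -E output is exactly what gets requested in the
replies that follow.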


* Re: RAID6 issues
  2011-09-13  6:14 RAID6 issues Andriano
@ 2011-09-13  6:25 ` NeilBrown
  2011-09-13  6:33   ` Andriano
  2011-09-27 18:46 ` Thomas Fjellstrom
  1 sibling, 1 reply; 28+ messages in thread
From: NeilBrown @ 2011-09-13  6:25 UTC (permalink / raw)
  To: Andriano; +Cc: linux-raid


On Tue, 13 Sep 2011 16:14:51 +1000 Andriano <chief000@gmail.com> wrote:

> Hello Linux-RAID mailing list,
> 
> I have an issue with my RAID6 array.
> Here goes a short description of the system:
> 
> opensuse 11.4
> Linux 3.0.4-2-desktop #1 SMP PREEMPT Wed Aug 31 09:30:44 UTC 2011
> (a432f18) x86_64 x86_64 x86_64 GNU/Linux
> Gigabyte EP35C-DS3 motherboard with 8 SATA ports + SuperMicro
> AOC-SASLP-MV8 based on Marvel 6480, firmware updated to 3.1.0.21
> running mdadm 3.2.2, single array consists of 10 2T disks, 8 of them
> connected to the HBA, 2 - motherboard ports
> 
> I had some issues with one of the onboard connected disks, so tried to
> plug it to different ports, just to eliminate possibly faulty port.
> After reboot, suddenly other drives got kicked out from the array.
> Re-assembling them gives weird errors.
> 
> --- some output ---
> [3:0:0:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdb
> [5:0:0:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdc
> [8:0:0:0]    disk    ATA      ST32000542AS     CC34  /dev/sdd
> [8:0:1:0]    disk    ATA      ST32000542AS     CC34  /dev/sde
> [8:0:2:0]    disk    ATA      ST32000542AS     CC34  /dev/sdf
> [8:0:3:0]    disk    ATA      ST32000542AS     CC34  /dev/sdg
> [8:0:4:0]    disk    ATA      ST32000542AS     CC34  /dev/sdh
> [8:0:5:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdi
> [8:0:6:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdj
> [8:0:7:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdk
> 
> #more /etc/mdadm.conf
> DEVICE partitions
> ARRAY /dev/md0 level=raid6 UUID=82ac7386:a854194d:81b795d1:76c9c9ff
> 
> #mdadm --assemble --force --scan /dev/md0
> mdadm: failed to add /dev/sdc to /dev/md0: Invalid argument
> mdadm: failed to add /dev/sdb to /dev/md0: Invalid argument
> mdadm: failed to add /dev/sdh to /dev/md0: Invalid argument
> mdadm: /dev/md0 assembled from 7 drives - not enough to start the array.
> 
> dmesg:
> [ 8215.651860] md: sdc does not have a valid v1.2 superblock, not importing!
> [ 8215.651865] md: md_import_device returned -22
> [ 8215.652384] md: sdb does not have a valid v1.2 superblock, not importing!
> [ 8215.652388] md: md_import_device returned -22
> [ 8215.653177] md: sdh does not have a valid v1.2 superblock, not importing!
> [ 8215.653182] md: md_import_device returned -22
> 
> mdadm -E /dev/sd[b..k] gives exactly the same Magic number and Array
> UUID for every disk, all checksums are correct,
> the only difference is -  Avail Dev Size : 3907028896 is the same for
> 9 disks, and 3907028864 for sdc

Please provide that output so we can see it too - it might be helpful.

NeilBrown

> 
> mdadm --assemble --force --update summaries /dev/sd.. - didn't improve anything
> 
> 
> I would really appreciate if someone could point me to the right direction.
> 
> thanks
> 
> Andrew




* Re: RAID6 issues
  2011-09-13  6:25 ` NeilBrown
@ 2011-09-13  6:33   ` Andriano
  2011-09-13  6:44     ` NeilBrown
  0 siblings, 1 reply; 28+ messages in thread
From: Andriano @ 2011-09-13  6:33 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid

>
>> Hello Linux-RAID mailing list,
>>
>> I have an issue with my RAID6 array.
>> Here goes a short description of the system:
>>
>> opensuse 11.4
>> Linux 3.0.4-2-desktop #1 SMP PREEMPT Wed Aug 31 09:30:44 UTC 2011
>> (a432f18) x86_64 x86_64 x86_64 GNU/Linux
>> Gigabyte EP35C-DS3 motherboard with 8 SATA ports + SuperMicro
>> AOC-SASLP-MV8 based on Marvel 6480, firmware updated to 3.1.0.21
>> running mdadm 3.2.2, single array consists of 10 2T disks, 8 of them
>> connected to the HBA, 2 - motherboard ports
>>
>> I had some issues with one of the onboard connected disks, so tried to
>> plug it to different ports, just to eliminate possibly faulty port.
>> After reboot, suddenly other drives got kicked out from the array.
>> Re-assembling them gives weird errors.
>>
>> --- some output ---
>> [3:0:0:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdb
>> [5:0:0:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdc
>> [8:0:0:0]    disk    ATA      ST32000542AS     CC34  /dev/sdd
>> [8:0:1:0]    disk    ATA      ST32000542AS     CC34  /dev/sde
>> [8:0:2:0]    disk    ATA      ST32000542AS     CC34  /dev/sdf
>> [8:0:3:0]    disk    ATA      ST32000542AS     CC34  /dev/sdg
>> [8:0:4:0]    disk    ATA      ST32000542AS     CC34  /dev/sdh
>> [8:0:5:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdi
>> [8:0:6:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdj
>> [8:0:7:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdk
>>
>> #more /etc/mdadm.conf
>> DEVICE partitions
>> ARRAY /dev/md0 level=raid6 UUID=82ac7386:a854194d:81b795d1:76c9c9ff
>>
>> #mdadm --assemble --force --scan /dev/md0
>> mdadm: failed to add /dev/sdc to /dev/md0: Invalid argument
>> mdadm: failed to add /dev/sdb to /dev/md0: Invalid argument
>> mdadm: failed to add /dev/sdh to /dev/md0: Invalid argument
>> mdadm: /dev/md0 assembled from 7 drives - not enough to start the array.
>>
>> dmesg:
>> [ 8215.651860] md: sdc does not have a valid v1.2 superblock, not importing!
>> [ 8215.651865] md: md_import_device returned -22
>> [ 8215.652384] md: sdb does not have a valid v1.2 superblock, not importing!
>> [ 8215.652388] md: md_import_device returned -22
>> [ 8215.653177] md: sdh does not have a valid v1.2 superblock, not importing!
>> [ 8215.653182] md: md_import_device returned -22
>>
>> mdadm -E /dev/sd[b..k] gives exactly the same Magic number and Array
>> UUID for every disk, all checksums are correct,
>> the only difference is -  Avail Dev Size : 3907028896 is the same for
>> 9 disks, and 3907028864 for sdc
>
> Please provide that output so we can see it too - it might be helpful.
>
> NeilBrown


# mdadm --assemble --force --update summaries /dev/md0 /dev/sdc
mdadm: --update=summaries not understood for 1.x metadata


>
>>
>> mdadm --assemble --force --update summaries /dev/sd.. - didn't improve anything
>>
>>
>> I would really appreciate if someone could point me to the right direction.
>>
>> thanks
>>
>> Andrew
>
>


* Re: RAID6 issues
  2011-09-13  6:33   ` Andriano
@ 2011-09-13  6:44     ` NeilBrown
  2011-09-13  7:05       ` Andriano
  0 siblings, 1 reply; 28+ messages in thread
From: NeilBrown @ 2011-09-13  6:44 UTC (permalink / raw)
  To: Andriano; +Cc: linux-raid


On Tue, 13 Sep 2011 16:33:36 +1000 Andriano <chief000@gmail.com> wrote:

> >
> >> Hello Linux-RAID mailing list,
> >>
> >> I have an issue with my RAID6 array.
> >> Here goes a short description of the system:
> >>
> >> opensuse 11.4
> >> Linux 3.0.4-2-desktop #1 SMP PREEMPT Wed Aug 31 09:30:44 UTC 2011
> >> (a432f18) x86_64 x86_64 x86_64 GNU/Linux
> >> Gigabyte EP35C-DS3 motherboard with 8 SATA ports + SuperMicro
> >> AOC-SASLP-MV8 based on Marvel 6480, firmware updated to 3.1.0.21
> >> running mdadm 3.2.2, single array consists of 10 2T disks, 8 of them
> >> connected to the HBA, 2 - motherboard ports
> >>
> >> I had some issues with one of the onboard connected disks, so tried to
> >> plug it to different ports, just to eliminate possibly faulty port.
> >> After reboot, suddenly other drives got kicked out from the array.
> >> Re-assembling them gives weird errors.
> >>
> >> --- some output ---
> >> [3:0:0:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdb
> >> [5:0:0:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdc
> >> [8:0:0:0]    disk    ATA      ST32000542AS     CC34  /dev/sdd
> >> [8:0:1:0]    disk    ATA      ST32000542AS     CC34  /dev/sde
> >> [8:0:2:0]    disk    ATA      ST32000542AS     CC34  /dev/sdf
> >> [8:0:3:0]    disk    ATA      ST32000542AS     CC34  /dev/sdg
> >> [8:0:4:0]    disk    ATA      ST32000542AS     CC34  /dev/sdh
> >> [8:0:5:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdi
> >> [8:0:6:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdj
> >> [8:0:7:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdk
> >>
> >> #more /etc/mdadm.conf
> >> DEVICE partitions
> >> ARRAY /dev/md0 level=raid6 UUID=82ac7386:a854194d:81b795d1:76c9c9ff
> >>
> >> #mdadm --assemble --force --scan /dev/md0
> >> mdadm: failed to add /dev/sdc to /dev/md0: Invalid argument
> >> mdadm: failed to add /dev/sdb to /dev/md0: Invalid argument
> >> mdadm: failed to add /dev/sdh to /dev/md0: Invalid argument
> >> mdadm: /dev/md0 assembled from 7 drives - not enough to start the array.
> >>
> >> dmesg:
> >> [ 8215.651860] md: sdc does not have a valid v1.2 superblock, not importing!
> >> [ 8215.651865] md: md_import_device returned -22
> >> [ 8215.652384] md: sdb does not have a valid v1.2 superblock, not importing!
> >> [ 8215.652388] md: md_import_device returned -22
> >> [ 8215.653177] md: sdh does not have a valid v1.2 superblock, not importing!
> >> [ 8215.653182] md: md_import_device returned -22
> >>
> >> mdadm -E /dev/sd[b..k] gives exactly the same Magic number and Array
> >> UUID for every disk, all checksums are correct,
> >> the only difference is -  Avail Dev Size : 3907028896 is the same for
> >> 9 disks, and 3907028864 for sdc
> >
> > Please provide that output so we can see it too - it might be helpful.
> >
> > NeilBrown
> 
> 
> # mdadm --assemble --force --update summaries /dev/md0 /dev/sdc
> mdadm: --update=summaries not understood for 1.x metadata
> 

Sorry - I was too terse.

I meant that output of "mdadm -E ...."

NeilBrown


> 
> >
> >>
> >> mdadm --assemble --force --update summaries /dev/sd.. - didn't improve anything
> >>
> >>
> >> I would really appreciate if someone could point me to the right direction.
> >>
> >> thanks
> >>
> >> Andrew
> >
> >




* Re: RAID6 issues
  2011-09-13  6:44     ` NeilBrown
@ 2011-09-13  7:05       ` Andriano
  2011-09-13  7:38         ` NeilBrown
  0 siblings, 1 reply; 28+ messages in thread
From: Andriano @ 2011-09-13  7:05 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid

On Tue, Sep 13, 2011 at 4:44 PM, NeilBrown <neilb@suse.de> wrote:
> On Tue, 13 Sep 2011 16:33:36 +1000 Andriano <chief000@gmail.com> wrote:
>
>> >
>> >> Hello Linux-RAID mailing list,
>> >>
>> >> I have an issue with my RAID6 array.
>> >> Here goes a short description of the system:
>> >>
>> >> opensuse 11.4
>> >> Linux 3.0.4-2-desktop #1 SMP PREEMPT Wed Aug 31 09:30:44 UTC 2011
>> >> (a432f18) x86_64 x86_64 x86_64 GNU/Linux
>> >> Gigabyte EP35C-DS3 motherboard with 8 SATA ports + SuperMicro
>> >> AOC-SASLP-MV8 based on Marvel 6480, firmware updated to 3.1.0.21
>> >> running mdadm 3.2.2, single array consists of 10 2T disks, 8 of them
>> >> connected to the HBA, 2 - motherboard ports
>> >>
>> >> I had some issues with one of the onboard connected disks, so tried to
>> >> plug it to different ports, just to eliminate possibly faulty port.
>> >> After reboot, suddenly other drives got kicked out from the array.
>> >> Re-assembling them gives weird errors.
>> >>
>> >> --- some output ---
>> >> [3:0:0:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdb
>> >> [5:0:0:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdc
>> >> [8:0:0:0]    disk    ATA      ST32000542AS     CC34  /dev/sdd
>> >> [8:0:1:0]    disk    ATA      ST32000542AS     CC34  /dev/sde
>> >> [8:0:2:0]    disk    ATA      ST32000542AS     CC34  /dev/sdf
>> >> [8:0:3:0]    disk    ATA      ST32000542AS     CC34  /dev/sdg
>> >> [8:0:4:0]    disk    ATA      ST32000542AS     CC34  /dev/sdh
>> >> [8:0:5:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdi
>> >> [8:0:6:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdj
>> >> [8:0:7:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdk
>> >>
>> >> #more /etc/mdadm.conf
>> >> DEVICE partitions
>> >> ARRAY /dev/md0 level=raid6 UUID=82ac7386:a854194d:81b795d1:76c9c9ff
>> >>
>> >> #mdadm --assemble --force --scan /dev/md0
>> >> mdadm: failed to add /dev/sdc to /dev/md0: Invalid argument
>> >> mdadm: failed to add /dev/sdb to /dev/md0: Invalid argument
>> >> mdadm: failed to add /dev/sdh to /dev/md0: Invalid argument
>> >> mdadm: /dev/md0 assembled from 7 drives - not enough to start the array.
>> >>
>> >> dmesg:
>> >> [ 8215.651860] md: sdc does not have a valid v1.2 superblock, not importing!
>> >> [ 8215.651865] md: md_import_device returned -22
>> >> [ 8215.652384] md: sdb does not have a valid v1.2 superblock, not importing!
>> >> [ 8215.652388] md: md_import_device returned -22
>> >> [ 8215.653177] md: sdh does not have a valid v1.2 superblock, not importing!
>> >> [ 8215.653182] md: md_import_device returned -22
>> >>
>> >> mdadm -E /dev/sd[b..k] gives exactly the same Magic number and Array
>> >> UUID for every disk, all checksums are correct,
>> >> the only difference is -  Avail Dev Size : 3907028896 is the same for
>> >> 9 disks, and 3907028864 for sdc
>> >
>> > Please provide that output so we can see it too - it might be helpful.
>> >
>> > NeilBrown
>>
>>
>> # mdadm --assemble --force --update summaries /dev/md0 /dev/sdc
>> mdadm: --update=summaries not understood for 1.x metadata
>>
>
> Sorry - I was too terse.
>
> I meant that output of "mdadm -E ...."
>
> NeilBrown
>
>
>>
>> >
>> >>
>> >> mdadm --assemble --force --update summaries /dev/sd.. - didn't improve anything
>> >>
>> >>
>> >> I would really appreciate if someone could point me to the right direction.
>> >>
>> >> thanks
>> >>
>> >> Andrew
>> >
>> >
>
>

/dev/sdb:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 82ac7386:a854194d:81b795d1:76c9c9ff
           Name : hnas:0  (local to host hnas)
  Creation Time : Wed Jan 19 21:17:33 2011
     Raid Level : raid6
   Raid Devices : 10

 Avail Dev Size : 3907028896 (1863.02 GiB 2000.40 GB)
     Array Size : 31256230912 (14904.13 GiB 16003.19 GB)
  Used Dev Size : 3907028864 (1863.02 GiB 2000.40 GB)
    Data Offset : 272 sectors
   Super Offset : 8 sectors
          State : active
    Device UUID : 4b31edb8:531a4c14:50c954a2:8eda453b

    Update Time : Mon Sep 12 22:36:35 2011
       Checksum : 205f92e1 - correct
         Events : 6446662

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 6
   Array State : AAAAAAAAAA ('A' == active, '.' == missing)
/dev/sdc:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 82ac7386:a854194d:81b795d1:76c9c9ff
           Name : hnas:0  (local to host hnas)
  Creation Time : Wed Jan 19 21:17:33 2011
     Raid Level : raid6
   Raid Devices : 10

 Avail Dev Size : 3907028864 (1863.02 GiB 2000.40 GB)
     Array Size : 31256230912 (14904.13 GiB 16003.19 GB)
    Data Offset : 304 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : afa2f348:88bd0376:29bcfe96:df32a522

    Update Time : Tue Sep 13 11:50:18 2011
       Checksum : ee1facae - correct
         Events : 6446662

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 5
   Array State : AAAAAA.AAA ('A' == active, '.' == missing)
/dev/sdd:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 82ac7386:a854194d:81b795d1:76c9c9ff
           Name : hnas:0  (local to host hnas)
  Creation Time : Wed Jan 19 21:17:33 2011
     Raid Level : raid6
   Raid Devices : 10

 Avail Dev Size : 3907028896 (1863.02 GiB 2000.40 GB)
     Array Size : 31256230912 (14904.13 GiB 16003.19 GB)
  Used Dev Size : 3907028864 (1863.02 GiB 2000.40 GB)
    Data Offset : 272 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : d1a7cfca:a4d7aef7:47b6d3c6:82d1da5b

    Update Time : Tue Sep 13 11:50:18 2011
       Checksum : 5ab164a8 - correct
         Events : 6446662

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 3
   Array State : AAAAAA.AAA ('A' == active, '.' == missing)
/dev/sde:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 82ac7386:a854194d:81b795d1:76c9c9ff
           Name : hnas:0  (local to host hnas)
  Creation Time : Wed Jan 19 21:17:33 2011
     Raid Level : raid6
   Raid Devices : 10

 Avail Dev Size : 3907028896 (1863.02 GiB 2000.40 GB)
     Array Size : 31256230912 (14904.13 GiB 16003.19 GB)
  Used Dev Size : 3907028864 (1863.02 GiB 2000.40 GB)
    Data Offset : 272 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : ba9497e9:7665c161:1e596d49:8a642880

    Update Time : Tue Sep 13 11:50:18 2011
       Checksum : 8a731bdf - correct
         Events : 6446662

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 1
   Array State : AAAAAA.AAA ('A' == active, '.' == missing)
/dev/sdf:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 82ac7386:a854194d:81b795d1:76c9c9ff
           Name : hnas:0  (local to host hnas)
  Creation Time : Wed Jan 19 21:17:33 2011
     Raid Level : raid6
   Raid Devices : 10

 Avail Dev Size : 3907028896 (1863.02 GiB 2000.40 GB)
     Array Size : 31256230912 (14904.13 GiB 16003.19 GB)
  Used Dev Size : 3907028864 (1863.02 GiB 2000.40 GB)
    Data Offset : 272 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 8d503057:423c455d:665af78a:093b99b8

    Update Time : Tue Sep 13 11:50:18 2011
       Checksum : 6d8a7fa6 - correct
         Events : 6446662

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 2
   Array State : AAAAAA.AAA ('A' == active, '.' == missing)
/dev/sdg:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 82ac7386:a854194d:81b795d1:76c9c9ff
           Name : hnas:0  (local to host hnas)
  Creation Time : Wed Jan 19 21:17:33 2011
     Raid Level : raid6
   Raid Devices : 10

 Avail Dev Size : 3907028896 (1863.02 GiB 2000.40 GB)
     Array Size : 31256230912 (14904.13 GiB 16003.19 GB)
  Used Dev Size : 3907028864 (1863.02 GiB 2000.40 GB)
    Data Offset : 272 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 7d0284af:74ceb0e9:31eab49e:d9fedff5

    Update Time : Tue Sep 13 11:50:18 2011
       Checksum : a34e1766 - correct
         Events : 6446662

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 4
   Array State : AAAAAA.AAA ('A' == active, '.' == missing)
/dev/sdh:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 82ac7386:a854194d:81b795d1:76c9c9ff
           Name : hnas:0  (local to host hnas)
  Creation Time : Wed Jan 19 21:17:33 2011
     Raid Level : raid6
   Raid Devices : 10

 Avail Dev Size : 3907028896 (1863.02 GiB 2000.40 GB)
     Array Size : 31256230912 (14904.13 GiB 16003.19 GB)
  Used Dev Size : 3907028864 (1863.02 GiB 2000.40 GB)
    Data Offset : 272 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : a97e691f:f81bb643:9cedde86:87f9bc69

    Update Time : Tue Sep 13 11:50:18 2011
       Checksum : c947df28 - correct
         Events : 6446662

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 0
   Array State : AAAAAAAAAA ('A' == active, '.' == missing)
/dev/sdi:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 82ac7386:a854194d:81b795d1:76c9c9ff
           Name : hnas:0  (local to host hnas)
  Creation Time : Wed Jan 19 21:17:33 2011
     Raid Level : raid6
   Raid Devices : 10

 Avail Dev Size : 3907028896 (1863.02 GiB 2000.40 GB)
     Array Size : 31256230912 (14904.13 GiB 16003.19 GB)
  Used Dev Size : 3907028864 (1863.02 GiB 2000.40 GB)
    Data Offset : 272 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : c5970279:a68c84f0:a5803880:91f69e74

    Update Time : Tue Sep 13 11:50:18 2011
       Checksum : d3e2fa15 - correct
         Events : 6446662

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 8
   Array State : AAAAAA.AAA ('A' == active, '.' == missing)
/dev/sdj:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 82ac7386:a854194d:81b795d1:76c9c9ff
           Name : hnas:0  (local to host hnas)
  Creation Time : Wed Jan 19 21:17:33 2011
     Raid Level : raid6
   Raid Devices : 10

 Avail Dev Size : 3907028896 (1863.02 GiB 2000.40 GB)
     Array Size : 31256230912 (14904.13 GiB 16003.19 GB)
  Used Dev Size : 3907028864 (1863.02 GiB 2000.40 GB)
    Data Offset : 272 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : a62447a6:604c0917:1ab4d073:2ca99f8f

    Update Time : Tue Sep 13 11:50:18 2011
       Checksum : 36452bba - correct
         Events : 6446662

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 7
   Array State : AAAAAA.AAA ('A' == active, '.' == missing)
/dev/sdk:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 82ac7386:a854194d:81b795d1:76c9c9ff
           Name : hnas:0  (local to host hnas)
  Creation Time : Wed Jan 19 21:17:33 2011
     Raid Level : raid6
   Raid Devices : 10

 Avail Dev Size : 3907028896 (1863.02 GiB 2000.40 GB)
     Array Size : 31256230912 (14904.13 GiB 16003.19 GB)
  Used Dev Size : 3907028864 (1863.02 GiB 2000.40 GB)
    Data Offset : 272 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : ac64f1b6:578eb873:9f28bbd4:8abc61b3

    Update Time : Tue Sep 13 11:50:18 2011
       Checksum : 284b3598 - correct
         Events : 6446662

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 9
   Array State : AAAAAA.AAA ('A' == active, '.' == missing)


* Re: RAID6 issues
  2011-09-13  7:05       ` Andriano
@ 2011-09-13  7:38         ` NeilBrown
  2011-09-13  7:51           ` Andriano
  0 siblings, 1 reply; 28+ messages in thread
From: NeilBrown @ 2011-09-13  7:38 UTC (permalink / raw)
  To: Andriano; +Cc: linux-raid


On Tue, 13 Sep 2011 17:05:06 +1000 Andriano <chief000@gmail.com> wrote:

> On Tue, Sep 13, 2011 at 4:44 PM, NeilBrown <neilb@suse.de> wrote:
> > On Tue, 13 Sep 2011 16:33:36 +1000 Andriano <chief000@gmail.com> wrote:
> >
> >> >
> >> >> Hello Linux-RAID mailing list,
> >> >>
> >> >> I have an issue with my RAID6 array.
> >> >> Here goes a short description of the system:
> >> >>
> >> >> opensuse 11.4
> >> >> Linux 3.0.4-2-desktop #1 SMP PREEMPT Wed Aug 31 09:30:44 UTC 2011
> >> >> (a432f18) x86_64 x86_64 x86_64 GNU/Linux
> >> >> Gigabyte EP35C-DS3 motherboard with 8 SATA ports + SuperMicro
> >> >> AOC-SASLP-MV8 based on Marvel 6480, firmware updated to 3.1.0.21
> >> >> running mdadm 3.2.2, single array consists of 10 2T disks, 8 of them
> >> >> connected to the HBA, 2 - motherboard ports
> >> >>
> >> >> I had some issues with one of the onboard connected disks, so tried to
> >> >> plug it to different ports, just to eliminate possibly faulty port.
> >> >> After reboot, suddenly other drives got kicked out from the array.
> >> >> Re-assembling them gives weird errors.
> >> >>
> >> >> --- some output ---
> >> >> [3:0:0:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdb
> >> >> [5:0:0:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdc
> >> >> [8:0:0:0]    disk    ATA      ST32000542AS     CC34  /dev/sdd
> >> >> [8:0:1:0]    disk    ATA      ST32000542AS     CC34  /dev/sde
> >> >> [8:0:2:0]    disk    ATA      ST32000542AS     CC34  /dev/sdf
> >> >> [8:0:3:0]    disk    ATA      ST32000542AS     CC34  /dev/sdg
> >> >> [8:0:4:0]    disk    ATA      ST32000542AS     CC34  /dev/sdh
> >> >> [8:0:5:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdi
> >> >> [8:0:6:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdj
> >> >> [8:0:7:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdk
> >> >>
> >> >> #more /etc/mdadm.conf
> >> >> DEVICE partitions
> >> >> ARRAY /dev/md0 level=raid6 UUID=82ac7386:a854194d:81b795d1:76c9c9ff
> >> >>
> >> >> #mdadm --assemble --force --scan /dev/md0
> >> >> mdadm: failed to add /dev/sdc to /dev/md0: Invalid argument
> >> >> mdadm: failed to add /dev/sdb to /dev/md0: Invalid argument
> >> >> mdadm: failed to add /dev/sdh to /dev/md0: Invalid argument
> >> >> mdadm: /dev/md0 assembled from 7 drives - not enough to start the array.
> >> >>
> >> >> dmesg:
> >> >> [ 8215.651860] md: sdc does not have a valid v1.2 superblock, not importing!
> >> >> [ 8215.651865] md: md_import_device returned -22
> >> >> [ 8215.652384] md: sdb does not have a valid v1.2 superblock, not importing!
> >> >> [ 8215.652388] md: md_import_device returned -22
> >> >> [ 8215.653177] md: sdh does not have a valid v1.2 superblock, not importing!
> >> >> [ 8215.653182] md: md_import_device returned -22
> >> >>
> >> >> mdadm -E /dev/sd[b..k] gives exactly the same Magic number and Array
> >> >> UUID for every disk, all checksums are correct,
> >> >> the only difference is -  Avail Dev Size : 3907028896 is the same for
> >> >> 9 disks, and 3907028864 for sdc
> >> >
> >> > Please provide that output so we can see it too - it might be helpful.
> >> >
> >> > NeilBrown
> >>
> >>
> >> # mdadm --assemble --force --update summaries /dev/md0 /dev/sdc
> >> mdadm: --update=summaries not understood for 1.x metadata
> >>
> >
> > Sorry - I was too terse.
> >
> > I meant that output of "mdadm -E ...."
> >
> > NeilBrown
> >
> >
> >>
> >> >
> >> >>
> >> >> mdadm --assemble --force --update summaries /dev/sd.. - didn't improve anything
> >> >>
> >> >>
> >> >> I would really appreciate if someone could point me to the right direction.
> >> >>
> >> >> thanks
> >> >>
> >> >> Andrew
> >> >
> >> >
> >
> >
> 
> /dev/sdb:
>           Magic : a92b4efc
>         Version : 1.2
>     Feature Map : 0x0
>      Array UUID : 82ac7386:a854194d:81b795d1:76c9c9ff
>            Name : hnas:0  (local to host hnas)
>   Creation Time : Wed Jan 19 21:17:33 2011
>      Raid Level : raid6
>    Raid Devices : 10
> 
>  Avail Dev Size : 3907028896 (1863.02 GiB 2000.40 GB)
>      Array Size : 31256230912 (14904.13 GiB 16003.19 GB)
>   Used Dev Size : 3907028864 (1863.02 GiB 2000.40 GB)
>     Data Offset : 272 sectors
>    Super Offset : 8 sectors
>           State : active
>     Device UUID : 4b31edb8:531a4c14:50c954a2:8eda453b
> 
>     Update Time : Mon Sep 12 22:36:35 2011
>        Checksum : 205f92e1 - correct
>          Events : 6446662
> 
>          Layout : left-symmetric
>      Chunk Size : 64K
> 
>    Device Role : Active device 6
>    Array State : AAAAAAAAAA ('A' == active, '.' == missing)
> /dev/sdc:
>           Magic : a92b4efc
>         Version : 1.2
>     Feature Map : 0x0
>      Array UUID : 82ac7386:a854194d:81b795d1:76c9c9ff
>            Name : hnas:0  (local to host hnas)
>   Creation Time : Wed Jan 19 21:17:33 2011
>      Raid Level : raid6
>    Raid Devices : 10
> 
>  Avail Dev Size : 3907028864 (1863.02 GiB 2000.40 GB)
>      Array Size : 31256230912 (14904.13 GiB 16003.19 GB)
>     Data Offset : 304 sectors
>    Super Offset : 8 sectors
>           State : clean
>     Device UUID : afa2f348:88bd0376:29bcfe96:df32a522
> 
>     Update Time : Tue Sep 13 11:50:18 2011
>        Checksum : ee1facae - correct
>          Events : 6446662
> 
>          Layout : left-symmetric
>      Chunk Size : 64K
> 
>    Device Role : Active device 5
>    Array State : AAAAAA.AAA ('A' == active, '.' == missing)
(snip)

Thanks.

The only explanation I can come up with is that the devices appear to be
smaller for some reason.
Can you run
  blockdev --getsz /dev/sd?

and report the result?
They should all be 3907029168 (Data Offset + Avail Dev Size).
If any are smaller - that is the problem.
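
As a quick illustration (a sketch only, assuming the member devices are
/dev/sdb through /dev/sdk as listed earlier and that it is run as root),
the three numbers can be compared in one pass:

  for d in /dev/sd[b-k]; do
      real=$(blockdev --getsz "$d")                      # sectors the kernel sees
      off=$(mdadm -E "$d" | awk '/Data Offset/ {print $4}')
      avail=$(mdadm -E "$d" | awk '/Avail Dev Size/ {print $5}')
      echo "$d  visible=$real  expected=$((off + avail))"
  done

Any device whose visible count is below the expected count is too small to
hold the data area its superblock describes.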

NeilBrown


[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 190 bytes --]

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: RAID6 issues
  2011-09-13  7:38         ` NeilBrown
@ 2011-09-13  7:51           ` Andriano
  2011-09-13  8:10             ` NeilBrown
                               ` (2 more replies)
  0 siblings, 3 replies; 28+ messages in thread
From: Andriano @ 2011-09-13  7:51 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid

On Tue, Sep 13, 2011 at 5:38 PM, NeilBrown <neilb@suse.de> wrote:
> On Tue, 13 Sep 2011 17:05:06 +1000 Andriano <chief000@gmail.com> wrote:
>
>> On Tue, Sep 13, 2011 at 4:44 PM, NeilBrown <neilb@suse.de> wrote:
>> > On Tue, 13 Sep 2011 16:33:36 +1000 Andriano <chief000@gmail.com> wrote:
>> >
>> >> >
>> >> >> Hello Linux-RAID mailing list,
>> >> >>
>> >> >> I have an issue with my RAID6 array.
>> >> >> Here goes a short description of the system:
>> >> >>
>> >> >> opensuse 11.4
>> >> >> Linux 3.0.4-2-desktop #1 SMP PREEMPT Wed Aug 31 09:30:44 UTC 2011
>> >> >> (a432f18) x86_64 x86_64 x86_64 GNU/Linux
>> >> >> Gigabyte EP35C-DS3 motherboard with 8 SATA ports + SuperMicro
>> >> >> AOC-SASLP-MV8 based on Marvel 6480, firmware updated to 3.1.0.21
>> >> >> running mdadm 3.2.2, single array consists of 10 2T disks, 8 of them
>> >> >> connected to the HBA, 2 - motherboard ports
>> >> >>
>> >> >> I had some issues with one of the onboard connected disks, so tried to
>> >> >> plug it to different ports, just to eliminate possibly faulty port.
>> >> >> After reboot, suddenly other drives got kicked out from the array.
>> >> >> Re-assembling them gives weird errors.
>> >> >>
>> >> >> --- some output ---
>> >> >> [3:0:0:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdb
>> >> >> [5:0:0:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdc
>> >> >> [8:0:0:0]    disk    ATA      ST32000542AS     CC34  /dev/sdd
>> >> >> [8:0:1:0]    disk    ATA      ST32000542AS     CC34  /dev/sde
>> >> >> [8:0:2:0]    disk    ATA      ST32000542AS     CC34  /dev/sdf
>> >> >> [8:0:3:0]    disk    ATA      ST32000542AS     CC34  /dev/sdg
>> >> >> [8:0:4:0]    disk    ATA      ST32000542AS     CC34  /dev/sdh
>> >> >> [8:0:5:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdi
>> >> >> [8:0:6:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdj
>> >> >> [8:0:7:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdk
>> >> >>
>> >> >> #more /etc/mdadm.conf
>> >> >> DEVICE partitions
>> >> >> ARRAY /dev/md0 level=raid6 UUID=82ac7386:a854194d:81b795d1:76c9c9ff
>> >> >>
>> >> >> #mdadm --assemble --force --scan /dev/md0
>> >> >> mdadm: failed to add /dev/sdc to /dev/md0: Invalid argument
>> >> >> mdadm: failed to add /dev/sdb to /dev/md0: Invalid argument
>> >> >> mdadm: failed to add /dev/sdh to /dev/md0: Invalid argument
>> >> >> mdadm: /dev/md0 assembled from 7 drives - not enough to start the array.
>> >> >>
>> >> >> dmesg:
>> >> >> [ 8215.651860] md: sdc does not have a valid v1.2 superblock, not importing!
>> >> >> [ 8215.651865] md: md_import_device returned -22
>> >> >> [ 8215.652384] md: sdb does not have a valid v1.2 superblock, not importing!
>> >> >> [ 8215.652388] md: md_import_device returned -22
>> >> >> [ 8215.653177] md: sdh does not have a valid v1.2 superblock, not importing!
>> >> >> [ 8215.653182] md: md_import_device returned -22
>> >> >>
>> >> >> mdadm -E /dev/sd[b..k] gives exactly the same Magic number and Array
>> >> >> UUID for every disk, all checksums are correct,
>> >> >> the only difference is -  Avail Dev Size : 3907028896 is the same for
>> >> >> 9 disks, and 3907028864 for sdc
>> >> >
>> >> > Please provide that output so we can see it too - it might be helpful.
>> >> >
>> >> > NeilBrown
>> >>
>> >>
>> >> # mdadm --assemble --force --update summaries /dev/md0 /dev/sdc
>> >> mdadm: --update=summaries not understood for 1.x metadata
>> >>
>> >
>> > Sorry - I was too terse.
>> >
>> > I meant that output of "mdadm -E ...."
>> >
>> > NeilBrown
>> >
>> >
>> >>
>> >> >
>> >> >>
>> >> >> mdadm --assemble --force --update summaries /dev/sd.. - didn't improve anything
>> >> >>
>> >> >>
>> >> >> I would really appreciate if someone could point me to the right direction.
>> >> >>
>> >> >> thanks
>> >> >>
>> >> >> Andrew
>> >> >> --
>> >> >> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>> >> >> the body of a message to majordomo@vger.kernel.org
>> >> >> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>> >> >
>> >> >
>> >
>> >
>>
>> /dev/sdb:
>>           Magic : a92b4efc
>>         Version : 1.2
>>     Feature Map : 0x0
>>      Array UUID : 82ac7386:a854194d:81b795d1:76c9c9ff
>>            Name : hnas:0  (local to host hnas)
>>   Creation Time : Wed Jan 19 21:17:33 2011
>>      Raid Level : raid6
>>    Raid Devices : 10
>>
>>  Avail Dev Size : 3907028896 (1863.02 GiB 2000.40 GB)
>>      Array Size : 31256230912 (14904.13 GiB 16003.19 GB)
>>   Used Dev Size : 3907028864 (1863.02 GiB 2000.40 GB)
>>     Data Offset : 272 sectors
>>    Super Offset : 8 sectors
>>           State : active
>>     Device UUID : 4b31edb8:531a4c14:50c954a2:8eda453b
>>
>>     Update Time : Mon Sep 12 22:36:35 2011
>>        Checksum : 205f92e1 - correct
>>          Events : 6446662
>>
>>          Layout : left-symmetric
>>      Chunk Size : 64K
>>
>>    Device Role : Active device 6
>>    Array State : AAAAAAAAAA ('A' == active, '.' == missing)
>> /dev/sdc:
>>           Magic : a92b4efc
>>         Version : 1.2
>>     Feature Map : 0x0
>>      Array UUID : 82ac7386:a854194d:81b795d1:76c9c9ff
>>            Name : hnas:0  (local to host hnas)
>>   Creation Time : Wed Jan 19 21:17:33 2011
>>      Raid Level : raid6
>>    Raid Devices : 10
>>
>>  Avail Dev Size : 3907028864 (1863.02 GiB 2000.40 GB)
>>      Array Size : 31256230912 (14904.13 GiB 16003.19 GB)
>>     Data Offset : 304 sectors
>>    Super Offset : 8 sectors
>>           State : clean
>>     Device UUID : afa2f348:88bd0376:29bcfe96:df32a522
>>
>>     Update Time : Tue Sep 13 11:50:18 2011
>>        Checksum : ee1facae - correct
>>          Events : 6446662
>>
>>          Layout : left-symmetric
>>      Chunk Size : 64K
>>
>>    Device Role : Active device 5
>>    Array State : AAAAAA.AAA ('A' == active, '.' == missing)
> (snip)
>
> Thanks.
>
> The only explanation I can come up with is that the devices appear to be
> smaller for some reason.
> Can you run
>  blockdev --getsz /dev/sd?
>
> and report the result?
> They should all be 3907029168 (Data Offset + Avail Dev Size).
> If any are smaller - that is the problem.
>
> NeilBrown
>
>

Apparently you're right
blockdev --getsz /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
/dev/sdh /dev/sdi /dev/sdj /dev/sdk
3907027055
3907027055
3907029168
3907029168
3907029168
3907029168
3907027055
3907029168
3907029168
3907029168

sdb, sdc and sdh are smaller, and they are the problem disks

So what would be a solution to fix this issue?

thanks
Andrew
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: RAID6 issues
  2011-09-13  7:51           ` Andriano
@ 2011-09-13  8:10             ` NeilBrown
  2011-09-13  8:12             ` Alexander Kühn
  2011-09-13  8:44             ` Roman Mamedov
  2 siblings, 0 replies; 28+ messages in thread
From: NeilBrown @ 2011-09-13  8:10 UTC (permalink / raw)
  To: Andriano; +Cc: linux-raid

[-- Attachment #1: Type: text/plain, Size: 7575 bytes --]

On Tue, 13 Sep 2011 17:51:56 +1000 Andriano <chief000@gmail.com> wrote:

> On Tue, Sep 13, 2011 at 5:38 PM, NeilBrown <neilb@suse.de> wrote:
> > On Tue, 13 Sep 2011 17:05:06 +1000 Andriano <chief000@gmail.com> wrote:
> >
> >> On Tue, Sep 13, 2011 at 4:44 PM, NeilBrown <neilb@suse.de> wrote:
> >> > On Tue, 13 Sep 2011 16:33:36 +1000 Andriano <chief000@gmail.com> wrote:
> >> >
> >> >> >
> >> >> >> Hello Linux-RAID mailing list,
> >> >> >>
> >> >> >> I have an issue with my RAID6 array.
> >> >> >> Here goes a short description of the system:
> >> >> >>
> >> >> >> opensuse 11.4
> >> >> >> Linux 3.0.4-2-desktop #1 SMP PREEMPT Wed Aug 31 09:30:44 UTC 2011
> >> >> >> (a432f18) x86_64 x86_64 x86_64 GNU/Linux
> >> >> >> Gigabyte EP35C-DS3 motherboard with 8 SATA ports + SuperMicro
> >> >> >> AOC-SASLP-MV8 based on Marvel 6480, firmware updated to 3.1.0.21
> >> >> >> running mdadm 3.2.2, single array consists of 10 2T disks, 8 of them
> >> >> >> connected to the HBA, 2 - motherboard ports
> >> >> >>
> >> >> >> I had some issues with one of the onboard connected disks, so tried to
> >> >> >> plug it to different ports, just to eliminate possibly faulty port.
> >> >> >> After reboot, suddenly other drives got kicked out from the array.
> >> >> >> Re-assembling them gives weird errors.
> >> >> >>
> >> >> >> --- some output ---
> >> >> >> [3:0:0:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdb
> >> >> >> [5:0:0:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdc
> >> >> >> [8:0:0:0]    disk    ATA      ST32000542AS     CC34  /dev/sdd
> >> >> >> [8:0:1:0]    disk    ATA      ST32000542AS     CC34  /dev/sde
> >> >> >> [8:0:2:0]    disk    ATA      ST32000542AS     CC34  /dev/sdf
> >> >> >> [8:0:3:0]    disk    ATA      ST32000542AS     CC34  /dev/sdg
> >> >> >> [8:0:4:0]    disk    ATA      ST32000542AS     CC34  /dev/sdh
> >> >> >> [8:0:5:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdi
> >> >> >> [8:0:6:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdj
> >> >> >> [8:0:7:0]    disk    ATA      ST2000DL003-9VT1 CC32  /dev/sdk
> >> >> >>
> >> >> >> #more /etc/mdadm.conf
> >> >> >> DEVICE partitions
> >> >> >> ARRAY /dev/md0 level=raid6 UUID=82ac7386:a854194d:81b795d1:76c9c9ff
> >> >> >>
> >> >> >> #mdadm --assemble --force --scan /dev/md0
> >> >> >> mdadm: failed to add /dev/sdc to /dev/md0: Invalid argument
> >> >> >> mdadm: failed to add /dev/sdb to /dev/md0: Invalid argument
> >> >> >> mdadm: failed to add /dev/sdh to /dev/md0: Invalid argument
> >> >> >> mdadm: /dev/md0 assembled from 7 drives - not enough to start the array.
> >> >> >>
> >> >> >> dmesg:
> >> >> >> [ 8215.651860] md: sdc does not have a valid v1.2 superblock, not importing!
> >> >> >> [ 8215.651865] md: md_import_device returned -22
> >> >> >> [ 8215.652384] md: sdb does not have a valid v1.2 superblock, not importing!
> >> >> >> [ 8215.652388] md: md_import_device returned -22
> >> >> >> [ 8215.653177] md: sdh does not have a valid v1.2 superblock, not importing!
> >> >> >> [ 8215.653182] md: md_import_device returned -22
> >> >> >>
> >> >> >> mdadm -E /dev/sd[b..k] gives exactly the same Magic number and Array
> >> >> >> UUID for every disk, all checksums are correct,
> >> >> >> the only difference is -  Avail Dev Size : 3907028896 is the same for
> >> >> >> 9 disks, and 3907028864 for sdc
> >> >> >
> >> >> > Please provide that output so we can see it too - it might be helpful.
> >> >> >
> >> >> > NeilBrown
> >> >>
> >> >>
> >> >> # mdadm --assemble --force --update summaries /dev/md0 /dev/sdc
> >> >> mdadm: --update=summaries not understood for 1.x metadata
> >> >>
> >> >
> >> > Sorry - I was too terse.
> >> >
> >> > I meant that output of "mdadm -E ...."
> >> >
> >> > NeilBrown
> >> >
> >> >
> >> >>
> >> >> >
> >> >> >>
> >> >> >> mdadm --assemble --force --update summaries /dev/sd.. - didn't improve anything
> >> >> >>
> >> >> >>
> >> >> >> I would really appreciate if someone could point me to the right direction.
> >> >> >>
> >> >> >> thanks
> >> >> >>
> >> >> >> Andrew
> >> >> >> --
> >> >> >> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> >> >> >> the body of a message to majordomo@vger.kernel.org
> >> >> >> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> >> >> >
> >> >> >
> >> >
> >> >
> >>
> >> /dev/sdb:
> >>           Magic : a92b4efc
> >>         Version : 1.2
> >>     Feature Map : 0x0
> >>      Array UUID : 82ac7386:a854194d:81b795d1:76c9c9ff
> >>            Name : hnas:0  (local to host hnas)
> >>   Creation Time : Wed Jan 19 21:17:33 2011
> >>      Raid Level : raid6
> >>    Raid Devices : 10
> >>
> >>  Avail Dev Size : 3907028896 (1863.02 GiB 2000.40 GB)
> >>      Array Size : 31256230912 (14904.13 GiB 16003.19 GB)
> >>   Used Dev Size : 3907028864 (1863.02 GiB 2000.40 GB)
> >>     Data Offset : 272 sectors
> >>    Super Offset : 8 sectors
> >>           State : active
> >>     Device UUID : 4b31edb8:531a4c14:50c954a2:8eda453b
> >>
> >>     Update Time : Mon Sep 12 22:36:35 2011
> >>        Checksum : 205f92e1 - correct
> >>          Events : 6446662
> >>
> >>          Layout : left-symmetric
> >>      Chunk Size : 64K
> >>
> >>    Device Role : Active device 6
> >>    Array State : AAAAAAAAAA ('A' == active, '.' == missing)
> >> /dev/sdc:
> >>           Magic : a92b4efc
> >>         Version : 1.2
> >>     Feature Map : 0x0
> >>      Array UUID : 82ac7386:a854194d:81b795d1:76c9c9ff
> >>            Name : hnas:0  (local to host hnas)
> >>   Creation Time : Wed Jan 19 21:17:33 2011
> >>      Raid Level : raid6
> >>    Raid Devices : 10
> >>
> >>  Avail Dev Size : 3907028864 (1863.02 GiB 2000.40 GB)
> >>      Array Size : 31256230912 (14904.13 GiB 16003.19 GB)
> >>     Data Offset : 304 sectors
> >>    Super Offset : 8 sectors
> >>           State : clean
> >>     Device UUID : afa2f348:88bd0376:29bcfe96:df32a522
> >>
> >>     Update Time : Tue Sep 13 11:50:18 2011
> >>        Checksum : ee1facae - correct
> >>          Events : 6446662
> >>
> >>          Layout : left-symmetric
> >>      Chunk Size : 64K
> >>
> >>    Device Role : Active device 5
> >>    Array State : AAAAAA.AAA ('A' == active, '.' == missing)
> > (snip)
> >
> > Thanks.
> >
> > The only explanation I can come up with is that the devices appear to be
> > smaller for some reason.
> > Can you run
> >  blockdev --getsz /dev/sd?
> >
> > and report the result?
> > They should all be 3907029168 (Data Offset + Avail Dev Size).
> > If any are smaller - that is the problem.
> >
> > NeilBrown
> >
> >
> 
> Apparently you're right
> blockdev --getsz /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
> /dev/sdh /dev/sdi /dev/sdj /dev/sdk
> 3907027055
> 3907027055
> 3907029168
> 3907029168
> 3907029168
> 3907029168
> 3907027055
> 3907029168
> 3907029168
> 3907029168
> 
> sdb, sdc and sdh - are smaller and they are problem disks
> 
> So what would be a solution to fix this issue?
>

I'm afraid I cannot really help there.
The disks must have been bigger before, or else they could never have been
members of the array.
Maybe some jumper was changed?  Maybe a different controller hides some
sectors?
I really don't know the details of what can cause this.
Maybe try changing things until you see a pattern.
If you move devices between controllers, does the small size move with the
device, or does it stay with the controller?  That sort of thing.
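
One way to record that, purely as a sketch (it assumes smartmontools is
installed and the same /dev/sd[b-k] names as above), is to log each drive's
serial number next to its size before and after every swap:

  for d in /dev/sd[b-k]; do
      serial=$(smartctl -i "$d" | awk -F: '/Serial Number/ {gsub(/ /, "", $2); print $2}')
      printf '%s  serial=%s  sectors=%s\n' "$d" "$serial" "$(blockdev --getsz "$d")"
  done

If the short sector count keeps following the same serial numbers, it is the
drives; if it follows particular ports, it is the controller.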

NeilBrown


[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 190 bytes --]

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: RAID6 issues
  2011-09-13  7:51           ` Andriano
  2011-09-13  8:10             ` NeilBrown
@ 2011-09-13  8:12             ` Alexander Kühn
  2011-09-13  8:44             ` Roman Mamedov
  2 siblings, 0 replies; 28+ messages in thread
From: Alexander Kühn @ 2011-09-13  8:12 UTC (permalink / raw)
  To: Andriano; +Cc: NeilBrown, linux-raid


Zitat von Andriano <chief000@gmail.com>:

> On Tue, Sep 13, 2011 at 5:38 PM, NeilBrown <neilb@suse.de> wrote:
>> On Tue, 13 Sep 2011 17:05:06 +1000 Andriano <chief000@gmail.com> wrote:
>>
>>> On Tue, Sep 13, 2011 at 4:44 PM, NeilBrown <neilb@suse.de> wrote:
>>> > On Tue, 13 Sep 2011 16:33:36 +1000 Andriano <chief000@gmail.com> wrote:
>>> >
>>> >> >
>>> >> >> Hello Linux-RAID mailing list,
>>> >> >>
>>> >> >> Linux 3.0.4-2-desktop #1 SMP PREEMPT Wed Aug 31 09:30:44 UTC 2011
>>> >> >> (a432f18) x86_64 x86_64 x86_64 GNU/Linux
>>> >> >> Gigabyte EP35C-DS3 motherboard with 8 SATA ports + SuperMicro
>>> >> >> AOC-SASLP-MV8 based on Marvel 6480, firmware updated to 3.1.0.21
>>> >> >> running mdadm 3.2.2, single array consists of 10 2T disks, 8 of them
>>> >> >> connected to the HBA, 2 - motherboard ports
>>> >> >>
>>> >> >> I had some issues with one of the onboard connected disks,  
>>> so tried to
>>> >> >> plug it to different ports, just to eliminate possibly faulty port.
>>> >> >> After reboot, suddenly other drives got kicked out from the array.
>>> >> >> Re-assembling them gives weird errors.
>>> >> >>
> Apparently you're right
> blockdev --getsz /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
> /dev/sdh /dev/sdi /dev/sdj /dev/sdk
> 3907027055
> 3907027055
> 3907029168
> 3907029168
> 3907029168
> 3907029168
> 3907027055
> 3907029168
> 3907029168
> 3907029168
>
> sdb, sdc and sdh - are smaller and they are problem disks
>
> So what would be a solution to fix this issue?

The solution seems obvious:
Plug them (or at least one of them) back into the original ports so
that they regain their original size.
Then you can try to shrink your filesystems/logical volumes and then
the array, then check that everything is working (do a raid check too).
Then you can move one of the disks to a good port, zero the metadata
on it and add it back to regain full redundancy. Once done, move on to the
next...
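
An order-of-operations sketch of that shrink, for illustration only: it
assumes the vg0/lv1 naming that shows up later in the thread and uses made-up
sizes that would have to be recalculated from the real free space before
running anything:

  # 1. shrink each filesystem first (ext4 must be unmounted to shrink)
  umount /dev/vg0/lv1
  e2fsck -f /dev/vg0/lv1
  resize2fs /dev/vg0/lv1 500G            # hypothetical target size

  # 2. shrink the logical volume, leaving a safety margin above the fs size
  lvreduce -L 510G /dev/vg0/lv1

  # (repeat 1-2 for each LV, then:)

  # 3. shrink the physical volume that sits on the array
  pvresize --setphysicalvolumesize 14T /dev/md0      # hypothetical size

  # 4. shrink the per-device size used by the array (--size is KiB per member)
  mdadm --grow /dev/md0 --size=1953000000            # hypothetical size

  # 5. scrub to confirm parity is consistent
  echo check > /sys/block/md0/md/sync_action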

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: RAID6 issues
  2011-09-13  7:51           ` Andriano
  2011-09-13  8:10             ` NeilBrown
  2011-09-13  8:12             ` Alexander Kühn
@ 2011-09-13  8:44             ` Roman Mamedov
  2011-09-13  8:57               ` Andriano
  2 siblings, 1 reply; 28+ messages in thread
From: Roman Mamedov @ 2011-09-13  8:44 UTC (permalink / raw)
  To: Andriano; +Cc: NeilBrown, linux-raid

[-- Attachment #1: Type: text/plain, Size: 1265 bytes --]

On Tue, 13 Sep 2011 17:51:56 +1000
Andriano <chief000@gmail.com> wrote:

> Apparently you're right
> blockdev --getsz /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
> /dev/sdh /dev/sdi /dev/sdj /dev/sdk
> 3907027055
> 3907027055
> 3907029168
> 3907029168
> 3907029168
> 3907029168
> 3907027055
> 3907029168
> 3907029168
> 3907029168
> 
> sdb, sdc and sdh - are smaller and they are problem disks
> 
> So what would be a solution to fix this issue?

You mentioned you use Gigabyte EP35C-DS3 motherboard. Gigabyte BIOSes are known to cut off about 1 MByte or so from the end of HDDs (on the onboard controller, and maybe just the one on Port 0), setting an HPA area and storing a copy of the BIOS there. That's known as "(Virtual) Dual/Triple/Quad BIOS". Google for "gigabyte bios hpa" and you'll find a lot of reports about this problem. You can check if you can disable that "feature" in BIOS setup, but older boards did not have such option.

To restore the native capacity of the drives you can use "hdparm -N" (see its man page), while the disks are on a non-onboard controller.

In the future, create your RAID from partitions, and leave 8-10 MB of space at the end of each disk for cases like these.
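
A minimal sketch of that check and reset with hdparm (here /dev/sdX is a
placeholder for one of the affected disks, and 3907029168 is the full size
reported by the unaffected drives in this thread):

  # show the current and native max sector counts
  hdparm -N /dev/sdX
  # restore the full size temporarily (reverts at the next power cycle)
  hdparm -N 3907029168 /dev/sdX
  # or make it persistent with the 'p' prefix
  hdparm -N p3907029168 /dev/sdX

The setting without 'p' is volatile, so the BIOS may clip the drive again on
the next boot unless the backup "feature" is disabled.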

-- 
With respect,
Roman

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: RAID6 issues
  2011-09-13  8:44             ` Roman Mamedov
@ 2011-09-13  8:57               ` Andriano
  2011-09-13  9:05                 ` Andriano
  0 siblings, 1 reply; 28+ messages in thread
From: Andriano @ 2011-09-13  8:57 UTC (permalink / raw)
  To: Roman Mamedov; +Cc: NeilBrown, linux-raid

On Tue, Sep 13, 2011 at 6:44 PM, Roman Mamedov <rm@romanrm.ru> wrote:
> On Tue, 13 Sep 2011 17:51:56 +1000
> Andriano <chief000@gmail.com> wrote:
>
>> Apparently you're right
>> blockdev --getsz /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
>> /dev/sdh /dev/sdi /dev/sdj /dev/sdk
>> 3907027055
>> 3907027055
>> 3907029168
>> 3907029168
>> 3907029168
>> 3907029168
>> 3907027055
>> 3907029168
>> 3907029168
>> 3907029168
>>
>> sdb, sdc and sdh - are smaller and they are problem disks
>>
>> So what would be a solution to fix this issue?
>
> You mentioned you use Gigabyte EP35C-DS3 motherboard. Gigabyte BIOSes are known to cut off about 1 MByte or so from the end of HDDs (on the onboard controller, and maybe just the one on Port 0), setting an HPA area and storing a copy of the BIOS there. That's known as "(Virtual) Dual/Triple/Quad BIOS". Google for "gigabyte bios hpa" and you'll find a lot of reports about this problem. You can check if you can disable that "feature" in BIOS setup, but older boards did not have such option.
>
> To restore the native capacity of the drives you can use "hdparm -N" (see its man page), while disks are on the non-onboard controller.
>
> In the future, create your RAID from partitions, and leave 8-10 MB of space in the end of each disk for cases like these.
>
> --
> With respect,
> Roman
>

Roman,

Looks like you have pointed to the source of the problem. The option
to back up the BIOS has been enabled.
Is "hdparm -N" going to affect the superblock or data integrity of these
disks? Or has that backup already done the damage?

thanks

Andrew

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: RAID6 issues
  2011-09-13  8:57               ` Andriano
@ 2011-09-13  9:05                 ` Andriano
  2011-09-13 10:29                   ` Roman Mamedov
  0 siblings, 1 reply; 28+ messages in thread
From: Andriano @ 2011-09-13  9:05 UTC (permalink / raw)
  To: Roman Mamedov; +Cc: NeilBrown, linux-raid

On Tue, Sep 13, 2011 at 6:57 PM, Andriano <chief000@gmail.com> wrote:
> On Tue, Sep 13, 2011 at 6:44 PM, Roman Mamedov <rm@romanrm.ru> wrote:
>> On Tue, 13 Sep 2011 17:51:56 +1000
>> Andriano <chief000@gmail.com> wrote:
>>
>>> Apparently you're right
>>> blockdev --getsz /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
>>> /dev/sdh /dev/sdi /dev/sdj /dev/sdk
>>> 3907027055
>>> 3907027055
>>> 3907029168
>>> 3907029168
>>> 3907029168
>>> 3907029168
>>> 3907027055
>>> 3907029168
>>> 3907029168
>>> 3907029168
>>>
>>> sdb, sdc and sdh - are smaller and they are problem disks
>>>
>>> So what would be a solution to fix this issue?
>>
>> You mentioned you use Gigabyte EP35C-DS3 motherboard. Gigabyte BIOSes are known to cut off about 1 MByte or so from the end of HDDs (on the onboard controller, and maybe just the one on Port 0), setting an HPA area and storing a copy of the BIOS there. That's known as "(Virtual) Dual/Triple/Quad BIOS". Google for "gigabyte bios hpa" and you'll find a lot of reports about this problem. You can check if you can disable that "feature" in BIOS setup, but older boards did not have such option.
>>
>> To restore the native capacity of the drives you can use "hdparm -N" (see its man page), while disks are on the non-onboard controller.
>>
>> In the future, create your RAID from partitions, and leave 8-10 MB of space in the end of each disk for cases like these.
>>
>> --
>> With respect,
>> Roman
>>
>
> Roman,
>
> Looks like you have pointed to the source of the problem. The option
> to backup BIOS has been enabled.
> Is "hdparm -N" going to affect superblock or data integrity of these
> disks? Or has that backup already done the damage?
>
> thanks
>
> Andrew
>


Connected one of the offenders to an HBA port, and hdparm outputs this:

#hdparm -N /dev/sdh

/dev/sdh:
 max sectors   = 3907027055/14715056(18446744073321613488?), HPA
setting seems invalid (buggy kernel device driver?)

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: RAID6 issues
  2011-09-13  9:05                 ` Andriano
@ 2011-09-13 10:29                   ` Roman Mamedov
  2011-09-13 10:44                     ` Andriano
  0 siblings, 1 reply; 28+ messages in thread
From: Roman Mamedov @ 2011-09-13 10:29 UTC (permalink / raw)
  To: Andriano; +Cc: NeilBrown, linux-raid

[-- Attachment #1: Type: text/plain, Size: 873 bytes --]

On Tue, 13 Sep 2011 19:05:41 +1000
Andriano <chief000@gmail.com> wrote:

> Connected one of the offenders to HBA port, and hdparm outputs this:
> 
> #hdparm -N /dev/sdh
> 
> /dev/sdh:
>  max sectors   = 3907027055/14715056(18446744073321613488?), HPA
> setting seems invalid (buggy kernel device driver?)

You could just try "hdparm -N p3907029168" (capacity of the 'larger' disks), but that could fail if the device driver is indeed buggy.

Another possible course of action would be to try that on some other controller.
For example, your motherboard has two violet ports, http://www.gigabyte.ru/products/upload/products/1470/100a.jpg
those are managed by the JMicron JMB363 controller; try plugging the disks which need the HPA removed into those ports. AFAIR that JMicron controller works with "hdparm -N" just fine.

-- 
With respect,
Roman

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: RAID6 issues
  2011-09-13 10:29                   ` Roman Mamedov
@ 2011-09-13 10:44                     ` Andriano
  2011-09-13 13:45                       ` Andriano
  0 siblings, 1 reply; 28+ messages in thread
From: Andriano @ 2011-09-13 10:44 UTC (permalink / raw)
  To: Roman Mamedov; +Cc: NeilBrown, linux-raid

Thanks everyone, looks like the problem is solved.

For the benefit of others who may experience the same issue, here is what I've done:

- upgraded the firmware on the ST32000542AS disks from CC34 to CC35. It must
be done using onboard SATA in Native IDE (not RAID/AHCI) mode.
After reconnecting them back to the HBA, the size of one of the offenders fixed itself!

- ran the hdparm -N p3907029168 /dev/sdx command on the other two disks and it
worked (probably it works straight after a reboot)
Now mdadm -D shows the array as clean, degraded with one disk kicked
out, which is another story :)

now I need to resync the array and restore the two LVs which haven't mounted :(
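
For anyone following along, a small verification sketch (same device and
array names as used above) to confirm the sizes and array state after a fix
like this:

  blockdev --getsz /dev/sd[b-k]        # all should now report 3907029168
  mdadm -E /dev/sd[b-k] | grep -E 'Avail Dev Size|Device Role|Array State'
  mdadm -D /dev/md0 | grep -E 'State :|Devices'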

On Tue, Sep 13, 2011 at 8:29 PM, Roman Mamedov <rm@romanrm.ru> wrote:
> On Tue, 13 Sep 2011 19:05:41 +1000
> Andriano <chief000@gmail.com> wrote:
>
>> Connected one of the offenders to HBA port, and hdparm outputs this:
>>
>> #hdparm -N /dev/sdh
>>
>> /dev/sdh:
>>  max sectors   = 3907027055/14715056(18446744073321613488?), HPA
>> setting seems invalid (buggy kernel device driver?)
>
> You could just try "hdparm -N p3907029168" (capacity of the 'larger' disks), but that could fail if the device driver is indeed buggy.
>
> Another possible course of action would be to try that on some other controller.
> For example on your motherboard you have two violet ports, http://www.gigabyte.ru/products/upload/products/1470/100a.jpg
> those are managed by the JMicron JMB363 controller, try plugging the disks which need HPA to be removed to those ports, AFAIR that JMicron controller works with "hdparm -N" just fine.
>
> --
> With respect,
> Roman
>
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: RAID6 issues
  2011-09-13 10:44                     ` Andriano
@ 2011-09-13 13:45                       ` Andriano
  0 siblings, 0 replies; 28+ messages in thread
From: Andriano @ 2011-09-13 13:45 UTC (permalink / raw)
  To: Roman Mamedov; +Cc: NeilBrown, linux-raid

Still trying to get the array back up.

Status: Clean, degraded with 9 out of 10 disks.
One disk was removed as non-fresh.

As a result, two of the LVs could not be mounted:

mount: wrong fs type, bad option, bad superblock on /dev/mapper/vg0-lv1,
      missing codepage or helper program, or other error
      In some cases useful info is found in syslog - try
      dmesg | tail  or so

mount: wrong fs type, bad option, bad superblock on /dev/mapper/vg0-lv2,
      missing codepage or helper program, or other error
      In some cases useful info is found in syslog - try
      dmesg | tail  or so

[ 3357.006833] JBD: no valid journal superblock found
[ 3357.006837] EXT4-fs (dm-1): error loading journal
[ 3357.022603] JBD: no valid journal superblock found
[ 3357.022606] EXT4-fs (dm-2): error loading journal



Apparently there is a problem with re-adding the non-fresh disk back to the array.

#mdadm -a -v /dev/md0 /dev/sdf
mdadm: /dev/sdf reports being an active member for /dev/md0, but a
--re-add fails.
mdadm: not performing --add as that would convert /dev/sdf in to a spare.
mdadm: To make this a spare, use "mdadm --zero-superblock /dev/sdf" first.

Question: Is there a way to resync the array using that non-fresh
disk, as it may contain blocks needed by these LVs?
At this stage I don't really want to add this disk as a spare.

Any suggestions please?


thanks

On Tue, Sep 13, 2011 at 8:44 PM, Andriano <chief000@gmail.com> wrote:
> Thanks everyone, looks like the problem is solved.
>
> For benefit of others who may experience same issue, here is what I've done:
>
> - upgraded firmware on ST32000542AS disks - from CC34 to CC35. It must
> be done using onboard SATA in Native IDE (not RAID/AHCI) mode.
> After reconnecting them back to HBA, size of one of the offenders fixed itself!
>
> - ran hdparm -N p3907029168 /dev/sdx command on other two disks and it
> worked (probably it works straight after reboot)
> Now mdadm -D shows the array as clean, degraded with one disk kicked
> out, which is another story :)
>
> now need to resync array and restore two LVs which hasn't mounted :(
>
> On Tue, Sep 13, 2011 at 8:29 PM, Roman Mamedov <rm@romanrm.ru> wrote:
>> On Tue, 13 Sep 2011 19:05:41 +1000
>> Andriano <chief000@gmail.com> wrote:
>>
>>> Connected one of the offenders to HBA port, and hdparm outputs this:
>>>
>>> #hdparm -N /dev/sdh
>>>
>>> /dev/sdh:
>>>  max sectors   = 3907027055/14715056(18446744073321613488?), HPA
>>> setting seems invalid (buggy kernel device driver?)
>>
>> You could just try "hdparm -N p3907029168" (capacity of the 'larger' disks), but that could fail if the device driver is indeed buggy.
>>
>> Another possible course of action would be to try that on some other controller.
>> For example on your motherboard you have two violet ports, http://www.gigabyte.ru/products/upload/products/1470/100a.jpg
>> those are managed by the JMicron JMB363 controller, try plugging the disks which need HPA to be removed to those ports, AFAIR that JMicron controller works with "hdparm -N" just fine.
>>
>> --
>> With respect,
>> Roman
>>
>
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: RAID6 issues
@ 2011-09-13 14:24 NeilBrown
  0 siblings, 0 replies; 28+ messages in thread
From: NeilBrown @ 2011-09-13 14:24 UTC (permalink / raw)
  To: Andriano, Roman Mamedov; +Cc: linux-raid

(stupid android mail client insists on top-posting - sorry)
No.  You cannot (easily) get that device to be an active member of
the array again, and it almost certainly wouldn't help anyway.

It would only help if the data you want is on the device, and the
parity blocks that are being used to recreate it are corrupt.
I think it very unlikely that they are corrupt while the data isn't.

The problem seems to be that the journal superblock is bad.  That seems
to suggest that much of the rest of the filesystem is OK.
I would suggest you "fsck -n -f" the device and see how much it wants
to 'fix'.  If it is just a few things, I would just let fsck fix it up for you.

If there are pages and pages of errors - then you have bigger problems.
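
A minimal sketch of that dry run, using the LV names from the mount errors
quoted below:

  # -n answers 'no' to every repair prompt, -f forces a full check
  fsck -n -f /dev/mapper/vg0-lv1
  fsck -n -f /dev/mapper/vg0-lv2
  # if the reported damage looks minor, repeat without -n to actually repair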

NeilBrown


Andriano <chief000@gmail.com> wrote:

>Still trying to get the array back up.
>
>Status: Clean, degraded with 9 out of 10 disks.
>One disk - removed as non-fresh.
>
>as a result two of LVs could not be mounted:
>
>mount: wrong fs type, bad option, bad superblock on /dev/mapper/vg0-lv1,
>      missing codepage or helper program, or other error
>      In some cases useful info is found in syslog - try
>      dmesg | tail  or so
>
>mount: wrong fs type, bad option, bad superblock on /dev/mapper/vg0-lv2,
>      missing codepage or helper program, or other error
>      In some cases useful info is found in syslog - try
>      dmesg | tail  or so
>
>[ 3357.006833] JBD: no valid journal superblock found
>[ 3357.006837] EXT4-fs (dm-1): error loading journal
>[ 3357.022603] JBD: no valid journal superblock found
>[ 3357.022606] EXT4-fs (dm-2): error loading journal
>
>
>
>Apparently there is a problem with re-adding non-fresh disk back to the array.
>
>#mdadm -a -v /dev/md0 /dev/sdf
>mdadm: /dev/sdf reports being an active member for /dev/md0, but a
>--re-add fails.
>mdadm: not performing --add as that would convert /dev/sdf in to a spare.
>mdadm: To make this a spare, use "mdadm --zero-superblock /dev/sdf" first.
>
>Question: Is there a way to resync the array using that non-fresh
>disk, as it may contain blocks needed by these LVs.
>At this stage I don't really want to add this disk as a spare.
>
>Any suggestions please?
>
>
>thanks
>
>On Tue, Sep 13, 2011 at 8:44 PM, Andriano <chief000@gmail.com> wrote:
>> Thanks everyone, looks like the problem is solved.
>>
>> For benefit of others who may experience same issue, here is what I've done:
>>
>> - upgraded firmware on ST32000542AS disks - from CC34 to CC35. It must
>> be done using onboard SATA in Native IDE (not RAID/AHCI) mode.
>> After reconnecting them back to HBA, size of one of the offenders fixed itself!
>>
>> - ran hdparm -N p3907029168 /dev/sdx command on other two disks and it
>> worked (probably it works straight after reboot)
>> Now mdadm -D shows the array as clean, degraded with one disk kicked
>> out, which is another story :)
>>
>> now need to resync array and restore two LVs which hasn't mounted :(
>>
>> On Tue, Sep 13, 2011 at 8:29 PM, Roman Mamedov <rm@romanrm.ru> wrote:
>>> On Tue, 13 Sep 2011 19:05:41 +1000
>>> Andriano <chief000@gmail.com> wrote:
>>>
>>>> Connected one of the offenders to HBA port, and hdparm outputs this:
>>>>
>>>> #hdparm -N /dev/sdh
>>>>
>>>> /dev/sdh:
>>>>  max sectors   = 3907027055/14715056(18446744073321613488?), HPA
>>>> setting seems invalid (buggy kernel device driver?)
>>>
>>> You could just try "hdparm -N p3907029168" (capacity of the 'larger' disks), but that could fail if the device driver is indeed buggy.
>>>
>>> Another possible course of action would be to try that on some other controller.
>>> For example on your motherboard you have two violet ports, http://www.gigabyte.ru/products/upload/products/1470/100a.jpg
>>> those are managed by the JMicron JMB363 controller, try plugging the disks which need HPA to be removed to those ports, AFAIR that JMicron controller works with "hdparm -N" just fine.
>>>
>>> --
>>> With respect,
>>> Roman
>>>
>>
>--
>To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>the body of a message to majordomo@vger.kernel.org
>More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: RAID6 issues
  2011-09-13  6:14 RAID6 issues Andriano
  2011-09-13  6:25 ` NeilBrown
@ 2011-09-27 18:46 ` Thomas Fjellstrom
  2011-09-27 19:14   ` Stan Hoeppner
  1 sibling, 1 reply; 28+ messages in thread
From: Thomas Fjellstrom @ 2011-09-27 18:46 UTC (permalink / raw)
  To: Andriano; +Cc: linux-raid

On September 13, 2011, Andriano wrote:
> Hello Linux-RAID mailing list,
> 
> I have an issue with my RAID6 array.
> Here goes a short description of the system:
> 
> opensuse 11.4
> Linux 3.0.4-2-desktop #1 SMP PREEMPT Wed Aug 31 09:30:44 UTC 2011
> (a432f18) x86_64 x86_64 x86_64 GNU/Linux
> Gigabyte EP35C-DS3 motherboard with 8 SATA ports + SuperMicro
> AOC-SASLP-MV8 based on Marvel 6480, firmware updated to 3.1.0.21
> running mdadm 3.2.2, single array consists of 10 2T disks, 8 of them
> connected to the HBA, 2 - motherboard ports

Hi, this is slightly off topic, but I have an AOC-SASLP-MV8 as well, and I'd
suggest swapping it out for a different card. The Linux mvsas driver has been
crap for years, and just recently it nearly killed my RAID5 array, twice. One
time it needed to resync; the next it first kicked out one drive, then the rest
shortly after. I was not amused.

Stan Hoeppner suggested the LSI SAS1068E card, which looks to be very nice.
It is only slightly more expensive than the SASLP and well supported under Linux.

I'll hopefully be getting my hands on an LSI 9210-8i soon. Shortly thereafter
I'll be selling my SASLP.

[snip]
> 
> Andrew
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


-- 
Thomas Fjellstrom
thomas@fjellstrom.ca

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: RAID6 issues
  2011-09-27 18:46 ` Thomas Fjellstrom
@ 2011-09-27 19:14   ` Stan Hoeppner
  2011-09-27 21:04     ` Thomas Fjellstrom
  0 siblings, 1 reply; 28+ messages in thread
From: Stan Hoeppner @ 2011-09-27 19:14 UTC (permalink / raw)
  To: thomas; +Cc: Andriano, linux-raid

On 9/27/2011 1:46 PM, Thomas Fjellstrom wrote:

> Stan Hoeppner suggested the LSI SAS1068E card, which looks to be very nice.
> Only slightly more expensive than the SASLP and well supported under linux.
>
> I'll hopefully be getting my hands on a LSI 9210-8i soon. Shortly there-after
> I'll be selling my SASLP.

The 9210-8i is a newer generation card using the SAS2008 chip.  IOPS 
potential is over double that of the 1068E cards, 320M vs 140M, and 
you'll have support for drives larger than 2TB.  It also supports SATA3 
link speed whereas the 1068E chips only support SATA2.  It has a PCIe x8 
2.0 interface for 8GB/s b/w, whereas the 1068E has a PCIe x8 1.0 
interface for only 4GB/s.

In short, it has quite a bit more capability than the 1068E based cards.

-- 
Stan

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: RAID6 issues
  2011-09-27 19:14   ` Stan Hoeppner
@ 2011-09-27 21:04     ` Thomas Fjellstrom
  2011-09-28  2:47       ` Stan Hoeppner
  2011-09-28  6:03       ` Mikael Abrahamsson
  0 siblings, 2 replies; 28+ messages in thread
From: Thomas Fjellstrom @ 2011-09-27 21:04 UTC (permalink / raw)
  To: stan; +Cc: Andriano, linux-raid

> Stan Hoeppner <stan@hardwarefreak.com> wrote:
> >On 9/27/2011 1:46 PM, Thomas Fjellstrom wrote:

>> Stan Hoeppner suggested the LSI SAS1068E card, which looks to be very nice.
>> Only slightly more expensive than the SASLP and well supported under linux.
>>
>> I'll hopefully be getting my hands on a LSI 9210-8i soon. Shortly there-
after
>> I'll be selling my SASLP.
>
> The 9210-8i is a newer generation card using the SAS2008 chip.  IOPS 
> potential is over double that of the 1068E cards, 320M vs 140M, and 
> you'll have support for drives larger than 2TB.  It also supports SATA3 
> link speed whereas the 1068E chips only support SATA2.  It has a PCIe x8 
> 2.0 interface for 8GB/s b/w, whereas the 1068E has a PCIe x8 1.0 
> interface for only 4GB/s.
>
> In short, it's has quite a bit more capability than the 1068E based cards.


Yeah, I was impressed by the claimed specs. I bet if I knew how much it sells
for, I'd be shocked. I did a little searching but didn't have much luck.

> -- 
> Stan

p.s. Sorry for the duplicate, Stan, I couldn't figure out how to disable HTML on
my Android mail client, and linux-raid bounced it.

--
Thomas Fjellstrom
thomas@fjellstrom.ca

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: RAID6 issues
  2011-09-27 21:04     ` Thomas Fjellstrom
@ 2011-09-28  2:47       ` Stan Hoeppner
  2011-09-28  6:52         ` Thomas Fjellstrom
  2011-09-28  6:03       ` Mikael Abrahamsson
  1 sibling, 1 reply; 28+ messages in thread
From: Stan Hoeppner @ 2011-09-28  2:47 UTC (permalink / raw)
  To: Thomas Fjellstrom; +Cc: Andriano, linux-raid

On 9/27/2011 4:04 PM, Thomas Fjellstrom wrote:
>> Stan Hoeppner<stan@hardwarefreak.com>  wrote:

>> The 9210-8i is a newer generation card using the SAS2008 chip.  IOPS
>> potential is over double that of the 1068E cards, 320M vs 140M, and
>> you'll have support for drives larger than 2TB.  It also supports SATA3
>> link speed whereas the 1068E chips only support SATA2.  It has a PCIe x8
>> 2.0 interface for 8GB/s b/w, whereas the 1068E has a PCIe x8 1.0
>> interface for only 4GB/s.
>>
>> In short, it's has quite a bit more capability than the 1068E based cards.
>
>
> Yeah, i was impressed by the claimed specs. I bet if i knew how much it sells
> for, i'd be shocked. I did a little searching but didn't have much luck.

That's because the 9210* is...
"*Only available to OEMs through LSI direct sales."
IBM sells the 9210-8i as the ServeRAID M1015, available at Newegg for 
$320 (way over priced as with all things Big Blue).  IBM adds optional 
RAID5/50 fakeraid to the BIOS with an additional license key payment. 
The retail version of the 9210-8i is the LSI 9240-8i, available at 
Newegg for $265.

However, the specs on the 9211-8i are the same, and the connector layout 
is better--front vs top.  I recommend it over the 9240-8i.  And it's
a little cheaper to boot, $240 vs $265, at Newegg:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816118112

9211-8i full specs:
http://www.lsi.com/products/storagecomponents/Pages/LSISAS9211-8i.aspx

Unless you need to use drives larger than 2TB, the $155 Intel 1068E card
is a far better buy at almost $100 less.  If you need to connect more 
than 8 drives, get a 9211-4i and one of these Intel expanders for 20 
drive ports:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816117207

This combo will run you ~$450, or $22.50/port for 20 ports.  The 9211-8i
runs $30/port for 8 ports.

> p.s. Sory for the duplicate Stan, I couldn't figure out how to disable html on
> my android mail client, and linux-raid bounced it.

No need to apologize.  Sh*t happens.

-- 
Stan

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: RAID6 issues
  2011-09-27 21:04     ` Thomas Fjellstrom
  2011-09-28  2:47       ` Stan Hoeppner
@ 2011-09-28  6:03       ` Mikael Abrahamsson
  2011-09-28  6:53         ` Thomas Fjellstrom
  1 sibling, 1 reply; 28+ messages in thread
From: Mikael Abrahamsson @ 2011-09-28  6:03 UTC (permalink / raw)
  To: Thomas Fjellstrom; +Cc: stan, Andriano, linux-raid

On Tue, 27 Sep 2011, Thomas Fjellstrom wrote:

> Yeah, i was impressed by the claimed specs. I bet if i knew how much it 
> sells for, i'd be shocked. I did a little searching but didn't have much 
> luck.

I've been able to procure several 1068E based cards for around USD 50 on
eBay. The IBM BR10i is one example. You might have to live without a
proper mounting bracket, but it's a proper 1068E card as far as I can tell
(it has worked well in my testing).

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: RAID6 issues
  2011-09-28  2:47       ` Stan Hoeppner
@ 2011-09-28  6:52         ` Thomas Fjellstrom
  0 siblings, 0 replies; 28+ messages in thread
From: Thomas Fjellstrom @ 2011-09-28  6:52 UTC (permalink / raw)
  To: stan; +Cc: Andriano, linux-raid

On September 27, 2011, Stan Hoeppner wrote:
> On 9/27/2011 4:04 PM, Thomas Fjellstrom wrote:
> >> Stan Hoeppner<stan@hardwarefreak.com>  wrote:
> >> 
> >> The 9210-8i is a newer generation card using the SAS2008 chip.  IOPS
> >> potential is over double that of the 1068E cards, 320M vs 140M, and
> >> you'll have support for drives larger than 2TB.  It also supports SATA3
> >> link speed whereas the 1068E chips only support SATA2.  It has a PCIe x8
> >> 2.0 interface for 8GB/s b/w, whereas the 1068E has a PCIe x8 1.0
> >> interface for only 4GB/s.
> >> 
> >> In short, it's has quite a bit more capability than the 1068E based
> >> cards.
> > 
> > Yeah, i was impressed by the claimed specs. I bet if i knew how much it
> > sells for, i'd be shocked. I did a little searching but didn't have much
> > luck.
> 
> That's because the 9210* is...
> "*Only available to OEMs through LSI direct sales."
> IBM sells the 9210-8i as the ServeRAID M1015, available at Newegg for
> $320 (way over priced as with all things Big Blue).  IBM adds optional
> RAID5/50 fakeraid to the BIOS with an additional license key payment.
> The retail version of the 9210-8i is the LSI 9240-8i, available at
> Newegg for $265.
> 
> However, the specs on the 9211-8i are the same, and the connector layout
> is better--front vs top.  I recommend it over of the 9240-8i.  And it's
> a little cheaper to boot, $240 vs $265, at Newegg:
> http://www.newegg.com/Product/Product.aspx?Item=N82E16816118112
> 
> 9211-8i full specs:
> http://www.lsi.com/products/storagecomponents/Pages/LSISAS9211-8i.aspx
>
> Unless you need to use drives larger than 2TB the $155 Intel 1068E card
> is a far better buy at almost $100 less.  If you need to connect more
> than 8 drives, get a 9211-4i and one of these Intel expanders for 20
> drive ports:
> http://www.newegg.com/Product/Product.aspx?Item=N82E16816117207
> 
> This combo with run you ~$450, or $22.50/port for 20 ports.  The 9211-8i
> runs $30/port for 8 ports.

Very interesting and useful information :) I'm actually trading my time for
the card; a very nice fellow on the local LUG list offered it after (I assume)
he saw my earlier thread here.

I've been thinking about getting an expander. It won't happen for a while, though.

> > p.s. Sory for the duplicate Stan, I couldn't figure out how to disable
> > html on my android mail client, and linux-raid bounced it.
> 
> No need to apologize.  Sh*t happens.

Thanks for the information so far, its been quite helpful.

-- 
Thomas Fjellstrom
thomas@fjellstrom.ca

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: RAID6 issues
  2011-09-28  6:03       ` Mikael Abrahamsson
@ 2011-09-28  6:53         ` Thomas Fjellstrom
  0 siblings, 0 replies; 28+ messages in thread
From: Thomas Fjellstrom @ 2011-09-28  6:53 UTC (permalink / raw)
  To: Mikael Abrahamsson; +Cc: stan, Andriano, linux-raid

On September 28, 2011, Mikael Abrahamsson wrote:
> On Tue, 27 Sep 2011, Thomas Fjellstrom wrote:
> > Yeah, i was impressed by the claimed specs. I bet if i knew how much it
> > sells for, i'd be shocked. I did a little searching but didn't have much
> > luck.
> 
> I've been able to procure several 1068E based cards for around USD50 on
> ebay. The IBM BR10i is one example. You might have to live without a
> proper mounting bracket, but it's a proper 1068E card as far as I can tell
> (has worked well in my testing).

Hm, I'll have to keep an eye open for when I decide to rebuild my backup array.
Thanks for the heads up :)

-- 
Thomas Fjellstrom
thomas@fjellstrom.ca

^ permalink raw reply	[flat|nested] 28+ messages in thread

end of thread, other threads:[~2011-09-28  6:53 UTC | newest]

Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-06-16 20:28 raid6 issues Chad Walker
2011-06-18 19:48 ` Chad Walker
2011-06-18 19:55   ` Chad Walker
2011-06-18 23:01     ` NeilBrown
2011-06-18 23:14       ` Chad Walker
  -- strict thread matches above, loose matches on Subject: below --
2011-09-13  6:14 RAID6 issues Andriano
2011-09-13  6:25 ` NeilBrown
2011-09-13  6:33   ` Andriano
2011-09-13  6:44     ` NeilBrown
2011-09-13  7:05       ` Andriano
2011-09-13  7:38         ` NeilBrown
2011-09-13  7:51           ` Andriano
2011-09-13  8:10             ` NeilBrown
2011-09-13  8:12             ` Alexander Kühn
2011-09-13  8:44             ` Roman Mamedov
2011-09-13  8:57               ` Andriano
2011-09-13  9:05                 ` Andriano
2011-09-13 10:29                   ` Roman Mamedov
2011-09-13 10:44                     ` Andriano
2011-09-13 13:45                       ` Andriano
2011-09-27 18:46 ` Thomas Fjellstrom
2011-09-27 19:14   ` Stan Hoeppner
2011-09-27 21:04     ` Thomas Fjellstrom
2011-09-28  2:47       ` Stan Hoeppner
2011-09-28  6:52         ` Thomas Fjellstrom
2011-09-28  6:03       ` Mikael Abrahamsson
2011-09-28  6:53         ` Thomas Fjellstrom
2011-09-13 14:24 NeilBrown

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).