* mdadm bug - inconsistent output in -D mode
From: Peter Rabbitson @ 2009-01-15 11:50 UTC
  To: linux-raid

Hi,

I suppose this is a remnant of the effort to convert all sector counts
to bytes. Consider this output:

root@Thesaurus:~# mdadm -E /dev/sda1
/dev/sda1:
          Magic : a92b4efc
        Version : 1.1
    Feature Map : 0x0
     Array UUID : 6b11b1ba:78985745:b320fc1a:1db68bcf
           Name : Thesaurus:Crypta  (local to host Thesaurus)
  Creation Time : Sat Mar  8 16:33:41 2008
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 578130828 (275.67 GiB 296.00 GB)   <--- Correct:
     Array Size : 1730162688 (825.01 GiB 885.84 GB)  <--- sectors(bytes)
  Used Dev Size : 576720896 (275.00 GiB 295.28 GB)
    Data Offset : 264 sectors
   Super Offset : 0 sectors
          State : active
    Device UUID : 85e0e62b:56129e5f:b0d82e78:2ace7d37

    Update Time : Thu Jan 15 12:42:53 2009
       Checksum : 61a32529 - correct
         Events : 764390

         Layout : left-symmetric
     Chunk Size : 2048K

    Array Slot : 5 (failed, failed, 2, 3, 0, 1)  <--- by the way: wtf?
   Array State : uUuu 2 failed                   <--- ditto


root@Thesaurus:~# mdadm -D /dev/md5
/dev/md5:
        Version : 01.01
  Creation Time : Sat Mar  8 16:33:41 2008
     Raid Level : raid5
     Array Size : 865081344 (825.01 GiB 885.84 GB) <--- KiB(bytes)
  Used Dev Size : 576720896 (550.00 GiB 590.56 GB) <--- sectors(2xbytes)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 5
    Persistence : Superblock is persistent

    Update Time : Thu Jan 15 12:42:29 2009
          State : active
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 2048K

           Name : Thesaurus:Crypta  (local to host Thesaurus)
           UUID : 6b11b1ba:78985745:b320fc1a:1db68bcf
         Events : 764391

    Number   Major   Minor   RaidDevice State
       4       8       67        0      active sync   /dev/sde3
       5       8        1        1      active sync   /dev/sda1
       2       8       19        2      active sync   /dev/sdb3
       3       8       35        3      active sync   /dev/sdc3

root@Thesaurus:~# mdadm -V
mdadm - v2.6.7.1 - 15th October 2008
root@Thesaurus:~# uname -a
Linux Thesaurus 2.6.24.7.th1 #1 PREEMPT Sun May 11 20:18:05 CEST 2008
i686 GNU/Linux
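
For what it's worth, here is a quick way to check which unit each raw -D
figure actually matches (a throwaway Python sketch; the human() helper and
the 1 KiB / 512-byte-sector interpretations are my own, not anything mdadm
defines):

# Compare the raw -D numbers against their printed GiB/GB values,
# interpreting them either as KiB or as 512-byte sectors.
def human(raw, unit_bytes):
    b = raw * unit_bytes
    return "%.2f GiB / %.2f GB" % (b / 2.0**30, b / 1e9)

for label, raw in (("Array Size", 865081344), ("Used Dev Size", 576720896)):
    print("%s as KiB:     %s" % (label, human(raw, 1024)))
    print("%s as sectors: %s" % (label, human(raw, 512)))

# Array Size only matches its printed 825.01 GiB / 885.84 GB when read as
# KiB, while Used Dev Size only makes sense as sectors (275.00 GiB, the -E
# value); -D converts it as if it were KiB and reports double the real size.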

P.S. I know my kernel is old, but I suspect it's a mdadm problem.


* Re: mdadm bug - inconsistent output in -D mode
From: Michał Przyłuski @ 2009-01-18 20:04 UTC
  To: Peter Rabbitson; +Cc: linux-raid

2009/1/15 Peter Rabbitson <rabbit+list@rabbit.us>:
> Hi,
>
> I suppose this is a remnant of the effort to convert all sector counts
> to bytes. Consider this output:

Hello,
I have the same issue (not a biggie anyway) with a 6*750G raid6. I guess
it might be related to 1.x superblocks, though; I don't recall this
happening with my old 0.9-superblock arrays.

[root@kylie kotek]# ~kotek/mdadm-2.6.4/mdadm --detail /dev/md1
/dev/md1:
        Version : 01.01.03
  Creation Time : Mon Nov 10 19:59:48 2008
     Raid Level : raid6
     Array Size : 2930294784 (2794.55 GiB 3000.62 GB)
  Used Dev Size : 1465147392 (698.64 GiB 750.16 GB)
   Raid Devices : 6
  Total Devices : 6
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Sun Jan 18 20:44:35 2009
          State : clean
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 1024K

           Name : 1
           UUID : 4eaba5a9:cb767b93:c73450fe:c1dc27c9
         Events : 487870

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       1       8       64        1      active sync   /dev/sde
       4       8       96        2      active sync   /dev/sdg
       5       8       80        3      active sync   /dev/sdf
       7       8       32        4      active sync   /dev/sdc
       6       8       16        5      active sync   /dev/sdb
[root@kylie kotek]# ~kotek/mdadm-2.6.7/mdadm --detail /dev/md1
/dev/md1:
        Version : 01.01
  Creation Time : Mon Nov 10 19:59:48 2008
     Raid Level : raid6
     Array Size : 2930294784 (2794.55 GiB 3000.62 GB)
  Used Dev Size : 1465147392 (1397.27 GiB 1500.31 GB)
   Raid Devices : 6
  Total Devices : 6
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Sun Jan 18 20:44:39 2009
          State : clean
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 1024K

           Name : 1
           UUID : 4eaba5a9:cb767b93:c73450fe:c1dc27c9
         Events : 487870

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       1       8       64        1      active sync   /dev/sde
       4       8       96        2      active sync   /dev/sdg
       5       8       80        3      active sync   /dev/sdf
       7       8       32        4      active sync   /dev/sdc
       6       8       16        5      active sync   /dev/sdb

and --examine in all cases gives (what seems to be) correct data, that is

 Avail Dev Size : 1465148904 (698.64 GiB 750.16 GB)
     Array Size : 5860589568 (2794.55 GiB 3000.62 GB)
  Used Dev Size : 1465147392 (698.64 GiB 750.16 GB)
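
To put numbers on it (a rough Python sketch; the KiB and 512-byte-sector
interpretations are my assumption):

raw = 1465147392  # Used Dev Size, same raw value in --examine and both --detail runs

# Sector interpretation matches --examine and the 2.6.4 --detail output:
print(raw * 512 / 2.0**30, raw * 512 / 1e9)    # ~698.64 GiB, ~750.16 GB

# KiB interpretation matches the 2.6.7 --detail output, exactly 2x too big:
print(raw * 1024 / 2.0**30, raw * 1024 / 1e9)  # ~1397.27 GiB, ~1500.31 GB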



> root@Thesaurus:~# mdadm -E /dev/sda1
[snip]
>
>    Array Slot : 5 (failed, failed, 2, 3, 0, 1)  <--- by the way: wtf?
>   Array State : uUuu 2 failed                   <--- ditto

    Array Slot : 6 (0, 1, failed, failed, 2, 3, 5, 4)
   Array State : uuuuuU 2 failed

That's how it looks here, and I guess that's perfectly OK. There was a
post on the list by Neil regarding that matter some weeks ago. I believe
those two "failed" entries are due to the way raid5/6 arrays are created:
as a degraded array, which then rebuilds onto the nth disk.

However, in my case it might also be caused by the fact that the array
was created with one drive genuinely missing (i.e. it contained 3, then
4, then 6 drives).

Oh, got it:
Date:    20 December 2008 02:16
Subject: Re: can you help explain some --examine output to me?

> root@Thesaurus:~# mdadm -V
> mdadm - v2.6.7.1 - 15th October 2008
Here 2.6.2 and 2.6.4 are correct, and 2.6.7 is not quite.

Well, OK: correct in terms of the human-readable output. It's hard to say
whether the raw data is correct; that depends on whether it's sectors or
bytes, as Peter mentioned.

[kotek@kylie ~]$ uname -a
Linux kylie 2.6.23-0.214.rc8.git2.fc8 #1 SMP Fri Sep 28 17:10:49 EDT
2007 x86_64 x86_64 x86_64 GNU/Linux

> root@Thesaurus:~# uname -a
> Linux Thesaurus 2.6.24.7.th1 #1 PREEMPT Sun May 11 20:18:05 CEST 2008
> i686 GNU/Linux
>
> P.S. I know my kernel is old, but I suspect it's a mdadm problem.
Mine is older! ;-)

Greets,
Mike

