public inbox for linux-xfs@vger.kernel.org
* Buffalo LS-Q4.0 Raid 5 XFS errors
@ 2012-03-29  0:06 Kirk Anderson
  2012-03-29  6:40 ` Dave Chinner
  0 siblings, 1 reply; 8+ messages in thread
From: Kirk Anderson @ 2012-03-29  0:06 UTC (permalink / raw)
  To: xfs


[-- Attachment #1.1: Type: text/plain, Size: 8624 bytes --]

I have a Buffalo LS-QLF55 4TB RAID 5 box. It is out of warranty and was
running firmware 1.10. The unit stopped responding and would not power down
through either the web interface or the power button on the front, so I
unplugged it and plugged it back in. It now shows the drives as
unformatted. I have provided some information below and would greatly
appreciate guidance on what my next steps should be to minimize data loss.
Any and all help is appreciated. Thanks, Kirk

root@LS-QLF55:~# uname -a
Linux LS-QLF55 2.6.22.7 #395 Thu May 21 22:24:49 JST 2009 armv5tejl unknown
root@LS-QLF55:~#

root@LS-QLF55:~# mount -t xfs /dev/md2 /mnt/array1
mount: mounting /dev/md2 on /mnt/array1 failed: Structure needs cleaning

root@LS-QLF55:~# mount -t xfs -o,ro,norecovery /dev/md2 /mnt/array1
mount: mounting /dev/md2 on /mnt/array1 failed: Structure needs cleaning
root@LS-QLF55:~#
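For context: "Structure needs cleaning" is simply how mount/libc render errno 117 (EUCLEAN), the same code XFS logs as "error 117" in the dmesg output further down. A quick, illustrative sanity check of that mapping (not part of the original report; runs on any Linux box):

```python
import errno
import os

# errno 117 on Linux is EUCLEAN, which mount prints as
# "Structure needs cleaning" -- the code XFS logs as "error 117".
code = 117
print(errno.errorcode[code])  # EUCLEAN
print(os.strerror(code))      # Structure needs cleaning
```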

Partial dmesg output follows; the full log is lengthy.

root@LS-QLF55:~# dmesg
7:c6b86000 r6:00000000 r5:c6b85000 r4:00008000
[<c0099098>] (do_kern_mount+0x0/0xdc) from [<c00aebdc>] (do_mount+0x578/0x5c8)
 r8:00008000 r7:c6b85000 r6:00000008 r5:00000000 r4:00000000
[<c00ae664>] (do_mount+0x0/0x5c8) from [<c00aecb8>] (sys_mount+0x8c/0xd4)
[<c00aec2c>] (sys_mount+0x0/0xd4) from [<c0026f00>] (ret_fast_syscall+0x0/0x2c)
 r7:00000015 r6:00000000 r5:beba7abc r4:00000000
XFS: log mount/recovery failed: error 117
XFS: log mount failed
Filesystem "md2": Disabling barriers, not supported by the underlying device
XFS mounting filesystem md2
Starting XFS recovery on filesystem: md2 (logdev: internal)
Filesystem "md2": xfs_inode_recover: Bad inode magic number, dino ptr = 0xc6d2c000, dino bp = 0xc782fa80, ino = 256
Filesystem "md2": XFS internal error xlog_recover_do_inode_trans(1) at line 2310 of file fs/xfs/xfs_log_recover.c.  Caller 0xc017f368
[<c002b758>] (dump_stack+0x0/0x14) from [<c01666fc>] (xfs_error_report+0x54/0x64)
[<c01666a8>] (xfs_error_report+0x0/0x64) from [<c017eb2c>] (xlog_recover_do_inode_trans+0x28c/0x8ac)
[<c017e8a0>] (xlog_recover_do_inode_trans+0x0/0x8ac) from [<c017f368>] (xlog_recover_do_trans+0x80/0x154)
[<c017f2e8>] (xlog_recover_do_trans+0x0/0x154) from [<c017f478>] (xlog_recover_commit_trans+0x3c/0x54)
[<c017f43c>] (xlog_recover_commit_trans+0x0/0x54) from [<c017f5f4>] (xlog_recover_process_data+0x164/0x224)
 r7:c725e204 r6:c0b802d8 r5:08be0000 r4:e5000000
[<c017f490>] (xlog_recover_process_data+0x0/0x224) from [<c017f9b4>] (xlog_do_recovery_pass+0x300/0x828)
[<c017f6b4>] (xlog_do_recovery_pass+0x0/0x828) from [<c017ff54>] (xlog_do_log_recovery+0x78/0x9c)
[<c017fedc>] (xlog_do_log_recovery+0x0/0x9c) from [<c017ff98>] (xlog_do_recover+0x20/0x138)
 r9:00000000 r8:ad3a5038 r7:c0cb8aa0 r6:c0cb8aa0 r5:00000000
 r4:0000be08
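For readers unfamiliar with the failing check: log recovery aborts here because the on-disk buffer for inode 256 does not begin with the XFS inode magic, XFS_DINODE_MAGIC (0x494e, ASCII "IN"). A toy Python illustration of that comparison (an assumption-laden sketch, not the kernel code itself):

```python
import struct

XFS_DINODE_MAGIC = 0x494E  # ASCII "IN"; defined in the kernel's XFS headers

def dinode_magic_ok(buf: bytes) -> bool:
    """Check the 16-bit big-endian magic at offset 0 of an on-disk inode."""
    (magic,) = struct.unpack_from(">H", buf, 0)
    return magic == XFS_DINODE_MAGIC

print(dinode_magic_ok(b"IN" + bytes(94)))  # True: well-formed inode header
print(dinode_magic_ok(bytes(96)))          # False: roughly what recovery hit
```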

root@LS-QLF55:~# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md2 : active raid5 sda6[0] sdd6[3] sdc6[2] sdb6[1]
      2906278656 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

md1 : active raid1 sda2[0] sdd2[3] sdc2[2] sdb2[1]
      5004160 blocks [4/4] [UUUU]

md10 : active raid1 sda5[0] sdd5[3] sdc5[2] sdb5[1]
      1003904 blocks [4/4] [UUUU]

md0 : active raid1 sda1[0] sdd1[3] sdc1[2] sdb1[1]
      1003904 blocks [4/4] [UUUU]

unused devices: <none>
root@LS-QLF55:~#
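The mdstat output above already carries the key fact: "[4/4] [UUUU]" means all four members of md2 are present and in sync, so the array assembled cleanly and the damage is above the md layer. A throwaway sketch of reading that field programmatically (sample text pasted from the transcript; not a tool used in this thread):

```python
import re

# Sample copied from the /proc/mdstat transcript above.
MDSTAT_SAMPLE = """\
md2 : active raid5 sda6[0] sdd6[3] sdc6[2] sdb6[1]
      2906278656 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
"""

def array_is_degraded(mdstat_text: str, name: str) -> bool:
    """True if the named md array reports fewer working members than slots."""
    block = re.search(rf"^{name} :.*?\[(\d+)/(\d+)\]", mdstat_text,
                      re.S | re.M)
    total, working = map(int, block.groups())  # mdstat prints [total/working]
    return working < total

print(array_is_degraded(MDSTAT_SAMPLE, "md2"))  # False: [4/4], all members up
```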

root@LS-QLF55:~# cat /proc/partitions
major minor  #blocks  name

   8     0  976761527 sda
   8     1    1004031 sda1
   8     2    5004247 sda2
   8     4          1 sda4
   8     5    1004031 sda5
   8     6  968759631 sda6
   8    16  976761527 sdb
   8    17    1004031 sdb1
   8    18    5004247 sdb2
   8    20          1 sdb4
   8    21    1004031 sdb5
   8    22  968759631 sdb6
   8    32  976761527 sdc
   8    33    1004031 sdc1
   8    34    5004247 sdc2
   8    36          1 sdc4
   8    37    1004031 sdc5
   8    38  968759631 sdc6
   8    48  976762584 sdd
   8    49    1004031 sdd1
   8    50    5004247 sdd2
   8    52          1 sdd4
   8    53    1004031 sdd5
   8    54  968759631 sdd6
  31     0        256 mtdblock0
   9     0    1003904 md0
   9    10    1003904 md10
   9     1    5004160 md1
   9     2 2906278656 md2
root@LS-QLF55:~#

root@LS-QLF55:~# mdadm --examine /dev/sda6
/dev/sda6:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 89944e67:449d284c:d2881af4:19e5a1cb
  Creation Time : Thu Feb 26 20:33:41 2009
     Raid Level : raid5
    Device Size : 968759552 (923.88 GiB 992.01 GB)
     Array Size : 2906278656 (2771.64 GiB 2976.03 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 2

    Update Time : Wed Mar 28 07:28:40 2012
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 36a3ffda - correct
         Events : 0.12

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8        6        0      active sync   /dev/sda6

   0     0       8        6        0      active sync   /dev/sda6
   1     1       8       22        1      active sync   /dev/sdb6
   2     2       8       38        2      active sync   /dev/sdc6
   3     3       8       54        3      active sync   /dev/sdd6
root@LS-QLF55:~#

root@LS-QLF55:~# mdadm --examine /dev/sdb6
/dev/sdb6:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 89944e67:449d284c:d2881af4:19e5a1cb
  Creation Time : Thu Feb 26 20:33:41 2009
     Raid Level : raid5
    Device Size : 968759552 (923.88 GiB 992.01 GB)
     Array Size : 2906278656 (2771.64 GiB 2976.03 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 2

    Update Time : Wed Mar 28 07:28:40 2012
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 36a3ffec - correct
         Events : 0.12

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     1       8       22        1      active sync   /dev/sdb6

   0     0       8        6        0      active sync   /dev/sda6
   1     1       8       22        1      active sync   /dev/sdb6
   2     2       8       38        2      active sync   /dev/sdc6
   3     3       8       54        3      active sync   /dev/sdd6
root@LS-QLF55:~#

root@LS-QLF55:~# mdadm --examine /dev/sdc6
/dev/sdc6:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 89944e67:449d284c:d2881af4:19e5a1cb
  Creation Time : Thu Feb 26 20:33:41 2009
     Raid Level : raid5
    Device Size : 968759552 (923.88 GiB 992.01 GB)
     Array Size : 2906278656 (2771.64 GiB 2976.03 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 2

    Update Time : Wed Mar 28 07:28:40 2012
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 36a3fffe - correct
         Events : 0.12

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     2       8       38        2      active sync   /dev/sdc6

   0     0       8        6        0      active sync   /dev/sda6
   1     1       8       22        1      active sync   /dev/sdb6
   2     2       8       38        2      active sync   /dev/sdc6
   3     3       8       54        3      active sync   /dev/sdd6
root@LS-QLF55:~#

root@LS-QLF55:~# mdadm --examine /dev/sdd6
/dev/sdd6:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 89944e67:449d284c:d2881af4:19e5a1cb
  Creation Time : Thu Feb 26 20:33:41 2009
     Raid Level : raid5
    Device Size : 968759552 (923.88 GiB 992.01 GB)
     Array Size : 2906278656 (2771.64 GiB 2976.03 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 2

    Update Time : Wed Mar 28 07:28:40 2012
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 36a40010 - correct
         Events : 0.12

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     3       8       54        3      active sync   /dev/sdd6

   0     0       8        6        0      active sync   /dev/sda6
   1     1       8       22        1      active sync   /dev/sdb6
   2     2       8       38        2      active sync   /dev/sdc6
   3     3       8       54        3      active sync   /dev/sdd6
root@LS-QLF55:~#
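All four --examine dumps report the same UUID, the same event count (0.12), the same update time, and a clean state, which is exactly the consistency worth confirming before suspecting the md layer itself. A trivial sketch of that comparison, with the values copied from the transcripts above (a hypothetical helper, not a command from this thread):

```python
# Values pasted from the four mdadm --examine transcripts above.
members = {
    "sda6": {"uuid": "89944e67:449d284c:d2881af4:19e5a1cb", "events": "0.12"},
    "sdb6": {"uuid": "89944e67:449d284c:d2881af4:19e5a1cb", "events": "0.12"},
    "sdc6": {"uuid": "89944e67:449d284c:d2881af4:19e5a1cb", "events": "0.12"},
    "sdd6": {"uuid": "89944e67:449d284c:d2881af4:19e5a1cb", "events": "0.12"},
}

# All members must agree on array identity and event counter.
uuids = {m["uuid"] for m in members.values()}
events = {m["events"] for m in members.values()}
consistent = len(uuids) == 1 and len(events) == 1
print(consistent)  # True: members agree, so the damage sits above md
```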

[-- Attachment #1.2: Type: text/html, Size: 24448 bytes --]

[-- Attachment #2: Type: text/plain, Size: 121 bytes --]

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


end of thread, other threads:[~2012-03-30  4:46 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-03-29  0:06 Buffalo LS-Q4.0 Raid 5 XFS errors Kirk Anderson
2012-03-29  6:40 ` Dave Chinner
     [not found]   ` <002501cd0dbe$24be21c0$6e3a6540$@tx.rr.com>
2012-03-29 21:31     ` Dave Chinner
2012-03-29 22:15       ` Kirk Anderson
2012-03-29 23:03         ` Dave Chinner
2012-03-29 23:29           ` Kirk Anderson
2012-03-29 23:52             ` Dave Chinner
     [not found]               ` <005601cd0e11$c0b62d90$422288b0$@tx.rr.com>
2012-03-30  4:46                 ` Eric Sandeen
