public inbox for linux-xfs@vger.kernel.org
* xfs_repair failing.
@ 2012-07-16 21:51 Emmanuel Florac
  2012-07-16 21:56 ` Joe Landman
  0 siblings, 1 reply; 4+ messages in thread
From: Emmanuel Florac @ 2012-07-16 21:51 UTC (permalink / raw)
  To: xfs


Hi list,
I have a serious problem with a 60 TB XFS filesystem which looks
seriously hosed. It was repaired once earlier last week, but crashed
again after a repair friday and now xfs_repair fails with this message:

 bad magic # 0x1 in btcnt block 61/27137
 bad magic # 0xfffe68ff in btcnt block 61/201728942  

(xfs_repair 2.9.8)

The filesystem can only be mounted read-only at this point.
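[Editor's note: a hedged illustration of what the error message means. "btcnt" is the per-AG by-count free-space btree, whose on-disk blocks start with the big-endian magic "ABTC" (0x41425443); xfs_repair reports "bad magic #" when the first four bytes of such a block hold anything else, as with the 0x1 and 0xfffe68ff values above. A minimal sketch of that check:]

```python
import struct

# On-disk magic values for the XFS v4 allocation btrees (big-endian 32-bit).
# "btcnt" in the xfs_repair message refers to the by-count btree ("ABTC").
KNOWN_MAGICS = {
    0x41425442: "ABTB (free space by-block btree)",
    0x41425443: "ABTC (free space by-count btree)",
}

def identify_magic(block: bytes) -> str:
    """Return the btree type for a raw block, or flag a bad magic."""
    (magic,) = struct.unpack_from(">I", block, 0)
    return KNOWN_MAGICS.get(magic, f"bad magic 0x{magic:x}")

# The values xfs_repair complained about are nowhere near valid magics:
print(identify_magic(struct.pack(">I", 0x1) + b"\0" * 4092))         # -> bad magic 0x1
print(identify_magic(struct.pack(">I", 0xFFFE68FF) + b"\0" * 4092))  # -> bad magic 0xfffe68ff
print(identify_magic(b"ABTC" + b"\0" * 4092))                        # -> ABTC (free space by-count btree)
```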

I suspect RAID corruption (Adaptec 52445 with firmware 18252, Hitachi
HUA 3TB drives, RAID 6). The RAID controller has been replaced, but it
doesn't help. Any ideas?

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: xfs_repair failing.
  2012-07-16 21:51 xfs_repair failing Emmanuel Florac
@ 2012-07-16 21:56 ` Joe Landman
  2012-07-16 22:05   ` Emmanuel Florac
  2012-07-17 15:53   ` Emmanuel Florac
  0 siblings, 2 replies; 4+ messages in thread
From: Joe Landman @ 2012-07-16 21:56 UTC (permalink / raw)
  To: xfs

On 07/16/2012 05:51 PM, Emmanuel Florac wrote:
>
> Hi list,
> I have a serious problem with a 60 TB XFS filesystem which looks
> seriously hosed. It was repaired once earlier last week, but crashed
> again after a repair friday and now xfs_repair fails with this message:
>
>   bad magic # 0x1 in btcnt block 61/27137
>   bad magic # 0xfffe68ff in btcnt block 61/201728942
>
> (xfs_repair 2.9.8)
>
> The filesystem can only be mounted read-only at this point.
>
> I suspect RAID corruption (Adaptec 52445 with firmware 18252, Hitachi
> HUA 3TB drives, RAID 6). The RAID controller has been replaced, but it
> doesn't help. Any ideas?
>

Can you update the xfs tools to the current (3.1.8 I think)?  Also, 
which kernel is being used?



-- 
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: landman@scalableinformatics.com
web  : http://scalableinformatics.com
        http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615



* Re: xfs_repair failing.
  2012-07-16 21:56 ` Joe Landman
@ 2012-07-16 22:05   ` Emmanuel Florac
  2012-07-17 15:53   ` Emmanuel Florac
  1 sibling, 0 replies; 4+ messages in thread
From: Emmanuel Florac @ 2012-07-16 22:05 UTC (permalink / raw)
  To: Joe Landman; +Cc: xfs

On Mon, 16 Jul 2012 17:56:59 -0400, you wrote:

> Can you update the xfs tools to the current (3.1.8 I think)?  Also, 
> which kernel is being used?
> 

We'll try running a recent enough version of xfs_repair tomorrow. The
kernel is 3.2.21 at the moment. The xfs filesystem looks like this:

xfs_info /dev/dm-0
meta-data=/dev/mapper/vg0-raid   isize=256    agcount=71, agsize=268435440 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=19013298176, imaxpct=1
         =                       sunit=16     swidth=32 blks
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=131072 blocks=0, rtextents=0
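[Editor's note: the geometry above is enough to turn xfs_repair's AG-relative "61/27137" notation into an absolute position on the device, which is useful for inspecting the damaged region with dd or xfs_db. A sketch of that arithmetic, using the agsize and bsize values from the xfs_info output:]

```python
# Map an AG-relative block number ("agno/agbno" as printed by xfs_repair)
# to an absolute byte offset, using the geometry from xfs_info above.
AGSIZE_BLOCKS = 268435440   # agsize from xfs_info
BLOCK_SIZE = 4096           # bsize from xfs_info

def ag_block_to_byte_offset(agno: int, agbno: int) -> int:
    """Absolute byte offset of AG-relative filesystem block agno/agbno."""
    return (agno * AGSIZE_BLOCKS + agbno) * BLOCK_SIZE

# The first corrupt cntbt block xfs_repair reported, block 61/27137:
print(ag_block_to_byte_offset(61, 27137))  # 67070316449792, i.e. ~61 TiB in
```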



-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------



* Re: xfs_repair failing.
  2012-07-16 21:56 ` Joe Landman
  2012-07-16 22:05   ` Emmanuel Florac
@ 2012-07-17 15:53   ` Emmanuel Florac
  1 sibling, 0 replies; 4+ messages in thread
From: Emmanuel Florac @ 2012-07-17 15:53 UTC (permalink / raw)
  To: Joe Landman; +Cc: xfs

On Mon, 16 Jul 2012 17:56:59 -0400, you wrote:

> Can you update the xfs tools to the current (3.1.8 I think)?  Also, 
> which kernel is being used?
> 

I didn't have the latest version at hand, but we tried with version
3.1.4 and it failed in exactly the same manner. I tend to suspect an
incompatibility between the 3 TB drives and the RAID controller, but I
can't be sure. Has anyone heard of such a problem?
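[Editor's note: one classic failure mode when putting >2 TiB drives behind older controller firmware is 32-bit LBA truncation, where a write aimed at a high sector lands at (sector mod 2**32) instead. This is only a hedged hypothesis consistent with the suspicion above, not a confirmed diagnosis; the sketch computes where the two corrupt cntbt blocks would alias under such truncation, using the geometry from the xfs_info output earlier in the thread:]

```python
# Where would writes to the corrupt blocks land if the controller
# truncated LBAs to 32 bits? Geometry constants come from xfs_info.
AGSIZE_BLOCKS = 268435440   # agsize (blocks) from xfs_info
FS_BLOCK_SIZE = 4096        # bsize from xfs_info
SECTOR_SIZE = 512

def aliased_sector(agno: int, agbno: int) -> int:
    """Sector a write would hit under hypothetical 32-bit LBA truncation."""
    byte_offset = (agno * AGSIZE_BLOCKS + agbno) * FS_BLOCK_SIZE
    return (byte_offset // SECTOR_SIZE) % 2**32

# The two blocks xfs_repair flagged, both in AG 61:
for agno, agbno in [(61, 27137), (61, 201728942)]:
    print(f"AG {agno} block {agbno} -> aliased sector {aliased_sector(agno, agbno)}")
```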

Apparently only the XFS filesystem fails; another (reiserfs) filesystem
on another partition on the same RAID array didn't suffer from any
problem.

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------



end of thread, other threads:[~2012-07-17 15:53 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-07-16 21:51 xfs_repair failing Emmanuel Florac
2012-07-16 21:56 ` Joe Landman
2012-07-16 22:05   ` Emmanuel Florac
2012-07-17 15:53   ` Emmanuel Florac
