From: Eric Sandeen <sandeen@sandeen.net>
To: Michael Monnerie <michael.monnerie@is.it-management.at>
Cc: xfs@oss.sgi.com
Subject: Re: xfs_repair 3.1.2 crashing
Date: Fri, 11 Jun 2010 20:38:40 -0500
Message-ID: <4C12E520.6040008@sandeen.net>
In-Reply-To: <201006120138.22265@zmi.at>
Michael Monnerie wrote:
> On Donnerstag, 10. Juni 2010 Eric Sandeen wrote:
>> It'd be great to at least capture the issue by creating an
>> xfs_metadump image for analysis...
>
> I sent it to you in private.
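(For the archives: capturing such an image is cheap and safe to share. A sketch, with the device path taken from this thread and the invocation guarded so it is a no-op on a box without xfsprogs:)

```shell
#!/bin/sh
# Capture a metadata-only image of the filesystem for analysis.
# No file contents are included, and filenames are obfuscated
# by default. Device path is from this thread; adjust as needed.
DEV=/dev/swraid0/backup
IMG=backup.metadump

if command -v xfs_metadump >/dev/null 2>&1; then
    # -g prints progress while dumping.
    xfs_metadump -g "$DEV" "$IMG"
fi

# Whoever receives the image restores and inspects it with:
#   xfs_mdrestore backup.metadump backup.img
#   xfs_repair -n backup.img
```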
>
> But now I'm really puzzled: I bought two 2TB drives, set up LVM with
> XFS on them to get 4TB, and copied the contents from the server to
> this 4TB volume via rsync -aHAX. And now I have a broken XFS on these
> brand-new drives, without any crash, not even a reboot!
>
> I got this message after running "du -s" on the new disks:
> du: cannot access `samba/backup/uranus/WindowsImageBackup/uranus/Backup
> 2010-06-05 010014/852c2690-cf1a-11de-b09b-806e6f6e6963.vhd': Structure
> needs cleaning
dmesg would be the right thing to do here ...
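("Structure needs cleaning" is the kernel returning EUCLEAN, and the kernel side will have logged why the structure failed verification. One way to pull that out, a generic sketch with nothing thread-specific in it:)

```shell
# Show recent kernel log lines mentioning XFS; these accompany
# the EUCLEAN that du reported and name the failing structure.
# (|| true keeps the pipeline quiet when nothing matches.)
dmesg | grep -i xfs | tail -n 20 || true
```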
> So I umounted and xfs_repaired (v3.1.2) it:
> # xfs_repair -V
> xfs_repair version 3.1.2
Which kernel, again? The fork offset problems smell like something that's already been fixed.
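(For the record, the version information that helps triage these reports can be gathered like so; a sketch, guarded so it runs anywhere:)

```shell
# Kernel version: fork-offset fixes landed in specific kernel
# releases, so this is the first thing to check.
uname -r

# xfsprogs / xfs_repair version, if installed on this box.
if command -v xfs_repair >/dev/null 2>&1; then
    xfs_repair -V
fi
```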
-Eric
> # xfs_repair /dev/swraid0/backup
> Phase 1 - find and verify superblock...
> Phase 2 - using internal log
> - zero log...
> - scan filesystem freespace and inode maps...
> - found root inode chunk
> Phase 3 - for each AG...
> - scan and clear agi unlinked lists...
> - process known inodes and perform inode discovery...
> - agno = 0
> - agno = 1
> local inode 2195133988 attr too small (size = 3, min size = 4)
> bad attribute fork in inode 2195133988, clearing attr fork
> clearing inode 2195133988 attributes
> cleared inode 2195133988
> - agno = 2
> - agno = 3
> - agno = 4
> - agno = 5
> - agno = 6
> - agno = 7
> - process newly discovered inodes...
> Phase 4 - check for duplicate blocks...
> - setting up duplicate extent list...
> - check for inodes claiming duplicate blocks...
> - agno = 2
> - agno = 4
> - agno = 5
> - agno = 6
> - agno = 7
> - agno = 3
> - agno = 1
> - agno = 0
> data fork in inode 2195133988 claims metadata block 537122652
> xfs_repair: dinode.c:2101: process_inode_data_fork: Assertion `err == 0'
> failed.
> Aborted
>
> What's this now? I copied the error from the source via rsync? ;-)
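(The assert at dinode.c:2101 aborts repair before it can finish; a backtrace would pin down the failing path. A sketch, assuming gdb is available, with the device path taken from this thread:)

```shell
# Re-run xfs_repair under gdb and print a backtrace when the
# assertion fires. Harmless no-op if gdb is not installed.
DEV=/dev/swraid0/backup
if command -v gdb >/dev/null 2>&1; then
    gdb -batch -ex run -ex bt --args xfs_repair "$DEV"
fi
```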
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
Thread overview: 8 messages
2010-06-10 11:06 xfs_repair 3.1.2 crashing Michael Monnerie
2010-06-10 16:27 ` Eric Sandeen
2010-06-11 23:38 ` Michael Monnerie
2010-06-12 1:38 ` Eric Sandeen [this message]
2010-06-12 10:41 ` Michael Monnerie
2010-06-23 18:32 ` Michael Monnerie
2010-06-12 13:33 ` Michael Monnerie
2010-06-14 12:47 ` Michael Monnerie