public inbox for linux-xfs@vger.kernel.org
From: Russell Cattelan <cattelan@thebarn.com>
To: mill / in-medias-res <mill@in-medias-res.com>
Cc: kirschbaum@in-medias-res.com, xfs@oss.sgi.com
Subject: Re: xfs_repair breaks; xfs_metadump hangs
Date: Fri, 06 Nov 2009 16:42:51 -0600	[thread overview]
Message-ID: <4AF4A66B.8090906@digitalelves.com> (raw)
In-Reply-To: <20091104152022.GA21347@mytux.intra.in-medias-res.com>

mill / in-medias-res wrote:
> Hello XFS-Community,
>
> I am having real trouble restoring/repairing my two XFS partitions. These
> partitions are on a RAID-5 array which "was broken". The first xfs_repair run
> on /dev/sdc1 restored 80 GB out of roughly 300-400 GB. The problem was that 99.9%
> of the million files ended up in lost+found.
>
> Because I was more interested in restoring /dev/sdc2, I forgot about sdc1
> and ran xfs_repair on the other partition:
>
> cmd: xfs_repair -t 1 -P /dev/sdc2
> [...]
> corrupt inode 3256930831 ((a)extents = 1).  This is a bug.
> Please capture the filesystem metadata with xfs_metadump and
> report it to xfs@oss.sgi.com.
> cache_node_purge: refcount was 1, not zero (node=0x377d0008)
> fatal error -- couldn't map inode 3256930831, err = 117
>   
Hmm, interesting.
Can you go into xfs_db, print out the bad inode, and send the output to us?
I'm guessing the extents are corrupted somehow.
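For reference, dumping an inode non-interactively might look something like this (a sketch only: the device and inode number are taken from the report above, and the filesystem must be unmounted):

```shell
# Open the filesystem read-only in the XFS debugger, select the suspect
# inode, and print its core fields and extent records. The -c option runs
# each command in turn instead of dropping into the interactive prompt.
xfs_db -r -c "inode 3256930831" -c "print" /dev/sdc2
```

The interesting parts of the output are the `core.*` fields and the extent list, which should show whether the extent count or mappings are garbage.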

One option is to flag the inode as deleted, which will cause repair to
toss it and hopefully clean up the mess.

Here is a write-up of how to do that:
http://jijo.free.net.ph/19
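A rough sketch of that approach (my summary, not a verbatim copy of the write-up): open the debugger in expert (writable) mode, zero the inode's mode field so repair treats it as junk, then re-run repair. Only do this on an unmounted filesystem, and ideally after saving a metadata image first.

```shell
# Expert mode (-x) allows writes. Zeroing core.mode makes the inode look
# invalid, so a subsequent xfs_repair should reclaim it instead of aborting.
xfs_db -x -c "inode 3256930831" -c "write core.mode 0" /dev/sdc2

# Re-run repair; it should now clean up the flagged inode.
xfs_repair /dev/sdc2
```

This throws the file behind that inode away, so it is a last resort for getting repair past the crash, not a recovery of the data in it.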

> time: 67,27s user 10,09s system 10% cpu 12:05,31 total
>
> I tried to run xfs_metadump several times and it hangs every time at this position:
> xfs_metadump  -g /dev/sdc2 metadump-sdc2-2
> Copied 1411840 of 4835520 inodes (0 of 3 AGs)
>
> It has been running on the same inode for 2 days now and xfs_db consumes 99% CPU.
> Should I keep waiting?
>
> Versions:
> dpkg -l |grep xfs
> ii  xfsdump   3.0.2~bpo50+1       Administrative utilities for the XFS filesys
> ii  xfsprogs  3.0.4~bpo50+1       Utilities for managing the XFS filesystem
> Distribution: Debian lenny with xfsprogs, xfsdump backport from unstable.
>
> The xfs_repair from the stock Debian lenny version also crashes at inode 3256930831.
>
> Best Regards,
> Maximilian Mill
>
> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs
>
>   
Thread overview: 10+ messages
2009-11-04 15:20 xfs_repair breaks; xfs_metadump hangs mill / in-medias-res
2009-11-05  0:59 ` Michael Monnerie
2009-11-06  2:27 ` Robert Brockway
2009-11-06  8:57   ` mill / in-medias-res
2009-11-06  9:09 ` mill / in-medias-res
2009-11-06 22:42 ` Russell Cattelan [this message]
2009-11-09  9:19   ` mill / in-medias-res
2009-11-09  9:51     ` mill / in-medias-res
2009-11-10  1:25       ` Russell Cattelan
  -- strict thread matches above, loose matches on Subject: below --
2009-11-05 11:22 mill / in-medias-res
