From: Tomek Kruszona <bloodyscarion@gmail.com>
To: xfs@oss.sgi.com
Subject: Re: xfs_repair stops on "traversing filesystem..."
Date: Fri, 10 Jul 2009 23:02:33 +0200 [thread overview]
Message-ID: <4A57AC69.7070502@gmail.com> (raw)
In-Reply-To: <4A57A1C4.40004@sandeen.net>
Eric Sandeen wrote:
> This looks like some of the caching that xfs_repair does is mis-sized,
> and it gets stuck when it's unable to find a slot for a new node to
> cache. IMHO that's still a bug that I'd like to work out. If it gets
> stuck this way, it'd probably be better to exit, and suggest a larger
> hash size.
>
> But anyway, I forced a bigger hash size:
>
> xfs_repair -P -o bhash=1024 <blah>
>
> and it did complete. 1024 is probably over the top, but it worked for
> me on a 4G machine w/ some swap.
:D
Is it safe to run xfs_repair without these options now that the FS has
been repaired? Or should I use them every time I run into a similar
problem?
> I'd strongly suggest doing a non-obfuscated xfs_metadump, do
> xfs_mdrestore of that to some temp.img, run xfs_repair <blah> on that
> temp.img, mount it, and see what you're left with; that way you'll know
> what you're getting into w/ repair.
> I ended up w/ about 5000 files in lost+found just FWIW...
That's not a problem. This filesystem holds a lot of small files: image
sequences used for video compositing. It's a backup machine, so if
they're gone from the filesystem they will be copied back from the
original machine. No stress :)
I'm running xfs_repair on the image now. It's in Phase 4, and so far the
list of files looks very similar to the one I saw when running
xfs_repair without the options you suggested.
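For anyone following the thread, the dump-and-test workflow Eric
describes looks roughly like this (device and image paths below are
placeholders, not the actual ones from this system):

```shell
# Take a metadata-only dump of the damaged filesystem.
# -o leaves filenames unobfuscated so lost+found contents are readable.
xfs_metadump -o /dev/sdX1 fs.metadump

# Restore the dump into a sparse image file.
xfs_mdrestore fs.metadump temp.img

# Repair the image, not the real device, with the larger buffer hash.
xfs_repair -P -o bhash=1024 temp.img

# Mount the repaired image read-only and see what survived.
mount -o loop,ro temp.img /mnt/test
ls /mnt/test/lost+found | wc -l
```

That way the real filesystem is untouched until you know what repair
will do to it.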
> Out of curiosity, do you know how the fs was damaged?
I'm not sure; I see a few possibilities. I was playing with the write
cache settings on the RAID controller while the FS was mounted and in
use, so maybe something went wrong then. The second possible cause is
the power loss we had recently, which took this machine down :/
The last one is that I've had some problems with XFS filesystems on
LVM2: in kernels < 2.6.30, barriers were automatically disabled when the
underlying device was a dm device, so with my RAID controllers I should
have kept the write cache disabled. After upgrading to 2.6.30 the
message about disabled barriers disappeared, and it was safe to enable
the write cache again.
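As a quick sanity check (a generic example, not a command from this
thread), the kernel logs whether XFS had to turn barriers off at mount
time, so you can grep for it:

```shell
# Older kernels print a message like
# "Filesystem ...: Disabling barriers, not supported by the underlying device"
# when the dm stack cannot pass write barriers through.
dmesg | grep -i barrier
```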
Somewhere in the meantime I wanted to check that everything was OK with
the filesystem, and that's when the trouble started: I couldn't finish
xfs_repair. IIRC the power loss came after my troubles with xfs_repair,
so the filesystem wasn't totally clean when the power failed. Maybe
that's the reason for this mess ;)
Best regards
Tomasz Kruszona
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs