From: Eric Sandeen <sandeen@sandeen.net>
To: Alberto Accomazzi <aaccomazzi@gmail.com>
Cc: xfs@oss.sgi.com
Subject: Re: help with xfs_repair on 10TB fs
Date: Sat, 17 Jan 2009 12:50:29 -0600 [thread overview]
Message-ID: <49722875.90202@sandeen.net> (raw)
In-Reply-To: <adcf4ef70901171042p31054ae0rb56819fce7b6f47e@mail.gmail.com>
Alberto Accomazzi wrote:
> On Sat, Jan 17, 2009 at 12:33 PM, Eric Sandeen <sandeen@sandeen.net> wrote:
>
>> Alberto Accomazzi wrote:
>>> I need some help with figuring out how to repair a large XFS
>>> filesystem (10TB of data, 100+ million files). xfs_repair seems to
>>> have crapped out before finishing the job and now I'm not sure how to
>>> proceed.
>> How did it "crap out"?
>
>
> Well, in the way I described below, namely it ran for several hours and then
> died without completing. As you can see from the log (which captured both
> stdout and stderr) there's nothing that indicates what terminated the
> program. And it's definitely not running now.
>
>
>> the src.rpm from
>>
>> http://kojipkgs.fedoraproject.org/packages/xfsprogs/2.10.2/3.fc11/src/
>>
>
> Ok, I guess it's worth giving it a shot. I assume I don't need to worry
> about kernel modules because xfsprogs doesn't depend on them, right?
Right.
>
>>> After bringing the system back, a mount of the fs reported problems:
>>>
>>> Starting XFS recovery on filesystem: sdb1 (logdev: internal)
>>> Filesystem "sdb1": XFS internal error xfs_btree_check_sblock at line 334
>>> of file /home/buildsvn/rpmbuild/BUILD/xfs-kmod-0.4/_kmod_build_/xfs_btree.c.
>>> Caller 0xffffffff882fa8d2
>> So log replay is failing now, which indicates an unclean shutdown.
>> Something else must have happened between the xfs_repair and this mount
>> instance?
>>
>
> Sorry, I wasn't clear: there was indeed an unclean shutdown (actually a
> couple), after which the mount would not succeed presumably because of the
> dirty log. I was able to mount the system read-only and take enough of a
> look to see that there was significant corruption of the data. Running
> xfs_repair -L at that point seemed the only option available. But do let me
> know if this line of thinking is incorrect.
Yes, if you have a dirty log that won't replay, zapping the log via
repair is about the only option. I wonder, though, what the first hint
of trouble was here, and what led to all this misery... :)
-Eric
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
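[Archive note: for readers following this thread, the recovery sequence
discussed above can be sketched as below. The device name /dev/sdb1 and
the mountpoint /data are placeholders for the poster's actual setup, and
xfs_repair -L discards any uncommitted transactions in the log, so it is
a last resort, not a routine step.]

```shell
# Sketch of the recovery steps described in this thread.
# /dev/sdb1 and /data are placeholder names; adjust for your system.

# 1. A normal mount triggers log replay; in this thread it failed with
#    an "XFS internal error xfs_btree_check_sblock".
mount /dev/sdb1 /data

# 2. Mount read-only and skip log replay, to assess the damage without
#    modifying the filesystem.
mount -o ro,norecovery /dev/sdb1 /data
# ... inspect the data ...
umount /data

# 3. Last resort when the dirty log will not replay: zero the log and
#    repair. In-flight transactions at the time of the crash are lost.
xfs_repair -L /dev/sdb1
```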
Thread overview: 9+ messages
2009-01-17 17:13 help with xfs_repair on 10TB fs Alberto Accomazzi
2009-01-17 17:33 ` Eric Sandeen
2009-01-17 18:42 ` Alberto Accomazzi
2009-01-17 18:50 ` Eric Sandeen [this message]
2009-01-17 23:14 ` Alberto Accomazzi
2009-01-17 23:49 ` Eric Sandeen
2009-01-18 20:34 ` Alberto Accomazzi
2009-01-17 17:35 ` Tru Huynh
2009-01-17 18:45 ` Alberto Accomazzi