From: David Chatterton <chatz@melbourne.sgi.com>
To: jamesb@loreland.org
Cc: Eric Sandeen <sandeen@sandeen.net>,
Klaus Strebel <klaus.strebel@gmx.net>,
xfs@oss.sgi.com
Subject: Re: problem with latest xfsprogs progress code
Date: Thu, 18 Jan 2007 09:51:26 +1100
Message-ID: <45AEA86E.1060003@melbourne.sgi.com>
In-Reply-To: <45AE46C2.6090005@sandeen.net>
Eric Sandeen wrote:
> Klaus Strebel wrote:
>
>>>> Running 2.8.18 xfs_repair on a largeish (65TB, ~70M inodes) filesystem on
>>>> an x86_64 machine gives the following "progress" output:
>>>>
>>>> 12:15:36: process known inodes and inode discovery - 1461632 of 0 inodes done
>>>> 12:15:36: Phase 3: elapsed time 14 minutes, 32 seconds - processed 100571 inodes per minute
>>>> 12:15:36: Phase 3: 0% done - estimated remaining time 3364 weeks, 3 days, 7 hours, 30 minutes, 45 seconds
>>>>
>>>> Is this a known bug?
>> Hi James,
>>
>> why do you think that this is a bug? You have an almost infinitely large
>> filesystem, so the file-system check will also run for an almost
>> infinitely long time ;-).
>>
>> You see, not all that's possible is really desirable.
>
> Well, while 65TB is impressive*, and repairing it quickly is indeed a
> challenge, it probably still should not take 64+ years. ;-)
>
> Sounds like something is in fact going wrong.
>
> -Eric
>
> *it amuses me to see xfs users refer to nearly 100T as largeISH; clearly
> you all do not suffer from lowered expectations. :)
>
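A plausible reading of the quoted output (a hypothetical sketch, not the actual xfsprogs progress code): if the inode total reaches the reporter as zero, the percentage pins at 0% and any remaining-time estimate derived from the total turns into nonsense:

```python
# Hypothetical sketch (NOT the actual xfsprogs code) of how a zero
# inode total breaks a progress report like the one quoted above.

def report(done, total, elapsed_secs):
    if total == 0:
        # The quoted run printed "1461632 of 0 inodes done", so the
        # total apparently never got filled in; an unguarded reporter
        # would show 0% done and a wildly wrong ETA instead.
        return "total unknown - cannot estimate remaining time"
    pct = done * 100 // total
    rate = done / elapsed_secs              # inodes per second
    remaining = (total - done) / rate       # seconds left
    return f"{pct}% done - about {remaining / 60:.0f} minutes remaining"

print(report(1461632, 0, 872))   # the failing case: 14 min 32 s elapsed
print(report(50, 100, 60))       # a sane case for comparison
```

If an unguarded version computes `total - done` in unsigned arithmetic with done greater than total, the wraparound would yield an ETA on the order of thousands of weeks, which is the shape of the 3364-week estimate above.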
Barry is at linux.conf.au this week; he knows this code better than
anyone else.
Phase 3 is scanning the inodes in each allocation group, building up a
map of filesystem blocks that are marked as used.
See http://oss.sgi.com/projects/xfs/training/xfs_slides_11_repair.pdf
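Conceptually (a toy illustration, not xfs_repair's actual data structures), that phase 3 scan walks each allocation group's inodes and marks the blocks their extents cover as in-use, noting any block claimed twice:

```python
# Toy illustration (not xfs_repair's real code) of the phase 3 idea:
# walk the inodes and build a map of blocks marked as used.

def build_used_block_map(inodes):
    """inodes: list of extent lists, each extent = (start_block, length)."""
    used = set()
    duplicates = set()
    for extents in inodes:
        for start, length in extents:
            for block in range(start, start + length):
                if block in used:
                    # A block claimed by two extents - needs repair.
                    duplicates.add(block)
                used.add(block)
    return used, duplicates

# Three toy inodes; the third overlaps the second at block 5.
used, dups = build_used_block_map([[(0, 4)], [(4, 2)], [(5, 1)]])
```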
Scanning an AG and its inodes should not be taking this long.
Are you under memory pressure, with the machine just swapping to death?
Are you seeing I/O errors on the storage?
Is the storage using AVT mode, with the LUNs flipping between controllers?
Thanks,
David
--
David Chatterton
XFS Engineering Manager
SGI Australia
Thread overview: 5+ messages
2007-01-16 16:30 problem with latest xfsprogs progress code James Braid
2007-01-17 11:00 ` James Braid
2007-01-17 14:08 ` Klaus Strebel
2007-01-17 15:54 ` Eric Sandeen
2007-01-17 22:51 ` David Chatterton [this message]