From: Michael Weissenbacher <mw@dermichi.com>
To: xfs@oss.sgi.com
Subject: Re: Speeding up xfs_repair on filesystem with millions of inodes
Date: Wed, 28 Oct 2015 18:31:38 +0100 [thread overview]
Message-ID: <5631067A.4050306@dermichi.com> (raw)
In-Reply-To: <20151028001744.GO19199@dastard>
Hi Dave!
Everything is in good shape again. This time xfs_repair finished without
detecting any problems, so I suppose the only problem was that there
wasn't enough RAM.
---snip---
XFS_REPAIR Summary    Wed Oct 28 15:02:30 2015

Phase      Start           End             Duration
Phase 1:   10/27 23:19:34  10/27 23:19:34
Phase 2:   10/27 23:19:34  10/27 23:19:57  23 seconds
Phase 3:   10/27 23:19:57  10/28 04:10:50  4 hours, 50 minutes, 53 seconds
Phase 4:   10/28 04:10:50  10/28 09:03:00  4 hours, 52 minutes, 10 seconds
Phase 5:   10/28 09:03:00  10/28 09:03:16  16 seconds
Phase 6:   10/28 09:03:16  10/28 15:02:29  5 hours, 59 minutes, 13 seconds
Phase 7:   10/28 15:02:29  10/28 15:02:29

Total run time: 15 hours, 42 minutes, 55 seconds
---snip---
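As an aside for anyone hitting the same wall: before committing to a full multi-hour run, xfs_repair can be asked for its own memory estimate, and its memory use can be capped. A rough sketch (the device path is a placeholder, and the exact wording of the estimate output varies between xfsprogs versions):

```shell
# Dry run (-n makes no changes to the filesystem); with extra
# verbosity xfs_repair reports an estimate of the memory it needs
# for this particular filesystem. /dev/sdX is a placeholder.
xfs_repair -n -vv /dev/sdX 2>&1 | grep -i mem

# Optionally cap the memory xfs_repair uses for its caches,
# in megabytes (12288 MB = 12 GiB in this example):
xfs_repair -m 12288 /dev/sdX
```

Running the `-n -vv` pass first would have shown up front that 8 GB was going to be tight for this many inodes.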
On 28.10.2015 01:17, Dave Chinner wrote:
>
> Maybe you have a disk that is dying. Do your drives have TLER
> enabled on them?
>
Thanks for the hint. These are all enterprise-grade Nearline-SAS drives
(Seagate ST32000444SS) attached to a Dell PERC 6/i controller; I don't
think it is even possible to toggle TLER on them. They should all be in
good shape, since the controller automatically runs periodic patrol
reads.
On 28.10.2015 01:17, Dave Chinner wrote:
>
> If kswapd is doing all the work, then it's essentially got no memory
> available. I would add significantly more swap space as well (e.g.
> add swap files to the root filesystem - you can do this while repair
> is running, too). If there's sufficient swap space, then repair
> should use it fairly efficiently - it doesn't tend to thrash swap
> because most of it's memory usage is for information that is only
> accessed once per phase or is parked until it is needed in a later
> phase so it doesn't need to be read from disk again...
>
Good to know. However, the system was never low on swap: it has 40 GB of
swap available and never used more than 10 GB during the repair (with
8 GB RAM). On the second run, with 16 GB RAM, xfs_repair never touched
swap at all.
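For anyone following the thread later: adding swap on the fly, as Dave suggested, looks roughly like this (the path and size are examples only, not what was used here):

```shell
# Create a 16 GiB swap file on the root filesystem. dd writes real
# blocks; swap files must not contain holes, so avoid creating them
# sparsely. This can be done while xfs_repair is already running.
dd if=/dev/zero of=/swapfile.extra bs=1M count=16384
chmod 600 /swapfile.extra
mkswap /swapfile.extra
swapon /swapfile.extra

# Confirm the new swap area is active:
cat /proc/swaps
```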
On 28.10.2015 01:17, Dave Chinner wrote:
>
> Defaults, but it's really only a guideline for cache sizing. If
> repair needs more memory to store metadata it is validating (like
> the directory structure) then it will consume as much as it needs.
>
Will keep that in mind.
Thanks again for your help.
With kind regards,
Michael
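For the archives: if one did want to steer the cache sizing by hand rather than rely on those defaults, xfs_repair exposes a knob for it. A hypothetical invocation (device path and bucket count are made up for illustration):

```shell
# Override the default buffer cache hash size (number of buckets).
# Normally xfs_repair sizes this automatically from available RAM;
# setting it explicitly can help on metadata-heavy filesystems.
xfs_repair -o bhash=32768 /dev/sdX
```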
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
Thread overview: 5 messages
2015-10-27 12:10 Speeding up xfs_repair on filesystem with millions of inodes Michael Weissenbacher
2015-10-27 19:38 ` Dave Chinner
2015-10-27 22:51 ` Michael Weissenbacher
2015-10-28 0:17 ` Dave Chinner
2015-10-28 17:31 ` Michael Weissenbacher [this message]