From: Dave Chinner <david@fromorbit.com>
To: "Arkadiusz Miśkiewicz" <arekm@maven.pl>
Cc: xfs@oss.sgi.com
Subject: Re: quotacheck speed
Date: Mon, 13 Feb 2012 09:21:59 +1100
Message-ID: <20120212222159.GJ12836@dastard>
In-Reply-To: <201202122201.07649.arekm@maven.pl>
On Sun, Feb 12, 2012 at 10:01:07PM +0100, Arkadiusz Miśkiewicz wrote:
>
> Hi,
>
> When mounting an 800GB filesystem (after repair, for example), quotacheck
> here takes 10 minutes. That's quite a long time, and it adds to the total
> filesystem downtime (repair + quotacheck).
How long does a repair vs quotacheck of that same filesystem take?
repair has to iterate the inodes 2-3 times, so if that is faster
than quotacheck, then that is really important to know....
> I wonder if quotacheck can be somehow improved, or done differently, like
> running it in parallel with normal fs usage (so there would be no downtime)?
quotacheck makes the assumption that it is run on an otherwise idle
filesystem that nobody is accessing. Well, what it requires is that
nobody is modifying it. What we could do is bring the filesystem up
in a frozen state so that read-only access could be made but
modifications are blocked until the quotacheck is completed.
Also, quotacheck uses the bulkstat code to iterate all the inodes
quickly. Improvements in bulkstat speed will translate directly
into faster quotachecks. quotacheck could probably drive bulkstat in
a parallel manner to do the quotacheck faster, but that assumes that
the underlying storage is not already seek bound. What is the
utilisation of the underlying storage and CPU while quotacheck is
running?
Otherwise, bulkstat inode prefetching could be improved the way
xfs_repair's was: look at inode chunk density, adjust the IO
patterns to match, and slice and dice large IO buffers into smaller
inode buffers.
We can actually do that efficiently now that we don't use the page
cache for metadata caching. If repair is iterating inodes faster
than bulkstat, then this optimisation will be the reason and having
that data point is very important....
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
Thread overview: 11+ messages
2012-02-12 21:01 quotacheck speed Arkadiusz Miśkiewicz
2012-02-12 22:21 ` Dave Chinner [this message]
2012-02-13 18:16 ` Arkadiusz Miśkiewicz
2012-02-13 23:13 ` Dave Chinner
2012-02-12 23:44 ` Christoph Hellwig
2012-02-13 0:17 ` Peter Grandi
2012-02-13 18:09 ` Arkadiusz Miśkiewicz
2012-02-13 23:42 ` Dave Chinner
2012-02-14 5:35 ` Arkadiusz Miśkiewicz
2012-02-15 10:39 ` Arkadiusz Miśkiewicz
2012-02-15 21:45 ` Dave Chinner