From: "Arkadiusz Miśkiewicz" <arekm@maven.pl>
To: Dave Chinner <david@fromorbit.com>
Cc: xfs@oss.sgi.com
Subject: Re: quotacheck speed
Date: Mon, 13 Feb 2012 19:16:51 +0100
Message-ID: <201202131916.51209.arekm@maven.pl>
In-Reply-To: <20120212222159.GJ12836@dastard>
On Sunday 12 of February 2012, Dave Chinner wrote:
> On Sun, Feb 12, 2012 at 10:01:07PM +0100, Arkadiusz Miśkiewicz wrote:
> > Hi,
> >
> > When mounting an 800GB filesystem (after a repair, for example), quotacheck
> > here takes 10 minutes. That's quite a long time, and it adds to the total
> > filesystem downtime (repair + quotacheck).
>
> How long does a repair vs quotacheck of that same filesystem take?
> repair has to iterate the inodes 2-3 times, so if that is faster
> than quotacheck, then that is really important to know....
I don't have exact times, but judging from nagios and dmesg it took about:
repair ~20 minutes, quotacheck ~10 minutes (it's 800GB of maildirs).
>
> > I wonder if quotacheck could be improved somehow, or done differently,
> > e.g. run in parallel with normal fs usage (so there would be no
> > downtime)?
>
> quotacheck makes the assumption that it is run on an otherwise idle
> filesystem that nobody is accessing. Well, what it requires is that
> nobody is modifying it. What we could do is bring the filesystem up
> in a frozen state so that read-only access could be made but
> modifications are blocked until the quotacheck is completed.
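
The freeze mechanism Dave describes maps onto the generic FIFREEZE/FITHAW
ioctls from linux/fs.h (also reachable via xfs_freeze(8)). A minimal sketch of
driving them from userspace follows; the ioctl numbers are derived from the
kernel's _IOWR('X', 119/120, int) definitions, and actually freezing requires
root and a mounted filesystem, so the driver part only runs when given a
mount point argument:

```python
# Sketch, not the kernel's implementation: freeze a filesystem so reads
# still succeed but writers block, run a scan, then thaw.
import fcntl
import os
import sys

def _iowr(type_char, nr, size):
    """Replicate the kernel _IOWR() macro for the common x86 ioctl layout."""
    _IOC_WRITE, _IOC_READ = 1, 2
    return ((_IOC_READ | _IOC_WRITE) << 30) | (size << 16) | \
           (ord(type_char) << 8) | nr

FIFREEZE = _iowr('X', 119, 4)   # block all modifications, flush dirty data
FITHAW   = _iowr('X', 120, 4)   # allow modifications again

def frozen_scan(mountpoint, scan):
    """Freeze `mountpoint`, run `scan()` (e.g. a quotacheck-style pass),
    then thaw. Readers are never blocked; writers wait for the thaw."""
    fd = os.open(mountpoint, os.O_RDONLY | os.O_DIRECTORY)
    try:
        fcntl.ioctl(fd, FIFREEZE, 0)
        try:
            scan()
        finally:
            fcntl.ioctl(fd, FITHAW, 0)   # always thaw, even if scan fails
    finally:
        os.close(fd)

if __name__ == "__main__" and len(sys.argv) > 1:
    frozen_scan(sys.argv[1], lambda: print("scanning while frozen"))
```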
Read-only is better than no access at all. I was hoping there was a way to
recalculate quotacheck on the fly, taking into account any write accesses
that happen in the meantime.
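
Conceptually, such an on-the-fly quotacheck would combine a baseline inode
scan with a delta stream fed by concurrent writes. The toy model below (not
anything XFS implements) shows the shape of that bookkeeping, and its
docstring names the part that makes the real thing hard:

```python
from collections import defaultdict

class LiveQuota:
    """Toy model of an on-the-fly quotacheck: a background scan builds a
    per-uid baseline while in-flight writes feed deltas replayed on top.
    It deliberately ignores the hard part -- a write landing on an inode
    the scan has not reached yet would be double-counted, so a real
    implementation needs a scan cursor or per-inode flag to disambiguate."""

    def __init__(self):
        self.baseline = defaultdict(int)  # filled by the inode scan
        self.delta = defaultdict(int)     # filled by concurrent writes

    def scan_inode(self, uid, blocks):
        self.baseline[uid] += blocks

    def record_write(self, uid, blocks):
        self.delta[uid] += blocks         # may be negative for frees

    def usage(self, uid):
        return self.baseline[uid] + self.delta[uid]
```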
> Also, quotacheck uses the bulkstat code to iterate all the inodes
> quickly. Improvements in bulkstat speed will translate directly
> into faster quotachecks. quotacheck could probably drive bulkstat in
> a parallel manner to do the quotacheck faster, but that assumes that
> the underlying storage is not already seek bound. What is the
> utilisation of the underlying storage and CPU while quotacheck is
> running?
I'll try to gather more information, then.
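
For reference, a userspace analogue of "driving bulkstat in a parallel
manner" is sketched below: partition the namespace (top-level directories
here, standing in for XFS allocation groups) across worker threads and merge
the per-worker usage totals. As Dave notes, whether this helps at all depends
on the storage not already being seek-bound:

```python
# Sketch only: real quotacheck iterates inodes via bulkstat, not os.walk.
import os
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def usage_of_tree(root):
    """Count inodes per uid under `root` (lstat-based, so symlinks are
    counted as themselves rather than followed)."""
    counts = Counter()
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            counts[os.lstat(os.path.join(dirpath, name)).st_uid] += 1
    return counts

def parallel_usage(root, workers=4):
    """Scan each top-level subtree concurrently and merge the results."""
    entries = [e.path for e in os.scandir(root)]
    subtrees = [p for p in entries
                if os.path.isdir(p) and not os.path.islink(p)]
    total = Counter()
    # Account the top-level entries themselves (files plus the
    # subdirectory inodes); the workers only see what is inside them.
    for p in entries:
        total[os.lstat(p).st_uid] += 1
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for partial in pool.map(usage_of_tree, subtrees):
            total.update(partial)
    return total
```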
>
> Otherwise, bulkstat inode prefetching could be improved like
> xfs_repair was to look at inode chunk density and change IO patterns
> and to slice and dice large IO buffers into smaller inode buffers.
> We can actually do that efficiently now that we don't use the page
> cache for metadata caching. If repair is iterating inodes faster
> than bulkstat, then this optimisation will be the reason and having
> that data point is very important....
>
> Cheers,
>
> Dave.
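
As an aside on the "slice and dice large IO buffers" prefetching Dave
mentions: the heuristic can be pictured as deciding, per inode chunk, whether
to issue one large read and carve it up afterwards or to read only the
in-use runs. The sketch below is a toy version of that decision; the 50%
density threshold is made up for illustration, not taken from xfs_repair:

```python
def plan_read(present, inode_size=512, threshold=0.5):
    """Toy density heuristic: `present[i]` says whether inode i of the
    chunk is in use. Dense chunks get one large IO covering the whole
    chunk; sparse chunks get one smaller IO per run of in-use inodes.
    Returns a list of (offset, length) reads in bytes."""
    density = sum(present) / len(present)
    if density >= threshold:
        return [(0, len(present) * inode_size)]   # one large IO
    ios, start = [], None
    for i, used in enumerate(present):
        if used and start is None:
            start = i                              # run begins
        elif not used and start is not None:
            ios.append((start * inode_size, (i - start) * inode_size))
            start = None                           # run ends
    if start is not None:
        ios.append((start * inode_size, (len(present) - start) * inode_size))
    return ios
```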
--
Arkadiusz Miśkiewicz PLD/Linux Team
arekm / maven.pl http://ftp.pld-linux.org/