From: Dave Chinner <david@fromorbit.com>
To: Michael Spiegle <mike@nauticaltech.com>
Cc: xfs@oss.sgi.com
Subject: Re: 1B files, slow file creation, only AG0 used
Date: Tue, 13 Mar 2012 11:08:20 +1100 [thread overview]
Message-ID: <20120313000820.GC5091@dastard> (raw)
In-Reply-To: <CAEm1Pvm4+K3dBm+CQad3O2LLYBi76trYx9nBazwv5MNC24sZBg@mail.gmail.com>
On Mon, Mar 12, 2012 at 02:54:20PM -0700, Michael Spiegle wrote:
> I believe we figured out what was going wrong:
> 1) You definitely need inode64 as a mount option
> 2) It seems that the AG metadata was being cached. We had to unmount
> the system and remount it to get updated counts on per-AG usage.
If you were looking at it with xfs_db, then yes, that is what will
happen. Use "echo 1 > /proc/sys/vm/drop_caches" to get the cached
metadata dropped.
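As a sketch, dropping caches and re-reading the per-AG counts with xfs_db
might look like the function below. The device path and AG count are
placeholders, not taken from this thread; xfs_info on the mountpoint
reports the real agcount. Needs root.

```shell
# Sketch: drop cached metadata, then re-read the per-AG allocated
# inode counts with xfs_db in read-only mode. Device path and AG
# count are assumptions -- adjust for the filesystem in question.
refresh_ag_counts() {
    local dev="$1" nags="$2"
    sync                                  # flush dirty data first
    echo 1 > /proc/sys/vm/drop_caches     # drop clean page/metadata caches
    local ag
    for ((ag = 0; ag < nags; ag++)); do
        # "agi" selects that AG's inode header; "count" is allocated inodes
        xfs_db -r -c "agi $ag" -c "p count" "$dev"
    done
}
# Usage (as root): refresh_ag_counts /dev/sdb1 4
```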
> For the moment, I've written a script to copy/rename/delete our files
> so that they are gradually migrated to new AGs. FWIW, I noticed that
> this operation is significantly faster on an EL6.2-based kernel
> (2.6.32) compared to EL5 (2.6.18). I'm also using the 'delaylog'
> mount option which probably helps a bit. I still have a few other
> curiosities about this particular issue though:
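The copy/rename/delete migration described above can be sketched as a tiny
shell function (the names and staging-directory approach are assumptions,
not the actual script). The idea: with inode64, a freshly created
directory is placed in a new AG, so a file copied into it gets an inode
there, and renaming it back over the original keeps the pathname while
freeing the old AG0 inode.

```shell
# Hedged sketch of a copy/rename/delete migration; names are
# assumptions. The staging directory should itself be newly created
# so its inode (and files created in it) land outside AG0.
migrate_file() {
    local src="$1" stagedir="$2"
    local tmp="$stagedir/$(basename "$src")"
    cp -p "$src" "$tmp" &&   # copy allocates a new inode in the new AG
    mv "$tmp" "$src"         # rename replaces (and deletes) the original
}
```

A real run would also verify the copy before replacing the original.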
>
> On Sun, Mar 11, 2012 at 5:56 PM, Dave Chinner <david@fromorbit.com> wrote:
> >
> > Entirely normal. Some operations require IO to complete (e.g.
> > reading directory blocks to find where to insert the new entry),
> > while adding the first file to a directory generally requires zero
> > IO. You're seeing the difference between cold cache and hot cache
> > performance.
> >
>
> In this situation, any files written to the same directory exhibited
> this issue regardless of cache state. For example:
>
> Takes 300ms to complete:
> touch tmp/0
>
> Takes 600ms to complete:
> touch tmp/0 tmp/1
>
> Takes 1200ms to complete:
> touch tmp/0 tmp/1 tmp/2 tmp/3
>
> I would expect the directory to be cached after the first file is
> created. I don't understand why all subsequent writes were affected
> as well.
I don't have enough information to help you. I don't know what
hardware you are running on, how big the directory is, what the
layout of the directory is, etc. The "needs to do IO" was simply a
SWAG....
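One quick way to narrow it down: split the batched touch timings quoted
above out per file, to see whether the per-file cost really stays constant
as the directory grows. The tmp/ directory name is just the placeholder
from the quoted test.

```shell
# Sketch: time each file creation individually rather than in one
# batch. Uses GNU date's nanosecond format, so Linux-specific.
mkdir -p tmp
for i in 0 1 2 3; do
    start=$(date +%s%N)                       # ns since the epoch
    touch "tmp/$i"
    end=$(date +%s%N)
    echo "tmp/$i: $(( (end - start) / 1000000 )) ms"
done
```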
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
Thread overview: 8+ messages
2012-03-10 2:13 1B files, slow file creation, only AG0 used Michael Spiegle
2012-03-10 4:59 ` Eric Sandeen
2012-03-10 5:25 ` Michael Spiegle
2012-03-12 2:59 ` Stan Hoeppner
2012-03-12 22:11 ` Michael Spiegle
2012-03-12 0:56 ` Dave Chinner
2012-03-12 21:54 ` Michael Spiegle
2012-03-13 0:08 ` Dave Chinner [this message]