public inbox for linux-xfs@vger.kernel.org
From: Dave Chinner <david@fromorbit.com>
To: Michael Monnerie <michael.monnerie@is.it-management.at>
Cc: xfs@oss.sgi.com
Subject: Re: deleting 2TB lots of files with delaylog: sync helps?
Date: Wed, 1 Sep 2010 13:19:54 +1000	[thread overview]
Message-ID: <20100901031954.GP705@dastard> (raw)
In-Reply-To: <201009010222.57350@zmi.at>

On Wed, Sep 01, 2010 at 02:22:31AM +0200, Michael Monnerie wrote:
> On Wednesday, 1 September 2010, Dave Chinner wrote:
> > You're probably getting RMW cycles on inode writeback. I've been
> > noticing this lately with my benchmarking - the VM is being _very
> > aggressive_ reclaiming page cache pages vs inode caches and as a
> > result the inode buffers used for IO are being reclaimed between the
> > time it takes to create the inodes and when they are written back.
> > Hence you get lots of reads occurring during inode writeback.
> > 
> > By issuing a sync, you clear out all the inode writeback and all the
> > RMW cycles go away. As a result, there is more disk throughput
> > available for the unlink processes.  There is a good chance this is
> > the case as the number of reads after the sync drop by an order of
> > magnitude...
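As an illustration (not part of the original mail; the directory and file
counts are throwaway placeholders, not the real 2TB file set), the effect
of flushing inode writeback before the delete can be timed like this:

```shell
# Hedged sketch: sync before a bulk unlink so dirty inode clusters are
# written back first, instead of being re-read (RMW) during the delete.
set -e
dir=$(mktemp -d)                                  # placeholder scratch dir
for i in $(seq 1 1000); do : > "$dir/f$i"; done   # stand-in for the workload
sync                    # flush dirty inodes; RMW reads during unlink drop
rm -rf "$dir"           # the bulk delete; compare timing with/without sync
```

On a real workload, watching the reads/s column of `iostat -x` across the
sync should show the order-of-magnitude drop described above.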
> 
> Nice explanation.
>  
> > > Now it can be that the sync just causes more writes and stalls
> > > reads so overall it's slower, but I'm wondering why none of the
> > > devices says "100% util", which should be the case on deletes? Or
> > > is this again the "mistake" of the utilization calculation that
> > > writes do not really show up there?
> > 
> > You're probably CPU bound, not IO bound.
> 
> This is a hexa-core AMD Phenom(tm) II X6 1090T Processor with up to 
> 3.2GHz per core, so that shouldn't be the bottleneck

I'm seeing an 8-core/16-thread server become CPU bound with multithreaded
unlink workloads using delaylog, so it's entirely possible that all
CPU cores are fully utilised on your machine.
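For reference (again not from the original thread, and the directory is a
placeholder), a multithreaded unlink workload of the kind described can be
approximated with parallel rm workers:

```shell
# Hedged sketch of a multithreaded unlink workload (8 parallel workers).
set -e
dir=$(mktemp -d)                                  # throwaway scratch dir
for i in $(seq 1 2000); do : > "$dir/f$i"; done
# delete with 8 parallel rm processes, 64 files per invocation;
# watch per-core usage with mpstat 1 (or top) while this runs
find "$dir" -type f -print0 | xargs -0 -P 8 -n 64 rm -f
rmdir "$dir"
```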

> - or is there only one core used? 
> I think I read somewhere that each AG should get a core or so...

If all the files are in one AG, then it will serialise on the AGI
header and won't use much more than one CPU.
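A sketch of how one would check and change this (not from the original
mail; the device and mount point are placeholders, and mkfs destroys the
filesystem's contents, so treat this as configuration notes only):

```shell
# How many allocation groups does the filesystem have now?
xfs_info /mnt/scratch | grep agcount

# Recreating with more AGs gives unlinks more AGI headers to spread
# across -- DESTRUCTIVE, placeholder device:
mkfs.xfs -f -d agcount=16 /dev/sdX

# Files created in different top-level directories tend to be placed in
# different AGs, so splitting the file set across directories also helps.
```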

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

Thread overview: 17+ messages
2010-08-31 23:30 deleting 2TB lots of files with delaylog: sync helps? Michael Monnerie
2010-09-01  0:06 ` Dave Chinner
2010-09-01  0:22   ` Michael Monnerie
2010-09-01  3:19     ` Dave Chinner [this message]
2010-09-01  4:42       ` Stan Hoeppner
2010-09-01  6:44         ` Dave Chinner
2010-09-02  5:37           ` Stan Hoeppner
2010-09-02  7:01             ` Dave Chinner
2010-09-02  8:41               ` Stan Hoeppner
2010-09-02 11:29                 ` Dave Chinner
2010-09-02 14:57                   ` Stan Hoeppner
2010-09-01  3:01   ` Stan Hoeppner
2010-09-01  3:41     ` Dave Chinner
2010-09-01  7:45       ` Michael Monnerie
2010-09-02  1:17         ` Dave Chinner
2010-09-02  2:15           ` Michael Monnerie
2010-09-02  7:51           ` Stan Hoeppner
