From: Dave Chinner <david@fromorbit.com>
To: Stan Hoeppner <stan@hardwarefreak.com>
Cc: xfs@oss.sgi.com
Subject: Re: deleting 2TB lots of files with delaylog: sync helps?
Date: Thu, 2 Sep 2010 17:01:08 +1000 [thread overview]
Message-ID: <20100902070108.GY705@dastard> (raw)
In-Reply-To: <4C7F3823.1040404@hardwarefreak.com>
On Thu, Sep 02, 2010 at 12:37:39AM -0500, Stan Hoeppner wrote:
> Dave Chinner put forth on 9/1/2010 1:44 AM:
>
> > 4p VM w/ 2GB RAM with the
> > disk image on a hw-RAID1 device make up of 2x500Gb SATA drives (create
> > and remove 800k files):
>
> > FSUse% Count Size Files/sec App Overhead
> > 2 800000 0 54517.1 6465501
> > $
> >
> > The same test run on a 8p VM w/ 16Gb RAM, with the disk image hosted
> > on a 12x2TB SAS dm RAID-0 array:
> >
> > FSUse% Count Size Files/sec App Overhead
> > 2 800000 0 51409.5 6186336
>
> Is this a single socket quad core Intel machine with hyperthreading
> enabled?
No, it's a dual socket (8c/16t) server.
> That would fully explain the results above. Looks like you
> ran out of memory bandwidth in the 4 "processor" case. Adding phantom
> CPUs merely made them churn without additional results.
No, that's definitely not the case. Here's a different kernel in the
same 8p VM with the 12x2TB SAS storage. 4 threads, mount options
"logbsize=262144":
FSUse% Count Size Files/sec App Overhead
0 800000 0 39554.2 7590355
4 threads with mount options "logbsize=262144,delaylog"
FSUse% Count Size Files/sec App Overhead
0 800000 0 67269.7 5697246
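For reference, delayed logging is just a mount option in this era of
kernels; an illustrative invocation (the device and mountpoint here are
placeholders, not the ones used in these runs):

```shell
# Placeholders only: substitute your own device and mountpoint.
# Baseline mount, 256k log buffers:
mount -o logbsize=262144 /dev/sdX /mnt/scratch

# Same filesystem with delayed logging enabled as well:
umount /mnt/scratch
mount -o logbsize=262144,delaylog /dev/sdX /mnt/scratch
```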
http://userweb.kernel.org/~dgc/shrinker-2.6.36/fs_mark-2.6.36-rc3-4-thread-delaylog-comparison.png
Top chart is CPU usage, 2nd chart is disk iops (purple is write),
third chart is disk bandwidth (purple is write), and the bottom
chart is create rate (yellow) and unlink rate (green).
From left to right, the first IO peak (~1000 iops, 250MB/s) is the
mkfs.xfs. The next sustained load is the first fs_mark workload
without delayed logging - 2500 iops and 500MB/s - and the second is
the same workload again with delayed logging enabled (zero IO,
roughly 400% CPU utilisation and significantly higher create/unlink
rates).
I'll let you decide for yourself which of the two IO patterns is
sustainable on a single SATA disk. ;)
> > FSUse% Count Size Files/sec App Overhead
> > 2 800000 0 15118.3 7524424
> >
> > delayed logging is 3.6x faster on the same filesystem. It went from
> > 15k files/s at ~120% CPU utilisation, to 54k files/s at 400% CPU
> > utilisation. IOWs, it is _clearly_ CPU bound with delayed logging as
> > there is no idle CPU left in the VM at all.
>
> Without seeing all of what you have available, going on strictly the
> data above, I disagree. I'd say your bottleneck is your memory/IPC
> bandwidth.
You are free to choose to believe I don't know what I'm doing - if you
can get XFS to perform better, then I'll happily take the patches ;)
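For what it's worth, the speedup figures quoted in this thread follow
directly from the Files/sec columns of the fs_mark output; a trivial
sanity check (plain Python, numbers copied from the results above):

```python
# Files/sec figures taken from the fs_mark output in this thread.
no_delaylog = 15118.3   # same filesystem, delayed logging off
delaylog_4p = 54517.1   # 4p VM, delayed logging on

# The "3.6x faster" claim:
print(f"{delaylog_4p / no_delaylog:.1f}x")  # -> 3.6x

# The 4-thread logbsize=262144 comparison, 2.6.36-rc3 kernel:
print(f"{67269.7 / 39554.2:.1f}x")          # -> 1.7x from delaylog alone
```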
> If my guess about your platform is correct, try testing on a dual socket
> quad core Opteron with quad memory channels. Test with 2, 4, 6, and 8
> fs_mark threads.
Did that a long time ago - it's in the archives a few months back.
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs