From: Linda Walsh <xfs@tlinx.org>
To: Dave Chinner <david@fromorbit.com>
Cc: xfs-oss <xfs@oss.sgi.com>
Subject: Re: better perf and memory usage for xfs_fsr? Trivial patch against xfstools-3.16 included...
Date: Thu, 08 Nov 2012 23:10:26 -0800	[thread overview]
Message-ID: <509CAC62.3040508@tlinx.org> (raw)
In-Reply-To: <20121108213911.GS6434@dastard>



Dave Chinner wrote:
> On Thu, Nov 08, 2012 at 12:30:11PM -0800, Linda Walsh wrote:
>> FWIW, the benefit, probably comes from the read-file, as the written file
>> is written with DIRECT I/O and I can't see that it should make a difference
>> there.
> 
> Hmmm, so it does. I think that's probably the bug that needs to be
> fixed, not so much using posix_fadvise....
---
	Well... using direct I/O might be another way of fixing it,
but I notice that neither the reads nor the writes use an optimal
I/O size that takes RAID alignment into account.  The code aligns
the buffer in memory and aligns I/O to the 2-4k device sector size,
but it doesn't take into account minor things like a 64k stripe unit
across a 12-disk-wide data width (768k)... if you do direct I/O, you
might want to be sure it's RAID-aligned...
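
	For reference, the stripe geometry is easy to query.  A minimal
sketch (not from my patch; error handling omitted, and it assumes an
fd already open on a file in the filesystem in question):

    #include <sys/ioctl.h>
    #include <xfs/xfs.h>    /* XFS_IOC_FSGEOMETRY, struct xfs_fsop_geom */

    /* Full RAID stripe width in bytes, or 0 if unknown (sketch only). */
    static size_t stripe_width_bytes(int fd)
    {
            struct xfs_fsop_geom geo;

            if (ioctl(fd, XFS_IOC_FSGEOMETRY, &geo) < 0)
                    return 0;       /* caller falls back to its default */
            /* sunit/swidth are in fs blocks; 0 means no stripe geometry */
            return (size_t)geo.swidth * geo.blocksize;
    }

	Rounding the transfer size up to a multiple of that would keep
every direct I/O stripe-aligned.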


	Doing <64k at a time would cause heinous performance... while
using the SEQUENTIAL and read-once fadvise hints seems to cause a
noticeable smoothing of the I/O (no dips/valleys on the I/O charts),
though I don't know how much (if any) real performance increase (or
decrease) there was, as setting up exactly reproducible fragmentation
cases would be a pain...
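
	The hints themselves are one-liners on the read side; roughly
the shape of what I mean (a sketch, not the patch verbatim):

    #include <fcntl.h>

    /* Hint the kernel that src_fd will be read sequentially and only
     * once, so readahead ramps up and cached pages needn't linger.
     * (Sketch; POSIX_FADV_NOREUSE is a no-op on some Linux kernels.) */
    static void advise_read_once(int src_fd)
    {
            posix_fadvise(src_fd, 0, 0, POSIX_FADV_SEQUENTIAL);
            posix_fadvise(src_fd, 0, 0, POSIX_FADV_NOREUSE);
    }

	Where NOREUSE is a no-op, issuing POSIX_FADV_DONTNEED on each
chunk once it's been written out gives a similar read-once effect.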

	If you do LARGE I/Os on the reads, say 256MB at a time, I
don't think exact alignment will matter that much, but I noticed
speed improvements up to a 1GB buffer size for reads + writes in
'dd' using direct I/O.  (I couldn't test larger sizes, as the device
driver doesn't seem to allow anything > 2GB-8k, and this on a 64-bit
machine; at least I think it's the device driver, it hasn't been
important enough to chase down.)
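
	(By testing with 'dd' I mean something like
dd if=bigfile of=/dev/null bs=1G iflag=direct, with bigfile standing
in for whatever you're measuring.)  In code, big direct reads just
need a suitably aligned buffer; a sketch (error handling pared down;
the 4k alignment and 256MB size are illustrative, not from the patch):

    #define _GNU_SOURCE             /* for O_DIRECT */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Read path using one large, aligned, direct-I/O buffer (sketch). */
    static void direct_read_all(const char *path)
    {
            size_t bufsz = 256UL << 20;  /* 256MB; I saw gains up to ~1GB */
            void *buf;
            ssize_t n;
            int fd;

            /* O_DIRECT needs sector-aligned memory; 4k covers most devices */
            if (posix_memalign(&buf, 4096, bufsz) != 0)
                    return;
            fd = open(path, O_RDONLY | O_DIRECT);
            if (fd < 0) {
                    free(buf);
                    return;
            }
            while ((n = read(fd, buf, bufsz)) > 0)
                    ;   /* ... hand the buffer to the (aligned) write side ... */
            close(fd);
            free(buf);
    }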

	While such large buffers might be bad on a memory-tight
machine, on many 64-bit machines they're well worth the throughput
and the lower disk-transfer-time usage.  Meanwhile, that posix_fadvise
call added on the read side really does seem to be a benefit...
Try it, you'll like it!  ;-)  (Not to say it's the 'best' fix,
but it's a pretty low-cost one!)

> Cheers,
> 
> Dave.


Thread overview: 6+ messages
2012-11-08 12:51 better perf and memory usage for xfs_fsr? Trivial patch against xfstools-3.16 included Linda Walsh
2012-11-08 20:30 ` Linda Walsh
2012-11-08 21:39   ` Dave Chinner
2012-11-09  7:10     ` Linda Walsh [this message]
2012-11-09  8:16       ` Dave Chinner
2012-11-08 21:29 ` Dave Chinner
