From: Dave Chinner <david@fromorbit.com>
To: rkj@softhome.net
Cc: xfs@oss.sgi.com
Subject: Re: Looking for Linux XFS file system performance tuning tips for LSI9271-8i + 8 SSD's RAID0
Date: Mon, 4 Feb 2013 23:52:34 +1100 [thread overview]
Message-ID: <20130204125234.GK2667@dastard> (raw)
In-Reply-To: <courier.510ECA60.00003A99@softhome.net>
On Sun, Feb 03, 2013 at 01:36:48PM -0700, rkj@softhome.net wrote:
>
> I am working with hardware RAID0 using LSI 9271-8i + 8 SSD's. I am
> using CentOS 6.3 on a Supermicro X9SAE-V motherboard with Intel Xeon
> E3-1275V2 CPU and 32GB 1600 MHz ECC RAM. My application is fast
> sensor data store and forward with UDP based file transfer using
> multiple 10GbE interfaces. So I do not have any concurrent loading,
> I am mainly interested in optimizing sequential read/write
> performance.
>
> Raw performance as measured by Gnome Disk Utility is around 4GB/s
> sustained read/write.
I don't know what that does - probably lots of concurrent IO to drive
deep queue depths to get the absolute maximum possible from the
device....
> With XFS buffer IO, my sequential writes max
> out at about 2.5 GB/s.
CPU bound on single threaded IO, I'd guess.
> With Direct IO, the sequential writes are
> around 3.5 GB/s but I noticed a drop-off in sequential reads for
> smaller record sizes.
Almost certainly IO latency bound on single threaded IO.
> I am trying to get the XFS sequential
> read/writes as close to 4 GB/s as possible.
Time to go look up how to use async IO or multithreaded direct
IO.
FWIW, the best benchmark is your application - none of what you've
talked about even comes close to modelling the data flow a
network-disk-network store-and-forward system needs, and at data
rates of 4GB/s you are going to be benchmarking the network devices
flowing data at the same time you do disk IO....
> I have documented all of the various mkfs.xfs options I have tried,
> fstab mount options, iozone results, etc. in this forum thread:
Configuration changes won't make any difference to data IO latency
or CPU usage. IOWs, SSDs don't magically solve the problem of having
to optimise the way the applications/benchmarks do IO, and so no
amount of tweaking the filesystem will get you to your goal if the
application is deficient...
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
Thread overview: 3+ messages
2013-02-03 20:36 Looking for Linux XFS file system performance tuning tips for LSI 9271-8i + 8 SSD's RAID0 rkj
2013-02-04 9:11 ` Linda Walsh
2013-02-04 12:52 ` Dave Chinner [this message]