From: "Salmon, Rene" <Rene.Salmon@bp.com>
To: nscott@aconex.com, David Chinner <dgc@sgi.com>
Cc: salmr0@bp.com, xfs@oss.sgi.com
Subject: Re: sunit not working
Date: Wed, 13 Jun 2007 13:46:20 -0500 [thread overview]
Message-ID: <1181760380.8754.53.camel@holwrs01> (raw)
In-Reply-To: <1181690478.3758.108.camel@edge.yarra.acx>
Hi,

More details on this. For now I am using dd with various block sizes to
measure write performance only, using two dd options: oflag=direct for
direct I/O and conv=fsync for buffered I/O.
Using direct:
/usr/bin/time -p dd of=/mnt/testfile if=/dev/zero oflag=direct
Using fsync:
/usr/bin/time -p dd of=/mnt/testfile if=/dev/zero conv=fsync
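For reference, a complete invocation with an explicit block size and count
(the bs/count values here are illustrative, not the ones from my runs)
would look something like:

```shell
# Direct I/O: 1 MByte blocks, 1.0 GB total (illustrative sizes)
/usr/bin/time -p dd of=/mnt/testfile if=/dev/zero bs=1M count=1024 oflag=direct

# Buffered I/O with a final fsync, same total size
/usr/bin/time -p dd of=/mnt/testfile if=/dev/zero bs=1M count=1024 conv=fsync
```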
Using a 2 Gbit/sec fibre channel card my theoretical max is 256
MBytes/sec. If we allow a bit of overhead for the card driver and
such, the manufacturer claims the card should be able to max out at
around 200 MBytes/sec.
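As a sanity check on that number (a back-of-the-envelope sketch, treating
"2 Gbit" as 2 * 1024 Mbit and ignoring protocol overhead):

```shell
# 2 Gbit/sec link, 8 bits per byte -> theoretical max in MBytes/sec
echo $(( 2 * 1024 / 8 ))   # 256
```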
The block sizes I used range from 128 KBytes to 1024000 KBytes, and every
run writes a 1.0 GB file.
Some of the results I got:
Buffered I/O(fsync):
--------------------
Linux seems to do a good job at buffering this. Regardless of the block
size I choose, I always get write speeds of around 150 MBytes/sec.
Direct I/O(direct):
-------------------
The speeds I get here are, of course, very dependent on the block size I
choose and how well it aligns with the stripe size of the storage array
underneath. For well-aligned block sizes I get really good performance,
about 200 MBytes/sec.
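To make the alignment point concrete: with a hypothetical array geometry
of a 64 KByte stripe unit across 4 data disks (illustrative values, not my
actual sunit/swidth), direct-I/O block sizes that are multiples of the
full stripe width land on whole stripes:

```shell
# Hypothetical geometry: stripe unit 64 KBytes, 4 data disks
sunit_k=64
ndisks=4
swidth_k=$(( sunit_k * ndisks ))   # full stripe width in KBytes
echo "$swidth_k"                   # 256

# A dd block size of 256k, or any multiple, is stripe-aligned here
bs_k=1024
echo $(( bs_k % swidth_k ))        # 0 -> aligned
```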
From your feedback it sounds like these are reasonable numbers.
Most of our user apps do not use direct I/O but rather buffered I/O. Is
150MBytes/sec as good as it gets for buffered I/O or is there something
I can tune to get a bit more out of buffered I/O?
Thanks
Rene
> >
> > Thanks that helps. Now that I know I have the right sunit and swidth
> > I have a performance related question.
> >
> > If I do a dd on the raw device or to the lun directly I get speeds of
> > around 190-200 MBytes/sec.
> >
> > As soon as I add xfs on top of the lun my speeds go to around 150
> > MBytes/sec. This is for a single stream write using various block
> > sizes on a 2 Gbit/sec fiber channel card.
> >
>
> Reads or writes?
> What are your I/O sizes?
> Buffered or direct IO?
> Including fsync time in there or not? etc, etc.
>
> (Actual dd commands used and their output results would be best)
> xfs_io is pretty good for this kind of analysis, as it gives very
> fine grained control of operations performed, has an integrated bmap
> command, etc. (Use the -F flag for the raw device comparisons.)
>
> > Is this overhead more or less what you would expect from xfs? Or is
> > there some tuning I need to do?
>
> You should be able to get very close to raw device speeds esp. for a
> single stream reader/writer, with some tuning.
>
> cheers.
>
Thread overview: 8+ messages
2007-06-11 23:55 sunit not working Salmon, Rene
2007-06-12 0:34 ` Nathan Scott
2007-06-12 13:12 ` salmr0
2007-06-12 23:21 ` Nathan Scott
2007-06-13 18:46 ` Salmon, Rene [this message]
2007-06-13 19:03 ` Sebastian Brings
2007-06-13 22:31 ` David Chinner
2007-06-12 23:28 ` David Chinner