public inbox for linux-xfs@vger.kernel.org
From: "Settlemyer, Bradley W." <settlemyerbw@ornl.gov>
To: "xfs@oss.sgi.com" <xfs@oss.sgi.com>
Subject: Weird performance on a FusionIO Octal (Random writes faster than Seq.)
Date: Thu, 14 Feb 2013 13:45:04 -0500	[thread overview]
Message-ID: <CD429AE0.4B5E%settlemyerbw@ornl.gov> (raw)

Hello,

  So I'm getting weird performance using XFS on a 5TB FusionIO Octal (a
solid-state device plugged into my PCIe bus).  It seems to be a newish
problem, but I can't go back to an old version of everything to prove
that, because I've only got one working Octal right now (they are a little
pricey).

  At any rate, when doing random 16MB requests to a file with 16 threads,
I get about 4.5GB/s.  When writing sequentially with 16 threads doing 16MB
requests, I get about 3.5GB/s -- the first time.  Once the file is written
the first time, a second pass results in 4.5GB/s.

  The thing is, I'm using preallocation for both types of I/O (that is, I
always preallocate the entire file, whether it's random or sequential).  I
allocate the exact same size file in both cases; it's just faster the first
time with random writes than with sequential writes.
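To be concrete about what I mean by "preallocate": reserve the extents up
front, then write into them in place.  Here's a scaled-down stand-in using
coreutils/util-linux (fallocate(1) here is just my stand-in for xdd's
-preallocate flag, and this sketch omits the O_DIRECT that xdd's -dio adds):

```shell
# Rough stand-in for the preallocate-then-overwrite pattern, scaled down.
# fallocate(1) reserves unwritten extents; dd then writes into them in place.
f=$(mktemp)
fallocate -l $((64*1024*1024)) "$f"      # reserve 64 MiB of unwritten extents
dd if=/dev/zero of="$f" bs=$((16*1024*1024)) count=4 \
   conv=notrunc status=none              # four 16 MiB writes into the reserved space
stat -c %s "$f"                          # size is unchanged: 67108864
```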

  So if you had xdd 7.0 and an Octal plugged into slot 6 of an HP DL585 G7
(running CentOS 6.3), you could replicate these results with the
following commands (note that xdd's default block size is 1024, so
everything below is accounted in 1024-byte blocks):

# Generate a set of random seek offsets within a file
data_size=$((256*8*1024*1024*1024))
rand_range=$((data_size / 16384 / 1024 - 1))
shuf -i 0-$rand_range | awk "{i += 1}; {print i-1, \$0*16384, \"16384 w 0 0\" };" > wseek_file
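If you want to sanity-check the arithmetic above, here's a self-contained
version of the generator (awk's NR standing in for the manual counter, and a
temp file standing in for wseek_file).  For the 2 TiB file it should produce
exactly 131072 seek entries, one per 16 MiB request:

```shell
# Sanity-check the seek-file arithmetic: 2 TiB in 16 MiB requests
# means 131072 requests, indices 0..131071.
data_size=$((256*8*1024*1024*1024))              # 2199023255552 bytes (2 TiB)
rand_range=$((data_size / 16384 / 1024 - 1))     # highest request index
seekfile=$(mktemp)
# NR-1 is equivalent to the manual {i += 1} counter in the one-liner above
shuf -i 0-$rand_range | awk '{ print NR-1, $0*16384, "16384 w 0 0" }' > "$seekfile"
echo "entries: $(wc -l < "$seekfile")  max index: $rand_range"
```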


# Perform the random writes
numactl --cpunodebind=4 xdd -op write -target /data/xfs-numa4/baseline-rand1 \
    -reqsize 16384 -qd 16 -dio -verbose -bytes 2199023255552 \
    -preallocate 2199023255552 -seek load wseek_file

# Perform sequential writes
numactl --cpunodebind=4 xdd -op write -target /data/xfs-numa4/baseline-seq1 \
    -reqsize 16384 -qd 16 -dio -verbose -bytes 2199023255552 \
    -preallocate 2199023255552

And so I get the following results (if you know xdd output):

Random:
COMBINED 1 1 16 2199023255552 131072 483.311 4549.910 271.196 0.230 67.644 write 16777216

Sequential:
COMBINED 1 1 16 2199023255552 131072 605.366 3632.550 216.517 0.289 53.971 write 16777216


The bandwidths here are 4549.910 MB/s for the random pass and 3632.550 MB/s
for the sequential one.  If I run it again against the already-existing file,
I get:

COMBINED 1 1 16 2199023255552 131072 482.851 4554.248 271.454 0.230 66.016 write 16777216

Which is a write bandwidth of 4554.248 MB/s.
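(For anyone decoding the COMBINED line: as far as I can tell, the bandwidth
column is just total bytes over elapsed seconds, in decimal MB.  E.g. for the
random pass:)

```shell
# Bandwidth appears to be bytes / elapsed seconds in decimal MB/s:
# 2199023255552 bytes written in 483.311 s on the random-write pass.
awk 'BEGIN { printf "%.1f MB/s\n", 2199023255552 / 483.311 / 1e6 }'
```

which comes out within rounding of the reported 4549.910 MB/s.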


Weird, right?

Cheers,
Brad

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


Thread overview: 2+ messages
2013-02-14 18:45 Settlemyer, Bradley W. [this message]
2013-02-15  2:05 ` Weird performance on a FusionIO Octal (Random writes faster than Seq.) Dave Chinner
