public inbox for linux-xfs@vger.kernel.org
From: Stan Hoeppner <stan@hardwarefreak.com>
To: xfs@oss.sgi.com
Subject: Re: A little RAID experiment
Date: Wed, 18 Jul 2012 02:09:09 -0500
Message-ID: <50066115.7070807@hardwarefreak.com>
In-Reply-To: <CAAxjCEwgDKLF=RY0aCCNTMsc1oefXWfyHKh+morYB9zVUrnH-A@mail.gmail.com>

On 7/18/2012 1:44 AM, Stefan Ring wrote:
> On Wed, Jul 18, 2012 at 4:18 AM, Stan Hoeppner <stan@hardwarefreak.com> wrote:
>> On 7/17/2012 12:26 AM, Dave Chinner wrote:
>> ...
>>> I bet it's single threaded, which means it is:
>>
>> The data given seems to strongly suggest a single thread.
>>
>>> Which means throughput is limited by IO latency, not bandwidth.
>>> If it takes 10us to do the write(2), issue and process the IO
>>> completion, and it takes 10us for the hardware to do the IO, you're
>>> limited to 50,000 IOPS, or 200MB/s. Given that the best being seen
>>> is around 35MB/s, you're looking at around 10,000 IOPS of 100us
>>> round trip time. At 5MB/s, it's 1200 IOPS or around 800us round
>>> trip.
>>>
>>> That's why you get different performance from the different raid
>>> controllers - some process cache hits a lot faster than others.
>> ...
>>> IOWs, welcome to Understanding RAID Controller Caching Behaviours
>>> 101 :)
>>
>> It would be somewhat interesting to see Stefan's latency and throughput
>> numbers for 4/8/16 threads.  Maybe the sysbench "--num-threads=" option
>> is the ticket.  The docs state this is for testing scheduler
>> performance, and it's not clear whether this actually does threaded IO.
>>  If not, time for a new IO benchmark.
> 
> Yes, it is intentionally single-threaded and round-trip-bound, as that
> is exactly the kind of behavior that XFS chose to display.

You're referring to your original huge-metadata problem?  IIRC your
workload there was a single thread, wasn't it?
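
To put actual numbers on Dave's latency arithmetic above, here's a trivial back-of-the-envelope sketch. It assumes 4 KiB IOs -- an assumption on my part, since Dave didn't state a block size, though 50,000 IOPS at ~200 MB/s implies something close to that:

```python
# Single-threaded latency/IOPS/throughput arithmetic, per Dave's
# "Understanding RAID Controller Caching Behaviours 101" above.
# BLOCK is assumed; sysbench's default block size may differ.

BLOCK = 4 * 1024  # bytes per IO (assumption)

def iops_from_rtt(rtt_us):
    """IOPS a single thread can sustain at a given round-trip time."""
    return 1_000_000 / rtt_us

def throughput_mb_s(rtt_us, block=BLOCK):
    """Throughput implied by that round-trip time, in decimal MB/s."""
    return iops_from_rtt(rtt_us) * block / 1_000_000

def rtt_from_throughput(mb_s, block=BLOCK):
    """Round-trip time (us) implied by an observed throughput."""
    return block / mb_s

print(f"20 us RTT -> {iops_from_rtt(20):,.0f} IOPS, "
      f"{throughput_mb_s(20):.0f} MB/s")
print(f"35 MB/s   -> ~{rtt_from_throughput(35):.0f} us round trip")
print(f"5 MB/s    -> ~{rtt_from_throughput(5):.0f} us round trip")
```

Same figures Dave quoted, modulo rounding: a 20 us round trip caps a single thread at ~50,000 IOPS (~200 MB/s), and the observed 35 MB/s and 5 MB/s correspond to roughly 100 us and 800 us round trips.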

> I tested with more threads now. It is initially faster, which only
> serves to hasten the tanking, and the response time goes through the
> roof. I also needed to increase the --file-num. Apparently the
> filesystem (ext3) in this case cannot handle concurrent accesses to
> the same file.

*Gasp*  EXT3?  Not XFS?  Why are you posting this on the XFS list?
The two will likely have (significantly) different behavior.

Also, to make any meaningful comparison, we kinda need to know which
controller was targeted by these 3 runs below. ;)

> 4 threads:
> 
> [   2s] reads: 0.00 MB/s writes: 23.55 MB/s fsyncs: 0.00/s response time: 1.171ms (95%)
> [   4s] reads: 0.00 MB/s writes: 24.35 MB/s fsyncs: 0.00/s response time: 1.129ms (95%)
> [   6s] reads: 0.00 MB/s writes: 24.55 MB/s fsyncs: 0.00/s response time: 1.141ms (95%)
> [   8s] reads: 0.00 MB/s writes: 25.73 MB/s fsyncs: 0.00/s response time: 1.088ms (95%)
> [  10s] reads: 0.00 MB/s writes: 6.14 MB/s fsyncs: 0.00/s response time: 0.994ms (95%)
> [  12s] reads: 0.00 MB/s writes: 0.01 MB/s fsyncs: 0.00/s response time: 2735.611ms (95%)
> [  14s] reads: 0.00 MB/s writes: 0.01 MB/s fsyncs: 0.00/s response time: 3800.107ms (95%)
> [  16s] reads: 0.00 MB/s writes: 0.01 MB/s fsyncs: 0.00/s response time: 4404.397ms (95%)
> [  18s] reads: 0.00 MB/s writes: 0.00 MB/s fsyncs: 0.00/s response time: 3153.588ms (95%)
> [  20s] reads: 0.00 MB/s writes: 0.01 MB/s fsyncs: 0.00/s response time: 4769.433ms (95%)
> 
> 
> 8 threads:
> 
> [   2s] reads: 0.00 MB/s writes: 26.99 MB/s fsyncs: 0.00/s response time: 2.451ms (95%)
> [   4s] reads: 0.00 MB/s writes: 28.12 MB/s fsyncs: 0.00/s response time: 3.153ms (95%)
> [   6s] reads: 0.00 MB/s writes: 25.97 MB/s fsyncs: 0.00/s response time: 2.965ms (95%)
> [   8s] reads: 0.00 MB/s writes: 23.23 MB/s fsyncs: 0.00/s response time: 2.560ms (95%)
> [  10s] reads: 0.00 MB/s writes: 0.00 MB/s fsyncs: 0.00/s response time: 791.041ms (95%)
> [  12s] reads: 0.00 MB/s writes: 0.01 MB/s fsyncs: 0.00/s response time: 3458.162ms (95%)
> [  14s] reads: 0.00 MB/s writes: 0.01 MB/s fsyncs: 0.00/s response time: 5519.598ms (95%)
> [  16s] reads: 0.00 MB/s writes: 0.01 MB/s fsyncs: 0.00/s response time: 3219.401ms (95%)
> [  18s] reads: 0.00 MB/s writes: 0.01 MB/s fsyncs: 0.00/s response time: 10235.289ms (95%)
> [  20s] reads: 0.00 MB/s writes: 0.01 MB/s fsyncs: 0.00/s response time: 3765.007ms (95%)
> 
> 16 threads:
> 
> [   2s] reads: 0.00 MB/s writes: 34.27 MB/s fsyncs: 0.00/s response time: 3.899ms (95%)
> [   4s] reads: 0.00 MB/s writes: 28.62 MB/s fsyncs: 0.00/s response time: 6.910ms (95%)
> [   6s] reads: 0.00 MB/s writes: 27.94 MB/s fsyncs: 0.00/s response time: 6.869ms (95%)
> [   8s] reads: 0.00 MB/s writes: 13.50 MB/s fsyncs: 0.00/s response time: 7.594ms (95%)
> [  10s] reads: 0.00 MB/s writes: 0.01 MB/s fsyncs: 0.00/s response time: 2308.573ms (95%)
> [  12s] reads: 0.00 MB/s writes: 0.01 MB/s fsyncs: 0.00/s response time: 4811.016ms (95%)
> [  14s] reads: 0.00 MB/s writes: 0.00 MB/s fsyncs: 0.00/s response time: 4635.714ms (95%)
> [  16s] reads: 0.00 MB/s writes: 0.01 MB/s fsyncs: 0.00/s response time: 3200.185ms (95%)
> [  18s] reads: 0.00 MB/s writes: 0.03 MB/s fsyncs: 0.00/s response time: 9623.207ms (95%)
> [  20s] reads: 0.00 MB/s writes: 0.01 MB/s fsyncs: 0.00/s response time: 8053.211ms (95%)
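
For what it's worth, all three runs show the same shape: steady ~23-34 MB/s until roughly the 8-10s mark, then writes collapse to near zero while the 95th-percentile response times jump into the multi-second range -- which is exactly what you'd expect when a controller's write cache fills. A quick parse of the report lines makes the pattern easy to tabulate (a sketch; the sample below is trimmed from your 4-thread run):

```python
import re

# Parse sysbench interval-report lines into (time, write MB/s,
# 95th-percentile latency) tuples to show the cache-fill collapse.
REPORT = """\
[   2s] reads: 0.00 MB/s writes: 23.55 MB/s fsyncs: 0.00/s response time: 1.171ms (95%)
[  10s] reads: 0.00 MB/s writes: 6.14 MB/s fsyncs: 0.00/s response time: 0.994ms (95%)
[  12s] reads: 0.00 MB/s writes: 0.01 MB/s fsyncs: 0.00/s response time: 2735.611ms (95%)
"""

LINE = re.compile(
    r"\[\s*(\d+)s\].*writes:\s*([\d.]+)\s*MB/s.*"
    r"response time:\s*([\d.]+)ms")

for t, wr, rt in LINE.findall(REPORT):
    print(f"t={t:>3}s  writes={float(wr):6.2f} MB/s  "
          f"95% latency={float(rt):9.3f} ms")
```

Note how fast the transition is: one interval at 6 MB/s and sub-millisecond latency, then the very next interval is at ~0 MB/s with a 2.7-second 95th percentile.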

-- 
Stan


