public inbox for linux-xfs@vger.kernel.org
From: Christian Affolter <c.affolter@purplehaze.ch>
To: Dave Chinner <david@fromorbit.com>
Cc: xfs@oss.sgi.com
Subject: Re: Random write result differences between RAID device and XFS
Date: Mon, 1 Feb 2016 09:59:23 +0100
Message-ID: <56AF1E6B.5090402@purplehaze.ch>
In-Reply-To: <20160201054639.GU6033@dastard>

Hello Dave,

On 01.02.2016 06:46, Dave Chinner wrote:
> On Sat, Jan 30, 2016 at 11:43:56AM +0100, Christian Affolter wrote:
>> Hi Dave,
>>
>> On 29.01.2016 23:25, Dave Chinner wrote:
>>> On Fri, Jan 29, 2016 at 11:53:35AM +0100, Christian Affolter wrote:
>>>> Hi everyone,
>>>>
>>>> I'm trying to understand the differences in bandwidth and IOPS test
>>>> results I see while running a random-write, full-stripe-width-aligned fio
>>>> test (using libaio with direct IO) on a hardware RAID 6 raw device
>>>> versus on the same device with the XFS file system on top of it.
>>>>
>>>> On the raw device I get:
>>>> write: io=24828MB, bw=423132KB/s, iops=137, runt= 60085msec
>>>>
>>>> With XFS on top of it:
>>>> write: io=14658MB, bw=249407KB/s, iops=81, runt= 60182msec
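
For reference, a job along these lines reproduces the setup described
above; only libaio and direct IO are given in the thread, so the device
path, block size, queue depth and runtime below are placeholders rather
than the exact values used:

  # raw device run (sketch; geometry and paths are assumptions)
  fio --name=rawtest --filename=/dev/sdX --rw=randwrite \
      --bs=3m --direct=1 --ioengine=libaio --iodepth=16 \
      --size=30g --runtime=60

  # same job against a file on the XFS filesystem on that device
  fio --name=xfstest --filename=/mnt/test/fio.dat --rw=randwrite \
      --bs=3m --direct=1 --ioengine=libaio --iodepth=16 \
      --size=30g --runtime=60
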
>>>
>>> Now repeat with a file that is contiguously allocated before you
>>> start. And also perhaps with the "swalloc" mount option.
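
One way to get a contiguously allocated file before the run, and to
check how it actually got laid out (the path below is just an example):

  # write the file out once sequentially so the timed random writes
  # become pure overwrites of already-allocated blocks
  xfs_io -f -c "pwrite -b 1m 0 30g" /mnt/test/fio.dat

  # inspect the extent map to confirm the layout is (mostly) contiguous
  xfs_bmap -v /mnt/test/fio.dat
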
>>
>> Wow, thanks! After specifying --fallocate=none (instead of the default
>> fallocate=posix), bandwidth and IOPS increase and are even higher than
>> on the raw device:
>>
>> write: io=30720MB, bw=599232KB/s, iops=195, runt= 52496msec
>>
>> I'm eager to learn what's going on behind the scenes; could you give a
>> short explanation?
> 
> Usually when concurrent direct IO writes are slower than the raw
> device it's because something is causing IO submission
> serialisation.  Usually that's to do with writes that extend the
> file because that can require the inode to be locked exclusively.
> Whatever behaviour the fio configuration change modified, it removed
> the IO submission serialisation and so it's now running at full disk
> speed.
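
For the archive, the change in question was fio's file preallocation
mode; as I understand it (my reading of the two modes, not a definitive
statement of fio or XFS internals):

  # default: preallocate the file via posix_fallocate(); on XFS this
  # typically leaves unwritten extents that the direct writes then
  # have to convert as they land
  fio ... --fallocate=posix

  # no preallocation: fio instead lays the file out by writing it
  # sequentially up front, so the timed random writes are plain
  # overwrites of written, contiguous blocks
  fio ... --fallocate=none
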
> 
> As to why XFS is faster than the raw block device, the XFS file
> is only 30GB, so the random writes are only seeking a short
> distance compared to the block device test, which seeks across
> the whole device.
> 
>> Btw. mounting the volume with "swalloc" didn't make any difference.
> 
> Which means there is no performance differential between stripe unit
> and stripe width aligned writes in this test on your hardware.
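
For completeness, this is how the option was tested (device and mount
point are examples):

  # swalloc rounds data allocations up to stripe width boundaries
  # rather than stripe unit boundaries; see xfs(5)
  umount /mnt/test
  mount -o swalloc /dev/sdX /mnt/test
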

Thank you so much for the detailed explanation and for taking the time
to help.


Best,
Chris

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

Thread overview: 5+ messages
2016-01-29 10:53 Random write result differences between RAID device and XFS Christian Affolter
2016-01-29 22:25 ` Dave Chinner
2016-01-30 10:43   ` Christian Affolter
2016-02-01  5:46     ` Dave Chinner
2016-02-01  8:59       ` Christian Affolter [this message]
