public inbox for linux-xfs@vger.kernel.org
From: Stan Hoeppner <stan@hardwarefreak.com>
To: Dave Chinner <david@fromorbit.com>
Cc: Stewart Webb <stew@messeduphare.co.uk>,
	Chris Murphy <lists@colorremedies.com>,
	"xfs@oss.sgi.com" <xfs@oss.sgi.com>
Subject: Re: xfs hardware RAID alignment over linear lvm
Date: Thu, 26 Sep 2013 20:10:51 -0500	[thread overview]
Message-ID: <5244DB1B.7000908@hardwarefreak.com> (raw)
In-Reply-To: <20130926215806.GQ26872@dastard>

On 9/26/2013 4:58 PM, Dave Chinner wrote:
> On Thu, Sep 26, 2013 at 04:22:30AM -0500, Stan Hoeppner wrote:
>> On 9/26/2013 3:55 AM, Stewart Webb wrote:
>>> Thanks for all this info Stan and Dave,
>>>
>>>> "Stripe size" is a synonym of XFS sw, which is su * #disks.  This is the
>>>> amount of data written across the full RAID stripe (excluding parity).
>>>
>>> The reason I stated Stripe size is because in this instance, I have 3ware
>>> RAID controllers, which refer to
>>> this value as "Stripe" in their tw_cli software (god bless manufacturers
>>> renaming everything)
>>>
>>> I do, however, have a follow-on question:
>>> On other systems, I have similar hardware:
>>> 3x Raid Controllers
>>> 1 of them has 10 disks as RAID 6 that I would like to add to a logical
>>> volume
>>> 2 of them have 12 disks as a RAID 6 that I would like to add to the same
>>> logical volume
>>>
>>> All have the same "Stripe" or "Strip Size" of 512 KB
>>>
>>> So if I were going to make 3 separate xfs volumes, I would do the
>>> following:
>>> mkfs.xfs -d su=512k,sw=8 /dev/sda
>>> mkfs.xfs -d su=512k,sw=10 /dev/sdb
>>> mkfs.xfs -d su=512k,sw=10 /dev/sdc
>>>
>>> I assume, if I were going to bring them all into 1 logical volume, it
>>> would be best placed to have the sw value set
>>> to a value that divides both 8 and 10 - in this case 2?
>>
>> No.  In this case you do NOT stripe align XFS to the storage, because
>> it's impossible--the RAID stripes are dissimilar.  In this case you use
>> the default 4KB write out, as if this is a single disk drive.
>>
>> As Dave stated, if you format a concatenated device with XFS and you
>> desire to align XFS, then all constituent arrays must have the same
>> geometry.
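For completeness, a sketch of the case that does work: all constituent arrays sharing identical geometry. Device names and layout here are hypothetical, and these commands need real hardware, so treat this as an illustrative fragment rather than something to paste:

```shell
# Two hypothetical 12-disk RAID6 arrays (/dev/sdb, /dev/sdc), both
# exported by the controller with the same 512 KiB strip size.
pvcreate /dev/sdb /dev/sdc
vgcreate bigvg /dev/sdb /dev/sdc
lvcreate -l 100%FREE -n biglv bigvg   # default linear (concatenated) LV
# Align XFS to the one geometry both arrays share:
# su = 512k strip, sw = 12 disks - 2 parity = 10 data disks
mkfs.xfs -d su=512k,sw=10 /dev/bigvg/biglv
```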
>>
>> Three things to be aware of here:
>>
>> 1.  With a decent hardware write caching RAID controller, having XFS
>> aligned to the RAID geometry is a small optimization WRT overall write
>> performance, because the controller is going to be doing the optimizing
>> of final writeback to the drives.
>>
>> 2. Alignment does not affect read performance.
> 
> Ah, but it does...
> 
>> 3.  XFS only performs aligned writes during allocation.
> 
> Right, and it does so not only to improve write performance, but to
> also maximise sequential read performance of the data that is
> written, especially when multiple files are being read
> simultaneously and IO latency is important to keep low (e.g.
> realtime video ingest and playout).

Absolutely correct, as Dave always is.  As my workloads are mostly
random, as are those of others I consult in other fora, I sometimes
forget the [multi]streaming case.  Which is not good, as many folks
choose XFS specifically for [multi]streaming workloads.  My remarks to
this audience should always reflect that.  Apologies for my oversight on
this occasion.

>> What really makes a difference as to whether alignment will be of
>> benefit to you, and how often, is your workload.  So at this point, you
>> need to describe the primary workload(s) of your systems we're discussing.
> 
> Yup, my thoughts exactly...
> 
> Cheers,
> 
> Dave.
> 

-- 
Stan

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

Thread overview: 17+ messages
2013-09-25 12:56 xfs hardware RAID alignment over linear lvm Stewart Webb
2013-09-25 21:18 ` Stan Hoeppner
2013-09-25 21:34   ` Chris Murphy
2013-09-25 21:48     ` Stan Hoeppner
2013-09-25 21:53       ` Chris Murphy
2013-09-25 21:57     ` Dave Chinner
2013-09-26  8:44       ` Stan Hoeppner
2013-09-26  8:55       ` Stewart Webb
2013-09-26  9:22         ` Stan Hoeppner
2013-09-26  9:28           ` Stewart Webb
2013-09-26 21:58           ` Dave Chinner
2013-09-27  1:10             ` Stan Hoeppner [this message]
2013-09-27 12:23               ` Stewart Webb
2013-09-27 13:09                 ` Stan Hoeppner
2013-09-27 13:29                   ` Stewart Webb
2013-09-28 14:54                     ` Stan Hoeppner
2013-09-30  8:48                       ` Stewart Webb
