linux-xfs.vger.kernel.org archive mirror
* relationship of nested stripe sizes, was: Question regarding XFS on LVM over hardware RAID.
@ 2014-01-31  6:35 Chris Murphy
  2014-02-01 18:47 ` Stan Hoeppner
  0 siblings, 1 reply; 12+ messages in thread
From: Chris Murphy @ 2014-01-31  6:35 UTC (permalink / raw)
  To: Stan Hoeppner; +Cc: xfs

Hopefully this is an acceptable way to avoid thread jacking, by renaming the  subject…

On Jan 30, 2014, at 10:58 PM, Stan Hoeppner <stan@hardwarefreak.com> wrote:
> 
> RAID60 is a nested RAID level just like RAID10 and RAID50.  It is a
> stripe, or RAID0, across multiple primary array types, RAID6 in this
> case.  The stripe width of each 'inner' RAID6 becomes the stripe unit of
> the 'outer' RAID0 array:
> 
> RAID6 geometry	 128KB * 12 = 1536KB
> RAID0 geometry  1536KB * 3  = 4608KB

My question is on this particular point. If this were hardware raid6, but I wanted to then stripe using md raid0, using the numbers above would I choose a raid0 chunk size of 1536KB? How critical is this value for, e.g. only large streaming read/write workloads? If it were smaller, say 256KB or even 32KB, would there be a significant performance consequence?
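
To make the scenario concrete, here's roughly what I have in mind
(device names are hypothetical and this is only an untested sketch):

# three hardware RAID6 LUNs, each 128KB chunk * 12 data disks = 1536KB
$ mdadm --create /dev/md0 --level=0 --raid-devices=3 --chunk=1536 \
        /dev/sdb /dev/sdc /dev/sdd
# versus just taking the mdadm default chunk
$ mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

(--chunk is in KiB; I'm assuming a non-power-of-two value like 1536 is
accepted for raid0 on current md/mdadm.)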

Chris Murphy
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: relationship of nested stripe sizes, was: Question regarding XFS on LVM over hardware RAID.
  2014-01-31  6:35 relationship of nested stripe sizes, was: Question regarding XFS on LVM over hardware RAID Chris Murphy
@ 2014-02-01 18:47 ` Stan Hoeppner
  2014-02-01 20:55   ` Chris Murphy
  0 siblings, 1 reply; 12+ messages in thread
From: Stan Hoeppner @ 2014-02-01 18:47 UTC (permalink / raw)
  To: Chris Murphy; +Cc: xfs

On 1/31/2014 12:35 AM, Chris Murphy wrote:
> Hopefully this is an acceptable way to avoid thread jacking, by
> renaming the  subject…
> 
> On Jan 30, 2014, at 10:58 PM, Stan Hoeppner <stan@hardwarefreak.com>
> wrote:
>> 
>> RAID60 is a nested RAID level just like RAID10 and RAID50.  It is
>> a stripe, or RAID0, across multiple primary array types, RAID6 in
>> this case.  The stripe width of each 'inner' RAID6 becomes the
>> stripe unit of the 'outer' RAID0 array:
>> 
>> RAID6 geometry  128KB * 12 = 1536KB
>> RAID0 geometry 1536KB *  3 = 4608KB
> 
> My question is on this particular point. If this were hardware raid6,
> but I wanted to then stripe using md raid0, using the numbers above
> would I choose a raid0 chunk size of 1536KB? How critical is this
> value for, e.g. only large streaming read/write workloads? If it were
> smaller, say 256KB or even 32KB, would there be a significant
> performance consequence?

You say 'if it were smaller...256/32KB'.  What is "it" referencing?

-- 
Stan

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: relationship of nested stripe sizes, was: Question regarding XFS on LVM over hardware RAID.
  2014-02-01 18:47 ` Stan Hoeppner
@ 2014-02-01 20:55   ` Chris Murphy
  2014-02-01 21:44     ` Stan Hoeppner
  0 siblings, 1 reply; 12+ messages in thread
From: Chris Murphy @ 2014-02-01 20:55 UTC (permalink / raw)
  To: stan; +Cc: xfs


On Feb 1, 2014, at 11:47 AM, Stan Hoeppner <stan@hardwarefreak.com> wrote:

> On 1/31/2014 12:35 AM, Chris Murphy wrote:
>> Hopefully this is an acceptable way to avoid thread jacking, by
>> renaming the  subject…
>> 
>> On Jan 30, 2014, at 10:58 PM, Stan Hoeppner <stan@hardwarefreak.com>
>> wrote:
>>> 
>>> RAID60 is a nested RAID level just like RAID10 and RAID50.  It is
>>> a stripe, or RAID0, across multiple primary array types, RAID6 in
>>> this case.  The stripe width of each 'inner' RAID6 becomes the
>>> stripe unit of the 'outer' RAID0 array:
>>> 
>>> RAID6 geometry  128KB * 12 = 1536KB
>>> RAID0 geometry 1536KB *  3 = 4608KB
>> 
>> My question is on this particular point. If this were hardware raid6,
>> but I wanted to then stripe using md raid0, using the numbers above
>> would I choose a raid0 chunk size of 1536KB? How critical is this
>> value for, e.g. only large streaming read/write workloads? If it were
>> smaller, say 256KB or even 32KB, would there be a significant
>> performance consequence?
> 
> You say 'if it were smaller...256/32KB'.  What is "it" referencing?

it = chunk size for md raid0. 

So chunk size 128KB * 12 disks, hardware raid6. Chunk size 32KB [1] striping the raid6's with md raid0.


Chris Murphy

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: relationship of nested stripe sizes, was: Question regarding XFS on LVM over hardware RAID.
  2014-02-01 20:55   ` Chris Murphy
@ 2014-02-01 21:44     ` Stan Hoeppner
  2014-02-02 18:09       ` Chris Murphy
  0 siblings, 1 reply; 12+ messages in thread
From: Stan Hoeppner @ 2014-02-01 21:44 UTC (permalink / raw)
  To: Chris Murphy; +Cc: xfs

On 2/1/2014 2:55 PM, Chris Murphy wrote:
> 
> On Feb 1, 2014, at 11:47 AM, Stan Hoeppner <stan@hardwarefreak.com>
> wrote:
> 
>> On 1/31/2014 12:35 AM, Chris Murphy wrote:
>>> Hopefully this is an acceptable way to avoid thread jacking, by 
>>> renaming the  subject…
>>> 
>>> On Jan 30, 2014, at 10:58 PM, Stan Hoeppner
>>> <stan@hardwarefreak.com> wrote:
>>>> 
>>>> RAID60 is a nested RAID level just like RAID10 and RAID50.  It
>>>> is a stripe, or RAID0, across multiple primary array types,
>>>> RAID6 in this case.  The stripe width of each 'inner' RAID6
>>>> becomes the stripe unit of the 'outer' RAID0 array:
>>>> 
>>>> RAID6 geometry  128KB * 12 = 1536KB
>>>> RAID0 geometry 1536KB *  3 = 4608KB
>>> 
>>> My question is on this particular point. If this were hardware
>>> raid6, but I wanted to then stripe using md raid0, using the
>>> numbers above would I choose a raid0 chunk size of 1536KB? How
>>> critical is this value for, e.g. only large streaming read/write
>>> workloads? If it were smaller, say 256KB or even 32KB, would
>>> there be a significant performance consequence?
>> 
>> You say 'if it were smaller...256/32KB'.  What is "it"
>> referencing?
> 
> it = chunk size for md raid0.
> 
> So chunk size 128KB * 12 disks, hardware raid6. Chunk size 32KB [1]
> striping the raid6's with md raid0.

Frankly, I don't know whether you're pulling my chain, or really don't
understand the concept of nested striping.  I'll assume the latter.

When nesting stripes, the chunk size of the outer stripe is -always-
equal to the stripe width of each inner striped array, as I clearly
demonstrated earlier:

3 RAID6 arrays
RAID6  geometry	 128KB * 12 = 1536KB
RAID60 geometry 1536KB *  3 = 4608KB

mdadm allows you enough rope to hang yourself in this situation because
it doesn't know the geometry of the underlying hardware arrays, and has
no code to do sanity checking even if it did.  Thus it can't save you
from yourself.

RAID HBA and SAN controller firmware simply won't allow this.  They
configure the RAID60 chunk size automatically equal to the RAID6 stripe
width.  If some vendor's firmware allows one to manually enter the
RAID60 chunk size with a value different from the RAID6 stripe width,
stay away from that vendor.
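
As an illustrative sketch only (hypothetical device names, not a tested
recipe), matching the geometry above in software would look like:

# outer RAID0 chunk = inner RAID6 stripe width: 128KB * 12 = 1536KB
$ mdadm --create /dev/md0 --level=0 --raid-devices=3 --chunk=1536 \
        /dev/sdb /dev/sdc /dev/sdd
# tell XFS the outer geometry: stripe unit 1536KB, 3 units per stripe
$ mkfs.xfs -d su=1536k,sw=3 /dev/md0

mdadm's --chunk is in KiB, and mkfs.xfs derives swidth as su * sw,
i.e. 4608KB here, matching the numbers above.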

-- 
Stan

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: relationship of nested stripe sizes, was: Question regarding XFS on LVM over hardware RAID.
  2014-02-01 21:44     ` Stan Hoeppner
@ 2014-02-02 18:09       ` Chris Murphy
  2014-02-02 21:30         ` Dave Chinner
  2014-02-03 10:50         ` relationship of nested stripe sizes, was: Question regarding XFS on LVM over hardware RAID Stan Hoeppner
  0 siblings, 2 replies; 12+ messages in thread
From: Chris Murphy @ 2014-02-02 18:09 UTC (permalink / raw)
  To: Stan Hoeppner; +Cc: xfs


On Feb 1, 2014, at 2:44 PM, Stan Hoeppner <stan@hardwarefreak.com> wrote:

> On 2/1/2014 2:55 PM, Chris Murphy wrote:
>> 
>> On Feb 1, 2014, at 11:47 AM, Stan Hoeppner <stan@hardwarefreak.com>
>> wrote:
>> 
>>> On 1/31/2014 12:35 AM, Chris Murphy wrote:
>>>> Hopefully this is an acceptable way to avoid thread jacking, by 
>>>> renaming the  subject…
>>>> 
>>>> On Jan 30, 2014, at 10:58 PM, Stan Hoeppner
>>>> <stan@hardwarefreak.com> wrote:
>>>>> 
>>>>> RAID60 is a nested RAID level just like RAID10 and RAID50.  It
>>>>> is a stripe, or RAID0, across multiple primary array types,
>>>>> RAID6 in this case.  The stripe width of each 'inner' RAID6
>>>>> becomes the stripe unit of the 'outer' RAID0 array:
>>>>> 
>>>>> RAID6 geometry  128KB * 12 = 1536KB
>>>>> RAID0 geometry 1536KB *  3 = 4608KB
>>>> 
>>>> My question is on this particular point. If this were hardware
>>>> raid6, but I wanted to then stripe using md raid0, using the
>>>> numbers above would I choose a raid0 chunk size of 1536KB? How
>>>> critical is this value for, e.g. only large streaming read/write
>>>> workloads? If it were smaller, say 256KB or even 32KB, would
>>>> there be a significant performance consequence?
>>> 
>>> You say 'if it were smaller...256/32KB'.  What is "it"
>>> referencing?
>> 
>> it = chunk size for md raid0.
>> 
>> So chunk size 128KB * 12 disks, hardware raid6. Chunk size 32KB [1]
>> striping the raid6's with md raid0.
> 
> Frankly, I don't know whether you're pulling my chain, or really don't
> understand the concept of nested striping.  I'll assume the latter.

The former would be inappropriate, and the latter is more plausible anyway, so this is the better assumption.


> When nesting stripes, the chunk size of the outer stripe is -always-
> equal to the stripe width of each inner striped array, as I clearly
> demonstrated earlier:

Except when it's hardware raid6, and software raid0, and the user doesn't know they need to specify the chunk size in this manner. And instead they use the mdadm default. What you're saying makes complete sense, but I don't think this is widespread knowledge or well documented anywhere that regular end users would know this by and large.

> 
> 3 RAID6 arrays
> RAID6  geometry	 128KB * 12 = 1536KB
> RAID60 geometry 1536KB *  3 = 4608KB
> 
> mdadm allows you enough rope to hang yourself in this situation because
> it doesn't know the geometry of the underlying hardware arrays, and has
> no code to do sanity checking even if it did.  Thus it can't save you
> from yourself.

That's right, and this is the exact scenario I'm suggesting. Depending on version, mdadm has two possible default chunk sizes, either 64KB or 512KB.

How bad is the resulting performance hit? Would a 64KB chunk be equally bad as a 512KB chunk? Or is this only quantifiable with testing (i.e. it could be a negligible performance hit, or it could be huge)?
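
(Illustrative only: to see which chunk size an existing array actually
got, something like the following should show it.)

$ mdadm --detail /dev/md0     # look for the "Chunk Size" line
$ cat /proc/mdstat            # also prints the chunk size per array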


> RAID HBA and SAN controller firmware simply won't allow this.  They
> configure the RAID60 chunk size automatically equal to the RAID6 stripe
> width.  If some vendor's firmware allows one to manually enter the
> RAID60 chunk size with a value different from the RAID6 stripe width,
> stay away from that vendor.

I understand that, but the scenario and question I'm posing is for multiple hardware raid6's striped with md raid0. The use case is enclosures with raid6 but not raid60, so the enclosures are striped using software raid. I'm trying to understand the consequence magnitude when choosing an md raid0 chunk size other than the correct one. Is this a 5% performance hit, or a 30% performance hit?


Chris Murphy

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: relationship of nested stripe sizes, was: Question regarding XFS on LVM over hardware RAID.
  2014-02-02 18:09       ` Chris Murphy
@ 2014-02-02 21:30         ` Dave Chinner
  2014-02-03  4:39           ` Stan Hoeppner
  2014-02-03 10:50         ` relationship of nested stripe sizes, was: Question regarding XFS on LVM over hardware RAID Stan Hoeppner
  1 sibling, 1 reply; 12+ messages in thread
From: Dave Chinner @ 2014-02-02 21:30 UTC (permalink / raw)
  To: Chris Murphy; +Cc: Stan Hoeppner, xfs

On Sun, Feb 02, 2014 at 11:09:11AM -0700, Chris Murphy wrote:
> On Feb 1, 2014, at 2:44 PM, Stan Hoeppner <stan@hardwarefreak.com>
> wrote:
> > On 2/1/2014 2:55 PM, Chris Murphy wrote:
> >> On Feb 1, 2014, at 11:47 AM, Stan Hoeppner
> >> <stan@hardwarefreak.com> wrote:
> > When nesting stripes, the chunk size of the outer stripe is
> > -always- equal to the stripe width of each inner striped array,
> > as I clearly demonstrated earlier:
> 
> Except when it's hardware raid6, and software raid0, and the user
> doesn't know they need to specify the chunk size in this manner.
> And instead they use the mdadm default. What you're saying makes
> complete sense, but I don't think this is widespread knowledge or
> well documented anywhere that regular end users would know this by
> and large.

And that is why this is a perfect example of what I'd like to see
people writing documentation for.

http://oss.sgi.com/archives/xfs/2013-12/msg00588.html

This is not the first time we've had this nested RAID discussion,
nor will it be the last. However, being able to point to a web page
or documentation makes it a whole lot easier.....

Stan - any chance you might be able to spare an hour a week to write
something about optimal RAID storage configuration for XFS?

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: relationship of nested stripe sizes, was: Question regarding XFS on LVM over hardware RAID.
  2014-02-02 21:30         ` Dave Chinner
@ 2014-02-03  4:39           ` Stan Hoeppner
  2014-02-03  5:24             ` Dave Chinner
  0 siblings, 1 reply; 12+ messages in thread
From: Stan Hoeppner @ 2014-02-03  4:39 UTC (permalink / raw)
  To: Dave Chinner, Chris Murphy; +Cc: xfs

On 2/2/2014 3:30 PM, Dave Chinner wrote:
> On Sun, Feb 02, 2014 at 11:09:11AM -0700, Chris Murphy wrote:
>> On Feb 1, 2014, at 2:44 PM, Stan Hoeppner <stan@hardwarefreak.com>
>> wrote:
>>> On 2/1/2014 2:55 PM, Chris Murphy wrote:
>>>> On Feb 1, 2014, at 11:47 AM, Stan Hoeppner
>>>> <stan@hardwarefreak.com> wrote:
>>> When nesting stripes, the chunk size of the outer stripe is
>>> -always- equal to the stripe width of each inner striped array,
>>> as I clearly demonstrated earlier:
>>
>> Except when it's hardware raid6, and software raid0, and the user
>> doesn't know they need to specify the chunk size in this manner.
>> And instead they use the mdadm default. What you're saying makes
>> complete sense, but I don't think this is widespread knowledge or
>> well documented anywhere that regular end users would know this by
>> and large.
> 
> And that is why this is a perfect example of what I'd like to see
> people writing documentation for.
> 
> http://oss.sgi.com/archives/xfs/2013-12/msg00588.html
> 
> This is not the first time we've had this nested RAID discussion,
> nor will it be the last. However, being able to point to a web page
> or documentation makes it a whole lot easier.....
> 
> Stan - any chance you might be able to spare an hour a week to write
> something about optimal RAID storage configuration for XFS?

I could do more, probably rather quickly.  What kind of scope, format,
style?  Should this be structured as reference manual style
documentation, FAQ, blog??  I'm leaning more towards reference style.

How about starting with a lead-in explaining why the workload should
always drive storage architecture.  Then I'll describe the various
standard and nested RAID levels, concatenations, etc and some
dis/advantages of each.  Finally I'll give examples of a few common and
a high end workloads, one or more storage architectures suitable for
each and why, and how XFS should be configured optimally for each
workload and stack combination WRT geometry, AGs, etc.  I could also
touch on elevator selection and other common kernel tweaks often needed
with XFS.

I could provide a workload example with each RAID level/storage
architecture in lieu of the separate workload section.  Many readers
would probably like to see it presented in that manner as they often
start at the wrong end of the tunnel.  However, that would be
antithetical to the assertion that the workload drives the stack design,
which is a concept we want to reinforce as often as possible I think.
So I think the former 3 section layout is better.

I should be able to knock most of this out fairly quickly, but I'll need
help on some of it.  For example I don't have any first hand experience
with large high end workloads.  I could make up a plausible theoretical
example but I'd rather have as many real-world workloads as possible.
What I have in mind for workload examples is something like the
following.  It would be great if list members who have one of the workloads
below would contribute their details and pointers, any secret sauce,
etc.  Thus when we refer someone to this document they know they're
reading of an actual real world production configuration.  Though I
don't plan to name sites, people, etc, just the technical configurations.

1.  Small file, highly parallel, random IO
 -- mail queue, maildir mailbox storage
 -- HPC, filesystem as a database
 -- ??

2.  Virtual machine consolidation w/mixed guest workload

3.  Large scale database
 -- transactional
 -- warehouse, data mining
 -- ??

4.  High bandwidth parallel streaming
 -- video ingestion/playback
 -- satellite data capture
 -- other HPC ??

5.  Large scale NFS server, mixed client workload

Lemme know if this is ok or if you'd like it to take a different
direction, if you have better or additional example workload classes,
etc.  If mostly ok, I'll get started on the first 2 sections and fill in
the 3rd as people submit examples.

-- 
Stan


_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: relationship of nested stripe sizes, was: Question regarding XFS on LVM over hardware RAID.
  2014-02-03  4:39           ` Stan Hoeppner
@ 2014-02-03  5:24             ` Dave Chinner
  2014-02-03  9:36               ` Stan Hoeppner
  0 siblings, 1 reply; 12+ messages in thread
From: Dave Chinner @ 2014-02-03  5:24 UTC (permalink / raw)
  To: Stan Hoeppner; +Cc: Chris Murphy, xfs

On Sun, Feb 02, 2014 at 10:39:18PM -0600, Stan Hoeppner wrote:
> On 2/2/2014 3:30 PM, Dave Chinner wrote:
> > On Sun, Feb 02, 2014 at 11:09:11AM -0700, Chris Murphy wrote:
> >> On Feb 1, 2014, at 2:44 PM, Stan Hoeppner <stan@hardwarefreak.com>
> >> wrote:
> >>> On 2/1/2014 2:55 PM, Chris Murphy wrote:
> >>>> On Feb 1, 2014, at 11:47 AM, Stan Hoeppner
> >>>> <stan@hardwarefreak.com> wrote:
> >>> When nesting stripes, the chunk size of the outer stripe is
> >>> -always- equal to the stripe width of each inner striped array,
> >>> as I clearly demonstrated earlier:
> >>
> >> Except when it's hardware raid6, and software raid0, and the user
> >> doesn't know they need to specify the chunk size in this manner.
> >> And instead they use the mdadm default. What you're saying makes
> >> complete sense, but I don't think this is widespread knowledge or
> >> well documented anywhere that regular end users would know this by
> >> and large.
> > 
> > And that is why this is a perfect example of what I'd like to see
> > people writing documentation for.
> > 
> > http://oss.sgi.com/archives/xfs/2013-12/msg00588.html
> > 
> > This is not the first time we've had this nested RAID discussion,
> > nor will it be the last. However, being able to point to a web page
> > or documentation makes it a whole lot easier.....
> > 
> > Stan - any chance you might be able to spare an hour a week to write
> > something about optimal RAID storage configuration for XFS?
> 
> I could do more, probably rather quickly.  What kind of scope, format,
> style?  Should this be structured as reference manual style
> documentation, FAQ, blog??  I'm leaning more towards reference style.

Agreed - reference style is probably best. As for format style, I'm
tending towards a simple, text editor friendly markup like asciidoc.
From there we can use it to generate PDFs, wiki documentation, etc
and so make it available in whatever format is convenient.

(Oh, wow, 'apt-get install asciidoc' wants to pull in about 1.1GB of
dependencies)
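
(For reference, and assuming the stock toolchain and a made-up
filename, generating output is just:

$ asciidoc -b html5 xfs-storage.asciidoc   # standalone HTML
$ a2x -f pdf xfs-storage.asciidoc          # PDF via DocBook, needs dblatex
)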

> How about starting with a lead-in explaining why the workload should
> always drive storage architecture.  Then I'll describe the various
> standard and nested RAID levels, concatenations, etc and some
> dis/advantages of each.  Finally I'll give examples of a few common and
> a high end workloads, one or more storage architectures suitable for
> each and why, and how XFS should be configured optimally for each
> workload and stack combination WRT geometry, AGs, etc. 

That sounds like a fine plan.

The only thing I can think of that is obviously missing from this is
the process of problem diagnosis. e.g. what to do when something
goes wrong. The most common mistake we see is trying to repair
the filesystem when the storage is still broken and making a bigger
mess. Having something that describes what to look for (e.g. raid
reconstruction getting disks out of order) and how to recover from
problems with as little risk and data loss as possible would be
invaluable.
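
For example, a non-destructive first pass might look something like
this (just a sketch, filesystem unmounted, paths made up):

$ xfs_metadump /dev/sdX1 /tmp/fs.metadump   # metadata-only image to experiment on
$ xfs_mdrestore /tmp/fs.metadump /tmp/fs.img
$ xfs_repair -n /tmp/fs.img                 # -n reports problems, changes nothing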

> I could also touch on elevator selection and other common kernel
> tweaks often needed with XFS.

I suspect you'll need to deal with elevators and IO schedulers and
the impact of BBWC on reordering and merging early on in the storage
architecture discussion. ;)
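
(e.g. even just checking and switching the elevator on the backing
device - illustrative device name:

$ cat /sys/block/sdb/queue/scheduler           # active scheduler shown in brackets
$ echo noop > /sys/block/sdb/queue/scheduler   # let the BBWC handle reordering/merging

is the sort of thing that belongs there.)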

As for kernel tweaks outside the storage stack, I wouldn't bother
right now - we can always add it later if it's appropriate.

> I could provide a workload example with each RAID level/storage
> architecture in lieu of the separate workload section.  Many readers
> would probably like to see it presented in that manner as they often
> start at the wrong end of the tunnel.  However, that would be
> antithetical to the assertion that the workload drives the stack design,
> which is a concept we want to reinforce as often as possible I think.
> So I think the former 3 section layout is better.

Rearranging text is much easier than writing it in the first place,
so I think we can worry about that once the document starts to take
place.

> I should be able to knock most of this out fairly quickly, but I'll need
> help on some of it.  For example I don't have any first hand experience
> with large high end workloads.  I could make up a plausible theoretical
> example but I'd rather have as many real-world workloads as possible.
> What I have in mind for workload examples is something like the
> following.  It would be great if list members who have one of the workloads
> below would contribute their details and pointers, any secret sauce,
> etc.  Thus when we refer someone to this document they know they're
> reading of an actual real world production configuration.  Though I
> don't plan to name sites, people, etc, just the technical configurations.

1. General purpose (i.e. unspecialised) configuration that should be
good for most users.

> 
> 1.  Small file, highly parallel, random IO
>  -- mail queue, maildir mailbox storage
>  -- HPC, filesystem as a database
>  -- ??

The hot topic of the moment that fits into this category is object
stores for distributed storage. i.e. gluster and ceph running
openstack storage layers like swift to store large numbers of
pictures of cats.

> 2.  Virtual machine consolidation w/mixed guest workload

There's a whole lot of stuff here that is dependent on exactly how
the VM infrastructure is set up, so this might be difficult to
simplify enough to be useful.

> 3.  Large scale database
>  -- transactional
>  -- warehouse, data mining

They are actually two very different workloads. Data mining is
really starting to move towards distributed databases that
specialise in high bandwidth sequential IO so I'm not sure that it
really is any different these days to a traditional HPC
application in terms of IO...

> 4.  High bandwidth parallel streaming
>  -- video ingestion/playback
>  -- satellite data capture
>  -- other HPC ??

Large scale data archiving (i.e. write-once workloads), pretty much
anything HPC...

> 5.  Large scale NFS server, mixed client workload

I'd just say large scale NFS server, because - apart from modifying
the structure to suit NFS access patterns - the underlying storage config
is still going to be driven by the dominant workloads.

> Lemme know if this is ok or if you'd like it to take a different
> direction, if you have better or additional example workload classes,
> etc.  If mostly ok, I'll get started on the first 2 sections and fill in
> the 3rd as people submit examples.

It sounds good to me - I think that the first 2 sections are the
core of the work - it's the theory that is in our heads (i.e. the
black magic) that is simply not documented in a way that people can
use.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: relationship of nested stripe sizes, was: Question regarding XFS on LVM over hardware RAID.
  2014-02-03  5:24             ` Dave Chinner
@ 2014-02-03  9:36               ` Stan Hoeppner
  2014-02-03 21:54                 ` Dave Chinner
  0 siblings, 1 reply; 12+ messages in thread
From: Stan Hoeppner @ 2014-02-03  9:36 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs

On 2/2/2014 11:24 PM, Dave Chinner wrote:
> On Sun, Feb 02, 2014 at 10:39:18PM -0600, Stan Hoeppner wrote:
>> On 2/2/2014 3:30 PM, Dave Chinner wrote:
...
>>> And that is why this is a perfect example of what I'd like to see
>>> people writing documentation for.
>>>
>>> http://oss.sgi.com/archives/xfs/2013-12/msg00588.html
>>>
>>> This is not the first time we've had this nested RAID discussion,
>>> nor will it be the last. However, being able to point to a web page
>>> or documentation makes it a whole lot easier.....
>>>
>>> Stan - any chance you might be able to spare an hour a week to write
>>> something about optimal RAID storage configuration for XFS?
>>
>> I could do more, probably rather quickly.  What kind of scope, format,
>> style?  Should this be structured as reference manual style
>> documentation, FAQ, blog??  I'm leaning more towards reference style.
> 
> Agreed - reference style is probably best. As for format style, I'm
> tending towards a simple, text editor friendly markup like asciidoc.
> From there we can use it to generate PDFs, wiki documentation, etc
> and so make it available in whatever format is convenient.

Works for me, I'm a plain text kinda guy.

> (Oh, wow, 'apt-get install asciidoc' wants to pull in about 1.1GB of
> dependencies)
> 
>> How about starting with a lead-in explaining why the workload should
>> always drive storage architecture.  Then I'll describe the various
>> standard and nested RAID levels, concatenations, etc and some
>> dis/advantages of each.  Finally I'll give examples of a few common and
>> a high end workloads, one or more storage architectures suitable for
>> each and why, and how XFS should be configured optimally for each
>> workload and stack combination WRT geometry, AGs, etc. 
> 
> That sounds like a fine plan.
> 
> The only thing I can think of that is obviously missing from this is
> the process of problem diagnosis. e.g. what to do when something
> goes wrong. The most common mistake we see is trying to repair
> the filesystem when the storage is still broken and making a bigger
> mess. Having something that describes what to look for (e.g. raid
> reconstruction getting disks out of order) and how to recover from
> problems with as little risk and data loss as possible would be
> invaluable.

Ahh ok.  So you're going for the big scope described in your Dec 13
email, not the paltry "optimal RAID storage configuration for XFS"
described above.  Now I understand the 1 hour a week question. :)

I'll brain dump as much as I can, in a hopefully somewhat coherent
starting doc.  I'll do my best starting the XFS troubleshooting part,
but I'm much weaker here than with XFS architecture and theory.

>> I could also touch on elevator selection and other common kernel
>> tweaks often needed with XFS.
> 
> I suspect you'll need to deal with elevators and IO schedulers and
> the impact of BBWC on reordering and merging early on in the storage
> architecture discussion. ;)

Definitely bigger scope than I originally was thinking, but I'm all in.

> As for kernel tweaks outside the storage stack, I wouldn't bother
> right now - we can always add it later if it's appropriate.

'k

>> I could provide a workload example with each RAID level/storage
>> architecture in lieu of the separate workload section.  Many readers
>> would probably like to see it presented in that manner as they often
>> start at the wrong end of the tunnel.  However, that would be
>> antithetical to the assertion that the workload drives the stack design,
>> which is a concept we want to reinforce as often as possible I think.
>> So I think the former 3 section layout is better.
> 
> Rearranging text is much easier than writing it in the first place,
> so I think we can worry about that once the document starts to take
> place.

Yep.

>> I should be able to knock most of this out fairly quickly, but I'll need
>> help on some of it.  For example I don't have any first hand experience
>> with large high end workloads.  I could make up a plausible theoretical
>> example but I'd rather have as many real-world workloads as possible.
>> What I have in mind for workload examples is something like the
>> following.  It would be great if list members who have one the workloads
>> below would contribute their details and pointers, any secret sauce,
>> etc.  Thus when we refer someone to this document they know they're
>> reading of an actual real world production configuration.  Though I
>> don't plan to name sites, people, etc, just the technical configurations.
> 
> 1. General purpose (i.e. unspecialised) configuration that should be
> good for most users.

Format with XFS defaults.  Done. :)

What detail should go with this?  Are you thinking SOHO server here,
single disk web server.  Anything with low IO rate and a smallish disk/RAID?

>> 1.  Small file, highly parallel, random IO
>>  -- mail queue, maildir mailbox storage
>>  -- HPC, filesystem as a database
>>  -- ??
> 
> The hot topic of the moment that fits into this category is object
> stores for distributed storage. i.e. gluster and ceph running
> openstack storage layers like swift to store large numbers of
> pictures of cats.

The direction I was really wanting to go here is highlighting the
difference between striped RAID and linear concat, how XFS AG
parallelism on concat can provide better performance than striping for
some workloads, and why.  For a long time I've wanted to create a
document about this with graphs containing "disk silo" icons, showing
the AGs spanning the striped RAID horizontally and spanning the concat
disks vertically, explaining the difference in seek patterns and how
they affect a random IO workload.

Maybe I should make concat a separate topic entirely, as it can benefit
multiple workload types, from the smallest to the largest storage
setups.  XFS' ability to scale IO throughput nearly infinitely over
concatenated storage is unique to Linux, and fairly unique to
filesystems in general TTBOMK.  It is one of its greatest strengths.
I'd like to cover this in good detail.
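
A bare-bones sketch of what I mean, with made-up device names and
equal-sized legs (not a recipe):

# linear concat of 3 identical hardware RAID6 LUNs instead of an outer stripe
$ mdadm --create /dev/md0 --level=linear --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
# align to the *inner* RAID6 geometry; agcount a multiple of the number
# of legs so whole AGs land on each leg
$ mkfs.xfs -d su=128k,sw=12,agcount=12 /dev/md0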

>> 2.  Virtual machine consolidation w/mixed guest workload
> 
> There's a whole lot of stuff here that is dependent on exactly how
> the VM infrastructure is set up, so this might be difficult to
> simplify enough to be useful.

I was thinking along the lines of consolidating lots of relatively low
IO throughput guests with thin provisioning, like VPS hosting.  For
instance a KVM host and a big XFS, sparse files exported to Linux guests
as drives.  Maybe nobody is doing this with XFS.

>> 3.  Large scale database
>>  -- transactional
>>  -- warehouse, data mining
> 
> They are actually two very different workloads. Data mining is
> really starting to move towards distributed databases that
> specialise in high bandwidth sequential IO so I'm not sure that it
> really is any different these days to a traditional HPC
> application in terms of IO...

Yeah, I wasn't sure if anyone was still doing it on single hosts at scale
but threw it in just in case.  The big TPC-H systems have all been
clusters with shared nothing storage for about a decade.

>> 4.  High bandwidth parallel streaming
>>  -- video ingestion/playback
>>  -- satellite data capture
>>  -- other HPC ??
> 
> Large scale data archiving (i.e. write-once workloads), pretty much
> anything HPC...
> 
>> 5.  Large scale NFS server, mixed client workload
> 
> I'd just say large scale NFS server, because - apart from modifying
> the structure to suit NFS access patterns - the underlying storage config
> is still going to be driven by the dominant workloads.

I figured you'd write this section since you have more experience with
big NFS than anyone here.

>> Lemme know if this is ok or if you'd like it to take a different
>> direction, if you have better or additional example workload classes,
>> etc.  If mostly ok, I'll get started on the first 2 sections and fill in
>> the 3rd as people submit examples.
> 
> It sounds good to me - I think that the first 2 sections are the
> core of the work - it's the theory that is in our heads (i.e. the
> black magic) that is simply not documented in a way that people can
> use.

Agreed.

I'll get started.

-- 
Stan

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: relationship of nested stripe sizes, was: Question regarding XFS on LVM over hardware RAID.
  2014-02-02 18:09       ` Chris Murphy
  2014-02-02 21:30         ` Dave Chinner
@ 2014-02-03 10:50         ` Stan Hoeppner
  1 sibling, 0 replies; 12+ messages in thread
From: Stan Hoeppner @ 2014-02-03 10:50 UTC (permalink / raw)
  To: Chris Murphy; +Cc: xfs

On 2/2/2014 12:09 PM, Chris Murphy wrote:
> 
> On Feb 1, 2014, at 2:44 PM, Stan Hoeppner <stan@hardwarefreak.com>
> wrote:
> 
>> On 2/1/2014 2:55 PM, Chris Murphy wrote:
>>> 
>>> On Feb 1, 2014, at 11:47 AM, Stan Hoeppner
>>> <stan@hardwarefreak.com> wrote:
>>> 
>>>> On 1/31/2014 12:35 AM, Chris Murphy wrote:
>>>>> Hopefully this is an acceptable way to avoid thread jacking,
>>>>> by renaming the  subject…
>>>>> 
>>>>> On Jan 30, 2014, at 10:58 PM, Stan Hoeppner 
>>>>> <stan@hardwarefreak.com> wrote:
>>>>>> 
>>>>>> RAID60 is a nested RAID level just like RAID10 and RAID50.
>>>>>> It is a stripe, or RAID0, across multiple primary array
>>>>>> types, RAID6 in this case.  The stripe width of each
>>>>>> 'inner' RAID6 becomes the stripe unit of the 'outer' RAID0
>>>>>> array:
>>>>>> 
>>>>>> RAID6 geometry  128KB * 12 = 1536KB
>>>>>> RAID0 geometry 1536KB *  3 = 4608KB
>>>>> 
>>>>> My question is on this particular point. If this were
>>>>> hardware raid6, but I wanted to then stripe using md raid0,
>>>>> using the numbers above would I choose a raid0 chunk size of
>>>>> 1536KB? How critical is this value for, e.g. only large
>>>>> streaming read/write workloads? If it were smaller, say 256KB
>>>>> or even 32KB, would there be a significant performance
>>>>> consequence?
>>>> 
>>>> You say 'if it were smaller...256/32KB'.  What is "it" 
>>>> referencing?
>>> 
>>> it = chunk size for md raid0.
>>> 
>>> So chunk size 128KB * 12 disks, hardware raid6. Chunk size 32KB
>>> [1] striping the raid6's with md raid0.
>> 
>> Frankly, I don't know whether you're pulling my chain, or really
>> don't understand the concept of nested striping.  I'll assume the
>> latter.
> 
> The former would be inappropriate, and the latter is more plausible
> anyway, so this is the better assumption.
> 
> 
>> When nesting stripes, the chunk size of the outer stripe is
>> -always- equal to the stripe width of each inner striped array, as
>> I clearly demonstrated earlier:
> 
> Except when it's hardware raid6, and software raid0, and the user
> doesn't know they need to specify the chunk size in this manner. And
> instead they use the mdadm default. What you're saying makes complete
> sense, but I don't think this is widespread knowledge or well
> documented anywhere that regular end users would know this by and
> large.

This is not widespread knowledge and is not well documented.  And by
definition "regular end users" are not creating RAID60 arrays.  In the
not too distant future there will be a little more information available
about proper geometry setup for RAID60.

>> 3 RAID6 arrays 
>> RAID6  geometry	 128KB * 12 = 1536KB
>> RAID60 geometry	1536KB *  3 = 4608KB
>> 
>> mdadm allows you enough rope to hang yourself in this situation
>> because it doesn't know the geometry of the underlying hardware
>> arrays, and has no code to do sanity checking even if it did.  Thus
>> it can't save you from yourself.
> 
> That's right, and this is the exact scenario I'm suggesting.
> Depending on version, mdadm has two possible default chunk sizes,
> either 64KB or 512KB.
> 
> How bad is the resulting performance hit? Would a 64KB chunk be
> equally bad as a 512KB chunk? Or is this only quantifiable with
> testing (i.e. it could be a negligible performance hit, or it could
> be huge)?

...
>> RAID HBA and SAN controller firmware simply won't allow this.
>> They configure the RAID60 chunk size automatically equal to the
>> RAID6 stripe width.  If some vendor's firmware allows one to
>> manually enter the RAID60 chunk size with a value different from
>> the RAID6 stripe width, stay away from that vendor.
> 
> I understand that, but the scenario and question I'm posing is for
> multiple hardware raid6's striped with md raid0. The use case is
> enclosures with raid6 but not raid60, so the enclosures are striped
> using software raid. I'm trying to understand the consequence
> magnitude when choosing an md raid0 chunk size other than the correct
> one. Is this a 5% performance hit, or a 30% performance hit?

You must answer other questions before you can answer those above, as
this is entirely workload dependent, as everything always is.  It also
depends on whether or how you aligned XFS.  The wrong outer stripe chunk
size may badly hurt performance, or it may not affect it much at all.
Depends on what/how you're writing, and the demands of the workload.
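
Checking what you actually have is cheap, e.g. (illustrative paths):

$ mkfs.xfs -N -d su=1536k,sw=3 /dev/md0   # -N prints the geometry, writes nothing
$ xfs_info /mountpoint                    # sunit/swidth of an existing fs, in fs blocks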

-- 
Stan

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: relationship of nested stripe sizes, was: Question regarding XFS on LVM over hardware RAID.
  2014-02-03  9:36               ` Stan Hoeppner
@ 2014-02-03 21:54                 ` Dave Chinner
  2014-02-04  7:06                   ` documentation framework [was Re: relationship of nested stripe sizes, was: Question regarding XFS on LVM over hardware RAID.] Dave Chinner
  0 siblings, 1 reply; 12+ messages in thread
From: Dave Chinner @ 2014-02-03 21:54 UTC (permalink / raw)
  To: Stan Hoeppner; +Cc: xfs

On Mon, Feb 03, 2014 at 03:36:01AM -0600, Stan Hoeppner wrote:
> On 2/2/2014 11:24 PM, Dave Chinner wrote:
> > On Sun, Feb 02, 2014 at 10:39:18PM -0600, Stan Hoeppner wrote:
> >> On 2/2/2014 3:30 PM, Dave Chinner wrote:
> ...
> >>> And that is why this is a perfect example of what I'd like to see
> >>> people writing documentation for.
> >>>
> >>> http://oss.sgi.com/archives/xfs/2013-12/msg00588.html
> >>>
> >>> This is not the first time we've had this nested RAID discussion,
> >>> nor will it be the last. However, being able to point to a web page
> >>> or documentation makes it a whole lot easier.....
> >>>
> >>> Stan - any chance you might be able to spare an hour a week to write
> >>> something about optimal RAID storage configuration for XFS?
> >>
> >> I could do more, probably rather quickly.  What kind of scope, format,
> >> style?  Should this be structured as reference manual style
> >> documentation, FAQ, blog??  I'm leaning more towards reference style.
> > 
> > Agreed - reference style is probably best. As for format style, I'm
> > tending towards a simple, text editor friendly markup like asciidoc.
> > From there we can use it to generate PDFs, wiki documentation, etc
> > and so make it available in whatever format is convenient.
> 
> Works for me, I'm a plain text kinda guy.

Ok, I'll put together a basic repository and build framework for us
to work from.

> > The only thing I can think of that is obviously missing from this is
> > the process of problem diagnosis. e.g. what to do when something
> > goes wrong. The most common mistake we see is trying to repair
> > the filesystem when the storage is still broken and making a bigger
> > mess. Having something that describes what to look for (e.g. raid
> > reconstruction getting disks out of order) and how to recover from
> > problems with as little risk and data loss as possible would be
> > invaluable.
> 
> Ahh ok.  So you're going for the big scope described in your Dec 13
> email, not the paltry "optimal RAID storage configuration for XFS"
> described above.  Now I understand the 1 hour a week question. :)

Well, that's where I'd like such a document to end up. Let's plan
for the big picture, but start with small chunks of work that slowly
fill in the big picture.

> I'll brain dump as much as I can, in a hopefully somewhat coherent
> starting doc.  I'll do my best starting the XFS troubleshooting part,
> but I'm much weaker here than with XFS architecture and theory.

That's fine, I don't expect you to write everything ;)

> >> I should be able to knock most of this out fairly quickly, but I'll need
> >> help on some of it.  For example I don't have any first hand experience
> >> with large high end workloads.  I could make up a plausible theoretical
> >> example but I'd rather have as many real-world workloads as possible.
> >> What I have in mind for workload examples is something like the
> >> following.  It would be great if list members who have one of the workloads
> >> below would contribute their details and pointers, any secret sauce,
> >> etc.  Thus when we refer someone to this document they know they're
> >> reading of an actual real world production configuration.  Though I
> >> don't plan to name sites, people, etc, just the technical configurations.
> > 
> > 1. General purpose (i.e. unspecialised) configuration that should be
> > good for most users.
> 
> Format with XFS defaults.  Done. :)
> 
> What detail should go with this?  Are you thinking SOHO server here,
> single disk web server.  Anything with low IO rate and a smallish disk/RAID?

I'm not really thinking of a specific configuration here. This is
more a case of "don't [want to] care about optimisation" or "don't
know enough about the workload to optimise it", etc. So not really a
specific configuration, but more of "get the basics right"
configuration guideline.

> >> 1.  Small file, highly parallel, random IO
> >>  -- mail queue, maildir mailbox storage
> >>  -- HPC, filesystem as a database
> >>  -- ??
> > 
> > The hot topic of the moment that fits into this category is object
> > stores for distributed storage. i.e. gluster and ceph running
> > openstack storage layers like swift to store large numbers of
> > pictures of cats.
> 
> The direction I was really wanting to go here is highlighting the
> difference between striped RAID and linear concat, how XFS AG
> parallelism on concat can provide better performance than striping for
> some workloads, and why.  For a long time I've wanted to create a
> document about this with graphs containing "disk silo" icons, showing
> the AGs spanning the striped RAID horizontally and spanning the concat
> disks vertically, explaining the difference in seek patterns and how
> they affect a random IO workload.

I would expect that to be part of the "theory of operation" section
more than an example.

> Maybe I should make concat a separate topic entirely, as it can benefit
> multiple workload types, from the smallest to the largest storage
> setups.  XFS' ability to scale IO throughput nearly infinitely over
> concatenated storage is unique to Linux, and fairly unique to
> filesystems in general TTBOMK.  It is one of its greatest strengths.
> I'd like to cover this in good detail.

Right - and that's one of the reasons I mentioned that the NFS
server setups should be dealt with specially, as they are prime
candidates for optimisation via linear concatenation of RAID
stripes rather than nested stripes....

> >> 2.  Virtual machine consolidation w/mixed guest workload
> > 
> > There's a whole lot of stuff here that is dependent on exactly how
> > the VM infrastructure is set up, so this might be difficult to
> > simplify enough to be useful.
> 
> I was thinking along the lines of consolidating lots of relatively low
> IO throughput guests with thin provisioning, like VPS hosting.  For
> instance a KVM host and a big XFS, sparse files exported to Linux guests
> as drives.  Maybe nobody is doing this with XFS.

If you consider me nobody, then nobody is doing that. All my test
VMs are hosted this way using either sparse or preallocated files.
Make sure that you describe the use of extent size hints for sparse
image files to minimise fragmentation....
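
e.g. something like this on the image directory, so new files inherit
the hint (sizes and paths made up):

$ xfs_io -c "extsize 16m" /vm/images       # new files in the dir inherit a 16MB hint
$ truncate -s 40G /vm/images/guest0.img    # sparse backing file for a guest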

> >> Lemme know if this is ok or if you'd like it to take a different
> >> direction, if you have better or additional example workload classes,
> >> etc.  If mostly ok, I'll get started on the first 2 sections and fill in
> >> the 3rd as people submit examples.
> > 
> > It sounds good to me - I think that the first 2 sections are the
> > core of the work - it's the theory that is in our heads (i.e. the
> > black magic) that is simply not documented in a way that people can
> > use.
> 
> Agreed.
> 
> I'll get started.

Ok, I'll get a repo and build skeleton together, and we can go from
there.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 12+ messages in thread

* documentation framework [was Re: relationship of nested stripe sizes, was: Question regarding XFS on LVM over hardware RAID.]
  2014-02-03 21:54                 ` Dave Chinner
@ 2014-02-04  7:06                   ` Dave Chinner
  0 siblings, 0 replies; 12+ messages in thread
From: Dave Chinner @ 2014-02-04  7:06 UTC (permalink / raw)
  To: Stan Hoeppner; +Cc: xfs

[-- Attachment #1: Type: text/plain, Size: 2526 bytes --]

On Tue, Feb 04, 2014 at 08:54:11AM +1100, Dave Chinner wrote:
> On Mon, Feb 03, 2014 at 03:36:01AM -0600, Stan Hoeppner wrote:
> > On 2/2/2014 11:24 PM, Dave Chinner wrote:
> > > On Sun, Feb 02, 2014 at 10:39:18PM -0600, Stan Hoeppner wrote:
> > >> On 2/2/2014 3:30 PM, Dave Chinner wrote:
> > ...
> > >>> And that is why this is a perfect example of what I'd like to see
> > >>> people writing documentation for.
> > >>>
> > >>> http://oss.sgi.com/archives/xfs/2013-12/msg00588.html
> > >>>
> > >>> This is not the first time we've had this nested RAID discussion,
> > >>> nor will it be the last. However, being able to point to a web page
> > >>> or documentation makes it a whole lot easier.....
> > >>>
> > >>> Stan - any chance you might be able to spare an hour a week to write
> > >>> something about optimal RAID storage configuration for XFS?
> > >>
> > >> I could do more, probably rather quickly.  What kind of scope, format,
> > >> style?  Should this be structured as reference manual style
> > >> documentation, FAQ, blog??  I'm leaning more towards reference style.
> > > 
> > > Agreed - reference style is probably best. As for format style, I'm
> > > tending towards a simple, text editor friendly markup like asciidoc.
> > > From there we can use it to generate PDFs, wiki documentation, etc
> > > and so make it available in whatever format is convenient.
> > 
> > Works for me, I'm a plain text kinda guy.
> 
> Ok, I'll put together a basic repository and build framework for us
> to work from.

Stan, run:

$ git clone git://oss.sgi.com/xfs/xfs-documentation

to get a tree that contains the framework for building asciidoc
files into html and pdf format. Note that you can't find it from
gitweb of oss.sgi.com yet because I don't have permissions to add it
to the project list, but you can browse it directly from:

http://oss.sgi.com/cgi-bin/gitweb.cgi/xfs/xfs-documentation


I've attached the pdf output of the document I quickly converted
from text to asciidoc format so you can get an idea of how easy it
is to convert plain txt to asciidoc, and how good the output from
just using defaults and simple markup is.

A good cheat sheet for asciidoc markup can be found here:

http://powerman.name/doc/asciidoc

Time to start learning how to use guilt and sending patches. ;)

FWIW, where do we want to locate your new document? It is end
user/admin documentation, so perhaps a new user/ subdirectory to
indicate the target of the documentation?

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

[-- Attachment #2: xfs-delayed-logging-design.pdf --]
[-- Type: application/pdf, Size: 107844 bytes --]

[-- Attachment #3: Type: text/plain, Size: 121 bytes --]

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2014-02-04  8:11 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-01-31  6:35 relationship of nested stripe sizes, was: Question regarding XFS on LVM over hardware RAID Chris Murphy
2014-02-01 18:47 ` Stan Hoeppner
2014-02-01 20:55   ` Chris Murphy
2014-02-01 21:44     ` Stan Hoeppner
2014-02-02 18:09       ` Chris Murphy
2014-02-02 21:30         ` Dave Chinner
2014-02-03  4:39           ` Stan Hoeppner
2014-02-03  5:24             ` Dave Chinner
2014-02-03  9:36               ` Stan Hoeppner
2014-02-03 21:54                 ` Dave Chinner
2014-02-04  7:06                   ` documentation framework [was Re: relationship of nested stripe sizes, was: Question regarding XFS on LVM over hardware RAID.] Dave Chinner
2014-02-03 10:50         ` relationship of nested stripe sizes, was: Question regarding XFS on LVM over hardware RAID Stan Hoeppner
