From: Stan Hoeppner <stan@hardwarefreak.com>
To: "C. Morgan Hamill" <chamill@wesleyan.edu>, xfs <xfs@oss.sgi.com>
Subject: Re: Question regarding XFS on LVM over hardware RAID.
Date: Tue, 18 Feb 2014 17:07:24 -0600	[thread overview]
Message-ID: <5303E7AC.50903@hardwarefreak.com> (raw)
In-Reply-To: <1392748390-sup-1943@al.wesleyan.edu>

On 2/18/2014 1:44 PM, C. Morgan Hamill wrote:
> Howdy, sorry for digging up this thread, but I've run into an issue
> again, and am looking for advice.
> 
> Excerpts from Stan Hoeppner's message of 2014-02-04 03:00:54 -0500:
>> After a little digging and thinking this through...
>>
>> The default PE size is 4MB, with a maximum of 16GB under LVM1 and
>> apparently no limit under LVM2.  A PE can thus be a few thousand times
>> larger than any sane stripe width.  This makes it pretty clear that PEs
>> exist strictly for volume management operations, used by the LVM tools,
>> but have no relationship to regular write IOs.  Thus the PE size need
>> not match, nor be evenly divisible by, the stripe width.  It's not part
>> of the alignment equation.
> 
> So in the course of actually going about this, I realized that this
> is not true (I think).

Two different issues.

> Logical volumes can only have sizes that are multiples of the physical
> extent size (by definition, really), so there's no way to have logical
> volumes end on a multiple of the array's stripe width: given my stripe
> width of 9216 sectors, there doesn't seem to be an abundance of integer
> solutions to 2^n mod 9216 = 0.
> 
> So my question is, then, does it matter if logical volumes (or, really,
> XFS file systems) actually end right on a multiple of the stripe width,
> or only that it _begin_ on a multiple of it (leaving a bit of dead space
> before the next logical volume)?

Create each LV starting on a stripe boundary.  There will be some
unallocated space between LVs.  Then use the mkfs.xfs -d size= option to
create your filesystem inside each LV such that the total filesystem
size is evenly divisible by the stripe width.  This leaves a small
additional amount of unallocated space within each LV, at its end.
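
Something like this, purely as a sketch: the VG/LV names are made up,
and the numbers assume your 9216-sector stripe width comes from 12 data
spindles with a 384 KiB stripe unit (512-byte sectors); substitute the
real geometry of your array.

  # Illustrative names and geometry only.
  SW_BYTES=$((4608 * 1024))               # 9216 sectors = 4718592 bytes

  # Aligning the PV data area to the stripe width puts the first LV on
  # a stripe boundary (assumes LVM2's --dataalignment option).
  pvcreate --dataalignment 4608k /dev/sdX
  vgcreate vg_backup /dev/sdX
  lvcreate -L 2t -n store1 vg_backup

  # Round the filesystem size down to an exact multiple of the stripe
  # width; the remainder stays as unallocated space at the end of the LV.
  LV_BYTES=$(blockdev --getsize64 /dev/vg_backup/store1)
  FS_BYTES=$(( LV_BYTES / SW_BYTES * SW_BYTES ))

  mkfs.xfs -d size=${FS_BYTES},su=384k,sw=12 /dev/vg_backup/store1

Each subsequent LV then needs to start on the next stripe boundary,
leaving the small gap after the previous one as described above.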

It's nice if you can line everything up, but when using RAID6 and one or
two bays for hot spares, one rarely ends up with 8 or 16 data spindles.

> If not, I'll tweak things to ensure my stripe width is a power of 2.

That's not possible with 12 data spindles per RAID6, and not possible
with 42 drives across 3 chassis.  Not without leaving a bunch of drives
idle.
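
Spelling out the arithmetic: 9216 = 2^10 x 9, so no power of two is a
multiple of your stripe width.  With any normal power-of-two stripe
unit, a power-of-two stripe width requires a power-of-two number of
data spindles (8, 16, 32, ...), which neither 12 per RAID6 nor 36 in
total gives you.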

I still don't understand why you believe you need LVM in the mix, and
more than one filesystem.

>  - I need to expose, in the end, three-ish (two or four would be OK)
>    filesystems to the backup software, which should come fairly close
>    to minimizing the effects of the archive maintenance jobs (integrity
>    checks, mostly).  CrashPlan will spawn 2 jobs per store point, so
>    a max of 8 at any given time should be a nice balance between
>    under-utilizing and saturating the IO.

Backup software is unaware of mount points.  It uses paths just like
every other program.  The number of XFS filesystems is irrelevant to
"minimizing the effects of the archive maintenance jobs".  You cannot
bog down XFS.  You will bog down the drives no matter how many
filesystems you use when they all live on the same RAID60.

Here is what you should do:

Format the RAID60 directly with XFS.  Create 3 or 4 directories for
CrashPlan to use as its "store points".  If you need to expand in the
future, as I said previously, simply add another 14-drive RAID6 chassis,
format it directly with XFS, mount it at an appropriate place in the
directory tree and give that path to CrashPlan.  Does it have a limit on
the number of "store points"?
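
A rough sketch of that layout; device names and paths are hypothetical,
and the su/sw values are only one plausible mapping (384 KiB chunk, 36
data spindles across the three RAID6 legs), so check them against what
your controller actually presents:

  # One XFS filesystem across the whole RAID60.
  mkfs.xfs -d su=384k,sw=36 /dev/sdb
  mkdir -p /backup
  mount /dev/sdb /backup
  mkdir /backup/store1 /backup/store2 /backup/store3

  # Later expansion: a new 14-drive RAID6 chassis (12 data spindles)
  # gets its own filesystem, mounted under the same tree.
  mkfs.xfs -d su=384k,sw=12 /dev/sdc
  mkdir /backup/store4
  mount /dev/sdc /backup/store4

CrashPlan then gets those directories as its store points; only the
paths matter to it, not how many filesystems sit underneath.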

-- 
Stan


_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

Thread overview: 27+ messages
2014-01-29 14:26 Question regarding XFS on LVM over hardware RAID C. Morgan Hamill
2014-01-29 15:07 ` Eric Sandeen
2014-01-29 19:11   ` C. Morgan Hamill
2014-01-29 23:55     ` Stan Hoeppner
2014-01-30 14:28       ` C. Morgan Hamill
2014-01-30 20:28         ` Dave Chinner
2014-01-31  5:58           ` Stan Hoeppner
2014-01-31 21:14             ` C. Morgan Hamill
2014-02-01 21:06               ` Stan Hoeppner
2014-02-02 21:21                 ` Dave Chinner
2014-02-03 16:12                   ` C. Morgan Hamill
2014-02-03 21:41                     ` Dave Chinner
2014-02-04  8:00                       ` Stan Hoeppner
2014-02-18 19:44                         ` C. Morgan Hamill
2014-02-18 23:07                           ` Stan Hoeppner [this message]
2014-02-20 18:31                             ` C. Morgan Hamill
2014-02-21  3:33                               ` Stan Hoeppner
2014-02-21  8:57                                 ` Emmanuel Florac
2014-02-22  2:21                                   ` Stan Hoeppner
2014-02-25 17:04                                     ` C. Morgan Hamill
2014-02-25 17:17                                       ` Emmanuel Florac
2014-02-25 20:08                                       ` Stan Hoeppner
2014-02-26 14:19                                         ` C. Morgan Hamill
2014-02-26 17:49                                           ` Stan Hoeppner
2014-02-21 19:17                                 ` C. Morgan Hamill
2014-02-03 16:07                 ` C. Morgan Hamill
2014-01-29 22:40   ` Stan Hoeppner
