From: Stan Hoeppner <stan@hardwarefreak.com>
To: Drew <drew.kay@gmail.com>
Cc: Jim Santos <iluvgadgets@gmail.com>,
Linux RAID Mailing List <linux-raid@vger.kernel.org>
Subject: Re: "Missing" RAID devices
Date: Tue, 21 May 2013 17:06:59 -0500
Message-ID: <519BF003.5080005@hardwarefreak.com>
In-Reply-To: <CACJz6QvU_o=mbSwaUEdE3WyD2ytipBYPch7pjBuov=NoxofhJg@mail.gmail.com>

On 5/21/2013 4:02 PM, Drew wrote:
> On Tue, May 21, 2013 at 1:43 PM, Stan Hoeppner <stan@hardwarefreak.com> wrote:
>> On 5/21/2013 12:03 PM, Drew wrote:
>>> Hi Jim,
>>>
>>> The other question I'd ask is why do you have 10 raid1 arrays on those
>>> two disks?
>>
>> No joke. That setup is ridiculous. RAID exists to guard against a
>> drive failure, not as a substitute for volume management.
>>
>>> Given you have an initramfs, at most you should have separate
>>> partitions (raid'd) for /boot & root. Everything else should be broken
>>> down using LVM. Way more flexible to move things around in future as
>>> required.
>>
>> LVM isn't even required. Using partitions (atop MD) or a single large
>> filesystem (XFS) with quotas works just as well.
>
> Agreed. For simple setups, a single boot & root is just fine.
>
> I'd assumed the OP's reasons for using multiple partitions were valid,
> so keeping those partitions on top of a single RAID array meant LVM
> was the best choice.

We don't have enough information yet to make such a determination.
Multiple LVM devices may most closely mimic his current setup, but that
doesn't mean it's the best choice. It doesn't mean it isn't, either. We
simply haven't been informed why he was using 10 md/RAID1 devices. My
gut instinct says it's simply a lack of education, not a special
requirement.

The principal reason for such a setup is to prevent runaway processes
from filling the storage. Thus /var, which normally contains the logs
and mail spool, is often put on a separate partition. This problem can
also be addressed using filesystem quotas.
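
For example, on XFS a project quota can cap /var without giving it its
own device. This is only a sketch, not something from this thread: the
project ID/name and the 20g limit are arbitrary, and it assumes the
filesystem holding /var is mounted with the prjquota option:

  # map /var to an XFS quota project (ID and name chosen arbitrarily)
  echo "42:/var" >> /etc/projects
  echo "var:42"  >> /etc/projid

  # initialize the project, then set a hard 20GB cap on it
  xfs_quota -x -c 'project -s var' /
  xfs_quota -x -c 'limit -p bhard=20g var' /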

There is more than one way to skin a cat, as they say. If he's using
these 10 partitions simply for organizational purposes, then there's no
need for 10 LVM devices or for FS quotas on a single FS, but simply a
good directory hierarchy.
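
As a sketch of that simpler layout (device names and mount points here
are hypothetical, not taken from the OP's system):

  # one mirror across the pair instead of ten
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

  # one filesystem, organized with directories rather than devices
  mkfs.xfs /dev/md0
  mount /dev/md0 /srv
  mkdir -p /srv/www /srv/mail /srv/home
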
--
Stan

Thread overview: 23+ messages
2013-05-21 12:51 "Missing" RAID devices Jim Santos
2013-05-21 15:31 ` Phil Turmel
2013-05-21 22:22 ` Jim Santos
2013-05-22 0:02 ` Phil Turmel
2013-05-22 0:16 ` Jim Santos
2013-05-22 22:43 ` Stan Hoeppner
2013-05-22 23:26 ` Phil Turmel
2013-05-23 5:59 ` Stan Hoeppner
2013-05-23 8:30 ` keld
2013-05-24 3:45 ` Stan Hoeppner
2013-05-24 6:32 ` keld
2013-05-24 7:37 ` Stan Hoeppner
2013-05-24 17:15 ` keld
2013-05-24 19:05 ` Stan Hoeppner
2013-05-24 19:22 ` keld
2013-05-25 1:42 ` Stan Hoeppner
2013-05-24 9:23 ` David Brown
2013-05-24 18:03 ` keld
2013-05-23 8:22 ` David Brown
2013-05-21 16:23 ` Doug Ledford
2013-05-21 17:03 ` Drew
[not found] ` <519BDC8C.1040202@hardwarefreak.com>
2013-05-21 21:02 ` Drew
2013-05-21 22:06 ` Stan Hoeppner [this message]