linux-raid.vger.kernel.org archive mirror
* A few remaining questions about installing to RAID-10
@ 2009-10-02  1:27 Ben DJ
  0 siblings, 4 replies; 37+ messages in thread
From: Ben DJ @ 2009-10-02  1:27 UTC (permalink / raw)
  To: linux-raid

I'm setting up new linux boxes, hoping to install whatever OS I choose
to a software RAID array.

I've got 4 identical SATA drives, and would ideally like to use RAID-10.

I've read a bunch of slightly stale How-To docs, and have a few questions.

(1) Can Linux boot from /boot on RAID-10?  Oldest info I found said no
boot from RAID at all, then more recent docs said boot from RAID-1
works.  I found nothing on RAID-10.  What's the latest scoop on this?

(2) As far as I can tell, none of the installers in CentOS, Ubuntu or
openSUSE are RAID-10 aware.  It seems like the sanest way to get set up
would be to boot from SystemRescueCD, do the partitioning and RAID
creation, then re-boot from an installer disk using the pre-setup
disks.

Am I missing some other, simpler approach?

(3) Assuming that I'll have to boot from RAID-1 (I just suspect that
RAID-10 is not yet an option for /boot, but I'm willing to be shown
wrong!), I'm considering 3 partitioning/raid_config options,

  (a)
    DISK1      DISK2      DISK3      DISK4
   [ RAID-1  /boot                        ]
   [ RAID-1  swap                         ]
   [ RAID-10 LVM, /root & 'other' parts   ]

  (b)
    DISK1      DISK2      DISK3      DISK4
   [ RAID-1  /boot  ]    [ RAID-1  swap   ]
   [ RAID-10 LVM, /root & 'other' parts   ]

  (c)
    DISK1      DISK2      DISK3      DISK4
   [ RAID-1  /boot                        ]
   [ RAID-10 LVM, /root, swap & 'other'   ]

Are there any clear benefits/concerns of one config over the other?
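
For concreteness, layout (c) could be cut roughly like this with
parted; the device name, sizes and flags below are illustrative only,
not a tested recipe, and would be repeated for each of the four disks:

  parted -s /dev/sda mklabel msdos
  parted -s /dev/sda mkpart primary 1MiB 257MiB   # small /boot mirror
  parted -s /dev/sda mkpart primary 257MiB 100%   # RAID-10 member
  parted -s /dev/sda set 1 raid on                # mark both as RAID
  parted -s /dev/sda set 2 raid on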

(4) In setting up the RAID arrays, I've got a choice of metadata
versions.  It seems that the distros' installers default to "1.0", but
that "1.1" & "1.2" are both available, too.

Should I just use the newest, 1.2?  Any problems if I do?

(5) In whatever config is "best" in (3), above, is it still good
advice to install the bootloader into multiple MBRs?  For example, if I
extend the RAID-1 over all 4 disks, should I then install the loader into
all four MBRs?

I think these are the last details I need to iron out before getting started.

Thanks for any help!

Ben

^ permalink raw reply	[flat|nested] 37+ messages in thread


* Re: A few remaining questions about installing to RAID-10
  2009-10-02  1:27 Ben DJ
@ 2009-10-02  3:16 ` Neil Brown
  2009-10-02  3:46   ` Ben DJ
  2009-10-02  7:51 ` Keld Jørn Simonsen
  2009-10-02  7:54 ` Robin Hill
  3 siblings, 1 reply; 37+ messages in thread
From: Neil Brown @ 2009-10-02  3:16 UTC (permalink / raw)
  To: Ben DJ; +Cc: linux-raid

On Thursday October 1, bendj095124367913213465@gmail.com wrote:
> I'm setting up new linux boxes, hoping to install whatever OS I choose
> to a software RAID array.
> 
> I've got 4 identical SATA drives, and would ideally like to use RAID-10.
> 
> I've read a bunch of slightly stale How-To docs, and have a few questions.
> 
> (1) Can Linux boot from /boot on RAID-10?  Oldest info I found said no
> boot from RAID at all, then more recent docs said boot from RAID-1
> works.  I found nothing on RAID-10.  What's the latest scoop on this?

Linux doesn't boot.  Something else has to boot Linux.
So the question really is:
  Can Lilo, or grub, or grub2, or whatever, boot from RAID-10?

It is possible that grub2 can, or will be able to one day, but I think
the safest answer to work with at the moment is "no".

> 
> (2) As far as I can tell, none of the installers in CentOS, Ubuntu or
> openSUSE are RAID-10 aware.  It seems like the sanest way to get set up
> would be to boot from SystemRescueCD, do the partitioning and RAID
> creation, then re-boot from an installer disk using the pre-setup
> disks.
> 
> Am I missing some other, simpler approach?

I haven't used any of those installers in ages but I wouldn't be
surprised if what you say is true.  Your "sanest way" suggestion is
the approach that I would probably take.

> 
> (3) Assuming that I'll have to boot from RAID-1 (I just suspect that
> RAID-10 is not yet an option for /boot, but I'm willing to be shown
> wrong!), I'm considering 3 partitioning/raid_config options,
> 
>   (a)
>     DISK1      DISK2      DISK3      DISK4
>    [ RAID-1  /boot                        ]
>    [ RAID-1  swap                         ]
>    [ RAID-10 LVM, /root & 'other' parts   ]
> 
>   (b)
>     DISK1      DISK2      DISK3      DISK4
>    [ RAID-1  /boot  ]    [ RAID-1  swap   ]
>    [ RAID-10 LVM, /root & 'other' parts   ]
> 
>   (c)
>     DISK1      DISK2      DISK3      DISK4
>    [ RAID-1  /boot                        ]
>    [ RAID-10 LVM, /root, swap & 'other'   ]
> 
> Are there any clear benefits/concerns of one config over the other?

I would probably go for 'c' as it is most flexible.  If, however, you
are going to be hitting swap a lot, and cannot afford extra memory,
'a' might be the better option.
There isn't really a lot between them.

> 
> (4) In setting up the RAID arrays, I've got a choice of metadata
> versions.  It seems that the distros' installers default to "1.0", but
> that "1.1" & "1.2" are both available, too.
> 
> Should I just use the newest, 1.2?  Any problems if I do?

1.2 isn't "newer".  1.0, 1.1, 1.2 are similar but put the metadata in
different places.
It has been suggested that we call them 1.end, 1.start and 1.offset (or
something like that), and while it might be a good idea, it hasn't
happened.

You almost certainly want 1.0 for /boot; otherwise you probably won't
be able to boot.  I would suggest 1.1 for your RAID10.
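
A minimal sketch of that advice (the partition names are assumed here,
not taken from the thread):

  # /boot: RAID-1 across all four disks, superblock at the end so the
  # bootloader sees what looks like a plain filesystem
  mdadm --create /dev/md0 --level=1 --raid-devices=4 --metadata=1.0 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

  # the main array: RAID-10 with the superblock at the start
  mdadm --create /dev/md1 --level=10 --raid-devices=4 --metadata=1.1 \
        /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2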

> 
> (5) In whatever config is "best" in (3), above, is it still good
> advice to install the bootloader into multiple MBRs?  For example, if I
> extend the RAID-1 over all 4 disks, should I then install the loader into
> all four MBRs?

Yes.  Otherwise you might not be able to boot if the first device dies.
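
With grub legacy that amounts to something like the following loop --
device names assumed, and grub2 or lilo would differ:

  for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
      grub-install "$d"
  done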

Good luck,
NeilBrown

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: A few remaining questions about installing to RAID-10
  2009-10-02  3:16 ` Neil Brown
@ 2009-10-02  3:46   ` Ben DJ
  2009-10-02 12:30     ` adfas asd
  0 siblings, 1 reply; 37+ messages in thread
From: Ben DJ @ 2009-10-02  3:46 UTC (permalink / raw)
  To: Neil Brown; +Cc: linux-raid

Hi Neil,

On Thu, Oct 1, 2009 at 8:16 PM, Neil Brown <neilb@suse.de> wrote:
> ...

Thanks for clearing things up for me.  Very helpful.

Shame about not being able to boot from RAID-10 -- yet.  It'd make the
setup simpler, I think.

> I haven't used any of those installers in ages

Just out of curiosity, what DO you use? A different installer (I
notice you're at Suse), or a completely manual approach?


> I would probably go for 'c' as it is most flexible.  If, however, you
> are going to be hitting swap a lot, and cannot afford extra memory,
> 'a' might be the better option.

These are 8-24GB RAM boxes.  (c), then, should be OK.

> 1.2 isn't "newer".  1.0, 1.1, 1.2 are similar but put the metadata in different places.
> It has been suggested that we call them 1.end, 1.start and 1.offset (or
> something like that), and while it might be a good idea, it hasn't
> happened.

I knew that metadata location was different between them, but had
misunderstood that to be 'progress', with the newer version being
'better'.

> You almost certainly want 1.0 for /boot; otherwise you probably won't be able to boot.

Glad you confirmed that!  I missed any docs that said "must be 1.0 to boot".

>  I would suggest 1.1 for your RAID10.

I'm guessing that's for performance reasons, and is no more/less
'recoverable' in the case of a disaster?

>> (5) In whatever config is "best" in (3), above, is it still good
>> advice to install the bootloader into multiple MBRs?  For example, if I
>> extend the RAID-1 over all 4 disks, should I then install the loader into
>> all four MBRs?
>
> Yes.  Otherwise you might not be able to boot if the first device dies.

That's what I'll do.  I've not seen any examples of RAID-1 over
4 disks -- usually just 2.  The recovery process is the same for 4 vs 2,
right?  I figure I won't get, or notice, much performance difference
between 2- or 4-disk.  I just assume it'll be a bit more robust.

> Good luck,

Thanks!

Ben

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: A few remaining questions about installing to RAID-10
  2009-10-02  1:27 Ben DJ
  2009-10-02  3:16 ` Neil Brown
@ 2009-10-02  7:51 ` Keld Jørn Simonsen
  2009-10-02  7:54 ` Robin Hill
  3 siblings, 0 replies; 37+ messages in thread
From: Keld Jørn Simonsen @ 2009-10-02  7:51 UTC (permalink / raw)
  To: Ben DJ; +Cc: linux-raid

On Thu, Oct 01, 2009 at 06:27:57PM -0700, Ben DJ wrote:
> I'm setting up new linux boxes, hoping to install whatever OS I choose
> to a software RAID array.
> 
> I've got 4 identical SATA drives, and would ideally like to use RAID-10.
> 
> I've read a bunch of slightly stale How-To docs, and have a few questions.
> 
> (1) Can Linux boot from /boot on RAID-10?  Oldest info I found said no
> boot from RAID at all, then more recent docs said boot from RAID-1
> works.  I found nothing on RAID-10.  What's the latest scoop on this?

You can boot from a raid-10,n2 with the superblock last; standard
raid10,n2 will do.  But the normal thing is to boot from raid-1.

There is a description of a setup at 
http://linux-raid.osdl.org/index.php/Preventing_against_a_failing_disk

> (2) As far as I can tell, none of the installers in CentOS, Ubuntu or
> openSUSE are RAID-10 aware.  It seems like the sanest way to get set up
> would be to boot from SystemRescueCD, do the partitioning and RAID
> creation, then re-boot from an installer disk using the pre-setup
> disks.

I have set up CentOS systems using the above guide.

best regards
keld

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: A few remaining questions about installing to RAID-10
  2009-10-02  1:27 Ben DJ
                   ` (2 preceding siblings ...)
  2009-10-02  7:51 ` Keld Jørn Simonsen
@ 2009-10-02  7:54 ` Robin Hill
  3 siblings, 0 replies; 37+ messages in thread
From: Robin Hill @ 2009-10-02  7:54 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: Type: text/plain, Size: 1144 bytes --]

On Thu Oct 01, 2009 at 06:27:57PM -0700, Ben DJ wrote:

> (2) As far as I can tell, none of the installers in CentOS, Ubuntu or
> openSUSE are RAID-10 aware.  It seems like the sanest way to get set up
> would be to boot from SystemRescueCD, do the partitioning and RAID
> creation, then re-boot from an installer disk using the pre-setup
> disks.
> 
> Am I missing some other, simpler approach?
> 
With OpenSUSE you can just switch to the second console at that point of
the install and do the partitioning & RAID setup there (you can in text
mode anyway, I've not used the graphical installer), then you just need
to tell the installer to rescan the partition table and it'll pick up
the new details.

I'm pretty sure CentOS doesn't offer multiple consoles during the
install so a SystemRescueCD/LiveCD will be needed there.  I've never
used Ubuntu so I've no idea what it offers.

Cheers,
    Robin
-- 
     ___        
    ( ' }     |       Robin Hill        <robin@robinhill.me.uk> |
   / / )      | Little Jim says ....                            |
  // !!       |      "He fallen in de water !!"                 |

[-- Attachment #2: Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: A few remaining questions about installing to RAID-10
  2009-10-02  3:46   ` Ben DJ
@ 2009-10-02 12:30     ` adfas asd
  2009-10-02 14:54       ` Ben DJ
  0 siblings, 1 reply; 37+ messages in thread
From: adfas asd @ 2009-10-02 12:30 UTC (permalink / raw)
  To: linux-raid

I am booting just fine from a RAID10-o2 array, although I'm told I'm not supposed to.  I've unplugged each drive and tried it, and they boot just fine degraded.

I followed this guide, to convert my -running- Debian system, and just made a few substitutions for RAID10:
http://www.howtoforge.com/software-raid1-grub-boot-debian-etch


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: A few remaining questions about installing to RAID-10
  2009-10-02 12:30     ` adfas asd
@ 2009-10-02 14:54       ` Ben DJ
  2009-10-02 15:06         ` Keld Jørn Simonsen
                           ` (2 more replies)
  0 siblings, 3 replies; 37+ messages in thread
From: Ben DJ @ 2009-10-02 14:54 UTC (permalink / raw)
  To: chimera_god, keld, robin; +Cc: linux-raid

Hi everybody,

2009/10/2 Keld Jørn Simonsen <keld@dkuug.dk>:
> You can boot from a raid-10,n2 with the superblock last, standard
> raid10,n2 will do. But the normal thing is to boot from raid-1
>
> There is a description of a setup at
> http://linux-raid.osdl.org/index.php/Preventing_against_a_failing_disk

Thanks for the URL.  Is there a reason that raid-1 is the "normal
thing"? Just popularity or something technical?

In your reference it says "The root file system can be on another raid
than the /boot partition. We recommend an raid10,f2, as the root file
system will mostly be reads, and the raid10,f2 raid type is the
fastest for reads, while also sufficiently fast for writes. Other
relevant raid types would be raid10,o2 or raid1." and "a small /boot
partition. We recommend something like 200 MB on an ext3 raid1".

How or why did you arrive at "raid-10,n2 with the superblock last"?

On Fri, Oct 2, 2009 at 12:54 AM, Robin Hill <robin@robinhill.me.uk> wrote:
> With OpenSUSE you can just switch to the second console at that point of

I didn't know about the console switching option, thanks.  I'll look
for docs on how to switch; I guess an F-key does the trick.  As far as
you know, even though the graphical installers don't offer RAID-10
config as an option, are they RAID-10 aware enough to correctly
recognize/identify the array as RAID-10 after manual setup then
rescan?

SystemRescueCD approach still seems straightforward too, and can be
doc'd as doable across any OS install.

On Fri, Oct 2, 2009 at 5:30 AM, adfas asd <chimera_god@yahoo.com> wrote:
> I am booting just fine from a RAID10-o2 array, although I'm told I'm not supposed to.

Why "not supposed to"?  Is it the RAID 10? or the 'o2' that's the problem?

> I've unplugged each drive and tried it, and they boot just fine degraded.

That's great to hear.  Just curious, how many drives in your array at the time?

> I followed this guide, to convert my -running- Debian system, and just made a few substitutions for RAID10:
> http://www.howtoforge.com/software-raid1-grub-boot-debian-etch

Thanks for the additional URL.

In all these cases /boot on RAID-10 was on its own partition, NOT on
/boot in a LVM-on-RAID, right?

Ben

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: A few remaining questions about installing to RAID-10
  2009-10-02 14:54       ` Ben DJ
@ 2009-10-02 15:06         ` Keld Jørn Simonsen
  2009-10-02 15:15         ` Robin Hill
  2009-10-03 14:12         ` adfas asd
  2 siblings, 0 replies; 37+ messages in thread
From: Keld Jørn Simonsen @ 2009-10-02 15:06 UTC (permalink / raw)
  To: Ben DJ; +Cc: chimera_god, robin, linux-raid

On Fri, Oct 02, 2009 at 07:54:04AM -0700, Ben DJ wrote:
> Hi everybody,
> 
> 2009/10/2 Keld Jørn Simonsen <keld@dkuug.dk>:
> > You can boot from a raid-10,n2 with the superblock last, standard
> > raid10,n2 will do. But the normal thing is to boot from raid-1
> >
> > There is a description of a setup at
> > http://linux-raid.osdl.org/index.php/Preventing_against_a_failing_disk
> 
> Thanks for the URL.  Is there a reason that raid-1 is the "normal
> thing"? Just popularity or something technical?

It is because raid1 is what traditionally worked.  raid10,n2 will not work
in some situations.  raid10 is "new" and "dangerous" (FUD).  Many people
do not know about raid10, and its wonders.

> In your reference it says "The root file system can be on another raid
> than the /boot partition. We recommend an raid10,f2, as the root file
> system will mostly be reads, and the raid10,f2 raid type is the
> fastest for reads, while also sufficiently fast for writes. Other
> relevant raid types would be raid10,o2 or raid1." and "a small /boot
> partition. We recommend something like 200 MB on an ext3 raid1".
> 
> How or why did you arrive at "raid-10,n2 with the superblock last"?

What lilo and grub really do is just read from one disk, so if
the raid type happens to have the same layout as a non-raid partition, it
works.  The raid10,n2 default layout (ver 0.90) is equivalent to non-raid,
as the raid superblock is at the end of the partition.  I understand that
other versions of the superblock are put in other places, such as at the
beginning of the partition.
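
A quick way to check which superblock version an existing member
carries (the device name is just an example):

  mdadm --examine /dev/sda2 | grep -i version

which reports 0.90, 1.0, 1.1 or 1.2.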

best regards
keld

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: A few remaining questions about installing to RAID-10
  2009-10-02 14:54       ` Ben DJ
  2009-10-02 15:06         ` Keld Jørn Simonsen
@ 2009-10-02 15:15         ` Robin Hill
  2009-10-02 15:46           ` Ben DJ
  2009-10-03 14:12         ` adfas asd
  2 siblings, 1 reply; 37+ messages in thread
From: Robin Hill @ 2009-10-02 15:15 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: Type: text/plain, Size: 2734 bytes --]

On Fri Oct 02, 2009 at 07:54:04AM -0700, Ben DJ wrote:

> Hi everybody,
> 
> 2009/10/2 Keld Jørn Simonsen <keld@dkuug.dk>: 
> > You can boot from a raid-10,n2 with the superblock last, standard
> > raid10,n2 will do. But the normal thing is to boot from raid-1
> >
> > There is a description of a setup at
> > http://linux-raid.osdl.org/index.php/Preventing_against_a_failing_disk
>
> Thanks for the URL.  Is there a reason that raid-1 is the "normal
> thing"? Just popularity or something technical?

The main reason is that RAID-1 works across any number of disks (as
they're all identical) whereas RAID-10,nX working is an artifact of how
that layout works for X drives (they end up being identical to RAID-1).

> On Fri, Oct 2, 2009 at 12:54 AM, Robin Hill <robin@robinhill.me.uk> wrote:
> > With OpenSUSE you can just switch to the second console at that point of
> 
> I didn't know about the console switching option, thanks.  I'll look
> for docs on how to switch; I guess an F-key does the trick.  As far as
> you know, even though the graphical installers don't offer RAID-10
> config as an option, are they RAID-10 aware enough to correctly
> recognize/identify the array as RAID-10 after manual setup then
> rescan?
> 
> SystemRescueCD approach still seems straightforward too, and can be
> doc'd as doable across any OS install.
> 
Yes, Alt-F2 will get you to the second console, Alt-F3 for the third (or
Ctrl-Alt-F2 from within X).  The installer uses some of the consoles for
progress/debugging output, but at least one of them is just a basic
shell so you can run any normal commands from there.

The installer is sufficiently RAID-10 aware to pick up the arrays (if
not, the SystemRescueCD approach wouldn't work either).

> On Fri, Oct 2, 2009 at 5:30 AM, adfas asd <chimera_god@yahoo.com> wrote:
> > I am booting just fine from a RAID10-o2 array, although I'm told I'm
> > not supposed to.
> 
> Why "not supposed to"?  Is it the RAID 10? or the 'o2' that's the problem?
> 
It's the combination - RAID10,o2 for 2 drives results in one drive
having all data in normal order (so bootable) and the other having every
pair of chunks flipped.  I'm surprised that the bootloader even
recognises the filesystem, let alone manages to boot from it.

Cheers,
    Robin

P.S. The Wikipedia entry for Non-standard RAID levels has diagrams for a
number of the Linux RAID-10 layouts which may help to understand why
some layouts boot and others don't.
-- 
     ___        
    ( ' }     |       Robin Hill        <robin@robinhill.me.uk> |
   / / )      | Little Jim says ....                            |
  // !!       |      "He fallen in de water !!"                 |

[-- Attachment #2: Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: A few remaining questions about installing to RAID-10
  2009-10-02 15:15         ` Robin Hill
@ 2009-10-02 15:46           ` Ben DJ
  2009-10-02 15:59             ` Robin Hill
  0 siblings, 1 reply; 37+ messages in thread
From: Ben DJ @ 2009-10-02 15:46 UTC (permalink / raw)
  To: linux-raid, keld, robin


Hi,

2009/10/2 Keld Jørn Simonsen <keld@dkuug.dk>:
> raid10 is "new" and "dangerous" (FUD). Many people
> do not know about raid10, and its wonders.

So, the "usual suspects".

> if the raid type happens to have the same layout as a non-raid partition, it
> works.  The raid10,n2 default layout (ver 0.90) is equivalent to non-raid,
> as the raid superblock is at the end of the partition.

That makes sense, thanks. Not clear whether that's "officially
supported" or "just lucky".

Just to be certain, which metadata version: 0.90 (the default) or 1.0?
According to the man page,

    0, 0.90, default
         Use the original 0.90 format superblock.  This format limits
         arrays to 28 component devices and limits component devices of
         levels 1 and greater to 2 terabytes.

    1, 1.0, 1.1, 1.2
         Use the new version-1 format superblock.  This has few
         restrictions.  The different sub-versions store the superblock
         at different locations on the device, either at the end (for
         1.0), at the start (for 1.1) or 4K from the start (for 1.2).

it's v1.0 that specifically states the superblock is "at the end".
Will either/both work?

On Fri, Oct 2, 2009 at 8:15 AM, Robin Hill <robin@robinhill.me.uk> wrote:
> The main reason is that RAID-1 works across any number of disks (as
> they're all identical) whereas RAID-10,nX working is an artifact of how
> that layout works for X drives (they end up being identical to RAID-1).

As X is a variable, according to the man page,

  "The number is the number of copies of each datablock.  2 is normal,
3 can be useful.
   This number  can  be  at most equal to the number of devices in the
array.  It does
   not need to divide evenly into that number (e.g. it is perfectly
legal to have an 'n2'
   layout for an array with an odd number of devices)."

does "RAID-10,nX working is an artifact" hold true for any X?

If I've a 4-drive RAID-10 array that I want to have a bootable /boot
on, should X best be "normal" 2, "useful" 3, or "unmentioned" 4?  As
usual, it seems that performance, redundancy/robustness, and space
utilization, as well as just being able to boot, all have a part to
play.

>> On Fri, Oct 2, 2009 at 12:54 AM, Robin Hill <robin@robinhill.me.uk> wrote:
> Yes, Alt-F2 will get you to the second console, Alt-F3 for the third (or
> Ctrl-Alt-F2 from within X).

Just found the docs, too. Thanks!

> It's the combination - RAID10,o2 for 2 drives results in one drive
> having all data in normal order (so bootable) and the other having every
> pair of chunks flipped.

Sounds like RAID10,n2 (or n3 or n4, in my case) is the right approach.

> The Wikipedia entry for Non-standard RAID levels has diagrams

Found it, Thanks.

Thanks for the help & discussion!

Ben

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: A few remaining questions about installing to RAID-10
  2009-10-02 15:46           ` Ben DJ
@ 2009-10-02 15:59             ` Robin Hill
  2009-10-02 16:31               ` Ben DJ
  0 siblings, 1 reply; 37+ messages in thread
From: Robin Hill @ 2009-10-02 15:59 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: Type: text/plain, Size: 3430 bytes --]

On Fri Oct 02, 2009 at 08:46:47AM -0700, Ben DJ wrote:

> Hi,
> 
> 2009/10/2 Keld Jørn Simonsen <keld@dkuug.dk>:
> > if the raid type happens to have the same layout as a non-raid partition, it
> > works.  The raid10,n2 default layout (ver 0.90) is equivalent to non-raid,
> > as the raid superblock is at the end of the partition.
> 
> That makes sense, thanks. Not clear whether that's "officially
> supported" or "just lucky".
> 
> Just to be certain, which metadata version: 0.90 (the default) or 1.0?
> According to the man page,
> 
>     0, 0.90, default
>          Use the original 0.90 format superblock.  This format limits
>          arrays to 28 component devices and limits component devices of
>          levels 1 and greater to 2 terabytes.
> 
>     1, 1.0, 1.1, 1.2
>          Use the new version-1 format superblock.  This has few
>          restrictions.  The different sub-versions store the superblock
>          at different locations on the device, either at the end (for
>          1.0), at the start (for 1.1) or 4K from the start (for 1.2).
> 
> it's v1.0 that specifically states the superblock is "at the end".
> Will either/both work?
> 
Either 0.90 or 1.0 will work (both are held at the end).  0.90 allows for
kernel auto-assembly, but is a legacy format.  Any modern Linux
distribution will use an initrd which can assemble arrays with any
superblock, so current advice would be to use 1.0.
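
In practice that means making sure the initrd knows about the array;
on Debian-style systems that is roughly the following (a sketch --
paths and tools vary by distribution):

  mdadm --detail --scan >> /etc/mdadm/mdadm.conf
  update-initramfs -u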

> On Fri, Oct 2, 2009 at 8:15 AM, Robin Hill <robin@robinhill.me.uk> wrote:
> > The main reason is that RAID-1 works across any number of disks (as
> > they're all identical) whereas RAID-10,nX working is an artifact of how
> > that layout works for X drives (they end up being identical to RAID-1).
> 
> As X is a variable, according to the man page,
> 
>   "The number is the number of copies of each datablock.  2 is normal,
> 3 can be useful.
>    This number  can  be  at most equal to the number of devices in the
> array.  It does
>    not need to divide evenly into that number (e.g. it is perfectly
> legal to have an 'n2'
>    layout for an array with an odd number of devices)."
> 
> does "RAID-10,nX working is an artifact" hold true for any X?
> 
Yes - RAID-10,nX for X disks will result in (effectively) a RAID-1
layout.

> If I've a 4-drive RAID-10 array that I want to have a bootable /boot
> on, should X best be "normal" 2, "useful" 3, or "unmentioned" 4?  As
> usual, it seems that performance, redundancy/robustness, and space
> utilization, as well as just being able to boot, all have a part to
> play.
> 
For a boot partition I'd stick with RAID-1, then use RAID-10,f2 for
other partitions.  There's no real advantage in using RAID-10,n4 over
RAID-1 (it's a different code-path so there may be slight differences
one way or the other, but nothing that will matter for a boot
partition anyway), and just makes the setup more unusual.

It's up to you though - RAID-10,n4 and a four-way RAID-1 will have the
same on-disk layout, so either will work fine.  Oh - one other thing is
that RAID-1 is expandable (you can add other partitions later) whereas
RAID-10 is not currently.
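
For reference, a far-layout array is created like any other RAID-10,
just with the layout flag (device names here are illustrative):

  mdadm --create /dev/md1 --level=10 --layout=f2 --raid-devices=4 \
        /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2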

Cheers,
    Robin
-- 
     ___        
    ( ' }     |       Robin Hill        <robin@robinhill.me.uk> |
   / / )      | Little Jim says ....                            |
  // !!       |      "He fallen in de water !!"                 |

[-- Attachment #2: Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: A few remaining questions about installing to RAID-10
  2009-10-02 15:59             ` Robin Hill
@ 2009-10-02 16:31               ` Ben DJ
  2009-10-02 17:01                 ` Robin Hill
  0 siblings, 1 reply; 37+ messages in thread
From: Ben DJ @ 2009-10-02 16:31 UTC (permalink / raw)
  To: linux-raid, robin

Hi,

On Fri, Oct 2, 2009 at 8:59 AM, Robin Hill <robin@robinhill.me.uk> wrote:
> Either 0.90 or 1.0 will work (both are held at the end).  0.90 allows for
> kernel auto-assembly, but is a legacy format.  Any modern Linux
> distribution will use an initrd which can assemble arrays with any
> superblock, so current advice would be to use 1.0.

Ok, that's clear.

> For a boot partition I'd stick with RAID-1, then use RAID-10,f2 for
> other partitions.

f2 is clear for read-mostly space, like most of /root.  But for /var?
/home? other /data?

I'm learning there are 'subtleties' to all this.

> There's no real advantage in using RAID-10,n4 over
> RAID-1 (it's a different code-path so there may be slight differences
> one way or the other, but nothing that will matter for a boot
> partition anyway), and just makes the setup more unusual.

I thought that RAID-10's disk failure survivability is n-2, for n >=4
drives.  RAID-1 over 4 drives can also survive 2-drive failure?

> It's up to you though - RAID-10,n4 and a four-way RAID-1 will have the
> same on-disk layout, so either will work fine.  Oh - one other thing is
> that RAID-1 is expandable (you can add other partitions later) whereas
> RAID-10 is not currently.

Although needing to expand /boot is hardly a requirement, I did NOT
realize that RAID-10 is not expandable.  Partitions ARE certainly
add-able within the LVM that spans the RAID-10, but I thought that
spindles, and new partitions, could be added after initial creation.
Need to read some more.

Ben

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: A few remaining questions about installing to RAID-10
  2009-10-02 16:31               ` Ben DJ
@ 2009-10-02 17:01                 ` Robin Hill
  2009-10-02 17:24                   ` Ben DJ
  0 siblings, 1 reply; 37+ messages in thread
From: Robin Hill @ 2009-10-02 17:01 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: Type: text/plain, Size: 2222 bytes --]

On Fri Oct 02, 2009 at 09:31:07AM -0700, Ben DJ wrote:

> Hi,
> 
> On Fri, Oct 2, 2009 at 8:59 AM, Robin Hill <robin@robinhill.me.uk> wrote:
> > For a boot partition I'd stick with RAID-1, then use RAID-10,f2 for
> > other partitions.
> 
> f2 is clear for read-mostly space, like most of /root.  But for /var?
> /home? other /data?
> 
> I'm learning there are 'subtleties' to all this.
> 
Depends really - far isn't much slower on writes (the I/O elevator and
filesystem buffering help).  If you know /var will be write-mostly (it's
often the default location for MySQL data files as well) then a near
layout may be better though.

> > There's no real advantage in using RAID-10,n4 over
> > RAID-1 (it's a different code-path so there may be slight differences
> > one way or the other, but nothing that will matter for a boot
> > partition anyway), and just makes the setup more unusual.
> 
> I thought that RAID-10's disk failure survivability is n-2, for n >=4
> drives.  RAID-1 over 4 drives can also survive 2-drive failure?
> 
RAID-1 over N disks can survive N-1 drive failures.  Each disk holds a
full copy of the data.
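
Easy to verify without pulling cables, on a throwaway test array
(device names are illustrative):

  mdadm /dev/md0 --fail /dev/sdb1      # mark one member faulty
  cat /proc/mdstat                     # array keeps running, degraded
  mdadm /dev/md0 --remove /dev/sdb1
  mdadm /dev/md0 --add /dev/sdb1       # re-add and let it resync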

> > It's up to you though - RAID-10,n4 and a four-way RAID-1 will have the
> > same on-disk layout, so either will work fine.  Oh - one other thing is
> > that RAID-1 is expandable (you can add other partitions later) whereas
> > RAID-10 is not currently.
> 
> Although needing to expand /boot is hardly a requirement, I did NOT
> realize that RAID-10 is not expandable.  Partitions ARE certainly
> add-able within the LVM that spans the RAID-10, but I thought that
> spindles, and new partitions, could be added after initial creation.
> Need to read some more.
> 
Currently only RAID-1,4,5 and 6 support growing arrays (by either adding
spindles or increasing the capacity of each spindle).  There are plans
to add this capability to other RAID levels but I'm not sure where this
sits on the priority list.
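
For the levels that do support it, growing looks roughly like this
(array and device names assumed):

  mdadm /dev/md2 --add /dev/sde1               # add the new spindle
  mdadm --grow /dev/md2 --raid-devices=5       # reshape onto it

followed by growing the filesystem on top.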

Cheers,
    Robin
-- 
     ___        
    ( ' }     |       Robin Hill        <robin@robinhill.me.uk> |
   / / )      | Little Jim says ....                            |
  // !!       |      "He fallen in de water !!"                 |

[-- Attachment #2: Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: A few remaining questions about installing to RAID-10
  2009-10-02 17:01                 ` Robin Hill
@ 2009-10-02 17:24                   ` Ben DJ
  0 siblings, 0 replies; 37+ messages in thread
From: Ben DJ @ 2009-10-02 17:24 UTC (permalink / raw)
  To: linux-raid

Hi Robin,

On Fri, Oct 2, 2009 at 10:01 AM, Robin Hill <robin@robinhill.me.uk> wrote:
> Depends really - far isn't much slower on writes (the I/O elevator and
> filesystem buffering help).  If you know /var will be write-mostly (it's
> often the default location for MySQL data files as well) then a near
> layout may be better though.

Ok, that's clear.  /var is typically a mix, it seems.  I may well
create a second set of partitions for read-write on /data & /var/cache
(I already put /tmp in tmpfs/RAM; /home I've mixed feelings about) --
once I've convinced myself that the benefits really show up in the
benchmarks, and decide whether it's worth the complexity.  And, of
course, then come the which-FS-to-use decisions (I'd prefer ZFS, but it's
n/a; btrfs is promising but still experimental afaict; ext4 &/or xfs may
be my solutions) -- but that's for a different ML, I guess.

> RAID-1 over N disks can survive N-1 drive failures.  Each disk holds a full copy of the data.

I really need to stop confusing RAID-0/1 mirrors & stripes.  Thanks.

> Currently only RAID-1,4,5 and 6 support growing arrays (by either adding
> spindles or increasing the capacity of each spindle).  There are plans
> to add this capability to other RAID levels but I'm not sure where this
> sits on the priority list.

Good to know.  With 4x1TB drives, I've got 2TB of RAID-10 space to play
with.  More than enough without worrying about the expansion too much,
for now.

Thanks.

Ben

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: A few remaining questions about installing to RAID-10
  2009-10-02 14:54       ` Ben DJ
  2009-10-02 15:06         ` Keld Jørn Simonsen
  2009-10-02 15:15         ` Robin Hill
@ 2009-10-03 14:12         ` adfas asd
  2009-10-03 16:55           ` Keld Jørn Simonsen
                             ` (3 more replies)
  2 siblings, 4 replies; 37+ messages in thread
From: adfas asd @ 2009-10-03 14:12 UTC (permalink / raw)
  To: linux-raid

--- On Fri, 10/2/09, Ben DJ <bendj095124367913213465@gmail.com> wrote:
> On Fri, Oct 2, 2009 at 5:30 AM, adfas asd <chimera_god@yahoo.com> wrote:
> > I've unplugged each drive and tried it, and they boot just fine
> > degraded.
> 
> That's great to hear.  Just curious, how many drives in your array at
> the time?

Two disks, 2TB each.  RAID10-o2 means that there are two redundant copies of my data in the array, stored respectively on sda and sdb.

 
> > I followed this guide, to convert my -running- Debian system, and
> > just made a few substitutions for RAID10:
> > http://www.howtoforge.com/software-raid1-grub-boot-debian-etch
> 
> Thanks for the additional URL.
> 
> In all these cases /boot on RAID-10 was on its own partition, NOT on
> /boot in a LVM-on-RAID, right?

I didn't use LVM, and don't trust layering of any technology.  My two-drive array was set up much as he recommended it: md1 - / (including /boot), md2 - swap, md3 - /home.  I am against any further fracturing of partitions, as modern disk drives have lots of space, and lots of parts becomes unmaintainable.

I will say that this system appears to be a performance problem for me, as I run MythTV on / and have my videos in /home.  When Myth is scanning to eliminate commercials it must frantically sweep between / and /home, and overall system performance is impacted.  I am working on putting / on a dedicated high-performance 2.5" drive and keeping my videos on the array as they're much too large to back up.  In the process I'm also going to split the array so that one drive is in the PeeCee and the other is NASsed out in the garage in case of theft or fire.

When I need more space (soon) I'll add two more drives, and still mirror them two copies, with one side of the mirror in the PeeCee (sda, sdb) and the other NASsed in the garage (sdc, sdd).

What I know is that 'offset' will boot and fail over.  I don't know if 'far' will.  I also know that .90 will boot and fail over.  I don't know whether 1.x will.  When building the array I tried to use 1.2 (as I thought it was newest/best) but there was a bitch at the beginning of boot and it wouldn't boot (for other reasons) so I reverted to .90.  When I do it again I will likely use 1.0, given what I've recently learned here.

I am still confused about the benefits of far vs offset.  Keld (the developer) says that although offset is newer, it's not necessarily better than far, only more compatible.  I have not found any rigorous performance comparisons of far vs offset.

I am shocked to read that RAID10 is not expandable.  I will want to add disks in the future. I will want to add space, but not partitions.  Does this mean I'll have to completely rebuild the array?  Once your data gets to a certain size it becomes unmanageable to rebuild the array.


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: A few remaining questions about installing to RAID-10
  2009-10-03 14:12         ` adfas asd
@ 2009-10-03 16:55           ` Keld Jørn Simonsen
  2009-10-03 17:02           ` Thomas Fjellstrom
                             ` (2 subsequent siblings)
  3 siblings, 0 replies; 37+ messages in thread
From: Keld Jørn Simonsen @ 2009-10-03 16:55 UTC (permalink / raw)
  To: adfas asd; +Cc: linux-raid

On Sat, Oct 03, 2009 at 07:12:55AM -0700, adfas asd wrote:
> --- On Fri, 10/2/09, Ben DJ <bendj095124367913213465@gmail.com> wrote:
> > On Fri, Oct 2, 2009 at 5:30 AM, adfas asd <chimera_god@yahoo.com>
> > wrote:
> > 
> What I know is that 'offset' will boot and fail over.  I don't know if 'far' will.  I also know that .90 will boot and fail over.  I don't know whether 1.x will.  When building the array I tried to use 1.2 (as I thought it was newest/best) but there was a bitch at the beginning of boot and it wouldn't boot (for other reasons) so I reverted to .90.  When I do it again I will likely use 1.0, given what I've recently learned here.

I am surprised that raid10,o2 will fail over. But maybe your bootloader
understands raid10,o2.  My understanding is that in the second copy of
the data, blocks are flipped, and thus unreadable as a simple boot
device.

> I am still confused about the benefits of far vs offset.  Keld (the developer) says that although offset is newer, it's not necessarily better than far, only more compatible.  I have not found any rigorous performance comparisons of far vs offset.

I am not the developer of far, only the designer, and then I did a small patch.

There are a number of performance comparisons of far and offset at
http://linux-raid.osdl.org/index.php/Performance

If you are employing a filesystem, then be sure to look at tests
that also do so, to take advantage of the effects of the file system
elevator algorithms and buffering.

Best regards
keld

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: A few remaining questions about installing to RAID-10
  2009-10-03 14:12         ` adfas asd
  2009-10-03 16:55           ` Keld Jørn Simonsen
@ 2009-10-03 17:02           ` Thomas Fjellstrom
  2009-10-03 19:05           ` Drew
  2009-10-05  4:59           ` Leslie Rhorer
  3 siblings, 0 replies; 37+ messages in thread
From: Thomas Fjellstrom @ 2009-10-03 17:02 UTC (permalink / raw)
  To: adfas asd; +Cc: linux-raid

On Sat October 3 2009, adfas asd wrote:
> --- On Fri, 10/2/09, Ben DJ <bendj095124367913213465@gmail.com> wrote:
> > On Fri, Oct 2, 2009 at 5:30 AM, adfas asd <chimera_god@yahoo.com>
> >
> > wrote:
> > >I've unplugged each drive and tried it, and they boot
> >
> > just fine degraded.
> >
> > That's great to hear.  Just curious, how many drives
> > in your array at the time?
> 
> Two disks, 2TB each.  RAID10-o2 means that there are two redundant copies
>  of my data in the array, stored respectively on sda and sdb.
> 
> > > I followed this guide, to convert my -running- Debian
> >
> > system, and just made a few substitutions for RAID10:
> > > http://www.howtoforge.com/software-raid1-grub-boot-debian-etch
> >
> > Thanks for the additional URL.
> >
> > In all these cases /boot on RAID-10 was on its own
> > partition, NOT on
> > /boot in a LVM-on-RAID, right?
> 
> I didn't use LVM, and don't trust layering of any technology.  My two-drive
>  array was set up much as he recommended it: md1 - / (including /boot), md2
>  - swap, md3 - /home.  I am against any further fracturing of partitions,
>  as modern disk drives have lots of space, and lots of parts becomes
>  unmaintainable.
> 
> I will say that this system appears to be a performance problem for me, as
>  I run MythTV on / and have my videos in /home.  When Myth is scanning to
>  eliminate commercials it must frantically sweep between / and /home, and
>  overall system performance is impacted.  I am working on putting / on a
>  dedicated high-performance 2.5" drive and keeping my videos on the array
>  as they're much too large to back up.  In the process I'm also going to
>  split the array so that one drive is in the PeeCee and the other is NASsed
>  out in the garage in case of theft or fire.
> 
> When I need more space (soon) I'll add two more drives, and still mirror
>  them two copies, with one side of the mirror in the PeeCee (sda, sdb) and
>  the other NASsed in the garage (sdc, sdd).
> 
> What I know is that 'offset' will boot and fail over.  I don't know if
>  'far' will.  I also know that .90 will boot and fail over.  I don't know
>  whether 1.x will.  When building the array I tried to use 1.2 (as I
>  thought it was newest/best) but there was a bitch at the beginning of boot
>  and it wouldn't boot (for other reasons) so I reverted to .90.  When I do
>  it again I will likely use 1.0, given what I've recently learned here.
> 
> I am still confused about the benefits of far vs offset.  Keld (the
>  developer) says that although offset is newer, it's not necessarily better
>  than far, only more compatible.  I have not found any rigorous performance
>  comparisons of far vs offset.
> 
> I am shocked to read that RAID10 is not expandable.  I will want to add
>  disks in the future. I will want to add space, but not partitions.  Does
>  this mean I'll have to completely rebuild the array?  Once your data gets
>  to a certain size it becomes unmanageable to rebuild the array.
> 

I'm assuming they just haven't had time to do it yet.  The raid10 module is
fairly new, while the raid5/6 code has been around for ages. I'm hoping that 
by the time I need to expand my new raid10 array, there will be full expansion 
support like raid5 has.



-- 
Thomas Fjellstrom
tfjellstrom@shaw.ca

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: A few remaining questions about installing to RAID-10
  2009-10-03 14:12         ` adfas asd
  2009-10-03 16:55           ` Keld Jørn Simonsen
  2009-10-03 17:02           ` Thomas Fjellstrom
@ 2009-10-03 19:05           ` Drew
  2009-10-03 21:09             ` adfas asd
  2009-10-05  4:59           ` Leslie Rhorer
  3 siblings, 1 reply; 37+ messages in thread
From: Drew @ 2009-10-03 19:05 UTC (permalink / raw)
  To: adfas asd; +Cc: linux-raid

> I didn't use LVM, and don't trust layering of any technology.

I'm curious as to why you say that?

From my experience almost everything in software is layered or an
abstraction.  The Network & Storage stacks are just layers of code
running on top of other, lower layers.  LVM & RAID are just another layer
in the storage stack, and well-tested ones, I might add.


-- 
Drew

"Nothing in life is to be feared. It is only to be understood."
--Marie Curie

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: A few remaining questions about installing to RAID-10
  2009-10-03 19:05           ` Drew
@ 2009-10-03 21:09             ` adfas asd
  2009-10-04  5:59               ` Thomas Fjellstrom
  0 siblings, 1 reply; 37+ messages in thread
From: adfas asd @ 2009-10-03 21:09 UTC (permalink / raw)
  To: linux-raid



--- On Sat, 10/3/09, Drew <drew.kay@gmail.com> wrote:

> > I didn't use LVM, and don't trust layering of any technology.
> 
> I'm curious as to why you say that?
> 
> From my experience almost everything in software is layered or an
> abstraction.  The Network & Storage stacks are just layers of code
> running on top of other, lower layers.  LVM & RAID are just another
> layer in the storage stack, and well-tested ones, I might add.

Feel free.  But the more direct a system, the more reliable it is.  Basic scientific principle.  

To me, LVM just adds a function (expansion) which should be in the RAID module (and likely soon will be), yet adds a whole 'nother layer of complexity.  No thanks.  BTRFS addresses all these issues, but I was an early adopter and got burned by the loss of my data at a crucial point.


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: A few remaining questions about installing to RAID-10
  2009-10-03 21:09             ` adfas asd
@ 2009-10-04  5:59               ` Thomas Fjellstrom
  0 siblings, 0 replies; 37+ messages in thread
From: Thomas Fjellstrom @ 2009-10-04  5:59 UTC (permalink / raw)
  To: adfas asd; +Cc: linux-raid

On Sat October 3 2009, adfas asd wrote:
> --- On Sat, 10/3/09, Drew <drew.kay@gmail.com> wrote:
> > > I didn't use LVM, and don't trust layering of any technology.
> >
> > I'm curious as to why you say that?
> >
> > From my experience almost everything in software is layered or an
> > abstraction.  The Network & Storage stacks are just layers of code
> > running on top of other, lower layers.  LVM & RAID are just another
> > layer in the storage stack, and well-tested ones, I might add.
> 
> Feel free.  But the more direct a system, the more reliable it is.
> Basic scientific principle.
> 
> To me, LVM just adds a function (expansion) which should be in the RAID
> module (and likely soon will be), yet adds a whole 'nother layer of
> complexity.  No thanks.  BTRFS addresses all these issues, but I was an
> early adopter and got burned by the loss of my data at a crucial point.
> 

It actually adds volumes, something you just don't get with mdraid.  Unless you
count partitions, but that is just... ew.  I had LVM blow up in my face too, but
it wasn't LVM's fault, it was mine ;)  I was the one that was running a "big"
(for me, at the time) JBOD array with no backups.  I can hardly blame the loss
of that data on LVM.



-- 
Thomas Fjellstrom
tfjellstrom@shaw.ca

^ permalink raw reply	[flat|nested] 37+ messages in thread

* RE: A few remaining questions about installing to RAID-10
  2009-10-03 14:12         ` adfas asd
                             ` (2 preceding siblings ...)
  2009-10-03 19:05           ` Drew
@ 2009-10-05  4:59           ` Leslie Rhorer
  2009-10-05 13:58             ` adfas asd
  3 siblings, 1 reply; 37+ messages in thread
From: Leslie Rhorer @ 2009-10-05  4:59 UTC (permalink / raw)
  To: linux-raid

> I didn't use LVM, and don't trust layering of any technology.  My two-
> drive array was set up much as he recommended it: md1 - / (including
> /boot), md2 - swap, md3 - /home.  I am against any further fracturing of
> partitions, as modern disk drives have lots of space, and lots of parts
> becomes unmaintainable.

	Actually, I think for your situation, LVM might be a reasonable
solution.  Regardless, I don't particularly recommend using any partitions
at all.  I don't.  All of my arrays are on raw disks, and all of my file
systems use the entire unpartitioned array.  Again, there can be good
reasons for partitioning disks, but in your case I don't see any advantages.

> I will say that this system appears to be a performance problem for me, as
> I run MythTV on / and have my videos in /home.  When Myth is scanning to
> eliminate commercials it must frantically sweep between / and /home, and
> overall system performance is impacted.  I am working on putting / on a
> dedicated high-performance 2.5" drive and keeping my videos on the array
> as they're much too large to back up.

	You've said that before, but it is specious, as you are potentially
using more space to mirror them on a RAID array than you would by
implementing a backup array.  You don't have a separate backup, but they are
using more space on your proposed system than they would be with my proposal
for you.  I also rather expect in the long run it may be more of a
management hassle for you.  It's your decision, of course.

> What I know is that 'offset' will boot and fail over.  I don't know if
> 'far' will.  I also know that .90 will boot and fail over.  I don't know
> whether 1.x will.  When building the array I tried to use 1.2 (as I
> thought it was newest/best) but there was a bitch at the beginning of boot
> and it wouldn't boot (for other reasons) so I reverted to .90.  When I do
> it again I will likely use 1.0, given what I've recently learned here.

	Are you aware the .90 superblock limits the array to 28 devices,
total, and the individual device size cannot exceed 2T?  This is probably
not an issue for you right now, but in the not-so-distant future it could
easily become so.  3T drives are supposed to come out quite soon, and
eating up drives on each array the way you are using RAID10, 28 may start
looking like a small number before too many months have passed.  If you
insist on booting from the array, I suggest you look very carefully into the
superblock structure.  1.0 and 1.2 may serve you better than 1.1, depending
on exactly what you want to do and how you plan to do it.

	You also need to make sure the filesystem supports your needs in the
future.

> I am still confused about the benefits of far vs offset.  Keld (the
> developer) says that although offset is newer, it's not necessarily better
> than far, only more compatible.  I have not found any rigorous performance
> comparisons of far vs offset.
> 
> I am shocked to read that RAID10 is not expandable.  I will want to add
> disks in the future. I will want to add space, but not partitions.  Does
> this mean I'll have to completely rebuild the array?  Once your data gets
> to a certain size it becomes unmanageable to rebuild the array.

	Well, there's more than one way to skin a cat, of course, but
growing a RAID10 array through rebuilding is definitely a bit of a fiddle.
I know Neil was looking at including RAID reshaping on RAID10 arrays, but I
don't think it has been implemented.  Currently I think adding a drive is
only available on RAID 1, 4, 5, and 6, or at least so the man page says for
my copy of mdadm, which is 2.6.7.2.

	For this and other reasons, I really think you would be better
served in your situation to employ independent arrays, either RAID5, RAID6,
or LVM,  in the house and in the garage.  Of course you could then include
the two arrays in a RAID0 volume with write mostly.  Again, it's your
choice.
	


^ permalink raw reply	[flat|nested] 37+ messages in thread

* RE: A few remaining questions about installing to RAID-10
  2009-10-05  4:59           ` Leslie Rhorer
@ 2009-10-05 13:58             ` adfas asd
  0 siblings, 0 replies; 37+ messages in thread
From: adfas asd @ 2009-10-05 13:58 UTC (permalink / raw)
  To: linux-raid



--- On Sun, 10/4/09, Leslie Rhorer <lrhorer@satx.rr.com> wrote:
> > I am working on putting / on a dedicated high-performance 2.5" drive
> > and keeping my videos on the array as they're much too large to back
> > up.
> 
>     You've said that before, but it is specious, as you are
> potentially using more space to mirror them on a RAID array than you
> would by implementing a backup array.  You don't have a separate
> backup, but they are using more space on your proposed system than they
> would be with my proposal for you.  I also rather expect in the long
> run it may be more of a management hassle for you.  It's your decision,
> of course.

I don't understand why you would say this.  Whether I set up a RAID1 in the garage and one in the house and RAID0 them, or just set up a RAID10, it should not have any impact on the space.  And more arrays means -more- management hassle, not less. 

And I don't understand how you can set up an array without partitions?  Do you then LVM it and partition -that-? Is that actually possible?


^ permalink raw reply	[flat|nested] 37+ messages in thread

* RE: A few remaining questions about installing to RAID-10
       [not found] <20091005145711322.FNMG18886@cdptpa-omta02.mail.rr.com>
@ 2009-10-05 15:30 ` adfas asd
  2009-10-05 15:49   ` Drew
  2009-10-05 18:03   ` Leslie Rhorer
  0 siblings, 2 replies; 37+ messages in thread
From: adfas asd @ 2009-10-05 15:30 UTC (permalink / raw)
  To: linux-raid

WAT?!  How can I mount an unpartitioned/unformatted array?  This is more
direct than using a filesystem, although there would be no journalling,
right?  If I were to mount my small disk as / and the unpartitioned array
as /home, for example.

Seems like journalling/crash recovery is pretty vital.  Could skipping
partitions really be recommendable?


--- On Mon, 10/5/09, Leslie Rhorer <lrhorer@satx.rr.com> wrote:
> 	Partitions are not needed at all unless your underlying topology
> requires it:
> 
> `mdadm --create --raid-devices=2 --metadata=1.0 --chunk=128 --level=1
> /dev/md0 /dev/sda /dev/sdb`
> 
> 	Will work just fine.  You can also use LVM with raw disks.  Both
> systems simply stitch together storage units provided by the drive
> subsystem, so any block device found in /dev can be used.  Once the
> array is assembled, it is not necessary to partition it, either, again
> unless your overlaying topology requires it.  On my systems, the boot
> drives are partitioned to allow booting into Windows if necessary, and
> usually the swap is also in a partition on the boot drive, but my arrays
> are completely un-partitioned.
> 



^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: A few remaining questions about installing to RAID-10
  2009-10-05 15:30 ` A few remaining questions about installing to RAID-10 adfas asd
@ 2009-10-05 15:49   ` Drew
  2009-10-05 15:55     ` adfas asd
  2009-10-05 18:13     ` Leslie Rhorer
  2009-10-05 18:03   ` Leslie Rhorer
  1 sibling, 2 replies; 37+ messages in thread
From: Drew @ 2009-10-05 15:49 UTC (permalink / raw)
  To: adfas asd; +Cc: linux-raid

> WAT?!  How can I mount an unpartitioned/unformatted array?  This is more
> direct than using a filesystem, although there would be no journalling,
> right?  If I were to mount my small disk as / and the unpartitioned array
> as /home, for example.

What he's saying is that RAID and LVM do not need partitions to work
properly. You can create a RAID array on the raw (unpartitioned)
/dev/sdX devices and then create an LVM volume group using a physical
volume (PV) on a raw (unpartitioned) /dev/mdX (RAID) device.

The filesystem still needs a partition/volume, but that can be created
either by partitioning /dev/mdX *or* by creating a logical volume in LVM.

For example, on my system at home I use RAID-6 created using
/dev/sd[b-f]. The resulting /dev/md0 acts as a physical volume for LVM
and I've created several Logical Volumes aka "partitions" for the
filesystems to reside in.
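
Spelled out, that stack is just the following (device, volume group, and
size names are illustrative):

    mdadm --create /dev/md0 --level=6 --raid-devices=5 /dev/sd[b-f]
    pvcreate /dev/md0                  # the whole raw array is one PV
    vgcreate vg0 /dev/md0
    lvcreate -L 500G -n video vg0      # a "partition" as a logical volume
    mkfs.xfs /dev/vg0/video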

> Seems like journalling/crash recovery is pretty vital.  Could skipping partitions really be recommendable?

See above. You still have a partition/volume of some sort where the
filesystem is created.


-- 
Drew

"Nothing in life is to be feared. It is only to be understood."
--Marie Curie

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: A few remaining questions about installing to RAID-10
  2009-10-05 15:49   ` Drew
@ 2009-10-05 15:55     ` adfas asd
  2009-10-05 16:19       ` Drew
  2009-10-05 18:13     ` Leslie Rhorer
  1 sibling, 1 reply; 37+ messages in thread
From: adfas asd @ 2009-10-05 15:55 UTC (permalink / raw)
  To: linux-raid

--- On Mon, 10/5/09, Drew <drew.kay@gmail.com> wrote:
> > Seems like journalling/crash recovery is pretty vital.  Could
> > skipping partitions really be recommendable?
> 
> See above. You still have a partition/volume of some sort
> where the
> filesystem is created.

... But no journalling for crash recovery?

Can this really be recommendable?  If there's no answer, I'll presume not.



^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: A few remaining questions about installing to RAID-10
  2009-10-05 15:55     ` adfas asd
@ 2009-10-05 16:19       ` Drew
  0 siblings, 0 replies; 37+ messages in thread
From: Drew @ 2009-10-05 16:19 UTC (permalink / raw)
  To: adfas asd; +Cc: linux-raid

> ... But no journalling for crash recovery?

Journalling is a function of the filesystem, not the block devices.
RAID is there to make sure you won't lose data if a physical disk
dies.
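
To make it concrete (the array name is illustrative), the journal comes
from the mkfs step, not from any partitioning:

    mkfs.xfs /dev/md0     # journalled filesystem straight onto the array
    mkfs.ext2 /dev/md0    # same device, no journal -- it's an either/or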



-- 
Drew

"Nothing in life is to be feared. It is only to be understood."
--Marie Curie

^ permalink raw reply	[flat|nested] 37+ messages in thread

* RE: A few remaining questions about installing to RAID-10
  2009-10-05 15:30 ` A few remaining questions about installing to RAID-10 adfas asd
  2009-10-05 15:49   ` Drew
@ 2009-10-05 18:03   ` Leslie Rhorer
  1 sibling, 0 replies; 37+ messages in thread
From: Leslie Rhorer @ 2009-10-05 18:03 UTC (permalink / raw)
  To: linux-raid



> -----Original Message-----
> From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
> owner@vger.kernel.org] On Behalf Of adfas asd
> Sent: Monday, October 05, 2009 10:31 AM
> To: linux-raid@vger.kernel.org
> Subject: RE: A few remaining questions about installing to RAID-10
> 
> WAT?!  How can I mount an unpartitioned/unformatted array?  This is more

	I didn't say "unformatted".  Mkfs does not require a partition, but
you're going to need a file system if you are going to have files.

> direct than using a filesystem, although there would be no journalling,
> right?

	Journaling is a facility of the file system, not the partition.  If
you install a journalled file system, then you will have a journal.  If you
install something like ext2, then you won't.  I happen to be using XFS,
which is a journalling file system, on my video server and the backup server
arrays.

>  If I were to mount my small disk as / and the unpartitioned array
> as /home, for example.

	In your case it may not make too much difference, as I take it the
system is not logged into by anyone but you (or am I mistaken?), but,
generally speaking, I would shy away from putting /home on anything which
might not be available at login time, or which is likely to be taken
offline when the system is up and running.  It can be done, of course,
but logins can be a
bit problematical for ordinary users when /home is not present.  That said,
yes, the small disk can be / and the array can be /home.  I don't recommend
running without swap, so if the small disk is the only other physical drive,
then it does need to be partitioned at a minimum into your main space and
some swap.  A separate /boot partition is not a terrible idea, either,
especially on a multi-boot system.  I have some systems with a separate
/boot partition and some without.  Sometimes there is a really good reason
to make /var a separate partition, but in your case I think not.

> Seems like journalling/crash recovery is pretty vital.  Could skipping
> partitions really be recommendable?

	Not for that purpose, no.  They have nothing to do with one another.
Partitioning allows a single disk to have multiple mount points, or to boot
more than one OS, or to allow multiple file systems serving different
purposes to all reside on a single physical disk.  Journalling creates a
cache for writes which allows a file system to better survive a dirty
shutdown without corrupting the file system.  The journal can be on the same
physical disk as the file system or a completely separate disk system.  It
can be on a separate partition, but it is not required.
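
	As an illustrative example, XFS can even keep its log on a
completely separate device from the filesystem it serves:

    mkfs.xfs -l logdev=/dev/sdg1,size=64m /dev/md0
    mount -o logdev=/dev/sdg1 /dev/md0 /mnt/video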



^ permalink raw reply	[flat|nested] 37+ messages in thread

* RE: A few remaining questions about installing to RAID-10
  2009-10-05 15:49   ` Drew
  2009-10-05 15:55     ` adfas asd
@ 2009-10-05 18:13     ` Leslie Rhorer
  2009-10-05 18:41       ` Drew
  1 sibling, 1 reply; 37+ messages in thread
From: Leslie Rhorer @ 2009-10-05 18:13 UTC (permalink / raw)
  To: linux-raid

> > WAT?!  How can I mount an unpartitioned/unformatted array?  This is more
> > direct than using a filesystem, although there would be no journalling,
> > right?  If I were to mount my small disk as / and the unpartitioned array
> > as /home, for example.
> 
> What he's saying is that RAID and LVM do not need partitions to work
> properly. You can create a RAID array on the raw (unpartitioned)
> /dev/sdX devices and then create an LVM volume group using a physical
> volume (PV) on a raw (unpartitioned) /dev/mdX (RAID) device.

	True, but it is simpler than that.  Partitions are not needed in
general unless one needs to create a separate task space on a single drive
volume.  If one has no swap space, then it is entirely possible to run a
Linux system with no partitions at all.  The file system can be created on a
raw disk, and as long as the MBR is good and the boot loader can read the
file system, you're good.  Note I don't generally recommend running without a
swap space, and going to extra trouble just to prevent partitioning is not
very productive, but there is no reason to partition a drive if it is not
necessary, whether the "drive" is a single physical disk or an array.

> The filesystem still needs a partition/volume, but that can be created
> either by partitioning /dev/mdX *or* by creating a logical volume in LVM.

	It doesn't need a partition, just a block device.  That device can
be a partition on a disk, a whole disk, or an array.
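
	Any of these is an equally valid mkfs target (names illustrative):

    mkfs.xfs /dev/sdb1    # a partition
    mkfs.xfs /dev/sdb     # a whole raw disk
    mkfs.xfs /dev/md0     # an md array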


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: A few remaining questions about installing to RAID-10
  2009-10-05 18:13     ` Leslie Rhorer
@ 2009-10-05 18:41       ` Drew
  2009-10-06  0:50         ` Leslie Rhorer
  0 siblings, 1 reply; 37+ messages in thread
From: Drew @ 2009-10-05 18:41 UTC (permalink / raw)
  To: Leslie Rhorer; +Cc: linux-raid

> 	True, but it is simpler than that.  Partitions are not needed in
> general unless one needs to create a separate task space on a single drive
> volume.  If one has no swap space, then it is entirely possible to run a
> Linux system with no partitions at all.  The file system can be created on a
> raw disk, and as long as the MBR is good and the boot loader can read the
> file system, you're good.  Note I don't generally recommend running without a
> swap space, and going to extra trouble just to prevent partitioning is not
> very productive, but there is no reason to partition a drive if it is not
> necessary, whether the "drive" is a single physical disk or an array.

I did not know that. I always assumed that for physical disks you
needed a partition of some sort so the filesystem doesn't stomp all
over the boot sector (where used).  That said, I think I'll stick to
partitioning my boot drive.  Feels safer somehow.

>> The filesystem still needs a partition/volume, but that can be created
>> either by partitioning /dev/mdX *or* by creating a logical volume in LVM.
>
>        It doesn't need a partition, just a block device.  That device can
> be a partition on a disk, a whole disk, or an array.

I forgot you can do that with mdX. It's been a while since I've made
filesystems (other than boot drives) without LVM.


-- 
Drew

"Nothing in life is to be feared. It is only to be understood."
--Marie Curie

^ permalink raw reply	[flat|nested] 37+ messages in thread

* RE: A few remaining questions about installing to RAID-10
  2009-10-05 18:41       ` Drew
@ 2009-10-06  0:50         ` Leslie Rhorer
  2009-10-06  3:42           ` Drew
  2009-10-07 17:58           ` adfas asd
  0 siblings, 2 replies; 37+ messages in thread
From: Leslie Rhorer @ 2009-10-06  0:50 UTC (permalink / raw)
  To: linux-raid

> > necessary, whether the "drive" is a single physical disk or an array.
> 
> I did not know that. I always assumed that for physical disks you
> needed a partition of some sort so the filesystem doesn't stomp all
> over the boot sector (where used).

	If the boot sector is not used, though, and there is no swap on the
drive, then partitioning is not absolutely required.  If the MBR is used,
then indeed one must make sure the file system doesn't stomp on it.

> That said, I think I'll stick to
> partitioning my boot drive.  Feels safer somehow.

	Oh, I agree, without question.  I'm a big fan of a small boot drive
with 3 or 4 partitions and one or more arrays with no partitions at all.

> >> The filesystem still needs a partition/volume, but that can be created
> >> either by partitioning /dev/mdX *or* by creating a logical volume in LVM.
> >
> >        It doesn't need a partition, just a block device.  That device
> can
> > be a partition on a disk, a whole disk, or an array.
> 
> I forgot you can do that with mdX. It's been a while since I've made
> filesystems (other than boot drives) without LVM.

	Yep.  Sure can.  I've used LVM, and for some applications it's
great.  Indeed, as I mentioned, adfas might be well served with LVM volumes
on each machine.  It certainly would provide great performance, although as
I also mentioned, I don't think drive performance is really his problem, per
se.


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: A few remaining questions about installing to RAID-10
  2009-10-06  0:50         ` Leslie Rhorer
@ 2009-10-06  3:42           ` Drew
  2009-10-06  9:11             ` Keld Jørn Simonsen
  2009-10-06 13:24             ` adfas asd
  2009-10-07 17:58           ` adfas asd
  1 sibling, 2 replies; 37+ messages in thread
From: Drew @ 2009-10-06  3:42 UTC (permalink / raw)
  To: linux-raid

>        Yep.  Sure can.  I've used LVM, and for some applications it's
> great.  Indeed, as I mentioned, adfas might be well served with LVM volumes
> on each machine.  It certainly would provide great performance, although as
> I also mentioned, I don't think drive performance is really his problem, per
> se.

I follow the threads on the MythTV list so I'd agree there are better
ways to partition/RAID the system. The MythTV guys are, in many cases,
pushing terabytes of data through consumer-grade SATA drives, RAID,
LVM and high-performance filesystems like JFS/XFS. It's amazing
sometimes the lengths they go to in tuning their rigs to handle the
various on-disk buffers and whatnot MythTV seems to use.

Adfas: You may want to consider talking with the boys over at MythTV.
If you want to tune your rig for MythTV those guys are the best ones
to talk with as they know their app very well.


-- 
Drew

"Nothing in life is to be feared. It is only to be understood."
--Marie Curie

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: A few remaining questions about installing to RAID-10
  2009-10-06  3:42           ` Drew
@ 2009-10-06  9:11             ` Keld Jørn Simonsen
  2009-10-06 13:24             ` adfas asd
  1 sibling, 0 replies; 37+ messages in thread
From: Keld Jørn Simonsen @ 2009-10-06  9:11 UTC (permalink / raw)
  To: Drew; +Cc: linux-raid

On Mon, Oct 05, 2009 at 08:42:22PM -0700, Drew wrote:
> >        Yep.  Sure can.  I've used LVM, and for some applications it's
> > great.  Indeed, as I mentioned, adfas might be well served with LVM volumes
> > on each machine.  It certainly would provide great performance, although as
> > I also mentioned, I don't think drive performance is really his problem, per
> > se.
> 
> I follow the threads on the MythTV list so I'd agree there are better
> ways to partition/RAID the system. The MythTV guys are, in many cases,
> pushing terabytes of data through consumer-grade SATA drives, RAID,
> LVM and high-performance filesystems like JFS/XFS. It's amazing
> sometimes the lengths they go to in tuning their rigs to handle the
> various on-disk buffers and whatnot MythTV seems to use.
> 
> Adfas: You may want to consider talking with the boys over at MythTV.
> If you want to tune your rig for MythTV those guys are the best ones
> to talk with as they know their app very well.

Are there any conclusions from the mythtv group?

Best regards
keld

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: A few remaining questions about installing to RAID-10
  2009-10-06  3:42           ` Drew
  2009-10-06  9:11             ` Keld Jørn Simonsen
@ 2009-10-06 13:24             ` adfas asd
  2009-10-06 22:57               ` Ben DJ
  1 sibling, 1 reply; 37+ messages in thread
From: adfas asd @ 2009-10-06 13:24 UTC (permalink / raw)
  To: linux-raid

Thanks for the input, guys.  Wish I could get similar info for my NAS thread.


--- On Mon, 10/5/09, Drew <drew.kay@gmail.com> wrote:

> From: Drew <drew.kay@gmail.com>
> Subject: Re: A few remaining questions about installing to RAID-10
> To: linux-raid@vger.kernel.org
> Date: Monday, October 5, 2009, 8:42 PM
> >        Yep.  Sure can.  I've used LVM, and for some applications it's
> > great.  Indeed, as I mentioned, adfas might be well served with LVM
> > volumes on each machine.  It certainly would provide great performance,
> > although as I also mentioned, I don't think drive performance is really
> > his problem, per se.
> 
> I follow the threads on the MythTV list so I'd agree there are better
> ways to partition/RAID the system. The MythTV guys are, in many cases,
> pushing terabytes of data through consumer-grade SATA drives, RAID,
> LVM and high-performance filesystems like JFS/XFS. It's amazing
> sometimes the lengths they go to in tuning their rigs to handle the
> various on-disk buffers and whatnot MythTV seems to use.
> 
> Adfas: You may want to consider talking with the boys over at MythTV.
> If you want to tune your rig for MythTV those guys are the best ones
> to talk with as they know their app very well.

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: A few remaining questions about installing to RAID-10
  2009-10-06 13:24             ` adfas asd
@ 2009-10-06 22:57               ` Ben DJ
  0 siblings, 0 replies; 37+ messages in thread
From: Ben DJ @ 2009-10-06 22:57 UTC (permalink / raw)
  To: linux-raid

MythTV?  Go away, come back & it appears my thread's been hijacked
without a 2nd thought  :-(

^ permalink raw reply	[flat|nested] 37+ messages in thread

* RE: A few remaining questions about installing to RAID-10
  2009-10-06  0:50         ` Leslie Rhorer
  2009-10-06  3:42           ` Drew
@ 2009-10-07 17:58           ` adfas asd
  2009-10-25  5:20             ` Leslie Rhorer
  1 sibling, 1 reply; 37+ messages in thread
From: adfas asd @ 2009-10-07 17:58 UTC (permalink / raw)
  To: linux-raid

--- On Mon, 10/5/09, Leslie Rhorer <lrhorer@satx.rr.com> wrote:
>  Indeed, as I mentioned, adfas might be well served with LVM volumes
> on each machine.  It certainly would provide great performance, although
> as I also mentioned, I don't think drive performance is really his
> problem, per se.

Hang on a sec.  LVM would make it easy to add drives.  And as I
understood it, you recommend a RAID in the garage and another in the
HTPC, LVMed together?  Well with this I could add another array, but
that's not what's needed.  I'd need to add a drive to the garage and
another to the HTPC, which means each of those arrays would need to be
completely reconstructed.

OTOH if I set up the garage as an LVM and the HTPC as another, then RAID
those LVMs together, maybe I could add a drive to each LVM... AS LONG AS
I add the same size drive to BOTH the garage and HTPC for expansion?
And no need to completely reconstruct the RAID array?



^ permalink raw reply	[flat|nested] 37+ messages in thread

* RE: A few remaining questions about installing to RAID-10
  2009-10-07 17:58           ` adfas asd
@ 2009-10-25  5:20             ` Leslie Rhorer
  0 siblings, 0 replies; 37+ messages in thread
From: Leslie Rhorer @ 2009-10-25  5:20 UTC (permalink / raw)
  To: linux-raid


	I apologize for the late reply.  In all the myriad e-mail I get, I
missed this one.

> --- On Mon, 10/5/09, Leslie Rhorer <lrhorer@satx.rr.com> wrote:
> >  Indeed, as I mentioned, adfas might be well
> > served with LVM volumes
> > on each machine.  It certainly would provide great
> > performance, although as
> > I also mentioned, I don't think drive performance is really
> > his problem, per
> > se.
> 
> Hang on a sec.  LVM would make it easy to add drives.  And as I understood
> it, you recommend a RAID in the garage and another in the HTPC, LVMed
> together?

	No.  I am suggesting an LVM volume in the HTPC and an LVM volume in
the garage.  No RAID at all.
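
	With plain LVM volumes, adding a drive later is just (names and
sizes illustrative):

    pvcreate /dev/sdd                   # new drive in the HTPC
    vgextend vg_htpc /dev/sdd
    lvextend -L +1T /dev/vg_htpc/video
    xfs_growfs /mnt/video               # XFS grows while mounted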

> Well with this I could add another array, but that's not what's
> needed.  I'd need to add a drive to the garage and another to the HTPC,
> which means each of those arrays would need to be completely
> reconstructed.
> 
> OTOH if I set up the garage as an LVM and the HTPC as another, then RAID
> those LVMs together, maybe I could add a drive to each LVM... AS LONG AS I
> add the same size drive to BOTH the garage and HTPC for expansion?  And no
> need to completely reconstruct the RAID array?

	That might be possible.  I don't really recommend it, but it might
be made to work.


^ permalink raw reply	[flat|nested] 37+ messages in thread

end of thread, other threads:[~2009-10-25  5:20 UTC | newest]

Thread overview: 37+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <20091005145711322.FNMG18886@cdptpa-omta02.mail.rr.com>
2009-10-05 15:30 ` A few remaining questions about installing to RAID-10 adfas asd
2009-10-05 15:49   ` Drew
2009-10-05 15:55     ` adfas asd
2009-10-05 16:19       ` Drew
2009-10-05 18:13     ` Leslie Rhorer
2009-10-05 18:41       ` Drew
2009-10-06  0:50         ` Leslie Rhorer
2009-10-06  3:42           ` Drew
2009-10-06  9:11             ` Keld Jørn Simonsen
2009-10-06 13:24             ` adfas asd
2009-10-06 22:57               ` Ben DJ
2009-10-07 17:58           ` adfas asd
2009-10-25  5:20             ` Leslie Rhorer
2009-10-05 18:03   ` Leslie Rhorer
2009-10-02  1:27 Ben DJ
2009-10-02  2:31 ` Ben DJ
2009-10-02  3:16 ` Neil Brown
2009-10-02  3:46   ` Ben DJ
2009-10-02 12:30     ` adfas asd
2009-10-02 14:54       ` Ben DJ
2009-10-02 15:06         ` Keld Jørn Simonsen
2009-10-02 15:15         ` Robin Hill
2009-10-02 15:46           ` Ben DJ
2009-10-02 15:59             ` Robin Hill
2009-10-02 16:31               ` Ben DJ
2009-10-02 17:01                 ` Robin Hill
2009-10-02 17:24                   ` Ben DJ
2009-10-03 14:12         ` adfas asd
2009-10-03 16:55           ` Keld Jørn Simonsen
2009-10-03 17:02           ` Thomas Fjellstrom
2009-10-03 19:05           ` Drew
2009-10-03 21:09             ` adfas asd
2009-10-04  5:59               ` Thomas Fjellstrom
2009-10-05  4:59           ` Leslie Rhorer
2009-10-05 13:58             ` adfas asd
2009-10-02  7:51 ` Keld Jørn Simonsen
2009-10-02  7:54 ` Robin Hill
