* RAID Configuration For New Home Server
@ 2010-06-01 17:08 Carlos Mennens
2010-06-01 17:56 ` Mark Knecht
` (2 more replies)
0 siblings, 3 replies; 25+ messages in thread
From: Carlos Mennens @ 2010-06-01 17:08 UTC (permalink / raw)
To: Mdadm
I built a new home server this weekend & am ready to load my O.S.
(Arch Linux) on it today. It has 4 x 320 GB Seagate Barracudas
(SATA). I don't really have a specific function of this server at home
beyond holding my data reliably and decent read / write performance.
My question to you experts is what do you recommend I configure for
this particular configuration? Should I run RAID 5 or RAID 10? To
spare or not to spare? I'd really appreciate any suggestions for
general overall function on this matter.
-Carlos
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: RAID Configuration For New Home Server
2010-06-01 17:08 RAID Configuration For New Home Server Carlos Mennens
@ 2010-06-01 17:56 ` Mark Knecht
2010-06-01 22:59 ` Ryan Wagoner
2010-06-02 4:03 ` Leslie Rhorer
2010-06-02 6:08 ` Simon Matthews
2 siblings, 1 reply; 25+ messages in thread
From: Mark Knecht @ 2010-06-01 17:56 UTC (permalink / raw)
To: Carlos Mennens; +Cc: Mdadm
On Tue, Jun 1, 2010 at 10:08 AM, Carlos Mennens <carloswill@gmail.com> wrote:
> I built a new home server this weekend & am ready to load my O.S.
> (Arch Linux) on it today. It has 4 x 320 GB Seagate Barracuda's
> (SATA). I don't really have a specific function of this server at home
> beyond holding my data reliably and decent read / write performance.
> My question to you experts is what do you recommend I configure for
> this particular configuration? Should I run RAID 5 or RAID 10? To
> spare or not to spare? I really appreciate any best suggestions for
> general over all function on this matter.
>
> -Carlos
I'm not an expert, so take my input with a grain of salt, but you don't
state how much space you need on the machine. Is 320GB enough? If so,
consider a 3-drive RAID1 and save the 4th drive as a spare. That's
what I run, using three 500GB WD drives, and it has worked well over
the last few months.
Keep in mind that, as a newbie, if you repeat any of my learning curve,
booting from RAID is more difficult. I chose not to use RAID for the
/boot partition and just duplicated the grub setup and kernel + grub
files. If my drive 0 goes down, I can reset the boot drive in the BIOS
and boot from the second or third drive.
Hope this helps,
Mark
* Re: RAID Configuration For New Home Server
2010-06-01 17:56 ` Mark Knecht
@ 2010-06-01 22:59 ` Ryan Wagoner
0 siblings, 0 replies; 25+ messages in thread
From: Ryan Wagoner @ 2010-06-01 22:59 UTC (permalink / raw)
To: Mdadm
On Tue, Jun 1, 2010 at 1:56 PM, Mark Knecht <markknecht@gmail.com> wrote:
> On Tue, Jun 1, 2010 at 10:08 AM, Carlos Mennens <carloswill@gmail.com> wrote:
>> I built a new home server this weekend & am ready to load my O.S.
>> (Arch Linux) on it today. It has 4 x 320 GB Seagate Barracuda's
>> (SATA). I don't really have a specific function of this server at home
>> beyond holding my data reliably and decent read / write performance.
>> My question to you experts is what do you recommend I configure for
>> this particular configuration? Should I run RAID 5 or RAID 10? To
>> spare or not to spare? I really appreciate any best suggestions for
>> general over all function on this matter.
>>
>> -Carlos
>
> I'm not an expert so take my input with a grain of salt but you don't
> state how much space you need on the machine. Is 320GB enough? If so
> consider a 3-drive RAID1 and save the 4th drive as a spare. That's
> what I run using 3-drive 500GB WD drives. Works well over the last few
> months.
>
> Keep in mind that as a newbie, if you repeat any of my learning curve,
> booting from RAID is more difficult. I chose to not use RAID for the
> /boot sector and just duplicated the grub setup and kernel + grub
> files. If my Drive 0 goes down and I cannot I can reset the boot drive
> in BIOS and boot from the second or third drives.
>
> Hope this helps,
> Mark
> --
Besides that, you have these options:
RAID 5 with 3 drives and 1 spare - Gives you 640GB (320GB x 2) usable space
RAID 5 with 4 drives - Gives you 960GB (320GB x 3) usable space
RAID 6 with 4 drives - Gives you 640GB (320GB x 2) usable space
RAID 10 with 4 drives - Gives you 640GB (320GB x 2) usable space
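The usable-space figures above follow from simple per-level formulas; a quick sketch in shell arithmetic, assuming the 4 x 320GB drives from the original post:

```shell
# Usable capacity per RAID level for n identical drives of a given size (GB).
n=4; size=320
raid5=$(( (n - 1) * size ))   # one drive's worth goes to parity
raid6=$(( (n - 2) * size ))   # two drives' worth go to parity
raid10=$(( n / 2 * size ))    # half the drives mirror the other half
echo "RAID5=${raid5}GB RAID6=${raid6}GB RAID10=${raid10}GB"
```

For the 4 x 320GB case this prints RAID5=960GB RAID6=640GB RAID10=640GB, matching the list above (the 3-drive-plus-spare RAID5 is just the n=3 case of the same formula).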
If you need more than 640GB of space then RAID 5 across the 4 drives
is the way to go. Otherwise it comes down to performance and risk.
RAID 10 gives you the best performance and protects you from 1 disk
failure, possibly 2 if the correct drives fail. RAID 6 will protect
you from any 2 drive failures, but with more parity overhead than RAID
5.
I'm a big fan of RAID 10 if you can afford the space and need the
speed. Otherwise, for smaller drives and fewer spindles, RAID 5 is
great. If you are using TB+ size drives across 5+ spindles, RAID 6 is
great for reliability.
My home setup consists of 3 x 1TB drives in RAID 5. With hdparm I get
around 85MB/s on the drive and 130MB/s on the RAID 5. The array
resyncs at 85MB/s, which takes around 3 hours to complete. From my
Windows desktop over SMB I am getting around 30-50MB/s. Gigabit
Ethernet in a perfect world maxes out around 110MB/s.
Ryan
* RE: RAID Configuration For New Home Server
2010-06-01 17:08 RAID Configuration For New Home Server Carlos Mennens
2010-06-01 17:56 ` Mark Knecht
@ 2010-06-02 4:03 ` Leslie Rhorer
2010-06-02 6:08 ` Simon Matthews
2 siblings, 0 replies; 25+ messages in thread
From: Leslie Rhorer @ 2010-06-02 4:03 UTC (permalink / raw)
To: 'Carlos Mennens', 'Mdadm'
> -----Original Message-----
> From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
> owner@vger.kernel.org] On Behalf Of Carlos Mennens
> Sent: Tuesday, June 01, 2010 12:08 PM
> To: Mdadm
> Subject: RAID Configuration For New Home Server
>
> I built a new home server this weekend & am ready to load my O.S.
> (Arch Linux) on it today. It has 4 x 320 GB Seagate Barracuda's
> (SATA). I don't really have a specific function of this server at home
> beyond holding my data reliably and decent read / write performance.
> My question to you experts is what do you recommend I configure for
> this particular configuration? Should I run RAID 5 or RAID 10? To
> spare or not to spare? I really appreciate any best suggestions for
> general over all function on this matter.
Well, really, you need to define your needs more thoroughly and then
configure the system accordingly. What constitutes "decent" performance?
How is the server attached to your LAN? What sort of load is the server
going to encounter? How much down time can you tolerate in the event of a
failure? If your LAN is only 100Mbps or worse wireless, then any RAID
system is going to deliver better performance than the LAN connection can
handle. If you have one or perhaps even multiple Gig-E links into the LAN,
however, it's a very different matter. How many workstations do you have
online simultaneously, and what horsepower? What application mix? My
servers, for example, have Gig-E links into the LAN, and I have several
moderately high power workstations attached, but usually only 1 or 2 are
active at a time, and much of what is served is video with a maximum of 25
Mbps or so. The main server has a 12 x 1T RAID6 array and the backup has a
9 x 1T RAID6 array. The only time they are ever loaded anywhere near
capacity is when backing up to or restoring from the backup server, and even
then the bottleneck is often the 1Gbps Ethernet links.
Also, what many people fail to take into account is growth. Your
decision should include a growth strategy. RAID10 is fine for 4 drives, but
if the data is likely to eventually reach more than 6 drives in extent,
RAID10, with a minimum of 12 drives and counting, isn't so attractive. In
my case, for example, a pair of 9T RAID10 arrays would not only cost a great
deal to build, but the power requirements would also be significant.
Your backup strategy is also a factor. Having a backup server
online as I do means the marginally greater risk of a RAID5 system over a
RAID6 might be worth it for a relatively small number of member disks.
Indeed, both servers were RAID5 until each respective system exceeded 6 data
disks, at which point they were migrated to RAID6.
I suggest you think about your needs a little more carefully and
detail them here concisely, and we can make better recommendations.
* Re: RAID Configuration For New Home Server
2010-06-01 17:08 RAID Configuration For New Home Server Carlos Mennens
2010-06-01 17:56 ` Mark Knecht
2010-06-02 4:03 ` Leslie Rhorer
@ 2010-06-02 6:08 ` Simon Matthews
2010-06-02 6:33 ` Leslie Rhorer
2 siblings, 1 reply; 25+ messages in thread
From: Simon Matthews @ 2010-06-02 6:08 UTC (permalink / raw)
To: Carlos Mennens; +Cc: Mdadm
On Tue, Jun 1, 2010 at 10:08 AM, Carlos Mennens <carloswill@gmail.com> wrote:
> I built a new home server this weekend & am ready to load my O.S.
> (Arch Linux) on it today. It has 4 x 320 GB Seagate Barracuda's
> (SATA). I don't really have a specific function of this server at home
> beyond holding my data reliably and decent read / write performance.
> My question to you experts is what do you recommend I configure for
> this particular configuration? Should I run RAID 5 or RAID 10? To
> spare or not to spare? I really appreciate any best suggestions for
> general over all function on this matter.
How about 2 drives in a RAID0 or LVM configuration and the other two
in another machine and used for backups?
Simon
* RE: RAID Configuration For New Home Server
2010-06-02 6:08 ` Simon Matthews
@ 2010-06-02 6:33 ` Leslie Rhorer
2010-06-02 7:51 ` tron
2010-06-02 7:54 ` tron
0 siblings, 2 replies; 25+ messages in thread
From: Leslie Rhorer @ 2010-06-02 6:33 UTC (permalink / raw)
To: 'Simon Matthews'; +Cc: 'Mdadm'
> On Tue, Jun 1, 2010 at 10:08 AM, Carlos Mennens <carloswill@gmail.com>
> wrote:
> > I built a new home server this weekend & am ready to load my O.S.
> > (Arch Linux) on it today. It has 4 x 320 GB Seagate Barracuda's
> > (SATA). I don't really have a specific function of this server at home
> > beyond holding my data reliably and decent read / write performance.
> > My question to you experts is what do you recommend I configure for
> > this particular configuration? Should I run RAID 5 or RAID 10? To
> > spare or not to spare? I really appreciate any best suggestions for
> > general over all function on this matter.
>
> How about 2 drives in a RAID0 or LVM configuration and the other two
> in another machine and used for backups?
If down time is not an issue, and he is comfortable living with a
single unprotected system in the event of a drive failure, that's not a bad
solution at all, especially given the small size of his arrays. If it were
me, I would invest in at least one more drive to make one of the arrays a
RAID5 and the other a RAID0 (or LVM). That way, in the event of a drive
failure, he is still left with redundancy of one form or another.
Additionally, if he makes the primary array the RAID5 array, then no single
drive failure will take his server offline, and he can recover with no down
time. If performance is more important than zero down time, then he could
make his backup the RAID5 array. That way he doesn't have to worry so much
about a second drive failure while he is restoring data to his failed
primary array after it is repaired. Indeed, a RAID5 array backed up by an
LVM volume is precisely what I ran before my primary server exceeded 4T of
data.
* RE: RAID Configuration For New Home Server
2010-06-02 6:33 ` Leslie Rhorer
@ 2010-06-02 7:51 ` tron
2010-06-02 14:24 ` Leslie Rhorer
2010-06-02 7:54 ` tron
1 sibling, 1 reply; 25+ messages in thread
From: tron @ 2010-06-02 7:51 UTC (permalink / raw)
To: Leslie Rhorer; +Cc: 'Simon Matthews', 'Mdadm'
>> On Tue, Jun 1, 2010 at 10:08 AM, Carlos Mennens <carloswill@gmail.com>
>> wrote:
>> > I built a new home server this weekend & am ready to load my O.S.
>> > (Arch Linux) on it today. It has 4 x 320 GB Seagate Barracuda's
>> > (SATA). I don't really have a specific function of this server at home
>> > beyond holding my data reliably and decent read / write performance.
>> > My question to you experts is what do you recommend I configure for
>> > this particular configuration? Should I run RAID 5 or RAID 10? To
>> > spare or not to spare? I really appreciate any best suggestions for
>> > general over all function on this matter.
>>
>> How about 2 drives in a RAID0 or LVM configuration and the other two
>> in another machine and used for backups?
>
> If down time is not an issue, and he is comfortable living with a
> single unprotected system in the event of a drive failure, that's not a
> bad
> solution, at all, especially given the small size of his arrays. If it
> were
> me, I would invest in at least one more drive to make one of the arrays a
> RAID5 and the other a RAID0 (or LVM). That way, in the event of a drive
> failure, he is still left with redundancy of one form or another.
> Additionally, if he makes the primary array the RAID5 array, the no single
> drive failure will take his server offline, and he can recover with no
> down
> time. If performance is more important than zero down time, then he could
> make his backup the RAID5 array. That way he doesn't have to worry so
> much
> about a second drive failure while he is restoring data to his failed
> primary array after it is repaired. Indeed, a RAID5 array backed up by an
> LVM volume is precisely what I ran before my primary server exceeded 4T of
> data.
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
There are about as many answers to this as there are people using your
setup, so let's all agree that there's no "one way" of doing things.
With that being said, let me share a setup I recently configured for a
slackbuild and slackbuild-storage server at home. I had 5 x 320GB SATA
drives to spare since I upgraded my main fileserver. Since mdadm works at
the partition level, I started by creating my root (/) partition on one
drive and setting it to 20GB (/dev/sda1), then a partition for storage
(/dev/sda2 -> /var/storage) with the remaining disk space. I proceeded to
copy the partition table over to my other drives using sfdisk (sfdisk -d
/dev/sda | sfdisk /dev/sdX). Next I created a RAID1 using all the /dev/sdX1
partitions (mdadm -C /dev/md0 -l 1 -n 5 /dev/sd[a,b,c,d,e]1) and after
that created the RAID5 (mdadm -C /dev/md1 -l 5 -n 5 -x 0
/dev/sd[a,b,c,d,e]2) with no spares (that's what the -x 0 is for).
This gives you a rock-solid RAID1 configuration for your / while also
benefiting from relatively safe storage using RAID5. It's also really easy
to set up, and if you want to add more storage you can just pop another
drive in, copy the partition table using sfdisk, and add the new
partitions to the existing RAID devices using mdadm.
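Collected as a script, the sequence above might look like this. It is a dry-run sketch: the device names are the examples from the description, and `run` only echoes each command, so nothing on the system is touched.

```shell
#!/bin/sh
# Dry run: print each command instead of executing it.
# Replace the body of run() with `eval "$1"` (as root) to actually run them.
run() { printf '%s\n' "$1"; }

# Clone /dev/sda's partition table to the other four drives.
for d in b c d e; do
  run "sfdisk -d /dev/sda | sfdisk /dev/sd$d"
done

# 5-way RAID1 across the small first partitions, for /.
run "mdadm -C /dev/md0 -l 1 -n 5 /dev/sd[abcde]1"

# RAID5 across the large second partitions, no spares (-x 0), for /var/storage.
run "mdadm -C /dev/md1 -l 5 -n 5 -x 0 /dev/sd[abcde]2"
```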
- Hakan
* RE: RAID Configuration For New Home Server
2010-06-02 6:33 ` Leslie Rhorer
2010-06-02 7:51 ` tron
@ 2010-06-02 7:54 ` tron
2010-06-02 13:00 ` Carlos Mennens
1 sibling, 1 reply; 25+ messages in thread
From: tron @ 2010-06-02 7:54 UTC (permalink / raw)
To: Leslie Rhorer; +Cc: 'Simon Matthews', 'Mdadm'
>> On Tue, Jun 1, 2010 at 10:08 AM, Carlos Mennens <carloswill@gmail.com>
>> wrote:
>> > I built a new home server this weekend & am ready to load my O.S.
>> > (Arch Linux) on it today. It has 4 x 320 GB Seagate Barracuda's
>> > (SATA). I don't really have a specific function of this server at home
>> > beyond holding my data reliably and decent read / write performance.
>> > My question to you experts is what do you recommend I configure for
>> > this particular configuration? Should I run RAID 5 or RAID 10? To
>> > spare or not to spare? I really appreciate any best suggestions for
>> > general over all function on this matter.
>>
>> How about 2 drives in a RAID0 or LVM configuration and the other two
>> in another machine and used for backups?
>
> If down time is not an issue, and he is comfortable living with a
> single unprotected system in the event of a drive failure, that's not a
> bad
> solution, at all, especially given the small size of his arrays. If it
> were
> me, I would invest in at least one more drive to make one of the arrays a
> RAID5 and the other a RAID0 (or LVM). That way, in the event of a drive
> failure, he is still left with redundancy of one form or another.
> Additionally, if he makes the primary array the RAID5 array, the no single
> drive failure will take his server offline, and he can recover with no
> down
> time. If performance is more important than zero down time, then he could
> make his backup the RAID5 array. That way he doesn't have to worry so
> much
> about a second drive failure while he is restoring data to his failed
> primary array after it is repaired. Indeed, a RAID5 array backed up by an
> LVM volume is precisely what I ran before my primary server exceeded 4T of
> data.
>
There are about as many answers to this as there are people using your
setup, so let's all agree that there's no "one way" of doing things.
With that being said, let me share a setup I recently configured for a
slackbuild and slackbuild-storage server at home. I had 5 x 320GB SATA
drives to spare since I upgraded my main fileserver, and since mdadm works
at the partition level I started by creating my root (/) partition on one
drive and setting it to 20GB (/dev/sda1), then a partition for storage
(/dev/sda2 -> /var/storage) with the remaining disk space. I proceeded to
copy the partition table over to my other drives using sfdisk (sfdisk -d
/dev/sda | sfdisk /dev/sdX). Next I created a RAID1 using all the /dev/sdX1
partitions (mdadm -C /dev/md0 -l 1 -n 5 /dev/sd[a,b,c,d,e]1) and after
that created the RAID5 with all the /dev/sdX2 partitions (mdadm -C
/dev/md1 -l 5 -n 5 -x 0 /dev/sd[a,b,c,d,e]2) with no spares (that's what
the -x 0 is for).
This gives you a rock-solid RAID1 configuration for your / while also
benefiting from relatively safe storage using RAID5. It's also really easy
to set up, and if you want to add more storage you can just pop another
drive in, copy the partition table using sfdisk, and add the new
partitions to the existing RAID devices using mdadm.
- Hakan
* Re: RAID Configuration For New Home Server
2010-06-02 7:54 ` tron
@ 2010-06-02 13:00 ` Carlos Mennens
2010-06-02 15:31 ` John Robinson
2010-06-05 17:19 ` Leslie Rhorer
0 siblings, 2 replies; 25+ messages in thread
From: Carlos Mennens @ 2010-06-02 13:00 UTC (permalink / raw)
To: Mdadm
On Wed, Jun 2, 2010 at 3:54 AM, <tron@monkii.net> wrote:
> There are about as many answers to this as there are people using your
> setup so let's all agree that there's no "one way" of doing things.
Thanks for all the suggestions, and you guys are right. There will be no
right or wrong answer here, but I just want to make sure I am not doing
anything that will hinder or limit performance in my system. At most my
system will simply idle and do nothing more than store a few files for
me, so I think RAID5 is going to be my selection for my / file system.
I have 4 identical drives and need to partition them all the same to
avoid any inconsistencies across the RAID array. Since GRUB doesn't
support RAID5 for /boot, I will need to make a 4-disk RAID1 for /boot
and do the same for swap. Does this look reasonable to you guys?
Partitioning the 1st disk below:
/dev/sda1 100 MB - RAID (bootable)
/dev/sda2 2 GB - RAID
/dev/sda3 320 GB - RAID
Do that same partition schema above for all 4 drives and then create my RAID:
/
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda3 /dev/sdb3
/dev/sdc3 /dev/sdd3
/boot
mdadm --create /dev/md1 --level=1 --raid-devices=4 /dev/sda1 /dev/sdb1
/dev/sdc1 /dev/sdd1
Swap
mdadm --create /dev/md2 --level=1 --raid-devices=4 /dev/sda2 /dev/sdb2
/dev/sdc2 /dev/sdd2
Would you guys change anything in my partition or 'mdadm' command?
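For completeness, the steps after creating those three arrays might look like the following. This is a dry-run sketch: `run` only echoes, and the filesystem choices are illustrative assumptions, not part of the plan above.

```shell
#!/bin/sh
# Dry run: print the follow-up commands rather than executing them.
run() { printf '%s\n' "$1"; }

run "mkfs.ext4 /dev/md0"                        # / on the RAID5
run "mkfs.ext2 /dev/md1"                        # /boot on the RAID1
run "mkswap /dev/md2"                           # swap on the RAID1
run "swapon /dev/md2"
run "mdadm --detail --scan >> /etc/mdadm.conf"  # record the arrays for boot
```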
* RE: RAID Configuration For New Home Server
2010-06-02 7:51 ` tron
@ 2010-06-02 14:24 ` Leslie Rhorer
0 siblings, 0 replies; 25+ messages in thread
From: Leslie Rhorer @ 2010-06-02 14:24 UTC (permalink / raw)
To: tron; +Cc: 'Simon Matthews', 'Mdadm'
> There are about as many answers to this as there are people using your
> setup
Times two.
> so let's all agree that there's no "one way" of doing things.
I think that's pretty evident.
> drives to spare since I upgraded my main fileserver, since mdadm works on
> partition level
Mdadm does not "work on the partition level". It should work with
any valid block device. My data arrays do not have partitioned members.
They are built entirely from raw disks with no partitions either underneath
or above the md layer. My boot arrays are built from a pair of disks with
three partitions - /, /boot, and swap.
> I started with creating my root (/) partition on one drive
> and setting it to 20gb (/dev/sda1), then a partition for storage
> (/dev/sda2 -> /var/storage) with the remaining diskspace. I preceded to
> copy the partition table over to my other drives using sfdisk (sfdisk-d
> /dev/sda|sfdisk /dev/sdX). Next I created a raid1 using all the /dev/sdX1
> partitions (mdadm -C /dev/md0 -l 1 -n 5 /dev/sd[a,b,c,d,e]1) and after
> that created the raid5 (mdadm -C /dev/md1 -l 5 -n 5 -x 0
> /dev/sd[a,b,c,d,e]2) with no spares that's what the -x 0 is for.
I prefer to use separate boot drives. I do have the boot drives set
up as arrays, but only as RAID1 pairs. The boot drives are set up as RAID1
merely to eliminate down time, not so much to prevent data loss, since the
boot drives can in the event of catastrophic failure be fairly easily
reconstructed from scratch.
> This has the benefit of having a rock solid raid1 configuration for your /
> and also benefiting from having a relatively safe storage using raid5.
> Also it's real easy to setup and if you want to add more storage you can
> just pop another drive in, copy the partition table using sfdisk and add
> the new partitions to the existing raid-devices using mdadm.
True, but after the 4th drive is added to the array, the level of
redundancy for / starts to get rather silly, and the waste of space may
start to add up to quite a bit.
* Re: RAID Configuration For New Home Server
2010-06-02 13:00 ` Carlos Mennens
@ 2010-06-02 15:31 ` John Robinson
2010-06-05 17:45 ` Kristleifur Daðason
2010-06-05 17:19 ` Leslie Rhorer
1 sibling, 1 reply; 25+ messages in thread
From: John Robinson @ 2010-06-02 15:31 UTC (permalink / raw)
To: Carlos Mennens; +Cc: Mdadm
On 02/06/2010 14:00, Carlos Mennens wrote:
> On Wed, Jun 2, 2010 at 3:54 AM, <tron@monkii.net> wrote:
>> There are about as many answers to this as there are people using your
>> setup so let's all agree that there's no "one way" of doing things.
>
> Thanks for all the suggestions and you guys are right. There will no
> right or wrong answer here but I just want to make sure I am not doing
> anything that will hinder / limit performance in my system. At most my
> system will simply idle and do nothing more than store a few files for
> me so I think RAID5 is going to be my selection for my / file system.
> I have 4 identical drives and need to partition them all the same to
> avoid any inconsistencies across the RAID array. Since Grub doesn't
> support RAID5 for /boot, I will need to make a 4 disk RAID1 for /boot
> & do the same for Swap. Does this look reasonable to you guys?
>
> Partitioning the 1st disk below:
>
> /dev/sda1 100 MB - RAID (bootable)
> /dev/sda2 2 GB - RAID
> /dev/sda3 320 GB - RAID
>
> Do that same partition schema above for all 4 drives and then create my RAID:
>
> /
> mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda3 /dev/sdb3
> /dev/sdc3 /dev/sdd3
>
> /boot
> mdadm --create /dev/md1 --level=1 --raid-devices=4 /dev/sda1 /dev/sdb1
> /dev/sdc1 /dev/sdd1
>
> Swap
> mdadm --create /dev/md2 --level=1 --raid-devices=4 /dev/sda2 /dev/sdb2
> /dev/sdc2 /dev/sdd2
>
> Would you guys change anything in my partition or 'mdadm' command?
I'd use RAID-10,f2 for the swap, and I'd consider a larger than default
chunk size for the RAID-5.
If I remember correctly, RAID-10 isn't resizeable at the moment, but for
swap that doesn't matter: if you add drives, you can turn swap off,
recreate the swap device with more drives in it, and turn swap on again.
I'd also try to avoid using several new drives all from the same batch
from the same manufacturer, but if that's what I had to use, I'd run
badblocks in write mode on them all first to run them in a little and
make sure all of them passed without any sectors being reallocated
(check with smartctl). That may just be paranoia on my part, but I did
have a batch of drives with 2 duff ones in it not long ago. Anyway,
having done that, I'd create the arrays with --assume-clean, because the
drives would definitely be full of zeroes.
Once built I'd add an internal write-intent bitmap with a much larger
than default chunk size (16MB probably) to the big RAID-5 array.
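Both of those suggestions reduce to a handful of commands; here is a dry-run sketch (`run` only echoes, the device names assume the partition scheme discussed earlier in the thread, and the fifth drive in the swap example is hypothetical):

```shell
#!/bin/sh
# Dry run: print each command instead of executing it (run for real as root).
run() { printf '%s\n' "$1"; }

# Recreate a RAID-10,f2 swap device after adding a drive; RAID-10 can't be
# grown in place, but swap contents are disposable.
run "swapoff /dev/md2"
run "mdadm --stop /dev/md2"
run "mdadm -C /dev/md2 -l 10 -p f2 -n 5 /dev/sd[abcde]2"
run "mkswap /dev/md2"
run "swapon /dev/md2"

# Add an internal write-intent bitmap with a 16MB chunk to the big RAID-5
# (--bitmap-chunk is given in KB here; newer mdadm also accepts suffixes).
run "mdadm --grow --bitmap=internal --bitmap-chunk=16384 /dev/md0"
```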
Cheers,
John.
* RE: RAID Configuration For New Home Server
2010-06-02 13:00 ` Carlos Mennens
2010-06-02 15:31 ` John Robinson
@ 2010-06-05 17:19 ` Leslie Rhorer
2010-06-05 19:41 ` Mark Knecht
1 sibling, 1 reply; 25+ messages in thread
From: Leslie Rhorer @ 2010-06-05 17:19 UTC (permalink / raw)
To: 'Carlos Mennens', 'Mdadm'
> -----Original Message-----
> From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
> owner@vger.kernel.org] On Behalf Of Carlos Mennens
> Sent: Wednesday, June 02, 2010 8:00 AM
> To: Mdadm
> Subject: Re: RAID Configuration For New Home Server
>
> On Wed, Jun 2, 2010 at 3:54 AM, <tron@monkii.net> wrote:
> > There are about as many answers to this as there are people using your
> > setup so let's all agree that there's no "one way" of doing things.
>
> Thanks for all the suggestions and you guys are right. There will no
> right or wrong answer here but I just want to make sure I am not doing
> anything that will hinder / limit performance in my system. At most my
> system will simply idle and do nothing more than store a few files for
> me so I think RAID5 is going to be my selection for my / file system.
> I have 4 identical drives and need to partition them all the same to
> avoid any inconsistencies across the RAID array. Since Grub doesn't
> support RAID5 for /boot, I will need to make a 4 disk RAID1 for /boot
> & do the same for Swap. Does this look reasonable to you guys?
>
> Partitioning the 1st disk below:
>
> /dev/sda1 100 MB - RAID (bootable)
> /dev/sda2 2 GB - RAID
> /dev/sda3 320 GB - RAID
>
> Do that same partition schema above for all 4 drives and then create my
> RAID:
>
> /
> mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda3 /dev/sdb3
> /dev/sdc3 /dev/sdd3
>
> /boot
> mdadm --create /dev/md1 --level=1 --raid-devices=4 /dev/sda1 /dev/sdb1
> /dev/sdc1 /dev/sdd1
>
> Swap
> mdadm --create /dev/md2 --level=1 --raid-devices=4 /dev/sda2 /dev/sdb2
> /dev/sdc2 /dev/sdd2
It's certainly workable. You might consider something other than
RAID1 for your swap partition.
* Re: RAID Configuration For New Home Server
2010-06-02 15:31 ` John Robinson
@ 2010-06-05 17:45 ` Kristleifur Daðason
0 siblings, 0 replies; 25+ messages in thread
From: Kristleifur Daðason @ 2010-06-05 17:45 UTC (permalink / raw)
To: John Robinson; +Cc: Carlos Mennens, Mdadm
On Wed, Jun 2, 2010 at 3:31 PM, John Robinson
<john.robinson@anonymous.org.uk> wrote:
>
> I'd consider a larger than default chunk size for the RAID-5.
+1 absolutely
When building a server for storing mostly video files on RAID-6, the
files being around 500MB to 4GB in size, I ended up with 256KB chunks.
Gained quite a bit of speed compared to the default! (Also remember to
play around with stripe cache after you're up and running.)
(I did some very basic benchmarking. I repeated some tests I found
online, which indicated that RAID chunks around 256KB were fastest. My
tests agreed.)
I would consider not building a swap partition at all, but rather using
a swap *file*, which you can put on any partition. Here's a howto:
http://www.linux.com/archive/feature/113956 Alternatively, consider
putting the swap file on a RAID0 for the most speed, which also costs
you a smaller chunk of each disk for the same swap size. The safest
swap is mirrored, though, and I'd agree with RAID10,f2 in that case.
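A swap-file setup like the howto describes is only a few commands; a dry-run sketch (`run` only echoes, and the 2GB size and /swapfile path are example values, not from the thread):

```shell
#!/bin/sh
# Dry run: print each command instead of executing it (run for real as root).
run() { printf '%s\n' "$1"; }

run "dd if=/dev/zero of=/swapfile bs=1M count=2048"    # 2GB file of zeroes
run "chmod 600 /swapfile"                              # swap must not be world-readable
run "mkswap /swapfile"                                 # write the swap signature
run "swapon /swapfile"                                 # enable it now
run "echo '/swapfile none swap sw 0 0' >> /etc/fstab"  # enable it at boot
```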
Hmm ... no conclusion ... I'm thinking out loud more than I intended
to. To state something concrete: my conclusion du jour regarding swap
RAID is probably that swap space isn't that important anyway for
regular use - one should always have enough memory to begin with,
relegating swap mostly to emergency use. Regarding swap as an
emergency feature, it's probably best to have it mirrored - you
wouldn't want your swap space to vanish in the middle of anything.
And so I'd go with RAID10,f2, which is by far the fastest mirror.
-- Kristleifur
* Re: RAID Configuration For New Home Server
2010-06-05 17:19 ` Leslie Rhorer
@ 2010-06-05 19:41 ` Mark Knecht
2010-06-05 23:56 ` Leslie Rhorer
0 siblings, 1 reply; 25+ messages in thread
From: Mark Knecht @ 2010-06-05 19:41 UTC (permalink / raw)
To: Leslie Rhorer; +Cc: Carlos Mennens, Mdadm
On Sat, Jun 5, 2010 at 10:19 AM, Leslie Rhorer <lrhorer@satx.rr.com> wrote:
>> -----Original Message-----
>> From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
>> owner@vger.kernel.org] On Behalf Of Carlos Mennens
>> Sent: Wednesday, June 02, 2010 8:00 AM
>> To: Mdadm
>> Subject: Re: RAID Configuration For New Home Server
>>
>> On Wed, Jun 2, 2010 at 3:54 AM, <tron@monkii.net> wrote:
>> > There are about as many answers to this as there are people using your
>> > setup so let's all agree that there's no "one way" of doing things.
>>
>> Thanks for all the suggestions and you guys are right. There will no
>> right or wrong answer here but I just want to make sure I am not doing
>> anything that will hinder / limit performance in my system. At most my
>> system will simply idle and do nothing more than store a few files for
>> me so I think RAID5 is going to be my selection for my / file system.
>> I have 4 identical drives and need to partition them all the same to
>> avoid any inconsistencies across the RAID array. Since Grub doesn't
>> support RAID5 for /boot, I will need to make a 4 disk RAID1 for /boot
>> & do the same for Swap. Does this look reasonable to you guys?
>>
>> Partitioning the 1st disk below:
>>
>> /dev/sda1 100 MB - RAID (bootable)
>> /dev/sda2 2 GB - RAID
>> /dev/sda3 320 GB - RAID
>>
>> Do that same partition schema above for all 4 drives and then create my
>> RAID:
>>
>> /
>> mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda3 /dev/sdb3
>> /dev/sdc3 /dev/sdd3
>>
>> /boot
>> mdadm --create /dev/md1 --level=1 --raid-devices=4 /dev/sda1 /dev/sdb1
>> /dev/sdc1 /dev/sdd1
>>
>> Swap
>> mdadm --create /dev/md2 --level=1 --raid-devices=4 /dev/sda2 /dev/sdb2
>> /dev/sdc2 /dev/sdd2
>
> It's certainly workable. You might consider something other than
> RAID1 for your swap partition.
Looks reasonable. Some comments:
1) I didn't bother using RAID on my /boot. I just installed grub on
each of the 3 drives but only boot from the first one. If that
partition goes bad I can boot from the second or third drive any time
by just telling BIOS to use a different drive. This saves me from
dealing with any mkinitrd stuff. I've never had a boot partition go
bad because of the drive itself in 14 years running Linux. They go bad
because I write the wrong stuff there. RAID doesn't solve that
problem. This method does require that I update the two backups by
hand once in awhile. That's OK by me.
2) I don't use RAID for swap. I let the kernel do that internally. I
almost never swap out on my home server so trying to protect that with
RAID for the few moments I might use it seems like overkill to me.
3) Your main RAID is exactly what I use on my home server, albeit I
use 3 drives, not 4.
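Mark's point 2 ("let the kernel do that internally") relies on swap priorities: give each swap partition the same priority and the kernel interleaves pages across them itself, with no md device involved. A sketch, with partition names assumed:

```shell
# Equal-priority swap partitions are striped by the kernel,
# giving RAID0-like swap throughput without a RAID layer.
for part in /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2; do
    mkswap "$part"
    swapon -p 1 "$part"    # same priority on every partition
done
```

The equivalent /etc/fstab entries would carry `sw,pri=1` in the options field so the arrangement survives a reboot.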
HTH,
Mark
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
^ permalink raw reply [flat|nested] 25+ messages in thread
* RE: RAID Configuration For New Home Server
2010-06-05 19:41 ` Mark Knecht
@ 2010-06-05 23:56 ` Leslie Rhorer
2010-06-06 1:04 ` Keld Simonsen
2010-06-06 2:43 ` Mark Knecht
0 siblings, 2 replies; 25+ messages in thread
From: Leslie Rhorer @ 2010-06-05 23:56 UTC (permalink / raw)
To: 'Mark Knecht'; +Cc: 'Carlos Mennens', 'Mdadm'
> > It's certainly workable. You might consider something other than
> > RAID1 for your swap partition.
>
> Looks reasonable. Some comments:
>
> 1) I didn't bother using RAID on my /boot. I just installed grub on
> each of the 3 drives but only boot from the first one. If that
> partition goes bad I can boot from the second or third drive any time
> by just telling BIOS to use a different drive. This saves me from
> dealing with any mkinitrd stuff. I've never had a boot partition go
> bad because of the drive itself in 14 years running Linux. They go bad
> because I write the wrong stuff there. RAID doesn't solve that
> problem. This method does require that I update the two backups by
> hand once in awhile. That's OK by me.
Define, "once in awhile [sic]".  It's certainly possible to do it,
but the very reason I went with boot arrays rather than boot partitions was
that it was getting to be a pain to update the backup drives all the time.
Almost every time a package is added or deleted, /etc gets updated.  Keeping
different copies of the configuration files in /etc in the initrd and the
root partition is not the best of ideas, although of course it can be done.
Any package which must be available at boot *MUST* update the initrd, and if
distro packages are anything, they are update-rich.
> 2) I don't use RAID for swap. I let the kernel do that internally. I
> almost never swap out on my home server so trying to protect that with
> RAID for the few moments I might use it seems like overkill to me.
I halfway agree. My servers almost never use any significant amount
of swap, and even my workstations only use it very occasionally. There have
been instances, however, where the swap has grown to be quite large. With
that in mind, and given the very small amount he has allocated for swap, one
might suggest a RAID0 array of the areas to be used for swap, or maybe an
LVM volume.
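Leslie's RAID0-for-swap suggestion would look roughly like this (device and partition names are assumptions, matching the scheme quoted above):

```shell
# Stripe the four small swap partitions into one md RAID0 device:
# no redundancy, but a single fast swap area.
mdadm --create /dev/md2 --level=0 --raid-devices=4 \
      /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
mkswap /dev/md2
swapon /dev/md2
```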
* Re: RAID Configuration For New Home Server
2010-06-05 23:56 ` Leslie Rhorer
@ 2010-06-06 1:04 ` Keld Simonsen
2010-06-06 1:57 ` Simon Matthews
2010-06-06 2:01 ` Leslie Rhorer
2010-06-06 2:43 ` Mark Knecht
1 sibling, 2 replies; 25+ messages in thread
From: Keld Simonsen @ 2010-06-06 1:04 UTC (permalink / raw)
To: Leslie Rhorer
Cc: 'Mark Knecht', 'Carlos Mennens', 'Mdadm'
On Sat, Jun 05, 2010 at 06:56:31PM -0500, Leslie Rhorer wrote:
> > > It's certainly workable. You might consider something other than
> > > RAID1 for your swap partition.
> >
> > Looks reasonable. Some comments:
> >
>
> > 2) I don't use RAID for swap. I let the kernel do that internally. I
> > almost never swap out on my home server so trying to protect that with
> > RAID for the few moments I might use it seems like overkill to me.
>
> I halfway agree. My servers almost never use any significant amount
> of swap, and even my workstations only use it very occasionally. There have
> been instances, however, where the swap has grown to be quite large. With
> that in mind, and given the very small amount he has allocated for swap, one
> might suggest a RAID0 array of the areas to be used for swap, or maybe an
> LVM volume.
If you use some mirrored RAID for swap, your system will continue to run if
one of your disks goes bad. Then you can replace the faulty disk at a later,
and possibly more convenient, time.
If you do not have RAID, your system will most likely go down if the swap
partition is damaged.
best regards
keld
* Re: RAID Configuration For New Home Server
2010-06-06 1:04 ` Keld Simonsen
@ 2010-06-06 1:57 ` Simon Matthews
2010-06-06 2:01 ` Leslie Rhorer
1 sibling, 0 replies; 25+ messages in thread
From: Simon Matthews @ 2010-06-06 1:57 UTC (permalink / raw)
To: Leslie Rhorer; +Cc: Mark Knecht, Carlos Mennens, Mdadm
On Sat, Jun 5, 2010 at 6:04 PM, Keld Simonsen <keld@keldix.com> wrote:
> On Sat, Jun 05, 2010 at 06:56:31PM -0500, Leslie Rhorer wrote:
>> > > It's certainly workable. You might consider something other than
>> > > RAID1 for your swap partition.
>> >
>> > Looks reasonable. Some comments:
>> >
>>
>> > 2) I don't use RAID for swap. I let the kernel do that internally. I
>> > almost never swap out on my home server so trying to protect that with
>> > RAID for the few moments I might use it seems like overkill to me.
>>
>> I halfway agree. My servers almost never use any significant amount
>> of swap, and even my workstations only use it very occasionally. There have
>> been instances, however, where the swap has grown to be quite large. With
>> that in mind, and given the very small amount he has allocated for swap, one
>> might suggest a RAID0 array of the areas to be used for swap, or maybe an
>> LVM volume.
>
> If you use some mirrored RAID for swap, your system will continue to run if
> one of your disks goes bad. Then you can replace the faulty disk at a later,
> and possibly more convenient, time.
>
> If you do not have RAID, your system will most likely go down if the swap
> partition is damaged.
Or if you put your swap on a RAID0 device.
Simon.
* RE: RAID Configuration For New Home Server
2010-06-06 1:04 ` Keld Simonsen
2010-06-06 1:57 ` Simon Matthews
@ 2010-06-06 2:01 ` Leslie Rhorer
1 sibling, 0 replies; 25+ messages in thread
From: Leslie Rhorer @ 2010-06-06 2:01 UTC (permalink / raw)
To: 'Keld Simonsen'
Cc: 'Mark Knecht', 'Carlos Mennens', 'Mdadm'
> From: Keld Simonsen [mailto:keld@keldix.com]
> Sent: Saturday, June 05, 2010 8:04 PM
> To: Leslie Rhorer
> Cc: 'Mark Knecht'; 'Carlos Mennens'; 'Mdadm'
> Subject: Re: RAID Configuration For New Home Server
>
> On Sat, Jun 05, 2010 at 06:56:31PM -0500, Leslie Rhorer wrote:
> > > > It's certainly workable.  You might consider something other than
> > > > RAID1 for your swap partition.
> > >
> > > Looks reasonable. Some comments:
> > >
> >
> > > 2) I don't use RAID for swap. I let the kernel do that internally. I
> > > almost never swap out on my home server so trying to protect that with
> > > RAID for the few moments I might use it seems like overkill to me.
> >
> > I halfway agree.  My servers almost never use any significant amount
> > of swap, and even my workstations only use it very occasionally.  There have
> > been instances, however, where the swap has grown to be quite large.  With
> > that in mind, and given the very small amount he has allocated for swap, one
> > might suggest a RAID0 array of the areas to be used for swap, or maybe an
> > LVM volume.
>
> If you use some mirrored RAID for swap, your system will continue to run if
> one of your disks goes bad. Then you can replace the faulty disk at a later,
> and possibly more convenient, time.
>
> If you do not have RAID, your system will most likely go down if the swap
> partition is damaged.
True.  A 4-disk RAID1 array is overkill to the point of absurdity,
though.  Perhaps a RAID10 or RAID4 array would be a good compromise.
* Re: RAID Configuration For New Home Server
2010-06-05 23:56 ` Leslie Rhorer
2010-06-06 1:04 ` Keld Simonsen
@ 2010-06-06 2:43 ` Mark Knecht
2010-06-06 5:19 ` Leslie Rhorer
1 sibling, 1 reply; 25+ messages in thread
From: Mark Knecht @ 2010-06-06 2:43 UTC (permalink / raw)
To: Leslie Rhorer; +Cc: Carlos Mennens, Mdadm
On Sat, Jun 5, 2010 at 4:56 PM, Leslie Rhorer <lrhorer@satx.rr.com> wrote:
>> > It's certainly workable. You might consider something other than
>> > RAID1 for your swap partition.
>>
>> Looks reasonable. Some comments:
>>
>> 1) I didn't bother using RAID on my /boot. I just installed grub on
>> each of the 3 drives but only boot from the first one. If that
>> partition goes bad I can boot from the second or third drive any time
>> by just telling BIOS to use a different drive. This saves me from
>> dealing with any mkinitrd stuff. I've never had a boot partition go
>> bad because of the drive itself in 14 years running Linux. They go bad
>> because I write the wrong stuff there. RAID doesn't solve that
>> problem. This method does require that I update the two backups by
>> hand once in awhile. That's OK by me.
>
> Define, "once in awhile [sic]".
Every 2-3 months I make sure each drive is up to date.
> It's certainly possible to do it,
> but the very reason I went with boot arrays rather than boot partitions was
> it was getting to be a pain to update the backup drives all the time.
All the time vs once every 2-3 months.
Even an out-of-date boot drive will allow me to boot the machine and
get things fixed.
> Almost every time a package is added or deleted, /etc gets updated. Keeping
> different copies of the configuration files in /etc in the initrd and the
> root partition is not the best of ideas, although if course it can be done.
As I said, I don't use an initrd. I've never learned how to build one
and didn't need one since I don't use RAID on /boot.
I don't understand your comments about /etc as it's not kept in /boot.
/etc, /, /home, and all other directories are on RAID. Only /boot
isn't, so it needs only a kernel and grub.
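Mark's scheme (GRUB legacy installed on every drive, with the backup /boot copies refreshed by hand every few months) might be sketched as follows; the device and mount-point names are assumptions:

```shell
# Install the GRUB legacy boot loader on each drive so the BIOS can
# fall back to booting from any of them.
for disk in /dev/sda /dev/sdb /dev/sdc; do
    grub-install "$disk"
done

# Refresh one backup /boot copy by hand "once in awhile":
mount /dev/sdb1 /mnt
rsync -a --delete /boot/ /mnt/
umount /mnt
```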
> Any package which must be available at boot *MUST* update initrd, and if
> most distro packages are anything, it is update rich.
>
>> 2) I don't use RAID for swap. I let the kernel do that internally. I
>> almost never swap out on my home server so trying to protect that with
>> RAID for the few moments I might use it seems like overkill to me.
>
> I halfway agree. My servers almost never use any significant amount
> of swap, and even my workstations only use it very occasionally. There have
> been instances, however, where the swap has grown to be quite large. With
> that in mind, and given the very small amount he has allocated for swap, one
> might suggest a RAID0 array of the areas to be used for swap, or maybe an
> LVM volume.
>
* RE: RAID Configuration For New Home Server
2010-06-06 2:43 ` Mark Knecht
@ 2010-06-06 5:19 ` Leslie Rhorer
2010-06-06 11:23 ` Leslie Rhorer
2010-06-06 14:50 ` Mark Knecht
0 siblings, 2 replies; 25+ messages in thread
From: Leslie Rhorer @ 2010-06-06 5:19 UTC (permalink / raw)
To: 'Mark Knecht'; +Cc: 'Mdadm'
> >> problem. This method does require that I update the two backups by
> >> hand once in awhile. That's OK by me.
> >
> > Define, "once in awhile [sic]".
>
> Every 2-3 months I make sure each drive is up to date.
I was having to update one server or the other every week or so,
sometimes more than once a week. I could have written scripts to do it, or
used rsync, I suppose, but I opted for RAID1.
> > It's certainly possible to do it,
> > but the very reason I went with boot arrays rather than boot partitions was
> > it was getting to be a pain to update the backup drives all the time.
>
> All the time vs once every 2-3 months.
>
> Even an out-of-date boot drive will allow me to boot the machine and
> get things fixed.
>
> > Almost every time a package is added or deleted, /etc gets updated.  Keeping
> > different copies of the configuration files in /etc in the initrd and the
> > root partition is not the best of ideas, although of course it can be done.
>
> As I said, I don't use an initrd. I've never learned how to build one
> and didn't need one since I don't use RAID on /boot.
Most distros these days employ an initrd. One re-builds it by
running whatever application is included with the distro for that purpose.
In the case of Debian and Debian derivatives, it is currently
`update-initramfs`. It's built the first time by installing a Linux distro.
It's not that difficult to do by hand, however. Most are built using cpio
and gzip. Once one has the directory structure one wishes, one simply
creates a compressed tarball (cpioball?) from the structure. It includes a
copy of /etc, with some special scripts to allow the system to work prior to
the existence of the "real" /.
> I don't understand your comments about /etc as it's not kept in /boot.
> /etc, /, /home, and all other directories are on RAID. Only /boot
> isn't, so it needs only a kernel and grub.
The initrd has a copy of /etc - especially the boot configuration
files such as mdadm.conf, fstab, etc.
* RE: RAID Configuration For New Home Server
2010-06-06 5:19 ` Leslie Rhorer
@ 2010-06-06 11:23 ` Leslie Rhorer
2010-06-06 14:56 ` Mark Knecht
2010-06-06 14:50 ` Mark Knecht
1 sibling, 1 reply; 25+ messages in thread
From: Leslie Rhorer @ 2010-06-06 11:23 UTC (permalink / raw)
To: 'Mark Knecht'; +Cc: 'Mdadm'
> > >> problem. This method does require that I update the two backups by
> > >> hand once in awhile. That's OK by me.
> > >
> > > Define, "once in awhile [sic]".
> >
> > Every 2-3 months I make sure each drive is up to date.
>
> I was having to update one server or the other every week or so,
> sometimes more than once a week.  I could have written scripts to do it, or
> used rsync, I suppose, but I opted for RAID1.
>
> > > It's certainly possible to do it,
> > > but the very reason I went with boot arrays rather than boot partitions was
> > > it was getting to be a pain to update the backup drives all the time.
> >
> > All the time vs once every 2-3 months.
> >
> > Even an out-of-date boot drive will allow me to boot the machine and
> > get things fixed.
> >
> > > Almost every time a package is added or deleted, /etc gets updated.  Keeping
> > > different copies of the configuration files in /etc in the initrd and the
> > > root partition is not the best of ideas, although of course it can be done.
> >
> > As I said, I don't use an initrd. I've never learned how to build one
> > and didn't need one since I don't use RAID on /boot.
>
> Most distros these days employ an initrd.  One re-builds it by
> running whatever application is included with the distro for that purpose.
> In the case of Debian and Debian derivatives, it is currently
> `update-initramfs`.  It's built the first time by installing a Linux distro.
> It's not that difficult to do by hand, however.  Most are built using cpio
> and gzip.  Once one has the directory structure one wishes, one simply
> creates a compressed tarball (cpioball?) from the structure.  It includes a
> copy of /etc, with some special scripts to allow the system to work prior to
> the existence of the "real" /.
>
> > I don't understand your comments about /etc as it's not kept in /boot.
> > /etc, /, /home, and all other directories are on RAID. Only /boot
> > isn't, so it needs only a kernel and grub.
>
> The initrd has a copy of /etc - especially the boot configuration
> files such as mdadm.conf, fstab, etc.
Oh, I almost forgot.  It may be worth mentioning that an array with a
1.0 superblock can be read as if it were an ordinary partition.  This means
one can build a 1.0 superblock array containing a file system (ext2 may be
the best choice in this case), but boot from the partition just as if it
were not an array.  Once the system is booted, the array can be assembled
and then /boot can be mounted.  This is because:
1. The 1.0 superblock is at the *END* of the array.
2. The file system when created only uses up the front part of the
partition, leaving the superblock intact.
3. GRUB does not require the file system to be mounted in order to read the
kernel and the initrd. (Actually, it could be made to work even if it did
mount the partition, but since it does not, it makes it much easier.)
Indeed, this is the only way of which I know to boot to an array
using GRUB legacy.
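Applied to the /boot array discussed earlier in the thread, the trick would look roughly like this; the partition names are the assumed ones from Carlos's scheme:

```shell
# Metadata 1.0 puts the md superblock at the END of each member, so
# GRUB legacy can read any member as if it were a plain partition.
mdadm --create /dev/md1 --metadata=1.0 --level=1 --raid-devices=4 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
mkfs.ext2 /dev/md1    # ext2, as suggested above
mount /dev/md1 /boot
```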
* Re: RAID Configuration For New Home Server
2010-06-06 5:19 ` Leslie Rhorer
2010-06-06 11:23 ` Leslie Rhorer
@ 2010-06-06 14:50 ` Mark Knecht
2010-06-06 20:35 ` Leslie Rhorer
1 sibling, 1 reply; 25+ messages in thread
From: Mark Knecht @ 2010-06-06 14:50 UTC (permalink / raw)
To: Leslie Rhorer; +Cc: Mdadm
On Sat, Jun 5, 2010 at 10:19 PM, Leslie Rhorer <lrhorer@satx.rr.com> wrote:
>> >> problem. This method does require that I update the two backups by
>> >> hand once in awhile. That's OK by me.
>> >
>> > Define, "once in awhile [sic]".
>>
>> Every 2-3 months I make sure each drive is up to date.
>
> I was having to update one server or the other every week or so,
> sometimes more than once a week. I could have written scripts to do it, or
> used rsync, I suppose, but I opted for RAID1.
>
I can see that. My point of view was (I think) more in line with the
OP's title of this thread, being a home server. I have only 1 home
server. I suspect he does also as it's his first time doing RAID, as
it was for me.
<SNIP>
>>
>> As I said, I don't use an initrd. I've never learned how to build one
>> and didn't need one since I don't use RAID on /boot.
>
> Most distros these days employ an initrd. One re-builds it by
> running whatever application is included with the distro for that purpose.
> In the case of Debian and Debian derivatives, it is currently
> `update-initramfs`. It's built the first time by installing a Linux distro.
> It's not that difficult to do by hand, however. Most are built using cpio
> and gzip. Once one has the directory structure one wishes, one simply
> creates a compressed tarball (cpioball?) from the structure. It includes a
> copy of /etc, with some special scripts to allow the system to work prior to
> the existence of the "real" /.
>
>> I don't understand your comments about /etc as it's not kept in /boot.
>> /etc, /, /home, and all other directories are on RAID. Only /boot
>> isn't, so it needs only a kernel and grub.
>
> The initrd has a copy of /etc - especially the boot configuration
> files such as mdadm.conf, fstab, etc.
Ah, OK. Well again, as I don't use an initrd, or even know what it is, I
have a lot of learning to do in that area. I've just never bothered.
I've run Linux as a desktop system for about 13 or 14 years now. Maybe
some other distros used initrd by default but Gentoo, which I've run
since about 2000 or so, doesn't require it. For desktop systems based
on lowest common denominator PC hardware it's never seemed to provide
any real advantage. Certainly it does for bigger servers where you
guys buy fancy hard drive controllers and the like.
But again, I'm talking about something I don't use and don't know much
about so I have a pretty limited perspective. Getting started with
RAID is the first time I've seen the value for my machines, and then
only for the boot partition.
Thanks!
Cheers,
Mark
* Re: RAID Configuration For New Home Server
2010-06-06 11:23 ` Leslie Rhorer
@ 2010-06-06 14:56 ` Mark Knecht
2010-06-06 20:47 ` Leslie Rhorer
0 siblings, 1 reply; 25+ messages in thread
From: Mark Knecht @ 2010-06-06 14:56 UTC (permalink / raw)
To: Leslie Rhorer; +Cc: Mdadm
On Sun, Jun 6, 2010 at 4:23 AM, Leslie Rhorer <lrhorer@satx.rr.com> wrote:
<SNIP>
>
> Oh, I almost forgot.  It may be worth mentioning that an array with a
> 1.0 superblock can be read as if it were an ordinary partition. This means
> one can build a 1.0 superblock array containing a file system (ext2 may be
> the best choice in this case), but boot from the partition just as if it
> were not an array. Once the system is booted, the array can be assembled
> and then /boot can be mounted. This is because:
>
> 1. The 1.0 superblock is at the *END* of the array.
>
> 2. The file system when created only uses up the front part of the
> partition, leaving the superblock intact.
>
> 3. GRUB does not require the file system to be mounted in order to read the
> kernel and the initrd. (Actually, it could be made to work even if it did
> mount the partition, but since it does not, it makes it much easier.)
>
> Indeed, this is the only way of which I know to boot to an array
> using GRUB legacy.
>
>
Actually that is of interest. If there's a way to use a single RAID
boot partition by itself once in a while, then there's value there if
something has gone wrong and you're just trying to get the machine up
and running.
Thanks!
- Mark
* RE: RAID Configuration For New Home Server
2010-06-06 14:50 ` Mark Knecht
@ 2010-06-06 20:35 ` Leslie Rhorer
0 siblings, 0 replies; 25+ messages in thread
From: Leslie Rhorer @ 2010-06-06 20:35 UTC (permalink / raw)
To: 'Mark Knecht'; +Cc: 'Mdadm'
> -----Original Message-----
> From: Mark Knecht [mailto:markknecht@gmail.com]
> Sent: Sunday, June 06, 2010 9:51 AM
> To: Leslie Rhorer
> Cc: Mdadm
> Subject: Re: RAID Configuration For New Home Server
>
> On Sat, Jun 5, 2010 at 10:19 PM, Leslie Rhorer <lrhorer@satx.rr.com>
> wrote:
> >> >> problem. This method does require that I update the two backups by
> >> >> hand once in awhile. That's OK by me.
> >> >
> >> > Define, "once in awhile [sic]".
> >>
> >> Every 2-3 months I make sure each drive is up to date.
> >
> > I was having to update one server or the other every week or so,
> > sometimes more than once a week. I could have written scripts to do it,
> or
> > used rsync, I suppose, but I opted for RAID1.
> >
>
> I can see that. My point of view was (I think) more in line with the
> OP's title of this thread, being a home server. I have only 1 home
> server. I suspect he does also as it's his first time doing RAID, as
> it was for me.
Mine are home servers. I don't run Linux on any of my servers at
work. To be sure, there is always a first time for everything and everyone,
and the novice may indeed (quite rightly) choose a different approach than a
more seasoned individual. Many times a choice may be between easier /
simpler setup and more protracted maintenance. Indeed, even though I was
not new to *nix, when I first built my (personal) servers, I opted for a
non-RAID boot.  It's only over time - about 3 years - that I grew tired of
updating the contents of my offline drives.  That, and I had several unused
old 500G PATA drives lying around, so I figured I would create RAID systems
for booting, rather than simple partitions.
To be sure, I won't think any less of the OP should he decide to opt
for a non-RAID boot solution. I just think it is good he has the full
perspective of all his options before he makes a choice.
> Ah, OK. Well again, as I don't use an initrd or even know what it is I
> have a lot of learning to do in that area. I've just never bothered.
An initial RAM disk boots the basic operating system very quickly
and compactly.  Embedded systems and live CDs, as well as most installation
CDs, employ a RAM disk permanently.  Most hard-drive-based systems replace
the RAM disk root with the real root partition once boot completes.
> I've run Linux as a desktop system for about 13 or 14 years now. Maybe
> some other distros used initrd by default but Gentoo, which I've run
> since about 2000 or so, doesn't require it. For desktop systems based
> on lowest common denominator PC hardware it's never seemed to provide
> any real advantage. Certainly it does for bigger servers where you
> guys buy fancy hard drive controllers and the like.
The initrd sees its greatest advantage on resource-thin systems.
Since the image is compressed, it is small and loads very quickly even from
a slow drive.  Floppy-based boots are really only practical using an initrd.
They also have the advantage of not being directly editable, which reduces
the goof factor, and since the image is read-only, limits the exposure to
boot failure after a dirty shut-down.
* RE: RAID Configuration For New Home Server
2010-06-06 14:56 ` Mark Knecht
@ 2010-06-06 20:47 ` Leslie Rhorer
0 siblings, 0 replies; 25+ messages in thread
From: Leslie Rhorer @ 2010-06-06 20:47 UTC (permalink / raw)
To: 'Mark Knecht'; +Cc: 'Mdadm'
> > Oh, I almost forgot.  It may be worth mentioning that an array with a
> > 1.0 superblock can be read as if it were an ordinary partition.  This means
> > one can build a 1.0 superblock array containing a file system (ext2 may be
> > the best choice in this case), but boot from the partition just as if it
> > were not an array.  Once the system is booted, the array can be assembled
> > and then /boot can be mounted.  This is because:
> >
> > 1. The 1.0 superblock is at the *END* of the array.
> >
> > 2. The file system when created only uses up the front part of the
> > partition, leaving the superblock intact.
> >
> > 3. GRUB does not require the file system to be mounted in order to read the
> > kernel and the initrd.  (Actually, it could be made to work even if it did
> > mount the partition, but since it does not, it makes it much easier.)
> >
> > Indeed, this is the only way of which I know to boot to an array
> > using GRUB legacy.
> >
> >
>
> Actually that is of interest. If there's a way to use a single RAID
> boot partition by itself once in a while, then there's value there if
> something has gone wrong and you're just trying to get the machine up
> and running.
Well, of course a basic RAID1 array can be assembled with only one
member present, but yes, a RAID1 member partition with a 1.0 superblock can
be safely read (but not written!) as if it were an ordinary partition. If
one has an unassembled RAID1 array comprised of 3 partitions, say /dev/sda1,
/dev/sdb1, and /dev/sdc1, one can simply mount /dev/sdb1 like any other
formatted partition. Neither the OS nor the file system will be aware the
mount point is anything but a partially filled partition. Of course, one
does not ordinarily wish to write to the partition which is so mounted
(mounting as read-only is a good idea), since doing so will de-sync the
array, and chances are the data would be lost when the array is
re-assembled, anyway.
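The read-only rescue mount Leslie describes, sketched with the same assumed device names:

```shell
# Mount one member of the unassembled 1.0-superblock RAID1 read-only.
# The superblock sits at the end of the partition, so the file system
# reads normally; read-only avoids de-syncing the array.
mkdir -p /mnt/rescue
mount -o ro /dev/sdb1 /mnt/rescue
# ...copy off whatever is needed, then:
umount /mnt/rescue
```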
Thread overview: 25+ messages
2010-06-01 17:08 RAID Configuration For New Home Server Carlos Mennens
2010-06-01 17:56 ` Mark Knecht
2010-06-01 22:59 ` Ryan Wagoner
2010-06-02 4:03 ` Leslie Rhorer
2010-06-02 6:08 ` Simon Matthews
2010-06-02 6:33 ` Leslie Rhorer
2010-06-02 7:51 ` tron
2010-06-02 14:24 ` Leslie Rhorer
2010-06-02 7:54 ` tron
2010-06-02 13:00 ` Carlos Mennens
2010-06-02 15:31 ` John Robinson
2010-06-05 17:45 ` Kristleifur Daðason
2010-06-05 17:19 ` Leslie Rhorer
2010-06-05 19:41 ` Mark Knecht
2010-06-05 23:56 ` Leslie Rhorer
2010-06-06 1:04 ` Keld Simonsen
2010-06-06 1:57 ` Simon Matthews
2010-06-06 2:01 ` Leslie Rhorer
2010-06-06 2:43 ` Mark Knecht
2010-06-06 5:19 ` Leslie Rhorer
2010-06-06 11:23 ` Leslie Rhorer
2010-06-06 14:56 ` Mark Knecht
2010-06-06 20:47 ` Leslie Rhorer
2010-06-06 14:50 ` Mark Knecht
2010-06-06 20:35 ` Leslie Rhorer