* Re: About setting up Raid5 on a four disk box.
2009-10-13 13:13 About setting up Raid5 on a four disk box Antonio Perez
@ 2009-10-13 14:04 ` Majed B.
2009-10-13 14:52 ` Antonio Perez
2009-10-13 15:23 ` Robin Hill
2009-10-13 20:54 ` Bill Davidsen
2 siblings, 1 reply; 16+ messages in thread
From: Majed B. @ 2009-10-13 14:04 UTC (permalink / raw)
To: ap23563m; +Cc: linux-raid
So what you want to do is have 3 partitions where you have /, /boot & the rest?
For desktop usage it's OK to use that setup since you won't be writing
to / and the other segments a lot at the same time.
If you're running an application which writes a lot of data to / and
you need to read/write a lot of data from the rest of the disk, they
will conflict and slow things down a lot.
Basically, you're partitioning each disk and making each partition
belong to an array. So if the collective partitions of Array1 are busy
with something and the partitions of Array2 are also busy, you'll slow
down because you're reading/writing to/from the same disk from two
different partitions.
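As a concrete sketch of the kind of per-partition arrays being discussed (device names are examples, not from this thread; the commands are only printed, nothing touches a disk):

```shell
# Sketch only: print the mdadm commands for a layout where partition N+1
# of each of the four disks joins array mdN (sd[a-d]2 -> md1, and so on).
cmds=""
for n in 1 2 3; do
  parts=""
  for d in a b c d; do
    parts="$parts /dev/sd$d$((n + 1))"   # member partition on each disk
  done
  cmd="mdadm --create /dev/md$n --level=5 --raid-devices=4$parts"
  echo "$cmd"
  cmds="$cmds$cmd
"
done
```

Drop the echo (and run as root, with real devices) to actually create the arrays; --create, --level and --raid-devices are standard mdadm options.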
On Tue, Oct 13, 2009 at 4:13 PM, Antonio Perez <ap23563m@gmx.com> wrote:
> If I'm posting to the wrong group, sorry; just point me to the RTFM link.
>
> This post is about setting up a Debian box with four disks (size should not
> be important, me thinks), let's assume that a Raid 5 is the correct type for
> the intended use.
>
> Leaving aside LVM and/or layering of md (just for simplicity), and taking
> into account that /boot, / and maybe other areas should go in a RAID 1
> configuration for booting reliability, I have three questions that perhaps
> you could help to clarify:
>
> 1.- Should the "rest of the disk" be only one partition?
> I have read that making several partitions and setting several md disks:
> sd[a..d]2 --> md1
> sd[a..d]3 --> md2
> sd[a..d]4 --> md3
> would help with the rebuild time of each md, which sounds correct. It is
> also proposed that the md on the outer area of the disk would be faster
> allowing for better control of performance, assigning faster mds to the more
> used filesystems.
>
> However, and this I don't know, those sda[2..4] are not really different
> devices (spindles) and reads to one md would conflict (or not?) with reads
> to the other mds.
>
> Setting the whole disk as one partition would prevent any conflict but would
> take longer to rebuild and files would be spread over the whole area of the
> disk.
>
> I really don't know the internals of md well enough to tell what advantages
> and problems one setup has over the other.
>
> 2.- On the Raid 1: How many sectors to copy? 63?
> On an update of grub code, core.img could change, which means that the first
> 63 sectors (to be on the safe side) of the disk which gets the update should
> be copied to the other 3 disks.
> Or is it that the md code would mirror sectors 1-62 and only the MBR needs
> to be manually mirrored?
>
> 3.- Is there a recommended way to trigger the said copy of question 2?
> Where should a call to copy the MBR be placed? On update-grub?
>
> TIA
>
> --
> Antonio Perez
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
--
Majed B.
* Re: About setting up Raid5 on a four disk box.
2009-10-13 14:04 ` Majed B.
@ 2009-10-13 14:52 ` Antonio Perez
2009-10-13 15:10 ` Majed B.
0 siblings, 1 reply; 16+ messages in thread
From: Antonio Perez @ 2009-10-13 14:52 UTC (permalink / raw)
To: linux-raid
Majed B. wrote:
> So what you want to do is have 3 partitions where you have /, /boot & the
> rest?
No, /boot and / should be on one small partition as RAID 1, maybe 1G. The
question is about the rest of the disk.
For the sake of simplicity, let's assume that the rest of the disk is all
data. It could be several LVM partitions, but let's not go to that next level.
Is it better for *md* to make just one big partition?
> For desktop usage it's OK to use that setup since you won't be writing
> to / and the other segments a lot at the same time.
> If you're running an application which writes a lot of data to / and
> you require to read/write a lot of data of the rest of the disk, it
> will conflict and slow things down a lot.
>
> Basically, you're partitioning each disk and making each partition
> belong to an array.
That is correct.
> So if the collective partitions of Array1 are busy
> with something and the partitions of Array2 are also busy, you'll slow
> down because you're reading/writing to/from the same disk from two
> different partitions.
Yes, right, but is this slowdown better/worse with several partitions or just
one?
Thanks.
> On Tue, Oct 13, 2009 at 4:13 PM, Antonio Perez <ap23563m@gmx.com> wrote:
>> If I'm posting to the wrong group, sorry. just point to the RTFM link.
>>
>> This post is about setting up a Debian box with four disks (size should
>> not be important, me thinks), let's assume that a Raid 5 is the correct
>> type for the intended use.
>>
>> Keeping aside LVM and/or layering of md (just for simplicity), and taking
>> into account that /boot, / and maybe other areas should go in a Raid 1
>> configuration, for booting reliability. I have three questions that
>> perhaps you could help to clarify:
>>
>> 1.- Should the "rest of the disk" be only one partition?
>> I have read that making several partitions and setting several md disks:
>> sd[a..d]2 --> md1
>> sd[a..d]3 --> md2
>> sd[a..d]4 --> md3
>> would help with the rebuild time of each md, which sounds correct. It is
>> also proposed that the md on the outer area of the disk would be faster
>> allowing for better control of performance, assigning faster mds to the
>> more used filesystems.
>>
>> However, and this I don't know, those sda[2..4] are not really different
>> devices (spindles) and reads to one md would conflict (or not?) with
>> reads to the other mds.
>>
>> Setting the whole disk as one partition would prevent any conflict but
>> would take longer to rebuild and files would be spread over the whole
>> area of the disk.
>>
>> I really don't know the internals of md well enough to tell what
>> advantages and problems one setup has over the other.
>>
>> 2.- On the Raid 1: How many sectors to copy? 63?
>> On an update of grub code, core.img could change, which means that the
>> first 63 sectors (to be on the safe side) of the disk which gets the
>> update should be copied to the other 3 disks.
>> Or is it that the md code would mirror sectors 1-62 and only the MBR
>> needs to be manually mirrored?
>>
>> 3.- Is there a recommended way to trigger the said copy of question 2?
>> Where should a call to copy the MBR be placed? On update-grub?
>>
>> TIA
>>
>> --
>> Antonio Perez
>>
>>
>
>
>
--
Antonio Perez
* Re: About setting up Raid5 on a four disk box.
2009-10-13 14:52 ` Antonio Perez
@ 2009-10-13 15:10 ` Majed B.
2009-10-14 6:23 ` Antonio Perez
0 siblings, 1 reply; 16+ messages in thread
From: Majed B. @ 2009-10-13 15:10 UTC (permalink / raw)
To: ap23563m; +Cc: linux-raid
I think I already answered your question:
> For desktop usage it's OK to use that setup since you won't be writing
> to / and the other segments a lot at the same time.
> If you're running an application which writes a lot of data to / and
> you require to read/write a lot of data of the rest of the disk, it
> will conflict and slow things down a lot.
>
> Basically, you're partitioning each disk and making each partition
> belong to an array.
If you misunderstood part, or I did, let me know :)
On Tue, Oct 13, 2009 at 5:52 PM, Antonio Perez <ap23563m@gmx.com> wrote:
> Majed B. wrote:
>
>> So what you want to do is have 3 partitions where you have /, /boot & the
>> rest?
>
> No, /boot and / should be on one small partition as RAID 1, maybe 1G. The
> question is about the rest of the disk.
>
> For the sake of simplicity, let's assume that the rest of the disk is all
> data. It could be several LVM partitions, but let's not go to that next level.
>
> Is it better for *md* to make just one big partition?
>
>> For desktop usage it's OK to use that setup since you won't be writing
>> to / and the other segments a lot at the same time.
>
>> If you're running an application which writes a lot of data to / and
>> you require to read/write a lot of data of the rest of the disk, it
>> will conflict and slow things down a lot.
>>
>> Basically, you're partitioning each disk and making each partition
>> belong to an array.
>
> That is correct.
>
>> So if the collective partitions of Array1 are busy
>> with something and the partitions of Array2 are also busy, you'll slow
>> down because you're reading/writing to/from the same disk from two
>> different partitions.
>
> Yes, right, but is this slow down better/worse on several partitions or just
> one?
>
> Thanks.
>
>
>> On Tue, Oct 13, 2009 at 4:13 PM, Antonio Perez <ap23563m@gmx.com> wrote:
>>> If I'm posting to the wrong group, sorry. just point to the RTFM link.
>>>
>>> This post is about setting up a Debian box with four disks (size should
>>> not be important, me thinks), let's assume that a Raid 5 is the correct
>>> type for the intended use.
>>>
>>> Keeping aside LVM and/or layering of md (just for simplicity), and taking
>>> into account that /boot, / and maybe other areas should go in a Raid 1
>>> configuration, for booting reliability. I have three questions that
>>> perhaps you could help to clarify:
>>>
>>> 1.- Should the "rest of the disk" be only one partition?
>>> I have read that making several partitions and setting several md disks:
>>> sd[a..d]2 --> md1
>>> sd[a..d]3 --> md2
>>> sd[a..d]4 --> md3
>>> would help with the rebuild time of each md, which sounds correct. It is
>>> also proposed that the md on the outer area of the disk would be faster
>>> allowing for better control of performance, assigning faster mds to the
>>> more used filesystems.
>>>
>>> However, and this I don't know, those sda[2..4] are not really different
>>> devices (spindles) and reads to one md would conflict (or not?) with
>>> reads to the other mds.
>>>
>>> Setting the whole disk as one partition would prevent any conflict but
>>> would take longer to rebuild and files would be spread over the whole
>>> area of the disk.
>>>
>>> I really don't know the internals of md well enough to tell what
>>> advantages and problems one setup has over the other.
>>>
>>> 2.- On the Raid 1: How many sectors to copy? 63?
>>> On an update of grub code, core.img could change, which means that the
>>> first 63 sectors (to be on the safe side) of the disk which gets the
>>> update should be copied to the other 3 disks.
>>> Or is it that the md code would mirror sectors 1-62 and only the MBR
>>> needs to be manually mirrored?
>>>
>>> 3.- Is there a recommended way to trigger the said copy of question 2?
>>> Where should a call to copy the MBR be placed? On update-grub?
>>>
>>> TIA
>>>
>>> --
>>> Antonio Perez
>>>
>>>
>>
>>
>>
>
> --
> Antonio Perez
>
>
--
Majed B.
* Re: About setting up Raid5 on a four disk box.
2009-10-13 15:10 ` Majed B.
@ 2009-10-14 6:23 ` Antonio Perez
2009-10-14 6:40 ` Majed B.
0 siblings, 1 reply; 16+ messages in thread
From: Antonio Perez @ 2009-10-14 6:23 UTC (permalink / raw)
To: linux-raid
Majed B. wrote:
> I think I already answered your question:
>
>> For desktop usage it's OK to use that setup since you won't be writing
>> to / and the other segments a lot at the same time.
>
>> If you're running an application which writes a lot of data to / and
>> you require to read/write a lot of data of the rest of the disk, it
>> will conflict and slow things down a lot.
>>
>> Basically, you're partitioning each disk and making each partition
>> belong to an array.
>
> If you misunderstood part, or I did, let me know :)
Thanks, Majed. I believe I understand you.
This is what I get from your comment:
If the disks are setup with several partitions, and the corresponding
partitions belong to a md array as this:
sd[a..d]1 --> md1
sd[a..d]2 --> md2
sd[a..d]3 --> md3
If md1 is used as / and md2 is used as /home (or /data) there will be no
concurrent reads to both places.
While this may be true, depending on the specific application, it is not
what I am asking.
________________________________________________________________________
The point is:
If md1 is used as /data1 and md2 is used as /data2, will it work?
Will the md system be aware that those are on the same disk (spindle) and
use the correct queuing on the reads for the best reading speed possible?
Or will md get confused about the correct read sequence, causing additional
head seeks which will degrade overall performance?
Sorry if I confuse you more; this is not a "simple" question.
Please read Robin Hill's answer. :-)
--
Antonio Perez
* Re: About setting up Raid5 on a four disk box.
2009-10-14 6:23 ` Antonio Perez
@ 2009-10-14 6:40 ` Majed B.
2009-10-14 12:53 ` Antonio Perez
0 siblings, 1 reply; 16+ messages in thread
From: Majed B. @ 2009-10-14 6:40 UTC (permalink / raw)
To: ap23563m; +Cc: linux-raid
I see.
Well, whether MD is smart enough to rearrange a queue when the arrays
are sharing disks or not, it still means that you can't run your
processes in parallel.
I did see MD put the arrays in a queue during resyncs when I had a
similar setup. I didn't benchmark the arrays at that time, so I don't
have solid numbers on how much of a performance penalty that setup
caused.
On Wed, Oct 14, 2009 at 9:23 AM, Antonio Perez <ap23563m@gmx.com> wrote:
> Majed B. wrote:
>
>> I think I already answered your question:
>>
>>> For desktop usage it's OK to use that setup since you won't be writing
>>> to / and the other segments a lot at the same time.
>>
>>> If you're running an application which writes a lot of data to / and
>>> you require to read/write a lot of data of the rest of the disk, it
>>> will conflict and slow things down a lot.
>>>
>>> Basically, you're partitioning each disk and making each partition
>>> belong to an array.
>>
>> If you misunderstood part, or I did, let me know :)
>
> Thanks, Majed. I believe I understand you.
>
> This is what I get from your comment:
>
> If the disks are setup with several partitions, and the corresponding
> partitions belong to a md array as this:
> sd[a..d]1 --> md1
> sd[a..d]2 --> md2
> sd[a..d]3 --> md3
>
> If md1 is used as / and md2 is used as /home (or /data) there will be no
> concurrent reads to both places.
>
> While this may be true, depending on the specific application, it is not
> what I am asking.
> ________________________________________________________________________
>
> The point is:
>
> If md1 is used as /data1 and md2 is used as /data2 will it work?
>
> Will the md system be aware that those are on the same disk (spindle) and
> use the correct queuing on the reads for the best reading speed possible?
>
> Or will md get confused about the correct read sequence, causing additional
> head seeks which will degrade overall performance?
>
> Sorry if I confuse you more; this is not a "simple" question.
>
> Please read Robin Hill answer. :-)
>
> --
> Antonio Perez
>
>
--
Majed B.
* Re: About setting up Raid5 on a four disk box.
2009-10-14 6:40 ` Majed B.
@ 2009-10-14 12:53 ` Antonio Perez
2009-10-14 13:41 ` Majed B.
0 siblings, 1 reply; 16+ messages in thread
From: Antonio Perez @ 2009-10-14 12:53 UTC (permalink / raw)
To: linux-raid
Majed B. wrote:
> I see.
>
> Well, whether MD is smart enough to rearrange a queue when the arrays
> are sharing disks or not, it still means that you can't run your
> processes in parallel.
Does that mean that different processes (threads) can't request reads from
different places on the disk concurrently?
I am sorry, but it seems that we are speaking different languages.
--
Antonio Perez
* Re: About setting up Raid5 on a four disk box.
2009-10-14 12:53 ` Antonio Perez
@ 2009-10-14 13:41 ` Majed B.
2009-10-14 14:53 ` Antonio Perez
0 siblings, 1 reply; 16+ messages in thread
From: Majed B. @ 2009-10-14 13:41 UTC (permalink / raw)
To: ap23563m; +Cc: linux-raid
On Wed, Oct 14, 2009 at 3:53 PM, Antonio Perez <ap23563m@gmx.com> wrote:
>
> Majed B. wrote:
>
> > I see.
> >
> > Well, whether MD is smart enough to rearrange a queue when the arrays
> > are sharing disks or not, it still means that you can't run your
> > processes in parallel.
>
> Does that mean that different processes (threads) can't request reads from
> different places on the disk concurrently?
>
> I am sorry, but it seems that we are speaking different languages.
>
> --
> Antonio Perez
Your applications can request as many files as they want regardless of
their location, but the mechanical heads move together, so they can only
be at a single location at any point in time, reading a single stream of
a file (or multiple files if they happen to be next to each other on the
physical platter).
--
Majed B.
* Re: About setting up Raid5 on a four disk box.
2009-10-14 13:41 ` Majed B.
@ 2009-10-14 14:53 ` Antonio Perez
2009-10-14 21:08 ` Bill Davidsen
0 siblings, 1 reply; 16+ messages in thread
From: Antonio Perez @ 2009-10-14 14:53 UTC (permalink / raw)
To: linux-raid
Majed B. wrote:
> Your applications can request as many files as they want regardless of
> their location, but the mechanical heads move together, so they can only
> be at a single location at any point in time, reading a single stream of
> a file (or multiple files if they happen to be next to each other on the
> physical platter).
Yes, that's completely correct, and it's the job of the "elevator" (*) in the
"md" code to decide which sectors will be serviced, and in what order. If the
"elevator" is aware of the underlying hardware, it will do the right thing
whether there are several md arrays or just one.
*(I am not an expert on this; it's just what I got from Robin)
--
Antonio Perez
* Re: About setting up Raid5 on a four disk box.
2009-10-14 14:53 ` Antonio Perez
@ 2009-10-14 21:08 ` Bill Davidsen
0 siblings, 0 replies; 16+ messages in thread
From: Bill Davidsen @ 2009-10-14 21:08 UTC (permalink / raw)
To: ap23563m; +Cc: linux-raid
Antonio Perez wrote:
> Majed B. wrote:
>
>
>> Your applications can request as many files as they want regardless of
>> their location, but the mechanical heads move together, so they can only
>> be at a single location at any point in time, reading a single stream of
>> a file (or multiple files if they happen to be next to each other on the
>> physical platter).
>>
>
> Yes, that's completely correct, and it's the job of the "elevator" (*) in the
> "md" code to decide which sectors will be serviced, and in what order. If the
> "elevator" is aware of the underlying hardware, it will do the right thing
> whether there are several md arrays or just one.
>
>
The elevator is part of disk access, not md. It schedules head motion
for all users of the drive.
> *(I am not an expert on this; it's just what I got from Robin)
>
>
--
Bill Davidsen <davidsen@tmr.com>
Unintended results are the well-earned reward for incompetence.
* Re: About setting up Raid5 on a four disk box.
2009-10-13 13:13 About setting up Raid5 on a four disk box Antonio Perez
2009-10-13 14:04 ` Majed B.
@ 2009-10-13 15:23 ` Robin Hill
2009-10-14 7:45 ` Antonio Perez
2009-10-13 20:54 ` Bill Davidsen
2 siblings, 1 reply; 16+ messages in thread
From: Robin Hill @ 2009-10-13 15:23 UTC (permalink / raw)
To: linux-raid
On Tue Oct 13, 2009 at 09:13:13AM -0400, Antonio Perez wrote:
> 1.- Should the "rest of the disk" be only one partition?
> I have read that making several partitions and setting several md disks:
> sd[a..d]2 --> md1
> sd[a..d]3 --> md2
> sd[a..d]4 --> md3
> would help with the rebuild time of each md, which sounds correct. It is
> also proposed that the md on the outer area of the disk would be faster
> allowing for better control of performance, assigning faster mds to the more
> used filesystems.
>
> However, and this I don't know, those sda[2..4] are not really different
> devices (spindles) and reads to one md would conflict (or not?) with reads
> to the other mds.
>
Any reads to the same disk will conflict - whether they're on separate
arrays or within a single array shouldn't make any difference (the IO
elevator is set on the disk rather than the array, so I assume it will
gather requests across all arrays). md itself knows that the arrays
share disks, so will ensure any maintenance tasks (resyncs, etc) only
run on a single array at a time.
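A minimal way to see that the elevator really is a per-disk setting, not a per-array one (sysfs paths are the standard Linux locations; the disk names are assumptions for this box, and the loop degrades gracefully where they don't exist):

```shell
# Sketch: the IO scheduler ("elevator") lives under each disk's queue in
# sysfs, not under /dev/mdX. On a box without these disks the loop just
# says so instead of failing.
report=""
for d in sda sdb sdc sdd; do
  f=/sys/block/$d/queue/scheduler
  if [ -r "$f" ]; then
    line="$d: $(cat "$f")"
  else
    line="$d: (no such disk on this system)"
  fi
  echo "$line"
  report="$report$line
"
done
```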
> 2.- On the Raid 1: How many sectors to copy? 63?
> On an update of grub code, core.img could change, which means that the first
> 63 sectors (to be on the safe side) of the disk which gets the update should
> be copied to the other 3 disks.
> Or is it that the md code would mirror sectors 1-62 and only the MBR needs
> to be manually mirroed?
>
Not sure what you mean here. If you have /boot on a RAID1 array then
it's mirrored, so the stage1.5 and stage2 bootloaders get mirrored via
that. The stage1 bootloader is usually installed into the MBR, so only
that needs replicating separately.
> 2.- Is there a recomended way to trigger the said copy of question 2?
> Where should a call to copy the MBR should be placed? On update-grub?
>
I always just rerun the grub install for each disk manually (see
http://lists.us.dell.com/pipermail/linux-poweredge/2003-July/008898.html
for example). Automating it will depend on your distribution - if
update-grub is what's called after a grub update (and it isn't
overwritten by the update) then this would seem the logical place to do
it.
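The manual per-disk reinstall described above amounts to something like this (commands are echoed rather than executed, and the device list is an assumption for this four-disk box):

```shell
# Sketch only: reinstall the grub stage1 to the MBR of every array member
# so any surviving disk can boot. Remove the echo to actually run it.
planned=""
for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
  planned="$planned grub-install $d;"
  echo grub-install "$d"
done
```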
HTH,
Robin
--
___
( ' } | Robin Hill <robin@robinhill.me.uk> |
/ / ) | Little Jim says .... |
// !! | "He fallen in de water !!" |
* Re: About setting up Raid5 on a four disk box.
2009-10-13 15:23 ` Robin Hill
@ 2009-10-14 7:45 ` Antonio Perez
2009-10-14 8:56 ` Robin Hill
0 siblings, 1 reply; 16+ messages in thread
From: Antonio Perez @ 2009-10-14 7:45 UTC (permalink / raw)
To: linux-raid
Robin Hill wrote:
> On Tue Oct 13, 2009 at 09:13:13AM -0400, Antonio Perez wrote:
[snip]
>> However, and this I don't know, those sda[2..4] are not really different
>> devices (spindles) and reads to one md would conflict (or not?) with
>> reads to the other mds.
> Any reads to the same disk will conflict - whether they're on separate
> arrays or within a single array shouldn't make any difference
Perfect, you understood exactly what I was asking, thanks.
> (the IO elevator is set on the disk rather than the array, so I assume
> it will gather requests across all arrays). md itself knows that the
> arrays share disks, so will ensure any maintenance tasks (resyncs, etc)
> only run on a single array at a time.
Clear explanation, thanks.
Then, the two consequences of dividing the disks into several md arrays are
correct:
1.- It will improve rebuild time, as each md is smaller.
2.- It will allow control over whether an md sits in the outer (faster) disk
area.
Am I understanding you correctly?
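On the rebuild-time point, the relevant knobs are visible on any Linux md box; a sketch using the standard /proc paths (shown defensively, since the machine running it may have no md arrays at all):

```shell
# Sketch: rebuild progress appears in /proc/mdstat, and md's global
# resync speed limits (KB/s per device) live under /proc/sys/dev/raid.
status=""
for f in /proc/mdstat \
         /proc/sys/dev/raid/speed_limit_min \
         /proc/sys/dev/raid/speed_limit_max; do
  if [ -r "$f" ]; then
    line="$f: $(cat "$f")"
  else
    line="$f: (not present on this system)"
  fi
  echo "$line"
  status="$status$line
"
done
```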
>> 2.- On the Raid 1: How many sectors to copy? 63?
>> On an update of grub code, core.img could change, which means that the
>> first 63 sectors (to be on the safe side) of the disk which gets the
>> update should be copied to the other 3 disks.
>> Or is it that the md code would mirror sectors 1-62 and only the MBR
>> needs to be manually mirrored?
> Not sure what you mean here. If you have /boot on a RAID1 array then
> it's mirrored, so the stage1.5 and stage2 bootloaders get mirrored via
> that. The stage1 bootloader is usually installed into the MBR, so only
> that needs replicating separately.
There are two areas of interest:
1.- The "boot disk area", including the MBR:
On the usual (at least for me) partition scheme, the first cylinder*,
sectors (LBA) 0-62 [CHS 0/0/1 to 0/0/63], is left empty except for
sector 0, a.k.a. the MBR. That is called the disk boot area.
Grub uses sector 0 as the MBR and several sectors following it to embed
the boot code: core.img.
2.- The "partition boot area":
There is also an empty 63-sector space at the beginning of each partition.
The first partition uses (LBA) 63-125 [CHS 0/1/1 to 0/1/63] for its
"partition boot area", thereby restoring the alignment of the disk heads.
The next partitions repeat this setup.
This area is used to boot if grub is installed to sda1 instead of sda.
So, I assume that a RAID1 on "sd[a..d]1" will mirror the "Area 2" but will
not touch any of the "Area 1".
Is this assumption correct?
* note: it really feels ugly to speak of cylinders in this day and age, but
that's how the PC-style scheme works.
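One way to replicate "Area 1" by hand, under the assumptions above (63-sector boot track, core.img embedded right after the MBR), without clobbering each target disk's own partition table; the dd commands are echoed, not executed, and the device names are examples:

```shell
# Sketch only: copy grub's boot code from sda to the other disks.
# The partition table lives in bytes 446-509 of sector 0, so the MBR
# copy is limited to the first 446 bytes (boot code only); sectors 1-62
# (the embedded core.img area) are copied whole.
SRC=/dev/sda
plan=""
for DST in /dev/sdb /dev/sdc /dev/sdd; do
  c1="dd if=$SRC of=$DST bs=446 count=1"
  c2="dd if=$SRC of=$DST bs=512 skip=1 seek=1 count=62"
  echo "$c1"
  echo "$c2"
  plan="$plan$c1
$c2
"
done
```

Rerunning grub-install on each disk (as Robin suggests elsewhere in the thread) is the safer route; this sketch only illustrates what the raw copy would involve.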
>> 3.- Is there a recommended way to trigger the said copy of question 2?
>> Where should a call to copy the MBR be placed? On update-grub?
> I always just rerun the grub install for each disk manually (see
> http://lists.us.dell.com/pipermail/linux-poweredge/2003-July/008898.html
> for example).
That's the usual procedure on a first install, correct.
> Automating it will depend on your distribution - if
> update-grub is what's called after a grub update (and it isn't
> overwritten by the update) then this would seem the logical place to do
> it.
I'll have to dig deeper before I can ask any intelligent questions. Thanks.
> HTH,
> Robin
It did a lot, thanks again!
--
Antonio Perez
* Re: About setting up Raid5 on a four disk box.
2009-10-14 7:45 ` Antonio Perez
@ 2009-10-14 8:56 ` Robin Hill
2009-10-14 12:55 ` Antonio Perez
0 siblings, 1 reply; 16+ messages in thread
From: Robin Hill @ 2009-10-14 8:56 UTC (permalink / raw)
To: linux-raid
On Wed Oct 14, 2009 at 03:45:21AM -0400, Antonio Perez wrote:
> Robin Hill wrote:
>
> > On Tue Oct 13, 2009 at 09:13:13AM -0400, Antonio Perez wrote:
>
> Then, the two consequences of dividing disks in several md are correct:
>
> 1.- Will improve rebuild time, as each md is smaller.
> 2.- Will allow to control if the md is in the outside (faster) disk area.
>
> Am I understanding you correctly?
>
Certainly both of these will apply, yes. My understanding is that
there'll be little (if any) negative effect from this, but that's based
more on what (and where) the tunables are for the IO stack than on any
knowledge of what's actually going on inside it.
> There are two areas of interest:
>
> 1.- The "Boot disk area" including the MBR.:
> On the usual (at least for me) partition scheme, the first cylinder*,
> Sectors (LBA) 0-62 [CHS 0/0/1 to 0/0/63] of the disk are left empty, except
> for sector 0, a.k.a. MBR. That is called the disk boot area.
> Grub uses the sector 0 as the MBR and several sectors following it to embed
> the booting code: core.img.
>
> 2.- The "partition boot area":
> There is also an empty space at the beginning of each partition of 63
> sectors. The first partition uses (LBA) 63-125 [CHS 0/1/0 to 0/1/63] for the
> "partition boot area" therefore restoring alignment on the disk heads.
> Next partitions repeat this setup.
> This area is used to boot if grub is installed to sda1 instead of sda
>
> So, I assume that a RAID1 on "sd[a..d]1" will mirror the "Area 2" but will
> not touch any of the "Area 1".
>
> Is this assumption correct?
>
Yes - anything that falls within the blocks covered by the array will
get replicated, so "Area 2" will automatically get taken care of (you'll
have to make sure the RAID metadata version is chosen correctly so as
not to conflict though - 1.1/1.2 metadata may overlap with the area used
by grub here).
As for "Area 1" - from what you've written (and reading the grub
internals documentation), I can only assume you're referring to the
stage1.5 bootloader (unless you're using grub2?). This can live on the
file system (I don't see anything obvious in the documentation about how
you set where it goes - the documentation seems to suggest that the file
system is the default), so there may well be no need to do anything
about this (other than the MBR itself).
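The metadata caveat mentioned later in the thread matters here: superblock formats 0.90 and 1.0 sit at the end of the member device, leaving its start free, while 1.1/1.2 sit at or near the start. A hedged sketch for creating the /boot mirror accordingly (command is echoed only; device names are assumptions):

```shell
# Sketch: create the RAID1 /boot array with end-of-device metadata (1.0)
# so the start of each partition stays available to the bootloader.
cmd="mdadm --create /dev/md0 --level=1 --raid-devices=4 --metadata=1.0"
for d in a b c d; do
  cmd="$cmd /dev/sd${d}1"
done
echo "$cmd"
```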
> >> 3.- Is there a recommended way to trigger the said copy of question 2?
> >> Where should a call to copy the MBR be placed? On update-grub?
>
> > I always just rerun the grub install for each disk manually (see
> > http://lists.us.dell.com/pipermail/linux-poweredge/2003-July/008898.html
> > for example).
>
> That's the usual procedure on a first install, correct.
>
It's what I do every update - it's a quick enough process, and grub
updates aren't very common (all the development's on grub2 now).
Cheers,
Robin
--
___
( ' } | Robin Hill <robin@robinhill.me.uk> |
/ / ) | Little Jim says .... |
// !! | "He fallen in de water !!" |
* Re: About setting up Raid5 on a four disk box.
2009-10-14 8:56 ` Robin Hill
@ 2009-10-14 12:55 ` Antonio Perez
0 siblings, 0 replies; 16+ messages in thread
From: Antonio Perez @ 2009-10-14 12:55 UTC (permalink / raw)
To: linux-raid
Robin Hill wrote:
> On Wed Oct 14, 2009 at 03:45:21AM -0400, Antonio Perez wrote:
>
>> Robin Hill wrote:
[snip]
>> Is this assumption correct?
>>
> Yes - anything that falls within the blocks covered by the array will
> get replicated, so "Area 2" will automatically get taken care of (you'll
> have to make sure the RAID metadata version is chosen correctly so as
> not to conflict though - 1.1/1.2 metadata may overlap with the area used
> by grub here).
Thanks. Yes, I am aware of the conflicts between metadata 1.x and grub.
> As for "Area 1" - from what you've written (and reading the grub
> internals documentation), I can only assume you're referring to the
> stage1.5 bootloader (unless you're using grub2?). This can live on the
> file system (I don't see anything obvious in the documentation about how
> you set where it goes - the documentation seems to suggest that the file
> system is the default), so there may well be no need to do anything
> about this (other than the MBR itself).
I do use Grub2, but both grub-legacy and grub2 now use the disk sectors
following the MBR to embed the boot code.
Looking at: http://thestarman.pcministry.com/asm/mbr/GRUB.htm
the pointer at 0x7C44 (disk offset 0x0044) is the sector which is to be
loaded next. It is usually 0x0001, which means simply the next sector.
I'll look at the grub-source later, but I am pretty sure that is the case.
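One quick way to check that pointer on a live disk (the device name is an example; this reads the 4 bytes at offset 0x44 as a little-endian integer, which should normally print 1):

```shell
# Dump the 4-byte little-endian stage2 sector pointer that
# grub-legacy stores at MBR offset 0x44.  /dev/sda is an example
# device; reading it needs the appropriate permissions.
dd if=/dev/sda bs=1 skip=$((0x44)) count=4 2>/dev/null | od -An -tu4
```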
>> >> 3.- Is there a recommended way to trigger the said copy of question 2?
>> >> Where should a call to copy the MBR be placed? On update-grub?
>>
>> > I always just rerun the grub install for each disk manually (see
>>http://lists.us.dell.com/pipermail/linux-poweredge/2003-July/008898.html
>> > for example).
>>
>> That's the usual procedure on a first install, correct.
>>
> It's what I do every update - it's a quick enough process, and grub
> updates aren't very common (all the development's on grub2 now).
Thanks, Robin
--
Antonio Perez
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: About seting up Raid5 on a four disk box.
2009-10-13 13:13 About seting up Raid5 on a four disk box Antonio Perez
2009-10-13 14:04 ` Majed B.
2009-10-13 15:23 ` Robin Hill
@ 2009-10-13 20:54 ` Bill Davidsen
2009-10-14 7:49 ` Antonio Perez
2 siblings, 1 reply; 16+ messages in thread
From: Bill Davidsen @ 2009-10-13 20:54 UTC (permalink / raw)
To: ap23563m; +Cc: linux-raid
Antonio Perez wrote:
> If I'm posting to the wrong group, sorry; just point me to the RTFM link.
>
> This post is about setting up a Debian box with four disks (size should not
> be important, me thinks), let's assume that a Raid 5 is the correct type for
> the intended use.
>
>
It's not, brief explanation follows.
You absolutely want your /boot to be raid-1 so it boots if the 1st drive
fails. You want to install grub in the MBR of each drive, by hand. The
boot will get the raid arrays going, so they can be raid-5, but read on.
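A sketch of the raid-1 /boot part (partition names are examples; 0.90 metadata keeps the superblock at the end of the partition, which lets grub-legacy read the filesystem as if it were a plain partition):

```shell
# 4-way mirrored /boot so the box boots from any surviving drive.
mdadm --create /dev/md0 --level=1 --raid-devices=4 \
      --metadata=0.90 /dev/sd[abcd]1
mkfs.ext3 /dev/md0
```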
If this will be a desktop, make / raid-10, it will be much faster. If
you will be doing things which are big enough to trigger swap use, that
should be raid-10 as well, allocated on the outer part of the drive. Your
other data can be in raid-5 partitions, if that's the balance of space
and reliability for you. If you will use ext4, read the man pages on the
stride= and stripe-width= settings; they should match what you are doing.
If these are large drives, ext4 has some options you should understand.
DO NOT just let the installer create the filesystem; installers are not
yet clever enough, don't ask intended-use questions, etc.
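To make the stride/stripe-width point concrete: for a 4-disk raid-5 (3 data disks) with a 64 KiB chunk and 4 KiB filesystem blocks, the numbers work out as below (the chunk and block sizes are examples; check yours with mdadm -D or /proc/mdstat):

```shell
# Compute ext4 stride / stripe-width for a 4-disk RAID-5.
chunk_kb=64          # md chunk size in KiB (example value)
block_kb=4           # filesystem block size in KiB (example value)
data_disks=3         # 4 disks minus 1 parity for RAID-5
stride=$((chunk_kb / block_kb))
stripe_width=$((stride * data_disks))
echo "stride=$stride stripe_width=$stripe_width"
# prints: stride=16 stripe_width=48
# Then:  mkfs.ext4 -E stride=$stride,stripe-width=$stripe_width /dev/mdX
```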
Read the comments on this stuff in the list archives and pick the ideas
on tuning which you find helpful; virtually any tuning you pick based on
projected use will beat the defaults.
Late thought: raid-10 swap is seriously faster than raid-5; I don't know
if suspend works on it, as I don't raid laptops or suspend servers. The
wisdom of the list may appear here. ;-)
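A sketch of that raid-10 swap (partition names are examples; sd[abcd]2 here stand in for partitions placed on the fast outer part of each drive):

```shell
# RAID-10 swap striped and mirrored across all four drives.
mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sd[abcd]2
mkswap /dev/md1
swapon /dev/md1
```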
> Keeping aside LVM and/or layering of md (just for simplicity), and taking
> into account that /boot, / and maybe other areas should go in a Raid 1
> configuration, for booting reliability. I have three questions that perhaps
> you could help to clarify:
>
> 1.- Should the "rest of the disk" be only one partition?
> I have read that making several partitions and setting several md disks:
> sd[a..d]2 --> md1
> sd[a..d]3 --> md2
> sd[a..d]4 --> md3
> would help with the rebuild time of each md, which sounds correct. It is
> also proposed that the md on the outer area of the disk would be faster
> allowing for better control of performance, assigning faster mds to the more
> used filesystems.
>
> However, and this I don't know, those sda[2..4] are not really different
> devices (spindles) and reads to one md would conflict (or not?) with reads
> to the other mds.
>
> Setting the whole disk as one partition would prevent any conflict but would
> take longer to rebuild and files would be spread over the whole area of the
> disk.
>
> I really don't know the internals of md well enough to tell what advantages
> and problems one setup has over the other.
>
> 2.- On the Raid 1: How many sectors to copy? 63?
> On an update of grub code, core.img could change, which means that the first
> 63 sectors (to be on the safe side) of the disk which gets the update should
> be copied to the other 3 disks.
> Or is it that the md code would mirror sectors 1-62 and only the MBR needs
> to be manually mirrored?
>
> 3.- Is there a recommended way to trigger the said copy of question 2?
> Where should a call to copy the MBR be placed? On update-grub?
>
> TIA
>
>
--
Bill Davidsen <davidsen@tmr.com>
Unintended results are the well-earned reward for incompetence.
^ permalink raw reply [flat|nested] 16+ messages in thread