* Full use of varying drive sizes?
@ 2009-09-22 11:24 Jon Hardcastle
2009-09-22 11:52 ` Kristleifur Daðason
` (2 more replies)
0 siblings, 3 replies; 22+ messages in thread
From: Jon Hardcastle @ 2009-09-22 11:24 UTC (permalink / raw)
To: linux-raid
Hey guys,
I have an array made up of several drive sizes ranging from 500GB to 1TB, and I appreciate that the array can only use a multiple of the smallest. I use differing sizes as I just buy the best-value drive at the time, and hope that as I phase out the old drives I can '--grow' the array. That is all fine and dandy.
But could someone tell me: did I dream that there might one day be support to actually use that unused space in the array? Because that would be awesome! (If a little hairy re: spare drives - they would have to be at least the size of the largest drive in the array..?) I have 3x500GB, 2x750GB and 1x1TB, so I have 1TB of completely unused space!
Cheers.
Jon H
-----------------------
N: Jon Hardcastle
E: Jon@eHardcastle.com
'Do not worry about tomorrow, for tomorrow will bring worries of its own.'
-----------------------
* Re: Full use of varying drive sizes?
2009-09-22 11:24 Full use of varying drive sizes? Jon Hardcastle
@ 2009-09-22 11:52 ` Kristleifur Daðason
2009-09-22 12:58 ` John Robinson
2009-09-22 13:05 ` Tapani Tarvainen
2009-09-23 10:07 ` Goswin von Brederlow
2 siblings, 1 reply; 22+ messages in thread
From: Kristleifur Daðason @ 2009-09-22 11:52 UTC (permalink / raw)
To: linux-raid
On Tue, Sep 22, 2009 at 11:24 AM, Jon Hardcastle
<jd_hardcastle@yahoo.com> wrote:
> Hey guys,
>
> I have an array made of many drive sizes ranging from 500GB to 1TB and I appreciate that the array can only be a multiple of the smallest - I use the differing sizes as i just buy the best value drive at the time and hope that as i phase out the old drives I can '--grow' the array. That is all fine and dandy.
>
> But could someone tell me, did I dream that there might one day be support to allow you to actually use that unused space in the array? Because that would be awesome! (if a little hairy re: spare drives - have to be the size of the largest drive in the array atleast..?) I have 3x500GB 2x750GB 1x1TB so I have 1TB of completely unused space!
>
> Cheers.
>
> Jon H
>
Here's a thought:
Imaginary case: Say you have a 500, a 1000 and a 1500 GB drive. You
could JBOD the 500 and the 1000 together and mirror that against the
1500GB.
Disclaimer:
I don't know if it makes any sense to do this. I haven't seen this
method mentioned before, IIRC. It may be too esoteric to get any
press, or it may be simply stupid.
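A minimal, untested sketch of that layout with mdadm, assuming the three drives show up as sdb, sdc and sdd (the names are placeholders):

  # Concatenate the 500GB and 1000GB drives into one ~1.5TB linear (JBOD) array
  mdadm --create /dev/md0 --level=linear --raid-devices=2 /dev/sdb /dev/sdc

  # Mirror that concatenation against the 1500GB drive
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/md0 /dev/sdd

  mkfs.ext3 /dev/md1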
-- Kristleifur
* Re: Full use of varying drive sizes?
2009-09-22 11:52 ` Kristleifur Daðason
@ 2009-09-22 12:58 ` John Robinson
2009-09-22 13:07 ` Majed B.
0 siblings, 1 reply; 22+ messages in thread
From: John Robinson @ 2009-09-22 12:58 UTC (permalink / raw)
To: Linux RAID
On 22/09/2009 12:52, Kristleifur Daðason wrote:
> On Tue, Sep 22, 2009 at 11:24 AM, Jon Hardcastle
> <jd_hardcastle@yahoo.com> wrote:
>> Hey guys,
>>
>> I have an array made of many drive sizes ranging from 500GB to 1TB and I appreciate that the array can only be a multiple of the smallest - I use the differing sizes as i just buy the best value drive at the time and hope that as i phase out the old drives I can '--grow' the array. That is all fine and dandy.
>>
>> But could someone tell me, did I dream that there might one day be support to allow you to actually use that unused space in the array? Because that would be awesome! (if a little hairy re: spare drives - have to be the size of the largest drive in the array atleast..?) I have 3x500GB 2x750GB 1x1TB so I have 1TB of completely unused space!
>
> Here's a thought:
> Imaginary case: Say you have a 500, a 1000 and a 1500 GB drive. You
> could JBOD the 500 and the 1000 together and mirror that against the
> 1500GB.
>
> Disclaimer:
> I don't know if it makes any sense to do this. I haven't seen this
> method mentioned before, IIRC. It may be too esoteric to get any
> press, or it may be simply stupid.
Sure, you can do that. In Jon's case, you could run a RAID-5 across all 6 discs using
the first 500GB of each, leaving 2 x 250GB and 1 x 500GB free. The 2 x 250GB
could be JBOD'ed together and mirrored against the 500GB, giving another
500GB of usable storage. The two md arrays can in turn be JBOD'ed or,
perhaps better, LVM'ed together.
Another approach would be to have another RAID-5 across the 3 larger
drives, again providing an additional 500GB of usable storage, this time
leaving 1 x 250GB wasted, but available if another 1TB drive were added.
I think this may be the approach Netgear's X-RAID 2 takes to using
mixed-size discs: http://www.readynas.com/?p=656
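A rough, untested sketch of that first layout, assuming the three 500GB drives are sdb-sdd, the 750GB drives are sde/sdf (partitioned 500+250) and the 1TB drive is sdg (partitioned 500+500); all device names are placeholders:

  # RAID-5 across the first 500GB of all six drives
  mdadm --create /dev/md0 --level=5 --raid-devices=6 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1

  # Concatenate the two leftover 250GB pieces, then mirror against the leftover 500GB piece
  mdadm --create /dev/md1 --level=linear --raid-devices=2 /dev/sde2 /dev/sdf2
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/md1 /dev/sdg2

  # Glue the two usable arrays together with LVM rather than another JBOD layer
  pvcreate /dev/md0 /dev/md2
  vgcreate vg_big /dev/md0 /dev/md2
  lvcreate -l 100%FREE -n storage vg_big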
Cheers,
John.
* Re: Full use of varying drive sizes?
2009-09-22 11:24 Full use of varying drive sizes? Jon Hardcastle
2009-09-22 11:52 ` Kristleifur Daðason
@ 2009-09-22 13:05 ` Tapani Tarvainen
2009-09-23 10:07 ` Goswin von Brederlow
2 siblings, 0 replies; 22+ messages in thread
From: Tapani Tarvainen @ 2009-09-22 13:05 UTC (permalink / raw)
To: Jon; +Cc: linux-raid
On Tue, Sep 22, 2009 at 04:24:23AM -0700, Jon Hardcastle (jd_hardcastle@yahoo.com) wrote:
> I have an array made of many drive sizes ranging from 500GB to 1TB
> But could someone tell me, did I dream that there might one day be
> support to allow you to actually use that unused space in the array?
You can partition the disks and raid the partitions separately.
> I have 3x500GB 2x750GB 1x1TB so I have 1TB of completely unused
> space!
If you partition the 750GB disks as 500+250 and the 1TB disk as
500+250+250, you could, for example, create one 6x500GB RAID6 array
and one 3x250GB RAID5 and have 250GB of non-raid space, or one
6x500GB RAID6 and two 2x250GB RAID1s.
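For example, the first variant might look roughly like this (untested; the device names are placeholders: sdb-sdd for the 500GB drives, sde/sdf for the 750GB drives, sdg for the 1TB drive):

  # 6 x 500GB partitions -> RAID6, about 2TB usable
  mdadm --create /dev/md0 --level=6 --raid-devices=6 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1

  # 3 x 250GB leftover partitions -> RAID5, about 500GB usable
  mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sde2 /dev/sdf2 /dev/sdg2

  # the remaining 250GB partition on the 1TB disk stays outside the raid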
--
Tapani Tarvainen
* Re: Full use of varying drive sizes?
2009-09-22 12:58 ` John Robinson
@ 2009-09-22 13:07 ` Majed B.
2009-09-22 15:38 ` Jon Hardcastle
` (2 more replies)
0 siblings, 3 replies; 22+ messages in thread
From: Majed B. @ 2009-09-22 13:07 UTC (permalink / raw)
To: Linux RAID
When I first put up a storage box, it was built out of 4x 500GB disks;
later on, I expanded to 1TB disks.
What I did was partition the 1TB disks into 2x 500GB partitions, then
create 2 RAID arrays, each out of one set of partitions:
md0: sda1, sdb1, sdc1, ...etc.
md1: sda2, sdb2, sdc2, ...etc.
All of those sit below LVM.
This worked for a while, but as more 1TB disks made their way into the
array, performance dropped because each disk had to serve reads for
2 partitions at once, and even worse: when a disk failed, both
arrays were affected, and things only got nastier with time.
I would not recommend that you create arrays out of partitions that rely
on each other like this.
I do find the JBOD -> mirror approach suggested earlier to be convenient, though.
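For clarity, the kind of layout being described looks roughly like this (hypothetical device names, an assumed RAID-5 level since the original doesn't say which level was used, and each 1TB disk carrying two 500GB partitions):

  mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
  mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

  # both arrays then become physical volumes in a single LVM volume group
  pvcreate /dev/md0 /dev/md1
  vgcreate vg_data /dev/md0 /dev/md1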
On Tue, Sep 22, 2009 at 3:58 PM, John Robinson
<john.robinson@anonymous.org.uk> wrote:
> On 22/09/2009 12:52, Kristleifur Daðason wrote:
>>
>> On Tue, Sep 22, 2009 at 11:24 AM, Jon Hardcastle
>> <jd_hardcastle@yahoo.com> wrote:
>>>
>>> Hey guys,
>>>
>>> I have an array made of many drive sizes ranging from 500GB to 1TB and I
>>> appreciate that the array can only be a multiple of the smallest - I use the
>>> differing sizes as i just buy the best value drive at the time and hope that
>>> as i phase out the old drives I can '--grow' the array. That is all fine and
>>> dandy.
>>>
>>> But could someone tell me, did I dream that there might one day be
>>> support to allow you to actually use that unused space in the array? Because
>>> that would be awesome! (if a little hairy re: spare drives - have to be the
>>> size of the largest drive in the array atleast..?) I have 3x500GB 2x750GB
>>> 1x1TB so I have 1TB of completely unused space!
>>
>> Here's a thought:
>> Imaginary case: Say you have a 500, a 1000 and a 1500 GB drive. You
>> could JBOD the 500 and the 1000 together and mirror that against the
>> 1500GB.
>>
>> Disclaimer:
>> I don't know if it makes any sense to do this. I haven't seen this
>> method mentioned before, IIRC. It may be too esoteric to get any
>> press, or it may be simply stupid.
>
> Sure you can do that. In Jon's case, a RAID-5 across all 6 discs using the
> first 500GB, leaving 2 x 250GB and 1x 500GB free. The 2 x 250GB could be
> JBOD'ed together and mirrored against the 500GB, giving another 500GB of
> usable storage. The two md arrays can in turn be JBOD'ed or perhaps better
> LVM'ed together.
>
> Another approach would be to have another RAID-5 across the 3 larger drives,
> again providing an additional 500GB of usable storage, this time leaving 1 x
> 250GB wasted, but available if another 1TB drive was added. I think this may
> be the approach Netgear's X-RAID 2 takes to using mixed-size discs:
> http://www.readynas.com/?p=656
>
> Cheers,
>
> John.
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
--
Majed B.
* Re: Full use of varying drive sizes?
2009-09-22 13:07 ` Majed B.
@ 2009-09-22 15:38 ` Jon Hardcastle
2009-09-22 15:47 ` Majed B.
` (2 more replies)
2009-09-23 8:20 ` John Robinson
2009-09-23 10:15 ` Tapani Tarvainen
2 siblings, 3 replies; 22+ messages in thread
From: Jon Hardcastle @ 2009-09-22 15:38 UTC (permalink / raw)
To: Linux RAID, Majed B.
Some good suggestions here, thanks guys.
So I >DID< imagine the built-in support for making use of this space, then?
As a side note: when I do a repair or check on my array, does it check the WHOLE DRIVE, or just the part that is being used? I.e. in my case I have a 1TB drive but the array only uses 500GB of it. I'd like to think it is checking the whole whack, as it may have to take over some day...
-----------------------
N: Jon Hardcastle
E: Jon@eHardcastle.com
'Do not worry about tomorrow, for tomorrow will bring worries of its own.'
-----------------------
--- On Tue, 22/9/09, Majed B. <majedb@gmail.com> wrote:
> From: Majed B. <majedb@gmail.com>
> Subject: Re: Full use of varying drive sizes?
> To: "Linux RAID" <linux-raid@vger.kernel.org>
> Date: Tuesday, 22 September, 2009, 2:07 PM
> When I first put up a storage box, it
> was built out of 4x 500GB disks,
> later on, I expanded to 1TB disks.
>
> What I did was partition the 1TB disks into 2x 500GB
> partitions, then
> create 2 RAID arrays: Each array out of partitions:
> md0: sda1, sdb1, sdc1, ...etc.
> md1: sda2, sdb2, sdc2, ...etc.
>
> All of those below LVM.
>
> This worked for a while, but when more 1TB disks started
> making way
> into the array, performance dropped because the disk had to
> read from
> 2 partitions on the same disk, and even worse: When a disk
> fail, both
> arrays were affected, and things only got nastier and worse
> with time.
>
> I would not recommend that you create arrays of partitions
> that rely
> on each other.
>
> I do find the JBOD -> Mirror approach suggested earlier
> to be convenient though.
>
> On Tue, Sep 22, 2009 at 3:58 PM, John Robinson
> <john.robinson@anonymous.org.uk>
> wrote:
> > On 22/09/2009 12:52, Kristleifur Dađason wrote:
> >>
> >> On Tue, Sep 22, 2009 at 11:24 AM, Jon Hardcastle
> >> <jd_hardcastle@yahoo.com>
> wrote:
> >>>
> >>> Hey guys,
> >>>
> >>> I have an array made of many drive sizes
> ranging from 500GB to 1TB and I
> >>> appreciate that the array can only be a
> multiple of the smallest - I use the
> >>> differing sizes as i just buy the best value
> drive at the time and hope that
> >>> as i phase out the old drives I can '--grow'
> the array. That is all fine and
> >>> dandy.
> >>>
> >>> But could someone tell me, did I dream that
> there might one day be
> >>> support to allow you to actually use that
> unused space in the array? Because
> >>> that would be awesome! (if a little hairy re:
> spare drives - have to be the
> >>> size of the largest drive in the array
> atleast..?) I have 3x500GB 2x750GB
> >>> 1x1TB so I have 1TB of completely unused
> space!
> >>
> >> Here's a thought:
> >> Imaginary case: Say you have a 500, a 1000 and a
> 1500 GB drive. You
> >> could JBOD the 500 and the 1000 together and
> mirror that against the
> >> 1500GB.
> >>
> >> Disclaimer:
> >> I don't know if it makes any sense to do this. I
> haven't seen this
> >> method mentioned before, IIRC. It may be too
> esoteric to get any
> >> press, or it may be simply stupid.
> >
> > Sure you can do that. In Jon's case, a RAID-5 across
> all 6 discs using the
> > first 500GB, leaving 2 x 250GB and 1x 500GB free. The
> 2 x 250GB could be
> > JBOD'ed together and mirrored against the 500GB,
> giving another 500GB of
> > usable storage. The two md arrays can in turn be
> JBOD'ed or perhaps better
> > LVM'ed together.
> >
> > Another approach would be to have another RAID-5
> across the 3 larger drives,
> > again providing an additional 500GB of usable storage,
> this time leaving 1 x
> > 250GB wasted, but available if another 1TB drive was
> added. I think this may
> > be the approach Netgear's X-RAID 2 takes to using
> mixed-size discs:
> > http://www.readynas.com/?p=656
> >
> > Cheers,
> >
> > John.
> > --
> > To unsubscribe from this list: send the line
> "unsubscribe linux-raid" in
> > the body of a message to majordomo@vger.kernel.org
> > More majordomo info at http://vger.kernel.org/majordomo-info.html
> >
>
>
>
> --
> Majed B.
> --
> To unsubscribe from this list: send the line "unsubscribe
> linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
* Re: Full use of varying drive sizes?
2009-09-22 15:38 ` Jon Hardcastle
@ 2009-09-22 15:47 ` Majed B.
2009-09-22 15:48 ` Ryan Wagoner
2009-09-22 16:04 ` Robin Hill
2 siblings, 0 replies; 22+ messages in thread
From: Majed B. @ 2009-09-22 15:47 UTC (permalink / raw)
To: Jon; +Cc: Linux RAID
md checks only its own part, as far as I know. That makes sense, since it
is verifying the integrity of the array's member disks/partitions.
If you want the disks to undergo a full check-up, I'd suggest you run
smartd and schedule both short self-tests and the long offline tests. The
offline tests take a very long time because they are thorough.
Make sure you set up smartd to email you reports/problems, so that
you can find bad sectors as soon as they appear and fix them ASAP.
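For example, something along these lines in /etc/smartd.conf would do it (the mail address and the test schedule are placeholders):

  # monitor all disks, mail on problems, short self-test daily at 02:00,
  # long/extended self-test every Saturday at 03:00
  DEVICESCAN -a -m admin@example.com -s (S/../.././02|L/../../6/03)

One-off tests from the command line:

  smartctl -t short /dev/sda
  smartctl -t long /dev/sda     # thorough surface scan, takes hours
  smartctl -a /dev/sda          # view attributes and self-test results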
On Tue, Sep 22, 2009 at 6:38 PM, Jon Hardcastle <jd_hardcastle@yahoo.com> wrote:
> Some good suggestions here, thanks guys.
>
> Do I >DID< imagine some built in support for making use of this space?
>
> As a side note. when i do a repair or check on my array.. does it check the WHOLE DRIVE.. or just the part that is being used? I.e. in my case.. I have a 1TB drive but only an array multiple of 500GB.. i'd like to think it is checking the whole whack as it may have to take over some day...
>
> -----------------------
> N: Jon Hardcastle
> E: Jon@eHardcastle.com
> 'Do not worry about tomorrow, for tomorrow will bring worries of its own.'
> -----------------------
>
>
> --- On Tue, 22/9/09, Majed B. <majedb@gmail.com> wrote:
>
>> From: Majed B. <majedb@gmail.com>
>> Subject: Re: Full use of varying drive sizes?
>> To: "Linux RAID" <linux-raid@vger.kernel.org>
>> Date: Tuesday, 22 September, 2009, 2:07 PM
>> When I first put up a storage box, it
>> was built out of 4x 500GB disks,
>> later on, I expanded to 1TB disks.
>>
>> What I did was partition the 1TB disks into 2x 500GB
>> partitions, then
>> create 2 RAID arrays: Each array out of partitions:
>> md0: sda1, sdb1, sdc1, ...etc.
>> md1: sda2, sdb2, sdc2, ...etc.
>>
>> All of those below LVM.
>>
>> This worked for a while, but when more 1TB disks started
>> making way
>> into the array, performance dropped because the disk had to
>> read from
>> 2 partitions on the same disk, and even worse: When a disk
>> fail, both
>> arrays were affected, and things only got nastier and worse
>> with time.
>>
>> I would not recommend that you create arrays of partitions
>> that rely
>> on each other.
>>
>> I do find the JBOD -> Mirror approach suggested earlier
>> to be convenient though.
>>
>> On Tue, Sep 22, 2009 at 3:58 PM, John Robinson
>> <john.robinson@anonymous.org.uk>
>> wrote:
>> > On 22/09/2009 12:52, Kristleifur Dađason wrote:
>> >>
>> >> On Tue, Sep 22, 2009 at 11:24 AM, Jon Hardcastle
>> >> <jd_hardcastle@yahoo.com>
>> wrote:
>> >>>
>> >>> Hey guys,
>> >>>
>> >>> I have an array made of many drive sizes
>> ranging from 500GB to 1TB and I
>> >>> appreciate that the array can only be a
>> multiple of the smallest - I use the
>> >>> differing sizes as i just buy the best value
>> drive at the time and hope that
>> >>> as i phase out the old drives I can '--grow'
>> the array. That is all fine and
>> >>> dandy.
>> >>>
>> >>> But could someone tell me, did I dream that
>> there might one day be
>> >>> support to allow you to actually use that
>> unused space in the array? Because
>> >>> that would be awesome! (if a little hairy re:
>> spare drives - have to be the
>> >>> size of the largest drive in the array
>> atleast..?) I have 3x500GB 2x750GB
>> >>> 1x1TB so I have 1TB of completely unused
>> space!
>> >>
>> >> Here's a thought:
>> >> Imaginary case: Say you have a 500, a 1000 and a
>> 1500 GB drive. You
>> >> could JBOD the 500 and the 1000 together and
>> mirror that against the
>> >> 1500GB.
>> >>
>> >> Disclaimer:
>> >> I don't know if it makes any sense to do this. I
>> haven't seen this
>> >> method mentioned before, IIRC. It may be too
>> esoteric to get any
>> >> press, or it may be simply stupid.
>> >
>> > Sure you can do that. In Jon's case, a RAID-5 across
>> all 6 discs using the
>> > first 500GB, leaving 2 x 250GB and 1x 500GB free. The
>> 2 x 250GB could be
>> > JBOD'ed together and mirrored against the 500GB,
>> giving another 500GB of
>> > usable storage. The two md arrays can in turn be
>> JBOD'ed or perhaps better
>> > LVM'ed together.
>> >
>> > Another approach would be to have another RAID-5
>> across the 3 larger drives,
>> > again providing an additional 500GB of usable storage,
>> this time leaving 1 x
>> > 250GB wasted, but available if another 1TB drive was
>> added. I think this may
>> > be the approach Netgear's X-RAID 2 takes to using
>> mixed-size discs:
>> > http://www.readynas.com/?p=656
>> >
>> > Cheers,
>> >
>> > John.
>> > --
>> > To unsubscribe from this list: send the line
>> "unsubscribe linux-raid" in
>> > the body of a message to majordomo@vger.kernel.org
>> > More majordomo info at http://vger.kernel.org/majordomo-info.html
>> >
>>
>>
>>
>> --
>> Majed B.
>> --
>> To unsubscribe from this list: send the line "unsubscribe
>> linux-raid" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at http://vger.kernel.org/majordomo-info.html
>>
>
>
>
>
--
Majed B.
* Re: Full use of varying drive sizes?
2009-09-22 15:38 ` Jon Hardcastle
2009-09-22 15:47 ` Majed B.
@ 2009-09-22 15:48 ` Ryan Wagoner
2009-09-22 16:04 ` Robin Hill
2 siblings, 0 replies; 22+ messages in thread
From: Ryan Wagoner @ 2009-09-22 15:48 UTC (permalink / raw)
To: Jon; +Cc: Linux RAID, Majed B.
It is only going to check the part of the drive used by that array.
So on your 1TB drive with a 500GB partition allocated to the array,
only that 500GB will be checked. mdadm doesn't know about the rest
of the drive, and even if it did, there is nothing it could compare that
space against.
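One way to confirm how much of each member device the array actually uses (device names are placeholders):

  mdadm --detail /dev/md0      # shows the array size and how much of each member device is used
  mdadm --examine /dev/sdg1    # shows what the md superblock on one member records about itself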
Ryan
On Tue, Sep 22, 2009 at 11:38 AM, Jon Hardcastle
<jd_hardcastle@yahoo.com> wrote:
> Some good suggestions here, thanks guys.
>
> Do I >DID< imagine some built in support for making use of this space?
>
> As a side note. when i do a repair or check on my array.. does it check the WHOLE DRIVE.. or just the part that is being used? I.e. in my case.. I have a 1TB drive but only an array multiple of 500GB.. i'd like to think it is checking the whole whack as it may have to take over some day...
>
> -----------------------
> N: Jon Hardcastle
> E: Jon@eHardcastle.com
> 'Do not worry about tomorrow, for tomorrow will bring worries of its own.'
> -----------------------
>
>
> --- On Tue, 22/9/09, Majed B. <majedb@gmail.com> wrote:
>
>> From: Majed B. <majedb@gmail.com>
>> Subject: Re: Full use of varying drive sizes?
>> To: "Linux RAID" <linux-raid@vger.kernel.org>
>> Date: Tuesday, 22 September, 2009, 2:07 PM
>> When I first put up a storage box, it
>> was built out of 4x 500GB disks,
>> later on, I expanded to 1TB disks.
>>
>> What I did was partition the 1TB disks into 2x 500GB
>> partitions, then
>> create 2 RAID arrays: Each array out of partitions:
>> md0: sda1, sdb1, sdc1, ...etc.
>> md1: sda2, sdb2, sdc2, ...etc.
>>
>> All of those below LVM.
>>
>> This worked for a while, but when more 1TB disks started
>> making way
>> into the array, performance dropped because the disk had to
>> read from
>> 2 partitions on the same disk, and even worse: When a disk
>> fail, both
>> arrays were affected, and things only got nastier and worse
>> with time.
>>
>> I would not recommend that you create arrays of partitions
>> that rely
>> on each other.
>>
>> I do find the JBOD -> Mirror approach suggested earlier
>> to be convenient though.
>>
>> On Tue, Sep 22, 2009 at 3:58 PM, John Robinson
>> <john.robinson@anonymous.org.uk>
>> wrote:
>> > On 22/09/2009 12:52, Kristleifur Dađason wrote:
>> >>
>> >> On Tue, Sep 22, 2009 at 11:24 AM, Jon Hardcastle
>> >> <jd_hardcastle@yahoo.com>
>> wrote:
>> >>>
>> >>> Hey guys,
>> >>>
>> >>> I have an array made of many drive sizes
>> ranging from 500GB to 1TB and I
>> >>> appreciate that the array can only be a
>> multiple of the smallest - I use the
>> >>> differing sizes as i just buy the best value
>> drive at the time and hope that
>> >>> as i phase out the old drives I can '--grow'
>> the array. That is all fine and
>> >>> dandy.
>> >>>
>> >>> But could someone tell me, did I dream that
>> there might one day be
>> >>> support to allow you to actually use that
>> unused space in the array? Because
>> >>> that would be awesome! (if a little hairy re:
>> spare drives - have to be the
>> >>> size of the largest drive in the array
>> atleast..?) I have 3x500GB 2x750GB
>> >>> 1x1TB so I have 1TB of completely unused
>> space!
>> >>
>> >> Here's a thought:
>> >> Imaginary case: Say you have a 500, a 1000 and a
>> 1500 GB drive. You
>> >> could JBOD the 500 and the 1000 together and
>> mirror that against the
>> >> 1500GB.
>> >>
>> >> Disclaimer:
>> >> I don't know if it makes any sense to do this. I
>> haven't seen this
>> >> method mentioned before, IIRC. It may be too
>> esoteric to get any
>> >> press, or it may be simply stupid.
>> >
>> > Sure you can do that. In Jon's case, a RAID-5 across
>> all 6 discs using the
>> > first 500GB, leaving 2 x 250GB and 1x 500GB free. The
>> 2 x 250GB could be
>> > JBOD'ed together and mirrored against the 500GB,
>> giving another 500GB of
>> > usable storage. The two md arrays can in turn be
>> JBOD'ed or perhaps better
>> > LVM'ed together.
>> >
>> > Another approach would be to have another RAID-5
>> across the 3 larger drives,
>> > again providing an additional 500GB of usable storage,
>> this time leaving 1 x
>> > 250GB wasted, but available if another 1TB drive was
>> added. I think this may
>> > be the approach Netgear's X-RAID 2 takes to using
>> mixed-size discs:
>> > http://www.readynas.com/?p=656
>> >
>> > Cheers,
>> >
>> > John.
>> > --
>> > To unsubscribe from this list: send the line
>> "unsubscribe linux-raid" in
>> > the body of a message to majordomo@vger.kernel.org
>> > More majordomo info at http://vger.kernel.org/majordomo-info.html
>> >
>>
>>
>>
>> --
>> Majed B.
>> --
>> To unsubscribe from this list: send the line "unsubscribe
>> linux-raid" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at http://vger.kernel.org/majordomo-info.html
>>
>
>
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
* Re: Full use of varying drive sizes?
2009-09-22 15:38 ` Jon Hardcastle
2009-09-22 15:47 ` Majed B.
2009-09-22 15:48 ` Ryan Wagoner
@ 2009-09-22 16:04 ` Robin Hill
2 siblings, 0 replies; 22+ messages in thread
From: Robin Hill @ 2009-09-22 16:04 UTC (permalink / raw)
To: Linux RAID
On Tue Sep 22, 2009 at 08:38:39AM -0700, Jon Hardcastle wrote:
> Some good suggestions here, thanks guys.
>
> Do I >DID< imagine some built in support for making use of this space?
>
I believe ZFS manages to do this better, but I've not looked into this
in any detail yet.
> As a side note. when i do a repair or check on my array.. does it
> check the WHOLE DRIVE.. or just the part that is being used? I.e. in
> my case.. I have a 1TB drive but only an array multiple of 500GB.. i'd
> like to think it is checking the whole whack as it may have to take
> over some day...
>
The repair/check just recalculates parity from the data blocks and
verifies that the stored parity matches (rewriting it where it doesn't, in
the repair case). This can only be done within the region used by the array.
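For reference, that is the scrub triggered through sysfs, e.g. (md0 is a placeholder):

  echo check  > /sys/block/md0/md/sync_action    # read everything and compare parity, reporting problems only
  cat /sys/block/md0/md/mismatch_cnt             # number of mismatched sectors found
  echo repair > /sys/block/md0/md/sync_action    # as above, but rewrite parity where it is inconsistent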
Cheers,
Robin
--
___
( ' } | Robin Hill <robin@robinhill.me.uk> |
/ / ) | Little Jim says .... |
// !! | "He fallen in de water !!" |
* Re: Full use of varying drive sizes?
2009-09-22 13:07 ` Majed B.
2009-09-22 15:38 ` Jon Hardcastle
@ 2009-09-23 8:20 ` John Robinson
2009-09-23 10:15 ` Tapani Tarvainen
2 siblings, 0 replies; 22+ messages in thread
From: John Robinson @ 2009-09-23 8:20 UTC (permalink / raw)
To: Majed B.; +Cc: Linux RAID
On 22/09/2009 14:07, Majed B. wrote:
> When I first put up a storage box, it was built out of 4x 500GB disks,
> later on, I expanded to 1TB disks.
>
> What I did was partition the 1TB disks into 2x 500GB partitions, then
> create 2 RAID arrays: Each array out of partitions:
> md0: sda1, sdb1, sdc1, ...etc.
> md1: sda2, sdb2, sdc2, ...etc.
>
> All of those below LVM.
>
> This worked for a while, but when more 1TB disks started making way
> into the array, performance dropped because the disk had to read from
> 2 partitions on the same disk, and even worse: When a disk fail, both
> arrays were affected, and things only got nastier and worse with time.
Sorry, I don't quite see what you mean. Sure, if half your drives are
500GB and half are 1TB, and you therefore have 2 arrays on the 1TB
drives, with the arrays as PVs for LVM, and one filesystem over the lot,
you're going to get twice as many read/write ops on the larger drives,
but you'd get that just concatenating the drives with JBOD. I wasn't
suggesting you let LVM stripe across the arrays, though - that would be
performance suicide.
> I would not recommend that you create arrays of partitions that rely
> on each other.
Again I don't see what you mean by "rely on each other", they're just
PVs to LVM.
Cheers,
John.
* Re: Full use of varying drive sizes?
2009-09-22 11:24 Full use of varying drive sizes? Jon Hardcastle
2009-09-22 11:52 ` Kristleifur Daðason
2009-09-22 13:05 ` Tapani Tarvainen
@ 2009-09-23 10:07 ` Goswin von Brederlow
2009-09-23 14:57 ` Jon Hardcastle
2 siblings, 1 reply; 22+ messages in thread
From: Goswin von Brederlow @ 2009-09-23 10:07 UTC (permalink / raw)
To: Jon; +Cc: linux-raid
Jon Hardcastle <jd_hardcastle@yahoo.com> writes:
> Hey guys,
>
> I have an array made of many drive sizes ranging from 500GB to 1TB and I appreciate that the array can only be a multiple of the smallest - I use the differing sizes as i just buy the best value drive at the time and hope that as i phase out the old drives I can '--grow' the array. That is all fine and dandy.
>
> But could someone tell me, did I dream that there might one day be support to allow you to actually use that unused space in the array? Because that would be awesome! (if a little hairy re: spare drives - have to be the size of the largest drive in the array atleast..?) I have 3x500GB 2x750GB 1x1TB so I have 1TB of completely unused space!
>
> Cheers.
>
> Jon H
I face the same problem as I buy new disks whenever I need more space
and have the money.
I found a rather simple way to organize disks of different sizes into
a set of software RAIDs that gives the maximum usable size. The reasoning
behind this algorithm is as follows:
1) two partitions of one disk must never be in the same raid set;
2) put as many disks as possible in each raid set, to minimize the loss to
parity;
3) the number of disks in each raid set should be as equal as possible, to
give a uniform amount of redundancy (the same safety for all data). The
worst (and usual) case will be a difference of 1 disk.
So here is the algorithm:
1) Draw a box as wide as the largest disk, open-ended towards the
bottom.
2) Draw in each disk, in order of size, one to the right of the other.
When you hit the right side of the box, continue on the next line.
3) Go through the box left to right and draw a vertical line every
time one disk ends and another starts.
4) Each sub-box created this way represents one raid, made from the disks
drawn into it, each contributing the size it occupies within that box.
In your case you have 6 Disks: A (1TB), BC (750G), DEF(500G)
+----------+-----+-----+
|AAAAAAAAAA|AAAAA|AAAAA|
|BBBBBBBBBB|BBBBB|CCCCC|
|CCCCCCCCCC|DDDDD|DDDDD|
|EEEEEEEEEE|FFFFF|FFFFF|
| md0 | md1 | md2 |
For raid5 this would give you:
md0: sda1, sdb1, sdc1, sde1 (500G) -> 1500G
md1: sda2, sdb2, sdd1, sdf1 (250G) -> 750G
md2: sda3, sdc2, sdd2, sdf2 (250G) -> 750G
-----
3000G total
As a spare you would probably want to always use the largest disk, as
only then is it completely unused and can power down.
Note that in your case the fit is perfect, with all raids having 4
disks. This is not always the case; worst case there is a difference
of 1 disk between raids, though.
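A minimal, untested sketch of creating that layout, with the disk letters mapped to sda-sdf as in the listing above:

  mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sde1
  mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdd1 /dev/sdf1
  mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sda3 /dev/sdc2 /dev/sdd2 /dev/sdf2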
As a side note: resizing when you get new disks might become tricky
and involve shuffling around a lot of data. You might want to split
md0 into 2 raids with 250G partitions each, assuming future disks will
continue to be multiples of 250G.
MfG
Goswin
* Re: Full use of varying drive sizes?
2009-09-22 13:07 ` Majed B.
2009-09-22 15:38 ` Jon Hardcastle
2009-09-23 8:20 ` John Robinson
@ 2009-09-23 10:15 ` Tapani Tarvainen
2009-09-23 12:42 ` Goswin von Brederlow
2 siblings, 1 reply; 22+ messages in thread
From: Tapani Tarvainen @ 2009-09-23 10:15 UTC (permalink / raw)
To: Linux RAID
On Tue, Sep 22, 2009 at 04:07:53PM +0300, Majed B. (majedb@gmail.com) wrote:
> When I first put up a storage box, it was built out of 4x 500GB disks,
> later on, I expanded to 1TB disks.
>
> What I did was partition the 1TB disks into 2x 500GB partitions, then
> create 2 RAID arrays: Each array out of partitions:
> md0: sda1, sdb1, sdc1, ...etc.
> md1: sda2, sdb2, sdc2, ...etc.
>
> All of those below LVM.
>
> This worked for a while, but when more 1TB disks started making way
> into the array, performance dropped because the disk had to read from
> 2 partitions on the same disk, and even worse: When a disk fail, both
> arrays were affected, and things only got nastier and worse with time.
I'm not 100% sure I understand what you did, but for the record,
I've got a box with four 1TB disks arranged roughly like this:
md0: sda1, sdb1, sdc1, sde1
md1: sda2, sdb2, sdc2, sde2
md2: sda3, sdb3, sdc3, sde3
md3: sda4, sdb4, sdc4, sde4
and each md a pv under lvm, and it's been running problem-free
for over a year now. (No claims about performance, haven't
made any usable measurements, but it's fast enough for what
it does.)
When it was new I had strange problems with one disk dropping out of the
arrays every few days. The cause was traced to a faulty SATA controller
(replacing it fixed the problem), but the process revealed an extra
advantage of the partitioning scheme: the lost disk could be added
back after a reboot and the arrays rebuilt, but the fault had appeared
in only one md at a time, so recovery was four times faster
than if the disks had had only one partition.
--
Tapani Tarvainen
* Re: Full use of varying drive sizes?
2009-09-23 10:15 ` Tapani Tarvainen
@ 2009-09-23 12:42 ` Goswin von Brederlow
0 siblings, 0 replies; 22+ messages in thread
From: Goswin von Brederlow @ 2009-09-23 12:42 UTC (permalink / raw)
To: Tapani Tarvainen; +Cc: Linux RAID
Tapani Tarvainen <raid@tapanitarvainen.fi> writes:
> On Tue, Sep 22, 2009 at 04:07:53PM +0300, Majed B. (majedb@gmail.com) wrote:
>
>> When I first put up a storage box, it was built out of 4x 500GB disks,
>> later on, I expanded to 1TB disks.
>>
>> What I did was partition the 1TB disks into 2x 500GB partitions, then
>> create 2 RAID arrays: Each array out of partitions:
>> md0: sda1, sdb1, sdc1, ...etc.
>> md1: sda2, sdb2, sdc2, ...etc.
>>
>> All of those below LVM.
>>
>> This worked for a while, but when more 1TB disks started making way
>> into the array, performance dropped because the disk had to read from
>> 2 partitions on the same disk, and even worse: When a disk fail, both
>> arrays were affected, and things only got nastier and worse with time.
>
> I'm not 100% sure I understand what you did, but for the record,
> I've got a box with four 1TB disks arranged roughly like this:
>
> md0: sda1, sdb1, sdc1, sde1
> md1: sda2, sdb2, sdc2, sde2
> md2: sda3, sdb3, sdc3, sde3
> md3: sda4, sdb4, sdc4, sde4
>
> and each md a pv under lvm, and it's been running problem-free
> for over a year now. (No claims about performance, haven't
> made any usable measurements, but it's fast enough for what
> it does.)
>
> When it was new I had strange problems of one disk dropping out of the
> arrays every few days. The reason was traced to faulty SATA controller
> (replacing it fixed the problem), but the process revealed an extra
> advantage in the partitioning scheme: the lost disk could be added
> back after reboot and array rebuilt, but the fault had appeared
> in only one md at a time, so recovery was four times faster
> than if the disks had had only one partition.
In such a case a write-intent bitmap will bring the resync time down to minutes.
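e.g. adding an internal write-intent bitmap to an existing array (md0 is a placeholder):

  mdadm --grow --bitmap=internal /dev/md0   # only out-of-sync regions are resynced after a transient failure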
MfG
Goswin
* Re: Full use of varying drive sizes?
2009-09-23 10:07 ` Goswin von Brederlow
@ 2009-09-23 14:57 ` Jon Hardcastle
2009-09-23 20:28 ` Full use of varying drive sizes?---maybe a new raid mode is the answer? Konstantinos Skarlatos
0 siblings, 1 reply; 22+ messages in thread
From: Jon Hardcastle @ 2009-09-23 14:57 UTC (permalink / raw)
To: Jon, Goswin von Brederlow; +Cc: linux-raid
--- On Wed, 23/9/09, Goswin von Brederlow <goswin-v-b@web.de> wrote:
> From: Goswin von Brederlow <goswin-v-b@web.de>
> Subject: Re: Full use of varying drive sizes?
> To: Jon@eHardcastle.com
> Cc: linux-raid@vger.kernel.org
> Date: Wednesday, 23 September, 2009, 11:07 AM
> Jon Hardcastle <jd_hardcastle@yahoo.com>
> writes:
>
> > Hey guys,
> >
> > I have an array made of many drive sizes ranging from
> 500GB to 1TB and I appreciate that the array can only be a
> multiple of the smallest - I use the differing sizes as i
> just buy the best value drive at the time and hope that as i
> phase out the old drives I can '--grow' the array. That is
> all fine and dandy.
> >
> > But could someone tell me, did I dream that there
> might one day be support to allow you to actually use that
> unused space in the array? Because that would be awesome!
> (if a little hairy re: spare drives - have to be the size of
> the largest drive in the array atleast..?) I have 3x500GB
> 2x750GB 1x1TB so I have 1TB of completely unused space!
> >
> > Cheers.
> >
> > Jon H
>
> I face the same problem as I buy new disks whenever I need
> more space
> and have the money.
>
> I found a rather simple way to organize disks of different
> sizes into
> a set of software raids that gives the maximum size. The
> reasoning for
> this algorithm are as follows:
>
> 1) 2 partitions of a disk must never be in the same raid
> set
>
> 2) as many disks as possible in each raid set to minimize
> the loss for
> parity
>
> 3) the number of disks in each raid set should be equal to
> give
> uniform amount of redundancy (same saftey for all data).
> Worst (and
> usual) case will be a difference of 1 disk.
>
>
> So here is the algorithm:
>
> 1) Draw a box as wide as the largest disk and open ended
> towards the
> bottom.
>
> 2) Draw in each disk in order of size one right to the
> other.
> When you hit the right side of the box
> continue in the next line.
>
> 3) Go through the box left to right and draw a vertical
> line every
> time one disk ends and another starts.
>
> 4) Each sub-box creted thus represents one raid using the
> disks drawn
> into it in the respective sizes present
> in the box.
>
> In your case you have 6 Disks: A (1TB), BC (750G),
> DEF(500G)
>
> +----------+-----+-----+
> |AAAAAAAAAA|AAAAA|AAAAA|
> |BBBBBBBBBB|BBBBB|CCCCC|
> |CCCCCCCCCC|DDDDD|DDDDD|
> |EEEEEEEEEE|FFFFF|FFFFF|
> | md0 | md1 | md2 |
>
> For raid5 this would give you:
>
> md0: sda1, sdb1, sdc1, sde1 (500G) -> 1500G
> md1: sda2, sdb2, sdd1, sdf1 (250G) -> 750G
> md2: sda3, sdc2, sdd2, sdf2 (250G) -> 750G
>
>
> -----
>
>
> 3000G total
>
> As spare you would probably want to always use the largest
> disk as
> only then it is completly unused and can power down.
>
> Note that in your case the fit is perfect with all raids
> having 4
> disks. This is not always the case. Worst case there is a
> difference
> of 1 between raids though.
>
>
>
> As a side node: Resizing when you get new disks might
> become tricky
> and involve shuffeling around a lot of data. You might want
> to split
> md0 into 2 raids with 250G partitiosn each assuming future
> disks will
> continue to be multiples of 250G.
>
> MfG
> Goswin
>
Yes,
This is a great system. I did think about this when I first created my array, but I was young and lacked the confidence to do much...
So assuming I then purchased a 1.5TB drive, the diagram would change to
7 Disks: A (1TB), B C (750G), D E F (500G), G (1.5TB)
i) So I'd partition the new drive up into 250GB chunks and add one chunk to each of md0-md3:
+-----+-----+-----+-----+-----+-----+
|GGGGG|GGGGG|GGGGG|GGGGG|GGGGG|GGGGG|
|AAAAA|AAAAA|AAAAA|AAAAA| | |
|BBBBB|BBBBB|BBBBB|CCCCC| | |
|CCCCC|CCCCC|DDDDD|DDDDD| | |
|EEEEE|EEEEE|FFFFF|FFFFF| | |
| md0| md1 | md2 | md3 | md4 | md5 |
ii) Then I guess I'd have to remove the E's from md0 and md1 (which I can do by failing those partitions?), which would then kick in a rebuild onto the newly added G's, giving:
+-----+-----+-----+-----+-----+-----+
|GGGGG|GGGGG|GGGGG|GGGGG|GGGGG|GGGGG|
|AAAAA|AAAAA|AAAAA|AAAAA|EEEEE|EEEEE|
|BBBBB|BBBBB|BBBBB|CCCCC|FFFFF|FFFFF|
|CCCCC|CCCCC|DDDDD|DDDDD| | |
|XXXXX|XXXXX|XXXXX|XXXXX| | |
| md0| md1 | md2 | md3 | md4 | md5 |
iii) Repeat for the F's, which would again trigger a rebuild onto the G's.
The end result is 6 arrays, with 4 and 3 partitions in them respectively, i.e.
+--1--+--2--+--3--+--4--+--5--+--6--+
sda|GGGGG|GGGGG|GGGGG|GGGGG|GGGGG|GGGGG|
sdb|AAAAA|AAAAA|AAAAA|AAAAA|EEEEE|EEEEE|
sdc|BBBBB|BBBBB|BBBBB|CCCCC|FFFFF|FFFFF|
sdd|CCCCC|CCCCC|DDDDD|DDDDD| | |
sde| md0| md1 | md2 | md3 | md4 | md5 |
md0: sda1, sdb1, sdc1, sdd1 (250G) -> 750G
md1: sda2, sdb2, sdc2, sdd2 (250G) -> 750G
md2: sda3, sdb3, sdc3, sdd3 (250G) -> 750G
md3: sda4, sdb4, sdc4, sdd4 (250G) -> 750G
md4: sda5, sdb5, sdc5 -> 500G
md5: sda6, sdb6, sdc6 -> 500G
Total -> 4000G
I can't do the maths right now as my head hurts too much, but isn't this quite wasteful, with so many RAID-5 arrays each burning 1x250GB on parity?
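For step ii), a rough, untested sketch of one of those moves with mdadm; sdg here stands for the new 1.5TB drive and sde for drive E (both names are placeholders):

  # add two of the new drive's 250GB partitions as spares
  mdadm /dev/md0 --add /dev/sdg1
  mdadm /dev/md1 --add /dev/sdg2

  # fail and remove E's partitions; md rebuilds onto the new spares
  mdadm /dev/md0 --fail /dev/sde1 --remove /dev/sde1
  mdadm /dev/md1 --fail /dev/sde2 --remove /dev/sde2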
Finally... I DID find a reference...
check out: http://neil.brown.name/blog/20090817000931
'
...
It would also be nice to teach RAID5 to handle arrays with devices of different sizes. There are some complications there as you could have a hot spare that can replace some devices but not all.
...
'
-----------------------
N: Jon Hardcastle
E: Jon@eHardcastle.com
'Do not worry about tomorrow, for tomorrow will bring worries of its own.'
-----------------------
* Re: Full use of varying drive sizes?---maybe a new raid mode is the answer?
2009-09-23 14:57 ` Jon Hardcastle
@ 2009-09-23 20:28 ` Konstantinos Skarlatos
2009-09-23 21:29 ` Chris Green
` (3 more replies)
0 siblings, 4 replies; 22+ messages in thread
From: Konstantinos Skarlatos @ 2009-09-23 20:28 UTC (permalink / raw)
To: Jon; +Cc: Goswin von Brederlow, linux-raid, neilb
Instead of doing all those things, I have a suggestion to make:
Something that is like RAID 4 without striping.
There are already 3 programs doing this (Unraid, Flexraid and disparity),
but putting this functionality into linux-raid would be tremendous. (The
first two work on Linux, and the third is a command-line Windows
program that works fine under Wine.)
The basic idea is this: take any number of drives, with any capacity and
filesystem you like. Then provide the program with an empty disk at
least as large as your largest disk. The program creates parity data by
XORing the disks together sequentially, block by block (or file by file),
until it reaches the end of the smallest one. (It XORs block 1 of disk A
with block 1 of disk B, with block 1 of disk C..., and writes the result
to block 1 of the parity disk.) Then it continues with the remaining drives,
until it reaches the end of the largest one.
Disk    A   B   C   D   E   P
Block   1   1   1   1   1   1
Block   2   2   2           2
Block   3   3               3
Block   4                   4
(A-E are data drives, P is the parity drive.)
The great thing about this method is that when you lose one disk you can
get all of its data back; when you lose two disks you only lose the data
on them, not the whole array. New disks can be added and the parity
recalculated by reading only the new disk and the parity disk.
Please consider adding this as a feature request; it would be a big plus for
Linux if such functionality existed, bringing many users over from WHS and
ZFS, as it especially caters to the needs of people who store
video and their movie collections on a home server.
Thanks for your time
Jon Hardcastle wrote:
> --- On Wed, 23/9/09, Goswin von Brederlow <goswin-v-b@web.de> wrote:
>
>
>> From: Goswin von Brederlow <goswin-v-b@web.de>
>> Subject: Re: Full use of varying drive sizes?
>> To: Jon@eHardcastle.com
>> Cc: linux-raid@vger.kernel.org
>> Date: Wednesday, 23 September, 2009, 11:07 AM
>> Jon Hardcastle <jd_hardcastle@yahoo.com>
>> writes:
>>
>>
>>> Hey guys,
>>>
>>> I have an array made of many drive sizes ranging from
>>>
>> 500GB to 1TB and I appreciate that the array can only be a
>> multiple of the smallest - I use the differing sizes as i
>> just buy the best value drive at the time and hope that as i
>> phase out the old drives I can '--grow' the array. That is
>> all fine and dandy.
>>
>>> But could someone tell me, did I dream that there
>>>
>> might one day be support to allow you to actually use that
>> unused space in the array? Because that would be awesome!
>> (if a little hairy re: spare drives - have to be the size of
>> the largest drive in the array atleast..?) I have 3x500GB
>> 2x750GB 1x1TB so I have 1TB of completely unused space!
>>
>>> Cheers.
>>>
>>> Jon H
>>>
>> I face the same problem as I buy new disks whenever I need
>> more space
>> and have the money.
>>
>> I found a rather simple way to organize disks of different
>> sizes into
>> a set of software raids that gives the maximum size. The
>> reasoning for
>> this algorithm are as follows:
>>
>> 1) 2 partitions of a disk must never be in the same raid
>> set
>>
>> 2) as many disks as possible in each raid set to minimize
>> the loss for
>> parity
>>
>> 3) the number of disks in each raid set should be equal to
>> give
>> uniform amount of redundancy (same saftey for all data).
>> Worst (and
>> usual) case will be a difference of 1 disk.
>>
>>
>> So here is the algorithm:
>>
>> 1) Draw a box as wide as the largest disk and open ended
>> towards the
>> bottom.
>>
>> 2) Draw in each disk in order of size one right to the
>> other.
>> When you hit the right side of the box
>> continue in the next line.
>>
>> 3) Go through the box left to right and draw a vertical
>> line every
>> time one disk ends and another starts.
>>
>> 4) Each sub-box creted thus represents one raid using the
>> disks drawn
>> into it in the respective sizes present
>> in the box.
>>
>> In your case you have 6 Disks: A (1TB), BC (750G),
>> DEF(500G)
>>
>> +----------+-----+-----+
>> |AAAAAAAAAA|AAAAA|AAAAA|
>> |BBBBBBBBBB|BBBBB|CCCCC|
>> |CCCCCCCCCC|DDDDD|DDDDD|
>> |EEEEEEEEEE|FFFFF|FFFFF|
>> | md0 | md1 | md2 |
>>
>> For raid5 this would give you:
>>
>> md0: sda1, sdb1, sdc1, sde1 (500G) -> 1500G
>> md1: sda2, sdb2, sdd1, sdf1 (250G) -> 750G
>> md2: sda3, sdc2, sdd2, sdf2 (250G) -> 750G
>>
>>
>> -----
>>
>>
>> 3000G total
>>
>> As spare you would probably want to always use the largest
>> disk as
>> only then it is completly unused and can power down.
>>
>> Note that in your case the fit is perfect with all raids
>> having 4
>> disks. This is not always the case. Worst case there is a
>> difference
>> of 1 between raids though.
>>
>>
>>
>> As a side node: Resizing when you get new disks might
>> become tricky
>> and involve shuffeling around a lot of data. You might want
>> to split
>> md0 into 2 raids with 250G partitiosn each assuming future
>> disks will
>> continue to be multiples of 250G.
>>
>> MfG
>> Goswin
>>
>>
>
> Yes,
>
> This is a great system. I did think about this when i first created my array but I was young and lacked the confidence to do much..
>
> So assuming I then purchased a 1.5TB drive the diagram would change to
>
> 6 Disks: A (1TB), BC (750G), DEF(500G), G(1.5TB)
>
> i) So i'd partition the drive up into 250GB chucks and add each chuck to md0~3
>
> +-----+-----+-----+-----+-----+-----+
> |GGGGG|GGGGG|GGGGG|GGGGG|GGGGG|GGGGG|
> |AAAAA|AAAAA|AAAAA|AAAAA| | |
> |BBBBB|BBBBB|BBBBB|CCCCC| | |
> |CCCCC|CCCCC|DDDDD|DDDDD| | |
> |EEEEE|EEEEE|FFFFF|FFFFF| | |
> | md0| md1 | md2 | md3 | md4 | md5 |
>
>
> ii) then I guess I'd have to relieve the E's from md0 and md1? giving (which I can do by failing the drives?)
> this would then kick in the use of the newly added G's?
>
> +-----+-----+-----+-----+-----+-----+
> |GGGGG|GGGGG|GGGGG|GGGGG|GGGGG|GGGGG|
> |AAAAA|AAAAA|AAAAA|AAAAA|EEEEE|EEEEE|
> |BBBBB|BBBBB|BBBBB|CCCCC|FFFFF|FFFFF|
> |CCCCC|CCCCC|DDDDD|DDDDD| | |
> |XXXXX|XXXXX|XXXXX|XXXXX| | |
> | md0| md1 | md2 | md3 | md4 | md5 |
>
> iii) Repeat for the F's which would again trigger the rebuild using the G's.
>
> the end result is 6 arrays with 4 and 2 partions in respectively i.e.
>
> +--1--+--2--+--3--+--4--+--5--+--6--+
> sda|GGGGG|GGGGG|GGGGG|GGGGG|GGGGG|GGGGG|
> sdb|AAAAA|AAAAA|AAAAA|AAAAA|EEEEE|EEEEE|
> sdc|BBBBB|BBBBB|BBBBB|CCCCC|FFFFF|FFFFF|
> sdd|CCCCC|CCCCC|DDDDD|DDDDD| | |
> sde| md0| md1 | md2 | md3 | md4 | md5 |
>
>
> md0: sda1, sdb1, sdc1, sdd1 (250G) -> 750G
> md1: sda2, sdb2, sdc2, sdd2 (250G) -> 750G
> md2: sda3, sdb3, sdc3, sdd3 (250G) -> 750G
> md3: sda4, sdb4, sdc4, sdd4 (250G) -> 750G
> md4: sda5, sdb5, sdc5 -> 500G
> md5: sda6, sdb6, sdc6 -> 500G
>
> Total -> 4000G
>
> I cant do the maths tho as my head hurts too much but is this quite wasteful with so many raid 5 arrays each time burning 1x250gb?
>
> Finally... i DID find a reference...
>
> check out: http://neil.brown.name/blog/20090817000931
>
> '
> ...
> It would also be nice to teach RAID5 to handle arrays with devices of different sizes. There are some complications there as you could have a hot spare that can replace some devices but not all.
> ...
> '
>
>
> -----------------------
> N: Jon Hardcastle
> E: Jon@eHardcastle.com
> 'Do not worry about tomorrow, for tomorrow will bring worries of its own.'
> -----------------------
>
>
>
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
* RE: Full use of varying drive sizes?---maybe a new raid mode is the answer?
2009-09-23 20:28 ` Full use of varying drive sizes?---maybe a new raid mode is the answer? Konstantinos Skarlatos
@ 2009-09-23 21:29 ` Chris Green
2009-09-24 17:23 ` John Robinson
` (2 subsequent siblings)
3 siblings, 0 replies; 22+ messages in thread
From: Chris Green @ 2009-09-23 21:29 UTC (permalink / raw)
To: 'Konstantinos Skarlatos', Jon@eHardcastle.com
Cc: Goswin von Brederlow, linux-raid@vger.kernel.org, neilb@suse.de
I actually run the cheesiest possible implementation of this that a person could write:
it's a Perl script that looks through my movie directories and chooses sets of movie files of similar size that haven't been backed up and are on different drives.
Then, on the backup drive, it writes a file containing the xor of that set of files plus a little metadata.
It's actually a little more robust than unraid, in that
(a) it can write the backups to any arbitrary drive or set of drives
(b) it provides some degree of protection against accidental file deletion or corruption, since it doesn't bother to delete xors when the files in them are deleted or changed.
Please don't ask for the script - you'd hate it. It's not meant to be run by anyone except for myself.
-----Original Message-----
From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-owner@vger.kernel.org] On Behalf Of Konstantinos Skarlatos
Sent: Wednesday, September 23, 2009 1:29 PM
To: Jon@eHardcastle.com
Cc: Goswin von Brederlow; linux-raid@vger.kernel.org; neilb@suse.de
Subject: Re: Full use of varying drive sizes?---maybe a new raid mode is the answer?
Instead of doing all those things, I have a suggestion to make:
Something that is like RAID 4 without striping.
There are already 3 programs doing that, Unraid, Flexraid and disparity,
but putting this functionality into linux-raid would be tremendous. (the
first two work on linux and the third one is a command line windows
program that works fine under wine).
The basic idea is this: Take any number of drives, with any capacity and
filesystem you like. Then provide the program with an empty disk at
least as large as your largest disk. The program creates parity data by
XORing together the disks sequentially block by block(or file by file),
until it reaches the end of the smallest one.(It XORs block 1 of disk A
with block1 of disk B, with block1 of disk C.... and writes the result
to block1 of Parity disk) Then it continues with the rest of the drives,
until it reaches the end of the last drive.
Disk A B C D E P
Block 1 1 1 1 1 1
Block 2 2 2 2
Block 3 3 3
Block 4 4
The great thing about this method is that when you lose one disk you can
get all your data back. when you lose two disks you only lose the data
on them, and not the whole array. New disks can be added and the parity
recalculated by reading only the new disk and the parity disk.
Please consider adding this feature request, it would be a big plus for
linux if such a functionality existed, bringing many users from WHS and
ZFS here, as it especially caters to the needs of people that store
video and their movie collection at their home server.
Thanks for your time
ABCDE for data drives, and P for parity
Jon Hardcastle wrote:
> --- On Wed, 23/9/09, Goswin von Brederlow <goswin-v-b@web.de> wrote:
>
>
>> From: Goswin von Brederlow <goswin-v-b@web.de>
>> Subject: Re: Full use of varying drive sizes?
>> To: Jon@eHardcastle.com
>> Cc: linux-raid@vger.kernel.org
>> Date: Wednesday, 23 September, 2009, 11:07 AM
>> Jon Hardcastle <jd_hardcastle@yahoo.com>
>> writes:
>>
>>
>>> Hey guys,
>>>
>>> I have an array made of many drive sizes ranging from
>>>
>> 500GB to 1TB and I appreciate that the array can only be a
>> multiple of the smallest - I use the differing sizes as i
>> just buy the best value drive at the time and hope that as i
>> phase out the old drives I can '--grow' the array. That is
>> all fine and dandy.
>>
>>> But could someone tell me, did I dream that there
>>>
>> might one day be support to allow you to actually use that
>> unused space in the array? Because that would be awesome!
>> (if a little hairy re: spare drives - have to be the size of
>> the largest drive in the array atleast..?) I have 3x500GB
>> 2x750GB 1x1TB so I have 1TB of completely unused space!
>>
>>> Cheers.
>>>
>>> Jon H
>>>
>> I face the same problem as I buy new disks whenever I need
>> more space
>> and have the money.
>>
>> I found a rather simple way to organize disks of different
>> sizes into
>> a set of software raids that gives the maximum size. The
>> reasoning for
>> this algorithm are as follows:
>>
>> 1) 2 partitions of a disk must never be in the same raid
>> set
>>
>> 2) as many disks as possible in each raid set to minimize
>> the loss for
>> parity
>>
>> 3) the number of disks in each raid set should be equal to
>> give
>> uniform amount of redundancy (same saftey for all data).
>> Worst (and
>> usual) case will be a difference of 1 disk.
>>
>>
>> So here is the algorithm:
>>
>> 1) Draw a box as wide as the largest disk and open ended
>> towards the
>> bottom.
>>
>> 2) Draw in each disk in order of size one right to the
>> other.
>> When you hit the right side of the box
>> continue in the next line.
>>
>> 3) Go through the box left to right and draw a vertical
>> line every
>> time one disk ends and another starts.
>>
>> 4) Each sub-box creted thus represents one raid using the
>> disks drawn
>> into it in the respective sizes present
>> in the box.
>>
>> In your case you have 6 Disks: A (1TB), BC (750G),
>> DEF(500G)
>>
>> +----------+-----+-----+
>> |AAAAAAAAAA|AAAAA|AAAAA|
>> |BBBBBBBBBB|BBBBB|CCCCC|
>> |CCCCCCCCCC|DDDDD|DDDDD|
>> |EEEEEEEEEE|FFFFF|FFFFF|
>> | md0 | md1 | md2 |
>>
>> For raid5 this would give you:
>>
>> md0: sda1, sdb1, sdc1, sde1 (500G) -> 1500G
>> md1: sda2, sdb2, sdd1, sdf1 (250G) -> 750G
>> md2: sda3, sdc2, sdd2, sdf2 (250G) -> 750G
>>
>>
>> -----
>>
>>
>> 3000G total
>>
>> As spare you would probably want to always use the largest
>> disk as
>> only then it is completly unused and can power down.
>>
>> Note that in your case the fit is perfect with all raids
>> having 4
>> disks. This is not always the case. Worst case there is a
>> difference
>> of 1 between raids though.
>>
>>
>>
>> As a side node: Resizing when you get new disks might
>> become tricky
>> and involve shuffeling around a lot of data. You might want
>> to split
>> md0 into 2 raids with 250G partitiosn each assuming future
>> disks will
>> continue to be multiples of 250G.
>>
>> MfG
>> Goswin
>>
>>
>
> Yes,
>
> This is a great system. I did think about this when i first created my array but I was young and lacked the confidence to do much..
>
> So assuming I then purchased a 1.5TB drive the diagram would change to
>
> 6 Disks: A (1TB), BC (750G), DEF(500G), G(1.5TB)
>
> i) So i'd partition the drive up into 250GB chucks and add each chuck to md0~3
>
> +-----+-----+-----+-----+-----+-----+
> |GGGGG|GGGGG|GGGGG|GGGGG|GGGGG|GGGGG|
> |AAAAA|AAAAA|AAAAA|AAAAA| | |
> |BBBBB|BBBBB|BBBBB|CCCCC| | |
> |CCCCC|CCCCC|DDDDD|DDDDD| | |
> |EEEEE|EEEEE|FFFFF|FFFFF| | |
> | md0| md1 | md2 | md3 | md4 | md5 |
>
>
> ii) then I guess I'd have to relieve the E's from md0 and md1? giving (which I can do by failing the drives?)
> this would then kick in the use of the newly added G's?
>
> +-----+-----+-----+-----+-----+-----+
> |GGGGG|GGGGG|GGGGG|GGGGG|GGGGG|GGGGG|
> |AAAAA|AAAAA|AAAAA|AAAAA|EEEEE|EEEEE|
> |BBBBB|BBBBB|BBBBB|CCCCC|FFFFF|FFFFF|
> |CCCCC|CCCCC|DDDDD|DDDDD| | |
> |XXXXX|XXXXX|XXXXX|XXXXX| | |
> | md0| md1 | md2 | md3 | md4 | md5 |
>
> iii) Repeat for the F's, which would again trigger a rebuild onto the G's.
>
> The end result is 6 arrays - four with 4 partitions each and two with 3 - i.e.:
>
> +--1--+--2--+--3--+--4--+--5--+--6--+
> sda|GGGGG|GGGGG|GGGGG|GGGGG|GGGGG|GGGGG|
> sdb|AAAAA|AAAAA|AAAAA|AAAAA|EEEEE|EEEEE|
> sdc|BBBBB|BBBBB|BBBBB|CCCCC|FFFFF|FFFFF|
> sdd|CCCCC|CCCCC|DDDDD|DDDDD| | |
>    | md0 | md1 | md2 | md3 | md4 | md5 |
>
>
> md0: sda1, sdb1, sdc1, sdd1 (250G) -> 750G
> md1: sda2, sdb2, sdc2, sdd2 (250G) -> 750G
> md2: sda3, sdb3, sdc3, sdd3 (250G) -> 750G
> md3: sda4, sdb4, sdc4, sdd4 (250G) -> 750G
> md4: sda5, sdb5, sdc5 -> 500G
> md5: sda6, sdb6, sdc6 -> 500G
>
> Total -> 4000G
>
> I can't do the maths just now as my head hurts too much, but is this quite wasteful, with so many RAID 5 arrays each burning 1x250GB on parity?
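To put rough numbers on that, here is a quick back-of-the-envelope check in Python (the figures come straight from the listing above; the single-array comparison at the end is only an illustration):

# Parity overhead of the proposed layout (sizes in GB).
arrays = [
    (4, 250), (4, 250), (4, 250), (4, 250),   # md0..md3: 4 members of 250G
    (3, 250), (3, 250),                       # md4, md5: 3 members of 250G
]
raw    = sum(n * size for n, size in arrays)          # space consumed by the arrays
usable = sum((n - 1) * size for n, size in arrays)    # raid5 loses one member per array
print(f"raw {raw}G, usable {usable}G, parity overhead {raw - usable}G")
# For comparison, a single raid5 over all 7 whole disks would be limited
# by the 500G drives: 7 * 500 - 500 = 3000G usable.

So the six small arrays give 4000G usable out of 5500G raw, with 1500G burned on parity, versus roughly 3000G usable if everything sat in one raid5 capped at the 500G disk size - more parity is spent, but considerably more space is usable overall.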
>
> Finally... I DID find a reference...
>
> check out: http://neil.brown.name/blog/20090817000931
>
> '
> ...
> It would also be nice to teach RAID5 to handle arrays with devices of different sizes. There are some complications there as you could have a hot spare that can replace some devices but not all.
> ...
> '
>
>
> -----------------------
> N: Jon Hardcastle
> E: Jon@eHardcastle.com
> 'Do not worry about tomorrow, for tomorrow will bring worries of its own.'
> -----------------------
>
>
>
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: Full use of varying drive sizes?---maybe a new raid mode is the answer?
2009-09-23 20:28 ` Full use of varying drive sizes?---maybe a new raid mode is the answer? Konstantinos Skarlatos
2009-09-23 21:29 ` Chris Green
@ 2009-09-24 17:23 ` John Robinson
2009-09-25 6:09 ` Neil Brown
2009-09-28 10:53 ` Goswin von Brederlow
3 siblings, 0 replies; 22+ messages in thread
From: John Robinson @ 2009-09-24 17:23 UTC (permalink / raw)
To: Konstantinos Skarlatos; +Cc: Linux RAID
On 23/09/2009 21:28, Konstantinos Skarlatos wrote:
> Instead of doing all those things, I have a suggestion to make:
>
> Something that is like RAID 4 without striping.
>
> There are already 3 programs doing that, Unraid, Flexraid and disparity,
[...]
> Disk A B C D E P
> Block 1 1 1 1 1 1
> Block 2 2 2 2
> Block 3 3 3
> Block 4 4
This is exactly what I want for a particular application I was thinking
of putting together. I want to offer my customers nightly backups over
the 'net. That's easy enough, and well off the scope of this list. I was
going to save the data onto a big RAID of big discs, but I was wondering
what to do when the customer needs the data, because downloading the
data would take way too long, and I'm not taking my server offline to
take to their site. So I thought, hmm, run a disc for each customer,
plus a parity drive in case anything goes phut. Then when the customer
needs their data, I can yank their drive and go to their premises with
their data in hand. Of course, another option would be to copy the data
onto a spare drive, but even that takes a while if they're in a hurry.
Might have known somebody would already have done it.
Cheers,
John.
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: Full use of varying drive sizes?---maybe a new raid mode is the answer?
2009-09-23 20:28 ` Full use of varying drive sizes?---maybe a new raid mode is the answer? Konstantinos Skarlatos
2009-09-23 21:29 ` Chris Green
2009-09-24 17:23 ` John Robinson
@ 2009-09-25 6:09 ` Neil Brown
2009-09-27 12:26 ` Konstantinos Skarlatos
2009-09-28 10:53 ` Goswin von Brederlow
3 siblings, 1 reply; 22+ messages in thread
From: Neil Brown @ 2009-09-25 6:09 UTC (permalink / raw)
To: Konstantinos Skarlatos; +Cc: Jon, Goswin von Brederlow, linux-raid
On Wednesday September 23, k.skarlatos@gmail.com wrote:
> Instead of doing all those things, I have a suggestion to make:
>
> Something that is like RAID 4 without striping.
>
> There are already 3 programs doing that, Unraid, Flexraid and disparity,
> but putting this functionality into linux-raid would be tremendous. (the
> first two work on linux and the third one is a command line windows
> program that works fine under wine).
>
> The basic idea is this: Take any number of drives, with any capacity and
> filesystem you like. Then provide the program with an empty disk at
> least as large as your largest disk. The program creates parity data by
> XORing together the disks sequentially, block by block (or file by file),
> until it reaches the end of the smallest one. (It XORs block 1 of disk A
> with block 1 of disk B, with block 1 of disk C... and writes the result
> to block 1 of the parity disk.) Then it continues with the rest of the drives,
> until it reaches the end of the last drive.
>
> Disk A B C D E P
> Block 1 1 1 1 1 1
> Block 2 2 2 2
> Block 3 3 3
> Block 4 4
>
> The great thing about this method is that when you lose one disk you can
> get all your data back. when you lose two disks you only lose the data
> on them, and not the whole array. New disks can be added and the parity
> recalculated by reading only the new disk and the parity disk.
>
> Please consider adding this feature request, it would be a big plus for
> linux if such a functionality existed, bringing many users from WHS and
> ZFS here, as it especially caters to the needs of people that store
> video and their movie collection at their home server.
This probably wouldn't be too hard. There would be some awkwardnesses
though.
The whole array would be one device, so the 'obvious' way to present
the separate non-parity drives would be as partitions of that device.
However you would not then be able to re-partition the device.
You could use dm to partition the partitions I suppose.
Another awkwardness would be that you would need to record somewhere
the size of each device so that when a device fails you can synthesize
a partition/device of the right size. The current md metadata doesn't
have anywhere to store that sort of per-device data. That is clearly
a solvable problem but finding an elegant solution might be a
challenge.
However this is not something I am likely to work on in the
foreseeable future. If someone else would like to have a go I can
certainly make suggestions and review code.
NeilBrown
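To make the scheme concrete, here is a toy Python sketch of the block-level parity described above (disks are modelled as short lists of integer blocks; the names and values are purely illustrative, not any existing implementation). Note that rebuilding a failed disk needs to know how long that disk was - exactly the per-device size metadata mentioned above:

from functools import reduce

# Parity block i is the XOR of block i of every disk long enough to have one.
def build_parity(disks):
    depth = max(len(d) for d in disks)
    return [reduce(lambda a, b: a ^ b, (d[i] for d in disks if i < len(d)), 0)
            for i in range(depth)]

def recover(disks, lost, lost_len, parity):
    # Rebuild the disk at index `lost`; its original length must be known.
    rebuilt = []
    for i in range(lost_len):
        block = parity[i]
        for j, d in enumerate(disks):
            if j != lost and i < len(d):
                block ^= d[i]
        rebuilt.append(block)
    return rebuilt

# Five data disks of shrinking size, loosely following the A..E diagram.
disks = [[11, 12, 13, 14], [21, 22, 23], [31, 32], [41], [51]]
parity = build_parity(disks)
assert recover(disks, lost=0, lost_len=4, parity=parity) == disks[0]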
>
> Thanks for your time
>
>
> ABCDE for data drives, and P for parity
>
> Jon Hardcastle wrote:
> > --- On Wed, 23/9/09, Goswin von Brederlow <goswin-v-b@web.de> wrote:
> >
> >
> >> From: Goswin von Brederlow <goswin-v-b@web.de>
> >> Subject: Re: Full use of varying drive sizes?
> >> To: Jon@eHardcastle.com
> >> Cc: linux-raid@vger.kernel.org
> >> Date: Wednesday, 23 September, 2009, 11:07 AM
> >> Jon Hardcastle <jd_hardcastle@yahoo.com>
> >> writes:
> >>
> >>
> >>> Hey guys,
> >>>
> >>> I have an array made of many drive sizes ranging from
> >>>
> >> 500GB to 1TB and I appreciate that the array can only be a
> >> multiple of the smallest - I use the differing sizes as i
> >> just buy the best value drive at the time and hope that as i
> >> phase out the old drives I can '--grow' the array. That is
> >> all fine and dandy.
> >>
> >>> But could someone tell me, did I dream that there
> >>>
> >> might one day be support to allow you to actually use that
> >> unused space in the array? Because that would be awesome!
> >> (if a little hairy re: spare drives - have to be the size of
> >> the largest drive in the array atleast..?) I have 3x500GB
> >> 2x750GB 1x1TB so I have 1TB of completely unused space!
> >>
> >>> Cheers.
> >>>
> >>> Jon H
> >>>
> >> I face the same problem as I buy new disks whenever I need
> >> more space
> >> and have the money.
> >>
> >> I found a rather simple way to organize disks of different
> >> sizes into
> >> a set of software raids that gives the maximum size. The
> >> reasoning for
> >> this algorithm are as follows:
> >>
> >> 1) 2 partitions of a disk must never be in the same raid
> >> set
> >>
> >> 2) as many disks as possible in each raid set to minimize
> >> the loss for
> >> parity
> >>
> >> 3) the number of disks in each raid set should be equal to
> >> give
> >> uniform amount of redundancy (same saftey for all data).
> >> Worst (and
> >> usual) case will be a difference of 1 disk.
> >>
> >>
> >> So here is the algorithm:
> >>
> >> 1) Draw a box as wide as the largest disk and open ended
> >> towards the
> >> bottom.
> >>
> >> 2) Draw in each disk in order of size one right to the
> >> other.
> >> When you hit the right side of the box
> >> continue in the next line.
> >>
> >> 3) Go through the box left to right and draw a vertical
> >> line every
> >> time one disk ends and another starts.
> >>
> >> 4) Each sub-box creted thus represents one raid using the
> >> disks drawn
> >> into it in the respective sizes present
> >> in the box.
> >>
> >> In your case you have 6 Disks: A (1TB), BC (750G),
> >> DEF(500G)
> >>
> >> +----------+-----+-----+
> >> |AAAAAAAAAA|AAAAA|AAAAA|
> >> |BBBBBBBBBB|BBBBB|CCCCC|
> >> |CCCCCCCCCC|DDDDD|DDDDD|
> >> |EEEEEEEEEE|FFFFF|FFFFF|
> >> | md0 | md1 | md2 |
> >>
> >> For raid5 this would give you:
> >>
> >> md0: sda1, sdb1, sdc1, sde1 (500G) -> 1500G
> >> md1: sda2, sdb2, sdd1, sdf1 (250G) -> 750G
> >> md2: sda3, sdc2, sdd2, sdf2 (250G) -> 750G
> >>
> >>
> >> -----
> >>
> >>
> >> 3000G total
> >>
> >> As spare you would probably want to always use the largest
> >> disk as
> >> only then it is completly unused and can power down.
> >>
> >> Note that in your case the fit is perfect with all raids
> >> having 4
> >> disks. This is not always the case. Worst case there is a
> >> difference
> >> of 1 between raids though.
> >>
> >>
> >>
> >> As a side node: Resizing when you get new disks might
> >> become tricky
> >> and involve shuffeling around a lot of data. You might want
> >> to split
> >> md0 into 2 raids with 250G partitiosn each assuming future
> >> disks will
> >> continue to be multiples of 250G.
> >>
> >> MfG
> >> Goswin
> >>
> >>
> >
> > Yes,
> >
> > This is a great system. I did think about this when i first created my array but I was young and lacked the confidence to do much..
> >
> > So assuming I then purchased a 1.5TB drive the diagram would change to
> >
> > 6 Disks: A (1TB), BC (750G), DEF(500G), G(1.5TB)
> >
> > i) So i'd partition the drive up into 250GB chucks and add each chuck to md0~3
> >
> > +-----+-----+-----+-----+-----+-----+
> > |GGGGG|GGGGG|GGGGG|GGGGG|GGGGG|GGGGG|
> > |AAAAA|AAAAA|AAAAA|AAAAA| | |
> > |BBBBB|BBBBB|BBBBB|CCCCC| | |
> > |CCCCC|CCCCC|DDDDD|DDDDD| | |
> > |EEEEE|EEEEE|FFFFF|FFFFF| | |
> > | md0| md1 | md2 | md3 | md4 | md5 |
> >
> >
> > ii) then I guess I'd have to relieve the E's from md0 and md1? giving (which I can do by failing the drives?)
> > this would then kick in the use of the newly added G's?
> >
> > +-----+-----+-----+-----+-----+-----+
> > |GGGGG|GGGGG|GGGGG|GGGGG|GGGGG|GGGGG|
> > |AAAAA|AAAAA|AAAAA|AAAAA|EEEEE|EEEEE|
> > |BBBBB|BBBBB|BBBBB|CCCCC|FFFFF|FFFFF|
> > |CCCCC|CCCCC|DDDDD|DDDDD| | |
> > |XXXXX|XXXXX|XXXXX|XXXXX| | |
> > | md0| md1 | md2 | md3 | md4 | md5 |
> >
> > iii) Repeat for the F's which would again trigger the rebuild using the G's.
> >
> > the end result is 6 arrays with 4 and 2 partions in respectively i.e.
> >
> > +--1--+--2--+--3--+--4--+--5--+--6--+
> > sda|GGGGG|GGGGG|GGGGG|GGGGG|GGGGG|GGGGG|
> > sdb|AAAAA|AAAAA|AAAAA|AAAAA|EEEEE|EEEEE|
> > sdc|BBBBB|BBBBB|BBBBB|CCCCC|FFFFF|FFFFF|
> > sdd|CCCCC|CCCCC|DDDDD|DDDDD| | |
> > sde| md0| md1 | md2 | md3 | md4 | md5 |
> >
> >
> > md0: sda1, sdb1, sdc1, sdd1 (250G) -> 750G
> > md1: sda2, sdb2, sdc2, sdd2 (250G) -> 750G
> > md2: sda3, sdb3, sdc3, sdd3 (250G) -> 750G
> > md3: sda4, sdb4, sdc4, sdd4 (250G) -> 750G
> > md4: sda5, sdb5, sdc5 -> 500G
> > md5: sda6, sdb6, sdc6 -> 500G
> >
> > Total -> 4000G
> >
> > I cant do the maths tho as my head hurts too much but is this quite wasteful with so many raid 5 arrays each time burning 1x250gb?
> >
> > Finally... i DID find a reference...
> >
> > check out: http://neil.brown.name/blog/20090817000931
> >
> > '
> > ...
> > It would also be nice to teach RAID5 to handle arrays with devices of different sizes. There are some complications there as you could have a hot spare that can replace some devices but not all.
> > ...
> > '
> >
> >
> > -----------------------
> > N: Jon Hardcastle
> > E: Jon@eHardcastle.com
> > 'Do not worry about tomorrow, for tomorrow will bring worries of its own.'
> > -----------------------
> >
> >
> >
> >
> > --
> > To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> > the body of a message to majordomo@vger.kernel.org
> > More majordomo info at http://vger.kernel.org/majordomo-info.html
> >
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: Full use of varying drive sizes?---maybe a new raid mode is the answer?
2009-09-25 6:09 ` Neil Brown
@ 2009-09-27 12:26 ` Konstantinos Skarlatos
0 siblings, 0 replies; 22+ messages in thread
From: Konstantinos Skarlatos @ 2009-09-27 12:26 UTC (permalink / raw)
To: Neil Brown; +Cc: Jon, Goswin von Brederlow, linux-raid
Neil, thanks for your answer! I appreciate that you took the time to
look into this.
So, what can people like me - who do not know how to program - do in
order to make something like this more likely to happen? Create a new
thread here? Post in forums like avsforums that are full of people
who would die to have something like this in Linux? Donate some money
or equipment? Beg? :-)
FWIW I can be a tester for any code that comes out.
Best regards,
Konstantinos Skarlatos
Neil Brown wrote:
> On Wednesday September 23, k.skarlatos@gmail.com wrote:
>
>> Instead of doing all those things, I have a suggestion to make:
>>
>> Something that is like RAID 4 without striping.
>>
>> There are already 3 programs doing that, Unraid, Flexraid and disparity,
>> but putting this functionality into linux-raid would be tremendous. (the
>> first two work on linux and the third one is a command line windows
>> program that works fine under wine).
>>
>> The basic idea is this: Take any number of drives, with any capacity and
>> filesystem you like. Then provide the program with an empty disk at
>> least as large as your largest disk. The program creates parity data by
>> XORing together the disks sequentially block by block(or file by file),
>> until it reaches the end of the smallest one.(It XORs block 1 of disk A
>> with block1 of disk B, with block1 of disk C.... and writes the result
>> to block1 of Parity disk) Then it continues with the rest of the drives,
>> until it reaches the end of the last drive.
>>
>> Disk A B C D E P
>> Block 1 1 1 1 1 1
>> Block 2 2 2 2
>> Block 3 3 3
>> Block 4 4
>>
>> The great thing about this method is that when you lose one disk you can
>> get all your data back. when you lose two disks you only lose the data
>> on them, and not the whole array. New disks can be added and the parity
>> recalculated by reading only the new disk and the parity disk.
>>
>> Please consider adding this feature request, it would be a big plus for
>> linux if such a functionality existed, bringing many users from WHS and
>> ZFS here, as it especially caters to the needs of people that store
>> video and their movie collection at their home server.
>>
>
> This probably wouldn't be too hard. There would be some awkwardnesses
> though.
> The whole array would be one device, so the 'obvious' way to present
> the separate non-parity drives would be as partitions of that device.
> However you would not then be able to re-partition the device.
> You could use dm to partition the partitions I suppose.
>
> Another awkwardness would be that you would need to record somewhere
> the size of each device so that when a device fails you can synthesis
> a partition/device of the right size. The current md metadata doesn't
> have anywhere to store that sort of per-device data. That is clearly
> a solvable problem but finding an elegant solution might be a
> challenge.
>
>
> However this is not something I am likely to work on in the
> foreseeable future. If someone else would like to have a go I can
> certainly make suggestions and review code.
>
> NeilBrown
>
>
>
>
>> Thanks for your time
>>
>>
>> ABCDE for data drives, and P for parity
>>
>> Jon Hardcastle wrote:
>>
>>> --- On Wed, 23/9/09, Goswin von Brederlow <goswin-v-b@web.de> wrote:
>>>
>>>
>>>
>>>> From: Goswin von Brederlow <goswin-v-b@web.de>
>>>> Subject: Re: Full use of varying drive sizes?
>>>> To: Jon@eHardcastle.com
>>>> Cc: linux-raid@vger.kernel.org
>>>> Date: Wednesday, 23 September, 2009, 11:07 AM
>>>> Jon Hardcastle <jd_hardcastle@yahoo.com>
>>>> writes:
>>>>
>>>>
>>>>
>>>>> Hey guys,
>>>>>
>>>>> I have an array made of many drive sizes ranging from
>>>>>
>>>>>
>>>> 500GB to 1TB and I appreciate that the array can only be a
>>>> multiple of the smallest - I use the differing sizes as i
>>>> just buy the best value drive at the time and hope that as i
>>>> phase out the old drives I can '--grow' the array. That is
>>>> all fine and dandy.
>>>>
>>>>
>>>>> But could someone tell me, did I dream that there
>>>>>
>>>>>
>>>> might one day be support to allow you to actually use that
>>>> unused space in the array? Because that would be awesome!
>>>> (if a little hairy re: spare drives - have to be the size of
>>>> the largest drive in the array atleast..?) I have 3x500GB
>>>> 2x750GB 1x1TB so I have 1TB of completely unused space!
>>>>
>>>>
>>>>> Cheers.
>>>>>
>>>>> Jon H
>>>>>
>>>>>
>>>> I face the same problem as I buy new disks whenever I need
>>>> more space
>>>> and have the money.
>>>>
>>>> I found a rather simple way to organize disks of different
>>>> sizes into
>>>> a set of software raids that gives the maximum size. The
>>>> reasoning for
>>>> this algorithm are as follows:
>>>>
>>>> 1) 2 partitions of a disk must never be in the same raid
>>>> set
>>>>
>>>> 2) as many disks as possible in each raid set to minimize
>>>> the loss for
>>>> parity
>>>>
>>>> 3) the number of disks in each raid set should be equal to
>>>> give
>>>> uniform amount of redundancy (same saftey for all data).
>>>> Worst (and
>>>> usual) case will be a difference of 1 disk.
>>>>
>>>>
>>>> So here is the algorithm:
>>>>
>>>> 1) Draw a box as wide as the largest disk and open ended
>>>> towards the
>>>> bottom.
>>>>
>>>> 2) Draw in each disk in order of size one right to the
>>>> other.
>>>> When you hit the right side of the box
>>>> continue in the next line.
>>>>
>>>> 3) Go through the box left to right and draw a vertical
>>>> line every
>>>> time one disk ends and another starts.
>>>>
>>>> 4) Each sub-box creted thus represents one raid using the
>>>> disks drawn
>>>> into it in the respective sizes present
>>>> in the box.
>>>>
>>>> In your case you have 6 Disks: A (1TB), BC (750G),
>>>> DEF(500G)
>>>>
>>>> +----------+-----+-----+
>>>> |AAAAAAAAAA|AAAAA|AAAAA|
>>>> |BBBBBBBBBB|BBBBB|CCCCC|
>>>> |CCCCCCCCCC|DDDDD|DDDDD|
>>>> |EEEEEEEEEE|FFFFF|FFFFF|
>>>> | md0 | md1 | md2 |
>>>>
>>>> For raid5 this would give you:
>>>>
>>>> md0: sda1, sdb1, sdc1, sde1 (500G) -> 1500G
>>>> md1: sda2, sdb2, sdd1, sdf1 (250G) -> 750G
>>>> md2: sda3, sdc2, sdd2, sdf2 (250G) -> 750G
>>>>
>>>>
>>>> -----
>>>>
>>>>
>>>> 3000G total
>>>>
>>>> As spare you would probably want to always use the largest
>>>> disk as
>>>> only then it is completly unused and can power down.
>>>>
>>>> Note that in your case the fit is perfect with all raids
>>>> having 4
>>>> disks. This is not always the case. Worst case there is a
>>>> difference
>>>> of 1 between raids though.
>>>>
>>>>
>>>>
>>>> As a side node: Resizing when you get new disks might
>>>> become tricky
>>>> and involve shuffeling around a lot of data. You might want
>>>> to split
>>>> md0 into 2 raids with 250G partitiosn each assuming future
>>>> disks will
>>>> continue to be multiples of 250G.
>>>>
>>>> MfG
>>>> Goswin
>>>>
>>>>
>>>>
>>> Yes,
>>>
>>> This is a great system. I did think about this when i first created my array but I was young and lacked the confidence to do much..
>>>
>>> So assuming I then purchased a 1.5TB drive the diagram would change to
>>>
>>> 6 Disks: A (1TB), BC (750G), DEF(500G), G(1.5TB)
>>>
>>> i) So i'd partition the drive up into 250GB chucks and add each chuck to md0~3
>>>
>>> +-----+-----+-----+-----+-----+-----+
>>> |GGGGG|GGGGG|GGGGG|GGGGG|GGGGG|GGGGG|
>>> |AAAAA|AAAAA|AAAAA|AAAAA| | |
>>> |BBBBB|BBBBB|BBBBB|CCCCC| | |
>>> |CCCCC|CCCCC|DDDDD|DDDDD| | |
>>> |EEEEE|EEEEE|FFFFF|FFFFF| | |
>>> | md0| md1 | md2 | md3 | md4 | md5 |
>>>
>>>
>>> ii) then I guess I'd have to relieve the E's from md0 and md1? giving (which I can do by failing the drives?)
>>> this would then kick in the use of the newly added G's?
>>>
>>> +-----+-----+-----+-----+-----+-----+
>>> |GGGGG|GGGGG|GGGGG|GGGGG|GGGGG|GGGGG|
>>> |AAAAA|AAAAA|AAAAA|AAAAA|EEEEE|EEEEE|
>>> |BBBBB|BBBBB|BBBBB|CCCCC|FFFFF|FFFFF|
>>> |CCCCC|CCCCC|DDDDD|DDDDD| | |
>>> |XXXXX|XXXXX|XXXXX|XXXXX| | |
>>> | md0| md1 | md2 | md3 | md4 | md5 |
>>>
>>> iii) Repeat for the F's which would again trigger the rebuild using the G's.
>>>
>>> the end result is 6 arrays with 4 and 2 partions in respectively i.e.
>>>
>>> +--1--+--2--+--3--+--4--+--5--+--6--+
>>> sda|GGGGG|GGGGG|GGGGG|GGGGG|GGGGG|GGGGG|
>>> sdb|AAAAA|AAAAA|AAAAA|AAAAA|EEEEE|EEEEE|
>>> sdc|BBBBB|BBBBB|BBBBB|CCCCC|FFFFF|FFFFF|
>>> sdd|CCCCC|CCCCC|DDDDD|DDDDD| | |
>>> sde| md0| md1 | md2 | md3 | md4 | md5 |
>>>
>>>
>>> md0: sda1, sdb1, sdc1, sdd1 (250G) -> 750G
>>> md1: sda2, sdb2, sdc2, sdd2 (250G) -> 750G
>>> md2: sda3, sdb3, sdc3, sdd3 (250G) -> 750G
>>> md3: sda4, sdb4, sdc4, sdd4 (250G) -> 750G
>>> md4: sda5, sdb5, sdc5 -> 500G
>>> md5: sda6, sdb6, sdc6 -> 500G
>>>
>>> Total -> 4000G
>>>
>>> I cant do the maths tho as my head hurts too much but is this quite wasteful with so many raid 5 arrays each time burning 1x250gb?
>>>
>>> Finally... i DID find a reference...
>>>
>>> check out: http://neil.brown.name/blog/20090817000931
>>>
>>> '
>>> ...
>>> It would also be nice to teach RAID5 to handle arrays with devices of different sizes. There are some complications there as you could have a hot spare that can replace some devices but not all.
>>> ...
>>> '
>>>
>>>
>>> -----------------------
>>> N: Jon Hardcastle
>>> E: Jon@eHardcastle.com
>>> 'Do not worry about tomorrow, for tomorrow will bring worries of its own.'
>>> -----------------------
>>>
>>>
>>>
>>>
>>> --
>>> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>>> the body of a message to majordomo@vger.kernel.org
>>> More majordomo info at http://vger.kernel.org/majordomo-info.html
>>>
>>>
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: Full use of varying drive sizes?---maybe a new raid mode is the answer?
2009-09-23 20:28 ` Full use of varying drive sizes?---maybe a new raid mode is the answer? Konstantinos Skarlatos
` (2 preceding siblings ...)
2009-09-25 6:09 ` Neil Brown
@ 2009-09-28 10:53 ` Goswin von Brederlow
2009-09-28 14:10 ` Konstantinos Skarlatos
3 siblings, 1 reply; 22+ messages in thread
From: Goswin von Brederlow @ 2009-09-28 10:53 UTC (permalink / raw)
To: Konstantinos Skarlatos; +Cc: Jon, Goswin von Brederlow, linux-raid, neilb
Konstantinos Skarlatos <k.skarlatos@gmail.com> writes:
> Instead of doing all those things, I have a suggestion to make:
>
> Something that is like RAID 4 without striping.
>
> There are already 3 programs doing that, Unraid, Flexraid and
> disparity, but putting this functionality into linux-raid would be
> tremendous. (the first two work on linux and the third one is a
> command line windows program that works fine under wine).
>
> The basic idea is this: Take any number of drives, with any capacity
> and filesystem you like. Then provide the program with an empty disk
> at least as large as your largest disk. The program creates parity
> data by XORing together the disks sequentially block by block(or file
> by file), until it reaches the end of the smallest one.(It XORs block
> 1 of disk A with block1 of disk B, with block1 of disk C.... and
> writes the result to block1 of Parity disk) Then it continues with the
> rest of the drives, until it reaches the end of the last drive.
>
> Disk A B C D E P
> Block 1 1 1 1 1 1
> Block 2 2 2 2
> Block 3 3 3
> Block 4 4
>
> The great thing about this method is that when you lose one disk you
> can get all your data back. when you lose two disks you only lose the
> data on them, and not the whole array. New disks can be added and the
> parity recalculated by reading only the new disk and the parity disk.
This has some problems, though:
1) every write is a read-modify-write
Well, for one thing this is slow.
2) every write is a read-modify-write of the parity disk
Even worse, all writes to independent disks bottleneck at the
parity disk.
3) every write is a read-modify-write of the parity disk
That poor parity disk. It can never catch a break, until it
breaks. It is likely that it will break first.
4) if the parity disk is larger than the 2nd largest disk it will
waste space
5) data at the start of a disk is more likely to be lost than data
at the end of the disk
(Say disks A and D fail: block A1 is lost, but A2-A4 can still be
reconstructed from parity, because D has no blocks at those offsets)
As for adding a new disk there are 2 cases:
1) adding a small disk
zero out the new disk and then the parity does not need to be updated
2) adding a disk larger than the current parity disk
zero out the new disk and it then becomes the parity disk, while the
old parity disk is kept as a data disk
> Please consider adding this feature request, it would be a big plus
> for linux if such a functionality existed, bringing many users from
> WHS and ZFS here, as it especially caters to the needs of people that
> store video and their movie collection at their home server.
>
> Thanks for your time
>
>
> ABCDE for data drives, and P for parity
As a side note, I like the idea of not striping, despite the uneven
use. For home use the speed of a single disk is usually sufficient, but
the noise of concurrent access to multiple disks is bothersome. Also,
for movie archives most access will be reads, and then the parity
disk can rest. Disks can also be spun down more often. Only the disk
containing the movie one currently watches needs to be spinning. That
could translate into real money saved on the electric bill.
But I would still do this with my algorithm to get an even amount of
redundancy. One can then use partitions or LVM to split the overall
raid device back into separate drives if one wants to.
MfG
Goswin
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: Full use of varying drive sizes?---maybe a new raid mode is the answer?
2009-09-28 10:53 ` Goswin von Brederlow
@ 2009-09-28 14:10 ` Konstantinos Skarlatos
2009-10-05 9:06 ` Goswin von Brederlow
0 siblings, 1 reply; 22+ messages in thread
From: Konstantinos Skarlatos @ 2009-09-28 14:10 UTC (permalink / raw)
To: Goswin von Brederlow; +Cc: Jon, linux-raid, neilb
Goswin von Brederlow wrote:
> Konstantinos Skarlatos <k.skarlatos@gmail.com> writes:
>
>
>> Instead of doing all those things, I have a suggestion to make:
>>
>> Something that is like RAID 4 without striping.
>>
>> There are already 3 programs doing that, Unraid, Flexraid and
>> disparity, but putting this functionality into linux-raid would be
>> tremendous. (the first two work on linux and the third one is a
>> command line windows program that works fine under wine).
>>
>> The basic idea is this: Take any number of drives, with any capacity
>> and filesystem you like. Then provide the program with an empty disk
>> at least as large as your largest disk. The program creates parity
>> data by XORing together the disks sequentially block by block(or file
>> by file), until it reaches the end of the smallest one.(It XORs block
>> 1 of disk A with block1 of disk B, with block1 of disk C.... and
>> writes the result to block1 of Parity disk) Then it continues with the
>> rest of the drives, until it reaches the end of the last drive.
>>
>> Disk A B C D E P
>> Block 1 1 1 1 1 1
>> Block 2 2 2 2
>> Block 3 3 3
>> Block 4 4
>>
>> The great thing about this method is that when you lose one disk you
>> can get all your data back. when you lose two disks you only lose the
>> data on them, and not the whole array. New disks can be added and the
>> parity recalculated by reading only the new disk and the parity disk.
>>
>
> This has some problem though:
>
> 1) every wite is a read-modify-write
> Well, for one thing this is slow.
>
Is that necessary? Why not read all the other data disks at the same time
and calculate the new parity blocks on the fly? Granted, that would mean
spinning up every disk, so maybe this mode could be an option?
> 2) every write is a read-modify-write of the parity disk
> Even worse, all writes to independent disks bottleneck at the
> parity disk.
> 3) every write is a read-modify-write of the parity disk
> That poor parity disk. It can never catch a break, untill it
> breaks. It is likely that it will break first.
>
No problem - a failed parity disk with this method is a much smaller
problem than a failed disk in a RAID 5 array.
> 4) if the parity disk is larger than the 2nd largest disk it will
> waste space
> 5) data at the start of the disk is more likely to fail than at the
> end of a disk
> (Say disks A and D fail then Block A1 is lost but A2-A4 are still
> there)
>
> As for adding a new disks there are 2 cases:
>
> 1) adding a small disk
> zero out the new disk and then the parity does not need to be updated
> 2) adding a large disk
> zero out the new disk and then that becomes the parity disk
>
So the new disk gets a copy of the parity data of the previous parity disk?
>
>> Please consider adding this feature request, it would be a big plus
>> for linux if such a functionality existed, bringing many users from
>> WHS and ZFS here, as it especially caters to the needs of people that
>> store video and their movie collection at their home server.
>>
>> Thanks for your time
>>
>>
>> ABCDE for data drives, and P for parity
>>
>
> As a side note I like the idea of not striping, despide the uneven
> use. For home use the speed of a single disk is usualy sufficient but
> the noise of concurrent access to multiple disks is bothersome.
Have you tried the Seagate Barracuda LPs? Totally silent! I have 8 of
them and I can assure you that they are perfect for large media storage
in a silent computer.
> Also
> for movie archives a lot of access will be reading and then the parity
> disk can rest. Disks can also be spun down more often. Only the disk
> containing the movie one currently watches need to be spinning. That
> could translate into real money saved on the electric bill.
>
>
I agree this is something mainly for home use, where reads exceed writes
by a large margin and, when writes do happen, they go to one or two
disks at most.
> But I would still do this with my algorithm to get even amount of
> redunancy. One can then use partitions or lvm to split the overall
> raid device back into seperate drives if one wants to.
>
Yes, I think an option for merging the disks into one large device would
be nice, as long as data is still recoverable from the individual disks
if, for example, 2 disks fail. One of the main advantages of not striping
is that when things go haywire some data is still recoverable, so please
let's not lose that.
> MfG
> Goswin
>
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: Full use of varying drive sizes?---maybe a new raid mode is the answer?
2009-09-28 14:10 ` Konstantinos Skarlatos
@ 2009-10-05 9:06 ` Goswin von Brederlow
0 siblings, 0 replies; 22+ messages in thread
From: Goswin von Brederlow @ 2009-10-05 9:06 UTC (permalink / raw)
To: Konstantinos Skarlatos; +Cc: Goswin von Brederlow, Jon, linux-raid, neilb
Konstantinos Skarlatos <k.skarlatos@gmail.com> writes:
> Goswin von Brederlow wrote:
>> Konstantinos Skarlatos <k.skarlatos@gmail.com> writes:
>>
>>
>>> Instead of doing all those things, I have a suggestion to make:
>>>
>>> Something that is like RAID 4 without striping.
>>>
>>> There are already 3 programs doing that, Unraid, Flexraid and
>>> disparity, but putting this functionality into linux-raid would be
>>> tremendous. (the first two work on linux and the third one is a
>>> command line windows program that works fine under wine).
>>>
>>> The basic idea is this: Take any number of drives, with any capacity
>>> and filesystem you like. Then provide the program with an empty disk
>>> at least as large as your largest disk. The program creates parity
>>> data by XORing together the disks sequentially block by block(or file
>>> by file), until it reaches the end of the smallest one.(It XORs block
>>> 1 of disk A with block1 of disk B, with block1 of disk C.... and
>>> writes the result to block1 of Parity disk) Then it continues with the
>>> rest of the drives, until it reaches the end of the last drive.
>>>
>>> Disk A B C D E P
>>> Block 1 1 1 1 1 1
>>> Block 2 2 2 2
>>> Block 3 3 3
>>> Block 4 4
>>>
>>> The great thing about this method is that when you lose one disk you
>>> can get all your data back. when you lose two disks you only lose the
>>> data on them, and not the whole array. New disks can be added and the
>>> parity recalculated by reading only the new disk and the parity disk.
>>>
>>
>> This has some problem though:
>>
>> 1) every wite is a read-modify-write
>> Well, for one thing this is slow.
>>
> Is that necessary? Why not read every other data disk at the same time
> and calculate new parity blocks on the fly? granted, that would mean
> spinning up every disk, so maybe this mode could be an option?
Reading one parity block and updating it is faster than reading X data
blocks and recomputing the parity, both in I/O terms and in CPU terms.
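A minimal sketch of the two update strategies being compared, in Python (illustrative only - this is just the arithmetic, not how md implements it):

# Two equivalent ways to keep the parity block correct after overwriting
# one data block (blocks modelled as small integers, XOR per block).

def update_rmw(old_data, new_data, old_parity):
    # Read-modify-write: read the old data block and the old parity block,
    # write the new data block and the new parity block - 2 reads + 2 writes,
    # no matter how many disks are in the set.
    return old_parity ^ old_data ^ new_data

def update_reconstruct(new_data, other_blocks):
    # Reconstruct-write: read the same block from every *other* data disk
    # and recompute the parity from scratch - reads grow with the disk
    # count, and every drive has to spin up.
    parity = new_data
    for block in other_blocks:
        parity ^= block
    return parity

others = [0b0011, 0b1111, 0b0101]
old, new = 0b1010, 0b0110
old_parity = old ^ others[0] ^ others[1] ^ others[2]
assert update_rmw(old, new, old_parity) == update_reconstruct(new, others)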
>> 2) every write is a read-modify-write of the parity disk
>> Even worse, all writes to independent disks bottleneck at the
>> parity disk.
>> 3) every write is a read-modify-write of the parity disk That
>> poor parity disk. It can never catch a break, untill it
>> breaks. It is likely that it will break first.
>>
> No problem, a failed parity disk on this method is a much smaller
> problem than a failed disk on a RAID 5
But that is the reason they went from raid3/4 to raid5. :)
>> 4) if the parity disk is larger than the 2nd largest disk it will
>> waste space
>> 5) data at the start of the disk is more likely to fail than at the
>> end of a disk
>> (Say disks A and D fail then Block A1 is lost but A2-A4 are still
>> there)
>>
>> As for adding a new disks there are 2 cases:
>>
>> 1) adding a small disk
>> zero out the new disk and then the parity does not need to be updated
>> 2) adding a large disk
>> zero out the new disk and then that becomes the parity disk
>>
> So the new disk gets a copy of the parity data of the previous parity disk?
No, the old parity disk becomes a data disk that happens to initially
contain the parity of A, B, C, D, E. The new parity disk becomes all
zero.
Look at it this way: XORing disks A, B, C, D, E together gives
P. XORing A, B, C, D, E, P together always gives 0. So by filling the
new parity disk with zeros you are computing the parity of A, B, C, D,
E, P - just more intelligently.
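The identity being used here is easy to check with a toy example (single-integer "disks", purely illustrative):

from functools import reduce

# Single-block "disks" A..E and their parity P.
A, B, C, D, E = 0x11, 0x22, 0x33, 0x44, 0x55
P = A ^ B ^ C ^ D ^ E

# XORing all the data disks and the parity together is always zero ...
assert reduce(lambda x, y: x ^ y, [A, B, C, D, E, P]) == 0

# ... so a freshly zeroed disk is already correct parity for the enlarged
# set in which the old parity disk P is kept as an ordinary data disk.
new_parity = 0
assert new_parity == A ^ B ^ C ^ D ^ E ^ P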
>>> Please consider adding this feature request, it would be a big plus
>>> for linux if such a functionality existed, bringing many users from
>>> WHS and ZFS here, as it especially caters to the needs of people that
>>> store video and their movie collection at their home server.
>>>
>>> Thanks for your time
>>>
>>>
>>> ABCDE for data drives, and P for parity
>>>
>>
>> As a side note I like the idea of not striping, despide the uneven
>> use. For home use the speed of a single disk is usualy sufficient but
>> the noise of concurrent access to multiple disks is bothersome.
> Have you tried the Seagate Barracuda LP's? totally silent! I have 8 of
> them and i can assure you that they are perfect for large media
> storage in a silent computer.
I buy when I need space and have the money (unfortunately those don't
always coincide) and I use what I have. But it is interesting to see
how much quieter newer disks are, and I don't believe that is just age
making the old disks louder.
>> Also
>> for movie archives a lot of access will be reading and then the parity
>> disk can rest. Disks can also be spun down more often. Only the disk
>> containing the movie one currently watches need to be spinning. That
>> could translate into real money saved on the electric bill.
>>
>>
> I agree this is something mainly for home use, where reads exceed
> writes by a large margin and when writes are done, they are done to
> one or two disks at the same time at most.
>> But I would still do this with my algorithm to get even amount of
>> redunancy. One can then use partitions or lvm to split the overall
>> raid device back into seperate drives if one wants to.
>>
> Yes I think that an option for merging the disks into a large one
> would be nice, as long as data is still recoverable from individual
> disks if for example 2 disks fail. One of the main advantages of not
> stripping is that when things go haywire some data is still
> recoverable, so please lets not lose that.
That is just a matter of placing the partitions/volumes so that they do
not span multiple disks. With partitionable raids one could implement
such a raid mode that would combine all disks into a single raid device
but export each data disk back as a partition. One wouldn't be able to
repartition that, though, so I'm not sure I would want that in the
driver layer. Not everyone will care about having each data disk
separate.
MfG
Goswin
^ permalink raw reply [flat|nested] 22+ messages in thread
end of thread, other threads:[~2009-10-05 9:06 UTC | newest]
Thread overview: 22+ messages:
2009-09-22 11:24 Full use of varying drive sizes? Jon Hardcastle
2009-09-22 11:52 ` Kristleifur Daðason
2009-09-22 12:58 ` John Robinson
2009-09-22 13:07 ` Majed B.
2009-09-22 15:38 ` Jon Hardcastle
2009-09-22 15:47 ` Majed B.
2009-09-22 15:48 ` Ryan Wagoner
2009-09-22 16:04 ` Robin Hill
2009-09-23 8:20 ` John Robinson
2009-09-23 10:15 ` Tapani Tarvainen
2009-09-23 12:42 ` Goswin von Brederlow
2009-09-22 13:05 ` Tapani Tarvainen
2009-09-23 10:07 ` Goswin von Brederlow
2009-09-23 14:57 ` Jon Hardcastle
2009-09-23 20:28 ` Full use of varying drive sizes?---maybe a new raid mode is the answer? Konstantinos Skarlatos
2009-09-23 21:29 ` Chris Green
2009-09-24 17:23 ` John Robinson
2009-09-25 6:09 ` Neil Brown
2009-09-27 12:26 ` Konstantinos Skarlatos
2009-09-28 10:53 ` Goswin von Brederlow
2009-09-28 14:10 ` Konstantinos Skarlatos
2009-10-05 9:06 ` Goswin von Brederlow