* new features time-line
From: Dan @ 2006-10-13 21:53 UTC
To: linux-raid
I am curious if there are plans for either of the following:
-RAID6 reshape
-RAID5 to RAID6 migration
Here is why I ask, and sorry for the length.
I have an aging RAID6 with eight 250G drives as a physical volume in a
volume group. It is at about 80% capacity. I have had a couple of drives fail
and replaced them with 500G drives. I plan to migrate the rest over time as
they drop out. However, this could be months or years.
I could just be patient, wait until I have replaced all the drives, and then
use -G -z max to grow the array to its maximum size. But I could use the
extra space sooner.
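(For concreteness, that eventual "wait and grow" path would look something
like the following; the volume group and logical volume names vg0/lv0, and
the ext3 filesystem, are just placeholders for my setup:)

  # once every member is a 500G drive, grow the array into the new space
  mdadm --grow /dev/md0 --size=max

  # tell LVM the physical volume got bigger
  pvresize /dev/md0

  # then grow a logical volume and its filesystem into the free extents
  lvextend -l +100%FREE /dev/vg0/lv0    # or -L +<size>
  resize2fs /dev/vg0/lv0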
Since I already have the existing RAID (md0) as a physical volume in a
volume group, I thought: why not just use the other half of the drives,
create another RAID6 (md1), add that to the same volume group, and so on as
I grow? md0 made from devices=/dev/sd[abcdefgh]1; md1 made from
devices=/dev/sd[abcdefgh]2; and so on (I could have the md number match the
partition number for aesthetics, I suppose)...
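(Once all eight drives have a second partition, that would be roughly the
following; vg0 is just a placeholder for my volume group name:)

  # second RAID6 across the second partition of each drive
  mdadm --create /dev/md1 --level=6 --raid-devices=8 /dev/sd[abcdefgh]2

  # make it a physical volume and fold it into the existing volume group
  pvcreate /dev/md1
  vgextend vg0 /dev/md1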
By doing this I further protect myself from the possible bit error rate on
increasingly large drives. So if there are suddenly three bit errors, I
still have a chance as long as they are not all on the same partition
number; mdadm will only kick out the bad partitions and not the whole
drives. (I know I am already doing RAID6; what are the chances of three!)
To get to my point, I would like to split the new half of the drives into a
new physical volume and would 'like' to start using some of the drives
before I have replaced all the existing 250G drives. If RAID6 reshape were
an option, I could start once I have replaced at least three of the old
drives (building it as a RAID6 with one drive missing). But it is not
available, yet. Or, since RAID5 reshape is an option, I could again start
when I have replaced three (building it as a RAID5), then grow it until I
get to the eighth drive and migrate to the final desired RAID6. But that is
not an option, yet.
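(In commands, the RAID5-first route I am wishing for would look roughly like
this, stopping at the step that does not exist yet; the device names are
just examples:)

  # after three drives have been replaced: RAID5 across the spare halves
  mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sd[abc]2

  # each time another 500G drive goes in, add it and reshape
  mdadm --add /dev/md1 /dev/sdd2
  mdadm --grow /dev/md1 --raid-devices=4

  # ...repeat up to eight devices, then the missing piece:
  # a RAID5 -> RAID6 migration, which mdadm cannot do today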
Thoughts?
* Re: new features time-line
From: Neil Brown @ 2006-10-14 0:03 UTC
To: Dan; +Cc: linux-raid
On Friday October 13, dan@korstad.net wrote:
> I am curious if there are plans for either of the following:
> -RAID6 reshape
> -RAID5 to RAID6 migration
No concrete plans with timelines and milestones and such, no.
I would like to implement both of these but I really don't know when I
will find/make time. Probably by the end of 2007, but that is not a
promise.
Of course someone else could implement them. RAID6 reshape should be
fairly straightforward given that RAID5 and RAID6 use the same code and
RAID5 reshape is done.
RAID5 to RAID6 conversion would be a bit trickier, but not much.
A point worth noting is that RAID5->RAID6 conversion without growing
the array at the same time is not a good idea. It will either be
dangerous (a crash during the reshape will cause corruption of data)
or slow (all data needs to be copied one extra time - the 'critical
region' of raid5 reshape becomes the whole array if you don't grow the
array).
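(If/when it exists, I would expect the interface to mirror the current
reshape interface: a single --grow invocation that changes both the level
and the device count in one pass. Purely hypothetical at this point:)

  # hypothetical: convert an 8-drive raid5 to a 9-drive raid6 in one reshape
  mdadm --grow /dev/md0 --level=6 --raid-devices=9 --backup-file=/root/md0.backup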
Probably the fastest way for these to get implemented is for someone
else to try and post the results. I would be very likely to comment
and help get the patch into a reliable and maintainable form and then
pass it on upstream.
Are you any good at coding :-)
>
> Here is why I ask, and sorry for the length.
All sounds fairly sensible. Except that as I say above, the option of
growing a raid5 bit by bit, then adding the last disk and making it
raid6 is not such a good approach.
NeilBrown
* Re: new features time-line
From: Bill Davidsen @ 2006-10-18 2:35 UTC
To: Neil Brown; +Cc: Dan, linux-raid
Neil Brown wrote:
>On Friday October 13, dan@korstad.net wrote:
>
>
>>I am curious if there are plans for either of the following:
>>-RAID6 reshape
>>-RAID5 to RAID6 migration
>>
>>
>
>No concrete plans with timelines and milestones and such, no.
>I would like to implement both of these but I really don't know when I
>will find/make time. Probably by the end of 2007, but that is not a
>promise.
>
We talked about RAID5E a while ago; is there any thought that this would
actually happen, or is it one of the "would be nice" features? With
larger drives I suspect the number of drives in arrays is going down,
and anything which offers performance benefits for smaller arrays would
be useful.
--
bill davidsen <davidsen@tmr.com>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979
* Re: new features time-line
From: Neil Brown @ 2006-10-19 2:46 UTC
To: Bill Davidsen; +Cc: Dan, linux-raid
On Tuesday October 17, davidsen@tmr.com wrote:
> We talked about RAID5E a while ago; is there any thought that this would
> actually happen, or is it one of the "would be nice" features? With
> larger drives I suspect the number of drives in arrays is going down,
> and anything which offers performance benefits for smaller arrays would
> be useful.
So ... RAID5E is RAID5 using (N-1)/N of each drive (or close to that)
and not having a hot spare.
On a drive failure, the data is restriped across N-1 drives so that it
becomes plain RAID5. This means that instead of having an idle spare,
you have spare space at the end of each drive.
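(For contrast, the layout RAID5E improves on is the ordinary "array plus
idle hot spare" arrangement, which with today's mdadm looks something like
this; the md and device names are only an example:)

  # conventional raid5: four data/parity members plus one idle hot spare
  mdadm --create /dev/md2 --level=5 --raid-devices=4 --spare-devices=1 /dev/sd[bcdef]1

  # RAID5E would instead stripe across all five drives and keep one
  # drive's worth of free space at the end of each member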
To implement this you would need kernel code to restripe an array to
reduce the number of devices (currently we only increase the number of
devices).
Probably not too hard - just needs code and motivation.
Don't know if/when it will happen, but it probably will
.... especially if someone tries writing some code (hint hint to any
potential developers out there...)
NeilBrown
* Re: new features time-line
From: Bill Davidsen @ 2006-10-30 23:19 UTC
To: Neil Brown; +Cc: Dan, linux-raid
Neil Brown wrote:
>On Tuesday October 17, davidsen@tmr.com wrote:
>
>
>>We talked about RAID5E a while ago, is there any thought that this would
>>actually happen, or is it one of the "would be nice" features? With
>>larger drives I suspect the number of drives in arrays is going down,
>>and anything which offers performance benefits for smaller arrays would
>>be useful.
>>
>>
>
>So ... RAID5E is RAID5 using (N-1)/N of each drive (or close to that)
>and not having a hot spare.
>On a drive failure, the data is restriped across N-1 drives so that it
>becomes plain RAID5. This means that instead of having an idle spare,
>you have spare space at the end of each drive.
>
>To implement this you would need kernel code to restripe an array to
>reduce the number of devices (currently we only increase the number of
>devices).
>
>Probably not too hard - just needs code and motivation.
>
Code is not coming from me right now, but the motivation is (a) all drives
in use, so better response, particularly with small arrays, and (b) the
spare is being used, so if it's going to have a problem it won't be just
when you need it the most.
I suppose someone will want RAID6E as well; I would probably find a use
for it, but RAID5E would suit many systems that could use a
speed+reliability boost.
--
bill davidsen <davidsen@tmr.com>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979
* RE: new features time-line
From: Dan @ 2006-10-14 0:14 UTC
To: 'Mike Hardy'; +Cc: linux-raid
Good to hear.
I think when I first built my RAID (a few years ago) I did some research on
this:
http://www.google.com/search?hl=en&q=bad+block+replacement+capabilities+mdadm
and found stories where bit errors were an issue:
http://www.ogre.com/tiki-read_article.php?articleId=7
After your email, I went out and researched it again. Eleven months ago a
patch to address this was submitted for RAID5; I would assume RAID6
benefited from it too?
_______________________________________
http://kernel.org/pub/linux/kernel/v2.6/testing/ChangeLog-2.6.15-rc1
Author: NeilBrown <neilb@suse.de>
Date: Tue Nov 8 21:39:22 2005 -0800
[PATCH] md: better handling of readerrors with raid5.
This patch changes the behaviour of raid5 when it gets a read error.
Instead of just failing the device, it tries to find out what should have
been there, and writes it over the bad block. For some media errors, this
has a reasonable chance of fixing the error. If the write succeeds, and a
subsequent read succeeds as well, raid5 decides the address is OK and
continues.
Instead of failing a drive on read-error, we attempt to re-write the block,
and then re-read. If that all works, we allow the device to remain in the
array.
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
_________________________________________________
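(Related, and assuming a kernel new enough to expose the md sync_action
interface: a periodic scrub reads every sector of the array, so this
rewrite-on-read-error path gets exercised before a second failure turns a
latent bad block into lost data:)

  # read every block; raid5/6 rewrites any unreadable sectors from parity
  echo check > /sys/block/md0/md/sync_action

  # watch progress
  cat /proc/mdstat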
So the vulnerability would exist only if one bad bit hit some parity
information and another hit a data sector that needed that exact parity
information, which is next to impossible, and closer to impossible still
with RAID6, since there would need to be loss of the data sector and of
both the corresponding P and Q parity blocks at the same time.
Thus splitting the drives into sections for separate logical volumes is
less useful. And RAID6 provides an added benefit for bit errors while the
array is degraded by a single drive, as opposed to RAID5.
Nevertheless, I would still use the LVM system to split the new replacement
drives if I had a method to utilize the extra drive space of the few new
replacements prior to replacing all of them. Otherwise I suppose I will
practice patience and wait until they are all replaced, then use the
current grow (-G -z max) feature.
Thanks,
Dan.
-----Original Message-----
From: Mike Hardy [mailto:mhardy@h3c.com]
Sent: Friday, October 13, 2006 5:14 PM
To: Dan
Subject: Re: new features time-line
Not commenting on your overall premise, but I believe bit errors are
already logged and rewritten using parity info by md
-Mike