linux-raid.vger.kernel.org archive mirror
* RAID5 -> RAID6
@ 2009-03-28 13:05 Max Waterman
  2009-03-28 20:41 ` NeilBrown
  0 siblings, 1 reply; 7+ messages in thread
From: Max Waterman @ 2009-03-28 13:05 UTC (permalink / raw)
  To: linux-raid

Hi,

I'm wondering what the latest is on migrating from RAID5 to RAID6.

I have a 6-disk RAID5 with 2 spares and have long been thinking of 
making better use of the spares. All of the drives are 200GB.

I have a 1TB drive that is serving as a backup, but I wonder if there's 
a way to migrate without having to wipe the array.

Any advice?

Max.

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: RAID5 -> RAID6
  2009-03-28 13:05 RAID5 -> RAID6 Max Waterman
@ 2009-03-28 20:41 ` NeilBrown
  2009-03-28 20:54   ` Max Waterman
  0 siblings, 1 reply; 7+ messages in thread
From: NeilBrown @ 2009-03-28 20:41 UTC (permalink / raw)
  To: Max Waterman; +Cc: linux-raid

On Sun, March 29, 2009 12:05 am, Max Waterman wrote:
> Hi,
>
> I'm wondering what the latest is on migrating from RAID5 to RAID6.
>
> I have a 6 disk RAID5 with 2 spares and have long been thinking of
> making better use of the spares. All 200GB.
>
> I have a 1TB drive that is serving as a backup, but I wonder if there's
> a way to migrate without having to wipe the array.
>
> Any advice?

Wait 3 months :-)

2.6.30 should contain support for this sort of conversion.  It is
already written (mostly) but still needs some testing.


Your options would then include:
 1/  convert that raid5 to a raid6 of the same size but with one
     extra device.  This device would store all the 'Q' blocks so
     it could become a write bottleneck
 1a/ as above, but then restripe the array so that the Q block is
     rotated among the drives.  This process is either dangerous - in
     that a crash would kill your data, or slow - in that all the data
     would need to be copied elsewhere in chunks while the corresponding
     chunk of the array was restriped.
 2/  convert to raid6 and grow at the same time, i.e. add both spares,
     using one of them to support the conversion to raid6 and the
     other to increase the space.  You could then arrange to restripe
     and grow at the same time, which is faster/safer than restriping in place.
 3/  Possibly you could restripe-and-grow, then restripe-and-shrink
     so you end up with a 7 device RAID6 with properly rotating parity,
     but don't go through the slow/dangerous restripe-in-place.
     I'll need to do some experiments to see if that would actually
     be faster.
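To give option 2 a concrete shape (a sketch only: the array name, device
count, and exact flags are assumptions until the support actually ships),
the invocation would look something like:

```shell
# Hypothetical invocation for option 2: convert the 6-disk RAID5 (+2 spares)
# to an 8-device RAID6 in one reshape.  /dev/md0, the device count, and the
# backup-file path are all placeholders; the command is printed, not run.
CMD="mdadm --grow /dev/md0 --level=6 --raid-devices=8 --backup-file=/root/md0-reshape.bak"
echo "$CMD"
```

The backup file holds copies of the stripes currently being rewritten, so
a crash mid-reshape need not lose data; that is what makes the grow
variants safer than restriping in place.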

NeilBrown



* Re: RAID5 -> RAID6
  2009-03-28 20:41 ` NeilBrown
@ 2009-03-28 20:54   ` Max Waterman
  0 siblings, 0 replies; 7+ messages in thread
From: Max Waterman @ 2009-03-28 20:54 UTC (permalink / raw)
  To: linux-raid

NeilBrown wrote:
> Wait 3 months :-)
>   

Sounds good. I'm in no particular hurry. Increasing capacity would be 
nice, but I'm not sure I want to do that, since I only have a 1TB drive 
for backup. As such, the slow version of 1a/ sounds reasonable - I have 
a spare 80GB drive in the same machine that I could use to make it 
not-so-dangerous.

I guess I might consider a grow too - perhaps I'll have another drive by 
then so my backup can be bigger.

Thanks for the advice...I'll keep an eye out for the new support.

Max.

> 2.6.30 should contain support for this sort of conversion.  It is
> already written (mostly) but still needs some testing.
>
>
> Your options would then include:
>  1/  convert that raid5 to a raid6 of the same size but with one
>      extra device.  This device would store all the 'Q' blocks so
>      it could become a write bottleneck
>  1a/ as above, but then restripe the array so that the Q block is
>      rotated among the drives.  This process is either dangerous - in
>      that a crash would kill your data, or slow - in that all the data
>      would need to be copied elsewhere in chunks while the corresponding
>      chunk of the array was restriped.
>  2/  convert to raid6 and grow at the same time, i.e. add both spares,
>      using one of them to support the conversion to raid6 and the
>      other to increase the space.  You could then arrange to restripe
>      and grow at the same time, which is faster/safer than restriping in place.
>  3/  Possibly you could restripe-and-grow, then restripe-and-shrink
>      so you end up with a 7 device RAID6 with properly rotating parity,
>      but don't go through the slow/dangerous restripe-in-place.
>      I'll need to do some experiments to see if that would actually
>      be faster.


* RAID5 - RAID6
  2010-03-08  3:20 Leslie Rhorer
@ 2010-03-08  3:27 ` Leslie Rhorer
  2010-03-08  4:19   ` Michael Evans
  0 siblings, 1 reply; 7+ messages in thread
From: Leslie Rhorer @ 2010-03-08  3:27 UTC (permalink / raw)
  To: 'Michael Evans'; +Cc: linux-raid


> > >> You need mdadm-3.1.1 plus linux 2.6.32.
> > >>
> > >> NeilBrown
> > >>
> >
> > What are you talking about?
> 
> 	Referring to what, specifically?
> 
> > Have you not synced to the online
> > repository?  I grabbed this out of the Package.bz2 file.
> 
> 	Of course.  I'm running "Lenny" for an AMD-64.  The output below is
> definitely not for AMD-64, and it looks to me like it might be "Squeeze"
> or
> "Sid", not "Lenny".
> 
> From the "Lenny" AMD-64 Package.bz2 file:

	Digging a bit further, even "Squeeze" does not offer mdadm 3.1.1, at
least not in the AMD-64 distro.  Its included version at the moment is
3.0.3-2.
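For anyone checking their own system, two standard queries do the job
(nothing here is specific to any Debian release; the block just prints
the commands as an outline):

```shell
# The packaged mdadm version vs. the one actually installed.
# Printed rather than executed, since the output is system-specific.
checks='apt-cache policy mdadm
mdadm --version'
printf '%s\n' "$checks"
```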

--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: RAID5 - RAID6
  2010-03-08  3:27 ` RAID5 - RAID6 Leslie Rhorer
@ 2010-03-08  4:19   ` Michael Evans
  0 siblings, 0 replies; 7+ messages in thread
From: Michael Evans @ 2010-03-08  4:19 UTC (permalink / raw)
  To: Leslie Rhorer; +Cc: linux-raid

On Sun, Mar 7, 2010 at 7:27 PM, Leslie Rhorer <lrhorer@satx.rr.com> wrote:
>
>> > >> You need mdadm-3.1.1 plus linux 2.6.32.
>> > >>
>> > >> NeilBrown
>> > >>
>> >
>> > What are you talking about?
>>
>>       Referring to what, specifically?
>>
>> > Have you not synced to the online
>> > repository?  I grabbed this out of the Package.bz2 file.
>>
>>       Of course.  I'm running "Lenny" for an AMD-64.  The output below is
>> definitely not for AMD-64, and it looks to me like it might be "Squeeze"
>> or
>> "Sid", not "Lenny".
>>
>> From the "Lenny" AMD-64 Package.bz2 file:
>
>        Digging a bit further, even "Squeeze" does not offer mdadm 3.1.1, at
> least not in the AMD-64 distro.  Its included version at the moment is
> 3.0.3-2.
>
>

Yes, but its kernel is supported.  You need only run the newer
version during the reshape phase; otherwise, normal operation should
still be supported.  The requirements to compile mdadm aren't exactly
a full development system.  You don't even have to install it to run
it; you can run it straight from the build area.
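As a rough sketch of that build-and-run-in-place approach (the version
number and tarball name are assumptions; substitute whatever release is
current):

```shell
# Outline of building mdadm from source and running it straight from the
# build tree, with no 'make install'.  Version and filenames are illustrative.
steps='tar xzf mdadm-3.1.1.tar.gz
cd mdadm-3.1.1
make                # needs little more than gcc and make
./mdadm --version   # run from the build directory, just for the reshape'
printf '%s\n' "$steps"
```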


* RE: RAID5 - RAID6
  2010-03-08  3:31 Michael Evans
@ 2010-03-08  8:59 ` Leslie Rhorer
  2010-03-08  9:09   ` Michael Evans
  0 siblings, 1 reply; 7+ messages in thread
From: Leslie Rhorer @ 2010-03-08  8:59 UTC (permalink / raw)
  To: 'Michael Evans'; +Cc: linux-raid

> Yes, it is for Squeeze.  If you want the latest bugfixes and security
> updates, you should seriously consider running debian-testing instead
> of stable.

	No, thanks.  I loaded "Squeeze" on another non-RAID workstation in
order to alleviate a kernel bug causing problems with a 3G wireless modem.
It was quite unstable and caused a number of issues, the most problematic
being that the distro assumes the system is not headless and would lock
up tight on boot if no monitor is present.  All of the RAID systems are
headless.  More importantly, stability is far and away the most
important requirement for these servers.  New features I can live without.
Bug fixes I don't need unless they directly affect the functioning of the
system, which is highly focused.  These systems have a handful of very
basic, very mature apps.  They run NTP, FTP, SSH, rsync, NUT, SMART, SAMBA,
NFS, and KDE.  One of them also runs Galleon, pyTivo, TyTool under wine, and
openvpn server.  That's it.

> Stable is reserved for 'mature' features.  Testing, as far
> as I'm aware, will almost never (and should never if you are paying
> attention) cause data loss, but might occasionally get into a
> situation where something breaks; mostly just during upgrades (but
> then that's true of any upgrade).

	It's true no data was lost, but then it's a little difficult to lose
data if the system hangs hard on boot.  I had to yank most of the guts out
of the system to get it stable.  That, plus the new version of KDE really
sucks badly, and I could not get Kpackage to work properly at all.  It also
did something really goofy to pppd, but I was able to work around it by
re-trying the pppd launch repeatedly on boot until it worked.



* Re: RAID5 - RAID6
  2010-03-08  8:59 ` RAID5 - RAID6 Leslie Rhorer
@ 2010-03-08  9:09   ` Michael Evans
  0 siblings, 0 replies; 7+ messages in thread
From: Michael Evans @ 2010-03-08  9:09 UTC (permalink / raw)
  To: Leslie Rhorer; +Cc: linux-raid

On Mon, Mar 8, 2010 at 12:59 AM, Leslie Rhorer <lrhorer@satx.rr.com> wrote:
>> Yes, it is for Squeeze, if you want the latest bugfixes and security
>> updates you should seriously consider running debian-testing instead
>> of stable.
>
>        No, thanks.  I loaded "Squeeze" on another non-RAID workstation in
> order to alleviate a kernel bug causing problems with a 3G wireless modem.
> It was quite unstable and caused a number of issues, the most problematic
> being that the distro assumes the system is not headless and would lock
> up tight on boot if no monitor is present.  All of the RAID systems are
> headless.  More importantly, stability is far and away the most
> important requirement for these servers.  New features I can live without.
> Bug fixes I don't need unless they directly affect the functioning of the
> system, which is highly focused.  These systems have a handful of very
> basic, very mature apps.  They run NTP, FTP, SSH, rsync, NUT, SMART, SAMBA,
> NFS, and KDE.  One of them also runs Galleon, pyTivo, TyTool under wine, and
> openvpn server.  That's it.
>
>> Stable is reserved for 'mature' features.  Testing, as far
>> as I'm aware, will almost never (and should never if you are paying
>> attention) cause data loss, but might occasionally get into a
>> situation where something breaks; mostly just during upgrades (but
>> then that's true of any upgrade).
>
>        It's true no data was lost, but then it's a little difficult to lose
> data if the system hangs hard on boot.  I had to yank most of the guts out
> of the system to get it stable.  That, plus the new version of KDE really
> sucks badly, and I could not get Kpackage to work properly at all.  It also
> did something really goofy to pppd, but I was able to work around it by
> re-trying the pppd launch repeatedly on boot until it worked.
>
>

Oh, is THAT where Ubuntu got 10.04's silly framebuffer required to
boot issue from...

