linux-raid.vger.kernel.org archive mirror
* Growing 6 HDD RAID5 to 7 HDD RAID5
@ 2011-04-12 16:56 Mathias Burén
  2011-04-12 17:14 ` Roman Mamedov
  2011-04-13 11:44 ` Growing 6 HDD RAID5 to 7 HDD RAID6 John Robinson
  0 siblings, 2 replies; 10+ messages in thread
From: Mathias Burén @ 2011-04-12 16:56 UTC (permalink / raw)
  To: Linux-RAID

Hi mailing list,

First, thanks for this great software!

I have a RAID5 setup on 6x 2TB HDDs:

/dev/md0:
        Version : 1.2
  Creation Time : Tue Oct 19 08:58:41 2010
     Raid Level : raid5
     Array Size : 9751756800 (9300.00 GiB 9985.80 GB)
  Used Dev Size : 1950351360 (1860.00 GiB 1997.16 GB)
   Raid Devices : 6
  Total Devices : 6
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Tue Apr 12 17:50:25 2011
          State : active
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : ion:0  (local to host ion)
           UUID : e6595c64:b3ae90b3:f01133ac:3f402d20
         Events : 3035979

    Number   Major   Minor   RaidDevice State
       0       8       81        0      active sync   /dev/sdf1
       1       8       17        1      active sync   /dev/sdb1
       4       8       49        2      active sync   /dev/sdd1
       3       8       33        3      active sync   /dev/sdc1
       5       8       65        4      active sync   /dev/sde1
       6       8       97        5      active sync   /dev/sdg1

$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdf1[0] sdg1[6] sde1[5] sdc1[3] sdd1[4] sdb1[1]
      9751756800 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
      bitmap: 1/15 pages [4KB], 65536KB chunk

unused devices: <none>


I'm approaching over 6.5TB of data, and with an array this large I'd
like to migrate to RAID6 for a bit more safety. I'm just checking if I
understand this correctly, this is how to do it:

* Add a HDD to the array as a hot spare:
mdadm --manage /dev/md0 --add /dev/sdh1

* Migrate the array to RAID6:
mdadm --grow /dev/md0 --raid-devices 7 --level 6

Cheers,
// Mathias

* Re: Growing 6 HDD RAID5 to 7 HDD RAID5
  2011-04-12 16:56 Growing 6 HDD RAID5 to 7 HDD RAID5 Mathias Burén
@ 2011-04-12 17:14 ` Roman Mamedov
  2011-04-12 17:21   ` Mathias Burén
  2011-04-13 11:44 ` Growing 6 HDD RAID5 to 7 HDD RAID6 John Robinson
  1 sibling, 1 reply; 10+ messages in thread
From: Roman Mamedov @ 2011-04-12 17:14 UTC (permalink / raw)
  To: Mathias Burén; +Cc: Linux-RAID

On Tue, 12 Apr 2011 17:56:05 +0100
Mathias Burén <mathias.buren@gmail.com> wrote:

> I'm approaching over 6.5TB of data, and with an array this large I'd
> like to migrate to RAID6 for a bit more safety.

That's a great decision (and I suppose you made a typo in the subject).
RAID5 is downright dangerous at that disk count, and with disks of that size.

> I'm just checking if I
> understand this correctly, this is how to do it:
> 
> * Add a HDD to the array as a hot spare:
> mdadm --manage /dev/md0 --add /dev/sdh1
> 
> * Migrate the array to RAID6:
> mdadm --grow /dev/md0 --raid-devices 7 --level 6

Looks correct to me...

The first command can be just "mdadm --add /dev/md0 /dev/sdh1".

If you'd rather avoid a reshape at this point, you can add
"--layout=preserve" to the second line. That way you will have just a rebuild
of the new drive, instead of a full reshape.

You will also need to "--grow --bitmap=none" first (you can re-add the bitmap
later).
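
Putting it all together, the whole sequence would be something like this
(untested sketch, and /dev/sdh1 is just the placeholder name from your
plan):

$ mdadm --grow /dev/md0 --bitmap=none
$ mdadm --add /dev/md0 /dev/sdh1
$ mdadm --grow /dev/md0 --level 6 --raid-devices 7 --layout=preserve

and then, once the new drive has finished rebuilding, put the bitmap back
with "mdadm --grow /dev/md0 --bitmap=internal".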

-- 
With respect,
Roman

* Re: Growing 6 HDD RAID5 to 7 HDD RAID5
  2011-04-12 17:14 ` Roman Mamedov
@ 2011-04-12 17:21   ` Mathias Burén
  2011-04-12 18:22     ` Roman Mamedov
  0 siblings, 1 reply; 10+ messages in thread
From: Mathias Burén @ 2011-04-12 17:21 UTC (permalink / raw)
  To: Roman Mamedov; +Cc: Linux-RAID

On 12 April 2011 18:14, Roman Mamedov <rm@romanrm.ru> wrote:
> On Tue, 12 Apr 2011 17:56:05 +0100
> Mathias Burén <mathias.buren@gmail.com> wrote:
>
>> I'm approaching over 6.5TB of data, and with an array this large I'd
>> like to migrate to RAID6 for a bit more safety.
>
> That's a great decision (and I suppose you made a typo in the subject).
> RAID5 is downright dangerous at that disk count, and with disks of that size.
>
>> I'm just checking if I
>> understand this correctly, this is how to do it:
>>
>> * Add a HDD to the array as a hot spare:
>> mdadm --manage /dev/md0 --add /dev/sdh1
>>
>> * Migrate the array to RAID6:
>> mdadm --grow /dev/md0 --raid-devices 7 --level 6
>
> Looks correct to me...
>
> The first command can be just "mdadm --add /dev/md0 /dev/sdh1".
>
> If you'd rather avoid a reshape at this point, you can add
> "--layout=preserve" to the second line. That way you will have just a rebuild
> of the new drive, instead of a full reshape.
>
> You will also need to "--grow --bitmap=none" first (you can re-add the bitmap
> later).
>
> --
> With respect,
> Roman
>

Hi,

Yep, I meant RAID6 - stupid subject line. If I use --layout=preserve,
what impact will that have? Will the array have redundancy during the
rebuild of the new drive?
If I preserve the layout, what is the final result of the array
compared to not preserving it?

Cheers,
// Mathias

* Re: Growing 6 HDD RAID5 to 7 HDD RAID5
  2011-04-12 17:21   ` Mathias Burén
@ 2011-04-12 18:22     ` Roman Mamedov
  2011-04-12 21:15       ` NeilBrown
  0 siblings, 1 reply; 10+ messages in thread
From: Roman Mamedov @ 2011-04-12 18:22 UTC (permalink / raw)
  To: Mathias Burén; +Cc: Linux-RAID

On Tue, 12 Apr 2011 18:21:13 +0100
Mathias Burén <mathias.buren@gmail.com> wrote:

> If I use --layout=preserve , what impact will that have?
> If I preserve the layout, what is the final result of the array
> compared to not preserving it?

Neil wrote about this on his blog:
"It is a very similar process that can now be used to convert a RAID5 to a
RAID6. We first change the RAID5 to RAID6 with a non-standard layout that has
the parity blocks distributed as normal, but the Q blocks all on the last
device (a new device). So this is RAID6 using the RAID6 driver, but with a
non-RAID6 layout. So we "simply" change the layout and the job is done."
http://neil.brown.name/blog/20090817000931

Admittedly it is not completely clear to me what the long-term downsides of
this layout are. As I understand it, it does fully provide RAID6-level
redundancy. Perhaps just the performance will suffer a bit? Maybe someone can
explain this in more detail.

If anything, I think it is safe to use this layout for a while, e.g. in case
you don't want to rebuild 'right now'. You can always change the layout to the
traditional one later, by issuing "--grow --layout=normalise". Or perhaps if
you plan to add another disk soon, you can normalise it on that occasion, and
still gain the benefit of only one full reshape.
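
For example, normalising later would look something like this (just a
sketch, the backup path is only a placeholder):

$ mdadm --grow /dev/md0 --layout=normalise --backup-file=/other/disk/md0.backup

That reshape rewrites every stripe in place, so the backup file should
live on a disk that is not part of the array.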

>  Will the array have redundancy during the rebuild of the new drive?

If you choose --layout=preserve, your array immediately becomes a RAID6 with
one rebuilding drive. So this is the kind of redundancy you will have during
that rebuild - tolerance of up to one more (among the "old" drives) failure,
in other words, identical to what you currently have with RAID5.

-- 
With respect,
Roman

* Re: Growing 6 HDD RAID5 to 7 HDD RAID5
  2011-04-12 18:22     ` Roman Mamedov
@ 2011-04-12 21:15       ` NeilBrown
  2011-04-12 21:53         ` Mathias Burén
  0 siblings, 1 reply; 10+ messages in thread
From: NeilBrown @ 2011-04-12 21:15 UTC (permalink / raw)
  To: Roman Mamedov; +Cc: Mathias Burén, Linux-RAID

On Wed, 13 Apr 2011 00:22:38 +0600 Roman Mamedov <rm@romanrm.ru> wrote:

> On Tue, 12 Apr 2011 18:21:13 +0100
> Mathias Burén <mathias.buren@gmail.com> wrote:
> 
> > If I use --layout=preserve , what impact will that have?
> > If I preserve the layout, what is the final result of the array
> > compared to not preserving it?
> 
> Neil wrote about this on his blog:
> "It is a very similar process that can now be used to convert a RAID5 to a
> RAID6. We first change the RAID5 to RAID6 with a non-standard layout that has
> the parity blocks distributed as normal, but the Q blocks all on the last
> device (a new device). So this is RAID6 using the RAID6 driver, but with a
> non-RAID6 layout. So we "simply" change the layout and the job is done."
> http://neil.brown.name/blog/20090817000931
> 
> Admittedly it is not completely clear to me what are the long-term downsides of
> this layout. As I understand it does fully provide the RAID6-level redundancy.
> Perhaps just the performance will suffer a bit? Maybe someone can explain this
> more.

If you specify --layout=preserve, then all the 'Q' blocks will be on one disk.
As every write needs to update a Q block, every write will write to that disk.

With our current RAID6 implementation that probably isn't a big cost - for
any write, we need to either read from or write to each disk anyway.

Anyway:  the only possible problem would be a performance problem, and I
really don't know what performance impact there is - if any.

> 
> If anything, I think it is safe to use this layout for a while, e.g. in case
> you don't want to rebuild 'right now'. You can always change the layout to the
> traditional one later, by issuing "--grow --layout=normalise". Or perhaps if
> you plan to add another disk soon, you can normalise it on that occasion, and
> still gain the benefit of only one full reshape.

Note that doing a normalise by itself later will be much slower than not
doing a preserve now.
Doing the normalise later when growing the device again would be just as
fast as not doing the preserve now.

NeilBrown


> 
> >  Will the array have redundancy during the rebuild of the new drive?
> 
> If you choose --layout=preserve, your array immediately becomes a RAID6 with
> one rebuilding drive. So this is the kind of redundancy you will have during
> that rebuild - tolerance of up to one more (among the "old" drives) failure,
> in other words, identical to what you currently have with RAID5.
> 


* Re: Growing 6 HDD RAID5 to 7 HDD RAID5
  2011-04-12 21:15       ` NeilBrown
@ 2011-04-12 21:53         ` Mathias Burén
  0 siblings, 0 replies; 10+ messages in thread
From: Mathias Burén @ 2011-04-12 21:53 UTC (permalink / raw)
  To: NeilBrown; +Cc: Roman Mamedov, Linux-RAID

On 12 April 2011 22:15, NeilBrown <neilb@suse.de> wrote:
> On Wed, 13 Apr 2011 00:22:38 +0600 Roman Mamedov <rm@romanrm.ru> wrote:
>
>> On Tue, 12 Apr 2011 18:21:13 +0100
>> Mathias Burén <mathias.buren@gmail.com> wrote:
>>
>> > If I use --layout=preserve , what impact will that have?
>> > If I preserve the layout, what is the final result of the array
>> > compared to not preserving it?
>>
>> Neil wrote about this on his blog:
>> "It is a very similar process that can now be used to convert a RAID5 to a
>> RAID6. We first change the RAID5 to RAID6 with a non-standard layout that has
>> the parity blocks distributed as normal, but the Q blocks all on the last
>> device (a new device). So this is RAID6 using the RAID6 driver, but with a
>> non-RAID6 layout. So we "simply" change the layout and the job is done."
>> http://neil.brown.name/blog/20090817000931
>>
>> Admittedly it is not completely clear to me what are the long-term downsides of
>> this layout. As I understand it does fully provide the RAID6-level redundancy.
>> Perhaps just the performance will suffer a bit? Maybe someone can explain this
>> more.
>
> If you specify --layout=preserve, then all the 'Q' blocks will be on one disk.
> As every write needs to update a Q block, every write will write to that disk.
>
> With our current RAID6 implementation that probably isn't a big cost - for
> any write, we need to either read from or write to each disk anyway.
>
> Anyway:  the only possible problem would be a performance problem, and I
> really don't know what performance impact there is - if any.
>
>>
>> If anything, I think it is safe to use this layout for a while, e.g. in case
>> you don't want to rebuild 'right now'. You can always change the layout to the
>> traditional one later, by issuing "--grow --layout=normalise". Or perhaps if
>> you plan to add another disk soon, you can normalise it on that occasion, and
>> still gain the benefit of only one full reshape.
>
> Note that doing a normalise by itself later will be much slower than not
> doing a preserve now.
> Doing the normalise later when growing the the device again would be just as
> fast as no doing the preserve now.
>
> NeilBrown
>
>
>>
>> >  Will the array have redundancy during the rebuild of the new drive?
>>
>> If you choose --layout=preserve, your array immediately becomes a RAID6 with
>> one rebuilding drive. So this is the kind of redundancy you will have during
>> that rebuild - tolerance of up to one more (among the "old" drives) failure,
>> in other words, identical to what you currently have with RAID5.
>>
>
>

Right, so using --layout=preserve seems like a sane and good option. Thanks
for the info, I'll let you know what happens; the HDD should arrive in the
next few days.

// Mathias

* Re: Growing 6 HDD RAID5 to 7 HDD RAID6
  2011-04-12 16:56 Growing 6 HDD RAID5 to 7 HDD RAID5 Mathias Burén
  2011-04-12 17:14 ` Roman Mamedov
@ 2011-04-13 11:44 ` John Robinson
  2011-04-22  9:39   ` Mathias Burén
  1 sibling, 1 reply; 10+ messages in thread
From: John Robinson @ 2011-04-13 11:44 UTC (permalink / raw)
  To: Mathias Burén; +Cc: Linux-RAID

(Subject line amended by me :-)

On 12/04/2011 17:56, Mathias Burén wrote:
[...]
> I'm approaching over 6.5TB of data, and with an array this large I'd
> like to migrate to RAID6 for a bit more safety. I'm just checking if I
> understand this correctly, this is how to do it:
>
> * Add a HDD to the array as a hot spare:
> mdadm --manage /dev/md0 --add /dev/sdh1
>
> * Migrate the array to RAID6:
> mdadm --grow /dev/md0 --raid-devices 7 --level 6

You will need a --backup-file to do this, on another device. Since you 
are keeping the same number of data discs before and after the reshape, 
the backup file will be needed throughout the reshape, so the reshape 
will take perhaps twice as long as a grow or shrink. If your backup-file 
is on the same disc(s) as md0 is (e.g. on another partition or array 
made up of other partitions on the same disc(s)), it will take way 
longer (gazillions of seeks), so I'd recommend a separate drive or if 
you have one a small SSD for the backup file.
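
In concrete terms the second step would then look something like this
(just a sketch, the backup path is only an example):

$ mdadm --grow /dev/md0 --raid-devices 7 --level 6 --backup-file=/mnt/ssd/md0-raid5-to-raid6.backup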

Doing the above with --layout=preserve will save you doing the reshape 
so you won't need the backup file, but there will still be an initial 
sync of the Q parity, and the layout will be RAID4-alike with all the Q 
parity on one drive so it's possible its performance will be RAID4-alike 
too i.e. small writes never faster than the parity drive. Having said 
that, streamed writes can still potentially go as fast as your 5 data 
discs, as per your RAID5. In practice, I'd be surprised if it was faster 
than about twice the speed of a single drive (the same as your current 
RAID5), and as Neil Brown notes in his reply, RAID6 doesn't currently 
have the read-modify-write optimisation for small writes so small write 
performance is liable to be even poorer than your RAID5 in either layout.

You will never lose any redundancy in either of the above, but you won't 
gain RAID6 double redundancy until the reshape (or Q-drive sync with 
--layout=preserve) has completed - just the same as if you were 
replacing a dead drive in an existing RAID6.

Hope the above helps!

Cheers,

John.


* Re: Growing 6 HDD RAID5 to 7 HDD RAID6
  2011-04-13 11:44 ` Growing 6 HDD RAID5 to 7 HDD RAID6 John Robinson
@ 2011-04-22  9:39   ` Mathias Burén
  2011-04-22 10:05     ` Mathias Burén
  0 siblings, 1 reply; 10+ messages in thread
From: Mathias Burén @ 2011-04-22  9:39 UTC (permalink / raw)
  To: John Robinson; +Cc: Linux-RAID

On 13 April 2011 12:44, John Robinson <john.robinson@anonymous.org.uk> wrote:
> (Subject line amended by me :-)
>
> On 12/04/2011 17:56, Mathias Burén wrote:
> [...]
>>
>> I'm approaching over 6.5TB of data, and with an array this large I'd
>> like to migrate to RAID6 for a bit more safety. I'm just checking if I
>> understand this correctly, this is how to do it:
>>
>> * Add a HDD to the array as a hot spare:
>> mdadm --manage /dev/md0 --add /dev/sdh1
>>
>> * Migrate the array to RAID6:
>> mdadm --grow /dev/md0 --raid-devices 7 --level 6
>
> You will need a --backup-file to do this, on another device. Since you are
> keeping the same number of data discs before and after the reshape, the
> backup file will be needed throughout the reshape, so the reshape will take
> perhaps twice as long as a grow or shrink. If your backup-file is on the
> same disc(s) as md0 is (e.g. on another partition or array made up of other
> partitions on the same disc(s)), it will take way longer (gazillions of
> seeks), so I'd recommend a separate drive or if you have one a small SSD for
> the backup file.
>
> Doing the above with --layout=preserve will save you doing the reshape so
> you won't need the backup file, but there will still be an initial sync of
> the Q parity, and the layout will be RAID4-alike with all the Q parity on
> one drive so it's possible its performance will be RAID4-alike too i.e.
> small writes never faster than the parity drive. Having said that, streamed
> writes can still potentially go as fast as your 5 data discs, as per your
> RAID5. In practice, I'd be surprised if it was faster than about twice the
> speed of a single drive (the same as your current RAID5), and as Neil Brown
> notes in his reply, RAID6 doesn't currently have the read-modify-write
> optimisation for small writes so small write performance is liable to be
> even poorer than your RAID5 in either layout.
>
> You will never lose any redundancy in either of the above, but you won't
> gain RAID6 double redundancy until the reshape (or Q-drive sync with
> --layout=preserve) has completed - just the same as if you were replacing a
> dead drive in an existing RAID6.
>
> Hope the above helps!
>
> Cheers,
>
> John.
>
>

Hi,

Thanks for the replies. Alright, here we go:

 $ mdadm --grow /dev/md0 --bitmap=none
 $ mdadm --manage /dev/md0 --add /dev/sde1
 $ mdadm --grow /dev/md0 --verbose --layout=preserve --raid-devices 7 --level 6 --backup-file=/root/md-raid5-to-raid6-backupfile.bin
mdadm: level of /dev/md0 changed to raid6

$ cat /proc/mdstat

Fri Apr 22 10:37:44 2011

Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sde1[7] sdg1[0] sdh1[6] sdf1[5] sdc1[3] sdd1[4] sdb1[1]
      9751756800 blocks super 1.2 level 6, 64k chunk, algorithm 18 [7/6] [UUUUUU_]
      [>....................]  reshape =  0.0% (224768/1950351360) finish=8358.5min speed=3888K/sec

unused devices: <none>

And in dmesg:


 --- level:6 rd:7 wd:6
 disk 0, o:1, dev:sdg1
 disk 1, o:1, dev:sdb1
 disk 2, o:1, dev:sdd1
 disk 3, o:1, dev:sdc1
 disk 4, o:1, dev:sdf1
 disk 5, o:1, dev:sdh1
RAID conf printout:
 --- level:6 rd:7 wd:6
 disk 0, o:1, dev:sdg1
 disk 1, o:1, dev:sdb1
 disk 2, o:1, dev:sdd1
 disk 3, o:1, dev:sdc1
 disk 4, o:1, dev:sdf1
 disk 5, o:1, dev:sdh1
 disk 6, o:1, dev:sde1
md: reshape of RAID array md0
md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for reshape.
md: using 128k window, over a total of 1950351360 blocks.

IIRC there's a way to speed up the migration, by using a larger cache
value somewhere, no?

Thanks,
Mathias

* Re: Growing 6 HDD RAID5 to 7 HDD RAID6
  2011-04-22  9:39   ` Mathias Burén
@ 2011-04-22 10:05     ` Mathias Burén
  2011-04-29 22:45       ` Mathias Burén
  0 siblings, 1 reply; 10+ messages in thread
From: Mathias Burén @ 2011-04-22 10:05 UTC (permalink / raw)
  To: John Robinson; +Cc: Linux-RAID

On 22 April 2011 10:39, Mathias Burén <mathias.buren@gmail.com> wrote:
> On 13 April 2011 12:44, John Robinson <john.robinson@anonymous.org.uk> wrote:
>> (Subject line amended by me :-)
>>
>> On 12/04/2011 17:56, Mathias Burén wrote:
>> [...]
>>>
>>> I'm approaching over 6.5TB of data, and with an array this large I'd
>>> like to migrate to RAID6 for a bit more safety. I'm just checking if I
>>> understand this correctly, this is how to do it:
>>>
>>> * Add a HDD to the array as a hot spare:
>>> mdadm --manage /dev/md0 --add /dev/sdh1
>>>
>>> * Migrate the array to RAID6:
>>> mdadm --grow /dev/md0 --raid-devices 7 --level 6
>>
>> You will need a --backup-file to do this, on another device. Since you are
>> keeping the same number of data discs before and after the reshape, the
>> backup file will be needed throughout the reshape, so the reshape will take
>> perhaps twice as long as a grow or shrink. If your backup-file is on the
>> same disc(s) as md0 is (e.g. on another partition or array made up of other
>> partitions on the same disc(s)), it will take way longer (gazillions of
>> seeks), so I'd recommend a separate drive or if you have one a small SSD for
>> the backup file.
>>
>> Doing the above with --layout=preserve will save you doing the reshape so
>> you won't need the backup file, but there will still be an initial sync of
>> the Q parity, and the layout will be RAID4-alike with all the Q parity on
>> one drive so it's possible its performance will be RAID4-alike too i.e.
>> small writes never faster than the parity drive. Having said that, streamed
>> writes can still potentially go as fast as your 5 data discs, as per your
>> RAID5. In practice, I'd be surprised if it was faster than about twice the
>> speed of a single drive (the same as your current RAID5), and as Neil Brown
>> notes in his reply, RAID6 doesn't currently have the read-modify-write
>> optimisation for small writes so small write performance is liable to be
>> even poorer than your RAID5 in either layout.
>>
>> You will never lose any redundancy in either of the above, but you won't
>> gain RAID6 double redundancy until the reshape (or Q-drive sync with
>> --layout=preserve) has completed - just the same as if you were replacing a
>> dead drive in an existing RAID6.
>>
>> Hope the above helps!
>>
>> Cheers,
>>
>> John.
>>
>>
>
> Hi,
>
> Thanks for the replies. Allright, here we go;
>
>  $ mdadm --grow /dev/md0 --bitmap=none
>  $ mdadm --manage /dev/md0 --add /dev/sde1
>  $ mdadm --grow /dev/md0 --verbose --layout=preserve  --raid-devices 7
> --level 6 --backup-file=/root/md-raid5-to-raid6-backupfile.bin
> mdadm: level of /dev/md0 changed to raid6
>
> $ cat /proc/mdstat
>
>                                                             Fri Apr
> 22 10:37:44 2011
>
> Personalities : [raid6] [raid5] [raid4]
> md0 : active raid6 sde1[7] sdg1[0] sdh1[6] sdf1[5] sdc1[3] sdd1[4] sdb1[1]
>      9751756800 blocks super 1.2 level 6, 64k chunk, algorithm 18
> [7/6] [UUUUUU_]
>      [>....................]  reshape =  0.0% (224768/1950351360)
> finish=8358.5min speed=3888K/sec
>
> unused devices: <none>
>
> And in dmesg:
>
>
>  --- level:6 rd:7 wd:6
>  disk 0, o:1, dev:sdg1
>  disk 1, o:1, dev:sdb1
>  disk 2, o:1, dev:sdd1
>  disk 3, o:1, dev:sdc1
>  disk 4, o:1, dev:sdf1
>  disk 5, o:1, dev:sdh1
> RAID conf printout:
>  --- level:6 rd:7 wd:6
>  disk 0, o:1, dev:sdg1
>  disk 1, o:1, dev:sdb1
>  disk 2, o:1, dev:sdd1
>  disk 3, o:1, dev:sdc1
>  disk 4, o:1, dev:sdf1
>  disk 5, o:1, dev:sdh1
>  disk 6, o:1, dev:sde1
> md: reshape of RAID array md0
> md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
> md: using maximum available idle IO bandwidth (but not more than
> 200000 KB/sec) for reshape.
> md: using 128k window, over a total of 1950351360 blocks.
>
> IIRC there's a way to speed up the migration, by using a larger cache
> value somewhere, no?
>
> Thanks,
> Mathias
>

Increasing stripe cache on the md device from 1027 to 32k or 16k
didn't make a difference, still around 3800KB/s reshape. Oh well,
we'll see if it's still alive in 5.5 days!
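
(For reference, the stripe cache knob I changed lives here, 16k being 16384:

$ echo 16384 > /sys/block/md0/md/stripe_cache_size

There are also /proc/sys/dev/raid/speed_limit_min and speed_limit_max, but
dmesg already reports a 200000 KB/sec ceiling, so those don't look like the
bottleneck either.)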

Cheers,

* Re: Growing 6 HDD RAID5 to 7 HDD RAID6
  2011-04-22 10:05     ` Mathias Burén
@ 2011-04-29 22:45       ` Mathias Burén
  0 siblings, 0 replies; 10+ messages in thread
From: Mathias Burén @ 2011-04-29 22:45 UTC (permalink / raw)
  To: John Robinson; +Cc: Linux-RAID

On 22 April 2011 11:05, Mathias Burén <mathias.buren@gmail.com> wrote:
> On 22 April 2011 10:39, Mathias Burén <mathias.buren@gmail.com> wrote:
>> On 13 April 2011 12:44, John Robinson <john.robinson@anonymous.org.uk> wrote:
>>> (Subject line amended by me :-)
>>>
>>> On 12/04/2011 17:56, Mathias Burén wrote:
>>> [...]
>>>>
>>>> I'm approaching over 6.5TB of data, and with an array this large I'd
>>>> like to migrate to RAID6 for a bit more safety. I'm just checking if I
>>>> understand this correctly, this is how to do it:
>>>>
>>>> * Add a HDD to the array as a hot spare:
>>>> mdadm --manage /dev/md0 --add /dev/sdh1
>>>>
>>>> * Migrate the array to RAID6:
>>>> mdadm --grow /dev/md0 --raid-devices 7 --level 6
>>>
>>> You will need a --backup-file to do this, on another device. Since you are
>>> keeping the same number of data discs before and after the reshape, the
>>> backup file will be needed throughout the reshape, so the reshape will take
>>> perhaps twice as long as a grow or shrink. If your backup-file is on the
>>> same disc(s) as md0 is (e.g. on another partition or array made up of other
>>> partitions on the same disc(s)), it will take way longer (gazillions of
>>> seeks), so I'd recommend a separate drive or if you have one a small SSD for
>>> the backup file.
>>>
>>> Doing the above with --layout=preserve will save you doing the reshape so
>>> you won't need the backup file, but there will still be an initial sync of
>>> the Q parity, and the layout will be RAID4-alike with all the Q parity on
>>> one drive so it's possible its performance will be RAID4-alike too i.e.
>>> small writes never faster than the parity drive. Having said that, streamed
>>> writes can still potentially go as fast as your 5 data discs, as per your
>>> RAID5. In practice, I'd be surprised if it was faster than about twice the
>>> speed of a single drive (the same as your current RAID5), and as Neil Brown
>>> notes in his reply, RAID6 doesn't currently have the read-modify-write
>>> optimisation for small writes so small write performance is liable to be
>>> even poorer than your RAID5 in either layout.
>>>
>>> You will never lose any redundancy in either of the above, but you won't
>>> gain RAID6 double redundancy until the reshape (or Q-drive sync with
>>> --layout=preserve) has completed - just the same as if you were replacing a
>>> dead drive in an existing RAID6.
>>>
>>> Hope the above helps!
>>>
>>> Cheers,
>>>
>>> John.
>>>
>>>
>>
>> Hi,
>>
>> Thanks for the replies. Allright, here we go;
>>
>>  $ mdadm --grow /dev/md0 --bitmap=none
>>  $ mdadm --manage /dev/md0 --add /dev/sde1
>>  $ mdadm --grow /dev/md0 --verbose --layout=preserve  --raid-devices 7
>> --level 6 --backup-file=/root/md-raid5-to-raid6-backupfile.bin
>> mdadm: level of /dev/md0 changed to raid6
>>
>> $ cat /proc/mdstat
>>
>>                                                             Fri Apr
>> 22 10:37:44 2011
>>
>> Personalities : [raid6] [raid5] [raid4]
>> md0 : active raid6 sde1[7] sdg1[0] sdh1[6] sdf1[5] sdc1[3] sdd1[4] sdb1[1]
>>      9751756800 blocks super 1.2 level 6, 64k chunk, algorithm 18
>> [7/6] [UUUUUU_]
>>      [>....................]  reshape =  0.0% (224768/1950351360)
>> finish=8358.5min speed=3888K/sec
>>
>> unused devices: <none>
>>
>> And in dmesg:
>>
>>
>>  --- level:6 rd:7 wd:6
>>  disk 0, o:1, dev:sdg1
>>  disk 1, o:1, dev:sdb1
>>  disk 2, o:1, dev:sdd1
>>  disk 3, o:1, dev:sdc1
>>  disk 4, o:1, dev:sdf1
>>  disk 5, o:1, dev:sdh1
>> RAID conf printout:
>>  --- level:6 rd:7 wd:6
>>  disk 0, o:1, dev:sdg1
>>  disk 1, o:1, dev:sdb1
>>  disk 2, o:1, dev:sdd1
>>  disk 3, o:1, dev:sdc1
>>  disk 4, o:1, dev:sdf1
>>  disk 5, o:1, dev:sdh1
>>  disk 6, o:1, dev:sde1
>> md: reshape of RAID array md0
>> md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
>> md: using maximum available idle IO bandwidth (but not more than
>> 200000 KB/sec) for reshape.
>> md: using 128k window, over a total of 1950351360 blocks.
>>
>> IIRC there's a way to speed up the migration, by using a larger cache
>> value somewhere, no?
>>
>> Thanks,
>> Mathias
>>
>
> Increasing stripe cache on the md device from 1027 to 32k or 16k
> didn't make a difference, still around 3800KB/s reshape. Oh well,
> we'll see if it's still alive in 5.5 days!
>
> Cheers,
>

It's alive!

md: md0: reshape done.
RAID conf printout:
 --- level:6 rd:7 wd:7
 disk 0, o:1, dev:sdg1
 disk 1, o:1, dev:sdb1
 disk 2, o:1, dev:sdd1
 disk 3, o:1, dev:sdc1
 disk 4, o:1, dev:sdf1
 disk 5, o:1, dev:sdh1
 disk 6, o:1, dev:sde1

$ sudo mdadm -D /dev/md0
Password:
/dev/md0:
        Version : 1.2
  Creation Time : Tue Oct 19 08:58:41 2010
     Raid Level : raid6
     Array Size : 9751756800 (9300.00 GiB 9985.80 GB)
  Used Dev Size : 1950351360 (1860.00 GiB 1997.16 GB)
   Raid Devices : 7
  Total Devices : 7
    Persistence : Superblock is persistent

    Update Time : Fri Apr 29 23:44:50 2011
          State : clean
 Active Devices : 7
Working Devices : 7
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : ion:0  (local to host ion)
           UUID : e6595c64:b3ae90b3:f01133ac:3f402d20
         Events : 6158702

    Number   Major   Minor   RaidDevice State
       0       8       97        0      active sync   /dev/sdg1
       1       8       17        1      active sync   /dev/sdb1
       4       8       49        2      active sync   /dev/sdd1
       3       8       33        3      active sync   /dev/sdc1
       5       8       81        4      active sync   /dev/sdf1
       6       8      113        5      active sync   /dev/sdh1
       7       8       65        6      active sync   /dev/sde1

Yay :) thanks for great software! Cheers,

/ Mathias

end of thread

Thread overview: 10+ messages
2011-04-12 16:56 Growing 6 HDD RAID5 to 7 HDD RAID5 Mathias Burén
2011-04-12 17:14 ` Roman Mamedov
2011-04-12 17:21   ` Mathias Burén
2011-04-12 18:22     ` Roman Mamedov
2011-04-12 21:15       ` NeilBrown
2011-04-12 21:53         ` Mathias Burén
2011-04-13 11:44 ` Growing 6 HDD RAID5 to 7 HDD RAID6 John Robinson
2011-04-22  9:39   ` Mathias Burén
2011-04-22 10:05     ` Mathias Burén
2011-04-29 22:45       ` Mathias Burén
