From mboxrd@z Thu Jan 1 00:00:00 1970
From: Mathias Burén
Subject: Re: Growing 6 HDD RAID5 to 7 HDD RAID6
Date: Fri, 22 Apr 2011 10:39:07 +0100
Message-ID:
References: <4DA58CB0.3020109@anonymous.org.uk>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: QUOTED-PRINTABLE
Return-path:
In-Reply-To: <4DA58CB0.3020109@anonymous.org.uk>
Sender: linux-raid-owner@vger.kernel.org
To: John Robinson
Cc: Linux-RAID
List-Id: linux-raid.ids

On 13 April 2011 12:44, John Robinson wrote:
> (Subject line amended by me :-)
>
> On 12/04/2011 17:56, Mathias Burén wrote:
> [...]
>>
>> I'm approaching over 6.5TB of data, and with an array this large I'd
>> like to migrate to RAID6 for a bit more safety. I'm just checking if I
>> understand this correctly; this is how to do it:
>>
>> * Add a HDD to the array as a hot spare:
>> mdadm --manage /dev/md0 --add /dev/sdh1
>>
>> * Migrate the array to RAID6:
>> mdadm --grow /dev/md0 --raid-devices 7 --level 6
>
> You will need a --backup-file to do this, on another device. Since you
> are keeping the same number of data discs before and after the reshape,
> the backup file will be needed throughout the reshape, so the reshape
> will take perhaps twice as long as a grow or shrink. If your backup
> file is on the same disc(s) as md0 is (e.g. on another partition, or an
> array made up of other partitions on the same disc(s)), it will take
> way longer (gazillions of seeks), so I'd recommend a separate drive or,
> if you have one, a small SSD for the backup file.
>
> Doing the above with --layout=preserve will save you doing the reshape,
> so you won't need the backup file, but there will still be an initial
> sync of the Q parity, and the layout will be RAID4-alike with all the Q
> parity on one drive, so it's possible its performance will be
> RAID4-alike too, i.e. small writes never faster than the parity drive.
> Having said that, streamed writes can still potentially go as fast as
> your 5 data discs, as per your RAID5. In practice, I'd be surprised if
> it was faster than about twice the speed of a single drive (the same as
> your current RAID5), and as Neil Brown notes in his reply, RAID6
> doesn't currently have the read-modify-write optimisation for small
> writes, so small write performance is liable to be even poorer than
> your RAID5 in either layout.
>
> You will never lose any redundancy in either of the above, but you
> won't gain RAID6 double redundancy until the reshape (or Q-drive sync
> with --layout=preserve) has completed - just the same as if you were
> replacing a dead drive in an existing RAID6.
>
> Hope the above helps!
>
> Cheers,
>
> John.
>

Hi,

Thanks for the replies. Alright, here we go:

$ mdadm --grow /dev/md0 --bitmap=none
$ mdadm --manage /dev/md0 --add /dev/sde1
$ mdadm --grow /dev/md0 --verbose --layout=preserve --raid-devices 7 \
    --level 6 --backup-file=/root/md-raid5-to-raid6-backupfile.bin
mdadm: level of /dev/md0 changed to raid6

$ cat /proc/mdstat
Fri Apr 22 10:37:44 2011
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sde1[7] sdg1[0] sdh1[6] sdf1[5] sdc1[3] sdd1[4] sdb1[1]
      9751756800 blocks super 1.2 level 6, 64k chunk, algorithm 18 [7/6] [UUUUUU_]
      [>....................]  reshape =  0.0% (224768/1950351360) finish=8358.5min speed=3888K/sec

unused devices: <none>
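(While that ticks along, I'm keeping an eye on progress as below. The
--detail field name is from memory, and I believe - though I haven't
tried it yet - that the write-intent bitmap I removed above can be
re-added once the reshape is done:)

$ watch -n 60 cat /proc/mdstat
$ mdadm --detail /dev/md0 | grep -i reshape   # "Reshape Status" line
$ mdadm --grow /dev/md0 --bitmap=internal     # only after reshape completes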
And in dmesg:

RAID conf printout:
 --- level:6 rd:7 wd:6
 disk 0, o:1, dev:sdg1
 disk 1, o:1, dev:sdb1
 disk 2, o:1, dev:sdd1
 disk 3, o:1, dev:sdc1
 disk 4, o:1, dev:sdf1
 disk 5, o:1, dev:sdh1
RAID conf printout:
 --- level:6 rd:7 wd:6
 disk 0, o:1, dev:sdg1
 disk 1, o:1, dev:sdb1
 disk 2, o:1, dev:sdd1
 disk 3, o:1, dev:sdc1
 disk 4, o:1, dev:sdf1
 disk 5, o:1, dev:sdh1
 disk 6, o:1, dev:sde1
md: reshape of RAID array md0
md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
md: using maximum available idle IO bandwidth (but not more than
200000 KB/sec) for reshape.
md: using 128k window, over a total of 1950351360 blocks.

IIRC there's a way to speed up the migration by raising a cache value
somewhere, no? (I've put what I think the knobs are in the P.S. below.)

Thanks,
Mathias
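P.S. On that cache value: these are the tunables I had in mind. The
paths should be right, but the values are guesses from memory and
untested on this box, so treat them as a starting point rather than a
recommendation (run as root):

# raid5/6 stripe cache, in pages per device; costs RAM, helps reshape speed
$ echo 8192 > /sys/block/md0/md/stripe_cache_size
# raise the minimum guaranteed resync/reshape rate (KB/sec per disk)
$ echo 100000 > /proc/sys/dev/raid/speed_limit_min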