* upgrading a RAID array in-place with larger drives. request for review of my approach?
@ 2014-12-01 2:55 terrygalant
2014-12-01 3:28 ` John Stoffel
2014-12-01 9:08 ` Robin Hill
0 siblings, 2 replies; 6+ messages in thread
From: terrygalant @ 2014-12-01 2:55 UTC (permalink / raw)
To: linux-raid
Hi,
I have a 4-drive RAID-10 array. I've been using mdadm for a while to manage the array, replacing drives as they die without changing anything else.
Now, I want to increase its size in-place. I'd like to ask for some help with a review of my setup and plans on how to do it right.
I'm really open to any advice that'll help me get there without blowing this all up!
My array is
cat /proc/mdstat
...
md2 : active raid10 sdd1[1] sdc1[0] sde1[4] sdf1[3]
1953519616 blocks super 1.2 512K chunks 2 far-copies [4/4] [UUUU]
bitmap: 0/466 pages [0KB], 2048KB chunk
...
it comprises 4 drives; each is 1TB physical size, partitioned with a single 'max size' partition, where that partition is of type 'Linux raid autodetect'
fdisk -l /dev/sd[cdef]
Disk /dev/sdc: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000
Device Boot Start End Sectors Size Id Type
/dev/sdc1 63 1953520064 1953520002 931.5G fd Linux raid autodetect
Disk /dev/sdd: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000
Device Boot Start End Sectors Size Id Type
/dev/sdd1 63 1953520064 1953520002 931.5G fd Linux raid autodetect
Disk /dev/sde: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000
Device Boot Start End Sectors Size Id Type
/dev/sde1 63 1953520064 1953520002 931.5G fd Linux raid autodetect
Disk /dev/sdf: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000
Device Boot Start End Sectors Size Id Type
/dev/sdf1 63 1953520064 1953520002 931.5G fd Linux raid autodetect
the array contains multiple LVs; the RAID-10 array itself is ~2TB,
pvs /dev/md2
PV VG Fmt Attr PSize PFree
/dev/md2 VGBKUP lvm2 a-- 1.82t 45.56g
vgs VGBKUP
VG #PV #LV #SN Attr VSize VFree
VGBKUP 1 8 0 wz--n- 1.82t 45.56g
lvs VGBKUP
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert
LV001 VGBKUP -wi-ao--- 1.46t
LV002 VGBKUP -wi-ao--- 300.00g
LV003 VGBKUP -wi-ao--- 160.00m
LV004 VGBKUP -wi-ao--- 12.00g
LV005 VGBKUP -wi-ao--- 512.00m
LV006 VGBKUP -wi-a---- 160.00m
LV007 VGBKUP -wi-a---- 4.00g
LV008 VGBKUP -wi-a---- 512.00m
where, currently, ~45.56G of the physical device is unused
I've purchased 4 new 3TB drives.
I want to upgrade the existing array of 4x1TB drives to 4x3TB drives.
I want to end up with a single partition at max size, i.e. ~3TB.
I'd like to do this *in-place*, never bringing down the array.
IIUC, this IS doable.
1st, I think the following procedure starts the process correctly:
(1) partition each new 3TB drive with one 1TB partition of type 'linux raid autodetect', making sure it's IDENTICAL to the partition layout on the current array's disks
(2) with the current array up & running, mdadm FAIL one drive
(3) mdadm remove the FAIL'd drive from the array
(4) physically remove the FAIL'd drive
(5) physically insert the new, pre-formatted 3TB drive
(6) mdadm add the newly inserted drive
(7) allow the array to rebuild, until 'cat /proc/mdstat' says it's done
(8) repeat steps (2) - (7) for each of the three remaining drives.
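For reference, steps (2)-(7) for one drive would look roughly like this (my sketch; device names are illustrative, and the commands are destructive, so they'd need checking against 'mdadm --detail' first):

```shell
mdadm /dev/md2 --fail /dev/sdc1      # (2) fail one member
mdadm /dev/md2 --remove /dev/sdc1    # (3) remove it from the array
# (4)/(5) physically swap in the new, pre-partitioned 3TB drive
mdadm /dev/md2 --add /dev/sdc1       # (6) add the new partition
cat /proc/mdstat                     # (7) watch until the rebuild completes
```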
2nd, I have to, correctly and safely, in 'some' order:
extend the physical partitions on all four drives, or the array itself (not sure which)
extend the volume group on the array
expand, or add to, the existing LVs in the volume group.
I'm really not sure about what steps, in what order to do *here*.
Can anyone verify that my first part is right, and help me out with doing the 2nd part right?
Thanks a lot!
Terry
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: upgrading a RAID array in-place with larger drives. request for review of my approach?
2014-12-01 2:55 upgrading a RAID array in-place with larger drives. request for review of my approach? terrygalant
@ 2014-12-01 3:28 ` John Stoffel
2014-12-01 4:04 ` terrygalant
2014-12-01 9:08 ` Robin Hill
1 sibling, 1 reply; 6+ messages in thread
From: John Stoffel @ 2014-12-01 3:28 UTC (permalink / raw)
To: terrygalant; +Cc: linux-raid
Terry,
If you have the ability and the power and space in the chassis, I'd
just add in the four new drives, set them up in their own RAID10 array,
then just do a 'pvmove' to migrate all your current LVs from the old
1TB RAID10 setup to the new one. No fuss, no muss, and you can keep
the system online while doing it.
You will of course need to add the new disks into the VG, but
that's simple to do. Once all the data is moved off the old disks,
you can then remove them from the VG, shut down the MD device,
and then remove the disks from the system.
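A rough sketch of that sequence (assuming the new array is created as /dev/md3 from hypothetical devices sdg-sdj; adjust names to your system):

```shell
# build the new array and fold it into the existing VG
mdadm --create /dev/md3 --level=10 --raid-devices=4 /dev/sd[ghij]1
pvcreate /dev/md3
vgextend VGBKUP /dev/md3
# migrate all extents off the old PV, online
pvmove /dev/md2 /dev/md3
# retire the old PV and array
vgreduce VGBKUP /dev/md2
pvremove /dev/md2
mdadm --stop /dev/md2
```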
Let me know if you need more details, I glossed over a bunch here.
John
* Re: upgrading a RAID array in-place with larger drives. request for review of my approach?
2014-12-01 3:28 ` John Stoffel
@ 2014-12-01 4:04 ` terrygalant
2014-12-01 13:47 ` Phil Turmel
0 siblings, 1 reply; 6+ messages in thread
From: terrygalant @ 2014-12-01 4:04 UTC (permalink / raw)
To: John Stoffel; +Cc: linux-raid
Hi John,
On Sun, Nov 30, 2014, at 07:28 PM, John Stoffel wrote:
> If you have the ability and the power and space in the chassis, i'd
> just add in the four new drives, set them up in their RAID10 format,
Unfortunately I don't. I have the 4 slots and that's it :-(
If I did, it'd be pretty easy. But no. So that's why I'm trying to figure out how to do this right -- 'in place'.
Terry
* Re: upgrading a RAID array in-place with larger drives. request for review of my approach?
2014-12-01 2:55 upgrading a RAID array in-place with larger drives. request for review of my approach? terrygalant
2014-12-01 3:28 ` John Stoffel
@ 2014-12-01 9:08 ` Robin Hill
2014-12-01 9:42 ` Wols Lists
1 sibling, 1 reply; 6+ messages in thread
From: Robin Hill @ 2014-12-01 9:08 UTC (permalink / raw)
To: terrygalant; +Cc: linux-raid
[-- Attachment #1: Type: text/plain, Size: 6742 bytes --]
On Sun Nov 30, 2014 at 06:55:53PM -0800, terrygalant@mailbolt.com wrote:
> Hi,
>
> I have a 4-drive RAID-10 array. I've been using mdadm for awhile to
> manage the array, and replace drives as they die without changing
> anything.
>
> Now, I want to increase its size in-place. I'd like to ask for some
> help with a review of my setup and plans on how to do it right.
>
> I'm really open to any advice that'll help me get there without
> blowing this all up!
>
> My array is
>
> cat /proc/mdstat
> ...
> md2 : active raid10 sdd1[1] sdc1[0] sde1[4] sdf1[3]
> 1953519616 blocks super 1.2 512K chunks 2 far-copies [4/4] [UUUU]
> bitmap: 0/466 pages [0KB], 2048KB chunk
> ...
>
A question was raised just recently about reshaping "far" RAID10 arrays.
Neil Brown (the md maintainer) said:
I recommend creating some loop-back block devices and experimenting.
But I'm fairly sure that "far" RAID10 arrays cannot be reshaped at all.
> it's comprised of 4 drives; each is 1TB physical size, partitioned
> with a single 'max size' partition, where that partition is formatted
> 'Linux raid autodetect'
>
> fdisk -l /dev/sd[cdef]
>
> Disk /dev/sdc: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
> Units: sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disklabel type: dos
> Disk identifier: 0x00000000
>
> Device Boot Start End Sectors Size Id Type
> /dev/sdc1 63 1953520064 1953520002 931.5G fd Linux raid autodetect
>
> Disk /dev/sdd: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
> Units: sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disklabel type: dos
> Disk identifier: 0x00000000
>
> Device Boot Start End Sectors Size Id Type
> /dev/sdd1 63 1953520064 1953520002 931.5G fd Linux raid autodetect
>
> Disk /dev/sde: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
> Units: sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disklabel type: dos
> Disk identifier: 0x00000000
>
> Device Boot Start End Sectors Size Id Type
> /dev/sde1 63 1953520064 1953520002 931.5G fd Linux raid autodetect
>
> Disk /dev/sdf: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
> Units: sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disklabel type: dos
> Disk identifier: 0x00000000
>
> Device Boot Start End Sectors Size Id Type
> /dev/sdf1 63 1953520064 1953520002 931.5G fd Linux raid autodetect
>
> the array contains only/multiple LVs, in a RAID-10 array size of 2TB,
>
> pvs /dev/md2
> PV VG Fmt Attr PSize PFree
> /dev/md2 VGBKUP lvm2 a-- 1.82t 45.56g
> vgs VGBKUP
> VG #PV #LV #SN Attr VSize VFree
> VGBKUP 1 8 0 wz--n- 1.82t 45.56g
> lvs VGBKUP
> LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert
> LV001 VGBKUP -wi-ao--- 1.46t
> LV002 VGBKUP -wi-ao--- 300.00g
> LV003 VGBKUP -wi-ao--- 160.00m
> LV004 VGBKUP -wi-ao--- 12.00g
> LV005 VGBKUP -wi-ao--- 512.00m
> LV006 VGBKUP -wi-a---- 160.00m
> LV007 VGBKUP -wi-a---- 4.00g
> LV008 VGBKUP -wi-a---- 512.00m
>
> where, currently, ~45.56G of the phy dev is unused
>
> I've purchased 4 new 3TB drives.
>
> I want to upgrade the existing array of 4x1TB drives to 4x3TB drives.
>
> I want to end up with a single partition, @ max_size == ~ 3TB.
>
> I'd like to do this *in-place*, never bringing down the array.
>
> Iiuc, this IS doable.
>
> 1st, I think the following procedure starts the process correctly:
>
> (1) format each new 3TB drive, with one 1TB partition, as 'linux
> raid autodetect', making sure it's IDENTICAL to the partition layout
> on the current array's disks
>
> (2) with the current array up & running, mdadm FAIL one drive
>
> (3) mdadm remove the FAIL'd drive from the array
>
> (4) physically remove the FAIL'd drive
>
> (5) physically insert the new, pre-formatted 3TB drive
>
> (6) mdadm add the newly inserted drive
>
> (7) allow the array to rebuild, until 'cat /proc/mdstat' says it's done
>
> (8) repeat steps (2) - (7) for each of the three remaining drives.
>
> 2nd, I have to correctly/safely to, in 'some' order
>
> extend the physical partitions on all four drives, or of the array
> (not sure which)
> extend the volume group on the array
> expand, or add, the existing LVMs in the volume group.
>
> I'm really not sure about what steps, in what order to do *here*.
>
> Can anyone verify that my first part is right, and help me out with
> doing the 2nd part right?
>
If it is doable (see comment above), it'll be simpler to just partition
the disks to the final size (or skip partitioning altogether) - md will
quite happily accept larger devices added to an array (though it doesn't
use the extra space). Otherwise, your initial steps are correct - though
if you have a spare bay (or even a USB/SATA adapter), you can add the
drive as a spare and then use the "mdadm --replace" command (you may
need a newer version of mdadm for this) to flag one of the existing
array members for replacement. This will do a direct copy of the data
from the existing disk to the new one and is quicker (and safer) than
fail/add. You'll then need to grow the array, then the volume group,
then the LVs.
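As a sketch (device names hypothetical; and note the caveat above that the final --grow may not work at all for the "far" layout):

```shell
# replace one member via the externally-connected new disk
mdadm /dev/md2 --add /dev/sdg1       # new disk joins as a spare
mdadm /dev/md2 --replace /dev/sdc1   # copy sdc1's data onto the spare
# ...repeat for all four members, then grow up the stack:
mdadm --grow /dev/md2 --size=max     # array uses the larger members
pvresize /dev/md2                    # PV grows to the new array size
lvextend -L +1T /dev/VGBKUP/LV001    # example: extend one LV
```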
As I say above, I think you're out of luck though. I'd recommend
connecting up one of the new drives (if you have a spare bay or can hook
it up externally, do so, otherwise you'll need to fail one of the array
members), then:
- Copy all the data over to the new disk
- Stop the old array
- Remove the old disks and insert the new ones
- Create a new array (with a missing member if you only have 4 bays)
- Copy the data off the single disk and onto the new array
- Add the single disk to the array as the final member
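The middle steps could look something like this (a sketch; names are illustrative, with the new disks appearing as sdc-sdf after the swap):

```shell
# after copying the data to one new disk and stopping the old array,
# build the new array degraded, with one member missing
mdadm --create /dev/md3 --level=10 --layout=f2 --raid-devices=4 \
      /dev/sdc1 /dev/sdd1 /dev/sde1 missing
# ...copy the data from the single disk onto /dev/md3, then
# add that disk as the final member and let it resync
mdadm /dev/md3 --add /dev/sdf1
```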
Cheers,
Robin
--
___
( ' } | Robin Hill <robin@robinhill.me.uk> |
/ / ) | Little Jim says .... |
// !! | "He fallen in de water !!" |
[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 181 bytes --]
* Re: upgrading a RAID array in-place with larger drives. request for review of my approach?
2014-12-01 9:08 ` Robin Hill
@ 2014-12-01 9:42 ` Wols Lists
0 siblings, 0 replies; 6+ messages in thread
From: Wols Lists @ 2014-12-01 9:42 UTC (permalink / raw)
To: linux-raid
On 01/12/14 09:08, Robin Hill wrote:
> If it is doable (see comment above), it'll be simpler to just
> partition the disks to the final size (or skip partitioning at all)
> - md will quite happily accept larger devices added to an array
> (though it doesn't use the extra space). Otherwise, your initial
> steps are correct - though if you have a spare bay (or even a
> USB/SATA adapter), you can add the drive as a spare and then use
> "mdadm --replace" (you may need a newer version of mdadm for this)
> command to flag one of the existing array members for replacement.
> This will do a direct copy of the data from the existing disk to
> the new one and is quicker (and safer) than fail/add.
I upgraded a (RAID 1) system by just adding the new, larger disk. I
think I swapped a 500GB for a 1TB, so replaced my 400GB partitions
with 900GB partitions. I then grew the array, followed by growing the
partition. Worked fine.
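For a RAID-1 that grow step is just (sketch; md device name illustrative):

```shell
mdadm --grow /dev/md0 --size=max   # use the full size of the larger members
resize2fs /dev/md0                 # then grow the filesystem on top (ext*)
```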
Cheers,
Wol
* Re: upgrading a RAID array in-place with larger drives. request for review of my approach?
2014-12-01 4:04 ` terrygalant
@ 2014-12-01 13:47 ` Phil Turmel
0 siblings, 0 replies; 6+ messages in thread
From: Phil Turmel @ 2014-12-01 13:47 UTC (permalink / raw)
To: terrygalant, John Stoffel; +Cc: linux-raid
Good morning Terry,
On 11/30/2014 11:04 PM, terrygalant@mailbolt.com wrote:
> Hi John,
>
> On Sun, Nov 30, 2014, at 07:28 PM, John Stoffel wrote:
>> If you have the ability and the power and space in the chassis, i'd
>> just add in the four new drives, set them up in their RAID10 format,
>
> Unfortunately I don't. I have the 4 slots and thats it :-(
>
> If I did, it'd be pretty easy. But no. So that's why I'm trying to figure out how to do this right -- 'in place'.
You cannot --grow your array, as that isn't supported for the "far"
layout of raid10. Sorry. As you only have four slots, I recommend the
following convoluted procedure:
1) Get the new drives into the box w/ the existing array on the tail of
the space, as follows:
a) Partition the new drive w/ 2T and 1T partitions, with the latter
large enough to serve as a member of the current array.
b) --fail and --remove the old disk.
c) Install the new disk, --add the 1T partition to your array.
d) Let it resync, then repeat for drives 2-4.
2) Create a new, growable array in the collection of 2T partitions.
With newer kernels, raid10,n2 should work. Experiment with that if you
aren't sure. Make sure you enable bitmaps.
3) Use pvcreate and vgextend to merge the new array into your existing
LVM setup.
4) Use pvmove to shift all of your volumes onto the new array.
5) Use vgreduce to drop the old array, then --stop it and destroy it.
6) Repartition each device to delete the 1T partition and then resize
the 2T partition over that space. Use --fail and --re-add to keep the
array happy with minimal disruption.
7) When all resyncing is done, --grow the array then use pvresize to
activate the space.
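A condensed sketch of the above (device names and partition numbering are my assumptions; here sdX1 is the 2T partition and sdX2 the 1T tail partition):

```shell
# Step 1, per disk: swap in a new pre-partitioned drive
mdadm /dev/md2 --fail /dev/sdc1
mdadm /dev/md2 --remove /dev/sdc1
# ...physically swap the disk, then add the 1T tail partition:
mdadm /dev/md2 --add /dev/sdc2
# let it resync, then repeat for the other three drives

# Step 2: new growable raid10,n2 array on the 2T partitions, with bitmap
mdadm --create /dev/md3 --level=10 --layout=n2 --bitmap=internal \
      --raid-devices=4 /dev/sd[cdef]1

# Steps 3-5: migrate LVM onto it and retire the old array
pvcreate /dev/md3
vgextend VGBKUP /dev/md3
pvmove /dev/md2
vgreduce VGBKUP /dev/md2
mdadm --stop /dev/md2

# Steps 6-7: after repartitioning and --fail/--re-add of each member
mdadm --grow /dev/md3 --size=max
pvresize /dev/md3
```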
Enjoy!
Phil
end of thread, other threads:[~2014-12-01 13:47 UTC | newest]
Thread overview: 6+ messages
2014-12-01 2:55 upgrading a RAID array in-place with larger drives. request for review of my approach? terrygalant
2014-12-01 3:28 ` John Stoffel
2014-12-01 4:04 ` terrygalant
2014-12-01 13:47 ` Phil Turmel
2014-12-01 9:08 ` Robin Hill
2014-12-01 9:42 ` Wols Lists