From: Robin Hill <robin@robinhill.me.uk>
To: terrygalant@mailbolt.com
Cc: linux-raid@vger.kernel.org
Subject: Re: upgrading a RAID array in-place with larger drives.  request for review of my approach?
Date: Mon, 1 Dec 2014 09:08:14 +0000
Message-ID: <20141201090814.GA3772@cthulhu.home.robinhill.me.uk>
In-Reply-To: <1417402553.1412807.197119853.0D7A911E@webmail.messagingengine.com>


On Sun Nov 30, 2014 at 06:55:53PM -0800, terrygalant@mailbolt.com wrote:

> Hi,
> 
> I have a 4-drive RAID-10 array.  I've been using mdadm for a while to
> manage the array, replacing drives as they die without changing
> anything else.
> 
> Now, I want to increase its size in-place.  I'd like to ask for some
> help with a review of my setup and plans on how to do it right.
> 
> I'm really open to any advice that'll help me get there without
> blowing this all up!
> 
> My array is
> 
> 	cat /proc/mdstat
> 		...
> 		md2 : active raid10 sdd1[1] sdc1[0] sde1[4] sdf1[3]
> 		      1953519616 blocks super 1.2 512K chunks 2 far-copies [4/4] [UUUU]
> 		      bitmap: 0/466 pages [0KB], 2048KB chunk
> 		...
> 
A question was raised just recently about reshaping "far" RAID10 arrays.
Neil Brown (the md maintainer) said:
    I recommend creating some loop-back block devices and experimenting.

    But I'm fairly sure that "far" RAID10 arrays cannot be reshaped at all.
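
If you want to test that yourself first, a throwaway loop-device
experiment along these lines should answer the question without
risking real data (file names, sizes and the md device number here are
illustrative only):

    # create four sparse 100M files and attach them as loop devices
    truncate -s 100M /tmp/d0 /tmp/d1 /tmp/d2 /tmp/d3
    for i in 0 1 2 3; do losetup /dev/loop$i /tmp/d$i; done
    mdadm --create /dev/md9 --level=10 --layout=f2 \
        --raid-devices=4 /dev/loop[0-3]
    # enlarge the backing files and tell the loop driver about it
    truncate -s 300M /tmp/d0 /tmp/d1 /tmp/d2 /tmp/d3
    for i in 0 1 2 3; do losetup -c /dev/loop$i; done
    # now see whether md will grow a "far" array into the new space
    mdadm --grow /dev/md9 --size=max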

> it comprises 4 drives; each is 1TB physical size, partitioned
> with a single max-size partition of type 'Linux raid autodetect'
> 
> 	fdisk -l /dev/sd[cdef]
> 
> 		Disk /dev/sdc: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
> 		Units: sectors of 1 * 512 = 512 bytes
> 		Sector size (logical/physical): 512 bytes / 512 bytes
> 		I/O size (minimum/optimal): 512 bytes / 512 bytes
> 		Disklabel type: dos
> 		Disk identifier: 0x00000000
> 
> 		Device     Boot Start        End    Sectors   Size Id Type
> 		/dev/sdc1          63 1953520064 1953520002 931.5G fd Linux raid autodetect
> 
> 		Disk /dev/sdd: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
> 		Units: sectors of 1 * 512 = 512 bytes
> 		Sector size (logical/physical): 512 bytes / 512 bytes
> 		I/O size (minimum/optimal): 512 bytes / 512 bytes
> 		Disklabel type: dos
> 		Disk identifier: 0x00000000
> 
> 		Device     Boot Start        End    Sectors   Size Id Type
> 		/dev/sdd1          63 1953520064 1953520002 931.5G fd Linux raid autodetect
> 
> 		Disk /dev/sde: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
> 		Units: sectors of 1 * 512 = 512 bytes
> 		Sector size (logical/physical): 512 bytes / 512 bytes
> 		I/O size (minimum/optimal): 512 bytes / 512 bytes
> 		Disklabel type: dos
> 		Disk identifier: 0x00000000
> 
> 		Device     Boot Start        End    Sectors   Size Id Type
> 		/dev/sde1          63 1953520064 1953520002 931.5G fd Linux raid autodetect
> 
> 		Disk /dev/sdf: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
> 		Units: sectors of 1 * 512 = 512 bytes
> 		Sector size (logical/physical): 512 bytes / 512 bytes
> 		I/O size (minimum/optimal): 512 bytes / 512 bytes
> 		Disklabel type: dos
> 		Disk identifier: 0x00000000
> 
> 		Device     Boot Start        End    Sectors   Size Id Type
> 		/dev/sdf1          63 1953520064 1953520002 931.5G fd Linux raid autodetect
> 
> the array holds only LVM - multiple LVs - with a total RAID-10 size
> of ~2TB,
> 
> 	pvs /dev/md2
> 	  PV         VG     Fmt  Attr PSize PFree
> 	  /dev/md2   VGBKUP lvm2 a--  1.82t 45.56g
> 	vgs VGBKUP
> 	  VG     #PV #LV #SN Attr   VSize VFree
> 	  VGBKUP   1   8   0 wz--n- 1.82t 45.56g
> 	lvs VGBKUP
> 	  LV                VG     Attr      LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
> 	  LV001             VGBKUP -wi-ao---   1.46t
> 	  LV002             VGBKUP -wi-ao--- 300.00g
> 	  LV003             VGBKUP -wi-ao--- 160.00m
> 	  LV004             VGBKUP -wi-ao---  12.00g
> 	  LV005             VGBKUP -wi-ao--- 512.00m
> 	  LV006             VGBKUP -wi-a---- 160.00m
> 	  LV007             VGBKUP -wi-a----   4.00g
> 	  LV008             VGBKUP -wi-a---- 512.00m
> 
> where, currently, ~45.56G of the physical volume is unused
> 
> I've purchased 4 new 3TB drives.
> 
> I want to upgrade the existing array of 4x1TB drives to 4x3TB drives.
> 
> I want to end up with a single partition, @ max_size == ~ 3TB.
> 
> I'd like to do this *in-place*, never bringing down the array.
> 
> IIUC, this IS doable.
> 
> 1st, I think the following procedure starts the process correctly:
> 
> 	(1) partition each new 3TB drive with a single 1TB partition of
> 	type 'Linux raid autodetect', making sure it's IDENTICAL to the
> 	partition layout on the current array's disks
> 
> 	(2) with the current array up & running, mdadm FAIL one drive
> 
> 	(3) mdadm remove the FAIL'd drive from the array
> 
> 	(4) physically remove the FAIL'd drive
> 
> 	(5) physically insert the new, pre-formatted 3TB drive
> 
> 	(6) mdadm add the newly inserted drive
> 
> 	(7) allow the array to rebuild, until 'cat /proc/mdstat' says it's done
> 
> 	(8) repeat steps (2) - (7) for each of the three remaining drives.
> 
> 2nd, I have to, correctly/safely and in 'some' order:
> 
> 	extend the physical partitions on all four drives, or the array
> 	itself (not sure which)
> 	extend the volume group on the array
> 	expand, or add to, the existing LVs in the volume group.
> 
> I'm really not sure which steps to do, or in what order, *here*.
> 
> Can anyone verify that my first part is right, and help me out with
> doing the 2nd part right?
> 
If it is doable (see comment above), it'll be simpler to just partition
the disks to the final size (or skip partitioning altogether) - md will
quite happily accept larger devices added to an array (though it won't
use the extra space until you grow the array). Otherwise, your initial
steps are correct - though if you have a spare bay (or even a USB/SATA
adapter), you can add the new drive as a spare and then use the "mdadm
--replace" command (you may need a newer version of mdadm for this) to
flag one of the existing array members for replacement. This will do a
direct copy of the data from the existing disk to the new one and is
quicker (and safer) than fail/add.
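
As a rough sketch, assuming the new disk shows up as /dev/sdg and
you've partitioned it as /dev/sdg1 (--replace needs mdadm 3.3 and a
3.3+ kernel, I believe):

    mdadm /dev/md2 --add /dev/sdg1        # new disk goes in as a spare
    mdadm /dev/md2 --replace /dev/sdc1    # copy sdc1 onto the spare
    # once the copy completes, sdc1 is marked faulty and can be pulled:
    mdadm /dev/md2 --remove /dev/sdc1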

You'll then need to grow the array, then the volume group, then the LVs.
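
Roughly, and assuming the new members were already partitioned to
their full size - using LV001 as the example, and ext3/4 as the
filesystem (other filesystems have their own resize tools):

    mdadm --grow /dev/md2 --size=max   # use the new space on each member
    pvresize /dev/md2                  # grow the LVM physical volume
    lvextend -L +1T VGBKUP/LV001       # grow whichever LVs need it
    resize2fs /dev/VGBKUP/LV001        # then the filesystem on that LV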

As I say above, though, I think you're out of luck. I'd recommend
connecting up one of the new drives (if you have a spare bay or can
hook it up externally, do so; otherwise you'll need to fail one of the
array members), then (a rough command sketch follows the list):
    - Copy all the data over to the new disk
    - Stop the old array
    - Remove the old disks and insert the new ones
    - Create a new array (with a missing member if you only have 4 bays)
    - Copy the data off the single disk and onto the new array
    - Add the single disk to the array as the final member
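
The array-handling steps might look something like this - device names
are illustrative, and the data copies themselves (file-level or
LVM-level) are elided:

    # build the new array degraded, with the 4th member missing
    mdadm --create /dev/md3 --level=10 --layout=f2 \
        --raid-devices=4 missing /dev/sd[ghi]1
    # ...set up LVM/filesystems on md3 and copy the data across...
    # then add the disk that held the interim copy as the final member
    mdadm /dev/md3 --add /dev/sdj1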

Cheers,
    Robin
-- 
     ___        
    ( ' }     |       Robin Hill        <robin@robinhill.me.uk> |
   / / )      | Little Jim says ....                            |
  // !!       |      "He fallen in de water !!"                 |
