* Linear raid extend component
From: Adam Goryachev <adam@websitemanagers.com.au> @ 2012-12-29 11:44 UTC
To: linux-raid@vger.kernel.org
I have 3 RAID devices: two RAID1 arrays, which are then combined into a
linear RAID. I've now replaced both drives of one of the RAID1 arrays
(from 2 x 1TB to 2 x 3TB).

I now want to grow the RAID1 from 1TB to 3TB and then grow the linear
array to include the extra space. However, my concern is that if this is
the first array in the linear device, I will corrupt the filesystem
because there will be a big blank space in the middle.
mdadm --misc --query /dev/md1
/dev/md1: 931.51GB raid1 2 devices, 0 spares.
/dev/md1: device 1 in 2 device unknown linear array.
mdadm --misc --query /dev/md2
/dev/md2: 1863.01GB raid1 2 devices, 0 spares.
/dev/md2: device 0 in 2 device unknown linear array.
So, I've increased the size of the drives in md1, which should be the
end of md3. Can I just do:
mdadm --grow /dev/md1 --size=max
Then when complete:
mdadm --grow /dev/md3 --size=max
Thanks for any assistance.
Regards,
Adam
Sent from my mobile
Adam Goryachev
Website Managers
http://www.websitemanagers.com.au
* Re: Linear raid extend component
From: Chris Murphy @ 2012-12-29 16:59 UTC
To: linux-raid

On Dec 29, 2012, at 4:44 AM, Adam Goryachev <adam@websitemanagers.com.au> wrote:

> I have 3 RAID devices, 2 RAID1 which are then combined into a linear
> RAID. I've now replaced both drives of one of the RAID1 (from 2 x 1TB
> to 2 x 3TB).
>
> I now want to grow the raid1 from 1TB to 3TB and then grow the linear
> to include the extra space. However, my concern is that if this is the
> first array in the linear I will corrupt the filesystem because there
> will be a big blank space in the middle.

What file system? I'm not aware of any off hand that tolerates being
grown from anything but the end.

Chris Murphy
* Re: Linear raid extend component
From: Chris Murphy @ 2012-12-29 18:14 UTC
To: linux-raid

On Dec 29, 2012, at 4:44 AM, Adam Goryachev <adam@websitemanagers.com.au> wrote:

> Then when complete:
> mdadm --grow /dev/md3 --size=max

In a VM I'm unable to get a linear device to grow. I think you can only
grow linear by adding devices. You could partition the 3TB drive such
that the 1st partition matches the total sectors of the partition on the
replaced 1TB drive, then add the 2TB partition of the 3TB drive onto the
end of the linear array. It'd work, but it's a weird configuration.
You're better off reverting to the original setup, adding the 3TB drives
at the end, and then growing the file system.

[root@f18v ~]# mdadm -G /dev/md0 --size=max
mdadm: component size of /dev/md0 unchanged at 0K

Chris Murphy
* Re: Linear raid extend component
From: Stan Hoeppner @ 2012-12-29 20:24 UTC
To: Chris Murphy; +Cc: linux-raid

On 12/29/2012 12:14 PM, Chris Murphy wrote:

> In a VM I'm unable to get a linear device to grow. I think you can only
> grow linear by adding devices. You could partition the 3TB such that
> the 1st partition matches the total sectors of the partition on the
> replaced 1TB drive; then add the 2TB partition of the 3TB drive onto
> the end of the linear array. It'd work, but it's a weird configuration.
> You're better off reverting to the original setup, and adding the 3TB
> drives at the end, then growing the file system.

Somebody posted the same scenario a few weeks ago. The only 'proper' way
to do this is to swap out the drives in the last RAID1 pair in the
linear array. The whole point of a linear array, or concatenation, is to
constantly add drives to expand. Swapping drives in a concat for larger
units was never anticipated as a valid growth option. As Chris states,
you can do this with partitions, but it is not elegant.

Since your total array size is ~3TB, if you're using XFS you could
simply do a dump to one of the new 3TB drives. Afterwards you can blow
everything away and start over: create a RAID1 from the remaining 3TB
drive with a missing member, create the RAID1 with the 2 x 2TB drives,
create the linear array and format it, then restore the XFS filesystem.
Finally, add the first 3TB drive back to the degraded RAID1.

You can do the same thing with any filesystem, but you'll be using
'cp -a' or rsync, etc. to move the files around, which is much slower.

-- 
Stan
* Re: Linear raid extend component
From: Chris Murphy @ 2012-12-29 21:52 UTC
To: linux-raid

On Dec 29, 2012, at 1:24 PM, Stan Hoeppner <stan@hardwarefreak.com> wrote:

> Somebody posted the same scenario a few weeks ago. The only 'proper'
> way to do this is to swap out the drives in the last RAID1 pair in the
> linear array.

I haven't tried this, but man mdadm says about linear:

"If the target array is a Linear array, then --add can be used to add
one or more devices to the array. They are simply catenated on to the
end of the array. Once added, the devices cannot be removed."

So in any case it seems he'd have to partition that 3TB disk. But at
least by adding it at the end of the linear array, the 3TB disk's 2nd
partition at 2TB is linearly arranged within the linear array. So the
same LBAs are used in either case, whether two partitions or one. And
even if an XFS AG were bisected by the partitioning, since it's linearly
arranged, offhand I see no performance downside to this.

Making the linear array effectively non-linear, by adding some earlier
disk's 2nd partition to the end of the linear array, would cause disk
contention if XFS is being used; maybe it's negligible depending on the
usage, and likely negligible if a non-parallel fs is being used. *shrug*
But it's a confusing arrangement for any sysadmin, current or the one
who inherits the beginnings of such a rat's nest.

So when growing an XFS volume, does it add more AGs automatically when
it sees additional underlying devices?

Chris Murphy
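Chris's "same LBAs are used in either case" point can be sketched
numerically. The helper below is my own illustration (the component
sizes are made-up abstract units, not values from this thread): it maps
a linear-array sector to the component it falls on, comparing one whole
3000-sector member against the same space split into 1000 + 2000.

```python
def locate(sector, components):
    """Return (component_index, offset_within_component) for a linear
    (concatenated) array built from `components`, a list of sizes."""
    start = 0
    for i, size in enumerate(components):
        if sector < start + size:
            return i, sector - start
        start += size
    raise ValueError("sector beyond end of array")

# Hypothetical layouts: a 2000-unit first member (the old RAID1), then
# either one 3000-unit member or the same disk split 1000 + 2000.
whole = [2000, 3000]
split = [2000, 1000, 2000]

# A sector in the first 1000 units of the big disk lands identically:
print(locate(2500, whole))   # (1, 500)
print(locate(2500, split))   # (1, 500)

# A sector further in still hits the same physical offset on that disk:
# whole -> offset 2000 into member 1; split -> offset 1000 into the 2nd
# partition, which itself starts 1000 in, i.e. the same on-disk spot.
print(locate(4000, whole))   # (1, 2000)
print(locate(4000, split))   # (2, 1000)
```

Either way the bytes sit at the same on-disk locations in the same
order, which is why splitting the disk into two partitions costs nothing
in seek terms as long as both partitions stay adjacent in the array.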
* Re: Linear raid extend component
From: Stan Hoeppner @ 2012-12-31 4:23 UTC
To: Chris Murphy; +Cc: linux-raid

On 12/29/2012 3:52 PM, Chris Murphy wrote:

> Making the linear array effectively non-linear by adding some earlier
> disk's 2nd partition to the end of the linear array would cause disk
> contention if XFS is being used; maybe it's negligible depending on the
> usage. [...] But it's a confusing arrangement for any sysadmin, current
> or the one who inherits the beginnings of such a rat's nest.

Yeah, it's ugly no matter what, for everyone. Which is why I recommend
against doing such a thing, no matter which FS is used on top.

> So when growing an XFS volume, does it add more AGs automatically when
> it sees additional underlying devices?

XFS allocation group size is fixed, static. When growing an XFS
filesystem, new AGs are created in the new free space. And to be clear,
XFS doesn't see additional underlying devices. It simply sees more
unallocated sectors at the end of the current device, that being a
concat or striped array.

-- 
Stan
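Stan's description reduces to simple arithmetic: the AG size is fixed
at mkfs time, so covering a grown filesystem just means more AGs, the
last one possibly partial. A toy calculation (the block counts are
invented for illustration, not real mkfs.xfs or xfs_growfs output):

```python
import math

def ag_count(fs_blocks, agsize_blocks):
    # AG size is fixed when the filesystem is made; a larger filesystem
    # is covered by adding whole AGs plus at most one final partial AG.
    return math.ceil(fs_blocks / agsize_blocks)

AGSIZE = 100_000                              # hypothetical AG size in fs blocks
before = ag_count(4 * AGSIZE, AGSIZE)         # 4 AGs before growing
after = ag_count(650_000, AGSIZE)             # grown: 7 AGs, last one partial
print(before, after)                          # 4 7
```

The AGs themselves don't know or care where the underlying device
boundaries fall, which matches Stan's point that XFS only sees extra
sectors at the end of one block device.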
* Re: Linear raid extend component
From: Adam Goryachev @ 2013-01-10 12:14 UTC
To: linux-raid

On 09/01/13 18:13, Adam Goryachev wrote:
> Actually, once I remembered that I did this before when I upgraded, I
> went to Google and found:
> http://permalink.gmane.org/gmane.linux.raid/38963
>
> which was the result of my request last time:
>
>   NeilBrown | 25 Jun 2012 04:52
>   I just checked this with loop devices and it works as expected
>   (assuming you have 1.1 or 1.2 metadata). So:
>     mdadm -S /dev/md2
>     mdadm -A --update=devicesize /dev/md0 /dev/md1
>   (order doesn't matter with assemble).
>
> So, as soon as I can unmount the partition, I'll give that a try again,
> and hopefully I'm all sorted.
>
> Since I spent so much time typing all the below, I'll leave it there
> anyway.
>
> Thanks again for everyone's help, and just another reminder to check
> Google first; someone else (or yourself) might have had the same
> question before :)

OK, so this is almost working... I've worked out that I can't use fdisk
on a drive larger than 2TB, so I've moved on to parted. I removed one
drive from the RAID array (fail and remove), used parted to create a GPT
partition table and a partition, added the drive back to the RAID1,
waited for it to sync, and then repeated the process with the second
drive.
However, it doesn't seem to be working properly on the second drive, as
I don't get the extra space.

This one is working/the right size:

parted /dev/sdc print
Model: ATA WDC WD30EFRX-68A (scsi)
Disk /dev/sdc: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  3001GB  3001GB               primary  raid

mdadm --misc --examine /dev/sdc1
/dev/sdc1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 84e0923e:a10ebc3a:fc28c832:2341bfd8
           Name : myhost:1  (local to host myhost)
  Creation Time : Wed Feb  1 19:26:45 2012
     Raid Level : raid1
   Raid Devices : 2
 Avail Dev Size : 5860528128 (2794.52 GiB 3000.59 GB)
     Array Size : 4294963199 (2048.00 GiB 2199.02 GB)
  Used Dev Size : 4294963199 (2048.00 GiB 2199.02 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 56c29503:527ce1ec:2af3394a:1739e6f4
    Update Time : Thu Jan 10 23:02:47 2013
       Checksum : 1003d87f - correct
         Events : 50468
    Device Role : Active device 0
    Array State : A. ('A' == active, '.' == missing)

Note the "Avail Dev Size" is 3TB, larger than the Array Size and Used
Dev Size.
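As a sanity check on those numbers: the superblock sizes are counted in
512-byte sectors, and converting them reproduces mdadm's GiB/GB figures
exactly (pure arithmetic; nothing below touches mdadm itself):

```python
SECTOR = 512  # md v1.x superblock sizes are in 512-byte sectors

def fmt(sectors):
    """Render a sector count the way mdadm --examine does."""
    b = sectors * SECTOR
    return f"{b / 2**30:.2f} GiB {b / 1e9:.2f} GB"

print(fmt(5860528128))   # Avail Dev Size -> 2794.52 GiB 3000.59 GB
print(fmt(4294963199))   # Array / Used Dev Size -> 2048.00 GiB 2199.02 GB
```

So sdc1 really does expose ~3TB to md, while the array is still using
only the first ~2.2TB of it, which is exactly the state you'd expect
before growing.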
This drive is not working properly, but parted looks correct (identical
to the working one above):

parted /dev/sde print
Model: ATA WDC WD30EFRX-68A (scsi)
Disk /dev/sde: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  3001GB  3001GB               primary  raid

mdadm --misc --examine /dev/sde1
/dev/sde1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 84e0923e:a10ebc3a:fc28c832:2341bfd8
           Name : keep.websitemanagers.com.au:1  (local to host keep.websitemanagers.com.au)
  Creation Time : Wed Feb  1 19:26:45 2012
     Raid Level : raid1
   Raid Devices : 2
 Avail Dev Size : 4294963199 (2048.00 GiB 2199.02 GB)
     Array Size : 4294963199 (2048.00 GiB 2199.02 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 5b37728e:f4e4ee3c:03284a2a:375bc4e7
    Update Time : Thu Jan 10 22:56:46 2013
       Checksum : 608e6b8c - correct
         Events : 50428
    Device Role : Active device 1
    Array State : AA ('A' == active, '.' == missing)

This time the "Avail Dev Size" is the same as previously, when I used
fdisk to create a normal partition, so it looks like the GPT partition
is not visible.

If I remove sde1 from the array (fail and remove) and then run
kpartx -av /dev/sde, I get this:

kpartx -av /dev/sde
add map sde1 (253:0): 0 5860530176 linear /dev/sde 2048

but mdadm --misc --examine /dev/sde1 does not change at all.

Two questions:

1) What do I need to do to get sde1 to show the correct size? Once that
happens, I can grow the RAID1 array and then extend the linear array.

2) Are these partitions aligned properly? When I re-add the partition to
the array, I see this in dmesg:

md1: Warning: Device sde1 is misaligned

I know fdisk is not going to work properly, but it also shows a
mis-alignment:

fdisk -l /dev/sde

WARNING: GPT (GUID Partition Table) detected on '/dev/sde'! The util
fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sde: 3000.6 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1      267350  2147483647+  ee  GPT
Partition 1 does not start on physical sector boundary.

However, parted says it is optimal:

(parted) align-check
alignment type(min/opt)  [optimal]/minimal?
Partition number? 1
1 aligned

I thought I read that the partition should start at 1M, but parted seems
to have put it a little further in. So I don't know what to believe...

Any comments/suggestions would be appreciated.

Thanks,
Adam

-- 
Adam Goryachev
Website Managers
Ph:  +61 2 8304 0000    adam@websitemanagers.com.au
Fax: +61 2 8304 0001    www.websitemanagers.com.au
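The alignment question above can be checked arithmetically. Parted's
"1049kB" is its rounded display of a start at sector 2048, which is
exactly 1 MiB and a clean multiple of the 4096-byte physical sector;
the non-aligned start fdisk complains about plausibly belongs to the
GPT protective-MBR entry at LBA 1, not the real partition (a sketch
assuming the usual 2048-sector default start):

```python
LOGICAL = 512        # logical sector size reported for this disk
PHYSICAL = 4096      # physical sector size (4K Advanced Format)
MIB = 2**20

start_sector = 2048              # assumed parted default, shown as "1049kB"
start_bytes = start_sector * LOGICAL

print(start_bytes)               # 1048576, i.e. exactly 1 MiB
print(start_bytes == MIB)        # True: "1049kB" is only display rounding
print(start_bytes % PHYSICAL)    # 0 -> aligned to the 4K physical sector

# The protective-MBR entry fdisk prints starts at LBA 1, which is not
# 4K-aligned; that would explain fdisk's warning on a GPT disk:
print((1 * LOGICAL) % PHYSICAL)  # 512, not 0
```

In other words, parted's "1 aligned" verdict and fdisk's warning are
not necessarily in conflict: they can be describing different partition
table entries.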
* Re: Linear raid extend component
From: Mikael Abrahamsson @ 2012-12-30 6:50 UTC
To: Adam Goryachev; +Cc: linux-raid@vger.kernel.org

On Sat, 29 Dec 2012, Adam Goryachev wrote:

> I have 3 RAID devices, 2 RAID1 which are then combined into a linear
> RAID. I've now replaced both drives of one of the RAID1 (from 2 x 1TB
> to 2 x 3TB).
>
> I now want to grow the raid1 from 1TB to 3TB and then grow the linear
> to include the extra space. However, my concern is that if this is the
> first array in the linear I will corrupt the filesystem because there
> will be a big blank space in the middle.

You'd probably be better off using something that was meant to handle
these cases, such as LVM, instead of linear md-raid. The sooner you make
this move, the less trouble you'll have down the line.

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se