* [linux-lvm] Need help with a particular use-case for pvmove.
@ 2010-11-13 21:45 Stirling Westrup
2010-11-13 23:03 ` Lars Ellenberg
2010-11-13 23:22 ` Ray Morris
0 siblings, 2 replies; 9+ messages in thread
From: Stirling Westrup @ 2010-11-13 21:45 UTC (permalink / raw)
To: linux-lvm
I have a 4-slot storage array with all slots filled and each of the
four drives having a single LVM2 partition. These pv's are all
collected together into a single volume group called 'Storage' and
containing a single logical volume called 'Data'. This setup has been
working fine until now, but I've almost run out of storage on the
array. Plus, one of the drives is showing signs of imminent failure,
and I'd like to replace it without data loss.
I got a new 2T drive to replace the near-failure 1T drive and thought
that I could just unplug one of the good 1T drives, plug in the new 2T
drive and do a 'pvmove' from the failing drive to the new drive. I
don't have any way to plug all 5 drives in at once, as my server is
PATA and my only SATA slots are in the array.
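Concretely, all I expected to need once the new drive was attached was
something like this (device names here are just placeholders; say sdd1
is the failing drive's partition and sde1 the new drive's):
  # pvcreate /dev/sde1
  # vgextend Storage /dev/sde1
  # pvmove /dev/sdd1 /dev/sde1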
However pvmove tells me that I cannot do this with a missing drive. I
can't figure out why this should be. Logically I shouldn't need access
to the volume groups or logical volumes if I'm not starting the
device-mapper or mounting the filesystem built on the logical volume.
I'm only using LVM because I thought it would give me the ability to
swap out drives in just the way I am now trying.
All I want to do is move physical extents from one physical volume to
another. Both of those volumes are present and accessible. Why should
uninvolved missing volumes be an issue, and is there any way around
it? pvmove suggests running "vgreduce --removemissing" but the
documentation for vgreduce seems to say that I'd need to 1) use
--force and 2) it would likely result in data loss.
Is there anything I can do, short of borrowing another storage array
somewhere, just so I can have an extra slot to do this move? My other
option is to put the new drive into a USB case, but the server only
supports USB1, so moving a terabyte will take over a week.
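(Rough arithmetic: USB 1.1 delivers on the order of 1 MB/s in practice,
so 1 TB at 1 MB/s is about 1,000,000 seconds, i.e. 11-12 days.)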
Any help would be appreciated, thanks.
--
Stirling Westrup
Programmer, Entrepreneur.
https://www.linkedin.com/e/fpf/77228
http://www.linkedin.com/in/swestrup
http://technaut.livejournal.com
* Re: [linux-lvm] Need help with a particular use-case for pvmove.
2010-11-13 21:45 [linux-lvm] Need help with a particular use-case for pvmove Stirling Westrup
@ 2010-11-13 23:03 ` Lars Ellenberg
2010-11-14 3:56 ` Stirling Westrup
2010-11-14 3:57 ` Stirling Westrup
2010-11-13 23:22 ` Ray Morris
1 sibling, 2 replies; 9+ messages in thread
From: Lars Ellenberg @ 2010-11-13 23:03 UTC (permalink / raw)
To: linux-lvm
On Sat, Nov 13, 2010 at 04:45:36PM -0500, Stirling Westrup wrote:
> I have a 4-slot storage array with all slots filled and each of the
> four drives having a single LVM2 partition. These pv's are all
> collected together into a single volume group called 'Storage' and
> containing a single logical volume called 'Data'. This setup has been
> working fine until now, but I've almost run out of storage on the
> array. Plus, one of the drives is showing signs of imminent failure,
> and I'd like to replace it without data loss.
>
> I got a new 2T drive to replace the near-failure 1T drive and thought
> that I could just unplug one of the good 1T drives, plug in the new 2T
> drive and do a 'pvmove' from the failing drive to the new drive. I
> don't have any way to plug all 5 drives in at once, as my server is
> PATA and my only SATA slots are in the array.
>
> However pvmove tells me that I cannot do this with a missing drive. I
> can't figure out why this should be. Logically I shouldn't need access
> to the volume groups or logical volumes if I'm not starting the
> device-mapper or mounting the filesystem built on the logical volume.
> I'm only using LVM because I thought it would give me the ability to
> swap out drives in just the way I am now trying.
>
> All I want to do is move physical extents from one physical volume to
> another. Both of those volumes are present and accessible. Why should
> uninvolved missing volumes be an issue, and is there any way around
> it? pvmove suggests running "vgreduce --removemissing" but the
> documentation for vgreduce seems to say that I'd need to 1) use
> --force and 2) it would likely result in data loss.
>
> Is there anything I can do, short of borrowing another storage array
> somewhere, just so I can have an extra slot to do this move? My other
> option is to put the new drive into a USB case, but the server only
> supports USB1, so moving a terabyte will take over a week.
>
> Any help would be appreciated, thanks.
If you do it offline anyways:
Shut down.
Unplug one of the good old drives, plug in the new drive.
If you want to be extra sure against typos,
unplug all but the bad-old drive ;-)
Boot into maintenance mode, use a live-cd if you have to.
Don't activate the VG. It won't activate with one pv missing anyways,
unless you really want it to.
Then dd_rescue copy the disk image from bad-old to new, including
everything (partition table, if any, LVM signature, the full image).
Remove bad-old drive, have all other old and the new plugged,
reboot normally.
Done.
Estimated downtime, assuming a sustained linear write speed of 80 MiB/s:
1 TiB / (80 MiB/s), well under 4 hours.
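(1 TiB = 1,048,576 MiB; 1,048,576 MiB / 80 MiB/s is roughly 13,100 s,
or about 3.6 hours.)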
If you want to do it "live", using pvmove, you have to have all old
drives plugged in, _and_ the new drive. Even via USB1, if that is your
only option. Add the new drive as pv to the VG, then pvmove. As it is
live, you don't have any further downtime yet, but of course you will
have performance impact.
As long as you don't get a drive or USB or other failure during the
process you should be fine, so it should not matter if it takes a week.
You probably could plug in some external sata card as well,
or use nbd or iscsi and thus have the pvmove stream it via network.
You will then have to reduce the bad-old drive from the VG, and
shutdown/unplug old/replug new, which of course involves downtime again.
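Roughly, and only as a sketch (device names are placeholders; say the
failing drive is sdd and the new one, attached via USB for now, shows
up as sde):
  # pvcreate /dev/sde1
  # vgextend Storage /dev/sde1
  # pvmove /dev/sdd1 /dev/sde1    # runs live; can be restarted if interrupted
  # vgreduce Storage /dev/sdd1
  # pvremove /dev/sdd1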
You should consider using some sort of RAID in the future.
hth,
--
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com
DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
* Re: [linux-lvm] Need help with a particular use-case for pvmove.
2010-11-13 21:45 [linux-lvm] Need help with a particular use-case for pvmove Stirling Westrup
2010-11-13 23:03 ` Lars Ellenberg
@ 2010-11-13 23:22 ` Ray Morris
1 sibling, 0 replies; 9+ messages in thread
From: Ray Morris @ 2010-11-13 23:22 UTC (permalink / raw)
To: swestrup, LVM general discussion and development
> However pvmove tells me that I cannot do this with a missing drive. I
> can't figure out why this should be. Logically I shouldn't need access
> to the volume groups or logical volumes
...
> All I want to do is move physical extents from one physical volume to
> another. Both of those volumes are present and accessible. Why should
> uninvolved missing volumes be an issue, and is there any way around
> it?
All of the PVs in a VG have a copy of the metadata that you want to
change - the description of which extent is stored where. If you
change it while some PVs are missing, there will then be two
inconsistent versions of the metadata for the VG, with different PVs
disagreeing on where the extents are.
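If you want to see what each PV carries, you can dump a text copy of
the VG metadata (output file name is arbitrary) and look at the
extent-to-PV mapping:
  # vgcfgbackup -f /tmp/Storage-metadata.txt Storage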
--
Ray Morris
support@bettercgi.com
Strongbox - The next generation in site security:
http://www.bettercgi.com/strongbox/
Throttlebox - Intelligent Bandwidth Control
http://www.bettercgi.com/throttlebox/
Strongbox / Throttlebox affiliate program:
http://www.bettercgi.com/affiliates/user/register.php
* Re: [linux-lvm] Need help with a particular use-case for pvmove.
2010-11-13 23:03 ` Lars Ellenberg
@ 2010-11-14 3:56 ` Stirling Westrup
2010-11-14 21:58 ` Lars Ellenberg
2010-11-14 22:56 ` Stuart D Gathman
2010-11-14 3:57 ` Stirling Westrup
1 sibling, 2 replies; 9+ messages in thread
From: Stirling Westrup @ 2010-11-14 3:56 UTC (permalink / raw)
To: LVM general discussion and development
On Sat, Nov 13, 2010 at 6:03 PM, Lars Ellenberg
<lars.ellenberg@linbit.com> wrote:
> On Sat, Nov 13, 2010 at 04:45:36PM -0500, Stirling Westrup wrote:
...
>> All I want to do is move physical extents from one physical volume to
>> another. Both of those volumes are present and accessible. Why should
>> uninvolved missing volumes be an issue, and is there any way around
>> it? pvmove suggests running "vgreduce --removemissing" but the
>> documentation for vgreduce seems to say that I'd need to 1) use
>> --force and 2) it would likely result in data loss.
>>
>> Is there anything I can do, short of borrowing another storage array
>> somewhere, just so I can have an extra slot to do this move? My other
>> option is to put the new drive into a USB case, but the server only
>> supports USB1, so moving a terabyte will take over a week.
>>
> If you do it offline anyways:
>
> Shut down.
> Unplug one of the good old drives, plug in the new drive.
> If you want to be extra sure against typos,
> unplug all but the bad-old drive ;-)
>
> Boot into maintenance mode, use a live-cd if you have to.
>
> Don't activate the VG. It won't activate with one pv missing anyways,
> unless you really want it to.
>
> Then dd_rescue copy the disk image from bad-old to new, including
> everything (partition table, if any, LVM signature, the full image).
>
> Remove bad-old drive, have all other old and the new plugged,
> reboot normally.
>
> Done.
>
> Estimated downtime, assuming a sustained linear write speed of 80 MiB/s:
> 1 TiB / (80 MiB/s), well under 4 hours.
>
Thanks. I did consider this, but the new drive is twice the size of
the old one, so I would need to make sure I had created a partition on
the new drive the exact size of the old one, and had dd-ed everything
correctly. Even then, I wasn't sure if it would work, because I don't
know what the metadata records in terms of the drive configurations.
--
Stirling Westrup
Programmer, Entrepreneur.
https://www.linkedin.com/e/fpf/77228
http://www.linkedin.com/in/swestrup
http://technaut.livejournal.com
* Re: [linux-lvm] Need help with a particular use-case for pvmove.
2010-11-13 23:03 ` Lars Ellenberg
2010-11-14 3:56 ` Stirling Westrup
@ 2010-11-14 3:57 ` Stirling Westrup
2010-11-14 23:05 ` Stuart D Gathman
1 sibling, 1 reply; 9+ messages in thread
From: Stirling Westrup @ 2010-11-14 3:57 UTC (permalink / raw)
To: LVM general discussion and development
On Sat, Nov 13, 2010 at 6:03 PM, Lars Ellenberg
<lars.ellenberg@linbit.com> wrote:
> You will then have to reduce the bad-old drive from the VG, and
> shutdown/unplug old/replug new, which of course involves downtime again.
>
> You should consider using some sort of RAID in the future.
>
My last server lost 80% of its data due to a bug in the raid software,
so I'm rather leery of going with a raid solution. I was hoping LVM
would be better.
--
Stirling Westrup
Programmer, Entrepreneur.
https://www.linkedin.com/e/fpf/77228
http://www.linkedin.com/in/swestrup
http://technaut.livejournal.com
* Re: [linux-lvm] Need help with a particular use-case for pvmove.
2010-11-14 3:56 ` Stirling Westrup
@ 2010-11-14 21:58 ` Lars Ellenberg
2010-11-14 23:52 ` Stirling Westrup
2010-11-14 22:56 ` Stuart D Gathman
1 sibling, 1 reply; 9+ messages in thread
From: Lars Ellenberg @ 2010-11-14 21:58 UTC (permalink / raw)
To: linux-lvm
On Sat, Nov 13, 2010 at 10:56:05PM -0500, Stirling Westrup wrote:
> On Sat, Nov 13, 2010 at 6:03 PM, Lars Ellenberg
> <lars.ellenberg@linbit.com> wrote:
> > On Sat, Nov 13, 2010 at 04:45:36PM -0500, Stirling Westrup wrote:
>
> ...
> >> All I want to do is move physical extents from one physical volume to
> >> another. Both of those volumes are present and accessible. Why should
> >> uninvolved missing volumes be an issue, and is there any way around
> >> it? pvmove suggests running "vgreduce --removemissing" but the
> >> documentation for vgreduce seems to say that I'd need to 1) use
> >> --force and 2) it would likely result in data loss.
> >>
> >> Is there anything I can do, short of borrowing another storage array
> >> somewhere, just so I can have an extra slot to do this move? My other
> >> option is to put the new drive into a USB case, but the server only
> >> supports USB1, so moving a terabyte will take over a week.
> >>
>
> > If you do it offline anyways:
> >
> > Shut down.
> > Unplug one of the good old drives, plug in the new drive.
> > If you want to be extra sure against typos,
> > unplug all but the bad-old drive ;-)
> >
> > Boot into maintenance mode, use a live-cd if you have to.
> >
> > Don't activate the VG. It won't activate with one pv missing anyways,
> > unless you really want it to.
> >
> > Then dd_rescue copy the disk image from bad-old to new, including
> > everything (partition table, if any, LVM signature, the full image).
> >
> > Remove bad-old drive, have all other old and the new plugged,
> > reboot normally.
> >
> > Done.
> >
> > Estimated downtime, assuming a sustained linear write speed of 80 MiB/s:
> > 1 TiB / (80 MiB/s), well under 4 hours.
> >
>
> Thanks. I did consider this, but the new drive is twice the size of
> the old one, so I would need to make sure I had created a partition on
> the new drive the exact size of the old one, and had dd-ed everything
> correctly. Even then, I wasn't sure if it would work, because I don't
> know what the metadata records in terms of the drive configurations.
No. Don't ask for advice if you won't take it. I don't post nonsense
on mailing lists just to read my own words on the net ;-)
Demo run, simulating a two drive VG, replacing one drive with a bigger one,
using LVs of some other VG as "drives".
root@racke:~/demo# export LVM_SYSTEM_DIR=$PWD
root@racke:~/demo# pvs
root@racke:~/demo# pvcreate /dev/demo/dummy-a
Physical volume "/dev/demo/dummy-a" successfully created
root@racke:~/demo# pvcreate /dev/demo/dummy-b
Physical volume "/dev/demo/dummy-b" successfully created
root@racke:~/demo# vgcreate Data /dev/demo/dummy-{a,b}
Volume group "Data" successfully created
root@racke:~/demo# vgs
VG #PV #LV #SN Attr VSize VFree
Data 2 0 0 wz--n- 1.99g 1.99g
root@racke:~/demo# lvcreate -n data -L 1.8g Data
Rounding up size to full physical extent 1.80 GiB
Logical volume "data" created
root@racke:~/demo# mkfs.ext4 /dev/Data/data
mke2fs 1.41.11 (14-Mar-2010)
...
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
...
root@racke:~/demo# vgs
VG #PV #LV #SN Attr VSize VFree
Data 2 1 0 wz--n- 1.99g 196.00m
root@racke:~/demo# blockdev --getsize64 /dev/demo/dummy-*
1073741824
1073741824
2147483648
root@racke:~/demo# blockdev --getsize64 /dev/demo/dummy-c
2147483648
OK, so you now have an LV data in a VG Data consisting of two "drives"
of 1 GiB each, and a third drive of 2 GiB.
Now, simulating drive replacement with the method I told you.
root@racke:~/demo# vgchange -an Data
0 logical volume(s) in volume group "Data" now active
root@racke:~/demo# lvs
LV VG Attr LSize Origin Snap% Move Log Copy% Convert
data Data -wi--- 1.80g
(you may want to play with some options of dd_rescue
to get best performance)
root@racke:~/demo# dd_rescue /dev/demo/dummy-a /dev/demo/dummy-c
...
dd_rescue: (info): /dev/demo/dummy-a (1048576.0k): EOF
Summary for /dev/demo/dummy-a -> /dev/demo/dummy-c:
dd_rescue: (info): ipos: 1048576.0k, opos: 1048576.0k, xferd: 1048576.0k
...
root@racke:~/demo# pvs -v
Scanning for physical volume names
Found duplicate PV 8Miyc0iXMErbqbTBMfCxrMSLMIo0F2IP: using /dev/demo/dummy-c not /dev/demo/dummy-a
PV VG Fmt Attr PSize PFree DevSize PV UUID
/dev/demo/dummy-b Data lvm2 a- 1020.00m 196.00m 1.00g N8tSRP-qwxK-M8wo-wy5A-4f8V-u07s-8xDwgi
/dev/demo/dummy-c Data lvm2 a- 1020.00m 0 2.00g 8Miyc0-iXME-rbqb-TBMf-CxrM-SLMI-o0F2IP
Uh oh... duplicate PV signature... Well, yes, of course, we physically
copied the image, including any signatures..
=>> if you have your PVs on partitions, not the full disks,
just create a partition fully covering the new drive,
and dd the old PV partition into the new partition.
No matter if the new partition is bigger.
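A minimal sketch of that variant (device names are placeholders; sdb1
is the old PV partition, sdc the new, bigger drive):
  # parted -s /dev/sdc mklabel msdos
  # parted -s /dev/sdc mkpart primary 1MiB 100%
  # dd_rescue /dev/sdb1 /dev/sdc1
  (shut down, pull the old drive, boot again)
  # pvresize /dev/sdc1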
Now, simulate unplugging the old drive by adjusting my demo filter:
root@racke:~/demo# vi lvm.conf +/filter\ =
...
- filter = [ "a|^/dev/demo/dummy-[abc]$|", "r/.*/" ]
+ filter = [ "a|^/dev/demo/dummy-[bc]$|", "r/.*/" ]
root@racke:~/demo# vgscan
Reading all physical volumes. This may take a while...
Found volume group "Data" using metadata type lvm2
root@racke:~/demo# pvscan
PV /dev/demo/dummy-c VG Data lvm2 [1020.00 MiB / 0 free]
PV /dev/demo/dummy-b VG Data lvm2 [1020.00 MiB / 196.00 MiB free]
Total: 2 [1.99 GiB] / in use: 2 [1.99 GiB] / in no VG: 0 [0 ]
There. No more duplicate PV signatures.
root@racke:~/demo# lvs
LV VG Attr LSize Origin Snap% Move Log Copy% Convert
data Data -wi-a- 1.80g
root@racke:~/demo# vgs
VG #PV #LV #SN Attr VSize VFree
Data 2 1 0 wz--n- 1.99g 196.00m
have lvm recognize the new PV size:
root@racke:~/demo# pvresize /dev/demo/dummy-c
Physical volume "/dev/demo/dummy-c" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
root@racke:~/demo# vgs
VG #PV #LV #SN Attr VSize VFree
Data 2 1 0 wz--n- 2.99g 1.19g
^^^^^ ^^^^^
root@racke:~/demo# lvs
LV VG Attr LSize Origin Snap% Move Log Copy% Convert
data Data -wi-a- 1.80g
reactivate the VG:
root@racke:~/demo# vgchange -ay Data
and now mount your LV, and be happy.
You can now add more space to your LV, or pvmove another drive onto
the free space of the new, bigger drive, remove it from the VG,
replace it with another bigger one, and pvcreate and vgextend that one
to grow your VG further.
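For example, to hand the freed space to the existing LV and grow an
ext4 filesystem in it (using the demo names; ext4 can be grown online):
  # lvextend -l +100%FREE /dev/Data/data
  # resize2fs /dev/Data/data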
Still you should consider some RAID, as in _redundancy_ of your data.
--
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com
DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
* Re: [linux-lvm] Need help with a particular use-case for pvmove.
2010-11-14 3:56 ` Stirling Westrup
2010-11-14 21:58 ` Lars Ellenberg
@ 2010-11-14 22:56 ` Stuart D Gathman
1 sibling, 0 replies; 9+ messages in thread
From: Stuart D Gathman @ 2010-11-14 22:56 UTC (permalink / raw)
To: linux-lvm
On 11/13/2010 10:56 PM, Stirling Westrup wrote:
>
>> Then dd_rescue copy the disk image from bad-old to new, including
>> everything (partition table, if any, LVM signature, the full image).
>>
>> Remove bad-old drive, have all other old and the new plugged,
>> reboot normally
> Thanks. I did consider this, but the new drive is twice the size of
> the old one, so I would need to make sure I had created a partition on
> the new drive the exact size of the old one, and had dd-ed everything
> correctly. Even then, I wasn't sure if it would work, because I don't
> know what the metadata records in terms of the drive configurations.
The partition can be bigger. Just run pvresize on the new partition
when done. The metadata goes by UUID and doesn't care what drive it
is on (which is why there is a FAQ about cloning VGs via SAN).
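So after the raw copy it is enough to run something like this
(partition name is illustrative):
  # pvresize /dev/sdc1
  # vgs    # VSize/VFree now reflect the larger PV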
* Re: [linux-lvm] Need help with a particular use-case for pvmove.
2010-11-14 3:57 ` Stirling Westrup
@ 2010-11-14 23:05 ` Stuart D Gathman
0 siblings, 0 replies; 9+ messages in thread
From: Stuart D Gathman @ 2010-11-14 23:05 UTC (permalink / raw)
To: linux-lvm
On 11/13/2010 10:57 PM, Stirling Westrup wrote:
> On Sat, Nov 13, 2010 at 6:03 PM, Lars Ellenberg
> <lars.ellenberg@linbit.com> wrote:
>> You will then have to reduce the bad-old drive from the VG, and
>> shutdown/unplug old/replug new, which of course involves downtime again.
>>
>> You should consider using some sort of RAID in the future.
> My last server lost 80% of its data due to a bug in the raid software,
> so I'm rather leery of going with a raid solution. I was hoping LVM
> would be better.
Use the Linux md driver with RAID-1; it is simple and safe, and you can
clone a partition just by removing it from the RAID. LVM does RAID-1
only if you turn on mirroring for an LV. LVM can't possibly be "better"
without the redundancy - at which point you have RAID.
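A minimal md sketch (device names are placeholders):
  # mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
  # pvcreate /dev/md0
  # vgextend Storage /dev/md0
LVM then sits on top of the mirror, and a single drive failure costs
no data.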
* Re: [linux-lvm] Need help with a particular use-case for pvmove.
2010-11-14 21:58 ` Lars Ellenberg
@ 2010-11-14 23:52 ` Stirling Westrup
0 siblings, 0 replies; 9+ messages in thread
From: Stirling Westrup @ 2010-11-14 23:52 UTC (permalink / raw)
To: LVM general discussion and development
On Sun, Nov 14, 2010 at 4:58 PM, Lars Ellenberg
<lars.ellenberg@linbit.com> wrote:
> On Sat, Nov 13, 2010 at 10:56:05PM -0500, Stirling Westrup wrote:
>> Thanks. I did consider this, but the new drive is twice the size of
>> the old one, so I would need to make sure I had created a partition on
>> the new drive the exact size of the old one, and had dd-ed everything
>> correctly.
>
> No. Don't ask for advice if you won't take it. I don't post nonsense
> on mailing lists just to read my own words on the net ;-)
>
> Demo run, simulating a two drive VG, replacing one drive with a bigger one,
> using LVs of some other VG as "drives".
>
> ...
>
> You can now add more space to your LV, or pvmove another drive onto
> the free space of the new, bigger drive, remove it from the VG,
> replace it with another bigger one, and pvcreate and vgextend that one
> to grow your VG further.
>
> Still you should consider some RAID, as in _redundancy_ of your data.
Thanks for going through all those steps. It does make the procedure a
lot clearer in my mind, and it does look like dd_rescue is the way to
go then. I'm going to head off to try it now.