* [linux-lvm] alternative to pvmove on root volume
@ 2010-01-28 20:11 chris procter
2010-01-28 21:17 ` malahal
2010-01-29 9:21 ` Milan Broz
0 siblings, 2 replies; 8+ messages in thread
From: chris procter @ 2010-01-28 20:11 UTC (permalink / raw)
To: linux-lvm
(resent, it didn't seem to come through last time)
Hi,
I'm trying to migrate our servers from an old EVA to a
shiny new NetApp SAN, if possible without downtime. For most of the
volumes I can present LUNs from the new SAN and use pvmove to juggle
the data around, but several servers have the root volume on the EVA,
and pvmove has a nasty habit of deadlocking the machine when used on
root volumes.
I've been working on the technique mentioned at http://sources.redhat.com/lvm2/wiki/FrequentlyAskedQuestions, but after a bit of thought it seems it might be better to do the following:
0) add /dev/new_lun to the volume group
1) lvconvert -m 1 /dev/myvg/lvol00 /dev/new_lun
2) wait for the mirror to sync
Now we have a RAID1 mirror copy of lvol00 on /dev/old_lun and /dev/new_lun, so:
3) lvconvert -m 0 /dev/myvg/lvol00 /dev/old_lun
This breaks the RAID1 mirror in favour of new_lun and gets rid of the old one,
leaving us with a basic (non-mirrored) linear LV entirely on new_lun.
4) Rinse and repeat for all the other lvols on old_lun (which you can get from "dmsetup table")
5) vgreduce myvg /dev/old_lun
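For reference, the whole sequence for a single LV might look something
like this (just a sketch; the VG, LV and device names are placeholders):

  # 0) make the new LUN a PV and add it to the volume group
  pvcreate /dev/new_lun
  vgextend myvg /dev/new_lun
  # 1) add a second mirror leg on the new LUN
  lvconvert -m 1 /dev/myvg/lvol00 /dev/new_lun
  # 2) wait until the copy percentage reaches 100
  lvs -o lv_name,copy_percent myvg
  # 3) drop the leg that lives on the old LUN
  lvconvert -m 0 /dev/myvg/lvol00 /dev/old_lun
  # 4) repeat 1-3 for every other LV on the old LUN
  # 5) remove the old LUN from the volume group
  vgreduce myvg /dev/old_lun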
It's less elegant than pvmove, but my initial testing seems to suggest that it does actually work and doesn't cause deadlocks.
However, given that pvmove also works by mirroring, I'm not convinced
I haven't just been lucky so far. So does anyone have any ideas or,
even better, experience on whether this is likely to work, or am I
setting myself up for a world of pain if I try it on a live server?
All opinions and advice gratefully received :)
chris
* Re: [linux-lvm] alternative to pvmove on root volume
2010-01-28 20:11 [linux-lvm] alternative to pvmove on root volume chris procter
@ 2010-01-28 21:17 ` malahal
2010-01-29 13:28 ` Stuart D. Gathman
2010-01-29 9:21 ` Milan Broz
1 sibling, 1 reply; 8+ messages in thread
From: malahal @ 2010-01-28 21:17 UTC (permalink / raw)
To: linux-lvm
chris procter [chris-procter@talk21.com] wrote:
> (resent, it didn't seem to come through last time)
>
> Hi,
>
> I'm trying to migrate our servers from an old EVA to a
> shiny new NetApp SAN, if possible without downtime. For most of the
> volumes I can present LUNs from the new SAN and use pvmove to juggle
> the data around, but several servers have the root volume on the EVA,
> and pvmove has a nasty habit of deadlocking the machine when used on
> root volumes.
>
> I've been working on the technique mentioned at http://sources.redhat.com/lvm2/wiki/FrequentlyAskedQuestions, but after a bit of thought it seems it might be better to do the following:
>
> 0) add /dev/new_lun to the volume group
> 1) lvconvert -m 1 /dev/myvg/lvol00 /dev/new_lun
> 2) wait for the mirror to sync
>
> Now we have a RAID1 mirror copy of lvol00 on /dev/old_lun and /dev/new_lun, so:
>
> 3) lvconvert -m 0 /dev/myvg/lvol00 /dev/old_lun
> This breaks the RAID1 mirror in favour of new_lun and gets rid of the old one,
> leaving us with a basic (non-mirrored) linear LV entirely on new_lun.
>
> 4) Rinse and repeat for all the other lvols on old_lun (which you can get from "dmsetup table")
>
> 5) vgreduce myvg /dev/old_lun
>
>
> It's less elegant than pvmove, but my initial testing seems to suggest that it does actually work and doesn't cause deadlocks.
Have you tried pvmove on the same setup and found deadlocks? If not, your
test doesn't mean anything!
> However, given that pvmove also works by mirroring, I'm not convinced
> I haven't just been lucky so far. So does anyone have any ideas or,
> even better, experience on whether this is likely to work, or am I
> setting myself up for a world of pain if I try it on a live server?
Yes, pvmove works in a very similar way. It creates a mirror for each
segment, one at a time, so it may have to create a lot more mirrors
depending on your configuration. If it ends up needing more 'lvconverts'
(suspends), then the probability of failure (deadlock) will increase.
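For what it's worth, one way to get a feel for how many segments, and
therefore roughly how many temporary mirrors, pvmove would have to set
up is to look at the segment layout first. A rough sketch, with the VG
and device names as placeholders:

  # physical-extent segments on the PV being evacuated
  pvs --segments /dev/old_lun
  # logical-volume segments and the devices backing them
  lvs --segments -o +devices myvg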
--Malahal.
* Re: [linux-lvm] alternative to pvmove on root volume
2010-01-28 21:17 ` malahal
@ 2010-01-29 13:28 ` Stuart D. Gathman
2010-01-29 18:39 ` chris procter
0 siblings, 1 reply; 8+ messages in thread
From: Stuart D. Gathman @ 2010-01-29 13:28 UTC (permalink / raw)
To: LVM general discussion and development
On Thu, 28 Jan 2010, malahal@us.ibm.com wrote:
> > given that pvmove also works by mirroring, I'm not convinced I haven't
> > just been lucky so far. So does anyone have any ideas or, even better,
> > experience on whether this is likely to work, or am I setting myself up
> > for a world of pain if I try it on a live server?
>
> Yes, pvmove works in a very similar way. It creates a mirror for each
> segment, one at a time, so it may have to create a lot more mirrors
> depending on your configuration. If it ends up needing more 'lvconverts'
> (suspends), then the probability of failure (deadlock) will increase.
It seems to me that, if disk space is available, pvmove could minimize
metadata updates by mirroring as many LEs as possible at once. When
enough space is available, this would reduce to the equivalent of
mirroring and then reducing the entire LV. What happens in the case of an
in-memory mirror log and a crash, however? Does it still remember which
mirror is the "original"?
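As an aside, while a pvmove (or an lvconvert mirror) is running you can
see the temporary mirror and its log type directly in the device-mapper
tables. A sketch; myvg-pvmove0 is just the typical name of the temporary
pvmove volume:

  # mirror targets show their log type ("core" = in-memory, "disk" = on-disk)
  dmsetup table | grep ' mirror '
  # sync progress of the temporary pvmove device
  dmsetup status myvg-pvmove0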
--
Stuart D. Gathman <stuart@bmsi.com>
Business Management Systems Inc. Phone: 703 591-0911 Fax: 703 591-6154
"Confutatis maledictis, flammis acribus addictis" - background song for
a Microsoft sponsored "Where do you want to go from here?" commercial.
* Re: [linux-lvm] alternative to pvmove on root volume
2010-01-29 13:28 ` Stuart D. Gathman
@ 2010-01-29 18:39 ` chris procter
0 siblings, 0 replies; 8+ messages in thread
From: chris procter @ 2010-01-29 18:39 UTC (permalink / raw)
To: LVM general discussion and development
> On Thu, 28 Jan 2010, malahal@us.ibm.com wrote:
> > given that pvmove also works by mirroring, I'm not convinced I haven't
> > just been lucky so far. So does anyone have any ideas or, even better,
> > experience on whether this is likely to work, or am I setting myself up
> > for a world of pain if I try it on a live server?
>
> Yes, pvmove works in a very similar way. It creates a mirror for each
> segment, one at a time, so it may have to create a lot more mirrors
> depending on your configuration. If it ends up needing more 'lvconverts'
> (suspends), then the probability of failure (deadlock) will increase.
I've been testing using VMs cloned from the same template. The clones I used pvmove on have deadlocked maybe 75% of the time, and the ones I lvconverted have never deadlocked. However, given your explanation, I think it's probable that I've just been lucky and my ten or so tests haven't involved enough suspends to trigger the issue. Oh well, back to the planned-downtime route.
Thanks for the help :)
chris
* Re: [linux-lvm] alternative to pvmove on root volume
2010-01-28 20:11 [linux-lvm] alternative to pvmove on root volume chris procter
2010-01-28 21:17 ` malahal
@ 2010-01-29 9:21 ` Milan Broz
2010-01-29 18:26 ` chris procter
1 sibling, 1 reply; 8+ messages in thread
From: Milan Broz @ 2010-01-29 9:21 UTC (permalink / raw)
To: chris procter, LVM general discussion and development; +Cc: chris procter
On 01/28/2010 09:11 PM, chris procter wrote:
> pvmove has a nasty habit of deadlocking the machine when used on root
> volumes.
Please report a bug then, with the system configuration (lvmdump)
and a process backtrace (the log from echo t > /proc/sysrq-trigger)
when it deadlocks.
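In case it helps, a rough sketch of gathering that information when the
hang happens (assuming you can still get a shell and the magic SysRq key
is available):

  # collect LVM configuration and state into a tarball (see lvmdump(8))
  lvmdump
  # enable SysRq if it is currently disabled
  echo 1 > /proc/sys/kernel/sysrq
  # dump all task backtraces into the kernel log ...
  echo t > /proc/sysrq-trigger
  # ... and capture it (it should also land in /var/log/messages via syslog)
  dmesg > sysrq-backtrace.txt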
pvmove should work for moving the root volume
(except in some special situations, e.g. when you increase the log level
and the log itself is on the LV being moved).
(And I have used pvmove several times myself to move a system disk to
another one, without any problems, on RHEL 5.3.)
Milan
* Re: [linux-lvm] alternative to pvmove on root volume
2010-01-29 9:21 ` Milan Broz
@ 2010-01-29 18:26 ` chris procter
2010-01-29 18:50 ` Milan Broz
0 siblings, 1 reply; 8+ messages in thread
From: chris procter @ 2010-01-29 18:26 UTC (permalink / raw)
To: Milan Broz, LVM general discussion and development
> On 01/28/2010 09:11 PM, chris procter wrote:
> > pvmove has a nasty habit of deadlocking the machine when used on root
> > volumes.
>
> Please report a bug then, with the system configuration (lvmdump)
> and a process backtrace (the log from echo t > /proc/sysrq-trigger)
> when it deadlocks.
>
> pvmove should work for moving the root volume
> (except in some special situations, e.g. when you increase the log level
> and the log itself is on the LV being moved).
>
> (And I have used pvmove several times myself to move a system disk to
> another one, without any problems, on RHEL 5.3.)
Unfortunately this project is mostly RHEL4 boxes (it's an old SAN; nothing new has been added to it in a couple of years), and so that's what I've been testing with (RHEL 4.5, to be precise). So if you really want me to, I'll file a bug, but I'm pretty sure it will just get closed with "upgrade to a modern version of lvm" :)
chris
* Re: [linux-lvm] alternative to pvmove on root volume
2010-01-29 18:26 ` chris procter
@ 2010-01-29 18:50 ` Milan Broz
2010-01-29 21:18 ` chris procter
0 siblings, 1 reply; 8+ messages in thread
From: Milan Broz @ 2010-01-29 18:50 UTC (permalink / raw)
To: chris procter; +Cc: LVM general discussion and development
On 01/29/2010 07:26 PM, chris procter wrote:
> Unfortunately this project is mostly RHEL4 boxes (it's an old SAN; nothing new has been added to it in a couple of years),
> and so that's what I've been testing with (RHEL 4.5, to be precise). So if you really want me to,
> I'll file a bug, but I'm pretty sure it will just get closed with "upgrade to a modern version of lvm" :)
Ah, I see. That's a pretty old lvm2 version. But RHEL5 and RHEL4 share the same codebase;
just upgrade to RHEL 4.8.
(You can even use a statically linked lvm binary from a newer version if you cannot upgrade...)
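Something along these lines, perhaps; the paths are only an illustration
and would need checking against the newer lvm2 package:

  # copy the statically linked binary from a newer (e.g. RHEL5) box
  scp rhel5box:/sbin/lvm.static /root/lvm.static
  # lvm is a multi-call binary, so run pvmove through the static copy
  /root/lvm.static pvmove -v /dev/old_lun /dev/new_lun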
Milan
* Re: [linux-lvm] alternative to pvmove on root volume
2010-01-29 18:50 ` Milan Broz
@ 2010-01-29 21:18 ` chris procter
0 siblings, 0 replies; 8+ messages in thread
From: chris procter @ 2010-01-29 21:18 UTC (permalink / raw)
To: Milan Broz; +Cc: LVM general discussion and development
> On 01/29/2010 07:26 PM, chris procter wrote:
> > Unfortunately this project is mostly RHEL4 boxes (it's an old SAN;
> > nothing new has been added to it in a couple of years), and so that's
> > what I've been testing with (RHEL 4.5, to be precise). So if you really
> > want me to, I'll file a bug, but I'm pretty sure it will just get closed
> > with "upgrade to a modern version of lvm" :)
>
> Ah, I see. That's a pretty old lvm2 version.
Yep, but upgrading means all sorts of risk assessments and revalidation of custom and third-party code. I'd much prefer just to reinstall the whole thing onto the new SAN as RHEL 5.4, but that's not going to happen this year :(
> But RHEL5 and RHEL4 share the same codebase; just upgrade to RHEL 4.8.
> (You can even use a statically linked lvm binary from a newer version
> if you cannot upgrade...)
>
> Milan
Using a copy of lvm.static from a newer version is not something I'd considered, but it sounds like a much nicer approach than mine. I'll test it out and report back.
Thanks a lot :)
chris