From: "Bryn M. Reeves" <breeves@redhat.com>
To: LVM general discussion and development <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] lvm fails trying to expand an LV into available space in VG.
Date: Thu, 19 Jul 2007 14:50:57 +0100 [thread overview]
Message-ID: <469F6C41.3070601@redhat.com> (raw)
In-Reply-To: <469F62C2.4080709@iinet.net.au>
David Timms wrote:
> James Parsons wrote:
>> Hi - I'm the s-c-lvm guy; this will take a bit to digest. Just wanted
>> to let you know you were heard.
>> Yes, 3 minutes to scan for state seems a bit abnormal :)
> Actually, it's more like 5-6 minutes (I get bored waiting and start doing
> other stuff). During boot up it takes about 8 seconds to activate the
> lvm volumes.
Nothing to do with the s-c-lvm problem, but these times sound perfectly
normal if you have a metadata area on each of the 25 PVs in the volume
group.
By default, pvcreate will set each PV up with a single metadata area
(MDA). This is fine for small volume groups but will kill tool
performance for volume groups with large numbers of PVs since the
runtime grows as something like N^3 in the number of metadata areas.
Note that it's only tool performance that suffers - I/O performance is
not affected.
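If you want to confirm how many metadata areas each PV in the group
currently holds, newer LVM tools can report it directly - the
pv_mda_count field below comes from current pvs and may not exist on
older releases:

# pvs -o pv_name,vg_name,pv_mda_count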
You can prevent this by creating most of the PVs in such a VG with the
option "--metadatacopies=0" on the pvcreate command line. Create a small
number with either "--metadatacopies=1" (or just use the default - it's
1 anyway) and you should find the time to scan/activate the VG is much
shorter.
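For a VG being built from scratch, that might look roughly like this
(the device and VG names here are just placeholders):

# pvcreate --metadatacopies=1 /dev/sdb1 /dev/sdc1
# pvcreate --metadatacopies=0 /dev/sdd1 /dev/sde1 /dev/sdf1
# vgcreate bigvg /dev/sd[bcdef]1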
It's possible to fix this in-place without destroying the VG but you
need to be a little careful; it'd be wise to back up anything important
first in case things don't work out.
First, deactivate all volumes in the group and backup the metadata:
# vgchange -an <VolGroup>
# vgcfgbackup <VolGroup>
Now recreate every PV that should not hold metadata, reusing each one's
original UUID and passing the --restorefile and --metadatacopies=0
flags:
# pvcreate --uuid=$UUID --restorefile=/etc/lvm/backup/<VolGroup> \
    --metadatacopies=0 /dev/$DEVICE
Next create the metadata-holding PVs:
# pvcreate --uuid=$UUID --restorefile=/etc/lvm/backup/<VolGroup> \
    --metadatacopies=1 /dev/$DEVICE
And finally restore the metadata:
# vgcfgrestore <VolGroup>
The last time I had to do this was on a VG with 100 PVs; I used a script
that parsed the output of pvs to automate the whole procedure. Drop me a
line off-list if you're interested and I'll see if I can dig it up.
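In the meantime, something along these lines should get you most of the
way there. This is a rough, untested sketch: the VG name, the PV chosen
to keep a metadata area, and the backup path are placeholders, and
pvcreate may ask for confirmation or need forcing on devices that
already belong to a VG, depending on your LVM version.

#!/bin/sh
# Rough sketch only - adjust the placeholders before use.
VG=VolGroup00                # placeholder VG name
MDA_PV=/dev/sda1             # the one PV that will keep a metadata area
BACKUP=/etc/lvm/backup/$VG

vgchange -an "$VG"
vgcfgbackup "$VG"

# List every PV in the VG together with its UUID.
PVLIST=$(pvs --noheadings -o pv_name,pv_uuid,vg_name |
         awk -v vg="$VG" '$3 == vg {print $1, $2}')

# Recreate each PV with its original UUID; only $MDA_PV keeps an MDA.
echo "$PVLIST" | while read dev uuid; do
    copies=0
    [ "$dev" = "$MDA_PV" ] && copies=1
    pvcreate --uuid="$uuid" --restorefile="$BACKUP" \
        --metadatacopies="$copies" "$dev"
done

vgcfgrestore "$VG"
vgchange -ay "$VG"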
Kind regards,
Bryn.