linux-lvm.redhat.com archive mirror
* [linux-lvm] Solving the "metadata too large for circular buffer" condition
@ 2010-11-24 20:28 Andrew Gideon
  2010-11-24 21:35 ` Andrew Gideon
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Andrew Gideon @ 2010-11-24 20:28 UTC (permalink / raw)
  To: linux-lvm


We've just hit this error, and it is blocking both expansion of existing 
volumes and creation of new ones.

We found: 

http://readlist.com/lists/redhat.com/linux-lvm/0/2839.html

which appears to describe a solution.  I'm doing some reading, and I've 
set up a test environment to try things out (before doing anything risky 
to production).  But I'm hoping a post here can save some time (and 
angst).

First: The referenced thread is two years old.  I don't suppose there's a 
better way to solve this problem today?

Assuming not...

I'm not sure how metadata is stored.  It seems like, by default, it is 
duplicated on each PV.  I'm guessing this because one can't just add new 
PVs (with larger metadatasize values), but one must also remove the old 
metadata.  Is that right?

Are there any consequences to removing the metadata from most of the 
physical volumes?  I've six, so I'd be adding a seventh and eighth (two 
for redundancy, though the PVs are all built on RAID sets).

The "pvcreate --restorefile ... --uuid ... --metadatacopies 0" command 
would be executed on the existing 6 physical volumes?  No data would be 
lost?  I want to be *very* sure of this (so I'm not trashing an existing 
PV).
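
For concreteness, here is the per-PV sequence I think that thread is 
describing (the VG name, UUID, and device below are placeholders, so 
treat this as a sketch of my understanding rather than tested commands):

	vgcfgbackup MyVG                  # writes /etc/lvm/backup/MyVG
	pvcreate -ff --restorefile /etc/lvm/backup/MyVG \
	    --uuid <uuid-of-this-PV> --metadatacopies 0 /dev/sdX1
	vgcfgrestore MyVG                 # reapply the VG metadata

Is that the right order, and is the vgcfgrestore at the end required?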

What is the default metadatasize?  Judging from lvm.conf, it may be 255.  
255 Megabytes?  Is there some way to guesstimate how much space I should 
expect to be using?  I thought perhaps pvdata would help, but this is 
apparently LVMv1 only.

[Unfortunately, 'lvm dumpconfig' isn't listing any data in the metadata 
block.]
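
I gather that newer lvm2 builds can report the metadata area size and 
usage per PV with something like the following, though I'm not sure my 
build is recent enough to have these columns:

	pvs -o pv_name,vg_name,pv_mda_count,pv_mda_size,pv_mda_free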

There's also mention in the cited thread of reducing fragmentation using 
pvmove.  How would that work?  From what I can see, pvmove will move 
segments.  But even if two segments are moved from dislocated locations 
to immediately adjacent locations, I see nothing which says that these 
two segments would be combined into a single segment.  So I'm not clear 
how fragmentation can be reduced.
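
For what it's worth, pvmove does accept explicit physical extent ranges, 
so I imagine one could at least try to place two segments side by side 
with something like this (device name and extent numbers entirely made 
up):

	pvmove /dev/sdX1:2000-2999 /dev/sdX1:1000-1999

But that still doesn't tell me whether the result ends up as one segment 
or two.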

Finally, there was mention of changing lvm.conf - presumably, 
metadata.dirs - to help make more space.  Once lvm.conf is changed, how 
is that change made live?  Is a complete reboot required, or is there a 
quicker way?

Thanks for any and all help...

	Andrew

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [linux-lvm] Solving the "metadata too large for circular buffer" condition
  2010-11-24 20:28 [linux-lvm] Solving the "metadata too large for circular buffer" condition Andrew Gideon
@ 2010-11-24 21:35 ` Andrew Gideon
  2010-11-24 23:02 ` Ray Morris
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 8+ messages in thread
From: Andrew Gideon @ 2010-11-24 21:35 UTC (permalink / raw)
  To: linux-lvm

On Wed, 24 Nov 2010 20:28:11 +0000, Andrew Gideon wrote:

> But even if two segments are moved from dislocated locations to
> immediately adjacent locations, I see nothing which says that these two
> segments would be combined into a single segment.  So I'm not clear how
> fragmentation can be reduced.

This one part I was able to answer myself with some experimentation.  
pvmove is actually smart enough that, if two segments that are logically 
contiguous are placed such that they're physically contiguous, they are 
aggregated into a single segment.
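
(For anyone checking the same thing: I compared the segment layout 
before and after with the --segments reports, e.g.

	pvs --segments /dev/xvdN1
	lvs --segments TestVG0

on my test VG.)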

Unfortunately, I cannot use pvmove in my production environment since it 
needs to create a temporary volume for a move and this runs into the 
"metadata too large" error.

So I'm still looking at how to solve this problem with new PVs (and 
removing the metadata from the previous PVs, if I've understood this 
correctly), by moving the metadata to an external file, or both.

	- Andrew

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [linux-lvm] Solving the "metadata too large for circular buffer" condition
  2010-11-24 20:28 [linux-lvm] Solving the "metadata too large for circular buffer" condition Andrew Gideon
  2010-11-24 21:35 ` Andrew Gideon
@ 2010-11-24 23:02 ` Ray Morris
  2010-11-25  3:35   ` Andrew Gideon
  2010-11-24 23:07 ` Ray Morris
  2010-11-25  3:28 ` [linux-lvm] Unable to use metadata.dirs in lvm.conf? (Was: Re: Solving the "metadata too large for circular buffer" condition) Andrew Gideon
  3 siblings, 1 reply; 8+ messages in thread
From: Ray Morris @ 2010-11-24 23:02 UTC (permalink / raw)
  To: LVM general discussion and development

> Are there any consequences to removing the metadata from most of the
> physical volumes?

    You should be OK with one copy of the metadata, but of course that
means you can't later remove the PVs with metadata unless you first
put the metadata somewhere else.  Two copies of the metadata provide
redundancy; more copies maintain redundancy even if some are lost or
removed.
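
    You can see how many metadata areas each PV currently carries with
the pv_mda_count field, if your lvm2 is new enough to report it:

	pvs -o pv_name,vg_name,pv_mda_count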

> I've six, so I'd be adding a seventh and eighth (two
> for redundancy, though the PVs are all built on RAID sets).

    If you have two redundant PVs or free space, one could move LVs in
order to empty an older PV, then recreate it with a larger metadata area.
pvmove can be used to move active LVs, or dd is much faster for inactive
ones.
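
    Roughly, for one old PV at a time, something like this (device and
VG names are placeholders, and this is only a sketch):

	pvmove /dev/sdOLD                # move all allocated extents off it
	vgreduce MyVG /dev/sdOLD         # drop the now-empty PV from the VG
	pvcreate -ff --metadatasize 16m /dev/sdOLD   # bigger metadata area
	vgextend MyVG /dev/sdOLD         # add it back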

> It seems like, by default, it is duplicated on each PV.  I'm guessing
> this because one can't just add new PVs (with larger metadatasize values),
> but one must also remove the old metadata.  Is that right?

Right.

> The "pvcreate --restorefile ... --uuid ... --metadatacopies 0" command
> would be executed on the existing 6 physical volumes?  No data would  
> be
> lost?  I want to be *very* sure of this (so I'm not trashing an  
> existing
> PV).

    Right.  As long as you do a vgcfgbackup, you're pretty safe.  I've
trashed things pretty badly before in various ways and vgcfgrestore has
been a great friend.  That said, it still wouldn't hurt to copy the LVs
to the new PV, then work on the old PV which is now redundant.  In which
case, you could then put a larger metadata area on the old PV.
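
    You can also write the backup to an explicit file first, and restore
from that same file if anything goes wrong (the path is just an example):

	vgcfgbackup -f /root/MyVG-before.vg MyVG
	vgcfgrestore -f /root/MyVG-before.vg MyVG   # only to roll back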

> What is the default metadatasize?  Judging from lvm.conf, it may be 255.
> 255 Megabytes?

   I believe it defaults to 255 sectors, roughly 128KiB.


> Is there some way to guesstimate how much space I should expect to be
> using?  I thought perhaps pvdata would help, but this is apparently
> LVMv1 only.

    128KiB will cover something on the order of 20 LVs (very roughly
speaking).  If you're using PVs of a terabyte or more, you could probably
easily spare 16MB, which would be roughly 128 times the default.  That's
what we use because we don't ever want to have to worry about it again.
16MB will allow for roughly 1,000 LVs.
pvmetadatasize = 32768
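
    That line goes in the metadata section of lvm.conf, and the value is
in 512-byte sectors, so 32768 is about 16MiB.  A minimal sketch (the
section is normally commented out by default):

	metadata {
	    # reserve ~16MiB per PV for metadata on newly created PVs
	    pvmetadatasize = 32768
	}

The per-PV equivalent at creation time is "pvcreate --metadatasize 16m".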

   Hmm, at the time I chose 16MB I thought it would be more than enough,
but we're already at 171 LVs, so I guess I'll use 64MB for metadata on
our new PVs.
--
Ray Morris
support@bettercgi.com

Strongbox - The next generation in site security:
http://www.bettercgi.com/strongbox/

Throttlebox - Intelligent Bandwidth Control
http://www.bettercgi.com/throttlebox/

Strongbox / Throttlebox affiliate program:
http://www.bettercgi.com/affiliates/user/register.php


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [linux-lvm] Solving the "metadata too large for circular buffer" condition
  2010-11-24 20:28 [linux-lvm] Solving the "metadata too large for circular buffer" condition Andrew Gideon
  2010-11-24 21:35 ` Andrew Gideon
  2010-11-24 23:02 ` Ray Morris
@ 2010-11-24 23:07 ` Ray Morris
  2010-11-25  3:28 ` [linux-lvm] Unable to use metadata.dirs in lvm.conf? (Was: Re: Solving the "metadata too large for circular buffer" condition) Andrew Gideon
  3 siblings, 0 replies; 8+ messages in thread
From: Ray Morris @ 2010-11-24 23:07 UTC (permalink / raw)
  To: LVM general discussion and development

   Taking another look, my estimate of required metadatasize
might be off by a factor of five or so.  128K might be sufficient
for 100 LVs, depending on fragmentation and such.

   Still, the history of computer science is largely a story
of problems caused by people thinking they'd allocated plenty
of space.  Today you can't fdisk a drive or array larger than
2TB because someone thought 32 bits would be plenty.  Probably
best to allocate 100 times as much as you think you'll ever
need - a few MB of disk space is cheap.
--
Ray Morris
support@bettercgi.com

Strongbox - The next generation in site security:
http://www.bettercgi.com/strongbox/

Throttlebox - Intelligent Bandwidth Control
http://www.bettercgi.com/throttlebox/

Strongbox / Throttlebox affiliate program:
http://www.bettercgi.com/affiliates/user/register.php


^ permalink raw reply	[flat|nested] 8+ messages in thread

* [linux-lvm] Unable to use metadata.dirs in lvm.conf? (Was: Re: Solving the "metadata too large for circular buffer" condition)
  2010-11-24 20:28 [linux-lvm] Solving the "metadata too large for circular buffer" condition Andrew Gideon
                   ` (2 preceding siblings ...)
  2010-11-24 23:07 ` Ray Morris
@ 2010-11-25  3:28 ` Andrew Gideon
  2010-11-26 16:03   ` Peter Rajnoha
  3 siblings, 1 reply; 8+ messages in thread
From: Andrew Gideon @ 2010-11-25  3:28 UTC (permalink / raw)
  To: linux-lvm

On Wed, 24 Nov 2010 20:28:11 +0000, Andrew Gideon wrote:

> Finally, there was mention of changing lvm.conf - presumably,
> metadata.dirs - to help make more space.  Once lvm.conf is changed, how
> is that change made live?  Is a complete reboot required, or is there a
> quicker way?

It looks like changes to this file are immediate.  However, I'm having a problem
with metadata being stored "outside" the VG.

I tried:
   * For all PVs in the test VG but one:
      * pvcreate -ff --restorefile /etc/lvm/backup/... --uuid ... --metadatacopies 0  /dev/xvdN1
   * vgcfgbackup
   * cp -p the backup file to the directory I will specify for metadata.dirs in lvm.conf
   * Add the directory to lvm.conf as metadata.dirs
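
The lvm.conf change itself looked roughly like this (the directory shown
here is a placeholder for the one I actually used):

	metadata {
	    dirs = [ "/var/lib/lvm/metadata" ]    # placeholder path
	}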

I immediately start seeing "memory" errors from LVM commands.  For example:

	[root@noodle6 tagonline]# vgscan
	  Reading all physical volumes.  This may take a while...
	  Found volume group "TestVG0" using metadata type lvm2
	  Found volume group "guestvg00" using metadata type lvm2
	  You have a memory leak (not released memory pool):
	   [0x83e7848]
	   [0x83e7868]
	  You have a memory leak (not released memory pool):
	   [0x83e7848]
	   [0x83e7868]
	[root@noodle6 tagonline]# 

I then
   * pvcreate -ff --restorefile /etc/lvm/backup/... --uuid ... --metadatacopies 0  /dev/xvdN1
on the final PV that has metadata.  

Now, I cannot see the volume group:

	[root@noodle6 tagonline]# vgdisplay -v TestVG0
	    Using volume group(s) on command line
	    Finding volume group "TestVG0"
	    Wiping cache of LVM-capable devices
	  Volume group "TestVG0" not found
	  You have a memory leak (not released memory pool):
	   [0x8c59ef8]
	   [0x8c55fa8]
	   [0x8c55ed0]
	   [0x8c48aa0]
	  You have a memory leak (not released memory pool):
	   [0x8c59ef8]
	   [0x8c55fa8]
	   [0x8c55ed0]
	   [0x8c48aa0]
	[root@noodle6 tagonline]# 

If I
   * pvcreate -ff --restorefile /etc/lvm/backup/... --uuid ... --metadatacopies 1  /dev/xvdN1
   * vgcfgrestore 
then the volume group is back.  More oddly, vgdisplay -v reports:

  Metadata Areas        2

but, for some reason, the metadata area on the separate file system isn't
sufficient, or isn't working.

Am I doing something wrong?

This is on CentOS 5.5 i386 with:

	[root@noodle6 tagonline]# lvm version
	  LVM version:     2.02.56(1)-RHEL5 (2010-04-22)
	  Library version: 1.02.39-RHEL5 (2010-04-22)
	  Driver version:  4.11.5

Thanks...

	- Andrew

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [linux-lvm] Solving the "metadata too large for circular buffer" condition
  2010-11-24 23:02 ` Ray Morris
@ 2010-11-25  3:35   ` Andrew Gideon
  0 siblings, 0 replies; 8+ messages in thread
From: Andrew Gideon @ 2010-11-25  3:35 UTC (permalink / raw)
  To: linux-lvm

On Wed, 24 Nov 2010 17:02:36 -0600, Ray Morris wrote:

>     If you have two redundant PVs or free space, one could move LVs in
> order to empty an older PV, then recreate it with larger metadata.
> pvmove can be used to move active LVs, or dd is much faster for inactive
> ones.

I don't have much free space on the storage device.  To get this would 
take a fair bit of time in that I'd need to add new drives and 
incorporate them into the RAID set.

But I do have *some* space thanks to "rounding errors" (or "fudge 
factor").  It's not enough to move much data around, but I could build a 
couple of small PVs for use primarily as metadata repositories.

Does this make sense?
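
Concretely, for each of the two small chunks of space, I'm picturing 
something like the following (device and VG names are placeholders, so 
this is only a sketch):

	pvcreate --metadatasize 16m /dev/sdY1    # tiny PV, mostly metadata area
	vgextend MyVG /dev/sdY1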

In the long term, as I do add new disks, I can create new "real" PVs with 
the larger metadata segments, increasing the redundancy.  Ultimately, I 
can get ahead of the game and start pvmove-ing segments off the PVs on 
which I'd have disabled the metadata because the space allocated is too 
small.

But, in the short term, I'm looking to see how I can most quickly return 
this device to "full service" (being able to extend or add volumes).  And 
that seems to mean using a pair of tiny PVs as metadata-only PVs.  If that 
makes sense.

Does it?

Thanks...

	Andrew

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [linux-lvm] Unable to use metadata.dirs in lvm.conf? (Was: Re: Solving the "metadata too large for circular buffer" condition)
  2010-11-25  3:28 ` [linux-lvm] Unable to use metadata.dirs in lvm.conf? (Was: Re: Solving the "metadata too large for circular buffer" condition) Andrew Gideon
@ 2010-11-26 16:03   ` Peter Rajnoha
  2010-11-29 14:50     ` Andrew Gideon
  0 siblings, 1 reply; 8+ messages in thread
From: Peter Rajnoha @ 2010-11-26 16:03 UTC (permalink / raw)
  To: LVM general discussion and development; +Cc: Andrew Gideon

On 11/25/2010 04:28 AM +0100, Andrew Gideon wrote:
> I immediately start seeing "memory" errors from LVM commands.  For example:
> 
> 	[root@noodle6 tagonline]# vgscan
> 	  Reading all physical volumes.  This may take a while...
> 	  Found volume group "TestVG0" using metadata type lvm2
> 	  Found volume group "guestvg00" using metadata type lvm2
> 	  You have a memory leak (not released memory pool):
> 	   [0x83e7848]
> 	   [0x83e7868]
> 	  You have a memory leak (not released memory pool):
> 	   [0x83e7848]
> 	   [0x83e7868]

This should be resolved already (since lvm2 version 2.02.75) with this patch
http://sourceware.org/git/?p=lvm2.git;a=commit;h=cfbbf34d6a8606dd97ef529e8f709e494535ed42

> 
> I then
>    * pvcreate -ff --restorefile /etc/lvm/backup/... --uuid ... --metadatacopies 0  /dev/xvdN1
> on the final PV that has metadata.  
> 
> Now, I cannot see the volume group:
> 
> 	[root@noodle6 tagonline]# vgdisplay -v TestVG0
> 	    Using volume group(s) on command line
> 	    Finding volume group "TestVG0"
> 	    Wiping cache of LVM-capable devices
> 	  Volume group "TestVG0" not found

This seems to be another bug; I've sent a patch to lvm-devel...

(The metadata/dirs setting is not used very often, which is probably why these
bugs went unnoticed for so long. If you have any other problems while using
this setting, please feel free to report them. Thanks.)

Peter

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [linux-lvm] Unable to use metadata.dirs in lvm.conf? (Was: Re: Solving the "metadata too large for circular buffer" condition)
  2010-11-26 16:03   ` Peter Rajnoha
@ 2010-11-29 14:50     ` Andrew Gideon
  0 siblings, 0 replies; 8+ messages in thread
From: Andrew Gideon @ 2010-11-29 14:50 UTC (permalink / raw)
  To: linux-lvm

On Fri, 26 Nov 2010 17:03:37 +0100, Peter Rajnoha wrote:

> (The metadata/dirs setting is not used very often, which is probably why
> these bugs went unnoticed for so long. If you have any other problems
> while using this setting, please feel free to report them.
> Thanks.)

Thanks.  I was trying to use separate metadata storage as a way to work 
around a "metadata too large for circular buffer" problem.  I'm now 
trying to do it with "tiny" PVs storing only metadata.

But it's good to know that this feature will be available again soon.

	- Andrew

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2010-11-29 14:50 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-11-24 20:28 [linux-lvm] Solving the "metadata too large for circular buffer" condition Andrew Gideon
2010-11-24 21:35 ` Andrew Gideon
2010-11-24 23:02 ` Ray Morris
2010-11-25  3:35   ` Andrew Gideon
2010-11-24 23:07 ` Ray Morris
2010-11-25  3:28 ` [linux-lvm] Unable to use metadata.dirs in lvm.conf? (Was: Re: Solving the "metadata too large for circular buffer" condition) Andrew Gideon
2010-11-26 16:03   ` Peter Rajnoha
2010-11-29 14:50     ` Andrew Gideon

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).