linux-lvm.redhat.com archive mirror
 help / color / mirror / Atom feed
From: Andrew Gideon <ag2827189@tagmall.com>
To: linux-lvm@redhat.com
Subject: [linux-lvm] Solving the "metadata too large for circular buffer" condition
Date: Wed, 24 Nov 2010 20:28:11 +0000 (UTC)	[thread overview]
Message-ID: <icjsgr$tm$1@taco.int.tagonline.com> (raw)


We've just hit this error, and it is blocking any expansion of existing 
or creation of new volumes.  

We found: 

http://readlist.com/lists/redhat.com/linux-lvm/0/2839.html

which appears to describe a solution.  I'm doing some reading, and I've 
set up a test environment to try things out (before doing anything risky 
to production).  But I'm hoping a post here can save some time (and 
angst).

First: The referenced thread is two years old.  I don't suppose there's a 
better way to solve this problem today?

Assuming not...

I'm not sure how metadata is stored.  It seems like, by default, it is 
duplicated on each PV.  I'm guessing this because one can't just add new 
PVs (with larger metadatasize values), but one must also remove the old 
metadata.  Is that right?

Are there are consequences to removing the metadata from most of the 
physical volumes?  I've six, so I'd be adding a seventh and eighth (two 
for redundancy, though the PVs are all built on RAID sets).

The "pvcreate --restorefile ... --uuid ... --metadatacopies 0" command 
would be executed on the existing 6 physical volumes?  No data would be 
lost?  I want to be *very* sure of this (so I'm not trashing an existing 
PV).

What is the default metadatasize?  Judging from lvm.conf, it may be 255.  
255 Megabytes?  Is there some way to guestimate how much space I should 
expect to be using?  I thought perhaps pvdata would help, but this is 
apparently LVMv1 only.

[Unfortunately, 'lvm dumpconfig' isn't listing any data in the metadata 
block.]

There's also mention in the cited thread of reducing fragmentation using 
pvmove.  How would that work?  From what I can see, pvmove will move 
segments.  But even if two segments are moved from dislocated locations 
to immediately adjacent locations, I see nothing which says that these 
two segments would be combined into a single segment.  So I'm not clear 
how fragmentation can be reduced.

Finally, there was mention of changing lvm.conf - presumably, 
metadata.dirs - to help make more space.  Once lvm.conf is changed, how 
is that change made live?  Is a complete reboot required, or is there a 
quicker way?

Thanks for any and all help...

	Andrew

             reply	other threads:[~2010-11-24 20:28 UTC|newest]

Thread overview: 8+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2010-11-24 20:28 Andrew Gideon [this message]
2010-11-24 21:35 ` [linux-lvm] Solving the "metadata too large for circular buffer" condition Andrew Gideon
2010-11-24 23:02 ` Ray Morris
2010-11-25  3:35   ` Andrew Gideon
2010-11-24 23:07 ` Ray Morris
2010-11-25  3:28 ` [linux-lvm] Unable to use metadata.dirs in lvm.conf? (Was: Re: Solving the "metadata too large for circular buffer" condition) Andrew Gideon
2010-11-26 16:03   ` Peter Rajnoha
2010-11-29 14:50     ` Andrew Gideon

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to='icjsgr$tm$1@taco.int.tagonline.com' \
    --to=ag2827189@tagmall.com \
    --cc=linux-lvm@redhat.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).