linux-lvm.redhat.com archive mirror
* [linux-lvm] error msg: VG vgc1 metadata too large for circular buffer
@ 2006-01-27 20:35 Gary Eheman
  2006-01-27 21:33 ` Alasdair G Kergon
  0 siblings, 1 reply; 3+ messages in thread
From: Gary Eheman @ 2006-01-27 20:35 UTC (permalink / raw)
  To: Linux-lvm

I am running SuSE 9.3 PRO.
# uname -r
2.6.11.4-21.9-bigsmp

I pulled the latest CVS trees two days ago for both device-mapper and 
LVM2 because I hit a problem with the versions that came with SuSE 
9.3.  I need to be able to create hundreds (say 400 or 500) of logical 
volumes with the application I am working with.
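For anyone reproducing this without the application, a loop along these 
lines exercises the same path (the LV name and size here are 
illustrative; the trace below shows each LV spanning 234 4MB extents, 
i.e. roughly 936MB):

# for i in $(seq 1 500); do lvcreate -L 936M -n lv$i vgc1; done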

The new CVS code allowed me to create many more volumes than I could 
before switching from the code that shipped with SuSE.

I created a volume group on a 500+GB drive with -M2, omitted 
metadatasize, and let the physical extent size default to 4MB (probably 
dumb given the size of the volumes I am creating).
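For illustration, the creation would have been along these lines (the 
device path is taken from the trace below; everything else was left at 
the defaults):

# pvcreate -M2 /dev/sdc1
# vgcreate -M2 vgc1 /dev/sdc1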

I managed to get about 392 volumes created in the volume group.  Now 
any attempt to create an additional logical volume, or to delete one 
from that group, fails with the message:
"VG vgc1 metadata too large for circular buffer"

# lvremove -vvvvv vgc1/33901_vgc1_392
<lots-of-stuff-snipped>

#metadata/pv_manip.c:241         /dev/sdc1 381:  89154    234: 33901_vgc1_382(0:0)
#metadata/pv_manip.c:241         /dev/sdc1 382:  89388    234: 33901_vgc1_383(0:0)
#metadata/pv_manip.c:241         /dev/sdc1 383:  89622    234: 33901_vgc1_384(0:0)
#metadata/pv_manip.c:241         /dev/sdc1 384:  89856    234: 33901_vgc1_385(0:0)
#metadata/pv_manip.c:241         /dev/sdc1 385:  90090    234: 33901_vgc1_386(0:0)
#metadata/pv_manip.c:241         /dev/sdc1 386:  90324    234: 33901_vgc1_387(0:0)
#metadata/pv_manip.c:241         /dev/sdc1 387:  90558    234: 33901_vgc1_388(0:0)
#metadata/pv_manip.c:241         /dev/sdc1 388:  90792    234: 33901_vgc1_389(0:0)
#metadata/pv_manip.c:241         /dev/sdc1 389:  91026    234: 33901_vgc1_390(0:0)
#metadata/pv_manip.c:241         /dev/sdc1 390:  91260    234: 33901_vgc1_391(0:0)
#metadata/pv_manip.c:241         /dev/sdc1 391:  91494    234: NULL(0:0)
#metadata/pv_manip.c:241         /dev/sdc1 392:  91728  47815: NULL(0:0)
#format_text/export.c:121         Doubling metadata output buffer to 131072
#format_text/format-text.c:408   VG vgc1 metadata too large for circular buffer
#metadata/metadata.c:782         <backtrace>
#locking/file_locking.c:59       Unlocking /var/lock/lvm/V_vgc1
#device/dev-io.c:483         Closed /dev/sdc1
x260:/usr/flexes


I can destroy the data. In fact, that is what I would like to do, but 
softly, using lvremove, vgremove, and pvremove if possible. But I can't 
get around the current error.

Advice, please?  Are there parameters I can give during the creation of 
the group or volume to avoid this?
-- 
Gary Eheman
Fundamental Software, Inc.
http://www.funsoft.com


* Re: [linux-lvm] error msg: VG vgc1 metadata too large for circular buffer
  2006-01-27 20:35 [linux-lvm] error msg: VG vgc1 metadata too large for circular buffer Gary Eheman
@ 2006-01-27 21:33 ` Alasdair G Kergon
  2006-01-27 22:21   ` Gary Eheman
  0 siblings, 1 reply; 3+ messages in thread
From: Alasdair G Kergon @ 2006-01-27 21:33 UTC (permalink / raw)
  To: eheman, LVM general discussion and development

On Fri, Jan 27, 2006 at 03:35:41PM -0500, Gary Eheman wrote:
> I can destroy the data. In fact, that is what I would like to do, but 
> softly, using lvremove, vgremove, and pvremove if possible. But I can't 
> get around the current error.
> 
> Advice, please?  Are there parameters I can give during the creation of 
> the group or volume to avoid this?

As you realised, you need to recreate the PV with a much larger 
metadatasize.

There are no tools yet to manipulate it after it's been created.

If you can throw it away, do that: vgchange -an to deactivate everything
then pvremove -ff to destroy it.
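For example, with the single PV from your trace:

# vgchange -an vgc1
# pvremove -ff /dev/sdc1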

[If you weren't able to do that, there are workarounds involving making
lvm2 stop using that metadata area and using a new larger one somewhere
else instead.]

Alasdair
-- 
agk@redhat.com


* Re: [linux-lvm] error msg: VG vgc1 metadata too large for circular buffer
  2006-01-27 21:33 ` Alasdair G Kergon
@ 2006-01-27 22:21   ` Gary Eheman
  0 siblings, 0 replies; 3+ messages in thread
From: Gary Eheman @ 2006-01-27 22:21 UTC (permalink / raw)
  To: LVM general discussion and development

Alasdair:
Many thanks.  I blew away the PV with your suggested pvremove and 
recreated it specifying a larger metadatasize (16M, since I am 
experimenting), and all worked as desired this time around.
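For the record, the recreation went along these lines (commands 
reconstructed from memory; only the metadatasize differs from the 
original creation):

# pvcreate -M2 --metadatasize 16M /dev/sdc1
# vgcreate -M2 vgc1 /dev/sdc1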


Alasdair G Kergon wrote:
> On Fri, Jan 27, 2006 at 03:35:41PM -0500, Gary Eheman wrote:
>> I can destroy the data. In fact, that is what I would like to do, but 
>> softly, using lvremove, vgremove, and pvremove if possible. But I can't 
>> get around the current error.
>>
>> Advice, please?  Are there parameters I can give during the creation of 
>> the group or volume to avoid this?
> 
> As you realised, you need to recreate the PV with a much larger 
> metadatasize.
> 
> There are no tools yet to manipulate it after it's been created.
> 
> If you can throw it away, do that: vgchange -an to deactivate everything
> then pvremove -ff to destroy it.
> 
> [If you weren't able to do that, there are workarounds involving making
> lvm2 stop using that metadata area and using a new larger one somewhere
> else instead.]
> 
> Alasdair

-- 
Gary Eheman
Fundamental Software, Inc.
http://www.funsoft.com

