linux-lvm.redhat.com archive mirror
* [linux-lvm] LVM1 - "VGDA in kernel and lvmtab are NOT consistent" error following lvm operations
@ 2006-07-28  9:38 Dave
  2006-07-28  9:47 ` [linux-lvm] LVM1 - "VGDA in kernel and lvmtab are NOT consistent"error " Roger Lucas
  0 siblings, 1 reply; 9+ messages in thread
From: Dave @ 2006-07-28  9:38 UTC (permalink / raw)
  To: linux-lvm

Hello,

I've been using LVM 1 (lvm-1.0.8), which comes by default with Red Hat AS3 Update 5, for over a year now and I consistently run into a problem.  I hope there are still some LVM version 1 users out there who have some knowledge about this!!

After a reboot, LVM typically functions as expected; however, after some LVM operations I often get the following sequence:

[root@mucrrp10 vb]# vgdisplay
vgdisplay -- ERROR: VGDA in kernel and lvmtab are NOT consistent; please run vgscan

[root@mucrrp10 vb]# vgscan
vgscan -- reading all physical volumes (this may take a while...)
vgscan -- found active volume group "prdxptux"
vgscan -- found exported volume group "prdtuxPV_EXP"
vgscan -- "/etc/lvmtab" and "/etc/lvmtab.d" successfully created
vgscan -- WARNING: This program does not do a VGDA backup of your volume groups

[root@mucrrp10 vb]# vgdisplay
vgdisplay -- ERROR: VGDA in kernel and lvmtab are NOT consistent; please run vgscan

This is troublesome because it makes it difficult to reliably use certain LVM commands and trust their results.  I need to minimize reboots as much as possible.

Any help in understanding the cause of this problem, and how to resolve or avoid it, is greatly appreciated!

Here is some version info about the system and software (I can provide more information if needed):

[root@mucrrp10 vb]# cat /etc/redhat-release
Red Hat Enterprise Linux AS release 3 (Taroon Update 5)
[root@mucrrp10 vb]# uname -a
Linux mucrrp10 2.4.21-32.ELsmp #1 SMP Fri Apr 15 21:17:59 EDT 2005 i686 i686 i386 GNU/Linux
[root@mucrrp10 vb]# rpm -qa|grep lvm
lvm-1.0.8-12.2
 
Thanks in advance for any assistance.
Regards,
David

^ permalink raw reply	[flat|nested] 9+ messages in thread

* RE: [linux-lvm] LVM1 - "VGDA in kernel and lvmtab are NOT consistent"error following lvm operations
  2006-07-28  9:38 [linux-lvm] LVM1 - "VGDA in kernel and lvmtab are NOT consistent" error following lvm operations Dave
@ 2006-07-28  9:47 ` Roger Lucas
  2006-07-28 10:25   ` Dave
  0 siblings, 1 reply; 9+ messages in thread
From: Roger Lucas @ 2006-07-28  9:47 UTC (permalink / raw)
  To: 'Dave', 'LVM general discussion and development'

Small world - I was chasing a similar problem this morning with LVM2.

I don't know if your problem is the same as mine, but...

In my system I am using LVM within Dom0.  I am then creating "disks" for the
DomUs from LVM logical volumes.  E.g.:

(Hydra = Dom0)

root@hydra:~# pvs
  PV         VG    Fmt  Attr PSize   PFree
  /dev/hda3  xenvg lvm2 a-   176.05G 162.43G
root@hydra:~# vgs
  VG    #PV #LV #SN Attr   VSize   VFree
  xenvg   1   7   0 wz--n- 176.05G 162.43G
root@hydra:~# lvs
  LV           VG    Attr   LSize   Origin Snap%  Move Log Copy%
  backupimage  xenvg -ri-ao 512.00M
  harpseal     xenvg -wi-ao   5.00G
  harpseal-lvm xenvg -wi-ao   1.00G
  octopus      xenvg -wi-ao   1.00G
  octopus-lvm  xenvg -wi-ao   1.00G
  tarantula    xenvg -wi-ao   5.00G
  userdisk     xenvg -wi-a- 128.00M
root@hydra:~# cat /etc/xen/octopus
kernel = "/boot/vmlinuz-2.6.16-xen"
ramdisk = "/boot/initrd.img-2.6.16-xen"
memory = 128
name = "octopus"
# Remember in Xen we are limited to three virtual network
# interfaces per DomU...
vif = ['mac=aa:00:00:00:00:e7,bridge=xenbr0',
'mac=aa:00:00:00:01:01,bridge=xenbr1',
'mac=aa:00:00:00:02:01,bridge=xenbr2']
disk = ['phy:/dev/xenvg/octopus,hda1,w','phy:/dev/xenvg/octopus-lvm,hda2,w']
hostname = "octopus"
root = "/dev/hda1 ro"
extra = "4"
root@hydra:~#

Now, the DomU is also using LVM:

root@octopus:~# pvs
  PV         VG      Fmt  Attr PSize    PFree
  /dev/hda2  storage lvm2 a-   1020.00M 380.00M
root@octopus:~# vgs
  VG      #PV #LV #SN Attr   VSize    VFree
  storage   1   2   0 wz--n- 1020.00M 380.00M
root@octopus:~# lvs
  LV          VG      Attr   LSize   Origin Snap%  Move Log Copy%
  backupimage storage -ri-ao 512.00M
  userdisk    storage -wi-a- 128.00M
root@octopus:~#


Now we have the problem!  When the Dom0 LVM scans for volume groups, it will
look at /dev/hda3 and find "xenvg" and all its LVs.  It will then look
_inside_ those LVs and find the "storage" VG that really belongs to the DomU.
At this point you have two machines accessing the same LVs, which is "bad".

The solution is to restrict the Dom0 LVM to scanning only the devices that we
know belong to it.  This is not the default behaviour - the default behaviour
(at least under Debian/(K)Ubuntu) is to scan pretty much every block device
in /dev - and this is why the problem occurs.

I changed my Dom0 LVM configuration to scan only /dev/hda, as shown below:

root@hydra:~# cat /etc/lvm/lvm.conf
devices {
    dir = "/dev"
    scan = [ "/dev" ]
    filter =[ "a|/dev/hda|", "r|.*|" ]
    cache = "/etc/lvm/.cache"
    write_cache_state = 1
    sysfs_scan = 1
    md_component_detection = 1
}
/snip/
root@hydra:~#
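
After editing the filter it is also worth making LVM forget its old scan
results.  A minimal sketch (the cache path is just the "cache" setting from
the config above - adjust if yours differs):

# drop the cached device list, then rescan with the new filter in force
rm -f /etc/lvm/.cache
vgscan        # rebuilds the cache honouring the filter
pvs           # should now only report PVs on /dev/hda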

I'm not an LVM expert and I cannot tell enough from your e-mail to know if
this is your problem, but hopefully this will help you (or maybe someone
else).

BR,

Roger


> -----Original Message-----
> From: linux-lvm-bounces@redhat.com [mailto:linux-lvm-bounces@redhat.com]
> On Behalf Of Dave
> Sent: 28 July 2006 10:39
> To: linux-lvm@redhat.com
> Subject: [linux-lvm] LVM1 - "VGDA in kernel and lvmtab are NOT
> consistent"error following lvm operations
> 
> Hello,
> 
> I've been using LVM (1.08), which comes by default with RedHat AS3update5,
> for over a year now and am consistently running into a problem.  I hope
> there are still some LVM version 1 users out there who have some knowledge
> about this!!
> 
> After a reboot, LVM typically functions as expected, however, oftentimes,
> after some LVM operations, I get the following sequence:
> 
> [root@mucrrp10 vb]# vgdisplay
> vgdisplay -- ERROR: VGDA in kernel and lvmtab are NOT consistent; please
> run vgscan
> 
> [root@mucrrp10 vb]# vgscan
> vgscan -- reading all physical volumes (this may take a while...)
> vgscan -- found active volume group "prdxptux"
> vgscan -- found exported volume group "prdtuxPV_EXP"
> vgscan -- "/etc/lvmtab" and "/etc/lvmtab.d" successfully created
> vgscan -- WARNING: This program does not do a VGDA backup of your volume
> groups
> 
> [root@mucrrp10 vb]# vgdisplay
> vgdisplay -- ERROR: VGDA in kernel and lvmtab are NOT consistent; please
> run vgscan
> 
> This is troublesome because it makes it difficult to reliably use certain
> LVM commands and trust their results.  I need to minimize reboots as much
> as possible.
> 
> Any help in understanding the cause of this problem, and how to resolve
> it, or avoid it, are greatly appreciated!!
> 
> Here is some version info about the system and software (I can provide
> more information if needed):
> 
> [root@mucrrp10 vb]# cat /etc/redhat-release
>  Red Hat Enterprise Linux AS release 3 (Taroon Update 5)
>  [root@mucrrp10 vb]# uname -a
>  Linux mucrrp10 2.4.21-32.ELsmp #1 SMP Fri Apr 15 21:17:59 EDT 2005 i686
> i686 i386 GNU/Linux
>  [root@mucrrp10 vb]# rpm -qa|grep lvm
>  lvm-1.0.8-12.2
> 
> Thanks in advance for any assistance.
> Regards,
> David
> 
> 
> 
> 
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [linux-lvm] LVM1 - "VGDA in kernel and lvmtab are NOT consistent"error following lvm operations
  2006-07-28  9:47 ` [linux-lvm] LVM1 - "VGDA in kernel and lvmtab are NOT consistent"error " Roger Lucas
@ 2006-07-28 10:25   ` Dave
  2006-07-28 10:36     ` Roger Lucas
  0 siblings, 1 reply; 9+ messages in thread
From: Dave @ 2006-07-28 10:25 UTC (permalink / raw)
  To: Roger Lucas, LVM general discussion and development

Hi Roger,

Thanks for your reply.  I'm not sure if I'm facing the same issue, but I can tell you this...  I have 4 servers arranged as two 2-node clusters: one application cluster and one database cluster.  Both servers in a cluster are attached via a QLogic HBA to the SAN.  Normally only one server in the cluster activates the VGs and mounts the volumes, but we have a failover setup, so that if there is a problem on one machine, that machine unmounts the file systems and deactivates the volume groups, and then the backup machine scans for volumes, activates them and mounts them.  We've tested the failover scenario extensively and it works fine moving the volumes back and forth between the two machines.  But perhaps after two switches, with both machines running "vgscan", some sort of inconsistency is introduced?  Perhaps some info from the scan differs between the machines, which is causing the issue - although I would think that the information should be identical on both servers in the cluster.
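
For illustration only, the failover sequence described above has roughly this
shape (the VG, LV and mount point names here are placeholders, not our real
ones):

# on the node giving up the storage
umount /data                  # unmount every filesystem on the VG
vgchange -a n prodvg          # deactivate the volume group

# on the node taking over
vgscan                        # rebuild /etc/lvmtab from the shared disks
vgchange -a y prodvg          # activate the volume group
mount /dev/prodvg/datalv /data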

Any additional thoughts on that?

Thanks,
Dave

----- Original Message ----
From: Roger Lucas <roger@planbit.co.uk>
To: Dave <davo_muc@yahoo.com>; LVM general discussion and development <linux-lvm@redhat.com>
Sent: Friday, July 28, 2006 11:47:17 AM
Subject: RE: [linux-lvm] LVM1 - "VGDA in kernel and lvmtab are NOT consistent"error following lvm operations

Small world - I was chasing a similar problem this morning with LVM2.

I don't know if your problem is the same as mine, but...

In my system I am using LVM within Dom0.  I am then creating "disks" for the
DomUs from the LVM partitions.  E.g.

(Hydra = Dom0)

root@hydra:~# pvs
  PV         VG    Fmt  Attr PSize   PFree
  /dev/hda3  xenvg lvm2 a-   176.05G 162.43G
root@hydra:~# vgs
  VG    #PV #LV #SN Attr   VSize   VFree
  xenvg   1   7   0 wz--n- 176.05G 162.43G
root@hydra:~# lvs
  LV           VG    Attr   LSize   Origin Snap%  Move Log Copy%
  backupimage  xenvg -ri-ao 512.00M
  harpseal     xenvg -wi-ao   5.00G
  harpseal-lvm xenvg -wi-ao   1.00G
  octopus      xenvg -wi-ao   1.00G
  octopus-lvm  xenvg -wi-ao   1.00G
  tarantula    xenvg -wi-ao   5.00G
  userdisk     xenvg -wi-a- 128.00M
root@hydra:~# cat /etc/xen/octopus
kernel = "/boot/vmlinuz-2.6.16-xen"
ramdisk = "/boot/initrd.img-2.6.16-xen"
memory = 128
name = "octopus"
# Remember in Xen we are limited to three virtual network interfaces per
DomU...
vif = ['mac=aa:00:00:00:00:e7,bridge=xenbr0',
'mac=aa:00:00:00:01:01,bridge=xenbr1',
'mac=aa:00:00:00:02:01,bridge=xenbr2']
disk = ['phy:/dev/xenvg/octopus,hda1,w','phy:/dev/xenvg/octopus-lvm,hda2,w']
hostname = "octopus"
root = "/dev/hda1 ro"
extra = "4"
root@hydra:~#

Now, the DomU is also using LVM:

root@octopus:~# pvs
  PV         VG      Fmt  Attr PSize    PFree
  /dev/hda2  storage lvm2 a-   1020.00M 380.00M
root@octopus:~# vgs
  VG      #PV #LV #SN Attr   VSize    VFree
  storage   1   2   0 wz--n- 1020.00M 380.00M
root@octopus:~# lvs
  LV          VG      Attr   LSize   Origin Snap%  Move Log Copy%
  backupimage storage -ri-ao 512.00M
  userdisk    storage -wi-a- 128.00M
root@octopus:~#


Now we have the problem!  When the Dom0 scans for LVs, it will look on
/dev/hda3 and find the "xenvg" and all its LVs.  It will then look _inside_
these LVs and find the "/dev/storage" group that really belongs to the DomU.
At this point, you have two machines accessing the same LV, which is "bad".

The solution is to restrict the Dom0 LVM to only use the devices that we
know are for scanning.  This is not the default behaviour - the default
behaviour (at least under Debian/(K)Ubuntu) is to scan pretty much every
block device in /dev - and this why the problem occurs.

I changed by Dom0 LVM configuration to only scan /dev/hda as below.

root@hydra:~# cat /etc/lvm/lvm.conf
devices {
    dir = "/dev"
    scan = [ "/dev" ]
    filter =[ "a|/dev/hda|", "r|.*|" ]
    cache = "/etc/lvm/.cache"
    write_cache_state = 1
    sysfs_scan = 1
    md_component_detection = 1
}
/snip/
root@hydra:~#

I'm not an LVM expert and I cannot tell enough from your e-mail to know if
this is your problem, but hopefully this will help you (or maybe someone
else).

BR,

Roger


> -----Original Message-----
> From: linux-lvm-bounces@redhat.com [mailto:linux-lvm-bounces@redhat.com]
> On Behalf Of Dave
> Sent: 28 July 2006 10:39
> To: linux-lvm@redhat.com
> Subject: [linux-lvm] LVM1 - "VGDA in kernel and lvmtab are NOT
> consistent"error following lvm operations
> 
> Hello,
> 
> I've been using LVM (1.08), which comes by default with RedHat AS3update5,
> for over a year now and am consistently running into a problem.  I hope
> there are still some LVM version 1 users out there who have some knowledge
> about this!!
> 
> After a reboot, LVM typically functions as expected, however, oftentimes,
> after some LVM operations, I get the following sequence:
> 
> [root@mucrrp10 vb]# vgdisplay
> vgdisplay -- ERROR: VGDA in kernel and lvmtab are NOT consistent; please
> run vgscan
> 
> [root@mucrrp10 vb]# vgscan
> vgscan -- reading all physical volumes (this may take a while...)
> vgscan -- found active volume group "prdxptux"
> vgscan -- found exported volume group "prdtuxPV_EXP"
> vgscan -- "/etc/lvmtab" and "/etc/lvmtab.d" successfully created
> vgscan -- WARNING: This program does not do a VGDA backup of your volume
> groups
> 
> [root@mucrrp10 vb]# vgdisplay
> vgdisplay -- ERROR: VGDA in kernel and lvmtab are NOT consistent; please
> run vgscan
> 
> This is troublesome because it makes it difficult to reliably use certain
> LVM commands and trust their results.  I need to minimize reboots as much
> as possible.
> 
> Any help in understanding the cause of this problem, and how to resolve
> it, or avoid it, are greatly appreciated!!
> 
> Here is some version info about the system and software (I can provide
> more information if needed):
> 
> [root@mucrrp10 vb]# cat /etc/redhat-release
>  Red Hat Enterprise Linux AS release 3 (Taroon Update 5)
>  [root@mucrrp10 vb]# uname -a
>  Linux mucrrp10 2.4.21-32.ELsmp #1 SMP Fri Apr 15 21:17:59 EDT 2005 i686
> i686 i386 GNU/Linux
>  [root@mucrrp10 vb]# rpm -qa|grep lvm
>  lvm-1.0.8-12.2
> 
> Thanks in advance for any assistance.
> Regards,
> David
> 
> 
> 
> 
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

^ permalink raw reply	[flat|nested] 9+ messages in thread

* RE: [linux-lvm] LVM1 - "VGDA in kernel and lvmtab are NOT consistent"error following lvm operations
  2006-07-28 10:25   ` Dave
@ 2006-07-28 10:36     ` Roger Lucas
  2006-07-31  8:50       ` Dave
  0 siblings, 1 reply; 9+ messages in thread
From: Roger Lucas @ 2006-07-28 10:36 UTC (permalink / raw)
  To: 'Dave', 'LVM general discussion and development'

Errr, no.  Your config is way more sophisticated than ours.  Our LVM
problems appeared because we were using Xen virtualisation (hence the
Dom0/DomU description) and managed to get two virtual machines accessing the
same VG simultaneously.  Apart from that, LVM has worked extremely well for
us (but we are running the latest dev version of LVM2 rather than LVM1).

I think you will need the help of someone who really understands the guts of
LVM rather than a mere user such as myself :-)

> -----Original Message-----
> From: Dave [mailto:davo_muc@yahoo.com]
> Sent: 28 July 2006 11:26
> To: Roger Lucas; LVM general discussion and development
> Subject: Re: [linux-lvm] LVM1 - "VGDA in kernel and lvmtab are NOT
> consistent"error following lvm operations
> 
> Hi Roger,
> 
> Thanks for your reply.  I'm not sure if I'm facing the same issue, but I
> can tell you this...  I have 4 servers, 2 sets of 2 node clusters.  One
> application cluster of 2 servers, and one database cluster of servers.
> Both servers in a cluster are attached via a QLogic HBA card to the SAN.
> The setup is such that normally only one server in the cluster activates
> the VGs and mounts the volumes, but we have a failover setup, so that if
> there is a problem on one machine, that machine unmounts the file systems,
> deactivates the volumes, and then the backup machine scans for volumes,
> activates them and mounts them.  We've tested the failover scenario
> extensively and it works fine moving the volumes back and forth between
> the 2 machines.  But, perhaps after 2 switches and 2 machines doing a
> "vgscan", some sort of inconsistency is caused?!?!  Perhaps some info is
> different from the scan on each machine, which is causing the issue.  But,
> I would think that the information should be
>  identical on both servers in the cluster.
> 
> Any additional thoughts on that?
> 
> Thanks,
> Dave
> 
> ----- Original Message ----
> From: Roger Lucas <roger@planbit.co.uk>
> To: Dave <davo_muc@yahoo.com>; LVM general discussion and development
> <linux-lvm@redhat.com>
> Sent: Friday, July 28, 2006 11:47:17 AM
> Subject: RE: [linux-lvm] LVM1 - "VGDA in kernel and lvmtab are NOT
> consistent"error following lvm operations
> 
> Small world - I was chasing a similar problem this morning with LVM2.
> 
> I don't know if your problem is the same as mine, but...
> 
> In my system I am using LVM within Dom0.  I am then creating "disks" for
> the
> DomUs from the LVM partitions.  E.g.
> 
> (Hydra = Dom0)
> 
> root@hydra:~# pvs
>   PV         VG    Fmt  Attr PSize   PFree
>   /dev/hda3  xenvg lvm2 a-   176.05G 162.43G
> root@hydra:~# vgs
>   VG    #PV #LV #SN Attr   VSize   VFree
>   xenvg   1   7   0 wz--n- 176.05G 162.43G
> root@hydra:~# lvs
>   LV           VG    Attr   LSize   Origin Snap%  Move Log Copy%
>   backupimage  xenvg -ri-ao 512.00M
>   harpseal     xenvg -wi-ao   5.00G
>   harpseal-lvm xenvg -wi-ao   1.00G
>   octopus      xenvg -wi-ao   1.00G
>   octopus-lvm  xenvg -wi-ao   1.00G
>   tarantula    xenvg -wi-ao   5.00G
>   userdisk     xenvg -wi-a- 128.00M
> root@hydra:~# cat /etc/xen/octopus
> kernel = "/boot/vmlinuz-2.6.16-xen"
> ramdisk = "/boot/initrd.img-2.6.16-xen"
> memory = 128
> name = "octopus"
> # Remember in Xen we are limited to three virtual network interfaces per
> DomU...
> vif = ['mac=aa:00:00:00:00:e7,bridge=xenbr0',
> 'mac=aa:00:00:00:01:01,bridge=xenbr1',
> 'mac=aa:00:00:00:02:01,bridge=xenbr2']
> disk = ['phy:/dev/xenvg/octopus,hda1,w','phy:/dev/xenvg/octopus-
> lvm,hda2,w']
> hostname = "octopus"
> root = "/dev/hda1 ro"
> extra = "4"
> root@hydra:~#
> 
> Now, the DomU is also using LVM:
> 
> root@octopus:~# pvs
>   PV         VG      Fmt  Attr PSize    PFree
>   /dev/hda2  storage lvm2 a-   1020.00M 380.00M
> root@octopus:~# vgs
>   VG      #PV #LV #SN Attr   VSize    VFree
>   storage   1   2   0 wz--n- 1020.00M 380.00M
> root@octopus:~# lvs
>   LV          VG      Attr   LSize   Origin Snap%  Move Log Copy%
>   backupimage storage -ri-ao 512.00M
>   userdisk    storage -wi-a- 128.00M
> root@octopus:~#
> 
> 
> Now we have the problem!  When the Dom0 scans for LVs, it will look on
> /dev/hda3 and find the "xenvg" and all its LVs.  It will then look
> _inside_
> these LVs and find the "/dev/storage" group that really belongs to the
> DomU.
> At this point, you have two machines accessing the same LV, which is
> "bad".
> 
> The solution is to restrict the Dom0 LVM to only use the devices that we
> know are for scanning.  This is not the default behaviour - the default
> behaviour (at least under Debian/(K)Ubuntu) is to scan pretty much every
> block device in /dev - and this why the problem occurs.
> 
> I changed by Dom0 LVM configuration to only scan /dev/hda as below.
> 
> root@hydra:~# cat /etc/lvm/lvm.conf
> devices {
>     dir = "/dev"
>     scan = [ "/dev" ]
>     filter =[ "a|/dev/hda|", "r|.*|" ]
>     cache = "/etc/lvm/.cache"
>     write_cache_state = 1
>     sysfs_scan = 1
>     md_component_detection = 1
> }
> /snip/
> root@hydra:~#
> 
> I'm not an LVM expert and I cannot tell enough from your e-mail to know if
> this is your problem, but hopefully this will help you (or maybe someone
> else).
> 
> BR,
> 
> Roger
> 
> 
> > -----Original Message-----
> > From: linux-lvm-bounces@redhat.com [mailto:linux-lvm-bounces@redhat.com]
> > On Behalf Of Dave
> > Sent: 28 July 2006 10:39
> > To: linux-lvm@redhat.com
> > Subject: [linux-lvm] LVM1 - "VGDA in kernel and lvmtab are NOT
> > consistent"error following lvm operations
> >
> > Hello,
> >
> > I've been using LVM (1.08), which comes by default with RedHat
> AS3update5,
> > for over a year now and am consistently running into a problem.  I hope
> > there are still some LVM version 1 users out there who have some
> knowledge
> > about this!!
> >
> > After a reboot, LVM typically functions as expected, however,
> oftentimes,
> > after some LVM operations, I get the following sequence:
> >
> > [root@mucrrp10 vb]# vgdisplay
> > vgdisplay -- ERROR: VGDA in kernel and lvmtab are NOT consistent; please
> > run vgscan
> >
> > [root@mucrrp10 vb]# vgscan
> > vgscan -- reading all physical volumes (this may take a while...)
> > vgscan -- found active volume group "prdxptux"
> > vgscan -- found exported volume group "prdtuxPV_EXP"
> > vgscan -- "/etc/lvmtab" and "/etc/lvmtab.d" successfully created
> > vgscan -- WARNING: This program does not do a VGDA backup of your volume
> > groups
> >
> > [root@mucrrp10 vb]# vgdisplay
> > vgdisplay -- ERROR: VGDA in kernel and lvmtab are NOT consistent; please
> > run vgscan
> >
> > This is troublesome because it makes it difficult to reliably use
> certain
> > LVM commands and trust their results.  I need to minimize reboots as
> much
> > as possible.
> >
> > Any help in understanding the cause of this problem, and how to resolve
> > it, or avoid it, are greatly appreciated!!
> >
> > Here is some version info about the system and software (I can provide
> > more information if needed):
> >
> > [root@mucrrp10 vb]# cat /etc/redhat-release
> >  Red Hat Enterprise Linux AS release 3 (Taroon Update 5)
> >  [root@mucrrp10 vb]# uname -a
> >  Linux mucrrp10 2.4.21-32.ELsmp #1 SMP Fri Apr 15 21:17:59 EDT 2005 i686
> > i686 i386 GNU/Linux
> >  [root@mucrrp10 vb]# rpm -qa|grep lvm
> >  lvm-1.0.8-12.2
> >
> > Thanks in advance for any assistance.
> > Regards,
> > David
> >
> >
> >
> >
> > _______________________________________________
> > linux-lvm mailing list
> > linux-lvm@redhat.com
> > https://www.redhat.com/mailman/listinfo/linux-lvm
> > read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
> 
> 
> 

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [linux-lvm] LVM1 - "VGDA in kernel and lvmtab are NOT consistent"error following lvm operations
  2006-07-28 10:36     ` Roger Lucas
@ 2006-07-31  8:50       ` Dave
  2006-07-31 10:50         ` Dave
  2006-07-31 14:09         ` Heinz Mauelshagen
  0 siblings, 2 replies; 9+ messages in thread
From: Dave @ 2006-07-31  8:50 UTC (permalink / raw)
  To: LVM general discussion and development



Hello,

I have slightly more info re: this error message I'm seeing.  After running vgdisplay and getting the message:

    VGDA in kernel and lvmtab are NOT consistent

I checked $? to see the exit code; it was 98, which the vgdisplay man page explains as:

    invalid lvmtab (run vgscan(8))

How does the lvmtab become invalid?

I do not want to have to reboot (because this would cause an outage for our customers), but I know from previous experience that the error message will disappear following a reboot.  I'm also hesitant to add new disks, VGs, LVs, and file systems until I can run vgdisplay without an error.  Should I be concerned about this message?
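
Some read-only commands that should show both sides of the inconsistency
(just a sketch; the strings output on the binary lvmtab may include extra
data besides VG names):

vgdisplay; echo "exit: $?"     # 98 = invalid lvmtab per vgdisplay(8)
cat /proc/lvm/global           # what the kernel currently has active
strings /etc/lvmtab            # which VG names the lvmtab file records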


Thanks!
Dave

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [linux-lvm] LVM1 - "VGDA in kernel and lvmtab are NOT consistent"error following lvm operations
  2006-07-31  8:50       ` Dave
@ 2006-07-31 10:50         ` Dave
  2006-07-31 14:09         ` Heinz Mauelshagen
  1 sibling, 0 replies; 9+ messages in thread
From: Dave @ 2006-07-31 10:50 UTC (permalink / raw)
  To: Dave; +Cc: Linux-LVM

Hi again,

Some more information re: this problem.  On the 2 (clustered) systems currently displaying this error, I see four disks which are reported by pvscan as "inactive" and "exported".  They are reported this way by both hosts; however, the file systems that rely on these four disks are actually mounted and in use on one system.

Thus, my question is: how can I "alert" LVM to the real status of these disks and have it update its status information?  If I run vgimport or vgchange to try and get the status right, will that affect the already mounted file systems (which rely on those volume groups)?

Thanks,
Dave


----- Original Message ----
From: Dave <davo_muc@yahoo.com>
To: LVM general discussion and development <linux-lvm@redhat.com>
Sent: Monday, July 31, 2006 10:50:16 AM
Subject: Re: [linux-lvm] LVM1 - "VGDA in kernel and lvmtab are NOT consistent"error following lvm operations



Hello,

I have slightly more info re: this error message I'm seeing.  After running vgdisplay and getting the message:

    VGDA in kernel and lvmtab are NOT consistent

I issued an $? to see the exit code and it was 98, which is explained as:

    invalid lvmtab (run vgscan(8))

in the vgdisplay man page.  How does the lvmtab become invalid?  

I do not want to have to reboot (because this will cause an outage for our customers), but I know from previous experience that the error message will disappear following a reboot.  I'm also hesitant to add new disks, vg's, lv's, and file systems until I can run vgdisplay without an error.  Should I be concerned re: this message???


Thanks!
Dave


_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [linux-lvm] LVM1 - "VGDA in kernel and lvmtab are NOT consistent"error following lvm operations
  2006-07-31  8:50       ` Dave
  2006-07-31 10:50         ` Dave
@ 2006-07-31 14:09         ` Heinz Mauelshagen
  2006-07-31 15:16           ` Dave
  2006-07-31 16:54           ` Dave
  1 sibling, 2 replies; 9+ messages in thread
From: Heinz Mauelshagen @ 2006-07-31 14:09 UTC (permalink / raw)
  To: Dave, LVM general discussion and development

On Mon, Jul 31, 2006 at 01:50:16AM -0700, Dave wrote:
> 
> 
> Hello,
> 
> I have slightly more info re: this error message I'm seeing.  After running vgdisplay and getting the message:
> 
>     VGDA in kernel and lvmtab are NOT consistent
> 
> I issued an $? to see the exit code and it was 98, which is explained as:
> 
>     invalid lvmtab (run vgscan(8))

E.g. because your root fs is full.
FYI: all LVM commands that change the configuration rewrite the lvmtab.

Have you tried running vgscan yet?
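
A quick sanity check along those lines might be (a sketch, not specific to
your setup):

df -h /etc ; df -i /etc        # space and inodes on the fs holding lvmtab
vgscan                         # rebuild /etc/lvmtab and /etc/lvmtab.d
vgdisplay ; echo "exit: $?"    # retest; 0 means the mismatch is gone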

Heinz

> 
> in the vgdisplay man page.  How does the lvmtab become invalid?  
> 
> I do not want to have to reboot (because this will cause an outage for our customers), but I know from previous experience that the error message will disappear following a reboot.  I'm also hesitant to add new disks, vg's, lv's, and file systems until I can run vgdisplay without an error.  Should I be concerned re: this message???
> 
> 
> Thanks!
> Dave
> 
> 
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

Heinz Mauelshagen                                 Red Hat GmbH
Consulting Development Engineer                   Am Sonnenhang 11
Storage Development                               56242 Marienrachdorf
                                                  Germany
Mauelshagen@RedHat.com                            PHONE +49  171 7803392
                                                  FAX   +49 2626 924446
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [linux-lvm] LVM1 - "VGDA in kernel and lvmtab are NOT consistent"error following lvm operations
  2006-07-31 14:09         ` Heinz Mauelshagen
@ 2006-07-31 15:16           ` Dave
  2006-07-31 16:54           ` Dave
  1 sibling, 0 replies; 9+ messages in thread
From: Dave @ 2006-07-31 15:16 UTC (permalink / raw)
  To: mauelshagen, LVM general discussion and development

Hi Heinz,

Thanks for your message.  To answer your questions...

The root filesystem has over 50 GB available.  I ran vgscan, but that did not improve the situation.  What we did to get vgdisplay to stop reporting an error was:

1)  modify the /etc/lvmtab file to include the volume group which LVM claims is inactive and exported (even though its file systems are mounted and in use)

2)  cp /etc/lvmconf/${vg}.conf /etc/lvmtab.d/${vg}

3)  after steps 1 and 2, vgdisplay shows information for both defined volume groups; however, pvscan still shows one volume group as inactive and exported; we determined this must be due to a flag on the disks themselves, because the /proc/lvm kernel info seems to show both volume groups as active (see the sketch below)
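
In shell terms the workaround was roughly the following (the VG name is a
placeholder for the affected volume group, and /etc/lvmtab is a binary file,
so step 1 has to be done with care):

VG=myvg                                   # placeholder name
# step 2 from the list above, generalised
cp /etc/lvmconf/${VG}.conf /etc/lvmtab.d/${VG}
# verify what userspace and the kernel now report
vgdisplay ${VG}
pvscan
cat /proc/lvm/global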

Any other info on why the volume group status is not kept accurate by LVM is appreciated!  Is this perhaps a bug in LVM 1.0.8?

Thanks!
Dave

----- Original Message ----
From: Heinz Mauelshagen <mauelshagen@redhat.com>
To: Dave <davo_muc@yahoo.com>; LVM general discussion and development <linux-lvm@redhat.com>
Sent: Monday, July 31, 2006 4:09:20 PM
Subject: Re: [linux-lvm] LVM1 - "VGDA in kernel and lvmtab are NOT consistent"error following lvm operations

On Mon, Jul 31, 2006 at 01:50:16AM -0700, Dave wrote:
> 
> 
> Hello,
> 
> I have slightly more info re: this error message I'm seeing.  After running vgdisplay and getting the message:
> 
>     VGDA in kernel and lvmtab are NOT consistent
> 
> I issued an $? to see the exit code and it was 98, which is explained as:
> 
>     invalid lvmtab (run vgscan(8))

E.g. because your root fs is full.
FYI: all lvm commands changing the configuration write the lvmtab.

Have you tried running vgscan yet ?

Heinz

> 
> in the vgdisplay man page.  How does the lvmtab become invalid?  
> 
> I do not want to have to reboot (because this will cause an outage for our customers), but I know from previous experience that the error message will disappear following a reboot.  I'm also hesitant to add new disks, vg's, lv's, and file systems until I can run vgdisplay without an error.  Should I be concerned re: this message???
> 
> 
> Thanks!
> Dave
> 
> 
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

Heinz Mauelshagen                                 Red Hat GmbH
Consulting Development Engineer                   Am Sonnenhang 11
Storage Development                               56242 Marienrachdorf
                                                  Germany
Mauelshagen@RedHat.com                            PHONE +49  171 7803392
                                                  FAX   +49 2626 924446
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [linux-lvm] LVM1 - "VGDA in kernel and lvmtab are NOT consistent"error following lvm operations
  2006-07-31 14:09         ` Heinz Mauelshagen
  2006-07-31 15:16           ` Dave
@ 2006-07-31 16:54           ` Dave
  1 sibling, 0 replies; 9+ messages in thread
From: Dave @ 2006-07-31 16:54 UTC (permalink / raw)
  To: mauelshagen, LVM general discussion and development

Hello Heinz and any others who might know the answer to this question,

One other question...  As I mentioned in a separate message, I currently have a volume group which is seen as "inactive" and "exported" by two separate systems, both of which have physical access to the volume group over Fibre Channel to a SAN.  In this two-node cluster, one node should always have the volume group "active".  In fact, one node currently does have the volume group in use, because its file systems are mounted and visible with df.

I would like to make sure that the status displayed by pvscan is corrected to "active" on that node.  Is it safe to run "vgimport" and "vgchange -ay" on the volume group that is in use, even though pvscan shows it as "inactive" and "exported"?  Will this cause any problem for the file system that is running on top of that volume group?
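
Before changing anything I can at least gather the current state with
read-only commands along these lines (the VG and LV names are placeholders):

pvscan                                     # userspace view of PV/VG status
vgdisplay myvg                             # placeholder VG name
cat /proc/lvm/global                       # kernel view of what is active
lvdisplay /dev/myvg/datalv | grep -i open  # placeholder LV; shows open count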

Thanks!!
Dave


----- Original Message ----
From: Heinz Mauelshagen <mauelshagen@redhat.com>
To: Dave <davo_muc@yahoo.com>; LVM general discussion and development <linux-lvm@redhat.com>
Sent: Monday, July 31, 2006 4:09:20 PM
Subject: Re: [linux-lvm] LVM1 - "VGDA in kernel and lvmtab are NOT consistent"error following lvm operations

On Mon, Jul 31, 2006 at 01:50:16AM -0700, Dave wrote:
> 
> 
> Hello,
> 
> I have slightly more info re: this error message I'm seeing.  After running vgdisplay and getting the message:
> 
>     VGDA in kernel and lvmtab are NOT consistent
> 
> I issued an $? to see the exit code and it was 98, which is explained as:
> 
>     invalid lvmtab (run vgscan(8))

E.g. because your root fs is full.
FYI: all lvm commands changing the configuration write the lvmtab.

Have you tried running vgscan yet ?

Heinz

> 
> in the vgdisplay man page.  How does the lvmtab become invalid?  
> 
> I do not want to have to reboot (because this will cause an outage for our customers), but I know from previous experience that the error message will disappear following a reboot.  I'm also hesitant to add new disks, vg's, lv's, and file systems until I can run vgdisplay without an error.  Should I be concerned re: this message???
> 
> 
> Thanks!
> Dave
> 
> 
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

Heinz Mauelshagen                                 Red Hat GmbH
Consulting Development Engineer                   Am Sonnenhang 11
Storage Development                               56242 Marienrachdorf
                                                  Germany
Mauelshagen@RedHat.com                            PHONE +49  171 7803392
                                                  FAX   +49 2626 924446
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2006-07-31 16:55 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2006-07-28  9:38 [linux-lvm] LVM1 - "VGDA in kernel and lvmtab are NOT consistent" error following lvm operations Dave
2006-07-28  9:47 ` [linux-lvm] LVM1 - "VGDA in kernel and lvmtab are NOT consistent"error " Roger Lucas
2006-07-28 10:25   ` Dave
2006-07-28 10:36     ` Roger Lucas
2006-07-31  8:50       ` Dave
2006-07-31 10:50         ` Dave
2006-07-31 14:09         ` Heinz Mauelshagen
2006-07-31 15:16           ` Dave
2006-07-31 16:54           ` Dave

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).