* [linux-lvm] LVM2 crash - unable to get the volume group up and running again
From: Jian Xu @ 2005-11-13 23:05 UTC
To: linux-lvm
Hi,
My LVM2 volume just stopped working out of the blue. A while ago I added a new
disk to the volume group, added it to a logical volume, and used xfs_growfs to
extend the filesystem.
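(For reference, the sequence was roughly the standard one; I'm writing it from
memory, so the extent count and mount point below are only illustrative:)
# pvcreate /dev/hdd1
# vgextend vg /dev/hdd1
# lvextend -l +47695 /dev/vg/public   # the 47695 extents pvdisplay now reports for hdd1
# xfs_growfs /mnt/public              # mount point of the XFS filesystem on the LV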
All of a sudden, a few weeks later, it stopped working. Upon reboot I get this
error message:
device-mapper: device /dev/hdd1 too small for target
device-mapper: dm-linear: Device lookup failed
device-mapper: error adding target to table
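If I understand the first message correctly, the kernel now sees /dev/hdd1 as
smaller than the mapping table expects: LVM thinks hdd1 holds 47695 extents of
4 MB, i.e. about 186.3 GB plus a little metadata overhead. In case it helps, I
assume the two sizes could be compared directly with something like (untested):
# blockdev --getsize64 /dev/hdd1   # size in bytes as the kernel sees it
# fdisk -l /dev/hdd                # partition table of the disk
# pvdisplay /dev/hdd1              # size LVM expects (Total PE x PE Size)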
I have a lot of important data on these disks. Is there any way to get the
volume up and running again, or to recover some data from the individual
disks? The problem seems to be with the /dev/hdd disk. I've tried searching on
Google, but haven't found anything specific about a problem like mine.
I run Gentoo with the 2.6.13-gentoo-r5 kernel.
LVM version: 2.01.09 (2005-04-04)
Library version: 1.01.03 (2005-06-13)
Driver version: 4.4.0
device-mapper: 4.4.0-ioctl (2005-01-12) initialised: dm-devel@redhat.com
# pvscan
PV /dev/hda4 VG vg lvm2 [101.94 GB / 0 free]
PV /dev/hdb1 VG vg lvm2 [111.79 GB / 0 free]
PV /dev/hdc1 VG vg lvm2 [186.30 GB / 0 free]
PV /dev/hdd1 VG vg lvm2 [186.31 GB / 0 free]
PV /dev/hdh1 VG vg lvm2 [186.31 GB / 0 free]
PV /dev/hdg1 VG vg lvm2 [186.31 GB / 0 free]
PV /dev/hde VG vg lvm2 [232.88 GB / 0 free]
Total: 7 [1.16 TB] / in use: 7 [1.16 TB] / in no VG: 0 [0 ]
# vgscan
Reading all physical volumes. This may take a while...
Found volume group "vg" using metadata type lvm2
pvscan and vgscan appear to run fine, but when I try to activate the
volume group, I get the following error:
# vgchange -ay
device-mapper ioctl cmd 9 failed: Invalid argument
Couldn't load device 'vg-public'.
1 logical volume(s) in volume group "vg" now active
(and when I do this, device-mapper again logs the three error lines I pasted
earlier in this mail)
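For completeness, I assume the segment layout of the failing LV (which extents
sit on which PV) can still be listed from the metadata without activating it,
and that a more verbose activation attempt might show which segment trips over
hdd1:
# lvdisplay -m /dev/vg/public
# vgchange -ay -vvvv vg 2> vgchange.log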
Below is the output that pvs, vgdisplay and pvdisplay give me:
# pvs
PV VG Fmt Attr PSize PFree
/dev/hda4 vg lvm2 a- 101.94G 0
/dev/hdb1 vg lvm2 a- 111.79G 0
/dev/hdc1 vg lvm2 a- 186.30G 0
/dev/hdd1 vg lvm2 a- 186.31G 0
/dev/hde vg lvm2 a- 232.88G 0
/dev/hdg1 vg lvm2 a- 186.31G 0
/dev/hdh1 vg lvm2 a- 186.31G 0
# vgdisplay
--- Volume group ---
VG Name vg
System ID
Format lvm2
Metadata Areas 7
Metadata Sequence No 37
VG Access read/write
VG Status resizable
MAX LV 7
Cur LV 1
Open LV 0
Max PV 255
Cur PV 7
Act PV 7
VG Size 1.16 TB
PE Size 4.00 MB
Total PE 305111
Alloc PE / Size 305111 / 1.16 TB
Free PE / Size 0 / 0
VG UUID LQ6EXt-RBjK-qqBB-tf7M-vXwL-pHZR-uGN63q
# pvdisplay
--- Physical volume ---
PV Name /dev/hda4
VG Name vg
PV Size 101.94 GB / not usable 0
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 26097
Free PE 0
Allocated PE 26097
PV UUID rN48fI-G7jz-BR60-tIDi-Bh9Z-jNGu-OY8xLg
--- Physical volume ---
PV Name /dev/hdb1
VG Name vg
PV Size 111.79 GB / not usable 0
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 28617
Free PE 0
Allocated PE 28617
PV UUID bZWnTI-vk9p-LFv6-ZsNB-d5Kb-5oNz-6oxr4U
--- Physical volume ---
PV Name /dev/hdc1
VG Name vg
PV Size 186.30 GB / not usable 0
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 47694
Free PE 0
Allocated PE 47694
PV UUID tXAjzt-nKR5-nZ42-g2Md-bfQH-PGta-w54Zjm
--- Physical volume ---
PV Name /dev/hdd1
VG Name vg
PV Size 186.31 GB / not usable 0
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 47695
Free PE 0
Allocated PE 47695
PV UUID P67tnA-CUSV-41XG-RkTY-JfVd-0Zvw-nXY5T9
--- Physical volume ---
PV Name /dev/hdh1
VG Name vg
PV Size 186.31 GB / not usable 0
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 47695
Free PE 0
Allocated PE 47695
PV UUID ux9OP1-oXdK-I5Kk-Ua0i-hlKk-Qayr-lMEddZ
--- Physical volume ---
PV Name /dev/hdg1
VG Name vg
PV Size 186.31 GB / not usable 0
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 47695
Free PE 0
Allocated PE 47695
PV UUID EDySOS-lUhE-Z7mp-fiGq-HGm0-rN2k-RXO7cO
--- Physical volume ---
PV Name /dev/hde
VG Name vg
PV Size 232.88 GB / not usable 0
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 59618
Free PE 0
Allocated PE 59618
PV UUID 8lQNTw-923u-WoOB-wQn9-Zbnw-rZ1c-pDR3d3
--
Jian Xu
* Re: [linux-lvm] LVM2 crash - unable to get the volume group up and running again
From: Dieter Stüken @ 2005-11-14 7:57 UTC
To: LVM general discussion and development
Hi Jian,
have a look into /etc/lvm/archive/ and /etc/lvm/backup/ to understand
how your VGs and LVs are organized and how they changed during your last
LVM actions. Maybe this helps in understanding the problem.
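For example (untested, and the archive file names below are only placeholders;
the real ones are named <vgname>_<seqno>-<random>.vg):
# ls -lrt /etc/lvm/archive/
# diff -u /etc/lvm/archive/vg_00036-*.vg /etc/lvm/archive/vg_00037-*.vg
The files are plain text, so you can see exactly which PVs and segments each
LV used before and after the extension. If the metadata itself turns out to be
the problem (and only if you are sure the data on the disks is still intact),
vgcfgrestore can write an older version back, e.g.:
# vgcfgrestore -f /etc/lvm/archive/vg_00036-*.vg vg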
Dieter.
--
Dieter Stüken, con terra GmbH, Münster
stueken@conterra.de
http://www.conterra.de/
(0)251-7474-501