Linux LVM users
* [linux-lvm] pvscan fails
@ 2002-08-25 22:39 Todd Troxell
  2002-08-26  5:20 ` Heinz J. Mauelshagen
  0 siblings, 1 reply; 9+ messages in thread
From: Todd Troxell @ 2002-08-25 22:39 UTC
  To: linux-lvm

[-- Attachment #1: Type: text/plain, Size: 529 bytes --]

Hello,

I'm having trouble setting up physical volumes.

caffeine:~# pvcreate /dev/hdd1
pvcreate -- physical volume "/dev/hdd1" successfully created

caffeine:~# pvscan
pvscan -- reading all physical volumes (this may take a while...)
pvscan -- ERROR "pv_read(): read" reading physical volumes

After looking at the debug messages, I think it is related to my IDE CD-ROM (/dev/hdc).

Does this seem to be the case?  Is there a way to ignore /dev/hdc?

(attached is the output from pvscan -d; kernel 2.2.18, LVM 1.0.4)

-Todd

p.s. please cc: 

[-- Attachment #2: pvscan_debug --]
[-- Type: text/plain, Size: 11190 bytes --]

<1> lvm_get_iop_version -- CALLED
<22> lvm_check_special -- CALLED
<22> lvm_check_special -- LEAVING
<1> lvm_get_iop_version -- AFTER ioctl ret: 0
<1> lvm_get_iop_version -- LEAVING with ret: 10
<1> pv_read_all_pv -- CALLED
<1> pv_read_all_pv -- calling lvm_dir_cache
<22> lvm_dir_cache -- CALLED
<333> lvm_add_dir_cache -- CALLED with /dev/hdc
<4444> lvm_check_dev -- CALLED
<55555> lvm_check_partitioned_dev -- CALLED
<666666> lvm_get_device_type called
<666666> lvm_get_device_type leaving with 0
<55555> lvm_check_partitioned_dev -- LEAVING with ret: TRUE
<4444> lvm_check_dev -- LEAVING with ret: 1
<333> lvm_add_dir_cache -- LEAVING with ret: ADDED
<333> lvm_add_dir_cache -- CALLED with /dev/ide/host0/bus1/target1/lun0/disc
<333> lvm_add_dir_cache -- LEAVING with ret: NOT ADDED
<333> lvm_add_dir_cache -- CALLED with /dev/ide/host0/bus1/target1/lun0/part1
<333> lvm_add_dir_cache -- LEAVING with ret: NOT ADDED
<333> lvm_add_dir_cache -- CALLED with /dev/ide/host0/bus0/target0/lun0/disc
<333> lvm_add_dir_cache -- LEAVING with ret: NOT ADDED
<333> lvm_add_dir_cache -- CALLED with /dev/ide/host0/bus0/target0/lun0/part1
<333> lvm_add_dir_cache -- LEAVING with ret: NOT ADDED
<333> lvm_add_dir_cache -- CALLED with /dev/ide/host0/bus0/target0/lun0/part2
<333> lvm_add_dir_cache -- LEAVING with ret: NOT ADDED
<333> lvm_add_dir_cache -- CALLED with /dev/ide/host0/bus0/target0/lun0/part5
<333> lvm_add_dir_cache -- LEAVING with ret: NOT ADDED
<333> lvm_add_dir_cache -- CALLED with /dev/ide/host0/bus0/target0/lun0/part6
<333> lvm_add_dir_cache -- LEAVING with ret: NOT ADDED
<333> lvm_add_dir_cache -- CALLED with /dev/ide/host0/bus0/target0/lun0/part7
<333> lvm_add_dir_cache -- LEAVING with ret: NOT ADDED
<333> lvm_add_dir_cache -- CALLED with /dev/ide/host0/bus0/target0/lun0/part8
<333> lvm_add_dir_cache -- LEAVING with ret: NOT ADDED
<333> lvm_add_dir_cache -- CALLED with /dev/ide/host0/bus0/target1/lun0/disc
<333> lvm_add_dir_cache -- LEAVING with ret: NOT ADDED
<333> lvm_add_dir_cache -- CALLED with /dev/ide/host0/bus0/target1/lun0/part1
<333> lvm_add_dir_cache -- LEAVING with ret: NOT ADDED
<333> lvm_add_dir_cache -- CALLED with /dev/scsi/host0/bus0/target6/lun0/disc
<333> lvm_add_dir_cache -- LEAVING with ret: NOT ADDED
<333> lvm_add_dir_cache -- CALLED with /dev/scsi/host0/bus0/target6/lun0/part1
<333> lvm_add_dir_cache -- LEAVING with ret: NOT ADDED
<333> lvm_add_dir_cache -- CALLED with /dev/scsi/host0/bus0/target6/lun0/part2
<333> lvm_add_dir_cache -- LEAVING with ret: NOT ADDED
<333> lvm_add_dir_cache -- CALLED with /dev/scsi/host0/bus0/target6/lun0/part3
<333> lvm_add_dir_cache -- LEAVING with ret: NOT ADDED
<333> lvm_add_dir_cache -- CALLED with /dev/scsi/host0/bus0/target6/lun0/part5
<333> lvm_add_dir_cache -- LEAVING with ret: NOT ADDED
<333> lvm_add_dir_cache -- CALLED with /dev/scsi/host0/bus0/target6/lun0/part6
<333> lvm_add_dir_cache -- LEAVING with ret: NOT ADDED
<333> lvm_add_dir_cache -- CALLED with /dev/scsi/host0/bus0/target6/lun0/part7
<333> lvm_add_dir_cache -- LEAVING with ret: NOT ADDED
<333> lvm_add_dir_cache -- CALLED with /dev/scsi/host0/bus0/target6/lun0/part8
<333> lvm_add_dir_cache -- LEAVING with ret: NOT ADDED
<333> lvm_add_dir_cache -- CALLED with /dev/scsi/host0/bus0/target6/lun0/part9
<333> lvm_add_dir_cache -- LEAVING with ret: NOT ADDED
<333> lvm_add_dir_cache -- CALLED with /dev/scsi/host0/bus0/target6/lun0/part9
<333> lvm_add_dir_cache -- LEAVING with ret: NOT ADDED
<333> lvm_add_dir_cache -- CALLED with /dev/loop0
<4444> lvm_check_dev -- CALLED
<55555> lvm_check_partitioned_dev -- CALLED
<666666> lvm_get_device_type called
<666666> lvm_get_device_type leaving with 3
<55555> lvm_check_partitioned_dev -- LEAVING with ret: FALSE
<4444> lvm_check_dev -- LEAVING with ret: 1
<333> lvm_add_dir_cache -- LEAVING with ret: ADDED
<333> lvm_add_dir_cache -- CALLED with /dev/loop1
<4444> lvm_check_dev -- CALLED
<55555> lvm_check_partitioned_dev -- CALLED
<666666> lvm_get_device_type called
<666666> lvm_get_device_type leaving with 3
<55555> lvm_check_partitioned_dev -- LEAVING with ret: FALSE
<4444> lvm_check_dev -- LEAVING with ret: 1
<333> lvm_add_dir_cache -- LEAVING with ret: ADDED
<333> lvm_add_dir_cache -- CALLED with /dev/loop2
<4444> lvm_check_dev -- CALLED
<55555> lvm_check_partitioned_dev -- CALLED
<666666> lvm_get_device_type called
<666666> lvm_get_device_type leaving with 3
<55555> lvm_check_partitioned_dev -- LEAVING with ret: FALSE
<4444> lvm_check_dev -- LEAVING with ret: 1
<333> lvm_add_dir_cache -- LEAVING with ret: ADDED
<333> lvm_add_dir_cache -- CALLED with /dev/loop3
<4444> lvm_check_dev -- CALLED
<55555> lvm_check_partitioned_dev -- CALLED
<666666> lvm_get_device_type called
<666666> lvm_get_device_type leaving with 3
<55555> lvm_check_partitioned_dev -- LEAVING with ret: FALSE
<4444> lvm_check_dev -- LEAVING with ret: 1
<333> lvm_add_dir_cache -- LEAVING with ret: ADDED
<333> lvm_add_dir_cache -- CALLED with /dev/loop4
<4444> lvm_check_dev -- CALLED
<55555> lvm_check_partitioned_dev -- CALLED
<666666> lvm_get_device_type called
<666666> lvm_get_device_type leaving with 3
<55555> lvm_check_partitioned_dev -- LEAVING with ret: FALSE
<4444> lvm_check_dev -- LEAVING with ret: 1
<333> lvm_add_dir_cache -- LEAVING with ret: ADDED
<333> lvm_add_dir_cache -- CALLED with /dev/loop5
<4444> lvm_check_dev -- CALLED
<55555> lvm_check_partitioned_dev -- CALLED
<666666> lvm_get_device_type called
<666666> lvm_get_device_type leaving with 3
<55555> lvm_check_partitioned_dev -- LEAVING with ret: FALSE
<4444> lvm_check_dev -- LEAVING with ret: 1
<333> lvm_add_dir_cache -- LEAVING with ret: ADDED
<333> lvm_add_dir_cache -- CALLED with /dev/loop6
<4444> lvm_check_dev -- CALLED
<55555> lvm_check_partitioned_dev -- CALLED
<666666> lvm_get_device_type called
<666666> lvm_get_device_type leaving with 3
<55555> lvm_check_partitioned_dev -- LEAVING with ret: FALSE
<4444> lvm_check_dev -- LEAVING with ret: 1
<333> lvm_add_dir_cache -- LEAVING with ret: ADDED
<333> lvm_add_dir_cache -- CALLED with /dev/loop7
<4444> lvm_check_dev -- CALLED
<55555> lvm_check_partitioned_dev -- CALLED
<666666> lvm_get_device_type called
<666666> lvm_get_device_type leaving with 3
<55555> lvm_check_partitioned_dev -- LEAVING with ret: FALSE
<4444> lvm_check_dev -- LEAVING with ret: 1
<333> lvm_add_dir_cache -- LEAVING with ret: ADDED
<22> lvm_dir_cache -- LEAVING with ret: 9
<1> pv_read_all_pv -- calling stat with "/dev/hdc"
<22> pv_read -- CALLED with /dev/hdc
<333> pv_check_name -- CALLED with "/dev/hdc"
<4444> lvm_check_chars -- CALLED with name: "/dev/hdc"
<4444> lvm_check_chars -- LEAVING with ret: 0
<333> pv_check_name -- LEAVING with ret: 0
<22> pv_read -- going to read /dev/hdc
<333> lvm_check_dev -- CALLED
<4444> lvm_check_partitioned_dev -- CALLED
<55555> lvm_get_device_type called
<55555> lvm_get_device_type leaving with 0
<4444> lvm_check_partitioned_dev -- LEAVING with ret: TRUE
<333> lvm_check_dev -- LEAVING with ret: 1
<333> pv_copy_from_disk -- CALLED
<333> pv_copy_from_disk -- LEAVING ret = 0x804cd08
<333> pv_create_name_from_kdev_t -- CALLED with 22:0
<4444> lvm_check_dev -- CALLED
<55555> lvm_check_partitioned_dev -- CALLED
<666666> lvm_get_device_type called
<666666> lvm_get_device_type leaving with 0
<55555> lvm_check_partitioned_dev -- LEAVING with ret: TRUE
<4444> lvm_check_dev -- LEAVING with ret: 1
<4444> lvm_dir_cache -- CALLED
<4444> lvm_dir_cache -- LEAVING with ret: 9
<333> pv_create_name_from_kdev_t -- LEAVING with dev_name: /dev/hdc
<22> pv_read -- LEAVING with ret: -268
<1> pv_read_all_pv -- pv_read returned: -268
<1> pv_read_all_pv -- calling stat with "/dev/loop0"
<22> pv_read -- CALLED with /dev/loop0
<333> pv_check_name -- CALLED with "/dev/loop0"
<4444> lvm_check_chars -- CALLED with name: "/dev/loop0"
<4444> lvm_check_chars -- LEAVING with ret: 0
<333> pv_check_name -- LEAVING with ret: 0
<22> pv_read -- going to read /dev/loop0
<22> pv_read -- LEAVING with ret: -282
<1> pv_read_all_pv -- pv_read returned: -282
<1> pv_read_all_pv -- calling stat with "/dev/loop1"
<22> pv_read -- CALLED with /dev/loop1
<333> pv_check_name -- CALLED with "/dev/loop1"
<4444> lvm_check_chars -- CALLED with name: "/dev/loop1"
<4444> lvm_check_chars -- LEAVING with ret: 0
<333> pv_check_name -- LEAVING with ret: 0
<22> pv_read -- going to read /dev/loop1
<22> pv_read -- LEAVING with ret: -282
<1> pv_read_all_pv -- pv_read returned: -282
<1> pv_read_all_pv -- calling stat with "/dev/loop2"
<22> pv_read -- CALLED with /dev/loop2
<333> pv_check_name -- CALLED with "/dev/loop2"
<4444> lvm_check_chars -- CALLED with name: "/dev/loop2"
<4444> lvm_check_chars -- LEAVING with ret: 0
<333> pv_check_name -- LEAVING with ret: 0
<22> pv_read -- going to read /dev/loop2
<22> pv_read -- LEAVING with ret: -282
<1> pv_read_all_pv -- pv_read returned: -282
<1> pv_read_all_pv -- calling stat with "/dev/loop3"
<22> pv_read -- CALLED with /dev/loop3
<333> pv_check_name -- CALLED with "/dev/loop3"
<4444> lvm_check_chars -- CALLED with name: "/dev/loop3"
<4444> lvm_check_chars -- LEAVING with ret: 0
<333> pv_check_name -- LEAVING with ret: 0
<22> pv_read -- going to read /dev/loop3
<22> pv_read -- LEAVING with ret: -282
<1> pv_read_all_pv -- pv_read returned: -282
<1> pv_read_all_pv -- calling stat with "/dev/loop4"
<22> pv_read -- CALLED with /dev/loop4
<333> pv_check_name -- CALLED with "/dev/loop4"
<4444> lvm_check_chars -- CALLED with name: "/dev/loop4"
<4444> lvm_check_chars -- LEAVING with ret: 0
<333> pv_check_name -- LEAVING with ret: 0
<22> pv_read -- going to read /dev/loop4
<22> pv_read -- LEAVING with ret: -282
<1> pv_read_all_pv -- pv_read returned: -282
<1> pv_read_all_pv -- calling stat with "/dev/loop5"
<22> pv_read -- CALLED with /dev/loop5
<333> pv_check_name -- CALLED with "/dev/loop5"
<4444> lvm_check_chars -- CALLED with name: "/dev/loop5"
<4444> lvm_check_chars -- LEAVING with ret: 0
<333> pv_check_name -- LEAVING with ret: 0
<22> pv_read -- going to read /dev/loop5
<22> pv_read -- LEAVING with ret: -282
<1> pv_read_all_pv -- pv_read returned: -282
<1> pv_read_all_pv -- calling stat with "/dev/loop6"
<22> pv_read -- CALLED with /dev/loop6
<333> pv_check_name -- CALLED with "/dev/loop6"
<4444> lvm_check_chars -- CALLED with name: "/dev/loop6"
<4444> lvm_check_chars -- LEAVING with ret: 0
<333> pv_check_name -- LEAVING with ret: 0
<22> pv_read -- going to read /dev/loop6
<22> pv_read -- LEAVING with ret: -282
<1> pv_read_all_pv -- pv_read returned: -282
<1> pv_read_all_pv -- calling stat with "/dev/loop7"
<22> pv_read -- CALLED with /dev/loop7
<333> pv_check_name -- CALLED with "/dev/loop7"
<4444> lvm_check_chars -- CALLED with name: "/dev/loop7"
<4444> lvm_check_chars -- LEAVING with ret: 0
<333> pv_check_name -- LEAVING with ret: 0
<22> pv_read -- going to read /dev/loop7
<22> pv_read -- LEAVING with ret: -282
<1> pv_read_all_pv -- pv_read returned: -282
<1> pv_read_all_pv -- avoiding multiple entries in case of MD; np: 0
<1> pv_read_all_pv -- LEAVING with ret: -282
<1> lvm_error -- CALLED with: -282
<1> lvm_error -- LEAVING with: "pv_read(): read"
pvscan -- ERROR "pv_read(): read" reading physical volumes

pvscan -- reading all physical volumes (this may take a while...)


* Re: [linux-lvm] pvscan fails
  2002-08-25 22:39 [linux-lvm] pvscan fails Todd Troxell
@ 2002-08-26  5:20 ` Heinz J. Mauelshagen
  2002-08-27 11:23   ` Todd Troxell
  0 siblings, 1 reply; 9+ messages in thread
From: Heinz J. Mauelshagen @ 2002-08-26  5:20 UTC
  To: linux-lvm; +Cc: ttroxell

On Mon, Aug 26, 2002 at 01:21:31AM -0400, Todd Troxell wrote:
> Hello,
> 
> I'm having trouble setting up physical volumes.
> 
> caffeine:~# pvcreate /dev/hdd1
> pvcreate -- physical volume "/dev/hdd1" successfully created
> 
> caffeine:~# pvscan
> pvscan -- reading all physical volumes (this may take a while...)
> pvscan -- ERROR "pv_read(): read" reading physical volumes
> 
> After looking at the debug messages, I think it is related to my
> IDE CD-ROM (/dev/hdc).

Todd,
have you temporarily removed /dev/hdc and retried, to confirm this?
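On a static /dev, a quick (untested) test could be as simple as:

  mv /dev/hdc /dev/hdc.hidden
  pvscan
  mv /dev/hdc.hidden /dev/hdc

(the ".hidden" name is arbitrary; pvscan only sees nodes it finds under /dev)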

> 
> Does this seem to be the case?  Is there a way to ignore /dev/hdc?

Not with LVM version < 1.1 :(

You should give LVM2 a try (please follow the download instructions at
www.sistina.com in that case), which has fully configurable device
filters based on regular expressions.
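For example, once LVM2 is installed, a device filter along these lines in
its lvm.conf should make the tools skip the CD-ROM (a sketch only; see the
example configuration shipped with LVM2 for the exact file location and
syntax):

  devices {
      filter = [ "r|/dev/hdc|", "a|.*|" ]
  }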

Regards,
Heinz    -- The LVM Guy --


> 
> (attached is the output from pvscan -d; kernel 2.2.18, LVM 1.0.4)
> 
> -Todd
> 
> p.s. please cc: 

> <1> lvm_get_iop_version -- CALLED
> <22> lvm_check_special -- CALLED
<SNIP>
> <1> lvm_error -- LEAVING with: "pv_read(): read"
> pvscan -- ERROR "pv_read(): read" reading physical volumes
> 
> pvscan -- reading all physical volumes (this may take a while...)

*** Software bugs are stupid.
    Nevertheless it needs not so stupid people to solve them ***

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

Heinz Mauelshagen                                 Sistina Software Inc.
Senior Consultant/Developer                       Am Sonnenhang 11
                                                  56242 Marienrachdorf
                                                  Germany
Mauelshagen@Sistina.com                           +49 2626 141200
                                                       FAX 924446
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-


* Re: [linux-lvm] pvscan fails
  2002-08-26  5:20 ` Heinz J. Mauelshagen
@ 2002-08-27 11:23   ` Todd Troxell
  0 siblings, 0 replies; 9+ messages in thread
From: Todd Troxell @ 2002-08-27 11:23 UTC
  To: linux-lvm

On Mon, Aug 26, 2002 at 12:03:52PM +0200, Heinz J. Mauelshagen wrote:
> 
> Todd,
> have you temporarily removed /dev/hdc and retried, to confirm this?
> 

Verified this just now.

> 
> You should give LVM2 a try (please follow the download instructions at
> www.sistina.com in that case), which has fully configurable device
> filters based on regular expressions.
> 
> 

Will try it, thanks!

-Todd


* [linux-lvm] pvscan fails
@ 2004-07-27 23:10 Frank Mohr
  2004-07-27 23:17 ` Erik Ch. Ohrnberger
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Frank Mohr @ 2004-07-27 23:10 UTC
  To: linux-lvm

Hi

after a system crash my system can't find its LVM volumes:

System:
- SuSE 7.3 with the latest 7.3 patches, own kernel update to 2.4.26
- had been running for quite some time with SuSE lvm-1.0.0.2_rc2-6
  (vgscan --help -> LVM 1.0.1-rc2 - 30/08/2001 (IOP 10))
- I've updated LVM to LVM 1.0.8 - 17/11/2003 (IOP 10)
  in the hope of fixing the problem

vgscan dies with a Segmentation fault

odie:~/LVM/1.0.8/tools # vgscan -v    
vgscan -- removing "/etc/lvmtab" and "/etc/lvmtab.d"
vgscan -- creating empty "/etc/lvmtab" and "/etc/lvmtab.d"
vgscan -- reading all physical volumes (this may take a while...)
vgscan -- scanning for all active volume group(s) first
vgscan -- reading data of volume group "DATAVG" from physical volume(s)
Segmentation fault
odie:~/LVM/1.0.8/tools # 

pvscan finds the volumes of the VG

odie:~/LVM/1.0.8/tools # pvscan
pvscan -- reading all physical volumes (this may take a while...)
pvscan -- inactive PV "/dev/hdc1"  is associated to unknown VG "DATAVG"
(run vgscan)
pvscan -- inactive PV "/dev/hdd1"  is associated to unknown VG "DATAVG"
(run vgscan)
pvscan -- inactive PV "/dev/hdb1"  is associated to unknown VG "DATAVG"
(run vgscan)
pvscan -- total: 3 [306.23 GB] / in use: 3 [306.23 GB] / in no VG: 0 [0]

odie:~/LVM/1.0.8/tools # 

Running vgscan -dv results in

...
<1> vg_read_with_pv_and_lv -- AFTER lv_read_all_lv; vg_this->pv_cur: 3  vg_this->pv_max: 255  ret: 0
<1> vg_read_with_pv_and_lv -- BEFORE for PE
<1> vg_read_with_pv_and_lv -- AFTER for PE
<1> vg_read_with_pv_and_lv -- BEFORE for LV
<1> vg_read_with_pv_and_lv -- vg_this->lv[0]->lv_allocated_le: 32500
Segmentation fault

(copied the last few lines - didn't want to send 72k of debug output)

Is there any chance to fix this without losing the data on the disks?


Frank


* RE: [linux-lvm] pvscan fails
  2004-07-27 23:10 [linux-lvm] pvscan fails Frank Mohr
@ 2004-07-27 23:17 ` Erik Ch. Ohrnberger
  2004-07-28 14:37   ` Frank Mohr
  2004-07-28 14:05 ` [linux-lvm] pvscan fails (more information) Frank Mohr
  2004-07-28 16:37 ` [linux-lvm] pvscan fails (some more debugging - cause found) Frank Mohr
  2 siblings, 1 reply; 9+ messages in thread
From: Erik Ch. Ohrnberger @ 2004-07-27 23:17 UTC
  To: 'LVM general discussion and development'

Frank,
	Sounds like you and I are in similar situations.  I lost my
partition tables on a reboot - no idea why - and I'd also like to recover my
data (I've not written to the disks, other than to restore the partition
tables).  Below is a summary of my experiences.  I ended up using a borrowed
copy of R-Studio and only recovered 38 GB of 170 GB or so.  I'd like to be
able to recover more if possible.

	Erik.
==================================
...LVM Recovery
Well, I've slowly been coming to grips with recovering from what to me is a
pretty serious hard disk calamity.
 
I rebooted my Linux system, as it had been up and running for 48 days or so,
and it just seemed to be time to do it.  When the system came back up, many
of the hard disk partition tables were lost, and it wouldn't boot.
 
After much research on the Internet, I found that a partition table could be
re-written and all the data in the file system maintained.  I also found a
tool, TestDisk at http://www.cgsecurity.org by Christophe GRENIER
<grenier@cgsecurity.org>, which seemed to do a good job of sniffing out
partition tables from the remaining file system data.  It did OK on the
system disk: it found the first FAT partition and the ext3 partition for the
root of the system.  In fact, after it wrote out the partition table, I could
mount the root file system without any sort of fsck required.  Very cool.
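For what it's worth, TestDisk is interactive; you just point it at the whole
disk and let it search for lost partitions, e.g. (device name is only an
example):

  testdisk /dev/hda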
 
As for the LVM hard disks (the reason I'm submitting this post), 3 out of 4
partition tables were identified and recovered (/dev/hde1, /dev/hdg1, and
/dev/hdh1, but not /dev/hdf1).  For LVM, I always used a single primary
partition, non-bootable, using the entire space on the hard disk, so
recovering this partition table should be no problem, right?  I used fdisk
and re-created the partition table.
 
OK, so I've not re-written the grub boot-loader on the system disk, but I
did boot off of a rescue CD and perform a chroot to where the root file
system was mounted, so I have a chrooted environment and can access the
binaries and files from the old system hard disk.  I checked with lsmod that
the lvm module was loaded, and it was.  So now I figured I'd see how far I
could get in recovering the 130 GB of data that was on the LVM volume.
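In concrete terms it was roughly this (device name and mount point are
examples from memory, not exactly what I typed):

  mount /dev/hda2 /mnt/sysimage
  chroot /mnt/sysimage /bin/bash
  lsmod | grep lvm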
 
First things first, I tried vgscan, and got the following results:
 
vgscan -- reading all physical volumes (this may take a while...)
vgscan -- ERROR "vg_read_with_pv_and_lv(): current PV" can't get data of volume group "u00_vg" from physical volume(s)
vgscan -- "/etc/lvmtab" and "/etc/lvmtab.d" successfully created
vgscan -- WARNING: This program does not do a VGDA backup of your volume group

Additionally, pvscan reports the following:
pvscan -- reading all physical volumes (this may take a while...)
pvscan -- inactive PV "/dev/hdg1"  is associated to unknown VG "u00_vg" (run vgscan)
pvscan -- inactive PV "/dev/hdh1"  is associated to unknown VG "u00_vg" (run vgscan)
pvscan -- inactive PV "/dev/hde1"  is associated to unknown VG "u00_vg" (run vgscan)
pvscan -- total: 3 [204.96 GB] / in use: 3 [204.96 GB] / in no VG: 0 [0]

I ran pvdata and put its output at the bottom of this message.
First, notice that all the drive letters are the same, which I think is a
good thing.  I also notice that way at the end there are UUIDs for each of
the volumes, and it would appear that the UUID from the one bad volume is
lost.  Do you suppose that I could use the UUID_fixed program to put that
UUID back on the physical volume and get it back?
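If not, the LVM2 tools can reportedly stamp a given UUID onto a PV; an
untested sketch, using the 002 UUID from the list at the bottom (and only
after imaging the disk first):

  pvcreate --uuid Pclazx-RnTY-QBCG-P1O6-dVDg-V435-SlLluH /dev/hdf1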

Next, I moved the LVM disks from the old RedHat machine where they started
out to a SuSE 9.0 machine for the purpose of recovering any data
that I can.  The main reason is that the SuSE machine has a DM patched
kernel and LVM2, which should be able to handle partial LVMs.  I've also
added a brand new 200 GB hard disk to copy the recovered data to.  While it
won't hold uncompressed images of the LVM disks, if I recall, I had
something like 68 GB free on the LVM set, so I should have enough room to
hold all the recovered data.
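If I end up needing raw images after all, compressed ones might just fit.
An untested sketch, one per LVM disk, with a hypothetical target path:

  dd if=/dev/hde bs=1M | gzip -c > /mnt/new200/hde.img.gz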
 
I tried using e2retrieve (at http://coredump.free.fr/linux/e2retrieve.php)
to copy off the data by analyzing the raw disk data, but after it scans all
the disks, it segfaults.  So that went nowhere.  Too bad; from the
description of the program, it has some real promise as a general LVM
recovery utility.
 
When I do a pvscan, I get this (this is now with LVM2):
  3 PV(s) found for VG u00_vg: expected 4
  Logical volume (u00_lv) contains an incomplete mapping table.
  PV /dev/hde1    is in exported VG u00_vg [55.89 GB / 0    free]
  PV /dev/hdg1    is in exported VG u00_vg [74.52 GB / 0    free]
  PV /dev/hdh1    is in exported VG u00_vg [74.52 GB / 0    free]
  Total: 3 [0   ] / in use: 3 [0   ] / in no VG: 0 [0   ]

When I do a vgscan, I get this:
  Reading all physical volumes.  This may take a while...
  3 PV(s) found for VG u00_vg: expected 4
  Volume group "u00_vg" not found


Also, I'm wondering how I can re-create the volume group and logical volumes
so that I can mount the file system in read-only mode and copy off all the
data that I can access, without causing any greater data loss on the hard
disks.
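From what I've read, LVM2's partial mode can activate a VG with a PV
missing, so perhaps something like this once the fourth PV is dealt with
(untested sketch; the LV path is guessed from the names above):

  vgchange -a y --partial u00_vg
  mount -o ro /dev/u00_vg/u00_lv /mnt/recover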
 
Any help in answering these questions would be greatly appreciated, as I
know what to do when LVM is working, but I'm a little at a loss when it's
not.
 
Thanks in advance,
    Erik.
 
==================================
pvdata information:

--- Physical volume ---
PV Name               /dev/hde1
VG Name               u00_vg
PV Size               55.90 GB [117226242 secs] / NOT usable 4.18 MB [LVM: 179 KB]
PV#                   1
PV Status             available
Allocatable           yes (but full)
Cur LV                1
PE Size (KByte)       4096
Total PE              14308
Free PE               0
Allocated PE          14308
PV UUID               VILh9i-uWlA-cKBM-AcRJ-VYU7-54kM-OgiWQm
 
--- Physical volume ---
pvdata /dev/hdf1
pvdata segfaults on this command.
 
--- Physical volume ---
PV Name               /dev/hdg1
VG Name               u00_vg
PV Size               74.53 GB [156296322 secs] / NOT usable 4.25 MB [LVM: 198 KB]
PV#                   2
PV Status             available
Allocatable           yes (but full)
Cur LV                1
PE Size (KByte)       4096
Total PE              19078
Free PE               0
Allocated PE          19078
PV UUID               AZf9pT-TYsE-Y3xF-jolh-Z9EF-WV3l-T6yATO
 
--- Physical volume ---
PV Name               /dev/hdh1
VG Name               u00_vg
PV Size               74.53 GB [156301425 secs] / NOT usable 4.25 MB [LVM: 198 KB]
PV#                   6
PV Status             available
Allocatable           yes (but full)
Cur LV                1
PE Size (KByte)       4096
Total PE              19078
Free PE               0
Allocated PE          19078
PV UUID               8seUMF-A73a-V5tQ-N88Q-Uv0M-Ci6f-5wVO9C
 
--- Volume group ---
VG Name
VG Access             read/write
VG Status             NOT available/resizable
VG #                  0
MAX LV                255
Cur LV                1
Open LV               0
MAX LV Size           255.99 GB
Max PV                255
Cur PV                4
Act PV                4
VG Size               243.28 GB
PE Size               4 MB
Total PE              62279
Alloc PE / Size       62279 / 243.28 GB
Free  PE / Size       0 / 0
VG UUID               tUQf5q-QvaA-hEj8-slM0-MmoW-A2Xt-47HS1p
 
--- List of physical volume UUIDs ---
 
001: AZf9pT-TYsE-Y3xF-jolh-Z9EF-WV3l-T6yATO	(/dev/hdg1)
002: Pclazx-RnTY-QBCG-P1O6-dVDg-V435-SlLluH	(/dev/hdf1?)
003: 8seUMF-A73a-V5tQ-N88Q-Uv0M-Ci6f-5wVO9C	(/dev/hdh1)
004: VILh9i-uWlA-cKBM-AcRJ-VYU7-54kM-OgiWQm	(/dev/hde1)

> -----Original Message-----
> From: linux-lvm-bounces@redhat.com 
> [mailto:linux-lvm-bounces@redhat.com] On Behalf Of Frank Mohr
> Sent: Tuesday, July 27, 2004 7:10 PM
> To: linux-lvm@redhat.com
> Subject: [linux-lvm] pvscan fails 
> 
> 
> Hi
> 
> after a system crash my system can't find its LVM volumes:
> 
> System:
> - SuSE 7.3 with the latest 7.3 patches, own kernel update to 2.4.26
> - had been running for quite some time with SuSE lvm-1.0.0.2_rc2-6
>   (vgscan --help -> LVM 1.0.1-rc2 - 30/08/2001 (IOP 10))
> - I've updated LVM to LVM 1.0.8 - 17/11/2003 (IOP 10)
>   in the hope of fixing the problem
> 
> vgscan dies with a Segmentation fault
> 
> odie:~/LVM/1.0.8/tools # vgscan -v    
> vgscan -- removing "/etc/lvmtab" and "/etc/lvmtab.d"
> vgscan -- creating empty "/etc/lvmtab" and "/etc/lvmtab.d" 
> vgscan -- reading all physical volumes (this may take a while...)
> vgscan -- scanning for all active volume group(s) first
> vgscan -- reading data of volume group "DATAVG" from physical volume(s)
> Segmentation fault
> odie:~/LVM/1.0.8/tools # 
> 
> pvscan finds the volumes of the VG
> 
> odie:~/LVM/1.0.8/tools # pvscan
> pvscan -- reading all physical volumes (this may take a while...)
> pvscan -- inactive PV "/dev/hdc1"  is associated to unknown VG "DATAVG" (run vgscan)
> pvscan -- inactive PV "/dev/hdd1"  is associated to unknown VG "DATAVG" (run vgscan)
> pvscan -- inactive PV "/dev/hdb1"  is associated to unknown VG "DATAVG" (run vgscan)
> pvscan -- total: 3 [306.23 GB] / in use: 3 [306.23 GB] / in no VG: 0 [0]
> 
> odie:~/LVM/1.0.8/tools # 
> 
> Running vgscan -dv results in
> 
> ...
> <1> vg_read_with_pv_and_lv -- AFTER lv_read_all_lv; vg_this->pv_cur: 3  vg_this->pv_max: 255  ret: 0
> <1> vg_read_with_pv_and_lv -- BEFORE for PE
> <1> vg_read_with_pv_and_lv -- AFTER for PE
> <1> vg_read_with_pv_and_lv -- BEFORE for LV
> <1> vg_read_with_pv_and_lv -- vg_this->lv[0]->lv_allocated_le: 32500
> Segmentation fault
> 
> (copied the last few lines - didn't want to send 72k of debug output)
> 
> Is there any chance to fix this without losing the data on the disks?
> 
> 
> Frank
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
> 


* Re: [linux-lvm] pvscan fails (more information)
  2004-07-27 23:10 [linux-lvm] pvscan fails Frank Mohr
  2004-07-27 23:17 ` Erik Ch. Ohrnberger
@ 2004-07-28 14:05 ` Frank Mohr
  2004-07-28 16:37 ` [linux-lvm] pvscan fails (some more debugging - cause found) Frank Mohr
  2 siblings, 0 replies; 9+ messages in thread
From: Frank Mohr @ 2004-07-28 14:05 UTC
  To: LVM general discussion and development

Frank Mohr wrote:
> 
> Hi
> 
> after a system crash my system can't find its LVM volumes:
> 

some more information:

- the system didn't crash but was switched off before "vgchange -a n" was called
- pvdisplay shows "PV Status available" for all PVs

- vgcfgrestore
  doesn't help, even going back several backups
- vgcfgrestore -t -ll -f /etc/lvmconf/DATAVG.conf -n DATAVG
  shows the same results as pvdisplay
- pvdisplay on all 3 partitions shows "good" results

- pvdata -a /dev/hdb1 /dev/hdc1 /dev/hdd1
  looks good and displays the same "VG UUID" for all PVs

- tried vgimport -f -v DATAVG /dev/hdb1 /dev/hdc1 /dev/hdd1

odie:~/LVM/1.0.8/tools # vgimport -f -v DATAVG /dev/hdb1 /dev/hdc1 /dev/hdd1
vgimport -- locking logical volume manager
vgimport -- checking volume group name
vgimport -- checking volume group "DATAVG" existence
vgimport -- trying to read physical volumes
vgimport -- checking for duplicate physical volumes
vgimport -- checking physical volume name "/dev/hdb1"
vgimport -- reading data of physical volume "/dev/hdb1" from disk
vgimport -- checking for exported physical volume "/dev/hdb1"
vgimport -- checking consistency of physical volume "/dev/hdb1"
vgimport -- reallocating memory
vgimport -- checking for duplicate physical volumes
vgimport -- checking physical volume name "/dev/hdc1"
vgimport -- reading data of physical volume "/dev/hdc1" from disk
vgimport -- checking for exported physical volume "/dev/hdc1"
vgimport -- checking consistency of physical volume "/dev/hdc1"
vgimport -- reallocating memory
vgimport -- checking for duplicate physical volumes
vgimport -- checking physical volume name "/dev/hdd1"
vgimport -- reading data of physical volume "/dev/hdd1" from disk
vgimport -- checking for exported physical volume "/dev/hdd1"
vgimport -- checking consistency of physical volume "/dev/hdd1"
vgimport -- reallocating memory
vgimport -- physical volumes "/dev/hdc1" and "/dev/hdb1" are in different volume groups

vgimport [-d|--debug] [-f|--force] [-h|--help] [-v|--verbose]
        VolumeGroupName PhysicalVolumePath [PhysicalVolumePath...]
odie:~/LVM/1.0.8/tools #


* Re: [linux-lvm] pvscan fails
  2004-07-27 23:17 ` Erik Ch. Ohrnberger
@ 2004-07-28 14:37   ` Frank Mohr
  0 siblings, 0 replies; 9+ messages in thread
From: Frank Mohr @ 2004-07-28 14:37 UTC
  To: Erik, LVM general discussion and development

"Erik Ch. Ohrnberger" wrote:
> 
> Frank,
>         Sounds like you and I are in similar situations.  I lost my
> partition tables on a reboot - no idea why - and I'd also like to recover my
> data (I've not written to the disks, other than to restore the partition
> tables).  Below is a summary of my experiences.  I ended up using a borrowed
> copy of R-Studio and only recovered 38 GB of 170 GB or so.  I'd like to be
> able to recover more if possible.
> 
>         Erik.

my problem seems to be some strange corrupted LVM configuration on the
disks.
The partition table is OK, and most output of the LVM tools seems OK.

Only:

vgscan crashes
vgimport complains that all 3 PVs are in different VGs (pvdata shows
the same VG UUID and VG name)

Frank


* Re: [linux-lvm] pvscan fails (some more debugging - cause found)
  2004-07-27 23:10 [linux-lvm] pvscan fails Frank Mohr
  2004-07-27 23:17 ` Erik Ch. Ohrnberger
  2004-07-28 14:05 ` [linux-lvm] pvscan fails (more information) Frank Mohr
@ 2004-07-28 16:37 ` Frank Mohr
  2004-08-01 10:06   ` Frank Mohr
  2 siblings, 1 reply; 9+ messages in thread
From: Frank Mohr @ 2004-07-28 16:37 UTC
  To: LVM general discussion and development

I did some debugging and found where and why vgscan crashes.

Is there any tool to fix PE entries on disk?
There seem to be 7 corrupted entries.


I've added some debug messages to vg_read_with_pv_and_lv.

There seems to be a mismatch between vg_this->lv[l]->lv_allocated_le and
some PEs.

I've changed/added a check in the loop:


               debug ( "construct the lv_current_pe pointer array\n");
               /* construct the lv_current_pe pointer array */
               p = npe = 0;
               for ( p = 0; p < vg_this->pv_cur && npe <
vg_this->lv[l]->lv_allocated_le; p++)
               {
                   debug ( "p = %d - pe_total =
%d\n",p,vg_this->pv[p]->pe_total);
                  for ( ope = 0; ope < vg_this->pv[p]->pe_total; ope++)
                  {
                       debug ( "ope = %d\n",ope);
                     if ( vg_this->pv[p]->pe[ope].lv_num == lv_num)
                     {
                        pe_index = vg_this->pv[p]->pe[ope].le_num;
                       debug ( "pe_index = %d\n",pe_index);
                        if( pe_index > vg_this->lv[l]->lv_allocated_le)
                        {
                            debug("(fm) Error pe_index = %lu >
vg_this->lv[l]->lv_allocated_le = %lu\n",
                                pe_index ,
vg_this->lv[l]->lv_allocated_le);
                        }
                        else
                        {
                        vg_this->lv[l]->lv_current_pe[pe_index].dev =
vg_this->pv[p]->pv_dev;
                       debug ( "get_pe_offset = %d\n",ope);
                        vg_this->lv[l]->lv_current_pe[pe_index].pe =
get_pe_offset(ope, vg_this->pv[p]);
                       debug ( "get_pe_offset ->
%lu\n",vg_this->lv[l]->lv_current_pe[pe_index].pe);
                        vg_this->lv[l]->lv_current_pe[pe_index].reads =
\
                        vg_this->lv[l]->lv_current_pe[pe_index].writes =
0;
                        npe++;
                        }
                     }
                  }
               }
               debug ( "construct the lv_current_pe pointer array --
done\n");


the result is:

<1> p = 1 - pe_total = 19632
...
<1> ope = 17014
<1> pe_index = 28542
<1> get_pe_offset = 17014
<1> get_pe_offset -> 139387384
<1> ope = 17015
<1> pe_index = 32639
<1> (fm) Error pe_index = 32639 > vg_this->lv[l]->lv_allocated_le = 32500
<1> ope = 17016
<1> pe_index = 28544
<1> get_pe_offset = 17016
<1> get_pe_offset -> 139403768
<1> ope = 17017
<1> pe_index = 28545
<1> get_pe_offset = 17017
...
<1> ope = 17142
<1> pe_index = 28670
<1> get_pe_offset = 17142
<1> get_pe_offset -> 140435960
<1> ope = 17143
<1> pe_index = 32767
<1> (fm) Error pe_index = 32767 > vg_this->lv[l]->lv_allocated_le = 32500
<1> ope = 17144
<1> pe_index = 28672
<1> get_pe_offset = 17144
<1> get_pe_offset -> 140452344
<1> ope = 17145
<1> pe_index = 28673
...
<1> construct the lv_current_pe pointer array -- done
vgscan -- only found 32493 of 32500 LEs for LV /dev/DATAVG/DATALV (0)
<1> vg_read_with_pv_and_lv -- LEAVING with ret: -365
<1> lvm_error -- CALLED with: -365
<1> lvm_error -- LEAVING with: "vg_read_with_pv_and_lv(): allocated LE
of LV"
vgscan -- ERROR "vg_read_with_pv_and_lv(): allocated LE of LV" can't get
data of volume group "DATAVG" from physical volume(s)
<1> vg_free -- CALLED
<1> vg_free -- LEAVING with ret: -99
<1> lvm_interrupt -- CALLED
<1> lvm_interrupt -- LEAVING
<1> lvm_unlock -- CALLED
<1> lvm_unlock -- LEAVING with ret: 0
vgscan -- "/etc/lvmtab" and "/etc/lvmtab.d" successfully created
vgscan -- WARNING: This program does not do a VGDA backup of your volume group

<1> lvm_unlock -- CALLED
<1> lvm_unlock -- LEAVING with ret: -104


* Re: [linux-lvm] pvscan fails (some more debugging - cause found)
  2004-07-28 16:37 ` [linux-lvm] pvscan fails (some more debugging - cause found) Frank Mohr
@ 2004-08-01 10:06   ` Frank Mohr
  0 siblings, 0 replies; 9+ messages in thread
From: Frank Mohr @ 2004-08-01 10:06 UTC
  To: LVM general discussion and development

Frank Mohr wrote:
> 
> I did some debugging and found where and why vgscan crashes.
> 
> Is there any tool to fix PE entries on disk?
> There seem to be 7 corrupted entries.
> 
> I've added some debug messages to vg_read_with_pv_and_lv.
> 
> There seems to be a mismatch between vg_this->lv[l]->lv_allocated_le and
> some PEs.
> 

After adding some debug messages to vgcfgrestore too, I found that the
backup had "good" values.

Now I've moved the disks to a different controller and everything works
fine.

Seems it was a broken onboard IDE controller.

Frank

