Linux LVM users
From: Stephanus Fengler <fengler@uiuc.edu>
To: LVM general discussion and development <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] lvm 1.0.3 kernel panic
Date: Mon, 19 Jul 2004 16:46:31 -0500	[thread overview]
Message-ID: <40FC4137.8040204@uiuc.edu> (raw)
In-Reply-To: <40FC1B6C.7060302@uiuc.edu>

Additionally, here is the output of:
vgcfgrestore -f /mnt/sysimage/etc/lvmconf/vg01.conf -n vg01 -ll

--- Volume group ---
VG Name               vg01
VG Access             read/write
VG Status             NOT available/resizable
VG #                  0
MAX LV                256
Cur LV                3
Open LV               0
MAX LV Size           255.99 GB
Max PV                256
Cur PV                4
Act PV                4
VG Size               501.92 GB
PE Size               4 MB
Total PE              128491
Alloc PE / Size       128491 / 501.92 GB
Free  PE / Size       0 / 0
VG UUID               vsZRhX-6bfh-jqhD-Cn1Z-0h9E-tiE6-isU7hJ

--- Logical volume ---
LV Name                /dev/vg01/lv_root
VG Name                vg01
LV Write Access        read/write
LV Status              available
LV #                   1
# open                 0
LV Size                33.66 GB
Current LE             8617
Allocated LE           8617
Allocation             next free
Read ahead sectors     10000
Block device           58:0

--- Logical volume ---
LV Name                /dev/vg01/lv_data2
VG Name                vg01
LV Write Access        read/write
LV Status              available
LV #                   2
# open                 0
LV Size                235.38 GB
Current LE             60257
Allocated LE           60257
Allocation             next free
Read ahead sectors     10000
Block device           58:1

--- Logical volume ---
LV Name                /dev/vg01/lv_data
VG Name                vg01
LV Write Access        read/write
LV Status              available
LV #                   3
# open                 0
LV Size                232.88 GB
Current LE             59617
Allocated LE           59617
Allocation             next free
Read ahead sectors     10000
Block device           58:2


--- Physical volume ---
PV Name               /dev/hda3
VG Name               vg01
PV Size               33.67 GB [70605675 secs] / NOT usable 4.19 MB [LVM: 161 KB]
PV#                   1
PV Status             available
Allocatable           yes (but full)
Cur LV                1
PE Size (KByte)       4096
Total PE              8617
Free PE               0
Allocated PE          8617
PV UUID               ouWY4f-vGwm-Rq3p-eX3M-tDaI-BQ3i-D7gT4f

--- Physical volume ---
PV Name               /dev/hda1
VG Name               vg01
PV Size               2.50 GB [5253192 secs] / NOT usable 4.19 MB [LVM: 130 KB]
PV#                   2
PV Status             available
Allocatable           yes (but full)
Cur LV                1
PE Size (KByte)       4096
Total PE              640
Free PE               0
Allocated PE          640
PV UUID               6WZGDT-Sev3-jYXj-VquB-FQj2-Wbeq-cdgNLK

--- Physical volume ---
PV Name               /dev/hdb
VG Name               vg01
PV Size               232.89 GB [488397168 secs] / NOT usable 4.38 MB [LVM: 360 KB]
PV#                   3
PV Status             available
Allocatable           yes (but full)
Cur LV                1
PE Size (KByte)       4096
Total PE              59617
Free PE               0
Allocated PE          59617
PV UUID               nV1chY-OlRj-tLrb-cdSM-D3IN-Mvwb-U2nxfE

--- Physical volume ---
PV Name               /dev/hdc
VG Name               vg01
PV Size               232.89 GB [488397168 secs] / NOT usable 4.38 MB [LVM: 360 KB]
PV#                   4
PV Status             NOT available
Allocatable           yes (but full)
Cur LV                1
PE Size (KByte)       4096
Total PE              59617
Free PE               0
Allocated PE          59617
PV UUID               XMs7mJ-PdbD-voTc-IUY7-sLiu-Gg1P-99nwc5

I have also checked some older config files (vg01.conf.[1-6].old) and 
found one that predates the additional hard disk. Since no data is 
stored on that disk yet, I wouldn't mind restoring that older 
configuration, but is it safe to do? I am definitely worried about 
losing data that way.
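For concreteness, this is the sequence I would try from the rescue 
environment (an untested sketch, assuming LVM 1.0.x tools; 
vg01.conf.N.old is a placeholder for whichever backup lacks the new 
disk):

```shell
# Back up the current (possibly bad) metadata before touching anything
cp /mnt/sysimage/etc/lvmconf/vg01.conf /mnt/sysimage/etc/lvmconf/vg01.conf.bak

# Inspect what the older backup describes before restoring (-ll only lists)
vgcfgrestore -f /mnt/sysimage/etc/lvmconf/vg01.conf.N.old -n vg01 -ll

# Deactivate the VG, restore the older metadata, rescan, and reactivate
vgchange -a n vg01
vgcfgrestore -f /mnt/sysimage/etc/lvmconf/vg01.conf.N.old -n vg01
vgscan
vgchange -a y vg01
```

Would that be the right approach, or does the restore need anything else?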

Thanks,
Stephanus


Stephanus Fengler wrote:

> Hi experts,
>
> I added a new hard disk to my system and created the whole disk as a new 
> logical volume. Mounting, unmounting: everything worked until reboot. It 
> now stops with a kernel panic and lines like:
>
> (null) -- ERROR 2 writing volume group backup file /etc/lvmtab.d/vg01.tmp
>       vg_cfgbackup.c [line271]
>
> vgscan -- ERROR: unable to do a backup of volume group vg01
> vgscan -- ERROR "lvm_tab_vg_remove(): unlink" removing volume group 
> "vg01" from "/etc/lvmtab"
>
> ...
>
> Activating volume groups
>    vgchange - no volume groups found
>
> I understand the kernel panic if LVM is unable to find the volume 
> group vg01, because that is my root filesystem. But I don't understand 
> the first error.
> I rebooted with my Red Hat installation disk ("linux rescue") and can 
> activate the volume group by hand and mount the file systems. So it 
> looks to me like everything on the filesystems is consistent.
>
> Since I am pretty new to LVM, what additional output do you need to 
> help me?
>
> Thanks in advance,
> Stephanus
>
> lvmdiskscan:
> lvmdiskscan -- reading all disks / partitions (this may take a while...)
> lvmdiskscan -- /dev/hdc   [     232.89 GB] USED LVM whole disk
> lvmdiskscan -- /dev/hda1  [       2.50 GB] Primary  LVM partition [0x8E]
> lvmdiskscan -- /dev/hda2  [     101.97 MB] Primary  LINUX native partition [0x83]
> lvmdiskscan -- /dev/hda3  [      33.67 GB] Primary  LVM partition [0x8E]
> lvmdiskscan -- /dev/hda4  [    1019.75 MB] Primary  Windows98 extended partition [0x0F]
> lvmdiskscan -- /dev/hda5  [    1019.72 MB] Extended LINUX swap partition [0x82]
> lvmdiskscan -- /dev/hdb   [     232.89 GB] USED LVM whole disk
> lvmdiskscan -- /dev/loop0 [      59.08 MB] free loop device
> lvmdiskscan -- 3 disks
> lvmdiskscan -- 2 whole disks
> lvmdiskscan -- 1 loop device
> lvmdiskscan -- 0 multiple devices
> lvmdiskscan -- 0 network block devices
> lvmdiskscan -- 5 partitions
> lvmdiskscan -- 2 LVM physical volume partitions
>
> pvscan:
> pvscan -- reading all physical volumes (this may take a while...)
> pvscan -- inactive PV "/dev/hdc"  of VG "vg01" [232.88 GB / 0 free]
> pvscan -- inactive PV "/dev/hda1" of VG "vg01" [2.50 GB / 0 free]
> pvscan -- inactive PV "/dev/hda3" of VG "vg01" [33.66 GB / 0 free]
> pvscan -- inactive PV "/dev/hdb"  of VG "vg01" [232.88 GB / 0 free]
> pvscan -- total: 4 [501.94 GB] / in use: 4 [501.94 GB] / in no VG: 0 [0]
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


Thread overview: 4+ messages
2004-07-19 19:05 [linux-lvm] lvm 1.0.3 kernel panic Stephanus Fengler
2004-07-19 21:46 ` Stephanus Fengler [this message]
  -- strict thread matches above, loose matches on Subject: below --
2004-07-20  1:09 Stephanus Fengler
2004-07-21  8:32 ` Stephanus Fengler
