linux-lvm.redhat.com archive mirror
* [linux-lvm] Ran vgreduce --missing to remove broken disk, am I screwed?
@ 2006-08-18 14:29 Thomas Novin
  2006-08-29 21:02 ` Thomas Novin
  0 siblings, 1 reply; 11+ messages in thread
From: Thomas Novin @ 2006-08-18 14:29 UTC (permalink / raw)
  To: linux-lvm

Hi all,

I had a disk that stopped working. After booting I could see with pvdisplay
that the disk was missing. After reading everything I could find via google
I thought that you were supposed to run 'vgreduce --removemissing volgrp0'
to remove the missing disk from the group.

After this the volume group looks OK but the entire logical volume got
removed! Am I screwed now or is there any way to salvage the data which is
on the remaining three disks?

[root@mistik ~]# pvdisplay
  Couldn't find device with uuid 'dvPVGp-9Vcg-b6Gp-s4cC-2pwK-cvyc-ATbE53'.
  Couldn't find device with uuid 'dvPVGp-9Vcg-b6Gp-s4cC-2pwK-cvyc-ATbE53'.
  Couldn't find device with uuid 'dvPVGp-9Vcg-b6Gp-s4cC-2pwK-cvyc-ATbE53'.
  --- Physical volume ---
  PV Name               /dev/sda
  VG Name               volgrp0
  PV Size               189.92 GB / not usable 0   
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              48620
  Free PE               0
  Allocated PE          48620
  PV UUID               zyELIl-MgM6-9zoD-G0PI-c1Wy-CZ1i-Sps8EV
   
  --- Physical volume ---
  PV Name               /dev/hdd
  VG Name               volgrp0
  PV Size               115.04 GB / not usable 0   
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              29449
  Free PE               0
  Allocated PE          29449
  PV UUID               yIgADr-5HXP-lFZC-KtZq-onX2-vaNG-aFiLLI
   
  --- Physical volume ---
  PV Name               unknown device
  VG Name               volgrp0
  PV Size               115.04 GB / not usable 0   
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              29449
  Free PE               0
  Allocated PE          29449
  PV UUID               dvPVGp-9Vcg-b6Gp-s4cC-2pwK-cvyc-ATbE53
   
  --- Physical volume ---
  PV Name               /dev/hda4
  VG Name               volgrp0
  PV Size               111.17 GB / not usable 0   
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              28460
  Free PE               0
  Allocated PE          28460
  PV UUID               Lh7KKJ-OGDQ-Gv4A-4l7U-TRGS-mfma-R2LrPt

[root@mistik ~]# vgreduce --removemissing volgrp0
  Couldn't find device with uuid 'dvPVGp-9Vcg-b6Gp-s4cC-2pwK-cvyc-ATbE53'.
  Couldn't find all physical volumes for volume group volgrp0.
  Couldn't find device with uuid 'dvPVGp-9Vcg-b6Gp-s4cC-2pwK-cvyc-ATbE53'.
  Couldn't find all physical volumes for volume group volgrp0.
  Couldn't find device with uuid 'dvPVGp-9Vcg-b6Gp-s4cC-2pwK-cvyc-ATbE53'.
  Couldn't find all physical volumes for volume group volgrp0.
  Couldn't find device with uuid 'dvPVGp-9Vcg-b6Gp-s4cC-2pwK-cvyc-ATbE53'.
  Couldn't find all physical volumes for volume group volgrp0.
  Couldn't find device with uuid 'dvPVGp-9Vcg-b6Gp-s4cC-2pwK-cvyc-ATbE53'.
  Couldn't find all physical volumes for volume group volgrp0.
  Couldn't find device with uuid 'dvPVGp-9Vcg-b6Gp-s4cC-2pwK-cvyc-ATbE53'.
  Couldn't find all physical volumes for volume group volgrp0.
  Couldn't find device with uuid 'dvPVGp-9Vcg-b6Gp-s4cC-2pwK-cvyc-ATbE53'.
  Couldn't find device with uuid 'dvPVGp-9Vcg-b6Gp-s4cC-2pwK-cvyc-ATbE53'.
  Couldn't find device with uuid 'dvPVGp-9Vcg-b6Gp-s4cC-2pwK-cvyc-ATbE53'.
  Couldn't find device with uuid 'dvPVGp-9Vcg-b6Gp-s4cC-2pwK-cvyc-ATbE53'.
  Couldn't find device with uuid 'dvPVGp-9Vcg-b6Gp-s4cC-2pwK-cvyc-ATbE53'.
  Couldn't find device with uuid 'dvPVGp-9Vcg-b6Gp-s4cC-2pwK-cvyc-ATbE53'.
  Couldn't find device with uuid 'dvPVGp-9Vcg-b6Gp-s4cC-2pwK-cvyc-ATbE53'.
  Couldn't find device with uuid 'dvPVGp-9Vcg-b6Gp-s4cC-2pwK-cvyc-ATbE53'.
  Couldn't find device with uuid 'dvPVGp-9Vcg-b6Gp-s4cC-2pwK-cvyc-ATbE53'.
  Wrote out consistent volume group volgrp0
[root@mistik ~]# vgdisplay 
  --- Volume group ---
  VG Name               volgrp0
  System ID             
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  9
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               416.13 GB
  PE Size               4.00 MB
  Total PE              106529
  Alloc PE / Size       0 / 0   
  Free  PE / Size       106529 / 416.13 GB
  VG UUID               5C4Hnl-YVct-jtv9-HM5Z-J63D-XePc-ANcpx8
[root@mistik ~]# lvdisplay 
[root@mistik ~]# 

[root@mistik ~]# pvdisplay 
  --- Physical volume ---
  PV Name               /dev/sda
  VG Name               volgrp0
  PV Size               189.92 GB / not usable 0   
  Allocatable           yes 
  PE Size (KByte)       4096
  Total PE              48620
  Free PE               48620
  Allocated PE          0
  PV UUID               zyELIl-MgM6-9zoD-G0PI-c1Wy-CZ1i-Sps8EV
   
  --- Physical volume ---
  PV Name               /dev/hdd
  VG Name               volgrp0
  PV Size               115.04 GB / not usable 0   
  Allocatable           yes 
  PE Size (KByte)       4096
  Total PE              29449
  Free PE               29449
  Allocated PE          0
  PV UUID               yIgADr-5HXP-lFZC-KtZq-onX2-vaNG-aFiLLI
   
  --- Physical volume ---
  PV Name               /dev/hda4
  VG Name               volgrp0
  PV Size               111.17 GB / not usable 0   
  Allocatable           yes 
  PE Size (KByte)       4096
  Total PE              28460
  Free PE               28460
  Allocated PE          0
  PV UUID               Lh7KKJ-OGDQ-Gv4A-4l7U-TRGS-mfma-R2LrPt

Thanks for any help on this,

Thomas

^ permalink raw reply	[flat|nested] 11+ messages in thread

* RE: [linux-lvm] Ran vgreduce --missing to remove broken disk, am I screwed?
  2006-08-18 14:29 [linux-lvm] Ran vgreduce --missing to remove broken disk, am I screwed? Thomas Novin
@ 2006-08-29 21:02 ` Thomas Novin
  2006-08-29 21:16   ` Peter Smith
  2006-08-30  2:16   ` Tom+Dale
  0 siblings, 2 replies; 11+ messages in thread
From: Thomas Novin @ 2006-08-29 21:02 UTC (permalink / raw)
  To: 'LVM general discussion and development'

> I had a disk that stopped working. After booting I could see with
> pvdisplay
> that the disk was missing. After reading everything I could find via
> google
> I thought that you were supposed to run 'vgreduce --remove-missing
> volgrp0'
> to remove the missing disk from the group.
> 
> After this the volume group looks OK but the entire logical volume got
> removed! Am I screwed now or is there any way to salvage the data which is
> on the remaining three disks?

Please someone answer this, is there any solution to this problem? To
clarify:

- Disk failure
- Ran 'vgreduce --removemissing volgrp0' (probably not such a good idea)
- /dev/volgrp0/ empty. 'lvdisplay' doesn't show anything.

So, can I somehow restore my logical volume? The three other physical
disks/partitions are intact.

Thanks in advance,

Thomas Novin


* Re: [linux-lvm] Ran vgreduce --missing to remove broken disk, am I screwed?
  2006-08-29 21:02 ` Thomas Novin
@ 2006-08-29 21:16   ` Peter Smith
  2006-08-30  2:16   ` Tom+Dale
  1 sibling, 0 replies; 11+ messages in thread
From: Peter Smith @ 2006-08-29 21:16 UTC (permalink / raw)
  To: LVM general discussion and development

Thomas Novin wrote:

>>I had a disk that stopped working. After booting I could see with
>>pvdisplay
>>that the disk was missing. After reading everything I could find via
>>google
>>I thought that you were supposed to run 'vgreduce --remove-missing
>>volgrp0'
>>to remove the missing disk from the group.
>>
>>After this the volume group looks OK but the entire logical volume got
>>removed! Am I screwed now or is there any way to salvage the data which is
>>on the remaining three disks?
>>    
>>
>
>Please someone answer this, is there any solution to this problem? To
>clarify:
>
>- Disk failure
>- Ran 'vgreduce --remove-missing volgrp0' (probably not such a good idea)
>- /dev/volgrp0/ empty. 'lvdisplay' doesn't show anything.
>
>So, can I somehow restore my logical volume? The three other physical
>disks/partitions are intact.
>
>Thanks in advance,
>
>Thomas Novin
>  
>

You will probably be able to restore your config from a backup. Look in 
/etc/lvmconf . Or, look for files that may be backups of your 
configuration. You should be able to do some sort of vgcfgrestore 
command using a previous conf and get back to, at least, where you were 
before doing the remove.
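For LVM2 specifically, those automatic metadata backups normally live under /etc/lvm/backup (current) and /etc/lvm/archive (historical versions) rather than /etc/lvmconf. A non-destructive sketch of this approach follows; the VG name volgrp0 and the archive path are assumptions taken from this thread, and the restore is only rehearsed in vgcfgrestore's test mode:

```shell
#!/bin/sh
# Non-destructive sketch: find the newest archived metadata for the VG
# and rehearse a restore with vgcfgrestore's test mode (-t) first.
# VG name and archive path are assumptions taken from this thread.
VG=volgrp0
ARCHIVE=$(ls -t /etc/lvm/archive/"$VG"_*.vg 2>/dev/null | head -n 1)
if [ -n "$ARCHIVE" ] && command -v vgcfgrestore >/dev/null 2>&1; then
    vgcfgrestore -t -f "$ARCHIVE" "$VG"   # -t: metadata will NOT be updated
else
    echo "no archived metadata (or no LVM tools) found for $VG"
fi
```

If the test run reports "Restored volume group", the same command without -t writes the metadata back for real, after which the LVs can be activated with vgchange -ay.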

Peter


* RE: [linux-lvm] Ran vgreduce --missing to remove broken disk, am I screwed?
  2006-08-29 21:02 ` Thomas Novin
  2006-08-29 21:16   ` Peter Smith
@ 2006-08-30  2:16   ` Tom+Dale
  2006-08-30  3:09     ` Tom+Dale
  2006-09-04  4:09     ` [linux-lvm] Wrecked Logical Volume with vgreduce --removemissing Tom+Dale
  1 sibling, 2 replies; 11+ messages in thread
From: Tom+Dale @ 2006-08-30  2:16 UTC (permalink / raw)
  To: LVM general discussion and development

[-- Attachment #1: Type: text/plain, Size: 1712 bytes --]

Thomas Novin <thnov@xyz.pp.se> wrote:

> I had a disk that stopped working. After booting I could see with
> pvdisplay
> that the disk was missing. After reading everything I could find via
> google
> I thought that you were supposed to run 'vgreduce --remove-missing
> volgrp0'
> to remove the missing disk from the group.
> 
> After this the volume group looks OK but the entire logical volume got
> removed! Am I screwed now or is there any way to salvage the data which is
> on the remaining three disks?

Please someone answer this, is there any solution to this problem? To
clarify:

- Disk failure
- Ran 'vgreduce --remove-missing volgrp0' (probably not such a good idea)
- /dev/volgrp0/ empty. 'lvdisplay' doesn't show anything.

So, can I somehow restore my logical volume? The three other physical
disks/partitions are intact.

Thanks in advance,

Thomas Novin

I wish I could help you...
The answer to this question is VERY important to me, too.  I sent out a question to this mailing list on Saturday that described my nearly identical problem (although I included lots of details).  Nobody responded at all.  Perhaps I overwhelmed everyone with TMI???
   
  If there is no one here who can answer these questions, where can we go for help?
   
  Could someone please suggest a forum or reference document that can help Mr. Novin and me?  I have spent 3 weeks searching for help on the 'net; and I have read the LVM HowTo several times.  This mailing list was my best hope for assistance.
   
  Thanks again in advance for any help you might be able to provide, folks!
   
  -Tom-

 		



* RE: [linux-lvm] Ran vgreduce --missing to remove broken disk, am I screwed?
  2006-08-30  2:16   ` Tom+Dale
@ 2006-08-30  3:09     ` Tom+Dale
  2006-09-04  4:09     ` [linux-lvm] Wrecked Logical Volume with vgreduce --removemissing Tom+Dale
  1 sibling, 0 replies; 11+ messages in thread
From: Tom+Dale @ 2006-08-30  3:09 UTC (permalink / raw)
  To: LVM general discussion and development

[-- Attachment #1: Type: text/plain, Size: 1908 bytes --]

Tom+Dale <tdmyth@yahoo.com> wrote:

Thomas Novin <thnov@xyz.pp.se> wrote:

> I had a disk that stopped working. After booting I could see with
> pvdisplay
> that the disk was missing. After reading everything I could find via
> google
> I thought that you were supposed to run 'vgreduce --remove-missing
> volgrp0'
> to remove the missing disk from the group.
> 
> After this the volume group looks OK but the entire logical volume got
> removed! Am I screwed now or is there any way to salvage the data which is
> on the remaining three disks?

Please someone answer this, is there any solution to this problem? To
clarify:

- Disk failure
- Ran 'vgreduce --remove-missing volgrp0' (probably not such a good idea)
- /dev/volgrp0/ empty. 'lvdisplay' doesn't show anything.

So, can I somehow restore my logical volume? The three other physical
disks/partitions are intact.

Thanks in advance,

Thomas Novin

I wish I could help you...
The answer to this question is VERY important to me, too.  I sent out a question to this mailing list on Saturday that described my nearly identical problem (although I included lots of details).  Nobody responded at all.  Perhaps I overwhelmed everyone with TMI???
   
  If there is no one here who can answer these questions, where can we go for help?
   
  Could someone please suggest a forum or reference document that can help Mr. Novin and me?  I have spent 3 weeks searching for help on the 'net; and I have read the LVM HowTo several times.  This mailing list was my best hope for assistance.
   
  Thanks again in advance for any help you might be able to provide, folks!
   
  -Tom-
    
  
   
  Oops!  I see that Peter Smith had responded to my message; also my case was not related to "vgreduce."  Sorry.
   
  -Tom-


 		



* [linux-lvm] Wrecked Logical Volume with vgreduce --removemissing
  2006-08-30  2:16   ` Tom+Dale
  2006-08-30  3:09     ` Tom+Dale
@ 2006-09-04  4:09     ` Tom+Dale
  2006-09-06 16:29       ` Peter Smith
                         ` (2 more replies)
  1 sibling, 3 replies; 11+ messages in thread
From: Tom+Dale @ 2006-09-04  4:09 UTC (permalink / raw)
  To: LVM general discussion and development

[-- Attachment #1: Type: text/plain, Size: 2119 bytes --]

I had a bad drive in my volume group, so I tried "vgreduce --removemissing VolGroup00" which seemed to work. Then, I tried "vgchange -ay --partial VolGroup00" which also appeared to complete successfully.  It seems like I was partly successful but I cannot get the logical volume to be recognized.  I don't know what else to do?  Where can I learn more about LVM?  How does one utilize the LVM archive or backup files?  Can't seem to find an answer to this problem.  Is all of my data lost?

[root[at]server ~]# lvchange -vvvay /dev/VolGroup00/LogVol00 
Processing: lvchange -vvvay /dev/VolGroup00/LogVol00 
O_DIRECT will be used 
Setting global/locking_type to 1 
Setting global/locking_dir to /var/lock/lvm 
File-based locking enabled. 
Using logical volume(s) on command line 
Locking /var/lock/lvm/V_VolGroup00 WB 
Opened /dev/sda RW 
/dev/sda: block size is 4096 bytes 
/dev/sda: No label detected  
Opened /dev/hda1 RW 
/dev/hda1: block size is 1024 bytes 
/dev/hda1: No label detected 
Opened /dev/hda2 RW 
/dev/hda2: block size is 4096 bytes 
/dev/hda2: No label detected 
Opened /dev/hda3 RW 
/dev/hda3: block size is 4096 bytes 
/dev/hda3: No label detected 
Opened /dev/hda5 RW 
/dev/hda5: block size is 512 bytes 
/dev/hda5: lvm2 label detected 
lvmcache: /dev/hda5 now orphaned 
lvmcache: /dev/hda5 now in VG VolGroup00 
Opened /dev/hdb RW 
/dev/hdb: block size is 4096 bytes 
/dev/hdb: lvm2 label detected 
lvmcache: /dev/hdb now orphaned 
lvmcache: /dev/hdb now in VG VolGroup00 
/dev/hda5: lvm2 label detected 
/dev/hdb: lvm2 label detected 
/dev/hda5: lvm2 label detected 
/dev/hdb: lvm2 label detected 
Read VolGroup00 metadata (11) from /dev/hda5 at 18944 size 720 
/dev/hda5: lvm2 label detected 
/dev/hdb: lvm2 label detected 
Read VolGroup00 metadata (11) from /dev/hdb at 16896 size 720  
One or more specified logical volume(s) not found. 
Unlocking /var/lock/lvm/V_VolGroup00 
Closed /dev/sda 
Closed /dev/hda1 
Closed /dev/hda2 
Closed /dev/hda3 
Closed /dev/hda5 
Closed /dev/hdb 

 		



* Re: [linux-lvm] Wrecked Logical Volume with vgreduce --removemissing
  2006-09-04  4:09     ` [linux-lvm] Wrecked Logical Volume with vgreduce --removemissing Tom+Dale
@ 2006-09-06 16:29       ` Peter Smith
  2006-09-07 17:18       ` Peter Smith
  2006-09-10 17:27       ` Nix
  2 siblings, 0 replies; 11+ messages in thread
From: Peter Smith @ 2006-09-06 16:29 UTC (permalink / raw)
  To: LVM general discussion and development

[-- Attachment #1: Type: text/plain, Size: 2685 bytes --]

What if you do "lvdisplay -m /dev/VolGroup00/LogVol00 --partial"

P

Tom+Dale wrote:

> I had a bad drive in my volume group, so I tried "vgreduce 
> --removemissing VolGroup00" which seemed to work. Then, I tried 
> "vgchange -ay --partial VolGroup00" which also appeared to complete 
> successfully.  It seems like I was partly successful but I cannot get 
> the logical volume to be recognized.  I don't know what else to do?  
> Where can I learn more about LVM?  How does one utilize the LVM 
> archive or backup files?  Can't seem to find an answer to this 
> problem.  Is all of my data lost?
>
> [root[at]server ~]# lvchange -vvvay /dev/VolGroup00/LogVol00
> Processing: lvchange -vvvay /dev/VolGroup00/LogVol00
> O_DIRECT will be used
> Setting global/locking_type to 1
> Setting global/locking_dir to /var/lock/lvm
> File-based locking enabled.
> Using logical volume(s) on command line
> Locking /var/lock/lvm/V_VolGroup00 WB
> Opened /dev/sda RW
> /dev/sda: block size is 4096 bytes
> /dev/sda: No label detected
> Opened /dev/hda1 RW
> /dev/hda1: block size is 1024 bytes
> /dev/hda1: No label detected
> Opened /dev/hda2 RW
> /dev/hda2: block size is 4096 bytes
> /dev/hda2: No label detected
> Opened /dev/hda3 RW
> /dev/hda3: block size is 4096 bytes
> /dev/hda3: No label detected
> Opened /dev/hda5 RW
> /dev/hda5: block size is 512 bytes
> /dev/hda5: lvm2 label detected
> lvmcache: /dev/hda5 now orphaned
> lvmcache: /dev/hda5 now in VG VolGroup00
> Opened /dev/hdb RW
> /dev/hdb: block size is 4096 bytes
> /dev/hdb: lvm2 label detected
> lvmcache: /dev/hdb now orphaned
> lvmcache: /dev/hdb now in VG VolGroup00
> /dev/hda5: lvm2 label detected
> /dev/hdb: lvm2 label detected
> /dev/hda5: lvm2 label detected
> /dev/hdb: lvm2 label detected
> Read VolGroup00 metadata (11) from /dev/hda5 at 18944 size 720
> /dev/hda5: lvm2 label detected
> /dev/hdb: lvm2 label detected
> Read VolGroup00 metadata (11) from /dev/hdb at 16896 size 720
> One or more specified logical volume(s) not found.
> Unlocking /var/lock/lvm/V_VolGroup00
> Closed /dev/sda
> Closed /dev/hda1
> Closed /dev/hda2
> Closed /dev/hda3
> Closed /dev/hda5
> Closed /dev/hdb
>
>



* Re: [linux-lvm] Wrecked Logical Volume with vgreduce --removemissing
  2006-09-04  4:09     ` [linux-lvm] Wrecked Logical Volume with vgreduce --removemissing Tom+Dale
  2006-09-06 16:29       ` Peter Smith
@ 2006-09-07 17:18       ` Peter Smith
  2006-09-11 19:05         ` Tom+Dale
  2006-09-10 17:27       ` Nix
  2 siblings, 1 reply; 11+ messages in thread
From: Peter Smith @ 2006-09-07 17:18 UTC (permalink / raw)
  To: LVM general discussion and development

[-- Attachment #1: Type: text/plain, Size: 3174 bytes --]

Ok.  It sounds like the fix will have to involve finding a way to get 
back to your installed-configuration.  The one that defines your VG as 
containing *only* hda5 and hdb.  It sounds like you succeeded extending 
the LV to include sda, but forgot to expand the filesystem after that.  
Which, ultimately, may save you.  I don't have enough experience with 
LVM (or just LVM2) to be able to tell you either how to edit your 
current mangled config to force your LV to forget about sda (which it 
sounds like you were successful accomplishing anyways) *or* to recover 
the before-sda configuration.  I've looked at the config on my own 
workstation and even though I'm using LVM2 I don't see any evidence of 
backed-up or historical LVM2 configuration data.  So I *really* don't 
know what to tell you there.  But I think it should be do-able.  There 
may be someone out there capable of helping with this, but I'm afraid it 
likely isn't me.  Although I might try and reproduce this on one of my 
Fedora machines to see if I get stuck in the same box which you are in.

Peter

Tom+Dale wrote:

> I had a bad drive in my volume group, so I tried "vgreduce 
> --removemissing VolGroup00" which seemed to work. Then, I tried 
> "vgchange -ay --partial VolGroup00" which also appeared to complete 
> successfully.  It seems like I was partly successful but I cannot get 
> the logical volume to be recognized.  I don't know what else to do?  
> Where can I learn more about LVM?  How does one utilize the LVM 
> archive or backup files?  Can't seem to find an answer to this 
> problem.  Is all of my data lost?
>
> [root[at]server ~]# lvchange -vvvay /dev/VolGroup00/LogVol00
> Processing: lvchange -vvvay /dev/VolGroup00/LogVol00
> O_DIRECT will be used
> Setting global/locking_type to 1
> Setting global/locking_dir to /var/lock/lvm
> File-based locking enabled.
> Using logical volume(s) on command line
> Locking /var/lock/lvm/V_VolGroup00 WB
> Opened /dev/sda RW
> /dev/sda: block size is 4096 bytes
> /dev/sda: No label detected
> Opened /dev/hda1 RW
> /dev/hda1: block size is 1024 bytes
> /dev/hda1: No label detected
> Opened /dev/hda2 RW
> /dev/hda2: block size is 4096 bytes
> /dev/hda2: No label detected
> Opened /dev/hda3 RW
> /dev/hda3: block size is 4096 bytes
> /dev/hda3: No label detected
> Opened /dev/hda5 RW
> /dev/hda5: block size is 512 bytes
> /dev/hda5: lvm2 label detected
> lvmcache: /dev/hda5 now orphaned
> lvmcache: /dev/hda5 now in VG VolGroup00
> Opened /dev/hdb RW
> /dev/hdb: block size is 4096 bytes
> /dev/hdb: lvm2 label detected
> lvmcache: /dev/hdb now orphaned
> lvmcache: /dev/hdb now in VG VolGroup00
> /dev/hda5: lvm2 label detected
> /dev/hdb: lvm2 label detected
> /dev/hda5: lvm2 label detected
> /dev/hdb: lvm2 label detected
> Read VolGroup00 metadata (11) from /dev/hda5 at 18944 size 720
> /dev/hda5: lvm2 label detected
> /dev/hdb: lvm2 label detected
> Read VolGroup00 metadata (11) from /dev/hdb at 16896 size 720
> One or more specified logical volume(s) not found.
> Unlocking /var/lock/lvm/V_VolGroup00
> Closed /dev/sda
> Closed /dev/hda1
> Closed /dev/hda2
> Closed /dev/hda3
> Closed /dev/hda5
> Closed /dev/hdb
>



* Re: [linux-lvm] Wrecked Logical Volume with vgreduce --removemissing
  2006-09-04  4:09     ` [linux-lvm] Wrecked Logical Volume with vgreduce --removemissing Tom+Dale
  2006-09-06 16:29       ` Peter Smith
  2006-09-07 17:18       ` Peter Smith
@ 2006-09-10 17:27       ` Nix
  2 siblings, 0 replies; 11+ messages in thread
From: Nix @ 2006-09-10 17:27 UTC (permalink / raw)
  To: LVM general discussion and development

On Sun, 3 Sep 2006, Tom wondered:
>                                    How does one utilize the LVM
> archive or backup files?  Can't seem to find an answer to this
> problem.

Look up `vgcfgrestore'.
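
By way of illustration, such a recovery might look like the following sketch. The VG name VolGroup00 and the archive file name are assumptions borrowed from elsewhere in this thread, and the commands are guarded so this reads as a rehearsal, not a verified transcript:

```shell
#!/bin/sh
# Sketch of using vgcfgrestore with LVM's automatic metadata archives.
# VG and archive file names are assumptions taken from this thread.
VG=VolGroup00
if command -v vgcfgrestore >/dev/null 2>&1; then
    vgcfgrestore --list "$VG"                                  # list archived metadata versions
    vgcfgrestore -t -f /etc/lvm/archive/"$VG"_00000.vg "$VG"   # rehearse in test mode (-t)
else
    echo "vgcfgrestore not available on this host"
fi
```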

-- 
`In typical emacs fashion, it is both absurdly ornate and
 still not really what one wanted.' --- jdev


* Re: [linux-lvm] Wrecked Logical Volume with vgreduce --removemissing
  2006-09-07 17:18       ` Peter Smith
@ 2006-09-11 19:05         ` Tom+Dale
  2006-09-12 18:06           ` Peter Smith
  0 siblings, 1 reply; 11+ messages in thread
From: Tom+Dale @ 2006-09-11 19:05 UTC (permalink / raw)
  To: LVM general discussion and development

[-- Attachment #1: Type: text/plain, Size: 7159 bytes --]



Peter Smith <peter.smith@utsouthwestern.edu> wrote:

Ok.  It sounds like the fix will have to involve finding a way to get back to your installed-configuration.  The one that defines your VG as containing *only* hda5 and hdb.  It sounds like you succeeded extending the LV to include sda, but forgot to expand the filesystem after that.  Which, ultimately, may save you.  I don't have enough experience with LVM (or just LVM2) to be able to tell you either how to edit your current mangled config to force your LV to forget about sda (which it sounds like you were successful accomplishing anyways) *or* to recover the before-sda configuration.  I've looked at the config on my own workstation and even though I'm using LVM2 I don't see any evidence of backed-up or historical LVM2 configuration data.  So I *really* don't know what to tell you there.  But I think it should be do-able.  There may be someone out there capable of helping with this, but I'm afraid it likely isn't me.  Although I might try and reproduce this on one of my Fedora machines to see if I get stuck in the same box which you are in.
 
 Peter
 
Tom+Dale wrote:

I had a bad drive in my volume group, so I tried "vgreduce --removemissing VolGroup00" which seemed to work. Then, I tried "vgchange -ay --partial VolGroup00" which also appeared to complete successfully.  It seems like I was partly successful but I cannot get the logical volume to be recognized.  I don't know what else to do?  Where can I learn more about LVM?  How does one utilize the LVM archive or backup files?  Can't seem to find an answer to this problem.  Is all of my data lost?
   
   [root[at]server ~]# lvchange -vvvay /dev/VolGroup00/LogVol00 
 Processing: lvchange -vvvay /dev/VolGroup00/LogVol00 
 O_DIRECT will be used 
 Setting global/locking_type to 1 
 Setting global/locking_dir to /var/lock/lvm 
 File-based locking enabled. 
 Using logical volume(s) on command line 
 Locking /var/lock/lvm/V_VolGroup00 WB 
 Opened /dev/sda RW 
 /dev/sda: block size is 4096 bytes 
 /dev/sda: No label detected 
 Opened /dev/hda1 RW 
 /dev/hda1: block size is 1024 bytes 
 /dev/hda1: No label detected 
 Opened /dev/hda2 RW 
 /dev/hda2: block size is 4096 bytes 
 /dev/hda2: No label detected 
 Opened /dev/hda3 RW 
 /dev/hda3: block size is 4096 bytes 
 /dev/hda3: No label detected 
 Opened /dev/hda5 RW 
 /dev/hda5: block size is 512 bytes 
 /dev/hda5: lvm2 label detected 
 lvmcache: /dev/hda5 now orphaned 
 lvmcache: /dev/hda5 now in VG VolGroup00 
 Opened /dev/hdb RW 
 /dev/hdb: block size is 4096 bytes 
 /dev/hdb: lvm2 label detected 
 lvmcache: /dev/hdb now orphaned 
 lvmcache: /dev/hdb now in VG VolGroup00 
 /dev/hda5: lvm2 label detected 
 /dev/hdb: lvm2 label detected 
 /dev/hda5: lvm2 label detected 
 /dev/hdb: lvm2 label detected 
 Read VolGroup00 metadata (11) from /dev/hda5 at 18944 size 720 
 /dev/hda5: lvm2 label detected 
 /dev/hdb: lvm2 label detected 
 Read VolGroup00 metadata (11) from /dev/hdb at 16896 size 720 
 One or more specified logical volume(s) not found. 
 Unlocking /var/lock/lvm/V_VolGroup00 
 Closed /dev/sda 
 Closed /dev/hda1 
 Closed /dev/hda2 
 Closed /dev/hda3 
 Closed /dev/hda5 
 Closed /dev/hdb 
   
 Once again, thank you, Peter.  I appreciate your time and attention on this matter.  Even though you were not certain of how to approach this, you did help me to resolve the problem.  Of course, the most significant factor was the adjustment of my attitude when I decided that all the data was probably lost.  I became less cautious, then.  :-)

I was right all along...the data was there.  Your assessment that we were saved by not having extended the volume is likely correct, too.  Good thing we didn't know what we were doing!  So when I started experimenting with the archive files associated with LVM, I stumbled on success.  Nix recently suggested "vgcfgrestore" as others had suggested to me; however, I had to use trial & error with the -t (test) parameter in order to figure this out.  By that I mean reading the LVM HowTo and various man pages did not clarify much for me.  So here are the steps that I took:
--------------------------------------------------------
[root@mythserver lvm]# vgcfgrestore -tf /etc/lvm/archive/VolGroup00_00000.vg
  Test mode: Metadata will NOT be updated.
  Please specify a *single* volume group to restore.
[root@mythserver lvm]# vgcfgrestore -tf /etc/lvm/archive/VolGroup00_00000.vg VolGroup00
  Test mode: Metadata will NOT be updated.
  Restored volume group VolGroup00
[root@mythserver lvm]# vgcfgrestore -tvf /etc/lvm/archive/VolGroup00_00000.vg VolGroup00
  Test mode: Metadata will NOT be updated.
  Restored volume group VolGroup00
    Test mode: Wiping internal cache
    Wiping internal VG cache
[root@mythserver lvm]# vgcfgrestore -tvvf /etc/lvm/archive/VolGroup00_00000.vg VolGroup00
  Test mode: Metadata will NOT be updated.
      Setting global/locking_type to 1
      Setting global/locking_dir to /var/lock/lvm
      File-based locking enabled.
      Locking /var/lock/lvm/P_orphans WB
      Locking /var/lock/lvm/V_VolGroup00 W 
      /dev/hda1: No label detected
      /dev/hda2: No label detected
      /dev/hda3: No label detected
      /dev/hda5: lvm2 label detected
      /dev/hdb: lvm2 label detected
      /dev/hda5: lvm2 label detected
      /dev/hdb: lvm2 label detected
  Restored volume group VolGroup00
      Unlocking /var/lock/lvm/V_VolGroup00
      Unlocking /var/lock/lvm/P_orphans
    Test mode: Wiping internal cache
    Wiping internal VG cache
[root@mythserver lvm]# vgcfgrestore -vvf /etc/lvm/archive/VolGroup00_00000.vg VolGroup00
      Setting global/locking_type to 1
      Setting global/locking_dir to /var/lock/lvm
      File-based locking enabled.
      Locking /var/lock/lvm/P_orphans WB
      Locking /var/lock/lvm/V_VolGroup00 W 
      /dev/hda1: No label detected
      /dev/hda2: No label detected
      /dev/hda3: No label detected
      /dev/hda5: lvm2 label detected
      /dev/hdb: lvm2 label detected
      /dev/hda5: lvm2 label detected
      /dev/hdb: lvm2 label detected
  Restored volume group VolGroup00
      Unlocking /var/lock/lvm/V_VolGroup00
      Unlocking /var/lock/lvm/P_orphans
[root@mythserver lvm]# lvscan
  inactive          '/dev/VolGroup00/LogVol00' [364.21 GB] inherit
[root@mythserver lvm]# lvchange -tv -ay /dev/VolGroup00/LogVol00
  Test mode: Metadata will NOT be updated.
    Using logical volume(s) on command line
    Activating logical volume "LogVol00"
    Found volume group "VolGroup00"
    Test mode: Wiping internal cache
    Wiping internal VG cache
[root@mythserver lvm]# lvchange -v -ay /dev/VolGroup00/LogVol00
    Using logical volume(s) on command line
    Activating logical volume "LogVol00"
    Found volume group "VolGroup00"
    Loading VolGroup00-LogVol00
[root@mythserver lvm]# mount -a
------------------------------------------------------------------------
That did it!  We were able to copy our data off the volume and recover the whole system.  I hope this helps someone else down the road.
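
Condensed, the sequence in the transcript above amounts to the following sketch; the VG/LV names and archive file name are as in this setup, so on another system they are assumptions to adjust:

```shell
#!/bin/sh
# Condensed from the transcript above; VG/LV names as in this setup.
# Guarded so the sketch is a no-op on hosts without the LVM tools.
VG=VolGroup00
LV=LogVol00
if command -v vgcfgrestore >/dev/null 2>&1; then
    # 1. Rehearse the metadata restore in test mode (-t), then run it for real
    vgcfgrestore -t -f /etc/lvm/archive/"$VG"_00000.vg "$VG"
    vgcfgrestore -f /etc/lvm/archive/"$VG"_00000.vg "$VG"
    # 2. The LV reappears inactive; activate it
    lvscan
    lvchange -ay "/dev/$VG/$LV"
    # 3. Mount and copy the data off
    mount -a
else
    echo "LVM tools not available on this host"
fi
```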

-Tom-


 		



* Re: [linux-lvm] Wrecked Logical Volume with vgreduce --removemissing
  2006-09-11 19:05         ` Tom+Dale
@ 2006-09-12 18:06           ` Peter Smith
  0 siblings, 0 replies; 11+ messages in thread
From: Peter Smith @ 2006-09-12 18:06 UTC (permalink / raw)
  To: LVM general discussion and development

That is great news!!  It is like I always say about Open Source tools, 
for every problem I've ever had, I've found the solution--it just takes 
time.

Peter

Tom+Dale wrote:

> <snip>
> ------------------------------------------------------------------------
> That did it!  We were able to copy our data off the volume and recover 
> the whole system.  I hope this helps someone else down the road.
>
> -Tom-
>


end of thread, other threads:[~2006-09-12 18:07 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2006-08-18 14:29 [linux-lvm] Ran vgreduce --missing to remove broken disk, am I screwed? Thomas Novin
2006-08-29 21:02 ` Thomas Novin
2006-08-29 21:16   ` Peter Smith
2006-08-30  2:16   ` Tom+Dale
2006-08-30  3:09     ` Tom+Dale
2006-09-04  4:09     ` [linux-lvm] Wrecked Logical Volume with vgreduce --removemissing Tom+Dale
2006-09-06 16:29       ` Peter Smith
2006-09-07 17:18       ` Peter Smith
2006-09-11 19:05         ` Tom+Dale
2006-09-12 18:06           ` Peter Smith
2006-09-10 17:27       ` Nix
