* [linux-lvm] vgscan failed "no volume groups found"
@ 2001-10-25 0:14 appendix
2001-10-25 2:39 ` Andreas Dilger
0 siblings, 1 reply; 7+ messages in thread
From: appendix @ 2001-10-25 0:14 UTC (permalink / raw)
To: linux-lvm
Hi!
I have an LVM volume group "vg0" that was created on 9 RAID devices.
It had been working fine.
But one day, vgscan started reporting that it can't find my VG.
And the recovery data /etc/lvmconf/vg0.conf was corrupted by an unrelated problem.
Please help me, or give me hints to recover it.
My VG structure is the following:
/dev/md2,md3,md4,md5,md6,md7,md8,md9,md10 => vg0
Kernel Linux melody.angelic.jp 2.4.5 #10 SMP
The pvscan report is the following:
>pvscan
pvscan -- reading all physical volumes (this may take a while...)
pvscan -- inactive PV "/dev/md2" is associated to an unknown VG (run vgscan)
pvscan -- inactive PV "/dev/md3" is associated to an unknown VG (run vgscan)
pvscan -- inactive PV "/dev/md4" is associated to an unknown VG (run vgscan)
pvscan -- inactive PV "/dev/md5" is associated to an unknown VG (run vgscan)
pvscan -- inactive PV "/dev/md6" is associated to an unknown VG (run vgscan)
pvscan -- inactive PV "/dev/md7" is associated to an unknown VG (run vgscan)
pvscan -- inactive PV "/dev/md8" is associated to an unknown VG (run vgscan)
pvscan -- inactive PV "/dev/md9" is associated to an unknown VG (run vgscan)
pvscan -- inactive PV "/dev/md10" of VG "vg1" [4.48 GB / 4.48 GB free]
pvscan -- total: 9 [68.48 GB] / in use: 9 [68.48 GB] / in no VG: 0 [0]
The pvdata report is the following:
>pvdata -U /dev/md2
000: w1ozGmggQJ7LqDumRFhBWpxAcBuinvkV
001: gyivka8v8Rs8N6UHW1mXO2A7pe3V2UtL
002: N1rBqi3J4SXDpRwYCh65eXCtH98zrkYQ
003: vy3JnFfm4b4j5t1kcnnmPBVnqvKE1454
004: 3qwEJ6e08fnjyfEtYh2VUwNLSlAv7WHC
005: bCf2F3RgkdCqz0qs605zpQiMDF738U7Q
006: Ao8MnMZSLrDhk1pbTHatNA5KHiZXv5vG
007: 3ztQ2cfoGMc15y1TTXQzSpSkTIBzLcas
008: 9VW0My6FYEh4T1WnwBP3m0OSlMhdM7Gq
009: BIxTWheupMeCfEjU8UuyW0LX8gAq4aoD
>pvdata -PP /dev/md2
--- Physical volume ---
PV Name /dev/md2
VG Name vg0
PV Size 8 GB / NOT usable 4 MB [LVM: 128 KB]
PV# 2
PV Status available
Allocatable yes (but full)
Cur LV 1
PE Size (KByte) 4096
Total PE 2047
Free PE 0
Allocated PE 2047
PV UUID gyivka-8v8R-s8N6-UHW1-mXO2-A7pe-3V2UtL
pv_dev 0:9
system_id melody.angelic.jp991380667
pv_on_disk.base 0
pv_on_disk.size 1024
vg_on_disk.base 1024
vg_on_disk.size 4608
pv_uuidlist_on_disk.base 6144
pv_uuidlist_on_disk.size 32896
lv_on_disk.base 39424
lv_on_disk.size 84296
pe_on_disk.base 123904
pe_on_disk.size 4070400
* Re: [linux-lvm] vgscan failed "no volume groups found"
2001-10-25 0:14 [linux-lvm] vgscan failed "no volume groups found" appendix
@ 2001-10-25 2:39 ` Andreas Dilger
2001-10-25 10:23 ` YouMizuki
` (2 more replies)
0 siblings, 3 replies; 7+ messages in thread
From: Andreas Dilger @ 2001-10-25 2:39 UTC (permalink / raw)
To: linux-lvm
On Oct 25, 2001 14:15 +0900, appendix@hatsune.cc wrote:
> I have an LVM volume group "vg0" that was created on 9 RAID devices.
>
> My VG structure is the following:
> /dev/md2,md3,md4,md5,md6,md7,md8,md9,md10 => vg0
You say 9 md devices, and list 9 above for vg0.
> Kernel Linux melody.angelic.jp 2.4.5 #10 SMP
>
> The pvscan report is the following:
> >pvscan
> pvscan -- reading all physical volumes (this may take a while...)
> pvscan -- inactive PV "/dev/md2" is associated to an unknown VG (run vgscan)
> pvscan -- inactive PV "/dev/md3" is associated to an unknown VG (run vgscan)
> pvscan -- inactive PV "/dev/md4" is associated to an unknown VG (run vgscan)
> pvscan -- inactive PV "/dev/md5" is associated to an unknown VG (run vgscan)
> pvscan -- inactive PV "/dev/md6" is associated to an unknown VG (run vgscan)
> pvscan -- inactive PV "/dev/md7" is associated to an unknown VG (run vgscan)
> pvscan -- inactive PV "/dev/md8" is associated to an unknown VG (run vgscan)
> pvscan -- inactive PV "/dev/md9" is associated to an unknown VG (run vgscan)
> pvscan -- inactive PV "/dev/md10" of VG "vg1" [4.48 GB / 4.48 GB free]
One of your PVs says it is in another VG ^^^ very strange.
> pvscan -- total: 9 [68.48 GB] / in use: 9 [68.48 GB] / in no VG: 0 [0]
>
> The pvdata report is the following:
> >pvdata -U /dev/md2
> 000: w1ozGmggQJ7LqDumRFhBWpxAcBuinvkV
> 001: gyivka8v8Rs8N6UHW1mXO2A7pe3V2UtL
> 002: N1rBqi3J4SXDpRwYCh65eXCtH98zrkYQ
> 003: vy3JnFfm4b4j5t1kcnnmPBVnqvKE1454
> 004: 3qwEJ6e08fnjyfEtYh2VUwNLSlAv7WHC
> 005: bCf2F3RgkdCqz0qs605zpQiMDF738U7Q
> 006: Ao8MnMZSLrDhk1pbTHatNA5KHiZXv5vG
> 007: 3ztQ2cfoGMc15y1TTXQzSpSkTIBzLcas
> 008: 9VW0My6FYEh4T1WnwBP3m0OSlMhdM7Gq
> 009: BIxTWheupMeCfEjU8UuyW0LX8gAq4aoD
^^^ 10 PVs listed. This is wrong.
>
> >pvdata -PP /dev/md2
> --- Physical volume ---
> PV# 2
> PV UUID gyivka-8v8R-s8N6-UHW1-mXO2-A7pe-3V2UtL
This matches "001" above. You need to check each of the other MD devices
to see which ones have valid UUIDs. You may need to use the uuid-fixer
tool.
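Untested sketch of that check, using the LVM1 pvdata output format shown earlier in this thread (the exact uuid_fixer invocation may differ on your build):

# Rough sketch -- dump the identifying fields of every MD device in the VG
# so you can see which ones still carry a sane PV UUID and VG name.
for dev in /dev/md2 /dev/md3 /dev/md4 /dev/md5 /dev/md6 \
           /dev/md7 /dev/md8 /dev/md9 /dev/md10; do
    echo "== $dev =="
    pvdata -PP "$dev" | grep -E 'PV Name|VG Name|PV#|PV UUID'
done
# If UUIDs are missing or mismatched, pass the complete, PV#-ordered
# list of the VG's devices to uuid_fixer, e.g.:
# uuid_fixer /dev/md2 /dev/md3 ... /dev/md10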
Cheers, Andreas
--
Andreas Dilger \ "If a man ate a pound of pasta and a pound of antipasto,
\ would they cancel out, leaving him still hungry?"
http://www-mddsp.enel.ucalgary.ca/People/adilger/ -- Dogbert
* Re: [linux-lvm] vgscan failed "no volume groups found"
2001-10-25 2:39 ` Andreas Dilger
@ 2001-10-25 10:23 ` YouMizuki
2001-10-27 5:33 ` [linux-lvm] vgscan failed "no volume groups found" YouMizuki
2001-10-29 12:38 ` [linux-lvm] trouble of reduce pv YouMizuki
2 siblings, 0 replies; 7+ messages in thread
From: YouMizuki @ 2001-10-25 10:23 UTC (permalink / raw)
To: linux-lvm
>> My VG structure is the following:
>> /dev/md2,md3,md4,md5,md6,md7,md8,md9,md10 => vg0
>You say 9 md devices, and list 9 above for vg0.
>pvscan -- inactive PV "/dev/md10" of VG "vg1" [4.48 GB / 4.48 GB free]
>One of your PVs says it is in another VG ^^^ very strange.
This PV device is not used now. This PV is not included in "vg0".
# Actually, this device was included in "vg0", but when I tried to recover it,
# it was broken because of my mistake.
# That may have broken part of this VG, but I think the area is not used by the filesystem.
# I want to recover the remainder....
>> >pvdata -PP /dev/md2
>> --- Physical volume ---
>> PV# 2
>> PV UUID gyivka-8v8R-s8N6-UHW1-mXO2-A7pe-3V2UtL
>This matches "001" above. You need to check each of the other MD devices
>to see which ones have valid UUIDs. You may need to use the uuid-fixer
>tool.
Thanks! I will try to use uuid-fixer.
--
YouMizuki <appendix@hatsune.cc>
* Re: [linux-lvm] vgscan failed "no volume groups found"
2001-10-25 2:39 ` Andreas Dilger
2001-10-25 10:23 ` YouMizuki
@ 2001-10-27 5:33 ` YouMizuki
2001-10-29 12:38 ` [linux-lvm] trouble of reduce pv YouMizuki
2 siblings, 0 replies; 7+ messages in thread
From: YouMizuki @ 2001-10-27 5:33 UTC (permalink / raw)
To: linux-lvm
I tried uuid_fixer, but I got this message:
uuid_fixer /dev/md2 /dev/md3 /dev/md4 /dev/md5 /dev/md6 /dev/md7 /dev/md8 /dev/md9
Error: number of PVs passed in does not match number of PVs in /dev/md2's VG
8 PVs were passed in and 10 were expected.
uuid_fixer /dev/md2 /dev/md3 /dev/md4 /dev/md5 /dev/md6 /dev/md7 /dev/md8 /dev/md9 /dev/md10
Error: number of PVs passed in does not match number of PVs in /dev/md2's VG
9 PVs were passed in and 10 were expected.
My /dev/md10 is broken, but /dev/md2../dev/md9 are alive.
I want to recover the remaining disks.
Please give me some advice....
>I have an LVM volume group "vg0" that was created on 9 RAID devices.
>It had been working fine.
>But one day, vgscan started reporting that it can't find my VG.
>And the recovery data /etc/lvmconf/vg0.conf was corrupted by an unrelated problem.
>Please help me, or give me hints to recover it.
>My VG structure is the following:
>/dev/md2,md3,md4,md5,md6,md7,md8,md9,md10 => vg0
>Kernel Linux 2.4.5 #10 SMP
>The pvscan report is the following:
>pvscan
>pvscan -- reading all physical volumes (this may take a while...)
>pvscan -- inactive PV "/dev/md2" is associated to an unknown VG (run vgscan)
>pvscan -- inactive PV "/dev/md3" is associated to an unknown VG (run vgscan)
>pvscan -- inactive PV "/dev/md4" is associated to an unknown VG (run vgscan)
>pvscan -- inactive PV "/dev/md5" is associated to an unknown VG (run vgscan)
>pvscan -- inactive PV "/dev/md6" is associated to an unknown VG (run vgscan)
>pvscan -- inactive PV "/dev/md7" is associated to an unknown VG (run vgscan)
>pvscan -- inactive PV "/dev/md8" is associated to an unknown VG (run vgscan)
>pvscan -- inactive PV "/dev/md9" is associated to an unknown VG (run vgscan)
>pvscan -- inactive PV "/dev/md10" of VG "vg1" [4.48 GB / 4.48 GB free]
>pvscan -- total: 9 [68.48 GB] / in use: 9 [68.48 GB] / in no VG: 0 [0]
>The pvdata report is the following:
>>pvdata -U /dev/md2
>000: w1ozGmggQJ7LqDumRFhBWpxAcBuinvkV
>001: gyivka8v8Rs8N6UHW1mXO2A7pe3V2UtL
>002: N1rBqi3J4SXDpRwYCh65eXCtH98zrkYQ
>003: vy3JnFfm4b4j5t1kcnnmPBVnqvKE1454
>004: 3qwEJ6e08fnjyfEtYh2VUwNLSlAv7WHC
>005: bCf2F3RgkdCqz0qs605zpQiMDF738U7Q
>006: Ao8MnMZSLrDhk1pbTHatNA5KHiZXv5vG
>007: 3ztQ2cfoGMc15y1TTXQzSpSkTIBzLcas
>008: 9VW0My6FYEh4T1WnwBP3m0OSlMhdM7Gq
>009: BIxTWheupMeCfEjU8UuyW0LX8gAq4aoD
>>pvdata -PP /dev/md2
>--- Physical volume ---
>PV Name /dev/md2
>VG Name vg0
>PV Size 8 GB / NOT usable 4 MB [LVM: 128 KB]
>PV# 2
>PV Status available
>Allocatable yes (but full)
>Cur LV 1
>PE Size (KByte) 4096
>Total PE 2047
>Free PE 0
>Allocated PE 2047
>PV UUID gyivka-8v8R-s8N6-UHW1-mXO2-A7pe-3V2UtL
>pv_dev 0:9
>pv_on_disk.base 0
>pv_on_disk.size 1024
>vg_on_disk.base 1024
>vg_on_disk.size 4608
>pv_uuidlist_on_disk.base 6144
>pv_uuidlist_on_disk.size 32896
>lv_on_disk.base 39424
>lv_on_disk.size 84296
>pe_on_disk.base 123904
>pe_on_disk.size 4070400
* [linux-lvm] trouble of reduce pv
2001-10-25 2:39 ` Andreas Dilger
2001-10-25 10:23 ` YouMizuki
2001-10-27 5:33 ` [linux-lvm] vgscan failed "no volume groups found" YouMizuki
@ 2001-10-29 12:38 ` YouMizuki
2001-10-29 16:17 ` Heinz J . Mauelshagen
2 siblings, 1 reply; 7+ messages in thread
From: YouMizuki @ 2001-10-29 12:38 UTC (permalink / raw)
To: linux-lvm
I have an LVM volume group "vg0" that was created on 10 RAID devices.
It had been working fine.
But one day, vgscan started reporting that it can't find my VG.
My VG structure was this:
| /dev/md1,md2,md3,md4,md5,md6,md7,md8,md9,md10 => vg0
But md10 was broken.
I need to restructure the VG like this:
| /dev/md1,md2,md3,md4,md5,md6,md7,md8,md9 => vg0
But vgscan can't find the VG:
|vgscan
|vgscan -- reading all physical volumes (this may take a while...)
|vgscan -- "/etc/lvmtab" and "/etc/lvmtab.d" successfully created
|vgscan -- WARNING: This program does not do a VGDA backup of your volume group
I tried uuid_fixer, but I only get an error:
|uuid_fixer /dev/md1 /dev/md2 /dev/md3 /dev/md4 /dev/md5 /dev/md6 /dev/md7 /dev/md8 /dev/md9
|Error: number of PVs passed in does not match number of PVs in /dev/md2's VG
| 9 PVs were passed in and 10 were expected.
I tried pvmove...
|pvmove --force -v /dev/md9 /dev/md10
|pvmove -- checking name of source physical volume "/dev/md9"
|pvmove -- locking logical volume manager
|pvmove -- reading data of source physical volume from "/dev/md9"
|pvmove -- checking volume group existence
|pvmove -- ERROR: can't move physical extents: volume group vg0 doesn't exist
How do I carry out this restructuring, and can I reconstruct the VG?
* Re: [linux-lvm] trouble of reduce pv
2001-10-29 12:38 ` [linux-lvm] trouble of reduce pv YouMizuki
@ 2001-10-29 16:17 ` Heinz J . Mauelshagen
2001-10-29 19:35 ` YouMizuki
0 siblings, 1 reply; 7+ messages in thread
From: Heinz J . Mauelshagen @ 2001-10-29 16:17 UTC (permalink / raw)
To: linux-lvm
You need to restore the metadata formerly stored on md10 to a valid device
using the LVM vgcfgrestore(8) command.
Then you can run vgscan and "vgchange -ay".
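A minimal sketch of that sequence, assuming a usable VGDA backup for vg0 still exists under /etc/lvmconf and that the broken PV has been replaced by a writable device (LVM1 option names may vary by version):

# Minimal sketch -- LVM1-era commands; assumes /etc/lvmconf still holds
# a backup for vg0 and /dev/md10 (or its replacement) is writable.
vgcfgrestore -n vg0 /dev/md10    # write the backed-up metadata onto the device
vgscan                           # rebuild /etc/lvmtab and /etc/lvmtab.d
vgchange -ay vg0                 # activate the volume group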
On Tue, Oct 30, 2001 at 03:45:29AM +0900, YouMizuki wrote:
> I have an LVM volume group "vg0" that was created on 10 RAID devices.
> It had been working fine.
> But one day, vgscan started reporting that it can't find my VG.
>
> My VG structure was this:
>
> | /dev/md1,md2,md3,md4,md5,md6,md7,md8,md9,md10 => vg0
>
> But md10 was broken.
> I need to restructure the VG like this:
>
> | /dev/md1,md2,md3,md4,md5,md6,md7,md8,md9 => vg0
>
> But vgscan can't find the VG:
>
> |vgscan
> |vgscan -- reading all physical volumes (this may take a while...)
> |vgscan -- "/etc/lvmtab" and "/etc/lvmtab.d" successfully created
> |vgscan -- WARNING: This program does not do a VGDA backup of your volume group
>
>
> I tried uuid_fixer, but I only get an error:
>
> |uuid_fixer /dev/md1 /dev/md2 /dev/md3 /dev/md4 /dev/md5 /dev/md6 /dev/md7 /dev/md8 /dev/md9
> |Error: number of PVs passed in does not match number of PVs in /dev/md2's VG
> | 9 PVs were passed in and 10 were expected.
>
> I tried pvmove...
>
> |pvmove --force -v /dev/md9 /dev/md10
> |pvmove -- checking name of source physical volume "/dev/md9"
> |pvmove -- locking logical volume manager
> |pvmove -- reading data of source physical volume from "/dev/md9"
> |pvmove -- checking volume group existence
> |pvmove -- ERROR: can't move physical extents: volume group vg0 doesn't exist
>
> How do I carry out this restructuring, and can I reconstruct the VG?
--
Regards,
Heinz -- The LVM Guy --
*** Software bugs are stupid.
Nevertheless it needs not so stupid people to solve them ***
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Heinz Mauelshagen Sistina Software Inc.
Senior Consultant/Developer Am Sonnenhang 11
56242 Marienrachdorf
Germany
Mauelshagen@Sistina.com +49 2626 141200
FAX 924446
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
* Re: [linux-lvm] trouble of reduce pv
2001-10-29 16:17 ` Heinz J . Mauelshagen
@ 2001-10-29 19:35 ` YouMizuki
0 siblings, 0 replies; 7+ messages in thread
From: YouMizuki @ 2001-10-29 19:35 UTC (permalink / raw)
To: linux-lvm
Thanks for your answer.
> You need to restore the metadata formerly stored on md10 to a valid device
> using the LVM vgcfgrestore(8) command.
> Then you can run vgscan and "vgchange -ay".
"/etc/lvmtab.d/" was losted with "/dev/md10".
Isn't there any method of reviving VG without /etc/lvmtab.d/ ?
For example, isn't there any method of creating metadata personally?
--
YouMizuki <appendix@hatsune.cc>
Thread overview: 7+ messages
2001-10-25 0:14 [linux-lvm] vgscan failed "no volume groups found" appendix
2001-10-25 2:39 ` Andreas Dilger
2001-10-25 10:23 ` YouMizuki
2001-10-27 5:33 ` [linux-lvm] vgscan failed "no volume groups found" YouMizuki
2001-10-29 12:38 ` [linux-lvm] trouble of reduce pv YouMizuki
2001-10-29 16:17 ` Heinz J . Mauelshagen
2001-10-29 19:35 ` YouMizuki