linux-lvm.redhat.com archive mirror
From: YouMizuki <appendix@hatsune.cc>
To: linux-lvm@sistina.com
Subject: Re: [linux-lvm] vgscan failed "no volume groups found"
Date: Sat Oct 27 05:33:01 2001
Message-ID: <20011027192708.3F87.APPENDIX@hatsune.cc>
In-Reply-To: <20011025014038.X23590@turbolinux.com>

I tried to run uuid_fixer, but I got this message:

uuid_fixer /dev/md2 /dev/md3 /dev/md4 /dev/md5 /dev/md6 /dev/md7 /dev/md8 /dev/md9
Error: number of PVs passed in does not match number of PVs in /dev/md2's VG
       8 PVs were passed in and 10 were expected.

uuid_fixer2 /dev/md2 /dev/md3 /dev/md4 /dev/md5 /dev/md6 /dev/md7 /dev/md8 /dev/md9 /dev/md10
Error: number of PVs passed in does not match number of PVs in /dev/md2's VG
       9 PVs were passed in and 10 were expected.

My /dev/md10 is broken, but /dev/md2 through /dev/md9 are still alive.
I want to recover the data on the remaining disks.
Please give me some advice....
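
As a cross-check (just a rough sketch using the same LVM1 pvdata tool quoted below, plus the standard wc command), the number of PVs that the on-disk metadata on /dev/md2 expects can be counted from its UUID list:

  # list the PV UUID slots recorded in /dev/md2's VG metadata
  pvdata -U /dev/md2
  # count them; this should match the "10 were expected" in the error above
  pvdata -U /dev/md2 | wc -l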


>I have an LVM volume group "vg0" that was created on 9 RAID devices.
>It had been working fine, but one day vgscan started reporting that it could not find my VG.
>The recovery data in /etc/lvmconf/vg0.conf was also broken, by another cause.
>Please help me..... or give me hints on how to recover it.

>My VG structure is as follows:

>/dev/md2,md3,md4,md5,md6,md7,md8,md9,md10 => vg0

>Kernel Linux  2.4.5 #10 SMP

>The pvscan output is as follows:
>pvscan
>pvscan -- reading all physical volumes (this may take a while...)
>pvscan -- inactive PV "/dev/md2"   is associated to an unknown VG (run
vgscan)
>pvscan -- inactive PV "/dev/md3"   is associated to an unknown VG (run
vgscan)
>pvscan -- inactive PV "/dev/md4"   is associated to an unknown VG (run
vgscan)
>pvscan -- inactive PV "/dev/md5"   is associated to an unknown VG (run
vgscan)
>pvscan -- inactive PV "/dev/md6"   is associated to an unknown VG (run
vgscan)
>pvscan -- inactive PV "/dev/md7"   is associated to an unknown VG (run
vgscan)
>pvscan -- inactive PV "/dev/md8"   is associated to an unknown VG (run
vgscan)
>pvscan -- inactive PV "/dev/md9"   is associated to an unknown VG (run
vgscan)
>pvscan -- inactive PV "/dev/md10" of VG "vg1" [4.48 GB / 4.48 GB free]
>pvscan -- total: 9 [68.48 GB] / in use: 9 [68.48 GB] / in no VG: 0 [0]
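
(A note on the "unknown VG" messages above: since pvdata -PP prints a "VG Name" line for each PV, the name stored in each device's header can be checked directly. This is only a rough sketch; the loop and grep pattern are my own, the pvdata call is as quoted below.)

  # print the VG name recorded in each PV's on-disk header
  for dev in /dev/md2 /dev/md3 /dev/md4 /dev/md5 /dev/md6 \
             /dev/md7 /dev/md8 /dev/md9 /dev/md10; do
      echo "== $dev =="
      pvdata -PP "$dev" | grep 'VG Name'
  done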

>The pvdata output is as follows:
>>pvdata -U /dev/md2
>000: w1ozGmggQJ7LqDumRFhBWpxAcBuinvkV
>001: gyivka8v8Rs8N6UHW1mXO2A7pe3V2UtL
>002: N1rBqi3J4SXDpRwYCh65eXCtH98zrkYQ
>003: vy3JnFfm4b4j5t1kcnnmPBVnqvKE1454
>004: 3qwEJ6e08fnjyfEtYh2VUwNLSlAv7WHC
>005: bCf2F3RgkdCqz0qs605zpQiMDF738U7Q
>006: Ao8MnMZSLrDhk1pbTHatNA5KHiZXv5vG
>007: 3ztQ2cfoGMc15y1TTXQzSpSkTIBzLcas
>008: 9VW0My6FYEh4T1WnwBP3m0OSlMhdM7Gq
>009: BIxTWheupMeCfEjU8UuyW0LX8gAq4aoD

>>pvdata -PP /dev/md2
>--- Physical volume ---
>PV Name               /dev/md2
>VG Name               vg0
>PV Size               8 GB / NOT usable 4 MB [LVM: 128 KB]
>PV#                   2
>PV Status             available
>Allocatable           yes (but full)
>Cur LV                1
>PE Size (KByte)       4096
>Total PE              2047
>Free PE               0
>Allocated PE          2047
>PV UUID               gyivka-8v8R-s8N6-UHW1-mXO2-A7pe-3V2UtL
>pv_dev                   0:9
>pv_on_disk.base          0
>pv_on_disk.size          1024
>vg_on_disk.base          1024
>vg_on_disk.size          4608
>pv_uuidlist_on_disk.base 6144
>pv_uuidlist_on_disk.size 32896
>lv_on_disk.base          39424
>lv_on_disk.size          84296
>pe_on_disk.base          123904
>pe_on_disk.size          4070400
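
(For completeness: the "PV UUID" above, gyivka-8v8R-..., is the same string as entry 001 in the pvdata -U list, just written with dashes. A rough sketch of cross-checking a device's own UUID against md2's UUID list, assuming only the quoted pvdata calls plus standard awk/tr/grep:)

  # strip the dashes from md2's own UUID and look it up in the VG's UUID list
  uuid=$(pvdata -PP /dev/md2 | grep 'PV UUID' | awk '{print $3}' | tr -d '-')
  pvdata -U /dev/md2 | grep "$uuid"    # should match slot 001 (PV# 2)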


Thread overview: 7+ messages
2001-10-25  0:14 [linux-lvm] vgscan failed "no volume groups found" appendix
2001-10-25  2:39 ` Andreas Dilger
2001-10-25 10:23   ` YouMizuki
2001-10-27  5:33   ` YouMizuki [this message]
2001-10-29 12:38   ` [linux-lvm] trouble of reduce pv YouMizuki
2001-10-29 16:17     ` Heinz J . Mauelshagen
2001-10-29 19:35       ` YouMizuki
