From: Jayson Vantuyl <jvantuyl@engineyard.com>
To: LVM general discussion and development <linux-lvm@redhat.com>,
	BigMac <bigmac@sas-clan.de>
Subject: Re: [linux-lvm] f*cked up metadata on 1of3 LVM-disks
Date: Mon, 21 May 2007 09:27:33 -0700	[thread overview]
Message-ID: <F206B7C7-9032-45FB-8FB5-AA111ECC8AC5@engineyard.com> (raw)
In-Reply-To: <1179763074.3696.28.camel@linux-cxyg>


BigMac,

Okay, I think I have a solution, but it's your data, so beware.

First, force the VG to be activated.  To do this, use the -P flag
(partial).  It will activate as much of the VG as it can find.  So,
first:

vgscan -P
vgchange -ay -P vgdata
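
To see how much of the VG actually came up, something like this should
show it (just a sketch; exact output depends on your LVM2 version):

lvs -o lv_name,lv_attr vgdata   # an 'a' in the attr column means active
dmsetup ls                      # device-mapper devices that were created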

Then, get a backup of the metadata:

vgcfgbackup -f vgdata.lvm -P vgdata
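
You can sanity-check that backup by making sure the missing PV's UUID
actually shows up in it:

grep 9kpWwu vgdata.lvm

If the UUID isn't in the file, stop here; restoring from this backup
won't bring the PV back.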

Now, recreate the PV metadata on the damaged disk, just to get the
UUID back:

pvcreate -ff -u 9kpWwu-LdwT-YiqV-DRPg-kSAN-plYN-v1Eq81 /dev/hdc
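
Before going further, verify that the freshly created PV really
carries the old UUID:

pvdisplay /dev/hdc

The PV UUID line should read exactly 9kpWwu-LdwT-YiqV-DRPg-kSAN-plYN-v1Eq81.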

You should now have the PV there, but with inconsistent metadata.
Next, restore the metadata from the backup we just made:

vgcfgrestore -f vgdata.lvm vgdata
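
If the restore succeeds, try a normal activation, and check the
filesystem read-only before mounting anything (LVNAME is a placeholder
here; substitute the actual name of your logical volume):

vgchange -ay vgdata
fsck -n /dev/vgdata/LVNAME   # -n: check only, never writes to the disk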

If the planets are aligned and you are very lucky, you might have
your LV back.  Good luck, and I'm curious whether this will actually
work.

On May 21, 2007, at 8:57 AM, Dave Wysochanski wrote:

> On Mon, 2007-05-21 at 12:38 +0200, BigMac wrote:
>> Does nobody have an idea how to recover the LVM metadata on the disk?
>> The data itself on the other disks is fine; just the first sectors of
>> /dev/hdc were wiped by installing grub on it.
>>
>> I had a deeper look at the disk with one of those low-level disk
>> editors, and there are still parts of the metadata stored on the
>> disk.  It seems that grub messed up just the first 17 sectors of the
>> disk.
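>>
>> (Purely for illustration: a read-only way to look at those sectors
>> without a disk editor is
>>
>> dd if=/dev/hdc bs=512 count=17 | hexdump -C | less
>>
>> which dumps the first 17 sectors as hex and writes nothing to the
>> disk.)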
>>
>> Regards,
>>
>> BigMac
>>
>
> I am working on a tool to do recovery and pull metadata out of the
> disks.  Maybe this will help you.
>
> You can try "pvck -v" in the latest upstream LVM code.  It does not
> extract metadata to a file yet, but it makes an attempt at identifying
> areas on the disk that contain metadata and prints the offsets and
> lengths (you can then just dd those to a file).  Since you have other
> PVs that are valid, you could at least get the latest metadata off one
> of those, then use vgcfgrestore with this file and the uuid option.
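>
> For example, if pvck reported a metadata area at some offset and
> length (the numbers below are invented purely for illustration),
> pulling it out would look like:
>
> dd if=/dev/hdc of=/tmp/meta.txt bs=1 skip=4608 count=1024
>
> The resulting file should contain the plain-text LVM metadata.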
>
> Also, can you send me the first 256K?  Something like:
>
> dd if=/dev/hdc of=/tmp/file.raw bs=1024 count=256
>
> and then send me /tmp/file.raw as an email attachment?
>
> Maybe I can enhance the extraction and/or add this case to my unit
> test.  Ideas on enhancements welcome.
>
>
>
>>
>>
>> BigMac wrote:
>>> Hi lists!
>>>
>>> My fileserver stored its data on 3 physical disks, building a
>>> volume group with lvm2.
>>> The machine had been running Ubuntu Dapper Drake for a long time
>>> without any problems, but I decided to switch to Ubuntu Feisty Fawn.
>>> Switching from Debian to Dapper Drake I had installed and removed
>>> quite a lot of packages, leaving a messy system, so I thought a new
>>> and clean installation with Feisty Fawn might be a great idea.
>>>
>>> box config with dapper drake:
>>>
>>> hda: 120GB (PATA) systemdisk
>>> hdb: unused
>>> hdc: 250GB (PATA) part of the LVM-array
>>> hdd: 250GB (PATA) part of the LVM-array
>>> sda: 400GB (SATA) part of the LVM-array
>>> sdb: DVD-RW (PATA on additional PCI-controller UDMA-100)
>>>
>>> The additional IDE controller had been installed to allow even more
>>> disks, but as I won't buy any new PATA stuff, the controller was
>>> dispensable.
>>>
>>> new box config:
>>>
>>> hda: 120GB (PATA) systemdisk
>>> hdb: DVD-RW
>>> hdc: 250GB (PATA) part of the LVM-array
>>> hdd: 250GB (PATA) part of the LVM-array
>>> sda: 400GB (SATA) part of the LVM-array
>>>
>>> I exported the VG before shutting down Dapper Drake, and while
>>> installing Feisty Fawn on the box I - no idea what made me do so -
>>> installed grub to hdc.  Oops, this disk was part of the LVM array,
>>> and so grub f*cked up the LVM metadata on hdc.
>>>
>>> After installing grub to hda, the box is running fine, except for
>>> LVM, which can't find one of the VG's volumes:
>>>
>>> bigmac@knecht:~$ sudo pvscan
>>> Couldn't find device with uuid '9kpWwu-LdwT-YiqV-DRPg-kSAN-plYN-v1Eq81'.
>>> Couldn't find device with uuid '9kpWwu-LdwT-YiqV-DRPg-kSAN-plYN-v1Eq81'.
>>> Couldn't find device with uuid '9kpWwu-LdwT-YiqV-DRPg-kSAN-plYN-v1Eq81'.
>>> Couldn't find device with uuid '9kpWwu-LdwT-YiqV-DRPg-kSAN-plYN-v1Eq81'.
>>> PV /dev/hdd         VG vgdata   lvm2 [232.88 GB / 0    free]
>>> PV unknown device   VG vgdata   lvm2 [232.88 GB / 0    free]
>>> PV /dev/sda         VG vgdata   lvm2 [372.61 GB / 4.00 MB free]
>>> Total: 3 [838.38 GB] / in use: 3 [838.38 GB] / in no VG: 0 [0   ]
>>>
>>> bigmac@knecht:~$ sudo pvdisplay
>>> Couldn't find device with uuid '9kpWwu-LdwT-YiqV-DRPg-kSAN-plYN-v1Eq81'.
>>> Couldn't find device with uuid '9kpWwu-LdwT-YiqV-DRPg-kSAN-plYN-v1Eq81'.
>>> Couldn't find device with uuid '9kpWwu-LdwT-YiqV-DRPg-kSAN-plYN-v1Eq81'.
>>> Couldn't find device with uuid '9kpWwu-LdwT-YiqV-DRPg-kSAN-plYN-v1Eq81'.
>>> --- Physical volume ---
>>> PV Name               /dev/hdd
>>> VG Name               vgdata
>>> PV Size               232.88 GB / not usable 0
>>> Allocatable           yes (but full)
>>> PE Size (KByte)       4096
>>> Total PE              59618
>>> Free PE               0
>>> Allocated PE          59618
>>> PV UUID               locW5S-HNFK-b1WW-iOLt-olHl-2SNd-fmXKC8
>>>
>>> --- Physical volume ---
>>> PV Name               unknown device
>>> VG Name               vgdata
>>> PV Size               232.88 GB / not usable 0
>>> Allocatable           yes (but full)
>>> PE Size (KByte)       4096
>>> Total PE              59618
>>> Free PE               0
>>> Allocated PE          59618
>>> PV UUID               9kpWwu-LdwT-YiqV-DRPg-kSAN-plYN-v1Eq81
>>>
>>> --- Physical volume ---
>>> PV Name               /dev/sda
>>> VG Name               vgdata
>>> PV Size               372.61 GB / not usable 0
>>> Allocatable           yes
>>> PE Size (KByte)       4096
>>> Total PE              95388
>>> Free PE               1
>>> Allocated PE          95387
>>> PV UUID               YUJcsF-3XP7-OrbO-pITp-5z96-gw1A-DI11QS
>>>
>>>
>>> bigmac@knecht:~$ sudo vgscan
>>> Reading all physical volumes.  This may take a while...
>>> Couldn't find device with uuid '9kpWwu-LdwT-YiqV-DRPg-kSAN-plYN-v1Eq81'.
>>> Couldn't find all physical volumes for volume group vgdata.
>>> Couldn't find device with uuid '9kpWwu-LdwT-YiqV-DRPg-kSAN-plYN-v1Eq81'.
>>> Couldn't find all physical volumes for volume group vgdata.
>>> Couldn't find device with uuid '9kpWwu-LdwT-YiqV-DRPg-kSAN-plYN-v1Eq81'.
>>> Couldn't find all physical volumes for volume group vgdata.
>>> Couldn't find device with uuid '9kpWwu-LdwT-YiqV-DRPg-kSAN-plYN-v1Eq81'.
>>> Couldn't find all physical volumes for volume group vgdata.
>>> Volume group "vgdata" not found
>>>
>>> I tried multiple live CDs like GParted (with TestDisk), Ultimate
>>> Boot CD, and SystemRescueCd, but none could help.
>>>
>>> Any idea how to get the LVM up and running again?
>>> How can I restore the metadata on hdc?
>>>
>>> Best Regards,
>>>
>>> BigMac
>>>



-- 
Jayson Vantuyl
Systems Architect
Engine Yard
jvantuyl@engineyard.com



