From: Thomas Krichel <krichel@openlib.org>
To: LVM general discussion and development <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] uuid already in use
Date: Thu, 10 Jan 2008 02:46:24 -0600
Message-ID: <20080110084624.GA19895@openlib.org>
In-Reply-To: <20080108182230.GA6223@openlib.org>
Thomas Krichel writes
>
> raneb:/etc/lvm/archive# vgdisplay
>   --- Volume group ---
>   VG Name               vg1
>   System ID
>   Format                lvm2
>   Metadata Areas        2
>   Metadata Sequence No  17
>   VG Access             read/write
>   VG Status             resizable
>   MAX LV                0
>   Cur LV                0
>   Open LV               0
>   Max PV                0
>   Cur PV                2
>   Act PV                2
>   VG Size               652.06 GB
>   PE Size               4.00 MB
>   Total PE              166928
>   Alloc PE / Size       0 / 0
>   Free  PE / Size       166928 / 652.06 GB
>   VG UUID               Hm2mZH-jACj-gxQI-tbZM-H6pm-ovfr-TVgurC
>
> raneb:/etc/lvm/archive# lvdisplay
> raneb:/etc/lvm/archive#
>
> I presume I have to restore the lv somehow. But this
> has got me a step forward.
I could not see how to restore the lv, so I created a new one
with the same size and name as the previous one:
raneb:~# lvcreate -n lv1 -L 652.06G vg1
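Presumably the step I could not find is vgcfgrestore, which can put
the lv definition back from the plain-text backups under
/etc/lvm/archive, roughly like

raneb:~# vgcfgrestore --list vg1
raneb:~# vgcfgrestore -f /etc/lvm/archive/vg1_NNNNN-NNNNNNNNNN.vg vg1
raneb:~# lvchange -ay vg1/lv1

where the file name is only a placeholder; --list prints the real
candidates.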
However, checking the new volume fails:
raneb:~# e2fsck /dev/mapper/vg1-lv1
e2fsck 1.40.2 (12-Jul-2007)
Couldn't find ext2 superblock, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open /dev/mapper/vg1-lv1
The superblock could not be read or does not describe a correct ext2
filesystem. If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
The alternate superblocks also fail. Their locations, gleaned from a
dry run of mke2fs:
raneb:~# mke2fs -n /dev/mapper/vg1-lv1
mke2fs 1.40.2 (12-Jul-2007)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
85475328 inodes, 170934272 blocks
8546713 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
5217 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000
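For the record, each of those locations gets fed back to e2fsck
together with the 4096-byte block size from the dry run, roughly like

raneb:~# e2fsck -b 32768 -B 4096 /dev/mapper/vg1-lv1

and likewise for the later locations in the list; the exact
invocation above is only a sketch.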
I presume all superblocks were on disk /dev/hdb, and now that that
disk is gone, it is not possible to recover data from /dev/hdc and
/dev/hdd, the two other disks in the vg. Thus, a failure on the first
disk spills over onto the other disks, because that disk holds vital
information for all of them.
Is that assessment correct?
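One way to check, I suppose, is the archived metadata itself: the
files under /etc/lvm/archive are plain text and record which pv each
segment of the old lv was allocated on, so something like

raneb:~# grep -nE 'device|stripes' /etc/lvm/archive/vg1_*.vg

should show whether the start of the filesystem, and with it the
superblock area, really sat on /dev/hdb. The grep pattern is only a
rough sketch.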
Note that I am desperate to recover the data here, because I
destroyed the backup through a mistake of my own 10 hours before disk
/dev/hdb crashed. The data represents about 10 years of my work.
Conclusion: next time two backups.
Cheers,
Thomas Krichel http://openlib.org/home/krichel
RePEc:per:1965-06-05:thomas_krichel
skype: thomaskrichel
Thread overview: 4+ messages
2008-01-08 17:55 [linux-lvm] uuid already in use Thomas Krichel
2008-01-08 18:04 ` Bryn M. Reeves
2008-01-08 18:22 ` Thomas Krichel
2008-01-10 8:46 ` Thomas Krichel [this message]