From: paddy <paddy@panici.net>
To: linux-lvm@redhat.com
Subject: Re: [linux-lvm] Dead PV....recovering data
Date: Tue, 15 Aug 2006 00:58:49 +0100
Message-ID: <20060814235830.GA28057@cobalt0.panici.net>
In-Reply-To: <44B33002.1020203@picardconsulting.ca>
On Tue, Jul 11, 2006 at 12:58:42AM -0400, Patrick Picard wrote:
> Good morning everyone.
>
> I was away on vacation and when I got back, I booted my server up.
> Unfortunately, one of the HDs didn't come back.
>
> The hard drive was a WD4000YR 400GB.
>
> PV /dev/hdb --> Hitachi 400GB
> PV /dev/sdd --> WD4000YR 400GB (the dead one)
>
> Both PVs were part of my video volume group.
>
> Now that my WD hard drive is dead (not seen by the BIOS at all :( ) the
> videovg won't load.
> Is it possible to recover the data on the remaining PV?
>
> To my understanding, the data is striped across the PVs in the volume
> group...
which sounds *BAD* but see below
>
> Right now I'm losing around 570GB of videos :(
>
> Thanks for any help
>
> Patrick
>
> To help outline my configuration, see below. pv1 is the missing drive:
> [root@fc4 backup]# cat /etc/lvm/backup/videovg
> # Generated by LVM2: Tue Jan 31 18:55:29 2006
>
> contents = "Text Format Volume Group"
> version = 1
>
> description = "Created *after* executing 'lvextend -l +95388
> /dev/videovg/movieslv'"
>
> creation_host = "fc4.patpic.com" # Linux fc4.patpic.com
> 2.6.14-1.1656_FC4smp #1 SMP Thu Jan 5 22:24:06 EST 2006 i686
> creation_time = 1138751729 # Tue Jan 31 18:55:29 2006
>
> videovg {
> id = "mqumMs-fVhb-mOhv-KgEj-ZJqW-pWhD-3RDbQR"
> seqno = 4
> status = ["RESIZEABLE", "READ", "WRITE"]
> extent_size = 8192 # 4 Megabytes
> max_lv = 0
> max_pv = 0
>
> physical_volumes {
>
> pv0 {
> id = "bHJ1PS-yEg4-QlD0-D62k-nhyy-2TVe-b84gL0"
> device = "/dev/hdb" # Hint only
>
> status = ["ALLOCATABLE"]
> pe_start = 384
> pe_count = 95388 # 372.609 Gigabytes
> }
>
> pv1 {
> id = "rH7yLu-aAoG-Gcc7-C2ZM-r2jP-lU1L-Cg8IsY"
> # this is the bad drive
> device = "/dev/sdc" # Hint only
>
> status = ["ALLOCATABLE"]
> pe_start = 384
> pe_count = 95388 # 372.609 Gigabytes
> }
> }
>
> logical_volumes {
>
> movieslv {
> id = "ijlCES-kuxY-YHnA-FlDo-cm7a-yYlO-cXHmga"
> status = ["READ", "WRITE", "VISIBLE"]
> segment_count = 2
>
> segment1 {
> start_extent = 0
> extent_count = 95388 # 372.609 Gigabytes
>
> type = "striped"
> stripe_count = 1 # linear
>
> stripes = [
> "pv0", 0
> ]
> }
> segment2 {
> start_extent = 95388
> extent_count = 95388 # 372.609 Gigabytes
>
> type = "striped"
> stripe_count = 1 # linear
>
> stripes = [
> "pv1", 0
> ]
> }
> }
> }
> }
If I read this correctly, your volume is spanned (two linear segments, one
per PV) rather than striped, so given reasonably contiguous files you should
be able to recover a good deal from the remaining drive.
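To spell that out, reading the extent counts from your backup file,
movieslv has two linear segments:

    segment1: extents 0     .. 95387  -> pv0 (/dev/hdb, the Hitachi)  95388 x 4MB ~= 372.6GB
    segment2: extents 95388 .. 190775 -> pv1 (the dead WD)            95388 x 4MB ~= 372.6GB

so the first half of the filesystem sits physically intact on the surviving
drive; only whatever landed in the second half is gone.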
Off the top of my head, I don't recall the right way to do this, but
hopefully someone will chime in with the right answer; otherwise I'll
see if I can think of it tomorrow or so :-)
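In the meantime, here is a rough sketch from memory of one way people
approach this -- untested, so please check the man pages first and, if
at all possible, work from a dd image of the surviving disk. /dev/sdX
below is just a placeholder for a replacement disk at least as large as
the dead WD, and the UUID is pv1's, taken from your
/etc/lvm/backup/videovg:

    # 1. stamp a blank replacement disk with the dead PV's UUID, using
    #    the backup file as a template so the metadata layout matches
    pvcreate --uuid rH7yLu-aAoG-Gcc7-C2ZM-r2jP-lU1L-Cg8IsY \
             --restorefile /etc/lvm/backup/videovg /dev/sdX

    # 2. restore the VG metadata from the same backup
    vgcfgrestore -f /etc/lvm/backup/videovg videovg

    # 3. activate the VG; if you skip the replacement-disk step,
    #    "vgchange -ay --partial videovg" may let it come up degraded
    vgchange -ay videovg

    # 4. check and mount read-only -- everything that lived on the
    #    second segment is garbage now, so expect fsck to complain
    fsck -n /dev/videovg/movieslv
    mkdir -p /mnt/recover
    mount -o ro /dev/videovg/movieslv /mnt/recover

    # 5. copy off whatever is readable before touching anything else

but as I say, treat that as a starting point rather than gospel.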
Regards,
Paddy
--
Perl 6 will give you the big knob. -- Larry Wall