From: "Saad Shakhshir" <saads@alum.mit.edu>
To: linux-lvm@redhat.com
Subject: [linux-lvm] Restoring data after losing a drive
Date: Fri, 11 May 2007 23:29:00 -0400
Message-ID: <230d1df80705112029w7b4b9fd0ncf7a733363b44705@mail.gmail.com>


I lost one of my drives, which was a physical volume in my LVM setup.  The
volume group spanned all the drives, and so did the logical volume.  I know
the data on the good drives is still intact; I just don't know how to get
the LVM up and running again so I can read that data.
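
In case it helps, here is a sketch of the diagnostics I'd use to show the
damage (field names taken from pvs(8); the exact output from my box is
omitted):

    # List PVs; the dead one should show up as "unknown device"
    pvs -o pv_name,pv_uuid,vg_name

    # Show the VG; a "p" in the attributes marks it as partial
    vgs fileserver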

I tried running 'vgreduce --removemissing fileserver', and that managed to
restore my other, small logical volume, which is located entirely on one of
the (still good) physical volumes.  The large logical volume is still not
showing up.
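
One thing I have not tried yet is partial activation.  From my reading of
vgchange(8) this should bring the volume group up with the missing PV's
extents mapped to errors, but I can't confirm that it works here:

    # Activate the VG despite the missing PV; extents that lived on
    # the lost drive would read back as I/O errors
    vgchange -ay --partial fileserver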

I'm going to attach the last good configuration from /etc/lvm/archive.  The
drive that died was pv0.

I really hope that someone can help with this.  It's terrible losing so much
data...

# Generated by LVM2: Fri May 11 19:48:00 2007

contents = "Text Format Volume Group"
version = 1

description = "Created *before* executing 'vgreduce --removemissing
fileserver'"

creation_host = "veritas"    # Linux veritas 2.6.20-15-generic #2 SMP Sun Apr 15 07:36:31 UTC 2007 i686
creation_time = 1178927280    # Fri May 11 19:48:00 2007

fileserver {
    id = "9vePH6-W5lO-S1HD-aQyC-jOCY-R0Re-nIBPcJ"
    seqno = 12
    status = ["RESIZEABLE", "PARTIAL", "READ"]
    extent_size = 8192        # 4 Megabytes
    max_lv = 0
    max_pv = 0

    physical_volumes {

        pv0 {
            id = "V2yZV5-qx19-7c98-5WLY-9lTF-Wb4c-70Q0na"
            device = "unknown device"    # Hint only

            status = ["ALLOCATABLE"]
            pe_start = 384
            pe_count = 76310    # 298.086 Gigabytes
        }

        pv1 {
            id = "MZrncO-HYwT-6YEB-w3hV-2YqP-HZRY-u6OMyf"
            device = "/dev/mapper/hdb1"    # Hint only

            status = ["ALLOCATABLE"]
            pe_start = 384
            pe_count = 59618    # 232.883 Gigabytes
        }

        pv2 {
            id = "52vIhv-lOnI-RDQ3-Z5KT-4Ifc-thL0-uLZQVo"
            device = "/dev/mapper/sdb1"    # Hint only

            status = ["ALLOCATABLE"]
            pe_start = 384
            pe_count = 59618    # 232.883 Gigabytes
        }

        pv3 {
            id = "0q8rC2-9Nxi-YriB-7oa6-0wp4-frBa-GT9aQt"
            device = "/dev/mapper/hda3"    # Hint only

            status = ["ALLOCATABLE"]
            pe_start = 384
            pe_count = 26611    # 103.949 Gigabytes
        }
    }

    logical_volumes {

        data {
            id = "O0WU1C-0f0H-ZZvL-AwIc-vl8x-seFe-J3uE2X"
            status = ["READ", "WRITE", "VISIBLE"]
            segment_count = 4

            segment1 {
                start_extent = 0
                extent_count = 76310    # 298.086 Gigabytes

                type = "striped"
                stripe_count = 1    # linear

                stripes = [
                    "pv0", 0
                ]
            }
            segment2 {
                start_extent = 76310
                extent_count = 59618    # 232.883 Gigabytes

                type = "striped"
                stripe_count = 1    # linear

                stripes = [
                    "pv1", 0
                ]
            }
            segment3 {
                start_extent = 135928
                extent_count = 51937    # 202.879 Gigabytes

                type = "striped"
                stripe_count = 1    # linear

                stripes = [
                    "pv2", 0
                ]
            }
            segment4 {
                start_extent = 187865
                extent_count = 26611    # 103.949 Gigabytes

                type = "striped"
                stripe_count = 1    # linear

                stripes = [
                    "pv3", 0
                ]
            }
        }

        home {
            id = "13Bra6-cE4i-MRgA-CZhC-RC0f-qtEo-R2Y0x2"
            status = ["READ", "WRITE", "VISIBLE"]
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 7681    # 30.0039 Gigabytes

                type = "striped"
                stripe_count = 1    # linear

                stripes = [
                    "pv2", 51937
                ]
            }
        }
    }
}
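
Since the archive above records pv0's UUID, my understanding (untested) is
that the old metadata could be put back by recreating the PV label on a
replacement disk and restoring this file.  The device name /dev/sdc1 and
the archive filename below are placeholders, not my real paths:

    # Recreate the PV label using pv0's old UUID from the archive
    pvcreate --uuid "V2yZV5-qx19-7c98-5WLY-9lTF-Wb4c-70Q0na" \
             --restorefile /etc/lvm/archive/fileserver_00012.vg /dev/sdc1

    # Restore the pre-vgreduce metadata and activate the VG
    vgcfgrestore -f /etc/lvm/archive/fileserver_00012.vg fileserver
    vgchange -ay fileserver

If that's right, only segment1 of 'data' (the first 76310 extents, about
298 GB) lived on pv0; segments 2-4 and all of 'home' sit on pv1-pv3 and
should still be readable.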


Thread overview: 6 messages
2007-05-12  3:29 Saad Shakhshir [this message]
2007-05-12 19:51 ` [linux-lvm] Restoring data after losing a drive Ian Kent
2007-05-12 23:07   ` Saad Shakhshir
2007-05-13  1:29     ` Stuart D. Gathman
2007-05-13 16:02       ` Saad Shakhshir
2007-05-18 16:13         ` Saad Shakhshir
