From: Ray Morris <support@bettercgi.com>
To: linux-lvm@redhat.com
Subject: Re: [linux-lvm] moving LVMs to another machine
Date: Fri, 14 Jan 2011 13:54:44 -0600	[thread overview]
Message-ID: <20110114135444.108565bd@bettercgi.com> (raw)
In-Reply-To: <FB3E421C-2DDC-4BBE-9736-53762F26B425@ualr.edu>

> I did not actively deactivate any volume groups or logical volumes
> before I made this move.

> Q: What are my next steps after lvscan to bring the two logical
> volumes back online? OS, etc are all on /dev/sda; the logical volumes
> just have extra stuff. 

Assuming that you powered down or somehow informed the OS that the 
drives were leaving, you can just activate the volume groups with:

vgchange -ay
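
If you want to be explicit about it, you can name the volume groups
from your pvscan output and re-run lvscan afterward to confirm the
LVs came up. Something like this (just a sketch):

vgchange -ay vg0
vgchange -ay vg1
lvscan    # both LVs should now show as ACTIVE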


> Q: Am I right to assume that once lv0 and lv1 show as active, all I
> need to do is mount them somewhere, and that the filesystems they
> contain should be intact? I had no disk or controller failures that I
> know of.

Correct, assuming that they were unmounted before they were pulled 
from their old homes. If they were pulled while mounted, you'd want
to fsck just as if the filesystem were on a raw disk or a partition.
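
For ext3 that would look something like the following. The -n run is
a read-only dry run, and the mount points here are just placeholders,
not anything from your actual setup:

fsck -n /dev/vg0/lv0        # read-only check, changes nothing
fsck -n /dev/vg1/lv1
e2fsck /dev/vg0/lv0         # real check/repair, only if needed
mkdir -p /mnt/lv0 /mnt/lv1
mount /dev/vg0/lv0 /mnt/lv0
mount /dev/vg1/lv1 /mnt/lv1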

Since you said you "did not actively deactivate any volume groups",
I'm assuming you did not use "vgexport" and therefore don't need to 
use "vgimport".
-- 

Ray Morris
support@bettercgi.com

Strongbox - The next generation in site security:
http://www.bettercgi.com/strongbox/

Throttlebox - Intelligent Bandwidth Control
http://www.bettercgi.com/throttlebox/

Strongbox / Throttlebox affiliate program:
http://www.bettercgi.com/affiliates/user/register.php


On Fri, 14 Jan 2011 13:25:24 -0600
Albert Everett <aeeverett@ualr.edu> wrote:

> This is my first time trying to do this, so please forgive me if what
> I'm asking is trivial. I'm anxious not to lose data.
> 
> I have moved a Dell MD3000 with an MD1000 attached from one CentOS
> 4.5 x86_64 machine to another. I've installed Dell's drivers on the
> second machine and I see output below. 
> 
> /dev/sdb and sdc are on the MD3000; /dev/sdd, sde and sdf are on the
> MD1000. Filesystem on both is ext3, and I only used LVM to
> concatenate <2TB volumes because the MD3000 firmware required it at
> the time.
> 
> I did not actively deactivate any volume groups or logical volumes
> before I made this move.
> 
> Q: What are my next steps after lvscan to bring the two logical
> volumes back online? OS, etc are all on /dev/sda; the logical volumes
> just have extra stuff. 
> 
> Q: Am I right to assume that once lv0 and lv1 show as active, all I
> need to do is mount them somewhere, and that the filesystems they
> contain should be intact? I had no disk or controller failures that I
> know of.
> 
> Albert
> 
> [root@login-0-0 ~]# fdisk -l
> 
> Disk /dev/sda: 749.6 GB, 749606010880 bytes
> 255 heads, 63 sectors/track, 91134 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> 
>    Device Boot      Start         End      Blocks   Id  System
> /dev/sda1   *           1        3060    24579418+  83  Linux
> /dev/sda2            3061        6120    24579450   83  Linux
> /dev/sda3            6121        8160    16386300   82  Linux swap
> /dev/sda4            8161       91134   666488655    5  Extended
> /dev/sda5            8161       91134   666488623+  83  Linux
> 
> Disk /dev/sdb: 1796.7 GB, 1796776919040 bytes
> 255 heads, 63 sectors/track, 218445 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> 
>    Device Boot      Start         End      Blocks   Id  System
> /dev/sdb1               1      218445  1754659431   8e  Linux LVM
> 
> Disk /dev/sdc: 1796.7 GB, 1796776919040 bytes
> 255 heads, 63 sectors/track, 218445 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> 
>    Device Boot      Start         End      Blocks   Id  System
> /dev/sdc1               1      218445  1754659431   8e  Linux LVM
> 
> Disk /dev/sdd: 2186.1 GB, 2186136256512 bytes
> 255 heads, 63 sectors/track, 265782 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> 
>    Device Boot      Start         End      Blocks   Id  System
> /dev/sdd1               1      265782  2134893883+  8e  Linux LVM
> 
> Disk /dev/sde: 2186.1 GB, 2186136256512 bytes
> 255 heads, 63 sectors/track, 265782 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> 
>    Device Boot      Start         End      Blocks   Id  System
> /dev/sde1               1      265782  2134893883+  8e  Linux LVM
> 
> Disk /dev/sdf: 820.7 GB, 820745076736 bytes
> 255 heads, 63 sectors/track, 99783 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> 
>    Device Boot      Start         End      Blocks   Id  System
> /dev/sdf1               1       99783   801506916   8e  Linux LVM
> 
> [root@login-0-0 ~]# pvscan
>   PV /dev/sdd1   VG vg1   lvm2 [1.99 TB / 0    free]
>   PV /dev/sde1   VG vg1   lvm2 [1.99 TB / 0    free]
>   PV /dev/sdf1   VG vg1   lvm2 [764.38 GB / 2.75 GB free]
>   PV /dev/sdb1   VG vg0   lvm2 [1.63 TB / 0    free]
>   PV /dev/sdc1   VG vg0   lvm2 [1.63 TB / 0    free]
>   Total: 5 [7.99 TB] / in use: 5 [7.99 TB] / in no VG: 0 [0   ]
> 
> [root@login-0-0 ~]# vgscan
>   Reading all physical volumes.  This may take a while...
>   Found volume group "vg1" using metadata type lvm2
>   Found volume group "vg0" using metadata type lvm2
> 
> [root@login-0-0 ~]# lvscan
>   inactive          '/dev/vg1/lv1' [4.72 TB] inherit
>   inactive          '/dev/vg0/lv0' [3.27 TB] inherit
> 
> [root@login-0-0 ~]# pvdisplay
>   --- Physical volume ---
>   PV Name               /dev/sdd1
>   VG Name               vg1
>   PV Size               8192.00 EB / not usable 8192.00 EB
>   Allocatable           yes (but full)
>   PE Size (KByte)       131072
>   Total PE              16287
>   Free PE               0
>   Allocated PE          16287
>   PV UUID               VSTyo3-r8rE-5lym-G6F7-683r-fdZm-Urm5af
>    
>   --- Physical volume ---
>   PV Name               /dev/sde1
>   VG Name               vg1
>   PV Size               8192.00 EB / not usable 8192.00 EB
>   Allocatable           yes (but full)
>   PE Size (KByte)       131072
>   Total PE              16287
>   Free PE               0
>   Allocated PE          16287
>   PV UUID               eXd2Ee-L55A-bO43-ucXR-GnGM-n4fo-k0vOXP
>    
>   --- Physical volume ---
>   PV Name               /dev/sdf1
>   VG Name               vg1
>   PV Size               764.38 GB / not usable 1.60 MB
>   Allocatable           yes 
>   PE Size (KByte)       131072
>   Total PE              6115
>   Free PE               22
>   Allocated PE          6093
>   PV UUID               4e0pIE-AoXc-2bb5-m6Q9-Eb2z-gmHW-Gu9Owl
>    
>   --- Physical volume ---
>   PV Name               /dev/sdb1
>   VG Name               vg0
>   PV Size               8192.00 EB / not usable 8192.00 EB
>   Allocatable           yes (but full)
>   PE Size (KByte)       131072
>   Total PE              13386
>   Free PE               0
>   Allocated PE          13386
>   PV UUID               aX4eXc-5ADq-BTl2-sZzY-36kX-A0zx-bDKIQf
>    
>   --- Physical volume ---
>   PV Name               /dev/sdc1
>   VG Name               vg0
>   PV Size               8192.00 EB / not usable 8192.00 EB
>   Allocatable           yes (but full)
>   PE Size (KByte)       131072
>   Total PE              13386
>   Free PE               0
>   Allocated PE          13386
>   PV UUID               fgu78M-yzS9-OPbV-2aXt-NgJP-A6dA-ZpQCkt
>    
> [root@login-0-0 ~]# vgdisplay
>   --- Volume group ---
>   VG Name               vg1
>   System ID             
>   Format                lvm2
>   Metadata Areas        3
>   Metadata Sequence No  2
>   VG Access             read/write
>   VG Status             resizable
>   MAX LV                0
>   Cur LV                1
>   Open LV               0
>   Max PV                0
>   Cur PV                3
>   Act PV                3
>   VG Size               4.72 TB
>   PE Size               128.00 MB
>   Total PE              38689
>   Alloc PE / Size       38667 / 4.72 TB
>   Free  PE / Size       22 / 2.75 GB
>   VG UUID               Qjb9Fq-o5Jy-MH1n-453l-gQqp-iqqN-49sUIS
>    
>   --- Volume group ---
>   VG Name               vg0
>   System ID             
>   Format                lvm2
>   Metadata Areas        2
>   Metadata Sequence No  4
>   VG Access             read/write
>   VG Status             resizable
>   MAX LV                0
>   Cur LV                1
>   Open LV               0
>   Max PV                0
>   Cur PV                2
>   Act PV                2
>   VG Size               3.27 TB
>   PE Size               128.00 MB
>   Total PE              26772
>   Alloc PE / Size       26772 / 3.27 TB
>   Free  PE / Size       0 / 0   
>   VG UUID               DlYlrX-oFu7-S3rf-GVH0-5sVp-cuQh-0VGIdy
>    
> [root@login-0-0 ~]# lvdisplay
>   --- Logical volume ---
>   LV Name                /dev/vg1/lv1
>   VG Name                vg1
>   LV UUID                SNdEg6-nshz-xtzf-ml4R-jraE-ukEZ-eqK6hE
>   LV Write Access        read/write
>   LV Status              NOT available
>   LV Size                4.72 TB
>   Current LE             38667
>   Segments               3
>   Allocation             inherit
>   Read ahead sectors     0
>    
>   --- Logical volume ---
>   LV Name                /dev/vg0/lv0
>   VG Name                vg0
>   LV UUID                fOvYHo-jj3e-qoZK-nvU8-7yu3-z19U-siuqOP
>   LV Write Access        read/write
>   LV Status              NOT available
>   LV Size                3.27 TB
>   Current LE             26772
>   Segments               2
>   Allocation             inherit
>   Read ahead sectors     0
> 
> 

Thread overview: 6+ messages
2011-01-14 19:25 [linux-lvm] moving LVMs to another machine Albert Everett
2011-01-14 19:41 ` Albert Everett
2011-01-14 19:56   ` Ray Morris
2011-01-14 20:22     ` [linux-lvm] SOLVED " Albert Everett
2011-01-14 19:54 ` Ray Morris [this message]
2011-01-16 19:01   ` [linux-lvm] " Ron Johnson
