From: Misc Things <formisc@gmail.com>
To: linux-lvm@redhat.com
Subject: [linux-lvm] moving data to a new VG
Date: Sun, 10 Jan 2010 09:24:39 -0500
Message-ID: <b74494e61001100624y5a7b79e2rcfa1993698f6a88e@mail.gmail.com>
>
> Hello,
> I've been unsuccessfully trying to move data from an existing VG to a
> new VG.
> Here are the steps I took:
> [code]
>
> fdisk -l
>
> Disk /dev/hdc: 160.0 GB, 160041885696 bytes
> 255 heads, 63 sectors/track, 19457 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
> Device Boot Start End Blocks Id System
>
> /dev/hdc1 * 1 9729 78148161 83 Linux
> /dev/hdc2 9730 19457 78140160 83 Linux
>
> Disk /dev/hdd: 250.0 GB, 250059350016 bytes
> 255 heads, 63 sectors/track, 30401 cylinders
>
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
> Device Boot Start End Blocks Id System
> /dev/hdd1 * 1 9729 78148161 83 Linux
> /dev/hdd2 9730 19457 78140160 83 Linux
>
> /dev/hdd3 19458 30401 87907680 83 Linux
>
> Disk /dev/sda: 160.0 GB, 160041885696 bytes
> 255 heads, 63 sectors/track, 19457 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
> Device Boot Start End Blocks Id System
>
> /dev/sda1 * 1 13 104391 83 Linux
> /dev/sda2 14 19457 156183930 8e Linux LVM
>
> Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes
> 255 heads, 63 sectors/track, 182401 cylinders
>
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
> Device Boot Start End Blocks Id System
> /dev/sdb1 1 60788 488279578+ 83 Linux
> /dev/sdb2 60789 121576 488279610 83 Linux
>
> /dev/sdb3 121577 182401 488576812+ 83 Linux
>
> pvscan
> PV /dev/hdd3 VG VG_Storage02 lvm2 [83.83 GB / 0 free]
> PV /dev/sdb2 VG VG_Storage000 lvm2 [465.66 GB / 0 free]
> PV /dev/sdb1 VG VG_Storage01 lvm2 [465.66 GB / 316.62 GB free]
>
> PV /dev/hdc2 VG VG_Storage lvm2 [74.52 GB / 74.52 GB free]
> PV /dev/hdd2 VG VG_Storage lvm2 [74.52 GB / 74.52 GB free]
> PV /dev/sda2 VG VG_SYS lvm2 [148.95 GB / 0 free]
> PV /dev/hdd1 VG VG_Storage00 lvm2 [74.53 GB / 0 free]
>
> PV /dev/hdc1 VG VG_Storage00 lvm2 [74.53 GB / 0 free]
> Total: 8 [1.43 TB] / in use: 8 [1.43 TB] / in no VG: 0 [0 ]
>
> vgscan
> Reading all physical volumes. This may take a while...
> Found volume group "VG_Storage02" using metadata type lvm2
>
> Found volume group "VG_Storage000" using metadata type lvm2
> Found volume group "VG_Storage01" using metadata type lvm2
> Found volume group "VG_Storage" using metadata type lvm2
>
> Found volume group "VG_SYS" using metadata type lvm2
> Found volume group "VG_Storage00" using metadata type lvm2
>
> lvscan
> ACTIVE '/dev/VG_Storage02/LG_VG_Storage02_00' [83.83 GB] inherit
>
> ACTIVE '/dev/VG_Storage000/LV_VGSTORAGE000' [465.66 GB] anywhere
> ACTIVE '/dev/VG_Storage01/LV_VG_STORAGE01' [149.04 GB] anywhere
> ACTIVE '/dev/VG_SYS/LogVol00' [148.07 GB] inherit
>
> ACTIVE '/dev/VG_SYS/LogVol01' [896.00 MB] inherit
> ACTIVE '/dev/VG_Storage00/LG_VG_Storage00_00' [149.05 GB] inherit
>
> lvdisplay /dev/VG_Storage01/LV_VG_STORAGE01
> --- Logical volume ---
>
> LV Name /dev/VG_Storage01/LV_VG_STORAGE01
> VG Name VG_Storage01
> LV UUID 48dcTk-XKJ1-3umG-ykVu-EjYS-gGP7-i1RN1y
> LV Write Access read/write
> LV Status available
>
> # open 1
> LV Size 149.04 GB
> Current LE 76308
> Segments 1
> Allocation anywhere
> Read ahead sectors auto
> - currently set to 512
>
> [/code]
>
> The drive I'm trying to move the data to is /dev/sdb, divided into ~500 GB partitions (see the fdisk output above).
>
> The "/dev/VG_Storage01/LV_VG_STORAGE01" is the original LG that had 2 drives (partitions) -
>
>
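> For completeness, this is how I would double-check which devices currently
> back that LV (I have not pasted the output here; the commands are just for
> reference):
> [code]
>
> # list the LV together with the PVs/segments it sits on
> lvs -o +devices VG_Storage01
>
> # show the new PV that was added for the move
> pvdisplay /dev/sdb1
>
> [/code]
>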
> Step by step (a rough sketch of the exact commands I used for steps 2-5 is
> included after the lvextend output below):
>
> 0. /dev/VG_Storage01/LV_VG_STORAGE01 is not mounted;
>
> 1. /dev/VG_Storage01 originally had /dev/hdc2 and /dev/hdd2.
>
> 2. I added /dev/sdb1 into /dev/VG_Storage01
>
> 3. did pvmove /dev/hdc2 /dev/sdb1 and pvmove /dev/hdd2 /dev/sdb1
>
>
> 4. removed /dev/hdc2 and /dev/hdd2 from /dev/VG_Storage01
>
> 5. created VG_Storage with /dev/hdc2 and /dev/hdd2
>
> 6. tried to extend /dev/VG_Storage01/LV_VG_STORAGE01:
> [code]
> lvextend -l +100%PVS -r -t /dev/VG_Storage01/LV_VG_STORAGE01
>
> Test mode: Metadata will NOT be updated.
> Segmentation fault
>
> lvextend -v -l +100%FREE -r -t /dev/VG_Storage01/LV_VG_STORAGE01
> Test mode: Metadata will NOT be updated.
> Finding volume group VG_Storage01
>
> Using stripesize of last segment 64.00 KB
> Rounding size (238417 extents) down to stripe boundary size for segment (238416 extents)
> Executing: fsadm --dry-run --verbose check /dev/VG_Storage01/LV_VG_STORAGE01
>
> fsadm: "ext3" filesystem found on "/dev/mapper/VG_Storage01-LV_VG_STORAGE01"
> fsadm: Dry execution fsck /dev/mapper/VG_Storage01-LV_VG_STORAGE01
> Test mode: Skipping archiving of volume group.
>
> Extending logical volume LV_VG_STORAGE01 to 465.66 GB
> Insufficient suitable allocatable extents for logical volume LV_VG_STORAGE01: 162108 more required
> Test mode: Wiping internal cache
> Wiping internal VG cache
>
> Block device 253:4
> [/code]
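> As promised above, the commands I used for steps 2 through 5 were roughly
> the following (retyped rather than copied from the terminal, so treat them
> as approximate, not a verbatim transcript):
> [code]
>
> # step 2: label the new partition as a PV and add it to VG_Storage01
> pvcreate /dev/sdb1
> vgextend VG_Storage01 /dev/sdb1
>
> # step 3: migrate all extents off the old PVs onto /dev/sdb1
> pvmove /dev/hdc2 /dev/sdb1
> pvmove /dev/hdd2 /dev/sdb1
>
> # step 4: drop the now-empty PVs from VG_Storage01
> vgreduce VG_Storage01 /dev/hdc2 /dev/hdd2
>
> # step 5: reuse the freed PVs for the new VG
> vgcreate VG_Storage /dev/hdc2 /dev/hdd2
>
> [/code]
>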
>
> Thank you for your help.
>