From: James Hawtin <oolon@ankh.org>
To: linux-lvm@redhat.com
Subject: Re: [linux-lvm] Snapshots and disk re-use
Date: Tue, 05 Apr 2011 22:42:01 +0000
Message-ID: <4D9B9AB9.8070202@ankh.org>
In-Reply-To: <4D9B8B5A.2070104@abpni.co.uk>
On 05/04/2011 21:36, Jonathan Tripathy wrote:
> Hi James,
>
> Interesting, didn't know you could do that! However, how do I know
> that the PEs aren't being used by LVs? Also, could you please explain
> the syntax? Normally to create a snapshot, I would do:
>
> lvcreate -L20G -s -n backup /dev/vg0/customerID
>
Hmmm, well, you have two options. You could use pvdisplay --map or
lvdisplay --map to work out exactly which PEs have been used to build
your snapshot COW, and then use that information to create a blanking
LV in the same place. Or you could do it the easy way:
1. Hog the space on the specific PEs
2. Delete the hog
3. Create the snapshot on the same PEs
4. Back up
5. Delete the snapshot
6. Re-create the hog on the same PEs
7. Zero the hog
This has the advantage that the creation commands will fail if the PEs
you want are not available. The drawback is that you probably need more
space for snapshots, as it is less flexible in how the space is used.
Below I have illustrated all the commands you need to do this. You
don't need all the display commands, but they are there to prove that
this has worked and that the LVs are in the same place.
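In compressed form (using the PE range 5448-5467 from the walkthrough
below), the whole sequence is roughly:
#lvcreate -l 20 -n hog_lv test_vg /dev/cciss/c0d1p1:5448-5467
#lvremove /dev/test_vg/hog_lv
#lvcreate -l 20 -s -n data_snap /dev/test_vg/data_lv /dev/cciss/c0d1p1:5448-5467
<NOW BACKUP>
#lvremove /dev/test_vg/data_snap
#lvcreate -l 20 -n hog_lv test_vg /dev/cciss/c0d1p1:5448-5467
#dd if=/dev/zero of=/dev/test_vg/hog_lv
#lvremove /dev/test_vg/hog_lv
The full run, with output, follows: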
#pvdisplay --map /dev/cciss/c0d1p1
--- Physical volume ---
PV Name /dev/cciss/c0d1p1
VG Name test_vg
PV Size 683.51 GB / not usable 5.97 MB
Allocatable yes
PE Size (KByte) 131072
Total PE 5468
Free PE 4332
Allocated PE 1136
PV UUID YXjplf-EfLh-8Jkr-2utT-gmi5-gjAH-UOCcN0
--- Physical Segments ---
Physical extent 0 to 15:
Logical volume /dev/test_vg/test_lv
Logical extents 0 to 15
Physical extent 16 to 815:
Logical volume /dev/test_vg/mail_lv
Logical extents 0 to 799
Physical extent 816 to 975:
Logical volume /dev/test_vg/data_lv
Logical extents 0 to 159
Physical extent 976 to 2255:
FREE
Physical extent 2256 to 2335:
Logical volume /dev/test_vg/srv_lv
Logical extents 0 to 79
Physical extent 2336 to 2415:
Logical volume /dev/test_vg/data_lv
Logical extents 160 to 239
Physical extent 2416 to 5467:
FREE
#lvcreate -l 20 -n hog_lv test_vg /dev/cciss/c0d1p1:5448-5467
#pvdisplay --map /dev/cciss/c0d1p1
--- Physical volume ---
PV Name /dev/cciss/c0d1p1
VG Name test_vg
PV Size 683.51 GB / not usable 5.97 MB
Allocatable yes
PE Size (KByte) 131072
Total PE 5468
Free PE 4312
Allocated PE 1156
PV UUID YXjplf-EfLh-8Jkr-2utT-gmi5-gjAH-UOCcN0
--- Physical Segments ---
Physical extent 0 to 15:
Logical volume /dev/test_vg/test_lv
Logical extents 0 to 15
Physical extent 16 to 815:
Logical volume /dev/test_vg/mail_lv
Logical extents 0 to 799
Physical extent 816 to 975:
Logical volume /dev/test_vg/data_lv
Logical extents 0 to 159
Physical extent 976 to 2255:
FREE
Physical extent 2256 to 2335:
Logical volume /dev/test_vg/srv_lv
Logical extents 0 to 79
Physical extent 2336 to 2415:
Logical volume /dev/test_vg/data_lv
Logical extents 160 to 239
Physical extent 2416 to 5447:
FREE
Physical extent 5448 to 5467:
Logical volume /dev/test_vg/hog_lv
Logical extents 0 to 19
#lvremove /dev/test_vg/hog_lv
Do you really want to remove active logical volume hog_lv? [y/n]: y
Logical volume "hog_lv" successfully removed
#lvcreate -l 20 -s -n data_snap /dev/test_vg/data_lv
/dev/cciss/c0d1p1:5448-5467
Logical volume "data_snap" created
#pvdisplay --map /dev/cciss/c0d1p1
--- Physical volume ---
PV Name /dev/cciss/c0d1p1
VG Name test_vg
PV Size 683.51 GB / not usable 5.97 MB
Allocatable yes
PE Size (KByte) 131072
Total PE 5468
Free PE 4312
Allocated PE 1156
PV UUID YXjplf-EfLh-8Jkr-2utT-gmi5-gjAH-UOCcN0
--- Physical Segments ---
Physical extent 0 to 15:
Logical volume /dev/test_vg/restricted_lv
Logical extents 0 to 15
Physical extent 16 to 815:
Logical volume /dev/test_vg/mail_lv
Logical extents 0 to 799
Physical extent 816 to 975:
Logical volume /dev/test_vg/data_lv
Logical extents 0 to 159
Physical extent 976 to 2255:
FREE
Physical extent 2256 to 2335:
Logical volume /dev/test_vg/srv_lv
Logical extents 0 to 79
Physical extent 2336 to 2415:
Logical volume /dev/test_vg/data_lv
Logical extents 160 to 239
Physical extent 2416 to 5447:
FREE
Physical extent 5448 to 5467:
Logical volume /dev/test_vg/data_snap
Logical extents 0 to 19
#lvdisplay /dev/test_vg/data_snap
--- Logical volume ---
LV Name /dev/test_vg/data_snap
VG Name test_vg
LV UUID bdqB77-f0vb-ZucS-Ka1l-pCr3-Ebeq-kOchmk
LV Write Access read/write
LV snapshot status active destination for /dev/test_vg/data_lv
LV Status available
# open 0
LV Size 30.00 GB
Current LE 240
COW-table size 2.50 GB
COW-table LE 20
Allocated to snapshot 0.00%
Snapshot chunk size 4.00 KB
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:5
#lvdisplay --map /dev/test_vg/data_snap
--- Logical volume ---
LV Name /dev/test_vg/data_snap
VG Name test_vg
LV UUID IBBvOq-Bg0U-c69v-p7fQ-tR63-T8UV-gM1Ncu
LV Write Access read/write
LV snapshot status active destination for /dev/test_vg/data_lv
LV Status available
# open 0
LV Size 30.00 GB
Current LE 240
COW-table size 2.50 GB
COW-table LE 20
Allocated to snapshot 0.00%
Snapshot chunk size 4.00 KB
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:5
--- Segments ---
Logical extent 0 to 19:
Type linear
Physical volume /dev/cciss/c0d1p1
Physical extents 5448 to 5467
<NOW BACKUP>
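(How you do the backup is up to you; purely as an illustration, you
could take a block-level image of the snapshot to a hypothetical
/backup directory, e.g.
#dd if=/dev/test_vg/data_snap bs=1M | gzip > /backup/data_lv.img.gz
or mount the snapshot read-only somewhere and copy the files out with
tar or rsync.)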
#lvremove /dev/test_vg/data_snap
Do you really want to remove active logical volume data_snap? [y/n]: y
Logical volume "data_snap" successfully removed
#lvcreate -l 20 -n hog_lv test_vg /dev/cciss/c0d1p1:5448-5467
Logical volume "hog_lv" created
#pvdisplay --map /dev/cciss/c0d1p1
--- Physical volume ---
PV Name /dev/cciss/c0d1p1
VG Name test_vg
PV Size 683.51 GB / not usable 5.97 MB
Allocatable yes
PE Size (KByte) 131072
Total PE 5468
Free PE 4312
Allocated PE 1156
PV UUID YXjplf-EfLh-8Jkr-2utT-gmi5-gjAH-UOCcN0
--- Physical Segments ---
Physical extent 0 to 15:
Logical volume /dev/test_vg/restricted_lv
Logical extents 0 to 15
Physical extent 16 to 815:
Logical volume /dev/test_vg/mail_lv
Logical extents 0 to 799
Physical extent 816 to 975:
Logical volume /dev/test_vg/data_lv
Logical extents 0 to 159
Physical extent 976 to 2255:
FREE
Physical extent 2256 to 2335:
Logical volume /dev/test_vg/srv_lv
Logical extents 0 to 79
Physical extent 2336 to 2415:
Logical volume /dev/test_vg/data_lv
Logical extents 160 to 239
Physical extent 2416 to 5447:
FREE
Physical extent 5448 to 5467:
Logical volume /dev/test_vg/hog_lv
Logical extents 0 to 19
#dd if=/dev/zero of=/dev/test_vg/hog_lv
#lvremove /dev/test_vg/hog_lv
Do you really want to remove active logical volume hog_lv? [y/n]: y
Logical volume "hog_lv" successfully removed
Enjoy
James