From: Jonathan Tripathy <jonnyt@abpni.co.uk>
To: linux-lvm@redhat.com
Subject: Re: [linux-lvm] Snapshots and disk re-use
Date: Wed, 06 Apr 2011 00:19:54 +0100 [thread overview]
Message-ID: <4D9BA39A.8020008@abpni.co.uk> (raw)
In-Reply-To: <4D9BA196.20006@ankh.org>
On 06/04/2011 00:11, James Hawtin wrote:
>
>> James,
>>
>> That's fantastic! Thanks very much! I have a couple of questions:
>>
>> 1) If I wanted to create a script that backed up lots of
>> customer-data LVs, could I just do one zero at the end (and still
>> have no data leakage)?
>
> Yes you could, because COW means copy-on-write: the original block
> is copied onto the COW area, with the data from the original disk
> overwriting any data currently on it. Before that point, any data on
> it was not addressable from the snapshot LV (* see my final point).
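The "one zero at the end" scheme from question 1 could be sketched roughly as below. This is a dry-run sketch that only prints the commands it would run; the VG name, LV names, snapshot size, and dd-based backup are all placeholder assumptions, not details from this thread.

```shell
#!/bin/sh
# Dry-run sketch: back up several customer LVs via short-lived snapshots,
# then zero the shared snapshot space once at the end. All names are
# placeholders; the script only echoes the commands it would run.
VG=vg0
SNAP_SIZE=2G
for lv in cust1 cust2 cust3; do
    echo "lvcreate -s -L $SNAP_SIZE -n ${lv}-snap $VG/$lv"
    echo "dd if=/dev/$VG/${lv}-snap of=/backup/${lv}.img bs=1M"
    echo "lvremove -f $VG/${lv}-snap"
done
# Single wipe at the end: grab the now-free extents as a temporary LV
# and zero it, so nothing copied into the COW can leak to a future LV.
echo "lvcreate -n wipe -l 100%FREE $VG"
echo "dd if=/dev/zero of=/dev/$VG/wipe bs=1M"
echo "lvremove -f $VG/wipe"
```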
>> 2) On average, my data LVs are 20GB each, and a 20GB snapshot would
>> take about 20 minutes to erase. If I made the snapshot only 1GB, it
>> would be quick to erase at the end (however, only 1GB of data could
>> then be written on the respective origin, correct?)
>
> You are right, you only have to erase the snapshot COW space, which is
> normally only 10-15% of the size of the original disk. 2GB is pretty
> fast to overwrite on any system I have used these days. To be sure,
> though, you do need to overwrite the whole COW even if only a few
> percent was used, as you cannot tell which few percent that was.
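For what it's worth, `lvs` can report how full a snapshot's COW space is, but not which extents were touched, which is exactly why the whole COW has to be overwritten. A dry-run sketch (names are placeholders):

```shell
#!/bin/sh
# Dry-run sketch: check snapshot COW usage before a wipe.
# 'data_percent' shows the fill level only, not which extents were used,
# so it can't be used to narrow down what needs erasing.
CHECK_CMD="lvs -o lv_name,lv_size,data_percent vg0/cust1-snap"
echo "$CHECK_CMD"
```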
Actually, I meant making the snapshot only 1GB in the first place, not
erasing just the first 1GB of a 20GB snapshot. But this may be moot
(see below).
>
> * I do wonder why you are so worried; leakage is only a problem if the
> COW is assigned to a future customer LV. If you always used the same
> space for backups, perhaps a PV just for backups, it would never be
> used in a customer LV, so you could argue that you never have to
> erase it. If it's on a PV you only use for snapshotting, you also
> don't need to hog the space, as any part of that disk is fine.
Excellent point! As long as I use the same PEs for making the snapshot
every time, I never need to erase it (and it can be a nice big size,
like 50GB, so even my largest customers won't outgrow the snapshot).
However, wouldn't I need to keep the "hog" around just to make sure
that the snapshot PEs don't get assigned to a new customer LV in the
future? (Currently, we don't specify PEs when creating normal LVs.)
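One way to avoid keeping a "hog" LV might be to mark the dedicated snapshot PV non-allocatable between backups. This is a sketch with placeholder names (/dev/sdb1, vg0, cust1), and I believe lvcreate will not allocate from a non-allocatable PV even when it is named explicitly, so the flag would have to be toggled around each snapshot:

```shell
#!/bin/sh
# Dry-run sketch: reserve a PV for snapshots without a hog LV.
# /dev/sdb1, vg0 and cust1 are placeholders; only echoes the commands.
SNAP_PV=/dev/sdb1
echo "pvchange -x y $SNAP_PV"                              # allow allocation
echo "lvcreate -s -L 50G -n cust1-snap vg0/cust1 $SNAP_PV" # COW on that PV
echo "pvchange -x n $SNAP_PV"                              # lock it again
```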
An even better question: does the snapshot have to be on the same
physical disk as the LV it's mirroring?
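As far as I know, it doesn't: the COW area only has to live in the same volume group as the origin, and it can be placed on a different physical disk by naming that PV at the end of the lvcreate call. A sketch with placeholder names:

```shell
#!/bin/sh
# Dry-run sketch: place the snapshot COW on a different PV (/dev/sdc1)
# from the origin LV. Both PVs must belong to the same VG (vg0 here).
SNAP_CMD="lvcreate -s -L 20G -n cust1-snap vg0/cust1 /dev/sdc1"
echo "$SNAP_CMD"
```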
Thread overview: 68+ messages
2011-02-23 12:36 [linux-lvm] Snapshots and disk re-use Jonathan Tripathy
2011-02-23 13:09 ` Joe Thornber
2011-02-23 13:57 ` Jonathan Tripathy
2011-02-23 14:16 ` Joe Thornber
2011-02-23 14:18 ` Jonathan Tripathy
2011-02-23 16:12 ` Ray Morris
2011-02-23 16:55 ` Jonathan Tripathy
2011-02-23 17:54 ` Stuart D. Gathman
2011-02-23 18:05 ` Jonathan Tripathy
2011-02-23 19:34 ` Stuart D. Gathman
2011-02-23 18:05 ` Stuart D. Gathman
2011-02-23 18:19 ` Jonathan Tripathy
2011-02-23 18:39 ` Les Mikesell
2011-02-23 19:39 ` Stuart D. Gathman
2011-02-23 20:03 ` Les Mikesell
2011-02-23 20:37 ` Stuart D. Gathman
2011-02-23 20:49 ` Jonathan Tripathy
2011-02-23 23:25 ` Stuart D. Gathman
2011-02-23 23:42 ` Stuart D. Gathman
2011-02-24 0:09 ` Jonathan Tripathy
2011-02-24 0:32 ` Stuart D. Gathman
2011-02-24 0:37 ` Jonathan Tripathy
2011-02-24 0:40 ` Jonathan Tripathy
2011-02-24 2:00 ` Stuart D. Gathman
2011-02-24 7:33 ` Jonathan Tripathy
2011-02-24 14:50 ` Stuart D. Gathman
2011-02-24 14:57 ` Jonathan Tripathy
2011-02-24 15:13 ` Stuart D. Gathman
2011-02-24 15:20 ` Jonathan Tripathy
2011-02-24 16:41 ` Jonathan Tripathy
2011-02-24 19:15 ` Nataraj
2011-02-24 19:25 ` Les Mikesell
2011-02-24 19:55 ` Stuart D. Gathman
2011-02-24 19:19 ` Stuart D. Gathman
2011-02-24 19:45 ` Stuart D. Gathman
2011-02-24 21:22 ` Jonathan Tripathy
2011-04-05 20:09 ` Jonathan Tripathy
2011-04-05 20:41 ` Stuart D. Gathman
2011-04-05 20:48 ` Jonathan Tripathy
2011-04-05 20:59 ` James Hawtin
2011-04-05 21:36 ` Jonathan Tripathy
2011-04-05 22:42 ` James Hawtin
2011-04-05 22:52 ` Jonathan Tripathy
2011-04-05 23:11 ` James Hawtin
2011-04-05 23:19 ` Jonathan Tripathy [this message]
2011-04-05 23:39 ` James Hawtin
2011-04-06 0:00 ` Jonathan Tripathy
2011-04-06 0:08 ` Stuart D. Gathman
2011-04-06 0:14 ` Jonathan Tripathy
2011-04-06 0:16 ` James Hawtin
2011-04-06 0:28 ` Jonathan Tripathy
2011-04-06 0:38 ` Stuart D. Gathman
2011-04-06 0:43 ` Stuart D. Gathman
2011-04-06 1:36 ` James Hawtin
2011-04-06 1:47 ` Jonathan Tripathy
2011-04-06 1:53 ` James Hawtin
2011-04-06 0:47 ` Jonathan Tripathy
2011-04-06 0:42 ` James Hawtin
2011-04-06 0:50 ` Jonathan Tripathy
2011-04-06 1:20 ` James Hawtin
2011-04-06 1:45 ` Jonathan Tripathy
2011-02-23 19:49 ` Nataraj
2011-02-23 19:24 ` Stuart D. Gathman
2011-02-23 19:07 ` [linux-lvm] Problem executing lvm related commands Tinni
2011-02-23 19:33 ` [linux-lvm] Snapshots and disk re-use Phillip Susi
2011-02-23 19:45 ` Stuart D. Gathman
2011-02-23 19:56 ` Nataraj
2011-02-23 13:18 ` Sunil_Gupta2