linux-lvm.redhat.com archive mirror
* [linux-lvm] LVM snapshot: in memory COW table
@ 2017-01-25 13:46 Francois Blondel
  2017-01-25 14:38 ` Mike Snitzer
  0 siblings, 1 reply; 3+ messages in thread
From: Francois Blondel @ 2017-01-25 13:46 UTC (permalink / raw)
  To: linux-lvm@redhat.com

Hi all,

We currently use LVM snapshots as a solution for backing up block devices.
The snapshot is not active for long, only for the time it takes to
upload the backup to the backup server.

As I have read, for example here: http://www.nikhef.nl/~dennisvd/lvmcrap.html,
snapshots multiply the I/O load on the origin Logical Volume.
We would like to avoid that.

As our servers have enough free RAM, and the COW table of the
snapshot stays quite small, my idea was to store the snapshot
(or its COW table) in RAM.
The goal is to avoid the additional I/O going to the disk and redirect
it to memory.
Yes, this would mean losing the snapshot in case of a power failure,
but that is not an issue for our use case.

I tried using lvmcache with an in-memory block device for the cache,
but I could not create a snapshot while using lvmcache.
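
Roughly what I tried, in case it helps (the VG/LV names and sizes below
are only an example, and /dev/ram0 comes from the brd module):

  # 2 GiB ramdisk (rd_size is in KiB)
  modprobe brd rd_nr=1 rd_size=2097152

  # put the cache pool on the ramdisk PV
  pvcreate /dev/ram0
  vgextend vg /dev/ram0
  lvcreate --type cache-pool -L 1900M -n ramcache vg /dev/ram0
  lvconvert --type cache --cachepool vg/ramcache vg/origin

  # this is the step that gets refused once the origin is a cached LV
  lvcreate -s -L 2G -n backup-snap vg/origin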

Any hints or experience on that?

Many thanks,

François Blondel


* Re: [linux-lvm] LVM snapshot: in memory COW table
  2017-01-25 13:46 [linux-lvm] LVM snapshot: in memory COW table Francois Blondel
@ 2017-01-25 14:38 ` Mike Snitzer
  2017-01-26  0:50   ` Stuart Gathman
  0 siblings, 1 reply; 3+ messages in thread
From: Mike Snitzer @ 2017-01-25 14:38 UTC (permalink / raw)
  To: Francois Blondel; +Cc: linux-lvm@redhat.com

On Wed, Jan 25 2017 at  8:46am -0500,
Francois Blondel <fblondel@intelliad.de> wrote:

> Hi all,
> 
> We currently use LVM snapshots as a solution for backing up block devices.
> The snapshot is not active for long, only for the time it takes to
> upload the backup to the backup server.
> 
> As I have read, for example here: http://www.nikhef.nl/~dennisvd/lvmcrap.html
> snapshots multiply the I/O load on the origin Logical Volume.
> We would like to avoid that.

The old dm-snapshot target already has a "transient" snapshot store.
See drivers/md/dm-snap-transient.c

Using a transient store isn't going to change the fact that you'll still
need N snapshot stores, or that any write to the origin will trigger an
N-way copy-out penalty when N snapshots of the origin exist.  All you'd
be getting with transient is the implicit performance benefit of memory
speeds.

For improved scalability you'd do well to switch to DM
thin-provisioning's snapshots.  Some customers/users are already using
it as the backend for backups (using thin_delta to find the differences
between thin volumes and their snapshots).
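
Untested sketch, assuming a VG called "vg" and the
thin-provisioning-tools package installed:

  # thin pool plus a thin volume carved out of it
  lvcreate --type thin-pool -L 100G -n pool vg
  lvcreate -V 50G -T vg/pool -n data

  # thin snapshots share the pool; no separate COW area to size
  # (thin snapshots default to the activation-skip flag, so activate
  #  them later with "lvchange -ay -K vg/data-snap")
  lvcreate -s -n data-snap vg/data

  # device ids for thin_delta come from 'lvs -o+thin_id'; reading the
  # pool metadata needs a metadata snapshot (or an inactive pool)
  thin_delta --snap1 <id1> --snap2 <id2> /dev/mapper/vg-pool_tmeta

which gives you an XML description of the block ranges that differ
between the two thin devices.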

> As our servers have enough free RAM, and the COW table of the
> snapshot remains quite small, I had the project to store the snapshot
> (or its cow table) in RAM.
> The goal there is to avoid additional IO aimed at the disk and redirect
> them to memory.
> Yes, this would lead to the loss of the snapshot in case of power
> failure, but this is not an issue for our use case.

Documentation/device-mapper/snapshot touches on "transient" with:

*) snapshot <origin> <COW device> <persistent?> <chunksize>

A snapshot of the <origin> block device is created. Changed chunks of
<chunksize> sectors will be stored on the <COW device>.  Writes will
only go to the <COW device>.  Reads will come from the <COW device> or
from <origin> for unchanged data.  <COW device> will often be
smaller than the origin and if it fills up the snapshot will become
useless and be disabled, returning errors.  So it is important to monitor
the amount of free space and expand the <COW device> before it fills up.

<persistent?> is P (Persistent) or N (Not persistent - will not survive
after reboot).  O (Overflow) can be added as a persistent store option
to allow userspace to advertise its support for seeing "Overflow" in the
snapshot status.  So supported store types are "P", "PO" and "N".

The difference between persistent and transient is that with transient
snapshots less metadata must be saved on disk - it can be kept in
memory by the kernel.
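
To make that concrete, a transient table wired up by hand would look
something like this (untested, names made up; lvm2 normally also sets up
the snapshot-origin mapping that routes writes to the origin through the
copy-out path):

  # origin size in 512-byte sectors
  SECTORS=$(blockdev --getsz /dev/vg/origin)

  # "N" = transient: exception metadata lives only in kernel memory,
  # changed 4KiB chunks (8 sectors) are copied to the ramdisk /dev/ram0
  echo "0 $SECTORS snapshot /dev/vg/origin /dev/ram0 N 8" | \
      dmsetup create backup-snap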


* Re: [linux-lvm] LVM snapshot: in memory COW table
  2017-01-25 14:38 ` Mike Snitzer
@ 2017-01-26  0:50   ` Stuart Gathman
  0 siblings, 0 replies; 3+ messages in thread
From: Stuart Gathman @ 2017-01-26  0:50 UTC (permalink / raw)
  To: linux-lvm

On 01/25/2017 09:38 AM, Mike Snitzer wrote:
> On Wed, Jan 25 2017 at  8:46am -0500,
> Francois Blondel <fblondel@intelliad.de> wrote:
>
>> Hi all,
>>
>> We currently use LVM snapshots as a solution for backing up block devices.
>> The snapshot is not active for long, only for the time it takes to
>> upload the backup to the backup server.
>>
>> As I have read, for example here: http://www.nikhef.nl/~dennisvd/lvmcrap.html
>> snapshots multiply the I/O load on the origin Logical Volume.
>> We would like to avoid that.
> The old dm-snapshot target already has a "transient" snapshot store.
> See drivers/md/dm-snap-transient.c
>
> Using a transient store isn't going to change the fact that you'll still
> need N snapshot stores, or that any write to the origin will trigger an
> N-way copy-out penalty when N snapshots of the origin exist.  All you'd
> be getting with transient is the implicit performance benefit of memory
> speeds.
The transient snapshots seem to store only the metadata in memory.  I
think what the OP is proposing is to allocate, say, 2G of RAM for the
snapshot instead of allocating a 2G snapshot LV on disk.  The snapshot
would simply disappear if the system crashes or reboots.

You could kludge this up now by adding a ramdisk PV to the volume group,
somehow avoiding the allocation of normal LVs on the ramdisk PV, and
forcing the snapshot to be allocated on the ramdisk PV.  Adding the
ramdisk to the VG immediately before allocating the snapshot on it (and
using up all of its extents) would prevent normal LVs from being
allocated on it.
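
Something along these lines (untested; names and sizes are just an
example):

  # 2 GiB ramdisk from the brd module (rd_size is in KiB)
  modprobe brd rd_nr=1 rd_size=2097152

  # add it to the VG right before taking the snapshot
  pvcreate /dev/ram0
  vgextend vg /dev/ram0

  # naming the PV restricts allocation to the ramdisk; size the snapshot
  # a bit under 2G to leave room for the PV/VG metadata
  lvcreate -s -L 1900M -n backup-snap vg/origin /dev/ram0

  # ... run the backup from /dev/vg/backup-snap ...

  # tear down: drop the snapshot and take the ramdisk out of the VG
  lvremove vg/backup-snap
  vgreduce vg /dev/ram0
  pvremove /dev/ram0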

The drawback to the above kludge is that on crash or reboot, the ramdisk
PV will be gone - requiring an extra step to remove it from the VG and
undo the snapshot remapping.

A special "use me for snapshots only" flag for PVs would be one way to
start implementing it.

