linux-lvm.redhat.com archive mirror
From: Zdenek Kabelac <zkabelac@redhat.com>
To: LVM general discussion and development <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] Virtualization and LVM data security
Date: Sat, 25 Oct 2014 22:43:49 +0200	[thread overview]
Message-ID: <544C0B85.209@redhat.com> (raw)
In-Reply-To: <544BE01F.9080503@ib.pl>

On 25.10.2014 at 19:38, IB Development Team wrote:
> On 2014-10-25 at 14:50, Zdenek Kabelac wrote:
>
>>> Is there any way to make LVM2 tools wipe added/freed LV space or plans to add
>>> such functionality?
>
>> lvm.conf    devices { issue_discard = 1 }
>>
>> See if that fits your need?
>> Note: when using this option - vg/lvremove becomes an 'irreversible' operation.
>
> issue_discard seems to require "underlying storage support", which is probably
> not available in common RAID/SATA/SAS/DRBD scenarios. A universal, open (source)
> solution would probably be better here (with hardware alternatives where
> possible).

Yes - discard needs to be supported by the underlying storage.
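
For reference, a minimal sketch of the lvm.conf change - note that in current
lvm.conf(5) the option is spelled issue_discards (plural) in the devices section:

  # /etc/lvm/lvm.conf
  devices {
      # pass discards down to the underlying storage on lvremove/lvreduce
      issue_discards = 1
  }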

>
>>> When LVM based storage is used for guest virtual disks, it is possible that
>>> after resizing/snapshoting LV, disk data fragments from one guest will be
>>> visible to other guest, which may cause serious security problems if not wiped
>>> somehow[...]
>
>> thin provisioning with zeroing enabled for the thin-pool (-Zy) is likely a
>> better option.
>
> Sounds interesting. Is it a stable solution for production systems? Does it
> perform no worse than a "regular" preallocated LV?

Provisioning does not come for 'free' - you obviously pay some price when
provisioned blocks are zeroed - but blocks are zeroed on demand, only when they
are about to be used.

For production systems - do not over-provision space. If you promise more space
than you actually have, dealing with an overfilled pool currently requires
certain manual admin skills - it is not yet fully automated.
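
A minimal sketch of such a setup (vg, pool and thinvol are placeholder names,
sizes are arbitrary):

  # create a thin pool with zeroing of newly provisioned blocks enabled
  lvcreate -L 100G -T vg/pool -Z y
  # create a thin volume in that pool for a guest disk
  lvcreate -V 20G -T vg/pool -n thinvol
  # zeroing can also be toggled later on the pool
  lvchange -Z y vg/pool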

>> Note: you could obviously implement 'workaround' something like:
>>
>> lvcreate -l100%FREE -n trim_me vg
>> blkdiscard /dev/vg/trim_me
>> (or if disk doesn't support TRIM -   dd if=/dev/zero of=/dev/vg/trim_me....)
>> lvremove vg/trim_me
>
> If I understand correctly, in this scenario, guest data may still be present
> outside the "cleaned" LV (i.e. data that was saved outside the LV in a snapshot
> LV during backups). If so - cleaning should probably be done transparently by
> the LVM "software" layer, even without "underlying storage support".

There is no such support on the lvm2 side - it is much easier to implement
on the user side.

Before you call 'lvremove' - just zero the volume with dd. If lvm were zeroing
devices on lvremove - e.g. a 1TB volume - it would be an insanely long operation.
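
Something along these lines should work (vg/guest_lv is just a placeholder name;
dd stops with "No space left on device" once the LV is full, which is expected):

  # zero the whole LV before returning its extents to the VG
  dd if=/dev/zero of=/dev/vg/guest_lv bs=1M oflag=direct
  # or, where the underlying storage supports discard:
  blkdiscard /dev/vg/guest_lv
  lvremove vg/guest_lv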

Regards

Zdenek

Thread overview: 4+ messages
2014-10-24 17:30 [linux-lvm] Virtualization and LVM data security IB Development Team
2014-10-25 12:50 ` Zdenek Kabelac
2014-10-25 17:38   ` IB Development Team
2014-10-25 20:43     ` Zdenek Kabelac [this message]
