qemu-devel.nongnu.org archive mirror
From: Laurent Vivier <Laurent.Vivier@bull.net>
To: qemu-devel@nongnu.org
Cc: Shahar Frank <shaharf@qumranet.com>
Subject: Re: [Qemu-devel][PATCH,RFC] Zero cluster dedup
Date: Wed, 03 Sep 2008 09:09:35 +0200	[thread overview]
Message-ID: <1220425775.4159.6.camel@frecb07144> (raw)
In-Reply-To: <ED2414DB5FBBCF4FA66ECE71F290E9A28EC264@EXVBE011-2.exch011.intermedia.net>

On Tuesday 02 September 2008 at 09:28 -0700, Shahar Frank wrote:
> Hi All,
> 
> The following patch implements a zero cluster dedup feature for the
> qcow2 image format.
> 
> The incentive for this feature is the image COW "inflation" problem
> (i.e. COW image growth). The causes of COW image pollution are many;
> for example, Windows with NTFS does not reuse recently de-allocated
> space and thereby pollutes many blocks. A naïve solution would be to
> identify de-allocated space and de-allocate it from the image block
> space. The problems with this approach are that 1) it requires a
> Windows-side interface to the image, and 2) there is no de-allocate
> verb for the images.
> 
> The suggested solution is simple:
> 	1) Use Windows-side cleanup/wipe utilities such as "Erase", "Free
> Wipe Wizard", or "Disk Redactor" (or any other free/non-free wipe
> utility) to periodically wipe free space and clean up temporaries.
> Most utilities can be used to write *zeros* on the wiped blocks.
> 	2) Make qcow2 identify zero cluster writes and use them as
> de-allocation hints (or a "make hole" verb). To implement this
> feature with minimal effort and minimal changes, I suggest using the
> internal COW mechanism of qcow2 to create a shared zero cluster. To
> avoid image format changes, such a cluster is created on demand and
> can then be shared by all zero writes. A non-zero write to the zero
> cluster will cause a normal COW operation (similar to the kernel's
> shared zero page).
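[Editor's note: the zero-write detection in step 2 above can be sketched as follows. This is a minimal illustration, not the posted patch; cluster_is_zero() is a hypothetical helper name, and a real implementation would compare in word-sized or SIMD chunks rather than byte by byte.]

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helper: return 1 if the buffer contains only zero
 * bytes, 0 otherwise. On a write of a full cluster, the driver would
 * call this first and, on a hit, map the guest cluster to the shared
 * zero cluster instead of allocating and filling a new one. */
static int cluster_is_zero(const uint8_t *buf, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        if (buf[i] != 0) {
            return 0;           /* found a non-zero byte: normal write */
        }
    }
    return 1;                   /* all zero: candidate for dedup */
}
```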

Is a shared zero page really needed?

When I read qcow_read() and qcow_aio_read_cb(), I see:

    if (!cluster_offset) {
        ...
        memset(buf, 0, 512 * n);
        ...
    }

so I think you just have to clear the l2_table entry for the given
clusters (and free them).
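[Editor's note: this suggestion can be illustrated on a toy in-memory L2 table. The names below (l2_table, free_cluster, discard_cluster) are illustrative only, not qcow2's actual API: a zero write simply clears the L2 entry, so later reads fall into the !cluster_offset path quoted above and return zeros without any shared zero cluster.]

```c
#include <stdint.h>

#define L2_SIZE 512

/* Toy L2 table: entry 0 means "unallocated, reads back as zeros". */
static uint64_t l2_table[L2_SIZE];

static void free_cluster(uint64_t offset)
{
    /* Placeholder: a real implementation would return the cluster to
     * qcow2's refcount-managed free space. */
    (void)offset;
}

/* On an all-zero write, deallocate the cluster backing l2_index. */
static void discard_cluster(unsigned l2_index)
{
    if (l2_table[l2_index] != 0) {
        free_cluster(l2_table[l2_index]);
        l2_table[l2_index] = 0;   /* reads now hit the memset-zero path */
    }
}
```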

Regards,
Laurent
-- 
----------------- Laurent.Vivier@bull.net  ------------------
  "Perfection is achieved not when there is nothing left to add,
but when there is nothing left to take away." Saint Exupéry


Thread overview: 13+ messages
2008-09-02 16:28 [Qemu-devel][PATCH,RFC] Zero cluster dedup Shahar Frank
2008-09-03  7:09 ` Laurent Vivier [this message]
2008-09-03  7:35   ` Shahar Frank
2008-09-03  7:59     ` Laurent Vivier
2008-09-03  8:13       ` Kevin Wolf
2008-09-03  8:25         ` Laurent Vivier
2008-09-03  9:38 ` Kevin Wolf
2008-09-03 12:05   ` Shahar Frank
2008-09-03 12:47     ` Kevin Wolf
2008-09-03 13:07       ` Shahar Frank
2008-09-03 13:12         ` Laurent Vivier
2008-09-03 17:44           ` Shahar Frank
2008-09-03 13:09       ` Laurent Vivier
