From: Max Reitz <mreitz@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>, qemu-devel@nongnu.org
Cc: pbonzini@redhat.com, stefanha@redhat.com
Subject: Re: [Qemu-devel] [PATCH] qcow2: Set zero flag for discarded clusters
Date: Sat, 08 Feb 2014 17:17:04 +0100
Message-ID: <52F65880.9000307@redhat.com>
In-Reply-To: <1391873296-23451-1-git-send-email-kwolf@redhat.com>

On 08.02.2014 16:28, Kevin Wolf wrote:
> Instead of making the backing file contents visible again after a discard
> request, set the zero flag if possible (i.e. on version >= 3).
>
> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
> ---
>   block/qcow2-cluster.c | 22 ++++++++++++++++++++--
>   1 file changed, 20 insertions(+), 2 deletions(-)
>
> diff --git a/block/qcow2-cluster.c b/block/qcow2-cluster.c
> index 25d45d1..008bc04 100644
> --- a/block/qcow2-cluster.c
> +++ b/block/qcow2-cluster.c
> @@ -1333,13 +1333,31 @@ static int discard_single_l2(BlockDriverState *bs, uint64_t offset,
>           uint64_t old_offset;
>   
>           old_offset = be64_to_cpu(l2_table[l2_index + i]);
> -        if ((old_offset & L2E_OFFSET_MASK) == 0) {
> +
> +        /*
> +         * Make sure that a discarded area reads back as zeroes for v3 images
> +         * (we cannot do it for v2 without actually writing a zero-filled
> +         * buffer). We can skip the operation if the cluster is already marked
> +         * as zero, or if it's unallocated and we don't have a backing file.
> +         *
> +         * TODO We might want to use bdrv_get_block_status(bs) here, but we're
> +         * holding s->lock, so that doesn't work today.
> +         */
> +        if (!!(old_offset & QCOW_OFLAG_ZERO)) {
> +            continue;
> +        }
> +
> +        if ((old_offset & L2E_OFFSET_MASK) == 0 && !bs->backing_hd) {
>               continue;
>           }
>   
>           /* First remove L2 entries */
>           qcow2_cache_entry_mark_dirty(s->l2_table_cache, l2_table);
> -        l2_table[l2_index + i] = cpu_to_be64(0);
> +        if (s->version >= 3) {

Oh, I revoke my Reviewed-by from earlier. This should probably be 
"s->qcow_version"; otherwise, it won't compile.

Max

> +            l2_table[l2_index + i] = cpu_to_be64(QCOW_OFLAG_ZERO);
> +        } else {
> +            l2_table[l2_index + i] = cpu_to_be64(0);
> +        }
>   
>           /* Then decrease the refcount */
>           qcow2_free_any_clusters(bs, old_offset, 1, type);
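
As an aside, the behavior the commit message describes is easy to 
exercise with qemu-io once this compiles; a minimal sketch (image names 
made up; compat=1.1 creates a version 3 image):

    # Base image holding non-zero data that the overlay must not expose
    qemu-img create -f qcow2 base.qcow2 64M
    qemu-io -c 'write -P 0xcd 0 64k' base.qcow2

    # v3 overlay: after the patch, a discarded range must read back as
    # zeroes instead of showing the backing file's 0xcd pattern again
    qemu-img create -f qcow2 -o compat=1.1 -b base.qcow2 top.qcow2
    qemu-io -c 'discard 0 64k' -c 'read -P 0 0 64k' top.qcow2

read -P fails with a pattern verification error if the discarded range 
still exposes the backing data.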
