From: Kevin Wolf <kwolf@redhat.com>
To: qemu-devel@nongnu.org
Cc: kwolf@redhat.com, stefanha@gmail.com
Subject: [Qemu-devel] [PATCH 2/3] qcow2: Add qcow2_alloc_clusters_at()
Date: Fri,  2 Mar 2012 19:48:50 +0100
Message-ID: <1330714131-16908-3-git-send-email-kwolf@redhat.com>
In-Reply-To: <1330714131-16908-1-git-send-email-kwolf@redhat.com>

This function allows allocating clusters at a given offset in the image
file. This is useful if you want to allocate the second part of an area
that must be contiguous.
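
As a rough illustration (hypothetical, not part of this patch), a caller
could combine qcow2_alloc_clusters() and qcow2_alloc_clusters_at() along
these lines to grow a contiguous area in two steps; the helper name and
parameters are made up for the example:

    /* Hypothetical sketch: allocate a contiguous area in two steps. First
     * reserve some clusters anywhere in the image, then try to extend that
     * run in place with qcow2_alloc_clusters_at(). */
    static int64_t alloc_contiguous_area(BlockDriverState *bs,
                                         int first_clusters, int more_clusters)
    {
        BDRVQcowState *s = bs->opaque;
        int64_t offset;
        int ret;

        /* First part: normal allocation, placed wherever free space is found */
        offset = qcow2_alloc_clusters(bs,
            (int64_t) first_clusters << s->cluster_bits);
        if (offset < 0) {
            return offset;
        }

        /* Second part: try to continue the run right behind the first part.
         * The return value is the number of clusters actually allocated; it
         * may be smaller than more_clusters if part of the area is already
         * in use. */
        ret = qcow2_alloc_clusters_at(bs,
            offset + ((int64_t) first_clusters << s->cluster_bits),
            more_clusters);
        if (ret < 0) {
            return ret;
        }

        return offset;
    }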

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 block/qcow2-refcount.c |   28 ++++++++++++++++++++++++++++
 block/qcow2.h          |    2 ++
 2 files changed, 30 insertions(+), 0 deletions(-)

diff --git a/block/qcow2-refcount.c b/block/qcow2-refcount.c
index 2db2ede..f39928a 100644
--- a/block/qcow2-refcount.c
+++ b/block/qcow2-refcount.c
@@ -582,6 +582,34 @@ int64_t qcow2_alloc_clusters(BlockDriverState *bs, int64_t size)
     return offset;
 }
 
+int qcow2_alloc_clusters_at(BlockDriverState *bs, uint64_t offset,
+    int nb_clusters)
+{
+    BDRVQcowState *s = bs->opaque;
+    uint64_t cluster_index;
+    int i, refcount, ret;
+
+    /* Check how many of the requested clusters are free */
+    cluster_index = offset >> s->cluster_bits;
+    for (i = 0; i < nb_clusters; i++) {
+        refcount = get_refcount(bs, cluster_index++);
+
+        if (refcount < 0) {
+            return refcount;
+        } else if (refcount != 0) {
+            break;
+        }
+    }
+
+    /* And then allocate them */
+    ret = update_refcount(bs, offset, i << s->cluster_bits, 1);
+    if (ret < 0) {
+        return ret;
+    }
+
+    return i;
+}
+
 /* only used to allocate compressed sectors. We try to allocate
    contiguous sectors. size must be <= cluster_size */
 int64_t qcow2_alloc_bytes(BlockDriverState *bs, int size)
diff --git a/block/qcow2.h b/block/qcow2.h
index fc35838..5129e3e 100644
--- a/block/qcow2.h
+++ b/block/qcow2.h
@@ -193,6 +193,8 @@ int qcow2_refcount_init(BlockDriverState *bs);
 void qcow2_refcount_close(BlockDriverState *bs);
 
 int64_t qcow2_alloc_clusters(BlockDriverState *bs, int64_t size);
+int qcow2_alloc_clusters_at(BlockDriverState *bs, uint64_t offset,
+    int nb_clusters);
 int64_t qcow2_alloc_bytes(BlockDriverState *bs, int size);
 void qcow2_free_clusters(BlockDriverState *bs,
     int64_t offset, int64_t size);
-- 
1.7.6.5

Thread overview: 5+ messages
2012-03-02 18:48 [Qemu-devel] [RFC PATCH 0/3] qcow2: Save requests during cluster allocation Kevin Wolf
2012-03-02 18:48 ` [Qemu-devel] [PATCH 1/3] qcow2: Factor out count_cow_clusters Kevin Wolf
2012-03-02 18:48 ` [Qemu-devel] [PATCH 2/3] qcow2: Add qcow2_alloc_clusters_at() Kevin Wolf [this message]
2012-03-02 18:48 ` [Qemu-devel] [PATCH 3/3] qcow2: Reduce number of I/O requests Kevin Wolf
2012-03-09 13:08 ` [Qemu-devel] [RFC PATCH 0/3] qcow2: Save requests during cluster allocation Stefan Hajnoczi
