From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from eggs.gnu.org ([140.186.70.92]:38316)
	by lists.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1QWYCT-0005xI-Qk for qemu-devel@nongnu.org; Tue, 14 Jun 2011 14:18:56 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
	(envelope-from ) id 1QWYCR-00069c-Ez
	for qemu-devel@nongnu.org; Tue, 14 Jun 2011 14:18:53 -0400
Received: from mtagate2.uk.ibm.com ([194.196.100.162]:37336)
	by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1QWYCQ-00068W-Ik for qemu-devel@nongnu.org; Tue, 14 Jun 2011 14:18:50 -0400
Received: from d06nrmr1307.portsmouth.uk.ibm.com (d06nrmr1307.portsmouth.uk.ibm.com [9.149.38.129])
	by mtagate2.uk.ibm.com (8.13.1/8.13.1) with ESMTP id p5EIImLl028972
	for ; Tue, 14 Jun 2011 18:18:48 GMT
Received: from d06av12.portsmouth.uk.ibm.com (d06av12.portsmouth.uk.ibm.com [9.149.37.247])
	by d06nrmr1307.portsmouth.uk.ibm.com (8.13.8/8.13.8/NCO v10.0) with ESMTP
	id p5EIIh0j2478194 for ; Tue, 14 Jun 2011 19:18:48 +0100
Received: from d06av12.portsmouth.uk.ibm.com (loopback [127.0.0.1])
	by d06av12.portsmouth.uk.ibm.com (8.14.4/8.13.1/NCO v10.0 AVout) with ESMTP
	id p5EIIgoB015105 for ; Tue, 14 Jun 2011 12:18:43 -0600
From: Stefan Hajnoczi
Date: Tue, 14 Jun 2011 19:18:22 +0100
Message-Id: <1308075511-4745-5-git-send-email-stefanha@linux.vnet.ibm.com>
In-Reply-To: <1308075511-4745-1-git-send-email-stefanha@linux.vnet.ibm.com>
References: <1308075511-4745-1-git-send-email-stefanha@linux.vnet.ibm.com>
Subject: [Qemu-devel] [PATCH 04/13] qed: extract qed_start_allocating_write()
List-Id:
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
To: qemu-devel@nongnu.org
Cc: Kevin Wolf , Anthony Liguori , Stefan Hajnoczi , Adam Litke

Copy-on-read requests are a form of allocating write and will need to be
queued like other allocating writes.  This patch extracts the request
queuing code for allocating writes so that it can be reused for
copy-on-read in a later patch.
Signed-off-by: Stefan Hajnoczi
---
 block/qed.c |   32 ++++++++++++++++++++++++++------
 1 files changed, 26 insertions(+), 6 deletions(-)

diff --git a/block/qed.c b/block/qed.c
index 565bbc1..cc193ad 100644
--- a/block/qed.c
+++ b/block/qed.c
@@ -1097,14 +1097,15 @@ static bool qed_should_set_need_check(BDRVQEDState *s)
 }
 
 /**
- * Write new data cluster
+ * Start an allocating write request or queue it
  *
- * @acb:        Write request
- * @len:        Length in bytes
+ * @ret:        true if request can proceed, false if queued
  *
- * This path is taken when writing to previously unallocated clusters.
+ * If a request is queued this function returns false and the caller should
+ * return.  When it becomes time for the request to proceed the qed_aio_next()
+ * function will be called.
  */
-static void qed_aio_write_alloc(QEDAIOCB *acb, size_t len)
+static bool qed_start_allocating_write(QEDAIOCB *acb)
 {
     BDRVQEDState *s = acb_to_s(acb);
 
@@ -1119,7 +1120,26 @@ static void qed_aio_write_alloc(QEDAIOCB *acb, size_t len)
     }
     if (acb != QSIMPLEQ_FIRST(&s->allocating_write_reqs) ||
         s->allocating_write_reqs_plugged) {
-        return; /* wait for existing request to finish */
+        return false;
+    }
+    return true;
+}
+
+/**
+ * Write new data cluster
+ *
+ * @acb:        Write request
+ * @len:        Length in bytes
+ *
+ * This path is taken when writing to previously unallocated clusters.
+ */
+static void qed_aio_write_alloc(QEDAIOCB *acb, size_t len)
+{
+    BDRVQEDState *s = acb_to_s(acb);
+    BlockDriverCompletionFunc *cb;
+
+    if (!qed_start_allocating_write(acb)) {
+        return;
     }
 
     acb->cur_nclusters = qed_bytes_to_clusters(s,
-- 
1.7.5.3