From mboxrd@z Thu Jan  1 00:00:00 1970
From: Mark McLoughlin <markmc@redhat.com>
Date: Thu, 14 May 2009 11:56:21 +0100
Message-Id: <1242298581-30587-5-git-send-email-markmc@redhat.com>
In-Reply-To: <1242298581-30587-4-git-send-email-markmc@redhat.com>
References: <1240265600-9469-1-git-send-email-ryanh@us.ibm.com>
 <1242298581-30587-1-git-send-email-markmc@redhat.com>
 <1242298581-30587-2-git-send-email-markmc@redhat.com>
 <1242298581-30587-3-git-send-email-markmc@redhat.com>
 <1242298581-30587-4-git-send-email-markmc@redhat.com>
Subject: [Qemu-devel] [STABLE][PATCH 4/4] Fix DMA API when handling an
 immediate error from block layer
List-Id: qemu-devel.nongnu.org
To: Anthony Liguori
Cc: qemu-devel@nongnu.org, Avi Kivity

From: Avi Kivity

The block layer may signal an immediate error on an asynchronous request
by returning NULL.  The DMA API did not handle this correctly, returning
an AIO request which would never complete (and which would crash if
cancelled).  Fix by detecting the failure and propagating it.

Signed-off-by: Avi Kivity
---
 dma-helpers.c |   27 +++++++++++++++++++++------
 1 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/dma-helpers.c b/dma-helpers.c
index 96a120c..1469e34 100644
--- a/dma-helpers.c
+++ b/dma-helpers.c
@@ -70,20 +70,26 @@ static void continue_after_map_failure(void *opaque)
     qemu_bh_schedule(dbs->bh);
 }
 
-static void dma_bdrv_cb(void *opaque, int ret)
+static void dma_bdrv_unmap(DMAAIOCB *dbs)
 {
-    DMAAIOCB *dbs = (DMAAIOCB *)opaque;
-    target_phys_addr_t cur_addr, cur_len;
-    void *mem;
     int i;
 
-    dbs->acb = NULL;
-    dbs->sector_num += dbs->iov.size / 512;
     for (i = 0; i < dbs->iov.niov; ++i) {
         cpu_physical_memory_unmap(dbs->iov.iov[i].iov_base,
                                   dbs->iov.iov[i].iov_len, !dbs->is_write,
                                   dbs->iov.iov[i].iov_len);
     }
+}
+
+void dma_bdrv_cb(void *opaque, int ret)
+{
+    DMAAIOCB *dbs = (DMAAIOCB *)opaque;
+    target_phys_addr_t cur_addr, cur_len;
+    void *mem;
+
+    dbs->acb = NULL;
+    dbs->sector_num += dbs->iov.size / 512;
+    dma_bdrv_unmap(dbs);
     qemu_iovec_reset(&dbs->iov);
 
     if (dbs->sg_cur_index == dbs->sg->nsg || ret < 0) {
@@ -119,6 +125,11 @@ static void dma_bdrv_cb(void *opaque, int ret)
         dbs->acb = bdrv_aio_readv(dbs->bs, dbs->sector_num, &dbs->iov,
                                   dbs->iov.size / 512, dma_bdrv_cb, dbs);
     }
+    if (!dbs->acb) {
+        dma_bdrv_unmap(dbs);
+        qemu_iovec_destroy(&dbs->iov);
+        return;
+    }
 }
 
 static BlockDriverAIOCB *dma_bdrv_io(
@@ -138,6 +149,10 @@ static BlockDriverAIOCB *dma_bdrv_io(
     dbs->bh = NULL;
     qemu_iovec_init(&dbs->iov, sg->nsg);
     dma_bdrv_cb(dbs, 0);
+    if (!dbs->acb) {
+        qemu_aio_release(dbs);
+        return NULL;
+    }
     return &dbs->common;
 }
 
-- 
1.6.0.6
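
Devices consume this API through the dma_bdrv_read()/dma_bdrv_write()
wrappers around dma_bdrv_io(); with the fix above, a NULL return from them
now signals an immediate, synchronous failure that the caller must handle.
A minimal caller-side sketch, assuming the qemu 0.10-era dma-helpers API
(MyDeviceState, my_dma_complete and my_start_dma_read are hypothetical
names, not part of the patch):

    #include "block.h"   /* BlockDriverState, BlockDriverAIOCB */
    #include "dma.h"     /* QEMUSGList, dma_bdrv_read() */

    /* Hypothetical emulated-device state; only the fields used below. */
    typedef struct MyDeviceState {
        BlockDriverState *bs;
        BlockDriverAIOCB *aiocb;
        QEMUSGList sg;
    } MyDeviceState;

    static void my_dma_complete(void *opaque, int ret)
    {
        MyDeviceState *s = opaque;

        s->aiocb = NULL;
        /* report ret (0 on success, negative on I/O error) to the guest */
    }

    static int my_start_dma_read(MyDeviceState *s, uint64_t sector)
    {
        s->aiocb = dma_bdrv_read(s->bs, &s->sg, sector,
                                 my_dma_complete, s);
        if (!s->aiocb) {
            /* With this patch, NULL means the block layer rejected the
             * request up front: fail it now rather than waiting for a
             * completion callback that will never run (and which, before
             * the fix, would crash if the request were cancelled). */
            return -1;
        }
        return 0;
    }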