Subject: Re: [Qemu-devel] [STABLE][PATCH 4/4] Fix DMA API when handling an immediate error from block layer
From: Mark McLoughlin
Date: Thu, 14 May 2009 12:39:54 +0100
Message-Id: <1242301194.32336.2.camel@blaa>
In-Reply-To: <1242298581-30587-5-git-send-email-markmc@redhat.com>
References: <1240265600-9469-1-git-send-email-ryanh@us.ibm.com>
 <1242298581-30587-1-git-send-email-markmc@redhat.com>
 <1242298581-30587-2-git-send-email-markmc@redhat.com>
 <1242298581-30587-3-git-send-email-markmc@redhat.com>
 <1242298581-30587-4-git-send-email-markmc@redhat.com>
 <1242298581-30587-5-git-send-email-markmc@redhat.com>
To: Anthony Liguori
Cc: qemu-devel@nongnu.org, Avi Kivity

On Thu, 2009-05-14 at 11:56 +0100, Mark McLoughlin wrote:
> From: Avi Kivity
>
> The block layer may signal an immediate error on an asynchronous request
> by returning NULL. The DMA API did not handle this correctly, returning
> an AIO request which would never complete (and which would crash if
> cancelled).
>
> Fix by detecting the failure and propagating it.
>
> Signed-off-by: Avi Kivity
> Signed-off-by: Mark McLoughlin
> ---
>  dma-helpers.c |   27 +++++++++++++++++++++------
>  1 files changed, 21 insertions(+), 6 deletions(-)
>
> diff --git a/dma-helpers.c b/dma-helpers.c
> index 96a120c..1469e34 100644
> --- a/dma-helpers.c
> +++ b/dma-helpers.c
> @@ -70,20 +70,26 @@ static void continue_after_map_failure(void *opaque)
>      qemu_bh_schedule(dbs->bh);
>  }
>
> -static void dma_bdrv_cb(void *opaque, int ret)
> +static void dma_bdrv_unmap(DMAAIOCB *dbs)
>  {
> -    DMAAIOCB *dbs = (DMAAIOCB *)opaque;
> -    target_phys_addr_t cur_addr, cur_len;
> -    void *mem;
>      int i;
>
> -    dbs->acb = NULL;
> -    dbs->sector_num += dbs->iov.size / 512;
>      for (i = 0; i < dbs->iov.niov; ++i) {
>          cpu_physical_memory_unmap(dbs->iov.iov[i].iov_base,
>                                    dbs->iov.iov[i].iov_len, !dbs->is_write,
>                                    dbs->iov.iov[i].iov_len);
>      }
> +}
> +
> +void dma_bdrv_cb(void *opaque, int ret)
> +{
> +    DMAAIOCB *dbs = (DMAAIOCB *)opaque;
> +    target_phys_addr_t cur_addr, cur_len;
> +    void *mem;
> +
> +    dbs->acb = NULL;
> +    dbs->sector_num += dbs->iov.size / 512;
> +    dma_bdrv_unmap(dbs);
>      qemu_iovec_reset(&dbs->iov);
>
>      if (dbs->sg_cur_index == dbs->sg->nsg || ret < 0) {
> @@ -119,6 +125,11 @@ static void dma_bdrv_cb(void *opaque, int ret)
>          dbs->acb = bdrv_aio_readv(dbs->bs, dbs->sector_num, &dbs->iov,
>                                    dbs->iov.size / 512, dma_bdrv_cb, dbs);
>      }
> +    if (!dbs->acb) {
> +        dma_bdrv_unmap(dbs);
> +        qemu_iovec_destroy(&dbs->iov);
> +        return;
> +    }
>  }
>
>  static BlockDriverAIOCB *dma_bdrv_io(
> @@ -138,6 +149,10 @@ static BlockDriverAIOCB *dma_bdrv_io(
>      dbs->bh = NULL;
>      qemu_iovec_init(&dbs->iov, sg->nsg);
>      dma_bdrv_cb(dbs, 0);
> +    if (!dbs->acb) {
> +        qemu_aio_release(dbs);
> +        return NULL;
> +    }
>      return &dbs->common;
>  }
>
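
For reference, here is how a caller would consume the fixed API. With this
patch, an immediate block-layer failure propagates out of dma_bdrv_io() as a
NULL return (via the dma_bdrv_read()/dma_bdrv_write() wrappers) instead of an
AIOCB that never completes. The sketch below is illustrative only: the device
state, my_report_error() and my_request_done() are made-up names, and the
dma_bdrv_read() signature is assumed to be the one from dma-helpers.h of this
period.

/* Hypothetical device model; only dma_bdrv_read() and the completion
 * callback convention come from the DMA API -- the rest is illustrative. */
typedef struct MyDevState {
    BlockDriverState *bs;
    QEMUSGList sg;
    BlockDriverAIOCB *acb;
} MyDevState;

static void my_dma_complete(void *opaque, int ret)
{
    MyDevState *s = opaque;

    s->acb = NULL;
    if (ret < 0) {
        my_report_error(s);   /* made-up helper: signal I/O error to guest */
        return;
    }
    my_request_done(s);       /* made-up helper: complete the guest request */
}

static void my_start_read(MyDevState *s, uint64_t sector)
{
    s->acb = dma_bdrv_read(s->bs, &s->sg, sector, my_dma_complete, s);
    if (!s->acb) {
        /* Previously this case handed back an AIOCB that never completed
         * (and crashed if cancelled); now the failure is visible here. */
        my_report_error(s);
    }
}

Note the design choice in dma_bdrv_io(): when the initial dma_bdrv_cb(dbs, 0)
call fails immediately, the AIOCB is released with qemu_aio_release() before
returning NULL, so the caller never sees a half-initialised request it would
otherwise have to cancel or leak.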