From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <4F59D75D.5040505@redhat.com>
Date: Fri, 09 Mar 2012 11:11:41 +0100
From: Paolo Bonzini
References: <1331269308-22372-1-git-send-email-david@gibson.dropbear.id.au>
 <1331269308-22372-8-git-send-email-david@gibson.dropbear.id.au>
In-Reply-To: <1331269308-22372-8-git-send-email-david@gibson.dropbear.id.au>
Subject: Re: [Qemu-devel] [PATCH 07/13] iommu: Make sglists and dma_bdrv helpers use new universal DMA helpers
To: David Gibson
Cc: Kevin Wolf , mst@redhat.com, agraf@suse.de, qemu-devel@nongnu.org, eduard.munteanu@linux360.ro, rth@twiddle.net

On 09/03/2012 06:01, David Gibson wrote:
> dma-helpers.c contains a number of helper functions for doing
> scatter/gather DMA, and various block device related DMA.  Currently,
> these directly access guest memory using cpu_physical_memory_*(),
> assuming no IOMMU translation.
>
> This patch updates this code to use the new universal DMA helper
> functions.  qemu_sglist_init() now takes a DMAContext * to describe
> the DMA address space in which the scatter/gather will take place.
>
> We minimally update the callers of qemu_sglist_init() to pass NULL
> (i.e. no translation, same as current behaviour).  Some of those
> callers should pass something else in some cases to allow proper IOMMU
> translation in future, but that will be fixed in later patches.
>
> Cc: Kevin Wolf
> Cc: Michael S. Tsirkin
>
> Signed-off-by: David Gibson
> ---
>  dma-helpers.c  |   26 ++++++++++++++++++--------
>  dma.h          |    3 ++-
>  hw/ide/ahci.c  |    3 ++-
>  hw/ide/macio.c |    4 ++--
>  hw/pci.h       |    2 +-
>  5 files changed, 25 insertions(+), 13 deletions(-)
>
> diff --git a/dma-helpers.c b/dma-helpers.c
> index 5f19a85..9dcfb2c 100644
> --- a/dma-helpers.c
> +++ b/dma-helpers.c
> @@ -11,12 +11,13 @@
>  #include "block_int.h"
>  #include "trace.h"
>
> -void qemu_sglist_init(QEMUSGList *qsg, int alloc_hint)
> +void qemu_sglist_init(QEMUSGList *qsg, int alloc_hint, DMAContext *dma)
>  {
>      qsg->sg = g_malloc(alloc_hint * sizeof(ScatterGatherEntry));
>      qsg->nsg = 0;
>      qsg->nalloc = alloc_hint;
>      qsg->size = 0;
> +    qsg->dma = dma;
>  }
>
>  void qemu_sglist_add(QEMUSGList *qsg, dma_addr_t base, dma_addr_t len)
> @@ -75,10 +76,9 @@ static void dma_bdrv_unmap(DMAAIOCB *dbs)
>      int i;
>
>      for (i = 0; i < dbs->iov.niov; ++i) {
> -        cpu_physical_memory_unmap(dbs->iov.iov[i].iov_base,
> -                                  dbs->iov.iov[i].iov_len,
> -                                  dbs->dir != DMA_DIRECTION_TO_DEVICE,
> -                                  dbs->iov.iov[i].iov_len);
> +        dma_memory_unmap(dbs->sg->dma, dbs->iov.iov[i].iov_base,
> +                         dbs->iov.iov[i].iov_len, dbs->dir,
> +                         dbs->iov.iov[i].iov_len);
>      }
>      qemu_iovec_reset(&dbs->iov);
>  }
> @@ -104,10 +104,20 @@ static void dma_complete(DMAAIOCB *dbs, int ret)
>      }
>  }
>
> +static void dma_bdrv_cancel(void *opaque)
> +{
> +    DMAAIOCB *dbs = opaque;
> +
> +    bdrv_aio_cancel(dbs->acb);
> +    dma_bdrv_unmap(dbs);
> +    qemu_iovec_destroy(&dbs->iov);
> +    qemu_aio_release(dbs);
> +}

What Kevin said.  Instead of a generic callback, dma_memory_map should
probably just receive the AIOCB (in this case &dbs->common) and call
bdrv_aio_cancel on it.

>  static void dma_bdrv_cb(void *opaque, int ret)
>  {
>      DMAAIOCB *dbs = (DMAAIOCB *)opaque;
> -    target_phys_addr_t cur_addr, cur_len;
> +    dma_addr_t cur_addr, cur_len;
>      void *mem;
>
>      trace_dma_bdrv_cb(dbs, ret);
> @@ -124,8 +134,8 @@ static void dma_bdrv_cb(void *opaque, int ret)
>      while (dbs->sg_cur_index < dbs->sg->nsg) {
>          cur_addr = dbs->sg->sg[dbs->sg_cur_index].base + dbs->sg_cur_byte;
>          cur_len = dbs->sg->sg[dbs->sg_cur_index].len - dbs->sg_cur_byte;
> -        mem = cpu_physical_memory_map(cur_addr, &cur_len,
> -                                      dbs->dir != DMA_DIRECTION_TO_DEVICE);
> +        mem = dma_memory_map(dbs->sg->dma, dma_bdrv_cancel, dbs,
> +                             cur_addr, &cur_len, dbs->dir);
>          if (!mem)
>              break;
>          qemu_iovec_add(&dbs->iov, mem, cur_len);

dma_buf_rw should also use the DMAContext here (passing a NULL
invalidate function).

Paolo
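
To make the dma_memory_map suggestion concrete: a minimal sketch of the
alternative API shape, assuming the DMAContext, dma_addr_t and
DMADirection types from this series.  The prototype and the rewritten
call are illustrative only, not code from the posted patches.

/*
 * Hypothetical alternative prototype -- not the one posted in this
 * series, which takes an invalidate callback plus an opaque pointer.
 * Passing the AIOCB lets the generic DMA layer cancel the request
 * itself, via bdrv_aio_cancel(), when an IOMMU invalidation hits a
 * live mapping.
 */
void *dma_memory_map(DMAContext *dma, BlockDriverAIOCB *aiocb,
                     dma_addr_t addr, dma_addr_t *len, DMADirection dir);

/*
 * The call in dma_bdrv_cb() would then become, roughly:
 *
 *     mem = dma_memory_map(dbs->sg->dma, &dbs->common,
 *                          cur_addr, &cur_len, dbs->dir);
 *
 * and the dma_bdrv_cancel() helper added by the patch would no longer
 * be needed.
 */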
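
Likewise, a minimal sketch of dma_buf_rw() going through the
QEMUSGList's DMAContext.  The body below is an approximation of the
existing copy loop, and the dma_memory_rw() signature used here is an
assumption about the series' helper, not a quote from it.

static uint64_t dma_buf_rw(uint8_t *ptr, int32_t len, QEMUSGList *sg,
                           DMADirection dir)
{
    uint64_t resid = sg->size;
    int sg_cur_index = 0;

    len = MIN(len, resid);
    while (len > 0) {
        ScatterGatherEntry entry = sg->sg[sg_cur_index++];
        int32_t xfer = MIN(len, entry.len);

        /* Access guest memory through the sglist's context instead of
         * cpu_physical_memory_rw().  If the series' helper also takes
         * an invalidate callback, NULL would do here, since this copy
         * is synchronous -- which is the point of the comment above. */
        dma_memory_rw(sg->dma, entry.base, ptr, xfer, dir);
        ptr += xfer;
        len -= xfer;
        resid -= xfer;
    }

    return resid;
}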