From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 20 Jun 2012 16:16:47 -0500
From: Anthony Liguori
Message-ID: <4FE23DBF.9040706@codemonkey.ws>
In-Reply-To: <1340087992-2399-4-git-send-email-benh@kernel.crashing.org>
References: <1340087992-2399-1-git-send-email-benh@kernel.crashing.org>
 <1340087992-2399-4-git-send-email-benh@kernel.crashing.org>
Subject: Re: [Qemu-devel] [PATCH 03/13] iommu: Add universal DMA helper functions
To: Benjamin Herrenschmidt
Cc: Eduard - Gabriel Munteanu, Richard Henderson,
 "Michael S. Tsirkin", qemu-devel@nongnu.org, David Gibson

On 06/19/2012 01:39 AM, Benjamin Herrenschmidt wrote:
> From: David Gibson
>
> Not that long ago, every device implementation using DMA directly
> accessed guest memory using cpu_physical_memory_*(). This meant that
> adding support for a guest visible IOMMU would require changing every
> one of these devices to go through IOMMU translation.
>
> Shortly before qemu 1.0, I made a start on fixing this by providing
> helper functions for PCI DMA.
> These are currently just stubs which call the direct access
> functions, but mean that an IOMMU can be implemented in one place,
> rather than for every PCI device.
>
> Clearly, this doesn't help for non-PCI devices, which could also be
> IOMMU translated on some platforms. It is also problematic for
> devices which have both PCI and non-PCI versions (e.g. OHCI, AHCI) -
> we cannot use the pci_dma_*() functions, because they assume the
> presence of a PCIDevice, but we don't want to have to check between
> pci_dma_*() and cpu_physical_memory_*() every time we do a DMA in
> the device code.
>
> This patch takes the first step toward addressing both these
> problems by introducing new (stub) DMA helper functions which can be
> used for any DMA-capable device.
>
> These DMA functions take a DMAContext *, a new (currently empty)
> structure describing the DMA address space in which the operation is
> to take place. NULL indicates untranslated DMA directly into guest
> physical address space. The intention is that in future non-NULL
> values will give information about any necessary IOMMU translation.
>
> DMA-using devices must obtain a DMAContext (or, potentially,
> contexts) from their bus or platform. For now this patch just
> converts the PCI wrappers to be implemented in terms of the
> universal wrappers; converting other drivers can take place over
> time.
>
> Cc: Michael S. Tsirkin
> Cc: Eduard - Gabriel Munteanu
> Cc: Richard Henderson
>
> Signed-off-by: David Gibson
> Signed-off-by: Benjamin Herrenschmidt
> ---
>  dma.h         |  100 +++++++++++++++++++++++++++++++++++++++++++++++++++
>  hw/pci.h      |   21 ++++++------
>  qemu-common.h |    1 +
>  3 files changed, 113 insertions(+), 9 deletions(-)
>
> diff --git a/dma.h b/dma.h
> index fe08b72..4449a0c 100644
> --- a/dma.h
> +++ b/dma.h
> @@ -34,6 +34,106 @@ typedef target_phys_addr_t dma_addr_t;
>  #define DMA_ADDR_BITS TARGET_PHYS_ADDR_BITS
>  #define DMA_ADDR_FMT TARGET_FMT_plx
>
> +/* Checks that the given range of addresses is valid for DMA. This is
> + * useful for certain cases, but usually you should just use
> + * dma_memory_{read,write}() and check for errors */
> +static inline bool dma_memory_valid(DMAContext *dma, dma_addr_t addr,
> +                                    dma_addr_t len, DMADirection dir)
> +{
> +    /* Stub version, with no iommu we assume all bus addresses are valid */
> +    return true;
> +}
> +
> +static inline int dma_memory_rw(DMAContext *dma, dma_addr_t addr,
> +                                void *buf, dma_addr_t len, DMADirection dir)
> +{
> +    /* Stub version when we have no iommu support */
> +    cpu_physical_memory_rw(addr, buf, (target_phys_addr_t)len,
> +                           dir == DMA_DIRECTION_FROM_DEVICE);
> +    return 0;
> +}
> +
> +static inline int dma_memory_read(DMAContext *dma, dma_addr_t addr,
> +                                  void *buf, dma_addr_t len)
> +{
> +    return dma_memory_rw(dma, addr, buf, len, DMA_DIRECTION_TO_DEVICE);
> +}
> +
> +static inline int dma_memory_write(DMAContext *dma, dma_addr_t addr,
> +                                   const void *buf, dma_addr_t len)
> +{
> +    return dma_memory_rw(dma, addr, (void *)buf, len,
> +                         DMA_DIRECTION_FROM_DEVICE);
> +}
> +
> +static inline int dma_memory_set(DMAContext *dma, dma_addr_t addr,
> +                                 uint8_t c, dma_addr_t len)
> +{
> +    /* Stub version when we have no iommu support */
> +    cpu_physical_memory_set(addr, c, len);
> +    return 0;
> +}
> +
> +static inline void *dma_memory_map(DMAContext *dma,
> +                                   dma_addr_t addr, dma_addr_t *len,
> +                                   DMADirection dir)
> +{
> +    target_phys_addr_t xlen = *len;
> +    void *p;
> +
> +    p = cpu_physical_memory_map(addr, &xlen,
> +                                dir == DMA_DIRECTION_FROM_DEVICE);
> +    *len = xlen;
> +    return p;
> +}
> +
> +static inline void dma_memory_unmap(DMAContext *dma,
> +                                    void *buffer, dma_addr_t len,
> +                                    DMADirection dir, dma_addr_t access_len)
> +{
> +    return cpu_physical_memory_unmap(buffer, (target_phys_addr_t)len,
> +                                     dir == DMA_DIRECTION_FROM_DEVICE,
> +                                     access_len);
> +}
> +
> +#define DEFINE_LDST_DMA(_lname, _sname, _bits, _end) \
> +    static inline uint##_bits##_t ld##_lname##_##_end##_dma(DMAContext *dma, \
> +                                                            dma_addr_t addr) \
> +    { \
> +        uint##_bits##_t val; \
> +        dma_memory_read(dma, addr, &val, (_bits) / 8); \
> +        return _end##_bits##_to_cpu(val); \
> +    } \
> +    static inline void st##_sname##_##_end##_dma(DMAContext *dma, \
> +                                                 dma_addr_t addr, \
> +                                                 uint##_bits##_t val) \
> +    { \
> +        val = cpu_to_##_end##_bits(val); \
> +        dma_memory_write(dma, addr, &val, (_bits) / 8); \
> +    }
> +
> +static inline uint8_t ldub_dma(DMAContext *dma, dma_addr_t addr)
> +{
> +    uint8_t val;
> +
> +    dma_memory_read(dma, addr, &val, 1);
> +    return val;
> +}
> +
> +static inline void stb_dma(DMAContext *dma, dma_addr_t addr, uint8_t val)
> +{
> +    dma_memory_write(dma, addr, &val, 1);
> +}
> +
> +DEFINE_LDST_DMA(uw, w, 16, le);
> +DEFINE_LDST_DMA(l, l, 32, le);
> +DEFINE_LDST_DMA(q, q, 64, le);
> +DEFINE_LDST_DMA(uw, w, 16, be);
> +DEFINE_LDST_DMA(l, l, 32, be);
> +DEFINE_LDST_DMA(q, q, 64, be);
> +
> +#undef DEFINE_LDST_DMA
> +
>  struct ScatterGatherEntry {
>      dma_addr_t base;
>      dma_addr_t len;
> diff --git a/hw/pci.h b/hw/pci.h
> index 7f223c0..ee669d9 100644
> --- a/hw/pci.h
> +++ b/hw/pci.h
> @@ -558,10 +558,16 @@ static inline uint32_t pci_config_size(const PCIDevice *d)
>  }
>
>  /* DMA access functions */
> +static inline DMAContext *pci_dma_context(PCIDevice *dev)
> +{
> +    /* Stub for when we have no PCI iommu support */
> +    return NULL;
> +}

Why is all of this stuff static inline?
> +
>  static inline int pci_dma_rw(PCIDevice *dev, dma_addr_t addr,
>                               void *buf, dma_addr_t len, DMADirection dir)
>  {
> -    cpu_physical_memory_rw(addr, buf, len, dir == DMA_DIRECTION_FROM_DEVICE);
> +    dma_memory_rw(pci_dma_context(dev), addr, buf, len, dir);
>      return 0;
>  }
>
> @@ -581,12 +587,12 @@ static inline int pci_dma_write(PCIDevice *dev, dma_addr_t addr,
>      static inline uint##_bits##_t ld##_l##_pci_dma(PCIDevice *dev, \
>                                                     dma_addr_t addr) \
>      { \
> -        return ld##_l##_phys(addr); \
> +        return ld##_l##_dma(pci_dma_context(dev), addr); \
>      } \
>      static inline void st##_s##_pci_dma(PCIDevice *dev, \
> -                                        dma_addr_t addr, uint##_bits##_t val) \
> +                                        dma_addr_t addr, uint##_bits##_t val) \
>      { \
> -        st##_s##_phys(addr, val); \
> +        st##_s##_dma(pci_dma_context(dev), addr, val); \
>      }
>
>  PCI_DMA_DEFINE_LDST(ub, b, 8);
> @@ -602,19 +608,16 @@ PCI_DMA_DEFINE_LDST(q_be, q_be, 64);
>  static inline void *pci_dma_map(PCIDevice *dev, dma_addr_t addr,
>                                  dma_addr_t *plen, DMADirection dir)
>  {
> -    target_phys_addr_t len = *plen;
>      void *buf;
>
> -    buf = cpu_physical_memory_map(addr, &len, dir == DMA_DIRECTION_FROM_DEVICE);
> -    *plen = len;
> +    buf = dma_memory_map(pci_dma_context(dev), addr, plen, dir);
>      return buf;
>  }
>
>  static inline void pci_dma_unmap(PCIDevice *dev, void *buffer, dma_addr_t len,
>                                   DMADirection dir, dma_addr_t access_len)
>  {
> -    cpu_physical_memory_unmap(buffer, len, dir == DMA_DIRECTION_FROM_DEVICE,
> -                              access_len);
> +    dma_memory_unmap(pci_dma_context(dev), buffer, len, dir, access_len);
>  }
>
>  static inline void pci_dma_sglist_init(QEMUSGList *qsg, PCIDevice *dev,
> diff --git a/qemu-common.h b/qemu-common.h
> index 8f87e41..80026af 100644
> --- a/qemu-common.h
> +++ b/qemu-common.h
> @@ -264,6 +264,7 @@ typedef struct EventNotifier EventNotifier;
>  typedef struct VirtIODevice VirtIODevice;
>  typedef struct QEMUSGList QEMUSGList;
>  typedef struct SHPCDevice SHPCDevice;
> +typedef struct DMAContext DMAContext;

Please don't put this in qemu-common.h.  Stick it in a dma-specific
header.

Regards,

Anthony Liguori

>  typedef uint64_t pcibus_t;
>