From: Paul Brook <paul@codesourcery.com>
Subject: Re: [PATCH 2/6] PCI DMA API (v2)
Date: Mon, 7 Apr 2008 10:44:41 -0500
Message-ID: <200804071044.42048.paul@codesourcery.com>
References: <1207368175-19476-1-git-send-email-aliguori@us.ibm.com> <1207368175-19476-2-git-send-email-aliguori@us.ibm.com>
In-Reply-To: <1207368175-19476-2-git-send-email-aliguori@us.ibm.com>
Reply-To: qemu-devel@nongnu.org
To: qemu-devel@nongnu.org
Cc: kvm-devel@lists.sourceforge.net, Marcelo Tosatti, Anthony Liguori, Aurelien Jarno
Content-Type: text/plain; charset="iso-8859-1"
List-Id: kvm.vger.kernel.org

> +/* Return a new IOVector that's a subset of the passed in IOVector.  It should
> + * be freed with qemu_free when you are done with it. */
> +IOVector *iovector_trim(const IOVector *iov, size_t offset, size_t size);

Using qemu_free directly seems like a bad idea.  I guess we're likely to want
to switch to a different memory allocation scheme in the future.  The comment
is also potentially misleading because iovector_new() doesn't mention anything
about having to free the vector.

> +int bdrv_readv(BlockDriverState *bs, int64_t sector_num,
> ...
> +    size = iovector_size(iovec);
> +    buffer = qemu_malloc(size);

This concerns me for two reasons:
(a) I'm always suspicious about the performance implications of using malloc
on a hot path.
(b) The size of the buffer is unbounded.  I'd expect multi-megabyte transfers
to be common, and gigabyte-sized operations are plausible.

At minimum you need a comment acknowledging that we've considered these
issues.
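Both points could be addressed along these lines - a minimal sketch, not code
from the patch: BOUNCE_SIZE, bounded_copy() and copy_chunk() are hypothetical
names, and the real implementation would of course read from the block driver
rather than from another buffer.  The idea is a fixed-size, reused bounce
buffer so no per-request allocation happens and the working set is bounded
regardless of the transfer size:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Illustrative cap on per-iteration copy size; not a QEMU constant. */
#define BOUNCE_SIZE (64 * 1024)

/* Stand-in for the per-chunk transfer (would be a bdrv read in reality). */
static void copy_chunk(char *dst, const char *src, size_t len)
{
    memcpy(dst, src, len);
}

/* Copy 'size' bytes in bounded chunks through one static bounce buffer,
 * instead of qemu_malloc()ing the whole (unbounded) vector size at once. */
static void bounded_copy(char *dst, const char *src, size_t size)
{
    static char bounce[BOUNCE_SIZE];    /* reused; no malloc on the hot path */
    size_t done = 0;

    while (done < size) {
        size_t n = size - done < BOUNCE_SIZE ? size - done : BOUNCE_SIZE;
        copy_chunk(bounce, src + done, n);
        copy_chunk(dst + done, bounce, n);
        done += n;
    }
}
```

The same structure also answers the qemu_free point: callers only ever see the
bounded helper, so the allocation scheme can change in one place.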
> +void *cpu_map_physical_page(target_phys_addr_t addr)
> ...
> +    /* DMA'ing to MMIO, just skip */
> +    phys_offset = cpu_get_physical_page_desc(addr);
> +    if ((phys_offset & ~TARGET_PAGE_MASK) != IO_MEM_RAM)
> +        return NULL;

This is not OK.  It's fairly common for smaller devices to use a separate DMA
engine that writes to an MMIO region.  You also never check the return value
of this function, so it will crash qemu.

> +void pci_device_dma_unmap(PCIDevice *s, const IOVector *orig,

This function should not exist.  Dirty bits should be set by the memcpy
routines.

Paul
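For the return-value point, the caller shape I'd expect looks roughly like
this - a sketch only, with map_page() as a stub standing in for
cpu_map_physical_page() and dma_write() as a hypothetical caller; the MMIO
slow path would really go through the I/O dispatch rather than failing:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

static char fake_ram[4096];

/* Stub for cpu_map_physical_page(): pretend odd addresses are MMIO and
 * return NULL for them, as the quoted hunk does. */
static void *map_page(unsigned long addr)
{
    return (addr & 1) ? NULL : fake_ram;
}

/* The caller must check for NULL and take a slow path instead of
 * dereferencing the result blindly. */
static int dma_write(unsigned long addr, const void *buf, size_t len)
{
    void *p = map_page(addr);

    if (p == NULL) {
        /* MMIO target: would fall back to cpu_physical_memory_write()
         * style byte-wise I/O here instead of erroring out. */
        return -1;
    }
    memcpy(p, buf, len);
    return 0;
}
```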