From: Paul Brook
Subject: Re: [Qemu-devel] [PATCH 2/6] PCI DMA API (v2)
Date: Mon, 7 Apr 2008 10:44:41 -0500
To: qemu-devel@nongnu.org
Cc: kvm-devel@lists.sourceforge.net, Marcelo Tosatti, Anthony Liguori, Aurelien Jarno
In-Reply-To: <1207368175-19476-2-git-send-email-aliguori@us.ibm.com>
Message-Id: <200804071044.42048.paul@codesourcery.com>

> +/* Return a new IOVector that's a subset of the passed in IOVector.  It
> + * should be freed with qemu_free when you are done with it. */
> +IOVector *iovector_trim(const IOVector *iov, size_t offset, size_t size);

Using qemu_free directly seems a bad idea. I guess we're likely to want to
switch to a different memory allocation scheme in the future. The comment is
also potentially misleading, because iovector_new() doesn't mention anything
about having to free the vector.

> +int bdrv_readv(BlockDriverState *bs, int64_t sector_num,
> ...
> +    size = iovector_size(iovec);
> +    buffer = qemu_malloc(size);

This concerns me for two reasons: (a) I'm always suspicious of the
performance implications of using malloc on a hot path. (b) The size of the
buffer is unbounded. I'd expect multi-megabyte transfers to be common, and
gigabyte-sized operations are plausible. At minimum you need a comment
acknowledging that these issues have been considered. (There's a sketch of a
bounded alternative in the postscript below.)

> +void *cpu_map_physical_page(target_phys_addr_t addr)
> +    /* DMA'ing to MMIO, just skip */
> +    phys_offset = cpu_get_physical_page_desc(addr);
> +    if ((phys_offset & ~TARGET_PAGE_MASK) != IO_MEM_RAM)
> +        return NULL;

This is not OK. It's fairly common for smaller devices to use a separate DMA
engine that writes to an MMIO region. You also never check the return value
of this function, so it will crash qemu.

> +void pci_device_dma_unmap(PCIDevice *s, const IOVector *orig,

This function should not exist. Dirty bits should be set by the memcpy
routines; the sketch below shows what I mean.

Paul
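
P.S. To make the MMIO and dirty-bit points concrete, here is an untested
sketch of the kind of copy helper I mean. The IOVector field names
(num, sg[].base, sg[].len) are guesses rather than whatever the patch
actually uses; cpu_physical_memory_rw() is the existing interface, and it
already dispatches MMIO writes to the io_mem handlers and sets the dirty
bits when it writes to RAM:

/* Untested sketch; IOVector field names are assumed, not the patch's.
 * Routing the copy through cpu_physical_memory_rw() means MMIO targets
 * go through the normal io_mem callbacks and RAM writes set the dirty
 * bits as a side effect, so neither cpu_map_physical_page() nor
 * pci_device_dma_unmap() is needed on this path. */
static void iovector_write(const IOVector *iov, const void *buf)
{
    const uint8_t *p = buf;
    int i;

    for (i = 0; i < iov->num; i++) {
        /* is_write = 1: copy from the buffer into guest memory */
        cpu_physical_memory_rw(iov->sg[i].base, (uint8_t *)p,
                               iov->sg[i].len, 1);
        p += iov->sg[i].len;
    }
}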
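
Similarly, for the unbounded qemu_malloc in bdrv_readv, something along
these lines caps the footprint. Again untested: the 64k cap is arbitrary,
and iovector_write_at() is a hypothetical helper (copy len bytes into the
vector starting offset bytes in), not something the patch provides:

#define BOUNCE_BYTES  (64 * 1024)   /* arbitrary cap for the sketch */

/* Untested sketch: read through one fixed-size bounce buffer instead of
 * qemu_malloc'ing iovector_size() bytes in one go.  Assumes the vector's
 * total size is a multiple of the 512-byte sector size, which a sector
 * read interface requires anyway. */
static int bdrv_readv_bounded(BlockDriverState *bs, int64_t sector_num,
                              const IOVector *iov)
{
    uint8_t *buf = qemu_malloc(BOUNCE_BYTES);
    size_t total = iovector_size(iov);
    size_t done = 0;

    while (done < total) {
        size_t chunk = total - done;
        if (chunk > BOUNCE_BYTES)
            chunk = BOUNCE_BYTES;
        if (bdrv_read(bs, sector_num + done / 512, buf, chunk / 512) < 0) {
            qemu_free(buf);
            return -1;
        }
        iovector_write_at(iov, done, buf, chunk);  /* hypothetical */
        done += chunk;
    }
    qemu_free(buf);
    return 0;
}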