From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <49443B6B.3030907@codemonkey.ws>
Date: Sat, 13 Dec 2008 16:47:07 -0600
From: Anthony Liguori
In-Reply-To: <4942B841.6010900@codemonkey.ws>
Subject: [Qemu-devel] Re: [PATCH 2 of 5] add can_dma/post_dma for direct IO
To: Andrea Arcangeli
Cc: chrisw@redhat.com, avi@redhat.com, Gerd Hoffmann, kvm@vger.kernel.org,
 qemu-devel@nongnu.org

Anthony Liguori wrote:
>
> This could correspond to a:
>
> void *cpu_physical_memory_map(target_phys_addr_t addr, ram_addr_t
> size, int is_write);
>
> void cpu_physical_memory_unmap(target_phys_addr_t addr, ram_addr_t
> size, void *mapping, int is_dirty);

A really nice touch here (note this is optional and could be a follow-up
series later) would be to use the mapping itself to encode the physical
address and size, so that the signatures become:

void *cpu_physical_memory_map(target_phys_addr_t addr, ram_addr_t size,
                              int flags);
void cpu_physical_memory_unmap(void *mapping);

flags could be PHYS_MAP_READ and/or PHYS_MAP_WRITE.

In unmap, you could check whether the address lies within
phys_ram_base ... phys_ram_base + phys_ram_size. If so, you can recover
the physical address directly. If you maintained a list of mappings, you
could then search that list by physical address and check the flags to
see whether a flush is required. If you also stored the returned address
in the list, you could search on it in unmap when the address is not
within phys_ram_base ... phys_ram_base + phys_ram_size (which implies a
bounce buffer).

Another potential optimization would be to provide a mechanism to
explicitly set the dirty range of a physical mapping. For instance:

cpu_physical_memory_map_set_dirty(void *start, ram_addr_t size);

That would let you copy back only the data that actually needs to be
copied. I think we can ignore this latter optimization for a while,
though.

Regards,

Anthony Liguori

> The whole dma.c thing should not exist. If we're going to introduce a
> higher level API, it should be a PCI DMA API.
>
> Something like virtio could use this API directly, seeing as it doesn't
> really live on a PCI bus in real life.
>
> Regards,
>
> Anthony Liguori
>
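[Editorial sketch, not part of the original thread: a minimal standalone
illustration of the map/unmap scheme described above. The PhysMapping
list, the globals phys_ram_base/phys_ram_size, and the RAM-vs-bounce
decision are hypothetical stand-ins for QEMU's real machinery; none of
these names come from the patch series.]

```c
/* Hypothetical sketch of the proposed API. The mapping list lets
 * cpu_physical_memory_unmap() take only the pointer and recover the
 * physical address, size, and flags itself. */
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

typedef uint64_t target_phys_addr_t;
typedef uint64_t ram_addr_t;

#define PHYS_MAP_READ  1
#define PHYS_MAP_WRITE 2

static uint8_t *phys_ram_base;      /* stand-in for guest RAM */
static ram_addr_t phys_ram_size;

/* List of outstanding mappings. */
typedef struct PhysMapping {
    target_phys_addr_t addr;
    ram_addr_t size;
    int flags;
    void *ptr;                      /* what map() returned */
    struct PhysMapping *next;
} PhysMapping;

static PhysMapping *mappings;

void *cpu_physical_memory_map(target_phys_addr_t addr, ram_addr_t size,
                              int flags)
{
    PhysMapping *m = malloc(sizeof(*m));
    m->addr = addr;
    m->size = size;
    m->flags = flags;
    if (addr < phys_ram_size && size <= phys_ram_size - addr) {
        /* RAM: hand out a direct pointer into guest memory. */
        m->ptr = phys_ram_base + addr;
    } else {
        /* Not RAM: fall back to a bounce buffer. A real implementation
         * would fill it from the device for PHYS_MAP_READ mappings. */
        m->ptr = malloc(size);
    }
    m->next = mappings;
    mappings = m;
    return m->ptr;
}

void cpu_physical_memory_unmap(void *mapping)
{
    PhysMapping **pm;

    for (pm = &mappings; *pm; pm = &(*pm)->next) {
        PhysMapping *m = *pm;
        uint8_t *p = mapping;

        if (m->ptr != mapping)
            continue;
        if (p >= phys_ram_base && p < phys_ram_base + phys_ram_size) {
            /* Direct mapping: for PHYS_MAP_WRITE we would mark
             * [m->addr, m->addr + m->size) dirty here. */
        } else {
            /* Bounce buffer: for PHYS_MAP_WRITE we would flush it back
             * to the device here, then release it. */
            free(m->ptr);
        }
        *pm = m->next;
        free(m);
        return;
    }
}
```

The pointer-range test against phys_ram_base is the same trick the text
describes: a pointer inside guest RAM needs no lookup to recover its
physical address, while anything outside that range must be a bounce
buffer found via the mapping list.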