From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <49749497.6040701@codemonkey.ws>
Date: Mon, 19 Jan 2009 08:56:23 -0600
From: Anthony Liguori
References: <1232308399-21679-1-git-send-email-avi@redhat.com>
 <1232308399-21679-2-git-send-email-avi@redhat.com>
In-Reply-To: <1232308399-21679-2-git-send-email-avi@redhat.com>
Subject: [Qemu-devel] Re: [PATCH 1/5] Add target memory mapping API
To: Avi Kivity
Cc: qemu-devel@nongnu.org

Avi Kivity wrote:
> Devices accessing large amounts of memory (as with DMA) will wish to obtain
> a pointer to guest memory rather than access it indirectly via
> cpu_physical_memory_rw().  Add a new API to convert target addresses to
> host pointers.
>
> In case the target address does not correspond to RAM, a bounce buffer is
> allocated.  To prevent the guest from causing the host to allocate unbounded
> amounts of bounce buffer, this memory is limited (currently to one page).
>
> Signed-off-by: Avi Kivity
> ---
>  cpu-all.h |    5 +++
>  exec.c    |   93 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 98 insertions(+), 0 deletions(-)
>
> diff --git a/cpu-all.h b/cpu-all.h
> index ee0a6e3..3439999 100644
> --- a/cpu-all.h
> +++ b/cpu-all.h
> @@ -923,6 +923,11 @@ static inline void cpu_physical_memory_write(target_phys_addr_t addr,
>  {
>      cpu_physical_memory_rw(addr, (uint8_t *)buf, len, 1);
>  }
> +void *cpu_physical_memory_map(target_phys_addr_t addr,
> +                              target_phys_addr_t *plen,
> +                              int is_write);
> +void cpu_physical_memory_unmap(void *buffer, target_phys_addr_t len,
> +                               int is_write);
>  uint32_t ldub_phys(target_phys_addr_t addr);
>  uint32_t lduw_phys(target_phys_addr_t addr);
>  uint32_t ldl_phys(target_phys_addr_t addr);
> diff --git a/exec.c b/exec.c
> index faa6333..7162271 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -3045,6 +3045,99 @@ void cpu_physical_memory_write_rom(target_phys_addr_t addr,
>      }
>  }
>
> +typedef struct {
> +    void *buffer;
> +    target_phys_addr_t addr;
> +    target_phys_addr_t len;
> +} BounceBuffer;
> +
> +static BounceBuffer bounce;
> +
> +void *cpu_physical_memory_map(target_phys_addr_t addr,
> +                              target_phys_addr_t *plen,
> +                              int is_write)
> +{
> +    target_phys_addr_t len = *plen;
> +    target_phys_addr_t done = 0;
> +    int l;
> +    uint8_t *ret = NULL;
> +    uint8_t *ptr;
> +    target_phys_addr_t page;
> +    unsigned long pd;
> +    PhysPageDesc *p;
> +    unsigned long addr1;
> +
> +    while (len > 0) {
> +        page = addr & TARGET_PAGE_MASK;
> +        l = (page + TARGET_PAGE_SIZE) - addr;
> +        if (l > len)
> +            l = len;
> +        p = phys_page_find(page >> TARGET_PAGE_BITS);
> +        if (!p) {
> +            pd = IO_MEM_UNASSIGNED;
> +        } else {
> +            pd = p->phys_offset;
> +        }
> +
> +        if ((pd & ~TARGET_PAGE_MASK) != IO_MEM_RAM) {
> +            if (done || bounce.buffer) {
> +                break;
> +            }
> +            bounce.buffer = qemu_memalign(TARGET_PAGE_SIZE, TARGET_PAGE_SIZE);
> +            bounce.addr = addr;
> +            bounce.len = l;

I like the bouncing approach, particularly that it never bounces more than
a page at a time.  I think that's clever.

Could you add some documentation to this function?  In particular, make it
clear that it can return NULL and that if it does, the caller must retry.

> +void cpu_physical_memory_unmap(void *buffer, target_phys_addr_t len,
> +                               int is_write)
> +{
> +    if (buffer != bounce.buffer) {
> +        if (is_write) {
> +            unsigned long addr1 = (uint8_t *)buffer - phys_ram_base;
> +            do {
> +                unsigned l;
> +                l = TARGET_PAGE_SIZE;
> +                if (l > len)
> +                    l = len;
> +                if (!cpu_physical_memory_is_dirty(addr1)) {
> +                    /* invalidate code */
> +                    tb_invalidate_phys_page_range(addr1, addr1 + len, 0);
> +                    /* set dirty bit */
> +                    phys_ram_dirty[addr1 >> TARGET_PAGE_BITS] |=
> +                        (0xff & ~CODE_DIRTY_FLAG);
> +                }
> +                addr1 += l;
> +                len -= l;
> +            } while (len);
> +        }
> +        return;
> +    }
> +    if (is_write) {
> +        cpu_physical_memory_write(bounce.addr, bounce.buffer, bounce.len);
> +    }
> +    qemu_free(bounce.buffer);
> +    bounce.buffer = NULL;

If map() fails, how does the caller determine when to retry the mapping?

Regards,

Anthony Liguori

> +}
>
>  /* warning: addr must be aligned */
>  uint32_t ldl_phys(target_phys_addr_t addr)
>
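For illustration only, here is a minimal sketch of how a device might drive
the proposed API, assuming the signatures quoted above.  device_dma_write()
and do_io() are hypothetical names, not part of the patch, and the
retry-on-NULL policy is deliberately left open, since that is exactly the
question raised above:

/* Hypothetical caller sketch, not part of the patch: a device performing a
   DMA write into guest memory with the proposed API.  dma_addr, dma_len and
   do_io() are illustrative names only. */
static void device_dma_write(target_phys_addr_t dma_addr,
                             target_phys_addr_t dma_len)
{
    while (dma_len > 0) {
        target_phys_addr_t plen = dma_len;
        /* is_write == 1: we intend to write into the returned buffer.
           map() may shorten the region via *plen. */
        void *buf = cpu_physical_memory_map(dma_addr, &plen, 1);

        if (!buf) {
            /* The single bounce buffer is busy; the caller must retry
               the mapping later -- by what mechanism is the open
               question raised in the review. */
            return;
        }

        do_io(buf, plen);                         /* fill the mapped region */
        cpu_physical_memory_unmap(buf, plen, 1);  /* writes back bounce data
                                                     or updates dirty bits */
        dma_addr += plen;
        dma_len -= plen;
    }
}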