From: Paul Brook <paul@codesourcery.com>
Subject: Re: [Qemu-devel] 4G address space remapping on 64-bit host
Date: Fri, 29 Jun 2007 14:00:40 +0100
To: qemu-devel@nongnu.org
Cc: Blue Swirl

> I had an idea of mapping the full 32-bit target virtual address space
> to a 4GB area on 64-bit hosts. Then the loads and stores to normal RAM
> (except page tables, code_mem_write etc.) could be made much faster,
> falling back to softmmu for other pages. The idea has come up before,
> for example in this message from Fabrice:
> http://article.gmane.org/gmane.comp.emulators.qemu/685
>
> But I'm not sure whether this would be worth the effort; the speedup
> would depend on the frequency of loads and stores, and also on
> translation time vs. translated-code execution time. Does anyone have
> good statistics on those?

I'd expect the overhead of SIGSEGV+mmap to be prohibitive. I don't have
numbers to back this up, but experience with MIPS system emulation shows
that TLB miss cost can have a significant effect on overall performance.

Like Fabrice, I think this would be most useful in combination with some
sort of hypervisor. Somewhere on my TODO list is porting qemu to run
directly as a paravirtual Xen DomU. That would make it possible to insert
the guest pagetable walk directly into the host MMU fault handler, and to
do clever things with shadow pagetables.

I should probably get the cycle counting patches polished and applied.
These include a mechanism for distinguishing RAM and MMIO accesses.

Paul
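
For concreteness, here is a minimal sketch of the mechanics under
discussion: reserve a contiguous 4GB host window with PROT_NONE, back the
guest-RAM portion with accessible pages so ordinary loads and stores go
straight through, and catch SIGSEGV on everything else as the softmmu
fallback. This is not QEMU code; guest_base, GUEST_SPACE, and the fault
hook are hypothetical names, and a real handler would decode and emulate
the faulting access and resume rather than stop.

/* Sketch only: 4GB guest window with SIGSEGV fallback to softmmu. */
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define GUEST_SPACE (1ULL << 32)   /* full 32-bit guest address space */

static uint8_t *guest_base;        /* host address of guest address 0 */

static void segv_handler(int sig, siginfo_t *info, void *ctx)
{
    uint8_t *host_addr = info->si_addr;
    (void)sig; (void)ctx;

    /* Faults outside the guest window are genuine host bugs. */
    if (host_addr < guest_base || host_addr >= guest_base + GUEST_SPACE) {
        signal(SIGSEGV, SIG_DFL);  /* re-fault and die normally */
        return;
    }
    /* A real handler would dispatch to the softmmu slow path and
     * resume; here we just report the guest address and stop.
     * (fprintf from a handler is for illustration only.) */
    fprintf(stderr, "softmmu fallback for guest addr 0x%llx\n",
            (unsigned long long)(host_addr - guest_base));
    _exit(0);
}

int main(void)
{
    /* Reserve 4GB with no access rights; nothing is backed yet, so
     * every guest access faults until pages are explicitly enabled. */
    guest_base = mmap(NULL, GUEST_SPACE, PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (guest_base == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Back guest RAM (here the first 16MB) with real pages; direct
     * loads and stores to this range never take the slow path. */
    mprotect(guest_base, 16 << 20, PROT_READ | PROT_WRITE);

    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_sigaction = segv_handler;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);

    guest_base[0x1000] = 42;                          /* fast path */
    *(volatile uint8_t *)(guest_base + 0xf0000000UL); /* slow path */
    return 0;
}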
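
On the "SIGSEGV+mmap overhead" question, a similarly hedged
micro-benchmark can at least put a rough number on the per-miss cost:
each iteration below takes one SIGSEGV delivery plus one mprotect()
fixup. Calling mprotect() from a signal handler is not formally
async-signal-safe, though it is a thin syscall wrapper and the pattern
is common in practice.

/* Sketch only: time one SIGSEGV + mprotect() round trip per iteration. */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

static volatile char *page;
static long page_size;

static void handler(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)info; (void)ctx;
    /* Re-enable the page so the faulting store replays successfully. */
    mprotect((void *)page, page_size, PROT_READ | PROT_WRITE);
}

int main(void)
{
    page_size = sysconf(_SC_PAGESIZE);
    page = mmap(NULL, page_size, PROT_NONE,
                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_sigaction = handler;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);

    const int iters = 100000;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < iters; i++) {
        mprotect((void *)page, page_size, PROT_NONE); /* re-arm trap */
        page[0] = 1;                                  /* fault + fixup */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("%.0f ns per SIGSEGV + mprotect round trip\n", ns / iters);
    return 0;
}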