From: Paul Brook
Subject: Re: [Qemu-devel] Guest memory mapping in Qemu
Date: Wed, 24 Mar 2010 13:05:09 +0000
Message-Id: <201003241305.09507.paul@codesourcery.com>
To: qemu-devel@nongnu.org
Cc: Michael T

> If the technical documentation at
> http://www.usenix.org/publications/library/proceedings/usenix05/tech/freenix/full_papers/bellard/bellard_html/index.html
> is still valid (I think it is), Qemu has two modes of handling access to
> guest memory - system emulation, in which an entire guest address space
> is mapped on the host, and emulated MMU.

No. qemu-fast (using the host address space) was removed long ago. There are a few stray remnants, but nothing useful. We always use an emulated MMU.
> I was wondering whether something in-between would also be feasible.
> That is, chunks of guest address space (say 4MB chunks for the sake of
> the argument) are mmapped into the address space of the Qemu process on
> the host, and when an access to guest memory is made, there is an
> initial check to see whether it is in the same chunk as the last one, in
> which case all the MMU emulation bits could be saved. I could imagine
> Qemu keeping a current/most recent chunk for each register which can be
> used for relative addressing, plus one for non-register-relative
> accesses. It seems to me that this could potentially speed up memory
> access quite a bit, and as a bonus even make it easy to support x86
> segmentation (as part of the bounds check for whether a memory access is
> in a chunk).

This is effectively shadow paging implemented in userspace via mmap. It's very hard to make it work in a sane way, and even harder to make it go fast. TLB handling is already a significant bottleneck for many tasks, and adding an mmap call is likely to make this orders of magnitude worse. Most guests use virtual memory extensively, so the virtual->physical mappings tend to be extremely fragmented.

If you really want to do shadow paging for cross environments, you probably need to move it into kernel space, either as a host kernel module or as a bare-metal kernel/application that runs inside KVM. Even then you have to use various tricks to partition off a section of the host address space for use by qemu. It's not impossible, but it is a significant undertaking with somewhat unclear benefits.

Paul