Date: Sun, 23 Sep 2012 10:25:29 +0200
From: Avi Kivity
To: Blue Swirl
Cc: Paolo Bonzini, qemu-devel
Subject: Re: [Qemu-devel] directory hierarchy

On 09/22/2012 04:15 PM, Blue Swirl wrote:
> >
> >> This could have nice cleanup effects though, and for example enable
> >> a generic 'info vmtree' to discover VA->PA mappings for any target,
> >> instead of the current MMU table walkers.
> >
> > How? That's in a hardware-defined format that's completely invisible
> > to the memory API.
>
> It's invisible now, but target-specific code could grab the mappings
> and feed them to the memory API. The memory API would just see the
> per-CPU virtual memory as address spaces that map onto the physical
> memory address space.
>
> For RAM-backed MMU tables, as on x86 and Sparc32, writes to page-table
> memory areas would need to be tracked like self-modifying code (SMC).
> For in-MMU TLBs, this would not be needed.
>
> Again, if performance degraded, this would not be worthwhile. I'd
> expect VA->PA mappings to change at least at the rate of context
> switches plus page faults plus mmap/exec activity, so this could
> amount to thousands of changes per second per CPU.
>
> In theory, KVM could use the memory API as a CPU-type-agnostic way to
> exchange this information. I'd expect that the KVM exit rate is not
> nearly as high, and in many cases the exchange of mapping information
> would not be needed. It would not improve performance there either.

First, the memory API does not operate at that level. It handles
(guest physical) -> (host virtual | io callback) translations. These
are (guest virtual) -> (guest physical) translations.

Second, the memory API is machine-wide and designed for coarse maps.
Processor memory maps are per-CPU and page-grained. (The memory API
actually needs to support page-grained maps (for IOMMUs) and per-CPU
maps (SMM) efficiently, but that's another story.)

Third, we know from the pre-NPT/EPT days that tracking all mappings
destroys performance. It's much better to do this on demand.

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
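
For concreteness, Blue Swirl's proposal amounts to something like the
sketch below: target-specific MMU code reports each VA->PA mapping it
installs, and a per-CPU memory region models the virtual address space
as per-page aliases into the physical address space. This is only an
illustration against the memory API as it stands; the helpers
cpu_va_space_init() and cpu_va_map_page() are hypothetical, the
signatures are approximate, and unmapping on TLB flush or page-table
write is elided.

    /* Sketch only: model one vCPU's guest-virtual address space as a
     * MemoryRegion populated with per-page aliases into guest-physical
     * memory.  Not meant to compile as-is. */
    #include "memory.h"   /* MemoryRegion, memory_region_init_alias() */

    #define TARGET_PAGE_SIZE 4096

    /* Root of this vCPU's virtual address space; 'info vmtree' could
     * then walk it generically instead of target MMU tables. */
    static MemoryRegion cpu_va_root;

    static void cpu_va_space_init(void)
    {
        /* Hypothetical: a full 64-bit region for vCPU 0. */
        memory_region_init(&cpu_va_root, "cpu0-virtual", UINT64_MAX);
    }

    /* Called from target-specific MMU code whenever it installs a
     * VA->PA translation (TLB fill, page-table walk, etc.). */
    static void cpu_va_map_page(uint64_t va, uint64_t pa)
    {
        MemoryRegion *alias = g_new0(MemoryRegion, 1);

        /* Alias one page of the physical address space at the
         * virtual address. */
        memory_region_init_alias(alias, "va-page", get_system_memory(),
                                 pa, TARGET_PAGE_SIZE);
        memory_region_add_subregion(&cpu_va_root, va, alias);
    }

At the rates discussed above, cpu_va_map_page() and its inverse would
run thousands of times per second per CPU, which is where both the
SMC-style write tracking and the performance objection come in.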