Message-ID: <50602DEA.5030203@redhat.com>
Date: Mon, 24 Sep 2012 11:54:50 +0200
From: Avi Kivity
Subject: Re: [Qemu-devel] directory hierarchy
To: Blue Swirl
Cc: Paolo Bonzini, qemu-devel

On 09/23/2012 06:07 PM, Blue Swirl wrote:
> On Sun, Sep 23, 2012 at 8:25 AM, Avi Kivity wrote:
>> On 09/22/2012 04:15 PM, Blue Swirl wrote:
>>> >
>>> >> This could have nice cleanup effects though and for example enable
>>> >> generic 'info vmtree' to discover VA->PA mappings for any target
>>> >> instead of current MMU table walkers.
>>> >
>>> > How? That's in a hardware defined format that's completely invisible to
>>> > the memory API.
>>>
>>> It's invisible now, but target-specific code could grab the mappings
>>> and feed them to memory API. Memory API would just see the per-CPU
>>> virtual memory as address spaces that map to physical memory address
>>> space.
>>>
>>> For RAM backed MMU tables like x86 and Sparc32, writes to page table
>>> memory areas would need to be tracked like SMC. For in-MMU TLBs, this
>>> would not be needed.
>>>
>>> Again, if performance would degrade, this would not be worthwhile. I'd
>>> expect VA->PA mappings to change at least at context switch rate +
>>> page fault rate + mmap/exec activity so this could amount to thousands
>>> of changes per second per CPU.
>>>
>>> In theory KVM could use memory API as CPU type agnostic way to
>>> exchange this information, I'd expect that KVM exit rate is not nearly
>>> as big and in many cases exchange of mapping information would not be
>>> needed. It would not improve performance there either.
>>>
>
> Perhaps I was not very clear, but this was just theoretical.
>
>>
>> First, the memory API does not operate at that level. It handles (guest
>> physical) -> (host virtual | io callback) translations. These are
>> (guest virtual) -> (guest physical) translations.
>
> I don't see why memory API could not be used also for GVA-GPA
> translation if we ignore performance for the sake of discussion.

For the reasons I mentioned. The guest doesn't issue calls into the
memory API. The granularity is wrong. It is a system-wide API.

The latter two issues have to change to support IOMMUs, and then indeed
the memory API will be much closer to a CPU MMU (on x86 they can even
share page tables in some circumstances). It will still be the wrong
API IMO.

>
>> Second, the memory API is machine-wide and designed for coarse maps.
>> Processor memory maps are per-cpu and page-grained. (the memory API
>> actually needs to efficiently support page-grained maps (for iommus)
>> and per-cpu maps (smm), but that's another story).
>>
>> Third, we know from the pre-npt/ept days that tracking all mappings
>> destroys performance. It's much better to do this on demand.
>
> Yes, performance reasons kill this idea. It would still be beautiful.
>

Maybe I'm missing something, but I don't see this. But as you said,
it's theoretical.

--
error compiling committee.c: too many arguments to function
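
As a rough illustration of the on-demand alternative being argued for
here (resolving a guest-virtual address only when something actually
asks, the way the per-target MMU table walkers behind 'info vmtree' do,
rather than eagerly mirroring every mapping into a machine-wide map),
below is a toy, self-contained sketch. The two-level table format, the
names and the addresses are all invented for illustration; this is not
QEMU code and does not match any real target's MMU layout.

    /* Toy sketch only: walk an invented two-level page table on demand.
     * "Guest RAM" is just a small array so the example compiles and runs
     * standalone. */
    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define PAGE_SHIFT  12
    #define PTE_PRESENT 1ULL

    static uint64_t guest_ram[2048];          /* 16 KB of pretend guest RAM */

    static uint64_t guest_phys_read(uint64_t gpa)
    {
        return guest_ram[gpa / 8];            /* 8-byte entries, toy format */
    }

    /* Resolve gva -> gpa by walking the table rooted at guest-physical
     * address 'root'.  Nothing is cached or tracked; the work happens
     * only when translate() is called. */
    static bool translate(uint64_t root, uint64_t gva, uint64_t *gpa)
    {
        uint64_t l1_idx = (gva >> 22) & 0x3ff;
        uint64_t l2_idx = (gva >> PAGE_SHIFT) & 0x3ff;

        uint64_t l1e = guest_phys_read(root + l1_idx * 8);
        if (!(l1e & PTE_PRESENT)) {
            return false;                     /* would be a guest page fault */
        }

        uint64_t l2e = guest_phys_read((l1e & ~0xfffULL) + l2_idx * 8);
        if (!(l2e & PTE_PRESENT)) {
            return false;
        }

        *gpa = (l2e & ~0xfffULL) | (gva & 0xfff);
        return true;
    }

    int main(void)
    {
        /* Hand-build one mapping: GVA 0x00400000 -> GPA 0x3000. */
        guest_ram[1] = 0x2000 | PTE_PRESENT;            /* L1[1] -> table at 0x2000 */
        guest_ram[0x2000 / 8] = 0x3000 | PTE_PRESENT;   /* L2[0] -> page at 0x3000 */

        uint64_t gpa;
        if (translate(0x0, 0x00400000, &gpa)) {
            printf("GVA 0x00400000 -> GPA 0x%" PRIx64 "\n", gpa);
        }
        return 0;
    }

Built with any C compiler it prints "GVA 0x00400000 -> GPA 0x3000". The
point is only that nothing has to be kept in sync between guest page
table updates and the machine-wide memory map until translate() is
actually called, which is what makes the on-demand approach cheap
compared with eagerly tracking every VA->PA change.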