From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <4A2D15AC.9000004@siemens.com>
Date: Mon, 08 Jun 2009 15:44:12 +0200
From: Jan Kiszka
In-Reply-To: <4A2D10FA.2050606@redhat.com>
Subject: [Qemu-devel] Re: POLL: Why do you use kqemu?
List-Id: qemu-devel.nongnu.org
To: Avi Kivity
Cc: Anton D Kachalov, "qemu-devel@nongnu.org", Lennart Sorensen

Avi Kivity wrote:
> Jan Kiszka wrote:
>> Avi Kivity wrote:
>>> Jan Kiszka wrote:
>>>> And the fact that kqemu has to use tcg in order to achieve a
>>>> reasonable performance is rather a disadvantage. The complexity and
>>>> overhead for synchronizing tcg with the in-kernel accelerator is
>>>> enormous.
>>>> If there were a feasible way to overcome this with kqemu, it would
>>>> benefit a lot. But unfortunately there is none (given you don't want
>>>> to invest unreasonable effort).
>>>
>>> Note that kvm suffers from something similar (to a smaller magnitude)
>>> as well: if a guest pages in its page tables, kvm knows nothing about
>>> it and will thus have outdated shadows. To date we haven't
>>> encountered a problem with it, but it's conceivable. I think Windows
>>> can page its page tables, but maybe it's disabled by default, or
>>> maybe it doesn't DMA directly into the page tables.
>>
>> Can't follow -- I always thought that kernel space gets informed when
>> some I/O operation handled by user space modifies an "interesting"
>> page.
>
> It doesn't. Host userspace has unrestricted access to guest memory.
>
>>> Not sure how to fix. Maybe write protect the host page tables when we
>>
>> You mean guest page tables?
>
> Both :)
>
> When kvm write-protects a guest page table in the shadow page table
> entries pointing to that guest page, it should also write-protect the
> guest page table in the host page table entries to the same guest page.

Ah, now I get it. What do other hypervisors do?

>
>>> shadow a page table, and get an mmu notifier to tell us when it's
>>> made writable? Seems expensive. Burying our heads in the sand is much
>>> easier.
>>
>> Does this still apply to nested paging? I guess (hope) not...
>
> No, nested paging brings cancer and cures world peace. Or something.

Well, then it's probably not worth bothering, at least until a real
guest problem can be explained by this limitation. Are there any
suspicious reports floating around (maybe not only about Windows)?

Jan

--
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux