Date: Mon, 15 Mar 2010 14:24:48 +0100
From: Joerg Roedel
Subject: Re: [Qemu-devel] Ideas wiki for GSoC 2010
Message-ID: <20100315132448.GF13108@8bytes.org>
In-Reply-To: <4B9E320E.7040605@redhat.com>
To: Avi Kivity
Cc: Muli Ben-Yehuda, agraf@suse.de, aliguori@us.ibm.com, kvm@vger.kernel.org, jan.kiszka@siemens.com, qemu-devel@nongnu.org, Luiz Capitulino, agl@us.ibm.com, Nadav Amit, Ben-Ami Yassour1

On Mon, Mar 15, 2010 at 03:11:42PM +0200, Avi Kivity wrote:
> On 03/15/2010 03:03 PM, Joerg Roedel wrote:
>>
>>>> I will add another project - iommu emulation. Could be very useful
>>>> for doing device assignment to nested guests, which could make
>>>> testing a lot easier.
>>>>
>>> Our experiments show that nested device assignment is pretty much
>>> required for I/O performance in nested scenarios.
>>>
>> Really? I did a small test with virtio-blk in a nested guest (a disk
>> read with dd, so not a real benchmark) and got a reasonable read
>> performance of around 25 MB/s from the disk in the L2 guest.
>
> Your guest wasn't doing a zillion VMREADs and VMWRITEs every exit.
>
> I plan to reduce VMREAD/VMWRITE overhead for kvm, but not much we can do
> for other guests.

Does it matter for the ept-on-ept case? The initial nested-VMX patchset
implemented it, and its authors reported a performance drop of around 12%
per nesting level, which is reasonable. Two levels at roughly 12% each
would still leave about 0.88^2 ~= 77% of bare-metal performance, so I
would expect the I/O performance loss for the L2 guest to be similarly
reasonable in this case. My small measurement was also done using
npt-on-npt.

	Joerg
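
P.S. The "small test" above was nothing more elaborate than a sequential
read with dd inside the L2 guest. A minimal sketch of such a test,
assuming the virtio-blk disk shows up as /dev/vdb in the guest (device
path and sizes are illustrative, not the exact invocation):

    # Drop the page cache first so the read hits the disk, not guest RAM
    # (needs root).
    echo 3 > /proc/sys/vm/drop_caches

    # Sequentially read 1 GB from the virtio-blk disk; dd prints the
    # achieved throughput in its summary line when it finishes.
    dd if=/dev/vdb of=/dev/null bs=1M count=1024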