From mboxrd@z Thu Jan 1 00:00:00 1970
From: Joerg Roedel
Subject: Re: [Qemu-devel] Ideas wiki for GSoC 2010
Date: Mon, 15 Mar 2010 14:24:48 +0100
Message-ID: <20100315132448.GF13108@8bytes.org>
References: <20100310183023.6632aece@redhat.com> <4B9E2745.7060903@redhat.com> <20100315125313.GK9457@il.ibm.com> <20100315130310.GE13108@8bytes.org> <4B9E320E.7040605@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: Muli Ben-Yehuda, Luiz Capitulino, qemu-devel@nongnu.org, aliguori@us.ibm.com, kvm@vger.kernel.org, jan.kiszka@siemens.com, agraf@suse.de, agl@us.ibm.com, Nadav Amit, Ben-Ami Yassour1
To: Avi Kivity
Return-path:
Received: from 8bytes.org ([88.198.83.132]:40162 "EHLO 8bytes.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S964876Ab0CONYu (ORCPT ); Mon, 15 Mar 2010 09:24:50 -0400
Content-Disposition: inline
In-Reply-To: <4B9E320E.7040605@redhat.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

On Mon, Mar 15, 2010 at 03:11:42PM +0200, Avi Kivity wrote:
> On 03/15/2010 03:03 PM, Joerg Roedel wrote:
>>
>>>> I will add another project - iommu emulation. Could be very useful
>>>> for doing device assignment to nested guests, which could make
>>>> testing a lot easier.
>>>>
>>> Our experiments show that nested device assignment is pretty much
>>> required for I/O performance in nested scenarios.
>>>
>> Really? I did a small test with virtio-blk in a nested guest (disk read
>> with dd, so not a real benchmark) and got a reasonable read-performance
>> of around 25MB/s from the disk in the l2-guest.
>
> Your guest wasn't doing a zillion VMREADs and VMWRITEs every exit.
>
> I plan to reduce VMREAD/VMWRITE overhead for kvm, but not much we can do
> for other guests.

Does it matter for the ept-on-ept case? The initial nested-vmx patchset
implemented it, and the authors reported a performance drop of around 12%
between levels, which is reasonable. So I expect the io-performance loss
for the l2-guest to be reasonable in this case as well.
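For reference, the "disk read with dd" test quoted above was essentially of
this shape (device path and sizes are illustrative, not the ones actually
used; inside the l2-guest one would read from the virtio disk, e.g. /dev/vda,
while here a scratch file stands in for the device):

```shell
# Create a scratch image standing in for the virtio disk (illustrative size).
dd if=/dev/zero of=/tmp/l2test.img bs=1M count=16 2>/dev/null

# Sequential read, as in the test above; dd's final status line reports
# the throughput figure (the ~25MB/s number in the quoted mail).
dd if=/tmp/l2test.img of=/dev/null bs=1M 2>&1 | tail -n 1
```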
My small measurement was also done using npt-on-npt.

	Joerg