From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <4B9E4D11.70402@redhat.com>
Date: Mon, 15 Mar 2010 17:06:57 +0200
From: Avi Kivity
Subject: Re: [Qemu-devel] Ideas wiki for GSoC 2010
To: Anthony Liguori
Cc: Muli Ben-Yehuda, agraf@suse.de, aliguori@us.ibm.com,
 kvm@vger.kernel.org, jan.kiszka@siemens.com, Joerg Roedel,
 qemu-devel@nongnu.org, Luiz Capitulino, agl@us.ibm.com,
 Nadav Amit, Ben-Ami Yassour1
In-Reply-To: <4B9E34E1.3090709@codemonkey.ws>
References: <20100310183023.6632aece@redhat.com> <4B9E2745.7060903@redhat.com>
 <20100315125313.GK9457@il.ibm.com> <20100315130310.GE13108@8bytes.org>
 <4B9E320E.7040605@redhat.com> <4B9E34E1.3090709@codemonkey.ws>
List-Id: qemu-devel.nongnu.org

On 03/15/2010 03:23 PM, Anthony Liguori wrote:
> On 03/15/2010 08:11 AM, Avi Kivity wrote:
>> On 03/15/2010 03:03 PM, Joerg Roedel wrote:
>>>
>>>>> I will add another project - iommu emulation. Could be very useful
>>>>> for doing device assignment to nested guests, which could make
>>>>> testing a lot easier.
>>>> Our experiments show that nested device assignment is pretty much
>>>> required for I/O performance in nested scenarios.
>>> Really? I did a small test with virtio-blk in a nested guest (disk read
>>> with dd, so not a real benchmark) and got a reasonable read performance
>>> of around 25 MB/s from the disk in the L2 guest.
>>>
>>
>> Your guest wasn't doing a zillion VMREADs and VMWRITEs on every exit.
>>
>> I plan to reduce VMREAD/VMWRITE overhead for kvm, but there's not much
>> we can do for other guests.
>
> VMREAD/VMWRITEs are generally optimized by hypervisors, as they tend to
> be costly. KVM is a bit unusual in terms of how many times the
> instructions are executed per exit.

Do you know offhand of any unnecessary reads/writes? There's
update_cr8_intercept(), but on normal exits I don't see what else we can
remove.

--
error compiling committee.c: too many arguments to function
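
For concreteness, here is a minimal standalone sketch of the caching
direction hinted at above: do the real VMREAD at most once per exit and
serve repeated accesses from a small software cache that is flushed on
the next exit. The hw_vmread() stub and the cache layout are made-up
stand-ins for illustration, not kvm's actual vmcs_readl() path.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for the VMREAD instruction; prints so cache hits are visible. */
static uint64_t hw_vmread(uint32_t field)
{
        printf("real VMREAD of field 0x%x\n", field);
        return 0;
}

#define NR_CACHED_FIELDS 8

struct vmcs_cache {
        uint64_t value[NR_CACHED_FIELDS];
        bool valid[NR_CACHED_FIELDS];
};

/*
 * Toy slot mapping; a real implementation would give each cached field
 * a fixed slot so that distinct fields can never collide.
 */
static int cache_slot(uint32_t field)
{
        return field % NR_CACHED_FIELDS;
}

/* Read through the cache: only the first access per exit hits hardware. */
static uint64_t cached_vmread(struct vmcs_cache *c, uint32_t field)
{
        int slot = cache_slot(field);

        if (!c->valid[slot]) {
                c->value[slot] = hw_vmread(field);
                c->valid[slot] = true;
        }
        return c->value[slot];
}

/* Invalidate everything at the start of each vmexit. */
static void cache_flush(struct vmcs_cache *c)
{
        for (int i = 0; i < NR_CACHED_FIELDS; i++)
                c->valid[i] = false;
}

int main(void)
{
        struct vmcs_cache c;

        cache_flush(&c);
        cached_vmread(&c, 0x4402);      /* 0x4402 = exit reason encoding */
        cached_vmread(&c, 0x4402);      /* repeat: served from the cache */
        return 0;
}

The same trick can help on the write side: accumulate writes in software
and VMWRITE only the dirty fields before the next VMRESUME. Whether it
pays off depends on how many fields a given exit path actually touches.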