From: Avi Kivity
Date: Tue, 16 Mar 2010 07:50:13 +0200
Subject: Re: [Qemu-devel] Ideas wiki for GSoC 2010
To: Anthony Liguori
Cc: Muli Ben-Yehuda, agraf@suse.de, aliguori@us.ibm.com, kvm@vger.kernel.org,
 jan.kiszka@siemens.com, Joerg Roedel, qemu-devel@nongnu.org,
 Luiz Capitulino, agl@us.ibm.com, Nadav Amit, Ben-Ami Yassour1
Message-ID: <4B9F1C15.6020200@redhat.com>
In-Reply-To: <4B9EDD32.10505@codemonkey.ws>
List-Id: qemu-devel.nongnu.org

On 03/16/2010 03:21 AM, Anthony Liguori wrote:
> On 03/15/2010 10:06 AM, Avi Kivity wrote:
>> On 03/15/2010 03:23 PM, Anthony Liguori wrote:
>>> On 03/15/2010 08:11 AM, Avi Kivity wrote:
>>>> On 03/15/2010 03:03 PM, Joerg Roedel wrote:
>>>>>
>>>>>>> I will add another project - iommu emulation. Could be very useful
>>>>>>> for doing device assignment to nested guests, which could make
>>>>>>> testing a lot easier.
>>>>>> Our experiments show that nested device assignment is pretty much
>>>>>> required for I/O performance in nested scenarios.
>>>>> Really? I did a small test with virtio-blk in a nested guest (disk read
>>>>> with dd, so not a real benchmark) and got a reasonable read performance
>>>>> of around 25MB/s from the disk in the L2 guest.
>>>>>
>>>>
>>>> Your guest wasn't doing a zillion VMREADs and VMWRITEs every exit.
>>>>
>>>> I plan to reduce VMREAD/VMWRITE overhead for kvm, but there isn't much
>>>> we can do for other guests.
>>>
>>> VMREAD/VMWRITEs are generally optimized by hypervisors, as they tend
>>> to be costly. KVM is a bit unusual in terms of how many times the
>>> instructions are executed per exit.
>>
>> Do you know offhand of any unnecessary reads/writes? There's
>> update_cr8_intercept(), but on normal exits I don't see what else we
>> can remove.
>
> Yeah, there are a number of examples.
>
> vmcs_clear_bits() and vmcs_set_bits() read a field of the VMCS and
> then immediately write it. This is unnecessary, as the same
> information could be kept in a shadow variable. In vmx_fpu_activate(),
> we call vmcs_clear_bits() followed immediately by vmcs_set_bits(),
> which means we're reading GUEST_CR0 twice and writing it twice.

This should be much better these days (2.6.34-rc1), as vmx_fpu_activate()
is called at most once per heavyweight exit (and I have evil plans to
reduce it even further). Still, that code should be optimized.
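For illustration, here is a minimal user-space sketch of that pattern (not
the actual kvm code: vmcs_readl()/vmcs_writel() are stubbed over a plain
array, and the field encoding and CR0 bits are only placeholders). It just
counts accesses, showing that the clear+set pair costs two reads and two
writes of GUEST_CR0, while a single combined read-modify-write needs one
of each:

/*
 * Minimal user-space sketch (not actual KVM code) of the redundant
 * VMREAD/VMWRITE pattern discussed above.  vmcs_readl()/vmcs_writel()
 * are stubbed with a plain array plus access counters; the field
 * encoding and the CR0 bit values are illustrative only.
 */
#include <stdio.h>

#define GUEST_CR0   0x6800u          /* stand-in VMCS field encoding */
#define X86_CR0_TS  (1u << 3)
#define X86_CR0_MP  (1u << 1)

static unsigned long fake_vmcs[0x10000];
static unsigned reads, writes;

static unsigned long vmcs_readl(unsigned field)
{
	reads++;                     /* stands in for a VMREAD */
	return fake_vmcs[field];
}

static void vmcs_writel(unsigned field, unsigned long val)
{
	writes++;                    /* stands in for a VMWRITE */
	fake_vmcs[field] = val;
}

/* read-modify-write helpers, as in the pattern being criticized */
static void vmcs_clear_bits(unsigned field, unsigned long mask)
{
	vmcs_writel(field, vmcs_readl(field) & ~mask);
}

static void vmcs_set_bits(unsigned field, unsigned long mask)
{
	vmcs_writel(field, vmcs_readl(field) | mask);
}

int main(void)
{
	/* the sequence from the discussion: clear then set => 2 reads, 2 writes */
	vmcs_clear_bits(GUEST_CR0, X86_CR0_TS);
	vmcs_set_bits(GUEST_CR0, X86_CR0_MP);
	printf("clear+set: %u reads, %u writes\n", reads, writes);

	/* one combined update: 1 read, 1 write */
	reads = writes = 0;
	unsigned long cr0 = vmcs_readl(GUEST_CR0);
	vmcs_writel(GUEST_CR0, (cr0 & ~X86_CR0_TS) | X86_CR0_MP);
	printf("combined:  %u reads, %u writes\n", reads, writes);

	return 0;
}

A shadow variable for GUEST_CR0, as suggested above, would take this
further still: writes update both the shadow and the VMCS, and the reads
disappear entirely.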
> vmx_get_rflags() reads from the VMCS, and we frequently call
> get_rflags() followed by a set_rflags() to update a bit. We also
> don't cache the value between calls, and there are a few spots in the
> code that make multiple calls.

We definitely should cache that (and segment accesses from the emulator
as well). But I'd have thought this would be relatively infrequent: at
least with Linux, using x2apic and virtio allows you to eliminate most
emulator accesses, provided you have NPT or EPT.

-- 
Do not meddle in the internals of kernels, for they are subtle and quick
to panic.
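P.S. Along the same lines, here is a user-space sketch of the rflags
caching idea (again, the struct, field encoding and helpers are made up
for the example and are not kvm's actual code). With a cached copy, the
"read, flip a bit, write back" sequence plus a second read costs one
VMREAD and one VMWRITE instead of two and one:

/*
 * Illustrative user-space sketch of caching RFLAGS between accesses.
 * The struct, field encoding and helpers are invented for the example;
 * a real implementation would also have to invalidate the cache on
 * every guest exit, since the guest may have changed RFLAGS.
 */
#include <stdbool.h>
#include <stdio.h>

#define GUEST_RFLAGS  0x6820u        /* stand-in VMCS field encoding */
#define X86_EFLAGS_TF (1u << 8)      /* trap flag, as an example bit */

static unsigned long fake_vmcs[0x10000];
static unsigned vmreads, vmwrites;

static unsigned long vmcs_readl(unsigned field)
{
	vmreads++;                   /* stands in for a VMREAD */
	return fake_vmcs[field];
}

static void vmcs_writel(unsigned field, unsigned long val)
{
	vmwrites++;                  /* stands in for a VMWRITE */
	fake_vmcs[field] = val;
}

struct demo_vcpu {
	unsigned long rflags;        /* cached copy of GUEST_RFLAGS */
	bool rflags_valid;           /* still valid since the last exit? */
};

static unsigned long get_rflags(struct demo_vcpu *vcpu)
{
	if (!vcpu->rflags_valid) {
		vcpu->rflags = vmcs_readl(GUEST_RFLAGS);
		vcpu->rflags_valid = true;
	}
	return vcpu->rflags;         /* repeated calls hit the cache */
}

static void set_rflags(struct demo_vcpu *vcpu, unsigned long val)
{
	vcpu->rflags = val;          /* keep the cache in sync */
	vcpu->rflags_valid = true;
	vmcs_writel(GUEST_RFLAGS, val);
}

int main(void)
{
	struct demo_vcpu vcpu = { 0 };

	/* the "update a bit" sequence from the discussion */
	set_rflags(&vcpu, get_rflags(&vcpu) | X86_EFLAGS_TF);
	get_rflags(&vcpu);           /* second read is served from the cache */

	printf("VMREADs: %u, VMWRITEs: %u\n", vmreads, vmwrites);
	return 0;
}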