From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 1 Dec 2010 21:34:25 +0000
Subject: Re: [Qemu-devel] Re: [PATCH 2/3] virtio-pci: Use ioeventfd for
 virtqueue notify
From: Stefan Hajnoczi
To: Avi Kivity
Cc: Stefan Hajnoczi, kvm@vger.kernel.org, "Michael S. Tsirkin",
 qemu-devel@nongnu.org, Christoph Hellwig, Khoa Huynh
In-Reply-To: <4CF63FDD.7040901@redhat.com>
References: <1289483242-6069-1-git-send-email-stefanha@linux.vnet.ibm.com>
 <1289483242-6069-3-git-send-email-stefanha@linux.vnet.ibm.com>
 <20101111164518.GA28773@infradead.org>
 <4CDFBB19.7010702@redhat.com>
 <4CDFC288.9050800@redhat.com>
 <4CDFD3BE.8090702@redhat.com>
 <4CF63FDD.7040901@redhat.com>
List-Id: qemu-devel.nongnu.org
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1

On Wed, Dec 1, 2010 at 12:30 PM, Avi Kivity wrote:
> On 12/01/2010 01:44 PM, Stefan Hajnoczi wrote:
>>>> And, what about efficiency? As in bits/cycle?
>>>
>>> We are running benchmarks with this latest patch and will report
>>> results.
>>
>> Full results here (thanks to Khoa Huynh):
>>
>> http://wiki.qemu.org/Features/VirtioIoeventfd
>>
>> The host CPU utilization is scaled to 16 CPUs, so a 2-3% reduction is
>> actually in the 32-48% range for a single CPU.
>>
>> The guest CPU utilization numbers include an efficiency metric: %vcpu
>> per MB/sec. Here we see significant improvements too. Guests that
>> previously couldn't get more CPU work done now have regained some
>> breathing space.
>
> Thanks for those numbers. The guest improvements were expected, but the
> host numbers surprised me. Do you have an explanation as to why total
> host load should decrease?

The first vcpu does the virtqueue kick - it holds the guest driver's
vblk->lock across the kick. Before the kick completes, a second vcpu
tries to acquire vblk->lock, finds it contended, and spins. So we're
burning CPU due to the long vblk->lock hold times.

With virtio-ioeventfd those kick times are reduced and there is less
contention on vblk->lock.

Stefan
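
The guest-side pattern described above, as a simplified C sketch. It is
modeled on the virtio-blk request function of that era
(drivers/block/virtio_blk.c); the names mirror the driver, but this is an
illustration of the contention, not the actual source:

/*
 * Simplified sketch of the guest-side path: the block layer calls the
 * request function with vblk->lock (the queue lock) already held, and
 * the kick happens under that lock.  Details trimmed for illustration.
 */
#include <linux/blkdev.h>
#include <linux/spinlock.h>
#include <linux/virtio.h>

struct virtio_blk {
	spinlock_t lock;		/* the contended vblk->lock */
	struct virtqueue *vq;
};

/* Defined in the real driver; adds the request's buffers to the vq. */
static bool do_req(struct request_queue *q, struct virtio_blk *vblk,
		   struct request *req);

/* The block layer invokes this with vblk->lock already held. */
static void do_virtblk_request(struct request_queue *q)
{
	struct virtio_blk *vblk = q->queuedata;
	struct request *req;
	unsigned int issued = 0;

	while ((req = blk_peek_request(q)) != NULL) {
		if (!do_req(q, vblk, req))
			break;
		blk_start_request(req);
		issued++;
	}

	/*
	 * The notify below is a pio write that causes a vmexit.  Without
	 * ioeventfd the exit is serviced synchronously in QEMU, so
	 * vblk->lock stays held for the whole round trip and other vcpus
	 * submitting I/O spin on it.  With ioeventfd the exit returns as
	 * soon as kvm.ko has signalled the eventfd.
	 */
	if (issued)
		virtqueue_kick(vblk->vq);
}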
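
On the host side, the gain comes from letting kvm.ko complete the notify
pio write and merely signal an eventfd, rather than exiting all the way to
QEMU. A minimal standalone sketch of the KVM_IOEVENTFD ioctl that underlies
this (the patch itself goes through QEMU's kvm_set_ioeventfd_pio_word()
helper; notify_addr and virtqueue_index here are illustrative parameters):

/*
 * Minimal sketch: register an eventfd so that a 2-byte pio write of
 * virtqueue_index to notify_addr is completed inside kvm.ko, which only
 * signals the eventfd.  No heavyweight exit to userspace per kick.
 */
#include <stdint.h>
#include <string.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int assign_virtqueue_ioeventfd(int vm_fd, uint16_t notify_addr,
				      uint16_t virtqueue_index)
{
	struct kvm_ioeventfd ioeventfd;
	int efd = eventfd(0, 0);

	if (efd < 0)
		return -1;

	memset(&ioeventfd, 0, sizeof(ioeventfd));
	ioeventfd.addr      = notify_addr;	/* VIRTIO_PCI_QUEUE_NOTIFY port */
	ioeventfd.len       = 2;		/* guest does a 16-bit outw */
	ioeventfd.fd        = efd;
	ioeventfd.datamatch = virtqueue_index;	/* only fire for this queue */
	ioeventfd.flags     = KVM_IOEVENTFD_FLAG_PIO |
			      KVM_IOEVENTFD_FLAG_DATAMATCH;

	if (ioctl(vm_fd, KVM_IOEVENTFD, &ioeventfd) < 0)
		return -1;

	/* Poll this fd and process the virtqueue when it fires. */
	return efd;
}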