Subject: Re: [Qemu-devel] Re: [PATCH 2/3] virtio-pci: Use ioeventfd for virtqueue notify
From: Stefan Hajnoczi
Date: Wed, 1 Dec 2010 11:44:10 +0000
To: Avi Kivity
Cc: Stefan Hajnoczi, kvm@vger.kernel.org, "Michael S. Tsirkin", qemu-devel@nongnu.org, Christoph Hellwig, Khoa Huynh

On Mon, Nov 15, 2010 at 11:20 AM, Stefan Hajnoczi wrote:
> On Sun, Nov 14, 2010 at 12:19 PM, Avi Kivity wrote:
>> On 11/14/2010 01:05 PM, Avi Kivity wrote:
>>>> I agree, but let's enable virtio-ioeventfd carefully because bad code
>>>> is out there.
>>>
>>> Sure.  Note that as long as the thread waiting on ioeventfd doesn't
>>> consume too much cpu, it will awaken quickly and we won't have the
>>> "transaction per timeslice" effect.
>>>
>>> btw, what about virtio-blk with linux-aio?  Have you benchmarked that
>>> with and without ioeventfd?
>>>
>> And, what about efficiency?  As in bits/cycle?
>
> We are running benchmarks with this latest patch and will report results.

Full results here (thanks to Khoa Huynh):
http://wiki.qemu.org/Features/VirtioIoeventfd

The host CPU utilization is scaled to 16 CPUs, so a 2-3% reduction in
the scaled figure actually corresponds to 32-48% of a single CPU.

The guest CPU utilization numbers include an efficiency metric: %vcpu
per MB/sec.  Here we see significant improvements too.  Guests that
previously couldn't get more CPU work done have now regained some
breathing space.

Stefan