Date: Sun, 12 Dec 2010 22:41:29 +0200
From: "Michael S. Tsirkin"
To: Stefan Hajnoczi
Cc: qemu-devel@nongnu.org
Message-ID: <20101212204127.GA24726@redhat.com>
In-Reply-To: <1292166128-10874-1-git-send-email-stefanha@linux.vnet.ibm.com>
References: <1292166128-10874-1-git-send-email-stefanha@linux.vnet.ibm.com>
Subject: [Qemu-devel] Re: [PATCH v5 0/4] virtio: Use ioeventfd for virtqueue notify
List-Id: qemu-devel.nongnu.org

On Sun, Dec 12, 2010 at 03:02:04PM +0000, Stefan Hajnoczi wrote:
> See below for the v5 changelog.
>
> Due to lack of connectivity I am sending from GMail. Git should retain my
> stefanha@linux.vnet.ibm.com From address.
>
> Virtqueue notify is currently handled synchronously in userspace virtio. This
> prevents the vcpu from executing guest code while hardware emulation code
> handles the notify.
>
> On systems that support KVM, the ioeventfd mechanism can be used to make
> virtqueue notify a lightweight exit by deferring hardware emulation to the
> iothread and allowing the VM to continue execution. This model is similar to
> how vhost receives virtqueue notifies.
>
> The result of this change is improved performance for userspace virtio devices.
> Virtio-blk throughput increases especially for multithreaded scenarios and
> virtio-net transmit throughput increases substantially.

Interestingly, I see decreased throughput for small-message host-to-guest
netperf runs. The command that I used was:

netperf -H $vguest -- -m 200

And the results are:

- with ioeventfd=off

TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 11.0.0.104 (11.0.0.104) port 0 AF_INET : demo
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  16384    200     10.00     3035.48   15.50    99.30    6.695   2.680

- with ioeventfd=on

TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 11.0.0.104 (11.0.0.104) port 0 AF_INET : demo
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  16384    200     10.00     1770.95   18.16    51.65    13.442  2.389

Do you see this behaviour too?

-- 
MST
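[Editor's note: for readers reproducing the comparison above, a sketch of the two configurations follows. Only the ioeventfd=on|off property and the netperf invocation come from the thread; the rest of the QEMU command line (machine size, netdev setup, disk image path) is an illustrative assumption, not the exact setup either poster used.]

```shell
# Assumed example setup: boot a KVM guest with the virtio-net device's
# ioeventfd property set explicitly. Swap ioeventfd=off for the baseline run.
qemu-system-x86_64 -enable-kvm -m 1024 \
    -netdev tap,id=net0 \
    -device virtio-net-pci,netdev=net0,ioeventfd=on \
    -drive file=guest.img,if=virtio

# Then, from the host, run the small-message TCP stream test from the thread
# ($vguest is the guest's IP address, 11.0.0.104 in the results above):
netperf -H $vguest -- -m 200
```

With ioeventfd=on, the guest's virtqueue notify becomes a lightweight exit handled via an eventfd in the iothread rather than a synchronous return to userspace, which is the mechanism whose cost/benefit the two netperf runs are comparing.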