Message-ID: <4BA9010D.6010607@us.ibm.com>
Date: Tue, 23 Mar 2010 10:57:33 -0700
From: Badari Pulavarty
To: "Michael S. Tsirkin"
Cc: Discussion of kvm memory usage, qemu-devel@nongnu.org,
 virtualization@lists.linux-foundation.org
Subject: [Qemu-devel] Re: [RFC] vhost-blk implementation
In-Reply-To: <20100323123916.GA24750@redhat.com>
References: <1269304444.7931.68.camel@badari-desktop>
 <20100323123916.GA24750@redhat.com>

Michael S. Tsirkin wrote:
> On Mon, Mar 22, 2010 at 05:34:04PM -0700, Badari Pulavarty wrote:
>
>> Write Results:
>> ==============
>>
>> I see degraded IO performance when doing sequential IO write
>> tests with vhost-blk compared to virtio-blk.
>>
>> # time dd of=/dev/vda if=/dev/zero bs=2M oflag=direct
>>
>> I get ~110MB/sec with virtio-blk, but only ~60MB/sec with
>> vhost-blk. Wondering why?
>
> Try to look at the number of interrupts and/or the number of exits.

I checked interrupts and IO exits - there is no major noticeable
difference between the vhost-blk and virtio-blk scenarios.

> It could also be that you are overrunning some queue.
>
> I don't see any exit mitigation strategy in your patch:
> when there are already lots of requests in a queue, it's usually
> a good idea to disable notifications and poll the
> queue as requests complete. That could help performance.

Do you mean polling the eventfd for new requests instead of waiting
for new notifications? Where do you do that in the vhost-net code?

Unlike a network socket, we are dealing with a file here, which has
no ->poll support, so I can't poll for data. Also, the issue I am
having is on the write() side.

I looked at it some more: I see 512K write requests on the virtio
queue in both the vhost-blk and virtio-blk cases. Both qemu and vhost
do synchronous writes to the page cache (there is no write batching
in qemu affecting this case). I am still puzzled as to why virtio-blk
outperforms vhost-blk.

Thanks,
Badari
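
P.S. For anyone following along, here is a minimal sketch of the exit
mitigation pattern Michael describes, modeled on the handle_tx() loop
in the vhost-net driver. The struct vhost_blk, handle_blk() and
vhost_blk_do_request() names are made up for illustration; only the
vhost_disable_notify()/vhost_enable_notify() dance around
vhost_get_vq_desc() is taken from the existing vhost code, and the
helper signatures assume the current vhost tree.

/* Sketch only: a vhost-blk request loop with exit mitigation.
 * Guest notifications stay disabled while the ring is drained,
 * so a burst of back-to-back requests costs a single exit. */
static void handle_blk(struct vhost_blk *blk)
{
	struct vhost_virtqueue *vq = &blk->vq;
	unsigned int out, in, head;
	int len;

	mutex_lock(&vq->mutex);
	vhost_disable_notify(vq);

	for (;;) {
		head = vhost_get_vq_desc(&blk->dev, vq, vq->iov,
					 ARRAY_SIZE(vq->iov),
					 &out, &in, NULL, NULL);
		if (head == vq->num) {
			/* Ring drained: re-arm notifications, then
			 * re-check to close the race with a request
			 * that slipped in before the re-arm. */
			if (unlikely(vhost_enable_notify(vq))) {
				vhost_disable_notify(vq);
				continue;
			}
			break;
		}

		/* Hypothetical helper: submit the I/O described by
		 * the descriptor chain, return the number of bytes
		 * written into the in-buffers (at least the status
		 * byte). */
		len = vhost_blk_do_request(blk, vq, head, out, in);

		vhost_add_used_and_signal(&blk->dev, vq, head, len);
	}

	mutex_unlock(&vq->mutex);
}

The point is that while the handler is busy draining the ring, further
guest kicks (and hence exits) are suppressed; the guest only needs to
notify again once the ring has been observed empty.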