Date: Tue, 23 Mar 2010 20:06:44 +0200
From: "Michael S. Tsirkin"
Message-ID: <20100323180642.GA20175@redhat.com>
References: <1269304444.7931.68.camel@badari-desktop> <20100323123916.GA24750@redhat.com> <4BA9010D.6010607@us.ibm.com>
In-Reply-To: <4BA9010D.6010607@us.ibm.com>
Subject: [Qemu-devel] Re: [RFC] vhost-blk implementation
List-Id: qemu-devel.nongnu.org
To: Badari Pulavarty
Cc: Discussion of kvm memory usage, qemu-devel@nongnu.org, virtualization@lists.linux-foundation.org

On Tue, Mar 23, 2010 at 10:57:33AM -0700, Badari Pulavarty wrote:
> Michael S. Tsirkin wrote:
>> On Mon, Mar 22, 2010 at 05:34:04PM -0700, Badari Pulavarty wrote:
>>
>>> Write Results:
>>> ==============
>>>
>>> I see degraded IO performance when doing sequential IO write
>>> tests with vhost-blk compared to virtio-blk.
>>>
>>> # time dd of=/dev/vda if=/dev/zero bs=2M oflag=direct
>>>
>>> I get ~110MB/sec with virtio-blk, but I get only ~60MB/sec with
>>> vhost-blk. Wondering why?
>>
>> Try to look at the number of interrupts and/or the number of exits.
>
> I checked interrupts and IO exits - there is no major noticeable
> difference between vhost-blk and virtio-blk scenarios.
>
>> It could also be that you are overrunning some queue.
>>
>> I don't see any exit mitigation strategy in your patch:
>> when there are already lots of requests in a queue, it's usually
>> a good idea to disable notifications and poll the
>> queue as requests complete. That could help performance.
>
> Do you mean poll the eventfd for new requests instead of waiting for
> new notifications? Where do you do that in the vhost-net code?

vhost_disable_notify does this.

> Unlike a network socket, since we are dealing with a file, there is no
> ->poll support for it, so I can't poll for the data. Also, the issue
> I am having is on the write() side.

Not sure I understand.

> I looked at it some more - I see 512K write requests on the
> virtio queue in both the vhost-blk and virtio-blk cases. Both qemu and
> vhost do synchronous writes to the page cache (there is no write
> batching in qemu that affects this case). I am still puzzled as to why
> virtio-blk outperforms vhost-blk.
>
> Thanks,
> Badari

If you say the number of requests is the same, we are left with:
- requests are smaller for some reason?
- something is causing retries?

--
MST
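The exit-mitigation strategy discussed in the thread (disable guest notifications while the host still has work queued, poll the ring, and re-arm notifications only once it is empty) can be illustrated with a toy model. This is a hypothetical Python sketch, not the vhost code: `run()` and its tick-based timing are invented for illustration. In the kernel the suppression is done through vhost_disable_notify()/vhost_enable_notify(), which flip a flag in the vring so the guest skips the kick (vmexit) while the host is still draining.

```python
# Toy model of virtio exit mitigation (hypothetical sketch, not the vhost code).
# A guest submits one request per tick; the host is slower and services one
# request every other tick, so the ring backs up under sustained load.

def run(total, suppress):
    """Return the number of guest kicks (exits) needed to complete `total`
    requests, with or without notification suppression on the host side."""
    exits = 0
    queue = []                 # requests pending on the ring
    submitted = completed = tick = 0
    notify_enabled = True      # guest kicks the host only while this is set
    while completed < total:
        tick += 1
        # Guest side: place one request on the ring per tick.
        if submitted < total:
            queue.append(submitted)
            submitted += 1
            if notify_enabled:
                exits += 1                 # kick the host: one exit
        # Host side: service the ring every other tick (the slow side).
        if queue and tick % 2 == 0:
            if suppress and len(queue) > 1:
                notify_enabled = False     # more work queued: poll, don't ask for kicks
            queue.pop(0)
            completed += 1
            if not queue:
                notify_enabled = True      # ring drained: re-arm notifications
    return exits

plain = run(1024, suppress=False)
mitigated = run(1024, suppress=True)
print(plain, mitigated)
```

With these made-up timings the unmitigated run pays one exit per request, while the suppressed run pays only a couple, because the host keeps catching the guest with the ring still non-empty. A real implementation must also close the race on re-arming: after re-enabling notifications, re-check the ring for requests that slipped in meanwhile, as the vhost-net handlers do around vhost_enable_notify().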