Date: Wed, 24 Mar 2010 11:52:21 +0200
From: "Michael S. Tsirkin"
Message-ID: <20100324095221.GA7622@redhat.com>
References: <1269304444.7931.68.camel@badari-desktop> <20100323123916.GA24750@redhat.com> <4BA9010D.6010607@us.ibm.com> <20100323180642.GA20175@redhat.com> <4BA91C9B.8070900@us.ibm.com>
In-Reply-To: <4BA91C9B.8070900@us.ibm.com>
Subject: [Qemu-devel] Re: [RFC] vhost-blk implementation
To: Badari Pulavarty
Cc: qemu-devel@nongnu.org, virtualization@lists.linux-foundation.org

On Tue, Mar 23, 2010 at 12:55:07PM -0700, Badari Pulavarty wrote:
> Michael S. Tsirkin wrote:
>> On Tue, Mar 23, 2010 at 10:57:33AM -0700, Badari Pulavarty wrote:
>>>
>>> Michael S. Tsirkin wrote:
>>>
>>>> On Mon, Mar 22, 2010 at 05:34:04PM -0700, Badari Pulavarty wrote:
>>>>
>>>>> Write Results:
>>>>> ==============
>>>>>
>>>>> I see degraded IO performance when doing sequential IO write
>>>>> tests with vhost-blk compared to virtio-blk.
>>>>>
>>>>> # time dd of=/dev/vda if=/dev/zero bs=2M oflag=direct
>>>>>
>>>>> I get ~110MB/sec with virtio-blk, but I get only ~60MB/sec with
>>>>> vhost-blk. Wondering why?
>>>>>
>>>> Try to look at the number of interrupts and/or number of exits.
>>>>
>>> I checked interrupts and IO exits - there is no major noticeable
>>> difference between the vhost-blk and virtio-blk scenarios.
>>>
>>>> It could also be that you are overrunning some queue.
>>>>
>>>> I don't see any exit mitigation strategy in your patch:
>>>> when there are already lots of requests in a queue, it's usually
>>>> a good idea to disable notifications and poll the
>>>> queue as requests complete. That could help performance.
>>>>
>>> Do you mean poll the eventfd for new requests instead of waiting for
>>> new notifications? Where do you do that in the vhost-net code?
>>>
>> vhost_disable_notify does this.
>>
>>> Unlike a network socket, since we are dealing with a file, there is no
>>> ->poll support for it, so I can't poll for the data. Also, the issue
>>> I am having is on the write() side.
>>>
>> Not sure I understand.
>>
>>> I looked at it some more - I see 512K write requests on the
>>> virtio queue in both the vhost-blk and virtio-blk cases. Both qemu and
>>> vhost do synchronous writes to the page cache (there is no write
>>> batching in qemu affecting this case). I am still puzzled as to
>>> why virtio-blk outperforms vhost-blk.
>>>
>>> Thanks,
>>> Badari
>>>
>> If you say the number of requests is the same, we are left with:
>> - requests are smaller for some reason?
>> - something is causing retries?
>>
> No. IO request sizes are exactly the same (512K) in both cases.
> There are no retries or errors in either case. One thing I am not
> clear on is: for some reason the guest kernel could push more data
> into the virtio ring in the virtio-blk case than in the vhost-blk
> case. Is this possible? Does the guest get to run much sooner in the
> virtio-blk case than with vhost-blk? Sorry if it's a dumb question -
> I don't understand all the vhost details :(
>
> Thanks,
> Badari
>

You said you observed the same number of requests in userspace versus
the kernel above, and the request size is the same as well. But somehow
more data is transferred? I'm confused.

--
MST
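
For context, the exit mitigation pattern discussed above - disable
guest notifications while draining the queue, then re-check after
re-enabling them - has roughly the shape sketched below. This is a
minimal illustrative sketch in C, not the actual vhost code: the
queue/request types and the pop_request(), process(), disable_notify(),
and enable_notify() helpers are hypothetical stand-ins for what
vhost_disable_notify()/vhost_enable_notify() and friends do in
drivers/vhost/vhost.c.

    #include <stdbool.h>
    #include <stddef.h>

    struct queue;
    struct request;

    /* Hypothetical helpers, not the real vhost API. */
    struct request *pop_request(struct queue *q); /* NULL when ring is empty */
    void process(struct request *req);
    void disable_notify(struct queue *q);   /* suppress guest->host kicks */
    bool enable_notify(struct queue *q);    /* true if new work raced in */

    void handle_kick(struct queue *q)
    {
            struct request *req;

            for (;;) {
                    /* While we are draining the ring, every extra guest
                     * notification is a wasted exit: turn them off. */
                    disable_notify(q);

                    while ((req = pop_request(q)) != NULL)
                            process(req);

                    /* Re-enable notifications, then re-check the ring to
                     * close the race where the guest queued a request
                     * after our last pop but before notifications were
                     * back on. */
                    if (!enable_notify(q))
                            break; /* ring stayed empty: wait for next kick */
            }
    }

The point of this shape is that a burst of N back-to-back requests
costs roughly one notification exit instead of N, which is why the
absence of such mitigation is a plausible suspect for the vhost-blk
slowdown being discussed.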