From: Badari Pulavarty
Date: Tue, 23 Mar 2010 12:55:07 -0700
Subject: [Qemu-devel] Re: [RFC] vhost-blk implementation
To: "Michael S. Tsirkin"
Cc: qemu-devel@nongnu.org, virtualization@lists.linux-foundation.org

Michael S. Tsirkin wrote:
> On Tue, Mar 23, 2010 at 10:57:33AM -0700, Badari Pulavarty wrote:
>
>> Michael S. Tsirkin wrote:
>>
>>> On Mon, Mar 22, 2010 at 05:34:04PM -0700, Badari Pulavarty wrote:
>>>
>>>> Write Results:
>>>> ==============
>>>>
>>>> I see degraded IO performance when doing sequential IO write
>>>> tests with vhost-blk compared to virtio-blk.
>>>>
>>>> # time dd of=/dev/vda if=/dev/zero bs=2M oflag=direct
>>>>
>>>> I get ~110MB/sec with virtio-blk, but only ~60MB/sec with
>>>> vhost-blk. Wondering why?
>>>>
>>> Try to look at the number of interrupts and/or the number of exits.
>>>
>> I checked interrupts and IO exits - there is no major noticeable
>> difference between the vhost-blk and virtio-blk scenarios.
>>
>>> It could also be that you are overrunning some queue.
>>>
>>> I don't see any exit mitigation strategy in your patch:
>>> when there are already lots of requests in a queue, it's usually
>>> a good idea to disable notifications and poll the
>>> queue as requests complete. That could help performance.
>>>
>> Do you mean poll the eventfd for new requests instead of waiting
>> for new notifications? Where do you do that in the vhost-net code?
>>
>
> vhost_disable_notify does this.
>
>> Unlike a network socket, since we are dealing with a file, there is
>> no ->poll support for it, so I can't poll for the data. Also, the
>> issue I am having is on the write() side.
>>
>
> Not sure I understand.
>
>> I looked at it some more - I see 512K write requests on the
>> virtio-queue in both the vhost-blk and virtio-blk cases. Both qemu
>> and vhost are doing synchronous writes to the page cache (there is
>> no write batching in qemu that affects this case). I am still
>> puzzled as to why virtio-blk outperforms vhost-blk.
>>
>> Thanks,
>> Badari
>>
>
> If you say the number of requests is the same, we are left with:
> - requests are smaller for some reason?
> - something is causing retries?
>

No. The IO request sizes are exactly the same (512K) in both cases,
and there are no retries or errors in either case. One thing I am not
clear on: could the guest kernel, for some reason, push more data into
the virtio ring with virtio-blk than with vhost-blk? Is this possible?
Does the guest get to run much sooner in the virtio-blk case than in
the vhost-blk case? Sorry if it's a dumb question - I don't understand
all the vhost details :(

Thanks,
Badari
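
For reference, the exit mitigation pattern Michael describes is what
handle_tx()/handle_rx() in drivers/vhost/net.c do, and it also answers
the earlier "where do you do that in vhost-net?" question. Below is a
minimal sketch of that pattern transplanted into a hypothetical
vhost-blk request handler. struct vhost_blk, handle_blk(), and
submit_request() are illustrative names, not code from the posted
patch; vhost_disable_notify(), vhost_enable_notify(),
vhost_get_vq_desc(), and vhost_add_used_and_signal() are the vhost
core helpers as of the 2.6.33/34-era kernels (their signatures changed
in later releases). Completion is shown inline for brevity; a real
implementation would complete requests asynchronously.

    #include "vhost.h"	/* vhost core, as included by drivers/vhost/net.c */

    /* Illustrative device struct; a real vhost-blk defines its own. */
    struct vhost_blk {
    	struct vhost_dev dev;
    	struct vhost_virtqueue vq;
    };

    /*
     * Drain the ring with guest notifications disabled, re-enabling
     * (and rechecking) only once the ring is empty. While the handler
     * is busy, further guest kicks - and the exits they cost - are
     * suppressed.
     */
    static void handle_blk(struct vhost_blk *blk)
    {
    	struct vhost_virtqueue *vq = &blk->vq;
    	unsigned int out, in;
    	int head;

    	mutex_lock(&vq->mutex);
    	vhost_disable_notify(vq);	/* suppress guest kicks */

    	for (;;) {
    		head = vhost_get_vq_desc(&blk->dev, vq, vq->iov,
    					 ARRAY_SIZE(vq->iov),
    					 &out, &in, NULL, NULL);
    		if (head == vq->num) {
    			/*
    			 * Ring empty: re-enable notifications, then
    			 * recheck to close the race with a request the
    			 * guest queued in the meantime.
    			 */
    			if (unlikely(vhost_enable_notify(vq))) {
    				vhost_disable_notify(vq);
    				continue;
    			}
    			break;
    		}

    		/*
    		 * Hypothetical helper: issue the IO described by the
    		 * descriptor chain (out = to-device segments,
    		 * in = from-device segments).
    		 */
    		submit_request(blk, head, out, in);

    		/*
    		 * Report completion; the core suppresses the interrupt
    		 * when the guest asked not to be notified.
    		 */
    		vhost_add_used_and_signal(&blk->dev, vq, head, 0);
    	}

    	mutex_unlock(&vq->mutex);
    }

The enable-then-recheck dance at the empty-ring exit is the important
part: it avoids missing a request that slipped in between the last
empty check and re-enabling notifications, so the handler never parks
with work still pending.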