From: Avi Kivity
Subject: Re: [RFC] vhost-blk implementation
Date: Tue, 30 Mar 2010 15:43:06 +0300
Message-ID: <4BB1F1DA.2060907@redhat.com>
References: <1269306023.7931.72.camel@badari-desktop> <20100324200402.GA22272@infradead.org> <1269877312.7931.93.camel@badari-desktop> <20100329182010.GM1744@sequoia.sous-sol.org> <4BB10F75.2000701@redhat.com> <1269903085.7931.99.camel@badari-desktop>
In-Reply-To: <1269903085.7931.99.camel@badari-desktop>
To: Badari Pulavarty
Cc: Chris Wright, Christoph Hellwig, kvm@vger.kernel.org

On 03/30/2010 01:51 AM, Badari Pulavarty wrote:
>
>>> Your io wait time is twice as long and your throughput is about half.
>>> I think the qemu block submission does an extra attempt at merging
>>> requests. Does blktrace tell you anything interesting?
>>>
> Yes. I see that in my testcase (2M writes) - QEMU is picking up 512K
> requests from the virtio ring and merging them back to 2M before
> submitting them.
>
> Unfortunately, I can't do that quite easily in vhost-blk. QEMU
> re-creates iovecs for the merged I/O. I have to come up with
> a scheme to do this :(

I don't think that either vhost-blk or virtio-blk should do this.
Merging increases latency, and in the case of streaming writes makes it
impossible for the guest to prepare new requests while earlier ones are
being serviced (in effect it reduces the queue depth to 1). qcow2 does
benefit from merging, but it should do so itself without impacting raw.

>> It does. I suggest using fio O_DIRECT random access patterns to avoid
>> such issues.
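For reference, the suggested workload can be written as a fio job file along these lines; the device path, block size, queue depth, and runtime below are illustrative placeholders, not values anyone in the thread specified:

```ini
; randread.fio -- O_DIRECT random access, which gives the elevator
; nothing to merge. All concrete values here are assumptions.
[global]
ioengine=libaio
direct=1          ; O_DIRECT: bypass the guest page cache
rw=randread       ; random access pattern defeats request merging
bs=4k
iodepth=32
runtime=60
time_based

[virtio-disk]
filename=/dev/vdb ; substitute the virtio disk under test
```

Run with `fio randread.fio`; fio reports both bandwidth and completion latency, so the two can be compared directly.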
>>
> Well, I am not trying to come up with a test case where vhost-blk
> performs better than virtio-blk. I am trying to understand where
> and why vhost-blk performs worse than virtio-blk.

In this case qemu-virtio is making an incorrect tradeoff: the guest
could easily merge those requests itself. If you want larger writes,
tune the guest to issue them.

Another way to look at it: merging improved bandwidth but increased
latency, yet you are only measuring bandwidth. If you measured only
latency you'd find that vhost-blk is better.

-- 
Do not meddle in the internals of kernels, for they are subtle and
quick to panic.
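The bandwidth-versus-latency point above can be made concrete with a back-of-envelope model: a fixed per-request overhead plus size-proportional transfer time. The overhead and throughput figures below are made-up illustrative assumptions, not measurements from this thread:

```python
# Toy model of merged vs. unmerged submission.
# All numbers are illustrative assumptions, not measurements.

OVERHEAD_MS = 0.1           # assumed fixed per-request submission cost
THROUGHPUT_MB_PER_MS = 0.2  # assumed 200 MB/s device transfer rate

def service_ms(size_mb):
    """Time for the device to service one request of size_mb."""
    return OVERHEAD_MS + size_mb / THROUGHPUT_MB_PER_MS

# Merged: one 2M request; nothing completes until the whole thing is done.
merged_total = service_ms(2.0)
merged_first_completion = merged_total

# Unmerged: four 512K requests queued back to back (device works serially).
unmerged_total = 4 * service_ms(0.5)
unmerged_first_completion = service_ms(0.5)

print(f"merged:   total {merged_total:.1f} ms, "
      f"first completion {merged_first_completion:.1f} ms")
print(f"unmerged: total {unmerged_total:.1f} ms, "
      f"first completion {unmerged_first_completion:.1f} ms")
```

With these assumed numbers the merged request finishes the full 2M slightly sooner (10.1 ms vs. 10.4 ms, since overhead is paid once instead of four times), but the first completion comes back four times later (10.1 ms vs. 2.6 ms): a bandwidth benchmark sees the former, a latency benchmark the latter.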