From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 27 Aug 2015 17:37:35 +0100
From: Stefan Hajnoczi
Message-ID: <20150827163735.GC8298@stefanha-thinkpad.redhat.com>
References: <53CA1D06.9090601@redhat.com>
 <20140719084537.GA3058@irqsave.net>
 <53CD2AE1.6080803@windriver.com>
 <20140721151540.GA22161@irqsave.net>
 <53CD3341.60705@windriver.com>
 <20140721161034.GC22161@irqsave.net>
 <53F7E77A.9050509@windriver.com>
 <20140823075658.GA6687@irqsave.net>
 <53FB5276.7050003@windriver.com>
 <53FB75CA.6070500@windriver.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <53FB75CA.6070500@windriver.com>
Subject: Re: [Qemu-devel] is there a limit on the number of in-flight I/O operations?
To: Chris Friesen
Cc: Benoît Canet, Paolo Bonzini, qemu-devel@nongnu.org

On Mon, Aug 25, 2014 at 11:43:38AM -0600, Chris Friesen wrote:
> On 08/25/2014 09:12 AM, Chris Friesen wrote:
>
> > I set up another test, checking the inflight value every second.
> >
> > Running just "dd if=/dev/zero of=testfile2 bs=1M count=700
> > oflag=nocache &" gave a bit over 100 inflight requests.
> >
> > If I simultaneously run "dd if=testfile of=/dev/null bs=1M count=700
> > oflag=nocache &" then the number of inflight write requests peaks
> > at 176.
> >
> > I should point out that the above numbers are with qemu 1.7.0, with
> > a ceph storage backend. qemu is started with
> >
> > -drive file=rbd:cinder-volumes/.........
>
> From a stacktrace that I added it looks like the writes are coming in
> via virtio_blk_handle_output().
>
> Looking at virtio_blk_device_init() I see it calling
> virtio_add_queue(vdev, 128, virtio_blk_handle_output);
>
> I wondered if that 128 had anything to do with the number of inflight
> requests, so I tried recompiling with 16 instead. I still saw the
> number of inflight requests go up to 178, and the guest took a kernel
> panic in virtqueue_add_buf(), so that wasn't very successful. :)
>
> Following the code path in virtio_blk_handle_write() it looks like it
> will bundle up to 32 writes into a single large iovec-based
> "multiwrite" operation. But from there on down I don't see a limit on
> how many writes can be outstanding at any one time. Still checking
> the code further up the virtio call chain.

Yes, virtio-blk does write merging. Since QEMU 2.4.0 it also does read
request merging.

I suggest using the fio benchmark tool with the following job file to
try submitting 256 I/O requests at the same time:

[randread]
blocksize=4k
filename=/dev/vda
rw=randread
direct=1
ioengine=libaio
iodepth=256
runtime=120
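
To run it, save the job file as e.g. randread.fio inside the guest (the
name is arbitrary) and point fio at it:

  $ fio randread.fio

With ioengine=libaio and direct=1, fio keeps up to iodepth=256 requests
outstanding against /dev/vda for the 120-second run. While it runs,
"cat /sys/block/vda/inflight" in the guest shows how many reads and
writes the guest block layer currently has in flight, which you can
compare against the counts you were sampling on the host side.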
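
For what it's worth, here is a rough sketch of the kind of coalescing
the multiwrite path does. This is not QEMU's actual code (the struct
and function names are invented): take a leading run of
sector-contiguous writes, build one iovec array from them, and cap the
batch at 32, like the multiwrite limit mentioned above:

/* Illustrative only -- not QEMU's virtio-blk code. Names invented. */
#include <stddef.h>
#include <stdint.h>
#include <sys/uio.h>

#define MAX_MERGED 32            /* mirrors the 32-write batch limit */

struct pending_write {           /* one guest write request */
    uint64_t sector;             /* starting sector */
    size_t   nsectors;           /* length in 512-byte sectors */
    void    *buf;                /* data to be written */
};

/*
 * Coalesce a leading run of sector-contiguous writes into one iovec
 * array.  Returns how many requests were consumed; the caller would
 * submit iov[0..*iovcnt-1] as a single write starting at
 * reqs[0].sector.
 */
static int merge_writes(const struct pending_write *reqs, int nreqs,
                        struct iovec *iov, int *iovcnt)
{
    int n = 0;
    uint64_t next;

    if (nreqs <= 0) {
        *iovcnt = 0;
        return 0;
    }

    next = reqs[0].sector;
    while (n < nreqs && n < MAX_MERGED && reqs[n].sector == next) {
        iov[n].iov_base = reqs[n].buf;
        iov[n].iov_len  = reqs[n].nsectors * 512;
        next += reqs[n].nsectors;
        n++;
    }
    *iovcnt = n;
    return n;
}

The point is just that merging turns several guest requests into one
backend I/O, so the guest-visible request count and the number of
operations in flight at the backend need not match.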