Date: Mon, 25 Aug 2014 11:43:38 -0600
From: Chris Friesen
Subject: Re: [Qemu-devel] is there a limit on the number of in-flight I/O operations?
To: Benoît Canet
Cc: Paolo Bonzini, qemu-devel@nongnu.org
Message-ID: <53FB75CA.6070500@windriver.com>
In-Reply-To: <53FB5276.7050003@windriver.com>

On 08/25/2014 09:12 AM, Chris Friesen wrote:
> I set up another test, checking the inflight value every second.
>
> Running just "dd if=/dev/zero of=testfile2 bs=1M count=700
> oflag=nocache&" gave a bit over 100 inflight requests.
>
> If I simultaneously run "dd if=testfile of=/dev/null bs=1M count=700
> oflag=nocache&" then the number of inflight write requests peaks at 176.
>
> I should point out that the above numbers are with qemu 1.7.0, with a
> ceph storage backend.
> qemu is started with
> -drive file=rbd:cinder-volumes/.........

From a stacktrace that I added, it looks like the writes are coming in via virtio_blk_handle_output().

Looking at virtio_blk_device_init() I see it calling:

virtio_add_queue(vdev, 128, virtio_blk_handle_output);

I wondered if that 128 had anything to do with the number of inflight requests, so I tried recompiling with 16 instead. I still saw the number of inflight requests go up to 178, and the guest took a kernel panic in virtqueue_add_buf(), so that wasn't very successful. :)

Following the code path in virtio_blk_handle_write(), it looks like it will bundle up to 32 writes into a single large iovec-based "multiwrite" operation. But from there on down I don't see a limit on how many writes can be outstanding at any one time.

Still checking the code further up the virtio call chain.

Chris