From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <55DFABC8.5040006@redhat.com>
Date: Thu, 27 Aug 2015 17:31:04 -0700
From: Josh Durgin
Subject: Re: [Qemu-devel] is there a limit on the number of in-flight I/O operations?
In-Reply-To: <20150827164952.GE8298@stefanha-thinkpad.redhat.com>
References: <53CA0FC4.8080802@windriver.com> <53CA1D06.9090601@redhat.com>
 <20140719084537.GA3058@irqsave.net> <53CD2AE1.6080803@windriver.com>
 <20140721151540.GA22161@irqsave.net> <53CD3341.60705@windriver.com>
 <20140721161034.GC22161@irqsave.net> <53F7E77A.9050509@windriver.com>
 <20140823075658.GA6687@irqsave.net> <53FBAF8A.3050005@windriver.com>
 <20150827164952.GE8298@stefanha-thinkpad.redhat.com>
To: Stefan Hajnoczi, Chris Friesen
Cc: Benoît Canet, Paolo Bonzini, Andrey Korolyov, qemu-devel@nongnu.org

On 08/27/2015 09:49 AM, Stefan Hajnoczi wrote:
> On Mon, Aug 25, 2014 at 03:50:02PM -0600, Chris Friesen wrote:
>> The only limit I see in the whole call chain from
>> virtio_blk_handle_request() on down is the call to
>> bdrv_io_limits_intercept() in bdrv_co_do_writev().  However, that
>> doesn't provide any limit on the absolute number of in-flight
>> operations, only on operations/sec.  If the ceph server cluster
>> can't keep up with the aggregate load, then the number of in-flight
>> operations can still grow indefinitely.
>
> We probably shouldn't rely on QEMU I/O throttling to keep memory usage
> reasonable.

Agreed.

> Instead rbd should be adjusted to support iovecs as you suggested.
> That way no bounce buffers are needed.

Yeah, this is pretty simple to do.  Internally librbd has iovec
equivalents.  I'm not sure the bounce buffers are the main source of
extra memory usage here, though.

I suspect the main culprit is the rbd cache letting itself burst too
large, rather than the bounce buffers.

Andrey, does this still occur with caching off?

Josh
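
P.S. Andrey, in case it helps with that test: the cache can be turned
off (or bounded) from ceph.conf.  This is only an illustrative snippet;
the commented-out sizes are example values in bytes, not
recommendations:

    [client]
    # disable the librbd writeback cache entirely for the test
    rbd cache = false

    # ...or keep the cache but bound how much it can buffer
    #rbd cache = true
    #rbd cache size = 33554432
    #rbd cache max dirty = 25165824

As far as I know, QEMU's rbd driver also maps the -drive cache= mode
onto librbd's cache setting, so cache=none on the QEMU side should
behave like rbd cache = false.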
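
P.P.S. On Chris's original point about the throttle only limiting
operations/sec: here is a minimal, self-contained sketch (plain
pthreads, nothing to do with QEMU's actual block layer; MAX_INFLIGHT,
REQ_SIZE and the thread-per-request "backend" are invented for
illustration) of why a cap on the number of in-flight requests bounds
bounce-buffer memory at MAX_INFLIGHT * REQ_SIZE, while a pure rate
limit does not once the backend falls behind:

    /*
     * Illustrative sketch only -- not QEMU or librbd code.
     * Build with: gcc -pthread sketch.c
     */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define MAX_INFLIGHT 64            /* hypothetical cap on in-flight requests */
    #define REQ_SIZE     (64 * 1024)   /* hypothetical per-request bounce buffer */

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static int inflight;

    /* Stand-in for the storage backend completing a request later. */
    static void *backend_complete(void *bounce)
    {
        usleep(10 * 1000);             /* pretend the backend took 10 ms */
        free(bounce);                  /* bounce buffer lives until completion */

        pthread_mutex_lock(&lock);
        inflight--;
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    static void submit_request(void)
    {
        pthread_mutex_lock(&lock);
        while (inflight >= MAX_INFLIGHT) {   /* the cap: wait for a completion */
            pthread_cond_wait(&cond, &lock);
        }
        inflight++;
        pthread_mutex_unlock(&lock);

        char *bounce = malloc(REQ_SIZE);     /* memory held per in-flight op */
        memset(bounce, 0, REQ_SIZE);

        pthread_t t;
        pthread_create(&t, NULL, backend_complete, bounce);
        pthread_detach(&t);
    }

    int main(void)
    {
        for (int i = 0; i < 1000; i++) {
            submit_request();                /* guest keeps issuing requests */
        }

        /* Drain outstanding requests before reporting. */
        pthread_mutex_lock(&lock);
        while (inflight > 0) {
            pthread_cond_wait(&cond, &lock);
        }
        pthread_mutex_unlock(&lock);

        printf("peak bounce-buffer memory <= %d KiB\n",
               MAX_INFLIGHT * REQ_SIZE / 1024);
        return 0;
    }

Remove the while (inflight >= MAX_INFLIGHT) wait and it turns back into
the unbounded case Chris described: memory tied up in flight grows as
fast as the guest can submit, regardless of any ops/sec throttle.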