From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <53C94ABB.1060406@windriver.com>
Date: Fri, 18 Jul 2014 10:26:35 -0600
From: Chris Friesen
MIME-Version: 1.0
References: <53C9362C.8040507@windriver.com>
In-Reply-To:
Content-Type: text/plain; charset="ISO-8859-1"; format=flowed
Content-Transfer-Encoding: 7bit
Subject: Re: [Qemu-devel] is there a limit on the number of in-flight I/O operations?
To: Andrey Korolyov
Cc: "qemu-devel@nongnu.org" <qemu-devel@nongnu.org>

On 07/18/2014 09:54 AM, Andrey Korolyov wrote:
> On Fri, Jul 18, 2014 at 6:58 PM, Chris Friesen wrote:
>> Hi,
>>
>> I've recently run up against an interesting issue where I had a
>> number of guests running, and when I started doing heavy disk I/O on
>> a virtio disk (backed via ceph rbd) the memory consumption spiked and
>> triggered the OOM-killer.
>>
>> I want to reserve some memory for I/O, but I don't know how much it
>> can use in the worst case.
>>
>> Is there a limit on the number of in-flight I/O operations?
>> (Preferably as a configurable option, but even hard-coded would be
>> good to know as well.)
>>
>> Thanks,
>> Chris
>
> Hi, are you using per-VM cgroups, or did this happen on a bare
> system? The Ceph backend has a writeback cache setting; you may be
> hitting it, but it would have to be set enormously large.

This is without cgroups. (I think we had tried cgroups and ran into
some issues.) Would cgroups even help with iSCSI/rbd/etc.?

The "-drive" parameter in qemu was using "cache=none" for the VMs in
question. But I'm assuming qemu keeps each buffer around until it is
acked by the far end, in order to be able to handle retries.

Chris
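
[A note on the cgroups question, not from the original thread: the
blkio controller only throttles I/O that passes through the host block
layer, so it never sees librbd traffic, which goes straight out over
the network; the memory controller, however, can still cap the qemu
process as a whole. A minimal cgroup-v1 sketch, using a hypothetical
group name "vm-foo" and qemu pid $QEMU_PID:

    # create a memory cgroup for one VM and cap it at 4 GiB
    mkdir /sys/fs/cgroup/memory/vm-foo
    echo $((4 * 1024 * 1024 * 1024)) > \
        /sys/fs/cgroup/memory/vm-foo/memory.limit_in_bytes
    # move the qemu process into the group
    echo $QEMU_PID > /sys/fs/cgroup/memory/vm-foo/tasks

With such a cap, runaway I/O buffering triggers the OOM killer against
that one cgroup rather than against the whole host.]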
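
[For concreteness, a sketch of the knobs discussed above, also not
from the original thread. Andrey's "writeback cache setting" is the
librbd cache, configured in the [client] section of ceph.conf; the
sizes shown are the 2014-era defaults, purely for illustration. Note
that qemu's rbd driver disables the librbd cache when the drive is
opened with cache=none:

    [client]
        rbd cache = true
        rbd cache size = 33554432        # per-image cache, bytes
        rbd cache max dirty = 25165824   # dirty limit before writes block

qemu's own -drive throttling (available since qemu 1.1) can also bound
the request rate, which indirectly limits how much I/O can be in
flight at once; the pool, image, and limits here are made up:

    -drive file=rbd:mypool/myimage,cache=none,iops=500,bps=67108864]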