From: Chris Friesen <chris.friesen@windriver.com>
To: Andrey Korolyov <andrey@xdel.ru>
Cc: "qemu-devel@nongnu.org" <qemu-devel@nongnu.org>
Subject: Re: [Qemu-devel] is there a limit on the number of in-flight I/O operations?
Date: Fri, 18 Jul 2014 10:26:35 -0600
Message-ID: <53C94ABB.1060406@windriver.com>
In-Reply-To: <CABYiri9amkDzEh8-3wBVL2c5pHPVfW4+KEZAktZKfiOFdPV0+w@mail.gmail.com>
On 07/18/2014 09:54 AM, Andrey Korolyov wrote:
> On Fri, Jul 18, 2014 at 6:58 PM, Chris Friesen
> <chris.friesen@windriver.com> wrote:
>> Hi,
>>
>> I've recently run up against an interesting issue where I had a number of
>> guests running and when I started doing heavy disk I/O on a virtio disk
>> (backed via ceph rbd) the memory consumption spiked and triggered the
>> OOM-killer.
>>
>> I want to reserve some memory for I/O, but I don't know how much it can use
>> in the worst-case.
>>
>> Is there a limit on the number of in-flight I/O operations? (Preferably as
>> a configurable option, but even hard-coded would be good to know as well.)
>>
>> Thanks,
>> Chris
>>
>
> Hi, are you using per-VM cgroups, or did this happen on the bare system?
> The Ceph backend has a writeback cache setting; you may be hitting it,
> but it would have to be set enormously large.
>
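For reference, the writeback cache Andrey mentions is configured on the
client side in ceph.conf. A minimal sketch of the relevant knobs (the
values shown are the commonly cited defaults, included here only for
illustration — check your own cluster's configuration):

```ini
[client]
rbd cache = true                 ; enable the librbd writeback cache
rbd cache size = 33554432        ; total cache per image, 32 MiB
rbd cache max dirty = 25165824   ; dirty-byte ceiling before writeback, 24 MiB
```

Note that qemu's cache mode interacts with this: with "cache=none" on the
-drive, qemu is expected to disable the librbd writeback cache, so these
settings would not explain the memory growth in that configuration.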
This is without cgroups. (I think we had tried cgroups and ran into
some issues.) Would cgroups even help with iSCSI/rbd/etc?
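(On the cgroups question: the blkio controller only sees requests that
pass through the kernel block layer, and librbd issues its I/O as network
traffic from userspace, so blkio throttling would likely not apply to
rbd-backed drives. A memory cgroup could at least bound the damage from
the OOM side. A rough cgroup-v1 sketch, with the path, limit, and PID all
illustrative rather than a tested recipe:)

```shell
# Hypothetical sketch: cap a single guest's qemu process at 4 GiB so
# runaway in-flight buffers trigger reclaim/OOM inside the cgroup only.
mkdir /sys/fs/cgroup/memory/vm-guest1
echo $((4 * 1024 * 1024 * 1024)) \
    > /sys/fs/cgroup/memory/vm-guest1/memory.limit_in_bytes
echo "$QEMU_PID" > /sys/fs/cgroup/memory/vm-guest1/tasks
```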
The "-drive" parameter in qemu was using "cache=none" for the VMs in
question. But I'm assuming qemu keeps each buffer around until it is
acked by the far end, in order to be able to handle retries.
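(The worst case I'm worried about is easy to state: if each in-flight
request pins its buffer until the backend acks, worst-case memory is
roughly max-in-flight times max-request-size — unbounded if nothing caps
the in-flight count. A toy sketch of the bounded version, using a
semaphore; all names here are made up for illustration, this is not
qemu's actual mechanism:)

```python
import asyncio

MAX_INFLIGHT = 64            # hypothetical cap on concurrent requests
BUF_SIZE = 4 * 1024 * 1024   # assumed worst-case per-request buffer, 4 MiB

async def fake_backend_write(data: bytes) -> None:
    # Stand-in for the network round-trip to the rbd backend.
    await asyncio.sleep(0)

async def submit_write(sem: asyncio.Semaphore, data: bytes) -> None:
    # The buffer stays alive until the (fake) ack, so with the semaphore
    # the process can never hold more than MAX_INFLIGHT * BUF_SIZE bytes.
    async with sem:
        await fake_backend_write(data)

async def main() -> None:
    sem = asyncio.Semaphore(MAX_INFLIGHT)
    bufs = [bytes(BUF_SIZE) for _ in range(8)]
    await asyncio.gather(*(submit_write(sem, b) for b in bufs))
    print("worst-case bytes:", MAX_INFLIGHT * BUF_SIZE)

asyncio.run(main())
```

Without the semaphore (or some equivalent queue-depth limit in the
backend), the bound disappears, which is exactly the question above.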
Chris