From: Andrea Righi <righi.andrea@gmail.com>
To: Divyesh Shah <dpshah@google.com>
Cc: balbir@linux.vnet.ibm.com, menage@google.com,
linux-kernel@vger.kernel.org, axboe@kernel.dk, matt@bluehost.com,
roberto@unbit.it, randy.dunlap@oracle.com,
akpm@linux-foundation.org
Subject: Re: i/o bandwidth controller infrastructure
Date: Tue, 17 Jun 2008 00:39:26 +0200 (MEST) [thread overview]
Message-ID: <4856EB9D.6070804@gmail.com> (raw)
In-Reply-To: <48D0786A-AC3B-46C3-B35C-EAAA47BFAEBC@google.com>
Divyesh Shah wrote:
>> This is the core io-throttle kernel infrastructure. It creates the
>> basic interfaces to cgroups and implements the I/O measurement and
>> throttling functions.
>
> I am not sure if throttling an application's cpu usage by explicitly
> putting it to sleep in order to restrain it from making more IO
> requests is the way to go here (though I can't think of anything
> better right now).
> With this bandwidth controller, a cpu-intensive job which otherwise
> does not care about its IO performance needs to be pin-point accurate
> about IO bandwidth required in order to not suffer from
> cpu-throttling. IMHO, if a cgroup is exceeding its limit for a given
> resource, the throttling should be done _only_ for that resource.
>
> -Divyesh
Divyesh,
I understand your point of view. It would be nice if we could simply
"disable" i/o for a cgroup that exceeds its limit, instead of
scheduling some sleep()s, so that the tasks running in that cgroup
could continue their non-i/o operations as usual.
However, what do we do if the tasks keep performing i/o ops under this
condition? We could cache the i/o in memory and at the same time
reduce the i/o priority of those tasks' requests, but this would
require a lot of memory, more space in the page cache, and could lead
to potential OOM conditions. A safer approach, IMHO, is to force the
tasks to wait synchronously on each operation that directly or
indirectly generates i/o. The latter is the solution implemented by
this bandwidth controller.
We could collect additional statistics, or implement some heuristics to
predict the tasks' i/o patterns in order to not penalize cpu-bound jobs
too much, but the basic concept is the same.
Anyway, I agree there must be a better solution, but this is the best
I've found right now... nice ideas are welcome.
-Andrea