public inbox for linux-kernel@vger.kernel.org
From: Andrea Righi <righi.andrea@gmail.com>
To: Eric Rannaud <eric.rannaud@gmail.com>
Cc: Divyesh Shah <dpshah@google.com>,
	balbir@linux.vnet.ibm.com, menage@google.com,
	linux-kernel@vger.kernel.org, axboe@kernel.dk, matt@bluehost.com,
	roberto@unbit.it, randy.dunlap@oracle.com,
	akpm@linux-foundation.org
Subject: Re: i/o bandwidth controller infrastructure
Date: Mon, 23 Jun 2008 12:37:56 +0200	[thread overview]
Message-ID: <485F7D04.7000609@gmail.com> (raw)
In-Reply-To: <alpine.LFD.1.10.0806221232070.2247@localhost.localdomain>

Eric Rannaud wrote:
> On Tue, 17 Jun 2008, Andrea Righi wrote:
>>> With this bandwidth controller, a cpu-intensive job which otherwise
>>> does not care about its IO performance needs to be pin-point accurate
>>> about the IO bandwidth required in order to not suffer from
>>> cpu-throttling. IMHO, if a cgroup is exceeding its limit for a given
>>> resource, the throttling should be done _only_ for that resource.
>> I understand your point of view. It would be nice if we could just
>> "disable" the i/o for a cgroup that exceeds its limit, instead of
>> scheduling some sleep()s, so that the tasks running in this cgroup
>> could continue their non-i/o operations as usual.
>>
>> However, what should we do if the tasks keep performing i/o ops under
>> this condition? We could cache the i/o in memory and at the same time
>> reduce the i/o priority of those tasks' requests, but this would
>> require a lot of memory, more space in the page cache, and could lead
>> to potential OOM conditions. A safer approach, IMHO, is to force the
>> tasks to wait synchronously on each operation that directly or
>> indirectly generates i/o. The latter is the solution implemented by
>> this bandwidth controller.
> 
> What about AIO? Is this approach going to make the task sleep as well?
> Would it better to return from aio_write()/_read() with EAGAIN?

Good point. I should check, but it seems sleeps are incorrectly
performed for AIO requests as well. I agree the correct behaviour would
be to return EAGAIN instead, as you suggested. I'll look into it if
nobody comes up with a solution first.

Thanks,
-Andrea


Thread overview: 5+ messages
     [not found] <20030410181011$6d15@gated-at.bofh.it>
     [not found] ` <aC1Yl-2AL-1@gated-at.bofh.it>
     [not found]   ` <75b07c02-1595-4af2-ac87-3b067459f62e@w8g2000prd.googlegroups.com>
2008-06-16 20:51     ` i/o bandwidth controller infrastructure Divyesh Shah
2008-06-16 22:39       ` Andrea Righi
2008-06-22 19:41         ` Eric Rannaud
2008-06-23 10:37           ` Andrea Righi [this message]
     [not found] ` <aXrl4-2FX-1@gated-at.bofh.it>
2008-08-09 14:08   ` [PATCH 1/1] [x86] Configuration options to compile out x86 CPU support code Bodo Eggert
