public inbox for linux-kernel@vger.kernel.org
From: Andrea Righi <righiandr@users.sourceforge.net>
To: balbir@linux.vnet.ibm.com
Cc: Andrea Righi <righi.andrea@gmail.com>,
	menage@google.com, matt@bluehost.com, roberto@unbit.it,
	linux-kernel@vger.kernel.org,
	Linux Containers <containers@lists.osdl.org>
Subject: Re: [PATCH 0/3] cgroup: block device i/o bandwidth controller
Date: Sun, 25 May 2008 00:28:06 +0200	[thread overview]
Message-ID: <48389676.5020405@users.sourceforge.net> (raw)
In-Reply-To: <48385CFA.7010201@linux.vnet.ibm.com>

Balbir Singh wrote:
> Andrea Righi wrote:
>> I'm resending an updated version of the cgroup I/O bandwidth controller
>> that I wrote some months ago (http://lwn.net/Articles/265944), since
>> someone asked me recently.
>>
>> The goal of this patchset is to implement a block device I/O bandwidth
>> controller using cgroups.
>>
>> Detailed information about the design, its goals and usage is
>> described in the documentation.
>>
>> Review, comments and feedback are welcome.
>>
> 
> Hi, Andrea,
> 
> There are several parallel efforts for an IO controller. Could you
> describe/compare them with yours? Is any consensus building taking place?
> 
> CC'ing containers mailing list.
> 

Hi Balbir,

I've seen Ryo Tsuruta's dm-band (http://lwn.net/Articles/266257),
the CFQ cgroup solution proposed by Vasily Tarasov
(http://lwn.net/Articles/274652) and a similar approach by Satoshi
Uchida (http://lwn.net/Articles/275944).

The first one is implemented at the device mapper layer and AFAIK at
the moment it only allows defining per-task, per-user and per-group
rules (cgroup support is on the TODO list anyway).

The second and third solutions are implemented at the i/o scheduler
layer, CFQ in particular.

They work as expected with direct i/o operations (or synchronous reads).
The problem is: how do we limit the i/o activity of an application that
has already written all of its data to memory? The i/o scheduler is not
the right place to throttle application writes, because it's "too late".
At the very least we can map the i/o path back to find which process
originally dirtied the memory, and delay the dispatching of the requests
based on that information. However, this requires bigger buffers and
more page cache usage, which can lead to potential OOM conditions in
massively shared environments.

So, my approach has the disadvantage or the advantage (depending on the
context and the requirements) of explicitly choking applications'
requests. Other solutions that operate in the subsystem used to
dispatch i/o requests are probably better at maximizing overall
performance, but they do not offer the same control over real QoS as
request limiting does.

Another difference is that all of them are priority/weight based. The
io-throttle controller, instead, allows defining direct bandwidth
limiting rules. The difference here is that a priority-based approach
propagates bursts and does no smoothing, while a bandwidth limiting
approach controls bursts by smoothing the i/o rate. This means better
performance predictability at the cost of poorer throughput
optimization.

I'll run some benchmarks and post the results ASAP. It would also be
interesting to run the same benchmarks using the other i/o controllers
and compare the results in terms of fairness, performance
predictability, responsiveness, throughput, etc. I'll see what I can do.

-Andrea


Thread overview: 4+ messages
2008-05-24 16:56 [PATCH 0/3] cgroup: block device i/o bandwidth controller Andrea Righi
2008-05-24 18:22 ` Balbir Singh
2008-05-24 22:28   ` Andrea Righi [this message]
2008-06-02 11:17     ` Andrea Righi
