From: Andrea Righi <righi.andrea@gmail.com>
To: Vivek Goyal <vgoyal@redhat.com>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>,
Paul Menage <menage@google.com>,
randy.dunlap@oracle.com, Carl Henrik Lunde <chlunde@ping.uio.no>,
Divyesh Shah <dpshah@google.com>,
eric.rannaud@gmail.com, fernando@oss.ntt.co.jp,
akpm@linux-foundation.org, agk@sourceware.org,
subrata@linux.vnet.ibm.com, axboe@kernel.dk,
Marco Innocenti <m.innocenti@cineca.it>,
containers@lists.linux-foundation.org,
linux-kernel@vger.kernel.org, dave@linux.vnet.ibm.com,
matt@bluehost.com, roberto@unbit.it, ngupta@google.com
Subject: Re: [RFC][PATCH -mm 1/5] i/o controller documentation
Date: Thu, 18 Sep 2008 17:03:59 +0200
Message-ID: <48D26DDF.9080402@gmail.com>
In-Reply-To: <20080918140416.GF20640@redhat.com>

Vivek Goyal wrote:
> On Wed, Aug 27, 2008 at 06:07:33PM +0200, Andrea Righi wrote:
>> Documentation of the block device I/O controller: description, usage,
>> advantages and design.
>>
>> Signed-off-by: Andrea Righi <righi.andrea@gmail.com>
>> ---
>> Documentation/controllers/io-throttle.txt | 377 +++++++++++++++++++++++++++++
>> 1 files changed, 377 insertions(+), 0 deletions(-)
>> create mode 100644 Documentation/controllers/io-throttle.txt
>>
>> diff --git a/Documentation/controllers/io-throttle.txt b/Documentation/controllers/io-throttle.txt
>> new file mode 100644
>> index 0000000..09df0af
>> --- /dev/null
>> +++ b/Documentation/controllers/io-throttle.txt
>> @@ -0,0 +1,377 @@
>> +
>> + Block device I/O bandwidth controller
>> +
>> +----------------------------------------------------------------------
>> +1. DESCRIPTION
>> +
>> +This controller makes it possible to limit the I/O bandwidth of specific
>> +block devices for specific process containers (cgroups) by imposing
>> +additional delays on I/O requests issued by processes that exceed the
>> +limits defined in the control group filesystem.
>> +
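
To make the delay idea concrete, here is a minimal user-space-style sketch
(illustration only, not code from the patch; the function and parameter
names are hypothetical): if a cgroup has already transferred more bytes
than its configured bandwidth allows for the elapsed time, the offending
request is delayed just long enough to bring the observed rate back under
the limit.

    #include <stdint.h>

    /* microseconds to delay a request; 0 if the cgroup is under its limit */
    static uint64_t iothrottle_delay_us(uint64_t bytes_done,
                                        uint64_t elapsed_us,
                                        uint64_t limit_bytes_per_sec)
    {
            uint64_t expected_us;

            if (!limit_bytes_per_sec)
                    return 0;       /* no limit configured for this cgroup */

            /* time the I/O done so far *should* have taken at the limit */
            expected_us = bytes_done * 1000000ULL / limit_bytes_per_sec;

            return expected_us > elapsed_us ? expected_us - elapsed_us : 0;
    }
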
>> +Bandwidth limiting rules offer better control over QoS than priority- or
>> +weight-based solutions, which only express applications' relative
>> +performance requirements. Moreover, priority-based solutions are subject
>> +to performance bursts when only low-priority requests are submitted to a
>> +general-purpose resource dispatcher.
>> +
>> +The goal of the I/O bandwidth controller is to improve performance
>> +predictability from the applications' point of view and provide performance
>> +isolation of different control groups sharing the same block devices.
>> +
>> +NOTE #1: If you are looking for a way to improve the overall throughput
>> +of the system, you should probably use a different solution.
>> +
>> +NOTE #2: The current implementation does not guarantee minimum bandwidth
>> +levels; QoS is implemented only by slowing down I/O "traffic" that
>> +exceeds the limits specified by the user. Minimum I/O rate thresholds can
>> +still be expected to hold if the user configures a proper I/O bandwidth
>> +partitioning of the block devices shared among the different cgroups (in
>> +theory, if the sum of all the limits defined for a block device does not
>> +exceed the total I/O bandwidth of that device).
>> +
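
As a concrete example of such a partitioning (numbers made up for
illustration): on a device that sustains roughly 100 MB/s, capping three
cgroups at 40, 30 and 30 MB/s keeps the sum within the device's capacity,
so each cap should in practice also behave as a rough minimum. A trivial
feasibility check:

    #include <stdio.h>

    int main(void)
    {
            const unsigned device_mb_s = 100;               /* measured capacity */
            const unsigned limit_mb_s[] = { 40, 30, 30 };   /* per-cgroup caps */
            unsigned i, sum = 0;

            for (i = 0; i < sizeof(limit_mb_s) / sizeof(limit_mb_s[0]); i++)
                    sum += limit_mb_s[i];

            printf("sum of limits: %u MB/s -> %s\n", sum,
                   sum <= device_mb_s ? "feasible" : "oversubscribed");
            return 0;
    }
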
>
> Hi Andrea,
>
> Had a query. What's your use case for capping max bandwidth? I was
> wondering whether proportional bandwidth would not cover it: we allocate
> a weight/share to every cgroup and limit bandwidth based on shares only
> in case of contention; otherwise applications get unlimited bandwidth,
> much like what the cpu controller does, or for that matter what dm-ioband
> seems to be doing. Will you not get the same kind of QoS here compared to
> max-bandwidth? The only thing probably missing is what we call a hard
> limit: when bandwidth is available but you don't want a user to use it
> unless the user has paid for it.
At the beginning my use case was to guarantee a certain level of
performance _predictability_: no more and no less than the specified
threshold (should I say this would be useful for real-time apps? maybe
yes).

But at this stage of development IMHO it's worth implementing a more
generic solution, able to guarantee both min/max thresholds (to cover my
original use case) as well as the weight/share functionality, to cover a
broader range of use cases (QoS for massive shared environments).
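
To picture how the two models could combine (a sketch only, not an
implementation from this thread; the struct and function names are
hypothetical): under contention each cgroup would receive a share of the
device bandwidth proportional to its weight, and the result would then be
clamped to the cgroup's optional hard limit.

    #include <stdint.h>

    struct iolimit {
            uint32_t weight;        /* relative share under contention */
            uint64_t max_bps;       /* hard cap in bytes/s, 0 = none */
    };

    static uint64_t effective_bps(const struct iolimit *g,
                                  uint32_t total_weight,
                                  uint64_t device_bps,
                                  int contended)
    {
            /* no contention: only the hard limit (if any) applies */
            uint64_t bps = (contended && total_weight) ?
                    device_bps * g->weight / total_weight : device_bps;

            if (g->max_bps && bps > g->max_bps)
                    bps = g->max_bps;
            return bps;
    }
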
-Andrea