From: Andrea Righi <righi.andrea@gmail.com>
To: Vivek Goyal <vgoyal@redhat.com>
Cc: Hirokazu Takahashi <taka@valinux.co.jp>,
randy.dunlap@oracle.com, menage@google.com, chlunde@ping.uio.no,
dpshah@google.com, eric.rannaud@gmail.com,
balbir@linux.vnet.ibm.com, fernando@oss.ntt.co.jp,
akpm@linux-foundation.org, agk@sourceware.org,
subrata@linux.vnet.ibm.com, axboe@kernel.dk,
m.innocenti@cineca.it, containers@lists.linux-foundation.org,
linux-kernel@vger.kernel.org, dave@linux.vnet.ibm.com,
matt@bluehost.com, roberto@unbit.it, ngupta@google.com
Subject: Re: [RFC][PATCH -mm 0/5] cgroup: block device i/o controller (v9)
Date: Thu, 18 Sep 2008 16:54:27 +0200 [thread overview]
Message-ID: <48D26BA3.40009@gmail.com> (raw)
In-Reply-To: <20080918135513.GE20640@redhat.com>

Vivek Goyal wrote:
> On Wed, Sep 17, 2008 at 10:47:54AM +0200, Andrea Righi wrote:
>> Hirokazu Takahashi wrote:
>>> Hi,
>>>
>>>> TODO:
>>>>
>>>> * Try to push down the throttling and implement it directly in the I/O
>>>> schedulers, using bio-cgroup (http://people.valinux.co.jp/~ryov/bio-cgroup/)
>>>> to keep track of the right cgroup context. This approach could lead to more
>>>> memory consumption and increases the number of dirty pages (hard/slow to
>>>> reclaim pages) in the system, since dirty-page ratio in memory is not
>>>> limited. This could even lead to potential OOM conditions, but these problems
>>>> can be resolved directly into the memory cgroup subsystem
>>>>
>>>> * Handle I/O generated by kswapd: at the moment there's no control on the I/O
>>>> generated by kswapd; try to use the page_cgroup functionality of the memory
>>>> cgroup controller to track this kind of I/O and charge the right cgroup when
>>>> pages are swapped in/out
>>> FYI, this can also be done with bio-cgroup, which determines the owner
>>> cgroup of a given anonymous page.
>>>
>>> Thanks,
>>> Hirokazu Takahashi
>> That would be great! FYI here is how I would like to proceed:
>>
>> - today I'll post a new version of my cgroup-io-throttle patch rebased
>> to 2.6.27-rc5-mm1 (it's well tested and seems to be stable enough).
>> To keep things light and simple I've implemented custom
>> get_cgroup_from_page() / put_cgroup_from_page() in the memory
>> controller to retrieve the owner of a page, holding a reference to the
>> corresponding memcg, during async writes in submit_bio(); this is
>> probably not the best way to proceed, and a more generic framework like
>> bio-cgroup sounds better, but it seems to work quite well. The only
>> problem I've found is that during swap_writepage() the page is not
>> assigned to any page_cgroup (page_get_page_cgroup() returns NULL), so
>> I'm not able to charge the cost of this I/O operation to the right
>> cgroup. Does bio-cgroup address or even resolve this issue?
>> - begin to implement a new branch of cgroup-io-throttle on top of
>> bio-cgroup
>> - also start to implement an additional request queue to provide first a
>> control at the cgroup level and a dispatcher to pass the request to
>> the elevator (as suggested by Vivek)
>>
>
> Hi Andrea,
>
> So if we maintain an rb-tree per request queue and implement the cgroup
> rules there, then that will take care of io-throttling also. (One can
> control the release of bios/requests to the elevator based on any kind
> of rule: proportional weight, max bandwidth, etc.)
>
> If that's the case, I was wondering what you mean by "begin to
> implement a new branch of cgroup-io-throttle on top of bio-cgroup".
Correct: with the per-request-queue rb-tree solution there's no need to
keep track of the context in struct bio, since the I/O control based on
per-cgroup rules has already been performed by the first-level
dispatcher. I would really like to dedicate all my efforts to moving in
this direction, but it would be interesting as well to test the
bio-cgroup functionality, since it already works, it's a generic
framework, and it's used by another project (dm-ioband). This is why I
put it there, specifying that it opens a new branch: it would be an
alternative solution to the following point.
-Andrea
Thread overview: 15+ messages
2008-08-27 16:07 [RFC][PATCH -mm 0/5] cgroup: block device i/o controller (v9) Andrea Righi
2008-09-02 18:06 ` Vivek Goyal
2008-09-02 20:50 ` Andrea Righi
2008-09-02 21:41 ` Vivek Goyal
2008-09-05 15:59 ` Vivek Goyal
2008-09-05 17:38 ` Andrea Righi
2008-09-17 7:18 ` Hirokazu Takahashi
2008-09-17 8:47 ` Andrea Righi
2008-09-18 11:24 ` Hirokazu Takahashi
2008-09-18 14:37 ` Andrea Righi
2008-09-18 13:55 ` Vivek Goyal
2008-09-18 14:54 ` Andrea Righi [this message]
2008-09-17 9:04 ` Takuya Yoshikawa
2008-09-17 9:42 ` Andrea Righi
2008-09-17 10:08 ` Andrea Righi