From: Vivek Goyal <vgoyal@redhat.com>
To: Andrea Righi <righi.andrea@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
nauman@google.com, dpshah@google.com, lizf@cn.fujitsu.com,
mikew@google.com, fchecconi@gmail.com, paolo.valente@unimore.it,
jens.axboe@oracle.com, ryov@valinux.co.jp,
fernando@oss.ntt.co.jp, s-uchida@ap.jp.nec.com,
taka@valinux.co.jp, guijianfeng@cn.fujitsu.com,
jmoyer@redhat.com, dhaval@linux.vnet.ibm.com,
balbir@linux.vnet.ibm.com, linux-kernel@vger.kernel.org,
containers@lists.linux-foundation.org, agk@redhat.com,
dm-devel@redhat.com, snitzer@redhat.com, m-ikeda@ds.jp.nec.com,
peterz@infradead.org
Subject: Re: IO scheduler based IO Controller V2
Date: Wed, 6 May 2009 18:17:41 -0400 [thread overview]
Message-ID: <20090506221741.GL8180@redhat.com> (raw)
In-Reply-To: <20090506220250.GD4282@linux>
On Thu, May 07, 2009 at 12:02:51AM +0200, Andrea Righi wrote:
> On Wed, May 06, 2009 at 05:21:21PM -0400, Vivek Goyal wrote:
> > > Well, IMHO the big concern is at which level we want to implement the
> > > logic of control: at the IO scheduler, when the IO requests have already
> > > been submitted and need to be dispatched, or at a higher level, when the
> > > applications generate IO requests (or maybe both).
> > >
> > > And, as pointed out by Andrew, do everything via a cgroup-based controller.
> >
> > I am not sure what the rationale behind that is. Why do it at a higher
> > layer? Doing it at the IO scheduler layer makes sure that one does not
> > break the IO scheduler's properties within a cgroup. (See my other mail
> > with some io-throttling test results.)
> >
> > The advantage of a higher-layer mechanism is that it can also cover
> > software RAID devices well.
> >
> > >
> > > The other features (proportional BW, throttling, taking the current
> > > ioprio model into account, etc.) are implementation details, and any of
> > > the proposed solutions can be extended to support all of them. I
> > > mean, io-throttle can be extended to support proportional BW (from a
> > > certain perspective it is already provided by the throttling water mark
> > > in v16), just as the IO scheduler based controller can be extended to
> > > support absolute BW limits. The same goes for dm-ioband. I don't think
> > > there are huge obstacles to merging the functionalities in this sense.
> >
> > Yes, from a technical point of view, one can implement a proportional BW
> > controller at a higher layer too. But that would practically mean
> > re-implementing almost all of the CFQ logic at the higher layer. Why get
> > into all that complexity? Why not simply make CFQ hierarchical to also
> > handle the groups?
>
> Making CFQ aware of cgroups is very important too. I could be wrong, but
> I don't think we should re-implement the exact same CFQ logic at higher
> layers. CFQ dispatches IO requests; at higher layers applications submit
> IO requests. We're talking about different things, and applying
> different logic doesn't sound too strange IMHO. I mean, at least we
> should consider/test this different approach before deciding to drop it.
>
A lot of CFQ code is about maintaining per-io-context queues for
different classes and different priority levels, about anticipation for
reads, etc. Anybody who wants to get classes and ioprio right within a
cgroup will end up duplicating all that logic (to cover all the cases).
So I did not mean that you will end up copying the whole code, but
logically a lot of it.
Secondly, there will be a mismatch in the anticipation logic. CFQ gives
preference to reads, and for dependent readers it idles and waits for
the next request to come. Higher-level throttling can interfere with the
IO pattern of an application and lead CFQ to think that the average
think time of that application is high, so it disables anticipation for
that application. This would result in high latencies for simple
commands like "ls" in the presence of competing applications.
> This solution also guarantees no changes in the IO schedulers for those
> who are not interested in using the cgroup IO controller. What is the
> impact of the IO scheduler based controller on those users?
>
The IO scheduler based solution is highly customizable. First of all,
there are compile-time switches to either completely remove the fair
queuing code (for noop, deadline and AS only) or to disable group
scheduling only. In that case one would expect the same behavior as the
old scheduler.
Secondly, even if everything is compiled in and the customer is not
using cgroups, I would expect almost the same behavior (because we will
have only the root group). There will be extra code in the way, and we
will need some optimizations to detect that there is only one group and
bypass as much code as possible, bringing the overhead of the new code
to a minimum.
So if a customer is not using the IO controller, he should get the same
behavior as the old system. I can't prove it right now because my
patches are not that mature yet, but there are no fundamental design
limitations.
Thanks
Vivek