From: Andrea Righi <righi.andrea@gmail.com>
To: Vivek Goyal <vgoyal@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	nauman@google.com, dpshah@google.com, lizf@cn.fujitsu.com,
	mikew@google.com, fchecconi@gmail.com, paolo.valente@unimore.it,
	jens.axboe@oracle.com, ryov@valinux.co.jp,
	fernando@intellilink.co.jp, s-uchida@ap.jp.nec.com,
	taka@valinux.co.jp, guijianfeng@cn.fujitsu.com,
	arozansk@redhat.com, jmoyer@redhat.com, oz-kernel@redhat.com,
	dhaval@linux.vnet.ibm.com, balbir@linux.vnet.ibm.com,
	linux-kernel@vger.kernel.org,
	containers@lists.linux-foundation.org, menage@google.com,
	peterz@infradead.org
Subject: Re: [PATCH 01/10] Documentation
Date: Fri, 17 Apr 2009 11:37:28 +0200	[thread overview]
Message-ID: <20090417093656.GA5246@linux> (raw)
In-Reply-To: <20090416183753.GE8896@redhat.com>

On Thu, Apr 16, 2009 at 02:37:53PM -0400, Vivek Goyal wrote:
> > I think it would be possible to implement both proportional and limiting
> > rules at the same level (e.g., the IO scheduler), but we also need to
> > address the memory consumption problem (I still need to review your
> > patchset in detail and I'm going to test it soon :), so I don't know if
> > you already addressed this issue).
> > 
> 
> Can you please elaborate a bit on this? Are you concerned that the data
> structures created to solve the problem consume a lot of memory?

Sorry, I was not very clear here. By memory consumption I mean wasting
memory on hard/slowly reclaimable dirty pages or on pending IO
requests.

If there's only a global limit on dirty pages, any cgroup can exhaust
that limit and cause other cgroups/processes to block when they try to
write to disk.

But, ok, the IO controller is probably not the best place to implement
such functionality. I should rework the per cgroup dirty_ratio patchset:

https://lists.linux-foundation.org/pipermail/containers/2008-September/013140.html

Last time we focused too much on finding the best interface to define the
dirty-page limits, and I never re-posted an updated version of that patchset.
Now I think we can simply provide the same dirty_ratio/dirty_bytes
interface that we provide globally, but per cgroup.
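
Something like the following, driven from userspace (just a sketch of the
idea: the memory.dirty_ratio / memory.dirty_bytes file names and the
/cgroup/memory mount point are assumptions mirroring the global
/proc/sys/vm knobs, not an existing interface):

import os

CGROUP_ROOT = "/cgroup/memory"   # assumed memory cgroup mount point

def set_dirty_limit(cgroup, ratio=None, nr_bytes=None):
    # Write the hypothetical per-cgroup knobs, mirroring the global
    # vm.dirty_ratio / vm.dirty_bytes semantics.
    path = os.path.join(CGROUP_ROOT, cgroup)
    if ratio is not None:
        open(os.path.join(path, "memory.dirty_ratio"), "w").write("%d\n" % ratio)
    if nr_bytes is not None:
        open(os.path.join(path, "memory.dirty_bytes"), "w").write("%d\n" % nr_bytes)

# e.g. keep the "backup" cgroup below 10% dirty memory, so it cannot
# push the whole system up to the global dirty threshold:
set_dirty_limit("backup", ratio=10)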

> 
> > IOW if we simply don't dispatch requests and we don't throttle the tasks
> > in the cgroup that exceeds its limit, how do we avoid the waste of
> > memory due to the subsequent IO requests and the increasing number of
> > dirty pages in the page cache (which are also hard to reclaim)? I may be
> > wrong, but I think we talked about this problem in a previous email...
> > sorry, I can't find the discussion in my mail archives.
> > 
> > IMHO a nice approach would be to measure IO consumption at the IO
> > scheduler level and control IO by applying proportional weights / absolute
> > limits at the IO scheduler / elevator level, _and_ at the same
> > time block the tasks from dirtying memory that would generate additional
> > IO requests.
> > 
> > Anyway, there's no need to provide this with a single IO controller; we
> > could split the problem into two parts: 1) provide a proportional /
> > absolute IO controller in the IO schedulers and 2) allow setting, for
> > example, a maximum limit on dirty pages for each cgroup.
> > 
> 
> I think setting a maximum limit on dirty pages is an interesting thought.
> It sounds as if the memory controller could handle it?

Exactly, the same as above.

> 
> I guess currently the memory controller puts a limit on the total amount of
> memory consumed by a cgroup and there are no knobs for the type of memory
> consumed. So if one can limit the amount of dirty page cache memory per
> cgroup, it automatically throttles the async writes at the input itself.
>  
> So I agree that if we can keep a process from dirtying too much
> memory, then an IO scheduler level controller should be able to do both
> proportional weight and max BW control.
> 
> Currently doing proportional weight control for async writes is very
> tricky. I am not seeing constantly backlogged traffic at the IO scheduler
> level and hence two processes with different weights seem to be getting
> the same BW.
> 
> I will dive deeper into the dm-ioband patches to see how they have
> solved this issue. Looks like they are just waiting longer for the slowest
> group to consume its tokens, and that will keep the disk idle. Extended
> delays might not show up immediately as a performance hog, because they
> might also promote increased merging, but they should lead to increased
> latency of response. And proving latency issues is hard. :-)
> 
> > Maybe I'm just repeating what we already said in a previous
> > discussion... in this case sorry for the duplicate thoughts. :)
> > 
> > > 
> > > - Have you thought of doing hierarchical control? 
> > > 
> > 
> > Providing hierarchies in cgroups is in general expensive, deeper
> > hierarchies imply checking all the way up to the root cgroup, so I think
> > we need to be very careful and be aware of the trade-offs before
> > providing such a feature. For this particular case (IO controller)
> > wouldn't it be simpler and more efficient to just ignore hierarchies in
> > the kernel and handle them appropriately in userspace? For absolute
> > limiting rules this isn't difficult at all: just imagine a config file
> > and a script or a daemon that dynamically creates the appropriate cgroups
> > and configures them according to what is defined in the configuration
> > file.
> > 
> > I think we can simply define the hierarchical dependencies in the
> > configuration file, translate them into absolute values and use the
> > absolute values to configure the cgroups' properties.
> > 
> > For example, we can just check that the BW allocated for a particular
> > parent cgroup is not greater than the total BW allocated for the
> > children. And for each child just use the min(parent_BW, BW) or equally
> > divide the parent's BW among the children, etc.
> 
> IIUC, you are saying that we allow the hierarchy in user space and then
> flatten it out and pass it to the kernel?
> 
> Hmm.., agree that handling hierarchies is hard and expensive. But at the
> same time the rest of the controllers, like cpu and memory, handle it in
> the kernel, so it probably makes sense to keep the IO controller in line.
> 
> In practice I am not expecting deep hierarchies. Maybe 2-3 levels would
> be good for most people.
> 
> > 
> > > - What happens to the notion of CFQ task classes and task priority? Looks
> > >   like the max BW rule supersedes everything. There is no way that an RT
> > >   task gets an unlimited amount of disk BW even if it wants to? (There is
> > >   no notion of an RT cgroup etc.)
> > 
> > What about moving all the RT tasks into a separate cgroup with unlimited
> > BW?
> 
> Hmm.., I think that should work. I have yet to look at your patches in
> detail, but it looks like an unlimited BW group will not be throttled at
> all, hence RT tasks can just go right through without getting impacted.

Correct.
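
And the setup is trivial to do from userspace, e.g. (a sketch only: the
/cgroup/blockio mount point and the group name are just examples, the rule
that a group with no max-BW limit configured is never throttled reflects the
intent of the patchset, and find_rt_task_pids() is a hypothetical helper):

import os

RT_GROUP = "/cgroup/blockio/rt-unlimited"   # assumed mount point / group name

def move_to_unlimited_group(pids):
    # A cgroup with no max-BW rule configured is never throttled, so RT
    # tasks placed here go right through.
    if not os.path.isdir(RT_GROUP):
        os.makedirs(RT_GROUP)
    for pid in pids:
        # cgroupfs expects one pid per write to the "tasks" file
        open(os.path.join(RT_GROUP, "tasks"), "w").write("%d\n" % pid)

# e.g.: move_to_unlimited_group(find_rt_task_pids())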

> 
> > 
> > > 
> > > > > 
> > > > >   Above requirement can create configuration problems.
> > > > > 
> > > > > 	- If there are a large number of disks in the system, one shall
> > > > > 	  have to create rules for each disk per cgroup, unless the admin
> > > > > 	  knows which applications are in which cgroup and exactly which
> > > > > 	  disks these applications do IO to, and creates rules for only
> > > > > 	  those disks.
> > > > 
> > > > I don't think this is a huge problem anyway. IMHO a userspace tool, e.g.
> > > > a script, would be able to efficiently create/modify rules by parsing
> > > > user-defined rules in some human-readable form (config files, etc.), even
> > > > in the presence of hundreds of disks. The same is valid for dm-ioband, I
> > > > think.
> > > > 
> > > > > 
> > > > > 	- I think the problem gets compounded if there is a hierarchy of
> > > > > 	  logical devices. I think in that case one shall have to create
> > > > > 	  rules for the logical devices and not the actual physical devices.
> > > > 
> > > > By logical devices do you mean device-mapper devices (i.e. LVM, software
> > > > RAID, etc.)? Or do you mean that we need to introduce the concept of a
> > > > "logical device" to easily (quickly) configure IO requirements and then
> > > > map those logical devices to the actual physical devices? In that case I
> > > > think this can be addressed in userspace. Or maybe I'm totally missing
> > > > the point here.
> > > 
> > > Yes, I meant LVM, software RAID, etc. So if I have got many disks in the
> > > system and I have created a software RAID on some of them, do I need to
> > > create rules for the LVM devices or for the physical devices behind those
> > > LVM devices? I am assuming that it will be the logical devices.
> > > 
> > > So I need to know exactly which devices the applications in a particular
> > > cgroup are going to do IO to, and also know exactly how many cgroups are
> > > contending for that device, and also know what worst-case disk rate I can
> > > expect from that device; only then can I do a good job of giving a
> > > reasonable value to the max rate of that cgroup on a particular device?
> > 
> > OK, I understand. For these cases dm-ioband addresses the problem
> > perfectly. For the general case, I think the only solution is to provide a
> > common interface that each dm subsystem must call to account for IO and
> > apply limiting and proportional rules.
> > 
> > > 
> > > > 
> > > > > 
> > > > > - Because it is not a proportional weight distribution, if some
> > > > >   cgroup is not using its planned BW, other groups sharing the
> > > > >   disk cannot make use of the spare BW.
> > > > > 	
> > > > 
> > > > Right.
> > > > 
> > > > > - I think one should know in advance the throughput rate of the
> > > > >   underlying media and also know the competing applications, so that
> > > > >   one can statically define the BW assigned to each cgroup on each disk.
> > > > > 
> > > > >   This will be difficult. The effective BW extracted out of rotational
> > > > >   media depends on the seek pattern, so one shall have to either make
> > > > >   some conservative estimates and divide the BW (we will not utilize the
> > > > >   disk fully) or take some peak numbers and divide the BW (a cgroup might
> > > > >   not get the maximum rate configured).
> > > > 
> > > > Correct. I think the proportional weight approach is the only solution
> > > > to efficiently use the whole BW. OTOH absolute limiting rules offer
> > > > better control over QoS, because you can totally remove performance
> > > > bursts/peaks that could break QoS requirements for short periods of
> > > > time.
> > > 
> > > Can you please give a little more detail here regarding how QoS
> > > requirements are not met with proportional weights?
> > 
> > With proportional weights the whole bandwidth is allocated if no one
> > else is using it. When IO is submitted, other tasks with a higher weight
> > can be forced to sleep until the IO generated by the low-weight tasks has
> > been completely dispatched; to some extent this is a priority inversion
> > problem.
> 
> Hmm..., I am not very sure here. When the admin is allocating the weights,
> he has the whole picture. He knows how many groups are contending for the
> disk and what the worst-case scenario could be. So if I have got two groups
> A and B with weights 1 and 2 and both are contending, then as an 
> admin one would expect to get 33% of the BW for group A in the worst case
> (if group B is continuously backlogged). If B is not contending then A can
> get 100% of the BW. So while configuring the system, will one not plan for
> the worst case (33% for A, and 66% for B)?

OK, I'm quite convinced.. :)
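
Just to write down the worst-case arithmetic of your example (a trivial
sketch):

def worst_case_share(weights):
    # Worst-case fraction of disk BW per group, assuming every group is
    # continuously backlogged (proportional-weight model).
    total = sum(weights.values())
    return dict((g, float(w) / total) for g, w in weights.items())

print(worst_case_share({"A": 1, "B": 2}))  # A ~= 0.33, B ~= 0.67
# If B stops submitting IO, A is no longer capped and can use 100% of the BW.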

To a large degree, if we want to provide a BW reservation strategy we
must provide an interface that allows cgroups to ask for time slices,
such as a max/min of 5 IO requests every 50ms, or something like that.
Probably the same functionality can be achieved by translating weights,
percentages or absolute BW limits into time slices.
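
For example, something like this (only a sketch of the idea: the 50ms
period, the IOPS estimate and the request-based granularity are arbitrary
assumptions):

def weight_to_slice(weight, total_weight, disk_iops, period_ms=50):
    # Translate a proportional weight into a per-period request budget,
    # given an estimate of the sustainable IOPS of the disk.
    share = float(weight) / total_weight
    nr_requests = max(int(share * disk_iops * period_ms / 1000.0), 1)
    return nr_requests, period_ms

def bw_to_slice(bw_bytes, avg_request_bytes, period_ms=50):
    # Translate an absolute BW limit (bytes/sec) into the same form.
    iops = float(bw_bytes) / avg_request_bytes
    return max(int(iops * period_ms / 1000.0), 1), period_ms

# e.g. with a disk estimated at 200 IOPS and weights 1:2, the low-weight
# group gets about 3 requests every 50ms and the other one about 6:
print(weight_to_slice(1, 3, 200))   # -> (3, 50)
print(weight_to_slice(2, 3, 200))   # -> (6, 50)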

>   
> > 
> > Maybe it's not an issue at all in most cases, but using
> > a solution that is also able to provide a real partitioning of the
> > available resources can be profitably exploited by those who need to
> > guarantee _strict_ BW requirements (soft real-time, maximizing the
> > responsiveness of certain services, etc.), because in this case we're sure
> > that a certain amount of "spare" BW will always be available when needed
> > by some "critical" services.
> > 
> 
> Will the same thing not happen with proportional weights? If it is an RT
> application, one can put it in an RT group to make sure it always gets
> the BW first, even if there is contention. 
> 
> Even in a regular group, the moment you issue the IO and the IO scheduler
> sees it, you will start getting your reserved share according to your weight.
> 
> How will it be different in the case of IO throttling? Even if I don't
> utilize the disk fully, CFQ will still put the new guy in the queue and
> then try to give it its share (based on prio).
> 
> Are you saying that by keeping the disk relatively free, the latency of
> response for a soft real-time application will become better? In that
> case, can't one simply underprovision the disk?
> 
> But having said that, I am not disputing the need for a max BW controller,
> as some people have expressed the need for a constant BW view and don't
> want too big fluctuations even if BW is available. A max BW controller
> can't guarantee the minimum BW, hence can't avoid the fluctuations
> completely, but it can still help in smoothing the traffic because
> other competitors will be stopped from doing too much IO.

Agree.

-Andrea
