public inbox for linux-kernel@vger.kernel.org
From: Vivek Goyal <vgoyal@redhat.com>
To: Andrea Righi <arighi@develer.com>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
	Suleiman Souhlal <suleiman@google.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	containers@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org
Subject: Re: [RFC] [PATCH 0/2] memcg: per cgroup dirty limit
Date: Mon, 22 Feb 2010 09:27:45 -0500	[thread overview]
Message-ID: <20100222142744.GB13823@redhat.com> (raw)
In-Reply-To: <1266765525-30890-1-git-send-email-arighi@develer.com>

On Sun, Feb 21, 2010 at 04:18:43PM +0100, Andrea Righi wrote:
> Control the maximum amount of dirty pages a cgroup can have at any given time.
> 
> A per-cgroup dirty limit fixes the maximum amount of dirty (hard-to-reclaim)
> page cache that any one cgroup may use. With multiple cgroup writers, no
> cgroup can consume more than its designated share of dirty pages, and each
> will be forced to perform write-out once it crosses that limit.
> 
> The overall design is the following:
> 
>  - account dirty pages per cgroup
>  - limit the number of dirty pages via memory.dirty_bytes in cgroupfs
>  - start to write-out in balance_dirty_pages() when the cgroup or global limit
>    is exceeded
> 
> This feature is meant to work closely with any underlying IO controller
> implementation, so that we can stop the growth of dirty pages in the VM layer
> and enforce write-out before any single cgroup consumes the global amount of
> dirty pages defined by the /proc/sys/vm/dirty_ratio|dirty_bytes limits.
> 

Thanks Andrea. I had been thinking about looking into this from the IO
controller perspective, so that we can also control async IO (buffered
writes).

Before I dive into patches, two quick things.

- IIRC, last time you had implemented a per-memory-cgroup "dirty_ratio" and
  not "dirty_bytes". Why the change? To begin with, wouldn't a configurable
  per-memcg dirty ratio also make sense? By default it could be the global
  dirty ratio for each cgroup.

- Looks like we will start writeout from the memory cgroup once we cross the
  dirty ratio, but there is still no guarantee that we will be writing pages
  belonging to the cgroup which crossed the dirty ratio and triggered the
  writeout.

  This behavior is not very good, at least from the IO controller perspective:
  if two dd threads are dirtying memory in two cgroups and one crosses its
  dirty ratio, it should perform writeout of its own pages and not another
  cgroup's pages. Otherwise we will probably again introduce serialization
  between the two writers and will not see service differentiation.

  Maybe we can modify writeback_inodes_wbc() to check the first dirty page
  of each inode. If it does not belong to the same memcg as the task
  performing balance_dirty_pages(), then skip that inode.

  This does not handle the problem of shared files, where processes from
  two different cgroups are dirtying the same file, but it would at least
  cover the other cases without introducing too much complexity?

Thanks
Vivek

> TODO:
>  - handle the migration of tasks across different cgroups (a page may be
>    set dirty while a task runs in one cgroup and cleared after the task
>    has moved to another cgroup).
>  - provide appropriate documentation (in Documentation/cgroups/memory.txt)
> 
> -Andrea

