linux-mm.kvack.org archive mirror
From: Balbir Singh <balbir@linux.vnet.ibm.com>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: "linux-mm@kvack.org" <linux-mm@kvack.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"nishimura@mxp.nes.nec.co.jp" <nishimura@mxp.nes.nec.co.jp>
Subject: Re: [RFC] Shared page accounting for memory cgroup
Date: Wed, 6 Jan 2010 12:31:50 +0530	[thread overview]
Message-ID: <20100106070150.GL3059@balbir.in.ibm.com> (raw)
In-Reply-To: <20100106130258.a918e047.kamezawa.hiroyu@jp.fujitsu.com>

* KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> [2010-01-06 13:02:58]:

> On Mon, 4 Jan 2010 06:20:31 +0530
> Balbir Singh <balbir@linux.vnet.ibm.com> wrote:
> 
> > * KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> [2010-01-04 09:35:28]:
> > 
> > > On Mon, 4 Jan 2010 05:37:52 +0530
> > > Balbir Singh <balbir@linux.vnet.ibm.com> wrote:
> > > 
> > > > * KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> [2010-01-04 08:51:08]:
> > > > 
> > > > > On Tue, 29 Dec 2009 23:57:43 +0530
> > > > > Balbir Singh <balbir@linux.vnet.ibm.com> wrote:
> > > > > 
> > > > > > Hi, Everyone,
> > > > > > 
> > > > > > I've been working on heuristics for shared page accounting for the
> > > > > > memory cgroup. I've tested the patches by creating multiple cgroups
> > > > > > and running programs that share memory and observed the output.
> > > > > > 
> > > > > > Comments?
> > > > > 
> > > > > Hmm? Why do we have to do this in the kernel?
> > > > >
> > > > 
> > > > For several reasons that I can think of
> > > > 
> > > > 1. With task migration changes coming in, getting consistent data free of races
> > > > is going to be hard.
> > > 
> > > Hmm, look at the real world's "ps" or "top" commands. Even though there is
> > > no guaranteed error range on the data, they are still useful.
> > 
> > Yes, my concern is this
> > 
> > 1. I iterate through tasks and calculate RSS
> > 2. I look at memory.usage_in_bytes
> > 
> > If the time spent in user space between steps 1 and 2 is large, I get very
> > wrong results, specifically if the workload is changing its memory usage
> > drastically... no?
> > 
> No. If it takes a long time, locking fork()/exit() for that long is the
> bigger issue.
> I recommend adding a memacct subsystem that sums up the RSS of all
> processes under a cgroup. Although it may add a large cost to the page
> fault path, the implementation will be very simple and will not hurt
> realtime ops.
> There will be no terrible race, I guess.
>

But others hold that lock as well, for simple things like listing tasks,
moving tasks, etc. I expect reads of the shared statistic to be in the
same range.

-- 
	Balbir

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org


Thread overview: 31+ messages
2009-12-29 18:27 [RFC] Shared page accounting for memory cgroup Balbir Singh
2010-01-03 23:51 ` KAMEZAWA Hiroyuki
2010-01-04  0:07   ` Balbir Singh
2010-01-04  0:35     ` KAMEZAWA Hiroyuki
2010-01-04  0:50       ` Balbir Singh
2010-01-06  4:02         ` KAMEZAWA Hiroyuki
2010-01-06  7:01           ` Balbir Singh [this message]
2010-01-06  7:12             ` KAMEZAWA Hiroyuki
2010-01-07  7:15               ` Balbir Singh
2010-01-07  7:36                 ` KAMEZAWA Hiroyuki
2010-01-07  8:34                   ` Balbir Singh
2010-01-07  8:48                     ` KAMEZAWA Hiroyuki
2010-01-07  9:08                       ` KAMEZAWA Hiroyuki
2010-01-07  9:27                         ` Balbir Singh
2010-01-07 23:47                           ` KAMEZAWA Hiroyuki
2010-01-17 19:30                             ` Balbir Singh
2010-01-18  0:05                               ` KAMEZAWA Hiroyuki
2010-01-18  0:22                                 ` KAMEZAWA Hiroyuki
2010-01-18  0:49                               ` Daisuke Nishimura
2010-01-18  8:26                                 ` Balbir Singh
2010-01-19  1:22                                   ` Daisuke Nishimura
2010-01-19  1:49                                     ` Balbir Singh
2010-01-19  2:34                                       ` Daisuke Nishimura
2010-01-19  3:52                                         ` Balbir Singh
2010-01-20  4:09                                           ` Daisuke Nishimura
2010-01-20  7:15                                             ` Daisuke Nishimura
2010-01-20  7:43                                               ` KAMEZAWA Hiroyuki
2010-01-20  8:18                                               ` Balbir Singh
2010-01-20  8:17                                             ` Balbir Singh
2010-01-21  1:04                                               ` Daisuke Nishimura
2010-01-21  1:30                                                 ` KAMEZAWA Hiroyuki
