From: Balbir Singh <balbir@linux.vnet.ibm.com>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: "linux-mm@kvack.org" <linux-mm@kvack.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	"lizf@cn.fujitsu.com" <lizf@cn.fujitsu.com>,
	Rik van Riel <riel@surriel.com>,
	Bharata B Rao <bharata.rao@in.ibm.com>,
	Dhaval Giani <dhaval@linux.vnet.ibm.com>,
	KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [RFI] Shared accounting for memory resource controller
Date: Tue, 7 Apr 2009 13:33:55 +0530
Message-ID: <20090407080355.GS7082@balbir.in.ibm.com>
In-Reply-To: <20090407163331.8e577170.kamezawa.hiroyu@jp.fujitsu.com>

* KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> [2009-04-07 16:33:31]:

> On Tue, 7 Apr 2009 12:48:25 +0530
> Balbir Singh <balbir@linux.vnet.ibm.com> wrote:
> 
> > * KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> [2009-04-07 16:00:14]:
> > 
> > > On Tue, 7 Apr 2009 12:07:22 +0530
> > > Balbir Singh <balbir@linux.vnet.ibm.com> wrote:
> > > 
> > > > Hi, All,
> > > > 
> > > > This is a request for input on the design of shared page accounting for
> > > > the memory resource controller; here is what I have so far.
> > > > 
> > > 
> > > My first impression is that simple counting is impossible.
> > > IOW, "usage count" and "shared or not" are very different problems.
> > > 
> > > Assume a page and its page_cgroup.
> > > 
> > > Case 1)
> > >   1. a page is mapped by process-X under group-A
> > >   2. it's mapped by process-Y in group-B (now, shared and charged under group-A)
> > >   3. move process-X to group-B
> > >   4. now the page is not shared.
> > > 
> > 
> > By shared I don't mean only pages shared between cgroups; it could also
> > be a page shared within the same cgroup
> > 
> Hmm, is that useful information?
> 
> That kind of information can be calculated by
> ==
>    rss = 0;
>    for_each_process_under_cgroup() {
>        mm = tsk->mm
>        rss += mm->anon_rss;
>    }
>    sum_of_all_rss = rss;
>    
>    shared_ratio = mem_cgroup->rss * 100 / sum_of_all_rss;
> ==
>    if this is 100%, none of the anon memory is shared.
>

Why only anon? This seems like a good idea, except when a page is
charged to a cgroup and the task that charged it has since migrated
away; in that case sum_of_all_rss will be 0.
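
To make that concrete, here is a rough, untested sketch of the
calculation (not a patch). cgroup_iter_start()/next()/end() and
get_mm_counter(mm, anon_rss) are the existing helpers;
memcg_rss_pages() is only a placeholder for however the group's
charged anon RSS (in pages) would be read, e.g. from the
MEM_CGROUP_STAT_RSS statistic in memcontrol.c, and locking is
glossed over:

==
/*
 * Untested sketch only.  memcg_rss_pages() is a made-up placeholder
 * for reading the group's charged anon RSS (in pages); tsk->mm is
 * read without proper locking; and cgroup_iter_*() walks tasks, so
 * threads sharing an mm are counted more than once.
 */
static unsigned long mem_cgroup_shared_ratio(struct mem_cgroup *mem,
					     struct cgroup *cgrp)
{
	struct cgroup_iter it;
	struct task_struct *tsk;
	unsigned long sum_of_all_rss = 0;

	cgroup_iter_start(cgrp, &it);
	while ((tsk = cgroup_iter_next(cgrp, &it))) {
		struct mm_struct *mm = tsk->mm;

		if (mm)
			sum_of_all_rss += get_mm_counter(mm, anon_rss);
	}
	cgroup_iter_end(cgrp, &it);

	/* The migration case above: nothing left in the group to compare. */
	if (!sum_of_all_rss)
		return 0;

	return memcg_rss_pages(mem) * 100 / sum_of_all_rss;
}
==

If I read it right, a value under 100% would indicate sharing within
the group (the same charged page shows up in several mms), while a
value over 100% would point at exactly the migration case above.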
 
> 
> > > Case 2)
> > >   swap is an object which can be shared.
> > > 
> > 
> > Good point, I expect the user to account all cached pages as shared -
> > no
> Maybe yes if we explain it's so ;)
> 
> ?
> > 
> > > Case 3)
> > >   1. a page known as "A" is mapped by process-X under group-A.
> > >   2. it's mapped by process-Y under group-B (now, shared and charged under group-A)
> > >   3. Do copy-on-write by process-X.
> > >      Now, "A" is mapped only by B but accounted under group-A.
> > >      This case is ignored intentionally, now.
> > 
> > Yes, that is the original design
> > 
> > >      Do you want to call try_charge() both against group-A and group-B
> > >      under process-X's page fault ?
> > > 
> > 
> > No we don't, but copy-on-write is caught at page_dup_rmap() - no?
> > 
> Hmm, if we don't consider group-B, maybe we can.
> But I wonder whether counting is overkill.
> 
> 
> > > There will be many many corner case.
> > > 
> > > 
> > > > Motivation for shared page accounting
> > > > -------------------------------------
> > > > 1. Memory cgroup administrators will benefit from the knowledge of how
> > > >    much of the data is shared, it helps size the groups correctly.
> > > > 2. We currently report only the pages brought in by the cgroup, knowledge
> > > >    of shared data will give a complete picture of the actual usage.
> > > > 
> > > 
> > > The motivation sounds good, but counting this in the generic rmap code
> > > will cause a lot of trouble and slow-downs.
> > > 
> > > I bet we should prepare a file as
> > >   /proc/<pid>/cgroup_maps
> > > 
> > > And show RSS/RSS-owned-by-us per process. Maybe this feature could be
> > > implemented in 3 days.
> > 
> > Yes, we can probably do that, but if we have too many processes in one
> > cgroup, we'll need to walk across all of them in user space. One other
> > alternative I did not mention is to walk the LRU like we walk page
> > tables and look at page_mapcount of every page, but that will be
> > very slow.
> 
> Can't we make use of the information in the mm counters? (As I showed above;
> see set/get/add/inc/dec_mm_counter().)
>

I've seen them; they might be a good way to get started, except for some
of the corner cases mentioned above.
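
For reference, a minimal (again untested) illustration of what the
existing counters already give us per mm, without touching rmap.
get_task_mm()/mmput(), get_mm_rss() and get_mm_counter() with
anon_rss/file_rss are the helpers as in mainline; the reporting
function itself is made up, and the "owned-by-us" number is the part
that would still need the page table or LRU walk discussed above:

==
/* Illustration only, not a patch. */
static void report_mm_rss(struct task_struct *tsk)
{
	struct mm_struct *mm = get_task_mm(tsk);

	if (!mm)
		return;
	printk(KERN_DEBUG "%s: rss=%lu anon=%lu file=%lu\n",
	       tsk->comm,
	       get_mm_rss(mm),			/* anon_rss + file_rss */
	       get_mm_counter(mm, anon_rss),
	       get_mm_counter(mm, file_rss));
	mmput(mm);
}
==

That covers the plain per-process RSS half of the suggested
/proc/<pid>/cgroup_maps; the RSS-owned-by-us half is where the real
work is.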

-- 
	Balbir

