From: Hirokazu Takahashi <taka@valinux.co.jp>
To: kamezawa.hiroyu@jp.fujitsu.com
Cc: balbir@linux.vnet.ibm.com, hugh@veritas.com,
	yamamoto@valinux.co.jp, ak@suse.de, nickpiggin@yahoo.com.au,
	linux-mm@kvack.org
Subject: Re: [RFC][PATCH] radix-tree based page_cgroup. [6/7] radix-tree based page cgroup
Date: Mon, 25 Feb 2008 16:05:40 +0900 (JST)
Message-ID: <20080225.160540.80745258.taka@valinux.co.jp>
In-Reply-To: <20080225155211.f21fb44d.kamezawa.hiroyu@jp.fujitsu.com>

Hi,

> > I looked into the code a bit and I have some comments.
> > 
> > > Each radix-tree entry contains the base address of an array of page_cgroup.
> > > As sparsemem does, the base_pfn for that entry is subtracted from this
> > > registered base address. See sparsemem's logic if unsure.
> > > 
> > > Signed-off-By: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> > 
> >   (snip)
> > 
> > > +#define PCGRP_SHIFT	(8)
> > > +#define PCGRP_SIZE	(1 << PCGRP_SHIFT)
> > 
> > I wonder where the value of PCGRP_SHIFT comes from.
> > 
> On 32-bit systems (I think 64-bit should use vmalloc),
> this order comes from sizeof(struct page_cgroup) * 2^8 <= 8192, i.e. 2 pages.

The size of struct page_cgroup on 32-bit will be 28 bytes,
so sizeof(struct page_cgroup) * 2^8 = 28 * 2^8 = 7168 bytes.
I'm not sure it is acceptable to lose (8192 - 7168)/8192 = 0.125 = 12.5%
of each allocation for page_cgroup.

+struct page_cgroup {
+	struct page 		*page;       /* the page this accounts for*/
+	struct mem_cgroup 	*mem_cgroup; /* current cgroup subsys */
+	int    			flags;	     /* See below */
+	int    			refcnt;      /* reference count */
+	spinlock_t		lock;        /* lock for all above members */
+	struct list_head 	lru;         /* for per cgroup LRU */
+};

I wonder if we can find a better way.
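
For reference, here is the arithmetic spelled out as a standalone userspace
sketch. It is only an illustration: it mirrors the 32-bit layout above with
fixed-width types (4-byte pointers, no spinlock debugging) and assumes
PAGE_SIZE = 4096, so the numbers are the ones I quoted, not measurements
from the actual patch.

#include <stdio.h>
#include <stdint.h>

/*
 * Illustration only: mirror the 32-bit layout of struct page_cgroup
 * with fixed-width types so the arithmetic can be checked on any host.
 * On 32-bit x86 the pointers, the ints and an undebugged spinlock_t
 * are 4 bytes each, and the list_head is two pointers.
 */
struct page_cgroup_32 {
	uint32_t page;		/* struct page * */
	uint32_t mem_cgroup;	/* struct mem_cgroup * */
	int32_t  flags;
	int32_t  refcnt;
	uint32_t lock;		/* spinlock_t without debugging */
	uint32_t lru_next;	/* struct list_head */
	uint32_t lru_prev;
};

#define PCGRP_SHIFT	8
#define PCGRP_SIZE	(1 << PCGRP_SHIFT)
#define PAGE_SIZE_32	4096

int main(void)
{
	size_t entry = sizeof(struct page_cgroup_32);	/* 28 bytes */
	size_t chunk = entry * PCGRP_SIZE;		/* 7168 bytes */
	size_t alloc = 2 * PAGE_SIZE_32;		/* order-1 allocation */

	printf("entry %zu, chunk %zu, allocated %zu, wasted %.1f%%\n",
	       entry, chunk, alloc, 100.0 * (alloc - chunk) / alloc);
	return 0;
}

Running it prints: entry 28, chunk 7168, allocated 8192, wasted 12.5%.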

> >   (snip)
> > 
> > > +static struct page_cgroup *alloc_init_page_cgroup(unsigned long pfn, int nid,
> > > +					gfp_t mask)
> > > +{
> > > +	int size, order;
> > > +	struct page *page;
> > > +
> > > +	size = PCGRP_SIZE * sizeof(struct page_cgroup);
> > > +	order = get_order(PAGE_ALIGN(size));
> > 
> > I wonder if this alignment will waste some memory.
> > 
> Maybe. 
> 
> > > +	page = alloc_pages_node(nid, mask, order);
> > 
> > I think you should make "order" 0 if possible, so as not to cause
> > extra memory pressure.
> > 
> Hmm, and increase the depth of the radix-tree?
> But OK, starting from safe code is better. I will make this order 0
> and see what happens.
> 
> Thanks
> -Kame
> 
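
FYI, here is a quick back-of-the-envelope sketch of what forcing order 0
would mean. Again, this is just userspace arithmetic with assumed numbers
(PAGE_SIZE = 4096 and the 28-byte entry above), not code from the patch:

#include <stdio.h>

/*
 * How small must PCGRP_SHIFT become for one chunk to fit in a single
 * page (order 0), and how many more radix-tree leaf entries does that
 * cost?  Assumes PAGE_SIZE = 4096 and a 28-byte page_cgroup.
 */
#define PAGE_SIZE_32	4096UL
#define PCG_ENTRY_SIZE	28UL

int main(void)
{
	unsigned long mem_pages = 1UL << 20;	/* e.g. 4GB in 4KB pages */
	unsigned int shift;

	/* largest shift such that 2^shift entries still fit in one page */
	for (shift = 0; (PCG_ENTRY_SIZE << (shift + 1)) <= PAGE_SIZE_32; shift++)
		;

	printf("order-0 PCGRP_SHIFT = %u (%lu entries, %lu of %lu bytes used)\n",
	       shift, 1UL << shift, PCG_ENTRY_SIZE << shift, PAGE_SIZE_32);
	printf("leaf entries to cover %lu pages: %lu (vs %lu at shift 8)\n",
	       mem_pages, mem_pages >> shift, mem_pages >> 8);
	return 0;
}

So an order-0 chunk would hold 128 page_cgroups (3584 of the 4096 bytes
used) and would double the number of leaf entries needed to cover the
same amount of memory, which I think is the cost you are pointing at.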

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>
