From: Hugh Dickins <hugh@veritas.com>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>,
Andrew Morton <akpm@linux-foundation.org>,
Pavel Emelianov <xemul@openvz.org>,
Sudhir Kumar <skumar@linux.vnet.ibm.com>,
YAMAMOTO Takashi <yamamoto@valinux.co.jp>,
Paul Menage <menage@google.com>,
lizf@cn.fujitsu.com, linux-kernel@vger.kernel.org,
taka@valinux.co.jp, linux-mm@kvack.org,
David Rientjes <rientjes@google.com>
Subject: Re: [PATCH] Move memory controller allocations to their own slabs (v3)
Date: Fri, 14 Mar 2008 11:16:09 +0000 (GMT)
Message-ID: <Pine.LNX.4.64.0803141100110.19587@blonde.site>
In-Reply-To: <20080314115645.e78b7f5c.kamezawa.hiroyu@jp.fujitsu.com>
On Fri, 14 Mar 2008, KAMEZAWA Hiroyuki wrote:
> First, in my understanding:
> - MOVABLE is for migratable pages (so not for kernel objects).
> - RECLAIMABLE is for reclaimable kernel objects (slab, etc.).
>
> Not all RECLAIMABLE objects need to be reclaimable at all times, but
> some amount of RECLAIMABLE objects (not all) should be easily reclaimable.
> For example, parts of the dentry cache and inode cache are reclaimable
> because *unused* objects are cached there.
>
> When it comes to page_cgroup, *all* objects depend on the pages they are
> assigned to. And user pages are reclaimable.
> There is a similar object: the radix tree. Radix-tree nodes are allocated
> as RECLAIMABLE objects.
>
> So I think it makes sense to change page_cgroup to be reclaimable.
>
> But the final decision should be based on how well fragmentation avoidance
> works. It's worth testing how many hugepages can be allocated dynamically
> once we make page_cgroup allocations __GFP_RECLAIMABLE.
I agree with you on all points. No need for it to be done in the same
patch as Balbir's, but yes, __GFP_RECLAIMABLE appears to be appropriate
for the page_cgroup kmem_cache.
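For concreteness, a minimal sketch of what that might look like, assuming
the cache-flag route rather than passing __GFP_RECLAIMABLE explicitly at
each allocation (this is not the exact code from Balbir's patch, and the
variable name here is made up):

	/*
	 * SLAB_RECLAIM_ACCOUNT makes the slab allocator add
	 * __GFP_RECLAIMABLE when it allocates pages for this cache,
	 * so page_cgroup slabs get grouped with other reclaimable
	 * allocations by the anti-fragmentation code.
	 */
	page_cgroup_cache = kmem_cache_create("page_cgroup",
				sizeof(struct page_cgroup), 0,
				SLAB_RECLAIM_ACCOUNT, NULL);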
(I think it's a better fit than for the radix_tree_node cache: though
the common pagecache usage of the radix_tree implies that its nodes
are reclaimable, I can't see why radix_tree nodes would intrinsically
be reclaimable. If a significant non-reclaimable user of radix-tree
comes on the scene, I'd expect us to change that assumption.)
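For comparison, this is roughly how lib/radix-tree.c marks its node cache
today (quoting from memory, so treat the exact flags as a sketch):

	/* From radix_tree_init(): SLAB_RECLAIM_ACCOUNT is what classifies
	   node allocations as reclaimable for fragmentation avoidance. */
	radix_tree_node_cachep = kmem_cache_create("radix_tree_node",
				sizeof(struct radix_tree_node), 0,
				SLAB_PANIC | SLAB_RECLAIM_ACCOUNT,
				radix_tree_node_ctor);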
Hugh
Thread overview: 4+ messages
2008-03-13 14:03 [PATCH] Move memory controller allocations to their own slabs (v3) Balbir Singh
2008-03-14 2:56 ` KAMEZAWA Hiroyuki
2008-03-14 11:16 ` Hugh Dickins [this message]
2008-03-17 11:25 ` Andy Whitcroft