From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1751218AbZEQEQ3 (ORCPT );
	Sun, 17 May 2009 00:16:29 -0400
Received: (majordomo@vger.kernel.org) by vger.kernel.org
	id S1750879AbZEQEQS (ORCPT );
	Sun, 17 May 2009 00:16:18 -0400
Received: from e4.ny.us.ibm.com ([32.97.182.144]:44241 "EHLO e4.ny.us.ibm.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1750858AbZEQEQR (ORCPT );
	Sun, 17 May 2009 00:16:17 -0400
Date: Sun, 17 May 2009 12:15:43 +0800
From: Balbir Singh
To: KAMEZAWA Hiroyuki
Cc: "linux-mm@kvack.org", "linux-kernel@vger.kernel.org", Andrew Morton,
	"nishimura@mxp.nes.nec.co.jp", "lizf@cn.fujitsu.com",
	"menage@google.com", KOSAKI Motohiro
Subject: Re: [RFC] Low overhead patches for the memory cgroup controller (v2)
Message-ID: <20090517041543.GA5156@balbir.in.ibm.com>
Reply-To: balbir@linux.vnet.ibm.com
References:
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
In-Reply-To:
User-Agent: Mutt/1.5.18 (2008-05-17)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

* KAMEZAWA Hiroyuki [2009-05-16 02:45:03]:

> I think set/clear flag here adds a race condition... because pc->flags is
> modified by
>
>	pc->flags = pcg_default_flags[ctype]
>
> in commit_charge(). You have to modify the above lines to be
>
>	SetPageCgroupCache(pc) or some..
>	...
>	SetPageCgroupUsed(pc)
>
> Then, you can use set_bit() without lock_page_cgroup().
> (Currently, pc->flags is modified only under lock_page_cgroup(), so
> non-atomic code is used.)

Here is the next version of the patch.

Feature: Remove the overhead associated with the root cgroup

From: Balbir Singh

This patch changes the memory cgroup and removes the overhead associated
with accounting all pages in the root cgroup. As a side effect, we can no
longer set a memory hard limit in the root cgroup.
A new flag is used to track page_cgroup entries associated with root cgroup
pages. A new flag to track whether the page has been accounted or not has
been added as well. Flags are now set atomically for page_cgroup;
pcg_default_flags is now obsolete, but I've not removed it yet, since it
helps readability.

Tests:
1. Tested lightly; previous versions showed a good performance improvement
   (about 10%).

NOTE: I haven't had the time to run oprofile and get detailed test results,
since I am in the middle of travel. Please review the code for functional
correctness and, if you can, test it as well. I would like to push this in,
especially if the performance difference I am seeing is reproducible
elsewhere too.

Signed-off-by: Balbir Singh
---

 include/linux/page_cgroup.h |   12 ++++++++++++
 mm/memcontrol.c             |   42 ++++++++++++++++++++++++++++++++++++++----
 mm/page_cgroup.c            |    1 -
 3 files changed, 50 insertions(+), 5 deletions(-)

diff --git a/include/linux/page_cgroup.h b/include/linux/page_cgroup.h
index 7339c7b..ebdae9a 100644
--- a/include/linux/page_cgroup.h
+++ b/include/linux/page_cgroup.h
@@ -26,6 +26,8 @@ enum {
 	PCG_LOCK,  /* page cgroup is locked */
 	PCG_CACHE, /* charged as cache */
 	PCG_USED, /* this object is in use. */
+	PCG_ROOT, /* page belongs to root cgroup */
+	PCG_ACCT, /* page has been accounted for */
 };
 
 #define TESTPCGFLAG(uname, lname)			\
@@ -42,9 +44,19 @@ static inline void ClearPageCgroup##uname(struct page_cgroup *pc)	\
 
 /* Cache flag is set only once (at allocation) */
 TESTPCGFLAG(Cache, CACHE)
+SETPCGFLAG(Cache, CACHE)
 
 TESTPCGFLAG(Used, USED)
 CLEARPCGFLAG(Used, USED)
+SETPCGFLAG(Used, USED)
+
+SETPCGFLAG(Root, ROOT)
+CLEARPCGFLAG(Root, ROOT)
+TESTPCGFLAG(Root, ROOT)
+
+SETPCGFLAG(Acct, ACCT)
+CLEARPCGFLAG(Acct, ACCT)
+TESTPCGFLAG(Acct, ACCT)
 
 static inline int page_cgroup_nid(struct page_cgroup *pc)
 {
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 9712ef7..35415fc 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -43,6 +43,7 @@ struct cgroup_subsys mem_cgroup_subsys __read_mostly;
 
 #define MEM_CGROUP_RECLAIM_RETRIES	5
+struct mem_cgroup *root_mem_cgroup __read_mostly;
 
 #ifdef CONFIG_CGROUP_MEM_RES_CTLR_SWAP
 /* Turned on only when memory cgroup is enabled && really_do_swap_account = 0 */
@@ -196,6 +197,10 @@ enum charge_type {
 #define PCGF_CACHE	(1UL << PCG_CACHE)
 #define PCGF_USED	(1UL << PCG_USED)
 #define PCGF_LOCK	(1UL << PCG_LOCK)
+/* Not used, but added here for completeness */
+#define PCGF_ROOT	(1UL << PCG_ROOT)
+#define PCGF_ACCT	(1UL << PCG_ACCT)
+
 static const unsigned long pcg_default_flags[NR_CHARGE_TYPE] = {
 	PCGF_CACHE | PCGF_USED | PCGF_LOCK, /* File Cache */
@@ -420,7 +425,7 @@ void mem_cgroup_del_lru_list(struct page *page, enum lru_list lru)
 		return;
 	pc = lookup_page_cgroup(page);
 	/* can happen while we handle swapcache. */
-	if (list_empty(&pc->lru) || !pc->mem_cgroup)
+	if ((!PageCgroupAcct(pc) && list_empty(&pc->lru)) || !pc->mem_cgroup)
 		return;
 	/*
	 * We don't check PCG_USED bit. It's cleared when the "page" is finally
@@ -429,6 +434,9 @@ void mem_cgroup_del_lru_list(struct page *page, enum lru_list lru)
 	mz = page_cgroup_zoneinfo(pc);
 	mem = pc->mem_cgroup;
 	MEM_CGROUP_ZSTAT(mz, lru) -= 1;
+	ClearPageCgroupAcct(pc);
+	if (PageCgroupRoot(pc))
+		return;
 	list_del_init(&pc->lru);
 	return;
 }
@@ -452,8 +460,8 @@ void mem_cgroup_rotate_lru_list(struct page *page, enum lru_list lru)
 	 * For making pc->mem_cgroup visible, insert smp_rmb() here.
 	 */
 	smp_rmb();
-	/* unused page is not rotated. */
-	if (!PageCgroupUsed(pc))
+	/* unused or root page is not rotated. */
+	if (!PageCgroupUsed(pc) || PageCgroupRoot(pc))
 		return;
 	mz = page_cgroup_zoneinfo(pc);
 	list_move(&pc->lru, &mz->lists[lru]);
@@ -477,6 +485,9 @@ void mem_cgroup_add_lru_list(struct page *page, enum lru_list lru)
 	mz = page_cgroup_zoneinfo(pc);
 	MEM_CGROUP_ZSTAT(mz, lru) += 1;
+	SetPageCgroupAcct(pc);
+	if (PageCgroupRoot(pc))
+		return;
 	list_add(&pc->lru, &mz->lists[lru]);
 }
@@ -1114,9 +1125,24 @@ static void __mem_cgroup_commit_charge(struct mem_cgroup *mem,
 		css_put(&mem->css);
 		return;
 	}
+
 	pc->mem_cgroup = mem;
 	smp_wmb();
-	pc->flags = pcg_default_flags[ctype];
+	switch (ctype) {
+	case MEM_CGROUP_CHARGE_TYPE_CACHE:
+	case MEM_CGROUP_CHARGE_TYPE_SHMEM:
+		SetPageCgroupCache(pc);
+		SetPageCgroupUsed(pc);
+		break;
+	case MEM_CGROUP_CHARGE_TYPE_MAPPED:
+		SetPageCgroupUsed(pc);
+		break;
+	default:
+		break;
+	}
+
+	if (mem == root_mem_cgroup)
+		SetPageCgroupRoot(pc);
 
 	mem_cgroup_charge_statistics(mem, pc, true);
@@ -1521,6 +1547,8 @@ __mem_cgroup_uncharge_common(struct page *page, enum charge_type ctype)
 
 	mem_cgroup_charge_statistics(mem, pc, false);
 	ClearPageCgroupUsed(pc);
+	if (mem == root_mem_cgroup)
+		ClearPageCgroupRoot(pc);
 	/*
	 * pc->mem_cgroup is not cleared here. It will be accessed when it's
	 * freed from LRU. This is safe because uncharged page is expected not
@@ -2038,6 +2066,10 @@ static int mem_cgroup_write(struct cgroup *cont, struct cftype *cft,
 	name = MEMFILE_ATTR(cft->private);
 	switch (name) {
 	case RES_LIMIT:
+		if (memcg == root_mem_cgroup) { /* Can't set limit on root */
+			ret = -EINVAL;
+			break;
+		}
 		/* This function does all necessary parse...reuse it */
 		ret = res_counter_memparse_write_strategy(buffer, &val);
 		if (ret)
@@ -2504,6 +2536,7 @@ mem_cgroup_create(struct cgroup_subsys *ss, struct cgroup *cont)
 	if (cont->parent == NULL) {
 		enable_swap_cgroup();
 		parent = NULL;
+		root_mem_cgroup = mem;
 	} else {
 		parent = mem_cgroup_from_cont(cont->parent);
 		mem->use_hierarchy = parent->use_hierarchy;
@@ -2532,6 +2565,7 @@ mem_cgroup_create(struct cgroup_subsys *ss, struct cgroup *cont)
 	return &mem->css;
 free_out:
 	__mem_cgroup_free(mem);
+	root_mem_cgroup = NULL;
 	return ERR_PTR(error);
 }
diff --git a/mm/page_cgroup.c b/mm/page_cgroup.c
index 09b73c5..6145ff6 100644
--- a/mm/page_cgroup.c
+++ b/mm/page_cgroup.c
@@ -276,7 +276,6 @@ void __meminit pgdat_page_cgroup_init(struct pglist_data *pgdat)
 
 #endif
 
-
 #ifdef CONFIG_CGROUP_MEM_RES_CTLR_SWAP
 static DEFINE_MUTEX(swap_cgroup_mutex);

-- 
	Balbir