From: Balbir Singh <balbir@linux.vnet.ibm.com>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Li Zefan <lizf@cn.fujitsu.com>,
Andrew Morton <akpm@linux-foundation.org>,
Jan Blunck <jblunck@suse.de>,
containers@lists.osdl.org,
Linux-Kernel Mailinglist <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] memcg: reduce size of per-cpu-stat to be appropriate size.
Date: Fri, 14 Nov 2008 13:13:29 +0530
Message-ID: <491D2C21.5000600@linux.vnet.ibm.com>
In-Reply-To: <20081114144926.d91f36fd.kamezawa.hiroyu@jp.fujitsu.com>
KAMEZAWA Hiroyuki wrote:
> How about this one ?
> tested on x86-64 + mmotm-Nov10, works well.
> (test on other arch is welcome.)
>
> -Kame
> ==
> As Jan Blunck <jblunck@suse.de> pointed out, allocating
> per-cpu stat for memcg to the size of NR_CPUS is not good.
>
> This patch changes mem_cgroup's cpustat allocation not based
> on NR_CPUS but based on nr_cpu_ids.
>
> From: Jan Blunck <jblunck@suse.de>
> Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
>
> ---
> mm/memcontrol.c | 34 ++++++++++++++++++----------------
> 1 file changed, 18 insertions(+), 16 deletions(-)
>
> Index: mmotm-2.6.28-Nov10/mm/memcontrol.c
> ===================================================================
> --- mmotm-2.6.28-Nov10.orig/mm/memcontrol.c
> +++ mmotm-2.6.28-Nov10/mm/memcontrol.c
> @@ -60,7 +60,7 @@ struct mem_cgroup_stat_cpu {
> } ____cacheline_aligned_in_smp;
>
> struct mem_cgroup_stat {
> - struct mem_cgroup_stat_cpu cpustat[NR_CPUS];
> + struct mem_cgroup_stat_cpu cpustat[0];
> };
>
> /*
> @@ -129,11 +129,10 @@ struct mem_cgroup {
>
> int prev_priority; /* for recording reclaim priority */
> /*
> - * statistics.
> + * statistics. This must be placed at the end of memcg.
> */
> struct mem_cgroup_stat stat;
> };
> -static struct mem_cgroup init_mem_cgroup;
>
> enum charge_type {
> MEM_CGROUP_CHARGE_TYPE_CACHE = 0,
> @@ -1292,42 +1291,45 @@ static void free_mem_cgroup_per_zone_inf
> kfree(mem->info.nodeinfo[node]);
> }
>
> +static int mem_cgroup_size(void)
inline this function?
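
For context, since only the function's first line survived my trimming of the
quote above: a minimal sketch of what the helper presumably computes, sizing
the struct by nr_cpu_ids instead of NR_CPUS (the body below is my
reconstruction, not the actual hunk):

/* Total size: the struct itself plus one stat slot per possible CPU. */
static int mem_cgroup_size(void)
{
	int cpustat_size = nr_cpu_ids * sizeof(struct mem_cgroup_stat_cpu);

	return sizeof(struct mem_cgroup) + cpustat_size;
}

Since nr_cpu_ids is usually much smaller than NR_CPUS on distro kernels built
with a large CONFIG_NR_CPUS, this should shrink the per-memcg allocation
nicely on small machines.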
Other than that, I think the cont->parent check in the freeing path has
already been spotted and pointed out earlier in the thread.
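
For reference, the alloc/free pairing this implies would look something like
the sketch below (assumptions on my part: a kmalloc-or-vmalloc split keyed on
the dynamic size, as done elsewhere in mm; the names and the PAGE_SIZE
threshold are guesses, not quotes from the patch):

static struct mem_cgroup *mem_cgroup_alloc(void)
{
	struct mem_cgroup *mem;

	/* Small allocations go to the slab allocator, large ones to vmalloc. */
	if (mem_cgroup_size() < PAGE_SIZE)
		mem = kmalloc(mem_cgroup_size(), GFP_KERNEL);
	else
		mem = vmalloc(mem_cgroup_size());

	if (mem)
		memset(mem, 0, mem_cgroup_size());
	return mem;
}

static void mem_cgroup_free(struct mem_cgroup *mem)
{
	/* Must mirror the size-based decision made at allocation time. */
	if (mem_cgroup_size() < PAGE_SIZE)
		kfree(mem);
	else
		vfree(mem);
}

With init_mem_cgroup gone, even the root group is presumably allocated
dynamically now, which is why the old cont->parent special case in the
freeing path deserves a second look.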
--
Balbir