From: Vivek Goyal <vgoyal@redhat.com>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>,
balbir@linux.vnet.ibm.com, linux-mm@kvack.org,
Andrea Righi <arighi@develer.com>,
linux-kernel@vger.kernel.org,
Trond Myklebust <trond.myklebust@fys.uio.no>,
Suleiman Souhlal <suleiman@google.com>,
Andrew Morton <akpm@linux-foundation.org>,
containers@lists.linux-foundation.org
Subject: Re: [PATCH mmotm 2.5/4] memcg: disable irq at page cgroup lock (Re: [PATCH -mmotm 3/4] memcg: dirty pages accounting and limiting infrastructure)
Date: Thu, 11 Mar 2010 11:54:13 -0500 [thread overview]
Message-ID: <20100311165413.GD29246@redhat.com> (raw)
In-Reply-To: <20100311134908.48d8b0fc.kamezawa.hiroyu@jp.fujitsu.com>
On Thu, Mar 11, 2010 at 01:49:08PM +0900, KAMEZAWA Hiroyuki wrote:
> On Thu, 11 Mar 2010 13:31:23 +0900
> Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp> wrote:
>
> > On Wed, 10 Mar 2010 09:26:24 +0530, Balbir Singh <balbir@linux.vnet.ibm.com> wrote:
> > > * nishimura@mxp.nes.nec.co.jp <nishimura@mxp.nes.nec.co.jp> [2010-03-10 10:43:09]:
>
> > I made a patch (attached) using both local_irq_disable/enable and local_irq_save/restore.
> > local_irq_save/restore is used only in mem_cgroup_update_file_mapped.
> >
> > I've also attached a histogram of 30 kernel builds in the root cgroup for each variant:
> >
> > before_root: no irq operation (original)
> > after_root: local_irq_disable/enable for all
> > after2_root: local_irq_save/restore for all
> > after3_root: mixed version (attached)
> >
> > hmm, there seems to be a tendency that before < after < after3 < after2?
> > Should I replace the save/restore version with the mixed version?
> >
>
> IMHO, starting from the after2_root version is the easiest.
> If there is any chance that lock/unlock of page_cgroup can be called in
> interrupt context, we _have to_ disable IRQs anyway.
> And if we have to do this, I prefer migration_lock rather than this mixture.
>
> BTW, how big is your system? Balbir-san's concern is about bigger machines,
> but I'm not sure this change is affected by the size of the machine.
> I'm sorry, I have no big machine now.
FWIW, I took Andrea's patches (the local_irq_save/restore solution) and
compiled the kernel on a 32-core hyperthreaded machine (64 CPUs) with
make -j32 in /dev/shm/. On this system I can't see much difference.
I compiled the kernel 10 times and took the average:
Without Andrea's patches: 28.698 seconds
With Andrea's patches: 28.711 seconds
The diff is about 0.045%.
This should all run in the root cgroup. Note that I have not mounted the
memory cgroup controller, but it is compiled in, so I am assuming that
root-group accounting still takes place. I am also assuming that actual
disk IO is not required and that /dev/shm is enough to show the effect
of local_irq_save()/restore().
Thanks
Vivek