From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: from mail203.messagelabs.com (mail203.messagelabs.com [216.82.254.243])
	by kanga.kvack.org (Postfix) with ESMTP id AC5FC6B01F5
	for ; Thu, 15 Apr 2010 02:39:03 -0400 (EDT)
Received: from kpbe19.cbf.corp.google.com (kpbe19.cbf.corp.google.com [172.25.105.83])
	by smtp-out.google.com with ESMTP id o3F6cvip016971
	for ; Thu, 15 Apr 2010 08:38:58 +0200
Received: from qw-out-2122.google.com (qwi5.prod.google.com [10.241.195.5])
	by kpbe19.cbf.corp.google.com with ESMTP id o3F6cu0x032485
	for ; Wed, 14 Apr 2010 23:38:56 -0700
Received: by qw-out-2122.google.com with SMTP id 5so379959qwi.25
	for ; Wed, 14 Apr 2010 23:38:56 -0700 (PDT)
MIME-Version: 1.0
In-Reply-To: <20100415152104.62593f37.nishimura@mxp.nes.nec.co.jp>
References: <1268609202-15581-2-git-send-email-arighi@develer.com>
	<20100319102332.f1d81c8d.kamezawa.hiroyu@jp.fujitsu.com>
	<20100319024039.GH18054@balbir.in.ibm.com>
	<20100319120049.3dbf8440.kamezawa.hiroyu@jp.fujitsu.com>
	<20100414140523.GC13535@redhat.com>
	<20100415114022.ef01b704.nishimura@mxp.nes.nec.co.jp>
	<20100415152104.62593f37.nishimura@mxp.nes.nec.co.jp>
From: Greg Thelen
Date: Wed, 14 Apr 2010 23:38:30 -0700
Message-ID:
Subject: Re: [PATCH -mmotm 1/5] memcg: disable irq at page cgroup lock
Content-Type: text/plain; charset=ISO-8859-1
Sender: owner-linux-mm@kvack.org
To: Daisuke Nishimura
Cc: Vivek Goyal, KAMEZAWA Hiroyuki, balbir@linux.vnet.ibm.com, Andrea Righi,
	Peter Zijlstra, Trond Myklebust, Suleiman Souhlal, "Kirill A. Shutemov",
	Andrew Morton, containers@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
List-ID:

On Wed, Apr 14, 2010 at 11:21 PM, Daisuke Nishimura wrote:
> On Wed, 14 Apr 2010 21:48:25 -0700, Greg Thelen wrote:
>> On Wed, Apr 14, 2010 at 7:40 PM, Daisuke Nishimura wrote:
>> > On Wed, 14 Apr 2010 13:14:07 -0700, Greg Thelen wrote:
>> >> Vivek Goyal writes:
>> >>
>> >> > On Tue, Apr 13, 2010 at 11:55:12PM -0700, Greg Thelen wrote:
>> >> >> On Thu, Mar 18, 2010 at 8:00 PM, KAMEZAWA Hiroyuki wrote:
>> >> >> > On Fri, 19 Mar 2010 08:10:39 +0530
>> >> >> > Balbir Singh wrote:
>> >> >> >
>> >> >> >> * KAMEZAWA Hiroyuki [2010-03-19 10:23:32]:
>> >> >> >>
>> >> >> >> > On Thu, 18 Mar 2010 21:58:55 +0530
>> >> >> >> > Balbir Singh wrote:
>> >> >> >> >
>> >> >> >> > > * KAMEZAWA Hiroyuki [2010-03-18 13:35:27]:
>> >> >> >> > >
>> >> >> >> > > > Then, no problem. It's ok to add mem_cgroup_update_stat() independent from
>> >> >> >> > > > mem_cgroup_update_file_mapped(). The look may be messy but it's not your
>> >> >> >> > > > fault. But please write "why add new function" in the patch description.
>> >> >> >> > > >
>> >> >> >> > > > I'm sorry for wasting your time.
>> >> >> >> > >
>> >> >> >> > > Do we need to go down this route? We could check the stat and do the
>> >> >> >> > > correct thing. In case of FILE_MAPPED, always grab page_cgroup_lock
>> >> >> >> > > and for others potentially look at trylock. It is OK for different
>> >> >> >> > > stats to be protected via different locks.
>> >> >> >> > >
>> >> >> >> >
>> >> >> >> > I _don't_ want to see a mixture of spinlock and trylock in a function.
>> >> >> >> >
>> >> >> >>
>> >> >> >> A well documented, well written function can help. The other thing is to
>> >> >> >> of course solve this correctly by introducing different locking around
>> >> >> >> the statistics. Are you suggesting the latter?
>> >> >> >>
>> >> >> >
>> >> >> > No. As I wrote:
>> >> >> >	- don't modify code around FILE_MAPPED in this series.
>> >> >> >	- add new functions for new statistics
>> >> >> > Then,
>> >> >> >	- think about cleanup later, after we confirm all things work as expected.
>> >> >>
>> >> >> I have ported Andrea Righi's memcg dirty page accounting patches to the latest
>> >> >> mmotm-2010-04-05-16-09.  In doing so I had to address this locking issue.  Does
>> >> >> the following look good?  I will (of course) submit the entire patch for review,
>> >> >> but I wanted to make sure I was aiming in the right direction.
>> >> >>
>> >> >> void mem_cgroup_update_page_stat(struct page *page,
>> >> >>			enum mem_cgroup_write_page_stat_item idx, bool charge)
>> >> >> {
>> >> >>	static int seq;
>> >> >>	struct page_cgroup *pc;
>> >> >>
>> >> >>	if (mem_cgroup_disabled())
>> >> >>		return;
>> >> >>	pc = lookup_page_cgroup(page);
>> >> >>	if (!pc || mem_cgroup_is_root(pc->mem_cgroup))
>> >> >>		return;
>> >> >>
>> >> >>	/*
>> >> >>	 * This routine does not disable irqs when updating stats.  So it is
>> >> >>	 * possible that a stat update from within an interrupt routine could
>> >> >>	 * deadlock.  Use trylock_page_cgroup() to avoid such deadlock.  This
>> >> >>	 * makes the memcg counters fuzzy.  More complicated, or lower
>> >> >>	 * performing, locking solutions avoid this fuzziness, but are not
>> >> >>	 * currently needed.
>> >> >>	 */
>> >> >>	if (irqs_disabled()) {
>> >> >	    ^^^^^^^^^
>> >> > Or maybe in_interrupt()?
>> >>
>> >> Good catch.  I will replace irqs_disabled() with in_interrupt().
>> >>
>> > I think you should check both.
__remove_from_page_cache(), which will update
>> > DIRTY, is called with irqs disabled (iow, under mapping->tree_lock) but not in
>> > interrupt context.
>>
>> The only reason to use trylock in this case is to prevent deadlock
>> when running in a context that may have preempted or interrupted a
>> routine that already holds the bit locked.  In __remove_from_page_cache()
>> irqs are disabled, but that does not imply that a routine holding the
>> spinlock has been preempted.  When the bit is locked, preemption is
>> disabled.  The only way to interrupt a holder of the bit is for an
>> interrupt to occur (I/O, timer, etc).  So I think that in_interrupt()
>> is sufficient.  Am I missing something?
>>
> IIUC, it would be enough to prevent deadlock where one CPU tries to acquire
> the same page cgroup lock.  But there is still some possibility where 2 CPUs
> can deadlock each other (please see commit e767e056).
> IOW, my point is "don't call lock_page_cgroup() under mapping->tree_lock".

I see your point.  Thank you for explaining.

--
Greg