Date: Wed, 31 Mar 2010 12:00:07 +0530
From: Balbir Singh
To: KAMEZAWA Hiroyuki
Cc: David Rientjes, anfei, Oleg Nesterov, Andrew Morton, KOSAKI Motohiro,
	nishimura@mxp.nes.nec.co.jp, Mel Gorman, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH] oom killer: break from infinite loop
Message-ID: <20100331063007.GN3308@balbir.in.ibm.com>
Reply-To: balbir@linux.vnet.ibm.com
References: <20100328145528.GA14622@desktop>
	<20100328162821.GA16765@redhat.com>
	<20100329140633.GA26464@desktop>
	<20100330142923.GA10099@desktop>
	<20100331095714.9137caab.kamezawa.hiroyu@jp.fujitsu.com>
	<20100331151356.673c16c0.kamezawa.hiroyu@jp.fujitsu.com>
In-Reply-To: <20100331151356.673c16c0.kamezawa.hiroyu@jp.fujitsu.com>
User-Agent: Mutt/1.5.20 (2009-08-17)

* KAMEZAWA Hiroyuki [2010-03-31 15:13:56]:

> On Tue, 30 Mar 2010 23:07:08 -0700 (PDT)
> David Rientjes wrote:
>
> > On Wed, 31 Mar 2010, KAMEZAWA Hiroyuki wrote:
> >
> > > > > diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> > > > > index 0cb1ca4..9e89a29 100644
> > > > > --- a/mm/oom_kill.c
> > > > > +++ b/mm/oom_kill.c
> > > > > @@ -510,8 +510,10 @@ retry:
> > > > > 	if (PTR_ERR(p) == -1UL)
> > > > > 		goto out;
> > > > >
> > > > > -	if (!p)
> > > > > -		p = current;
> > > > > +	if (!p) {
> > > > > +		read_unlock(&tasklist_lock);
> > > > > +		panic("Out of memory and no killable processes...\n");
> > > > > +	}
> > > > >
> > > > > 	if (oom_kill_process(p, gfp_mask, 0, points, limit, mem,
> > > > > 			     "Memory cgroup out of memory"))
> > > >
> > > > This actually does appear to be necessary, but for a different reason: if
> > > > current is unkillable because it has OOM_DISABLE, for example, then
> > > > oom_kill_process() will repeatedly fail and mem_cgroup_out_of_memory()
> > > > will loop infinitely.
> > > >
> > > > Kame-san?
> > > >
> > >
> > > When a memcg goes into OOM and it only has unkillable processes (OOM_DISABLE),
> > > we can do nothing. (We can't panic, because a container's death != system death.)
> > >
> > > Because memcg itself has a mutex+waitqueue for mutual exclusion of the OOM
> > > killer, I think the infinite loop will not be a critical problem for the
> > > whole system.
> > >
> > > And, now, memcg has oom-kill-disable + oom-kill-notifier features.
> > > So, if a memcg goes into OOM and there is no killable process, but oom-kill
> > > is not disabled for the memcg... it means the system admin's misconfiguration.
> > >
> > > He can stop the infinite loop by hand, anyway:
> > > # echo 1 > ..../group_A/memory.oom_control
> > >
> >
> > Then we should be able to do this, since current is by definition
> > unkillable given that it was not found in select_bad_process(), right?
>
> To me, this patch is acceptable and seems reasonable.
>
> But I hadn't joined memcg development when this check was added,
> and I don't know why we kill current.
>

The reason for adding current was that we did not want to loop forever,
since that stops forward progress: no error, and no forward progress either.
It made sense to OOM kill the current process, so that the cgroup admin
could look at what went wrong.

> http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=c7ba5c9e8176704bfac0729875fa62798037584d
>
> Adding Balbir to CC. Maybe the situation has changed now.
> Because we can stop the infinite loop (by hand) and there are no rushing
> oom-kill callers, this change is acceptable.
>

By hand is not always possible if we have a large number of cgroups (I've
seen a setup with 2000 cgroups on the libcgroup ML). 2000 cgroups times the
number of processes makes the situation complex. I think using the OOM
notifier is now another way of handling such a situation.

-- 
	Three Cheers,
	Balbir