Message-ID: <5512E9FC.7090105@suse.cz>
Date: Wed, 25 Mar 2015 18:01:48 +0100
From: Vlastimil Babka <vbabka@suse.cz>
To: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>, hannes@cmpxchg.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
CC: torvalds@linux-foundation.org, akpm@linux-foundation.org, ying.huang@intel.com, aarcange@redhat.com, david@fromorbit.com, mhocko@suse.cz, tytso@mit.edu
Subject: Re: [patch 08/12] mm: page_alloc: wait for OOM killer progress before retrying
References: <1427264236-17249-1-git-send-email-hannes@cmpxchg.org> <1427264236-17249-9-git-send-email-hannes@cmpxchg.org> <201503252315.FBJ09847.FSOtOJQFOMLFVH@I-love.SAKURA.ne.jp>
In-Reply-To: <201503252315.FBJ09847.FSOtOJQFOMLFVH@I-love.SAKURA.ne.jp>

On 03/25/2015 03:15 PM, Tetsuo Handa wrote:
> Johannes Weiner wrote:
>> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
>> index 5cfda39b3268..e066ac7353a4 100644
>> --- a/mm/oom_kill.c
>> +++ b/mm/oom_kill.c
>> @@ -711,12 +711,15 @@ bool out_of_memory(struct zonelist *zonelist, gfp_t gfp_mask,
>>  		killed = 1;
>>  	}
>>  out:
>> +	if (test_thread_flag(TIF_MEMDIE))
>> +		return true;
>>  	/*
>> -	 * Give the killed threads a good chance of exiting before trying to
>> -	 * allocate memory again.
>> +	 * Wait for any outstanding OOM victims to die. In rare cases
>> +	 * victims can get stuck behind the allocating tasks, so the
>> +	 * wait needs to be bounded. It's crude alright, but cheaper
>> +	 * than keeping a global dependency tree between all tasks.
>>  	 */
>> -	if (killed)
>> -		schedule_timeout_killable(1);
>> +	wait_event_timeout(oom_victims_wait, !atomic_read(&oom_victims), HZ);
>>
>>  	return true;
>>  }
>
> out_of_memory() returning true with bounded wait effectively means that
> wait forever without choosing subsequent OOM victims when first OOM victim
> failed to die. The system will lock up, won't it?

And after patch 12, does this mean that you may not be waiting long enough
for the victim to die before you fail the allocation, prematurely? I can
imagine situations where the victim is not deadlocked but still takes more
than HZ to finish, no?