From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Fri, 8 Jan 2016 13:37:44 +0100
From: Michal Hocko
To: Tetsuo Handa
Cc: akpm@linux-foundation.org, torvalds@linux-foundation.org, hannes@cmpxchg.org, mgorman@suse.de, rientjes@google.com, hillf.zj@alibaba-inc.com, kamezawa.hiroyu@jp.fujitsu.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 0/3] OOM detection rework v4
Message-ID: <20160108123744.GC14657@dhcp22.suse.cz>
In-Reply-To: <201512282313.DHE87075.OSLJOFOtMVQHFF@I-love.SAKURA.ne.jp>
References: <1450203586-10959-1-git-send-email-mhocko@kernel.org> <201512242141.EAH69761.MOVFQtHSFOJFLO@I-love.SAKURA.ne.jp> <201512282108.EDI82328.OHFLtVJOSQFMFO@I-love.SAKURA.ne.jp> <201512282313.DHE87075.OSLJOFOtMVQHFF@I-love.SAKURA.ne.jp>

On Mon 28-12-15 23:13:31, Tetsuo Handa wrote:
> Tetsuo Handa wrote:
> > Tetsuo Handa wrote:
> > > I got OOM killers while running heavy disk I/O (extracting the kernel
> > > source, running lxr's genxref command). (Environment: 4 CPUs / 2048MB
> > > RAM / no swap / XFS.) Do you think these OOM killers are reasonable?
> > > Is the allocator too weak against fragmentation?
> >
> > Since I cannot reproduce the workload that caused December 24's natural
> > OOM killers, I used the following stressor to generate a similar
> > situation.
>
> I have come to feel that I am observing a different problem, one that is
> currently hidden behind the "too small to fail" memory-allocation rule.
> That is, tasks requesting order > 0 pages continuously lose the
> competition when tasks requesting order = 0 pages dominate, because
> reclaimed pages are stolen by the order = 0 requests before they can be
> combined into order > 0 pages (or perhaps order > 0 pages are immediately
> split back into order = 0 pages to satisfy the order = 0 requests).
>
> Currently, order <= PAGE_ALLOC_COSTLY_ORDER allocations implicitly retry
> unless chosen by the OOM killer. Therefore, even if tasks requesting
> order = 2 pages lose the competition to tasks requesting order = 0 pages,
> the order = 2 allocation request is implicitly retried and the OOM killer
> is not invoked (though there is a problem that tasks requesting order > 0
> allocations will stall for as long as the order = 0 requests dominate).

Yes, this is possible and nothing new. High-order allocations (even small
orders) are never free and are always more expensive than order-0. I have
seen the OOM killer strike while there were megabytes of free memory on a
larger machine, just because of high fragmentation.

> But this patchset introduced a limit of 16 retries.

We retry 16 times _only_ if reclaim has made no progress at all, which
means it has not reclaimed a single page. We can still fail the watermark
check for the required order, but I think this is correct and desirable
behavior, because there is no guarantee that lower-order pages will get
coalesced after more retries. The primary point of this rework is to make
the whole thing more deterministic.

So we can see some OOM reports for high orders (