Date: Mon, 23 Mar 2026 17:53:52 +0100
From: Michal Hocko
To: David Rientjes
Cc: Andrew Morton, Vlastimil Babka, Suren Baghdasaryan, Brendan Jackman,
	Johannes Weiner, Zi Yan, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Petr Mladek, Steven Rostedt,
	John Ogness, Sergey Senozhatsky
Subject: Re: [RFC] mm, page_alloc: reintroduce page allocation stall warning
In-Reply-To: <30945cc3-9c4d-94bb-e7e7-dde71483800c@google.com>
References: <30945cc3-9c4d-94bb-e7e7-dde71483800c@google.com>

On Sat 21-03-26 20:03:16, David Rientjes wrote:
> Previously, we had warnings when a single page allocation took longer
> than reasonably expected. This was introduced in commit 63f53dea0c98
> ("mm: warn about allocations which stall for too long").
>
> The warning was subsequently reverted in commit 400e22499dd9 ("mm:
> don't warn about allocations which stall for too long"), but for
> reasons unrelated to the warning itself.
>
> Page allocation stalls in excess of 10 seconds are always useful to
> debug because they can result in severe userspace unresponsiveness.
> The warning can be correlated with userspace going out to lunch and
> used to understand the state of memory at the time.
>
> There should be a reasonable expectation that this warning will never
> trigger: it is very passive, and it starts with a 10 second floor to
> begin with.
> If it does trigger, this reveals an issue that should be fixed: a
> single page allocation should never loop for more than 10 seconds
> without oom killing to make memory available.
>
> Unlike the original implementation, this implementation only reports
> stalls that are at least a second longer than the longest stall
> reported thus far.

I am all for reintroducing the warning in some shape. The biggest
problem back then was printk being too eager to push all the pending
work onto a single executing context. I am not sure this is still the
case, so let's add the printk maintainers.

It also makes sense to differentiate the stalled-caller warning from
show_mem, which is more verbose. The former tells us who is affected,
and we want that information for every stalled task; the latter gives
us more context about the state of memory and can be printed much less
often, since a single dump describes the situation for a whole batch of
concurrent stalls.

> Signed-off-by: David Rientjes
> ---
>  mm/page_alloc.c | 32 ++++++++++++++++++++++++++++++++
>  1 file changed, 32 insertions(+)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4706,6 +4706,36 @@ check_retry_cpuset(int cpuset_mems_cookie, struct alloc_context *ac)
>  	return false;
>  }
>
> +static unsigned long max_alloc_stall_warn_msecs = 10 * 1000L;
> +
> +static void check_alloc_stall_warn(gfp_t gfp_mask, nodemask_t *nodemask,
> +				   unsigned int order, unsigned long alloc_start_time)
> +{
> +	static DEFINE_SPINLOCK(max_alloc_stall_lock);
> +	unsigned long stall_msecs = jiffies_to_msecs(jiffies - alloc_start_time);
> +	unsigned long flags;
> +
> +	if (likely(stall_msecs <= READ_ONCE(max_alloc_stall_warn_msecs)))
> +		return;
> +	if (gfp_mask & __GFP_NOWARN)
> +		return;
> +
> +	spin_lock_irqsave(&max_alloc_stall_lock, flags);
> +	if (stall_msecs > max_alloc_stall_warn_msecs) {
> +		pr_warn("%s: page allocation stall for %lu secs: order:%d, mode:%#x(%pGg) nodemask=%*pbl",
> +			current->comm, stall_msecs / MSEC_PER_SEC, order, gfp_mask,
> +			&gfp_mask, nodemask_pr_args(nodemask));
> +		cpuset_print_current_mems_allowed();
> +		pr_cont("\n");
> +		dump_stack();
> +		warn_alloc_show_mem(gfp_mask, nodemask);
> +
> +		/* Only print future stalls that are more than a second longer */
> +		WRITE_ONCE(max_alloc_stall_warn_msecs, stall_msecs + MSEC_PER_SEC);
> +	}
> +	spin_unlock_irqrestore(&max_alloc_stall_lock, flags);
> +}
> +
>  static inline struct page *
>  __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  						struct alloc_context *ac)
> @@ -4726,6 +4756,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	int reserve_flags;
>  	bool compact_first = false;
>  	bool can_retry_reserves = true;
> +	unsigned long alloc_start_time = jiffies;
>
>  	if (unlikely(nofail)) {
>  		/*
> @@ -4990,6 +5021,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	warn_alloc(gfp_mask, ac->nodemask,
>  		   "page allocation failure: order:%u", order);
> got_pg:
> +	check_alloc_stall_warn(gfp_mask, ac->nodemask, order, alloc_start_time);
>  	return page;
>  }

-- 
Michal Hocko
SUSE Labs