Date: Wed, 22 Apr 2026 11:47:10 -0400
From: Johannes Weiner
To: "Barry Song (Xiaomi)"
Cc: akpm@linux-foundation.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Baolin Wang, David Hildenbrand,
	Michal Hocko, Qi Zheng, Shakeel Butt, Lorenzo Stoakes,
	Kairui Song, Axel Rasmussen, Yuanchu Xie, Wei Xu,
	Wang Lian, Kunwu Chan
Subject: Re: [RFC PATCH v2] mm: Improve pgdat_balanced() to avoid over-reclamation for higher-order allocation
References: <20260422021842.78495-1-baohua@kernel.org>
In-Reply-To: <20260422021842.78495-1-baohua@kernel.org>

Hi Barry,

On Wed, Apr 22, 2026 at 10:18:42AM +0800, Barry Song (Xiaomi) wrote:
> We may encounter cases where the system still has plenty of free
> memory, but cannot satisfy higher-order allocations. On phones, we
> have observed that bursty network transfers can cause devices to
> heat up. Baolin and Kairui have seen similar behavior on servers.
>
> Currently, kswapd behaves as follows: when a higher-order allocation
> is issued with __GFP_KSWAPD_RECLAIM, pgdat_balanced() returns false
> because __zone_watermark_ok() fails if no suitable higher-order
> pages exist, even when free memory is well above the high watermark.
> As a result, kswapd_shrink_node() sets an excessively large
> sc->nr_to_reclaim and attempts aggressive reclamation:
>
>	for_each_managed_zone_pgdat(zone, pgdat, z, sc->reclaim_idx) {
>		sc->nr_to_reclaim += max(high_wmark_pages(zone), SWAP_CLUSTER_MAX);
>	}
>
> We have an opportunity to re-evaluate the balance by resetting
> sc->order to 0 after shrink_node() with the following code
> in kswapd_shrink_node():
>
>	/*
>	 * Fragmentation may mean that the system cannot be rebalanced for
>	 * high-order allocations. If twice the allocation size has been
>	 * reclaimed then recheck watermarks only at order-0 to prevent
>	 * excessive reclaim.
>	 */
>	if (sc->order && sc->nr_reclaimed >= compact_gap(sc->order))
>		sc->order = 0;
>
> But we have actually scanned and over-reclaimed far more than
> compact_gap(sc->order).

Do you have traces for how much it overshoots?

> If higher-order allocations continue, we may see persistently high
> kswapd CPU utilization coexisting with plenty of free memory in the
> system.
>
> We may want to evaluate the situation earlier at the beginning.
> If there is plenty of free memory, we could avoid triggering
> reclamation with an excessively large sc->nr_to_reclaim value
> and instead prefer compaction.
>
> Cc: Baolin Wang
> Cc: Johannes Weiner
> Cc: David Hildenbrand
> Cc: Michal Hocko
> Cc: Qi Zheng
> Cc: Shakeel Butt
> Cc: Lorenzo Stoakes
> Cc: Kairui Song
> Cc: Axel Rasmussen
> Cc: Yuanchu Xie
> Cc: Wei Xu
> Co-developed-by: Wang Lian
> Co-developed-by: Kunwu Chan
> Signed-off-by: Barry Song (Xiaomi)
> ---
> -RFC v1 was "mm: net: disable kswapd for high-order network
>  buffer allocation":
>  https://lore.kernel.org/linux-mm/20251013101636.69220-1-21cnbao@gmail.com/
>
>  mm/vmscan.c | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index bd1b1aa12581..4f9668aa8eef 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -6964,6 +6964,13 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx)
> 		if (__zone_watermark_ok(zone, order, mark, highest_zoneidx,
> 					0, free_pages))
> 			return true;
> +		/*
> +		 * Free pages may be well above the watermark, but if
> +		 * higher-order pages are unavailable, kswapd may still
> +		 * trigger excessive reclamation.
> +		 */
> +		if (order && compaction_suitable(zone, order, mark, highest_zoneidx))
> +			return true;

I've tried this in the past, but it was regressing huge page requests
under memory pressure and with higher levels of concurrency:

https://lore.kernel.org/linux-mm/20250411182156.GE366747@cmpxchg.org/

The compaction gap is sized for a single allocation, but
kswapd/kcompactd are a shared resource for potentially hundreds or
thousands of incoming requests. So if there is high demand for
contiguous memory this isn't enough - kswapd gives up too early,
kcompactd efficiency drops, you get storms of direct
reclaim/compaction, and still poor allocation success rates.

Continued kswapd wakeups mean that there is ongoing unsatisfied
demand. The system has to keep moving forward.

That said, it's well possible that we're overshooting that progress
buffer due to running reclaim scans with a high order. It might be a
better idea to look into that?