From: Rik van Riel <riel@surriel.com>
To: linux-kernel@vger.kernel.org
Cc: kernel-team@meta.com, linux-mm@kvack.org, david@kernel.org,
	willy@infradead.org, surenb@google.com, hannes@cmpxchg.org,
	ljs@kernel.org, ziy@nvidia.com, usama.arif@linux.dev,
	Rik van Riel
Subject: [RFC PATCH 13/45] mm: page_alloc: steer movable allocations to fullest clean superpageblocks
Date: Thu, 30 Apr 2026 16:20:42 -0400
Message-ID: <20260430202233.111010-14-riel@surriel.com>
X-Mailer: git-send-email 2.52.0
In-Reply-To: <20260430202233.111010-1-riel@surriel.com>
References: <20260430202233.111010-1-riel@surriel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Rik van Riel <riel@surriel.com>

When refilling the PCP with whole pageblocks for movable allocations,
prefer pageblocks from the fullest clean (only free + movable)
superpageblock. This packs movable allocations into already-partial
superpageblocks, preserving empty superpageblocks for potential 1GB
hugepage allocation.
Add sb_preferred_for_movable(), which walks the clean superpageblock
lists from fullest (SB_FULL_75; SB_FULL is skipped since it has no
free pageblocks) toward SB_ALMOST_EMPTY to find the fullest clean
superpageblock with available free pageblocks.

Add __rmqueue_from_sb(), which scans the buddy free list for a page
within a specific superpageblock's PFN range, with a bounded scan
limit (8 entries) to avoid excessive latency.

Hook into rmqueue_bulk() phase 1 (whole pageblock grab for PCP refill)
to try the preferred superpageblock before falling back to the normal
__rmqueue() path. This is the primary steering point for movable
allocations without per-superpageblock free lists.

Signed-off-by: Rik van Riel <riel@surriel.com>
Assisted-by: Claude:claude-opus-4.7 syzkaller
---
 mm/page_alloc.c | 89 +++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 86 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d795f41975c1..8b10322d5221 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2311,6 +2311,73 @@ static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags
 /* Bounded scan limit when searching free lists for tainted superpageblock pages */
 #define SPB_SCAN_LIMIT 8
 
+/**
+ * sb_preferred_for_movable - Find the fullest clean superpageblock for movable
+ * @zone: zone to search
+ *
+ * Walk spb_lists[SB_CLEAN] from nearly full toward emptiest — pack movable
+ * allocations into already-partial superpageblocks before starting new ones.
+ * Skip SB_FULL since those have no free pageblocks.
+ * Returns NULL if no suitable superpageblock found.
+ */
+static struct superpageblock *sb_preferred_for_movable(struct zone *zone)
+{
+	int full;
+	struct superpageblock *sb;
+
+	for (full = SB_FULL_75; full < __NR_SB_FULLNESS; full++) {
+		list_for_each_entry(sb, &zone->spb_lists[SB_CLEAN][full], list) {
+			if (sb->nr_free)
+				return sb;
+		}
+	}
+	/* Fall back to empty superpageblocks — no clean partials available */
+	return NULL;
+}
+
+/**
+ * __rmqueue_from_sb - Try to allocate a page from a specific superpageblock
+ * @zone: zone to allocate from
+ * @order: allocation order
+ * @migratetype: type to allocate
+ * @sb: preferred superpageblock
+ *
+ * Scan the free list at the given order for a page within the superpageblock's
+ * PFN range. Bounded scan to avoid excessive latency. Returns NULL if
+ * no suitable page found.
+ */
+static struct page *__rmqueue_from_sb(struct zone *zone, unsigned int order,
+				      int migratetype, struct superpageblock *sb)
+{
+	unsigned int current_order;
+	unsigned long sb_start = sb->start_pfn;
+	unsigned long sb_end = sb_start + (1UL << SUPERPAGEBLOCK_ORDER);
+	struct free_area *area;
+	struct page *page;
+	int scanned;
+
+	for (current_order = order; current_order < NR_PAGE_ORDERS;
+	     ++current_order) {
+		area = &zone->free_area[current_order];
+		scanned = 0;
+
+		list_for_each_entry(page, &area->free_list[migratetype],
+				    buddy_list) {
+			unsigned long pfn = page_to_pfn(page);
+
+			if (pfn >= sb_start && pfn < sb_end) {
+				page_del_and_expand(zone, page, order,
+						    current_order,
+						    migratetype);
+				return page;
+			}
+			if (++scanned >= SPB_SCAN_LIMIT)
+				break;
+		}
+	}
+	return NULL;
+}
+
 /*
  * Go through the free lists for the given migratetype and remove
  * the smallest available page from the freelists
@@ -3103,12 +3170,26 @@ static bool rmqueue_bulk(struct zone *zone, unsigned int order,
 	 * small zones, pages_needed can be less than a whole
 	 * pageblock; skip to smaller blocks or individual pages to
 	 * avoid overshooting the PCP high watermark.
+	 *
+	 * For movable allocations, prefer pageblocks from the
+	 * fullest clean superpageblock to pack allocations and
+	 * preserve empty superpageblocks for 1GB hugepages.
 	 */
 	while (refilled + pageblock_nr_pages <= pages_needed) {
-		struct page *page;
+		struct page *page = NULL;
 
-		page = __rmqueue(zone, pageblock_order,
-				 migratetype, alloc_flags, &rmqm);
+		if (migratetype == MIGRATE_MOVABLE) {
+			struct superpageblock *sb;
+
+			sb = sb_preferred_for_movable(zone);
+			if (sb)
+				page = __rmqueue_from_sb(zone, pageblock_order,
+							 migratetype, sb);
+		}
+		if (!page)
+			page = __rmqueue(zone, pageblock_order,
+					 migratetype,
+					 alloc_flags, &rmqm);
 
 		if (!page)
 			break;
@@ -5738,6 +5819,8 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
 		goto out;
 	gfp = alloc_gfp;
 
+	alloc_flags |= alloc_flags_nofragment(zonelist_zone(ac.preferred_zoneref), gfp);
+
 	/* Find an allowed local zone that meets the low watermark. */
 	z = ac.preferred_zoneref;
 	for_next_zone_zonelist_nodemask(zone, z, ac.highest_zoneidx, ac.nodemask) {
-- 
2.52.0