From mboxrd@z Thu Jan 1 00:00:00 1970
From: Rik van Riel
To: linux-kernel@vger.kernel.org
Cc: kernel-team@meta.com, linux-mm@kvack.org, david@kernel.org,
 willy@infradead.org, surenb@google.com, hannes@cmpxchg.org,
 ljs@kernel.org, ziy@nvidia.com, usama.arif@linux.dev,
 Rik van Riel, Rik van Riel
Subject: [RFC PATCH 26/45] mm: page_alloc: prevent UNMOVABLE/RECLAIMABLE mixing in pageblocks
Date: Thu, 30 Apr 2026 16:20:55 -0400
Message-ID: <20260430202233.111010-27-riel@surriel.com>
X-Mailer: git-send-email 2.52.0
In-Reply-To: <20260430202233.111010-1-riel@surriel.com>
References: <20260430202233.111010-1-riel@surriel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Rik van Riel

Summary:
Inside a tainted SPB, free pages of UNMOVABLE and RECLAIMABLE
allocations cannot be told apart by the buddy allocator's compatibility
heuristic (alike_pages == 0 between the two non-movable types in
try_to_claim_block()). Once a pageblock holds in-use pages of both
types, any sticky UNMOVABLE pinhole prevents the RECLAIMABLE pages from
coalescing into useful higher-order chunks when they drain back to the
buddy. The PB's free capacity is permanently capped at order-1 dust,
regardless of how much of it actually returns. Sticky RECLAIMABLE pages
(active dentries, locked btrfs extent_buffer folios, NOFS slab) are
unavoidable; the cost is paid in internal fragmentation.

Two paths in the page allocator create UNMOVABLE<->RECLAIMABLE mixing
today:

1. try_to_claim_block() relabels a partially used PB whenever the 50%
   threshold "free_pages + alike_pages >= pageblock_nr_pages/2" passes.
   For UNMOVABLE<->RECLAIMABLE, alike_pages == 0, so the rule
   degenerates to free_pages >= 256 (half of a 512-page pageblock; see
   the sketch after the test plan). A PB with 256 in-use UNMOVABLE
   pages plus 256 free pages passes and is relabeled RECLAIMABLE. Both
   PB_has_unmovable and PB_has_reclaimable are then set.

2. __rmqueue_steal() takes a single foreign-type page out of a PB
   without relabeling the PB. An UNMOVABLE allocation stealing from a
   RECLAIMABLE-labeled PB sets PB_has_unmovable on top of the existing
   PB_has_reclaimable.

Tighten both paths:

- Add a noncompatible_cross_type() helper that detects the
  UNMOVABLE<->RECLAIMABLE pair (MOVABLE may still mix with either,
  since movable pages can be migrated out).

- In try_to_claim_block(), require a fully free PB (free_pages ==
  pageblock_nr_pages) for any cross-type relabel, regardless of
  from_tainted_spb. The other-type bit inherited from the prior label
  is stale on a fully free PB (there are no in-use pages of either
  type), so clear it during the relabel rather than leaving the PB
  visibly mixed in PB_has_* state.

- In __rmqueue_steal(), pass a new SB_SKIP_CROSS_TYPE flag to
  __rmqueue_sb_find_fallback() so the cross-type fallback entry in
  fallbacks[] is skipped. The steal then falls through to the
  MIGRATE_MOVABLE second fallback instead of single-page-stealing into
  a foreign non-movable PB.

The from_tainted_spb=true caller of try_to_claim_block() is unaffected
because it hardcodes block_type=MIGRATE_MOVABLE. The claim_whole_block()
branch (current_order >= pageblock_order) is also unaffected: it
requires PB_all_free, so the PB is fully free of any prior type.

Test Plan:
Bare-metal devvm with the existing 4 stuck tainted SPBs
(sb[2,15,36,51] in Normal). Build and reboot. Compare the per-order
free distribution in newly tainted SPBs against the pre-patch baseline:
today order 0 and order 1 dominate; the target is meaningful (>10%)
free memory at order >= 3 in pure-RECLAIMABLE SPBs created post-patch.

Watch for tainted SPB count growth past ~12 (3x the current baseline):
the fully-free constraint on cross-type claims will taint fresh SPBs
more often, and a runaway count means the cost was misjudged. Watch
dmesg for allocation failures and verify that kswapd CPU usage stays
under 2 cores.

Existing mixed SPBs from before this change won't unmix; the win is for
SPBs created after.
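For reference, the threshold arithmetic in path 1 can be sanity-checked
with the small user-space sketch below. This is illustration only, not
kernel code: it assumes pageblock_order = 9 (a 512-page pageblock) and
redeclares the migratetype names plus a local cross_type() helper so
the snippet is self-contained.

/*
 * Illustrative user-space sketch of the old vs. new claim rule.
 * Assumes pageblock_order = 9; names are redeclared locally.
 * Build: gcc -Wall -o claim-sketch claim-sketch.c
 */
#include <stdbool.h>
#include <stdio.h>

#define PAGEBLOCK_ORDER		9
#define PAGEBLOCK_NR_PAGES	(1UL << PAGEBLOCK_ORDER)	/* 512 */

enum { MIGRATE_UNMOVABLE, MIGRATE_MOVABLE, MIGRATE_RECLAIMABLE };

static bool cross_type(int start_type, int fallback_type)
{
	return (start_type == MIGRATE_UNMOVABLE &&
		fallback_type == MIGRATE_RECLAIMABLE) ||
	       (start_type == MIGRATE_RECLAIMABLE &&
		fallback_type == MIGRATE_UNMOVABLE);
}

/* Old rule: claim once free + alike pages reach half the pageblock. */
static bool old_claim(unsigned long free_pages, unsigned long alike_pages)
{
	return free_pages + alike_pages >= PAGEBLOCK_NR_PAGES / 2;
}

/* New rule: the UNMOVABLE<->RECLAIMABLE pair needs a fully free PB. */
static bool new_claim(int start_type, int block_type,
		      unsigned long free_pages, unsigned long alike_pages)
{
	if (cross_type(start_type, block_type))
		return free_pages == PAGEBLOCK_NR_PAGES;
	return old_claim(free_pages, alike_pages);
}

int main(void)
{
	/* Path 1 scenario: 256 in-use UNMOVABLE pages, 256 free pages,
	 * alike_pages == 0 for the cross-type pair. */
	printf("old rule claims: %d\n", old_claim(256, 0));	/* 1 */
	printf("new rule claims: %d\n",
	       new_claim(MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE, 256, 0)); /* 0 */
	return 0;
}

With those inputs the old rule relabels the half-used pageblock while
the new rule refuses the cross-type claim.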
Reviewers:
Subscribers:
Tasks:
Tags:

Signed-off-by: Rik van Riel
Assisted-by: Claude:claude-opus-4.7 syzkaller
---
 mm/page_alloc.c | 111 ++++++++++++++++++++++++++++++++++++------------
 1 file changed, 85 insertions(+), 26 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 67cc8165ab1f..ceb1284a63ed 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3057,6 +3057,23 @@ static int fallbacks[MIGRATE_PCPTYPES][MIGRATE_PCPTYPES - 1] = {
 	[MIGRATE_RECLAIMABLE] = { MIGRATE_UNMOVABLE, MIGRATE_MOVABLE },
 };
 
+/*
+ * UNMOVABLE and RECLAIMABLE allocations should not share the same
+ * pageblock. Their free pages are interchangeable on the buddy free
+ * lists (alike_pages == 0 between them), so once a PB holds both
+ * types the buddy can no longer tell them apart and any sticky
+ * UNMOVABLE pinhole prevents the RECLAIMABLE pages from coalescing
+ * into useful higher-order chunks when they drain back. MOVABLE may
+ * mix with either, since MOVABLE pages can be migrated out.
+ */
+static inline bool noncompatible_cross_type(int start_type, int fallback_type)
+{
+	return (start_type == MIGRATE_UNMOVABLE &&
+		fallback_type == MIGRATE_RECLAIMABLE) ||
+	       (start_type == MIGRATE_RECLAIMABLE &&
+		fallback_type == MIGRATE_UNMOVABLE);
+}
+
 #ifdef CONFIG_CMA
 static __always_inline struct page *__rmqueue_cma_fallback(struct zone *zone,
 						unsigned int order)
@@ -3434,6 +3451,9 @@ try_to_claim_block(struct zone *zone, struct page *page,
 		   bool from_tainted_spb)
 {
 	int free_pages, movable_pages, alike_pages;
+#ifdef CONFIG_COMPACTION
+	struct superpageblock *sb;
+#endif
 	unsigned long start_pfn;
 
 	/*
@@ -3492,35 +3512,48 @@ try_to_claim_block(struct zone *zone, struct page *page,
 	 * allocations. Inside a tainted SPB the protection is unnecessary:
 	 * fragmentation has already been accepted at the SPB level, and
 	 * relabeling is much cheaper than tainting a fresh clean SPB.
-	 */
-	if (from_tainted_spb ||
-	    free_pages + alike_pages >= (1 << (pageblock_order-1)) ||
-	    page_group_by_mobility_disabled) {
-		__move_freepages_block(zone, start_pfn, block_type, start_type);
-		set_pageblock_migratetype(pfn_to_page(start_pfn), start_type);
-#ifdef CONFIG_COMPACTION
-		/*
-		 * Track actual page contents in pageblock flags and
-		 * update superpageblock counters so the SPB moves to
-		 * the correct fullness list for steering.
-		 */
-		{
-			struct page *start_page = pfn_to_page(start_pfn);
-			struct superpageblock *sb;
-
-			__spb_set_has_type(start_page, start_type);
-			if (block_type != start_type)
-				__spb_set_has_type(start_page, block_type);
+	 *
+	 * UNMOVABLE<->RECLAIMABLE cross-type claims override these rules:
+	 * once mixed, sticky pinholes of one type prevent the other from
+	 * coalescing into useful higher-order free chunks even after drain.
+	 * Only relabel a fully-free PB in that case, regardless of whether
+	 * the SPB is tainted.
+	 */
+	if (noncompatible_cross_type(start_type, block_type)) {
+		if (free_pages != pageblock_nr_pages)
+			return NULL;
+	} else if (!from_tainted_spb &&
+		   free_pages + alike_pages < (1 << (pageblock_order-1)) &&
+		   !page_group_by_mobility_disabled) {
+		return NULL;
+	}
 
-			sb = pfn_to_superpageblock(zone, start_pfn);
-			if (sb)
-				spb_update_list(sb);
-		}
-#endif
-		return __rmqueue_smallest(zone, order, start_type);
+	__move_freepages_block(zone, start_pfn, block_type, start_type);
+	set_pageblock_migratetype(pfn_to_page(start_pfn), start_type);
+#ifdef CONFIG_COMPACTION
+	/*
+	 * Track actual page contents in pageblock flags and update
+	 * superpageblock counters so the SPB moves to the correct
+	 * fullness list for steering.
+	 *
+	 * For cross-type UNMOVABLE<->RECLAIMABLE relabel (which by the
+	 * predicate above only fires on a fully-free PB), the inherited
+	 * PB_has_<block_type> bit is stale — there are no in-use pages
+	 * of that type. Clear it so the resulting PB is unmixed.
+	 */
+	__spb_set_has_type(pfn_to_page(start_pfn), start_type);
+	if (block_type != start_type) {
+		if (noncompatible_cross_type(start_type, block_type))
+			__spb_clear_has_type(pfn_to_page(start_pfn), block_type);
+		else
+			__spb_set_has_type(pfn_to_page(start_pfn), block_type);
 	}
 
-	return NULL;
+	sb = pfn_to_superpageblock(zone, start_pfn);
+	if (sb)
+		spb_update_list(sb);
+#endif
+	return __rmqueue_smallest(zone, order, start_type);
 }
 
 /*
@@ -3544,6 +3577,13 @@ try_to_claim_block(struct zone *zone, struct page *page,
 #define SB_SEARCH_EMPTY		(1 << 1)
 #define SB_SEARCH_FALLBACK	(1 << 2)
 #define SB_SEARCH_ALL		(SB_SEARCH_PREFERRED | SB_SEARCH_EMPTY | SB_SEARCH_FALLBACK)
+/*
+ * Skip UNMOVABLE<->RECLAIMABLE cross-type fallback. Used by the steal
+ * path to prevent landing single foreign-type pages into a PB labeled
+ * with the other non-movable type — a steal does not relabel the PB
+ * so cross-type stealing creates permanent mixing.
+ */
+#define SB_SKIP_CROSS_TYPE	(1 << 3)
 
 static struct page *
 __rmqueue_sb_find_fallback(struct zone *zone, unsigned int order,
@@ -3580,6 +3620,10 @@ __rmqueue_sb_find_fallback(struct zone *zone, unsigned int order,
 		int fmt = fallbacks[start_migratetype][i];
 		struct page *page;
 
+		if ((search_cats & SB_SKIP_CROSS_TYPE) &&
+		    noncompatible_cross_type(start_migratetype, fmt))
+			continue;
+
 		page = get_page_from_free_area(area, fmt);
 		if (page) {
@@ -3601,6 +3645,10 @@ __rmqueue_sb_find_fallback(struct zone *zone, unsigned int order,
 		int fmt = fallbacks[start_migratetype][i];
 		struct page *page;
 
+		if ((search_cats & SB_SKIP_CROSS_TYPE) &&
+		    noncompatible_cross_type(start_migratetype, fmt))
+			continue;
+
 		page = get_page_from_free_area(area, fmt);
 		if (page) {
@@ -3629,6 +3677,10 @@ __rmqueue_sb_find_fallback(struct zone *zone, unsigned int order,
 		int fmt = fallbacks[start_migratetype][i];
 		struct page *page;
 
+		if ((search_cats & SB_SKIP_CROSS_TYPE) &&
+		    noncompatible_cross_type(start_migratetype, fmt))
+			continue;
+
 		page = get_page_from_free_area(area, fmt);
 		if (page) {
@@ -3765,11 +3817,18 @@ __rmqueue_steal(struct zone *zone, int order, int start_migratetype,
 	/*
 	 * When ALLOC_NOFRAG_TAINTED_OK is set, only steal from tainted
 	 * SPBs to avoid tainting clean ones. Otherwise search all categories.
+	 *
+	 * Always skip UNMOVABLE<->RECLAIMABLE cross-type fallback. The steal
+	 * path takes a single page without relabeling its PB, so a cross-type
+	 * steal would land an UNMOVABLE page in a RECLAIMABLE-labeled PB
+	 * (or vice versa) and create permanent mixing. Falling through to
+	 * MIGRATE_MOVABLE (the second fallback) is preferable.
 	 */
 	if (alloc_flags & ALLOC_NOFRAG_TAINTED_OK)
 		search_cats = SB_SEARCH_PREFERRED;
 	else
 		search_cats = SB_SEARCH_PREFERRED | SB_SEARCH_FALLBACK;
+	search_cats |= SB_SKIP_CROSS_TYPE;
 
 	/*
 	 * Search per-superpageblock free lists for fallback migratetypes.
-- 
2.52.0