From mboxrd@z Thu Jan  1 00:00:00 1970
From: Rik van Riel <riel@surriel.com>
To: linux-kernel@vger.kernel.org
Cc: kernel-team@meta.com, linux-mm@kvack.org, david@kernel.org,
	willy@infradead.org, surenb@google.com, hannes@cmpxchg.org,
	ljs@kernel.org, ziy@nvidia.com, usama.arif@linux.dev,
	Rik van Riel, Rik van Riel
Subject: [RFC PATCH 24/45] mm: page_alloc: targeted evacuation and dynamic reserves for tainted SPBs
Date: Thu, 30 Apr 2026 16:20:53 -0400
Message-ID: <20260430202233.111010-25-riel@surriel.com>
X-Mailer: git-send-email 2.52.0
In-Reply-To: <20260430202233.111010-1-riel@surriel.com>
References: <20260430202233.111010-1-riel@surriel.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Rik van Riel <riel@surriel.com>

Reduce tainted superpageblock proliferation with two changes:

1. Dynamic SPB_TAINTED_RESERVE: scale the movable steering reserve
   with SPB size (~3% of pageblocks, minimum 4). For a 512-pageblock
   SPB this gives 16 reserved pageblocks instead of the previous flat
   4, triggering async defrag 4x earlier and keeping more headroom for
   unmovable claims.

2.
Two-phase targeted evacuation before NOFRAGMENT drop: when the
   slowpath is about to drop ALLOC_NOFRAGMENT for unmovable/reclaimable
   allocations, first try evacuating movable pages from tainted SPBs
   to create free pageblocks. Phase 1 evacuates movable pages from
   pageblocks already labeled as the desired migratetype (buddy
   coalescing). Phase 2 evacuates entire MOVABLE pageblocks to create
   free whole pageblocks that Pass 2 can claim for the desired
   migratetype. This avoids tainting clean SPBs in many cases where
   existing tainted SPBs have reclaimable capacity.

Signed-off-by: Rik van Riel <riel@surriel.com>
Assisted-by: Claude:claude-opus-4.7 syzkaller
---
 mm/page_alloc.c | 176 ++++++++++++++++++++++++++++++++++--------------
 1 file changed, 125 insertions(+), 51 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9f4298fc2727..493db531b869 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2675,8 +2675,16 @@ static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags
  * fewer than this many free pageblocks, ensuring that unmovable claims
  * always find room in existing tainted superpageblocks instead of spilling
  * into clean ones.
+ *
+ * Scale with SPB size: reserve ~3% of pageblocks (minimum 4).
+ * For a 512-pageblock SPB this gives 16 reserved pageblocks.
  */
-#define SPB_TAINTED_RESERVE	4
+#define SPB_TAINTED_RESERVE_MIN	4
+
+static inline u16 spb_tainted_reserve(const struct superpageblock *sb)
+{
+	return max_t(u16, SPB_TAINTED_RESERVE_MIN, sb->total_pageblocks / 32);
+}
 
 /*
  * On systems with many superpageblocks, we can afford to "write off"
@@ -2988,7 +2996,7 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
 			 * with few free pageblocks to reserve space
 			 * for future unmovable/reclaimable claims.
 			 */
-			if (sb->nr_free <= SPB_TAINTED_RESERVE)
+			if (sb->nr_free <= spb_tainted_reserve(sb))
 				continue;
 
 			for (current_order = order; current_order < NR_PAGE_ORDERS;
@@ -3552,7 +3560,7 @@ __rmqueue_sb_find_fallback(struct zone *zone, unsigned int order,
 				&sb->free_area[order];
 
 			if (movable && cat == SB_TAINTED &&
-			    sb->nr_free <= SPB_TAINTED_RESERVE)
+			    sb->nr_free <= spb_tainted_reserve(sb))
 				continue;
 
 			for (i = 0; i < MIGRATE_PCPTYPES - 1; i++) {
@@ -3601,7 +3609,7 @@ __rmqueue_sb_find_fallback(struct zone *zone, unsigned int order,
 				&sb->free_area[order];
 
 			if (movable && cat == SB_TAINTED &&
-			    sb->nr_free <= SPB_TAINTED_RESERVE)
+			    sb->nr_free <= spb_tainted_reserve(sb))
 				continue;
 
 			for (i = 0; i < MIGRATE_PCPTYPES - 1; i++) {
@@ -6588,9 +6596,33 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 
 	/*
 	 * Reclaim and compaction have been tried but could not free enough
-	 * pages in already-tainted superpageblocks. Drop NOFRAGMENT as a
-	 * last resort to allow claiming from clean/empty SPBs and stealing
-	 * across migratetype boundaries. This is better than OOM-killing.
+	 * pages in already-tainted superpageblocks. Before dropping
+	 * NOFRAGMENT, try targeted evacuation of movable pages from
+	 * tainted SPBs to create free pageblocks for unmovable claims.
+	 */
+	if ((alloc_flags & ALLOC_NOFRAGMENT) &&
+	    (ac->migratetype == MIGRATE_UNMOVABLE ||
+	     ac->migratetype == MIGRATE_RECLAIMABLE)) {
+		struct zoneref *z;
+		struct zone *zone;
+
+		for_each_zone_zonelist_nodemask(zone, z, ac->zonelist,
+						ac->highest_zoneidx,
+						ac->nodemask) {
+			if (spb_evacuate_for_order(zone, order,
+						   ac->migratetype)) {
+				page = get_page_from_freelist(gfp_mask, order,
+							      alloc_flags, ac);
+				if (page)
+					goto got_pg;
+			}
+		}
+	}
+
+	/*
+	 * Targeted evacuation could not free enough either. Drop
+	 * NOFRAGMENT as a last resort to allow claiming from clean/empty
+	 * SPBs. This is better than OOM-killing.
 	 */
 	if (alloc_flags & ALLOC_NOFRAGMENT) {
 		alloc_flags &= ~ALLOC_NOFRAGMENT;
@@ -8688,7 +8720,7 @@ static bool spb_needs_defrag(struct superpageblock *sb)
 	 */
 	if (spb_get_category(sb) == SB_TAINTED)
 		return sb->nr_movable > 0 &&
-			sb->nr_free < SPB_TAINTED_RESERVE;
+			sb->nr_free < spb_tainted_reserve(sb);
 
 	/*
 	 * Clean superpageblocks: compact scattered free pages into whole
@@ -8720,7 +8752,7 @@ static bool spb_defrag_done(struct superpageblock *sb)
 	 */
 	if (spb_get_category(sb) == SB_TAINTED)
 		return !sb->nr_movable ||
-			sb->nr_free >= SPB_TAINTED_RESERVE;
+			sb->nr_free >= spb_tainted_reserve(sb);
 
 	/* Clean superpageblocks: stop when enough free pageblocks exist */
 	if (sb->nr_free >= 2)
@@ -9710,16 +9742,18 @@ static struct page *spb_try_alloc_contig(struct zone *zone,
 }
 
 /**
- * sb_collect_evacuate_candidates - Find pageblocks for targeted evacuation
+ * sb_collect_evacuate_candidates - Find tainted SPBs for targeted evacuation
  * @zone: zone to search (must hold zone->lock)
- * @migratetype: desired migratetype (MIGRATE_UNMOVABLE or MIGRATE_RECLAIMABLE)
+ * @migratetype: desired migratetype (MIGRATE_UNMOVABLE or MIGRATE_RECLAIMABLE),
+ *	or -1 to find any tainted SPB with movable pages
  * @sb_pfns: output array of tainted superpageblock start PFNs
  * @max: maximum candidates to collect
  *
- * Find tainted superpageblocks containing pageblocks of the desired migratetype
- * that also have movable pages to evacuate. Evacuating movable pages from
- * these pageblocks creates buddy coalescing opportunities for high-order
- * allocations of the desired migratetype.
+ * Find tainted superpageblocks with movable pages to evacuate. When
+ * @migratetype is specified, only return SPBs that also contain pageblocks
+ * of that type (for coalescing within existing non-movable pageblocks).
+ * When @migratetype is -1, return any tainted SPB with movable pages
+ * (for freeing whole pageblocks via movable evacuation).
  *
  * Returns number of candidate superpageblock PFNs found.
 */
@@ -9734,20 +9768,22 @@ static int sb_collect_evacuate_candidates(struct zone *zone, int migratetype,
 	for (full = 0; full < __NR_SB_FULLNESS; full++) {
 		list_for_each_entry(sb, &zone->spb_lists[SB_TAINTED][full],
 				    list) {
-			bool has_matching;
-
 			if (!sb->nr_movable)
 				continue;
 
-			if (migratetype == MIGRATE_UNMOVABLE)
-				has_matching = sb->nr_unmovable > 0;
-			else if (migratetype == MIGRATE_RECLAIMABLE)
-				has_matching = sb->nr_reclaimable > 0;
-			else
-				continue;
+			if (migratetype >= 0) {
+				bool has_matching;
 
-			if (!has_matching)
-				continue;
+				if (migratetype == MIGRATE_UNMOVABLE)
+					has_matching = sb->nr_unmovable > 0;
+				else if (migratetype == MIGRATE_RECLAIMABLE)
+					has_matching = sb->nr_reclaimable > 0;
+				else
+					continue;
+
+				if (!has_matching)
+					continue;
+			}
 
 			sb_pfns[n++] = sb->start_pfn;
 			if (n >= max)
@@ -9757,17 +9793,56 @@ static int sb_collect_evacuate_candidates(struct zone *zone, int migratetype,
 	return n;
 }
 
+/*
+ * Evacuate pageblocks of the given migratetype within a range.
+ * Returns number of pageblocks evacuated.
+ */
+static int evacuate_pb_range(struct zone *zone, unsigned long start_pfn,
+			     unsigned long end_pfn, int migratetype, int max)
+{
+	unsigned long pfn;
+	int nr_evacuated = 0;
+
+	for (pfn = start_pfn; pfn < end_pfn; pfn += pageblock_nr_pages) {
+		struct page *page;
+
+		if (!pfn_valid(pfn))
+			continue;
+
+		if (!zone_spans_pfn(zone, pfn))
+			continue;
+
+		page = pfn_to_page(pfn);
+
+		if (get_pfnblock_migratetype(page, pfn) != migratetype)
+			continue;
+
+		if (!get_pfnblock_bit(page, pfn, PB_has_movable))
+			continue;
+
+		evacuate_pageblock(zone, pfn, true);
+		if (++nr_evacuated >= max)
+			break;
+	}
+	return nr_evacuated;
+}
+
 /**
  * spb_evacuate_for_order - Targeted evacuation of movable pages from
- *	unmovable/reclaimable pageblocks
+ *	tainted superpageblocks
  * @zone: zone to work on
  * @order: allocation order that failed
  * @migratetype: desired migratetype (MIGRATE_UNMOVABLE or MIGRATE_RECLAIMABLE)
  *
- * Instead of blind compaction, use superpageblock metadata to find pageblocks
- * of the right migratetype in tainted superpageblocks and evacuate their
- * movable pages. This creates buddy coalescing opportunities within
- * the pageblock, enabling higher-order allocations.
+ * Two-phase evacuation to create free space in tainted superpageblocks:
+ *
+ * Phase 1: Evacuate movable pages from pageblocks already labeled as
+ * @migratetype. This creates buddy coalescing opportunities within
+ * existing non-movable pageblocks.
+ *
+ * Phase 2: Evacuate entire MOVABLE pageblocks from tainted SPBs.
+ * When fully evacuated, these become free whole pageblocks that
+ * __rmqueue_smallest Pass 2 can claim for the desired migratetype.
  *
  * Returns true if evacuation was performed (caller should retry allocation).
 */
@@ -9779,40 +9854,39 @@ static bool spb_evacuate_for_order(struct zone *zone, unsigned int order,
 	int nr_sbs, i;
 	bool did_evacuate = false;
 
+	/* Phase 1: coalesce within existing non-movable pageblocks */
 	spin_lock_irqsave(&zone->lock, flags);
 	nr_sbs = sb_collect_evacuate_candidates(zone, migratetype,
 						sb_pfns,
 						SPB_CONTIG_MAX_CANDIDATES);
 	spin_unlock_irqrestore(&zone->lock, flags);
 
-	for (i = 0; i < nr_sbs && !did_evacuate; i++) {
-		unsigned long pfn, end_pfn;
-
-		end_pfn = sb_pfns[i] + SUPERPAGEBLOCK_NR_PAGES;
-		for (pfn = sb_pfns[i]; pfn < end_pfn;
-		     pfn += pageblock_nr_pages) {
-			struct page *page;
+	for (i = 0; i < nr_sbs; i++) {
+		unsigned long end_pfn = sb_pfns[i] + SUPERPAGEBLOCK_NR_PAGES;
 
-			if (!pfn_valid(pfn))
-				continue;
-
-			/* Superpageblocks can straddle zone boundaries. */
-			if (!zone_spans_pfn(zone, pfn))
-				continue;
+		if (evacuate_pb_range(zone, sb_pfns[i], end_pfn,
+				      migratetype, 3))
+			did_evacuate = true;
+	}
 
-			page = pfn_to_page(pfn);
+	if (did_evacuate)
+		return true;
 
-			if (get_pfnblock_migratetype(page, pfn) != migratetype)
-				continue;
+	/* Phase 2: evacuate MOVABLE pageblocks to create free whole pageblocks */
+	spin_lock_irqsave(&zone->lock, flags);
+	nr_sbs = sb_collect_evacuate_candidates(zone, -1,
+						sb_pfns,
+						SPB_CONTIG_MAX_CANDIDATES);
+	spin_unlock_irqrestore(&zone->lock, flags);
 
-			if (!get_pfnblock_bit(page, pfn, PB_has_movable))
-				continue;
+	for (i = 0; i < nr_sbs; i++) {
+		unsigned long end_pfn = sb_pfns[i] + SUPERPAGEBLOCK_NR_PAGES;
 
-			evacuate_pageblock(zone, pfn, true);
+		if (evacuate_pb_range(zone, sb_pfns[i], end_pfn,
+				      MIGRATE_MOVABLE, 3))
 			did_evacuate = true;
-			break;
-		}
 	}
+
 	return did_evacuate;
 }
 #endif /* CONFIG_COMPACTION */
-- 
2.52.0