From mboxrd@z Thu Jan  1 00:00:00 1970
From: Rik van Riel <riel@surriel.com>
To: linux-kernel@vger.kernel.org
Cc: kernel-team@meta.com, linux-mm@kvack.org, david@kernel.org,
	willy@infradead.org, surenb@google.com, hannes@cmpxchg.org,
	ljs@kernel.org, ziy@nvidia.com, usama.arif@linux.dev,
	Rik van Riel <riel@surriel.com>
Subject: [RFC PATCH 14/45] mm: page_alloc: extract claim_whole_block from try_to_claim_block
Date: Thu, 30 Apr 2026 16:20:43 -0400
Message-ID: <20260430202233.111010-15-riel@surriel.com>
X-Mailer: git-send-email 2.52.0
In-Reply-To: <20260430202233.111010-1-riel@surriel.com>
References: <20260430202233.111010-1-riel@surriel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Rik van Riel <riel@surriel.com>

Extract the whole-pageblock claiming logic from try_to_claim_block()
into a standalone claim_whole_block() function. This handles the
PB_all_free → used transition, the pageblock migratetype change, and
block splitting for orders >= pageblock_order.

Pure refactoring, no functional change. Prepares for reuse of this
logic in the per-superpageblock free lists patch.
Signed-off-by: Rik van Riel <riel@surriel.com>
Assisted-by: Claude:claude-opus-4.7 syzkaller
---
 mm/page_alloc.c | 90 +++++++++++++++++++++++++++++--------------------
 1 file changed, 54 insertions(+), 36 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8b10322d5221..907ce46c060f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2731,6 +2731,57 @@ int find_suitable_fallback(struct free_area *area, unsigned int order,
 	return -1;
 }
 
+/*
+ * claim_whole_block - claim a free block (>= pageblock_order) for a new type
+ * @zone: zone containing the page
+ * @page: free page to claim
+ * @current_order: order of the free page
+ * @order: requested allocation order
+ * @new_type: migratetype to assign
+ * @old_type: current migratetype of the block (for free list removal)
+ *
+ * Handle the PB_all_free → used transition, change the pageblock
+ * migratetype, split the block down to @order, and return the page.
+ */
+static struct page *
+claim_whole_block(struct zone *zone, struct page *page,
+		  int current_order, int order, int new_type, int old_type)
+{
+	struct superpageblock *sb;
+	unsigned int nr_added;
+	unsigned long pb_pfn;
+
+	VM_WARN_ON_ONCE(current_order < order);
+
+	/*
+	 * Clear PB_all_free for pageblocks being claimed.
+	 * This path bypasses page_del_and_expand(), so we
+	 * must handle the free→used transition here.
+	 */
+	for (pb_pfn = page_to_pfn(page);
+	     pb_pfn < page_to_pfn(page) + (1 << current_order);
+	     pb_pfn += pageblock_nr_pages) {
+		struct page *pb_page = pfn_to_page(pb_pfn);
+
+		if (get_pfnblock_bit(pb_page, pb_pfn, PB_all_free)) {
+			clear_pfnblock_bit(pb_page, pb_pfn, PB_all_free);
+			superpageblock_pb_now_used(pb_page);
+		}
+		__spb_set_has_type(pb_page, new_type);
+	}
+
+	del_page_from_free_list(page, zone, current_order, old_type);
+	change_pageblock_range(page, current_order, new_type);
+	nr_added = expand(zone, page, order, current_order, new_type);
+	account_freepages(zone, nr_added, new_type);
+
+	/* Single list update after all pageblocks processed */
+	sb = pfn_to_superpageblock(zone, page_to_pfn(page));
+	if (sb)
+		spb_update_list(sb);
+	return page;
+}
+
 /*
  * This function implements actual block claiming behaviour. If order is large
  * enough, we can claim the whole pageblock for the requested migratetype. If
@@ -2754,42 +2805,9 @@ try_to_claim_block(struct zone *zone, struct page *page,
 		return NULL;
 
 	/* Take ownership for orders >= pageblock_order */
-	if (current_order >= pageblock_order) {
-		unsigned int nr_added;
-		unsigned long pb_pfn;
-
-		/*
-		 * Clear PB_all_free for pageblocks being claimed.
-		 * This path bypasses page_del_and_expand(), so we
-		 * must handle the free→used transition here.
-		 * Use block_type (the original migratetype) because
-		 * that's what was decremented when PB_all_free was set.
-		 */
-		for (pb_pfn = page_to_pfn(page);
-		     pb_pfn < page_to_pfn(page) + (1 << current_order);
-		     pb_pfn += pageblock_nr_pages) {
-			struct page *pb_page = pfn_to_page(pb_pfn);
-
-			if (get_pfnblock_bit(pb_page, pb_pfn, PB_all_free)) {
-				clear_pfnblock_bit(pb_page, pb_pfn, PB_all_free);
-				superpageblock_pb_now_used(pb_page);
-			}
-			__spb_set_has_type(pb_page, start_type);
-		}
-		/* Single list update after all pageblocks processed */
-		{
-			struct superpageblock *sb =
-				pfn_to_superpageblock(zone, page_to_pfn(page));
-			if (sb)
-				spb_update_list(sb);
-		}
-
-		del_page_from_free_list(page, zone, current_order, block_type);
-		change_pageblock_range(page, current_order, start_type);
-		nr_added = expand(zone, page, order, current_order, start_type);
-		account_freepages(zone, nr_added, start_type);
-		return page;
-	}
+	if (current_order >= pageblock_order)
+		return claim_whole_block(zone, page, current_order, order,
+					 start_type, block_type);
 
 	/* moving whole block can fail due to zone boundary conditions */
 	if (!prep_move_freepages_block(zone, page, &start_pfn, &free_pages,
-- 
2.52.0