From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Barry Song (Xiaomi)"
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, "Barry Song (Xiaomi)",
	David Hildenbrand, Lorenzo Stoakes, Zi Yan, Baolin Wang,
	"Liam R. Howlett", Nico Pache, Ryan Roberts, Dev Jain, Lance Yang,
	Chris Li, Kairui Song, Kemeng Shi, Nhat Pham, Baoquan He,
	Youngjun Park
Subject: [PATCH] mm/huge_memory: Fix outdated comment about freeing subpages in __folio_split
Date: Thu, 23 Apr 2026 11:49:17 +0800
Message-Id: <20260423034917.8234-1-baohua@kernel.org>
X-Mailer: git-send-email 2.39.3 (Apple Git-146)
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
List-Id:
List-Subscribe:
List-Unsubscribe:
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The comment appears to be outdated. First, add_to_swap() no longer
exists after Kairui’s commit b487a2da3575 ("mm, swap: simplify folio
swap allocation"). Second, partially zapped folios are now always
split before folio_alloc_swap() to avoid extra I/O, following Ryan’s
commit 5ed890ce5147 ("mm: vmscan: avoid split during
shrink_folio_list()").

Fix this by making the description more generic.

Cc: David Hildenbrand
Cc: Lorenzo Stoakes
Cc: Zi Yan
Cc: Baolin Wang
Cc: "Liam R. Howlett"
Cc: Nico Pache
Cc: Ryan Roberts
Cc: Dev Jain
Cc: Lance Yang
Cc: Chris Li
Cc: Kairui Song
Cc: Kemeng Shi
Cc: Nhat Pham
Cc: Baoquan He
Cc: Youngjun Park
Signed-off-by: Barry Song (Xiaomi)
---
 mm/huge_memory.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 970e077019b7..4586f3ccb133 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -4190,11 +4190,10 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 			folio_unlock(new_folio);
 			/*
-			 * Subpages may be freed if there wasn't any mapping
-			 * like if add_to_swap() is running on a lru page that
-			 * had its mapping zapped. And freeing these pages
-			 * requires taking the lru_lock so we do the put_page
-			 * of the tail pages after the split is complete.
+			 * Subpages whose mapping has been zapped may be freed
+			 * earlier, but freeing them requires taking the
+			 * lru_lock, so we defer put_page() on tail pages until
+			 * after the split completes.
 			 */
 			free_folio_and_swap_cache(new_folio);
 		}
--
2.39.3 (Apple Git-146)