From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sasha Levin
To: stable@vger.kernel.org
Cc: Usama Arif, Johannes Weiner, Zi Yan, "David Hildenbrand (Arm)",
 SeongJae Park, Wei Yang, Alistair Popple, Byungchul Park,
 Gregory Price, "Huang, Ying", Joshua Hahn, Matthew Brost,
 "Matthew Wilcox (Oracle)", Nico Pache, Rakie Kim,
 Andrew Morton, Sasha Levin
Subject: [PATCH 6.12.y 3/3] mm: migrate: requeue destination folio on deferred split queue
Date: Tue, 28 Apr 2026 11:24:12 -0400
Message-ID: <20260428152412.3034119-3-sashal@kernel.org>
X-Mailer: git-send-email 2.53.0
In-Reply-To: <20260428152412.3034119-1-sashal@kernel.org>
References: <2026042723-large-sedan-f63a@gregkh>
 <20260428152412.3034119-1-sashal@kernel.org>
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Usama Arif

[ Upstream commit a2e0c0668a3486f96b86c50e02872c8e94fd4f9c ]

During folio migration, __folio_migrate_mapping() removes the source
folio from the deferred split queue, but the destination folio is never
re-queued.  This causes underutilized THPs to escape the shrinker after
NUMA migration, since they silently drop off the deferred split list.

Fix this by recording whether the source folio was on the deferred
split queue, and its partially mapped state, before move_to_new_folio()
unqueues it, and re-queuing the destination folio after a successful
migration if it was.

By the time migrate_folio_move() runs, partially mapped folios without
a pin have already been split by migrate_pages_batch().  So only two
cases remain on the deferred list at this point:

1. Partially mapped folios with a pin (split failed).
2. Fully mapped but potentially underused folios.

The recorded partially_mapped state is forwarded to
deferred_split_folio() so that the destination folio is correctly
re-queued in both cases.
Because THPs are removed from the deferred_list, the THP shrinker
cannot split the underutilized THPs in time.  As a result, users will
see less free memory than before.

Link: https://lkml.kernel.org/r/20260312104723.1351321-1-usama.arif@linux.dev
Fixes: dafff3f4c850 ("mm: split underused THPs")
Signed-off-by: Usama Arif
Reported-by: Johannes Weiner
Acked-by: Johannes Weiner
Acked-by: Zi Yan
Acked-by: David Hildenbrand (Arm)
Acked-by: SeongJae Park
Reviewed-by: Wei Yang
Cc: Alistair Popple
Cc: Byungchul Park
Cc: Gregory Price
Cc: "Huang, Ying"
Cc: Joshua Hahn
Cc: Matthew Brost
Cc: Matthew Wilcox (Oracle)
Cc: Nico Pache
Cc: Rakie Kim
Cc: Ying Huang
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Sasha Levin
---
 mm/migrate.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/mm/migrate.c b/mm/migrate.c
index d541612c7377d..f3d1dc8d72b78 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1345,6 +1345,8 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
 	int rc;
 	int old_page_state = 0;
 	struct anon_vma *anon_vma = NULL;
+	bool src_deferred_split = false;
+	bool src_partially_mapped = false;
 	struct list_head *prev;
 
 	__migrate_folio_extract(dst, &old_page_state, &anon_vma);
@@ -1358,6 +1360,12 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
 		goto out_unlock_both;
 	}
 
+	if (folio_order(src) > 1 &&
+	    !data_race(list_empty(&src->_deferred_list))) {
+		src_deferred_split = true;
+		src_partially_mapped = folio_test_partially_mapped(src);
+	}
+
 	rc = move_to_new_folio(dst, src, mode);
 	if (rc)
 		goto out;
@@ -1378,6 +1386,15 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
 	if (old_page_state & PAGE_WAS_MAPPED)
 		remove_migration_ptes(src, dst, 0);
 
+	/*
+	 * Requeue the destination folio on the deferred split queue if
+	 * the source was on the queue. The source is unqueued in
+	 * __folio_migrate_mapping(), so we recorded the state from
+	 * before move_to_new_folio().
+	 */
+	if (src_deferred_split)
+		deferred_split_folio(dst, src_partially_mapped);
+
 out_unlock_both:
 	folio_unlock(dst);
 	set_page_owner_migrate_reason(&dst->page, reason);
-- 
2.53.0
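
For readers following along outside a kernel tree, the control flow of the fix can be modelled in plain user-space C.  This is only a sketch of the record-then-requeue logic; every `fake_`-prefixed name here is a hypothetical stand-in invented for illustration, not the real kernel API:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical, minimal model of a folio's deferred-split state. */
struct fake_folio {
	int order;              /* folio_order(): > 1 means a large folio */
	bool on_deferred_list;  /* stand-in for !list_empty(&_deferred_list) */
	bool partially_mapped;  /* stand-in for folio_test_partially_mapped() */
};

/* Stand-in for deferred_split_folio(): queue dst, carrying over the
 * partially-mapped state that was recorded from the source. */
void fake_deferred_split_folio(struct fake_folio *folio, bool partially_mapped)
{
	folio->on_deferred_list = true;
	folio->partially_mapped = partially_mapped;
}

/* Models the fixed migrate_folio_move(): record the source's
 * deferred-split state *before* the move unqueues it, then requeue
 * the destination on success. */
void fake_migrate_folio_move(struct fake_folio *src, struct fake_folio *dst)
{
	bool src_deferred_split = false;
	bool src_partially_mapped = false;

	if (src->order > 1 && src->on_deferred_list) {
		src_deferred_split = true;
		src_partially_mapped = src->partially_mapped;
	}

	/* __folio_migrate_mapping() unqueues the source here; without
	 * the recording above, the state would be lost. */
	src->on_deferred_list = false;

	if (src_deferred_split)
		fake_deferred_split_folio(dst, src_partially_mapped);
}
```

The point of the ordering is visible in the model: once the source is unqueued, its list membership can no longer be observed, so the state must be captured first and replayed onto the destination.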