From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 28 Jan 2026 10:42:51 -0800
To: mm-commits@vger.kernel.org, stable@vger.kernel.org,
    shikemeng@huaweicloud.com, nphamcs@gmail.com, hughd@google.com,
    clm@meta.com, chrisl@kernel.org, bhe@redhat.com,
    baolin.wang@linux.alibaba.com, baohua@kernel.org, kasong@tencent.com,
    akpm@linux-foundation.org
From: Andrew Morton
Subject: + mm-shmem-swap-fix-race-of-truncate-and-swap-entry-split.patch added to mm-hotfixes-unstable branch
Message-Id: <20260128184252.5C5BDC4CEF1@smtp.kernel.org>
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org

The patch titled
     Subject: mm, shmem: prevent infinite loop on truncate race
has been added to the -mm mm-hotfixes-unstable branch.
Its filename is
     mm-shmem-swap-fix-race-of-truncate-and-swap-entry-split.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-shmem-swap-fix-race-of-truncate-and-swap-entry-split.patch

This patch will later appear in the mm-hotfixes-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via various branches at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there most days

------------------------------------------------------
From: Kairui Song
Subject: mm, shmem: prevent infinite loop on truncate race
Date: Thu, 29 Jan 2026 00:19:23 +0800

When truncating a large swap entry, shmem_free_swap() returns 0 when the
entry's base index doesn't match the given index due to lookup alignment.
The failure fallback path checks whether the entry crosses the end border
and skips it when it does, so truncate won't erase an unexpected entry or
range.

But one scenario was overlooked.  When `index` points to the middle of a
large swap entry, and the large swap entry doesn't cross the end border,
find_get_entries() will return that large swap entry as the first item in
the batch, with `indices[0]` equal to `index`.  The entry's base index is
smaller than `indices[0]`, so shmem_free_swap() fails and returns 0 due to
the "base < index" check.  The code then calls shmem_confirm_swap() to get
the order, checks whether the entry crosses the end boundary (it doesn't),
and retries with the same index.  The next iteration finds the same entry
again at the same index, with the same indices, leading to an infinite
loop.

Fix this by retrying with the rounded-down index, and aborting if that
index falls below the start of the truncate range.

Link: https://lkml.kernel.org/r/aXo6ltB5iqAKJzY8@KASONG-MC4
Fixes: 809bc86517cc ("mm: shmem: support large folio swap out")
Fixes: 8a1968bd997f ("mm/shmem, swap: fix race of truncate and swap entry split")
Signed-off-by: Kairui Song
Reported-by: Chris Mason
Closes: https://lore.kernel.org/linux-mm/20260128130336.727049-1-clm@meta.com/
Cc: Baolin Wang
Cc: Baoquan He
Cc: Barry Song
Cc: Chris Li
Cc: Hugh Dickins
Cc: Kemeng Shi
Cc: Nhat Pham
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton
---
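To illustrate the retry decision the fix introduces, here is a minimal
user-space sketch.  This is not the kernel code: the helper names, the -1
sentinel, and the sample indices are invented for this example.

#include <stdio.h>

/* Same idea as the kernel's round_down() for a power-of-two alignment. */
static unsigned long round_down_pow2(unsigned long x, unsigned long align)
{
	return x & ~(align - 1);
}

/*
 * What the truncate loop should do after shmem_free_swap() fails at
 * `index` on an entry of `order`, for the truncate range [start, end):
 * return the next index to retry from, or -1 when the entry crosses a
 * border and must be skipped (partial truncate already zeroed it).
 */
static long next_index_after_failure(unsigned long index, int order,
				     unsigned long start, unsigned long end)
{
	unsigned long base = index;

	if (order > 0) {
		base = round_down_pow2(base, 1UL << order);
		if (base < start || base + (1UL << order) > end)
			return -1;
	}
	/* Retry from the entry's base rather than the mid-entry index. */
	return (long)base;
}

int main(void)
{
	/* Index 40 sits inside an order-4 entry whose base is 32; the old
	 * code would retry at index 40 and loop forever. */
	printf("%ld\n", next_index_after_failure(40, 4, 40, 64)); /* -1: base 32 < start */
	printf("%ld\n", next_index_after_failure(40, 4, 32, 64)); /* 32: retry at base */
	printf("%ld\n", next_index_after_failure(40, 0, 32, 64)); /* 40: order 0, plain retry */
	return 0;
}

With the old `index = indices[i]` behavior, the first case would return 40
again and reproduce the loop.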
 mm/shmem.c |   23 ++++++++++++++---------
 1 file changed, 14 insertions(+), 9 deletions(-)

--- a/mm/shmem.c~mm-shmem-swap-fix-race-of-truncate-and-swap-entry-split
+++ a/mm/shmem.c
@@ -1211,17 +1211,22 @@ whole_folios:
 				swaps_freed = shmem_free_swap(mapping, indices[i],
 							      end - 1, folio);
 				if (!swaps_freed) {
-					/*
-					 * If found a large swap entry cross the end border,
-					 * skip it as the truncate_inode_partial_folio above
-					 * should have at least zerod its content once.
-					 */
+					pgoff_t base = indices[i];
+
 					order = shmem_confirm_swap(mapping, indices[i],
 							radix_to_swp_entry(folio));
-					if (order > 0 && indices[i] + (1 << order) > end)
-						continue;
-					/* Swap was replaced by page: retry */
-					index = indices[i];
+					/*
+					 * If found a large swap entry cross the end or start
+					 * border, skip it as the truncate_inode_partial_folio
+					 * above should have at least zerod its content once.
+					 */
+					if (order > 0) {
+						base = round_down(base, 1 << order);
+						if (base < start || base + (1 << order) > end)
+							continue;
+					}
+					/* Swap was replaced by page or extended, retry */
+					index = base;
 					break;
 				}
 				nr_swaps_freed += swaps_freed;
_

Patches currently in -mm which might be from kasong@tencent.com are

mm-shmem-swap-fix-race-of-truncate-and-swap-entry-split.patch
mm-swap-rename-__read_swap_cache_async-to-swap_cache_alloc_folio.patch
mm-swap-split-swap-cache-preparation-loop-into-a-standalone-helper.patch
mm-swap-never-bypass-the-swap-cache-even-for-swp_synchronous_io.patch
mm-swap-always-try-to-free-swap-cache-for-swp_synchronous_io-devices.patch
mm-swap-simplify-the-code-and-reduce-indention.patch
mm-swap-free-the-swap-cache-after-folio-is-mapped.patch
mm-shmem-never-bypass-the-swap-cache-for-swp_synchronous_io.patch
mm-swap-swap-entry-of-a-bad-slot-should-not-be-considered-as-swapped-out.patch
mm-swap-consolidate-cluster-reclaim-and-usability-check.patch
mm-swap-split-locked-entry-duplicating-into-a-standalone-helper.patch
mm-swap-use-swap-cache-as-the-swap-in-synchronize-layer.patch
mm-swap-use-swap-cache-as-the-swap-in-synchronize-layer-fix.patch
mm-swap-remove-workaround-for-unsynchronized-swap-map-cache-state.patch
mm-swap-cleanup-swap-entry-management-workflow.patch
mm-swap-cleanup-swap-entry-management-workflow-fix.patch
mm-swap-add-folio-to-swap-cache-directly-on-allocation.patch
mm-swap-check-swap-table-directly-for-checking-cache.patch
mm-swap-clean-up-and-improve-swap-entries-freeing.patch
mm-swap-drop-the-swap_has_cache-flag.patch
mm-swap-remove-no-longer-needed-_swap_info_get.patch