From: Barry Song <21cnbao@gmail.com>
To: Kairui Song <ryncsn@gmail.com>
Cc: akpm@linux-foundation.org, baolin.wang@linux.alibaba.com,
bhe@redhat.com, chrisl@kernel.org, david@redhat.com,
hannes@cmpxchg.org, hughd@google.com, kaleshsingh@google.com,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
nphamcs@gmail.com, ryan.roberts@arm.com,
shikemeng@huaweicloud.com, tim.c.chen@linux.intel.com,
willy@infradead.org, ying.huang@linux.alibaba.com,
yosryahmed@google.com
Subject: Re: [PATCH 05/28] mm, swap: sanitize swap cache lookup convention
Date: Wed, 21 May 2025 10:33:36 +1200
Message-ID: <CAGsJ_4z1cJfOCcpZDt4EuHK7+SON1r0ptRJNv1h=cDv+eOcdSQ@mail.gmail.com>
In-Reply-To: <CAMgjq7AO__8TFE8ibwQswWmmf4tTGg2NBEJp0aEn32vN+Dy8uw@mail.gmail.com>

On Wed, May 21, 2025 at 7:10 AM Kairui Song <ryncsn@gmail.com> wrote:
>
> On Tue, May 20, 2025 at 12:41 PM Barry Song <21cnbao@gmail.com> wrote:
> >
> > On Tue, May 20, 2025 at 3:31 PM Kairui Song <ryncsn@gmail.com> wrote:
> > >
> > > On Mon, May 19, 2025 at 12:38 PM Barry Song <21cnbao@gmail.com> wrote:
> > > >
> > > > > From: Kairui Song <kasong@tencent.com>
> > > >
> > > > > diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> > > > > index e5a0db7f3331..5b4f01aecf35 100644
> > > > > --- a/mm/userfaultfd.c
> > > > > +++ b/mm/userfaultfd.c
> > > > > @@ -1409,6 +1409,10 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
> > > > >  			goto retry;
> > > > >  		}
> > > > >  	}
> > > > > +	if (!folio_swap_contains(src_folio, entry)) {
> > > > > +		err = -EBUSY;
> > > > > +		goto out;
> > > > > +	}
> > > >
> > > > It seems we don't need this. In move_swap_pte(), we already check that
> > > > the PTE pages are stable:
> > > >
> > > > 	if (!is_pte_pages_stable(dst_pte, src_pte, orig_dst_pte, orig_src_pte,
> > > > 				 dst_pmd, dst_pmdval)) {
> > > > 		double_pt_unlock(dst_ptl, src_ptl);
> > > > 		return -EAGAIN;
> > > > 	}
> > >
> > > The tricky part is that when swap_cache_get_folio returns the folio,
> > > both the folio and the PTEs are unlocked. So is it possible that
> > > someone else swapped in the entries, then swapped them out again
> > > using the same entries?
> > >
> > > The folio will be different here, but the PTEs still have the same
> > > value, so they will pass the is_pte_pages_stable check. We previously
> > > saw similar races with anon fault or shmem. I think stricter checking
> > > won't hurt here.
> >
> > This doesn't seem to be the same case as the one you fixed in
> > do_swap_page(). Here, we're hitting the swap cache, whereas in that
> > case, no one was hitting the swap cache, and you used
> > swapcache_prepare() to set up the cache to fix the issue.
> >
> > By the way, if we're not hitting the swap cache, src_folio will be
> > NULL. Also, it seems that folio_swap_contains(src_folio, entry) does
> > not guard against that case either.
>
> Ah, that's true, it should be moved inside the if (folio) {...} block
> above. Thanks for catching this!
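
(So I guess the check becomes something like this, combined with the
-EAGAIN change mentioned below; just a sketch:

	if (folio) {
		...
		if (!folio_swap_contains(src_folio, entry)) {
			err = -EAGAIN;
			goto out;
		}
	}
)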
>
> > But I suspect we won't have a problem, since we're not swapping in;
> > we didn't read any stale data, right? Swap-in will only occur after we
> > move the PTEs.
>
> My concern is that a parallel swapin / swapout could result in the
> folio being a completely irrelevant or invalid folio.
>
> It's not about the dst side, but the move src side, something like:
>
> CPU1                                 CPU2
> move_pages_pte
>   folio = swap_cache_get_folio(...)
>   | Got folio A here
> move_swap_pte
>                                      <swapin src_pte, using folio A>
>                                      <swapout src_pte, put folio A>
>                                      | Now folio A is no longer valid.
>                                      | It's very unlikely but here SWAP
>                                      | could reuse the same entry as above.

swap_cache_get_folio() does increment the folio's refcount, but it seems
this doesn't prevent do_swap_page() from freeing the swap entry after
swapping in src_pte with folio A, if it's a read fault.

For a write fault, folio_ref_count(folio) == (1 + folio_nr_pages(folio))
will be false, since the extra reference we took via
swap_cache_get_folio() makes the refcount at least
2 + folio_nr_pages(folio):
static inline bool should_try_to_free_swap(struct folio *folio,
					   struct vm_area_struct *vma,
					   unsigned int fault_flags)
{
	...
	/*
	 * If we want to map a page that's in the swapcache writable, we
	 * have to detect via the refcount if we're really the exclusive
	 * user. Try freeing the swapcache to get rid of the swapcache
	 * reference only in case it's likely that we'll be the exclusive user.
	 */
	return (fault_flags & FAULT_FLAG_WRITE) && !folio_test_ksm(folio) &&
		folio_ref_count(folio) == (1 + folio_nr_pages(folio));
}

And for swapout, __remove_mapping() does check the refcount as well:
static int __remove_mapping(struct address_space *mapping, struct folio *folio,
			    bool reclaimed, struct mem_cgroup *target_memcg)
{
	...
	refcount = 1 + folio_nr_pages(folio);
	if (!folio_ref_freeze(folio, refcount))
		goto cannot_free;
	...
}

However, __remove_mapping() occurs late in reclaim: roughly,
add_to_swap() allocates the new entry, try_to_unmap() fills src_pte with
that entry, then pageout(), and only then __remove_mapping(). So the
refcount check doesn't prevent swapout from filling src_pte with a newly
allocated entry either.

So your concern seems valid, unless I'm missing something. Do you have
a reproducer? If so, this will likely need a separate fix patch rather
than being hidden in this patchset.
>   double_pt_lock
>   is_pte_pages_stable
>   | Passed because of entry reuse.
>   folio_move_anon_rmap(...)
>   | Moved invalid folio A.
>
> And could it be possible that swap_cache_get_folio returns NULL
> here, but later, right before the double_pt_lock, a folio is added to
> the swap cache? Maybe we'd better check the swap cache after clearing
> and releasing the dst lock, but before releasing the src lock?

It seems you're suggesting that a parallel swap-in allocates and adds
a folio to the swap cache while the PTE has not yet been updated from
a swap entry to a present mapping? As long as do_swap_page() adds the
folio to the swap cache before making the PTE present, this scenario
seems possible.

It seems we need to double-check:

diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index bc473ad21202..976053bd2bf1 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -1102,8 +1102,14 @@ static int move_swap_pte(struct mm_struct *mm, struct vm_area_struct *dst_vma,
 	if (src_folio) {
 		folio_move_anon_rmap(src_folio, dst_vma);
 		src_folio->index = linear_page_index(dst_vma, dst_addr);
+	} else {
+		struct folio *folio = filemap_get_folio(swap_address_space(entry),
+						swap_cache_index(entry));
+		if (!IS_ERR_OR_NULL(folio)) {
+			double_pt_unlock(dst_ptl, src_ptl);
+			return -EAGAIN;
+		}
 	}
-
 	orig_src_pte = ptep_get_and_clear(mm, src_addr, src_pte);
 #ifdef CONFIG_MEM_SOFT_DIRTY
 	orig_src_pte = pte_swp_mksoft_dirty(orig_src_pte);
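
Returning -EAGAIN here matches what move_swap_pte() already does when
is_pte_pages_stable() fails, so callers already handle this case; and if
I'm reading it right, the retry would then find the folio via
swap_cache_get_folio() and take the src_folio path instead.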

Let me run test case [1] to check whether this ever happens. I guess
I'll need to hack the kernel a bit to always add the folio to the swap
cache, even for SYNC IO.

[1] https://lore.kernel.org/linux-mm/20250219112519.92853-1-21cnbao@gmail.com/
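
For the hack, I'm thinking of something as simple as disabling the
swapcache-bypass condition in do_swap_page(); untested, and the exact
condition may differ on other trees:

--- a/mm/memory.c
+++ b/mm/memory.c
@@ ... @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
-	if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
-	    __swap_count(entry) == 1) {
+	/*
+	 * Hack: never bypass the swap cache, so every swapin goes
+	 * through swapin_readahead() and inserts the folio into the
+	 * swap cache, even for SWP_SYNCHRONOUS_IO devices.
+	 */
+	if (0) {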
>
>
> >
> > >
> > > >
> > > > Also, -EBUSY is not quite the right error code here.
> > >
> > > Yes, thanks, I'll use -EAGAIN here just like move_swap_pte().
> > >
> > >
> > > >
> > > > >  	err = move_swap_pte(mm, dst_vma, dst_addr, src_addr, dst_pte, src_pte,
> > > > >  			    orig_dst_pte, orig_src_pte, dst_pmd, dst_pmdval,
> > > > >  			    dst_ptl, src_ptl, src_folio);
> > > > >
> > > >
> >

Thanks
Barry