From: Kairui Song <ryncsn@gmail.com>
To: Baolin Wang <baolin.wang@linux.alibaba.com>,
Kemeng Shi <shikemeng@huaweicloud.com>,
Andrew Morton <akpm@linux-foundation.org>,
linux-mm@kvack.org
Cc: Hugh Dickins <hughd@google.com>,
Matthew Wilcox <willy@infradead.org>,
Chris Li <chrisl@kernel.org>, Nhat Pham <nphamcs@gmail.com>,
Baoquan He <bhe@redhat.com>, Barry Song <baohua@kernel.org>,
linux-kernel@vger.kernel.org, stable@vger.kernel.org
Subject: Re: [PATCH v5 1/8] mm/shmem, swap: improve cached mTHP handling and fix potential hung
Date: Fri, 25 Jul 2025 02:16:51 +0800
Message-ID: <CAMgjq7DfPXS4PkpGK-zem2L1gZD0dekbAyHa-CPHjf=eonoFXg@mail.gmail.com>
In-Reply-To: <CAMgjq7A+DBw=z8RPP-P1hcCH4Mid0txfmKqgqXghoE_v7zGEoA@mail.gmail.com>
On Fri, Jul 25, 2025 at 1:02 AM Kairui Song <ryncsn@gmail.com> wrote:
>
> On Thu, Jul 10, 2025 at 11:37 AM Kairui Song <ryncsn@gmail.com> wrote:
> >
> > From: Kairui Song <kasong@tencent.com>
> >
> > The current swap-in code assumes that, when a swap entry in the shmem
> > mapping is order 0, its cached folio (if present) must be order 0 too,
> > which turns out to not always be correct.
> >
> > The problem is that shmem_split_large_entry is called before verifying
> > that the folio will eventually be swapped in. One possible race is:
> >
> > CPU1                            CPU2
> > shmem_swapin_folio
> > /* swap in of order > 0 swap entry S1 */
> >  folio = swap_cache_get_folio
> >  /* folio = NULL */
> >  order = xa_get_order
> >  /* order > 0 */
> >  folio = shmem_swap_alloc_folio
> >  /* mTHP alloc failure, folio = NULL */
> >  <... Interrupted ...>
> >                                 shmem_swapin_folio
> >                                 /* S1 is swapped in */
> >                                 shmem_writeout
> >                                 /* S1 is swapped out, folio cached */
> >  shmem_split_large_entry(..., S1)
> >  /* S1 is split, but the folio covering it has order > 0 now */
> >
> > Now any following swapin of S1 will hang: `xa_get_order` returns 0, but
> > the folio lookup returns a folio with order > 0. The check
> > `xa_get_order(&mapping->i_pages, index) != folio_order(folio)` will always
> > be true, causing swap-in to return -EEXIST and the caller to retry forever.
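
To spell out the hang: after the bad split, the pre-patch checks in
shmem_swapin_folio() behave roughly like this (a paraphrase for
illustration, not verbatim kernel code):

	order = xa_get_order(&mapping->i_pages, index);	/* 0, entry was split */
	folio = swap_cache_get_folio(swap, NULL, 0);	/* cached folio, order > 0 */
	folio_lock(folio);
	if (xa_get_order(&mapping->i_pages, index) != folio_order(folio)) {
		/* 0 != (order > 0) is always true, so every attempt
		 * returns -EEXIST and the caller retries forever */
		error = -EEXIST;
		goto unlock;
	}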
> >
> > This is fragile, so fix it by allowing a larger folio to be seen in the
> > swap cache, and by checking that the whole shmem mapping range covered by
> > the swapin has the expected swap value when inserting the folio. Also drop
> > the redundant tree walks before the insertion.
> >
> > This actually improves performance, as it avoids two redundant Xarray
> > tree walks in the hot path. The only side effect is that in the failure
> > path, shmem may redundantly reallocate a few folios, causing slight
> > temporary memory pressure.
> >
> > It is also worth noting that it may seem as though the order and value
> > checks before insertion would help reduce lock contention, but that is
> > not true. The swap cache layer ensures that a raced swapin will either
> > see a swap cache folio or fail to swap in (the SWAP_HAS_CACHE bit is set
> > even when the swap cache is bypassed), so holding the folio lock and
> > checking the folio flag is already enough to avoid lock contention. The
> > chance that a folio passes the swap entry value check while the shmem
> > mapping slot has changed should be very low.
> >
> > Fixes: 809bc86517cc ("mm: shmem: support large folio swap out")
> > Signed-off-by: Kairui Song <kasong@tencent.com>
> > Reviewed-by: Kemeng Shi <shikemeng@huaweicloud.com>
> > Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> > Tested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> > Cc: <stable@vger.kernel.org>
> > ---
> > mm/shmem.c | 30 +++++++++++++++++++++---------
> > 1 file changed, 21 insertions(+), 9 deletions(-)
>
> Hi all,
>
> I just found some issues with this patch...
>
> >
> > diff --git a/mm/shmem.c b/mm/shmem.c
> > index 334b7b4a61a0..e3c9a1365ff4 100644
> > --- a/mm/shmem.c
> > +++ b/mm/shmem.c
> > @@ -884,7 +884,9 @@ static int shmem_add_to_page_cache(struct folio *folio,
> >  				   pgoff_t index, void *expected, gfp_t gfp)
> >  {
> >  	XA_STATE_ORDER(xas, &mapping->i_pages, index, folio_order(folio));
> > -	long nr = folio_nr_pages(folio);
> > +	unsigned long nr = folio_nr_pages(folio);
> > +	swp_entry_t iter, swap;
> > +	void *entry;
> >
> >  	VM_BUG_ON_FOLIO(index != round_down(index, nr), folio);
> >  	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
> > @@ -896,14 +898,24 @@ static int shmem_add_to_page_cache(struct folio *folio,
> >
> >  	gfp &= GFP_RECLAIM_MASK;
> >  	folio_throttle_swaprate(folio, gfp);
> > +	swap = iter = radix_to_swp_entry(expected);
> >
> >  	do {
> >  		xas_lock_irq(&xas);
>
> I missed an xas_reset here; it's also better to reset the iter value.
>
> > -		if (expected != xas_find_conflict(&xas)) {
> > -			xas_set_err(&xas, -EEXIST);
> > -			goto unlock;
> > +		xas_for_each_conflict(&xas, entry) {
> > +			/*
> > +			 * The range must either be empty, or filled with
> > +			 * expected swap entries. Shmem swap entries are never
> > +			 * partially freed without split of both entry and
> > +			 * folio, so there shouldn't be any holes.
> > +			 */
> > +			if (!expected || entry != swp_to_radix_entry(iter)) {
> > +				xas_set_err(&xas, -EEXIST);
> > +				goto unlock;
> > +			}
> > +			iter.val += 1 << xas_get_order(&xas);
> >  		}
> > -		if (expected && xas_find_conflict(&xas)) {
> > +		if (expected && iter.val - nr != swap.val) {
> >  			xas_set_err(&xas, -EEXIST);
> >  			goto unlock;
> >  		}
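
To spell out the new check with an example: for an order-2 `expected` entry,
swap.val == S and nr == 4. Whether the range holds one order-2 entry or four
order-0 entries, the walk advances iter.val to S + 4 by the end, so
`iter.val - nr == swap.val` holds; a hole or a foreign value fails either
the per-entry comparison or this final check.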
> > @@ -2323,7 +2335,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
> >  			error = -ENOMEM;
> >  			goto failed;
> >  		}
> > -	} else if (order != folio_order(folio)) {
> > +	} else if (order > folio_order(folio)) {
> >  		/*
> >  		 * Swap readahead may swap in order 0 folios into swapcache
> >  		 * asynchronously, while the shmem mapping can still stores
> > @@ -2348,15 +2360,15 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
> >
> >  			swap = swp_entry(swp_type(swap), swp_offset(swap) + offset);
> >  		}
> > +	} else if (order < folio_order(folio)) {
> > +		swap.val = round_down(swap.val, 1 << folio_order(folio));
> >  	}
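
For the new order < folio_order(folio) case: the cached folio is larger than
the entry at index, and swap cache folios hold naturally aligned entries, so
rounding down realigns swap to the folio's head entry. For example, with an
order-2 folio and swap.val == 0x107, round_down(0x107, 4) == 0x104, which is
what folio->swap.val is later compared against under the folio lock.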
> >
> >  alloced:
> >  	/* We have to do this with folio locked to prevent races */
> >  	folio_lock(folio);
> >  	if ((!skip_swapcache && !folio_test_swapcache(folio)) ||
> > -	    folio->swap.val != swap.val ||
> > -	    !shmem_confirm_swap(mapping, index, swap) ||
> > -	    xa_get_order(&mapping->i_pages, index) != folio_order(folio)) {
>
> And this part is incorrect. The `shmem_confirm_swap(mapping, index,
> swap)` check can't simply be omitted. Some of the functions below, called
> before shmem_add_to_page_cache, shouldn't run on folios that might have
> already been mapped by others, and this shmem_confirm_swap check is what
> ensures that won't happen.
>
> It may seem like a small change, but it leads to some minor conflicts in
> one or two of the following commits, and the benchmark results will change
> too. So I think I'll have to send a v6.
>
> We can still remove this `shmem_confirm_swap`, but not in this series I
> think; maybe after it. That will need some functions rearranged, along
> with some cleanups of shmem_add_to_page_cache and others.
>
> > +	    folio->swap.val != swap.val) {
> >  		error = -EEXIST;
> >  		goto unlock;
> >  	}
> > --
> > 2.50.0
> >
>
> In summary, I'll squash the following fix into this patch and rebase the
> later commits:
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index e3c9a1365ff4..4ca0b665b79e 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -898,9 +898,11 @@ static int shmem_add_to_page_cache(struct folio *folio,
>
>  	gfp &= GFP_RECLAIM_MASK;
>  	folio_throttle_swaprate(folio, gfp);
> -	swap = iter = radix_to_swp_entry(expected);
> +	swap = radix_to_swp_entry(expected);
>
>  	do {
> +		iter = swap;
> +		xas_reset(&xas);
Correction: this xas_reset is not needed, but the `iter = swap` is.
>  		xas_lock_irq(&xas);
>  		xas_for_each_conflict(&xas, entry) {
>  			/*
> @@ -2365,9 +2367,16 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
>  	}
>
>  alloced:
And it needs `nr_pages = folio_nr_pages(folio); index =
round_down(index, nr_pages);` here...
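i.e. roughly like this on top of the diff above (a sketch of the placement,
not a tested diff; nr_pages is assumed to already be declared in
shmem_swapin_folio):

alloced:
	/* The cached folio may be larger than the entry we looked up, so
	 * recalculate the page count and realign index to the folio head. */
	nr_pages = folio_nr_pages(folio);
	index = round_down(index, nr_pages);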
> -	/* We have to do this with folio locked to prevent races */
> +	/*
> +	 * We have to do this with folio locked to prevent races.
> +	 * The shmem_confirm_swap below only checks if the first swap
> +	 * entry matches the folio; that's enough to ensure the folio
> +	 * is not used outside of shmem, as shmem swap entries
> +	 * and swap cache folios are never partially freed.
> +	 */
>  	folio_lock(folio);
>  	if ((!skip_swapcache && !folio_test_swapcache(folio)) ||
> +	    !shmem_confirm_swap(mapping, index, swap) ||
>  	    folio->swap.val != swap.val) {
>  		error = -EEXIST;
>  		goto unlock;
>
> And I'll do some cleanup afterwards to get rid of this
> shmem_confirm_swap. What do you think?