From: Kairui Song <ryncsn@gmail.com>
To: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
Hugh Dickins <hughd@google.com>,
Matthew Wilcox <willy@infradead.org>,
Kemeng Shi <shikemeng@huaweicloud.com>,
Chris Li <chrisl@kernel.org>, Nhat Pham <nphamcs@gmail.com>,
Baoquan He <bhe@redhat.com>, Barry Song <baohua@kernel.org>,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v4 5/9] mm/shmem, swap: avoid false positive swap cache lookup
Date: Mon, 7 Jul 2025 16:04:42 +0800 [thread overview]
Message-ID: <CAMgjq7DAeZq2zib3q_x99BssjHDa29Pnd9YFGAqLttkED_gmSA@mail.gmail.com> (raw)
In-Reply-To: <17d23ed0-3b12-42a5-a5de-994f570b1bca@linux.alibaba.com>
On Mon, Jul 7, 2025 at 3:53 PM Baolin Wang
<baolin.wang@linux.alibaba.com> wrote:
>
> Hi Kairui,
>
> On 2025/7/5 02:17, Kairui Song wrote:
> > From: Kairui Song <kasong@tencent.com>
> >
> > If a shmem read request's index points to the middle of a large swap
> > entry, shmem swapin will try the swap cache lookup using the large
> > swap entry's starting value (which is the first sub swap entry of this
> > large entry). This will lead to false positive lookup results if only
> > the first few swap entries are cached but the actual requested swap
> > entry pointed to by the index is uncached. This is not a rare event,
> > as swap readahead always tries to cache order 0 folios when possible.
> >
> > Currently, when this happens, shmem splits the large entry, aborts
> > due to a mismatching folio swap value, and then retries the swapin
> > from the beginning, which wastes CPU and adds wrong info to the
> > readahead statistics.
> >
> > This can be optimized easily by doing the lookup using the right
> > swap entry value.
> >
> > Signed-off-by: Kairui Song <kasong@tencent.com>
> > ---
> > mm/shmem.c | 31 +++++++++++++++----------------
> > 1 file changed, 15 insertions(+), 16 deletions(-)
> >
> > diff --git a/mm/shmem.c b/mm/shmem.c
> > index 217264315842..2ab214e2771c 100644
> > --- a/mm/shmem.c
> > +++ b/mm/shmem.c
> > @@ -2274,14 +2274,15 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
> > pgoff_t offset;
> >
> > VM_BUG_ON(!*foliop || !xa_is_value(*foliop));
> > - swap = index_entry = radix_to_swp_entry(*foliop);
> > + index_entry = radix_to_swp_entry(*foliop);
> > + swap = index_entry;
> > *foliop = NULL;
> >
> > - if (is_poisoned_swp_entry(swap))
> > + if (is_poisoned_swp_entry(index_entry))
> > return -EIO;
> >
> > - si = get_swap_device(swap);
> > - order = shmem_confirm_swap(mapping, index, swap);
> > + si = get_swap_device(index_entry);
> > + order = shmem_confirm_swap(mapping, index, index_entry);
> > if (unlikely(!si)) {
> > if (order < 0)
> > return -EEXIST;
> > @@ -2293,6 +2294,12 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
> > return -EEXIST;
> > }
> >
> > + /* index may point to the middle of a large entry, get the sub entry */
> > + if (order) {
> > + offset = index - round_down(index, 1 << order);
> > + swap = swp_entry(swp_type(swap), swp_offset(swap) + offset);
> > + }
> > +
> > /* Look it up and read it in.. */
> > folio = swap_cache_get_folio(swap, NULL, 0);
>
> Please drop this patch, which will cause a swapin fault dead loop.
>
> Assume an order-4 shmem folio has been swapped out, and the swap cache
> holds this order-4 folio (assuming index == 0, swap.val == 0x4000).
>
> During swapin, if the index is 1, the recalculation of the swap
> value here will result in 'swap.val == 0x4001'. This will cause the
> subsequent 'folio->swap.val != swap.val' check to keep failing,
> continuously triggering a dead-loop swapin fault and ultimately
> causing the CPU to hang.
>
Oh, thanks for catching that.
Clearly I wasn't thinking carefully enough about this. The problem
goes away if we calculate the `swap.val` based on folio_order rather
than split_order, which is what patch 8 currently does.
Previously there were only 4 patches, so I never anticipated this
problem... I can try to reorganize the patch order. I was hoping
these could be merged as one patch; some of the designs are supposed
to work together, so splitting them can cause intermediate problems
like this.
Perhaps you could help take a look at the later patches and see if we
can just merge them into one? e.g. merge or move patch 8 into this
one. Or maybe I need to move this patch later in the series.
The performance / object size / stack usage improvements are
shown in the commit message.