From: Chris Li <chrisl@kernel.org>
To: Kairui Song <kasong@tencent.com>
Cc: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
	 Matthew Wilcox <willy@infradead.org>,
	Hugh Dickins <hughd@google.com>, Barry Song <baohua@kernel.org>,
	 Baoquan He <bhe@redhat.com>, Nhat Pham <nphamcs@gmail.com>,
	 Kemeng Shi <shikemeng@huaweicloud.com>,
	Baolin Wang <baolin.wang@linux.alibaba.com>,
	 Ying Huang <ying.huang@linux.alibaba.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	 David Hildenbrand <david@redhat.com>,
	Yosry Ahmed <yosryahmed@google.com>,
	 Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
	Zi Yan <ziy@nvidia.com>,
	 linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/9] mm, swap: use unified helper for swap cache look up
Date: Tue, 26 Aug 2025 20:50:36 -0700
Message-ID: <CACePvbX0ng5Q+OpwNwB--dB=25oDb6tqcW+OJgWm=5LJeQVG2A@mail.gmail.com>
In-Reply-To: <CACePvbW_tkBAhj6-kQzyU2Jh-1dDy63Qc4K3RFyyA4=yt-_D5Q@mail.gmail.com>

BTW, I think this patch can also add a "no functional change expected"
note to the commit message.

Chris

On Tue, Aug 26, 2025 at 7:47 PM Chris Li <chrisl@kernel.org> wrote:
>
> Hi Kairui,
>
> This commit message could use some improvement. I feel the part I am
> interested in, namely what actually changed, is buried in a lot of
> detail.
>
> The background is that swap_cache_get_folio() used to update the
> readahead statistics as well, so it took the VMA as an argument.
> However, the hibernation usage does not map a swap entry to any VMA,
> so it was forced to call filemap_get_entry() on the swap cache
> instead.
>
> So the TL;DR of what this patch does:
>
> Split the readahead update out of swap_cache_get_folio(), so that the
> non-VMA hibernation usage can reuse swap_cache_get_folio() as well.
> No more calling filemap_get_entry() on the swap cache due to the lack
> of a VMA.
>
> The code itself looks fine. It has gone through some rounds of
> feedback from me already. We can always update the commit message on
> the next iteration.
>
> Acked-by: Chris Li <chrisl@kernel.org>
>
> Chris
>
>
> On Fri, Aug 22, 2025 at 12:20 PM Kairui Song <ryncsn@gmail.com> wrote:
> >
> > From: Kairui Song <kasong@tencent.com>
> >
> > Always use swap_cache_get_folio() for swap cache folio lookups. The
> > reason we were not using it everywhere is that it also updates the
> > readahead info, and some callsites want to avoid that.
> >
> > So decouple the readahead update from the swap cache lookup by moving
> > it into a standalone helper, and let the caller invoke it when needed.
> > Then convert all swap cache lookups to use swap_cache_get_folio().
> >
> > After this commit, there are only three special cases for accessing
> > the swap cache space: huge memory splitting, migration, and shmem
> > replacing, because they need to lock the Xarray. Following commits
> > will wrap their
> I commonly see it written as xarray or XArray.
> > accesses to the swap cache with special helpers too.
> >
> > Signed-off-by: Kairui Song <kasong@tencent.com>
> > ---
> >  mm/memory.c      |  6 ++-
> >  mm/mincore.c     |  3 +-
> >  mm/shmem.c       |  4 +-
> >  mm/swap.h        | 13 +++++--
> >  mm/swap_state.c  | 99 +++++++++++++++++++++++-------------------------
> >  mm/swapfile.c    | 11 +++---
> >  mm/userfaultfd.c |  5 +--
> >  7 files changed, 72 insertions(+), 69 deletions(-)
> >
> > diff --git a/mm/memory.c b/mm/memory.c
> > index d9de6c056179..10ef528a5f44 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -4660,9 +4660,11 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >         if (unlikely(!si))
> >                 goto out;
> >
> > -       folio = swap_cache_get_folio(entry, vma, vmf->address);
> > -       if (folio)
> > +       folio = swap_cache_get_folio(entry);
> > +       if (folio) {
> > +               swap_update_readahead(folio, vma, vmf->address);
> >                 page = folio_file_page(folio, swp_offset(entry));
> > +       }
> >         swapcache = folio;
> >
> >         if (!folio) {
> > diff --git a/mm/mincore.c b/mm/mincore.c
> > index 2f3e1816a30d..8ec4719370e1 100644
> > --- a/mm/mincore.c
> > +++ b/mm/mincore.c
> > @@ -76,8 +76,7 @@ static unsigned char mincore_swap(swp_entry_t entry, bool shmem)
> >                 if (!si)
> >                         return 0;
> >         }
> > -       folio = filemap_get_entry(swap_address_space(entry),
> > -                                 swap_cache_index(entry));
> > +       folio = swap_cache_get_folio(entry);
> >         if (shmem)
> >                 put_swap_device(si);
> >         /* The swap cache space contains either folio, shadow or NULL */
> > diff --git a/mm/shmem.c b/mm/shmem.c
> > index 13cc51df3893..e9d0d2784cd5 100644
> > --- a/mm/shmem.c
> > +++ b/mm/shmem.c
> > @@ -2354,7 +2354,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
> >         }
> >
> >         /* Look it up and read it in.. */
> > -       folio = swap_cache_get_folio(swap, NULL, 0);
> > +       folio = swap_cache_get_folio(swap);
> >         if (!folio) {
> >                 if (data_race(si->flags & SWP_SYNCHRONOUS_IO)) {
> >                         /* Direct swapin skipping swap cache & readahead */
> > @@ -2379,6 +2379,8 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
> >                         count_vm_event(PGMAJFAULT);
> >                         count_memcg_event_mm(fault_mm, PGMAJFAULT);
> >                 }
> > +       } else {
> > +               swap_update_readahead(folio, NULL, 0);
> >         }
> >
> >         if (order > folio_order(folio)) {
> > diff --git a/mm/swap.h b/mm/swap.h
> > index 1ae44d4193b1..efb6d7ff9f30 100644
> > --- a/mm/swap.h
> > +++ b/mm/swap.h
> > @@ -62,8 +62,7 @@ void delete_from_swap_cache(struct folio *folio);
> >  void clear_shadow_from_swap_cache(int type, unsigned long begin,
> >                                   unsigned long end);
> >  void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry, int nr);
> > -struct folio *swap_cache_get_folio(swp_entry_t entry,
> > -               struct vm_area_struct *vma, unsigned long addr);
> > +struct folio *swap_cache_get_folio(swp_entry_t entry);
> >  struct folio *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> >                 struct vm_area_struct *vma, unsigned long addr,
> >                 struct swap_iocb **plug);
> > @@ -74,6 +73,8 @@ struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
> >                 struct mempolicy *mpol, pgoff_t ilx);
> >  struct folio *swapin_readahead(swp_entry_t entry, gfp_t flag,
> >                 struct vm_fault *vmf);
> > +void swap_update_readahead(struct folio *folio, struct vm_area_struct *vma,
> > +                          unsigned long addr);
> >
> >  static inline unsigned int folio_swap_flags(struct folio *folio)
> >  {
> > @@ -159,6 +160,11 @@ static inline struct folio *swapin_readahead(swp_entry_t swp, gfp_t gfp_mask,
> >         return NULL;
> >  }
> >
> > +static inline void swap_update_readahead(struct folio *folio,
> > +               struct vm_area_struct *vma, unsigned long addr)
> > +{
> > +}
> > +
> >  static inline int swap_writeout(struct folio *folio,
> >                 struct swap_iocb **swap_plug)
> >  {
> > @@ -169,8 +175,7 @@ static inline void swapcache_clear(struct swap_info_struct *si, swp_entry_t entr
> >  {
> >  }
> >
> > -static inline struct folio *swap_cache_get_folio(swp_entry_t entry,
> > -               struct vm_area_struct *vma, unsigned long addr)
> > +static inline struct folio *swap_cache_get_folio(swp_entry_t entry)
> >  {
> >         return NULL;
> >  }
> > diff --git a/mm/swap_state.c b/mm/swap_state.c
> > index 99513b74b5d8..ff9eb761a103 100644
> > --- a/mm/swap_state.c
> > +++ b/mm/swap_state.c
> > @@ -69,6 +69,21 @@ void show_swap_cache_info(void)
> >         printk("Total swap = %lukB\n", K(total_swap_pages));
> >  }
> >
> > +/*
> > + * Lookup a swap entry in the swap cache. A found folio will be returned
> > + * unlocked and with its refcount incremented.
> > + *
> > + * Caller must lock the swap device or hold a reference to keep it valid.
> > + */
> > +struct folio *swap_cache_get_folio(swp_entry_t entry)
> > +{
> > +       struct folio *folio = filemap_get_folio(swap_address_space(entry),
> > +                                               swap_cache_index(entry));
> > +       if (!IS_ERR(folio))
> > +               return folio;
> > +       return NULL;
> > +}
> > +
> >  void *get_shadow_from_swap_cache(swp_entry_t entry)
> >  {
> >         struct address_space *address_space = swap_address_space(entry);
> > @@ -273,54 +288,40 @@ static inline bool swap_use_vma_readahead(void)
> >  }
> >
> >  /*
> > - * Lookup a swap entry in the swap cache. A found folio will be returned
> > - * unlocked and with its refcount incremented - we rely on the kernel
> > - * lock getting page table operations atomic even if we drop the folio
> > - * lock before returning.
> > - *
> > - * Caller must lock the swap device or hold a reference to keep it valid.
> > + * Update the readahead statistics of a vma or globally.
> >   */
> > -struct folio *swap_cache_get_folio(swp_entry_t entry,
> > -               struct vm_area_struct *vma, unsigned long addr)
> > +void swap_update_readahead(struct folio *folio,
> > +                          struct vm_area_struct *vma,
> > +                          unsigned long addr)
> >  {
> > -       struct folio *folio;
> > -
> > -       folio = filemap_get_folio(swap_address_space(entry), swap_cache_index(entry));
> > -       if (!IS_ERR(folio)) {
> > -               bool vma_ra = swap_use_vma_readahead();
> > -               bool readahead;
> > +       bool readahead, vma_ra = swap_use_vma_readahead();
> >
> > -               /*
> > -                * At the moment, we don't support PG_readahead for anon THP
> > -                * so let's bail out rather than confusing the readahead stat.
> > -                */
> > -               if (unlikely(folio_test_large(folio)))
> > -                       return folio;
> > -
> > -               readahead = folio_test_clear_readahead(folio);
> > -               if (vma && vma_ra) {
> > -                       unsigned long ra_val;
> > -                       int win, hits;
> > -
> > -                       ra_val = GET_SWAP_RA_VAL(vma);
> > -                       win = SWAP_RA_WIN(ra_val);
> > -                       hits = SWAP_RA_HITS(ra_val);
> > -                       if (readahead)
> > -                               hits = min_t(int, hits + 1, SWAP_RA_HITS_MAX);
> > -                       atomic_long_set(&vma->swap_readahead_info,
> > -                                       SWAP_RA_VAL(addr, win, hits));
> > -               }
> > -
> > -               if (readahead) {
> > -                       count_vm_event(SWAP_RA_HIT);
> > -                       if (!vma || !vma_ra)
> > -                               atomic_inc(&swapin_readahead_hits);
> > -               }
> > -       } else {
> > -               folio = NULL;
> > +       /*
> > +        * At the moment, we don't support PG_readahead for anon THP
> > +        * so let's bail out rather than confusing the readahead stat.
> > +        */
> > +       if (unlikely(folio_test_large(folio)))
> > +               return;
> > +
> > +       readahead = folio_test_clear_readahead(folio);
> > +       if (vma && vma_ra) {
> > +               unsigned long ra_val;
> > +               int win, hits;
> > +
> > +               ra_val = GET_SWAP_RA_VAL(vma);
> > +               win = SWAP_RA_WIN(ra_val);
> > +               hits = SWAP_RA_HITS(ra_val);
> > +               if (readahead)
> > +                       hits = min_t(int, hits + 1, SWAP_RA_HITS_MAX);
> > +               atomic_long_set(&vma->swap_readahead_info,
> > +                               SWAP_RA_VAL(addr, win, hits));
> >         }
> >
> > -       return folio;
> > +       if (readahead) {
> > +               count_vm_event(SWAP_RA_HIT);
> > +               if (!vma || !vma_ra)
> > +                       atomic_inc(&swapin_readahead_hits);
> > +       }
> >  }
> >
> >  struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > @@ -336,14 +337,10 @@ struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> >         *new_page_allocated = false;
> >         for (;;) {
> >                 int err;
> > -               /*
> > -                * First check the swap cache.  Since this is normally
> > -                * called after swap_cache_get_folio() failed, re-calling
> > -                * that would confuse statistics.
> > -                */
> > -               folio = filemap_get_folio(swap_address_space(entry),
> > -                                         swap_cache_index(entry));
> > -               if (!IS_ERR(folio))
> > +
> > +               /* Check the swap cache in case the folio is already there */
> > +               folio = swap_cache_get_folio(entry);
> > +               if (folio)
> >                         goto got_folio;
> >
> >                 /*
> > diff --git a/mm/swapfile.c b/mm/swapfile.c
> > index a7ffabbe65ef..4b8ab2cb49ca 100644
> > --- a/mm/swapfile.c
> > +++ b/mm/swapfile.c
> > @@ -213,15 +213,14 @@ static int __try_to_reclaim_swap(struct swap_info_struct *si,
> >                                  unsigned long offset, unsigned long flags)
> >  {
> >         swp_entry_t entry = swp_entry(si->type, offset);
> > -       struct address_space *address_space = swap_address_space(entry);
> >         struct swap_cluster_info *ci;
> >         struct folio *folio;
> >         int ret, nr_pages;
> >         bool need_reclaim;
> >
> >  again:
> > -       folio = filemap_get_folio(address_space, swap_cache_index(entry));
> > -       if (IS_ERR(folio))
> > +       folio = swap_cache_get_folio(entry);
> > +       if (!folio)
> >                 return 0;
> >
> >         nr_pages = folio_nr_pages(folio);
> > @@ -2131,7 +2130,7 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
> >                 pte_unmap(pte);
> >                 pte = NULL;
> >
> > -               folio = swap_cache_get_folio(entry, vma, addr);
> > +               folio = swap_cache_get_folio(entry);
> >                 if (!folio) {
> >                         struct vm_fault vmf = {
> >                                 .vma = vma,
> > @@ -2357,8 +2356,8 @@ static int try_to_unuse(unsigned int type)
> >                (i = find_next_to_unuse(si, i)) != 0) {
> >
> >                 entry = swp_entry(type, i);
> > -               folio = filemap_get_folio(swap_address_space(entry), swap_cache_index(entry));
> > -               if (IS_ERR(folio))
> > +               folio = swap_cache_get_folio(entry);
> > +               if (!folio)
> >                         continue;
> >
> >                 /*
> > diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> > index 50aaa8dcd24c..af61b95c89e4 100644
> > --- a/mm/userfaultfd.c
> > +++ b/mm/userfaultfd.c
> > @@ -1489,9 +1489,8 @@ static long move_pages_ptes(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd
> >                  * separately to allow proper handling.
> >                  */
> >                 if (!src_folio)
> > -                       folio = filemap_get_folio(swap_address_space(entry),
> > -                                       swap_cache_index(entry));
> > -               if (!IS_ERR_OR_NULL(folio)) {
> > +                       folio = swap_cache_get_folio(entry);
> > +               if (folio) {
> >                         if (folio_test_large(folio)) {
> >                                 ret = -EBUSY;
> >                                 folio_put(folio);
> > --
> > 2.51.0
> >


