From: "David Hildenbrand (Arm)" <david@kernel.org>
To: Dev Jain <dev.jain@arm.com>,
	akpm@linux-foundation.org, ljs@kernel.org, hughd@google.com,
	chrisl@kernel.org, kasong@tencent.com
Cc: riel@surriel.com, liam@infradead.org, vbabka@kernel.org,
	harry@kernel.org, jannh@google.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, qi.zheng@linux.dev,
	shakeel.butt@linux.dev, baohua@kernel.org,
	axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com,
	rppt@kernel.org, surenb@google.com, mhocko@suse.com,
	baolin.wang@linux.alibaba.com, shikemeng@huaweicloud.com,
	nphamcs@gmail.com, bhe@redhat.com, youngjun.park@lge.com,
	pfalcato@suse.de, ryan.roberts@arm.com,
	anshuman.khandual@arm.com
Subject: Re: [PATCH v3 6/9] mm/swapfile: Add batched version of folio_dup_swap
Date: Mon, 11 May 2026 09:45:01 +0200	[thread overview]
Message-ID: <871d9a91-2d14-4dbb-934d-b52c12dae7cd@kernel.org> (raw)
In-Reply-To: <20260506094504.2588857-7-dev.jain@arm.com>

On 5/6/26 11:45, Dev Jain wrote:
> Add folio_dup_swap_pages() to handle a batch of consecutive pages. Note
> that folio_dup_swap() can already handle two special cases of this:
> nr_pages == 1 and nr_pages == folio_nr_pages(folio). Generalize this to
> any nr_pages.
> 
> Currently we have the not-so-nice convention of passing subpage == NULL
> when we mean to operate on the entire folio, and subpage != NULL when we
> want to operate on only that subpage. Remove this indirection: the caller
> invokes folio_dup_swap_pages() to operate on a range of pages in the
> folio (i.e. nr_pages may be anything from 1 to folio_nr_pages()), and
> invokes folio_dup_swap() to operate on the entire folio.
> 
> Signed-off-by: Dev Jain <dev.jain@arm.com>
> ---
>  mm/rmap.c     |  2 +-
>  mm/shmem.c    |  2 +-
>  mm/swap.h     | 12 ++++++++++--
>  mm/swapfile.c | 20 ++++++++++++--------
>  4 files changed, 24 insertions(+), 12 deletions(-)
> 
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 25813e3605991..352ba77d90f67 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -2314,7 +2314,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>  				goto finish_unmap;
>  			}
>  
> -			if (folio_dup_swap(folio, subpage) < 0) {
> +			if (folio_dup_swap_pages(folio, subpage, 1) < 0) {

Can you throw in a patch to rename subpage -> page first?

>  				set_pte_at(mm, address, pvmw.pte, pteval);
>  				goto walk_abort;
>  			}
> diff --git a/mm/shmem.c b/mm/shmem.c
> index bab3529af23c5..5e4f521399847 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1698,7 +1698,7 @@ int shmem_writeout(struct folio *folio, struct swap_iocb **plug,
>  			spin_unlock(&shmem_swaplist_lock);
>  		}
>  
> -		folio_dup_swap(folio, NULL);
> +		folio_dup_swap(folio);
>  		shmem_delete_from_page_cache(folio, swp_to_radix_entry(folio->swap));
>  
>  		BUG_ON(folio_mapped(folio));
> diff --git a/mm/swap.h b/mm/swap.h
> index a77016f2423b9..3c25f914e908b 100644
> --- a/mm/swap.h
> +++ b/mm/swap.h
> @@ -206,7 +206,9 @@ extern int swap_retry_table_alloc(swp_entry_t entry, gfp_t gfp);
>   * folio_put_swap(): does the opposite thing of folio_dup_swap().
>   */
>  int folio_alloc_swap(struct folio *folio);
> -int folio_dup_swap(struct folio *folio, struct page *subpage);
> +int folio_dup_swap(struct folio *folio);
> +int folio_dup_swap_pages(struct folio *folio, struct page *page,
> +			 unsigned long nr_pages);
>  void folio_put_swap(struct folio *folio, struct page *subpage);
>  
>  /* For internal use */
> @@ -390,7 +392,13 @@ static inline int folio_alloc_swap(struct folio *folio)
>  	return -EINVAL;
>  }
>  
> -static inline int folio_dup_swap(struct folio *folio, struct page *page)
> +static inline int folio_dup_swap(struct folio *folio)
> +{
> +	return -EINVAL;
> +}
> +

Much cleaner.

Can we instead have a common folio_dup_swap() function that does what the
variant below does, i.e. calls folio_dup_swap_pages()?
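Something like this (untested sketch, just to illustrate what I mean):

```c
/* One shared definition, usable for both CONFIG_SWAP=y and =n. */
static inline int folio_dup_swap(struct folio *folio)
{
	/* Forward to the batched variant, covering the whole folio. */
	return folio_dup_swap_pages(folio, folio_page(folio, 0),
				    folio_nr_pages(folio));
}
```

Then only folio_dup_swap_pages() needs a CONFIG_SWAP=n stub returning
-EINVAL, and we avoid duplicating the wrapper.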

> +static inline int folio_dup_swap_pages(struct folio *folio, struct page *page,
> +		unsigned long nr_pages)
>  {
>  	return -EINVAL;
>  }
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index c7e173b93e11d..28daf92839e77 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1740,9 +1740,10 @@ int folio_alloc_swap(struct folio *folio)
>  }
>  
>  /**
> - * folio_dup_swap() - Increase swap count of swap entries of a folio.
> + * folio_dup_swap_pages() - Increase swap count of swap entries of a folio.
>   * @folio: folio with swap entries bounded.
> - * @subpage: if not NULL, only increase the swap count of this subpage.
> + * @page: the first page in the folio to increase the swap count for.
> + * @nr_pages: the number of pages in the folio to increase the swap count for.
>   *
>   * Typically called when the folio is unmapped and have its swap entry to
>   * take its place: Swap entries allocated to a folio has count == 0 and pinned
> @@ -1756,23 +1757,26 @@ int folio_alloc_swap(struct folio *folio)
>   * swap_put_entries_direct on its swap entry before this helper returns, or
>   * the swap count may underflow.
>   */
> -int folio_dup_swap(struct folio *folio, struct page *subpage)
> +int folio_dup_swap_pages(struct folio *folio, struct page *page,
> +		unsigned long nr_pages)
>  {
>  	swp_entry_t entry = folio->swap;
> -	unsigned long nr_pages = folio_nr_pages(folio);
>  
>  	VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
>  	VM_WARN_ON_FOLIO(!folio_test_swapcache(folio), folio);
>  
> -	if (subpage) {
> -		entry.val += folio_page_idx(folio, subpage);
> -		nr_pages = 1;
> -	}
> +	entry.val += folio_page_idx(folio, page);
>  
>  	return swap_dup_entries_cluster(swap_entry_to_info(entry),
>  					swp_offset(entry), nr_pages);
>  }
>  
> +int folio_dup_swap(struct folio *folio)
> +{
> +	return folio_dup_swap_pages(folio, folio_page(folio, 0),
> +				    folio_nr_pages(folio));
> +}

Can you add a simple kerneldoc comment for folio_dup_swap() as well, mostly
just linking to folio_dup_swap_pages()?
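Something along these lines (wording is just a suggestion):

```c
/**
 * folio_dup_swap() - Increase the swap count of all swap entries of a folio.
 * @folio: folio with swap entries bound.
 *
 * Equivalent to calling folio_dup_swap_pages() on all pages of the folio;
 * see folio_dup_swap_pages() for details and the locking requirements.
 */
```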

-- 
Cheers,

David


