From: "David Hildenbrand (Arm)" <david@kernel.org>
To: Dev Jain <dev.jain@arm.com>,
akpm@linux-foundation.org, ljs@kernel.org, hughd@google.com,
chrisl@kernel.org, kasong@tencent.com
Cc: riel@surriel.com, liam@infradead.org, vbabka@kernel.org,
harry@kernel.org, jannh@google.com, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, qi.zheng@linux.dev,
shakeel.butt@linux.dev, baohua@kernel.org,
axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com,
rppt@kernel.org, surenb@google.com, mhocko@suse.com,
baolin.wang@linux.alibaba.com, shikemeng@huaweicloud.com,
nphamcs@gmail.com, bhe@redhat.com, youngjun.park@lge.com,
pfalcato@suse.de, ryan.roberts@arm.com,
anshuman.khandual@arm.com
Subject: Re: [PATCH v3 7/9] mm/swapfile: Add batched version of folio_put_swap
Date: Mon, 11 May 2026 10:07:13 +0200 [thread overview]
Message-ID: <37a2531b-7b27-4574-838e-558ba310261b@kernel.org> (raw)
In-Reply-To: <20260506094504.2588857-8-dev.jain@arm.com>
On 5/6/26 11:45, Dev Jain wrote:
> Add folio_put_swap_pages to handle a batch of consecutive pages. Note
> that folio_put_swap can already handle a subset of this: nr_pages == 1 and
> nr_pages == folio_nr_pages(folio). Generalize this to any nr_pages.
>
> Currently we have a not-so-nice convention of passing in subpage == NULL
> when we mean to exercise the logic on the entire folio, and subpage != NULL
> when we want to exercise the logic on only that subpage. Remove this
> indirection: the caller invokes folio_put_swap_pages() if it wants to
> operate on a range of pages in the folio (i.e. nr_pages may be anything
> from 1 to folio_nr_pages()), and invokes folio_put_swap() if it
> wants to operate on the entire folio.
>
> Signed-off-by: Dev Jain <dev.jain@arm.com>
> ---
[...]
>
> -static inline void folio_put_swap(struct folio *folio, struct page *page)
> +static inline void folio_put_swap(struct folio *folio)
> +{
> +}
Same comment regarding common function.
> +
> +static inline void folio_put_swap_pages(struct folio *folio, struct page *page,
> + unsigned long nr_pages)
> {
> }
>
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 28daf92839e77..ac576cc63b194 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1778,31 +1778,34 @@ int folio_dup_swap(struct folio *folio)
> }
>
> /**
> - * folio_put_swap() - Decrease swap count of swap entries of a folio.
> + * folio_put_swap_pages() - Decrease swap count of swap entries of a folio.
> * @folio: folio with swap entries bounded, must be in swap cache and locked.
> - * @subpage: if not NULL, only decrease the swap count of this subpage.
> + * @page: the first page in the folio to decrease the swap count for.
> + * @nr_pages: the number of pages in the folio to decrease the swap count for.
> *
> * This won't free the swap slots even if swap count drops to zero, they are
> * still pinned by the swap cache. User may call folio_free_swap to free them.
> * Context: Caller must ensure the folio is locked and in the swap cache.
> */
> -void folio_put_swap(struct folio *folio, struct page *subpage)
> +void folio_put_swap_pages(struct folio *folio, struct page *page,
> + unsigned long nr_pages)
> {
> swp_entry_t entry = folio->swap;
> - unsigned long nr_pages = folio_nr_pages(folio);
> struct swap_info_struct *si = __swap_entry_to_info(entry);
>
> VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
> VM_WARN_ON_FOLIO(!folio_test_swapcache(folio), folio);
>
> - if (subpage) {
> - entry.val += folio_page_idx(folio, subpage);
> - nr_pages = 1;
> - }
> + entry.val += folio_page_idx(folio, page);
>
> swap_put_entries_cluster(si, swp_offset(entry), nr_pages, false);
> }
>
> +void folio_put_swap(struct folio *folio)
> +{
> + folio_put_swap_pages(folio, folio_page(folio, 0), folio_nr_pages(folio));
> +}
Same comment regarding kerneldoc :)
... and same comment that this definitely looks nicer.
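For anyone skimming the thread, the shape of the change is easy to see in a tiny userspace sketch (names and the refcount array are purely illustrative, not the kernel API): the range helper takes an explicit start and count, and the whole-folio helper becomes a thin wrapper, exactly like folio_put_swap() calling folio_put_swap_pages(folio, folio_page(folio, 0), folio_nr_pages(folio)) above.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for a 4-page folio with per-page swap counts. */
#define NR_PAGES 4

struct fake_folio {
	int swap_count[NR_PAGES];
};

/* Range helper: drop the count for pages [start, start + nr). This is
 * the folio_put_swap_pages() shape: no NULL-subpage special case. */
static void put_swap_pages(struct fake_folio *f, size_t start, size_t nr)
{
	size_t i;

	for (i = start; i < start + nr; i++)
		f->swap_count[i]--;
}

/* Whole-folio wrapper, mirroring the new folio_put_swap(). */
static void put_swap(struct fake_folio *f)
{
	put_swap_pages(f, 0, NR_PAGES);
}
```

The old interface encoded "whole folio" as subpage == NULL inside one function; here the two cases are two entry points and the range version is the only one with logic.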
--
Cheers,
David