From: Dev Jain <dev.jain@arm.com>
To: "David Hildenbrand (Arm)" <david@kernel.org>,
"Lorenzo Stoakes (Oracle)" <ljs@kernel.org>
Cc: akpm@linux-foundation.org, axelrasmussen@google.com,
yuanchu@google.com, hughd@google.com, chrisl@kernel.org,
kasong@tencent.com, weixugc@google.com, Liam.Howlett@oracle.com,
vbabka@kernel.org, rppt@kernel.org, surenb@google.com,
mhocko@suse.com, riel@surriel.com, harry.yoo@oracle.com,
jannh@google.com, pfalcato@suse.de,
baolin.wang@linux.alibaba.com, shikemeng@huaweicloud.com,
nphamcs@gmail.com, bhe@redhat.com, baohua@kernel.org,
youngjun.park@lge.com, ziy@nvidia.com, kas@kernel.org,
willy@infradead.org, yuzhao@google.com, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, ryan.roberts@arm.com,
anshuman.khandual@arm.com
Subject: Re: [PATCH 1/9] mm/rmap: make nr_pages signed in try_to_unmap_one
Date: Tue, 10 Mar 2026 13:53:21 +0530
Message-ID: <04a7465b-ee69-479f-aaa4-ccdf3ae16805@arm.com>
In-Reply-To: <31f93292-3de6-475c-b7fe-82ef41a3a7de@kernel.org>
On 10/03/26 1:36 pm, David Hildenbrand (Arm) wrote:
> On 3/10/26 08:56, Lorenzo Stoakes (Oracle) wrote:
>> On Tue, Mar 10, 2026 at 01:00:05PM +0530, Dev Jain wrote:
>>> Currently, nr_pages is defined as unsigned long. We use nr_pages to
>>> manipulate mm rss counters for lazyfree folios as follows:
>>>
>>> add_mm_counter(mm, MM_ANONPAGES, -nr_pages);
>>>
>>> Suppose nr_pages = 3. -nr_pages wraps around and becomes ULONG_MAX - 2.
>>> Then, since add_mm_counter() takes the value as a long, ULONG_MAX - 2 does
>>> not fit into the positive range of long and is converted back to -3.
>>> Eventually all of this works out, but to keep things simple, declare
>>> nr_pages as a signed variable.
>>>
>>> Signed-off-by: Dev Jain <dev.jain@arm.com>
>>> ---
>>> mm/rmap.c | 3 ++-
>>> 1 file changed, 2 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/mm/rmap.c b/mm/rmap.c
>>> index 6398d7eef393f..087c9f5b884fe 100644
>>> --- a/mm/rmap.c
>>> +++ b/mm/rmap.c
>>> @@ -1979,9 +1979,10 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>>> struct page *subpage;
>>> struct mmu_notifier_range range;
>>> enum ttu_flags flags = (enum ttu_flags)(long)arg;
>>> - unsigned long nr_pages = 1, end_addr;
>>> + unsigned long end_addr;
>>> unsigned long pfn;
>>> unsigned long hsz = 0;
>>> + long nr_pages = 1;
>>
>> This is a non-issue that makes the code confusing, so let's not?
>>
>> The convention throughout the kernel is that nr_pages is generally
>> unsigned, because you can't have a negative number of pages.
>
> Indeed. Documented in:
>
> commit fa17bcd5f65ed702df001579cca8c885fa6bf3e7
> Author: Aristeu Rozanski <aris@ruivo.org>
> Date: Tue Aug 26 11:37:21 2025 -0400
>
> mm: make folio page count functions return unsigned
>
> As raised by Andrew [1], a folio/compound page never spans a negative
> number of pages. Consequently, let's use "unsigned long" instead of
> "long" consistently for folio_nr_pages(), folio_large_nr_pages() and
> compound_nr().
>
> Using "unsigned long" as return value is fine, because even
> "(long)-folio_nr_pages()" will keep on working as expected. Using
> "unsigned int" instead would actually break these use cases.
>
> This patch takes the first step changing these to return unsigned long
> (and making drm_gem_get_pages() use the new types instead of replacing
> min()).
>
> In the future, we might want to make more callers of these functions to
> consistently use "unsigned long".
So when I was playing around with the code, I noticed that passing an
unsigned int nr_pages to add_mm_counter() as -nr_pages messes things up.
Then I noticed we have an unsigned long here, which prevents that. This
was quite non-trivial information for me, especially since, searching
around the codebase, I found this is the only place where we pass a
negated unsigned long.

But thanks for pointing out this commit. If it is a well-known fact that
(long)-folio_nr_pages() works correctly, then we can drop this patch.
Thread overview: 47+ messages
2026-03-10 7:30 [PATCH 0/9] mm/rmap: Optimize anonymous large folio unmapping Dev Jain
2026-03-10 7:30 ` [PATCH 1/9] mm/rmap: make nr_pages signed in try_to_unmap_one Dev Jain
2026-03-10 7:56 ` Lorenzo Stoakes (Oracle)
2026-03-10 8:06 ` David Hildenbrand (Arm)
2026-03-10 8:23 ` Dev Jain [this message]
2026-03-10 12:40 ` Matthew Wilcox
2026-03-11 4:54 ` Dev Jain
2026-03-10 7:30 ` [PATCH 2/9] mm/rmap: initialize nr_pages to 1 at loop start " Dev Jain
2026-03-10 8:10 ` Lorenzo Stoakes (Oracle)
2026-03-10 8:31 ` Dev Jain
2026-03-10 8:39 ` Lorenzo Stoakes (Oracle)
2026-03-10 8:43 ` Dev Jain
2026-03-10 7:30 ` [PATCH 3/9] mm/rmap: refactor lazyfree unmap commit path to commit_ttu_lazyfree_folio() Dev Jain
2026-03-10 8:19 ` Lorenzo Stoakes (Oracle)
2026-03-10 8:42 ` Dev Jain
2026-03-19 15:53 ` Lorenzo Stoakes (Oracle)
2026-03-10 7:30 ` [PATCH 4/9] mm/memory: Batch set uffd-wp markers during zapping Dev Jain
2026-03-10 7:30 ` [PATCH 5/9] mm/rmap: batch unmap folios belonging to uffd-wp VMAs Dev Jain
2026-03-10 8:34 ` Lorenzo Stoakes (Oracle)
2026-03-10 23:32 ` Barry Song
2026-03-11 4:14 ` Barry Song
2026-03-11 4:52 ` Dev Jain
2026-03-11 4:56 ` Dev Jain
2026-03-10 7:30 ` [PATCH 6/9] mm/swapfile: Make folio_dup_swap batchable Dev Jain
2026-03-10 8:27 ` Kairui Song
2026-03-10 8:46 ` Dev Jain
2026-03-10 8:49 ` Lorenzo Stoakes (Oracle)
2026-03-11 5:42 ` Dev Jain
2026-03-19 15:26 ` Lorenzo Stoakes (Oracle)
2026-03-19 16:47 ` Matthew Wilcox
2026-03-18 0:20 ` kernel test robot
2026-03-10 7:30 ` [PATCH 7/9] mm/swapfile: Make folio_put_swap batchable Dev Jain
2026-03-10 8:29 ` Kairui Song
2026-03-10 8:50 ` Dev Jain
2026-03-10 8:55 ` Lorenzo Stoakes (Oracle)
2026-03-18 1:04 ` kernel test robot
2026-03-10 7:30 ` [PATCH 8/9] mm/rmap: introduce folio_try_share_anon_rmap_ptes Dev Jain
2026-03-10 9:38 ` Lorenzo Stoakes (Oracle)
2026-03-11 8:09 ` Dev Jain
2026-03-12 8:19 ` Wei Yang
2026-03-19 15:47 ` Lorenzo Stoakes (Oracle)
2026-04-08 7:14 ` Dev Jain
2026-03-10 7:30 ` [PATCH 9/9] mm/rmap: enable batch unmapping of anonymous folios Dev Jain
2026-03-10 8:02 ` [PATCH 0/9] mm/rmap: Optimize anonymous large folio unmapping Lorenzo Stoakes (Oracle)
2026-03-10 9:28 ` Dev Jain
2026-03-10 12:59 ` Lance Yang
2026-03-11 8:11 ` Dev Jain