From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: "David Hildenbrand (Arm)" <david@kernel.org>,
	akpm@linux-foundation.org, willy@infradead.org
Cc: lorenzo.stoakes@oracle.com, kas@kernel.org, p.raghav@samsung.com,
	mcgrof@kernel.org, dhowells@redhat.com, djwong@kernel.org,
	hare@suse.de, da.gomez@samsung.com, dchinner@redhat.com,
	brauner@kernel.org, xiangzao@linux.alibaba.com,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm: filemap: fix nr_pages calculation overflow in filemap_map_pages()
Date: Wed, 18 Mar 2026 09:17:04 +0800	[thread overview]
Message-ID: <dd67c771-8530-40d7-93c3-cd6f405c32dd@linux.alibaba.com> (raw)
In-Reply-To: <82019011-023c-4bce-a524-6eab119f0a4a@kernel.org>



On 3/18/26 4:49 AM, David Hildenbrand (Arm) wrote:
> On 3/17/26 10:37, David Hildenbrand (Arm) wrote:
>> On 3/17/26 10:29, Baolin Wang wrote:
>>> When running stress-ng on my Arm64 machine with the v7.0-rc3 kernel, I
>>> encountered some very strange crashes showing up as "Bad page state":
>>>
>>> "
>>> [  734.496287] BUG: Bad page state in process stress-ng-env  pfn:415735fb
>>> [  734.496427] page: refcount:0 mapcount:1 mapping:0000000000000000 index:0x4cf316 pfn:0x415735fb
>>> [  734.496434] flags: 0x57fffe000000800(owner_2|node=1|zone=2|lastcpupid=0x3ffff)
>>> [  734.496439] raw: 057fffe000000800 0000000000000000 dead000000000122 0000000000000000
>>> [  734.496440] raw: 00000000004cf316 0000000000000000 0000000000000000 0000000000000000
>>> [  734.496442] page dumped because: nonzero mapcount
>>> "
>>>
>>> Looking at this page's state, it is hard to explain why the mapcount
>>> is nonzero while the refcount is 0; this page is not where the issue
>>> first occurred. After enabling CONFIG_DEBUG_VM, I could reproduce the
>>> crash and captured the first warning at the point where the issue
>>> appears:
>>>
>>> "
>>> [  734.469226] page: refcount:33 mapcount:0 mapping:00000000bef2d187 index:0x81a0 pfn:0x415735c0
>>> [  734.469304] head: order:5 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
>>> [  734.469315] memcg:ffff000807a8ec00
>>> [  734.469320] aops:ext4_da_aops ino:100b6f dentry name(?):"stress-ng-mmaptorture-9397-0-2736200540"
>>> [  734.469335] flags: 0x57fffe400000069(locked|uptodate|lru|head|node=1|zone=2|lastcpupid=0x3ffff)
>>> ......
>>> [  734.469364] page dumped because: VM_WARN_ON_FOLIO((_Generic((page + nr_pages - 1),
>>> const struct page *: (const struct folio *)_compound_head(page + nr_pages - 1), struct page *:
>>> (struct folio *)_compound_head(page + nr_pages - 1))) != folio)
>>> [  734.469390] ------------[ cut here ]------------
>>> [  734.469393] WARNING: ./include/linux/rmap.h:351 at folio_add_file_rmap_ptes+0x3b8/0x468,
>>> CPU#90: stress-ng-mlock/9430
>>> [  734.469551]  folio_add_file_rmap_ptes+0x3b8/0x468 (P)
>>> [  734.469555]  set_pte_range+0xd8/0x2f8
>>> [  734.469566]  filemap_map_folio_range+0x190/0x400
>>> [  734.469579]  filemap_map_pages+0x348/0x638
>>> [  734.469583]  do_fault_around+0x140/0x198
>>> ......
>>> [  734.469640]  el0t_64_sync+0x184/0x188
>>> "
>>>
>>> The code that triggers the warning is: "VM_WARN_ON_FOLIO(page_folio(page + nr_pages - 1) != folio, folio)",
>>> which indicates that set_pte_range() tried to map beyond the large folio’s
>>> size.
>>>
>>> By adding more debug information, I found that 'nr_pages' had overflowed in
>>> filemap_map_pages(), causing set_pte_range() to establish mappings for a range
>>> exceeding the folio size, potentially corrupting fields of pages that do not
>>> belong to this folio (e.g., page->_mapcount).
>>>
>>> Based on the above analysis, I think the possible race is as follows:
>>>
>>> CPU 0                                                  CPU 1
>>> filemap_map_pages()                                   ext4_setattr()
>>>     //get and lock folio with old inode->i_size
>>>     next_uptodate_folio()
>>>
>>>                                                            .......
>>>                                                            //shrink the inode->i_size
>>>                                                            i_size_write(inode, attr->ia_size);
>>>
>>>     //calculate the end_pgoff with the new inode->i_size
>>>     file_end = DIV_ROUND_UP(i_size_read(mapping->host), PAGE_SIZE) - 1;
>>>     end_pgoff = min(end_pgoff, file_end);
>>>
>>>     ......
>>>     //nr_pages can overflow, because xas.xa_index > end_pgoff
>>>     end = folio_next_index(folio) - 1;
>>>     nr_pages = min(end, end_pgoff) - xas.xa_index + 1;
>>>
>>>     ......
>>>     //map large folio
>>>     filemap_map_folio_range()
>>>                                                            ......
>>>                                                            //truncate folios
>>>                                                            truncate_pagecache(inode, inode->i_size);
>>>
>>> To fix this issue, move the 'end_pgoff' calculation before next_uptodate_folio(),
>>> so that the retrieved folio stays consistent with the file end and the
>>> 'nr_pages' calculation cannot overflow. With this patch applied, the
>>> crash is gone.
>>>
>>
>> Thanks!
>>
>> Acked-by: David Hildenbrand (Arm) <david@kernel.org>

Thanks for reviewing.
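
To make the overflow concrete: pgoff_t is unsigned, so once the racing
truncate pushes end_pgoff below xas.xa_index, the subtraction wraps
around instead of going negative. A minimal userspace sketch of the
arithmetic; the names mirror filemap_map_pages() and the values are
made up:

#include <stdio.h>

typedef unsigned long pgoff_t;	/* pgoff_t is unsigned long in the kernel */

int main(void)
{
	pgoff_t xa_index = 0x8100;	/* index being mapped */
	pgoff_t end = 0x81ff;		/* folio_next_index(folio) - 1 */
	pgoff_t end_pgoff = 0x8000;	/* clamped against the new, smaller i_size */

	/* nr_pages = min(end, end_pgoff) - xas.xa_index + 1 */
	pgoff_t min_end = end < end_pgoff ? end : end_pgoff;
	pgoff_t nr_pages = min_end - xa_index + 1;	/* wraps to a huge value */

	printf("nr_pages = %lu\n", nr_pages);	/* 18446744073709551361 on 64-bit */
	return 0;
}

set_pte_range() then walks far past the folio, which matches the
corrupted _mapcount seen in the "Bad page state" dump.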

> I just skimmed over the AI review:
> 
> https://sashiko.dev/#/patchset/1cf1ac59018fc647a87b0dad605d4056a71c14e4.1773739704.git.baolin.wang%40linux.alibaba.com

Thanks. Zi Yan also sent me the AI-generated comments, and I don't think
this is a real issue.

> And I'm not sure if it has a point, in particular whether
> i_size_read(mapping->host) could return 0 and underflow file_end.
> 
> I'd assume that in that case (truncation succeeded),
> next_uptodate_folio() would also fail.

Yes. If the truncation has completed, next_uptodate_folio() cannot find
a folio. And even if it runs before the truncation, next_uptodate_folio()
checks i_size_read(mapping->host) and returns NULL:

max_idx = DIV_ROUND_UP(i_size_read(mapping->host), PAGE_SIZE);
if (xas->xa_index >= max_idx)
	goto unlock;
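
Even in the worst case where i_size_read() returns 0, file_end in
filemap_map_pages() does underflow (hypothetical walk-through; pgoff_t
is unsigned):

	file_end = DIV_ROUND_UP(0, PAGE_SIZE) - 1;	/* (pgoff_t)-1, i.e. ULONG_MAX */
	end_pgoff = min(end_pgoff, file_end);		/* so end_pgoff is not clamped */

but with i_size == 0 the check above computes max_idx == 0, and
xas->xa_index >= max_idx always holds for an unsigned index, so
next_uptodate_folio() bails out before the unclamped end_pgoff is
ever used.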

So I don't think this will cause a real issue if 
i_size_read(mapping->host) returns 0.
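
For completeness, the reordering described in the patch looks roughly
like this (a sketch of the idea against filemap_map_pages(), not the
exact diff):

	/* Read i_size and clamp end_pgoff before looking up the folio ... */
	file_end = DIV_ROUND_UP(i_size_read(mapping->host), PAGE_SIZE) - 1;
	end_pgoff = min(end_pgoff, file_end);

	/*
	 * ... so the lookup is bounded by the already-clamped end_pgoff.
	 * Any folio returned satisfies xas.xa_index <= end_pgoff, and
	 * min(end, end_pgoff) - xas.xa_index + 1 can no longer wrap.
	 */
	folio = next_uptodate_folio(&xas, mapping, end_pgoff);
	if (!folio)
		goto out;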
