From: Alistair Popple <apopple@nvidia.com>
To: Souvik Banerjee <souvik@amlalabs.com>
Cc: djbw@kernel.org, david@kernel.org, willy@infradead.org,
jack@suse.cz, linux-fsdevel@vger.kernel.org,
nvdimm@lists.linux.dev, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, stable@vger.kernel.org
Subject: Re: [PATCH v2] fs/dax: check for empty/zero entries before calling pfn_to_page()
Date: Tue, 12 May 2026 11:34:34 +1000 [thread overview]
Message-ID: <agKC2kIxNWL_ObLA@nvdebian.thelocal> (raw)
In-Reply-To: <20260511214020.208939-1-souvik@amlalabs.com>
On 2026-05-12 at 07:40 +1000, Souvik Banerjee <souvik@amlalabs.com> wrote...
> Commit 98c183a4fccf ("fs/dax: don't disassociate zero page entries")
> added zero/empty-entry early returns to dax_associate_entry() and
> dax_disassociate_entry(), but placed them *after* the
> `struct folio *folio = dax_to_folio(entry);` line. dax_to_folio()
> expands to page_folio(pfn_to_page(dax_to_pfn(entry))), which calls
> _compound_head() and performs READ_ONCE(page->compound_info) -- a real
> dereference of the struct page pointer derived from a bogus PFN
> extracted from the empty/zero XA value.
>
> On systems where vmemmap covers all of RAM, that dereference reads
> garbage but is harmless: the early return then discards the result.
> On virtio-pmem with altmap (vmemmap stored inside the device), only
> the real device PFN range is mapped, so the dereference triggers a
> kernel paging fault from the truncate / invalidate path and from the
> PMD-downgrade branch of dax_iomap_pte_fault when an entry is being
> freed:
>
> Unable to handle kernel paging request at
> virtual address ffff_fdff_bf00_0008 (vmemmap region)
> Call trace:
> dax_disassociate_entry.isra.0+0x20/0x50
> dax_iomap_pte_fault
> dax_iomap_fault
> erofs_dax_fault
>
> Close the residual gap by moving the dax_to_folio() call after the
> zero/empty guard in both dax_associate_entry() and
> dax_disassociate_entry(). Apply the same treatment to dax_busy_page(),
> which has the identical pattern but was not touched by the prior fix.
> dax_associate_entry() is reachable with a zero entry via
> dax_insert_entry() -> dax_associate_entry(new_entry, ...), where
> new_entry can carry DAX_ZERO_PAGE (built by dax_make_entry() in
> dax_load_hole() / dax_pmd_load_hole()). dax_disassociate_entry() and
> dax_busy_page() additionally see DAX_EMPTY entries created by
> grab_mapping_entry().
>
> The remaining users of dax_to_folio() / dax_to_pfn() in fs/dax.c are
> either guarded or only reachable on real-PFN entries, so this exhausts
> the anti-pattern.
I did that audit too when reviewing v1, and my conclusion matches yours. So this
looks good to me:
Reviewed-by: Alistair Popple <apopple@nvidia.com>
> Fixes: 98c183a4fccf ("fs/dax: don't disassociate zero page entries")
> Fixes: 38607c62b34b ("fs/dax: properly refcount fs dax pages")
> Cc: stable@vger.kernel.org # v6.15+
> Cc: Alistair Popple <apopple@nvidia.com>
> Suggested-by: David Hildenbrand <david@kernel.org>
> Signed-off-by: Souvik Banerjee <souvik@amlalabs.com>
> ---
> Changes in v2:
> - Also fix dax_associate_entry() (Suggested-by: David Hildenbrand,
> confirmed by Alistair Popple). The same anti-pattern existed there:
> dax_to_folio(entry) ran before the zero/empty guard. new_entry on
> that path can carry DAX_ZERO_PAGE via dax_load_hole() /
> dax_pmd_load_hole(), so the dereference reads a struct page derived
> from the zero-page PFN before the early return discards it.
> - Audited remaining dax_to_folio() / dax_to_pfn() call sites in fs/dax.c;
> no further instances of the pattern.
> - Updated the page_folio() expansion in the commit message to refer to
> the current field name (page->compound_info via _compound_head()).
>
> v1: https://lore.kernel.org/all/20260501233933.2614302-1-souvik@amlalabs.com/
>
> fs/dax.c | 9 ++++++---
> 1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/fs/dax.c b/fs/dax.c
> index 6d175cd47a99..4bca6e2bc342 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -480,11 +480,12 @@ static void dax_associate_entry(void *entry, struct address_space *mapping,
> unsigned long address, bool shared)
> {
> unsigned long size = dax_entry_size(entry), index;
> - struct folio *folio = dax_to_folio(entry);
> + struct folio *folio;
>
> if (dax_is_zero_entry(entry) || dax_is_empty_entry(entry))
> return;
>
> + folio = dax_to_folio(entry);
> index = linear_page_index(vma, address & ~(size - 1));
> if (shared && (folio->mapping || dax_folio_is_shared(folio))) {
> if (folio->mapping)
> @@ -505,21 +506,23 @@ static void dax_associate_entry(void *entry, struct address_space *mapping,
> static void dax_disassociate_entry(void *entry, struct address_space *mapping,
> bool trunc)
> {
> - struct folio *folio = dax_to_folio(entry);
> + struct folio *folio;
>
> if (dax_is_zero_entry(entry) || dax_is_empty_entry(entry))
> return;
>
> + folio = dax_to_folio(entry);
> dax_folio_put(folio);
> }
>
> static struct page *dax_busy_page(void *entry)
> {
> - struct folio *folio = dax_to_folio(entry);
> + struct folio *folio;
>
> if (dax_is_zero_entry(entry) || dax_is_empty_entry(entry))
> return NULL;
>
> + folio = dax_to_folio(entry);
> if (folio_ref_count(folio) - folio_mapcount(folio))
> return &folio->page;
> else
> --
> 2.51.1
>
Thread overview: 5+ messages
2026-05-11 21:40 [PATCH v2] fs/dax: check for empty/zero entries before calling pfn_to_page() Souvik Banerjee
2026-05-12 1:34 ` Alistair Popple [this message]
2026-05-12 6:48 ` David Hildenbrand (Arm)
2026-05-12 7:45 ` Jan Kara
2026-05-12 12:49 ` Gupta, Pankaj