Linux-mm Archive on lore.kernel.org
* [PATCH v2] fs/dax: check for empty/zero entries before calling pfn_to_page()
From: Souvik Banerjee @ 2026-05-11 21:40 UTC (permalink / raw)
  To: djbw
  Cc: david, willy, jack, apopple, linux-fsdevel, nvdimm, linux-mm,
	linux-kernel, stable, Souvik Banerjee

Commit 98c183a4fccf ("fs/dax: don't disassociate zero page entries")
added zero/empty-entry early returns to dax_associate_entry() and
dax_disassociate_entry(), but placed them *after* the
`struct folio *folio = dax_to_folio(entry);` line.  dax_to_folio()
expands to page_folio(pfn_to_page(dax_to_pfn(entry))), which calls
_compound_head() and performs READ_ONCE(page->compound_info) -- a real
dereference of the struct page pointer derived from a bogus PFN
extracted from the empty/zero XA value.
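
For reference, the front of that chain is roughly (simplified from
fs/dax.c; the low bits of the XA value are DAX flag bits, so for a
zero or empty entry the shifted-out "PFN" is meaningless):

  static unsigned long dax_to_pfn(void *entry)
  {
          return xa_to_value(entry) >> DAX_SHIFT;
  }

pfn_to_page() then maps that PFN to a struct page pointer, which
page_folio() dereferences.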

On systems where the vmemmap covers all of RAM, that dereference
reads garbage but is harmless: the early return then discards the
result.
On virtio-pmem with an altmap (vmemmap stored inside the device),
only the real device PFN range is mapped, so the dereference triggers
a kernel paging fault from the truncate/invalidate path and from the
PMD-downgrade branch of dax_iomap_pte_fault() when an entry is being
freed:

  Unable to handle kernel paging request at
  virtual address ffff_fdff_bf00_0008 (vmemmap region)
  Call trace:
   dax_disassociate_entry.isra.0+0x20/0x50
   dax_iomap_pte_fault
   dax_iomap_fault
   erofs_dax_fault
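
Under CONFIG_SPARSEMEM_VMEMMAP the PFN-to-page step is a bare array
index into the vmemmap, roughly:

  /* include/asm-generic/memory_model.h */
  #define __pfn_to_page(pfn)	(vmemmap + (pfn))

With an altmap the struct pages backing the device are carved out of
the device itself, so vmemmap mappings exist only for the device's
own PFN range and &vmemmap[bogus_pfn] falls into an unmapped hole.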

Close the residual gap by moving the dax_to_folio() call after the
zero/empty guard in both dax_associate_entry() and
dax_disassociate_entry().  Apply the same treatment to dax_busy_page(),
which has the identical pattern but was not touched by the prior fix.
dax_associate_entry() is reachable with a zero entry via
dax_insert_entry() -> dax_associate_entry(new_entry, ...), where
new_entry can carry DAX_ZERO_PAGE (built by dax_make_entry() in
dax_load_hole() / dax_pmd_load_hole()).  dax_disassociate_entry() and
dax_busy_page() additionally see DAX_EMPTY entries created by
grab_mapping_entry().
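
As a rough sketch of how those entries are built (simplified from
fs/dax.c; the exact signatures have seen some churn):

  static void *dax_make_entry(unsigned long pfn, unsigned long flags)
  {
          return xa_mk_value(flags | (pfn << DAX_SHIFT));
  }

dax_load_hole() passes the zero page's PFN with DAX_ZERO_PAGE, and
grab_mapping_entry() passes PFN 0 with DAX_EMPTY; in neither case do
the PFN bits refer to a DAX page, which is what the zero/empty guard
exists to catch.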

The remaining users of dax_to_folio() / dax_to_pfn() in fs/dax.c are
either guarded or only reachable on real-PFN entries, so this removes
the last instances of the anti-pattern.

Fixes: 98c183a4fccf ("fs/dax: don't disassociate zero page entries")
Fixes: 38607c62b34b ("fs/dax: properly refcount fs dax pages")
Cc: stable@vger.kernel.org # v6.15+
Cc: Alistair Popple <apopple@nvidia.com>
Suggested-by: David Hildenbrand <david@kernel.org>
Signed-off-by: Souvik Banerjee <souvik@amlalabs.com>
---
Changes in v2:
  - Also fix dax_associate_entry() (Suggested-by: David Hildenbrand,
    confirmed by Alistair Popple).  The same anti-pattern existed there:
    dax_to_folio(entry) ran before the zero/empty guard.  new_entry on
    that path can carry DAX_ZERO_PAGE via dax_load_hole() /
    dax_pmd_load_hole(), so the dereference reads a struct page derived
    from the zero-page PFN before the early return discards it.
  - Audited remaining dax_to_folio() / dax_to_pfn() call sites in fs/dax.c;
    no further instances of the pattern.
  - Updated the page_folio() expansion in the commit message to refer to
    the current field name (page->compound_info via _compound_head()).

v1: https://lore.kernel.org/all/20260501233933.2614302-1-souvik@amlalabs.com/

 fs/dax.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 6d175cd47a99..4bca6e2bc342 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -480,11 +480,12 @@ static void dax_associate_entry(void *entry, struct address_space *mapping,
 				unsigned long address, bool shared)
 {
 	unsigned long size = dax_entry_size(entry), index;
-	struct folio *folio = dax_to_folio(entry);
+	struct folio *folio;
 
 	if (dax_is_zero_entry(entry) || dax_is_empty_entry(entry))
 		return;
 
+	folio = dax_to_folio(entry);
 	index = linear_page_index(vma, address & ~(size - 1));
 	if (shared && (folio->mapping || dax_folio_is_shared(folio))) {
 		if (folio->mapping)
@@ -505,21 +506,23 @@ static void dax_associate_entry(void *entry, struct address_space *mapping,
 static void dax_disassociate_entry(void *entry, struct address_space *mapping,
 				bool trunc)
 {
-	struct folio *folio = dax_to_folio(entry);
+	struct folio *folio;
 
 	if (dax_is_zero_entry(entry) || dax_is_empty_entry(entry))
 		return;
 
+	folio = dax_to_folio(entry);
 	dax_folio_put(folio);
 }
 
 static struct page *dax_busy_page(void *entry)
 {
-	struct folio *folio = dax_to_folio(entry);
+	struct folio *folio;
 
 	if (dax_is_zero_entry(entry) || dax_is_empty_entry(entry))
 		return NULL;
 
+	folio = dax_to_folio(entry);
 	if (folio_ref_count(folio) - folio_mapcount(folio))
 		return &folio->page;
 	else
-- 
2.51.1




* Re: [PATCH v2] fs/dax: check for empty/zero entries before calling pfn_to_page()
From: Alistair Popple @ 2026-05-12  1:34 UTC (permalink / raw)
  To: Souvik Banerjee
  Cc: djbw, david, willy, jack, linux-fsdevel, nvdimm, linux-mm,
	linux-kernel, stable

On 2026-05-12 at 07:40 +1000, Souvik Banerjee <souvik@amlalabs.com> wrote...
> The remaining users of dax_to_folio() / dax_to_pfn() in fs/dax.c are
> either guarded or only reachable on real-PFN entries, so this removes
> the last instances of the anti-pattern.

I did that too when reviewing v1 and your conclusion matches mine. So looks good
to me:

Reviewed-by: Alistair Popple <apopple@nvidia.com>


* Re: [PATCH v2] fs/dax: check for empty/zero entries before calling pfn_to_page()
From: David Hildenbrand (Arm) @ 2026-05-12  6:48 UTC (permalink / raw)
  To: Souvik Banerjee, djbw
  Cc: willy, jack, apopple, linux-fsdevel, nvdimm, linux-mm,
	linux-kernel, stable

On 5/11/26 23:40, Souvik Banerjee wrote:
> Fixes: 98c183a4fccf ("fs/dax: don't disassociate zero page entries")
> Fixes: 38607c62b34b ("fs/dax: properly refcount fs dax pages")
> Cc: stable@vger.kernel.org # v6.15+
> Cc: Alistair Popple <apopple@nvidia.com>
> Suggested-by: David Hildenbrand <david@kernel.org>
> Signed-off-by: Souvik Banerjee <souvik@amlalabs.com>

Reviewed-by: David Hildenbrand (Arm) <david@kernel.org>

-- 
Cheers,

David



* Re: [PATCH v2] fs/dax: check for empty/zero entries before calling pfn_to_page()
From: Jan Kara @ 2026-05-12  7:45 UTC (permalink / raw)
  To: Souvik Banerjee
  Cc: djbw, david, willy, jack, apopple, linux-fsdevel, nvdimm,
	linux-mm, linux-kernel, stable

On Mon 11-05-26 21:40:20, Souvik Banerjee wrote:
> Fixes: 98c183a4fccf ("fs/dax: don't disassociate zero page entries")
> Fixes: 38607c62b34b ("fs/dax: properly refcount fs dax pages")
> Cc: stable@vger.kernel.org # v6.15+
> Cc: Alistair Popple <apopple@nvidia.com>
> Suggested-by: David Hildenbrand <david@kernel.org>
> Signed-off-by: Souvik Banerjee <souvik@amlalabs.com>

Looks good to me. Feel free to add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR



* Re: [PATCH v2] fs/dax: check for empty/zero entries before calling pfn_to_page()
From: Gupta, Pankaj @ 2026-05-12 12:49 UTC (permalink / raw)
  To: Souvik Banerjee, djbw
  Cc: david, willy, jack, apopple, linux-fsdevel, nvdimm, linux-mm,
	linux-kernel, stable

> Fixes: 98c183a4fccf ("fs/dax: don't disassociate zero page entries")
> Fixes: 38607c62b34b ("fs/dax: properly refcount fs dax pages")
> Cc: stable@vger.kernel.org # v6.15+
> Cc: Alistair Popple <apopple@nvidia.com>
> Suggested-by: David Hildenbrand <david@kernel.org>
> Signed-off-by: Souvik Banerjee <souvik@amlalabs.com>

Reviewed-by: Pankaj Gupta <pankaj.gupta@amd.com>

