From: Leon Romanovsky <leon@kernel.org>
To: Pranjal Shrivastava <praan@google.com>
Cc: Ashish Mhetre <amhetre@nvidia.com>,
	robin.murphy@arm.com, joro@8bytes.org, will@kernel.org,
	iommu@lists.linux.dev, linux-kernel@vger.kernel.org,
	linux-tegra@vger.kernel.org, linux-mm@kvack.org,
	Christoph Hellwig <hch@lst.de>,
	Matthew Wilcox <willy@infradead.org>
Subject: Re: [PATCH RFC] iommu/dma: Validate page before accessing P2PDMA state
Date: Wed, 25 Feb 2026 09:50:00 +0200	[thread overview]
Message-ID: <20260225075000.GA9541@unreal> (raw)
In-Reply-To: <aZ4Q1HA9q1ojsVYY@google.com>

On Tue, Feb 24, 2026 at 08:57:56PM +0000, Pranjal Shrivastava wrote:
> On Tue, Feb 24, 2026 at 02:32:21PM +0200, Leon Romanovsky wrote:
> > On Tue, Feb 24, 2026 at 10:42:57AM +0000, Ashish Mhetre wrote:
> > > When mapping scatter-gather entries that reference reserved
> > > memory regions without struct page backing (e.g., bootloader created
> > > carveouts), is_pci_p2pdma_page() dereferences the page pointer
> > > returned by sg_page() without first verifying its validity.
> > 
> > I believe this behavior started after commit 88df6ab2f34b
> > ("mm: add folio_is_pci_p2pdma()"). Prior to that change, the
> > is_zone_device_page(page) check would return false when given a
> > non-existent page pointer.
> > 
> 
> Doesn't folio_is_pci_p2pdma() also check for zone device?
> I see[1] that it does:
> 
> static inline bool folio_is_pci_p2pdma(const struct folio *folio)
> {
> 	return IS_ENABLED(CONFIG_PCI_P2PDMA) &&
> 		folio_is_zone_device(folio) &&
> 		folio->pgmap->type == MEMORY_DEVICE_PCI_P2PDMA;
> }
> 
> I believe the problem arises due to the page_folio() call in
> folio_is_pci_p2pdma(page_folio(page)); within is_pci_p2pdma_page().
> page_folio() assumes it has a valid struct page to work with. For these
> carveouts, that isn't true.

Yes, I came to the same conclusion; I was just explaining why it worked before.

> 
> Potentially something like the following would stop the crash:
> 
> diff --git a/include/linux/memremap.h b/include/linux/memremap.h
> index e3c2ccf872a8..e47876021afa 100644
> --- a/include/linux/memremap.h
> +++ b/include/linux/memremap.h
> @@ -197,7 +197,8 @@ static inline void folio_set_zone_device_data(struct folio *folio, void *data)
> 
>  static inline bool is_pci_p2pdma_page(const struct page *page)
>  {
> -       return IS_ENABLED(CONFIG_PCI_P2PDMA) &&
> +       return IS_ENABLED(CONFIG_PCI_P2PDMA) && page &&
> +               pfn_valid(page_to_pfn(page)) &&

pfn_valid() is a relatively expensive function [1] to invoke in the data path,
and is_pci_p2pdma_page() ends up being called in these execution flows.

[1] https://elixir.bootlin.com/linux/v6.19.3/source/include/linux/mmzone.h#L2167

>                 folio_is_pci_p2pdma(page_folio(page));
>  }
> 
> 
> But my broader question is: why are we calling a page-based API like 
> is_pci_p2pdma_page() on non-struct-page memory in the first place?

+1

> Could we instead add a helper to verify if the sg_page() return value
> is actually backed by a struct page?

According to the SG design, callers should store only struct page pointers.
There is one known user that violates this requirement: dmabuf, which is
gradually being migrated away from this behavior [2].

[2] https://lore.kernel.org/all/0-v1-b5cab63049c0+191af-dmabuf_map_type_jgg@nvidia.com/

> If it isn't, we should arguably skip the P2PDMA logic entirely and fall
> back to a dma_map_phys style path. Isn't handling these "pageless" physical
> ranges the primary reason dma_map_phys exists?

Right. dma_map_sg() is indeed the wrong API to use for memory that is not
backed by struct page pointers.

Thanks

> 
> +mm list
> 
> Thanks,
> Praan
> 
> [1] https://elixir.bootlin.com/linux/v6.19.3/source/include/linux/memremap.h#L179
> 
> 
> > If any fix is needed, it is is_pci_p2pdma_page() that must be changed,
> > not the iommu code.
> > 
> > Thanks
> > 
> > > 
> > > This causes a kernel paging fault when CONFIG_PCI_P2PDMA is enabled
> > > and dma_map_sg_attrs() is called for memory regions that have no
> > > associated struct page:
> > > 
> > > Unable to handle kernel paging request at virtual address fffffc007d100000
> > >   ...
> > >   Call trace:
> > >    iommu_dma_map_sg+0x118/0x414
> > >    dma_map_sg_attrs+0x38/0x44
> > > 
> > > Fix this by adding a pfn_valid() check before calling
> > > is_pci_p2pdma_page(). If the page frame number is invalid, skip the
> > > P2PDMA check entirely as such memory cannot be P2PDMA memory anyway.
> > > 
> > > Signed-off-by: Ashish Mhetre <amhetre@nvidia.com>
> > > ---
> > >  drivers/iommu/dma-iommu.c | 4 ++++
> > >  1 file changed, 4 insertions(+)
> > > 
> > > diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> > > index 5dac64be61bb..5f45f33b23c2 100644
> > > --- a/drivers/iommu/dma-iommu.c
> > > +++ b/drivers/iommu/dma-iommu.c
> > > @@ -1423,6 +1423,9 @@ int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
> > >  		size_t s_length = s->length;
> > >  		size_t pad_len = (mask - iova_len + 1) & mask;
> > >  
> > > +		if (!pfn_valid(page_to_pfn(sg_page(s))))
> > > +			goto post_pci_p2pdma;
> > > +
> > >  		switch (pci_p2pdma_state(&p2pdma_state, dev, sg_page(s))) {
> > >  		case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
> > >  			/*
> > > @@ -1449,6 +1452,7 @@ int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
> > >  			goto out_restore_sg;
> > >  		}
> > >  
> > > +post_pci_p2pdma:
> > >  		sg_dma_address(s) = s_iova_off;
> > >  		sg_dma_len(s) = s_length;
> > >  		s->offset -= s_iova_off;
> > > -- 
> > > 2.25.1
> > > 
> > > 
> > 


Thread overview: 13+ messages
2026-02-24 10:42 [PATCH RFC] iommu/dma: Validate page before accessing P2PDMA state Ashish Mhetre
2026-02-24 12:32 ` Leon Romanovsky
2026-02-24 20:57   ` Pranjal Shrivastava
2026-02-25  4:49     ` Ashish Mhetre
2026-02-25  7:56       ` Leon Romanovsky
2026-02-25 20:11         ` Pranjal Shrivastava
2026-02-26  7:58           ` Leon Romanovsky
2026-02-27  5:46             ` Ashish Mhetre
2026-02-27 14:05               ` Robin Murphy
2026-02-27 14:08               ` Pranjal Shrivastava
2026-02-27 14:13                 ` Jason Gunthorpe
2026-02-25  7:50     ` Leon Romanovsky [this message]
2026-02-25 20:15       ` Pranjal Shrivastava
