From: Jason Gunthorpe <jgg@ziepe.ca>
To: Mostafa Saleh <smostafa@google.com>
Cc: iommu@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, robin.murphy@arm.com,
will@kernel.org, joro@8bytes.org
Subject: Re: [PATCH] iommu/io-pgtable-arm: Drop DMA API usage for CMOs
Date: Thu, 8 Jan 2026 10:45:20 -0400
Message-ID: <20260108144520.GE545276@ziepe.ca>
In-Reply-To: <CAFgf54o+YZUyZTszFt-uBW4ZhjrAMsfPFUb9kKTK1uHWdS+++w@mail.gmail.com>
On Thu, Jan 08, 2026 at 01:44:51PM +0000, Mostafa Saleh wrote:
> On Thu, Jan 8, 2026 at 1:27 PM Mostafa Saleh <smostafa@google.com> wrote:
> >
> > On Thu, Jan 8, 2026 at 12:59 PM Jason Gunthorpe <jgg@ziepe.ca> wrote:
> > >
> > > On Thu, Jan 08, 2026 at 11:38:46AM +0000, Mostafa Saleh wrote:
> > > > Jason pointed out that the DMA-API calls are not really needed [2].
> > > >
> > > > Looking more into this: initially, the io-pgtable API let drivers
> > > > do the CMOs via tlb::flush_pgtable(), where drivers used the
> > > > DMA API (map/unmap_single) only to perform CMOs, since the
> > > > low-level cache functions aren't available to modules.
> > >
> > > This isn't what I meant at all; iommu-pages.h could do the flush
> > > inside itself using the already existing
> > > iommu_pages_flush_incoherent() instead of open-coding the DMA API
> > > calls everywhere, and maybe that could directly call the arch
> > > function on arm (as x86 does), which would be easier to implement
> > > in pkvm's hypervisor.
> > >
> >
> > I see; so basically we add the IOMMU_PAGES_USE_DMA_API check in
> > iommu_pages_flush_incoherent(), keep the DMA-API handling behind it,
> > and call that from io-pgtable-arm instead.
> >
>
> iommu_pages_flush_incoherent() is also used to flush non-cacheable
> pages in generic_pt from flush_writes_*(), so it might not be that
> simple; we might need to add a new variant.
There are no non-cacheable pages in the iommu-pages system???
All memory comes from folios and is mapped cacheable to the CPU.

The general API requires the user of iommu-pages to know if the
underlying HW is going to do non-coherent DMA; in that case it must
call iommu_pages_flush_incoherent() before the HW does the DMA.
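Roughly, on the user side that contract looks like this (just a
sketch; iommu_pages_flush_incoherent()'s signature and the helper
around it are my guesses, not real code):

	/* hypothetical caller inside a page table implementation */
	static void install_pte(struct device *dev, u64 *ptep, u64 pte)
	{
		WRITE_ONCE(*ptep, pte);
		/* the HW walks this memory with non-coherent DMA, so
		 * flush before it can observe the entry */
		if (!dev_is_dma_coherent(dev))
			iommu_pages_flush_incoherent(ptep, sizeof(*ptep));
	}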
Effectively what I'm saying here is to convert the arm page table to
use iommu-pages and then put your pkvm abstraction at the iommu-pages
level instead of trying to hack up all these users.
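Something like this inside iommu-pages itself (sketch only;
IOMMU_PAGES_USE_DMA_API and iommu_pages_flush_incoherent() are the
only names taken from this thread, the rest is illustrative):

	static inline void iommu_pages_flush_incoherent(void *virt,
							size_t size)
	{
	#ifdef IOMMU_PAGES_USE_DMA_API
		/* keep going through the DMA API on configurations
		 * that need it (hypothetical helper standing in for
		 * today's dma_sync_single_for_device() calls) */
		iommu_pages_flush_via_dma_api(virt, size);
	#else
		/* call the arch cache op directly, as x86 does with
		 * clflush; this one spot is all a pkvm hypervisor
		 * would have to provide */
		arch_sync_dma_for_device(virt_to_phys(virt), size,
					 DMA_TO_DEVICE);
	#endif
	}

Then the pkvm abstraction only has to cover this one function instead
of every io-pgtable user.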
You can also do the same conversion for the smmu driver, so it uses
iommu-pages instead of the DMA API; the AMD and Intel drivers are
doing this already.
Jason