From: Jason Gunthorpe <jgg@nvidia.com>
To: Will Deacon <will@kernel.org>
Cc: Pavan Kondeti <pavan.kondeti@oss.qualcomm.com>,
linux-arm-kernel@lists.infradead.org, iommu@lists.linux.dev,
jean-philippe@linaro.org, praan@google.com, smostafa@google.com,
robin.murphy@arm.com
Subject: Re: PRI support in arm-smmu-v3 driver
Date: Tue, 25 Nov 2025 14:03:53 -0400
Message-ID: <20251125180353.GC520526@nvidia.com>
In-Reply-To: <aSXupxMeHLajOmDW@willie-the-truck>

On Tue, Nov 25, 2025 at 06:00:07PM +0000, Will Deacon wrote:
> [+iommu list and usual suspects]
>
> Hi Pavan,
>
> On Tue, Nov 25, 2025 at 02:22:05PM +0530, Pavan Kondeti wrote:
> > I am trying to understand IO fault handling in Linux w/ SMMUv3. While reading
> > the code, I understand that SVA domain creation allows taking IO pagefaults.
> > arm_smmu_enable_iopf() checks if the master support stall upon fault
> > feature or not. How do we handle page faults for PCIe devices, for which
> > transactions cannot safely be stalled? IIUC, the PRI handling in the
> > driver, i.e. arm_smmu_priq_thread()->arm_smmu_handle_ppr(), is not
> > doing anything. In v7 of the SVA support for SMMUv3 series, I see
> > PRI support in the "Add support for PRI" patch [1], but it was not merged.
> >
> > Can you please clarify if we can support SVA with PCIe devices w/o
> > pinning the memory?
> >
> > [1]
> > https://lore.kernel.org/all/20200519175502.2504091-25-jean-philippe@linaro.org/
>
> The only SVA client we've had for SMMUv3 in the upstream kernel is the
> "uacce" thing from HiSilicon which is a platform device (rather than a
> PCIe device) and so I think the PRI support just fell by the wayside due
> to lack of an upstream user and no ability to test it.
Right, the driver only supports "stall" mode right now, not PRI. PRI
has a number of differences at the SMMU level.
> I'm not sure whether or not Jason has plans to implement PRI but maybe
> it's something you could help with if you have hardware?
I've been waiting for someone who has HW to take this on.
Honestly, I'm not entirely sure what all the gaps are, but at minimum
I think we need to take the PRI information, package it into the
fault queue, and link the completion back to a PRI response.
Jason
Thread overview: 4+ messages
2025-11-25 8:52 PRI support in arm-smmu-v3 driver Pavan Kondeti
2025-11-25 18:00 ` Will Deacon
2025-11-25 18:03 ` Jason Gunthorpe [this message]
2025-12-04 18:19 ` Jonathan Cameron