public inbox for linux-kernel@vger.kernel.org
From: Jean-Philippe Brucker <jean-philippe@linaro.org>
To: Vasant Hegde <vasant.hegde@amd.com>
Cc: "Tian, Kevin" <kevin.tian@intel.com>,
	Baolu Lu <baolu.lu@linux.intel.com>,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Robin Murphy <robin.murphy@arm.com>,
	Jason Gunthorpe <jgg@ziepe.ca>,
	Nicolin Chen <nicolinc@nvidia.com>,
	"Liu, Yi L" <yi.l.liu@intel.com>,
	Jacob Pan <jacob.jun.pan@linux.intel.com>,
	"iommu@lists.linux.dev" <iommu@lists.linux.dev>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v4 09/10] iommu: Make iommu_queue_iopf() more generic
Date: Wed, 30 Aug 2023 13:49:19 +0100	[thread overview]
Message-ID: <20230830124919.GA2855675@myrica> (raw)
In-Reply-To: <a4fc5fa2-a234-286b-e108-7f54a7c70862@amd.com>

On Wed, Aug 30, 2023 at 04:32:47PM +0530, Vasant Hegde wrote:
> Tian, Baolu,
> 
> On 8/30/2023 1:13 PM, Tian, Kevin wrote:
> >> From: Baolu Lu <baolu.lu@linux.intel.com>
> >> Sent: Saturday, August 26, 2023 4:01 PM
> >>
> >> On 8/25/23 4:17 PM, Tian, Kevin wrote:
> >>>> +
> >>>>   /**
> >>>>    * iopf_queue_flush_dev - Ensure that all queued faults have been
> >>>> processed
> >>>>    * @dev: the endpoint whose faults need to be flushed.
> >>> Presumably we also need a flush callback per domain given now
> >>> the use of workqueue is optional then flush_workqueue() might
> >>> not be sufficient.
> >>>
> >>
> >> The iopf_queue_flush_dev() function flushes all pending faults from the
> >> IOMMU queue for a specific device. It has no way to flush fault queues
> >> outside of the iommu core.
> >>
> >> The iopf_queue_flush_dev() function is typically called when a domain is
> >> being detached from a PASID. Hence it's necessary to flush the pending
> >> faults from top to bottom. For example, iommufd should flush pending
> >> faults in its fault queues after detaching the domain from the pasid.
> >>
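A minimal sketch of the sequence described there — detach first, then flush the consumer's fault queues — might look like the following. This is a userspace model with hypothetical names, not actual kernel API:

```c
#include <assert.h>
#include <stdbool.h>

/* Model of the detach sequence: detach the domain from the PASID first
 * (so no new faults can be produced), then flush the consumer's (e.g.
 * iommufd's) fault queues so no stale fault outlives the attachment.
 * All names here are hypothetical. */
struct pasid_attach_model {
	bool attached;
	int consumer_pending;   /* faults queued in the consumer's queue */
};

static void detach_domain_from_pasid(struct pasid_attach_model *m)
{
	m->attached = false;    /* fault production stops here */
}

static void consumer_flush_faults(struct pasid_attach_model *m)
{
	m->consumer_pending = 0;
}

static void detach_and_flush(struct pasid_attach_model *m)
{
	detach_domain_from_pasid(m);
	consumer_flush_faults(m);   /* flush after detach, top to bottom */
}
```

The point of the ordering is only that a flush performed while faults can still be produced is not a flush at all.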
> > 
> > Is there an ordering problem? The last step of intel_svm_drain_prq()
> > in the detaching path issues a set of descriptors to drain page requests
> > and responses in hardware. It cannot complete unless all software queues
> > are drained, and it's counter-intuitive to drain a software queue after
> > the hardware draining has already completed.
> > 
> > btw just flushing requests is probably insufficient in the iommufd case
> > since the responses are received asynchronously. It requires an interface
> > to drain both requests and responses (presumably with timeouts in case
> > of a malicious guest which never responds) in the detach path.
> > 
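A timeout-bounded drain of the kind suggested above could be sketched as follows; again a userspace model with hypothetical names, where tick() stands in for whatever makes progress (e.g. guest responses arriving):

```c
#include <assert.h>
#include <stdbool.h>

/* Model of draining a per-device fault queue in the detach path: wait
 * for outstanding page requests and responses, bounded by a timeout so
 * an unresponsive guest cannot block detach forever. Hypothetical names,
 * not kernel API. */
struct iopf_dev_model {
	int pending_requests;   /* faults delivered but not yet responded to */
};

static bool drain_with_timeout(struct iopf_dev_model *dev,
			       int (*tick)(struct iopf_dev_model *),
			       int max_ticks)
{
	while (dev->pending_requests > 0) {
		if (max_ticks-- <= 0)
			return false;   /* timed out: caller must abort or
					 * auto-respond to remaining faults */
		dev->pending_requests -= tick(dev);
	}
	return true;                    /* fully drained */
}
```

On timeout the detach path would presumably complete the remaining faults itself (e.g. with an error response) rather than leak them.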
> > it's not a problem for sva as responses are synchronously delivered after
> > handling the mm fault. So it's fine to not touch it in this series but
> > certainly this area needs more work when moving to support iommufd. 😊
> > 
> > btw why is iopf_queue_flush_dev() called only in intel-iommu driver?
> > Isn't it a common requirement for all sva-capable drivers?

It's not needed by the SMMUv3 driver because it doesn't implement PRI yet,
only the Arm-specific stall fault model where DMA transactions are held in
the SMMU while waiting for the OS to handle IOPFs. Since a device driver
must complete all DMA transactions before calling unbind(), with the stall
model there are no pending IOPFs to flush on unbind(). PRI support with
Stop Markers would add a call to iopf_queue_flush_dev() after flushing the
SMMU PRI queue [2].

Moving the flush to the core shouldn't be a problem, as long as the driver
gets a chance to flush the hardware queue first.
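A rough sketch of that ordering — an optional driver callback drains the hardware queue, then the core flushes its software queue — using hypothetical names in a userspace model rather than kernel code:

```c
#include <assert.h>

/* Model of a core-level flush that lets the driver drain its hardware
 * PRI/stall queue before the core flushes the software workqueue.
 * All names are hypothetical. */
struct iommu_flush_model {
	int hw_pending;     /* faults still sitting in the hardware queue */
	int sw_pending;     /* faults queued on the core's workqueue */
	int order_ok;       /* records that hw drain ran before sw flush */
};

static void driver_flush_hw(struct iommu_flush_model *m)
{
	/* Draining hardware moves in-flight faults to the software queue. */
	m->sw_pending += m->hw_pending;
	m->hw_pending = 0;
}

static void core_flush_sw(struct iommu_flush_model *m)
{
	m->order_ok = (m->hw_pending == 0); /* hw must already be drained */
	m->sw_pending = 0;
}

/* Core flush on detach: give the driver a chance first. */
static void core_iopf_queue_flush(struct iommu_flush_model *m,
				  void (*drv_flush)(struct iommu_flush_model *))
{
	if (drv_flush)
		drv_flush(m);   /* e.g. drain the SMMU PRI queue */
	core_flush_sw(m);       /* then the flush_workqueue() equivalent */
}
```

If the software queue were flushed first, faults still in flight in hardware could land on it afterwards, which is the ordering hazard discussed above.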

Thanks,
Jean

[2] https://jpbrucker.net/git/linux/commit/?h=sva/2020-12-14&id=bba76fb4ec631bec96f98f14a6cd13b2df81e5ce

> 
> I had the same question when we did the SVA implementation for AMD IOMMU [1].
> Currently we call queue_flush from the remove_dev_pasid() path. Since PASID
> can be enabled without ATS/PRI, I thought it was the individual driver's
> responsibility. But looking at this series, does it make sense to handle
> queue_flush in the core layer?
> 
> [1]
> https://lore.kernel.org/linux-iommu/20230823140415.729050-1-vasant.hegde@amd.com/T/#t
> 
> -Vasant
> 
> 

Thread overview: 48+ messages
2023-08-25  2:30 [PATCH v4 00/10] iommu: Prepare to deliver page faults to user space Lu Baolu
2023-08-25  2:30 ` [PATCH v4 01/10] iommu: Move iommu fault data to linux/iommu.h Lu Baolu
2023-08-25  7:52   ` Tian, Kevin
2023-08-25  2:30 ` [PATCH v4 02/10] iommu/arm-smmu-v3: Remove unrecoverable faults reporting Lu Baolu
2023-08-25  7:53   ` Tian, Kevin
2023-08-25  2:30 ` [PATCH v4 03/10] iommu: Remove unrecoverable fault data Lu Baolu
2023-08-25  7:53   ` Tian, Kevin
2023-08-25  2:30 ` [PATCH v4 04/10] iommu: Cleanup iopf data structure definitions Lu Baolu
2023-08-25  7:57   ` Tian, Kevin
2023-08-25  2:30 ` [PATCH v4 05/10] iommu: Merge iopf_device_param into iommu_fault_param Lu Baolu
2023-08-25  8:00   ` Tian, Kevin
2023-08-25  2:30 ` [PATCH v4 06/10] iommu: Remove iommu_[un]register_device_fault_handler() Lu Baolu
2023-08-25  2:30 ` [PATCH v4 07/10] iommu: Merge iommu_fault_event and iopf_fault Lu Baolu
2023-08-25  8:03   ` Tian, Kevin
2023-08-26  7:02     ` Baolu Lu
2023-08-30  7:33       ` Tian, Kevin
2023-08-25  2:30 ` [PATCH v4 08/10] iommu: Prepare for separating SVA and IOPF Lu Baolu
2023-08-25  8:05   ` Tian, Kevin
2023-08-25  2:30 ` [PATCH v4 09/10] iommu: Make iommu_queue_iopf() more generic Lu Baolu
2023-08-25  8:17   ` Tian, Kevin
2023-08-26  7:32     ` Baolu Lu
2023-08-30  7:34       ` Tian, Kevin
2023-08-26  8:01     ` Baolu Lu
2023-08-30  7:43       ` Tian, Kevin
2023-08-30 11:02         ` Vasant Hegde
2023-08-30 12:49           ` Jean-Philippe Brucker [this message]
2023-08-31  6:57             ` Vasant Hegde
2023-08-31  9:27         ` Baolu Lu
2023-09-01  2:49           ` Tian, Kevin
2023-09-05  5:19             ` Baolu Lu
2023-09-11  6:35               ` Tian, Kevin
2023-09-11 12:26                 ` Baolu Lu
2023-09-13  2:25                   ` Tian, Kevin
2023-09-13  2:44                     ` Baolu Lu
     [not found]       ` <BN9PR11MB527624F1CC4A545FBAE3C9C98CE6A@BN9PR11MB5276.namprd11.prod.outlook.com>
2023-08-30  8:50         ` Tian, Kevin
2023-08-31  9:42           ` Baolu Lu
2023-08-26  8:04     ` Baolu Lu
2023-08-30  7:55       ` Tian, Kevin
2023-08-31 11:24         ` Baolu Lu
2023-09-01  2:50           ` Tian, Kevin
2023-09-05  5:24             ` Baolu Lu
2023-09-11  6:57               ` Tian, Kevin
2023-09-11 12:46                 ` Baolu Lu
2023-09-13  2:34                   ` Tian, Kevin
2023-09-13  4:23                     ` Baolu Lu
2023-09-13  6:18                     ` Baolu Lu
2023-08-26  8:08     ` Baolu Lu
2023-08-25  2:30 ` [PATCH v4 10/10] iommu: Separate SVA and IOPF Lu Baolu
