From: Nicolin Chen <nicolinc@nvidia.com>
To: "Tian, Kevin" <kevin.tian@intel.com>
Cc: "jgg@nvidia.com" <jgg@nvidia.com>,
	"corbet@lwn.net" <corbet@lwn.net>,
	"will@kernel.org" <will@kernel.org>,
	"bagasdotme@gmail.com" <bagasdotme@gmail.com>,
	"robin.murphy@arm.com" <robin.murphy@arm.com>,
	"joro@8bytes.org" <joro@8bytes.org>,
	"thierry.reding@gmail.com" <thierry.reding@gmail.com>,
	"vdumpa@nvidia.com" <vdumpa@nvidia.com>,
	"jonathanh@nvidia.com" <jonathanh@nvidia.com>,
	"shuah@kernel.org" <shuah@kernel.org>,
	"jsnitsel@redhat.com" <jsnitsel@redhat.com>,
	"nathan@kernel.org" <nathan@kernel.org>,
	"peterz@infradead.org" <peterz@infradead.org>,
	"Liu, Yi L" <yi.l.liu@intel.com>,
	"mshavit@google.com" <mshavit@google.com>,
	"praan@google.com" <praan@google.com>,
	"zhangzekun11@huawei.com" <zhangzekun11@huawei.com>,
	"iommu@lists.linux.dev" <iommu@lists.linux.dev>,
	"linux-doc@vger.kernel.org" <linux-doc@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>,
	"linux-tegra@vger.kernel.org" <linux-tegra@vger.kernel.org>,
	"linux-kselftest@vger.kernel.org"
	<linux-kselftest@vger.kernel.org>,
	"patches@lists.linux.dev" <patches@lists.linux.dev>,
	"mochs@nvidia.com" <mochs@nvidia.com>,
	"alok.a.tiwari@oracle.com" <alok.a.tiwari@oracle.com>,
	"vasant.hegde@amd.com" <vasant.hegde@amd.com>,
	"dwmw2@infradead.org" <dwmw2@infradead.org>,
	"baolu.lu@linux.intel.com" <baolu.lu@linux.intel.com>
Subject: Re: [PATCH v6 06/25] iommufd/access: Allow access->ops to be NULL for internal use
Date: Wed, 25 Jun 2025 10:33:09 -0700
Message-ID: <aFwy1V81mmM6R/yU@Asurada-Nvidia>
In-Reply-To: <aFwlu7FlfIP85gko@Asurada-Nvidia>

On Wed, Jun 25, 2025 at 09:37:18AM -0700, Nicolin Chen wrote:
> On Wed, Jun 25, 2025 at 03:38:19AM +0000, Tian, Kevin wrote:
> > > From: Nicolin Chen <nicolinc@nvidia.com>
> > > Sent: Saturday, June 14, 2025 3:15 PM
> > > 
> > > +int iommufd_access_notify_unmap(struct io_pagetable *iopt, unsigned long iova,
> > > +				unsigned long length)
> > >  {
> > >  	struct iommufd_ioas *ioas =
> > >  		container_of(iopt, struct iommufd_ioas, iopt);
> > >  	struct iommufd_access *access;
> > >  	unsigned long index;
> > > +	int ret = 0;
> > > 
> > >  	xa_lock(&ioas->iopt.access_list);
> > >  	xa_for_each(&ioas->iopt.access_list, index, access) {
> > > +		if (!access->ops || !access->ops->unmap) {
> > > +			ret = -EBUSY;
> > > +			goto unlock;
> > > +		}
> > 
> > Then the accesses before this one have been notified to unpin the
> > area, while the accesses after it are left unnotified.
> > 
> > In the end the unmap fails, but with some side effects incurred.
> > 
> > I'm not sure whether this intermediate state may lead to any undesired
> > effects later. Just raising it in case you or Jason have already
> > thought about it.
> 
> That's a good point. When an access blocks the unmap, no unmap actually
> happens, so there is no point in notifying the devices via ops->unmap.
> 
> And when the function is re-entered, the devices that were already
> notified once could get a duplicated ops->unmap call?
> 
> So, to play it safe, we could run a standalone xa_for_each first to
> check for !access->ops->unmap. And it could be a bit cleaner to wrap
> that in an iommufd_access_has_internal_use() helper to be called under
> those rwsems.

Correction: it has to be done under the xa_lock, so this pre-check needs
to happen in this function itself.
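
Something like this, as a rough sketch (untested; the notification loop
body is elided here just like in the quoted diff, and the exact error
path is my assumption):

int iommufd_access_notify_unmap(struct io_pagetable *iopt, unsigned long iova,
				unsigned long length)
{
	struct iommufd_ioas *ioas =
		container_of(iopt, struct iommufd_ioas, iopt);
	struct iommufd_access *access;
	unsigned long index;

	xa_lock(&ioas->iopt.access_list);

	/*
	 * Pre-check all accesses under the same xa_lock, so we either
	 * notify all of them or none of them. Failing half-way would
	 * leave some accesses notified, and a re-entry of this function
	 * would then notify those accesses a second time.
	 */
	xa_for_each(&ioas->iopt.access_list, index, access) {
		if (!access->ops || !access->ops->unmap) {
			xa_unlock(&ioas->iopt.access_list);
			return -EBUSY;
		}
	}

	xa_for_each(&ioas->iopt.access_list, index, access) {
		/* Existing per-access ops->unmap notification, unchanged */
	}

	xa_unlock(&ioas->iopt.access_list);
	return 0;
}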

Thanks
Nicolin


Thread overview: 77+ messages
2025-06-14  7:14 [PATCH v6 00/25] iommufd: Add vIOMMU infrastructure (Part-4 HW QUEUE) Nicolin Chen
2025-06-14  7:14 ` [PATCH v6 01/25] iommu: Add iommu_copy_struct_to_user helper Nicolin Chen
2025-06-14  7:14 ` [PATCH v6 02/25] iommu: Pass in a driver-level user data structure to viommu_init op Nicolin Chen
2025-06-14  7:14 ` [PATCH v6 03/25] iommufd/viommu: Allow driver-specific user data for a vIOMMU object Nicolin Chen
2025-06-14  7:14 ` [PATCH v6 04/25] iommufd/selftest: Support user_data in mock_viommu_alloc Nicolin Chen
2025-06-14  7:14 ` [PATCH v6 05/25] iommufd/selftest: Add coverage for viommu data Nicolin Chen
2025-06-14  7:14 ` [PATCH v6 06/25] iommufd/access: Allow access->ops to be NULL for internal use Nicolin Chen
2025-06-16  6:25   ` Baolu Lu
2025-06-16 13:33   ` Jason Gunthorpe
2025-06-17  2:21     ` Nicolin Chen
2025-06-19  9:14       ` Pranjal Shrivastava
2025-06-25  3:38   ` Tian, Kevin
2025-06-25 16:37     ` Nicolin Chen
2025-06-25 17:33       ` Nicolin Chen [this message]
2025-06-14  7:14 ` [PATCH v6 07/25] iommufd/access: Add internal APIs for HW queue to use Nicolin Chen
2025-06-16 13:37   ` Jason Gunthorpe
2025-06-17  2:25     ` Nicolin Chen
2025-06-17  4:23       ` Baolu Lu
2025-06-17 11:55         ` Jason Gunthorpe
2025-06-19  9:49       ` Pranjal Shrivastava
2025-06-19  9:42   ` Pranjal Shrivastava
2025-06-14  7:14 ` [PATCH v6 08/25] iommufd/viommu: Add driver-defined vDEVICE support Nicolin Chen
2025-06-16  6:26   ` Baolu Lu
2025-06-19 10:26   ` Pranjal Shrivastava
2025-06-19 11:44     ` Jason Gunthorpe
2025-06-21  4:51       ` Nicolin Chen
2025-06-14  7:14 ` [PATCH v6 09/25] iommufd/viommu: Introduce IOMMUFD_OBJ_HW_QUEUE and its related struct Nicolin Chen
2025-06-16 13:47   ` Jason Gunthorpe
2025-06-17  2:29     ` Nicolin Chen
2025-06-14  7:14 ` [PATCH v6 10/25] iommufd/viommu: Add IOMMUFD_CMD_HW_QUEUE_ALLOC ioctl Nicolin Chen
2025-06-16  6:12   ` Baolu Lu
2025-06-16  6:47     ` Nicolin Chen
2025-06-16  6:54       ` Baolu Lu
2025-06-16  7:04         ` Nicolin Chen
2025-06-16  7:09           ` Baolu Lu
2025-06-25  3:43       ` Tian, Kevin
2025-06-25 16:06         ` Nicolin Chen
2025-06-16  7:11   ` Baolu Lu
2025-06-16 13:58   ` Jason Gunthorpe
2025-06-25  3:45   ` Tian, Kevin
2025-06-25 23:06     ` Nicolin Chen
2025-06-14  7:14 ` [PATCH v6 11/25] iommufd/driver: Add iommufd_hw_queue_depend/undepend() helpers Nicolin Chen
2025-06-16 14:06   ` Jason Gunthorpe
2025-06-14  7:14 ` [PATCH v6 12/25] iommufd/selftest: Add coverage for IOMMUFD_CMD_HW_QUEUE_ALLOC Nicolin Chen
2025-06-14  7:14 ` [PATCH v6 13/25] iommufd: Add mmap interface Nicolin Chen
2025-06-16 11:33   ` Baolu Lu
2025-06-16 14:13   ` Jason Gunthorpe
2025-06-17  2:37     ` Nicolin Chen
2025-06-17 11:55       ` Jason Gunthorpe
2025-06-25 21:18     ` Nicolin Chen
2025-06-19 11:15   ` Pranjal Shrivastava
2025-06-14  7:14 ` [PATCH v6 14/25] iommufd/selftest: Add coverage for the new " Nicolin Chen
2025-06-14  7:14 ` [PATCH v6 15/25] Documentation: userspace-api: iommufd: Update HW QUEUE Nicolin Chen
2025-06-16 11:34   ` Baolu Lu
2025-06-14  7:14 ` [PATCH v6 16/25] iommu: Allow an input type in hw_info op Nicolin Chen
2025-06-16 11:53   ` Baolu Lu
2025-06-14  7:14 ` [PATCH v6 17/25] iommufd: Allow an input data_type via iommu_hw_info Nicolin Chen
2025-06-16 11:54   ` Baolu Lu
2025-06-16 14:14   ` Jason Gunthorpe
2025-06-14  7:14 ` [PATCH v6 18/25] iommufd/selftest: Update hw_info coverage for an input data_type Nicolin Chen
2025-06-14  7:14 ` [PATCH v6 19/25] iommu/arm-smmu-v3-iommufd: Add vsmmu_size/type and vsmmu_init impl ops Nicolin Chen
2025-06-16 14:19   ` Jason Gunthorpe
2025-06-14  7:14 ` [PATCH v6 20/25] iommu/arm-smmu-v3-iommufd: Add hw_info to impl_ops Nicolin Chen
2025-06-16 14:20   ` Jason Gunthorpe
2025-06-19 11:47   ` Pranjal Shrivastava
2025-06-19 18:53     ` Jason Gunthorpe
2025-06-20  3:32       ` Pranjal Shrivastava
2025-06-21  5:36         ` Nicolin Chen
2025-06-23 15:13           ` Pranjal Shrivastava
2025-06-14  7:14 ` [PATCH v6 21/25] iommu/tegra241-cmdqv: Use request_threaded_irq Nicolin Chen
2025-06-14  7:14 ` [PATCH v6 22/25] iommu/tegra241-cmdqv: Simplify deinit flow in tegra241_cmdqv_remove_vintf() Nicolin Chen
2025-06-14  7:14 ` [PATCH v6 23/25] iommu/tegra241-cmdqv: Do not statically map LVCMDQs Nicolin Chen
2025-06-16 15:44   ` Jason Gunthorpe
2025-06-14  7:14 ` [PATCH v6 24/25] iommu/tegra241-cmdqv: Add user-space use support Nicolin Chen
2025-06-16 16:03   ` Jason Gunthorpe
2025-06-26 18:51   ` Nicolin Chen
2025-06-14  7:14 ` [PATCH v6 25/25] iommu/tegra241-cmdqv: Add IOMMU_VEVENTQ_TYPE_TEGRA241_CMDQV support Nicolin Chen
