public inbox for linux-kernel@vger.kernel.org
From: Nicolin Chen <nicolinc@nvidia.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: <kevin.tian@intel.com>, <yi.l.liu@intel.com>,
	<iommu@lists.linux.dev>, <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v2 1/3] iommufd: Add devices_users to track the hw_pagetable usage by device
Date: Wed, 1 Feb 2023 11:25:10 -0800	[thread overview]
Message-ID: <Y9q8llC0JVokHLf7@Asurada-Nvidia> (raw)
In-Reply-To: <Y9qxdinaS6anoWhH@nvidia.com>

On Wed, Feb 01, 2023 at 02:37:42PM -0400, Jason Gunthorpe wrote:
> On Wed, Feb 01, 2023 at 09:46:23AM -0800, Nicolin Chen wrote:
> > > So the issue is with replace you need to have the domain populated
> > > before we can call replace but you can't populate the domain until it
> > > is bound because of the above issue? That seems unsolvable without
> > > fixing up the driver.
> > 
> > Not really. A REPLACE ioctl is just an ATTACH, if the device just
> > gets BIND-ed. So the SMMU driver will initialize ("finalise") the
> > domain during the replace() call, then iopt_table_add_domain() can
> > be done.
> > 
> > So, not a blocker here.
> 
> Well, yes, there sort of is because the whole flow becomes nonsensical
> - we are supposed to have the iommu_domain populated by the time we do
> replace. Otherwise replace is extra-pointless..

The "finalise" step is done in one of the very first lines of the
attach_dev() callback in the SMMU driver, though it could still fail
the replace() in an undesirable way.

https://github.com/nicolinc/iommufd/commit/5ae54f360495aae35b5967d1eb00149912145639
Btw, this is a draft that I made to move iopt_table_add_domain(). I
think we can have this with the nesting series.

Later, once we pass the dev pointer to the ->domain_alloc op using
Robin's change, the iopt_table_add_domain() call can be done within
hwpt_alloc(), prior to any attach()/replace().

> > > Is there another issue?
> > 
> > Oh. I think we mixed the topics here. These three patches were
> > not meant to unblock anything, but to clean up a path for the
> > replace series and the nesting series regarding the device
> > locking issue:
> > 
> > 	if (cur_hwpt && cur_hwpt != hwpt)
> > 		mutex_lock(&cur_hwpt->device_lock);
> > 	mutex_lock(&hwpt->device_lock);
> > 	...
> > 	if (iommufd_hw_pagetable_has_group()) {	// touching device list
> > 		...
> > 		iommu_group_replace_domain();
> > 		...
> > 	}
> > 	if (cur_hwpt)
> > 		list_del(&idev->devices_item);
> > 	list_add(&idev->devices_item, &hwpt->devices);
> > 	...
> > 	mutex_unlock(&hwpt->device_lock);
> > 	if (cur_hwpt && cur_hwpt != hwpt)
> > 		mutex_unlock(&cur_hwpt->device_lock);
> 
> What is the issue? That isn't quite right, but the basic bit is fine
> 
> If you want to do replace then you have to hold both devices_lock and
> you write that super ugly thing like this
> 
> lock_both:
>    if (hwpt_a < hwpt_b) {
>       mutex_lock(&hwpt_a->devices_lock);
>       mutex_lock_nested(&hwpt_b->devices_lock, SINGLE_DEPTH_NESTING);
>    } else if (hwpt_a > hwpt_b) {
>       mutex_lock(&hwpt_b->devices_lock);
>       mutex_lock_nested(&hwpt_a->devices_lock, SINGLE_DEPTH_NESTING);
>    } else
>       mutex_lock(&hwpt_a->devices_lock);
> 
> And then it is trivial, yes?

Yea. That's your previous remark.

> Using the group_lock in the iommu core is the right way to fix
> this.. Maybe someday we can do that.
> 
> (also document that replace causes all the devices in the group to
> change iommu_domains at once)

Yes. There's a discussion in PATCH-3 of this series. I drafted a
patch changing iommu_attach/detach_dev():
https://github.com/nicolinc/iommufd/commit/124f7804ef38d50490b606fd56c1e27ce551a839

Baolu had a similar patch series a year ago. So we might continue
that effort in parallel, and eventually drop the device list/lock.

> > I just gave this another thought. Since we have patch-2 of this
> > series moving the ioas->mutex, it already serializes the
> > attach/detach routines. And I see that all the places touching
> > idev->device_item and hwpt->devices are protected by ioas->mutex.
> > So, perhaps we can simply remove the device_lock?
> 
> The two hwpts are not required to have the same ioas, so this doesn't
> really help..

Hmm...in that case, we should hold two ioas->mutex locks in
addition to two device locks?

Thanks
Nic

