From: Nicolin Chen <nicolinc@nvidia.com>
To: Shuai Xue <xueshuai@linux.alibaba.com>
Cc: <joro@8bytes.org>, <kevin.tian@intel.com>, <will@kernel.org>,
<robin.murphy@arm.com>, <baolu.lu@linux.intel.com>,
<jgg@nvidia.com>, <iommu@lists.linux.dev>,
<linux-kernel@vger.kernel.org>
Subject: Re: [PATCH rc v2] iommu: Fix nested pci_dev_reset_iommu_prepare/done()
Date: Thu, 19 Mar 2026 14:34:37 -0700
Message-ID: <abxr7XrsK1WSmr7T@Asurada-Nvidia>
In-Reply-To: <dd72f3be-16d9-4386-82ac-1b8e06186e3d@linux.alibaba.com>
On Thu, Mar 19, 2026 at 07:14:21PM +0800, Shuai Xue wrote:
> On 3/19/26 12:31 PM, Nicolin Chen wrote:
> > @@ -3961,9 +3962,10 @@ int pci_dev_reset_iommu_prepare(struct pci_dev *pdev)
> > guard(mutex)(&group->mutex);
> > - /* Re-entry is not allowed */
> > - if (WARN_ON(group->resetting_domain))
> > - return -EBUSY;
> > + if (group->resetting_domain) {
> > + group->reset_cnt++;
> > + return 0;
> > + }
> > ret = __iommu_group_alloc_blocking_domain(group);
>
> pci_dev_reset_iommu_prepare/done() have NO singleton group check.
> They operate on the specific pdev passed in, but use group-wide
> state (resetting_domain, reset_cnt) to track the reset lifecycle.
>
> Interestingly, the broken_worker in patch 3 of the ATC timeout
> series DOES have an explicit singleton check:
>
> if (list_is_singular(&group->devices)) {
> /* Note: only support group with a single device */
>
> This reveals an implicit assumption: the entire prepare/done
> mechanism works correctly only for singleton groups. For
> multi-device groups:
>
> - prepare() only detaches the specific pdev, leaving other
> devices in the group still attached to the original domain
> - The group-wide resetting_domain/reset_cnt state can be
> corrupted by concurrent resets on different devices (as
> discussed above)
That's a phenomenal catch!
I think we could add a reset_ndevs counter to the group and a
reset_depth counter to the gdev.
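Very roughly, something like this (completely untested; "reset_ndevs"
and "reset_depth" are just placeholder names, not existing fields):

	/* in the prepare path, under group->mutex */
	if (gdev->reset_depth++)
		return 0;	/* nested reset on this device */
	if (!group->reset_ndevs++) {
		ret = __iommu_group_alloc_blocking_domain(group);
		if (ret) {
			group->reset_ndevs--;
			gdev->reset_depth--;
			return ret;
		}
	}
	/* then detach only this gdev to the blocking domain */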
> If prepare/done is truly meant only for singleton groups, it
> should enforce this explicitly:
>
> if (!list_is_singular(&group->devices))
> return -EOPNOTSUPP;
-EOPNOTSUPP would fail all the PCI reset caller functions, though we
could change them to ignore -EOPNOTSUPP. Or we could probably do:
if (!list_is_singular(&group->devices))
return 0;
But I feel this would be too heavy for a bug fix.
> If it's meant to support multi-device groups, then the per-device
> vs group-wide state mismatch needs to be resolved — either by
> making the state per-device, or by detaching/restoring all
> devices in the group together.
Per-gdev should work, IMHO, at least for this patch.
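E.g. the done() path would then be roughly symmetrical (again
untested, same placeholder names as above):

	/* in the done path, under group->mutex */
	if (--gdev->reset_depth)
		return;		/* still inside an outer/nested reset */
	/* re-attach only this gdev to the original domain */
	if (!--group->reset_ndevs)
		group->resetting_domain = NULL;	/* last device is done */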
For iommu_report_device_broken(), I am thinking we could still use a
per-gdev WQ, by converting the mutex-protected gdev list to an
RCU-protected one.
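Just to sketch the idea (assuming the gdev list gets RCU-safe
add/del; "broken_work" and "iommu_reset_wq" below are hypothetical
names, not anything in the tree):

	rcu_read_lock();
	list_for_each_entry_rcu(gdev, &group->devices, list) {
		/* queue one work item per gdev instead of per group */
		queue_work(iommu_reset_wq, &gdev->broken_work);
	}
	rcu_read_unlock();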
Thanks
Nicolin