From: Mukesh R <mrathor@linux.microsoft.com>
To: Yu Zhang <zhangyu1@linux.microsoft.com>,
Michael Kelley <mhklinux@outlook.com>
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"linux-hyperv@vger.kernel.org" <linux-hyperv@vger.kernel.org>,
"iommu@lists.linux.dev" <iommu@lists.linux.dev>,
"linux-pci@vger.kernel.org" <linux-pci@vger.kernel.org>,
"linux-arch@vger.kernel.org" <linux-arch@vger.kernel.org>,
"wei.liu@kernel.org" <wei.liu@kernel.org>,
"kys@microsoft.com" <kys@microsoft.com>,
"haiyangz@microsoft.com" <haiyangz@microsoft.com>,
"decui@microsoft.com" <decui@microsoft.com>,
"longli@microsoft.com" <longli@microsoft.com>,
"joro@8bytes.org" <joro@8bytes.org>,
"will@kernel.org" <will@kernel.org>,
"robin.murphy@arm.com" <robin.murphy@arm.com>,
"bhelgaas@google.com" <bhelgaas@google.com>,
"kwilczynski@kernel.org" <kwilczynski@kernel.org>,
"lpieralisi@kernel.org" <lpieralisi@kernel.org>,
"mani@kernel.org" <mani@kernel.org>,
"robh@kernel.org" <robh@kernel.org>,
"arnd@arndb.de" <arnd@arndb.de>, "jgg@ziepe.ca" <jgg@ziepe.ca>,
"jacob.pan@linux.microsoft.com" <jacob.pan@linux.microsoft.com>,
"tgopinath@linux.microsoft.com" <tgopinath@linux.microsoft.com>,
"easwar.hariharan@linux.microsoft.com"
<easwar.hariharan@linux.microsoft.com>
Subject: Re: [PATCH v1 3/4] iommu/hyperv: Add para-virtualized IOMMU support for Hyper-V guest
Date: Fri, 15 May 2026 17:11:19 -0700 [thread overview]
Message-ID: <53754e0b-2af8-edd2-dfc0-293fac002a52@linux.microsoft.com> (raw)
In-Reply-To: <fw2pruvjgo7yigtcxssf3xv27soibsj6hmw2ls5wj4rylfhdha@e63f32cwu2x5>
On 5/15/26 09:53, Yu Zhang wrote:
> On Fri, May 15, 2026 at 02:51:38PM +0000, Michael Kelley wrote:
>> From: Yu Zhang <zhangyu1@linux.microsoft.com> Sent: Friday, May 15, 2026 7:00 AM
>>>
>>> On Thu, May 14, 2026 at 06:13:24PM +0000, Michael Kelley wrote:
>>>> From: Yu Zhang <zhangyu1@linux.microsoft.com> Sent: Monday, May 11, 2026 9:24 AM
>>>>>
>>>>> Add a para-virtualized IOMMU driver for Linux guests running on Hyper-V.
>>>>> This driver implements stage-1 IO translation within the guest OS.
>>>>> It integrates with the Linux IOMMU core, utilizing Hyper-V hypercalls
>>>>> for:
>>>>> - Capability discovery
>>>>> - Domain allocation, configuration, and deallocation
>>>>> - Device attachment and detachment
>>>>> - IOTLB invalidation
>>>>>
>>>>> The driver constructs x86-compatible stage-1 IO page tables in the
>>>>> guest memory using consolidated IO page table helpers. This allows
>>>>> the guest to manage stage-1 translations independently of vendor-
>>>>> specific drivers (like Intel VT-d or AMD IOMMU).
>>>>>
>>>>> Hyper-V consumes this stage-1 IO page table when a device domain is
>>>>> created and configured, and nests it with the host's stage-2 IO page
>>>>> tables, thereby eliminating VM exits for guest IOMMU mapping
>>>>> operations. For unmapping operations, VM exits to perform the IOTLB
>>>>> flush remain unavoidable.
>>>>>
>>>>> Hyper-V identifies each PCI pass-thru device by a logical device ID
>>>>> in its hypercall interface. The vPCI driver (pci-hyperv) registers the
>>>>> per-bus portion of this ID with the pvIOMMU driver during bus probe.
>>>>> The pvIOMMU driver stores this mapping and combines it with the function
>>>>> number of the endpoint PCI device to form the complete ID for hypercalls.
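For readers following along, the composition described above (per-bus portion registered by pci-hyperv, combined with the endpoint's function number) could be sketched roughly like this. The field layout, mask, and helper name are my assumptions for illustration, not the actual hypercall ABI:

```c
#include <stdint.h>

/*
 * Illustrative only: assume the low byte of the logical device ID
 * carries the endpoint's function/slot number and the upper bytes
 * hold the per-bus portion registered by pci-hyperv.  The real
 * layout is defined by the Hyper-V hypercall interface, not shown.
 */
static inline uint64_t hv_logical_dev_id(uint64_t bus_portion, uint8_t devfn)
{
	/* Merge the registered per-bus bits with the function number. */
	return (bus_portion & ~0xffULL) | devfn;
}
```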
>>>>
>>>> As you are probably aware, Mukesh's patch series to support PCI
>>>> pass-thru devices also needs to get the logical device ID. Maybe the
>>>> registration mechanism needs to move somewhere that can be shared
>>>> with his code.
>>>>
>>>
>>> Thank you so much for the review, Michael!
>>>
>>> Yes, I looked at Mukesh's series and noticed his hv_pci_vmbus_device_id()
>>> in pci-hyperv.c has the same dev_instance byte manipulation. We do need
>>> a common registration mechanism.
>>>
>>> Any suggestion on where to put it? drivers/hv/hv_common.c seems like a
>>> natural place, but the register/lookup functions are currently only
>>> meaningful when CONFIG_HYPERV_PVIOMMU is set. If Mukesh's pass-thru
>>> code also needs them, we might need a new shared Kconfig option that
>>> both can select. Open to better ideas.
>>
>> Unfortunately, I have not looked at Mukesh's series in detail yet, so
>> I don't have enough knowledge of the full situation to offer a good
>> recommendation.
>>
>
> Sorry I forgot to Cc Mukesh in the previous reply. :(
> @Mukesh, any thoughts on sharing the logical device ID registration mechanism?
Yeah, I went round and round trying to find the best place. I almost
created a virt/hyperv/hv_utils.c file. Maybe that is the best place?
Thanks,
-Mukesh
>>>
>>> [...]
>>>
>>>>> +static void hv_flush_device_domain(struct hv_iommu_domain *hv_domain)
>>>>> +{
>>>>> + u64 status;
>>>>> + unsigned long flags;
>>>>> + struct hv_input_flush_device_domain *input;
>>>>> +
>>>>> + local_irq_save(flags);
>>>>> +
>>>>> + input = *this_cpu_ptr(hyperv_pcpu_input_arg);
>>>>> + memset(input, 0, sizeof(*input));
>>>>> + input->device_domain = hv_domain->device_domain;
>>>>
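The quoted snippet follows the usual Hyper-V hypercall-input pattern: disable interrupts, grab the per-CPU input page, zero it, fill in the arguments, then issue the hypercall. A userspace sketch of that shape, with the per-CPU page and hypercall replaced by stubs and all names illustrative:

```c
#include <stdint.h>
#include <string.h>

/* Stand-in for the hypercall input structure under discussion. */
struct flush_input {
	uint64_t device_domain;
};

/* Stand-in for *this_cpu_ptr(hyperv_pcpu_input_arg). */
static struct flush_input pcpu_input_page;

/* Stub for hv_do_hypercall(); returns 0 (success) for a valid domain. */
static uint64_t stub_hypercall(const struct flush_input *in)
{
	return in->device_domain != 0 ? 0 : 1;
}

static uint64_t flush_device_domain(uint64_t device_domain)
{
	struct flush_input *input = &pcpu_input_page;

	/* In the kernel, local_irq_save(flags) protects the per-CPU page. */
	memset(input, 0, sizeof(*input));
	input->device_domain = device_domain;
	return stub_hypercall(input);
	/* In the kernel, local_irq_restore(flags) follows the hypercall. */
}
```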
>>>> The previous version of this patch had code to set several other fields in
>>>> the input. I wanted to confirm that not setting them in this version is
>>>> intentional. Were they not needed?
>>>>
>>>
>>> Oh. The RFC v1 set partition_id, owner_vtl, domain_id.type, and domain_id.id
>>> individually. In this version, I just simplified it to a struct assignment.
>>> No functional change.
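The equivalence is easy to see in miniature. The struct names below are made up to mirror the discussion, not the real hypercall structures:

```c
#include <stdint.h>

/* Hypothetical stand-ins for the real hypercall input structures. */
struct dom_id {
	uint32_t type;
	uint32_t id;
};

struct device_domain {
	uint64_t partition_id;
	uint8_t  owner_vtl;
	struct dom_id domain_id;
};

/* One struct assignment copies every member, nested ones included,
 * so it is equivalent to setting partition_id, owner_vtl,
 * domain_id.type, and domain_id.id individually. */
static int copy_matches(void)
{
	struct device_domain a = { 1, 2, { 3, 4 } };
	struct device_domain b = a;	/* the struct assignment */

	return b.partition_id == a.partition_id &&
	       b.owner_vtl == a.owner_vtl &&
	       b.domain_id.type == a.domain_id.type &&
	       b.domain_id.id == a.domain_id.id;
}
```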
>>
>> Of course! I should have looked more closely at the details before making
>> this comment. :-(
>>
>> [...]
>>
>>>>
>>>> Previous versions of this function did hv_iommu_detach_dev(). With that call
>>>> removed from here, hv_iommu_detach_dev() is only called when attaching a
>>>> domain to a device that already has a domain attached. Is it the case that
>>>> Hyper-V doesn't require the detach as a cleanup step?
>>>>
>>>
>>> The IOMMU core attaches the device to release_domain (our blocking domain)
>>> before calling release_device(), so I believe the explicit detach in the RFC
>>> was redundant. I simply didn't realize that at the time.
>>>
>>
>> Got it. But after the IOMMU core attaches the device to the blocking
>> domain, there's the possibility that the vPCI device is rescinded by
>> Hyper-V and it goes away entirely. Or the device might be subjected
>> to an "unbind/bind" cycle in Linux. Does the detach need to be done
>> on the blocking domain in such cases? In this version of the patches, the
>> Hyper-V "attach" and "detach" hypercalls still end up unbalanced. That
>> seems a bit untidy at best, and I wonder if there are scenarios where
>> Hyper-V will complain about the lack of balance.
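The balance concern can be shown with a toy counter model: attaching a new domain first detaches the old one (as in the patch), the core moves the device to the blocking domain before release, and release_device() issues no further detach, leaving one attach hypercall unmatched. Purely illustrative; no real kernel APIs are used:

```c
/* Counters standing in for the Hyper-V attach/detach hypercalls. */
static int hv_attach_calls, hv_detach_calls;

static void attach_domain(void)  { hv_attach_calls++; }
static void detach_domain(void)  { hv_detach_calls++; }

/* Attaching a new domain detaches the previous one, if any. */
static void switch_domain(int have_old)
{
	if (have_old)
		detach_domain();
	attach_domain();
}

/* Returns the number of unmatched attach hypercalls at release time. */
static int device_lifecycle(void)
{
	switch_domain(0);	/* initial attach to a paging domain */
	switch_domain(1);	/* core attaches the blocking domain */
	/* release_device(): no detach issued for the blocking domain */
	return hv_attach_calls - hv_detach_calls;
}
```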
>>
>
> Thank you, Michael. May I ask what "the vPCI device is rescinded by
> Hyper-V and it goes away entirely" means?
>
> I realize it's a bit untidy, but I want to understand this issue more
> clearly first. :)
>
> B.R.
> Yu
Thread overview: 22+ messages
2026-05-11 16:24 [PATCH v1 0/4] Hyper-V: Add para-virtualized IOMMU support for Linux guests Yu Zhang
2026-05-11 16:24 ` [PATCH v1 1/4] iommu: Move Hyper-V IOMMU driver to its own subdirectory Yu Zhang
2026-05-15 22:19 ` Jason Gunthorpe
2026-05-11 16:24 ` [PATCH v1 2/4] hyperv: Introduce new hypercall interfaces used by Hyper-V guest IOMMU Yu Zhang
2026-05-11 16:24 ` [PATCH v1 3/4] iommu/hyperv: Add para-virtualized IOMMU support for Hyper-V guest Yu Zhang
2026-05-13 18:39 ` Jacob Pan
2026-05-15 12:38 ` Yu Zhang
2026-05-14 18:13 ` Michael Kelley
2026-05-15 13:59 ` Yu Zhang
2026-05-15 14:51 ` Michael Kelley
2026-05-15 16:53 ` Yu Zhang
2026-05-15 17:36 ` Michael Kelley
2026-05-16 0:11 ` Mukesh R [this message]
2026-05-15 22:31 ` Jason Gunthorpe
2026-05-11 16:24 ` [PATCH v1 4/4] iommu/hyperv: Add page-selective IOTLB flush support Yu Zhang
2026-05-12 23:45 ` sashiko-bot
2026-05-14 18:14 ` Michael Kelley
2026-05-14 21:16 ` Michael Kelley
2026-05-15 16:23 ` Yu Zhang
2026-05-15 18:00 ` Michael Kelley
2026-05-15 23:33 ` Michael Kelley
2026-05-15 22:35 ` Jason Gunthorpe