* [PATCH v1 0/4] Hyper-V: Add para-virtualized IOMMU support for Linux guests
@ 2026-05-11 16:24 Yu Zhang
2026-05-11 16:24 ` [PATCH v1 1/4] iommu: Move Hyper-V IOMMU driver to its own subdirectory Yu Zhang
` (3 more replies)
0 siblings, 4 replies; 19+ messages in thread
From: Yu Zhang @ 2026-05-11 16:24 UTC (permalink / raw)
To: linux-kernel, linux-hyperv, iommu, linux-pci, linux-arch
Cc: wei.liu, kys, haiyangz, decui, longli, joro, will, robin.murphy,
bhelgaas, kwilczynski, lpieralisi, mani, robh, arnd, jgg,
mhklinux, jacob.pan, tgopinath, easwar.hariharan
This patch series introduces a para-virtualized IOMMU driver for
Linux guests running on Microsoft Hyper-V. The driver enables two
primary use cases:
1) In-kernel DMA protection for devices assigned to the guest.
2) Device assignment to guest user space (e.g., via VFIO).
The driver implements the following core functionality:
* Hypercall-based Enumeration
Unlike traditional ACPI-based discovery (e.g., DMAR/IVRS),
this driver enumerates the Hyper-V IOMMU capabilities directly
via hypercalls. This approach allows the guest to discover
IOMMU presence and features without requiring specific virtual
firmware extensions or modifications.
* Domain Management
The driver manages IOMMU domains through a new set of Hyper-V
hypercall interfaces, handling domain allocation, attachment,
and detachment for endpoint devices.
* Nested Translation Support
This implementation leverages guest-managed stage-1 I/O page
tables nested with host stage-2 translations. It is built
upon the consolidated IOMMU page table framework (IOMMU_PT).
This design eliminates the need for emulating map operations.
Both Intel VT-d and AMD IOMMU platforms are supported.
* IOTLB Invalidation
IOTLB invalidation requests are marshaled and issued to the
hypervisor through the same hypercall mechanism; a simplified
sketch of this pattern follows the list. Both domain-selective
and page-selective flushes are supported.
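As a concrete illustration of the marshaling pattern shared by these
paths, here is a simplified excerpt of hv_flush_device_domain() from
patch 3 (error handling omitted):

	unsigned long flags;
	u64 status;
	struct hv_input_flush_device_domain *input;

	local_irq_save(flags);

	/* Per-CPU hypercall input page; IRQs stay disabled while it is in use */
	input = *this_cpu_ptr(hyperv_pcpu_input_arg);
	memset(input, 0, sizeof(*input));
	input->device_domain = hv_domain->device_domain;
	status = hv_do_hypercall(HVCALL_FLUSH_DEVICE_DOMAIN, input, NULL);

	local_irq_restore(flags);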
Implementation Notes:
* Platform Support
The current implementation targets x86 platforms with Intel
VT-d and AMD IOMMU hardware.
* MSI Region Handling
The hardware MSI region is hard-coded to the standard x86
interrupt range (0xfee00000 - 0xfeefffff). Future updates may
allow this configuration to be queried via hypercalls if new
hardware platforms are to be supported.
* Reserved Regions (RMRR)
There is currently no requirement to support assigned devices with
ACPI RMRR limitations. Consequently, this patch series does not
specify or query reserved memory regions.
Testing:
This series has been validated with the following configurations:
- Intel DSA devices assigned to the guest, tested with dmatest.
- NVMe devices assigned to the guest on AMD platforms, tested
with fio.
- dma_map_benchmark for DMA mapping performance evaluation.
Changes since RFC v1 [1]:
- Scoped platform support to x86 only (Intel VT-d and AMD IOMMU);
initialization now uses x86_init.iommu.iommu_init
- Added page-selective IOTLB flush support (new Patch 4)
- Disabled device ATS in hv_iommu_release_device()
- Addressed review comments from Michael Kelley:
- Reversed dependency: pvIOMMU exports registration API for
pci-hyperv to call, instead of pci-hyperv exporting
hv_build_logical_dev_id()
- Dropped separate output page allocation patch; hypercall input
and output now share the same per-CPU page
- Cleaned up Kconfig (removed PCI_HYPERV dependency, unnecessary
selects)
- Removed dev_list, per-domain spinlock, and syscore_ops
- Removed forward declarations by reordering functions
- Fixed typos, cleaned up Kconfig selects, improved pr_info
messages, etc.
[1] https://lore.kernel.org/linux-hyperv/20251209051128.76913-1-zhangyu1@linux.microsoft.com/
Easwar Hariharan (1):
iommu: Move Hyper-V IOMMU driver to its own subdirectory
Wei Liu (1):
hyperv: Introduce new hypercall interfaces used by Hyper-V guest IOMMU
Yu Zhang (2):
iommu/hyperv: Add para-virtualized IOMMU support for Hyper-V guest
iommu/hyperv: Add page-selective IOTLB flush support
MAINTAINERS | 2 +-
arch/x86/hyperv/hv_init.c | 4 +
arch/x86/include/asm/mshyperv.h | 4 +
drivers/iommu/Kconfig | 10 +-
drivers/iommu/Makefile | 2 +-
drivers/iommu/hyperv/Kconfig | 27 +
drivers/iommu/hyperv/Makefile | 3 +
drivers/iommu/hyperv/iommu.c | 794 ++++++++++++++++++
drivers/iommu/hyperv/iommu.h | 54 ++
.../irq_remapping.c} | 2 +-
drivers/pci/controller/pci-hyperv.c | 19 +-
include/asm-generic/mshyperv.h | 12 +
include/hyperv/hvgdk_mini.h | 9 +
include/hyperv/hvhdk_mini.h | 141 ++++
14 files changed, 1070 insertions(+), 13 deletions(-)
create mode 100644 drivers/iommu/hyperv/Kconfig
create mode 100644 drivers/iommu/hyperv/Makefile
create mode 100644 drivers/iommu/hyperv/iommu.c
create mode 100644 drivers/iommu/hyperv/iommu.h
rename drivers/iommu/{hyperv-iommu.c => hyperv/irq_remapping.c} (99%)
--
2.52.0
^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH v1 1/4] iommu: Move Hyper-V IOMMU driver to its own subdirectory
2026-05-11 16:24 [PATCH v1 0/4] Hyper-V: Add para-virtualized IOMMU support for Linux guests Yu Zhang
@ 2026-05-11 16:24 ` Yu Zhang
2026-05-11 16:24 ` [PATCH v1 2/4] hyperv: Introduce new hypercall interfaces used by Hyper-V guest IOMMU Yu Zhang
` (2 subsequent siblings)
3 siblings, 0 replies; 19+ messages in thread
From: Yu Zhang @ 2026-05-11 16:24 UTC (permalink / raw)
To: linux-kernel, linux-hyperv, iommu, linux-pci, linux-arch
Cc: wei.liu, kys, haiyangz, decui, longli, joro, will, robin.murphy,
bhelgaas, kwilczynski, lpieralisi, mani, robh, arnd, jgg,
mhklinux, jacob.pan, tgopinath, easwar.hariharan
From: Easwar Hariharan <eahariha@linux.microsoft.com>
The Hyper-V IOMMU driver currently supports only IRQ remapping.
Since DMA remapping support will be added, prepare a directory
to contain all the different feature files.
This is a simple rename commit and has no functional changes.
Signed-off-by: Easwar Hariharan <eahariha@linux.microsoft.com>
Signed-off-by: Yu Zhang <zhangyu1@linux.microsoft.com>
---
MAINTAINERS | 2 +-
drivers/iommu/Kconfig | 10 +---------
drivers/iommu/Makefile | 2 +-
drivers/iommu/hyperv/Kconfig | 10 ++++++++++
drivers/iommu/hyperv/Makefile | 2 ++
.../iommu/{hyperv-iommu.c => hyperv/irq_remapping.c} | 2 +-
6 files changed, 16 insertions(+), 12 deletions(-)
create mode 100644 drivers/iommu/hyperv/Kconfig
create mode 100644 drivers/iommu/hyperv/Makefile
rename drivers/iommu/{hyperv-iommu.c => hyperv/irq_remapping.c} (99%)
diff --git a/MAINTAINERS b/MAINTAINERS
index b2040011a386..1519de2a2a09 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -11979,7 +11979,7 @@ F: drivers/clocksource/hyperv_timer.c
F: drivers/hid/hid-hyperv.c
F: drivers/hv/
F: drivers/input/serio/hyperv-keyboard.c
-F: drivers/iommu/hyperv-iommu.c
+F: drivers/iommu/hyperv/
F: drivers/net/ethernet/microsoft/
F: drivers/net/hyperv/
F: drivers/pci/controller/pci-hyperv-intf.c
diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index f86262b11416..0bdf2ac3d782 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -195,6 +195,7 @@ config MSM_IOMMU
source "drivers/iommu/amd/Kconfig"
source "drivers/iommu/arm/Kconfig"
source "drivers/iommu/intel/Kconfig"
+source "drivers/iommu/hyperv/Kconfig"
source "drivers/iommu/iommufd/Kconfig"
source "drivers/iommu/riscv/Kconfig"
@@ -351,15 +352,6 @@ config MTK_IOMMU_V1
if unsure, say N here.
-config HYPERV_IOMMU
- bool "Hyper-V IRQ Handling"
- depends on HYPERV && X86
- select IOMMU_API
- default HYPERV
- help
- Stub IOMMU driver to handle IRQs to support Hyper-V Linux
- guest and root partitions.
-
config VIRTIO_IOMMU
tristate "Virtio IOMMU driver"
depends on VIRTIO
diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
index 0275821f4ef9..32f3ca758556 100644
--- a/drivers/iommu/Makefile
+++ b/drivers/iommu/Makefile
@@ -4,6 +4,7 @@ obj-$(CONFIG_AMD_IOMMU) += amd/
obj-$(CONFIG_INTEL_IOMMU) += intel/
obj-$(CONFIG_RISCV_IOMMU) += riscv/
obj-$(CONFIG_GENERIC_PT) += generic_pt/fmt/
+obj-$(CONFIG_HYPERV_IOMMU) += hyperv/
obj-$(CONFIG_IOMMU_API) += iommu.o
obj-$(CONFIG_IOMMU_SUPPORT) += iommu-pages.o
obj-$(CONFIG_IOMMU_API) += iommu-traces.o
@@ -30,7 +31,6 @@ obj-$(CONFIG_TEGRA_IOMMU_SMMU) += tegra-smmu.o
obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o
obj-$(CONFIG_FSL_PAMU) += fsl_pamu.o fsl_pamu_domain.o
obj-$(CONFIG_S390_IOMMU) += s390-iommu.o
-obj-$(CONFIG_HYPERV_IOMMU) += hyperv-iommu.o
obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o
obj-$(CONFIG_IOMMU_SVA) += iommu-sva.o
obj-$(CONFIG_IOMMU_IOPF) += io-pgfault.o
diff --git a/drivers/iommu/hyperv/Kconfig b/drivers/iommu/hyperv/Kconfig
new file mode 100644
index 000000000000..30f40d867036
--- /dev/null
+++ b/drivers/iommu/hyperv/Kconfig
@@ -0,0 +1,10 @@
+# SPDX-License-Identifier: GPL-2.0-only
+# HyperV paravirtualized IOMMU support
+config HYPERV_IOMMU
+ bool "Hyper-V IRQ Handling"
+ depends on HYPERV && X86
+ select IOMMU_API
+ default HYPERV
+ help
+ Stub IOMMU driver to handle IRQs to support Hyper-V Linux
+ guest and root partitions.
diff --git a/drivers/iommu/hyperv/Makefile b/drivers/iommu/hyperv/Makefile
new file mode 100644
index 000000000000..9f557bad94ff
--- /dev/null
+++ b/drivers/iommu/hyperv/Makefile
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0
+obj-$(CONFIG_HYPERV_IOMMU) += irq_remapping.o
diff --git a/drivers/iommu/hyperv-iommu.c b/drivers/iommu/hyperv/irq_remapping.c
similarity index 99%
rename from drivers/iommu/hyperv-iommu.c
rename to drivers/iommu/hyperv/irq_remapping.c
index 479103261ae6..62a94a8c9e95 100644
--- a/drivers/iommu/hyperv-iommu.c
+++ b/drivers/iommu/hyperv/irq_remapping.c
@@ -22,7 +22,7 @@
#include <asm/hypervisor.h>
#include <asm/mshyperv.h>
-#include "irq_remapping.h"
+#include "../irq_remapping.h"
#ifdef CONFIG_IRQ_REMAP
--
2.52.0
^ permalink raw reply related [flat|nested] 19+ messages in thread
* [PATCH v1 2/4] hyperv: Introduce new hypercall interfaces used by Hyper-V guest IOMMU
2026-05-11 16:24 [PATCH v1 0/4] Hyper-V: Add para-virtualized IOMMU support for Linux guests Yu Zhang
2026-05-11 16:24 ` [PATCH v1 1/4] iommu: Move Hyper-V IOMMU driver to its own subdirectory Yu Zhang
@ 2026-05-11 16:24 ` Yu Zhang
2026-05-12 21:24 ` sashiko-bot
2026-05-11 16:24 ` [PATCH v1 3/4] iommu/hyperv: Add para-virtualized IOMMU support for Hyper-V guest Yu Zhang
2026-05-11 16:24 ` [PATCH v1 4/4] iommu/hyperv: Add page-selective IOTLB flush support Yu Zhang
3 siblings, 1 reply; 19+ messages in thread
From: Yu Zhang @ 2026-05-11 16:24 UTC (permalink / raw)
To: linux-kernel, linux-hyperv, iommu, linux-pci, linux-arch
Cc: wei.liu, kys, haiyangz, decui, longli, joro, will, robin.murphy,
bhelgaas, kwilczynski, lpieralisi, mani, robh, arnd, jgg,
mhklinux, jacob.pan, tgopinath, easwar.hariharan
From: Wei Liu <wei.liu@kernel.org>
Hyper-V guest IOMMU is a para-virtualized IOMMU based on hypercalls.
Introduce the hypercalls used by the child partition to interact with
this facility.
These hypercalls fall into the following categories:
- Detection and capability: HVCALL_GET_IOMMU_CAPABILITIES is used to
detect the existence and capabilities of the guest IOMMU.
- Device management: HVCALL_GET_LOGICAL_DEVICE_PROPERTY is used to
check whether an endpoint device is managed by the guest IOMMU.
- Domain management: A set of hypercalls is provided to handle the
creation, configuration, and deletion of guest domains, as well as
the attachment/detachment of endpoint devices to/from those domains.
- IOTLB flushing: HVCALL_FLUSH_DEVICE_DOMAIN is used to ask Hyper-V
for a domain-selective IOTLB flush (whose handler may flush the
device TLB as well).
Signed-off-by: Wei Liu <wei.liu@kernel.org>
Co-developed-by: Easwar Hariharan <easwar.hariharan@linux.microsoft.com>
Signed-off-by: Easwar Hariharan <easwar.hariharan@linux.microsoft.com>
Co-developed-by: Yu Zhang <zhangyu1@linux.microsoft.com>
Signed-off-by: Yu Zhang <zhangyu1@linux.microsoft.com>
---
include/hyperv/hvgdk_mini.h | 8 +++
include/hyperv/hvhdk_mini.h | 124 ++++++++++++++++++++++++++++++++++++
2 files changed, 132 insertions(+)
diff --git a/include/hyperv/hvgdk_mini.h b/include/hyperv/hvgdk_mini.h
index 6a4e8b9d570f..5bdbb44da112 100644
--- a/include/hyperv/hvgdk_mini.h
+++ b/include/hyperv/hvgdk_mini.h
@@ -486,10 +486,16 @@ union hv_vp_assist_msr_contents { /* HV_REGISTER_VP_ASSIST_PAGE */
#define HVCALL_GET_VP_INDEX_FROM_APIC_ID 0x009a
#define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_SPACE 0x00af
#define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_LIST 0x00b0
+#define HVCALL_CREATE_DEVICE_DOMAIN 0x00b1
+#define HVCALL_ATTACH_DEVICE_DOMAIN 0x00b2
#define HVCALL_SIGNAL_EVENT_DIRECT 0x00c0
#define HVCALL_POST_MESSAGE_DIRECT 0x00c1
#define HVCALL_DISPATCH_VP 0x00c2
+#define HVCALL_DETACH_DEVICE_DOMAIN 0x00c4
+#define HVCALL_DELETE_DEVICE_DOMAIN 0x00c5
#define HVCALL_GET_GPA_PAGES_ACCESS_STATES 0x00c9
+#define HVCALL_CONFIGURE_DEVICE_DOMAIN 0x00ce
+#define HVCALL_FLUSH_DEVICE_DOMAIN 0x00d0
#define HVCALL_ACQUIRE_SPARSE_SPA_PAGE_HOST_ACCESS 0x00d7
#define HVCALL_RELEASE_SPARSE_SPA_PAGE_HOST_ACCESS 0x00d8
#define HVCALL_MODIFY_SPARSE_GPA_PAGE_HOST_VISIBILITY 0x00db
@@ -502,6 +508,8 @@ union hv_vp_assist_msr_contents { /* HV_REGISTER_VP_ASSIST_PAGE */
#define HVCALL_MMIO_READ 0x0106
#define HVCALL_MMIO_WRITE 0x0107
#define HVCALL_DISABLE_HYP_EX 0x010f
+#define HVCALL_GET_IOMMU_CAPABILITIES 0x0125
+#define HVCALL_GET_LOGICAL_DEVICE_PROPERTY 0x0127
#define HVCALL_MAP_STATS_PAGE2 0x0131
/* HV_HYPERCALL_INPUT */
diff --git a/include/hyperv/hvhdk_mini.h b/include/hyperv/hvhdk_mini.h
index b4cb2fa26e9b..493608e791b4 100644
--- a/include/hyperv/hvhdk_mini.h
+++ b/include/hyperv/hvhdk_mini.h
@@ -547,4 +547,128 @@ union hv_device_id { /* HV_DEVICE_ID */
} acpi;
} __packed;
+/* Device domain types */
+#define HV_DEVICE_DOMAIN_TYPE_S1 1 /* Stage 1 domain */
+
+/* ID for default domain and NULL domain */
+#define HV_DEVICE_DOMAIN_ID_DEFAULT 0
+#define HV_DEVICE_DOMAIN_ID_NULL 0xFFFFFFFFULL
+
+union hv_device_domain_id {
+ u64 as_uint64;
+ struct {
+ u32 type: 4;
+ u32 reserved: 28;
+ u32 id;
+ } __packed;
+};
+
+struct hv_input_device_domain {
+ u64 partition_id;
+ union hv_input_vtl owner_vtl;
+ u8 padding[7];
+ union hv_device_domain_id domain_id;
+} __packed;
+
+union hv_create_device_domain_flags {
+ u32 as_uint32;
+ struct {
+ u32 forward_progress_required: 1;
+ u32 inherit_owning_vtl: 1;
+ u32 reserved: 30;
+ } __packed;
+};
+
+struct hv_input_create_device_domain {
+ struct hv_input_device_domain device_domain;
+ union hv_create_device_domain_flags create_device_domain_flags;
+} __packed;
+
+struct hv_input_delete_device_domain {
+ struct hv_input_device_domain device_domain;
+} __packed;
+
+struct hv_input_attach_device_domain {
+ struct hv_input_device_domain device_domain;
+ union hv_device_id device_id;
+} __packed;
+
+struct hv_input_detach_device_domain {
+ u64 partition_id;
+ union hv_device_id device_id;
+} __packed;
+
+struct hv_device_domain_settings {
+ struct {
+ /*
+ * Enable translations. If not enabled, all transactions bypass
+ * S1 translation.
+ */
+ u64 translation_enabled: 1;
+ u64 blocked: 1;
+ /*
+ * First stage address translation paging mode:
+ * 0: 4-level paging (default)
+ * 1: 5-level paging
+ */
+ u64 first_stage_paging_mode: 1;
+ u64 reserved: 61;
+ } flags;
+
+ /* Address of translation table */
+ u64 page_table_root;
+} __packed;
+
+struct hv_input_configure_device_domain {
+ struct hv_input_device_domain device_domain;
+ struct hv_device_domain_settings settings;
+} __packed;
+
+struct hv_input_get_iommu_capabilities {
+ u64 partition_id;
+ u64 reserved;
+} __packed;
+
+struct hv_output_get_iommu_capabilities {
+ u32 size;
+ u16 reserved;
+ u8 max_iova_width;
+ u8 max_pasid_width;
+
+#define HV_IOMMU_CAP_PRESENT (1ULL << 0)
+#define HV_IOMMU_CAP_S2 (1ULL << 1)
+#define HV_IOMMU_CAP_S1 (1ULL << 2)
+#define HV_IOMMU_CAP_S1_5LVL (1ULL << 3)
+#define HV_IOMMU_CAP_PASID (1ULL << 4)
+#define HV_IOMMU_CAP_ATS (1ULL << 5)
+#define HV_IOMMU_CAP_PRI (1ULL << 6)
+
+ u64 iommu_cap;
+ u64 pgsize_bitmap;
+} __packed;
+
+enum hv_logical_device_property_code {
+ HV_LOGICAL_DEVICE_PROPERTY_PVIOMMU = 10,
+};
+
+struct hv_input_get_logical_device_property {
+ u64 partition_id;
+ u64 logical_device_id;
+ /* Takes values from enum hv_logical_device_property_code. */
+ u32 code;
+ u32 reserved;
+} __packed;
+
+struct hv_output_get_logical_device_property {
+#define HV_DEVICE_IOMMU_ENABLED (1ULL << 0)
+ u64 device_iommu;
+ u64 reserved;
+} __packed;
+
+struct hv_input_flush_device_domain {
+ struct hv_input_device_domain device_domain;
+ u32 flags;
+ u32 reserved;
+} __packed;
+
#endif /* _HV_HVHDK_MINI_H */
--
2.52.0
^ permalink raw reply related [flat|nested] 19+ messages in thread
* [PATCH v1 3/4] iommu/hyperv: Add para-virtualized IOMMU support for Hyper-V guest
2026-05-11 16:24 [PATCH v1 0/4] Hyper-V: Add para-virtualized IOMMU support for Linux guests Yu Zhang
2026-05-11 16:24 ` [PATCH v1 1/4] iommu: Move Hyper-V IOMMU driver to its own subdirectory Yu Zhang
2026-05-11 16:24 ` [PATCH v1 2/4] hyperv: Introduce new hypercall interfaces used by Hyper-V guest IOMMU Yu Zhang
@ 2026-05-11 16:24 ` Yu Zhang
2026-05-12 22:30 ` sashiko-bot
` (2 more replies)
2026-05-11 16:24 ` [PATCH v1 4/4] iommu/hyperv: Add page-selective IOTLB flush support Yu Zhang
3 siblings, 3 replies; 19+ messages in thread
From: Yu Zhang @ 2026-05-11 16:24 UTC (permalink / raw)
To: linux-kernel, linux-hyperv, iommu, linux-pci, linux-arch
Cc: wei.liu, kys, haiyangz, decui, longli, joro, will, robin.murphy,
bhelgaas, kwilczynski, lpieralisi, mani, robh, arnd, jgg,
mhklinux, jacob.pan, tgopinath, easwar.hariharan
Add a para-virtualized IOMMU driver for Linux guests running on Hyper-V.
This driver implements stage-1 IO translation within the guest OS.
It integrates with the Linux IOMMU core, utilizing Hyper-V hypercalls
for:
- Capability discovery
- Domain allocation, configuration, and deallocation
- Device attachment and detachment
- IOTLB invalidation
The driver constructs x86-compatible stage-1 IO page tables in guest
memory using the consolidated IO page table helpers. This allows
the guest to manage stage-1 translations independently of vendor-
specific drivers (like Intel VT-d or AMD IOMMU).
Hyper-V consumes this stage-1 IO page table when a device domain is
created and configured, and nests it with the host's stage-2 IO page
tables, therefore eliminating the VM exits for guest IOMMU mapping
operations. For unmapping operations, VM exits to perform the IOTLB
flush are still unavoidable.
Hyper-V identifies each PCI pass-thru device by a logical device ID
in its hypercall interface. The vPCI driver (pci-hyperv) registers the
per-bus portion of this ID with the pvIOMMU driver during bus probe.
The pvIOMMU driver stores this mapping and combines it with the function
number of the endpoint PCI device to form the complete ID for hypercalls.
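The combination step itself is a single OR, as done by
hv_iommu_lookup_logical_dev_id() in this patch:

	/* Prefix registered by pci-hyperv; function number in the low bits */
	*logical_id = (u64)bus->logical_dev_id_prefix | PCI_FUNC(pdev->devfn);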
Co-developed-by: Wei Liu <wei.liu@kernel.org>
Signed-off-by: Wei Liu <wei.liu@kernel.org>
Co-developed-by: Easwar Hariharan <easwar.hariharan@linux.microsoft.com>
Signed-off-by: Easwar Hariharan <easwar.hariharan@linux.microsoft.com>
Signed-off-by: Yu Zhang <zhangyu1@linux.microsoft.com>
---
arch/x86/hyperv/hv_init.c | 4 +
arch/x86/include/asm/mshyperv.h | 4 +
drivers/iommu/hyperv/Kconfig | 17 +
drivers/iommu/hyperv/Makefile | 1 +
drivers/iommu/hyperv/iommu.c | 705 ++++++++++++++++++++++++++++
drivers/iommu/hyperv/iommu.h | 54 +++
drivers/pci/controller/pci-hyperv.c | 19 +-
include/asm-generic/mshyperv.h | 12 +
8 files changed, 815 insertions(+), 1 deletion(-)
create mode 100644 drivers/iommu/hyperv/iommu.c
create mode 100644 drivers/iommu/hyperv/iommu.h
diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
index 323adc93f2dc..2c8ff8e06249 100644
--- a/arch/x86/hyperv/hv_init.c
+++ b/arch/x86/hyperv/hv_init.c
@@ -578,6 +578,10 @@ void __init hyperv_init(void)
old_setup_percpu_clockev = x86_init.timers.setup_percpu_clockev;
x86_init.timers.setup_percpu_clockev = hv_stimer_setup_percpu_clockev;
+#ifdef CONFIG_HYPERV_PVIOMMU
+ x86_init.iommu.iommu_init = hv_iommu_init;
+#endif
+
hv_apic_init();
x86_init.pci.arch_init = hv_pci_init;
diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
index f64393e853ee..20d947c2c758 100644
--- a/arch/x86/include/asm/mshyperv.h
+++ b/arch/x86/include/asm/mshyperv.h
@@ -313,6 +313,10 @@ static inline void mshv_vtl_return_hypercall(void) {}
static inline void __mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0) {}
#endif
+#ifdef CONFIG_HYPERV_PVIOMMU
+int __init hv_iommu_init(void);
+#endif
+
#include <asm-generic/mshyperv.h>
#endif
diff --git a/drivers/iommu/hyperv/Kconfig b/drivers/iommu/hyperv/Kconfig
index 30f40d867036..9e658d5c9a77 100644
--- a/drivers/iommu/hyperv/Kconfig
+++ b/drivers/iommu/hyperv/Kconfig
@@ -8,3 +8,20 @@ config HYPERV_IOMMU
help
Stub IOMMU driver to handle IRQs to support Hyper-V Linux
guest and root partitions.
+
+if HYPERV_IOMMU
+config HYPERV_PVIOMMU
+ bool "Microsoft Hypervisor para-virtualized IOMMU support"
+ depends on X86 && HYPERV
+ select IOMMU_API
+ select GENERIC_PT
+ select IOMMU_PT
+ select IOMMU_PT_X86_64
+ select IOMMU_IOVA
+ default HYPERV
+ help
+ Para-virtualized IOMMU driver for Linux guests running on
+ Microsoft Hyper-V. Provides DMA remapping and IOTLB
+ flush support to enable DMA isolation for devices
+ assigned to the guest.
+endif
diff --git a/drivers/iommu/hyperv/Makefile b/drivers/iommu/hyperv/Makefile
index 9f557bad94ff..8669741c0a51 100644
--- a/drivers/iommu/hyperv/Makefile
+++ b/drivers/iommu/hyperv/Makefile
@@ -1,2 +1,3 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_HYPERV_IOMMU) += irq_remapping.o
+obj-$(CONFIG_HYPERV_PVIOMMU) += iommu.o
diff --git a/drivers/iommu/hyperv/iommu.c b/drivers/iommu/hyperv/iommu.c
new file mode 100644
index 000000000000..e5fc625314b5
--- /dev/null
+++ b/drivers/iommu/hyperv/iommu.c
@@ -0,0 +1,705 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Hyper-V IOMMU driver.
+ *
+ * Copyright (C) 2019, 2024-2026 Microsoft, Inc.
+ */
+
+#define pr_fmt(fmt) "Hyper-V pvIOMMU: " fmt
+#define dev_fmt(fmt) pr_fmt(fmt)
+
+#include <linux/iommu.h>
+#include <linux/pci.h>
+#include <linux/dma-map-ops.h>
+#include <linux/generic_pt/iommu.h>
+#include <linux/pci-ats.h>
+
+#include <asm/iommu.h>
+#include <asm/hypervisor.h>
+#include <asm/mshyperv.h>
+
+#include "iommu.h"
+#include "../iommu-pages.h"
+
+struct hv_iommu_dev *hv_iommu_device;
+
+/*
+ * Identity and blocking domains are static singletons: identity is a 1:1
+ * passthrough with no page table, blocking rejects all DMA. Neither holds
+ * per-IOMMU state, so one instance suffices even with multiple vIOMMUs.
+ */
+static struct hv_iommu_domain hv_identity_domain;
+static struct hv_iommu_domain hv_blocking_domain;
+static const struct iommu_domain_ops hv_iommu_identity_domain_ops;
+static const struct iommu_domain_ops hv_iommu_blocking_domain_ops;
+static struct iommu_ops hv_iommu_ops;
+static LIST_HEAD(hv_iommu_pci_bus_list);
+static DEFINE_SPINLOCK(hv_iommu_pci_bus_lock);
+
+#define hv_iommu_present(iommu_cap) (iommu_cap & HV_IOMMU_CAP_PRESENT)
+#define hv_iommu_s1_domain_supported(iommu_cap) (iommu_cap & HV_IOMMU_CAP_S1)
+#define hv_iommu_5lvl_supported(iommu_cap) (iommu_cap & HV_IOMMU_CAP_S1_5LVL)
+#define hv_iommu_ats_supported(iommu_cap) (iommu_cap & HV_IOMMU_CAP_ATS)
+
+int hv_iommu_register_pci_bus(int pci_domain_nr, u32 logical_dev_id_prefix)
+{
+ struct hv_pci_busdata *bus, *new;
+ int ret = 0;
+
+ if (no_iommu || !iommu_detected)
+ return 0;
+
+ new = kzalloc_obj(*new, GFP_KERNEL);
+ if (!new)
+ return -ENOMEM;
+
+ spin_lock(&hv_iommu_pci_bus_lock);
+ list_for_each_entry(bus, &hv_iommu_pci_bus_list, list) {
+ if (bus->pci_domain_nr != pci_domain_nr)
+ continue;
+
+ if (bus->logical_dev_id_prefix != logical_dev_id_prefix) {
+ pr_err("stale registration for PCI domain %d (old prefix 0x%08x, new 0x%08x)\n",
+ pci_domain_nr, bus->logical_dev_id_prefix,
+ logical_dev_id_prefix);
+ ret = -EEXIST;
+ }
+
+ goto out_free;
+ }
+
+ new->pci_domain_nr = pci_domain_nr;
+ new->logical_dev_id_prefix = logical_dev_id_prefix;
+ list_add(&new->list, &hv_iommu_pci_bus_list);
+ spin_unlock(&hv_iommu_pci_bus_lock);
+ return 0;
+
+out_free:
+ spin_unlock(&hv_iommu_pci_bus_lock);
+ kfree(new);
+ return ret;
+}
+EXPORT_SYMBOL_FOR_MODULES(hv_iommu_register_pci_bus, "pci-hyperv");
+
+void hv_iommu_unregister_pci_bus(int pci_domain_nr)
+{
+ struct hv_pci_busdata *bus, *tmp;
+
+ spin_lock(&hv_iommu_pci_bus_lock);
+ list_for_each_entry_safe(bus, tmp, &hv_iommu_pci_bus_list, list) {
+ if (bus->pci_domain_nr == pci_domain_nr) {
+ list_del(&bus->list);
+ kfree(bus);
+ break;
+ }
+ }
+ spin_unlock(&hv_iommu_pci_bus_lock);
+}
+EXPORT_SYMBOL_FOR_MODULES(hv_iommu_unregister_pci_bus, "pci-hyperv");
+
+/*
+ * Look up the logical device ID for a vPCI device. Returns 0 on success
+ * with *logical_id filled in; -ENODEV if no entry registered for this
+ * device's vPCI bus.
+ */
+static int hv_iommu_lookup_logical_dev_id(struct pci_dev *pdev, u64 *logical_id)
+{
+ struct hv_pci_busdata *bus;
+ int domain = pci_domain_nr(pdev->bus);
+ int ret = -ENODEV;
+
+ spin_lock(&hv_iommu_pci_bus_lock);
+ list_for_each_entry(bus, &hv_iommu_pci_bus_list, list) {
+ if (bus->pci_domain_nr == domain) {
+ *logical_id = (u64)bus->logical_dev_id_prefix |
+ PCI_FUNC(pdev->devfn);
+ ret = 0;
+ break;
+ }
+ }
+ spin_unlock(&hv_iommu_pci_bus_lock);
+ return ret;
+}
+
+static int hv_create_device_domain(struct hv_iommu_domain *hv_domain, u32 domain_stage)
+{
+ int ret;
+ u64 status;
+ unsigned long flags;
+ struct hv_input_create_device_domain *input;
+
+ ret = ida_alloc_range(&hv_iommu_device->domain_ids,
+ hv_iommu_device->first_domain, hv_iommu_device->last_domain,
+ GFP_KERNEL);
+ if (ret < 0)
+ return ret;
+
+ hv_domain->device_domain.partition_id = HV_PARTITION_ID_SELF;
+ hv_domain->device_domain.domain_id.type = domain_stage;
+ hv_domain->device_domain.domain_id.id = ret;
+ hv_domain->hv_iommu = hv_iommu_device;
+
+ local_irq_save(flags);
+
+ input = *this_cpu_ptr(hyperv_pcpu_input_arg);
+ memset(input, 0, sizeof(*input));
+ input->device_domain = hv_domain->device_domain;
+ input->create_device_domain_flags.forward_progress_required = 1;
+ input->create_device_domain_flags.inherit_owning_vtl = 0;
+ status = hv_do_hypercall(HVCALL_CREATE_DEVICE_DOMAIN, input, NULL);
+
+ local_irq_restore(flags);
+
+ if (!hv_result_success(status)) {
+ pr_err("HVCALL_CREATE_DEVICE_DOMAIN failed, status %lld\n", status);
+ ida_free(&hv_iommu_device->domain_ids, hv_domain->device_domain.domain_id.id);
+ }
+
+ return hv_result_to_errno(status);
+}
+
+static void hv_delete_device_domain(struct hv_iommu_domain *hv_domain)
+{
+ u64 status;
+ unsigned long flags;
+ struct hv_input_delete_device_domain *input;
+
+ local_irq_save(flags);
+
+ input = *this_cpu_ptr(hyperv_pcpu_input_arg);
+ memset(input, 0, sizeof(*input));
+ input->device_domain = hv_domain->device_domain;
+ status = hv_do_hypercall(HVCALL_DELETE_DEVICE_DOMAIN, input, NULL);
+
+ local_irq_restore(flags);
+
+ if (!hv_result_success(status))
+ pr_err("HVCALL_DELETE_DEVICE_DOMAIN failed, status %lld\n", status);
+
+ ida_free(&hv_domain->hv_iommu->domain_ids, hv_domain->device_domain.domain_id.id);
+}
+
+static bool hv_iommu_capable(struct device *dev, enum iommu_cap cap)
+{
+ switch (cap) {
+ case IOMMU_CAP_CACHE_COHERENCY:
+ return true;
+ case IOMMU_CAP_DEFERRED_FLUSH:
+ return true;
+ default:
+ return false;
+ }
+}
+
+static void hv_flush_device_domain(struct hv_iommu_domain *hv_domain)
+{
+ u64 status;
+ unsigned long flags;
+ struct hv_input_flush_device_domain *input;
+
+ local_irq_save(flags);
+
+ input = *this_cpu_ptr(hyperv_pcpu_input_arg);
+ memset(input, 0, sizeof(*input));
+ input->device_domain = hv_domain->device_domain;
+ status = hv_do_hypercall(HVCALL_FLUSH_DEVICE_DOMAIN, input, NULL);
+
+ local_irq_restore(flags);
+
+ if (!hv_result_success(status))
+ pr_err("HVCALL_FLUSH_DEVICE_DOMAIN failed, status %lld\n", status);
+}
+
+static void hv_iommu_detach_dev(struct iommu_domain *domain, struct device *dev)
+{
+ u64 status;
+ unsigned long flags;
+ struct hv_input_detach_device_domain *input;
+ struct pci_dev *pdev;
+ struct hv_iommu_domain *hv_domain = to_hv_iommu_domain(domain);
+ struct hv_iommu_endpoint *vdev = dev_iommu_priv_get(dev);
+
+ /* See the attach function, only PCI devices for now */
+ if (!dev_is_pci(dev) || vdev->hv_domain != hv_domain)
+ return;
+
+ pdev = to_pci_dev(dev);
+
+ dev_dbg(dev, "detaching from domain %d\n", hv_domain->device_domain.domain_id.id);
+
+ local_irq_save(flags);
+
+ input = *this_cpu_ptr(hyperv_pcpu_input_arg);
+ memset(input, 0, sizeof(*input));
+ input->partition_id = HV_PARTITION_ID_SELF;
+ if (hv_iommu_lookup_logical_dev_id(pdev, &input->device_id.as_uint64)) {
+ local_irq_restore(flags);
+ dev_warn(&pdev->dev, "no IOMMU registration for vPCI bus on detach\n");
+ return;
+ }
+ status = hv_do_hypercall(HVCALL_DETACH_DEVICE_DOMAIN, input, NULL);
+
+ local_irq_restore(flags);
+
+ if (!hv_result_success(status))
+ pr_err("HVCALL_DETACH_DEVICE_DOMAIN failed, status %lld\n", status);
+
+ hv_flush_device_domain(hv_domain);
+
+ vdev->hv_domain = NULL;
+}
+
+static int hv_iommu_attach_dev(struct iommu_domain *domain, struct device *dev,
+ struct iommu_domain *old)
+{
+ u64 status;
+ unsigned long flags;
+ struct pci_dev *pdev;
+ struct hv_input_attach_device_domain *input;
+ struct hv_iommu_endpoint *vdev = dev_iommu_priv_get(dev);
+ struct hv_iommu_domain *hv_domain = to_hv_iommu_domain(domain);
+ int ret;
+
+ /* Only allow PCI devices for now */
+ if (!dev_is_pci(dev))
+ return -EINVAL;
+
+ if (vdev->hv_domain == hv_domain)
+ return 0;
+
+ if (vdev->hv_domain)
+ hv_iommu_detach_dev(&vdev->hv_domain->domain, dev);
+
+ pdev = to_pci_dev(dev);
+ dev_dbg(dev, "attaching to domain %d\n",
+ hv_domain->device_domain.domain_id.id);
+
+ local_irq_save(flags);
+
+ input = *this_cpu_ptr(hyperv_pcpu_input_arg);
+ memset(input, 0, sizeof(*input));
+ input->device_domain = hv_domain->device_domain;
+ ret = hv_iommu_lookup_logical_dev_id(pdev, &input->device_id.as_uint64);
+ if (ret) {
+ local_irq_restore(flags);
+ dev_err(&pdev->dev, "no IOMMU registration for vPCI bus\n");
+ return ret;
+ }
+ status = hv_do_hypercall(HVCALL_ATTACH_DEVICE_DOMAIN, input, NULL);
+
+ local_irq_restore(flags);
+
+ if (!hv_result_success(status))
+ pr_err("HVCALL_ATTACH_DEVICE_DOMAIN failed, status %lld\n", status);
+ else
+ vdev->hv_domain = hv_domain;
+
+ return hv_result_to_errno(status);
+}
+
+static int hv_iommu_get_logical_device_property(struct device *dev,
+ u32 code,
+ struct hv_output_get_logical_device_property *property)
+{
+ u64 status, lid;
+ unsigned long flags;
+ int ret;
+ struct hv_input_get_logical_device_property *input;
+ struct hv_output_get_logical_device_property *output;
+
+ local_irq_save(flags);
+
+ input = *this_cpu_ptr(hyperv_pcpu_input_arg);
+ output = *this_cpu_ptr(hyperv_pcpu_input_arg) + sizeof(*input);
+ memset(input, 0, sizeof(*input));
+ input->partition_id = HV_PARTITION_ID_SELF;
+ ret = hv_iommu_lookup_logical_dev_id(to_pci_dev(dev), &lid);
+ if (ret) {
+ local_irq_restore(flags);
+ return ret;
+ }
+ input->logical_device_id = lid;
+ input->code = code;
+ status = hv_do_hypercall(HVCALL_GET_LOGICAL_DEVICE_PROPERTY, input, output);
+ *property = *output;
+
+ local_irq_restore(flags);
+
+ if (!hv_result_success(status))
+ pr_err("HVCALL_GET_LOGICAL_DEVICE_PROPERTY failed, status %lld\n", status);
+
+ return hv_result_to_errno(status);
+}
+
+static struct iommu_device *hv_iommu_probe_device(struct device *dev)
+{
+ struct pci_dev *pdev;
+ struct hv_iommu_endpoint *vdev;
+ struct hv_output_get_logical_device_property device_iommu_property = {0};
+
+ if (!dev_is_pci(dev))
+ return ERR_PTR(-ENODEV);
+
+ pdev = to_pci_dev(dev);
+
+ if (hv_iommu_get_logical_device_property(dev,
+ HV_LOGICAL_DEVICE_PROPERTY_PVIOMMU,
+ &device_iommu_property) ||
+ !(device_iommu_property.device_iommu & HV_DEVICE_IOMMU_ENABLED))
+ return ERR_PTR(-ENODEV);
+
+ vdev = kzalloc_obj(*vdev, GFP_KERNEL);
+ if (!vdev)
+ return ERR_PTR(-ENOMEM);
+
+ vdev->dev = dev;
+ vdev->hv_iommu = hv_iommu_device;
+ dev_iommu_priv_set(dev, vdev);
+
+ if (hv_iommu_ats_supported(hv_iommu_device->cap) &&
+ pci_ats_supported(pdev))
+ pci_enable_ats(pdev, __ffs(hv_iommu_device->pgsize_bitmap));
+
+ return &vdev->hv_iommu->iommu;
+}
+
+static void hv_iommu_release_device(struct device *dev)
+{
+ struct hv_iommu_endpoint *vdev = dev_iommu_priv_get(dev);
+ struct pci_dev *pdev = to_pci_dev(dev);
+
+ if (pdev->ats_enabled)
+ pci_disable_ats(pdev);
+
+ dev_iommu_priv_set(dev, NULL);
+ set_dma_ops(dev, NULL);
+
+ kfree(vdev);
+}
+
+static struct iommu_group *hv_iommu_device_group(struct device *dev)
+{
+ if (dev_is_pci(dev))
+ return pci_device_group(dev);
+ else
+ return generic_device_group(dev);
+}
+
+static int hv_configure_device_domain(struct hv_iommu_domain *hv_domain, u32 domain_type)
+{
+ u64 status;
+ unsigned long flags;
+ struct pt_iommu_x86_64_hw_info pt_info;
+ struct hv_input_configure_device_domain *input;
+
+ local_irq_save(flags);
+
+ input = *this_cpu_ptr(hyperv_pcpu_input_arg);
+ memset(input, 0, sizeof(*input));
+ input->device_domain = hv_domain->device_domain;
+ input->settings.flags.blocked = (domain_type == IOMMU_DOMAIN_BLOCKED);
+ input->settings.flags.translation_enabled = (domain_type != IOMMU_DOMAIN_IDENTITY);
+
+ if (domain_type & __IOMMU_DOMAIN_PAGING) {
+ pt_iommu_x86_64_hw_info(&hv_domain->pt_iommu_x86_64, &pt_info);
+ input->settings.page_table_root = pt_info.gcr3_pt;
+ input->settings.flags.first_stage_paging_mode =
+ pt_info.levels == 5;
+ }
+ status = hv_do_hypercall(HVCALL_CONFIGURE_DEVICE_DOMAIN, input, NULL);
+
+ local_irq_restore(flags);
+
+ if (!hv_result_success(status))
+ pr_err("HVCALL_CONFIGURE_DEVICE_DOMAIN failed, status %lld\n", status);
+
+ return hv_result_to_errno(status);
+}
+
+static int __init hv_initialize_static_domains(void)
+{
+ int ret;
+ struct hv_iommu_domain *hv_domain;
+
+ /* Default stage-1 identity domain */
+ hv_domain = &hv_identity_domain;
+
+ ret = hv_create_device_domain(hv_domain, HV_DEVICE_DOMAIN_TYPE_S1);
+ if (ret)
+ return ret;
+
+ ret = hv_configure_device_domain(hv_domain, IOMMU_DOMAIN_IDENTITY);
+ if (ret)
+ goto delete_identity_domain;
+
+ hv_domain->domain.type = IOMMU_DOMAIN_IDENTITY;
+ hv_domain->domain.ops = &hv_iommu_identity_domain_ops;
+ hv_domain->domain.owner = &hv_iommu_ops;
+ hv_domain->domain.geometry = hv_iommu_device->geometry;
+ hv_domain->domain.pgsize_bitmap = hv_iommu_device->pgsize_bitmap;
+
+ /* Default stage-1 blocked domain */
+ hv_domain = &hv_blocking_domain;
+
+ ret = hv_create_device_domain(hv_domain, HV_DEVICE_DOMAIN_TYPE_S1);
+ if (ret)
+ goto delete_identity_domain;
+
+ ret = hv_configure_device_domain(hv_domain, IOMMU_DOMAIN_BLOCKED);
+ if (ret)
+ goto delete_blocked_domain;
+
+ hv_domain->domain.type = IOMMU_DOMAIN_BLOCKED;
+ hv_domain->domain.ops = &hv_iommu_blocking_domain_ops;
+ hv_domain->domain.owner = &hv_iommu_ops;
+ hv_domain->domain.geometry = hv_iommu_device->geometry;
+ hv_domain->domain.pgsize_bitmap = hv_iommu_device->pgsize_bitmap;
+
+ return 0;
+
+delete_blocked_domain:
+ hv_delete_device_domain(&hv_blocking_domain);
+delete_identity_domain:
+ hv_delete_device_domain(&hv_identity_domain);
+ return ret;
+}
+
+#define INTERRUPT_RANGE_START (0xfee00000)
+#define INTERRUPT_RANGE_END (0xfeefffff)
+static void hv_iommu_get_resv_regions(struct device *dev,
+ struct list_head *head)
+{
+ struct iommu_resv_region *region;
+
+ region = iommu_alloc_resv_region(INTERRUPT_RANGE_START,
+ INTERRUPT_RANGE_END - INTERRUPT_RANGE_START + 1,
+ 0, IOMMU_RESV_MSI, GFP_KERNEL);
+ if (!region)
+ return;
+
+ list_add_tail(&region->list, head);
+}
+
+static void hv_iommu_flush_iotlb_all(struct iommu_domain *domain)
+{
+ hv_flush_device_domain(to_hv_iommu_domain(domain));
+}
+
+static void hv_iommu_iotlb_sync(struct iommu_domain *domain,
+ struct iommu_iotlb_gather *iotlb_gather)
+{
+ hv_flush_device_domain(to_hv_iommu_domain(domain));
+
+ iommu_put_pages_list(&iotlb_gather->freelist);
+}
+
+static void hv_iommu_paging_domain_free(struct iommu_domain *domain)
+{
+ struct hv_iommu_domain *hv_domain = to_hv_iommu_domain(domain);
+
+ /* Free all remaining mappings */
+ pt_iommu_deinit(&hv_domain->pt_iommu);
+
+ hv_delete_device_domain(hv_domain);
+
+ kfree(hv_domain);
+}
+
+static const struct iommu_domain_ops hv_iommu_identity_domain_ops = {
+ .attach_dev = hv_iommu_attach_dev,
+};
+
+static const struct iommu_domain_ops hv_iommu_blocking_domain_ops = {
+ .attach_dev = hv_iommu_attach_dev,
+};
+
+static const struct iommu_domain_ops hv_iommu_paging_domain_ops = {
+ .attach_dev = hv_iommu_attach_dev,
+ IOMMU_PT_DOMAIN_OPS(x86_64),
+ .flush_iotlb_all = hv_iommu_flush_iotlb_all,
+ .iotlb_sync = hv_iommu_iotlb_sync,
+ .free = hv_iommu_paging_domain_free,
+};
+
+static struct iommu_domain *hv_iommu_domain_alloc_paging(struct device *dev)
+{
+ int ret;
+ struct hv_iommu_domain *hv_domain;
+ struct pt_iommu_x86_64_cfg cfg = {};
+
+ hv_domain = kzalloc_obj(*hv_domain, GFP_KERNEL);
+ if (!hv_domain)
+ return ERR_PTR(-ENOMEM);
+
+ ret = hv_create_device_domain(hv_domain, HV_DEVICE_DOMAIN_TYPE_S1);
+ if (ret) {
+ kfree(hv_domain);
+ return ERR_PTR(ret);
+ }
+
+ hv_domain->domain.geometry = hv_iommu_device->geometry;
+ hv_domain->pt_iommu.nid = dev_to_node(dev);
+
+ cfg.common.hw_max_vasz_lg2 = hv_iommu_device->max_iova_width;
+ cfg.common.hw_max_oasz_lg2 = 52;
+ cfg.top_level = (hv_iommu_device->max_iova_width > 48) ? 4 : 3;
+
+ ret = pt_iommu_x86_64_init(&hv_domain->pt_iommu_x86_64, &cfg, GFP_KERNEL);
+ if (ret) {
+ hv_delete_device_domain(hv_domain);
+ kfree(hv_domain);
+ return ERR_PTR(ret);
+ }
+
+ /* Constrain to page sizes the hypervisor supports */
+ hv_domain->domain.pgsize_bitmap &= hv_iommu_device->pgsize_bitmap;
+
+ hv_domain->domain.ops = &hv_iommu_paging_domain_ops;
+
+ ret = hv_configure_device_domain(hv_domain, __IOMMU_DOMAIN_PAGING);
+ if (ret) {
+ pt_iommu_deinit(&hv_domain->pt_iommu);
+ hv_delete_device_domain(hv_domain);
+ kfree(hv_domain);
+ return ERR_PTR(ret);
+ }
+
+ return &hv_domain->domain;
+}
+
+static struct iommu_ops hv_iommu_ops = {
+ .capable = hv_iommu_capable,
+ .domain_alloc_paging = hv_iommu_domain_alloc_paging,
+ .probe_device = hv_iommu_probe_device,
+ .release_device = hv_iommu_release_device,
+ .device_group = hv_iommu_device_group,
+ .get_resv_regions = hv_iommu_get_resv_regions,
+ .owner = THIS_MODULE,
+ .identity_domain = &hv_identity_domain.domain,
+ .blocked_domain = &hv_blocking_domain.domain,
+ .release_domain = &hv_blocking_domain.domain,
+};
+
+static int hv_iommu_detect(struct hv_output_get_iommu_capabilities *hv_iommu_cap)
+{
+ u64 status;
+ unsigned long flags;
+ struct hv_input_get_iommu_capabilities *input;
+ struct hv_output_get_iommu_capabilities *output;
+
+ local_irq_save(flags);
+
+ input = *this_cpu_ptr(hyperv_pcpu_input_arg);
+ output = *this_cpu_ptr(hyperv_pcpu_input_arg) + sizeof(*input);
+ memset(input, 0, sizeof(*input));
+ input->partition_id = HV_PARTITION_ID_SELF;
+ status = hv_do_hypercall(HVCALL_GET_IOMMU_CAPABILITIES, input, output);
+ *hv_iommu_cap = *output;
+
+ local_irq_restore(flags);
+
+ if (!hv_result_success(status))
+ pr_err("HVCALL_GET_IOMMU_CAPABILITIES failed, status %lld\n", status);
+
+ return hv_result_to_errno(status);
+}
+
+static void __init hv_init_iommu_device(struct hv_iommu_dev *hv_iommu,
+ struct hv_output_get_iommu_capabilities *hv_iommu_cap)
+{
+ ida_init(&hv_iommu->domain_ids);
+
+ hv_iommu->cap = hv_iommu_cap->iommu_cap;
+ hv_iommu->max_iova_width = hv_iommu_cap->max_iova_width;
+ if (!hv_iommu_5lvl_supported(hv_iommu->cap) &&
+ hv_iommu->max_iova_width > 48) {
+ pr_info("5-level paging not supported, limiting iova width to 48.\n");
+ hv_iommu->max_iova_width = 48;
+ }
+
+ hv_iommu->geometry = (struct iommu_domain_geometry) {
+ .aperture_start = 0,
+ .aperture_end = (((u64)1) << hv_iommu->max_iova_width) - 1,
+ .force_aperture = true,
+ };
+
+ hv_iommu->first_domain = HV_DEVICE_DOMAIN_ID_DEFAULT + 1;
+ hv_iommu->last_domain = HV_DEVICE_DOMAIN_ID_NULL - 1;
+ /* Only x86 page sizes (4K/2M/1G) are supported */
+ hv_iommu->pgsize_bitmap = hv_iommu_cap->pgsize_bitmap &
+ (SZ_4K | SZ_2M | SZ_1G);
+ if (hv_iommu->pgsize_bitmap != hv_iommu_cap->pgsize_bitmap)
+ pr_warn("unsupported page sizes masked: 0x%llx -> 0x%llx\n",
+ hv_iommu_cap->pgsize_bitmap, hv_iommu->pgsize_bitmap);
+ if (!hv_iommu->pgsize_bitmap) {
+ pr_warn("no supported page sizes, defaulting to 4K\n");
+ hv_iommu->pgsize_bitmap = SZ_4K;
+ }
+ hv_iommu_device = hv_iommu;
+}
+
+int __init hv_iommu_init(void)
+{
+ int ret = 0;
+ struct hv_iommu_dev *hv_iommu = NULL;
+ struct hv_output_get_iommu_capabilities hv_iommu_cap = {0};
+
+ if (no_iommu || iommu_detected)
+ return -ENODEV;
+
+ if (!hv_is_hyperv_initialized())
+ return -ENODEV;
+
+ ret = hv_iommu_detect(&hv_iommu_cap);
+ if (ret) {
+ pr_err("HVCALL_GET_IOMMU_CAPABILITIES failed: %d\n", ret);
+ return -ENODEV;
+ }
+
+ if (!hv_iommu_present(hv_iommu_cap.iommu_cap) ||
+ !hv_iommu_s1_domain_supported(hv_iommu_cap.iommu_cap)) {
+ pr_err("IOMMU capabilities not sufficient: cap=0x%llx\n",
+ hv_iommu_cap.iommu_cap);
+ return -ENODEV;
+ }
+
+ iommu_detected = 1;
+ pci_request_acs();
+
+ hv_iommu = kzalloc_obj(*hv_iommu, GFP_KERNEL);
+ if (!hv_iommu)
+ return -ENOMEM;
+
+ hv_init_iommu_device(hv_iommu, &hv_iommu_cap);
+
+ ret = hv_initialize_static_domains();
+ if (ret) {
+ pr_err("static domains init failed: %d\n", ret);
+ goto err_free;
+ }
+
+ ret = iommu_device_sysfs_add(&hv_iommu->iommu, NULL, NULL, "%s", "hv-iommu");
+ if (ret) {
+ pr_err("iommu_device_sysfs_add failed: %d\n", ret);
+ goto err_delete_static_domains;
+ }
+
+ ret = iommu_device_register(&hv_iommu->iommu, &hv_iommu_ops, NULL);
+ if (ret) {
+ pr_err("iommu_device_register failed: %d\n", ret);
+ goto err_sysfs_remove;
+ }
+
+ pr_info("successfully initialized\n");
+ return 0;
+
+err_sysfs_remove:
+ iommu_device_sysfs_remove(&hv_iommu->iommu);
+err_delete_static_domains:
+ hv_delete_device_domain(&hv_blocking_domain);
+ hv_delete_device_domain(&hv_identity_domain);
+err_free:
+ kfree(hv_iommu);
+ return ret;
+}
diff --git a/drivers/iommu/hyperv/iommu.h b/drivers/iommu/hyperv/iommu.h
new file mode 100644
index 000000000000..43f20d371245
--- /dev/null
+++ b/drivers/iommu/hyperv/iommu.h
@@ -0,0 +1,54 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * Hyper-V IOMMU driver.
+ *
+ * Copyright (C) 2024-2025, Microsoft, Inc.
+ *
+ */
+
+#ifndef _HYPERV_IOMMU_H
+#define _HYPERV_IOMMU_H
+
+struct hv_iommu_dev {
+ struct iommu_device iommu;
+ struct ida domain_ids;
+
+ /* Device configuration */
+ u8 max_iova_width;
+ u8 max_pasid_width;
+ u64 cap;
+ u64 pgsize_bitmap;
+
+ struct iommu_domain_geometry geometry;
+ u64 first_domain;
+ u64 last_domain;
+};
+
+struct hv_iommu_domain {
+ union {
+ struct iommu_domain domain;
+ struct pt_iommu pt_iommu;
+ struct pt_iommu_x86_64 pt_iommu_x86_64;
+ };
+ struct hv_iommu_dev *hv_iommu;
+ struct hv_input_device_domain device_domain;
+ u64 pgsize_bitmap;
+};
+
+struct hv_pci_busdata {
+ int pci_domain_nr;
+ u32 logical_dev_id_prefix;
+ struct list_head list;
+};
+
+struct hv_iommu_endpoint {
+ struct device *dev;
+ struct hv_iommu_dev *hv_iommu;
+ struct hv_iommu_domain *hv_domain;
+};
+
+#define to_hv_iommu_domain(d) \
+ container_of(d, struct hv_iommu_domain, domain)
+
+#endif /* _HYPERV_IOMMU_H */
diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
index cfc8fa403dad..a4af9c8c2220 100644
--- a/drivers/pci/controller/pci-hyperv.c
+++ b/drivers/pci/controller/pci-hyperv.c
@@ -3715,6 +3715,7 @@ static int hv_pci_probe(struct hv_device *hdev,
struct hv_pcibus_device *hbus;
int ret, dom;
u16 dom_req;
+ u32 prefix;
char *name;
bridge = devm_pci_alloc_host_bridge(&hdev->device, 0);
@@ -3857,13 +3858,25 @@ static int hv_pci_probe(struct hv_device *hdev,
hbus->state = hv_pcibus_probed;
- ret = create_root_hv_pci_bus(hbus);
+ /* Notify pvIOMMU before any device on the bus is scanned. */
+ prefix = (hdev->dev_instance.b[5] << 24) |
+ (hdev->dev_instance.b[4] << 16) |
+ (hdev->dev_instance.b[7] << 8) |
+ (hdev->dev_instance.b[6] & 0xf8);
+
+ ret = hv_iommu_register_pci_bus(dom, prefix);
if (ret)
goto free_windows;
+ ret = create_root_hv_pci_bus(hbus);
+ if (ret)
+ goto unregister_pviommu;
+
mutex_unlock(&hbus->state_lock);
return 0;
+unregister_pviommu:
+ hv_iommu_unregister_pci_bus(dom);
free_windows:
hv_pci_free_bridge_windows(hbus);
exit_d0:
@@ -3974,8 +3987,10 @@ static int hv_pci_bus_exit(struct hv_device *hdev, bool keep_devs)
static void hv_pci_remove(struct hv_device *hdev)
{
struct hv_pcibus_device *hbus;
+ int dom;
hbus = hv_get_drvdata(hdev);
+ dom = hbus->bridge->domain_nr;
if (hbus->state == hv_pcibus_installed) {
tasklet_disable(&hdev->channel->callback_event);
hbus->state = hv_pcibus_removing;
@@ -3994,6 +4009,8 @@ static void hv_pci_remove(struct hv_device *hdev)
hv_pci_remove_slots(hbus);
pci_remove_root_bus(hbus->bridge->bus);
pci_unlock_rescan_remove();
+
+ hv_iommu_unregister_pci_bus(dom);
}
hv_pci_bus_exit(hdev, false);
diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
index bf601d67cecb..b71345c74568 100644
--- a/include/asm-generic/mshyperv.h
+++ b/include/asm-generic/mshyperv.h
@@ -73,6 +73,18 @@ extern enum hv_partition_type hv_curr_partition_type;
extern void * __percpu *hyperv_pcpu_input_arg;
extern void * __percpu *hyperv_pcpu_output_arg;
+#ifdef CONFIG_HYPERV_PVIOMMU
+int hv_iommu_register_pci_bus(int pci_domain_nr, u32 logical_dev_id_prefix);
+void hv_iommu_unregister_pci_bus(int pci_domain_nr);
+#else
+static inline int hv_iommu_register_pci_bus(int pci_domain_nr,
+ u32 logical_dev_id_prefix)
+{
+ return 0;
+}
+static inline void hv_iommu_unregister_pci_bus(int pci_domain_nr) { }
+#endif
+
u64 hv_do_hypercall(u64 control, void *inputaddr, void *outputaddr);
u64 hv_do_fast_hypercall8(u16 control, u64 input8);
u64 hv_do_fast_hypercall16(u16 control, u64 input1, u64 input2);
--
2.52.0
^ permalink raw reply related [flat|nested] 19+ messages in thread
* [PATCH v1 4/4] iommu/hyperv: Add page-selective IOTLB flush support
2026-05-11 16:24 [PATCH v1 0/4] Hyper-V: Add para-virtualized IOMMU support for Linux guests Yu Zhang
` (2 preceding siblings ...)
2026-05-11 16:24 ` [PATCH v1 3/4] iommu/hyperv: Add para-virtualized IOMMU support for Hyper-V guest Yu Zhang
@ 2026-05-11 16:24 ` Yu Zhang
2026-05-12 23:45 ` sashiko-bot
2026-05-14 18:14 ` Michael Kelley
3 siblings, 2 replies; 19+ messages in thread
From: Yu Zhang @ 2026-05-11 16:24 UTC (permalink / raw)
To: linux-kernel, linux-hyperv, iommu, linux-pci, linux-arch
Cc: wei.liu, kys, haiyangz, decui, longli, joro, will, robin.murphy,
bhelgaas, kwilczynski, lpieralisi, mani, robh, arnd, jgg,
mhklinux, jacob.pan, tgopinath, easwar.hariharan
Add page-selective IOTLB flush using HVCALL_FLUSH_DEVICE_DOMAIN_LIST.
This hypercall accepts a list of (page_number, page_mask_shift) entries,
enabling finer-grained IOTLB invalidation compared to the domain-wide
HVCALL_FLUSH_DEVICE_DOMAIN used by hv_iommu_flush_iotlb_all().
hv_iommu_fill_iova_list() decomposes a contiguous IOVA range into a
minimal set of aligned power-of-two regions that fit in a single
hypercall input page. When the range exceeds the page capacity, the
code falls back to a full domain flush automatically.
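For example, flushing a range that spans PFNs 3 through 10 (eight
pages with an unaligned start) decomposes into four entries:
(pfn 3, shift 0), (pfn 4, shift 2), (pfn 8, shift 1), (pfn 10, shift 0).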
Signed-off-by: Yu Zhang <zhangyu1@linux.microsoft.com>
Signed-off-by: Easwar Hariharan <easwar.hariharan@linux.microsoft.com>
---
drivers/iommu/hyperv/iommu.c | 91 +++++++++++++++++++++++++++++++++++-
include/hyperv/hvgdk_mini.h | 1 +
include/hyperv/hvhdk_mini.h | 17 +++++++
3 files changed, 108 insertions(+), 1 deletion(-)
diff --git a/drivers/iommu/hyperv/iommu.c b/drivers/iommu/hyperv/iommu.c
index e5fc625314b5..3bca362b7815 100644
--- a/drivers/iommu/hyperv/iommu.c
+++ b/drivers/iommu/hyperv/iommu.c
@@ -486,10 +486,98 @@ static void hv_iommu_flush_iotlb_all(struct iommu_domain *domain)
hv_flush_device_domain(to_hv_iommu_domain(domain));
}
+/* Max number of iova_list entries in a single hypercall input page. */
+#define HV_IOMMU_MAX_FLUSH_VA_COUNT \
+ ((HV_HYP_PAGE_SIZE - sizeof(struct hv_input_flush_device_domain_list)) / \
+ sizeof(union hv_iommu_flush_va))
+
+/* Returned by hv_iommu_fill_iova_list() when the range exceeds the capacity */
+#define HV_IOMMU_FLUSH_VA_OVERFLOW U16_MAX
+
+static inline u16 hv_iommu_fill_iova_list(union hv_iommu_flush_va *iova_list,
+ unsigned long start,
+ unsigned long end)
+{
+ unsigned long start_pfn = start >> PAGE_SHIFT;
+ unsigned long end_pfn = PAGE_ALIGN(end) >> PAGE_SHIFT;
+ unsigned long nr_pages = end_pfn - start_pfn;
+ u16 count = 0;
+
+ while (nr_pages > 0) {
+ unsigned long flush_pages;
+ int order;
+ unsigned long pfn_align;
+ unsigned long size_align;
+
+ if (count >= HV_IOMMU_MAX_FLUSH_VA_COUNT) {
+ count = HV_IOMMU_FLUSH_VA_OVERFLOW;
+ break;
+ }
+
+ if (start_pfn)
+ pfn_align = __ffs(start_pfn);
+ else
+ pfn_align = BITS_PER_LONG - 1;
+
+ size_align = __fls(nr_pages);
+ order = min(pfn_align, size_align);
+ iova_list[count].page_mask_shift = order;
+ iova_list[count].page_number = start_pfn;
+
+ flush_pages = 1UL << order;
+ start_pfn += flush_pages;
+ nr_pages -= flush_pages;
+ count++;
+ }
+
+ return count;
+}
+
+static void hv_flush_device_domain_list(struct hv_iommu_domain *hv_domain,
+ struct iommu_iotlb_gather *iotlb_gather)
+{
+ u64 status;
+ u16 count;
+ unsigned long flags;
+ struct hv_input_flush_device_domain_list *input;
+
+ local_irq_save(flags);
+
+ input = *this_cpu_ptr(hyperv_pcpu_input_arg);
+ memset(input, 0, sizeof(*input));
+
+ input->device_domain = hv_domain->device_domain;
+ input->flags |= HV_FLUSH_DEVICE_DOMAIN_LIST_IOMMU_FORMAT;
+ count = hv_iommu_fill_iova_list(input->iova_list,
+ iotlb_gather->start,
+ iotlb_gather->end);
+ if (count == HV_IOMMU_FLUSH_VA_OVERFLOW) {
+ /*
+ * Range exceeds hypercall page capacity. Fall back to a full
+ * domain flush.
+ */
+ struct hv_input_flush_device_domain *flush_all = (void *)input;
+
+ memset(flush_all, 0, sizeof(*flush_all));
+ flush_all->device_domain = hv_domain->device_domain;
+ status = hv_do_hypercall(HVCALL_FLUSH_DEVICE_DOMAIN,
+ flush_all, NULL);
+ } else {
+ status = hv_do_rep_hypercall(
+ HVCALL_FLUSH_DEVICE_DOMAIN_LIST,
+ count, 0, input, NULL);
+ }
+
+ local_irq_restore(flags);
+
+ if (!hv_result_success(status))
+ pr_err("HVCALL_FLUSH_DEVICE_DOMAIN_LIST failed, status %lld\n", status);
+}
+
static void hv_iommu_iotlb_sync(struct iommu_domain *domain,
struct iommu_iotlb_gather *iotlb_gather)
{
- hv_flush_device_domain(to_hv_iommu_domain(domain));
+ hv_flush_device_domain_list(to_hv_iommu_domain(domain), iotlb_gather);
iommu_put_pages_list(&iotlb_gather->freelist);
}
@@ -543,6 +631,7 @@ static struct iommu_domain *hv_iommu_domain_alloc_paging(struct device *dev)
cfg.common.hw_max_vasz_lg2 = hv_iommu_device->max_iova_width;
cfg.common.hw_max_oasz_lg2 = 52;
+ cfg.common.features |= BIT(PT_FEAT_FLUSH_RANGE);
cfg.top_level = (hv_iommu_device->max_iova_width > 48) ? 4 : 3;
ret = pt_iommu_x86_64_init(&hv_domain->pt_iommu_x86_64, &cfg, GFP_KERNEL);
diff --git a/include/hyperv/hvgdk_mini.h b/include/hyperv/hvgdk_mini.h
index 5bdbb44da112..eaaf87171478 100644
--- a/include/hyperv/hvgdk_mini.h
+++ b/include/hyperv/hvgdk_mini.h
@@ -496,6 +496,7 @@ union hv_vp_assist_msr_contents { /* HV_REGISTER_VP_ASSIST_PAGE */
#define HVCALL_GET_GPA_PAGES_ACCESS_STATES 0x00c9
#define HVCALL_CONFIGURE_DEVICE_DOMAIN 0x00ce
#define HVCALL_FLUSH_DEVICE_DOMAIN 0x00d0
+#define HVCALL_FLUSH_DEVICE_DOMAIN_LIST 0x00d1
#define HVCALL_ACQUIRE_SPARSE_SPA_PAGE_HOST_ACCESS 0x00d7
#define HVCALL_RELEASE_SPARSE_SPA_PAGE_HOST_ACCESS 0x00d8
#define HVCALL_MODIFY_SPARSE_GPA_PAGE_HOST_VISIBILITY 0x00db
diff --git a/include/hyperv/hvhdk_mini.h b/include/hyperv/hvhdk_mini.h
index 493608e791b4..f51d5d9467f1 100644
--- a/include/hyperv/hvhdk_mini.h
+++ b/include/hyperv/hvhdk_mini.h
@@ -671,4 +671,21 @@ struct hv_input_flush_device_domain {
u32 reserved;
} __packed;
+union hv_iommu_flush_va {
+ u64 iova;
+ struct {
+ u64 page_mask_shift : 12;
+ u64 page_number : 52;
+ };
+} __packed;
+
+
+struct hv_input_flush_device_domain_list {
+ struct hv_input_device_domain device_domain;
+#define HV_FLUSH_DEVICE_DOMAIN_LIST_IOMMU_FORMAT (1 << 0)
+ u32 flags;
+ u32 reserved;
+ union hv_iommu_flush_va iova_list[];
+} __packed;
+
#endif /* _HV_HVHDK_MINI_H */
--
2.52.0
^ permalink raw reply related [flat|nested] 19+ messages in thread
* Re: [PATCH v1 2/4] hyperv: Introduce new hypercall interfaces used by Hyper-V guest IOMMU
2026-05-11 16:24 ` [PATCH v1 2/4] hyperv: Introduce new hypercall interfaces used by Hyper-V guest IOMMU Yu Zhang
@ 2026-05-12 21:24 ` sashiko-bot
0 siblings, 0 replies; 19+ messages in thread
From: sashiko-bot @ 2026-05-12 21:24 UTC (permalink / raw)
To: Yu Zhang; +Cc: linux-pci
Thank you for your contribution! Sashiko AI review found 2 potential issue(s) to consider:
- [Low] Missing 4-byte padding in `struct hv_input_create_device_domain` resulting in an unaligned ABI structure size.
- [Low] Preprocessor macros are defined inside structure definitions.
--
commit 424b106dd2a0b2fe17c666987a01e49923b0b297
Author: Wei Liu <wei.liu@kernel.org>
hyperv: Introduce new hypercall interfaces used by Hyper-V guest IOMMU
This commit introduces new hypercall interfaces to support the Hyper-V guest
IOMMU, which allows child partitions to interact with the facility for device,
domain, and IOTLB management.
> diff --git a/include/hyperv/hvhdk_mini.h b/include/hyperv/hvhdk_mini.h
> index b4cb2fa26e9bc..493608e791b45 100644
> --- a/include/hyperv/hvhdk_mini.h
> +++ b/include/hyperv/hvhdk_mini.h
[ ... ]
> +struct hv_input_create_device_domain {
> + struct hv_input_device_domain device_domain;
> + union hv_create_device_domain_flags create_device_domain_flags;
> +} __packed;
Should this structure include a 4-byte padding field at the end?
Since the device_domain field is 24 bytes and the create_device_domain_flags
field is 4 bytes, the structure size is exactly 28 bytes. Other hypercall
structures in this file explicitly use padding to maintain an 8-byte aligned
size.
While this might work for a single invocation, if this structure is ever used
in a rep (array) hypercall, would the 28-byte stride misalign subsequent
elements and cause the hypercall to fail?
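One possible shape, assuming the ABI expects the size padded from 28
to 32 bytes (sketch only):

	struct hv_input_create_device_domain {
		struct hv_input_device_domain device_domain;
		union hv_create_device_domain_flags create_device_domain_flags;
		u32 padding;
	} __packed;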
[ ... ]
> +struct hv_output_get_iommu_capabilities {
> + u32 size;
> + u16 reserved;
> + u8 max_iova_width;
> + u8 max_pasid_width;
> +
> +#define HV_IOMMU_CAP_PRESENT (1ULL << 0)
> +#define HV_IOMMU_CAP_S2 (1ULL << 1)
> +#define HV_IOMMU_CAP_S1 (1ULL << 2)
> +#define HV_IOMMU_CAP_S1_5LVL (1ULL << 3)
> +#define HV_IOMMU_CAP_PASID (1ULL << 4)
> +#define HV_IOMMU_CAP_ATS (1ULL << 5)
> +#define HV_IOMMU_CAP_PRI (1ULL << 6)
This isn't a bug, but should these preprocessor macros be moved outside of the
structure definition?
Macros do not respect C scope, so placing them inside the structure definition
violates standard kernel coding style conventions and might falsely imply they
are locally scoped.
> +
> + u64 iommu_cap;
> + u64 pgsize_bitmap;
> +} __packed;
[ ... ]
> +struct hv_output_get_logical_device_property {
> +#define HV_DEVICE_IOMMU_ENABLED (1ULL << 0)
Similarly, could this macro be moved outside the structure definition to adhere
to conventional kernel styling?
> + u64 device_iommu;
> + u64 reserved;
> +} __packed;
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260511162408.1180069-1-zhangyu1@linux.microsoft.com?part=2
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v1 3/4] iommu/hyperv: Add para-virtualized IOMMU support for Hyper-V guest
2026-05-11 16:24 ` [PATCH v1 3/4] iommu/hyperv: Add para-virtualized IOMMU support for Hyper-V guest Yu Zhang
@ 2026-05-12 22:30 ` sashiko-bot
2026-05-13 18:39 ` Jacob Pan
2026-05-14 18:13 ` Michael Kelley
2 siblings, 0 replies; 19+ messages in thread
From: sashiko-bot @ 2026-05-12 22:30 UTC (permalink / raw)
To: Yu Zhang; +Cc: linux-pci
Thank you for your contribution! Sashiko AI review found 5 potential issue(s) to consider:
- [High] Sleeping in atomic context on PREEMPT_RT due to spin_lock() inside local_irq_save().
- [High] Global state corruption and dangling pointer on IOMMU initialization failure.
- [Medium] Undefined behavior and potential integer overflow in IOMMU geometry calculation.
- [Medium] IOMMU driver destructively clears generic dma_ops in release_device.
- [Low] Missing ida_destroy() on the error cleanup path in hv_iommu_init().
--
commit d1b032eaf8fbec1a7366bf47848a4dfd4a2d21b6
Author: Yu Zhang <zhangyu1@linux.microsoft.com>
Subject: iommu/hyperv: Add para-virtualized IOMMU support for Hyper-V guest
This patch adds a para-virtualized IOMMU driver for Linux guests on Hyper-V.
It implements stage-1 IO translation via hypercalls and manages x86-compatible
page tables in guest memory.
> diff --git a/drivers/iommu/hyperv/iommu.c b/drivers/iommu/hyperv/iommu.c
> new file mode 100644
> index 0000000000000..e5fc625314b55
> --- /dev/null
> +++ b/drivers/iommu/hyperv/iommu.c
[ ... ]
> @@ -215,14 +230,19 @@ static void hv_iommu_detach_dev(struct iommu_domain *domain, struct device *dev)
[ ... ]
> + local_irq_save(flags);
> +
> + input = *this_cpu_ptr(hyperv_pcpu_input_arg);
> + memset(input, 0, sizeof(*input));
> + input->partition_id = HV_PARTITION_ID_SELF;
> + if (hv_iommu_lookup_logical_dev_id(pdev, &input->device_id.as_uint64)) {
Could this cause a sleeping in atomic context BUG on PREEMPT_RT kernels?
When hardware interrupts are disabled via local_irq_save(), this code calls
hv_iommu_lookup_logical_dev_id(), which does:
spin_lock(&hv_iommu_pci_bus_lock);
On PREEMPT_RT, spinlock_t maps to an rt_mutex which can sleep. Acquiring a
sleepable lock while hardware interrupts are disabled will cause a crash.
Would it be safer to hoist the ID lookup outside the local_irq_save() block,
since it does not appear to rely on the per-CPU input page state? Similar
logic is also present in hv_iommu_attach_dev() and
hv_iommu_get_logical_device_property().
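One possible shape, hoisting the lookup (an untested sketch using only names
from the quoted code):

	u64 device_id;

	if (hv_iommu_lookup_logical_dev_id(pdev, &device_id)) {
		dev_warn(&pdev->dev, "no IOMMU registration for vPCI bus on detach\n");
		return;
	}

	local_irq_save(flags);

	input = *this_cpu_ptr(hyperv_pcpu_input_arg);
	memset(input, 0, sizeof(*input));
	input->partition_id = HV_PARTITION_ID_SELF;
	input->device_id.as_uint64 = device_id;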
[ ... ]
> +static void hv_iommu_release_device(struct device *dev)
> +{
> + struct hv_iommu_endpoint *vdev = dev_iommu_priv_get(dev);
> + struct pci_dev *pdev = to_pci_dev(dev);
> +
> + if (pdev->ats_enabled)
> + pci_disable_ats(pdev);
> +
> + dev_iommu_priv_set(dev, NULL);
> + set_dma_ops(dev, NULL);
Is it safe to explicitly clear the generic dma_ops here?
Device DMA operations are typically managed by the core DMA mapping subsystem.
Clearing them directly strips the device of fallback operations like SWIOTLB
during device teardown or unbind.
If there are outstanding DMA mappings, could this cause subsequent unmap
calls to execute without valid operations, potentially leaking bounce buffers?
[ ... ]
> +static void __init hv_init_iommu_device(struct hv_iommu_dev *hv_iommu,
> + struct hv_output_get_iommu_capabilities *hv_iommu_cap)
> +{
> + ida_init(&hv_iommu->domain_ids);
[ ... ]
> + hv_iommu->geometry = (struct iommu_domain_geometry) {
> + .aperture_start = 0,
> + .aperture_end = (((u64)1) << hv_iommu->max_iova_width) - 1,
Can this shift result in undefined behavior if the hypervisor returns
a max_iova_width of 64 or greater?
If 5-level paging is supported, there doesn't appear to be an upper
bound validation on max_iova_width. On x86, shifting by 64 or more
will be masked modulo 64, making 1ULL << 64 evaluate to 1, and
resulting in an aperture_end of 0.
Would it be better to cap max_iova_width at a safe maximum before
performing this calculation?
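For example, x86 5-level paging covers at most 57 bits of IOVA, so a clamp
before the shift keeps it defined (treating 57 as the ceiling is an assumption
about the hypervisor contract):

	hv_iommu->max_iova_width = min_t(u8, hv_iommu_cap->max_iova_width, 57);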
[ ... ]
> +int __init hv_iommu_init(void)
> +{
[ ... ]
> + iommu_detected = 1;
> + pci_request_acs();
> +
> + hv_iommu = kzalloc_obj(*hv_iommu, GFP_KERNEL);
> + if (!hv_iommu)
> + return -ENOMEM;
> +
> + hv_init_iommu_device(hv_iommu, &hv_iommu_cap);
> +
> + ret = hv_initialize_static_domains();
> + if (ret) {
> + pr_err("static domains init failed: %d\n", ret);
> + goto err_free;
> + }
[ ... ]
> +err_sysfs_remove:
> + iommu_device_sysfs_remove(&hv_iommu->iommu);
> +err_delete_static_domains:
> + hv_delete_device_domain(&hv_blocking_domain);
> + hv_delete_device_domain(&hv_identity_domain);
> +err_free:
> + kfree(hv_iommu);
> + return ret;
> +}
If initialization fails and we jump to err_free, does this leave global state
corrupted?
The global pointer hv_iommu_device is assigned to hv_iommu during
hv_init_iommu_device(), and iommu_detected is set to 1. When hv_iommu is
freed, these globals aren't cleared.
Could this leave a dangling pointer and trick the core architecture into
believing a hardware IOMMU is functional, potentially skipping fallback
mechanisms like SWIOTLB?
Also, should there be a call to ida_destroy(&hv_iommu->domain_ids) in this
cleanup path to release any internal XArray nodes left over from earlier
ida_alloc_range() calls?
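A sketch of a fuller cleanup path, using only names from the quoted code:

err_free:
	ida_destroy(&hv_iommu->domain_ids);
	hv_iommu_device = NULL;		/* undo hv_init_iommu_device() */
	kfree(hv_iommu);
	return ret;

Whether iommu_detected can also be safely reset here depends on the x86 IOMMU
init ordering.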
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260511162408.1180069-1-zhangyu1@linux.microsoft.com?part=3
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v1 4/4] iommu/hyperv: Add page-selective IOTLB flush support
2026-05-11 16:24 ` [PATCH v1 4/4] iommu/hyperv: Add page-selective IOTLB flush support Yu Zhang
@ 2026-05-12 23:45 ` sashiko-bot
2026-05-14 18:14 ` Michael Kelley
1 sibling, 0 replies; 19+ messages in thread
From: sashiko-bot @ 2026-05-12 23:45 UTC (permalink / raw)
To: Yu Zhang; +Cc: linux-pci, linux-hyperv
Thank you for your contribution! Sashiko AI review found 3 potential issue(s) to consider:
- [Critical] hv_iommu_fill_iova_list() incorrectly uses guest PAGE_SHIFT instead of hypervisor HV_HYP_PAGE_SHIFT, causing incomplete and incorrect IOTLB flushes on architectures with non-4KB pages.
- [High] Unconditional use of the new HVCALL_FLUSH_DEVICE_DOMAIN_LIST hypercall without a capability check or fallback.
- [Medium] Bitfield assignment on uninitialized flexible array member causes KMSAN use-of-uninitialized-value warnings.
--
commit 5fd02fd07f9fbc4f1734f312b8a33dfd79a38961
Author: Yu Zhang <zhangyu1@linux.microsoft.com>
Subject: iommu/hyperv: Add page-selective IOTLB flush support
This patch adds page-selective IOTLB flush support using the
HVCALL_FLUSH_DEVICE_DOMAIN_LIST hypercall. It decomposes IOVA ranges into
aligned power-of-two regions to fit in a single hypercall page, falling
back to a full domain flush if the range exceeds page capacity.
> diff --git a/drivers/iommu/hyperv/iommu.c b/drivers/iommu/hyperv/iommu.c
> index e5fc625314b55..3bca362b78150 100644
> --- a/drivers/iommu/hyperv/iommu.c
> +++ b/drivers/iommu/hyperv/iommu.c
[ ... ]
> +static inline u16 hv_iommu_fill_iova_list(union hv_iommu_flush_va *iova_list,
> + unsigned long start,
> + unsigned long end)
> +{
> + unsigned long start_pfn = start >> PAGE_SHIFT;
> + unsigned long end_pfn = PAGE_ALIGN(end) >> PAGE_SHIFT;
Should this use HV_HYP_PAGE_SHIFT instead of PAGE_SHIFT?
If the kernel is compiled with a larger page size (like 64KB on ARM64),
start >> PAGE_SHIFT produces a 64KB guest PFN.
Since Hyper-V hypercalls expect 4KB PFNs (HV_HYP_PAGE_SHIFT = 12), this could
result in flushing the wrong pages and leaving stale DMA mappings active.
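For example (ALIGN() from linux/align.h; that the hypercall ABI takes 4KB
frames here is the assumption being raised):

	unsigned long start_pfn = start >> HV_HYP_PAGE_SHIFT;
	unsigned long end_pfn = ALIGN(end, HV_HYP_PAGE_SIZE) >> HV_HYP_PAGE_SHIFT;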
> + unsigned long nr_pages = end_pfn - start_pfn;
> + u16 count = 0;
> +
> + while (nr_pages > 0) {
> + unsigned long flush_pages;
> + int order;
> + unsigned long pfn_align;
> + unsigned long size_align;
> +
> + if (count >= HV_IOMMU_MAX_FLUSH_VA_COUNT) {
> + count = HV_IOMMU_FLUSH_VA_OVERFLOW;
> + break;
> + }
> +
> + if (start_pfn)
> + pfn_align = __ffs(start_pfn);
> + else
> + pfn_align = BITS_PER_LONG - 1;
> +
> + size_align = __fls(nr_pages);
> + order = min(pfn_align, size_align);
> + iova_list[count].page_mask_shift = order;
> + iova_list[count].page_number = start_pfn;
Could this assignment trigger a KMSAN use-of-uninitialized-value warning?
The input buffer is from a per-CPU page and memset in
hv_flush_device_domain_list() only zeroes the fixed-size header, leaving the
flexible array uninitialized.
Assigning to bitfields forces a read-modify-write on the uninitialized 64-bit
union hv_iommu_flush_va.
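A minimal sketch of a fix, assuming the union exposes a raw 64-bit member (the
as_uint64 name is a guess based on other Hyper-V unions):

	iova_list[count].as_uint64 = 0;
	iova_list[count].page_mask_shift = order;
	iova_list[count].page_number = start_pfn;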
> +
> + flush_pages = 1UL << order;
> + start_pfn += flush_pages;
> + nr_pages -= flush_pages;
> + count++;
> + }
> +
> + return count;
> +}
> +
> +static void hv_flush_device_domain_list(struct hv_iommu_domain *hv_domain,
> + struct iommu_iotlb_gather *iotlb_gather)
> +{
> + u64 status;
> + u16 count;
> + unsigned long flags;
> + struct hv_input_flush_device_domain_list *input;
> +
> + local_irq_save(flags);
> +
> + input = *this_cpu_ptr(hyperv_pcpu_input_arg);
> + memset(input, 0, sizeof(*input));
> +
> + input->device_domain = hv_domain->device_domain;
> + input->flags |= HV_FLUSH_DEVICE_DOMAIN_LIST_IOMMU_FORMAT;
> + count = hv_iommu_fill_iova_list(input->iova_list,
> + iotlb_gather->start,
> + iotlb_gather->end);
> + if (count == HV_IOMMU_FLUSH_VA_OVERFLOW) {
> + /*
> + * Range exceeds hypercall page capacity. Fall back to a full
> + * domain flush.
> + */
> + struct hv_input_flush_device_domain *flush_all = (void *)input;
> +
> + memset(flush_all, 0, sizeof(*flush_all));
> + flush_all->device_domain = hv_domain->device_domain;
> + status = hv_do_hypercall(HVCALL_FLUSH_DEVICE_DOMAIN,
> + flush_all, NULL);
> + } else {
> + status = hv_do_rep_hypercall(
> + HVCALL_FLUSH_DEVICE_DOMAIN_LIST,
> + count, 0, input, NULL);
> + }
> +
> + local_irq_restore(flags);
> +
> + if (!hv_result_success(status))
> + pr_err("HVCALL_FLUSH_DEVICE_DOMAIN_LIST failed, status %lld\n", status);
Does this code need a capability check or fallback for older hosts?
If the hypervisor doesn't support the new HVCALL_FLUSH_DEVICE_DOMAIN_LIST
hypercall, hv_do_rep_hypercall() might return an error status.
The code prints an error but doesn't fall back to HVCALL_FLUSH_DEVICE_DOMAIN,
potentially leaving the IOTLB unflushed.
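One hedged option is to key the fallback off the failure status (the exact
status value a given host returns for an unknown hypercall is an assumption
here):

	if (hv_result(status) == HV_STATUS_INVALID_HYPERCALL_CODE) {
		/* Older host without the list hypercall: full domain flush. */
		hv_flush_device_domain(hv_domain);
		return;
	}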
> +}
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260511162408.1180069-1-zhangyu1@linux.microsoft.com?part=4
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v1 3/4] iommu/hyperv: Add para-virtualized IOMMU support for Hyper-V guest
2026-05-11 16:24 ` [PATCH v1 3/4] iommu/hyperv: Add para-virtualized IOMMU support for Hyper-V guest Yu Zhang
2026-05-12 22:30 ` sashiko-bot
@ 2026-05-13 18:39 ` Jacob Pan
2026-05-15 12:38 ` Yu Zhang
2026-05-14 18:13 ` Michael Kelley
2 siblings, 1 reply; 19+ messages in thread
From: Jacob Pan @ 2026-05-13 18:39 UTC (permalink / raw)
To: Yu Zhang
Cc: linux-kernel, linux-hyperv, iommu, linux-pci, linux-arch, wei.liu,
kys, haiyangz, decui, longli, joro, will, robin.murphy, bhelgaas,
kwilczynski, lpieralisi, mani, robh, arnd, jgg, mhklinux,
tgopinath, easwar.hariharan, jacob.pan
Hi Yu,
On Tue, 12 May 2026 00:24:07 +0800
Yu Zhang <zhangyu1@linux.microsoft.com> wrote:
> Add a para-virtualized IOMMU driver for Linux guests running on
> Hyper-V. This driver implements stage-1 IO translation within the
> guest OS. It integrates with the Linux IOMMU core, utilizing Hyper-V
> hypercalls for:
> - Capability discovery
> - Domain allocation, configuration, and deallocation
> - Device attachment and detachment
> - IOTLB invalidation
>
> The driver constructs x86-compatible stage-1 IO page tables in the
> guest memory using consolidated IO page table helpers. This allows
> the guest to manage stage-1 translations independently of vendor-
> specific drivers (like Intel VT-d or AMD IOMMU).
>
> Hyper-V consumes this stage-1 IO page table when a device domain is
> created and configured, and nests it with the host's stage-2 IO page
> tables, therefore eliminating the VM exits for guest IOMMU mapping
> operations. For unmapping operations, VM exits to perform the IOTLB
> flush are still unavoidable.
>
> Hyper-V identifies each PCI pass-thru device by a logical device ID
> in its hypercall interface. The vPCI driver (pci-hyperv) registers the
> per-bus portion of this ID with the pvIOMMU driver during bus probe.
> The pvIOMMU driver stores this mapping and combines it with the
> function number of the endpoint PCI device to form the complete ID
> for hypercalls.
>
> Co-developed-by: Wei Liu <wei.liu@kernel.org>
> Signed-off-by: Wei Liu <wei.liu@kernel.org>
> Co-developed-by: Easwar Hariharan <easwar.hariharan@linux.microsoft.com>
> Signed-off-by: Easwar Hariharan <easwar.hariharan@linux.microsoft.com>
> Signed-off-by: Yu Zhang <zhangyu1@linux.microsoft.com>
> ---
> arch/x86/hyperv/hv_init.c | 4 +
> arch/x86/include/asm/mshyperv.h | 4 +
> drivers/iommu/hyperv/Kconfig | 17 +
> drivers/iommu/hyperv/Makefile | 1 +
> drivers/iommu/hyperv/iommu.c | 705 ++++++++++++++++++++++++++++
> drivers/iommu/hyperv/iommu.h | 54 +++
> drivers/pci/controller/pci-hyperv.c | 19 +-
> include/asm-generic/mshyperv.h | 12 +
> 8 files changed, 815 insertions(+), 1 deletion(-)
> create mode 100644 drivers/iommu/hyperv/iommu.c
> create mode 100644 drivers/iommu/hyperv/iommu.h
>
> diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
> index 323adc93f2dc..2c8ff8e06249 100644
> --- a/arch/x86/hyperv/hv_init.c
> +++ b/arch/x86/hyperv/hv_init.c
> @@ -578,6 +578,10 @@ void __init hyperv_init(void)
> old_setup_percpu_clockev = x86_init.timers.setup_percpu_clockev;
> x86_init.timers.setup_percpu_clockev = hv_stimer_setup_percpu_clockev;
>
> +#ifdef CONFIG_HYPERV_PVIOMMU
> + x86_init.iommu.iommu_init = hv_iommu_init;
> +#endif
> +
> hv_apic_init();
>
> x86_init.pci.arch_init = hv_pci_init;
> diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
> index f64393e853ee..20d947c2c758 100644
> --- a/arch/x86/include/asm/mshyperv.h
> +++ b/arch/x86/include/asm/mshyperv.h
> @@ -313,6 +313,10 @@ static inline void mshv_vtl_return_hypercall(void) {}
> static inline void __mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0) {}
> #endif
>
> +#ifdef CONFIG_HYPERV_PVIOMMU
> +int __init hv_iommu_init(void);
> +#endif
> +
> #include <asm-generic/mshyperv.h>
>
> #endif
> diff --git a/drivers/iommu/hyperv/Kconfig b/drivers/iommu/hyperv/Kconfig
> index 30f40d867036..9e658d5c9a77 100644
> --- a/drivers/iommu/hyperv/Kconfig
> +++ b/drivers/iommu/hyperv/Kconfig
> @@ -8,3 +8,20 @@ config HYPERV_IOMMU
> help
> Stub IOMMU driver to handle IRQs to support Hyper-V Linux
> guest and root partitions.
> +
> +if HYPERV_IOMMU
> +config HYPERV_PVIOMMU
> + bool "Microsoft Hypervisor para-virtualized IOMMU support"
> + depends on X86 && HYPERV
> + select IOMMU_API
> + select GENERIC_PT
> + select IOMMU_PT
> + select IOMMU_PT_X86_64
nit:
If HYPERV_PVIOMMU is enabled on a (hypothetical) platform with
GENERIC_ATOMIC64=y, the select would force-enable IOMMU_PT_X86_64 even
though its depends on is unsatisfied — leading to a build failure.
In practice this can't happen today because HYPERV_PVIOMMU already
depends on X86 && HYPERV, and x86 never sets GENERIC_ATOMIC64. But
adding the explicit guard is more defensive.
i.e.
depends on !GENERIC_ATOMIC64 # for cmpxchg64 in IOMMU_PT
> + select IOMMU_IOVA
> + default HYPERV
> + help
> + Para-virtualized IOMMU driver for Linux guests running on
> + Microsoft Hyper-V. Provides DMA remapping and IOTLB
> + flush support to enable DMA isolation for devices
> + assigned to the guest.
> +endif
> diff --git a/drivers/iommu/hyperv/Makefile b/drivers/iommu/hyperv/Makefile
> index 9f557bad94ff..8669741c0a51 100644
> --- a/drivers/iommu/hyperv/Makefile
> +++ b/drivers/iommu/hyperv/Makefile
> @@ -1,2 +1,3 @@
> # SPDX-License-Identifier: GPL-2.0
> obj-$(CONFIG_HYPERV_IOMMU) += irq_remapping.o
> +obj-$(CONFIG_HYPERV_PVIOMMU) += iommu.o
> diff --git a/drivers/iommu/hyperv/iommu.c b/drivers/iommu/hyperv/iommu.c
> new file mode 100644
> index 000000000000..e5fc625314b5
> --- /dev/null
> +++ b/drivers/iommu/hyperv/iommu.c
> @@ -0,0 +1,705 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +/*
> + * Hyper-V IOMMU driver.
> + *
> + * Copyright (C) 2019, 2024-2026 Microsoft, Inc.
> + */
> +
> +#define pr_fmt(fmt) "Hyper-V pvIOMMU: " fmt
> +#define dev_fmt(fmt) pr_fmt(fmt)
> +
> +#include <linux/iommu.h>
> +#include <linux/pci.h>
> +#include <linux/dma-map-ops.h>
> +#include <linux/generic_pt/iommu.h>
> +#include <linux/pci-ats.h>
> +
> +#include <asm/iommu.h>
> +#include <asm/hypervisor.h>
> +#include <asm/mshyperv.h>
> +
> +#include "iommu.h"
> +#include "../iommu-pages.h"
> +
> +struct hv_iommu_dev *hv_iommu_device;
> +
> +/*
> + * Identity and blocking domains are static singletons: identity is a 1:1
> + * passthrough with no page table, blocking rejects all DMA. Neither holds
> + * per-IOMMU state, so one instance suffices even with multiple vIOMMUs.
> + */
> +static struct hv_iommu_domain hv_identity_domain;
> +static struct hv_iommu_domain hv_blocking_domain;
> +static const struct iommu_domain_ops hv_iommu_identity_domain_ops;
> +static const struct iommu_domain_ops hv_iommu_blocking_domain_ops;
> +static struct iommu_ops hv_iommu_ops;
> +static LIST_HEAD(hv_iommu_pci_bus_list);
> +static DEFINE_SPINLOCK(hv_iommu_pci_bus_lock);
> +
> +#define hv_iommu_present(iommu_cap) (iommu_cap & HV_IOMMU_CAP_PRESENT)
> +#define hv_iommu_s1_domain_supported(iommu_cap) (iommu_cap & HV_IOMMU_CAP_S1)
> +#define hv_iommu_5lvl_supported(iommu_cap) (iommu_cap & HV_IOMMU_CAP_S1_5LVL)
> +#define hv_iommu_ats_supported(iommu_cap) (iommu_cap & HV_IOMMU_CAP_ATS)
> +
> +int hv_iommu_register_pci_bus(int pci_domain_nr, u32 logical_dev_id_prefix)
> +{
> + struct hv_pci_busdata *bus, *new;
> + int ret = 0;
> +
> + if (no_iommu || !iommu_detected)
> + return 0;
> +
> + new = kzalloc_obj(*new, GFP_KERNEL);
> + if (!new)
> + return -ENOMEM;
> +
> + spin_lock(&hv_iommu_pci_bus_lock);
> + list_for_each_entry(bus, &hv_iommu_pci_bus_list, list) {
> + if (bus->pci_domain_nr != pci_domain_nr)
> + continue;
> +
> + if (bus->logical_dev_id_prefix != logical_dev_id_prefix) {
> + pr_err("stale registration for PCI domain %d (old prefix 0x%08x, new 0x%08x)\n",
> + pci_domain_nr, bus->logical_dev_id_prefix,
> + logical_dev_id_prefix);
> + ret = -EEXIST;
> + }
> +
> + goto out_free;
> + }
> +
> + new->pci_domain_nr = pci_domain_nr;
> + new->logical_dev_id_prefix = logical_dev_id_prefix;
> + list_add(&new->list, &hv_iommu_pci_bus_list);
> + spin_unlock(&hv_iommu_pci_bus_lock);
> + return 0;
> +
> +out_free:
> + spin_unlock(&hv_iommu_pci_bus_lock);
> + kfree(new);
> + return ret;
> +}
> +EXPORT_SYMBOL_FOR_MODULES(hv_iommu_register_pci_bus, "pci-hyperv");
> +
> +void hv_iommu_unregister_pci_bus(int pci_domain_nr)
> +{
> + struct hv_pci_busdata *bus, *tmp;
> +
> + spin_lock(&hv_iommu_pci_bus_lock);
> + list_for_each_entry_safe(bus, tmp, &hv_iommu_pci_bus_list, list) {
> + if (bus->pci_domain_nr == pci_domain_nr) {
> + list_del(&bus->list);
> + kfree(bus);
> + break;
> + }
> + }
> + spin_unlock(&hv_iommu_pci_bus_lock);
> +}
> +EXPORT_SYMBOL_FOR_MODULES(hv_iommu_unregister_pci_bus, "pci-hyperv");
> +
> +/*
> + * Look up the logical device ID for a vPCI device. Returns 0 on success
> + * with *logical_id filled in; -ENODEV if no entry registered for this
> + * device's vPCI bus.
> + */
> +static int hv_iommu_lookup_logical_dev_id(struct pci_dev *pdev, u64 *logical_id)
> +{
> + struct hv_pci_busdata *bus;
> + int domain = pci_domain_nr(pdev->bus);
> + int ret = -ENODEV;
> +
> + spin_lock(&hv_iommu_pci_bus_lock);
This is called under local_irq_save(); should it use raw_spinlock_t for RT
kernels?
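e.g. something like:

static DEFINE_RAW_SPINLOCK(hv_iommu_pci_bus_lock);

with matching raw_spin_lock()/raw_spin_unlock() calls, so the lock keeps
spinning on PREEMPT_RT.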
> + list_for_each_entry(bus, &hv_iommu_pci_bus_list, list) {
> + if (bus->pci_domain_nr == domain) {
> + *logical_id = (u64)bus->logical_dev_id_prefix |
> + PCI_FUNC(pdev->devfn);
> + ret = 0;
> + break;
> + }
> + }
> + spin_unlock(&hv_iommu_pci_bus_lock);
> + return ret;
> +}
> +
> +static int hv_create_device_domain(struct hv_iommu_domain *hv_domain, u32 domain_stage)
> +{
> + int ret;
> + u64 status;
> + unsigned long flags;
> + struct hv_input_create_device_domain *input;
> +
> + ret = ida_alloc_range(&hv_iommu_device->domain_ids,
> + hv_iommu_device->first_domain, hv_iommu_device->last_domain,
> + GFP_KERNEL);
> + if (ret < 0)
> + return ret;
> +
> + hv_domain->device_domain.partition_id = HV_PARTITION_ID_SELF;
> + hv_domain->device_domain.domain_id.type = domain_stage;
> + hv_domain->device_domain.domain_id.id = ret;
> + hv_domain->hv_iommu = hv_iommu_device;
> +
> + local_irq_save(flags);
> +
> + input = *this_cpu_ptr(hyperv_pcpu_input_arg);
> + memset(input, 0, sizeof(*input));
> + input->device_domain = hv_domain->device_domain;
> + input->create_device_domain_flags.forward_progress_required = 1;
> + input->create_device_domain_flags.inherit_owning_vtl = 0;
> + status = hv_do_hypercall(HVCALL_CREATE_DEVICE_DOMAIN, input, NULL);
> +
> + local_irq_restore(flags);
> +
> + if (!hv_result_success(status)) {
> + pr_err("HVCALL_CREATE_DEVICE_DOMAIN failed, status
> %lld\n", status);
> + ida_free(&hv_iommu_device->domain_ids,
> hv_domain->device_domain.domain_id.id);
> + }
> +
> + return hv_result_to_errno(status);
> +}
> +
> +static void hv_delete_device_domain(struct hv_iommu_domain *hv_domain)
> +{
> + u64 status;
> + unsigned long flags;
> + struct hv_input_delete_device_domain *input;
> +
> + local_irq_save(flags);
> +
> + input = *this_cpu_ptr(hyperv_pcpu_input_arg);
> + memset(input, 0, sizeof(*input));
> + input->device_domain = hv_domain->device_domain;
> + status = hv_do_hypercall(HVCALL_DELETE_DEVICE_DOMAIN, input, NULL);
> +
> + local_irq_restore(flags);
> +
> + if (!hv_result_success(status))
> + pr_err("HVCALL_DELETE_DEVICE_DOMAIN failed, status %lld\n", status);
> +
> + ida_free(&hv_domain->hv_iommu->domain_ids, hv_domain->device_domain.domain_id.id);
> +}
> +
> +static bool hv_iommu_capable(struct device *dev, enum iommu_cap cap)
> +{
> + switch (cap) {
> + case IOMMU_CAP_CACHE_COHERENCY:
> + return true;
> + case IOMMU_CAP_DEFERRED_FLUSH:
> + return true;
> + default:
> + return false;
> + }
> +}
> +
> +static void hv_flush_device_domain(struct hv_iommu_domain *hv_domain)
> +{
> + u64 status;
> + unsigned long flags;
> + struct hv_input_flush_device_domain *input;
> +
> + local_irq_save(flags);
> +
> + input = *this_cpu_ptr(hyperv_pcpu_input_arg);
> + memset(input, 0, sizeof(*input));
> + input->device_domain = hv_domain->device_domain;
> + status = hv_do_hypercall(HVCALL_FLUSH_DEVICE_DOMAIN, input, NULL);
> +
> + local_irq_restore(flags);
> +
> + if (!hv_result_success(status))
> + pr_err("HVCALL_FLUSH_DEVICE_DOMAIN failed, status %lld\n", status);
> +}
> +
> +static void hv_iommu_detach_dev(struct iommu_domain *domain, struct device *dev)
> +{
> + u64 status;
> + unsigned long flags;
> + struct hv_input_detach_device_domain *input;
> + struct pci_dev *pdev;
> + struct hv_iommu_domain *hv_domain = to_hv_iommu_domain(domain);
> + struct hv_iommu_endpoint *vdev = dev_iommu_priv_get(dev);
> +
> + /* See the attach function, only PCI devices for now */
> + if (!dev_is_pci(dev) || vdev->hv_domain != hv_domain)
> + return;
> +
> + pdev = to_pci_dev(dev);
> +
> + dev_dbg(dev, "detaching from domain %d\n", hv_domain->device_domain.domain_id.id);
> +
> + local_irq_save(flags);
> +
> + input = *this_cpu_ptr(hyperv_pcpu_input_arg);
> + memset(input, 0, sizeof(*input));
> + input->partition_id = HV_PARTITION_ID_SELF;
> + if (hv_iommu_lookup_logical_dev_id(pdev, &input->device_id.as_uint64)) {
> + local_irq_restore(flags);
> + dev_warn(&pdev->dev, "no IOMMU registration for vPCI bus on detach\n");
> + return;
> + }
> + status = hv_do_hypercall(HVCALL_DETACH_DEVICE_DOMAIN, input, NULL);
> +
> + local_irq_restore(flags);
> +
> + if (!hv_result_success(status))
> + pr_err("HVCALL_DETACH_DEVICE_DOMAIN failed, status
> %lld\n", status); +
> + hv_flush_device_domain(hv_domain);
> +
> + vdev->hv_domain = NULL;
> +}
> +
> +static int hv_iommu_attach_dev(struct iommu_domain *domain, struct device *dev,
> + struct iommu_domain *old)
> +{
> + u64 status;
> + unsigned long flags;
> + struct pci_dev *pdev;
> + struct hv_input_attach_device_domain *input;
> + struct hv_iommu_endpoint *vdev = dev_iommu_priv_get(dev);
> + struct hv_iommu_domain *hv_domain = to_hv_iommu_domain(domain);
> + int ret;
> +
> + /* Only allow PCI devices for now */
> + if (!dev_is_pci(dev))
> + return -EINVAL;
> +
> + if (vdev->hv_domain == hv_domain)
> + return 0;
> +
> + if (vdev->hv_domain)
> + hv_iommu_detach_dev(&vdev->hv_domain->domain, dev);
> +
> + pdev = to_pci_dev(dev);
> + dev_dbg(dev, "attaching to domain %d\n",
> + hv_domain->device_domain.domain_id.id);
> +
> + local_irq_save(flags);
> +
> + input = *this_cpu_ptr(hyperv_pcpu_input_arg);
> + memset(input, 0, sizeof(*input));
> + input->device_domain = hv_domain->device_domain;
> + ret = hv_iommu_lookup_logical_dev_id(pdev, &input->device_id.as_uint64);
> + if (ret) {
> + local_irq_restore(flags);
> + dev_err(&pdev->dev, "no IOMMU registration for vPCI bus\n");
> + return ret;
> + }
> + status = hv_do_hypercall(HVCALL_ATTACH_DEVICE_DOMAIN, input, NULL);
> +
> + local_irq_restore(flags);
> +
> + if (!hv_result_success(status))
> + pr_err("HVCALL_ATTACH_DEVICE_DOMAIN failed, status
> %lld\n", status);
> + else
> + vdev->hv_domain = hv_domain;
> +
> + return hv_result_to_errno(status);
> +}
> +
> +static int hv_iommu_get_logical_device_property(struct device *dev,
> + u32 code,
> + struct hv_output_get_logical_device_property *property)
> +{
> + u64 status, lid;
> + unsigned long flags;
> + int ret;
> + struct hv_input_get_logical_device_property *input;
> + struct hv_output_get_logical_device_property *output;
> +
> + local_irq_save(flags);
> +
> + input = *this_cpu_ptr(hyperv_pcpu_input_arg);
> + output = *this_cpu_ptr(hyperv_pcpu_input_arg) + sizeof(*input);
> + memset(input, 0, sizeof(*input));
> + input->partition_id = HV_PARTITION_ID_SELF;
> + ret = hv_iommu_lookup_logical_dev_id(to_pci_dev(dev), &lid);
> + if (ret) {
> + local_irq_restore(flags);
> + return ret;
> + }
> + input->logical_device_id = lid;
> + input->code = code;
> + status = hv_do_hypercall(HVCALL_GET_LOGICAL_DEVICE_PROPERTY, input, output);
> + *property = *output;
> +
> + local_irq_restore(flags);
> +
> + if (!hv_result_success(status))
> + pr_err("HVCALL_GET_LOGICAL_DEVICE_PROPERTY failed,
> status %lld\n", status); +
> + return hv_result_to_errno(status);
> +}
> +
> +static struct iommu_device *hv_iommu_probe_device(struct device *dev)
> +{
> + struct pci_dev *pdev;
> + struct hv_iommu_endpoint *vdev;
> + struct hv_output_get_logical_device_property device_iommu_property = {0};
> +
> + if (!dev_is_pci(dev))
> + return ERR_PTR(-ENODEV);
> +
> + pdev = to_pci_dev(dev);
> +
> + if (hv_iommu_get_logical_device_property(dev,
> + HV_LOGICAL_DEVICE_PROPERTY_PVIOMMU,
> + &device_iommu_property) ||
> + !(device_iommu_property.device_iommu & HV_DEVICE_IOMMU_ENABLED))
> + return ERR_PTR(-ENODEV);
> +
> + vdev = kzalloc_obj(*vdev, GFP_KERNEL);
> + if (!vdev)
> + return ERR_PTR(-ENOMEM);
> +
> + vdev->dev = dev;
> + vdev->hv_iommu = hv_iommu_device;
> + dev_iommu_priv_set(dev, vdev);
> +
> + if (hv_iommu_ats_supported(hv_iommu_device->cap) &&
> + pci_ats_supported(pdev))
> + pci_enable_ats(pdev, __ffs(hv_iommu_device->pgsize_bitmap));
> +
> + return &vdev->hv_iommu->iommu;
> +}
> +
> +static void hv_iommu_release_device(struct device *dev)
> +{
> + struct hv_iommu_endpoint *vdev = dev_iommu_priv_get(dev);
> + struct pci_dev *pdev = to_pci_dev(dev);
> +
> + if (pdev->ats_enabled)
> + pci_disable_ats(pdev);
> +
> + dev_iommu_priv_set(dev, NULL);
> + set_dma_ops(dev, NULL);
I don't think this is necessary.
> +
> + kfree(vdev);
> +}
> +
> +static struct iommu_group *hv_iommu_device_group(struct device *dev)
> +{
> + if (dev_is_pci(dev))
> + return pci_device_group(dev);
> + else
> + return generic_device_group(dev);
Non-PCI devices are already rejected during attach; maybe we should warn
here?
WARN_ON_ONCE(1);
return generic_device_group(dev);
> +}
> +
> +static int hv_configure_device_domain(struct hv_iommu_domain *hv_domain, u32 domain_type)
> +{
> + u64 status;
> + unsigned long flags;
> + struct pt_iommu_x86_64_hw_info pt_info;
> + struct hv_input_configure_device_domain *input;
> +
> + local_irq_save(flags);
> +
> + input = *this_cpu_ptr(hyperv_pcpu_input_arg);
> + memset(input, 0, sizeof(*input));
> + input->device_domain = hv_domain->device_domain;
> + input->settings.flags.blocked = (domain_type == IOMMU_DOMAIN_BLOCKED);
> + input->settings.flags.translation_enabled = (domain_type != IOMMU_DOMAIN_IDENTITY);
> +
Should this be:
input->settings.flags.translation_enabled =
(domain_type & __IOMMU_DOMAIN_PAGING);
Otherwise, the blocked domain will have translation enabled. Maybe add some
explanation of what Hyper-V expects.
> + if (domain_type & __IOMMU_DOMAIN_PAGING) {
> + pt_iommu_x86_64_hw_info(&hv_domain->pt_iommu_x86_64, &pt_info);
> + input->settings.page_table_root = pt_info.gcr3_pt;
> + input->settings.flags.first_stage_paging_mode =
> + pt_info.levels == 5;
> + }
> + status = hv_do_hypercall(HVCALL_CONFIGURE_DEVICE_DOMAIN, input, NULL);
> +
> + local_irq_restore(flags);
> +
> + if (!hv_result_success(status))
> + pr_err("HVCALL_CONFIGURE_DEVICE_DOMAIN failed,
> status %lld\n", status); +
> + return hv_result_to_errno(status);
> +}
> +
> +static int __init hv_initialize_static_domains(void)
> +{
> + int ret;
> + struct hv_iommu_domain *hv_domain;
> +
> + /* Default stage-1 identity domain */
> + hv_domain = &hv_identity_domain;
> +
> + ret = hv_create_device_domain(hv_domain, HV_DEVICE_DOMAIN_TYPE_S1);
> + if (ret)
> + return ret;
> +
> + ret = hv_configure_device_domain(hv_domain, IOMMU_DOMAIN_IDENTITY);
> + if (ret)
> + goto delete_identity_domain;
> +
> + hv_domain->domain.type = IOMMU_DOMAIN_IDENTITY;
> + hv_domain->domain.ops = &hv_iommu_identity_domain_ops;
> + hv_domain->domain.owner = &hv_iommu_ops;
> + hv_domain->domain.geometry = hv_iommu_device->geometry;
> + hv_domain->domain.pgsize_bitmap = hv_iommu_device->pgsize_bitmap;
> +
> + /* Default stage-1 blocked domain */
> + hv_domain = &hv_blocking_domain;
> +
> + ret = hv_create_device_domain(hv_domain, HV_DEVICE_DOMAIN_TYPE_S1);
> + if (ret)
> + goto delete_identity_domain;
> +
> + ret = hv_configure_device_domain(hv_domain, IOMMU_DOMAIN_BLOCKED);
> + if (ret)
> + goto delete_blocked_domain;
> +
> + hv_domain->domain.type = IOMMU_DOMAIN_BLOCKED;
> + hv_domain->domain.ops = &hv_iommu_blocking_domain_ops;
> + hv_domain->domain.owner = &hv_iommu_ops;
> + hv_domain->domain.geometry = hv_iommu_device->geometry;
> + hv_domain->domain.pgsize_bitmap = hv_iommu_device->pgsize_bitmap;
> +
> + return 0;
> +
> +delete_blocked_domain:
> + hv_delete_device_domain(&hv_blocking_domain);
> +delete_identity_domain:
> + hv_delete_device_domain(&hv_identity_domain);
> + return ret;
> +}
> +
> +#define INTERRUPT_RANGE_START (0xfee00000)
> +#define INTERRUPT_RANGE_END (0xfeefffff)
> +static void hv_iommu_get_resv_regions(struct device *dev,
> + struct list_head *head)
> +{
> + struct iommu_resv_region *region;
> +
> + region = iommu_alloc_resv_region(INTERRUPT_RANGE_START,
> + INTERRUPT_RANGE_END - INTERRUPT_RANGE_START + 1,
> + 0, IOMMU_RESV_MSI, GFP_KERNEL);
> + if (!region)
> + return;
> +
> + list_add_tail(&region->list, head);
> +}
> +
> +static void hv_iommu_flush_iotlb_all(struct iommu_domain *domain)
> +{
> + hv_flush_device_domain(to_hv_iommu_domain(domain));
> +}
> +
> +static void hv_iommu_iotlb_sync(struct iommu_domain *domain,
> + struct iommu_iotlb_gather *iotlb_gather)
> +{
> + hv_flush_device_domain(to_hv_iommu_domain(domain));
> +
> + iommu_put_pages_list(&iotlb_gather->freelist);
> +}
> +
> +static void hv_iommu_paging_domain_free(struct iommu_domain *domain)
> +{
> + struct hv_iommu_domain *hv_domain = to_hv_iommu_domain(domain);
> +
> + /* Free all remaining mappings */
> + pt_iommu_deinit(&hv_domain->pt_iommu);
> +
> + hv_delete_device_domain(hv_domain);
> +
> + kfree(hv_domain);
> +}
> +
> +static const struct iommu_domain_ops hv_iommu_identity_domain_ops = {
> + .attach_dev = hv_iommu_attach_dev,
> +};
> +
> +static const struct iommu_domain_ops hv_iommu_blocking_domain_ops = {
> + .attach_dev = hv_iommu_attach_dev,
> +};
> +
> +static const struct iommu_domain_ops hv_iommu_paging_domain_ops = {
> + .attach_dev = hv_iommu_attach_dev,
> + IOMMU_PT_DOMAIN_OPS(x86_64),
> + .flush_iotlb_all = hv_iommu_flush_iotlb_all,
> + .iotlb_sync = hv_iommu_iotlb_sync,
> + .free = hv_iommu_paging_domain_free,
> +};
> +
> +static struct iommu_domain *hv_iommu_domain_alloc_paging(struct device *dev)
> +{
> + int ret;
> + struct hv_iommu_domain *hv_domain;
> + struct pt_iommu_x86_64_cfg cfg = {};
> +
> + hv_domain = kzalloc_obj(*hv_domain, GFP_KERNEL);
> + if (!hv_domain)
> + return ERR_PTR(-ENOMEM);
> +
> + ret = hv_create_device_domain(hv_domain, HV_DEVICE_DOMAIN_TYPE_S1);
> + if (ret) {
> + kfree(hv_domain);
> + return ERR_PTR(ret);
> + }
> +
> + hv_domain->domain.geometry = hv_iommu_device->geometry;
> + hv_domain->pt_iommu.nid = dev_to_node(dev);
> +
> + cfg.common.hw_max_vasz_lg2 = hv_iommu_device->max_iova_width;
> + cfg.common.hw_max_oasz_lg2 = 52;
> + cfg.top_level = (hv_iommu_device->max_iova_width > 48) ? 4 : 3;
> +
> + ret = pt_iommu_x86_64_init(&hv_domain->pt_iommu_x86_64, &cfg, GFP_KERNEL);
> + if (ret) {
> + hv_delete_device_domain(hv_domain);
> + kfree(hv_domain);
> + return ERR_PTR(ret);
> + }
> +
> + /* Constrain to page sizes the hypervisor supports */
> + hv_domain->domain.pgsize_bitmap &= hv_iommu_device->pgsize_bitmap;
> +
> + hv_domain->domain.ops = &hv_iommu_paging_domain_ops;
> +
> + ret = hv_configure_device_domain(hv_domain, __IOMMU_DOMAIN_PAGING);
> + if (ret) {
> + pt_iommu_deinit(&hv_domain->pt_iommu);
> + hv_delete_device_domain(hv_domain);
> + kfree(hv_domain);
> + return ERR_PTR(ret);
> + }
> +
> + return &hv_domain->domain;
> +}
> +
> +static struct iommu_ops hv_iommu_ops = {
> + .capable = hv_iommu_capable,
> + .domain_alloc_paging = hv_iommu_domain_alloc_paging,
> + .probe_device = hv_iommu_probe_device,
> + .release_device = hv_iommu_release_device,
> + .device_group = hv_iommu_device_group,
> + .get_resv_regions = hv_iommu_get_resv_regions,
> + .owner = THIS_MODULE,
> + .identity_domain = &hv_identity_domain.domain,
> + .blocked_domain = &hv_blocking_domain.domain,
> + .release_domain = &hv_blocking_domain.domain,
> +};
> +
> +static int hv_iommu_detect(struct hv_output_get_iommu_capabilities *hv_iommu_cap)
> +{
> + u64 status;
> + unsigned long flags;
> + struct hv_input_get_iommu_capabilities *input;
> + struct hv_output_get_iommu_capabilities *output;
> +
> + local_irq_save(flags);
> +
> + input = *this_cpu_ptr(hyperv_pcpu_input_arg);
> + output = *this_cpu_ptr(hyperv_pcpu_input_arg) + sizeof(*input);
> + memset(input, 0, sizeof(*input));
> + input->partition_id = HV_PARTITION_ID_SELF;
> + status = hv_do_hypercall(HVCALL_GET_IOMMU_CAPABILITIES, input, output);
> + *hv_iommu_cap = *output;
> +
> + local_irq_restore(flags);
> +
> + if (!hv_result_success(status))
> + pr_err("HVCALL_GET_IOMMU_CAPABILITIES failed, status
> %lld\n", status); +
> + return hv_result_to_errno(status);
> +}
> +
> +static void __init hv_init_iommu_device(struct hv_iommu_dev *hv_iommu,
> + struct hv_output_get_iommu_capabilities *hv_iommu_cap)
> +{
> + ida_init(&hv_iommu->domain_ids);
> +
> + hv_iommu->cap = hv_iommu_cap->iommu_cap;
> + hv_iommu->max_iova_width = hv_iommu_cap->max_iova_width;
> + if (!hv_iommu_5lvl_supported(hv_iommu->cap) &&
> + hv_iommu->max_iova_width > 48) {
> + pr_info("5-level paging not supported, limiting iova
> width to 48.\n");
> + hv_iommu->max_iova_width = 48;
> + }
> +
> + hv_iommu->geometry = (struct iommu_domain_geometry) {
> + .aperture_start = 0,
> + .aperture_end = (((u64)1) << hv_iommu->max_iova_width) - 1,
> + .force_aperture = true,
> + };
> +
> + hv_iommu->first_domain = HV_DEVICE_DOMAIN_ID_DEFAULT + 1;
> + hv_iommu->last_domain = HV_DEVICE_DOMAIN_ID_NULL - 1;
> + /* Only x86 page sizes (4K/2M/1G) are supported */
> + hv_iommu->pgsize_bitmap = hv_iommu_cap->pgsize_bitmap &
> + (SZ_4K | SZ_2M | SZ_1G);
> + if (hv_iommu->pgsize_bitmap != hv_iommu_cap->pgsize_bitmap)
> + pr_warn("unsupported page sizes masked: 0x%llx ->
> 0x%llx\n",
> + hv_iommu_cap->pgsize_bitmap,
> hv_iommu->pgsize_bitmap);
> + if (!hv_iommu->pgsize_bitmap) {
> + pr_warn("no supported page sizes, defaulting to
> 4K\n");
> + hv_iommu->pgsize_bitmap = SZ_4K;
> + }
> + hv_iommu_device = hv_iommu;
> +}
> +
> +int __init hv_iommu_init(void)
> +{
> + int ret = 0;
> + struct hv_iommu_dev *hv_iommu = NULL;
> + struct hv_output_get_iommu_capabilities hv_iommu_cap = {0};
> +
> + if (no_iommu || iommu_detected)
> + return -ENODEV;
> +
> + if (!hv_is_hyperv_initialized())
> + return -ENODEV;
> +
> + ret = hv_iommu_detect(&hv_iommu_cap);
> + if (ret) {
> + pr_err("HVCALL_GET_IOMMU_CAPABILITIES failed: %d\n",
> ret);
> + return -ENODEV;
> + }
> +
> + if (!hv_iommu_present(hv_iommu_cap.iommu_cap) ||
> + !hv_iommu_s1_domain_supported(hv_iommu_cap.iommu_cap)) {
> + pr_err("IOMMU capabilities not sufficient:
> cap=0x%llx\n",
> + hv_iommu_cap.iommu_cap);
> + return -ENODEV;
> + }
> +
> + iommu_detected = 1;
> + pci_request_acs();
> +
> + hv_iommu = kzalloc_obj(*hv_iommu, GFP_KERNEL);
> + if (!hv_iommu)
> + return -ENOMEM;
> +
> + hv_init_iommu_device(hv_iommu, &hv_iommu_cap);
> +
> + ret = hv_initialize_static_domains();
> + if (ret) {
> + pr_err("static domains init failed: %d\n", ret);
> + goto err_free;
> + }
> +
> + ret = iommu_device_sysfs_add(&hv_iommu->iommu, NULL, NULL, "%s", "hv-iommu");
> + if (ret) {
> + pr_err("iommu_device_sysfs_add failed: %d\n", ret);
> + goto err_delete_static_domains;
> + }
> +
> + ret = iommu_device_register(&hv_iommu->iommu, &hv_iommu_ops, NULL);
> + if (ret) {
> + pr_err("iommu_device_register failed: %d\n", ret);
> + goto err_sysfs_remove;
> + }
> +
> + pr_info("successfully initialized\n");
> + return 0;
> +
> +err_sysfs_remove:
> + iommu_device_sysfs_remove(&hv_iommu->iommu);
> +err_delete_static_domains:
> + hv_delete_device_domain(&hv_blocking_domain);
> + hv_delete_device_domain(&hv_identity_domain);
> +err_free:
> + kfree(hv_iommu);
> + return ret;
> +}
> diff --git a/drivers/iommu/hyperv/iommu.h b/drivers/iommu/hyperv/iommu.h
> new file mode 100644
> index 000000000000..43f20d371245
> --- /dev/null
> +++ b/drivers/iommu/hyperv/iommu.h
> @@ -0,0 +1,54 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +/*
> + * Hyper-V IOMMU driver.
> + *
> + * Copyright (C) 2024-2025, Microsoft, Inc.
> + *
> + */
> +
> +#ifndef _HYPERV_IOMMU_H
> +#define _HYPERV_IOMMU_H
> +
> +struct hv_iommu_dev {
> + struct iommu_device iommu;
> + struct ida domain_ids;
> +
> + /* Device configuration */
> + u8 max_iova_width;
> + u8 max_pasid_width;
> + u64 cap;
> + u64 pgsize_bitmap;
> +
> + struct iommu_domain_geometry geometry;
> + u64 first_domain;
> + u64 last_domain;
> +};
> +
> +struct hv_iommu_domain {
> + union {
> + struct iommu_domain domain;
> + struct pt_iommu pt_iommu;
> + struct pt_iommu_x86_64 pt_iommu_x86_64;
> + };
> + struct hv_iommu_dev *hv_iommu;
> + struct hv_input_device_domain device_domain;
> + u64 pgsize_bitmap;
> +};
> +
> +struct hv_pci_busdata {
> + int pci_domain_nr;
> + u32 logical_dev_id_prefix;
> + struct list_head list;
> +};
> +
> +struct hv_iommu_endpoint {
> + struct device *dev;
> + struct hv_iommu_dev *hv_iommu;
> + struct hv_iommu_domain *hv_domain;
> +};
> +
> +#define to_hv_iommu_domain(d) \
> + container_of(d, struct hv_iommu_domain, domain)
> +
> +#endif /* _HYPERV_IOMMU_H */
> diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
> index cfc8fa403dad..a4af9c8c2220 100644
> --- a/drivers/pci/controller/pci-hyperv.c
> +++ b/drivers/pci/controller/pci-hyperv.c
> @@ -3715,6 +3715,7 @@ static int hv_pci_probe(struct hv_device *hdev,
> struct hv_pcibus_device *hbus;
> int ret, dom;
> u16 dom_req;
> + u32 prefix;
> char *name;
>
> bridge = devm_pci_alloc_host_bridge(&hdev->device, 0);
> @@ -3857,13 +3858,25 @@ static int hv_pci_probe(struct hv_device *hdev,
> hbus->state = hv_pcibus_probed;
>
> - ret = create_root_hv_pci_bus(hbus);
> + /* Notify pvIOMMU before any device on the bus is scanned. */
> + prefix = (hdev->dev_instance.b[5] << 24) |
> + (hdev->dev_instance.b[4] << 16) |
> + (hdev->dev_instance.b[7] << 8) |
> + (hdev->dev_instance.b[6] & 0xf8);
> +
> + ret = hv_iommu_register_pci_bus(dom, prefix);
> if (ret)
> goto free_windows;
>
> + ret = create_root_hv_pci_bus(hbus);
> + if (ret)
> + goto unregister_pviommu;
> +
> mutex_unlock(&hbus->state_lock);
> return 0;
>
> +unregister_pviommu:
> + hv_iommu_unregister_pci_bus(dom);
> free_windows:
> hv_pci_free_bridge_windows(hbus);
> exit_d0:
> @@ -3974,8 +3987,10 @@ static int hv_pci_bus_exit(struct hv_device *hdev, bool keep_devs)
> static void hv_pci_remove(struct hv_device *hdev)
> {
> struct hv_pcibus_device *hbus;
> + int dom;
>
> hbus = hv_get_drvdata(hdev);
> + dom = hbus->bridge->domain_nr;
> if (hbus->state == hv_pcibus_installed) {
> tasklet_disable(&hdev->channel->callback_event);
> hbus->state = hv_pcibus_removing;
> @@ -3994,6 +4009,8 @@ static void hv_pci_remove(struct hv_device *hdev)
> hv_pci_remove_slots(hbus);
> pci_remove_root_bus(hbus->bridge->bus);
> pci_unlock_rescan_remove();
> +
> + hv_iommu_unregister_pci_bus(dom);
> }
>
> hv_pci_bus_exit(hdev, false);
> diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
> index bf601d67cecb..b71345c74568 100644
> --- a/include/asm-generic/mshyperv.h
> +++ b/include/asm-generic/mshyperv.h
> @@ -73,6 +73,18 @@ extern enum hv_partition_type hv_curr_partition_type;
> extern void * __percpu *hyperv_pcpu_input_arg;
> extern void * __percpu *hyperv_pcpu_output_arg;
>
> +#ifdef CONFIG_HYPERV_PVIOMMU
> +int hv_iommu_register_pci_bus(int pci_domain_nr, u32 logical_dev_id_prefix);
> +void hv_iommu_unregister_pci_bus(int pci_domain_nr);
> +#else
> +static inline int hv_iommu_register_pci_bus(int pci_domain_nr,
> + u32 logical_dev_id_prefix)
> +{
> + return 0;
> +}
> +static inline void hv_iommu_unregister_pci_bus(int pci_domain_nr) { }
> +#endif
> +
> u64 hv_do_hypercall(u64 control, void *inputaddr, void *outputaddr);
> u64 hv_do_fast_hypercall8(u16 control, u64 input8);
> u64 hv_do_fast_hypercall16(u16 control, u64 input1, u64 input2);
^ permalink raw reply [flat|nested] 19+ messages in thread
* RE: [PATCH v1 3/4] iommu/hyperv: Add para-virtualized IOMMU support for Hyper-V guest
2026-05-11 16:24 ` [PATCH v1 3/4] iommu/hyperv: Add para-virtualized IOMMU support for Hyper-V guest Yu Zhang
2026-05-12 22:30 ` sashiko-bot
2026-05-13 18:39 ` Jacob Pan
@ 2026-05-14 18:13 ` Michael Kelley
2026-05-15 13:59 ` Yu Zhang
2 siblings, 1 reply; 19+ messages in thread
From: Michael Kelley @ 2026-05-14 18:13 UTC (permalink / raw)
To: Yu Zhang, linux-kernel@vger.kernel.org,
linux-hyperv@vger.kernel.org, iommu@lists.linux.dev,
linux-pci@vger.kernel.org, linux-arch@vger.kernel.org
Cc: wei.liu@kernel.org, kys@microsoft.com, haiyangz@microsoft.com,
decui@microsoft.com, longli@microsoft.com, joro@8bytes.org,
will@kernel.org, robin.murphy@arm.com, bhelgaas@google.com,
kwilczynski@kernel.org, lpieralisi@kernel.org, mani@kernel.org,
robh@kernel.org, arnd@arndb.de, jgg@ziepe.ca, Michael Kelley,
jacob.pan@linux.microsoft.com, tgopinath@linux.microsoft.com,
easwar.hariharan@linux.microsoft.com
From: Yu Zhang <zhangyu1@linux.microsoft.com> Sent: Monday, May 11, 2026 9:24 AM
>
> Add a para-virtualized IOMMU driver for Linux guests running on Hyper-V.
> This driver implements stage-1 IO translation within the guest OS.
> It integrates with the Linux IOMMU core, utilizing Hyper-V hypercalls
> for:
> - Capability discovery
> - Domain allocation, configuration, and deallocation
> - Device attachment and detachment
> - IOTLB invalidation
>
> The driver constructs x86-compatible stage-1 IO page tables in the
> guest memory using consolidated IO page table helpers. This allows
> the guest to manage stage-1 translations independently of vendor-
> specific drivers (like Intel VT-d or AMD IOMMU).
>
> Hyper-V consumes this stage-1 IO page table when a device domain is
> created and configured, and nests it with the host's stage-2 IO page
> tables, therefore eliminating the VM exits for guest IOMMU mapping
> operations. For unmapping operations, VM exits to perform the IOTLB
> flush are still unavoidable.
>
> Hyper-V identifies each PCI pass-thru device by a logical device ID
> in its hypercall interface. The vPCI driver (pci-hyperv) registers the
> per-bus portion of this ID with the pvIOMMU driver during bus probe.
> The pvIOMMU driver stores this mapping and combines it with the function
> number of the endpoint PCI device to form the complete ID for hypercalls.
As you are probably aware, Mukesh's patch series to support PCI
pass-thru devices also needs to get the logical device ID. Maybe the
registration mechanism needs to move somewhere that can be shared
with his code.
>
> Co-developed-by: Wei Liu <wei.liu@kernel.org>
> Signed-off-by: Wei Liu <wei.liu@kernel.org>
> Co-developed-by: Easwar Hariharan <easwar.hariharan@linux.microsoft.com>
> Signed-off-by: Easwar Hariharan <easwar.hariharan@linux.microsoft.com>
> Signed-off-by: Yu Zhang <zhangyu1@linux.microsoft.com>
> ---
> arch/x86/hyperv/hv_init.c | 4 +
> arch/x86/include/asm/mshyperv.h | 4 +
> drivers/iommu/hyperv/Kconfig | 17 +
> drivers/iommu/hyperv/Makefile | 1 +
> drivers/iommu/hyperv/iommu.c | 705 ++++++++++++++++++++++++++++
> drivers/iommu/hyperv/iommu.h | 54 +++
> drivers/pci/controller/pci-hyperv.c | 19 +-
> include/asm-generic/mshyperv.h | 12 +
> 8 files changed, 815 insertions(+), 1 deletion(-)
> create mode 100644 drivers/iommu/hyperv/iommu.c
> create mode 100644 drivers/iommu/hyperv/iommu.h
>
> diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
> index 323adc93f2dc..2c8ff8e06249 100644
> --- a/arch/x86/hyperv/hv_init.c
> +++ b/arch/x86/hyperv/hv_init.c
> @@ -578,6 +578,10 @@ void __init hyperv_init(void)
> old_setup_percpu_clockev = x86_init.timers.setup_percpu_clockev;
> x86_init.timers.setup_percpu_clockev = hv_stimer_setup_percpu_clockev;
>
> +#ifdef CONFIG_HYPERV_PVIOMMU
> + x86_init.iommu.iommu_init = hv_iommu_init;
> +#endif
> +
> hv_apic_init();
>
> x86_init.pci.arch_init = hv_pci_init;
> diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
> index f64393e853ee..20d947c2c758 100644
> --- a/arch/x86/include/asm/mshyperv.h
> +++ b/arch/x86/include/asm/mshyperv.h
> @@ -313,6 +313,10 @@ static inline void mshv_vtl_return_hypercall(void) {}
> static inline void __mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0) {}
> #endif
>
> +#ifdef CONFIG_HYPERV_PVIOMMU
> +int __init hv_iommu_init(void);
> +#endif
> +
> #include <asm-generic/mshyperv.h>
>
> #endif
> diff --git a/drivers/iommu/hyperv/Kconfig b/drivers/iommu/hyperv/Kconfig
> index 30f40d867036..9e658d5c9a77 100644
> --- a/drivers/iommu/hyperv/Kconfig
> +++ b/drivers/iommu/hyperv/Kconfig
> @@ -8,3 +8,20 @@ config HYPERV_IOMMU
> help
> Stub IOMMU driver to handle IRQs to support Hyper-V Linux
> guest and root partitions.
> +
> +if HYPERV_IOMMU
> +config HYPERV_PVIOMMU
> + bool "Microsoft Hypervisor para-virtualized IOMMU support"
> + depends on X86 && HYPERV
What is the intent w.r.t. 32-bit builds? Using X86 instead of X86_64
allows it. I did a 32-bit build and didn't get any build failures, which is
good. But I can't run it to see if the pvIOMMU actually works in a
32-bit build. I don't know how building X86_64 generic PT entries
would fare.
> + select IOMMU_API
> + select GENERIC_PT
> + select IOMMU_PT
> + select IOMMU_PT_X86_64
> + select IOMMU_IOVA
> + default HYPERV
> + help
> + Para-virtualized IOMMU driver for Linux guests running on
> + Microsoft Hyper-V. Provides DMA remapping and IOTLB
> + flush support to enable DMA isolation for devices
> + assigned to the guest.
> +endif
> diff --git a/drivers/iommu/hyperv/Makefile b/drivers/iommu/hyperv/Makefile
> index 9f557bad94ff..8669741c0a51 100644
> --- a/drivers/iommu/hyperv/Makefile
> +++ b/drivers/iommu/hyperv/Makefile
> @@ -1,2 +1,3 @@
> # SPDX-License-Identifier: GPL-2.0
> obj-$(CONFIG_HYPERV_IOMMU) += irq_remapping.o
> +obj-$(CONFIG_HYPERV_PVIOMMU) += iommu.o
> diff --git a/drivers/iommu/hyperv/iommu.c b/drivers/iommu/hyperv/iommu.c
> new file mode 100644
> index 000000000000..e5fc625314b5
> --- /dev/null
> +++ b/drivers/iommu/hyperv/iommu.c
> @@ -0,0 +1,705 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +/*
> + * Hyper-V IOMMU driver.
> + *
> + * Copyright (C) 2019, 2024-2026 Microsoft, Inc.
> + */
> +
> +#define pr_fmt(fmt) "Hyper-V pvIOMMU: " fmt
> +#define dev_fmt(fmt) pr_fmt(fmt)
> +
> +#include <linux/iommu.h>
> +#include <linux/pci.h>
> +#include <linux/dma-map-ops.h>
> +#include <linux/generic_pt/iommu.h>
> +#include <linux/pci-ats.h>
> +
> +#include <asm/iommu.h>
> +#include <asm/hypervisor.h>
> +#include <asm/mshyperv.h>
> +
> +#include "iommu.h"
> +#include "../iommu-pages.h"
> +
> +struct hv_iommu_dev *hv_iommu_device;
> +
> +/*
> + * Identity and blocking domains are static singletons: identity is a 1:1
> + * passthrough with no page table, blocking rejects all DMA. Neither holds
> + * per-IOMMU state, so one instance suffices even with multiple vIOMMUs.
> + */
> +static struct hv_iommu_domain hv_identity_domain;
> +static struct hv_iommu_domain hv_blocking_domain;
> +static const struct iommu_domain_ops hv_iommu_identity_domain_ops;
> +static const struct iommu_domain_ops hv_iommu_blocking_domain_ops;
> +static struct iommu_ops hv_iommu_ops;
> +static LIST_HEAD(hv_iommu_pci_bus_list);
> +static DEFINE_SPINLOCK(hv_iommu_pci_bus_lock);
> +
> +#define hv_iommu_present(iommu_cap) (iommu_cap & HV_IOMMU_CAP_PRESENT)
> +#define hv_iommu_s1_domain_supported(iommu_cap) (iommu_cap & HV_IOMMU_CAP_S1)
> +#define hv_iommu_5lvl_supported(iommu_cap) (iommu_cap & HV_IOMMU_CAP_S1_5LVL)
> +#define hv_iommu_ats_supported(iommu_cap) (iommu_cap & HV_IOMMU_CAP_ATS)
> +
> +int hv_iommu_register_pci_bus(int pci_domain_nr, u32 logical_dev_id_prefix)
> +{
> + struct hv_pci_busdata *bus, *new;
> + int ret = 0;
> +
> + if (no_iommu || !iommu_detected)
> + return 0;
> +
> + new = kzalloc_obj(*new, GFP_KERNEL);
> + if (!new)
> + return -ENOMEM;
> +
> + spin_lock(&hv_iommu_pci_bus_lock);
> + list_for_each_entry(bus, &hv_iommu_pci_bus_list, list) {
> + if (bus->pci_domain_nr != pci_domain_nr)
> + continue;
> +
> + if (bus->logical_dev_id_prefix != logical_dev_id_prefix) {
> + pr_err("stale registration for PCI domain %d (old prefix 0x%08x, new 0x%08x)\n",
> + pci_domain_nr, bus->logical_dev_id_prefix,
> + logical_dev_id_prefix);
> + ret = -EEXIST;
> + }
> +
> + goto out_free;
> + }
> +
> + new->pci_domain_nr = pci_domain_nr;
> + new->logical_dev_id_prefix = logical_dev_id_prefix;
> + list_add(&new->list, &hv_iommu_pci_bus_list);
> + spin_unlock(&hv_iommu_pci_bus_lock);
> + return 0;
> +
> +out_free:
> + spin_unlock(&hv_iommu_pci_bus_lock);
> + kfree(new);
> + return ret;
> +}
> +EXPORT_SYMBOL_FOR_MODULES(hv_iommu_register_pci_bus, "pci-hyperv");
> +
> +void hv_iommu_unregister_pci_bus(int pci_domain_nr)
> +{
> + struct hv_pci_busdata *bus, *tmp;
> +
> + spin_lock(&hv_iommu_pci_bus_lock);
> + list_for_each_entry_safe(bus, tmp, &hv_iommu_pci_bus_list, list) {
> + if (bus->pci_domain_nr == pci_domain_nr) {
> + list_del(&bus->list);
> + kfree(bus);
> + break;
> + }
> + }
> + spin_unlock(&hv_iommu_pci_bus_lock);
> +}
> +EXPORT_SYMBOL_FOR_MODULES(hv_iommu_unregister_pci_bus, "pci-hyperv");
> +
> +/*
> + * Look up the logical device ID for a vPCI device. Returns 0 on success
> + * with *logical_id filled in; -ENODEV if no entry registered for this
> + * device's vPCI bus.
> + */
> +static int hv_iommu_lookup_logical_dev_id(struct pci_dev *pdev, u64 *logical_id)
> +{
> + struct hv_pci_busdata *bus;
> + int domain = pci_domain_nr(pdev->bus);
> + int ret = -ENODEV;
> +
> + spin_lock(&hv_iommu_pci_bus_lock);
> + list_for_each_entry(bus, &hv_iommu_pci_bus_list, list) {
> + if (bus->pci_domain_nr == domain) {
> + *logical_id = (u64)bus->logical_dev_id_prefix |
> + PCI_FUNC(pdev->devfn);
> + ret = 0;
> + break;
> + }
> + }
> + spin_unlock(&hv_iommu_pci_bus_lock);
> + return ret;
> +}
> +
> +static int hv_create_device_domain(struct hv_iommu_domain *hv_domain, u32 domain_stage)
> +{
> + int ret;
> + u64 status;
> + unsigned long flags;
> + struct hv_input_create_device_domain *input;
> +
> + ret = ida_alloc_range(&hv_iommu_device->domain_ids,
> + hv_iommu_device->first_domain, hv_iommu_device->last_domain,
> + GFP_KERNEL);
> + if (ret < 0)
> + return ret;
> +
> + hv_domain->device_domain.partition_id = HV_PARTITION_ID_SELF;
> + hv_domain->device_domain.domain_id.type = domain_stage;
> + hv_domain->device_domain.domain_id.id = ret;
> + hv_domain->hv_iommu = hv_iommu_device;
> +
> + local_irq_save(flags);
> +
> + input = *this_cpu_ptr(hyperv_pcpu_input_arg);
> + memset(input, 0, sizeof(*input));
> + input->device_domain = hv_domain->device_domain;
> + input->create_device_domain_flags.forward_progress_required = 1;
> + input->create_device_domain_flags.inherit_owning_vtl = 0;
> + status = hv_do_hypercall(HVCALL_CREATE_DEVICE_DOMAIN, input, NULL);
> +
> + local_irq_restore(flags);
> +
> + if (!hv_result_success(status)) {
> + pr_err("HVCALL_CREATE_DEVICE_DOMAIN failed, status %lld\n", status);
> + ida_free(&hv_iommu_device->domain_ids, hv_domain->device_domain.domain_id.id);
> + }
> +
> + return hv_result_to_errno(status);
> +}
> +
> +static void hv_delete_device_domain(struct hv_iommu_domain *hv_domain)
> +{
> + u64 status;
> + unsigned long flags;
> + struct hv_input_delete_device_domain *input;
> +
> + local_irq_save(flags);
> +
> + input = *this_cpu_ptr(hyperv_pcpu_input_arg);
> + memset(input, 0, sizeof(*input));
> + input->device_domain = hv_domain->device_domain;
> + status = hv_do_hypercall(HVCALL_DELETE_DEVICE_DOMAIN, input, NULL);
> +
> + local_irq_restore(flags);
> +
> + if (!hv_result_success(status))
> + pr_err("HVCALL_DELETE_DEVICE_DOMAIN failed, status %lld\n", status);
> +
> + ida_free(&hv_domain->hv_iommu->domain_ids, hv_domain->device_domain.domain_id.id);
> +}
> +
> +static bool hv_iommu_capable(struct device *dev, enum iommu_cap cap)
> +{
> + switch (cap) {
> + case IOMMU_CAP_CACHE_COHERENCY:
> + return true;
> + case IOMMU_CAP_DEFERRED_FLUSH:
> + return true;
> + default:
> + return false;
> + }
> +}
> +
> +static void hv_flush_device_domain(struct hv_iommu_domain *hv_domain)
> +{
> + u64 status;
> + unsigned long flags;
> + struct hv_input_flush_device_domain *input;
> +
> + local_irq_save(flags);
> +
> + input = *this_cpu_ptr(hyperv_pcpu_input_arg);
> + memset(input, 0, sizeof(*input));
> + input->device_domain = hv_domain->device_domain;
The previous version of this patch had code to set several other fields in
the input. I wanted to confirm that not setting them in this version is
intentional. Were they not needed?
> + status = hv_do_hypercall(HVCALL_FLUSH_DEVICE_DOMAIN, input, NULL);
> +
> + local_irq_restore(flags);
> +
> + if (!hv_result_success(status))
> + pr_err("HVCALL_FLUSH_DEVICE_DOMAIN failed, status %lld\n", status);
> +}
> +
> +static void hv_iommu_detach_dev(struct iommu_domain *domain, struct device *dev)
> +{
> + u64 status;
> + unsigned long flags;
> + struct hv_input_detach_device_domain *input;
> + struct pci_dev *pdev;
> + struct hv_iommu_domain *hv_domain = to_hv_iommu_domain(domain);
> + struct hv_iommu_endpoint *vdev = dev_iommu_priv_get(dev);
> +
> + /* See the attach function, only PCI devices for now */
> + if (!dev_is_pci(dev) || vdev->hv_domain != hv_domain)
> + return;
Are these sanity checks necessary? The only caller is hv_iommu_attach_dev()
and it has already done the checks.
> +
> + pdev = to_pci_dev(dev);
> +
> + dev_dbg(dev, "detaching from domain %d\n", hv_domain->device_domain.domain_id.id);
> +
> + local_irq_save(flags);
> +
> + input = *this_cpu_ptr(hyperv_pcpu_input_arg);
> + memset(input, 0, sizeof(*input));
> + input->partition_id = HV_PARTITION_ID_SELF;
> + if (hv_iommu_lookup_logical_dev_id(pdev, &input->device_id.as_uint64)) {
As Sashiko and Jacob Pan pointed out, doing the lookup while interrupts are disabled
is problematic. My suggestion would be to just do the lookup into a local variable
before disabling interrupts (rather than using a raw spin lock as Jacob suggested).
Same situation occurs in hv_iommu_attach_dev() and
hv_iommu_get_logical_device_property().
> + local_irq_restore(flags);
> + dev_warn(&pdev->dev, "no IOMMU registration for vPCI bus on detach\n");
> + return;
> + }
> + status = hv_do_hypercall(HVCALL_DETACH_DEVICE_DOMAIN, input, NULL);
> +
> + local_irq_restore(flags);
> +
> + if (!hv_result_success(status))
> + pr_err("HVCALL_DETACH_DEVICE_DOMAIN failed, status %lld\n", status);
> +
> + hv_flush_device_domain(hv_domain);
> +
> + vdev->hv_domain = NULL;
> +}
> +
> +static int hv_iommu_attach_dev(struct iommu_domain *domain, struct device *dev,
> + struct iommu_domain *old)
> +{
> + u64 status;
> + unsigned long flags;
> + struct pci_dev *pdev;
> + struct hv_input_attach_device_domain *input;
> + struct hv_iommu_endpoint *vdev = dev_iommu_priv_get(dev);
> + struct hv_iommu_domain *hv_domain = to_hv_iommu_domain(domain);
> + int ret;
> +
> + /* Only allow PCI devices for now */
> + if (!dev_is_pci(dev))
> + return -EINVAL;
> +
> + if (vdev->hv_domain == hv_domain)
> + return 0;
> +
> + if (vdev->hv_domain)
> + hv_iommu_detach_dev(&vdev->hv_domain->domain, dev);
> +
> + pdev = to_pci_dev(dev);
> + dev_dbg(dev, "attaching to domain %d\n",
> + hv_domain->device_domain.domain_id.id);
> +
> + local_irq_save(flags);
> +
> + input = *this_cpu_ptr(hyperv_pcpu_input_arg);
> + memset(input, 0, sizeof(*input));
> + input->device_domain = hv_domain->device_domain;
> + ret = hv_iommu_lookup_logical_dev_id(pdev, &input->device_id.as_uint64);
> + if (ret) {
> + local_irq_restore(flags);
> + dev_err(&pdev->dev, "no IOMMU registration for vPCI bus\n");
> + return ret;
> + }
> + status = hv_do_hypercall(HVCALL_ATTACH_DEVICE_DOMAIN, input, NULL);
> +
> + local_irq_restore(flags);
> +
> + if (!hv_result_success(status))
> + pr_err("HVCALL_ATTACH_DEVICE_DOMAIN failed, status %lld\n", status);
> + else
> + vdev->hv_domain = hv_domain;
> +
> + return hv_result_to_errno(status);
> +}
> +
> +static int hv_iommu_get_logical_device_property(struct device *dev,
> + u32 code,
> + struct hv_output_get_logical_device_property *property)
> +{
> + u64 status, lid;
> + unsigned long flags;
> + int ret;
> + struct hv_input_get_logical_device_property *input;
> + struct hv_output_get_logical_device_property *output;
> +
> + local_irq_save(flags);
> +
> + input = *this_cpu_ptr(hyperv_pcpu_input_arg);
> + output = *this_cpu_ptr(hyperv_pcpu_input_arg) + sizeof(*input);
Nit: The other way to set output is:
output = input + 1;
I think this produces slightly better code because it avoids
referencing the per-cpu variable hyperv_pcpu_input_arg a second time.
> + memset(input, 0, sizeof(*input));
> + input->partition_id = HV_PARTITION_ID_SELF;
> + ret = hv_iommu_lookup_logical_dev_id(to_pci_dev(dev), &lid);
> + if (ret) {
> + local_irq_restore(flags);
> + return ret;
> + }
> + input->logical_device_id = lid;
> + input->code = code;
> + status = hv_do_hypercall(HVCALL_GET_LOGICAL_DEVICE_PROPERTY, input, output);
> + *property = *output;
> +
> + local_irq_restore(flags);
> +
> + if (!hv_result_success(status))
> + pr_err("HVCALL_GET_LOGICAL_DEVICE_PROPERTY failed, status %lld\n", status);
> +
> + return hv_result_to_errno(status);
> +}
> +
> +static struct iommu_device *hv_iommu_probe_device(struct device *dev)
> +{
> + struct pci_dev *pdev;
> + struct hv_iommu_endpoint *vdev;
> + struct hv_output_get_logical_device_property device_iommu_property = {0};
> +
> + if (!dev_is_pci(dev))
> + return ERR_PTR(-ENODEV);
> +
> + pdev = to_pci_dev(dev);
> +
> + if (hv_iommu_get_logical_device_property(dev,
> + HV_LOGICAL_DEVICE_PROPERTY_PVIOMMU,
> + &device_iommu_property) ||
> + !(device_iommu_property.device_iommu & HV_DEVICE_IOMMU_ENABLED))
> + return ERR_PTR(-ENODEV);
> +
> + vdev = kzalloc_obj(*vdev, GFP_KERNEL);
> + if (!vdev)
> + return ERR_PTR(-ENOMEM);
> +
> + vdev->dev = dev;
> + vdev->hv_iommu = hv_iommu_device;
> + dev_iommu_priv_set(dev, vdev);
> +
> + if (hv_iommu_ats_supported(hv_iommu_device->cap) &&
> + pci_ats_supported(pdev))
> + pci_enable_ats(pdev, __ffs(hv_iommu_device->pgsize_bitmap));
> +
> + return &vdev->hv_iommu->iommu;
> +}
> +
> +static void hv_iommu_release_device(struct device *dev)
> +{
> + struct hv_iommu_endpoint *vdev = dev_iommu_priv_get(dev);
> + struct pci_dev *pdev = to_pci_dev(dev);
> +
> + if (pdev->ats_enabled)
> + pci_disable_ats(pdev);
> +
> + dev_iommu_priv_set(dev, NULL);
> + set_dma_ops(dev, NULL);
Previous versions of this function did hv_iommu_detach_dev(). With that call
removed from here, hv_iommu_detach_dev() is only called when attaching a
domain to a device that already has a domain attached. Is it the case that
Hyper-V doesn't require the detach as a cleanup step?
> +
> + kfree(vdev);
> +}
> +
> +static struct iommu_group *hv_iommu_device_group(struct device *dev)
> +{
> + if (dev_is_pci(dev))
> + return pci_device_group(dev);
> + else
> + return generic_device_group(dev);
> +}
> +
> +static int hv_configure_device_domain(struct hv_iommu_domain *hv_domain, u32 domain_type)
> +{
> + u64 status;
> + unsigned long flags;
> + struct pt_iommu_x86_64_hw_info pt_info;
> + struct hv_input_configure_device_domain *input;
> +
> + local_irq_save(flags);
> +
> + input = *this_cpu_ptr(hyperv_pcpu_input_arg);
> + memset(input, 0, sizeof(*input));
> + input->device_domain = hv_domain->device_domain;
> + input->settings.flags.blocked = (domain_type == IOMMU_DOMAIN_BLOCKED);
> + input->settings.flags.translation_enabled = (domain_type != IOMMU_DOMAIN_IDENTITY);
> +
> + if (domain_type & __IOMMU_DOMAIN_PAGING) {
> + pt_iommu_x86_64_hw_info(&hv_domain->pt_iommu_x86_64, &pt_info);
> + input->settings.page_table_root = pt_info.gcr3_pt;
> + input->settings.flags.first_stage_paging_mode =
> + pt_info.levels == 5;
> + }
> + status = hv_do_hypercall(HVCALL_CONFIGURE_DEVICE_DOMAIN, input, NULL);
> +
> + local_irq_restore(flags);
> +
> + if (!hv_result_success(status))
> + pr_err("HVCALL_CONFIGURE_DEVICE_DOMAIN failed, status %lld\n", status);
> +
> + return hv_result_to_errno(status);
> +}
> +
> +static int __init hv_initialize_static_domains(void)
> +{
> + int ret;
> + struct hv_iommu_domain *hv_domain;
> +
> + /* Default stage-1 identity domain */
> + hv_domain = &hv_identity_domain;
> +
> + ret = hv_create_device_domain(hv_domain, HV_DEVICE_DOMAIN_TYPE_S1);
> + if (ret)
> + return ret;
> +
> + ret = hv_configure_device_domain(hv_domain, IOMMU_DOMAIN_IDENTITY);
> + if (ret)
> + goto delete_identity_domain;
> +
> + hv_domain->domain.type = IOMMU_DOMAIN_IDENTITY;
> + hv_domain->domain.ops = &hv_iommu_identity_domain_ops;
> + hv_domain->domain.owner = &hv_iommu_ops;
> + hv_domain->domain.geometry = hv_iommu_device->geometry;
> + hv_domain->domain.pgsize_bitmap = hv_iommu_device->pgsize_bitmap;
> +
> + /* Default stage-1 blocked domain */
> + hv_domain = &hv_blocking_domain;
> +
> + ret = hv_create_device_domain(hv_domain, HV_DEVICE_DOMAIN_TYPE_S1);
> + if (ret)
> + goto delete_identity_domain;
> +
> + ret = hv_configure_device_domain(hv_domain, IOMMU_DOMAIN_BLOCKED);
> + if (ret)
> + goto delete_blocked_domain;
> +
> + hv_domain->domain.type = IOMMU_DOMAIN_BLOCKED;
> + hv_domain->domain.ops = &hv_iommu_blocking_domain_ops;
> + hv_domain->domain.owner = &hv_iommu_ops;
> + hv_domain->domain.geometry = hv_iommu_device->geometry;
> + hv_domain->domain.pgsize_bitmap = hv_iommu_device->pgsize_bitmap;
> +
> + return 0;
> +
> +delete_blocked_domain:
> + hv_delete_device_domain(&hv_blocking_domain);
> +delete_identity_domain:
> + hv_delete_device_domain(&hv_identity_domain);
> + return ret;
> +}
> +
> +#define INTERRUPT_RANGE_START (0xfee00000)
> +#define INTERRUPT_RANGE_END (0xfeefffff)
> +static void hv_iommu_get_resv_regions(struct device *dev,
> + struct list_head *head)
> +{
> + struct iommu_resv_region *region;
> +
> + region = iommu_alloc_resv_region(INTERRUPT_RANGE_START,
> + INTERRUPT_RANGE_END - INTERRUPT_RANGE_START + 1,
> + 0, IOMMU_RESV_MSI, GFP_KERNEL);
> + if (!region)
> + return;
> +
> + list_add_tail(®ion->list, head);
> +}
> +
> +static void hv_iommu_flush_iotlb_all(struct iommu_domain *domain)
> +{
> + hv_flush_device_domain(to_hv_iommu_domain(domain));
> +}
> +
> +static void hv_iommu_iotlb_sync(struct iommu_domain *domain,
> + struct iommu_iotlb_gather *iotlb_gather)
> +{
> + hv_flush_device_domain(to_hv_iommu_domain(domain));
> +
> + iommu_put_pages_list(&iotlb_gather->freelist);
> +}
> +
> +static void hv_iommu_paging_domain_free(struct iommu_domain *domain)
> +{
> + struct hv_iommu_domain *hv_domain = to_hv_iommu_domain(domain);
> +
> + /* Free all remaining mappings */
> + pt_iommu_deinit(&hv_domain->pt_iommu);
> +
> + hv_delete_device_domain(hv_domain);
> +
> + kfree(hv_domain);
> +}
> +
> +static const struct iommu_domain_ops hv_iommu_identity_domain_ops = {
> + .attach_dev = hv_iommu_attach_dev,
> +};
> +
> +static const struct iommu_domain_ops hv_iommu_blocking_domain_ops = {
> + .attach_dev = hv_iommu_attach_dev,
> +};
> +
> +static const struct iommu_domain_ops hv_iommu_paging_domain_ops = {
> + .attach_dev = hv_iommu_attach_dev,
> + IOMMU_PT_DOMAIN_OPS(x86_64),
> + .flush_iotlb_all = hv_iommu_flush_iotlb_all,
> + .iotlb_sync = hv_iommu_iotlb_sync,
> + .free = hv_iommu_paging_domain_free,
> +};
> +
> +static struct iommu_domain *hv_iommu_domain_alloc_paging(struct device *dev)
> +{
> + int ret;
> + struct hv_iommu_domain *hv_domain;
> + struct pt_iommu_x86_64_cfg cfg = {};
> +
> + hv_domain = kzalloc_obj(*hv_domain, GFP_KERNEL);
> + if (!hv_domain)
> + return ERR_PTR(-ENOMEM);
> +
> + ret = hv_create_device_domain(hv_domain, HV_DEVICE_DOMAIN_TYPE_S1);
> + if (ret) {
> + kfree(hv_domain);
> + return ERR_PTR(ret);
> + }
> +
> + hv_domain->domain.geometry = hv_iommu_device->geometry;
> + hv_domain->pt_iommu.nid = dev_to_node(dev);
> +
> + cfg.common.hw_max_vasz_lg2 = hv_iommu_device->max_iova_width;
> + cfg.common.hw_max_oasz_lg2 = 52;
> + cfg.top_level = (hv_iommu_device->max_iova_width > 48) ? 4 : 3;
> +
> + ret = pt_iommu_x86_64_init(&hv_domain->pt_iommu_x86_64, &cfg, GFP_KERNEL);
> + if (ret) {
> + hv_delete_device_domain(hv_domain);
> + kfree(hv_domain);
> + return ERR_PTR(ret);
> + }
> +
> + /* Constrain to page sizes the hypervisor supports */
> + hv_domain->domain.pgsize_bitmap &= hv_iommu_device->pgsize_bitmap;
> +
> + hv_domain->domain.ops = &hv_iommu_paging_domain_ops;
> +
> + ret = hv_configure_device_domain(hv_domain, __IOMMU_DOMAIN_PAGING);
> + if (ret) {
> + pt_iommu_deinit(&hv_domain->pt_iommu);
> + hv_delete_device_domain(hv_domain);
> + kfree(hv_domain);
> + return ERR_PTR(ret);
> + }
> +
> + return &hv_domain->domain;
I think this function would be better if the error paths used "goto"
with a cascading set of error labels. That's the typical kernel pattern,
and it's what you use in hv_iommu_init(), for example.
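Roughly (untested sketch; the label names are arbitrary):

	ret = hv_create_device_domain(hv_domain, HV_DEVICE_DOMAIN_TYPE_S1);
	if (ret)
		goto err_free;
	...
	ret = pt_iommu_x86_64_init(&hv_domain->pt_iommu_x86_64, &cfg, GFP_KERNEL);
	if (ret)
		goto err_delete_domain;
	...
	ret = hv_configure_device_domain(hv_domain, __IOMMU_DOMAIN_PAGING);
	if (ret)
		goto err_deinit_pt;

	return &hv_domain->domain;

err_deinit_pt:
	pt_iommu_deinit(&hv_domain->pt_iommu);
err_delete_domain:
	hv_delete_device_domain(hv_domain);
err_free:
	kfree(hv_domain);
	return ERR_PTR(ret);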
> +}
> +
> +static struct iommu_ops hv_iommu_ops = {
> + .capable = hv_iommu_capable,
> + .domain_alloc_paging = hv_iommu_domain_alloc_paging,
> + .probe_device = hv_iommu_probe_device,
> + .release_device = hv_iommu_release_device,
> + .device_group = hv_iommu_device_group,
> + .get_resv_regions = hv_iommu_get_resv_regions,
> + .owner = THIS_MODULE,
> + .identity_domain = &hv_identity_domain.domain,
> + .blocked_domain = &hv_blocking_domain.domain,
> + .release_domain = &hv_blocking_domain.domain,
> +};
> +
> +static int hv_iommu_detect(struct hv_output_get_iommu_capabilities *hv_iommu_cap)
> +{
> + u64 status;
> + unsigned long flags;
> + struct hv_input_get_iommu_capabilities *input;
> + struct hv_output_get_iommu_capabilities *output;
> +
> + local_irq_save(flags);
> +
> + input = *this_cpu_ptr(hyperv_pcpu_input_arg);
> + output = *this_cpu_ptr(hyperv_pcpu_input_arg) + sizeof(*input);
Potentially use "output = input + 1" here as well.
> + memset(input, 0, sizeof(*input));
> + input->partition_id = HV_PARTITION_ID_SELF;
> + status = hv_do_hypercall(HVCALL_GET_IOMMU_CAPABILITIES, input, output);
> + *hv_iommu_cap = *output;
> +
> + local_irq_restore(flags);
> +
> + if (!hv_result_success(status))
> + pr_err("HVCALL_GET_IOMMU_CAPABILITIES failed, status %lld\n", status);
> +
> + return hv_result_to_errno(status);
> +}
> +
> +static void __init hv_init_iommu_device(struct hv_iommu_dev *hv_iommu,
> + struct hv_output_get_iommu_capabilities *hv_iommu_cap)
> +{
> + ida_init(&hv_iommu->domain_ids);
> +
> + hv_iommu->cap = hv_iommu_cap->iommu_cap;
> + hv_iommu->max_iova_width = hv_iommu_cap->max_iova_width;
> + if (!hv_iommu_5lvl_supported(hv_iommu->cap) &&
> + hv_iommu->max_iova_width > 48) {
> + pr_info("5-level paging not supported, limiting iova width to 48.\n");
> + hv_iommu->max_iova_width = 48;
> + }
> +
> + hv_iommu->geometry = (struct iommu_domain_geometry) {
> + .aperture_start = 0,
> + .aperture_end = (((u64)1) << hv_iommu->max_iova_width) - 1,
> + .force_aperture = true,
> + };
> +
> + hv_iommu->first_domain = HV_DEVICE_DOMAIN_ID_DEFAULT + 1;
> + hv_iommu->last_domain = HV_DEVICE_DOMAIN_ID_NULL - 1;
> + /* Only x86 page sizes (4K/2M/1G) are supported */
> + hv_iommu->pgsize_bitmap = hv_iommu_cap->pgsize_bitmap &
> + (SZ_4K | SZ_2M | SZ_1G);
> + if (hv_iommu->pgsize_bitmap != hv_iommu_cap->pgsize_bitmap)
> + pr_warn("unsupported page sizes masked: 0x%llx -> 0x%llx\n",
> + hv_iommu_cap->pgsize_bitmap, hv_iommu->pgsize_bitmap);
> + if (!hv_iommu->pgsize_bitmap) {
> + pr_warn("no supported page sizes, defaulting to 4K\n");
> + hv_iommu->pgsize_bitmap = SZ_4K;
> + }
> + hv_iommu_device = hv_iommu;
> +}
> +
> +int __init hv_iommu_init(void)
> +{
> + int ret = 0;
> + struct hv_iommu_dev *hv_iommu = NULL;
> + struct hv_output_get_iommu_capabilities hv_iommu_cap = {0};
> +
> + if (no_iommu || iommu_detected)
> + return -ENODEV;
> +
> + if (!hv_is_hyperv_initialized())
> + return -ENODEV;
> +
> + ret = hv_iommu_detect(&hv_iommu_cap);
> + if (ret) {
> + pr_err("HVCALL_GET_IOMMU_CAPABILITIES failed: %d\n", ret);
> + return -ENODEV;
> + }
> +
> + if (!hv_iommu_present(hv_iommu_cap.iommu_cap) ||
> + !hv_iommu_s1_domain_supported(hv_iommu_cap.iommu_cap)) {
> + pr_err("IOMMU capabilities not sufficient: cap=0x%llx\n",
> + hv_iommu_cap.iommu_cap);
> + return -ENODEV;
> + }
> +
> + iommu_detected = 1;
> + pci_request_acs();
> +
> + hv_iommu = kzalloc_obj(*hv_iommu, GFP_KERNEL);
> + if (!hv_iommu)
> + return -ENOMEM;
> +
> + hv_init_iommu_device(hv_iommu, &hv_iommu_cap);
> +
> + ret = hv_initialize_static_domains();
> + if (ret) {
> + pr_err("static domains init failed: %d\n", ret);
> + goto err_free;
> + }
> +
> + ret = iommu_device_sysfs_add(&hv_iommu->iommu, NULL, NULL, "%s", "hv-iommu");
> + if (ret) {
> + pr_err("iommu_device_sysfs_add failed: %d\n", ret);
> + goto err_delete_static_domains;
> + }
> +
> + ret = iommu_device_register(&hv_iommu->iommu, &hv_iommu_ops, NULL);
> + if (ret) {
> + pr_err("iommu_device_register failed: %d\n", ret);
> + goto err_sysfs_remove;
> + }
> +
> + pr_info("successfully initialized\n");
> + return 0;
> +
> +err_sysfs_remove:
> + iommu_device_sysfs_remove(&hv_iommu->iommu);
> +err_delete_static_domains:
> + hv_delete_device_domain(&hv_blocking_domain);
> + hv_delete_device_domain(&hv_identity_domain);
> +err_free:
> + kfree(hv_iommu);
> + return ret;
> +}
> diff --git a/drivers/iommu/hyperv/iommu.h b/drivers/iommu/hyperv/iommu.h
> new file mode 100644
> index 000000000000..43f20d371245
> --- /dev/null
> +++ b/drivers/iommu/hyperv/iommu.h
> @@ -0,0 +1,54 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +/*
> + * Hyper-V IOMMU driver.
> + *
> + * Copyright (C) 2024-2025, Microsoft, Inc.
> + *
> + */
> +
> +#ifndef _HYPERV_IOMMU_H
> +#define _HYPERV_IOMMU_H
> +
> +struct hv_iommu_dev {
> + struct iommu_device iommu;
> + struct ida domain_ids;
> +
> + /* Device configuration */
> + u8 max_iova_width;
> + u8 max_pasid_width;
> + u64 cap;
> + u64 pgsize_bitmap;
> +
> + struct iommu_domain_geometry geometry;
> + u64 first_domain;
> + u64 last_domain;
> +};
> +
> +struct hv_iommu_domain {
> + union {
> + struct iommu_domain domain;
> + struct pt_iommu pt_iommu;
> + struct pt_iommu_x86_64 pt_iommu_x86_64;
> + };
> + struct hv_iommu_dev *hv_iommu;
> + struct hv_input_device_domain device_domain;
> + u64 pgsize_bitmap;
> +};
> +
> +struct hv_pci_busdata {
> + int pci_domain_nr;
> + u32 logical_dev_id_prefix;
> + struct list_head list;
> +};
> +
> +struct hv_iommu_endpoint {
> + struct device *dev;
> + struct hv_iommu_dev *hv_iommu;
> + struct hv_iommu_domain *hv_domain;
> +};
> +
> +#define to_hv_iommu_domain(d) \
> + container_of(d, struct hv_iommu_domain, domain)
> +
> +#endif /* _HYPERV_IOMMU_H */
> diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
> index cfc8fa403dad..a4af9c8c2220 100644
> --- a/drivers/pci/controller/pci-hyperv.c
> +++ b/drivers/pci/controller/pci-hyperv.c
> @@ -3715,6 +3715,7 @@ static int hv_pci_probe(struct hv_device *hdev,
> struct hv_pcibus_device *hbus;
> int ret, dom;
> u16 dom_req;
> + u32 prefix;
> char *name;
>
> bridge = devm_pci_alloc_host_bridge(&hdev->device, 0);
> @@ -3857,13 +3858,25 @@ static int hv_pci_probe(struct hv_device *hdev,
>
> hbus->state = hv_pcibus_probed;
>
> - ret = create_root_hv_pci_bus(hbus);
> + /* Notify pvIOMMU before any device on the bus is scanned. */
> + prefix = (hdev->dev_instance.b[5] << 24) |
> + (hdev->dev_instance.b[4] << 16) |
> + (hdev->dev_instance.b[7] << 8) |
> + (hdev->dev_instance.b[6] & 0xf8);
This assembling of the logical device id prefix duplicates the
code in hv_irq_retarget_interrupt(). Could this code save the
prefix in struct hv_pcibus_device, and then have
hv_irq_retarget_interrupt() use it? Then it would be clear
that HVCALL_RETARGET_INTERRUPT is using exactly the same
logical device id as the IOMMU hypercalls.
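Something like this, perhaps (the field name is just a suggestion):

	/* in struct hv_pcibus_device */
	u32 logical_dev_id_prefix;

	/* in hv_pci_probe(), once per bus */
	hbus->logical_dev_id_prefix = prefix;

and then hv_irq_retarget_interrupt() could read
hbus->logical_dev_id_prefix instead of re-assembling the bytes.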
> +
> + ret = hv_iommu_register_pci_bus(dom, prefix);
> if (ret)
> goto free_windows;
>
> + ret = create_root_hv_pci_bus(hbus);
> + if (ret)
> + goto unregister_pviommu;
> +
> mutex_unlock(&hbus->state_lock);
> return 0;
>
> +unregister_pviommu:
> + hv_iommu_unregister_pci_bus(dom);
> free_windows:
> hv_pci_free_bridge_windows(hbus);
> exit_d0:
> @@ -3974,8 +3987,10 @@ static int hv_pci_bus_exit(struct hv_device *hdev, bool
> keep_devs)
> static void hv_pci_remove(struct hv_device *hdev)
> {
> struct hv_pcibus_device *hbus;
> + int dom;
>
> hbus = hv_get_drvdata(hdev);
> + dom = hbus->bridge->domain_nr;
Nit: Setting "dom" here feels a little weird because the value is only needed
under the "if" statement. The value must be read before the root bus is
removed, but even so moving it under the "if" statement would make more
sense to me.
> if (hbus->state == hv_pcibus_installed) {
> tasklet_disable(&hdev->channel->callback_event);
> hbus->state = hv_pcibus_removing;
> @@ -3994,6 +4009,8 @@ static void hv_pci_remove(struct hv_device *hdev)
> hv_pci_remove_slots(hbus);
> pci_remove_root_bus(hbus->bridge->bus);
> pci_unlock_rescan_remove();
> +
> + hv_iommu_unregister_pci_bus(dom);
> }
>
> hv_pci_bus_exit(hdev, false);
> diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
> index bf601d67cecb..b71345c74568 100644
> --- a/include/asm-generic/mshyperv.h
> +++ b/include/asm-generic/mshyperv.h
> @@ -73,6 +73,18 @@ extern enum hv_partition_type hv_curr_partition_type;
> extern void * __percpu *hyperv_pcpu_input_arg;
> extern void * __percpu *hyperv_pcpu_output_arg;
>
> +#ifdef CONFIG_HYPERV_PVIOMMU
> +int hv_iommu_register_pci_bus(int pci_domain_nr, u32 logical_dev_id_prefix);
> +void hv_iommu_unregister_pci_bus(int pci_domain_nr);
> +#else
> +static inline int hv_iommu_register_pci_bus(int pci_domain_nr,
> + u32 logical_dev_id_prefix)
> +{
> + return 0;
> +}
> +static inline void hv_iommu_unregister_pci_bus(int pci_domain_nr) { }
> +#endif
> +
> u64 hv_do_hypercall(u64 control, void *inputaddr, void *outputaddr);
> u64 hv_do_fast_hypercall8(u16 control, u64 input8);
> u64 hv_do_fast_hypercall16(u16 control, u64 input1, u64 input2);
> --
> 2.52.0
>
* RE: [PATCH v1 4/4] iommu/hyperv: Add page-selective IOTLB flush support
2026-05-11 16:24 ` [PATCH v1 4/4] iommu/hyperv: Add page-selective IOTLB flush support Yu Zhang
2026-05-12 23:45 ` sashiko-bot
@ 2026-05-14 18:14 ` Michael Kelley
2026-05-14 21:16 ` Michael Kelley
2026-05-15 16:23 ` Yu Zhang
1 sibling, 2 replies; 19+ messages in thread
From: Michael Kelley @ 2026-05-14 18:14 UTC (permalink / raw)
To: Yu Zhang, linux-kernel@vger.kernel.org,
linux-hyperv@vger.kernel.org, iommu@lists.linux.dev,
linux-pci@vger.kernel.org, linux-arch@vger.kernel.org
Cc: wei.liu@kernel.org, kys@microsoft.com, haiyangz@microsoft.com,
decui@microsoft.com, longli@microsoft.com, joro@8bytes.org,
will@kernel.org, robin.murphy@arm.com, bhelgaas@google.com,
kwilczynski@kernel.org, lpieralisi@kernel.org, mani@kernel.org,
robh@kernel.org, arnd@arndb.de, jgg@ziepe.ca, Michael Kelley,
jacob.pan@linux.microsoft.com, tgopinath@linux.microsoft.com,
easwar.hariharan@linux.microsoft.com
From: Yu Zhang <zhangyu1@linux.microsoft.com> Sent: Monday, May 11, 2026 9:24 AM
>
> Add page-selective IOTLB flush using HVCALL_FLUSH_DEVICE_DOMAIN_LIST.
> This hypercall accepts a list of (page_number, page_mask_shift) entries,
> enabling finer-grained IOTLB invalidation compared to the domain-wide
> HVCALL_FLUSH_DEVICE_DOMAIN used by hv_iommu_flush_iotlb_all().
>
> hv_iommu_fill_iova_list() decomposes a contiguous IOVA range into a
> minimal set of aligned power-of-two regions that fit in a single
> hypercall input page. When the range exceeds the page capacity, the
> code falls back to a full domain flush automatically.
>
> Signed-off-by: Yu Zhang <zhangyu1@linux.microsoft.com>
> Signed-off-by: Easwar Hariharan <easwar.hariharan@linux.microsoft.com>
> ---
> drivers/iommu/hyperv/iommu.c | 91 +++++++++++++++++++++++++++++++++++-
> include/hyperv/hvgdk_mini.h | 1 +
> include/hyperv/hvhdk_mini.h | 17 +++++++
> 3 files changed, 108 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/iommu/hyperv/iommu.c b/drivers/iommu/hyperv/iommu.c
> index e5fc625314b5..3bca362b7815 100644
> --- a/drivers/iommu/hyperv/iommu.c
> +++ b/drivers/iommu/hyperv/iommu.c
> @@ -486,10 +486,98 @@ static void hv_iommu_flush_iotlb_all(struct iommu_domain *domain)
> hv_flush_device_domain(to_hv_iommu_domain(domain));
> }
>
> +/* Max number of iova_list entries in a single hypercall input page. */
> +#define HV_IOMMU_MAX_FLUSH_VA_COUNT \
> + ((HV_HYP_PAGE_SIZE - sizeof(struct hv_input_flush_device_domain_list)) / \
> + sizeof(union hv_iommu_flush_va))
> +
> +/* Returned by hv_iommu_fill_iova_list() when the range exceeds the capacity */
> +#define HV_IOMMU_FLUSH_VA_OVERFLOW U16_MAX
> +
> +static inline u16 hv_iommu_fill_iova_list(union hv_iommu_flush_va *iova_list,
> + unsigned long start,
> + unsigned long end)
> +{
> + unsigned long start_pfn = start >> PAGE_SHIFT;
> + unsigned long end_pfn = PAGE_ALIGN(end) >> PAGE_SHIFT;
"end" is an inclusive end address per comment in struct iommu_iotlb_gather.
So a page aligned value would typically have 0xFFF as the low order 12 bits,
and PAGE_ALIGN() will do the right thing. But I don't think the value is
*required* to be page aligned. If the value of "end" had 0x000 as the
low order 12 bits, the above calculation would fail to include the page
that has the address ending in 0x000. I think it needs to be
PAGE_ALIGN(end + 1) in order to work correctly for this corner case.
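For instance, with 4K pages and start = 0: if end = 0x2FFF, then
PAGE_ALIGN(end) = 0x3000 and end_pfn = 3, covering pages 0-2 as
intended. But if end = 0x3000 (inclusive), PAGE_ALIGN(end) is still
0x3000 and end_pfn = 3, so page 3, which contains address 0x3000, is
skipped. PAGE_ALIGN(end + 1) = 0x4000 gives end_pfn = 4 and covers it.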
> + unsigned long nr_pages = end_pfn - start_pfn;
> + u16 count = 0;
> +
> + while (nr_pages > 0) {
> + unsigned long flush_pages;
> + int order;
> + unsigned long pfn_align;
> + unsigned long size_align;
> +
> + if (count >= HV_IOMMU_MAX_FLUSH_VA_COUNT) {
> + count = HV_IOMMU_FLUSH_VA_OVERFLOW;
> + break;
> + }
> +
> + if (start_pfn)
> + pfn_align = __ffs(start_pfn);
I don't understand why __ffs() is correct here. I would expect
__fls() so it is consistent with the calculation of size_align. But I
can only surmise how the hypercall works since there's no
documentation, so maybe my understanding of the hypercall is
wrong. If __ffs really is correct, a comment explaining why
would help. :-)
> + else
> + pfn_align = BITS_PER_LONG - 1;
> +
> + size_align = __fls(nr_pages);
> + order = min(pfn_align, size_align);
> + iova_list[count].page_mask_shift = order;
> + iova_list[count].page_number = start_pfn;
> +
> + flush_pages = 1UL << order;
> + start_pfn += flush_pages;
> + nr_pages -= flush_pages;
> + count++;
> + }
> +
> + return count;
> +}
> +
> +static void hv_flush_device_domain_list(struct hv_iommu_domain *hv_domain,
> + struct iommu_iotlb_gather *iotlb_gather)
> +{
> + u64 status;
> + u16 count;
> + unsigned long flags;
> + struct hv_input_flush_device_domain_list *input;
> +
> + local_irq_save(flags);
> +
> + input = *this_cpu_ptr(hyperv_pcpu_input_arg);
> + memset(input, 0, sizeof(*input));
> +
> + input->device_domain = hv_domain->device_domain;
> + input->flags |= HV_FLUSH_DEVICE_DOMAIN_LIST_IOMMU_FORMAT;
I would suggest moving the memset() and setting the input fields down
under the "else" below so that they are parallel with the flush all case.
> + count = hv_iommu_fill_iova_list(input->iova_list,
> + iotlb_gather->start,
> + iotlb_gather->end);
> + if (count == HV_IOMMU_FLUSH_VA_OVERFLOW) {
> + /*
> + * Range exceeds hypercall page capacity. Fall back to a full
> + * domain flush.
> + */
> + struct hv_input_flush_device_domain *flush_all = (void *)input;
> +
> + memset(flush_all, 0, sizeof(*flush_all));
> + flush_all->device_domain = hv_domain->device_domain;
> + status = hv_do_hypercall(HVCALL_FLUSH_DEVICE_DOMAIN,
> + flush_all, NULL);
> + } else {
> + status = hv_do_rep_hypercall(
> + HVCALL_FLUSH_DEVICE_DOMAIN_LIST,
> + count, 0, input, NULL);
> + }
> +
> + local_irq_restore(flags);
> +
> + if (!hv_result_success(status))
> + pr_err("HVCALL_FLUSH_DEVICE_DOMAIN_LIST failed, status %lld\n", status);
As Sashiko pointed out, a failure here can lead to all kinds of trouble because
of leaving unflushed entries. Maybe a WARN() is more appropriate? Also, maybe
a failure in the list flush should try a flush all as a fallback, with the WARN()
only if the flush all fails.
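Roughly (untested sketch, staying within the existing irq-disabled
section):

	status = hv_do_rep_hypercall(HVCALL_FLUSH_DEVICE_DOMAIN_LIST,
				     count, 0, input, NULL);
	if (!hv_result_success(status)) {
		/* Fall back to a full-domain flush before giving up. */
		struct hv_input_flush_device_domain *flush_all = (void *)input;

		memset(flush_all, 0, sizeof(*flush_all));
		flush_all->device_domain = hv_domain->device_domain;
		status = hv_do_hypercall(HVCALL_FLUSH_DEVICE_DOMAIN,
					 flush_all, NULL);
	}

	local_irq_restore(flags);

	/* Unflushed IOTLB entries can break DMA isolation, so be loud. */
	WARN(!hv_result_success(status),
	     "Hyper-V IOTLB flush failed, status %lld\n", status);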
> +}
> +
> static void hv_iommu_iotlb_sync(struct iommu_domain *domain,
> struct iommu_iotlb_gather *iotlb_gather)
> {
> - hv_flush_device_domain(to_hv_iommu_domain(domain));
> + hv_flush_device_domain_list(to_hv_iommu_domain(domain), iotlb_gather);
>
> iommu_put_pages_list(&iotlb_gather->freelist);
> }
> @@ -543,6 +631,7 @@ static struct iommu_domain *hv_iommu_domain_alloc_paging(struct device *dev)
>
> cfg.common.hw_max_vasz_lg2 = hv_iommu_device->max_iova_width;
> cfg.common.hw_max_oasz_lg2 = 52;
> + cfg.common.features |= BIT(PT_FEAT_FLUSH_RANGE);
> cfg.top_level = (hv_iommu_device->max_iova_width > 48) ? 4 : 3;
>
> ret = pt_iommu_x86_64_init(&hv_domain->pt_iommu_x86_64, &cfg,
> GFP_KERNEL);
> diff --git a/include/hyperv/hvgdk_mini.h b/include/hyperv/hvgdk_mini.h
> index 5bdbb44da112..eaaf87171478 100644
> --- a/include/hyperv/hvgdk_mini.h
> +++ b/include/hyperv/hvgdk_mini.h
> @@ -496,6 +496,7 @@ union hv_vp_assist_msr_contents { /*
> HV_REGISTER_VP_ASSIST_PAGE */
> #define HVCALL_GET_GPA_PAGES_ACCESS_STATES 0x00c9
> #define HVCALL_CONFIGURE_DEVICE_DOMAIN 0x00ce
> #define HVCALL_FLUSH_DEVICE_DOMAIN 0x00d0
> +#define HVCALL_FLUSH_DEVICE_DOMAIN_LIST 0x00d1
> #define HVCALL_ACQUIRE_SPARSE_SPA_PAGE_HOST_ACCESS 0x00d7
> #define HVCALL_RELEASE_SPARSE_SPA_PAGE_HOST_ACCESS 0x00d8
> #define HVCALL_MODIFY_SPARSE_GPA_PAGE_HOST_VISIBILITY 0x00db
> diff --git a/include/hyperv/hvhdk_mini.h b/include/hyperv/hvhdk_mini.h
> index 493608e791b4..f51d5d9467f1 100644
> --- a/include/hyperv/hvhdk_mini.h
> +++ b/include/hyperv/hvhdk_mini.h
> @@ -671,4 +671,21 @@ struct hv_input_flush_device_domain {
> u32 reserved;
> } __packed;
>
> +union hv_iommu_flush_va {
> + u64 iova;
> + struct {
> + u64 page_mask_shift : 12;
> + u64 page_number : 52;
> + };
> +} __packed;
> +
> +
> +struct hv_input_flush_device_domain_list {
> + struct hv_input_device_domain device_domain;
> +#define HV_FLUSH_DEVICE_DOMAIN_LIST_IOMMU_FORMAT (1 << 0)
> + u32 flags;
> + u32 reserved;
> + union hv_iommu_flush_va iova_list[];
> +} __packed;
> +
> #endif /* _HV_HVHDK_MINI_H */
> --
> 2.52.0
>
* RE: [PATCH v1 4/4] iommu/hyperv: Add page-selective IOTLB flush support
2026-05-14 18:14 ` Michael Kelley
@ 2026-05-14 21:16 ` Michael Kelley
2026-05-15 16:23 ` Yu Zhang
1 sibling, 0 replies; 19+ messages in thread
From: Michael Kelley @ 2026-05-14 21:16 UTC (permalink / raw)
To: Michael Kelley, Yu Zhang, linux-kernel@vger.kernel.org,
linux-hyperv@vger.kernel.org, iommu@lists.linux.dev,
linux-pci@vger.kernel.org, linux-arch@vger.kernel.org
Cc: wei.liu@kernel.org, kys@microsoft.com, haiyangz@microsoft.com,
decui@microsoft.com, longli@microsoft.com, joro@8bytes.org,
will@kernel.org, robin.murphy@arm.com, bhelgaas@google.com,
kwilczynski@kernel.org, lpieralisi@kernel.org, mani@kernel.org,
robh@kernel.org, arnd@arndb.de, jgg@ziepe.ca,
jacob.pan@linux.microsoft.com, tgopinath@linux.microsoft.com,
easwar.hariharan@linux.microsoft.com
From: Michael Kelley <mhklinux@outlook.com> Sent: Thursday, May 14, 2026 11:14 AM
>
> From: Yu Zhang <zhangyu1@linux.microsoft.com> Sent: Monday, May 11, 2026 9:24 AM
> >
> > Add page-selective IOTLB flush using HVCALL_FLUSH_DEVICE_DOMAIN_LIST.
> > This hypercall accepts a list of (page_number, page_mask_shift) entries,
> > enabling finer-grained IOTLB invalidation compared to the domain-wide
> > HVCALL_FLUSH_DEVICE_DOMAIN used by hv_iommu_flush_iotlb_all().
> >
> > hv_iommu_fill_iova_list() decomposes a contiguous IOVA range into a
> > minimal set of aligned power-of-two regions that fit in a single
> > hypercall input page. When the range exceeds the page capacity, the
> > code falls back to a full domain flush automatically.
> >
> > Signed-off-by: Yu Zhang <zhangyu1@linux.microsoft.com>
> > Signed-off-by: Easwar Hariharan <easwar.hariharan@linux.microsoft.com>
> > ---
> > drivers/iommu/hyperv/iommu.c | 91 +++++++++++++++++++++++++++++++++++-
> > include/hyperv/hvgdk_mini.h | 1 +
> > include/hyperv/hvhdk_mini.h | 17 +++++++
> > 3 files changed, 108 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/iommu/hyperv/iommu.c b/drivers/iommu/hyperv/iommu.c
> > index e5fc625314b5..3bca362b7815 100644
> > --- a/drivers/iommu/hyperv/iommu.c
> > +++ b/drivers/iommu/hyperv/iommu.c
> > @@ -486,10 +486,98 @@ static void hv_iommu_flush_iotlb_all(struct iommu_domain *domain)
> > hv_flush_device_domain(to_hv_iommu_domain(domain));
> > }
> >
> > +/* Max number of iova_list entries in a single hypercall input page. */
> > +#define HV_IOMMU_MAX_FLUSH_VA_COUNT \
> > + ((HV_HYP_PAGE_SIZE - sizeof(struct hv_input_flush_device_domain_list)) / \
> > + sizeof(union hv_iommu_flush_va))
> > +
> > +/* Returned by hv_iommu_fill_iova_list() when the range exceeds the capacity */
> > +#define HV_IOMMU_FLUSH_VA_OVERFLOW U16_MAX
> > +
> > +static inline u16 hv_iommu_fill_iova_list(union hv_iommu_flush_va *iova_list,
> > + unsigned long start,
> > + unsigned long end)
> > +{
> > + unsigned long start_pfn = start >> PAGE_SHIFT;
> > + unsigned long end_pfn = PAGE_ALIGN(end) >> PAGE_SHIFT;
>
> "end" is an inclusive end address per comment in struct iommu_iotlb_gather.
> So a page aligned value would typically have 0xFFF as the low order 12 bits,
> and PAGE_ALIGN() will do the right thing. But I don't think the value is
> *required* to be page aligned. If the value of "end" had 0x000 as the
> low order 12 bits, the above calculation would fail to include the page
> that has the address ending in 0x000. I think it needs to be
> PAGE_ALIGN(end + 1) in order to work correctly for this corner case.
>
One follow-on comment: the macros HVPFN_UP() and HVPFN_DOWN()
would likely be useful in setting start_pfn and end_pfn.
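That is:

	unsigned long start_pfn = HVPFN_DOWN(start);
	unsigned long end_pfn = HVPFN_UP(end + 1);

(On x86, HV_HYP_PAGE_SHIFT and PAGE_SHIFT are both 12, so the result is
the same as the open-coded shifts.)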
Michael
* Re: [PATCH v1 3/4] iommu/hyperv: Add para-virtualized IOMMU support for Hyper-V guest
2026-05-13 18:39 ` Jacob Pan
@ 2026-05-15 12:38 ` Yu Zhang
0 siblings, 0 replies; 19+ messages in thread
From: Yu Zhang @ 2026-05-15 12:38 UTC (permalink / raw)
To: Jacob Pan
Cc: linux-kernel, linux-hyperv, iommu, linux-pci, linux-arch, wei.liu,
kys, haiyangz, decui, longli, joro, will, robin.murphy, bhelgaas,
kwilczynski, lpieralisi, mani, robh, arnd, jgg, mhklinux,
tgopinath, easwar.hariharan
[...]
> > diff --git a/drivers/iommu/hyperv/Kconfig
> > b/drivers/iommu/hyperv/Kconfig index 30f40d867036..9e658d5c9a77 100644
> > --- a/drivers/iommu/hyperv/Kconfig
> > +++ b/drivers/iommu/hyperv/Kconfig
> > @@ -8,3 +8,20 @@ config HYPERV_IOMMU
> > help
> > Stub IOMMU driver to handle IRQs to support Hyper-V Linux
> > guest and root partitions.
> > +
> > +if HYPERV_IOMMU
> > +config HYPERV_PVIOMMU
> > + bool "Microsoft Hypervisor para-virtualized IOMMU support"
> > + depends on X86 && HYPERV
> > + select IOMMU_API
> > + select GENERIC_PT
> > + select IOMMU_PT
> > + select IOMMU_PT_X86_64
> nit:
> If HYPERV_PVIOMMU is enabled on a (hypothetical) platform with
> GENERIC_ATOMIC64=y, the select would force-enable IOMMU_PT_X86_64 even
> though its "depends on" condition is unsatisfied, leading to a build failure.
>
> In practice this can't happen today because HYPERV_PVIOMMU already
> depends on X86 && HYPERV, and x86 never sets GENERIC_ATOMIC64. But
> adding the explicit guard is more defensive.
> i.e.
> depends on !GENERIC_ATOMIC64 # for cmpxchg64 in IOMMU_PT
>
Good point. Will add "depends on !GENERIC_ATOMIC64".
[...]
> > +
> > +/*
> > + * Look up the logical device ID for a vPCI device. Returns 0 on
> > success
> > + * with *logical_id filled in; -ENODEV if no entry registered for
> > this
> > + * device's vPCI bus.
> > + */
> > +static int hv_iommu_lookup_logical_dev_id(struct pci_dev *pdev, u64
> > *logical_id) +{
> > + struct hv_pci_busdata *bus;
> > + int domain = pci_domain_nr(pdev->bus);
> > + int ret = -ENODEV;
> > +
> > + spin_lock(&hv_iommu_pci_bus_lock);
> this is called under local_irq_save, should use raw_spinlock_t for RT
> kernel?
>
Yes, this is problematic on PREEMPT_RT. Michael also suggested hoisting
the lookup before the local_irq_save() section instead of using a raw
spinlock, which I think is a great idea - all 3 call sites (detach_dev,
attach_dev, get_logical_device_property) will be changed.
> > + list_for_each_entry(bus, &hv_iommu_pci_bus_list, list) {
> > + if (bus->pci_domain_nr == domain) {
> > + *logical_id =
> > (u64)bus->logical_dev_id_prefix |
> > + PCI_FUNC(pdev->devfn);
> > + ret = 0;
> > + break;
> > + }
> > + }
> > + spin_unlock(&hv_iommu_pci_bus_lock);
> > + return ret;
> > +}
> > +
[...]
> > +static void hv_iommu_release_device(struct device *dev)
> > +{
> > + struct hv_iommu_endpoint *vdev = dev_iommu_priv_get(dev);
> > + struct pci_dev *pdev = to_pci_dev(dev);
> > +
> > + if (pdev->ats_enabled)
> > + pci_disable_ats(pdev);
> > +
> > + dev_iommu_priv_set(dev, NULL);
> > + set_dma_ops(dev, NULL);
> I don't think this is necessary.
>
Oh, yes. Thanks!
> > +
> > + kfree(vdev);
> > +}
> > +
> > +static struct iommu_group *hv_iommu_device_group(struct device *dev)
> > +{
> > + if (dev_is_pci(dev))
> > + return pci_device_group(dev);
> > + else
> > + return generic_device_group(dev);
> non pci device already rejected during attach, maybe we should warn
> here?
> WARN_ON_ONCE(1);
> return generic_device_group(dev);
>
Good idea. Will add WARN_ON_ONCE(1).
> > +}
> > +
> > +static int hv_configure_device_domain(struct hv_iommu_domain
> > *hv_domain, u32 domain_type) +{
> > + u64 status;
> > + unsigned long flags;
> > + struct pt_iommu_x86_64_hw_info pt_info;
> > + struct hv_input_configure_device_domain *input;
> > +
> > + local_irq_save(flags);
> > +
> > + input = *this_cpu_ptr(hyperv_pcpu_input_arg);
> > + memset(input, 0, sizeof(*input));
> > + input->device_domain = hv_domain->device_domain;
> > + input->settings.flags.blocked = (domain_type ==
> > IOMMU_DOMAIN_BLOCKED);
> > + input->settings.flags.translation_enabled = (domain_type !=
> > IOMMU_DOMAIN_IDENTITY); +
> Should this be:
> input->settings.flags.translation_enabled =
> (domain_type & __IOMMU_DOMAIN_PAGING);
> Otherwise, blocked domain will have translation enabled. Maybe add some
> explanation of what HV expects.
>
I do agree this is not intuitive, but the current hypervisor implementation
requires "blocked == 1" to be paired with "translation_enabled = 1",
otherwise it returns HV_STATUS_INVALID_PARAMETER. But I can add some
comment at least.
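Something like this in hv_configure_device_domain(), for example:

	/*
	 * Counter-intuitively, the hypervisor expects blocked == 1 to be
	 * paired with translation_enabled == 1; a blocked domain with
	 * translation_enabled == 0 is rejected with
	 * HV_STATUS_INVALID_PARAMETER.
	 */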
Thanks for the thorough review, Jacob!
B.R.
Yu
* Re: [PATCH v1 3/4] iommu/hyperv: Add para-virtualized IOMMU support for Hyper-V guest
2026-05-14 18:13 ` Michael Kelley
@ 2026-05-15 13:59 ` Yu Zhang
2026-05-15 14:51 ` Michael Kelley
0 siblings, 1 reply; 19+ messages in thread
From: Yu Zhang @ 2026-05-15 13:59 UTC (permalink / raw)
To: Michael Kelley
Cc: linux-kernel@vger.kernel.org, linux-hyperv@vger.kernel.org,
iommu@lists.linux.dev, linux-pci@vger.kernel.org,
linux-arch@vger.kernel.org, wei.liu@kernel.org, kys@microsoft.com,
haiyangz@microsoft.com, decui@microsoft.com, longli@microsoft.com,
joro@8bytes.org, will@kernel.org, robin.murphy@arm.com,
bhelgaas@google.com, kwilczynski@kernel.org,
lpieralisi@kernel.org, mani@kernel.org, robh@kernel.org,
arnd@arndb.de, jgg@ziepe.ca, jacob.pan@linux.microsoft.com,
tgopinath@linux.microsoft.com,
easwar.hariharan@linux.microsoft.com
On Thu, May 14, 2026 at 06:13:24PM +0000, Michael Kelley wrote:
> From: Yu Zhang <zhangyu1@linux.microsoft.com> Sent: Monday, May 11, 2026 9:24 AM
> >
> > Add a para-virtualized IOMMU driver for Linux guests running on Hyper-V.
> > This driver implements stage-1 IO translation within the guest OS.
> > It integrates with the Linux IOMMU core, utilizing Hyper-V hypercalls
> > for:
> > - Capability discovery
> > - Domain allocation, configuration, and deallocation
> > - Device attachment and detachment
> > - IOTLB invalidation
> >
> > The driver constructs x86-compatible stage-1 IO page tables in the
> > guest memory using consolidated IO page table helpers. This allows
> > the guest to manage stage-1 translations independently of vendor-
> > specific drivers (like Intel VT-d or AMD IOMMU).
> >
> > Hyper-V consumes this stage-1 IO page table when a device domain is
> > created and configured, and nests it with the host's stage-2 IO page
> > tables, therefore eliminating the VM exits for guest IOMMU mapping
> > operations. For unmapping operations, VM exits to perform the IOTLB
> > flush are still unavoidable.
> >
> > Hyper-V identifies each PCI pass-thru device by a logical device ID
> > in its hypercall interface. The vPCI driver (pci-hyperv) registers the
> > per-bus portion of this ID with the pvIOMMU driver during bus probe.
> > The pvIOMMU driver stores this mapping and combines it with the function
> > number of the endpoint PCI device to form the complete ID for hypercalls.
>
> As you are probably aware, Mukesh's patch series to support PCI
> pass-thru devices also needs to get the logical device ID. Maybe the
> registration mechanism needs to move somewhere that can be shared
> with his code.
>
Thank you so much for the review, Michael!
Yes, I looked at Mukesh's series and noticed his hv_pci_vmbus_device_id()
in pci-hyperv.c has the same dev_instance byte manipulation. We do need
a common registration mechanism.
Any suggestion on where to put it? drivers/hv/hv_common.c seems like a
natural place, but the register/lookup functions are currently only
meaningful when CONFIG_HYPERV_PVIOMMU is set. If Mukesh's pass-thru
code also needs them, we might need a new shared Kconfig option that
both can select. Open to better ideas.
Adding Mukesh to the loop. :)
> >
> > Co-developed-by: Wei Liu <wei.liu@kernel.org>
> > Signed-off-by: Wei Liu <wei.liu@kernel.org>
> > Co-developed-by: Easwar Hariharan <easwar.hariharan@linux.microsoft.com>
> > Signed-off-by: Easwar Hariharan <easwar.hariharan@linux.microsoft.com>
> > Signed-off-by: Yu Zhang <zhangyu1@linux.microsoft.com>
> > ---
> > arch/x86/hyperv/hv_init.c | 4 +
> > arch/x86/include/asm/mshyperv.h | 4 +
> > drivers/iommu/hyperv/Kconfig | 17 +
> > drivers/iommu/hyperv/Makefile | 1 +
> > drivers/iommu/hyperv/iommu.c | 705 ++++++++++++++++++++++++++++
> > drivers/iommu/hyperv/iommu.h | 54 +++
> > drivers/pci/controller/pci-hyperv.c | 19 +-
> > include/asm-generic/mshyperv.h | 12 +
> > 8 files changed, 815 insertions(+), 1 deletion(-)
> > create mode 100644 drivers/iommu/hyperv/iommu.c
> > create mode 100644 drivers/iommu/hyperv/iommu.h
> >
> > diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
> > index 323adc93f2dc..2c8ff8e06249 100644
> > --- a/arch/x86/hyperv/hv_init.c
> > +++ b/arch/x86/hyperv/hv_init.c
> > @@ -578,6 +578,10 @@ void __init hyperv_init(void)
> > old_setup_percpu_clockev = x86_init.timers.setup_percpu_clockev;
> > x86_init.timers.setup_percpu_clockev = hv_stimer_setup_percpu_clockev;
> >
> > +#ifdef CONFIG_HYPERV_PVIOMMU
> > + x86_init.iommu.iommu_init = hv_iommu_init;
> > +#endif
> > +
> > hv_apic_init();
> >
> > x86_init.pci.arch_init = hv_pci_init;
> > diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
> > index f64393e853ee..20d947c2c758 100644
> > --- a/arch/x86/include/asm/mshyperv.h
> > +++ b/arch/x86/include/asm/mshyperv.h
> > @@ -313,6 +313,10 @@ static inline void mshv_vtl_return_hypercall(void) {}
> > static inline void __mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0) {}
> > #endif
> >
> > +#ifdef CONFIG_HYPERV_PVIOMMU
> > +int __init hv_iommu_init(void);
> > +#endif
> > +
> > #include <asm-generic/mshyperv.h>
> >
> > #endif
> > diff --git a/drivers/iommu/hyperv/Kconfig b/drivers/iommu/hyperv/Kconfig
> > index 30f40d867036..9e658d5c9a77 100644
> > --- a/drivers/iommu/hyperv/Kconfig
> > +++ b/drivers/iommu/hyperv/Kconfig
> > @@ -8,3 +8,20 @@ config HYPERV_IOMMU
> > help
> > Stub IOMMU driver to handle IRQs to support Hyper-V Linux
> > guest and root partitions.
> > +
> > +if HYPERV_IOMMU
> > +config HYPERV_PVIOMMU
> > + bool "Microsoft Hypervisor para-virtualized IOMMU support"
> > + depends on X86 && HYPERV
>
> What is the intent w.r.t. 32-bit builds? Using X86 instead of X86_64
> allows it. I did a 32-bit build and didn't get any build failures, which is
> good. But I can't run it to see if the pvIOMMU actually works in a
> 32-bit build. I don't know how building X86_64 generic PT entries
> would fare.
>
Sorry, no intent to support 32-bit. I'll change to `depends on X86_64 && HYPERV`
[...]
> > +static void hv_flush_device_domain(struct hv_iommu_domain *hv_domain)
> > +{
> > + u64 status;
> > + unsigned long flags;
> > + struct hv_input_flush_device_domain *input;
> > +
> > + local_irq_save(flags);
> > +
> > + input = *this_cpu_ptr(hyperv_pcpu_input_arg);
> > + memset(input, 0, sizeof(*input));
> > + input->device_domain = hv_domain->device_domain;
>
> The previous version of this patch had code to set several other fields in
> the input. I wanted to confirm that not setting them in this version is
> intentional. Were they not needed?
>
Oh. The RFC v1 set partition_id, owner_vtl, domain_id.type, and domain_id.id
individually. In this version, I just simplified it to a struct assignment.
No functional change.
> > + status = hv_do_hypercall(HVCALL_FLUSH_DEVICE_DOMAIN, input, NULL);
> > +
> > + local_irq_restore(flags);
> > +
> > + if (!hv_result_success(status))
> > + pr_err("HVCALL_FLUSH_DEVICE_DOMAIN failed, status %lld\n", status);
> > +}
> > +
> > +static void hv_iommu_detach_dev(struct iommu_domain *domain, struct device *dev)
> > +{
> > + u64 status;
> > + unsigned long flags;
> > + struct hv_input_detach_device_domain *input;
> > + struct pci_dev *pdev;
> > + struct hv_iommu_domain *hv_domain = to_hv_iommu_domain(domain);
> > + struct hv_iommu_endpoint *vdev = dev_iommu_priv_get(dev);
> > +
> > + /* See the attach function, only PCI devices for now */
> > + if (!dev_is_pci(dev) || vdev->hv_domain != hv_domain)
> > + return;
>
> Are these sanity checks necessary? The only caller is hv_iommu_attach_dev()
> and it has already done the checks.
You're right, they're redundant.
> > +
> > + pdev = to_pci_dev(dev);
> > +
> > + dev_dbg(dev, "detaching from domain %d\n", hv_domain->device_domain.domain_id.id);
> > +
> > + local_irq_save(flags);
> > +
> > + input = *this_cpu_ptr(hyperv_pcpu_input_arg);
> > + memset(input, 0, sizeof(*input));
> > + input->partition_id = HV_PARTITION_ID_SELF;
> > + if (hv_iommu_lookup_logical_dev_id(pdev, &input->device_id.as_uint64)) {
>
> As Sashiko and Jacob Pan pointed out, doing the lookup while interrupts are disabled
> is problematic. My suggestion would be to just do the lookup into a local variable
> before disabling interrupts (rather than using a raw spin lock as Jacob suggested).
>
> Same situation occurs in hv_iommu_attach_dev() and
> hv_iommu_get_logical_device_property().
>
Thanks! I would also prefer to look up before disabling interrupts.
> > + local_irq_restore(flags);
> > + dev_warn(&pdev->dev, "no IOMMU registration for vPCI bus on detach\n");
> > + return;
> > + }
> > + status = hv_do_hypercall(HVCALL_DETACH_DEVICE_DOMAIN, input, NULL);
> > +
> > + local_irq_restore(flags);
> > +
> > + if (!hv_result_success(status))
> > + pr_err("HVCALL_DETACH_DEVICE_DOMAIN failed, status %lld\n", status);
> > +
> > + hv_flush_device_domain(hv_domain);
> > +
> > + vdev->hv_domain = NULL;
> > +}
> > +
[...]
> > +static int hv_iommu_get_logical_device_property(struct device *dev,
> > + u32 code,
> > + struct hv_output_get_logical_device_property *property)
> > +{
> > + u64 status, lid;
> > + unsigned long flags;
> > + int ret;
> > + struct hv_input_get_logical_device_property *input;
> > + struct hv_output_get_logical_device_property *output;
> > +
> > + local_irq_save(flags);
> > +
> > + input = *this_cpu_ptr(hyperv_pcpu_input_arg);
> > + output = *this_cpu_ptr(hyperv_pcpu_input_arg) + sizeof(*input);
>
> Nit: The other way to set output is:
>
> output = input + 1;
>
> > I think this produces slightly better code because it avoids
> > referencing the per-cpu variable hyperv_pcpu_input_arg a second time.
>
>
Indeed! It's more elegant. :)
> > + memset(input, 0, sizeof(*input));
> > + input->partition_id = HV_PARTITION_ID_SELF;
> > + ret = hv_iommu_lookup_logical_dev_id(to_pci_dev(dev), &lid);
> > + if (ret) {
> > + local_irq_restore(flags);
> > + return ret;
> > + }
> > + input->logical_device_id = lid;
> > + input->code = code;
> > + status = hv_do_hypercall(HVCALL_GET_LOGICAL_DEVICE_PROPERTY, input, output);
> > + *property = *output;
> > +
> > + local_irq_restore(flags);
> > +
> > + if (!hv_result_success(status))
> > + pr_err("HVCALL_GET_LOGICAL_DEVICE_PROPERTY failed, status %lld\n", status);
> > +
> > + return hv_result_to_errno(status);
> > +}
> > +
[...]
> > +static void hv_iommu_release_device(struct device *dev)
> > +{
> > + struct hv_iommu_endpoint *vdev = dev_iommu_priv_get(dev);
> > + struct pci_dev *pdev = to_pci_dev(dev);
> > +
> > + if (pdev->ats_enabled)
> > + pci_disable_ats(pdev);
> > +
> > + dev_iommu_priv_set(dev, NULL);
> > + set_dma_ops(dev, NULL);
>
> Previous versions of this function did hv_iommu_detach_dev(). With that call
> removed from here, hv_iommu_detach_dev() is only called when attaching a
> domain to a device that already has a domain attached. Is it the case that
> Hyper-V doesn't require the detach as a cleanup step?
>
The IOMMU core attaches the device to release_domain (our blocking domain)
before calling release_device(), so I believe the explicit detach in the RFC
was redundant. I simply didn't realize that at the time.
Sorry I forgot to mention this in the changelog.
[...]
> > +static struct iommu_domain *hv_iommu_domain_alloc_paging(struct device *dev)
> > +{
> > + int ret;
> > + struct hv_iommu_domain *hv_domain;
> > + struct pt_iommu_x86_64_cfg cfg = {};
> > +
> > + hv_domain = kzalloc_obj(*hv_domain, GFP_KERNEL);
> > + if (!hv_domain)
> > + return ERR_PTR(-ENOMEM);
> > +
> > + ret = hv_create_device_domain(hv_domain, HV_DEVICE_DOMAIN_TYPE_S1);
> > + if (ret) {
> > + kfree(hv_domain);
> > + return ERR_PTR(ret);
> > + }
> > +
> > + hv_domain->domain.geometry = hv_iommu_device->geometry;
> > + hv_domain->pt_iommu.nid = dev_to_node(dev);
> > +
> > + cfg.common.hw_max_vasz_lg2 = hv_iommu_device->max_iova_width;
> > + cfg.common.hw_max_oasz_lg2 = 52;
> > + cfg.top_level = (hv_iommu_device->max_iova_width > 48) ? 4 : 3;
> > +
> > + ret = pt_iommu_x86_64_init(&hv_domain->pt_iommu_x86_64, &cfg, GFP_KERNEL);
> > + if (ret) {
> > + hv_delete_device_domain(hv_domain);
> > + kfree(hv_domain);
> > + return ERR_PTR(ret);
> > + }
> > +
> > + /* Constrain to page sizes the hypervisor supports */
> > + hv_domain->domain.pgsize_bitmap &= hv_iommu_device->pgsize_bitmap;
> > +
> > + hv_domain->domain.ops = &hv_iommu_paging_domain_ops;
> > +
> > + ret = hv_configure_device_domain(hv_domain, __IOMMU_DOMAIN_PAGING);
> > + if (ret) {
> > + pt_iommu_deinit(&hv_domain->pt_iommu);
> > + hv_delete_device_domain(hv_domain);
> > + kfree(hv_domain);
> > + return ERR_PTR(ret);
> > + }
> > +
> > + return &hv_domain->domain;
>
> > I think this function would be better if the error paths used "goto"
> > with a cascading set of error labels. That's the typical kernel pattern,
> > and it's what you use in hv_iommu_init(), for example.
>
Good point. Will restructure to use goto-based error labels
> > +}
> > +
> > +static struct iommu_ops hv_iommu_ops = {
> > + .capable = hv_iommu_capable,
> > + .domain_alloc_paging = hv_iommu_domain_alloc_paging,
> > + .probe_device = hv_iommu_probe_device,
> > + .release_device = hv_iommu_release_device,
> > + .device_group = hv_iommu_device_group,
> > + .get_resv_regions = hv_iommu_get_resv_regions,
> > + .owner = THIS_MODULE,
> > + .identity_domain = &hv_identity_domain.domain,
> > + .blocked_domain = &hv_blocking_domain.domain,
> > + .release_domain = &hv_blocking_domain.domain,
> > +};
> > +
> > +static int hv_iommu_detect(struct hv_output_get_iommu_capabilities *hv_iommu_cap)
> > +{
> > + u64 status;
> > + unsigned long flags;
> > + struct hv_input_get_iommu_capabilities *input;
> > + struct hv_output_get_iommu_capabilities *output;
> > +
> > + local_irq_save(flags);
> > +
> > + input = *this_cpu_ptr(hyperv_pcpu_input_arg);
> > + output = *this_cpu_ptr(hyperv_pcpu_input_arg) + sizeof(*input);
>
> Potentially use "output = input + 1" here as well.
>
Yes. Thanks!
[...]
> > @@ -3857,13 +3858,25 @@ static int hv_pci_probe(struct hv_device *hdev,
> >
> > hbus->state = hv_pcibus_probed;
> >
> > - ret = create_root_hv_pci_bus(hbus);
> > + /* Notify pvIOMMU before any device on the bus is scanned. */
> > + prefix = (hdev->dev_instance.b[5] << 24) |
> > + (hdev->dev_instance.b[4] << 16) |
> > + (hdev->dev_instance.b[7] << 8) |
> > + (hdev->dev_instance.b[6] & 0xf8);
>
> This assembling of the logical device id prefix duplicates the
> code in hv_irq_retarget_interrupt(). Could this code save the
> prefix in struct hv_pcibus_device, and then have
> hv_irq_retarget_interrupt() use it? Then it would be clear
> that HVCALL_RETARGET_INTERRUPT is using exactly the same
> logical device id as the IOMMU hypercalls.
>
Good point. I think we can do it. :)
> > +
> > + ret = hv_iommu_register_pci_bus(dom, prefix);
> > if (ret)
> > goto free_windows;
> >
> > + ret = create_root_hv_pci_bus(hbus);
> > + if (ret)
> > + goto unregister_pviommu;
> > +
> > mutex_unlock(&hbus->state_lock);
> > return 0;
> >
> > +unregister_pviommu:
> > + hv_iommu_unregister_pci_bus(dom);
> > free_windows:
> > hv_pci_free_bridge_windows(hbus);
> > exit_d0:
> > @@ -3974,8 +3987,10 @@ static int hv_pci_bus_exit(struct hv_device *hdev, bool
> > keep_devs)
> > static void hv_pci_remove(struct hv_device *hdev)
> > {
> > struct hv_pcibus_device *hbus;
> > + int dom;
> >
> > hbus = hv_get_drvdata(hdev);
> > + dom = hbus->bridge->domain_nr;
>
> Nit: Setting "dom" here feels a little weird because the value is only needed
> under the "if" statement. The value must be read before the root bus is
> removed, but even so moving it under the "if" statement would make more
> sense to me.
>
Sure. Thanks!
> > if (hbus->state == hv_pcibus_installed) {
> > tasklet_disable(&hdev->channel->callback_event);
> > hbus->state = hv_pcibus_removing;
> > @@ -3994,6 +4009,8 @@ static void hv_pci_remove(struct hv_device *hdev)
> > hv_pci_remove_slots(hbus);
> > pci_remove_root_bus(hbus->bridge->bus);
> > pci_unlock_rescan_remove();
> > +
> > + hv_iommu_unregister_pci_bus(dom);
> > }
> >
> > hv_pci_bus_exit(hdev, false);
B.R.
Yu
* RE: [PATCH v1 3/4] iommu/hyperv: Add para-virtualized IOMMU support for Hyper-V guest
2026-05-15 13:59 ` Yu Zhang
@ 2026-05-15 14:51 ` Michael Kelley
2026-05-15 16:53 ` Yu Zhang
0 siblings, 1 reply; 19+ messages in thread
From: Michael Kelley @ 2026-05-15 14:51 UTC (permalink / raw)
To: Yu Zhang, Michael Kelley
Cc: linux-kernel@vger.kernel.org, linux-hyperv@vger.kernel.org,
iommu@lists.linux.dev, linux-pci@vger.kernel.org,
linux-arch@vger.kernel.org, wei.liu@kernel.org, kys@microsoft.com,
haiyangz@microsoft.com, decui@microsoft.com, longli@microsoft.com,
joro@8bytes.org, will@kernel.org, robin.murphy@arm.com,
bhelgaas@google.com, kwilczynski@kernel.org,
lpieralisi@kernel.org, mani@kernel.org, robh@kernel.org,
arnd@arndb.de, jgg@ziepe.ca, jacob.pan@linux.microsoft.com,
tgopinath@linux.microsoft.com,
easwar.hariharan@linux.microsoft.com
From: Yu Zhang <zhangyu1@linux.microsoft.com> Sent: Friday, May 15, 2026 7:00 AM
>
> On Thu, May 14, 2026 at 06:13:24PM +0000, Michael Kelley wrote:
> > From: Yu Zhang <zhangyu1@linux.microsoft.com> Sent: Monday, May 11, 2026 9:24 AM
> > >
> > > Add a para-virtualized IOMMU driver for Linux guests running on Hyper-V.
> > > This driver implements stage-1 IO translation within the guest OS.
> > > It integrates with the Linux IOMMU core, utilizing Hyper-V hypercalls
> > > for:
> > > - Capability discovery
> > > - Domain allocation, configuration, and deallocation
> > > - Device attachment and detachment
> > > - IOTLB invalidation
> > >
> > > The driver constructs x86-compatible stage-1 IO page tables in the
> > > guest memory using consolidated IO page table helpers. This allows
> > > the guest to manage stage-1 translations independently of vendor-
> > > specific drivers (like Intel VT-d or AMD IOMMU).
> > >
> > > Hyper-V consumes this stage-1 IO page table when a device domain is
> > > created and configured, and nests it with the host's stage-2 IO page
> > > tables, therefore eliminating the VM exits for guest IOMMU mapping
> > > operations. For unmapping operations, VM exits to perform the IOTLB
> > > flush are still unavoidable.
> > >
> > > Hyper-V identifies each PCI pass-thru device by a logical device ID
> > > in its hypercall interface. The vPCI driver (pci-hyperv) registers the
> > > per-bus portion of this ID with the pvIOMMU driver during bus probe.
> > > The pvIOMMU driver stores this mapping and combines it with the function
> > > number of the endpoint PCI device to form the complete ID for hypercalls.
> >
> > As you are probably aware, Mukesh's patch series to support PCI
> > pass-thru devices also needs to get the logical device ID. Maybe the
> > registration mechanism needs to move somewhere that can be shared
> > with his code.
> >
>
> Thank you so much for the review, Michael!
>
> Yes, I looked at Mukesh's series and noticed his hv_pci_vmbus_device_id()
> in pci-hyperv.c has the same dev_instance byte manipulation. We do need
> a common registration mechanism.
>
> Any suggestion on where to put it? drivers/hv/hv_common.c seems like a
> natural place, but the register/lookup functions are currently only
> meaningful when CONFIG_HYPERV_PVIOMMU is set. If Mukesh's pass-thru
> code also needs them, we might need a new shared Kconfig option that
> both can select. Open to better ideas.
Unfortunately, I have not looked at Mukesh's series in detail yet, so
I don't have enough knowledge of the full situation to offer a good
recommendation.
>
> [...]
>
> > > +static void hv_flush_device_domain(struct hv_iommu_domain *hv_domain)
> > > +{
> > > + u64 status;
> > > + unsigned long flags;
> > > + struct hv_input_flush_device_domain *input;
> > > +
> > > + local_irq_save(flags);
> > > +
> > > + input = *this_cpu_ptr(hyperv_pcpu_input_arg);
> > > + memset(input, 0, sizeof(*input));
> > > + input->device_domain = hv_domain->device_domain;
> >
> > The previous version of this patch had code to set several other fields in
> > the input. I wanted to confirm that not setting them in this version is
> > intentional. Were they not needed?
> >
>
> Oh. The RFC v1 set partition_id, owner_vtl, domain_id.type, and domain_id.id
> individually. In this version, I just simplified it to a struct assignment.
> No functional change.
Of course! I should have looked more closely at the details before making
this comment. :-(
[...]
> >
> > Previous versions of this function did hv_iommu_detach_dev(). With that call
> > removed from here, hv_iommu_detach_dev() is only called when attaching a
> > domain to a device that already has a domain attached. Is it the case that
> > Hyper-V doesn't require the detach as a cleanup step?
> >
>
> The IOMMU core attaches the device to release_domain (our blocking domain)
> before calling release_device(), so I believe the explicit detach in the RFC
> was redundant. I simply didn't realize that at the time.
>
Got it. But after the IOMMU core attaches the device to the blocking
domain, there's the possibility that the vPCI device is rescinded by
Hyper-V and it goes away entirely. Or the device might be subjected
to an "unbind/bind" cycle in Linux. Does the detach need to be done
on the blocking domain in such cases? In this version of the patches, the
Hyper-V "attach" and "detach" hypercalls still end up unbalanced. That
seems a bit untidy at best, and I wonder if there are scenarios where
Hyper-V will complain about the lack of balance.
Michael
* Re: [PATCH v1 4/4] iommu/hyperv: Add page-selective IOTLB flush support
2026-05-14 18:14 ` Michael Kelley
2026-05-14 21:16 ` Michael Kelley
@ 2026-05-15 16:23 ` Yu Zhang
2026-05-15 18:00 ` Michael Kelley
1 sibling, 1 reply; 19+ messages in thread
From: Yu Zhang @ 2026-05-15 16:23 UTC (permalink / raw)
To: Michael Kelley
Cc: linux-kernel@vger.kernel.org, linux-hyperv@vger.kernel.org,
iommu@lists.linux.dev, linux-pci@vger.kernel.org,
linux-arch@vger.kernel.org, wei.liu@kernel.org, kys@microsoft.com,
haiyangz@microsoft.com, decui@microsoft.com, longli@microsoft.com,
joro@8bytes.org, will@kernel.org, robin.murphy@arm.com,
bhelgaas@google.com, kwilczynski@kernel.org,
lpieralisi@kernel.org, mani@kernel.org, robh@kernel.org,
arnd@arndb.de, jgg@ziepe.ca, jacob.pan@linux.microsoft.com,
tgopinath@linux.microsoft.com,
easwar.hariharan@linux.microsoft.com
On Thu, May 14, 2026 at 06:14:22PM +0000, Michael Kelley wrote:
> From: Yu Zhang <zhangyu1@linux.microsoft.com> Sent: Monday, May 11, 2026 9:24 AM
> >
> > Add page-selective IOTLB flush using HVCALL_FLUSH_DEVICE_DOMAIN_LIST.
> > This hypercall accepts a list of (page_number, page_mask_shift) entries,
> > enabling finer-grained IOTLB invalidation compared to the domain-wide
> > HVCALL_FLUSH_DEVICE_DOMAIN used by hv_iommu_flush_iotlb_all().
> >
> > hv_iommu_fill_iova_list() decomposes a contiguous IOVA range into a
> > minimal set of aligned power-of-two regions that fit in a single
> > hypercall input page. When the range exceeds the page capacity, the
> > code falls back to a full domain flush automatically.
> >
> > Signed-off-by: Yu Zhang <zhangyu1@linux.microsoft.com>
> > Signed-off-by: Easwar Hariharan <easwar.hariharan@linux.microsoft.com>
> > ---
> > drivers/iommu/hyperv/iommu.c | 91 +++++++++++++++++++++++++++++++++++-
> > include/hyperv/hvgdk_mini.h | 1 +
> > include/hyperv/hvhdk_mini.h | 17 +++++++
> > 3 files changed, 108 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/iommu/hyperv/iommu.c b/drivers/iommu/hyperv/iommu.c
> > index e5fc625314b5..3bca362b7815 100644
> > --- a/drivers/iommu/hyperv/iommu.c
> > +++ b/drivers/iommu/hyperv/iommu.c
> > @@ -486,10 +486,98 @@ static void hv_iommu_flush_iotlb_all(struct iommu_domain *domain)
> > hv_flush_device_domain(to_hv_iommu_domain(domain));
> > }
> >
> > +/* Max number of iova_list entries in a single hypercall input page. */
> > +#define HV_IOMMU_MAX_FLUSH_VA_COUNT \
> > + ((HV_HYP_PAGE_SIZE - sizeof(struct hv_input_flush_device_domain_list)) / \
> > + sizeof(union hv_iommu_flush_va))
> > +
> > +/* Returned by hv_iommu_fill_iova_list() when the range exceeds the capacity */
> > +#define HV_IOMMU_FLUSH_VA_OVERFLOW U16_MAX
> > +
> > +static inline u16 hv_iommu_fill_iova_list(union hv_iommu_flush_va *iova_list,
> > + unsigned long start,
> > + unsigned long end)
> > +{
> > + unsigned long start_pfn = start >> PAGE_SHIFT;
> > + unsigned long end_pfn = PAGE_ALIGN(end) >> PAGE_SHIFT;
>
> "end" is an inclusive end address per comment in struct iommu_iotlb_gather.
> So a page aligned value would typically have 0xFFF as the low order 12 bits,
> and PAGE_ALIGN() will do the right thing. But I don't think the value is
> *required* to be page aligned. If the value of "end" had 0x000 as the
> low order 12 bits, the above calculation would fail to include the page
> that has the address ending in 0x000. I think it needs to be
> PAGE_ALIGN(end + 1) in order to work correctly for this corner case.
>
Good catch! Will use HVPFN_DOWN(start) and HVPFN_UP(end + 1) as you
suggested in your follow-up mail.
> > + unsigned long nr_pages = end_pfn - start_pfn;
> > + u16 count = 0;
> > +
> > + while (nr_pages > 0) {
> > + unsigned long flush_pages;
> > + int order;
> > + unsigned long pfn_align;
> > + unsigned long size_align;
> > +
> > + if (count >= HV_IOMMU_MAX_FLUSH_VA_COUNT) {
> > + count = HV_IOMMU_FLUSH_VA_OVERFLOW;
> > + break;
> > + }
> > +
> > + if (start_pfn)
> > + pfn_align = __ffs(start_pfn);
>
> I don't understand why __ffs() is correct here. I would expect
> __fls() so it is consistent with the calculation of size_align. But I
> can only surmise how the hypercall works since there's no
> documentation, so maybe my understanding of the hypercall is
> wrong. If __ffs really is correct, a comment explaining why
> would help. :-)
>
The use of __ffs() is intentional. Each flush entry invalidates a
naturally aligned 2^N page block, and the hypervisor requires the
page_number to be aligned to 2^page_mask_shift.
Here __ffs() and __fls() serve different purposes:
- __ffs(start_pfn) is about the alignment constraint, i.e., how
large a block can this address support?
- __fls(nr_pages) is about the size constraint, i.e., how large
a block can the remaining range hold?
Taking min() of both ensures each entry is both properly aligned
and within bounds.
Thanks for raising this - it definitely deserves a comment. I had to
stare at it for a while myself to remember why. :)
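For illustration, here is a minimal user-space sketch of the same greedy
rule (my own standalone mock-up using compiler builtins rather than the
kernel helpers, assuming 64-bit long; only the decomposition logic
matches the patch):

#include <stdio.h>

int main(void)
{
	/* Example range: 10 pages starting at pfn 3. */
	unsigned long start_pfn = 3, nr_pages = 10;

	while (nr_pages > 0) {
		/* Alignment constraint: lowest set bit of the pfn. */
		int pfn_align = start_pfn ? __builtin_ctzl(start_pfn) : 63;
		/* Size constraint: highest set bit of the remainder. */
		int size_align = 63 - __builtin_clzl(nr_pages);
		int order = pfn_align < size_align ? pfn_align : size_align;

		printf("entry: pfn 0x%lx, order %d (%lu pages)\n",
		       start_pfn, order, 1UL << order);
		start_pfn += 1UL << order;
		nr_pages -= 1UL << order;
	}
	return 0;
}

This emits four entries: pfn 3/order 0, pfn 4/order 2, pfn 8/order 2,
and pfn 12/order 0, each naturally aligned to its own block size, which
is exactly the constraint described above.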
> > + else
> > + pfn_align = BITS_PER_LONG - 1;
> > +
> > + size_align = __fls(nr_pages);
> > + order = min(pfn_align, size_align);
> > + iova_list[count].page_mask_shift = order;
> > + iova_list[count].page_number = start_pfn;
> > +
> > + flush_pages = 1UL << order;
> > + start_pfn += flush_pages;
> > + nr_pages -= flush_pages;
> > + count++;
> > + }
> > +
> > + return count;
> > +}
> > +
> > +static void hv_flush_device_domain_list(struct hv_iommu_domain *hv_domain,
> > + struct iommu_iotlb_gather *iotlb_gather)
> > +{
> > + u64 status;
> > + u16 count;
> > + unsigned long flags;
> > + struct hv_input_flush_device_domain_list *input;
> > +
> > + local_irq_save(flags);
> > +
> > + input = *this_cpu_ptr(hyperv_pcpu_input_arg);
> > + memset(input, 0, sizeof(*input));
> > +
> > + input->device_domain = hv_domain->device_domain;
> > + input->flags |= HV_FLUSH_DEVICE_DOMAIN_LIST_IOMMU_FORMAT;
>
> I would suggest moving the memset() and setting the input fields down
> under the "else" below so that they are parallel with the flush all case.
>
I agree the structure should be more symmetric. Yet I guess the memset and
hv_iommu_fill_iova_list() need to stay before the branch since the fill
writes directly into input->iova_list[]. :)
> > + count = hv_iommu_fill_iova_list(input->iova_list,
> > + iotlb_gather->start,
> > + iotlb_gather->end);
> > + if (count == HV_IOMMU_FLUSH_VA_OVERFLOW) {
> > + /*
> > + * Range exceeds hypercall page capacity. Fall back to a full
> > + * domain flush.
> > + */
> > + struct hv_input_flush_device_domain *flush_all = (void *)input;
> > +
> > + memset(flush_all, 0, sizeof(*flush_all));
> > + flush_all->device_domain = hv_domain->device_domain;
> > + status = hv_do_hypercall(HVCALL_FLUSH_DEVICE_DOMAIN,
> > + flush_all, NULL);
> > + } else {
> > + status = hv_do_rep_hypercall(
> > + HVCALL_FLUSH_DEVICE_DOMAIN_LIST,
> > + count, 0, input, NULL);
> > + }
> > +
> > + local_irq_restore(flags);
> > +
> > + if (!hv_result_success(status))
> > + pr_err("HVCALL_FLUSH_DEVICE_DOMAIN_LIST failed, status %lld\n", status);
>
> As Sashiko pointed out, a failure here can lead to all kinds of trouble because
> of leaving unflushed entries. Maybe a WARN() is more appropriate? Also, maybe
> a failure in the list flush should try a flush all as a fallback, with the WARN()
> only if the flush all fails.
>
Good idea. How about we restructure this routine to something like this:
memset(input, 0, sizeof(*input));
count = hv_iommu_fill_iova_list(...);
if (count != HV_IOMMU_FLUSH_VA_OVERFLOW) {
input->device_domain = ...;
...
status = hv_do_rep_hypercall(FLUSH_DEVICE_DOMAIN_LIST, ...);
if (hv_result_success(status))
goto out;
}
/* overflow or list flush failed: fallback to full domain flush */
flush_all = (void *)input;
memset(flush_all, 0, sizeof(*flush_all));
flush_all->device_domain = ...;
status = hv_do_hypercall(FLUSH_DEVICE_DOMAIN, ...);
WARN(!hv_result_success(status), "IOTLB flush failed, status %lld\n", status);
out:
local_irq_restore(flags);
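Filling in the elided fields from the hunks quoted above, the whole
routine would then read roughly like this (an untested sketch of the
proposed restructuring):

static void hv_flush_device_domain_list(struct hv_iommu_domain *hv_domain,
					struct iommu_iotlb_gather *iotlb_gather)
{
	struct hv_input_flush_device_domain_list *input;
	struct hv_input_flush_device_domain *flush_all;
	unsigned long flags;
	u64 status;
	u16 count;

	local_irq_save(flags);

	input = *this_cpu_ptr(hyperv_pcpu_input_arg);
	memset(input, 0, sizeof(*input));

	/* The fill writes into input->iova_list[], so it precedes the branch. */
	count = hv_iommu_fill_iova_list(input->iova_list,
					iotlb_gather->start,
					iotlb_gather->end);
	if (count != HV_IOMMU_FLUSH_VA_OVERFLOW) {
		input->device_domain = hv_domain->device_domain;
		input->flags |= HV_FLUSH_DEVICE_DOMAIN_LIST_IOMMU_FORMAT;
		status = hv_do_rep_hypercall(HVCALL_FLUSH_DEVICE_DOMAIN_LIST,
					     count, 0, input, NULL);
		if (hv_result_success(status))
			goto out;
	}

	/* Overflow, or the list flush failed: fall back to a full flush. */
	flush_all = (void *)input;
	memset(flush_all, 0, sizeof(*flush_all));
	flush_all->device_domain = hv_domain->device_domain;
	status = hv_do_hypercall(HVCALL_FLUSH_DEVICE_DOMAIN, flush_all, NULL);
	WARN(!hv_result_success(status),
	     "Hyper-V IOTLB flush failed, status %lld\n", status);

out:
	local_irq_restore(flags);
}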
B.R.
Yu
* Re: [PATCH v1 3/4] iommu/hyperv: Add para-virtualized IOMMU support for Hyper-V guest
2026-05-15 14:51 ` Michael Kelley
@ 2026-05-15 16:53 ` Yu Zhang
2026-05-15 17:36 ` Michael Kelley
0 siblings, 1 reply; 19+ messages in thread
From: Yu Zhang @ 2026-05-15 16:53 UTC (permalink / raw)
To: Michael Kelley
Cc: linux-kernel@vger.kernel.org, linux-hyperv@vger.kernel.org,
iommu@lists.linux.dev, linux-pci@vger.kernel.org,
linux-arch@vger.kernel.org, wei.liu@kernel.org, kys@microsoft.com,
haiyangz@microsoft.com, decui@microsoft.com, longli@microsoft.com,
joro@8bytes.org, will@kernel.org, robin.murphy@arm.com,
bhelgaas@google.com, kwilczynski@kernel.org,
lpieralisi@kernel.org, mani@kernel.org, robh@kernel.org,
arnd@arndb.de, jgg@ziepe.ca, jacob.pan@linux.microsoft.com,
tgopinath@linux.microsoft.com,
easwar.hariharan@linux.microsoft.com, Mukesh R
On Fri, May 15, 2026 at 02:51:38PM +0000, Michael Kelley wrote:
> From: Yu Zhang <zhangyu1@linux.microsoft.com> Sent: Friday, May 15, 2026 7:00 AM
> >
> > On Thu, May 14, 2026 at 06:13:24PM +0000, Michael Kelley wrote:
> > > From: Yu Zhang <zhangyu1@linux.microsoft.com> Sent: Monday, May 11, 2026 9:24 AM
> > > >
> > > > Add a para-virtualized IOMMU driver for Linux guests running on Hyper-V.
> > > > This driver implements stage-1 IO translation within the guest OS.
> > > > It integrates with the Linux IOMMU core, utilizing Hyper-V hypercalls
> > > > for:
> > > > - Capability discovery
> > > > - Domain allocation, configuration, and deallocation
> > > > - Device attachment and detachment
> > > > - IOTLB invalidation
> > > >
> > > > The driver constructs x86-compatible stage-1 IO page tables in the
> > > > guest memory using consolidated IO page table helpers. This allows
> > > > the guest to manage stage-1 translations independently of vendor-
> > > > specific drivers (like Intel VT-d or AMD IOMMU).
> > > >
> > > > Hyper-V consumes this stage-1 IO page table when a device domain is
> > > > created and configured, and nests it with the host's stage-2 IO page
> > > > tables, therefore eliminating the VM exits for guest IOMMU mapping
> > > > operations. For unmapping operations, VM exits to perform the IOTLB
> > > > flush are still unavoidable.
> > > >
> > > > Hyper-V identifies each PCI pass-thru device by a logical device ID
> > > > in its hypercall interface. The vPCI driver (pci-hyperv) registers the
> > > > per-bus portion of this ID with the pvIOMMU driver during bus probe.
> > > > The pvIOMMU driver stores this mapping and combines it with the function
> > > > number of the endpoint PCI device to form the complete ID for hypercalls.
> > >
> > > As you are probably aware, Mukesh's patch series to support PCI
> > > pass-thru devices also needs to get the logical device ID. Maybe the
> > > registration mechanism needs to move somewhere that can be shared
> > > with his code.
> > >
> >
> > Thank you so much for the review, Michael!
> >
> > Yes, I looked at Mukesh's series and noticed his hv_pci_vmbus_device_id()
> > in pci-hyperv.c has the same dev_instance byte manipulation. We do need
> > a common registration mechanism.
> >
> > Any suggestion on where to put it? drivers/hv/hv_common.c seems like a
> > natural place, but the register/lookup functions are currently only
> > meaningful when CONFIG_HYPERV_PVIOMMU is set. If Mukesh's pass-thru
> > code also needs them, we might need a new shared Kconfig option that
> > both can select. Open to better ideas.
>
> Unfortunately, I have not looked at Mukesh's series in detail yet, so
> I don't have enough knowledge of the full situation to offer a good
> recommendation.
>
Sorry I forgot to Cc Mukesh in the previous reply. :(
@Mukesh, any thoughts on sharing the logical device ID registration mechanism?
> >
> > [...]
> >
> > > > +static void hv_flush_device_domain(struct hv_iommu_domain *hv_domain)
> > > > +{
> > > > + u64 status;
> > > > + unsigned long flags;
> > > > + struct hv_input_flush_device_domain *input;
> > > > +
> > > > + local_irq_save(flags);
> > > > +
> > > > + input = *this_cpu_ptr(hyperv_pcpu_input_arg);
> > > > + memset(input, 0, sizeof(*input));
> > > > + input->device_domain = hv_domain->device_domain;
> > >
> > > The previous version of this patch had code to set several other fields in
> > > the input. I wanted to confirm that not setting them in this version is
> > > intentional. Were they not needed?
> > >
> >
> > Oh. The RFC v1 set partition_id, owner_vtl, domain_id.type, and domain_id.id
> > individually. In this version, I just simplified it to a struct assignment.
> > No functional change.
>
> Of course! I should have looked more closely at the details before making
> this comment. :-(
>
> [...]
>
> > >
> > > Previous versions of this function did hv_iommu_detach_dev(). With that call
> > > removed from here, hv_iommu_detach_dev() is only called when attaching a
> > > domain to a device that already has a domain attached. Is it the case that
> > > Hyper-V doesn't require the detach as a cleanup step?
> > >
> >
> > The IOMMU core attaches the device to release_domain (our blocking domain)
> > before calling release_device(), so I believe the explicit detach in the RFC
> > was redundant. I simply didn't realize that at the time.
> >
>
> Got it. But after the IOMMU core attaches the device to the blocking
> domain, there's the possibility that the vPCI device is rescinded by
> Hyper-V and it goes away entirely. Or the device might be subjected
> to an "unbind/bind" cycle in Linux. Does the detach need to be done
> on the blocking domain in such cases? In this version of the patches, the
> Hyper-V "attach" and "detach" hypercalls still end up unbalanced. That
> seems a bit untidy at best, and I wonder if there are scenarios where
> Hyper-V will complain about the lack of balance.
>
Thank you, Michael. May I ask what "the vPCI device is rescinded by
Hyper-V and it goes away entirely" means?
I realize it's a bit untidy, but I want to understand this issue more
clearly first. :)
B.R.
Yu
* RE: [PATCH v1 3/4] iommu/hyperv: Add para-virtualized IOMMU support for Hyper-V guest
2026-05-15 16:53 ` Yu Zhang
@ 2026-05-15 17:36 ` Michael Kelley
0 siblings, 0 replies; 19+ messages in thread
From: Michael Kelley @ 2026-05-15 17:36 UTC (permalink / raw)
To: Yu Zhang, Michael Kelley
Cc: linux-kernel@vger.kernel.org, linux-hyperv@vger.kernel.org,
iommu@lists.linux.dev, linux-pci@vger.kernel.org,
linux-arch@vger.kernel.org, wei.liu@kernel.org, kys@microsoft.com,
haiyangz@microsoft.com, decui@microsoft.com, longli@microsoft.com,
joro@8bytes.org, will@kernel.org, robin.murphy@arm.com,
bhelgaas@google.com, kwilczynski@kernel.org,
lpieralisi@kernel.org, mani@kernel.org, robh@kernel.org,
arnd@arndb.de, jgg@ziepe.ca, jacob.pan@linux.microsoft.com,
tgopinath@linux.microsoft.com,
easwar.hariharan@linux.microsoft.com, Mukesh R
From: Yu Zhang <zhangyu1@linux.microsoft.com> Sent: Friday, May 15, 2026 9:54 AM
>
> On Fri, May 15, 2026 at 02:51:38PM +0000, Michael Kelley wrote:
> > From: Yu Zhang <zhangyu1@linux.microsoft.com> Sent: Friday, May 15, 2026 7:00 AM
> > >
> > > On Thu, May 14, 2026 at 06:13:24PM +0000, Michael Kelley wrote:
> > > > From: Yu Zhang <zhangyu1@linux.microsoft.com> Sent: Monday, May 11, 2026 9:24 AM
[....]
> > > >
> > > > Previous versions of this function did hv_iommu_detach_dev(). With that call
> > > > removed from here, hv_iommu_detach_dev() is only called when attaching a
> > > > domain to a device that already has a domain attached. Is it the case that
> > > > Hyper-V doesn't require the detach as a cleanup step?
> > > >
> > >
> > > The IOMMU core attaches the device to release_domain (our blocking domain)
> > > before calling release_device(), so I believe the explicit detach in the RFC
> > > was redundant. I simply didn't realize that at the time.
> > >
> >
> > Got it. But after the IOMMU core attaches the device to the blocking
> > domain, there's the possibility that the vPCI device is rescinded by
> > Hyper-V and it goes away entirely. Or the device might be subjected
> > to an "unbind/bind" cycle in Linux. Does the detach need to be done
> > on the blocking domain in such cases? In this version of the patches, the
> > Hyper-V "attach" and "detach" hypercalls still end up unbalanced. That
> > seems a bit untidy at best, and I wonder if there are scenarios where
> > Hyper-V will complain about the lack of balance.
> >
>
> Thank you, Michael. May I ask what "the vPCI device is rescinded by
> Hyper-V and it goes away entirely" means?
>
See the documentation at Documentation/virt/hyperv/vpci.rst in a
kernel source code tree, and particularly the section entitled "PCI Device
Removal". Such removals can and do happen in running Azure guest
VMs. Start with that info and then I'll do my best to answer follow-up
questions you may have.
The unbind/bind case is separate, but has some of the same effects in
that Linux should be removing all setup of the PCI device. There are
actually two unbind steps -- one to unbind the device-specific driver
(e.g., the Mellanox MLX5 driver or the NVMe driver) from the device, and
potentially a second to unbind the VMBus vPCI driver from the device.
These unbind/bind sequences can be done in the Linux guest without
the Hyper-V host rescinding the device.
Michael
* RE: [PATCH v1 4/4] iommu/hyperv: Add page-selective IOTLB flush support
2026-05-15 16:23 ` Yu Zhang
@ 2026-05-15 18:00 ` Michael Kelley
0 siblings, 0 replies; 19+ messages in thread
From: Michael Kelley @ 2026-05-15 18:00 UTC (permalink / raw)
To: Yu Zhang, Michael Kelley
Cc: linux-kernel@vger.kernel.org, linux-hyperv@vger.kernel.org,
iommu@lists.linux.dev, linux-pci@vger.kernel.org,
linux-arch@vger.kernel.org, wei.liu@kernel.org, kys@microsoft.com,
haiyangz@microsoft.com, decui@microsoft.com, longli@microsoft.com,
joro@8bytes.org, will@kernel.org, robin.murphy@arm.com,
bhelgaas@google.com, kwilczynski@kernel.org,
lpieralisi@kernel.org, mani@kernel.org, robh@kernel.org,
arnd@arndb.de, jgg@ziepe.ca, jacob.pan@linux.microsoft.com,
tgopinath@linux.microsoft.com,
easwar.hariharan@linux.microsoft.com
From: Yu Zhang <zhangyu1@linux.microsoft.com> Sent: Friday, May 15, 2026 9:24 AM
>
> On Thu, May 14, 2026 at 06:14:22PM +0000, Michael Kelley wrote:
> > From: Yu Zhang <zhangyu1@linux.microsoft.com> Sent: Monday, May 11, 2026 9:24 AM
> > >
[....]
> > > + unsigned long nr_pages = end_pfn - start_pfn;
> > > + u16 count = 0;
> > > +
> > > + while (nr_pages > 0) {
> > > + unsigned long flush_pages;
> > > + int order;
> > > + unsigned long pfn_align;
> > > + unsigned long size_align;
> > > +
> > > + if (count >= HV_IOMMU_MAX_FLUSH_VA_COUNT) {
> > > + count = HV_IOMMU_FLUSH_VA_OVERFLOW;
> > > + break;
> > > + }
> > > +
> > > + if (start_pfn)
> > > + pfn_align = __ffs(start_pfn);
> >
> > I don't understand why __ffs() is correct here. I would expect
> > __fls() so it is consistent with the calculation of size_align. But I
> > can only surmise how the hypercall works since there's no
> > documentation, so maybe my understanding of the hypercall is
> > wrong. If __ffs really is correct, a comment explaining why
> > would help. :-)
> >
>
> The use of __ffs() is intentional. Each flush entry invalidates a
> naturally aligned 2^N page block, and the hypervisor requires the
> page_number to be aligned to 2^page_mask_shift.
>
> Here __ffs() and __fls() serve different purposes:
> - __ffs(start_pfn) is about the alignment constraint, i.e., how
> large a block can this address support?
> - __fls(nr_pages) is about the size constraint, i.e., how large
> a block can the remaining range hold?
>
> Taking min() of both ensures each entry is both properly aligned
> and within bounds.
>
> Thanks for raising this - it definitely deserves a comment. I had to
> stare at it for a while myself to remember why. :)
Hmmm. Something about this still nags at me. I'll run some
experiments to either convince myself that you are right, or to
come up with a counterexample.
A related thought occurred to me. If each flush entry that is passed
to Hyper-V describes a naturally aligned 2^N page block, I don't
think HV_IOMMU_MAX_FLUSH_VA_COUNT can ever be reached. Each
iteration of the loop either raises the alignment of start_pfn or
strips the top bit of the remaining page count, so the number of
entries is bounded by the bit width of a PFN plus the bit width of
the page count -- at most 64 each. And with 52-bit physical
addressing and 4KiB pages, the actual widths are even smaller
than 64.
HV_IOMMU_MAX_FLUSH_VA_COUNT is the number of 8-byte
union hv_iommu_flush_va entries that fit in a 4KiB page, so
it's ~500.
My statement applies to a single flush range. If multiple flush
ranges were strung together in a single hypercall, a larger count
could be reached, but hv_flush_device_domain_list() only does
a single range. So I don't think the overflow case in
hv_flush_device_domain_list() can ever happen. But let me
do my experiments, and I will also look at this aspect to confirm
if it's right.
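As a quick way to run that experiment in user space, the greedy loop
can be replayed standalone (an illustrative mock-up of
hv_iommu_fill_iova_list() without the capacity cap; it assumes 64-bit
long, and the names here are mine, not from the series):

#include <stdio.h>

static int count_entries(unsigned long start_pfn, unsigned long nr_pages)
{
	int count = 0;

	while (nr_pages > 0) {
		int pfn_align = start_pfn ? __builtin_ctzl(start_pfn) : 63;
		int size_align = 63 - __builtin_clzl(nr_pages);
		int order = pfn_align < size_align ? pfn_align : size_align;

		start_pfn += 1UL << order;
		nr_pages -= 1UL << order;
		count++;
	}
	return count;
}

int main(void)
{
	/*
	 * Adversarial single range: a misaligned start and a length
	 * spanning nearly the whole 52-bit PFN space force the loop
	 * to climb through every order and back down again.
	 */
	printf("entries: %d\n", count_entries(1, (1UL << 52) - 2));
	return 0;
}

Even this pathological range produces on the order of 2 * 52 entries,
far below the ~500-entry capacity of the hypercall page.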
>
> > > + else
> > > + pfn_align = BITS_PER_LONG - 1;
> > > +
> > > + size_align = __fls(nr_pages);
> > > + order = min(pfn_align, size_align);
> > > + iova_list[count].page_mask_shift = order;
> > > + iova_list[count].page_number = start_pfn;
> > > +
> > > + flush_pages = 1UL << order;
> > > + start_pfn += flush_pages;
> > > + nr_pages -= flush_pages;
> > > + count++;
> > > + }
> > > +
> > > + return count;
> > > +}
> > > +
> > > +static void hv_flush_device_domain_list(struct hv_iommu_domain *hv_domain,
> > > + struct iommu_iotlb_gather *iotlb_gather)
> > > +{
> > > + u64 status;
> > > + u16 count;
> > > + unsigned long flags;
> > > + struct hv_input_flush_device_domain_list *input;
> > > +
> > > + local_irq_save(flags);
> > > +
> > > + input = *this_cpu_ptr(hyperv_pcpu_input_arg);
> > > + memset(input, 0, sizeof(*input));
> > > +
> > > + input->device_domain = hv_domain->device_domain;
> > > + input->flags |= HV_FLUSH_DEVICE_DOMAIN_LIST_IOMMU_FORMAT;
> >
> > I would suggest moving the memset() and setting the input fields down
> > under the "else" below so that they are parallel with the flush all case.
> >
>
> I agree the structure should be more symmetric. Yet I guess the memset and
> hv_iommu_fill_iova_list() need to stay before the branch since the fill
> writes directly into input->iova_list[]. :)
Agreed.
>
> > > + count = hv_iommu_fill_iova_list(input->iova_list,
> > > + iotlb_gather->start,
> > > + iotlb_gather->end);
> > > + if (count == HV_IOMMU_FLUSH_VA_OVERFLOW) {
> > > + /*
> > > + * Range exceeds hypercall page capacity. Fall back to a full
> > > + * domain flush.
> > > + */
> > > + struct hv_input_flush_device_domain *flush_all = (void *)input;
> > > +
> > > + memset(flush_all, 0, sizeof(*flush_all));
> > > + flush_all->device_domain = hv_domain->device_domain;
> > > + status = hv_do_hypercall(HVCALL_FLUSH_DEVICE_DOMAIN,
> > > + flush_all, NULL);
> > > + } else {
> > > + status = hv_do_rep_hypercall(
> > > + HVCALL_FLUSH_DEVICE_DOMAIN_LIST,
> > > + count, 0, input, NULL);
> > > + }
> > > +
> > > + local_irq_restore(flags);
> > > +
> > > + if (!hv_result_success(status))
> > > + pr_err("HVCALL_FLUSH_DEVICE_DOMAIN_LIST failed, status %lld\n", status);
> >
> > As Sashiko pointed out, a failure here can lead to all kinds of trouble because
> > of leaving unflushed entries. Maybe a WARN() is more appropriate? Also, maybe
> > a failure in the list flush should try a flush all as a fallback, with the WARN()
> > only if the flush all fails.
> >
>
> Good idea. How about we restructure this routine to something like this:
>
>
> memset(input, 0, sizeof(*input));
> count = hv_iommu_fill_iova_list(...);
>
> if (count != HV_IOMMU_FLUSH_VA_OVERFLOW) {
> input->device_domain = ...;
> ...
> status = hv_do_rep_hypercall(FLUSH_DEVICE_DOMAIN_LIST, ...);
> if (hv_result_success(status))
> goto out;
> }
>
> /* overflow or list flush failed: fallback to full domain flush */
> flush_all = (void *)input;
> memset(flush_all, 0, sizeof(*flush_all));
> flush_all->device_domain = ...;
> status = hv_do_hypercall(FLUSH_DEVICE_DOMAIN, ...);
> WARN(!hv_result_success(status), "IOTLB flush failed, status %lld\n", status);
>
> out:
> local_irq_restore(flags);
>
Yes, I think this works. But per my earlier comment, if I'm right that
the overflow case never occurs, it could be simplified further to just
do the list flush with the full flush as the error fallback. Then WARN
if the full flush fails.
Michael
end of thread [~2026-05-15 18:00 UTC | newest]
Thread overview: 19+ messages
2026-05-11 16:24 [PATCH v1 0/4] Hyper-V: Add para-virtualized IOMMU support for Linux guests Yu Zhang
2026-05-11 16:24 ` [PATCH v1 1/4] iommu: Move Hyper-V IOMMU driver to its own subdirectory Yu Zhang
2026-05-11 16:24 ` [PATCH v1 2/4] hyperv: Introduce new hypercall interfaces used by Hyper-V guest IOMMU Yu Zhang
2026-05-12 21:24 ` sashiko-bot
2026-05-11 16:24 ` [PATCH v1 3/4] iommu/hyperv: Add para-virtualized IOMMU support for Hyper-V guest Yu Zhang
2026-05-12 22:30 ` sashiko-bot
2026-05-13 18:39 ` Jacob Pan
2026-05-15 12:38 ` Yu Zhang
2026-05-14 18:13 ` Michael Kelley
2026-05-15 13:59 ` Yu Zhang
2026-05-15 14:51 ` Michael Kelley
2026-05-15 16:53 ` Yu Zhang
2026-05-15 17:36 ` Michael Kelley
2026-05-11 16:24 ` [PATCH v1 4/4] iommu/hyperv: Add page-selective IOTLB flush support Yu Zhang
2026-05-12 23:45 ` sashiko-bot
2026-05-14 18:14 ` Michael Kelley
2026-05-14 21:16 ` Michael Kelley
2026-05-15 16:23 ` Yu Zhang
2026-05-15 18:00 ` Michael Kelley