iommu.lists.linux-foundation.org archive mirror
* [RFC v1 0/5] Hyper-V: Add para-virtualized IOMMU support for Linux guests
@ 2025-12-09  5:11 Yu Zhang
  2025-12-09  5:11 ` [RFC v1 1/5] PCI: hv: Create and export hv_build_logical_dev_id() Yu Zhang
                   ` (4 more replies)
  0 siblings, 5 replies; 12+ messages in thread
From: Yu Zhang @ 2025-12-09  5:11 UTC (permalink / raw)
  To: linux-kernel, linux-hyperv, iommu, linux-pci
  Cc: kys, haiyangz, wei.liu, decui, lpieralisi, kwilczynski, mani,
	robh, bhelgaas, arnd, joro, will, robin.murphy, easwar.hariharan,
	jacob.pan, nunodasneves, mrathor, mhklinux, peterz, linux-arch

This patch series introduces a para-virtualized IOMMU driver for
Linux guests running on Microsoft Hyper-V. The primary objective
is to enable hardware-assisted DMA isolation and scalable device
assignment for Hyper-V child partitions, avoiding the performance
overhead and complexity of emulated IOMMU hardware.

The driver implements the following core functionality:
*   Hypercall-based Enumeration
    Unlike traditional ACPI-based discovery (e.g., DMAR/IVRS),
    this driver enumerates the Hyper-V IOMMU capabilities directly
    via hypercalls. This approach allows the guest to discover
    IOMMU presence and features without requiring specific virtual
    firmware extensions or modifications.

*   Domain Management
    The driver manages IOMMU domains through a new set of Hyper-V
    hypercall interfaces, handling domain allocation, attachment,
    and detachment for endpoint devices.

*   IOTLB Invalidation
    IOTLB invalidation requests are marshaled and issued to the
    hypervisor through the same hypercall mechanism.

*   Nested Translation Support
    This implementation leverages guest-managed stage-1 I/O page
    tables nested with host stage-2 translations. It is built
    upon the consolidated IOMMU page table framework designed by
    Jason Gunthorpe [1]. This design eliminates the need for complex
    emulation during map operations and ensures scalability across
    different architectures.

Implementation Notes:
*   Architecture Independence
    While the current implementation only supports x86 platforms (Intel
    VT-d and AMD IOMMU), the driver design aims to be as architecture-
    agnostic as possible. To achieve this, initialization occurs via
    `device_initcall` rather than `x86_init.iommu.iommu_init`, and shutdown
    is handled via `syscore_ops` instead of `x86_platform.iommu_shutdown`.

*   MSI Region Handling
    In this RFC, the hardware MSI region is hard-coded to the standard
    x86 interrupt range (0xfee00000 - 0xfeefffff). Future updates may
    allow this configuration to be queried via hypercalls if new hardware
    platforms are to be supported.

*   Reserved Regions (RMRR)
    There is currently no requirement to support assigned devices with
    ACPI RMRR limitations. Consequently, this patch series does not specify
    or query reserved memory regions.

Testing:
This series has been validated using dmatest with Intel DSA devices
assigned to the child partition. The tests confirmed successful DMA
transactions under the para-virtualized IOMMU.

Future Work:
*   Page-selective IOTLB Invalidation
    The current implementation relies on full-domain flushes. Support
    for page-selective invalidation is planned for a future series.

*   Advanced Features
    Support for vSVA and virtual PRI will be addressed in subsequent
    updates.

*   Root Partition Co-existence
    Ensure compatibility with the distinct para-virtualized IOMMU driver
    used by Hyper-V's Linux root partition, where DMA remapping is not
    achieved via stage-1 IO page tables and a separate set of iommu ops
    is provided.

[1] https://github.com/jgunthorpe/linux/tree/iommu_pt_all

Easwar Hariharan (2):
  PCI: hv: Create and export hv_build_logical_dev_id()
  iommu: Move Hyper-V IOMMU driver to its own subdirectory

Wei Liu (1):
  hyperv: Introduce new hypercall interfaces used by Hyper-V guest IOMMU

Yu Zhang (2):
  hyperv: allow hypercall output pages to be allocated for child
    partitions
  iommu/hyperv: Add para-virtualized IOMMU support for Hyper-V guest

 drivers/hv/hv_common.c                        |  21 +-
 drivers/iommu/Kconfig                         |  10 +-
 drivers/iommu/Makefile                        |   2 +-
 drivers/iommu/hyperv/Kconfig                  |  24 +
 drivers/iommu/hyperv/Makefile                 |   3 +
 drivers/iommu/hyperv/iommu.c                  | 608 ++++++++++++++++++
 drivers/iommu/hyperv/iommu.h                  |  53 ++
 .../irq_remapping.c}                          |   2 +-
 drivers/pci/controller/pci-hyperv.c           |  28 +-
 include/asm-generic/mshyperv.h                |   2 +
 include/hyperv/hvgdk_mini.h                   |   8 +
 include/hyperv/hvhdk_mini.h                   | 123 ++++
 12 files changed, 850 insertions(+), 34 deletions(-)
 create mode 100644 drivers/iommu/hyperv/Kconfig
 create mode 100644 drivers/iommu/hyperv/Makefile
 create mode 100644 drivers/iommu/hyperv/iommu.c
 create mode 100644 drivers/iommu/hyperv/iommu.h
 rename drivers/iommu/{hyperv-iommu.c => hyperv/irq_remapping.c} (99%)

-- 
2.49.0


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [RFC v1 1/5] PCI: hv: Create and export hv_build_logical_dev_id()
  2025-12-09  5:11 [RFC v1 0/5] Hyper-V: Add para-virtualized IOMMU support for Linux guests Yu Zhang
@ 2025-12-09  5:11 ` Yu Zhang
  2025-12-09  5:21   ` Randy Dunlap
  2025-12-10 21:39   ` Bjorn Helgaas
  2025-12-09  5:11 ` [RFC v1 2/5] iommu: Move Hyper-V IOMMU driver to its own subdirectory Yu Zhang
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 12+ messages in thread
From: Yu Zhang @ 2025-12-09  5:11 UTC (permalink / raw)
  To: linux-kernel, linux-hyperv, iommu, linux-pci
  Cc: kys, haiyangz, wei.liu, decui, lpieralisi, kwilczynski, mani,
	robh, bhelgaas, arnd, joro, will, robin.murphy, easwar.hariharan,
	jacob.pan, nunodasneves, mrathor, mhklinux, peterz, linux-arch

From: Easwar Hariharan <easwar.hariharan@linux.microsoft.com>

Hyper-V uses a logical device ID to identify a PCI endpoint device for
child partitions. This ID will also be required for future hypercalls
used by the Hyper-V IOMMU driver.

Refactor the logic for building this logical device ID into a standalone
helper function and export the interface for wider use.

Signed-off-by: Easwar Hariharan <easwar.hariharan@linux.microsoft.com>
Signed-off-by: Yu Zhang <zhangyu1@linux.microsoft.com>
---
 drivers/pci/controller/pci-hyperv.c | 28 ++++++++++++++++++++--------
 include/asm-generic/mshyperv.h      |  2 ++
 2 files changed, 22 insertions(+), 8 deletions(-)

diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
index 146b43981b27..4b82e06b5d93 100644
--- a/drivers/pci/controller/pci-hyperv.c
+++ b/drivers/pci/controller/pci-hyperv.c
@@ -598,15 +598,31 @@ static unsigned int hv_msi_get_int_vector(struct irq_data *data)
 
 #define hv_msi_prepare		pci_msi_prepare
 
+/*
+ * Build a "Device Logical ID" out of this PCI bus's instance GUID and the
+ * function number of the device.
+ */
+u64 hv_build_logical_dev_id(struct pci_dev *pdev)
+{
+	struct pci_bus *pbus = pdev->bus;
+	struct hv_pcibus_device *hbus = container_of(pbus->sysdata,
+						struct hv_pcibus_device, sysdata);
+
+	return (u64)((hbus->hdev->dev_instance.b[5] << 24) |
+		     (hbus->hdev->dev_instance.b[4] << 16) |
+		     (hbus->hdev->dev_instance.b[7] << 8)  |
+		     (hbus->hdev->dev_instance.b[6] & 0xf8) |
+		     PCI_FUNC(pdev->devfn));
+}
+EXPORT_SYMBOL_GPL(hv_build_logical_dev_id);
+
 /**
  * hv_irq_retarget_interrupt() - "Unmask" the IRQ by setting its current
  * affinity.
  * @data:	Describes the IRQ
  *
  * Build new a destination for the MSI and make a hypercall to
- * update the Interrupt Redirection Table. "Device Logical ID"
- * is built out of this PCI bus's instance GUID and the function
- * number of the device.
+ * update the Interrupt Redirection Table.
  */
 static void hv_irq_retarget_interrupt(struct irq_data *data)
 {
@@ -642,11 +658,7 @@ static void hv_irq_retarget_interrupt(struct irq_data *data)
 	params->int_entry.source = HV_INTERRUPT_SOURCE_MSI;
 	params->int_entry.msi_entry.address.as_uint32 = int_desc->address & 0xffffffff;
 	params->int_entry.msi_entry.data.as_uint32 = int_desc->data;
-	params->device_id = (hbus->hdev->dev_instance.b[5] << 24) |
-			   (hbus->hdev->dev_instance.b[4] << 16) |
-			   (hbus->hdev->dev_instance.b[7] << 8) |
-			   (hbus->hdev->dev_instance.b[6] & 0xf8) |
-			   PCI_FUNC(pdev->devfn);
+	params->device_id = hv_build_logical_dev_id(pdev);
 	params->int_target.vector = hv_msi_get_int_vector(data);
 
 	if (hbus->protocol_version >= PCI_PROTOCOL_VERSION_1_2) {
diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
index 64ba6bc807d9..1a205ed69435 100644
--- a/include/asm-generic/mshyperv.h
+++ b/include/asm-generic/mshyperv.h
@@ -71,6 +71,8 @@ extern enum hv_partition_type hv_curr_partition_type;
 extern void * __percpu *hyperv_pcpu_input_arg;
 extern void * __percpu *hyperv_pcpu_output_arg;
 
+extern u64 hv_build_logical_dev_id(struct pci_dev *pdev);
+
 u64 hv_do_hypercall(u64 control, void *inputaddr, void *outputaddr);
 u64 hv_do_fast_hypercall8(u16 control, u64 input8);
 u64 hv_do_fast_hypercall16(u16 control, u64 input1, u64 input2);
-- 
2.49.0



* [RFC v1 2/5] iommu: Move Hyper-V IOMMU driver to its own subdirectory
  2025-12-09  5:11 [RFC v1 0/5] Hyper-V: Add para-virtualized IOMMU support for Linux guests Yu Zhang
  2025-12-09  5:11 ` [RFC v1 1/5] PCI: hv: Create and export hv_build_logical_dev_id() Yu Zhang
@ 2025-12-09  5:11 ` Yu Zhang
  2025-12-09  5:11 ` [RFC v1 3/5] hyperv: Introduce new hypercall interfaces used by Hyper-V guest IOMMU Yu Zhang
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 12+ messages in thread
From: Yu Zhang @ 2025-12-09  5:11 UTC (permalink / raw)
  To: linux-kernel, linux-hyperv, iommu, linux-pci
  Cc: kys, haiyangz, wei.liu, decui, lpieralisi, kwilczynski, mani,
	robh, bhelgaas, arnd, joro, will, robin.murphy, easwar.hariharan,
	jacob.pan, nunodasneves, mrathor, mhklinux, peterz, linux-arch

From: Easwar Hariharan <eahariha@linux.microsoft.com>

The Hyper-V IOMMU driver currently supports only IRQ remapping.
As DMA remapping support is about to be added, prepare a subdirectory
to contain the different feature files.

This is a simple rename commit and has no functional changes.

Signed-off-by: Easwar Hariharan <eahariha@linux.microsoft.com>
Signed-off-by: Yu Zhang <zhangyu1@linux.microsoft.com>
---
 drivers/iommu/Kconfig                                  | 10 +---------
 drivers/iommu/Makefile                                 |  2 +-
 drivers/iommu/hyperv/Kconfig                           | 10 ++++++++++
 drivers/iommu/hyperv/Makefile                          |  2 ++
 .../iommu/{hyperv-iommu.c => hyperv/irq_remapping.c}   |  2 +-
 5 files changed, 15 insertions(+), 11 deletions(-)
 create mode 100644 drivers/iommu/hyperv/Kconfig
 create mode 100644 drivers/iommu/hyperv/Makefile
 rename drivers/iommu/{hyperv-iommu.c => hyperv/irq_remapping.c} (99%)

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index c9ae3221cd6f..661ff4e764cc 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -194,6 +194,7 @@ config MSM_IOMMU
 source "drivers/iommu/amd/Kconfig"
 source "drivers/iommu/arm/Kconfig"
 source "drivers/iommu/intel/Kconfig"
+source "drivers/iommu/hyperv/Kconfig"
 source "drivers/iommu/iommufd/Kconfig"
 source "drivers/iommu/riscv/Kconfig"
 
@@ -350,15 +351,6 @@ config MTK_IOMMU_V1
 
 	  if unsure, say N here.
 
-config HYPERV_IOMMU
-	bool "Hyper-V IRQ Handling"
-	depends on HYPERV && X86
-	select IOMMU_API
-	default HYPERV
-	help
-	  Stub IOMMU driver to handle IRQs to support Hyper-V Linux
-	  guest and root partitions.
-
 config VIRTIO_IOMMU
 	tristate "Virtio IOMMU driver"
 	depends on VIRTIO
diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
index b17ef9818759..757dc377cb66 100644
--- a/drivers/iommu/Makefile
+++ b/drivers/iommu/Makefile
@@ -4,6 +4,7 @@ obj-$(CONFIG_AMD_IOMMU) += amd/
 obj-$(CONFIG_INTEL_IOMMU) += intel/
 obj-$(CONFIG_RISCV_IOMMU) += riscv/
 obj-$(CONFIG_GENERIC_PT) += generic_pt/fmt/
+obj-$(CONFIG_HYPERV_IOMMU) += hyperv/
 obj-$(CONFIG_IOMMU_API) += iommu.o
 obj-$(CONFIG_IOMMU_SUPPORT) += iommu-pages.o
 obj-$(CONFIG_IOMMU_API) += iommu-traces.o
@@ -29,7 +30,6 @@ obj-$(CONFIG_TEGRA_IOMMU_SMMU) += tegra-smmu.o
 obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o
 obj-$(CONFIG_FSL_PAMU) += fsl_pamu.o fsl_pamu_domain.o
 obj-$(CONFIG_S390_IOMMU) += s390-iommu.o
-obj-$(CONFIG_HYPERV_IOMMU) += hyperv-iommu.o
 obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o
 obj-$(CONFIG_IOMMU_SVA) += iommu-sva.o
 obj-$(CONFIG_IOMMU_IOPF) += io-pgfault.o
diff --git a/drivers/iommu/hyperv/Kconfig b/drivers/iommu/hyperv/Kconfig
new file mode 100644
index 000000000000..30f40d867036
--- /dev/null
+++ b/drivers/iommu/hyperv/Kconfig
@@ -0,0 +1,10 @@
+# SPDX-License-Identifier: GPL-2.0-only
+# HyperV paravirtualized IOMMU support
+config HYPERV_IOMMU
+	bool "Hyper-V IRQ Handling"
+	depends on HYPERV && X86
+	select IOMMU_API
+	default HYPERV
+	help
+	  Stub IOMMU driver to handle IRQs to support Hyper-V Linux
+	  guest and root partitions.
diff --git a/drivers/iommu/hyperv/Makefile b/drivers/iommu/hyperv/Makefile
new file mode 100644
index 000000000000..9f557bad94ff
--- /dev/null
+++ b/drivers/iommu/hyperv/Makefile
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0
+obj-$(CONFIG_HYPERV_IOMMU) += irq_remapping.o
diff --git a/drivers/iommu/hyperv-iommu.c b/drivers/iommu/hyperv/irq_remapping.c
similarity index 99%
rename from drivers/iommu/hyperv-iommu.c
rename to drivers/iommu/hyperv/irq_remapping.c
index 0961ac805944..f2c4c7d67302 100644
--- a/drivers/iommu/hyperv-iommu.c
+++ b/drivers/iommu/hyperv/irq_remapping.c
@@ -22,7 +22,7 @@
 #include <asm/hypervisor.h>
 #include <asm/mshyperv.h>
 
-#include "irq_remapping.h"
+#include "../irq_remapping.h"
 
 #ifdef CONFIG_IRQ_REMAP
 
-- 
2.49.0



* [RFC v1 3/5] hyperv: Introduce new hypercall interfaces used by Hyper-V guest IOMMU
  2025-12-09  5:11 [RFC v1 0/5] Hyper-V: Add para-virtualized IOMMU support for Linux guests Yu Zhang
  2025-12-09  5:11 ` [RFC v1 1/5] PCI: hv: Create and export hv_build_logical_dev_id() Yu Zhang
  2025-12-09  5:11 ` [RFC v1 2/5] iommu: Move Hyper-V IOMMU driver to its own subdirectory Yu Zhang
@ 2025-12-09  5:11 ` Yu Zhang
  2025-12-09  5:11 ` [RFC v1 4/5] hyperv: allow hypercall output pages to be allocated for child partitions Yu Zhang
  2025-12-09  5:11 ` [RFC v1 5/5] iommu/hyperv: Add para-virtualized IOMMU support for Hyper-V guest Yu Zhang
  4 siblings, 0 replies; 12+ messages in thread
From: Yu Zhang @ 2025-12-09  5:11 UTC (permalink / raw)
  To: linux-kernel, linux-hyperv, iommu, linux-pci
  Cc: kys, haiyangz, wei.liu, decui, lpieralisi, kwilczynski, mani,
	robh, bhelgaas, arnd, joro, will, robin.murphy, easwar.hariharan,
	jacob.pan, nunodasneves, mrathor, mhklinux, peterz, linux-arch

From: Wei Liu <wei.liu@kernel.org>

Hyper-V guest IOMMU is a para-virtualized IOMMU based on hypercalls.
Introduce the hypercalls used by the child partition to interact with
this facility.

These hypercalls fall into the following categories:
- Detection and capability: HVCALL_GET_IOMMU_CAPABILITIES is used to
  detect the existence and capabilities of the guest IOMMU.

- Device management: HVCALL_GET_LOGICAL_DEVICE_PROPERTY is used to
  check whether an endpoint device is managed by the guest IOMMU.

- Domain management: A set of hypercalls is provided to handle the
  creation, configuration, and deletion of guest domains, as well as
  the attachment/detachment of endpoint devices to/from those domains.

- IOTLB flushing: HVCALL_FLUSH_DEVICE_DOMAIN is used to ask Hyper-V
  for a domain-selective IOTLB flush (whose handler may flush the
  device TLB as well). Page-selective IOTLB flushes will be offered
  by new hypercalls in future patches.

Signed-off-by: Wei Liu <wei.liu@kernel.org>
Co-developed-by: Jacob Pan <jacob.pan@linux.microsoft.com>
Signed-off-by: Jacob Pan <jacob.pan@linux.microsoft.com>
Co-developed-by: Easwar Hariharan <easwar.hariharan@linux.microsoft.com>
Signed-off-by: Easwar Hariharan <easwar.hariharan@linux.microsoft.com>
Co-developed-by: Yu Zhang <zhangyu1@linux.microsoft.com>
Signed-off-by: Yu Zhang <zhangyu1@linux.microsoft.com>
---
 include/hyperv/hvgdk_mini.h |   8 +++
 include/hyperv/hvhdk_mini.h | 123 ++++++++++++++++++++++++++++++++++++
 2 files changed, 131 insertions(+)

diff --git a/include/hyperv/hvgdk_mini.h b/include/hyperv/hvgdk_mini.h
index 77abddfc750e..e5b302bbfe14 100644
--- a/include/hyperv/hvgdk_mini.h
+++ b/include/hyperv/hvgdk_mini.h
@@ -478,10 +478,16 @@ union hv_vp_assist_msr_contents {	 /* HV_REGISTER_VP_ASSIST_PAGE */
 #define HVCALL_GET_VP_INDEX_FROM_APIC_ID			0x009a
 #define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_SPACE	0x00af
 #define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_LIST	0x00b0
+#define HVCALL_CREATE_DEVICE_DOMAIN			0x00b1
+#define HVCALL_ATTACH_DEVICE_DOMAIN			0x00b2
 #define HVCALL_SIGNAL_EVENT_DIRECT			0x00c0
 #define HVCALL_POST_MESSAGE_DIRECT			0x00c1
 #define HVCALL_DISPATCH_VP				0x00c2
+#define HVCALL_DETACH_DEVICE_DOMAIN			0x00c4
+#define HVCALL_DELETE_DEVICE_DOMAIN			0x00c5
 #define HVCALL_GET_GPA_PAGES_ACCESS_STATES		0x00c9
+#define HVCALL_CONFIGURE_DEVICE_DOMAIN			0x00ce
+#define HVCALL_FLUSH_DEVICE_DOMAIN			0x00d0
 #define HVCALL_ACQUIRE_SPARSE_SPA_PAGE_HOST_ACCESS	0x00d7
 #define HVCALL_RELEASE_SPARSE_SPA_PAGE_HOST_ACCESS	0x00d8
 #define HVCALL_MODIFY_SPARSE_GPA_PAGE_HOST_VISIBILITY	0x00db
@@ -492,6 +498,8 @@ union hv_vp_assist_msr_contents {	 /* HV_REGISTER_VP_ASSIST_PAGE */
 #define HVCALL_GET_VP_CPUID_VALUES			0x00f4
 #define HVCALL_MMIO_READ				0x0106
 #define HVCALL_MMIO_WRITE				0x0107
+#define HVCALL_GET_IOMMU_CAPABILITIES			0x0125
+#define HVCALL_GET_LOGICAL_DEVICE_PROPERTY		0x0127
 
 /* HV_HYPERCALL_INPUT */
 #define HV_HYPERCALL_RESULT_MASK	GENMASK_ULL(15, 0)
diff --git a/include/hyperv/hvhdk_mini.h b/include/hyperv/hvhdk_mini.h
index 858f6a3925b3..ba6b91746b13 100644
--- a/include/hyperv/hvhdk_mini.h
+++ b/include/hyperv/hvhdk_mini.h
@@ -400,4 +400,127 @@ union hv_device_id {		/* HV_DEVICE_ID */
 	} acpi;
 } __packed;
 
+/* Device domain types */
+#define HV_DEVICE_DOMAIN_TYPE_S1	1 /* Stage 1 domain */
+
+/* ID for default domain and NULL domain */
+#define HV_DEVICE_DOMAIN_ID_DEFAULT 0
+#define HV_DEVICE_DOMAIN_ID_NULL    0xFFFFFFFFULL
+
+union hv_device_domain_id {
+	u64 as_uint64;
+	struct {
+		u32 type: 4;
+		u32 reserved: 28;
+		u32 id;
+	} __packed;
+};
+
+struct hv_input_device_domain {
+	u64 partition_id;
+	union hv_input_vtl owner_vtl;
+	u8 padding[7];
+	union hv_device_domain_id domain_id;
+} __packed;
+
+union hv_create_device_domain_flags {
+	u32 as_uint32;
+	struct {
+		u32 forward_progress_required: 1;
+		u32 inherit_owning_vtl: 1;
+		u32 reserved: 30;
+	} __packed;
+};
+
+struct hv_input_create_device_domain {
+	struct hv_input_device_domain device_domain;
+	union hv_create_device_domain_flags create_device_domain_flags;
+} __packed;
+
+struct hv_input_delete_device_domain {
+	struct hv_input_device_domain device_domain;
+} __packed;
+
+struct hv_input_attach_device_domain {
+	struct hv_input_device_domain device_domain;
+	union hv_device_id device_id;
+} __packed;
+
+struct hv_input_detach_device_domain {
+	u64 partition_id;
+	union hv_device_id device_id;
+} __packed;
+
+struct hv_device_domain_settings {
+	struct {
+		/*
+		 * Enable translations. If not enabled, all transactions bypass
+		 * S1 translations.
+		 */
+		u64 translation_enabled: 1;
+		u64 blocked: 1;
+		/*
+		 * First stage address translation paging mode:
+		 * 0: 4-level paging (default)
+		 * 1: 5-level paging
+		 */
+		u64 first_stage_paging_mode: 1;
+		u64 reserved: 61;
+	} flags;
+
+	/* Address of translation table */
+	u64 page_table_root;
+} __packed;
+
+struct hv_input_configure_device_domain {
+	struct hv_input_device_domain device_domain;
+	struct hv_device_domain_settings settings;
+} __packed;
+
+struct hv_input_get_iommu_capabilities {
+	u64 partition_id;
+	u64 reserved;
+} __packed;
+
+struct hv_output_get_iommu_capabilities {
+	u32 size;
+	u16 reserved;
+	u8  max_iova_width;
+	u8  max_pasid_width;
+
+#define HV_IOMMU_CAP_PRESENT (1ULL << 0)
+#define HV_IOMMU_CAP_S2 (1ULL << 1)
+#define HV_IOMMU_CAP_S1 (1ULL << 2)
+#define HV_IOMMU_CAP_S1_5LVL (1ULL << 3)
+#define HV_IOMMU_CAP_PASID (1ULL << 4)
+#define HV_IOMMU_CAP_ATS (1ULL << 5)
+#define HV_IOMMU_CAP_PRI (1ULL << 6)
+
+	u64 iommu_cap;
+	u64 pgsize_bitmap;
+} __packed;
+
+enum hv_logical_device_property_code {
+	HV_LOGICAL_DEVICE_PROPERTY_PVIOMMU = 10,
+};
+
+struct hv_input_get_logical_device_property {
+	u64 partition_id;
+	u64 logical_device_id;
+	enum hv_logical_device_property_code code;
+	u32 reserved;
+} __packed;
+
+struct hv_output_get_logical_device_property {
+#define HV_DEVICE_IOMMU_ENABLED (1ULL << 0)
+	u64 device_iommu;
+	u64 reserved;
+} __packed;
+
+struct hv_input_flush_device_domain {
+	struct hv_input_device_domain device_domain;
+	u32 flags;
+	u32 reserved;
+} __packed;
+
 #endif /* _HV_HVHDK_MINI_H */
-- 
2.49.0



* [RFC v1 4/5] hyperv: allow hypercall output pages to be allocated for child partitions
  2025-12-09  5:11 [RFC v1 0/5] Hyper-V: Add para-virtualized IOMMU support for Linux guests Yu Zhang
                   ` (2 preceding siblings ...)
  2025-12-09  5:11 ` [RFC v1 3/5] hyperv: Introduce new hypercall interfaces used by Hyper-V guest IOMMU Yu Zhang
@ 2025-12-09  5:11 ` Yu Zhang
  2025-12-09  5:11 ` [RFC v1 5/5] iommu/hyperv: Add para-virtualized IOMMU support for Hyper-V guest Yu Zhang
  4 siblings, 0 replies; 12+ messages in thread
From: Yu Zhang @ 2025-12-09  5:11 UTC (permalink / raw)
  To: linux-kernel, linux-hyperv, iommu, linux-pci
  Cc: kys, haiyangz, wei.liu, decui, lpieralisi, kwilczynski, mani,
	robh, bhelgaas, arnd, joro, will, robin.murphy, easwar.hariharan,
	jacob.pan, nunodasneves, mrathor, mhklinux, peterz, linux-arch

Previously, the allocation of per-CPU output argument pages was restricted
to root partitions or those operating in VTL mode.

Remove this restriction to support guest IOMMU related hypercalls, which
require valid output pages to function correctly.

While unconditionally allocating per-CPU output pages scales with the number
of vCPUs and potentially adds overhead for guests that do not use the IOMMU,
this change anticipates that future hypercalls from child partitions may
also require these output pages.

Signed-off-by: Yu Zhang <zhangyu1@linux.microsoft.com>
---
 drivers/hv/hv_common.c | 21 ++++++---------------
 1 file changed, 6 insertions(+), 15 deletions(-)

diff --git a/drivers/hv/hv_common.c b/drivers/hv/hv_common.c
index e109a620c83f..034fb2592884 100644
--- a/drivers/hv/hv_common.c
+++ b/drivers/hv/hv_common.c
@@ -255,11 +255,6 @@ static void hv_kmsg_dump_register(void)
 	}
 }
 
-static inline bool hv_output_page_exists(void)
-{
-	return hv_parent_partition() || IS_ENABLED(CONFIG_HYPERV_VTL_MODE);
-}
-
 void __init hv_get_partition_id(void)
 {
 	struct hv_output_get_partition_id *output;
@@ -371,11 +366,9 @@ int __init hv_common_init(void)
 	hyperv_pcpu_input_arg = alloc_percpu(void  *);
 	BUG_ON(!hyperv_pcpu_input_arg);
 
-	/* Allocate the per-CPU state for output arg for root */
-	if (hv_output_page_exists()) {
-		hyperv_pcpu_output_arg = alloc_percpu(void *);
-		BUG_ON(!hyperv_pcpu_output_arg);
-	}
+	/* Allocate the per-CPU state for output arg */
+	hyperv_pcpu_output_arg = alloc_percpu(void *);
+	BUG_ON(!hyperv_pcpu_output_arg);
 
 	if (hv_parent_partition()) {
 		hv_synic_eventring_tail = alloc_percpu(u8 *);
@@ -473,7 +466,7 @@ int hv_common_cpu_init(unsigned int cpu)
 	u8 **synic_eventring_tail;
 	u64 msr_vp_index;
 	gfp_t flags;
-	const int pgcount = hv_output_page_exists() ? 2 : 1;
+	const int pgcount = 2;
 	void *mem;
 	int ret = 0;
 
@@ -491,10 +484,8 @@ int hv_common_cpu_init(unsigned int cpu)
 		if (!mem)
 			return -ENOMEM;
 
-		if (hv_output_page_exists()) {
-			outputarg = (void **)this_cpu_ptr(hyperv_pcpu_output_arg);
-			*outputarg = (char *)mem + HV_HYP_PAGE_SIZE;
-		}
+		outputarg = (void **)this_cpu_ptr(hyperv_pcpu_output_arg);
+		*outputarg = (char *)mem + HV_HYP_PAGE_SIZE;
 
 		if (!ms_hyperv.paravisor_present &&
 		    (hv_isolation_type_snp() || hv_isolation_type_tdx())) {
-- 
2.49.0



* [RFC v1 5/5] iommu/hyperv: Add para-virtualized IOMMU support for Hyper-V guest
  2025-12-09  5:11 [RFC v1 0/5] Hyper-V: Add para-virtualized IOMMU support for Linux guests Yu Zhang
                   ` (3 preceding siblings ...)
  2025-12-09  5:11 ` [RFC v1 4/5] hyperv: allow hypercall output pages to be allocated for child partitions Yu Zhang
@ 2025-12-09  5:11 ` Yu Zhang
  2025-12-10 17:15   ` Easwar Hariharan
  4 siblings, 1 reply; 12+ messages in thread
From: Yu Zhang @ 2025-12-09  5:11 UTC (permalink / raw)
  To: linux-kernel, linux-hyperv, iommu, linux-pci
  Cc: kys, haiyangz, wei.liu, decui, lpieralisi, kwilczynski, mani,
	robh, bhelgaas, arnd, joro, will, robin.murphy, easwar.hariharan,
	jacob.pan, nunodasneves, mrathor, mhklinux, peterz, linux-arch

Add a para-virtualized IOMMU driver for Linux guests running on Hyper-V.
This driver implements stage-1 IO translation within the guest OS.
It integrates with the Linux IOMMU core, utilizing Hyper-V hypercalls
for:
 - Capability discovery
 - Domain allocation, configuration, and deallocation
 - Device attachment and detachment
 - IOTLB invalidation

The driver constructs x86-compatible stage-1 IO page tables in the
guest memory using consolidated IO page table helpers. This allows
the guest to manage stage-1 translations independently of vendor-
specific drivers (like Intel VT-d or AMD IOMMU).

Hyper-V consumes this stage-1 IO page table when a device domain is
created and configured, and nests it with the host's stage-2 IO page
tables, thereby eliminating VM exits for guest IOMMU mapping
operations.

For guest IOMMU unmapping operations, VM exits to perform the IOTLB
flush (and possibly the device TLB flush) remain unavoidable. For now,
HVCALL_FLUSH_DEVICE_DOMAIN is used to implement a domain-selective
IOTLB flush. New hypercalls for finer-grained invalidation will be
provided in future patches.

Co-developed-by: Wei Liu <wei.liu@kernel.org>
Signed-off-by: Wei Liu <wei.liu@kernel.org>
Co-developed-by: Jacob Pan <jacob.pan@linux.microsoft.com>
Signed-off-by: Jacob Pan <jacob.pan@linux.microsoft.com>
Co-developed-by: Easwar Hariharan <easwar.hariharan@linux.microsoft.com>
Signed-off-by: Easwar Hariharan <easwar.hariharan@linux.microsoft.com>
Signed-off-by: Yu Zhang <zhangyu1@linux.microsoft.com>
---
 drivers/iommu/hyperv/Kconfig  |  14 +
 drivers/iommu/hyperv/Makefile |   1 +
 drivers/iommu/hyperv/iommu.c  | 608 ++++++++++++++++++++++++++++++++++
 drivers/iommu/hyperv/iommu.h  |  53 +++
 4 files changed, 676 insertions(+)
 create mode 100644 drivers/iommu/hyperv/iommu.c
 create mode 100644 drivers/iommu/hyperv/iommu.h

diff --git a/drivers/iommu/hyperv/Kconfig b/drivers/iommu/hyperv/Kconfig
index 30f40d867036..fa3c77752d7b 100644
--- a/drivers/iommu/hyperv/Kconfig
+++ b/drivers/iommu/hyperv/Kconfig
@@ -8,3 +8,17 @@ config HYPERV_IOMMU
 	help
 	  Stub IOMMU driver to handle IRQs to support Hyper-V Linux
 	  guest and root partitions.
+
+if HYPERV_IOMMU
+config HYPERV_PVIOMMU
+	bool "Microsoft Hypervisor para-virtualized IOMMU support"
+	depends on X86 && HYPERV && PCI_HYPERV
+	depends on IOMMU_PT
+	select IOMMU_API
+	select IOMMU_DMA
+	select DMA_OPS
+	select IOMMU_IOVA
+	default HYPERV
+	help
+	  A para-virtualized IOMMU for Microsoft Hypervisor guest.
+endif
diff --git a/drivers/iommu/hyperv/Makefile b/drivers/iommu/hyperv/Makefile
index 9f557bad94ff..8669741c0a51 100644
--- a/drivers/iommu/hyperv/Makefile
+++ b/drivers/iommu/hyperv/Makefile
@@ -1,2 +1,3 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_HYPERV_IOMMU) += irq_remapping.o
+obj-$(CONFIG_HYPERV_PVIOMMU) += iommu.o
diff --git a/drivers/iommu/hyperv/iommu.c b/drivers/iommu/hyperv/iommu.c
new file mode 100644
index 000000000000..3d0aff868e16
--- /dev/null
+++ b/drivers/iommu/hyperv/iommu.c
@@ -0,0 +1,608 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Hyper-V IOMMU driver.
+ *
+ * Copyright (C) 2019, 2024-2025 Microsoft, Inc.
+ */
+
+#include <linux/iommu.h>
+#include <linux/pci.h>
+#include <linux/dma-map-ops.h>
+#include <linux/generic_pt/iommu.h>
+#include <linux/syscore_ops.h>
+#include <linux/pci-ats.h>
+
+#include <asm/iommu.h>
+#include <asm/hypervisor.h>
+#include <asm/mshyperv.h>
+
+#include "iommu.h"
+#include "../dma-iommu.h"
+#include "../iommu-pages.h"
+
+static void hv_iommu_detach_dev(struct iommu_domain *domain, struct device *dev);
+static void hv_flush_device_domain(struct hv_iommu_domain *hv_domain);
+struct hv_iommu_dev *hv_iommu_device;
+static struct hv_iommu_domain hv_identity_domain;
+static struct hv_iommu_domain hv_blocking_domain;
+static const struct iommu_domain_ops hv_iommu_identity_domain_ops;
+static const struct iommu_domain_ops hv_iommu_blocking_domain_ops;
+static struct iommu_ops hv_iommu_ops;
+
+#define hv_iommu_present(iommu_cap) (iommu_cap & HV_IOMMU_CAP_PRESENT)
+#define hv_iommu_s1_domain_supported(iommu_cap) (iommu_cap & HV_IOMMU_CAP_S1)
+#define hv_iommu_5lvl_supported(iommu_cap) (iommu_cap & HV_IOMMU_CAP_S1_5LVL)
+#define hv_iommu_ats_supported(iommu_cap) (iommu_cap & HV_IOMMU_CAP_ATS)
+
+static int hv_create_device_domain(struct hv_iommu_domain *hv_domain, u32 domain_stage)
+{
+	int ret;
+	u64 status;
+	unsigned long flags;
+	struct hv_input_create_device_domain *input;
+
+	ret = ida_alloc_range(&hv_iommu_device->domain_ids,
+			hv_iommu_device->first_domain, hv_iommu_device->last_domain,
+			GFP_KERNEL);
+	if (ret < 0)
+		return ret;
+
+	hv_domain->device_domain.partition_id = HV_PARTITION_ID_SELF;
+	hv_domain->device_domain.domain_id.type = domain_stage;
+	hv_domain->device_domain.domain_id.id = ret;
+	hv_domain->hv_iommu = hv_iommu_device;
+
+	local_irq_save(flags);
+
+	input = *this_cpu_ptr(hyperv_pcpu_input_arg);
+	memset(input, 0, sizeof(*input));
+	input->device_domain = hv_domain->device_domain;
+	input->create_device_domain_flags.forward_progress_required = 1;
+	input->create_device_domain_flags.inherit_owning_vtl = 0;
+	status = hv_do_hypercall(HVCALL_CREATE_DEVICE_DOMAIN, input, NULL);
+
+	local_irq_restore(flags);
+
+	if (!hv_result_success(status)) {
+		pr_err("%s: hypercall failed, status %lld\n", __func__, status);
+		ida_free(&hv_iommu_device->domain_ids, hv_domain->device_domain.domain_id.id);
+	}
+
+	return hv_result_to_errno(status);
+}
+
+static void hv_delete_device_domain(struct hv_iommu_domain *hv_domain)
+{
+	u64 status;
+	unsigned long flags;
+	struct hv_input_delete_device_domain *input;
+
+	local_irq_save(flags);
+
+	input = *this_cpu_ptr(hyperv_pcpu_input_arg);
+	memset(input, 0, sizeof(*input));
+	input->device_domain = hv_domain->device_domain;
+	status = hv_do_hypercall(HVCALL_DELETE_DEVICE_DOMAIN, input, NULL);
+
+	local_irq_restore(flags);
+
+	if (!hv_result_success(status))
+		pr_err("%s: hypercall failed, status %lld\n", __func__, status);
+
+	ida_free(&hv_domain->hv_iommu->domain_ids, hv_domain->device_domain.domain_id.id);
+}
+
+static bool hv_iommu_capable(struct device *dev, enum iommu_cap cap)
+{
+	switch (cap) {
+	case IOMMU_CAP_CACHE_COHERENCY:
+		return true;
+	case IOMMU_CAP_DEFERRED_FLUSH:
+		return true;
+	default:
+		return false;
+	}
+}
+
+static int hv_iommu_attach_dev(struct iommu_domain *domain, struct device *dev)
+{
+	u64 status;
+	unsigned long flags;
+	struct pci_dev *pdev;
+	struct hv_input_attach_device_domain *input;
+	struct hv_iommu_endpoint *vdev = dev_iommu_priv_get(dev);
+	struct hv_iommu_domain *hv_domain = to_hv_iommu_domain(domain);
+
+	/* Only allow PCI devices for now */
+	if (!dev_is_pci(dev))
+		return -EINVAL;
+
+	if (vdev->hv_domain == hv_domain)
+		return 0;
+
+	if (vdev->hv_domain)
+		hv_iommu_detach_dev(&vdev->hv_domain->domain, dev);
+
+	pdev = to_pci_dev(dev);
+	dev_dbg(dev, "Attaching (%strusted) to %d\n", pdev->untrusted ? "un" : "",
+		hv_domain->device_domain.domain_id.id);
+
+	local_irq_save(flags);
+
+	input = *this_cpu_ptr(hyperv_pcpu_input_arg);
+	memset(input, 0, sizeof(*input));
+	input->device_domain = hv_domain->device_domain;
+	input->device_id.as_uint64 = hv_build_logical_dev_id(pdev);
+	status = hv_do_hypercall(HVCALL_ATTACH_DEVICE_DOMAIN, input, NULL);
+
+	local_irq_restore(flags);
+
+	if (!hv_result_success(status)) {
+		pr_err("%s: hypercall failed, status %lld\n", __func__, status);
+	} else {
+		vdev->hv_domain = hv_domain;
+		spin_lock_irqsave(&hv_domain->lock, flags);
+		list_add(&vdev->list, &hv_domain->dev_list);
+		spin_unlock_irqrestore(&hv_domain->lock, flags);
+	}
+
+	return hv_result_to_errno(status);
+}
+
+static void hv_iommu_detach_dev(struct iommu_domain *domain, struct device *dev)
+{
+	u64 status;
+	unsigned long flags;
+	struct hv_input_detach_device_domain *input;
+	struct pci_dev *pdev;
+	struct hv_iommu_domain *hv_domain = to_hv_iommu_domain(domain);
+	struct hv_iommu_endpoint *vdev = dev_iommu_priv_get(dev);
+
+	/* See the attach function, only PCI devices for now */
+	if (!dev_is_pci(dev) || vdev->hv_domain != hv_domain)
+		return;
+
+	pdev = to_pci_dev(dev);
+
+	dev_dbg(dev, "Detaching from %d\n", hv_domain->device_domain.domain_id.id);
+
+	local_irq_save(flags);
+
+	input = *this_cpu_ptr(hyperv_pcpu_input_arg);
+	memset(input, 0, sizeof(*input));
+	input->partition_id = HV_PARTITION_ID_SELF;
+	input->device_id.as_uint64 = hv_build_logical_dev_id(pdev);
+	status = hv_do_hypercall(HVCALL_DETACH_DEVICE_DOMAIN, input, NULL);
+
+	local_irq_restore(flags);
+
+	if (!hv_result_success(status))
+		pr_err("%s: hypercall failed, status %lld\n", __func__, status);
+
+	spin_lock_irqsave(&hv_domain->lock, flags);
+	hv_flush_device_domain(hv_domain);
+	list_del(&vdev->list);
+	spin_unlock_irqrestore(&hv_domain->lock, flags);
+
+	vdev->hv_domain = NULL;
+}
+
+static int hv_iommu_get_logical_device_property(struct device *dev,
+					enum hv_logical_device_property_code code,
+					struct hv_output_get_logical_device_property *property)
+{
+	u64 status;
+	unsigned long flags;
+	struct hv_input_get_logical_device_property *input;
+	struct hv_output_get_logical_device_property *output;
+
+	local_irq_save(flags);
+
+	input = *this_cpu_ptr(hyperv_pcpu_input_arg);
+	output = *this_cpu_ptr(hyperv_pcpu_output_arg);
+	memset(input, 0, sizeof(*input));
+	memset(output, 0, sizeof(*output));
+	input->partition_id = HV_PARTITION_ID_SELF;
+	input->logical_device_id = hv_build_logical_dev_id(to_pci_dev(dev));
+	input->code = code;
+	status = hv_do_hypercall(HVCALL_GET_LOGICAL_DEVICE_PROPERTY, input, output);
+	*property = *output;
+
+	local_irq_restore(flags);
+
+	if (!hv_result_success(status))
+		pr_err("%s: hypercall failed, status %lld\n", __func__, status);
+
+	return hv_result_to_errno(status);
+}
+
+static struct iommu_device *hv_iommu_probe_device(struct device *dev)
+{
+	struct pci_dev *pdev;
+	struct hv_iommu_endpoint *vdev;
+	struct hv_output_get_logical_device_property device_iommu_property = {0};
+
+	if (!dev_is_pci(dev))
+		return ERR_PTR(-ENODEV);
+
+	if (hv_iommu_get_logical_device_property(dev,
+						 HV_LOGICAL_DEVICE_PROPERTY_PVIOMMU,
+						 &device_iommu_property) ||
+	    !(device_iommu_property.device_iommu & HV_DEVICE_IOMMU_ENABLED))
+		return ERR_PTR(-ENODEV);
+
+	pdev = to_pci_dev(dev);
+	vdev = kzalloc(sizeof(*vdev), GFP_KERNEL);
+	if (!vdev)
+		return ERR_PTR(-ENOMEM);
+
+	vdev->dev = dev;
+	vdev->hv_iommu = hv_iommu_device;
+	dev_iommu_priv_set(dev, vdev);
+
+	if (hv_iommu_ats_supported(hv_iommu_device->cap) &&
+	    pci_ats_supported(pdev))
+		pci_enable_ats(pdev, __ffs(hv_iommu_device->pgsize_bitmap));
+
+	return &vdev->hv_iommu->iommu;
+}
+
+static void hv_iommu_release_device(struct device *dev)
+{
+	struct hv_iommu_endpoint *vdev = dev_iommu_priv_get(dev);
+
+	if (vdev->hv_domain)
+		hv_iommu_detach_dev(&vdev->hv_domain->domain, dev);
+
+	dev_iommu_priv_set(dev, NULL);
+	set_dma_ops(dev, NULL);
+
+	kfree(vdev);
+}
+
+static struct iommu_group *hv_iommu_device_group(struct device *dev)
+{
+	if (dev_is_pci(dev))
+		return pci_device_group(dev);
+	else
+		return generic_device_group(dev);
+}
+
+static int hv_configure_device_domain(struct hv_iommu_domain *hv_domain, u32 domain_type)
+{
+	u64 status;
+	unsigned long flags;
+	struct pt_iommu_x86_64_hw_info pt_info;
+	struct hv_input_configure_device_domain *input;
+
+	local_irq_save(flags);
+
+	input = *this_cpu_ptr(hyperv_pcpu_input_arg);
+	memset(input, 0, sizeof(*input));
+	input->device_domain = hv_domain->device_domain;
+	input->settings.flags.blocked = (domain_type == IOMMU_DOMAIN_BLOCKED);
+	input->settings.flags.translation_enabled = (domain_type != IOMMU_DOMAIN_IDENTITY);
+
+	if (domain_type & __IOMMU_DOMAIN_PAGING) {
+		pt_iommu_x86_64_hw_info(&hv_domain->pt_iommu_x86_64, &pt_info);
+		input->settings.page_table_root = pt_info.gcr3_pt;
+		input->settings.flags.first_stage_paging_mode =
+			pt_info.levels == 5;
+	}
+	status = hv_do_hypercall(HVCALL_CONFIGURE_DEVICE_DOMAIN, input, NULL);
+
+	local_irq_restore(flags);
+
+	if (!hv_result_success(status))
+		pr_err("%s: hypercall failed, status %lld\n", __func__, status);
+
+	return hv_result_to_errno(status);
+}
+
+static int __init hv_initialize_static_domains(void)
+{
+	int ret;
+	struct hv_iommu_domain *hv_domain;
+
+	/* Default stage-1 identity domain */
+	hv_domain = &hv_identity_domain;
+	memset(hv_domain, 0, sizeof(*hv_domain));
+
+	ret = hv_create_device_domain(hv_domain, HV_DEVICE_DOMAIN_TYPE_S1);
+	if (ret)
+		return ret;
+
+	ret = hv_configure_device_domain(hv_domain, IOMMU_DOMAIN_IDENTITY);
+	if (ret)
+		goto delete_identity_domain;
+
+	hv_domain->domain.type = IOMMU_DOMAIN_IDENTITY;
+	hv_domain->domain.ops = &hv_iommu_identity_domain_ops;
+	hv_domain->domain.owner = &hv_iommu_ops;
+	hv_domain->domain.geometry = hv_iommu_device->geometry;
+	hv_domain->domain.pgsize_bitmap = hv_iommu_device->pgsize_bitmap;
+	INIT_LIST_HEAD(&hv_domain->dev_list);
+
+	/* Default stage-1 blocked domain */
+	hv_domain = &hv_blocking_domain;
+	memset(hv_domain, 0, sizeof(*hv_domain));
+
+	ret = hv_create_device_domain(hv_domain, HV_DEVICE_DOMAIN_TYPE_S1);
+	if (ret)
+		goto delete_identity_domain;
+
+	ret = hv_configure_device_domain(hv_domain, IOMMU_DOMAIN_BLOCKED);
+	if (ret)
+		goto delete_blocked_domain;
+
+	hv_domain->domain.type = IOMMU_DOMAIN_BLOCKED;
+	hv_domain->domain.ops = &hv_iommu_blocking_domain_ops;
+	hv_domain->domain.owner = &hv_iommu_ops;
+	hv_domain->domain.geometry = hv_iommu_device->geometry;
+	hv_domain->domain.pgsize_bitmap = hv_iommu_device->pgsize_bitmap;
+	INIT_LIST_HEAD(&hv_domain->dev_list);
+
+	return 0;
+
+delete_blocked_domain:
+	hv_delete_device_domain(&hv_blocking_domain);
+delete_identity_domain:
+	hv_delete_device_domain(&hv_identity_domain);
+	return ret;
+}
+
+#define INTERRUPT_RANGE_START	(0xfee00000)
+#define INTERRUPT_RANGE_END	(0xfeefffff)
+static void hv_iommu_get_resv_regions(struct device *dev,
+		struct list_head *head)
+{
+	struct iommu_resv_region *region;
+
+	region = iommu_alloc_resv_region(INTERRUPT_RANGE_START,
+				      INTERRUPT_RANGE_END - INTERRUPT_RANGE_START + 1,
+				      0, IOMMU_RESV_MSI, GFP_KERNEL);
+	if (!region)
+		return;
+
+	list_add_tail(&region->list, head);
+}
+
+static void hv_flush_device_domain(struct hv_iommu_domain *hv_domain)
+{
+	u64 status;
+	unsigned long flags;
+	struct hv_input_flush_device_domain *input;
+
+	local_irq_save(flags);
+
+	input = *this_cpu_ptr(hyperv_pcpu_input_arg);
+	memset(input, 0, sizeof(*input));
+	input->device_domain.partition_id = hv_domain->device_domain.partition_id;
+	input->device_domain.owner_vtl = hv_domain->device_domain.owner_vtl;
+	input->device_domain.domain_id.type = hv_domain->device_domain.domain_id.type;
+	input->device_domain.domain_id.id = hv_domain->device_domain.domain_id.id;
+	status = hv_do_hypercall(HVCALL_FLUSH_DEVICE_DOMAIN, input, NULL);
+
+	local_irq_restore(flags);
+
+	if (!hv_result_success(status))
+		pr_err("%s: hypercall failed, status %lld\n", __func__, status);
+}
+
+static void hv_iommu_flush_iotlb_all(struct iommu_domain *domain)
+{
+	hv_flush_device_domain(to_hv_iommu_domain(domain));
+}
+
+static void hv_iommu_iotlb_sync(struct iommu_domain *domain,
+				struct iommu_iotlb_gather *iotlb_gather)
+{
+	hv_flush_device_domain(to_hv_iommu_domain(domain));
+
+	iommu_put_pages_list(&iotlb_gather->freelist);
+}
+
+static void hv_iommu_paging_domain_free(struct iommu_domain *domain)
+{
+	struct hv_iommu_domain *hv_domain = to_hv_iommu_domain(domain);
+
+	/* Free all remaining mappings */
+	pt_iommu_deinit(&hv_domain->pt_iommu);
+
+	hv_delete_device_domain(hv_domain);
+
+	kfree(hv_domain);
+}
+
+static const struct iommu_domain_ops hv_iommu_identity_domain_ops = {
+	.attach_dev	= hv_iommu_attach_dev,
+};
+
+static const struct iommu_domain_ops hv_iommu_blocking_domain_ops = {
+	.attach_dev	= hv_iommu_attach_dev,
+};
+
+static const struct iommu_domain_ops hv_iommu_paging_domain_ops = {
+	.attach_dev	= hv_iommu_attach_dev,
+	IOMMU_PT_DOMAIN_OPS(x86_64),
+	.flush_iotlb_all = hv_iommu_flush_iotlb_all,
+	.iotlb_sync = hv_iommu_iotlb_sync,
+	.free = hv_iommu_paging_domain_free,
+};
+
+static struct iommu_domain *hv_iommu_domain_alloc_paging(struct device *dev)
+{
+	int ret;
+	struct hv_iommu_domain *hv_domain;
+	struct pt_iommu_x86_64_cfg cfg = {};
+
+	hv_domain = kzalloc(sizeof(*hv_domain), GFP_KERNEL);
+	if (!hv_domain)
+		return ERR_PTR(-ENOMEM);
+
+	ret = hv_create_device_domain(hv_domain, HV_DEVICE_DOMAIN_TYPE_S1);
+	if (ret) {
+		kfree(hv_domain);
+		return ERR_PTR(ret);
+	}
+
+	hv_domain->domain.pgsize_bitmap = hv_iommu_device->pgsize_bitmap;
+	hv_domain->domain.geometry = hv_iommu_device->geometry;
+	hv_domain->pt_iommu.nid = dev_to_node(dev);
+	INIT_LIST_HEAD(&hv_domain->dev_list);
+	spin_lock_init(&hv_domain->lock);
+
+	cfg.common.hw_max_vasz_lg2 = hv_iommu_device->max_iova_width;
+	cfg.common.hw_max_oasz_lg2 = 52;
+
+	ret = pt_iommu_x86_64_init(&hv_domain->pt_iommu_x86_64, &cfg, GFP_KERNEL);
+	if (ret) {
+		hv_delete_device_domain(hv_domain);
+		return ERR_PTR(ret);
+	}
+
+	hv_domain->domain.ops = &hv_iommu_paging_domain_ops;
+
+	ret = hv_configure_device_domain(hv_domain, __IOMMU_DOMAIN_PAGING);
+	if (ret) {
+		pt_iommu_deinit(&hv_domain->pt_iommu);
+		hv_delete_device_domain(hv_domain);
+		return ERR_PTR(ret);
+	}
+
+	return &hv_domain->domain;
+}
+
+static struct iommu_ops hv_iommu_ops = {
+	.capable		  = hv_iommu_capable,
+	.domain_alloc_paging	  = hv_iommu_domain_alloc_paging,
+	.probe_device		  = hv_iommu_probe_device,
+	.release_device		  = hv_iommu_release_device,
+	.device_group		  = hv_iommu_device_group,
+	.get_resv_regions	  = hv_iommu_get_resv_regions,
+	.owner			  = THIS_MODULE,
+	.identity_domain	  = &hv_identity_domain.domain,
+	.blocked_domain		  = &hv_blocking_domain.domain,
+	.release_domain		  = &hv_blocking_domain.domain,
+};
+
+static void hv_iommu_shutdown(void)
+{
+	iommu_device_sysfs_remove(&hv_iommu_device->iommu);
+
+	kfree(hv_iommu_device);
+}
+
+static struct syscore_ops hv_iommu_syscore_ops = {
+	.shutdown = hv_iommu_shutdown,
+};
+
+static int hv_iommu_detect(struct hv_output_get_iommu_capabilities *hv_iommu_cap)
+{
+	u64 status;
+	unsigned long flags;
+	struct hv_input_get_iommu_capabilities *input;
+	struct hv_output_get_iommu_capabilities *output;
+
+	local_irq_save(flags);
+
+	input = *this_cpu_ptr(hyperv_pcpu_input_arg);
+	output = *this_cpu_ptr(hyperv_pcpu_output_arg);
+	memset(input, 0, sizeof(*input));
+	memset(output, 0, sizeof(*output));
+	input->partition_id = HV_PARTITION_ID_SELF;
+	status = hv_do_hypercall(HVCALL_GET_IOMMU_CAPABILITIES, input, output);
+	*hv_iommu_cap = *output;
+
+	local_irq_restore(flags);
+
+	if (!hv_result_success(status))
+		pr_err("%s: hypercall failed, status %lld\n", __func__, status);
+
+	return hv_result_to_errno(status);
+}
+
+static void __init hv_init_iommu_device(struct hv_iommu_dev *hv_iommu,
+			struct hv_output_get_iommu_capabilities *hv_iommu_cap)
+{
+	ida_init(&hv_iommu->domain_ids);
+
+	hv_iommu->cap = hv_iommu_cap->iommu_cap;
+	hv_iommu->max_iova_width = hv_iommu_cap->max_iova_width;
+	if (!hv_iommu_5lvl_supported(hv_iommu->cap) &&
+	    hv_iommu->max_iova_width > 48) {
+		pr_err("5-level paging not supported, limiting iova width to 48.\n");
+		hv_iommu->max_iova_width = 48;
+	}
+
+	hv_iommu->geometry = (struct iommu_domain_geometry) {
+		.aperture_start = 0,
+		.aperture_end = (((u64)1) << hv_iommu_cap->max_iova_width) - 1,
+		.force_aperture = true,
+	};
+
+	hv_iommu->first_domain = HV_DEVICE_DOMAIN_ID_DEFAULT + 1;
+	hv_iommu->last_domain = HV_DEVICE_DOMAIN_ID_NULL - 1;
+	hv_iommu->pgsize_bitmap = hv_iommu_cap->pgsize_bitmap;
+	hv_iommu_device = hv_iommu;
+}
+
+static int __init hv_iommu_init(void)
+{
+	int ret = 0;
+	struct hv_iommu_dev *hv_iommu = NULL;
+	struct hv_output_get_iommu_capabilities hv_iommu_cap = {0};
+
+	if (no_iommu || iommu_detected)
+		return -ENODEV;
+
+	if (!hv_is_hyperv_initialized())
+		return -ENODEV;
+
+	if (hv_iommu_detect(&hv_iommu_cap) ||
+	    !hv_iommu_present(hv_iommu_cap.iommu_cap) ||
+	    !hv_iommu_s1_domain_supported(hv_iommu_cap.iommu_cap))
+		return -ENODEV;
+
+	iommu_detected = 1;
+	pci_request_acs();
+
+	hv_iommu = kzalloc(sizeof(*hv_iommu), GFP_KERNEL);
+	if (!hv_iommu)
+		return -ENOMEM;
+
+	hv_init_iommu_device(hv_iommu, &hv_iommu_cap);
+
+	ret = hv_initialize_static_domains();
+	if (ret) {
+		pr_err("hv_initialize_static_domains failed: %d\n", ret);
+		goto err_sysfs_remove;
+	}
+
+	ret = iommu_device_sysfs_add(&hv_iommu->iommu, NULL, NULL, "%s", "hv-iommu");
+	if (ret) {
+		pr_err("iommu_device_sysfs_add failed: %d\n", ret);
+		goto err_free;
+	}
+
+
+	ret = iommu_device_register(&hv_iommu->iommu, &hv_iommu_ops, NULL);
+	if (ret) {
+		pr_err("iommu_device_register failed: %d\n", ret);
+		goto err_sysfs_remove;
+	}
+
+	register_syscore_ops(&hv_iommu_syscore_ops);
+
+	pr_info("Microsoft Hypervisor IOMMU initialized\n");
+	return 0;
+
+err_sysfs_remove:
+	iommu_device_sysfs_remove(&hv_iommu->iommu);
+err_free:
+	kfree(hv_iommu);
+	return ret;
+}
+
+device_initcall(hv_iommu_init);
diff --git a/drivers/iommu/hyperv/iommu.h b/drivers/iommu/hyperv/iommu.h
new file mode 100644
index 000000000000..c8657e791a6e
--- /dev/null
+++ b/drivers/iommu/hyperv/iommu.h
@@ -0,0 +1,53 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * Hyper-V IOMMU driver.
+ *
+ * Copyright (C) 2024-2025, Microsoft, Inc.
+ *
+ */
+
+#ifndef _HYPERV_IOMMU_H
+#define _HYPERV_IOMMU_H
+
+struct hv_iommu_dev {
+	struct iommu_device iommu;
+	struct ida domain_ids;
+
+	/* Device configuration */
+	u8  max_iova_width;
+	u8  max_pasid_width;
+	u64 cap;
+	u64 pgsize_bitmap;
+
+	struct iommu_domain_geometry geometry;
+	u64 first_domain;
+	u64 last_domain;
+};
+
+struct hv_iommu_domain {
+	union {
+		struct iommu_domain    domain;
+		struct pt_iommu        pt_iommu;
+		struct pt_iommu_x86_64 pt_iommu_x86_64;
+	};
+	struct hv_iommu_dev *hv_iommu;
+	struct hv_input_device_domain device_domain;
+	u64		pgsize_bitmap;
+
+	spinlock_t lock; /* protects dev_list and TLB flushes */
+	/* List of devices in this DMA domain */
+	struct list_head dev_list;
+};
+
+struct hv_iommu_endpoint {
+	struct device *dev;
+	struct hv_iommu_dev *hv_iommu;
+	struct hv_iommu_domain *hv_domain;
+	struct list_head list; /* For domain->dev_list */
+};
+
+#define to_hv_iommu_domain(d) \
+	container_of(d, struct hv_iommu_domain, domain)
+
+#endif /* _HYPERV_IOMMU_H */
-- 
2.49.0


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [RFC v1 1/5] PCI: hv: Create and export hv_build_logical_dev_id()
  2025-12-09  5:11 ` [RFC v1 1/5] PCI: hv: Create and export hv_build_logical_dev_id() Yu Zhang
@ 2025-12-09  5:21   ` Randy Dunlap
  2025-12-10 17:03     ` Easwar Hariharan
  2025-12-10 21:39   ` Bjorn Helgaas
  1 sibling, 1 reply; 12+ messages in thread
From: Randy Dunlap @ 2025-12-09  5:21 UTC (permalink / raw)
  To: Yu Zhang, linux-kernel, linux-hyperv, iommu, linux-pci
  Cc: kys, haiyangz, wei.liu, decui, lpieralisi, kwilczynski, mani,
	robh, bhelgaas, arnd, joro, will, robin.murphy, easwar.hariharan,
	jacob.pan, nunodasneves, mrathor, mhklinux, peterz, linux-arch

Hi--

On 12/8/25 9:11 PM, Yu Zhang wrote:
> From: Easwar Hariharan <easwar.hariharan@linux.microsoft.com>
> 
> Hyper-V uses a logical device ID to identify a PCI endpoint device for
> child partitions. This ID will also be required for future hypercalls
> used by the Hyper-V IOMMU driver.
> 
> Refactor the logic for building this logical device ID into a standalone
> helper function and export the interface for wider use.
> 
> Signed-off-by: Easwar Hariharan <easwar.hariharan@linux.microsoft.com>
> Signed-off-by: Yu Zhang <zhangyu1@linux.microsoft.com>
> ---
>  drivers/pci/controller/pci-hyperv.c | 28 ++++++++++++++++++++--------
>  include/asm-generic/mshyperv.h      |  2 ++
>  2 files changed, 22 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
> index 146b43981b27..4b82e06b5d93 100644
> --- a/drivers/pci/controller/pci-hyperv.c
> +++ b/drivers/pci/controller/pci-hyperv.c
> @@ -598,15 +598,31 @@ static unsigned int hv_msi_get_int_vector(struct irq_data *data)
>  
>  #define hv_msi_prepare		pci_msi_prepare
>  
> +/**
> + * Build a "Device Logical ID" out of this PCI bus's instance GUID and the
> + * function number of the device.
> + */

Don't use kernel-doc notation "/**" unless you are using kernel-doc comments.
You could just convert it to a kernel-doc style comment...
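A kernel-doc version could look roughly like this (the description and
Return: wording below are only a suggestion, not taken from the patch):

```
/**
 * hv_build_logical_dev_id() - build a Hyper-V "Device Logical ID"
 * @pdev: PCI endpoint device to identify
 *
 * The ID is composed from the instance GUID of the PCI bus that @pdev
 * sits on and the function number of the device.
 *
 * Return: the logical device ID as a u64.
 */
```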

> +u64 hv_build_logical_dev_id(struct pci_dev *pdev)
> +{

thanks.
-- 
~Randy


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [RFC v1 1/5] PCI: hv: Create and export hv_build_logical_dev_id()
  2025-12-09  5:21   ` Randy Dunlap
@ 2025-12-10 17:03     ` Easwar Hariharan
  0 siblings, 0 replies; 12+ messages in thread
From: Easwar Hariharan @ 2025-12-10 17:03 UTC (permalink / raw)
  To: Randy Dunlap
  Cc: Yu Zhang, linux-kernel, linux-hyperv, iommu, linux-pci,
	easwar.hariharan, kys, haiyangz, wei.liu, decui, lpieralisi,
	kwilczynski, mani, robh, bhelgaas, arnd, joro, will, robin.murphy,
	jacob.pan, nunodasneves, mrathor, mhklinux, peterz, linux-arch

On 12/8/2025 9:21 PM, Randy Dunlap wrote:
> Hi--
> 
> On 12/8/25 9:11 PM, Yu Zhang wrote:
>> From: Easwar Hariharan <easwar.hariharan@linux.microsoft.com>
>>
>> Hyper-V uses a logical device ID to identify a PCI endpoint device for
>> child partitions. This ID will also be required for future hypercalls
>> used by the Hyper-V IOMMU driver.
>>
>> Refactor the logic for building this logical device ID into a standalone
>> helper function and export the interface for wider use.
>>
>> Signed-off-by: Easwar Hariharan <easwar.hariharan@linux.microsoft.com>
>> Signed-off-by: Yu Zhang <zhangyu1@linux.microsoft.com>
>> ---
>>  drivers/pci/controller/pci-hyperv.c | 28 ++++++++++++++++++++--------
>>  include/asm-generic/mshyperv.h      |  2 ++
>>  2 files changed, 22 insertions(+), 8 deletions(-)
>>
>> diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
>> index 146b43981b27..4b82e06b5d93 100644
>> --- a/drivers/pci/controller/pci-hyperv.c
>> +++ b/drivers/pci/controller/pci-hyperv.c
>> @@ -598,15 +598,31 @@ static unsigned int hv_msi_get_int_vector(struct irq_data *data)
>>  
>>  #define hv_msi_prepare		pci_msi_prepare
>>  
>> +/**
>> + * Build a "Device Logical ID" out of this PCI bus's instance GUID and the
>> + * function number of the device.
>> + */
> 
> Don't use kernel-doc notation "/**" unless you are using kernel-doc comments.
> You could just convert it to a kernel-doc style comment...

Thank you for the review; I will fix this in a future revision.

Thanks,
Easwar (he/him)

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [RFC v1 5/5] iommu/hyperv: Add para-virtualized IOMMU support for Hyper-V guest
  2025-12-09  5:11 ` [RFC v1 5/5] iommu/hyperv: Add para-virtualized IOMMU support for Hyper-V guest Yu Zhang
@ 2025-12-10 17:15   ` Easwar Hariharan
  2025-12-11  8:41     ` Yu Zhang
  0 siblings, 1 reply; 12+ messages in thread
From: Easwar Hariharan @ 2025-12-10 17:15 UTC (permalink / raw)
  To: Yu Zhang
  Cc: linux-kernel, linux-hyperv, iommu, linux-pci, easwar.hariharan,
	kys, haiyangz, wei.liu, decui, lpieralisi, kwilczynski, mani,
	robh, bhelgaas, arnd, joro, will, robin.murphy, jacob.pan,
	nunodasneves, mrathor, mhklinux, peterz, linux-arch

On 12/8/2025 9:11 PM, Yu Zhang wrote:
> Add a para-virtualized IOMMU driver for Linux guests running on Hyper-V.
> This driver implements stage-1 IO translation within the guest OS.
> It integrates with the Linux IOMMU core, utilizing Hyper-V hypercalls
> for:
>  - Capability discovery
>  - Domain allocation, configuration, and deallocation
>  - Device attachment and detachment
>  - IOTLB invalidation
> 
> The driver constructs x86-compatible stage-1 IO page tables in the
> guest memory using consolidated IO page table helpers. This allows
> the guest to manage stage-1 translations independently of vendor-
> specific drivers (like Intel VT-d or AMD IOMMU).
> 
> Hyper-V consumes this stage-1 IO page table when a device domain is
> created and configured, and nests it with the host's stage-2 IO page
> tables, thereby eliminating VM exits for guest IOMMU mapping
> operations.
> 
> For guest IOMMU unmapping operations, VM exits to perform the IOTLB
> flush (and possibly the device TLB flush) are still unavoidable. For
> now, HVCALL_FLUSH_DEVICE_DOMAIN is used to implement a domain-selective
> IOTLB flush. New hypercalls for finer-grained invalidation will be
> provided in future patches.
> 
> Co-developed-by: Wei Liu <wei.liu@kernel.org>
> Signed-off-by: Wei Liu <wei.liu@kernel.org>
> Co-developed-by: Jacob Pan <jacob.pan@linux.microsoft.com>
> Signed-off-by: Jacob Pan <jacob.pan@linux.microsoft.com>
> Co-developed-by: Easwar Hariharan <easwar.hariharan@linux.microsoft.com>
> Signed-off-by: Easwar Hariharan <easwar.hariharan@linux.microsoft.com>
> Signed-off-by: Yu Zhang <zhangyu1@linux.microsoft.com>
> ---
>  drivers/iommu/hyperv/Kconfig  |  14 +
>  drivers/iommu/hyperv/Makefile |   1 +
>  drivers/iommu/hyperv/iommu.c  | 608 ++++++++++++++++++++++++++++++++++
>  drivers/iommu/hyperv/iommu.h  |  53 +++
>  4 files changed, 676 insertions(+)
>  create mode 100644 drivers/iommu/hyperv/iommu.c
>  create mode 100644 drivers/iommu/hyperv/iommu.h
> 

<snip>

> +
> +static int __init hv_iommu_init(void)
> +{
> +	int ret = 0;
> +	struct hv_iommu_dev *hv_iommu = NULL;
> +	struct hv_output_get_iommu_capabilities hv_iommu_cap = {0};
> +
> +	if (no_iommu || iommu_detected)
> +		return -ENODEV;
> +
> +	if (!hv_is_hyperv_initialized())
> +		return -ENODEV;
> +
> +	if (hv_iommu_detect(&hv_iommu_cap) ||
> +	    !hv_iommu_present(hv_iommu_cap.iommu_cap) ||
> +	    !hv_iommu_s1_domain_supported(hv_iommu_cap.iommu_cap))
> +		return -ENODEV;
> +
> +	iommu_detected = 1;
> +	pci_request_acs();
> +
> +	hv_iommu = kzalloc(sizeof(*hv_iommu), GFP_KERNEL);
> +	if (!hv_iommu)
> +		return -ENOMEM;
> +
> +	hv_init_iommu_device(hv_iommu, &hv_iommu_cap);
> +
> +	ret = hv_initialize_static_domains();
> +	if (ret) {
> +		pr_err("hv_initialize_static_domains failed: %d\n", ret);
> +		goto err_sysfs_remove;

This should be goto err_free since we haven't done the sysfs_add yet

> +	}
> +
> +	ret = iommu_device_sysfs_add(&hv_iommu->iommu, NULL, NULL, "%s", "hv-iommu");
> +	if (ret) {
> +		pr_err("iommu_device_sysfs_add failed: %d\n", ret);
> +		goto err_free;

And this should probably be a goto delete_static_domains that cleans up the allocated static
domains...

> +	}
> +
> +
> +	ret = iommu_device_register(&hv_iommu->iommu, &hv_iommu_ops, NULL);
> +	if (ret) {
> +		pr_err("iommu_device_register failed: %d\n", ret);
> +		goto err_sysfs_remove;
> +	}
> +
> +	register_syscore_ops(&hv_iommu_syscore_ops);
> +
> +	pr_info("Microsoft Hypervisor IOMMU initialized\n");
> +	return 0;
> +
> +err_sysfs_remove:
> +	iommu_device_sysfs_remove(&hv_iommu->iommu);
> +err_free:
> +	kfree(hv_iommu);
> +	return ret;
> +}
> +
> +device_initcall(hv_iommu_init);

<snip>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [RFC v1 1/5] PCI: hv: Create and export hv_build_logical_dev_id()
  2025-12-09  5:11 ` [RFC v1 1/5] PCI: hv: Create and export hv_build_logical_dev_id() Yu Zhang
  2025-12-09  5:21   ` Randy Dunlap
@ 2025-12-10 21:39   ` Bjorn Helgaas
  2025-12-11  8:31     ` Yu Zhang
  1 sibling, 1 reply; 12+ messages in thread
From: Bjorn Helgaas @ 2025-12-10 21:39 UTC (permalink / raw)
  To: Yu Zhang
  Cc: linux-kernel, linux-hyperv, iommu, linux-pci, kys, haiyangz,
	wei.liu, decui, lpieralisi, kwilczynski, mani, robh, bhelgaas,
	arnd, joro, will, robin.murphy, easwar.hariharan, jacob.pan,
	nunodasneves, mrathor, mhklinux, peterz, linux-arch

On Tue, Dec 09, 2025 at 01:11:24PM +0800, Yu Zhang wrote:
> From: Easwar Hariharan <easwar.hariharan@linux.microsoft.com>
> 
> Hyper-V uses a logical device ID to identify a PCI endpoint device for
> child partitions. This ID will also be required for future hypercalls
> used by the Hyper-V IOMMU driver.
> 
> Refactor the logic for building this logical device ID into a standalone
> helper function and export the interface for wider use.
> 
> Signed-off-by: Easwar Hariharan <easwar.hariharan@linux.microsoft.com>
> Signed-off-by: Yu Zhang <zhangyu1@linux.microsoft.com>

Acked-by: Bjorn Helgaas <bhelgaas@google.com>

> ---
>  drivers/pci/controller/pci-hyperv.c | 28 ++++++++++++++++++++--------
>  include/asm-generic/mshyperv.h      |  2 ++
>  2 files changed, 22 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
> index 146b43981b27..4b82e06b5d93 100644
> --- a/drivers/pci/controller/pci-hyperv.c
> +++ b/drivers/pci/controller/pci-hyperv.c
> @@ -598,15 +598,31 @@ static unsigned int hv_msi_get_int_vector(struct irq_data *data)
>  
>  #define hv_msi_prepare		pci_msi_prepare
>  
> +/**
> + * Build a "Device Logical ID" out of this PCI bus's instance GUID and the
> + * function number of the device.
> + */
> +u64 hv_build_logical_dev_id(struct pci_dev *pdev)
> +{
> +	struct pci_bus *pbus = pdev->bus;
> +	struct hv_pcibus_device *hbus = container_of(pbus->sysdata,
> +						struct hv_pcibus_device, sysdata);
> +
> +	return (u64)((hbus->hdev->dev_instance.b[5] << 24) |
> +		     (hbus->hdev->dev_instance.b[4] << 16) |
> +		     (hbus->hdev->dev_instance.b[7] << 8)  |
> +		     (hbus->hdev->dev_instance.b[6] & 0xf8) |
> +		     PCI_FUNC(pdev->devfn));
> +}
> +EXPORT_SYMBOL_GPL(hv_build_logical_dev_id);
> +
>  /**
>   * hv_irq_retarget_interrupt() - "Unmask" the IRQ by setting its current
>   * affinity.
>   * @data:	Describes the IRQ
>   *
>   * Build new a destination for the MSI and make a hypercall to
> - * update the Interrupt Redirection Table. "Device Logical ID"
> - * is built out of this PCI bus's instance GUID and the function
> - * number of the device.
> + * update the Interrupt Redirection Table.
>   */
>  static void hv_irq_retarget_interrupt(struct irq_data *data)
>  {
> @@ -642,11 +658,7 @@ static void hv_irq_retarget_interrupt(struct irq_data *data)
>  	params->int_entry.source = HV_INTERRUPT_SOURCE_MSI;
>  	params->int_entry.msi_entry.address.as_uint32 = int_desc->address & 0xffffffff;
>  	params->int_entry.msi_entry.data.as_uint32 = int_desc->data;
> -	params->device_id = (hbus->hdev->dev_instance.b[5] << 24) |
> -			   (hbus->hdev->dev_instance.b[4] << 16) |
> -			   (hbus->hdev->dev_instance.b[7] << 8) |
> -			   (hbus->hdev->dev_instance.b[6] & 0xf8) |
> -			   PCI_FUNC(pdev->devfn);
> +	params->device_id = hv_build_logical_dev_id(pdev);
>  	params->int_target.vector = hv_msi_get_int_vector(data);
>  
>  	if (hbus->protocol_version >= PCI_PROTOCOL_VERSION_1_2) {
> diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
> index 64ba6bc807d9..1a205ed69435 100644
> --- a/include/asm-generic/mshyperv.h
> +++ b/include/asm-generic/mshyperv.h
> @@ -71,6 +71,8 @@ extern enum hv_partition_type hv_curr_partition_type;
>  extern void * __percpu *hyperv_pcpu_input_arg;
>  extern void * __percpu *hyperv_pcpu_output_arg;
>  
> +extern u64 hv_build_logical_dev_id(struct pci_dev *pdev);

Curious why you would include the "extern" in this declaration?  It's
not *wrong*, but it's not necessary, and other declarations in this
file omit it, e.g., the ones below:

>  u64 hv_do_hypercall(u64 control, void *inputaddr, void *outputaddr);
>  u64 hv_do_fast_hypercall8(u16 control, u64 input8);
>  u64 hv_do_fast_hypercall16(u16 control, u64 input1, u64 input2);
> -- 
> 2.49.0
> 

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [RFC v1 1/5] PCI: hv: Create and export hv_build_logical_dev_id()
  2025-12-10 21:39   ` Bjorn Helgaas
@ 2025-12-11  8:31     ` Yu Zhang
  0 siblings, 0 replies; 12+ messages in thread
From: Yu Zhang @ 2025-12-11  8:31 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: linux-kernel, linux-hyperv, iommu, linux-pci, kys, haiyangz,
	wei.liu, decui, lpieralisi, kwilczynski, mani, robh, bhelgaas,
	arnd, joro, will, robin.murphy, easwar.hariharan, jacob.pan,
	nunodasneves, mrathor, mhklinux, peterz, linux-arch

On Wed, Dec 10, 2025 at 03:39:45PM -0600, Bjorn Helgaas wrote:
> On Tue, Dec 09, 2025 at 01:11:24PM +0800, Yu Zhang wrote:
> > From: Easwar Hariharan <easwar.hariharan@linux.microsoft.com>
> > 
> > Hyper-V uses a logical device ID to identify a PCI endpoint device for
> > child partitions. This ID will also be required for future hypercalls
> > used by the Hyper-V IOMMU driver.
> > 
> > Refactor the logic for building this logical device ID into a standalone
> > helper function and export the interface for wider use.
> > 
> > Signed-off-by: Easwar Hariharan <easwar.hariharan@linux.microsoft.com>
> > Signed-off-by: Yu Zhang <zhangyu1@linux.microsoft.com>
> 
> Acked-by: Bjorn Helgaas <bhelgaas@google.com>
> 
> > ---
> >  drivers/pci/controller/pci-hyperv.c | 28 ++++++++++++++++++++--------
> >  include/asm-generic/mshyperv.h      |  2 ++
> >  2 files changed, 22 insertions(+), 8 deletions(-)
> > 
> > diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
> > index 146b43981b27..4b82e06b5d93 100644
> > --- a/drivers/pci/controller/pci-hyperv.c
> > +++ b/drivers/pci/controller/pci-hyperv.c
> > @@ -598,15 +598,31 @@ static unsigned int hv_msi_get_int_vector(struct irq_data *data)
> >  
> >  #define hv_msi_prepare		pci_msi_prepare
> >  
> > +/**
> > + * Build a "Device Logical ID" out of this PCI bus's instance GUID and the
> > + * function number of the device.
> > + */
> > +u64 hv_build_logical_dev_id(struct pci_dev *pdev)
> > +{
> > +	struct pci_bus *pbus = pdev->bus;
> > +	struct hv_pcibus_device *hbus = container_of(pbus->sysdata,
> > +						struct hv_pcibus_device, sysdata);
> > +
> > +	return (u64)((hbus->hdev->dev_instance.b[5] << 24) |
> > +		     (hbus->hdev->dev_instance.b[4] << 16) |
> > +		     (hbus->hdev->dev_instance.b[7] << 8)  |
> > +		     (hbus->hdev->dev_instance.b[6] & 0xf8) |
> > +		     PCI_FUNC(pdev->devfn));
> > +}
> > +EXPORT_SYMBOL_GPL(hv_build_logical_dev_id);
> > +
> >  /**
> >   * hv_irq_retarget_interrupt() - "Unmask" the IRQ by setting its current
> >   * affinity.
> >   * @data:	Describes the IRQ
> >   *
> >   * Build new a destination for the MSI and make a hypercall to
> > - * update the Interrupt Redirection Table. "Device Logical ID"
> > - * is built out of this PCI bus's instance GUID and the function
> > - * number of the device.
> > + * update the Interrupt Redirection Table.
> >   */
> >  static void hv_irq_retarget_interrupt(struct irq_data *data)
> >  {
> > @@ -642,11 +658,7 @@ static void hv_irq_retarget_interrupt(struct irq_data *data)
> >  	params->int_entry.source = HV_INTERRUPT_SOURCE_MSI;
> >  	params->int_entry.msi_entry.address.as_uint32 = int_desc->address & 0xffffffff;
> >  	params->int_entry.msi_entry.data.as_uint32 = int_desc->data;
> > -	params->device_id = (hbus->hdev->dev_instance.b[5] << 24) |
> > -			   (hbus->hdev->dev_instance.b[4] << 16) |
> > -			   (hbus->hdev->dev_instance.b[7] << 8) |
> > -			   (hbus->hdev->dev_instance.b[6] & 0xf8) |
> > -			   PCI_FUNC(pdev->devfn);
> > +	params->device_id = hv_build_logical_dev_id(pdev);
> >  	params->int_target.vector = hv_msi_get_int_vector(data);
> >  
> >  	if (hbus->protocol_version >= PCI_PROTOCOL_VERSION_1_2) {
> > diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
> > index 64ba6bc807d9..1a205ed69435 100644
> > --- a/include/asm-generic/mshyperv.h
> > +++ b/include/asm-generic/mshyperv.h
> > @@ -71,6 +71,8 @@ extern enum hv_partition_type hv_curr_partition_type;
> >  extern void * __percpu *hyperv_pcpu_input_arg;
> >  extern void * __percpu *hyperv_pcpu_output_arg;
> >  
> > +extern u64 hv_build_logical_dev_id(struct pci_dev *pdev);
> 
> Curious why you would include the "extern" in this declaration?  It's
> not *wrong*, but it's not necessary, and other declarations in this
> file omit it, e.g., the ones below:
> 

Thank you, Bjorn.
Actually I don't think it is necessary. Will remove it in v2. :)

B.R.
Yu


* Re: [RFC v1 5/5] iommu/hyperv: Add para-virtualized IOMMU support for Hyper-V guest
  2025-12-10 17:15   ` Easwar Hariharan
@ 2025-12-11  8:41     ` Yu Zhang
  0 siblings, 0 replies; 12+ messages in thread
From: Yu Zhang @ 2025-12-11  8:41 UTC (permalink / raw)
  To: Easwar Hariharan
  Cc: linux-kernel, linux-hyperv, iommu, linux-pci, kys, haiyangz,
	wei.liu, decui, lpieralisi, kwilczynski, mani, robh, bhelgaas,
	arnd, joro, will, robin.murphy, jacob.pan, nunodasneves, mrathor,
	mhklinux, peterz, linux-arch

On Wed, Dec 10, 2025 at 09:15:18AM -0800, Easwar Hariharan wrote:
> On 12/8/2025 9:11 PM, Yu Zhang wrote:
> > Add a para-virtualized IOMMU driver for Linux guests running on Hyper-V.
> > This driver implements stage-1 IO translation within the guest OS.
> > It integrates with the Linux IOMMU core, utilizing Hyper-V hypercalls
> > for:
> >  - Capability discovery
> >  - Domain allocation, configuration, and deallocation
> >  - Device attachment and detachment
> >  - IOTLB invalidation
> > 
> > The driver constructs x86-compatible stage-1 IO page tables in the
> > guest memory using consolidated IO page table helpers. This allows
> > the guest to manage stage-1 translations independently of vendor-
> > specific drivers (like Intel VT-d or AMD IOMMU).
> > 
> > Hyper-V consumes this stage-1 IO page table when a device domain is
> > created and configured, and nests it with the host's stage-2 IO page
> > tables, thereby eliminating VM exits for guest IOMMU mapping
> > operations.
> > 
> > For guest IOMMU unmapping operations, VM exits to perform the IOTLB
> > flush (and possibly the device TLB flush) are still unavoidable. For
> > now, HVCALL_FLUSH_DEVICE_DOMAIN is used to implement a domain-selective
> > IOTLB flush. New hypercalls for finer-grained invalidation will be
> > provided in future patches.
> > 
> > Co-developed-by: Wei Liu <wei.liu@kernel.org>
> > Signed-off-by: Wei Liu <wei.liu@kernel.org>
> > Co-developed-by: Jacob Pan <jacob.pan@linux.microsoft.com>
> > Signed-off-by: Jacob Pan <jacob.pan@linux.microsoft.com>
> > Co-developed-by: Easwar Hariharan <easwar.hariharan@linux.microsoft.com>
> > Signed-off-by: Easwar Hariharan <easwar.hariharan@linux.microsoft.com>
> > Signed-off-by: Yu Zhang <zhangyu1@linux.microsoft.com>
> > ---
> >  drivers/iommu/hyperv/Kconfig  |  14 +
> >  drivers/iommu/hyperv/Makefile |   1 +
> >  drivers/iommu/hyperv/iommu.c  | 608 ++++++++++++++++++++++++++++++++++
> >  drivers/iommu/hyperv/iommu.h  |  53 +++
> >  4 files changed, 676 insertions(+)
> >  create mode 100644 drivers/iommu/hyperv/iommu.c
> >  create mode 100644 drivers/iommu/hyperv/iommu.h
> > 
> 
> <snip>
> 
> > +
> > +static int __init hv_iommu_init(void)
> > +{
> > +	int ret = 0;
> > +	struct hv_iommu_dev *hv_iommu = NULL;
> > +	struct hv_output_get_iommu_capabilities hv_iommu_cap = {0};
> > +
> > +	if (no_iommu || iommu_detected)
> > +		return -ENODEV;
> > +
> > +	if (!hv_is_hyperv_initialized())
> > +		return -ENODEV;
> > +
> > +	if (hv_iommu_detect(&hv_iommu_cap) ||
> > +	    !hv_iommu_present(hv_iommu_cap.iommu_cap) ||
> > +	    !hv_iommu_s1_domain_supported(hv_iommu_cap.iommu_cap))
> > +		return -ENODEV;
> > +
> > +	iommu_detected = 1;
> > +	pci_request_acs();
> > +
> > +	hv_iommu = kzalloc(sizeof(*hv_iommu), GFP_KERNEL);
> > +	if (!hv_iommu)
> > +		return -ENOMEM;
> > +
> > +	hv_init_iommu_device(hv_iommu, &hv_iommu_cap);
> > +
> > +	ret = hv_initialize_static_domains();
> > +	if (ret) {
> > +		pr_err("hv_initialize_static_domains failed: %d\n", ret);
> > +		goto err_sysfs_remove;
> 
> This should be goto err_free since we haven't done the sysfs_add yet
> 
> > +	}
> > +
> > +	ret = iommu_device_sysfs_add(&hv_iommu->iommu, NULL, NULL, "%s", "hv-iommu");
> > +	if (ret) {
> > +		pr_err("iommu_device_sysfs_add failed: %d\n", ret);
> > +		goto err_free;
> 
> And this should probably be a goto delete_static_domains that cleans up
> the allocated static domains...
> 

Nice catch. And thanks! :)

Yu


end of thread, other threads:[~2025-12-11  8:41 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-12-09  5:11 [RFC v1 0/5] Hyper-V: Add para-virtualized IOMMU support for Linux guests Yu Zhang
2025-12-09  5:11 ` [RFC v1 1/5] PCI: hv: Create and export hv_build_logical_dev_id() Yu Zhang
2025-12-09  5:21   ` Randy Dunlap
2025-12-10 17:03     ` Easwar Hariharan
2025-12-10 21:39   ` Bjorn Helgaas
2025-12-11  8:31     ` Yu Zhang
2025-12-09  5:11 ` [RFC v1 2/5] iommu: Move Hyper-V IOMMU driver to its own subdirectory Yu Zhang
2025-12-09  5:11 ` [RFC v1 3/5] hyperv: Introduce new hypercall interfaces used by Hyper-V guest IOMMU Yu Zhang
2025-12-09  5:11 ` [RFC v1 4/5] hyperv: allow hypercall output pages to be allocated for child partitions Yu Zhang
2025-12-09  5:11 ` [RFC v1 5/5] iommu/hyperv: Add para-virtualized IOMMU support for Hyper-V guest Yu Zhang
2025-12-10 17:15   ` Easwar Hariharan
2025-12-11  8:41     ` Yu Zhang
