* [PATCH V1 00/13] PCI passthru on Hyper-V (Part I)
@ 2026-04-22 2:32 Mukesh R
2026-04-22 2:32 ` [PATCH V1 01/13] iommu/hyperv: rename hyperv-iommu.c to hyperv-irq.c Mukesh R
` (12 more replies)
0 siblings, 13 replies; 14+ messages in thread
From: Mukesh R @ 2026-04-22 2:32 UTC (permalink / raw)
To: hpa, robin.murphy, robh, wei.liu, mrathor, mhklinux, muislam,
namjain, magnuskulke, anbelski, linux-kernel, linux-hyperv, iommu,
linux-pci, linux-arch
Cc: kys, haiyangz, decui, longli, tglx, mingo, bp, dave.hansen, x86,
joro, will, lpieralisi, kwilczynski, bhelgaas, arnd
Implement passthru of PCI devices to unprivileged virtual machines
(VMs) when Linux is running as a privileged VM on the Microsoft Hyper-V
hypervisor. This support is made to fit within the VFIO framework, and
any VMM needing to use it must go through the VFIO subsystem.
Both full device passthru and SR-IOV VFs are supported.
At a high level, the hypervisor supports traditional mapped iommu domains
that use explicit map and unmap hypercalls for mapping and unmapping guest
RAM into the iommu subsystem. Hyper-V also has a concept of direct attach
devices whereby the iommu subsystem simply uses the guest HW page table
(ept/npt/..). This series adds support for both, and both are made to
work with the VFIO subsystem.
While this Part I focuses on memory mappings, upcoming Part II
will focus on irq bypass along with some minor irq remapping
updates.
Based on: cd9f2e7d6e5b (origin/hyperv-next)
Testing:
o Most testing done on hyperv-next:e733a9e28180 using Cloud Hypervisor (51).
o Limited testing on : cd9f2e7d6e5b
o Tested with impending Part II irq patches.
o All tests involved PF passthru of devices using MSI-X.
o Following combinations were tested:
- L1VH(1): test 1: Mellanox ConnectX-6 Lx passthru
test 2: NVIDIA Tesla T4 GPU passthru
test 3: Both of the above passed through simultaneously
- Baremetal dom0/root: All of above.
(1) L1VH: a semi-privileged VM that runs on the Windows root partition
on Hyper-V and allows users to create more child VMs.
Pending (this series establishes a baseline for further enhancements):
o arm64: some delta needed to make this work on arm64 (in progress).
o device sleep/wakeup.
o More stress testing
o Cloud Hypervisor reports it could not unbind the vfio group upon guest
shutdown; a reboot is needed for now.
o Qemu support (in progress).
Changes in V1:
o patch 1: Don't tie hyperv-irq.c to CONFIG_HYPERV_IOMMU.
o patch 4: Redesigned to address a security vulnerability, found by Copilot,
in passing tgid as a parameter. Also, set tgid right
after setting pt_id.
o patch 5: Remove unused type parameter from mshv_device_ops.device_create
o patch 7: mshv_partition_ioctl_create_device cleanup on copy_to_user.
o patch 10: Add export of hv_build_devid_type_pci here to get rid of
patch 11.
o patch 12: Move functions to build device ids from patch 11 here for
the benefit of arm64. Rename file to: hyperv-iommu-root.c.
o patch 13: removed; to be made part of irq Part II of this support.
o patch 14: get rid of fast path to reduce review noise.
o New (last) patch to pin RAM regions if a device is passed through to a VM.
Thanks,
-Mukesh
Mukesh R (13):
iommu/hyperv: rename hyperv-iommu.c to hyperv-irq.c
x86/hyperv: cosmetic changes in irqdomain.c for readability
x86/hyperv: add insufficient memory support in irqdomain.c
mshv: Provide a way to get partition id if running in a VMM process
mshv: Declarations and definitions for VFIO-MSHV bridge device
mshv: Implement mshv bridge device for VFIO
mshv: Add ioctl support for MSHV-VFIO bridge device
PCI: hv: rename hv_compose_msi_msg to hv_vmbus_compose_msi_msg
mshv: Import data structs around device passthru from hyperv headers
PCI: hv: Build device id for a VMBus device, export PCI devid function
x86/hyperv: Implement hyperv virtual iommu
mshv: Populate mmio mappings for PCI passthru
mshv: pin all ram mem regions if partition has device passthru
MAINTAINERS | 3 +-
arch/x86/hyperv/irqdomain.c | 229 +++--
arch/x86/include/asm/mshyperv.h | 4 +
arch/x86/kernel/pci-dma.c | 2 +
drivers/hv/Makefile | 3 +-
drivers/hv/mshv_root.h | 26 +
drivers/hv/mshv_root_main.c | 256 ++++-
drivers/hv/mshv_vfio.c | 211 ++++
drivers/iommu/Kconfig | 5 +-
drivers/iommu/Makefile | 3 +-
drivers/iommu/hyperv-iommu-root.c | 899 ++++++++++++++++++
.../iommu/{hyperv-iommu.c => hyperv-irq.c} | 2 +-
drivers/iommu/irq_remapping.c | 2 +-
drivers/pci/controller/pci-hyperv.c | 120 ++-
include/asm-generic/mshyperv.h | 34 +
include/hyperv/hvgdk_mini.h | 11 +
include/hyperv/hvhdk_mini.h | 112 +++
include/linux/hyperv.h | 6 +
include/uapi/linux/mshv.h | 31 +
19 files changed, 1790 insertions(+), 169 deletions(-)
create mode 100644 drivers/hv/mshv_vfio.c
create mode 100644 drivers/iommu/hyperv-iommu-root.c
rename drivers/iommu/{hyperv-iommu.c => hyperv-irq.c} (99%)
--
2.51.2.vfs.0.1
* [PATCH V1 01/13] iommu/hyperv: rename hyperv-iommu.c to hyperv-irq.c
From: Mukesh R @ 2026-04-22 2:32 UTC (permalink / raw)
This file actually implements irq remapping, so rename it to the more
appropriate hyperv-irq.c. A new file implementing the hyperv iommu will
be introduced later. Also, the file should not be tied to
CONFIG_HYPERV_IOMMU, but to CONFIG_HYPERV and CONFIG_IRQ_REMAP; it
already has #ifdef CONFIG_IRQ_REMAP.
Signed-off-by: Mukesh R <mrathor@linux.microsoft.com>
---
MAINTAINERS | 2 +-
drivers/iommu/Makefile | 2 +-
drivers/iommu/{hyperv-iommu.c => hyperv-irq.c} | 2 +-
drivers/iommu/irq_remapping.c | 2 +-
4 files changed, 4 insertions(+), 4 deletions(-)
rename drivers/iommu/{hyperv-iommu.c => hyperv-irq.c} (99%)
diff --git a/MAINTAINERS b/MAINTAINERS
index d1cc0e12fe1f..f803a6a38fee 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -11914,7 +11914,7 @@ F: drivers/clocksource/hyperv_timer.c
F: drivers/hid/hid-hyperv.c
F: drivers/hv/
F: drivers/input/serio/hyperv-keyboard.c
-F: drivers/iommu/hyperv-iommu.c
+F: drivers/iommu/hyperv-irq.c
F: drivers/net/ethernet/microsoft/
F: drivers/net/hyperv/
F: drivers/pci/controller/pci-hyperv-intf.c
diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
index 0275821f4ef9..335ea77cced6 100644
--- a/drivers/iommu/Makefile
+++ b/drivers/iommu/Makefile
@@ -30,7 +30,7 @@ obj-$(CONFIG_TEGRA_IOMMU_SMMU) += tegra-smmu.o
obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o
obj-$(CONFIG_FSL_PAMU) += fsl_pamu.o fsl_pamu_domain.o
obj-$(CONFIG_S390_IOMMU) += s390-iommu.o
-obj-$(CONFIG_HYPERV_IOMMU) += hyperv-iommu.o
+obj-$(CONFIG_HYPERV) += hyperv-irq.o
obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o
obj-$(CONFIG_IOMMU_SVA) += iommu-sva.o
obj-$(CONFIG_IOMMU_IOPF) += io-pgfault.o
diff --git a/drivers/iommu/hyperv-iommu.c b/drivers/iommu/hyperv-irq.c
similarity index 99%
rename from drivers/iommu/hyperv-iommu.c
rename to drivers/iommu/hyperv-irq.c
index 479103261ae6..cc49c7cbc434 100644
--- a/drivers/iommu/hyperv-iommu.c
+++ b/drivers/iommu/hyperv-irq.c
@@ -331,4 +331,4 @@ static const struct irq_domain_ops hyperv_root_ir_domain_ops = {
.free = hyperv_root_irq_remapping_free,
};
-#endif
+#endif /* CONFIG_IRQ_REMAP */
diff --git a/drivers/iommu/irq_remapping.c b/drivers/iommu/irq_remapping.c
index c2443659812a..41bf65e4ea88 100644
--- a/drivers/iommu/irq_remapping.c
+++ b/drivers/iommu/irq_remapping.c
@@ -108,7 +108,7 @@ int __init irq_remapping_prepare(void)
else if (IS_ENABLED(CONFIG_AMD_IOMMU) &&
amd_iommu_irq_ops.prepare() == 0)
remap_ops = &amd_iommu_irq_ops;
- else if (IS_ENABLED(CONFIG_HYPERV_IOMMU) &&
+ else if (IS_ENABLED(CONFIG_HYPERV) &&
hyperv_irq_remap_ops.prepare() == 0)
remap_ops = &hyperv_irq_remap_ops;
else
--
2.51.2.vfs.0.1
* [PATCH V1 02/13] x86/hyperv: cosmetic changes in irqdomain.c for readability
From: Mukesh R @ 2026-04-22 2:32 UTC (permalink / raw)
Make cosmetic changes:
o Rename struct pci_dev *dev to *pdev since there are cases of
struct device *dev in the file and all over the kernel
o Rename hv_build_pci_dev_id to hv_build_devid_type_pci in anticipation
of building different types of device ids
o Fix checkpatch.pl issues with return and extraneous printk
o Replace spaces with tabs
o Rename union hv_device_id variables (dev_id, device_id) to hv_devid
given code paths involve many types of device ids
o Fix indentation in a large if block by using goto.
There are no functional changes.
Reviewed-by: Anirudh Rayabharam (Microsoft) <anirudh@anirudhrb.com>
Signed-off-by: Mukesh R <mrathor@linux.microsoft.com>
---
arch/x86/hyperv/irqdomain.c | 198 +++++++++++++++++++-----------------
1 file changed, 104 insertions(+), 94 deletions(-)
diff --git a/arch/x86/hyperv/irqdomain.c b/arch/x86/hyperv/irqdomain.c
index 365e364268d9..b3ad50a874dc 100644
--- a/arch/x86/hyperv/irqdomain.c
+++ b/arch/x86/hyperv/irqdomain.c
@@ -1,5 +1,4 @@
// SPDX-License-Identifier: GPL-2.0
-
/*
* Irqdomain for Linux to run as the root partition on Microsoft Hypervisor.
*
@@ -14,8 +13,8 @@
#include <linux/irqchip/irq-msi-lib.h>
#include <asm/mshyperv.h>
-static int hv_map_interrupt(union hv_device_id device_id, bool level,
- int cpu, int vector, struct hv_interrupt_entry *entry)
+static int hv_map_interrupt(union hv_device_id hv_devid, bool level,
+ int cpu, int vector, struct hv_interrupt_entry *ret_entry)
{
struct hv_input_map_device_interrupt *input;
struct hv_output_map_device_interrupt *output;
@@ -32,7 +31,7 @@ static int hv_map_interrupt(union hv_device_id device_id, bool level,
intr_desc = &input->interrupt_descriptor;
memset(input, 0, sizeof(*input));
input->partition_id = hv_current_partition_id;
- input->device_id = device_id.as_uint64;
+ input->device_id = hv_devid.as_uint64;
intr_desc->interrupt_type = HV_X64_INTERRUPT_TYPE_FIXED;
intr_desc->vector_count = 1;
intr_desc->target.vector = vector;
@@ -44,7 +43,7 @@ static int hv_map_interrupt(union hv_device_id device_id, bool level,
intr_desc->target.vp_set.valid_bank_mask = 0;
intr_desc->target.vp_set.format = HV_GENERIC_SET_SPARSE_4K;
- nr_bank = cpumask_to_vpset(&(intr_desc->target.vp_set), cpumask_of(cpu));
+ nr_bank = cpumask_to_vpset(&intr_desc->target.vp_set, cpumask_of(cpu));
if (nr_bank < 0) {
local_irq_restore(flags);
pr_err("%s: unable to generate VP set\n", __func__);
@@ -61,7 +60,7 @@ static int hv_map_interrupt(union hv_device_id device_id, bool level,
status = hv_do_rep_hypercall(HVCALL_MAP_DEVICE_INTERRUPT, 0, var_size,
input, output);
- *entry = output->interrupt_entry;
+ *ret_entry = output->interrupt_entry;
local_irq_restore(flags);
@@ -71,21 +70,19 @@ static int hv_map_interrupt(union hv_device_id device_id, bool level,
return hv_result_to_errno(status);
}
-static int hv_unmap_interrupt(u64 id, struct hv_interrupt_entry *old_entry)
+static int hv_unmap_interrupt(u64 id, struct hv_interrupt_entry *irq_entry)
{
unsigned long flags;
struct hv_input_unmap_device_interrupt *input;
- struct hv_interrupt_entry *intr_entry;
u64 status;
local_irq_save(flags);
input = *this_cpu_ptr(hyperv_pcpu_input_arg);
memset(input, 0, sizeof(*input));
- intr_entry = &input->interrupt_entry;
input->partition_id = hv_current_partition_id;
input->device_id = id;
- *intr_entry = *old_entry;
+ input->interrupt_entry = *irq_entry;
status = hv_do_hypercall(HVCALL_UNMAP_DEVICE_INTERRUPT, input, NULL);
local_irq_restore(flags);
@@ -115,67 +112,71 @@ static int get_rid_cb(struct pci_dev *pdev, u16 alias, void *data)
return 0;
}
-static union hv_device_id hv_build_pci_dev_id(struct pci_dev *dev)
+static union hv_device_id hv_build_devid_type_pci(struct pci_dev *pdev)
{
- union hv_device_id dev_id;
+ int pos;
+ union hv_device_id hv_devid;
struct rid_data data = {
.bridge = NULL,
- .rid = PCI_DEVID(dev->bus->number, dev->devfn)
+ .rid = PCI_DEVID(pdev->bus->number, pdev->devfn)
};
- pci_for_each_dma_alias(dev, get_rid_cb, &data);
+ pci_for_each_dma_alias(pdev, get_rid_cb, &data);
- dev_id.as_uint64 = 0;
- dev_id.device_type = HV_DEVICE_TYPE_PCI;
- dev_id.pci.segment = pci_domain_nr(dev->bus);
+ hv_devid.as_uint64 = 0;
+ hv_devid.device_type = HV_DEVICE_TYPE_PCI;
+ hv_devid.pci.segment = pci_domain_nr(pdev->bus);
- dev_id.pci.bdf.bus = PCI_BUS_NUM(data.rid);
- dev_id.pci.bdf.device = PCI_SLOT(data.rid);
- dev_id.pci.bdf.function = PCI_FUNC(data.rid);
- dev_id.pci.source_shadow = HV_SOURCE_SHADOW_NONE;
+ hv_devid.pci.bdf.bus = PCI_BUS_NUM(data.rid);
+ hv_devid.pci.bdf.device = PCI_SLOT(data.rid);
+ hv_devid.pci.bdf.function = PCI_FUNC(data.rid);
+ hv_devid.pci.source_shadow = HV_SOURCE_SHADOW_NONE;
- if (data.bridge) {
- int pos;
+ if (data.bridge == NULL)
+ goto out;
- /*
- * Microsoft Hypervisor requires a bus range when the bridge is
- * running in PCI-X mode.
- *
- * To distinguish conventional vs PCI-X bridge, we can check
- * the bridge's PCI-X Secondary Status Register, Secondary Bus
- * Mode and Frequency bits. See PCI Express to PCI/PCI-X Bridge
- * Specification Revision 1.0 5.2.2.1.3.
- *
- * Value zero means it is in conventional mode, otherwise it is
- * in PCI-X mode.
- */
+ /*
+ * Microsoft Hypervisor requires a bus range when the bridge is
+ * running in PCI-X mode.
+ *
+ * To distinguish conventional vs PCI-X bridge, we can check
+ * the bridge's PCI-X Secondary Status Register, Secondary Bus
+ * Mode and Frequency bits. See PCI Express to PCI/PCI-X Bridge
+ * Specification Revision 1.0 5.2.2.1.3.
+ *
+ * Value zero means it is in conventional mode, otherwise it is
+ * in PCI-X mode.
+ */
- pos = pci_find_capability(data.bridge, PCI_CAP_ID_PCIX);
- if (pos) {
- u16 status;
+ pos = pci_find_capability(data.bridge, PCI_CAP_ID_PCIX);
+ if (pos) {
+ u16 status;
- pci_read_config_word(data.bridge, pos +
- PCI_X_BRIDGE_SSTATUS, &status);
+ pci_read_config_word(data.bridge, pos + PCI_X_BRIDGE_SSTATUS,
+ &status);
- if (status & PCI_X_SSTATUS_FREQ) {
- /* Non-zero, PCI-X mode */
- u8 sec_bus, sub_bus;
+ if (status & PCI_X_SSTATUS_FREQ) {
+ /* Non-zero, PCI-X mode */
+ u8 sec_bus, sub_bus;
- dev_id.pci.source_shadow = HV_SOURCE_SHADOW_BRIDGE_BUS_RANGE;
+ hv_devid.pci.source_shadow =
+ HV_SOURCE_SHADOW_BRIDGE_BUS_RANGE;
- pci_read_config_byte(data.bridge, PCI_SECONDARY_BUS, &sec_bus);
- dev_id.pci.shadow_bus_range.secondary_bus = sec_bus;
- pci_read_config_byte(data.bridge, PCI_SUBORDINATE_BUS, &sub_bus);
- dev_id.pci.shadow_bus_range.subordinate_bus = sub_bus;
- }
+ pci_read_config_byte(data.bridge, PCI_SECONDARY_BUS,
+ &sec_bus);
+ hv_devid.pci.shadow_bus_range.secondary_bus = sec_bus;
+ pci_read_config_byte(data.bridge, PCI_SUBORDINATE_BUS,
+ &sub_bus);
+ hv_devid.pci.shadow_bus_range.subordinate_bus = sub_bus;
}
}
- return dev_id;
+out:
+ return hv_devid;
}
-/**
- * hv_map_msi_interrupt() - "Map" the MSI IRQ in the hypervisor.
+/*
+ * hv_map_msi_interrupt() - Map the MSI IRQ in the hypervisor.
* @data: Describes the IRQ
* @out_entry: Hypervisor (MSI) interrupt entry (can be NULL)
*
@@ -188,22 +189,23 @@ int hv_map_msi_interrupt(struct irq_data *data,
{
struct irq_cfg *cfg = irqd_cfg(data);
struct hv_interrupt_entry dummy;
- union hv_device_id device_id;
+ union hv_device_id hv_devid;
struct msi_desc *msidesc;
- struct pci_dev *dev;
+ struct pci_dev *pdev;
int cpu;
msidesc = irq_data_get_msi_desc(data);
- dev = msi_desc_to_pci_dev(msidesc);
- device_id = hv_build_pci_dev_id(dev);
+ pdev = msi_desc_to_pci_dev(msidesc);
+ hv_devid = hv_build_devid_type_pci(pdev);
cpu = cpumask_first(irq_data_get_effective_affinity_mask(data));
- return hv_map_interrupt(device_id, false, cpu, cfg->vector,
+ return hv_map_interrupt(hv_devid, false, cpu, cfg->vector,
out_entry ? out_entry : &dummy);
}
EXPORT_SYMBOL_GPL(hv_map_msi_interrupt);
-static inline void entry_to_msi_msg(struct hv_interrupt_entry *entry, struct msi_msg *msg)
+static void entry_to_msi_msg(struct hv_interrupt_entry *entry,
+ struct msi_msg *msg)
{
/* High address is always 0 */
msg->address_hi = 0;
@@ -211,17 +213,19 @@ static inline void entry_to_msi_msg(struct hv_interrupt_entry *entry, struct msi
msg->data = entry->msi_entry.data.as_uint32;
}
-static int hv_unmap_msi_interrupt(struct pci_dev *dev, struct hv_interrupt_entry *old_entry);
+static int hv_unmap_msi_interrupt(struct pci_dev *pdev,
+ struct hv_interrupt_entry *irq_entry);
+
static void hv_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
{
struct hv_interrupt_entry *stored_entry;
struct irq_cfg *cfg = irqd_cfg(data);
struct msi_desc *msidesc;
- struct pci_dev *dev;
+ struct pci_dev *pdev;
int ret;
msidesc = irq_data_get_msi_desc(data);
- dev = msi_desc_to_pci_dev(msidesc);
+ pdev = msi_desc_to_pci_dev(msidesc);
if (!cfg) {
pr_debug("%s: cfg is NULL", __func__);
@@ -240,7 +244,7 @@ static void hv_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
stored_entry = data->chip_data;
data->chip_data = NULL;
- ret = hv_unmap_msi_interrupt(dev, stored_entry);
+ ret = hv_unmap_msi_interrupt(pdev, stored_entry);
kfree(stored_entry);
@@ -249,10 +253,8 @@ static void hv_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
}
stored_entry = kzalloc_obj(*stored_entry, GFP_ATOMIC);
- if (!stored_entry) {
- pr_debug("%s: failed to allocate chip data\n", __func__);
+ if (!stored_entry)
return;
- }
ret = hv_map_msi_interrupt(data, stored_entry);
if (ret) {
@@ -262,18 +264,21 @@ static void hv_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
data->chip_data = stored_entry;
entry_to_msi_msg(data->chip_data, msg);
-
- return;
}
-static int hv_unmap_msi_interrupt(struct pci_dev *dev, struct hv_interrupt_entry *old_entry)
+static int hv_unmap_msi_interrupt(struct pci_dev *pdev,
+ struct hv_interrupt_entry *irq_entry)
{
- return hv_unmap_interrupt(hv_build_pci_dev_id(dev).as_uint64, old_entry);
+ union hv_device_id hv_devid;
+
+ hv_devid = hv_build_devid_type_pci(pdev);
+ return hv_unmap_interrupt(hv_devid.as_uint64, irq_entry);
}
-static void hv_teardown_msi_irq(struct pci_dev *dev, struct irq_data *irqd)
+/* NB: during map, hv_interrupt_entry is saved via data->chip_data */
+static void hv_teardown_msi_irq(struct pci_dev *pdev, struct irq_data *irqd)
{
- struct hv_interrupt_entry old_entry;
+ struct hv_interrupt_entry irq_entry;
struct msi_msg msg;
if (!irqd->chip_data) {
@@ -281,13 +286,13 @@ static void hv_teardown_msi_irq(struct pci_dev *dev, struct irq_data *irqd)
return;
}
- old_entry = *(struct hv_interrupt_entry *)irqd->chip_data;
- entry_to_msi_msg(&old_entry, &msg);
+ irq_entry = *(struct hv_interrupt_entry *)irqd->chip_data;
+ entry_to_msi_msg(&irq_entry, &msg);
kfree(irqd->chip_data);
irqd->chip_data = NULL;
- (void)hv_unmap_msi_interrupt(dev, &old_entry);
+ (void)hv_unmap_msi_interrupt(pdev, &irq_entry);
}
/*
@@ -302,7 +307,8 @@ static struct irq_chip hv_pci_msi_controller = {
};
static bool hv_init_dev_msi_info(struct device *dev, struct irq_domain *domain,
- struct irq_domain *real_parent, struct msi_domain_info *info)
+ struct irq_domain *real_parent,
+ struct msi_domain_info *info)
{
struct irq_chip *chip = info->chip;
@@ -317,7 +323,8 @@ static bool hv_init_dev_msi_info(struct device *dev, struct irq_domain *domain,
}
#define HV_MSI_FLAGS_SUPPORTED (MSI_GENERIC_FLAGS_MASK | MSI_FLAG_PCI_MSIX)
-#define HV_MSI_FLAGS_REQUIRED (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS)
+#define HV_MSI_FLAGS_REQUIRED (MSI_FLAG_USE_DEF_DOM_OPS | \
+ MSI_FLAG_USE_DEF_CHIP_OPS)
static struct msi_parent_ops hv_msi_parent_ops = {
.supported_flags = HV_MSI_FLAGS_SUPPORTED,
@@ -329,14 +336,14 @@ static struct msi_parent_ops hv_msi_parent_ops = {
.init_dev_msi_info = hv_init_dev_msi_info,
};
-static int hv_msi_domain_alloc(struct irq_domain *d, unsigned int virq, unsigned int nr_irqs,
- void *arg)
+/* Allocate nr_irqs IRQs for the given irq domain */
+static int hv_msi_domain_alloc(struct irq_domain *d, unsigned int virq,
+ unsigned int nr_irqs, void *arg)
{
/*
- * TODO: The allocation bits of hv_irq_compose_msi_msg(), i.e. everything except
- * entry_to_msi_msg() should be in here.
+ * TODO: The allocation bits of hv_irq_compose_msi_msg(), i.e.
+ * everything except entry_to_msi_msg() should be in here.
*/
-
int ret;
ret = irq_domain_alloc_irqs_parent(d, virq, nr_irqs, arg);
@@ -344,13 +351,15 @@ static int hv_msi_domain_alloc(struct irq_domain *d, unsigned int virq, unsigned
return ret;
for (int i = 0; i < nr_irqs; ++i) {
- irq_domain_set_info(d, virq + i, 0, &hv_pci_msi_controller, NULL,
- handle_edge_irq, NULL, "edge");
+ irq_domain_set_info(d, virq + i, 0, &hv_pci_msi_controller,
+ NULL, handle_edge_irq, NULL, "edge");
}
+
return 0;
}
-static void hv_msi_domain_free(struct irq_domain *d, unsigned int virq, unsigned int nr_irqs)
+static void hv_msi_domain_free(struct irq_domain *d, unsigned int virq,
+ unsigned int nr_irqs)
{
for (int i = 0; i < nr_irqs; ++i) {
struct irq_data *irqd = irq_domain_get_irq_data(d, virq);
@@ -362,6 +371,7 @@ static void hv_msi_domain_free(struct irq_domain *d, unsigned int virq, unsigned
hv_teardown_msi_irq(to_pci_dev(desc->dev), irqd);
}
+
irq_domain_free_irqs_top(d, virq, nr_irqs);
}
@@ -394,25 +404,25 @@ struct irq_domain * __init hv_create_pci_msi_domain(void)
int hv_unmap_ioapic_interrupt(int ioapic_id, struct hv_interrupt_entry *entry)
{
- union hv_device_id device_id;
+ union hv_device_id hv_devid;
- device_id.as_uint64 = 0;
- device_id.device_type = HV_DEVICE_TYPE_IOAPIC;
- device_id.ioapic.ioapic_id = (u8)ioapic_id;
+ hv_devid.as_uint64 = 0;
+ hv_devid.device_type = HV_DEVICE_TYPE_IOAPIC;
+ hv_devid.ioapic.ioapic_id = (u8)ioapic_id;
- return hv_unmap_interrupt(device_id.as_uint64, entry);
+ return hv_unmap_interrupt(hv_devid.as_uint64, entry);
}
EXPORT_SYMBOL_GPL(hv_unmap_ioapic_interrupt);
int hv_map_ioapic_interrupt(int ioapic_id, bool level, int cpu, int vector,
struct hv_interrupt_entry *entry)
{
- union hv_device_id device_id;
+ union hv_device_id hv_devid;
- device_id.as_uint64 = 0;
- device_id.device_type = HV_DEVICE_TYPE_IOAPIC;
- device_id.ioapic.ioapic_id = (u8)ioapic_id;
+ hv_devid.as_uint64 = 0;
+ hv_devid.device_type = HV_DEVICE_TYPE_IOAPIC;
+ hv_devid.ioapic.ioapic_id = (u8)ioapic_id;
- return hv_map_interrupt(device_id, level, cpu, vector, entry);
+ return hv_map_interrupt(hv_devid, level, cpu, vector, entry);
}
EXPORT_SYMBOL_GPL(hv_map_ioapic_interrupt);
--
2.51.2.vfs.0.1
* [PATCH V1 03/13] x86/hyperv: add insufficient memory support in irqdomain.c
From: Mukesh R @ 2026-04-22 2:32 UTC (permalink / raw)
Intermittent insufficient-memory hypercall failures have been observed
with the current map device interrupt hypercall. In case of such a
failure, we must deposit more memory and redo the hypercall. Add support
for that. Depositing memory needs the partition id, so make that a
parameter to the map interrupt function.
Signed-off-by: Mukesh R <mrathor@linux.microsoft.com>
---
arch/x86/hyperv/irqdomain.c | 38 +++++++++++++++++++++++++++++++------
1 file changed, 32 insertions(+), 6 deletions(-)
diff --git a/arch/x86/hyperv/irqdomain.c b/arch/x86/hyperv/irqdomain.c
index b3ad50a874dc..229f986e08ea 100644
--- a/arch/x86/hyperv/irqdomain.c
+++ b/arch/x86/hyperv/irqdomain.c
@@ -13,8 +13,9 @@
#include <linux/irqchip/irq-msi-lib.h>
#include <asm/mshyperv.h>
-static int hv_map_interrupt(union hv_device_id hv_devid, bool level,
- int cpu, int vector, struct hv_interrupt_entry *ret_entry)
+static u64 hv_map_interrupt_hcall(u64 ptid, union hv_device_id hv_devid,
+ bool level, int cpu, int vector,
+ struct hv_interrupt_entry *ret_entry)
{
struct hv_input_map_device_interrupt *input;
struct hv_output_map_device_interrupt *output;
@@ -30,8 +31,10 @@ static int hv_map_interrupt(union hv_device_id hv_devid, bool level,
intr_desc = &input->interrupt_descriptor;
memset(input, 0, sizeof(*input));
- input->partition_id = hv_current_partition_id;
+
+ input->partition_id = ptid;
input->device_id = hv_devid.as_uint64;
+
intr_desc->interrupt_type = HV_X64_INTERRUPT_TYPE_FIXED;
intr_desc->vector_count = 1;
intr_desc->target.vector = vector;
@@ -64,6 +67,28 @@ static int hv_map_interrupt(union hv_device_id hv_devid, bool level,
local_irq_restore(flags);
+ return status;
+}
+
+static int hv_map_interrupt(u64 ptid, union hv_device_id device_id, bool level,
+ int cpu, int vector,
+ struct hv_interrupt_entry *ret_entry)
+{
+ u64 status;
+ int rc, deposit_pgs = 16; /* don't loop forever */
+
+ while (deposit_pgs--) {
+ status = hv_map_interrupt_hcall(ptid, device_id, level, cpu,
+ vector, ret_entry);
+
+ if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY)
+ break;
+
+ rc = hv_call_deposit_pages(NUMA_NO_NODE, ptid, 1);
+ if (rc)
+ break;
+ }
+
if (!hv_result_success(status))
hv_status_err(status, "\n");
@@ -199,8 +224,8 @@ int hv_map_msi_interrupt(struct irq_data *data,
hv_devid = hv_build_devid_type_pci(pdev);
cpu = cpumask_first(irq_data_get_effective_affinity_mask(data));
- return hv_map_interrupt(hv_devid, false, cpu, cfg->vector,
- out_entry ? out_entry : &dummy);
+ return hv_map_interrupt(hv_current_partition_id, hv_devid, false, cpu,
+ cfg->vector, out_entry ? out_entry : &dummy);
}
EXPORT_SYMBOL_GPL(hv_map_msi_interrupt);
@@ -423,6 +448,7 @@ int hv_map_ioapic_interrupt(int ioapic_id, bool level, int cpu, int vector,
hv_devid.device_type = HV_DEVICE_TYPE_IOAPIC;
hv_devid.ioapic.ioapic_id = (u8)ioapic_id;
- return hv_map_interrupt(hv_devid, level, cpu, vector, entry);
+ return hv_map_interrupt(hv_current_partition_id, hv_devid, level, cpu,
+ vector, entry);
}
EXPORT_SYMBOL_GPL(hv_map_ioapic_interrupt);
--
2.51.2.vfs.0.1
* [PATCH V1 04/13] mshv: Provide a way to get partition id if running in a VMM process
From: Mukesh R @ 2026-04-22 2:32 UTC (permalink / raw)
Many PCI passthru related hypercalls require the partition id of the
target guest. Guests are managed by the MSHV driver, and the partition
id is only maintained there. Add a field to the partition struct in the
MSHV driver to save the tgid of the VMM process creating the partition,
and add a function to retrieve the partition id if the current process
is a VMM process.
Signed-off-by: Mukesh R <mrathor@linux.microsoft.com>
---
drivers/hv/mshv_root.h | 1 +
drivers/hv/mshv_root_main.c | 22 ++++++++++++++++++++++
include/asm-generic/mshyperv.h | 5 +++++
3 files changed, 28 insertions(+)
diff --git a/drivers/hv/mshv_root.h b/drivers/hv/mshv_root.h
index 1f086dcb7aa1..a85c24dcc701 100644
--- a/drivers/hv/mshv_root.h
+++ b/drivers/hv/mshv_root.h
@@ -138,6 +138,7 @@ struct mshv_partition {
struct mshv_girq_routing_table __rcu *pt_girq_tbl;
u64 isolation_type;
+ pid_t pt_vmm_tgid;
bool import_completed;
bool pt_initialized;
#if IS_ENABLED(CONFIG_DEBUG_FS)
diff --git a/drivers/hv/mshv_root_main.c b/drivers/hv/mshv_root_main.c
index bd1359eb58dd..02c107458be9 100644
--- a/drivers/hv/mshv_root_main.c
+++ b/drivers/hv/mshv_root_main.c
@@ -1908,6 +1908,27 @@ mshv_partition_release(struct inode *inode, struct file *filp)
return 0;
}
+/* Return the partition id if the current process is a VMM process */
+u64 mshv_current_partid(void)
+{
+ struct mshv_partition *pt;
+ int i;
+ u64 ret_ptid = HV_PARTITION_ID_INVALID;
+
+ rcu_read_lock();
+
+ hash_for_each_rcu(mshv_root.pt_htable, i, pt, pt_hnode) {
+ if (pt->pt_vmm_tgid == current->tgid) {
+ ret_ptid = pt->pt_id;
+ break;
+ }
+ }
+
+ rcu_read_unlock();
+ return ret_ptid;
+}
+EXPORT_SYMBOL_GPL(mshv_current_partid);
+
static int
add_partition(struct mshv_partition *partition)
{
@@ -2073,6 +2094,7 @@ mshv_ioctl_create_partition(void __user *user_arg, struct device *module_dev)
goto cleanup_irq_srcu;
partition->pt_id = pt_id;
+ partition->pt_vmm_tgid = current->tgid;
ret = add_partition(partition);
if (ret)
diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
index bf601d67cecb..e8cbc4e3f7ad 100644
--- a/include/asm-generic/mshyperv.h
+++ b/include/asm-generic/mshyperv.h
@@ -350,6 +350,7 @@ int hv_call_add_logical_proc(int node, u32 lp_index, u32 acpi_id);
int hv_call_notify_all_processors_started(void);
bool hv_lp_exists(u32 lp_index);
int hv_call_create_vp(int node, u64 partition_id, u32 vp_index, u32 flags);
+u64 mshv_current_partid(void);
#else /* CONFIG_MSHV_ROOT */
static inline bool hv_root_partition(void) { return false; }
@@ -380,6 +381,10 @@ static inline int hv_call_create_vp(int node, u64 partition_id, u32 vp_index, u3
{
return -EOPNOTSUPP;
}
+static inline u64 mshv_current_partid(void)
+{
+ return HV_PARTITION_ID_INVALID;
+}
#endif /* CONFIG_MSHV_ROOT */
static inline int hv_deposit_memory(u64 partition_id, u64 status)
--
2.51.2.vfs.0.1
* [PATCH V1 05/13] mshv: Declarations and definitions for VFIO-MSHV bridge device
From: Mukesh R @ 2026-04-22 2:32 UTC (permalink / raw)
Add the data structs needed by the subsequent patch, which introduces a
new module implementing the VFIO-MSHV pseudo device.
Signed-off-by: Mukesh R <mrathor@linux.microsoft.com>
---
drivers/hv/mshv_root.h | 19 +++++++++++++++++++
include/uapi/linux/mshv.h | 30 ++++++++++++++++++++++++++++++
2 files changed, 49 insertions(+)
diff --git a/drivers/hv/mshv_root.h b/drivers/hv/mshv_root.h
index a85c24dcc701..b9880d0bdc4d 100644
--- a/drivers/hv/mshv_root.h
+++ b/drivers/hv/mshv_root.h
@@ -227,6 +227,25 @@ struct port_table_info {
};
};
+struct mshv_device {
+ const struct mshv_device_ops *device_ops;
+ struct mshv_partition *device_pt;
+ void *device_private;
+ struct hlist_node device_ptnode;
+};
+
+struct mshv_device_ops {
+ const char *device_name;
+ long (*device_create)(struct mshv_device *dev);
+ void (*device_release)(struct mshv_device *dev);
+ long (*device_set_attr)(struct mshv_device *dev,
+ struct mshv_device_attr *attr);
+ long (*device_has_attr)(struct mshv_device *dev,
+ struct mshv_device_attr *attr);
+};
+
+extern struct mshv_device_ops mshv_vfio_device_ops;
+
int mshv_update_routing_table(struct mshv_partition *partition,
const struct mshv_user_irq_entry *entries,
unsigned int numents);
diff --git a/include/uapi/linux/mshv.h b/include/uapi/linux/mshv.h
index 32ff92b6342b..4373a8243951 100644
--- a/include/uapi/linux/mshv.h
+++ b/include/uapi/linux/mshv.h
@@ -404,4 +404,34 @@ struct mshv_sint_mask {
/* hv_hvcall device */
#define MSHV_HVCALL_SETUP _IOW(MSHV_IOCTL, 0x1E, struct mshv_vtl_hvcall_setup)
#define MSHV_HVCALL _IOWR(MSHV_IOCTL, 0x1F, struct mshv_vtl_hvcall)
+
+/* device passthru */
+#define MSHV_CREATE_DEVICE_TEST 1
+
+enum {
+ MSHV_DEV_TYPE_VFIO,
+ MSHV_DEV_TYPE_MAX,
+};
+
+struct mshv_create_device {
+ __u32 type; /* in: MSHV_DEV_TYPE_xxx */
+ __u32 fd; /* out: device handle */
+ __u32 flags; /* in: MSHV_CREATE_DEVICE_xxx */
+};
+
+#define MSHV_DEV_VFIO_FILE 1
+#define MSHV_DEV_VFIO_FILE_ADD 1
+#define MSHV_DEV_VFIO_FILE_DEL 2
+
+struct mshv_device_attr {
+ __u32 flags; /* no flags currently defined */
+ __u32 group; /* device-defined */
+ __u64 attr; /* group-defined */
+ __u64 addr; /* userspace address of attr data */
+};
+
+/* Device fds created with MSHV_CREATE_DEVICE */
+#define MSHV_SET_DEVICE_ATTR _IOW(MSHV_IOCTL, 0x00, struct mshv_device_attr)
+#define MSHV_HAS_DEVICE_ATTR _IOW(MSHV_IOCTL, 0x01, struct mshv_device_attr)
+
#endif
--
2.51.2.vfs.0.1
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH V1 06/13] mshv: Implement mshv bridge device for VFIO
2026-04-22 2:32 [PATCH V1 00/13] PCI passthru on Hyper-V (Part I) Mukesh R
` (4 preceding siblings ...)
2026-04-22 2:32 ` [PATCH V1 05/13] mshv: Declarations and definitions for VFIO-MSHV bridge device Mukesh R
@ 2026-04-22 2:32 ` Mukesh R
2026-04-22 2:32 ` [PATCH V1 07/13] mshv: Add ioctl support for MSHV-VFIO bridge device Mukesh R
` (6 subsequent siblings)
12 siblings, 0 replies; 14+ messages in thread
From: Mukesh R @ 2026-04-22 2:32 UTC (permalink / raw)
To: hpa, robin.murphy, robh, wei.liu, mrathor, mhklinux, muislam,
namjain, magnuskulke, anbelski, linux-kernel, linux-hyperv, iommu,
linux-pci, linux-arch
Cc: kys, haiyangz, decui, longli, tglx, mingo, bp, dave.hansen, x86,
joro, will, lpieralisi, kwilczynski, bhelgaas, arnd
Add a new file implementing the VFIO-MSHV bridge pseudo device. These
functions are called from the VFIO framework. Credit to kvm/vfio.c,
from which this file was adapted.
Co-developed-by: Wei Liu <wei.liu@kernel.org>
Signed-off-by: Wei Liu <wei.liu@kernel.org>
Signed-off-by: Mukesh R <mrathor@linux.microsoft.com>
---
drivers/hv/Makefile | 3 +-
drivers/hv/mshv_vfio.c | 211 ++++++++++++++++++++++++++++++++++++++
include/uapi/linux/mshv.h | 1 +
3 files changed, 214 insertions(+), 1 deletion(-)
create mode 100644 drivers/hv/mshv_vfio.c
diff --git a/drivers/hv/Makefile b/drivers/hv/Makefile
index 888a748cc7cb..9ab6fc254c38 100644
--- a/drivers/hv/Makefile
+++ b/drivers/hv/Makefile
@@ -14,7 +14,8 @@ hv_vmbus-y := vmbus_drv.o \
hv_vmbus-$(CONFIG_HYPERV_TESTING) += hv_debugfs.o
hv_utils-y := hv_util.o hv_kvp.o hv_snapshot.o hv_utils_transport.o
mshv_root-y := mshv_root_main.o mshv_synic.o mshv_eventfd.o mshv_irq.o \
- mshv_root_hv_call.o mshv_portid_table.o mshv_regions.o
+ mshv_root_hv_call.o mshv_portid_table.o mshv_regions.o \
+ mshv_vfio.o
mshv_root-$(CONFIG_DEBUG_FS) += mshv_debugfs.o
mshv_root-$(CONFIG_TRACEPOINTS) += mshv_trace.o
mshv_vtl-y := mshv_vtl_main.o
diff --git a/drivers/hv/mshv_vfio.c b/drivers/hv/mshv_vfio.c
new file mode 100644
index 000000000000..00a97920e25b
--- /dev/null
+++ b/drivers/hv/mshv_vfio.c
@@ -0,0 +1,211 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * VFIO-MSHV bridge pseudo device
+ *
+ * Heavily inspired by the VFIO-KVM bridge pseudo device.
+ */
+#include <linux/errno.h>
+#include <linux/file.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/slab.h>
+#include <linux/vfio.h>
+#include <asm/mshyperv.h>
+
+#include "mshv.h"
+#include "mshv_root.h"
+
+struct mshv_vfio_file {
+ struct list_head node;
+ struct file *file;
+};
+
+struct mshv_vfio {
+ struct list_head file_list; /* list of struct mshv_vfio_file */
+ struct mutex lock;
+};
+
+static bool mshv_vfio_file_is_valid(struct file *file)
+{
+ bool (*fn)(struct file *file);
+ bool ret;
+
+ fn = symbol_get(vfio_file_is_valid);
+ if (!fn)
+ return false;
+
+ ret = fn(file);
+
+ symbol_put(vfio_file_is_valid);
+
+ return ret;
+}
+
+static long mshv_vfio_file_add(struct mshv_device *mshvdev, unsigned int fd)
+{
+ struct mshv_vfio *mshv_vfio = mshvdev->device_private;
+ struct mshv_vfio_file *mvf;
+ struct file *filp;
+ long ret = 0;
+
+ filp = fget(fd);
+ if (!filp)
+ return -EBADF;
+
+ /* Ensure the FD is a vfio FD. */
+ if (!mshv_vfio_file_is_valid(filp)) {
+ ret = -EINVAL;
+ goto out_fput;
+ }
+
+ mutex_lock(&mshv_vfio->lock);
+
+ list_for_each_entry(mvf, &mshv_vfio->file_list, node) {
+ if (mvf->file == filp) {
+ ret = -EEXIST;
+ goto out_unlock;
+ }
+ }
+
+ mvf = kzalloc(sizeof(*mvf), GFP_KERNEL_ACCOUNT);
+ if (!mvf) {
+ ret = -ENOMEM;
+ goto out_unlock;
+ }
+
+ mvf->file = get_file(filp);
+ list_add_tail(&mvf->node, &mshv_vfio->file_list);
+
+out_unlock:
+ mutex_unlock(&mshv_vfio->lock);
+out_fput:
+ fput(filp);
+ return ret;
+}
+
+static long mshv_vfio_file_del(struct mshv_device *mshvdev, unsigned int fd)
+{
+ struct mshv_vfio *mshv_vfio = mshvdev->device_private;
+ struct mshv_vfio_file *mvf;
+ long ret;
+
+ CLASS(fd, f)(fd);
+
+ if (fd_empty(f))
+ return -EBADF;
+
+ ret = -ENOENT;
+ mutex_lock(&mshv_vfio->lock);
+
+ list_for_each_entry(mvf, &mshv_vfio->file_list, node) {
+ if (mvf->file != fd_file(f))
+ continue;
+
+ list_del(&mvf->node);
+ fput(mvf->file);
+ kfree(mvf);
+ ret = 0;
+ break;
+ }
+
+ mutex_unlock(&mshv_vfio->lock);
+ return ret;
+}
+
+static long mshv_vfio_set_file(struct mshv_device *mshvdev, long attr,
+ void __user *arg)
+{
+ int32_t __user *argp = arg;
+ int32_t fd;
+
+ switch (attr) {
+ case MSHV_DEV_VFIO_FILE_ADD:
+ if (get_user(fd, argp))
+ return -EFAULT;
+ return mshv_vfio_file_add(mshvdev, fd);
+
+ case MSHV_DEV_VFIO_FILE_DEL:
+ if (get_user(fd, argp))
+ return -EFAULT;
+ return mshv_vfio_file_del(mshvdev, fd);
+ }
+
+ return -ENXIO;
+}
+
+static long mshv_vfio_set_attr(struct mshv_device *mshvdev,
+ struct mshv_device_attr *attr)
+{
+ switch (attr->group) {
+ case MSHV_DEV_VFIO_FILE:
+ return mshv_vfio_set_file(mshvdev, attr->attr,
+ u64_to_user_ptr(attr->addr));
+ }
+
+ return -ENXIO;
+}
+
+static long mshv_vfio_has_attr(struct mshv_device *mshvdev,
+ struct mshv_device_attr *attr)
+{
+ switch (attr->group) {
+ case MSHV_DEV_VFIO_FILE:
+ switch (attr->attr) {
+ case MSHV_DEV_VFIO_FILE_ADD:
+ case MSHV_DEV_VFIO_FILE_DEL:
+ return 0;
+ }
+
+ break;
+ }
+
+ return -ENXIO;
+}
+
+static long mshv_vfio_create_device(struct mshv_device *mshvdev)
+{
+ struct mshv_device *tmp;
+ struct mshv_vfio *mshv_vfio;
+
+ /* Only one VFIO "device" per VM */
+ hlist_for_each_entry(tmp, &mshvdev->device_pt->pt_devices,
+ device_ptnode)
+ if (tmp->device_ops == &mshv_vfio_device_ops)
+ return -EBUSY;
+
+ mshv_vfio = kzalloc(sizeof(*mshv_vfio), GFP_KERNEL_ACCOUNT);
+ if (mshv_vfio == NULL)
+ return -ENOMEM;
+
+ INIT_LIST_HEAD(&mshv_vfio->file_list);
+ mutex_init(&mshv_vfio->lock);
+
+ mshvdev->device_private = mshv_vfio;
+
+ return 0;
+}
+
+/* This is called from mshv_device_fop_release() */
+static void mshv_vfio_release_device(struct mshv_device *mshvdev)
+{
+ struct mshv_vfio *mv = mshvdev->device_private;
+ struct mshv_vfio_file *mvf, *tmp;
+
+ list_for_each_entry_safe(mvf, tmp, &mv->file_list, node) {
+ fput(mvf->file);
+ list_del(&mvf->node);
+ kfree(mvf);
+ }
+
+ kfree(mv);
+ kfree(mshvdev);
+}
+
+struct mshv_device_ops mshv_vfio_device_ops = {
+ .device_name = "mshv-vfio",
+ .device_create = mshv_vfio_create_device,
+ .device_release = mshv_vfio_release_device,
+ .device_set_attr = mshv_vfio_set_attr,
+ .device_has_attr = mshv_vfio_has_attr,
+};
diff --git a/include/uapi/linux/mshv.h b/include/uapi/linux/mshv.h
index 4373a8243951..6404e8a98237 100644
--- a/include/uapi/linux/mshv.h
+++ b/include/uapi/linux/mshv.h
@@ -254,6 +254,7 @@ struct mshv_root_hvcall {
#define MSHV_GET_GPAP_ACCESS_BITMAP _IOWR(MSHV_IOCTL, 0x06, struct mshv_gpap_access_bitmap)
/* Generic hypercall */
#define MSHV_ROOT_HVCALL _IOWR(MSHV_IOCTL, 0x07, struct mshv_root_hvcall)
+#define MSHV_CREATE_DEVICE _IOWR(MSHV_IOCTL, 0x08, struct mshv_create_device)
/*
********************************
--
2.51.2.vfs.0.1
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH V1 07/13] mshv: Add ioctl support for MSHV-VFIO bridge device
2026-04-22 2:32 [PATCH V1 00/13] PCI passthru on Hyper-V (Part I) Mukesh R
` (5 preceding siblings ...)
2026-04-22 2:32 ` [PATCH V1 06/13] mshv: Implement mshv bridge device for VFIO Mukesh R
@ 2026-04-22 2:32 ` Mukesh R
2026-04-22 2:32 ` [PATCH V1 08/13] PCI: hv: rename hv_compose_msi_msg to hv_vmbus_compose_msi_msg Mukesh R
` (5 subsequent siblings)
12 siblings, 0 replies; 14+ messages in thread
From: Mukesh R @ 2026-04-22 2:32 UTC (permalink / raw)
To: hpa, robin.murphy, robh, wei.liu, mrathor, mhklinux, muislam,
namjain, magnuskulke, anbelski, linux-kernel, linux-hyperv, iommu,
linux-pci, linux-arch
Cc: kys, haiyangz, decui, longli, tglx, mingo, bp, dave.hansen, x86,
joro, will, lpieralisi, kwilczynski, bhelgaas, arnd
Add ioctl support for creating MSHV devices for a partition. At
present only the VFIO device type is supported, but more could be
added. At a high level, the partition ioctl to create a device verifies
that it is of type VFIO and performs some setup for the bridge code in
mshv_vfio.c.
Adapted from the KVM device ioctls.
Co-developed-by: Wei Liu <wei.liu@kernel.org>
Signed-off-by: Wei Liu <wei.liu@kernel.org>
Signed-off-by: Mukesh R <mrathor@linux.microsoft.com>
---
drivers/hv/mshv_root_main.c | 116 ++++++++++++++++++++++++++++++++++++
1 file changed, 116 insertions(+)
diff --git a/drivers/hv/mshv_root_main.c b/drivers/hv/mshv_root_main.c
index 02c107458be9..6ceb5f608589 100644
--- a/drivers/hv/mshv_root_main.c
+++ b/drivers/hv/mshv_root_main.c
@@ -1625,6 +1625,119 @@ mshv_partition_ioctl_initialize(struct mshv_partition *partition)
return ret;
}
+static long mshv_device_attr_ioctl(struct mshv_device *mshv_dev, int cmd,
+ ulong uarg)
+{
+ struct mshv_device_attr attr;
+ const struct mshv_device_ops *devops = mshv_dev->device_ops;
+
+ if (copy_from_user(&attr, (void __user *)uarg, sizeof(attr)))
+ return -EFAULT;
+
+ switch (cmd) {
+ case MSHV_SET_DEVICE_ATTR:
+ if (devops->device_set_attr)
+ return devops->device_set_attr(mshv_dev, &attr);
+ break;
+ case MSHV_HAS_DEVICE_ATTR:
+ if (devops->device_has_attr)
+ return devops->device_has_attr(mshv_dev, &attr);
+ break;
+ }
+
+ return -EPERM;
+}
+
+static long mshv_device_fop_ioctl(struct file *filp, unsigned int cmd,
+ ulong uarg)
+{
+ struct mshv_device *mshv_dev = filp->private_data;
+
+ switch (cmd) {
+ case MSHV_SET_DEVICE_ATTR:
+ case MSHV_HAS_DEVICE_ATTR:
+ return mshv_device_attr_ioctl(mshv_dev, cmd, uarg);
+ }
+
+ return -ENOTTY;
+}
+
+static int mshv_device_fop_release(struct inode *inode, struct file *filp)
+{
+ struct mshv_device *mshv_dev = filp->private_data;
+ struct mshv_partition *partition = mshv_dev->device_pt;
+
+ if (mshv_dev->device_ops->device_release) {
+ mutex_lock(&partition->pt_mutex);
+ hlist_del(&mshv_dev->device_ptnode);
+ mshv_dev->device_ops->device_release(mshv_dev);
+ mutex_unlock(&partition->pt_mutex);
+ }
+
+ mshv_partition_put(partition);
+ return 0;
+}
+
+static const struct file_operations mshv_device_fops = {
+ .owner = THIS_MODULE,
+ .unlocked_ioctl = mshv_device_fop_ioctl,
+ .release = mshv_device_fop_release,
+};
+
+static long mshv_partition_ioctl_create_device(struct mshv_partition *partition,
+ void __user *uarg)
+{
+ long rc;
+ struct mshv_create_device devargk;
+ struct mshv_device *mshv_dev;
+ const struct mshv_device_ops *vfio_ops;
+
+ if (copy_from_user(&devargk, uarg, sizeof(devargk)))
+ return -EFAULT;
+
+ /* At present, only VFIO is supported */
+ if (devargk.type != MSHV_DEV_TYPE_VFIO)
+ return -ENODEV;
+
+ if (devargk.flags & MSHV_CREATE_DEVICE_TEST)
+ return 0;
+
+ /* This is freed later by mshv_vfio_release_device() */
+ mshv_dev = kzalloc(sizeof(*mshv_dev), GFP_KERNEL_ACCOUNT);
+ if (mshv_dev == NULL)
+ return -ENOMEM;
+
+ vfio_ops = &mshv_vfio_device_ops;
+ mshv_dev->device_ops = vfio_ops;
+ mshv_dev->device_pt = partition;
+
+ rc = vfio_ops->device_create(mshv_dev);
+ if (rc < 0) {
+ kfree(mshv_dev);
+ return rc;
+ }
+
+ hlist_add_head(&mshv_dev->device_ptnode, &partition->pt_devices);
+
+ mshv_partition_get(partition);
+ rc = anon_inode_getfd(vfio_ops->device_name, &mshv_device_fops,
+ mshv_dev, O_RDWR | O_CLOEXEC);
+ if (rc < 0)
+ goto undo_out;
+
+ devargk.fd = rc;
+ if (copy_to_user(uarg, &devargk, sizeof(devargk)))
+ return -EFAULT; /* cleanup in mshv_device_fop_release() */
+
+ return 0;
+
+undo_out:
+ hlist_del(&mshv_dev->device_ptnode);
+ vfio_ops->device_release(mshv_dev); /* will kfree(mshv_dev) */
+ mshv_partition_put(partition);
+ return rc;
+}
+
static long
mshv_partition_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
{
@@ -1661,6 +1774,9 @@ mshv_partition_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
case MSHV_ROOT_HVCALL:
ret = mshv_ioctl_passthru_hvcall(partition, true, uarg);
break;
+ case MSHV_CREATE_DEVICE:
+ ret = mshv_partition_ioctl_create_device(partition, uarg);
+ break;
default:
ret = -ENOTTY;
}
--
2.51.2.vfs.0.1
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH V1 08/13] PCI: hv: rename hv_compose_msi_msg to hv_vmbus_compose_msi_msg
2026-04-22 2:32 [PATCH V1 00/13] PCI passthru on Hyper-V (Part I) Mukesh R
` (6 preceding siblings ...)
2026-04-22 2:32 ` [PATCH V1 07/13] mshv: Add ioctl support for MSHV-VFIO bridge device Mukesh R
@ 2026-04-22 2:32 ` Mukesh R
2026-04-22 2:32 ` [PATCH V1 09/13] mshv: Import data structs around device passthru from hyperv headers Mukesh R
` (4 subsequent siblings)
12 siblings, 0 replies; 14+ messages in thread
From: Mukesh R @ 2026-04-22 2:32 UTC (permalink / raw)
To: hpa, robin.murphy, robh, wei.liu, mrathor, mhklinux, muislam,
namjain, magnuskulke, anbelski, linux-kernel, linux-hyperv, iommu,
linux-pci, linux-arch
Cc: kys, haiyangz, decui, longli, tglx, mingo, bp, dave.hansen, x86,
joro, will, lpieralisi, kwilczynski, bhelgaas, arnd
The main change here is to rename hv_compose_msi_msg to
hv_vmbus_compose_msi_msg, since upcoming patches introduce an
hv_compose_msi_msg that builds MSI messages for both the VMBus and
non-VMBus cases; VMBus is not used on the baremetal root partition, for
example. While at it, replace spaces with tabs and fix some formatting
involving excessive line wraps.
There is no functional change.
Signed-off-by: Mukesh R <mrathor@linux.microsoft.com>
---
drivers/pci/controller/pci-hyperv.c | 95 +++++++++++++++--------------
1 file changed, 48 insertions(+), 47 deletions(-)
diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
index cfc8fa403dad..ed6b399afc80 100644
--- a/drivers/pci/controller/pci-hyperv.c
+++ b/drivers/pci/controller/pci-hyperv.c
@@ -30,7 +30,7 @@
* function's configuration space is zero.
*
* The rest of this driver mostly maps PCI concepts onto underlying Hyper-V
- * facilities. For instance, the configuration space of a function exposed
+ * facilities. For instance, the configuration space of a function exposed
* by Hyper-V is mapped into a single page of memory space, and the
* read and write handlers for config space must be aware of this mechanism.
* Similarly, device setup and teardown involves messages sent to and from
@@ -109,33 +109,33 @@ enum pci_message_type {
/*
* Version 1.1
*/
- PCI_MESSAGE_BASE = 0x42490000,
- PCI_BUS_RELATIONS = PCI_MESSAGE_BASE + 0,
- PCI_QUERY_BUS_RELATIONS = PCI_MESSAGE_BASE + 1,
- PCI_POWER_STATE_CHANGE = PCI_MESSAGE_BASE + 4,
+ PCI_MESSAGE_BASE = 0x42490000,
+ PCI_BUS_RELATIONS = PCI_MESSAGE_BASE + 0,
+ PCI_QUERY_BUS_RELATIONS = PCI_MESSAGE_BASE + 1,
+ PCI_POWER_STATE_CHANGE = PCI_MESSAGE_BASE + 4,
PCI_QUERY_RESOURCE_REQUIREMENTS = PCI_MESSAGE_BASE + 5,
- PCI_QUERY_RESOURCE_RESOURCES = PCI_MESSAGE_BASE + 6,
- PCI_BUS_D0ENTRY = PCI_MESSAGE_BASE + 7,
- PCI_BUS_D0EXIT = PCI_MESSAGE_BASE + 8,
- PCI_READ_BLOCK = PCI_MESSAGE_BASE + 9,
- PCI_WRITE_BLOCK = PCI_MESSAGE_BASE + 0xA,
- PCI_EJECT = PCI_MESSAGE_BASE + 0xB,
- PCI_QUERY_STOP = PCI_MESSAGE_BASE + 0xC,
- PCI_REENABLE = PCI_MESSAGE_BASE + 0xD,
- PCI_QUERY_STOP_FAILED = PCI_MESSAGE_BASE + 0xE,
- PCI_EJECTION_COMPLETE = PCI_MESSAGE_BASE + 0xF,
- PCI_RESOURCES_ASSIGNED = PCI_MESSAGE_BASE + 0x10,
- PCI_RESOURCES_RELEASED = PCI_MESSAGE_BASE + 0x11,
- PCI_INVALIDATE_BLOCK = PCI_MESSAGE_BASE + 0x12,
- PCI_QUERY_PROTOCOL_VERSION = PCI_MESSAGE_BASE + 0x13,
- PCI_CREATE_INTERRUPT_MESSAGE = PCI_MESSAGE_BASE + 0x14,
- PCI_DELETE_INTERRUPT_MESSAGE = PCI_MESSAGE_BASE + 0x15,
+ PCI_QUERY_RESOURCE_RESOURCES = PCI_MESSAGE_BASE + 6,
+ PCI_BUS_D0ENTRY = PCI_MESSAGE_BASE + 7,
+ PCI_BUS_D0EXIT = PCI_MESSAGE_BASE + 8,
+ PCI_READ_BLOCK = PCI_MESSAGE_BASE + 9,
+ PCI_WRITE_BLOCK = PCI_MESSAGE_BASE + 0xA,
+ PCI_EJECT = PCI_MESSAGE_BASE + 0xB,
+ PCI_QUERY_STOP = PCI_MESSAGE_BASE + 0xC,
+ PCI_REENABLE = PCI_MESSAGE_BASE + 0xD,
+ PCI_QUERY_STOP_FAILED = PCI_MESSAGE_BASE + 0xE,
+ PCI_EJECTION_COMPLETE = PCI_MESSAGE_BASE + 0xF,
+ PCI_RESOURCES_ASSIGNED = PCI_MESSAGE_BASE + 0x10,
+ PCI_RESOURCES_RELEASED = PCI_MESSAGE_BASE + 0x11,
+ PCI_INVALIDATE_BLOCK = PCI_MESSAGE_BASE + 0x12,
+ PCI_QUERY_PROTOCOL_VERSION = PCI_MESSAGE_BASE + 0x13,
+ PCI_CREATE_INTERRUPT_MESSAGE = PCI_MESSAGE_BASE + 0x14,
+ PCI_DELETE_INTERRUPT_MESSAGE = PCI_MESSAGE_BASE + 0x15,
PCI_RESOURCES_ASSIGNED2 = PCI_MESSAGE_BASE + 0x16,
PCI_CREATE_INTERRUPT_MESSAGE2 = PCI_MESSAGE_BASE + 0x17,
PCI_DELETE_INTERRUPT_MESSAGE2 = PCI_MESSAGE_BASE + 0x18, /* unused */
PCI_BUS_RELATIONS2 = PCI_MESSAGE_BASE + 0x19,
- PCI_RESOURCES_ASSIGNED3 = PCI_MESSAGE_BASE + 0x1A,
- PCI_CREATE_INTERRUPT_MESSAGE3 = PCI_MESSAGE_BASE + 0x1B,
+ PCI_RESOURCES_ASSIGNED3 = PCI_MESSAGE_BASE + 0x1A,
+ PCI_CREATE_INTERRUPT_MESSAGE3 = PCI_MESSAGE_BASE + 0x1B,
PCI_MESSAGE_MAXIMUM
};
@@ -1774,20 +1774,21 @@ static u32 hv_compose_msi_req_v1(
* via the HVCALL_RETARGET_INTERRUPT hypercall. But the choice of dummy vCPU is
* not irrelevant because Hyper-V chooses the physical CPU to handle the
* interrupts based on the vCPU specified in message sent to the vPCI VSP in
- * hv_compose_msi_msg(). Hyper-V's choice of pCPU is not visible to the guest,
- * but assigning too many vPCI device interrupts to the same pCPU can cause a
- * performance bottleneck. So we spread out the dummy vCPUs to influence Hyper-V
- * to spread out the pCPUs that it selects.
+ * hv_vmbus_compose_msi_msg(). Hyper-V's choice of pCPU is not visible to the
+ * guest, but assigning too many vPCI device interrupts to the same pCPU can
+ * cause a performance bottleneck. So we spread out the dummy vCPUs to influence
+ * Hyper-V to spread out the pCPUs that it selects.
*
* For the single-MSI and MSI-X cases, it's OK for hv_compose_msi_req_get_cpu()
* to always return the same dummy vCPU, because a second call to
- * hv_compose_msi_msg() contains the "real" vCPU, causing Hyper-V to choose a
- * new pCPU for the interrupt. But for the multi-MSI case, the second call to
- * hv_compose_msi_msg() exits without sending a message to the vPCI VSP, so the
- * original dummy vCPU is used. This dummy vCPU must be round-robin'ed so that
- * the pCPUs are spread out. All interrupts for a multi-MSI device end up using
- * the same pCPU, even though the vCPUs will be spread out by later calls
- * to hv_irq_unmask(), but that is the best we can do now.
+ * hv_vmbus_compose_msi_msg() contains the "real" vCPU, causing Hyper-V to
+ * choose a new pCPU for the interrupt. But for the multi-MSI case, the second
+ * call to hv_vmbus_compose_msi_msg() exits without sending a message to the
+ * vPCI VSP, so the original dummy vCPU is used. This dummy vCPU must be
+ * round-robin'ed so that the pCPUs are spread out. All interrupts for a
+ * multi-MSI device end up using the same pCPU, even though the vCPUs will be
+ * spread out by later calls to hv_irq_unmask(), but that is the best we can do
+ * now.
*
* With Hyper-V in Nov 2022, the HVCALL_RETARGET_INTERRUPT hypercall does *not*
* cause Hyper-V to reselect the pCPU based on the specified vCPU. Such an
@@ -1862,7 +1863,7 @@ static u32 hv_compose_msi_req_v3(
}
/**
- * hv_compose_msi_msg() - Supplies a valid MSI address/data
+ * hv_vmbus_compose_msi_msg() - Supplies a valid MSI address/data
* @data: Everything about this MSI
* @msg: Buffer that is filled in by this function
*
@@ -1872,7 +1873,7 @@ static u32 hv_compose_msi_req_v3(
* response supplies a data value and address to which that data
* should be written to trigger that interrupt.
*/
-static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
+static void hv_vmbus_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
{
struct hv_pcibus_device *hbus;
struct vmbus_channel *channel;
@@ -1954,7 +1955,7 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
return;
}
/*
- * The vector we select here is a dummy value. The correct
+ * The vector we select here is a dummy value. The correct
* value gets sent to the hypervisor in unmask(). This needs
* to be aligned with the count, and also not zero. Multi-msi
* is powers of 2 up to 32, so 32 will always work here.
@@ -2046,7 +2047,7 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
/*
* Make sure that the ring buffer data structure doesn't get
- * freed while we dereference the ring buffer pointer. Test
+ * freed while we dereference the ring buffer pointer. Test
* for the channel's onchannel_callback being NULL within a
* sched_lock critical section. See also the inline comments
* in vmbus_reset_channel_cb().
@@ -2146,7 +2147,7 @@ static const struct msi_parent_ops hv_pcie_msi_parent_ops = {
/* HW Interrupt Chip Descriptor */
static struct irq_chip hv_msi_irq_chip = {
.name = "Hyper-V PCIe MSI",
- .irq_compose_msi_msg = hv_compose_msi_msg,
+ .irq_compose_msi_msg = hv_vmbus_compose_msi_msg,
.irq_set_affinity = irq_chip_set_affinity_parent,
.irq_ack = irq_chip_ack_parent,
.irq_eoi = irq_chip_eoi_parent,
@@ -2158,8 +2159,8 @@ static int hv_pcie_domain_alloc(struct irq_domain *d, unsigned int virq, unsigne
void *arg)
{
/*
- * TODO: Allocating and populating struct tran_int_desc in hv_compose_msi_msg()
- * should be moved here.
+ * TODO: Allocating and populating struct tran_int_desc in
+ * hv_vmbus_compose_msi_msg() should be moved here.
*/
int ret;
@@ -2226,7 +2227,7 @@ static int hv_pcie_init_irq_domain(struct hv_pcibus_device *hbus)
/**
* get_bar_size() - Get the address space consumed by a BAR
* @bar_val: Value that a BAR returned after -1 was written
- * to it.
+ * to it.
*
* This function returns the size of the BAR, rounded up to 1
* page. It has to be rounded up because the hypervisor's page
@@ -2580,7 +2581,7 @@ static void q_resource_requirements(void *context, struct pci_response *resp,
* new_pcichild_device() - Create a new child device
* @hbus: The internal struct tracking this root PCI bus.
* @desc: The information supplied so far from the host
- * about the device.
+ * about the device.
*
* This function creates the tracking structure for a new child
* device and kicks off the process of figuring out what it is.
@@ -3105,7 +3106,7 @@ static void hv_pci_onchannelcallback(void *context)
* sure that the packet pointer is still valid during the call:
* here 'valid' means that there's a task still waiting for the
* completion, and that the packet data is still on the waiting
- * task's stack. Cf. hv_compose_msi_msg().
+ * task's stack. Cf. hv_vmbus_compose_msi_msg().
*/
comp_packet->completion_func(comp_packet->compl_ctxt,
response,
@@ -3422,7 +3423,7 @@ static int hv_allocate_config_window(struct hv_pcibus_device *hbus)
* vmbus_allocate_mmio() gets used for allocating both device endpoint
* resource claims (those which cannot be overlapped) and the ranges
* which are valid for the children of this bus, which are intended
- * to be overlapped by those children. Set the flag on this claim
+ * to be overlapped by those children. Set the flag on this claim
* meaning that this region can't be overlapped.
*/
@@ -4069,7 +4070,7 @@ static int hv_pci_restore_msi_msg(struct pci_dev *pdev, void *arg)
irq_data = irq_get_irq_data(entry->irq);
if (WARN_ON_ONCE(!irq_data))
return -EINVAL;
- hv_compose_msi_msg(irq_data, &entry->msg);
+ hv_vmbus_compose_msi_msg(irq_data, &entry->msg);
}
return 0;
}
@@ -4077,7 +4078,7 @@ static int hv_pci_restore_msi_msg(struct pci_dev *pdev, void *arg)
/*
* Upon resume, pci_restore_msi_state() -> ... -> __pci_write_msi_msg()
* directly writes the MSI/MSI-X registers via MMIO, but since Hyper-V
- * doesn't trap and emulate the MMIO accesses, here hv_compose_msi_msg()
+ * doesn't trap and emulate the MMIO accesses, here hv_vmbus_compose_msi_msg()
* must be used to ask Hyper-V to re-create the IOMMU Interrupt Remapping
* Table entries.
*/
--
2.51.2.vfs.0.1
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH V1 09/13] mshv: Import data structs around device passthru from hyperv headers
2026-04-22 2:32 [PATCH V1 00/13] PCI passthru on Hyper-V (Part I) Mukesh R
` (7 preceding siblings ...)
2026-04-22 2:32 ` [PATCH V1 08/13] PCI: hv: rename hv_compose_msi_msg to hv_vmbus_compose_msi_msg Mukesh R
@ 2026-04-22 2:32 ` Mukesh R
2026-04-22 2:32 ` [PATCH V1 10/13] PCI: hv: Build device id for a VMBus device, export PCI devid function Mukesh R
` (3 subsequent siblings)
12 siblings, 0 replies; 14+ messages in thread
From: Mukesh R @ 2026-04-22 2:32 UTC (permalink / raw)
To: hpa, robin.murphy, robh, wei.liu, mrathor, mhklinux, muislam,
namjain, magnuskulke, anbelski, linux-kernel, linux-hyperv, iommu,
linux-pci, linux-arch
Cc: kys, haiyangz, decui, longli, tglx, mingo, bp, dave.hansen, x86,
joro, will, lpieralisi, kwilczynski, bhelgaas, arnd
Copy/import from Hyper-V public headers, definitions and declarations that
are related to attaching and detaching of device domains, and building
device ids for those purposes.
Signed-off-by: Mukesh R <mrathor@linux.microsoft.com>
---
include/hyperv/hvgdk_mini.h | 11 ++++
include/hyperv/hvhdk_mini.h | 112 ++++++++++++++++++++++++++++++++++++
2 files changed, 123 insertions(+)
diff --git a/include/hyperv/hvgdk_mini.h b/include/hyperv/hvgdk_mini.h
index 6a4e8b9d570f..da622fb06440 100644
--- a/include/hyperv/hvgdk_mini.h
+++ b/include/hyperv/hvgdk_mini.h
@@ -326,6 +326,9 @@ union hv_hypervisor_version_info {
/* stimer Direct Mode is available */
#define HV_STIMER_DIRECT_MODE_AVAILABLE BIT(19)
+#define HV_DEVICE_DOMAIN_AVAILABLE BIT(24)
+#define HV_S1_DEVICE_DOMAIN_AVAILABLE BIT(25)
+
/*
* Implementation recommendations. Indicates which behaviors the hypervisor
* recommends the OS implement for optimal performance.
@@ -475,6 +478,8 @@ union hv_vp_assist_msr_contents { /* HV_REGISTER_VP_ASSIST_PAGE */
#define HVCALL_MAP_DEVICE_INTERRUPT 0x007c
#define HVCALL_UNMAP_DEVICE_INTERRUPT 0x007d
#define HVCALL_RETARGET_INTERRUPT 0x007e
+#define HVCALL_ATTACH_DEVICE 0x0082
+#define HVCALL_DETACH_DEVICE 0x0083
#define HVCALL_NOTIFY_PARTITION_EVENT 0x0087
#define HVCALL_ENTER_SLEEP_STATE 0x0084
#define HVCALL_NOTIFY_PORT_RING_EMPTY 0x008b
@@ -486,9 +491,15 @@ union hv_vp_assist_msr_contents { /* HV_REGISTER_VP_ASSIST_PAGE */
#define HVCALL_GET_VP_INDEX_FROM_APIC_ID 0x009a
#define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_SPACE 0x00af
#define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_LIST 0x00b0
+#define HVCALL_CREATE_DEVICE_DOMAIN 0x00b1
+#define HVCALL_ATTACH_DEVICE_DOMAIN 0x00b2
+#define HVCALL_MAP_DEVICE_GPA_PAGES 0x00b3
+#define HVCALL_UNMAP_DEVICE_GPA_PAGES 0x00b4
#define HVCALL_SIGNAL_EVENT_DIRECT 0x00c0
#define HVCALL_POST_MESSAGE_DIRECT 0x00c1
#define HVCALL_DISPATCH_VP 0x00c2
+#define HVCALL_DETACH_DEVICE_DOMAIN 0x00c4
+#define HVCALL_DELETE_DEVICE_DOMAIN 0x00c5
#define HVCALL_GET_GPA_PAGES_ACCESS_STATES 0x00c9
#define HVCALL_ACQUIRE_SPARSE_SPA_PAGE_HOST_ACCESS 0x00d7
#define HVCALL_RELEASE_SPARSE_SPA_PAGE_HOST_ACCESS 0x00d8
diff --git a/include/hyperv/hvhdk_mini.h b/include/hyperv/hvhdk_mini.h
index b4cb2fa26e9b..60425052a799 100644
--- a/include/hyperv/hvhdk_mini.h
+++ b/include/hyperv/hvhdk_mini.h
@@ -468,6 +468,32 @@ struct hv_send_ipi_ex { /* HV_INPUT_SEND_SYNTHETIC_CLUSTER_IPI_EX */
struct hv_vpset vp_set;
} __packed;
+union hv_attdev_flags { /* HV_ATTACH_DEVICE_FLAGS */
+ struct {
+ u32 logical_id : 1;
+ u32 resvd0 : 1;
+ u32 ats_enabled : 1;
+ u32 virt_func : 1;
+ u32 shared_irq_child : 1;
+ u32 virt_dev : 1;
+ u32 ats_supported : 1;
+ u32 small_irt : 1;
+ u32 resvd : 24;
+ } __packed;
+ u32 as_uint32;
+};
+
+union hv_dev_pci_caps { /* HV_DEVICE_PCI_CAPABILITIES */
+ struct {
+ u32 max_pasid_width : 5;
+ u32 invalidate_qdepth : 5;
+ u32 global_inval : 1;
+ u32 prg_response_req : 1;
+ u32 resvd : 20;
+ } __packed;
+ u32 as_uint32;
+};
+
typedef u16 hv_pci_rid; /* HV_PCI_RID */
typedef u16 hv_pci_segment; /* HV_PCI_SEGMENT */
typedef u64 hv_logical_device_id;
@@ -547,4 +573,90 @@ union hv_device_id { /* HV_DEVICE_ID */
} acpi;
} __packed;
+struct hv_input_attach_device { /* HV_INPUT_ATTACH_DEVICE */
+ u64 partition_id;
+ union hv_device_id device_id;
+ union hv_attdev_flags attdev_flags;
+ u8 attdev_vtl;
+ u8 rsvd0;
+ u16 rsvd1;
+ u64 logical_devid;
+ union hv_dev_pci_caps dev_pcicaps;
+ u16 pf_pci_rid;
+ u16 resvd2;
+} __packed;
+
+struct hv_input_detach_device { /* HV_INPUT_DETACH_DEVICE */
+ u64 partition_id;
+ u64 logical_devid;
+} __packed;
+
+
+/* 3 domain types: stage 1, stage 2, and SOC */
+#define HV_DEVICE_DOMAIN_TYPE_S2 0 /* HV_DEVICE_DOMAIN_ID_TYPE_S2 */
+#define HV_DEVICE_DOMAIN_TYPE_S1 1 /* HV_DEVICE_DOMAIN_ID_TYPE_S1 */
+#define HV_DEVICE_DOMAIN_TYPE_SOC 2 /* HV_DEVICE_DOMAIN_ID_TYPE_SOC */
+
+/* ID for stage 2 default domain and NULL domain */
+#define HV_DEVICE_DOMAIN_ID_S2_DEFAULT 0
+#define HV_DEVICE_DOMAIN_ID_S2_NULL 0xFFFFFFFFULL
+
+union hv_device_domain_id {
+ u64 as_uint64;
+ struct {
+ u32 type : 4;
+ u32 reserved : 28;
+ u32 id;
+ };
+} __packed;
+
+struct hv_input_device_domain { /* HV_INPUT_DEVICE_DOMAIN */
+ u64 partition_id;
+ union hv_input_vtl owner_vtl;
+ u8 padding[7];
+ union hv_device_domain_id domain_id;
+} __packed;
+
+union hv_create_device_domain_flags { /* HV_CREATE_DEVICE_DOMAIN_FLAGS */
+ u32 as_uint32;
+ struct {
+ u32 forward_progress_required : 1;
+ u32 inherit_owning_vtl : 1;
+ u32 reserved : 30;
+ } __packed;
+} __packed;
+
+struct hv_input_create_device_domain { /* HV_INPUT_CREATE_DEVICE_DOMAIN */
+ struct hv_input_device_domain device_domain;
+ union hv_create_device_domain_flags create_device_domain_flags;
+} __packed;
+
+struct hv_input_delete_device_domain { /* HV_INPUT_DELETE_DEVICE_DOMAIN */
+ struct hv_input_device_domain device_domain;
+} __packed;
+
+struct hv_input_attach_device_domain { /* HV_INPUT_ATTACH_DEVICE_DOMAIN */
+ struct hv_input_device_domain device_domain;
+ union hv_device_id device_id;
+} __packed;
+
+struct hv_input_detach_device_domain { /* HV_INPUT_DETACH_DEVICE_DOMAIN */
+ u64 partition_id;
+ union hv_device_id device_id;
+} __packed;
+
+struct hv_input_map_device_gpa_pages { /* HV_INPUT_MAP_DEVICE_GPA_PAGES */
+ struct hv_input_device_domain device_domain;
+ union hv_input_vtl target_vtl;
+ u8 padding[3];
+ u32 map_flags;
+ u64 target_device_va_base;
+ u64 gpa_page_list[];
+} __packed;
+
+struct hv_input_unmap_device_gpa_pages { /* HV_INPUT_UNMAP_DEVICE_GPA_PAGES */
+ struct hv_input_device_domain device_domain;
+ u64 target_device_va_base;
+} __packed;
+
#endif /* _HV_HVHDK_MINI_H */
--
2.51.2.vfs.0.1
* [PATCH V1 10/13] PCI: hv: Build device id for a VMBus device, export PCI devid function
From: Mukesh R @ 2026-04-22 2:32 UTC
On Hyper-V, most hypercalls related to PCI passthru, such as those that
map and unmap regions or interrupts, need a device id as a parameter.
This device id refers to that specific device for the lifetime of the
passthru.
An L1VH VM contains only VMBus based devices. A device id for a VMBus
device is slightly different: it is built from the hv_pcibus_device info
to make sure it matches exactly what the hypervisor expects. This VMBus
based device id is needed when attaching devices in an L1VH based guest
VM. Before building it, a check is done to make sure the device is a
valid VMBus device.
In the remaining cases, the PCI device id is used, so also make the PCI
device id build function public.
Signed-off-by: Mukesh R <mrathor@linux.microsoft.com>
---
arch/x86/hyperv/irqdomain.c | 9 +++++----
arch/x86/include/asm/mshyperv.h | 4 ++++
drivers/pci/controller/pci-hyperv.c | 25 +++++++++++++++++++++++++
include/asm-generic/mshyperv.h | 7 +++++++
4 files changed, 41 insertions(+), 4 deletions(-)
diff --git a/arch/x86/hyperv/irqdomain.c b/arch/x86/hyperv/irqdomain.c
index 229f986e08ea..527835b99a70 100644
--- a/arch/x86/hyperv/irqdomain.c
+++ b/arch/x86/hyperv/irqdomain.c
@@ -137,7 +137,7 @@ static int get_rid_cb(struct pci_dev *pdev, u16 alias, void *data)
return 0;
}
-static union hv_device_id hv_build_devid_type_pci(struct pci_dev *pdev)
+u64 hv_build_devid_type_pci(struct pci_dev *pdev)
{
int pos;
union hv_device_id hv_devid;
@@ -197,8 +197,9 @@ static union hv_device_id hv_build_devid_type_pci(struct pci_dev *pdev)
}
out:
- return hv_devid;
+ return hv_devid.as_uint64;
}
+EXPORT_SYMBOL_GPL(hv_build_devid_type_pci);
/*
* hv_map_msi_interrupt() - Map the MSI IRQ in the hypervisor.
@@ -221,7 +222,7 @@ int hv_map_msi_interrupt(struct irq_data *data,
msidesc = irq_data_get_msi_desc(data);
pdev = msi_desc_to_pci_dev(msidesc);
- hv_devid = hv_build_devid_type_pci(pdev);
+ hv_devid.as_uint64 = hv_build_devid_type_pci(pdev);
cpu = cpumask_first(irq_data_get_effective_affinity_mask(data));
return hv_map_interrupt(hv_current_partition_id, hv_devid, false, cpu,
@@ -296,7 +297,7 @@ static int hv_unmap_msi_interrupt(struct pci_dev *pdev,
{
union hv_device_id hv_devid;
- hv_devid = hv_build_devid_type_pci(pdev);
+ hv_devid.as_uint64 = hv_build_devid_type_pci(pdev);
return hv_unmap_interrupt(hv_devid.as_uint64, irq_entry);
}
diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
index f64393e853ee..039e8a986be3 100644
--- a/arch/x86/include/asm/mshyperv.h
+++ b/arch/x86/include/asm/mshyperv.h
@@ -271,6 +271,10 @@ static inline u64 hv_get_non_nested_msr(unsigned int reg) { return 0; }
static inline int hv_apicid_to_vp_index(u32 apic_id) { return -EINVAL; }
#endif /* CONFIG_HYPERV */
+#if IS_ENABLED(CONFIG_HYPERV_IOMMU)
+u64 hv_build_devid_type_pci(struct pci_dev *pdev);
+#endif /* IS_ENABLED(CONFIG_HYPERV_IOMMU) */
+
struct mshv_vtl_cpu_context {
union {
struct {
diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
index ed6b399afc80..8f6b818ee09b 100644
--- a/drivers/pci/controller/pci-hyperv.c
+++ b/drivers/pci/controller/pci-hyperv.c
@@ -578,6 +578,8 @@ static void hv_pci_onchannelcallback(void *context);
#define DELIVERY_MODE APIC_DELIVERY_MODE_FIXED
#define HV_MSI_CHIP_FLAGS MSI_CHIP_FLAG_SET_ACK
+static bool hv_vmbus_pci_device(struct pci_bus *pbus);
+
static int hv_pci_irqchip_init(void)
{
return 0;
@@ -1005,6 +1007,24 @@ static struct irq_domain *hv_pci_get_root_domain(void)
static void hv_arch_irq_unmask(struct irq_data *data) { }
#endif /* CONFIG_ARM64 */
+u64 hv_pci_vmbus_device_id(struct pci_dev *pdev)
+{
+ struct hv_pcibus_device *hbus;
+ struct pci_bus *pbus = pdev->bus;
+
+ if (!hv_vmbus_pci_device(pbus))
+ return 0;
+
+ hbus = container_of(pbus->sysdata, struct hv_pcibus_device, sysdata);
+
+ return (hbus->hdev->dev_instance.b[5] << 24) |
+ (hbus->hdev->dev_instance.b[4] << 16) |
+ (hbus->hdev->dev_instance.b[7] << 8) |
+ (hbus->hdev->dev_instance.b[6] & 0xf8) |
+ PCI_FUNC(pdev->devfn);
+}
+EXPORT_SYMBOL_GPL(hv_pci_vmbus_device_id);
+
/**
* hv_pci_generic_compl() - Invoked for a completion packet
* @context: Set up by the sender of the packet.
@@ -1403,6 +1423,11 @@ static struct pci_ops hv_pcifront_ops = {
.write = hv_pcifront_write_config,
};
+static bool hv_vmbus_pci_device(struct pci_bus *pbus)
+{
+ return pbus->ops == &hv_pcifront_ops;
+}
+
/*
* Paravirtual backchannel
*
diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
index e8cbc4e3f7ad..fe5ddd1c43ff 100644
--- a/include/asm-generic/mshyperv.h
+++ b/include/asm-generic/mshyperv.h
@@ -329,6 +329,13 @@ static inline enum hv_isolation_type hv_get_isolation_type(void)
}
#endif /* CONFIG_HYPERV */
+#if IS_ENABLED(CONFIG_PCI_HYPERV)
+u64 hv_pci_vmbus_device_id(struct pci_dev *pdev);
+#else /* IS_ENABLED(CONFIG_PCI_HYPERV) */
+static inline u64 hv_pci_vmbus_device_id(struct pci_dev *pdev)
+{ return 0; }
+#endif /* IS_ENABLED(CONFIG_PCI_HYPERV) */
+
#if IS_ENABLED(CONFIG_MSHV_ROOT)
static inline bool hv_root_partition(void)
{
--
2.51.2.vfs.0.1
* [PATCH V1 11/13] x86/hyperv: Implement hyperv virtual iommu
From: Mukesh R @ 2026-04-22 2:32 UTC
Add a new file to implement management of device domains, mapping and
unmapping of iommu memory, and other iommu_ops to fit within the VFIO
framework for PCI passthru on Hyper-V running Linux as root or L1VH
parent. This also implements the direct attach mechanism for PCI
passthru, which is likewise made to work within the VFIO framework.
At a high level, during boot the hypervisor creates a default identity
domain and attaches all devices to it. This nicely maps to Linux iommu
subsystem IOMMU_DOMAIN_IDENTITY domain. As a result, Linux does not
need to explicitly ask Hyper-V to attach devices and do maps/unmaps
during boot. As mentioned previously, Hyper-V supports two ways to do
PCI passthru:
1. Device Domain: root must create a device domain in the hypervisor,
and do map/unmap hypercalls for mapping and unmapping guest RAM.
All hypervisor communications use device id of type PCI for
identifying and referencing the device.
2. Direct Attach: the hypervisor will simply use the guest's HW
page table for mappings, thus the host need not do map/unmap
device memory hypercalls. As such, direct attach passthru setup
during guest boot is extremely fast. A direct attached device
must be referenced via logical device id and not via the PCI
device id.
At present, the L1VH root/parent only supports direct attach. Direct
attach is also the default in non-L1VH cases because the device domain
implementation currently has significant performance issues for guests
with larger RAM (say, more than 8GB), and that unfortunately cannot be
addressed in the short term.
Co-developed-by: Wei Liu <wei.liu@kernel.org>
Signed-off-by: Wei Liu <wei.liu@kernel.org>
Signed-off-by: Mukesh R <mrathor@linux.microsoft.com>
---
MAINTAINERS | 1 +
arch/x86/kernel/pci-dma.c | 2 +
drivers/iommu/Kconfig | 5 +-
drivers/iommu/Makefile | 1 +
drivers/iommu/hyperv-iommu-root.c | 899 ++++++++++++++++++++++++++++++
include/asm-generic/mshyperv.h | 24 +-
include/linux/hyperv.h | 6 +
7 files changed, 934 insertions(+), 4 deletions(-)
create mode 100644 drivers/iommu/hyperv-iommu-root.c
diff --git a/MAINTAINERS b/MAINTAINERS
index f803a6a38fee..8ae040b89a56 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -11914,6 +11914,7 @@ F: drivers/clocksource/hyperv_timer.c
F: drivers/hid/hid-hyperv.c
F: drivers/hv/
F: drivers/input/serio/hyperv-keyboard.c
+F: drivers/iommu/hyperv-iommu-root.c
F: drivers/iommu/hyperv-irq.c
F: drivers/net/ethernet/microsoft/
F: drivers/net/hyperv/
diff --git a/arch/x86/kernel/pci-dma.c b/arch/x86/kernel/pci-dma.c
index 6267363e0189..cfeee6505e17 100644
--- a/arch/x86/kernel/pci-dma.c
+++ b/arch/x86/kernel/pci-dma.c
@@ -8,6 +8,7 @@
#include <linux/gfp.h>
#include <linux/pci.h>
#include <linux/amd-iommu.h>
+#include <linux/hyperv.h>
#include <asm/proto.h>
#include <asm/dma.h>
@@ -105,6 +106,7 @@ void __init pci_iommu_alloc(void)
gart_iommu_hole_init();
amd_iommu_detect();
detect_intel_iommu();
+ hv_iommu_detect();
swiotlb_init(x86_swiotlb_enable, x86_swiotlb_flags);
}
diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index f86262b11416..7909cf4373a6 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -352,13 +352,12 @@ config MTK_IOMMU_V1
if unsure, say N here.
config HYPERV_IOMMU
- bool "Hyper-V IRQ Handling"
+ bool "Hyper-V IOMMU Unit"
depends on HYPERV && X86
select IOMMU_API
default HYPERV
help
- Stub IOMMU driver to handle IRQs to support Hyper-V Linux
- guest and root partitions.
+	  Hyper-V pseudo IOMMU unit supporting device domain management
+	  and PCI device passthru on root and L1VH parent partitions.
config VIRTIO_IOMMU
tristate "Virtio IOMMU driver"
diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
index 335ea77cced6..296fbc6ca829 100644
--- a/drivers/iommu/Makefile
+++ b/drivers/iommu/Makefile
@@ -31,6 +31,7 @@ obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o
obj-$(CONFIG_FSL_PAMU) += fsl_pamu.o fsl_pamu_domain.o
obj-$(CONFIG_S390_IOMMU) += s390-iommu.o
obj-$(CONFIG_HYPERV) += hyperv-irq.o
+obj-$(CONFIG_HYPERV_IOMMU) += hyperv-iommu-root.o
obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o
obj-$(CONFIG_IOMMU_SVA) += iommu-sva.o
obj-$(CONFIG_IOMMU_IOPF) += io-pgfault.o
diff --git a/drivers/iommu/hyperv-iommu-root.c b/drivers/iommu/hyperv-iommu-root.c
new file mode 100644
index 000000000000..492de5a1cf23
--- /dev/null
+++ b/drivers/iommu/hyperv-iommu-root.c
@@ -0,0 +1,899 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Hyper-V root vIOMMU driver.
+ * Copyright (C) 2026, Microsoft, Inc.
+ */
+
+#include <linux/pci.h>
+#include <linux/dma-map-ops.h>
+#include <linux/interval_tree.h>
+#include <linux/hyperv.h>
+#include "dma-iommu.h"
+#include <asm/iommu.h>
+#include <asm/mshyperv.h>
+
+/* We will not claim these PCI devices, e.g. the hypervisor needs one for its debugger */
+static char *pci_devs_to_skip;
+static int __init hv_iommu_setup_skip(char *str)
+{
+ pci_devs_to_skip = str;
+
+ return 0;
+}
+/* hv_iommu_skip=(SSSS:BB:DD.F)(SSSS:BB:DD.F) */
+__setup("hv_iommu_skip=", hv_iommu_setup_skip);
+
+bool hv_no_attdev; /* disable direct device attach for passthru */
+EXPORT_SYMBOL_GPL(hv_no_attdev);
+static int __init setup_hv_no_attdev(char *str)
+{
+ hv_no_attdev = true;
+ return 0;
+}
+__setup("hv_no_attdev", setup_hv_no_attdev);
+
+/* Iommu device that we export to the world. Hyper-V supports a max of one */
+static struct iommu_device hv_virt_iommu;
+
+struct hv_domain {
+ struct iommu_domain iommu_dom;
+ u32 domid_num; /* as opposed to domain_id.type */
+ bool attached_dom; /* is this direct attached dom? */
+ u64 partid; /* partition id */
+ spinlock_t mappings_lock; /* protects mappings_tree */
+ struct rb_root_cached mappings_tree; /* iova to pa lookup tree */
+};
+
+#define to_hv_domain(d) container_of(d, struct hv_domain, iommu_dom)
+
+struct hv_iommu_mapping {
+ phys_addr_t paddr;
+ struct interval_tree_node iova;
+ u32 flags;
+};
+
+/*
+ * By default, during boot the hypervisor creates one Stage 2 (S2) default
+ * domain. Stage 2 means that the page table is controlled by the hypervisor.
+ * S2 default: access to entire root partition memory. This for us easily
+ * maps to IOMMU_DOMAIN_IDENTITY in the iommu subsystem, and
+ * is called HV_DEVICE_DOMAIN_ID_S2_DEFAULT in the hypervisor.
+ *
+ * Device Management:
+ * There are two ways to manage device attaches to domains:
+ * 1. Domain Attach: A device domain is created in the hypervisor, the
+ * device is attached to this domain, and then memory
+ * ranges are mapped in the map callbacks.
+ * 2. Direct Attach: No need to create a domain in the hypervisor for direct
+ * attached devices. A hypercall is made to tell the
+ * hypervisor to attach the device to a guest. There is
+ * no need for explicit memory mappings because the
+ * hypervisor will just use the guest HW page table.
+ *
+ * Since a direct attach is much faster, it is the default. This can be
+ * changed via hv_no_attdev.
+ *
+ * L1VH: hypervisor only supports direct attach.
+ */
+
+/*
+ * Create dummy domain to correspond to hypervisor prebuilt default identity
+ * domain (dummy because we do not make hypercall to create them).
+ */
+static struct hv_domain hv_def_identity_dom;
+
+static bool hv_special_domain(struct hv_domain *hvdom)
+{
+ return hvdom == &hv_def_identity_dom;
+}
+
+static struct iommu_domain_geometry default_geometry = {
+ .aperture_start = 0,
+ .aperture_end = -1UL,
+ .force_aperture = true,
+};
+
+/*
+ * Since the relevant hypercalls can fit fewer than 512 PFNs in the pfn
+ * array, report 1M max.
+ */
+#define HV_IOMMU_PGSIZES (SZ_4K | SZ_1M)
+
+static u32 unique_id; /* unique numeric id of a new domain */
+
+static void hv_iommu_detach_dev(struct iommu_domain *immdom,
+ struct device *dev);
+static size_t hv_iommu_unmap_pages(struct iommu_domain *immdom, ulong iova,
+ size_t pgsize, size_t pgcount,
+ struct iommu_iotlb_gather *gather);
+
+/*
+ * If the current thread is a VMM thread, return the partition id of the VM it
+ * is managing, else return HV_PARTITION_ID_INVALID.
+ */
+u64 hv_get_current_partid(void)
+{
+ u64 (*fn)(void);
+ u64 ptid;
+
+ fn = symbol_get(mshv_current_partid);
+ if (!fn)
+ return HV_PARTITION_ID_INVALID;
+
+ ptid = fn();
+ symbol_put(mshv_current_partid);
+
+ return ptid;
+}
+EXPORT_SYMBOL_GPL(hv_get_current_partid);
+
+/* If this is a VMM thread, then this domain is for a guest vm */
+static bool hv_curr_thread_is_vmm(void)
+{
+ return hv_get_current_partid() != HV_PARTITION_ID_INVALID;
+}
+
+/* As opposed to some host app like SPDK etc... */
+static bool hv_dom_owner_is_vmm(struct hv_domain *hvdom)
+{
+ return hvdom && hvdom->partid != HV_PARTITION_ID_INVALID;
+}
+
+static bool hv_iommu_capable(struct device *dev, enum iommu_cap cap)
+{
+ switch (cap) {
+ case IOMMU_CAP_CACHE_COHERENCY:
+ return true;
+ default:
+ return false;
+ }
+}
+
+/*
+ * Check if given pci device is a direct attached device. Caller must have
+ * verified pdev is a valid pci device.
+ */
+bool hv_pcidev_is_attached_dev(struct pci_dev *pdev)
+{
+ struct iommu_domain *iommu_domain;
+ struct hv_domain *hvdom;
+ struct device *dev = &pdev->dev;
+
+ iommu_domain = iommu_get_domain_for_dev(dev);
+ if (iommu_domain) {
+ hvdom = to_hv_domain(iommu_domain);
+ return hvdom->attached_dom;
+ }
+
+ return false;
+}
+EXPORT_SYMBOL_GPL(hv_pcidev_is_attached_dev);
+
+bool hv_pcidev_is_pthru_dev(struct pci_dev *pdev)
+{
+ struct device *dev = &pdev->dev;
+ struct hv_domain *hvdom = dev_iommu_priv_get(dev);
+
+ if (hvdom && !hv_special_domain(hvdom))
+ return true;
+
+ return false;
+}
+EXPORT_SYMBOL_GPL(hv_pcidev_is_pthru_dev);
+
+/* Build device id for direct attached devices */
+static u64 hv_build_devid_type_logical(struct pci_dev *pdev)
+{
+ hv_pci_segment segment;
+ union hv_device_id hv_devid;
+ union hv_pci_bdf bdf = {.as_uint16 = 0};
+ u32 rid = PCI_DEVID(pdev->bus->number, pdev->devfn);
+
+ segment = pci_domain_nr(pdev->bus);
+ bdf.bus = PCI_BUS_NUM(rid);
+ bdf.device = PCI_SLOT(rid);
+ bdf.function = PCI_FUNC(rid);
+
+ hv_devid.as_uint64 = 0;
+ hv_devid.device_type = HV_DEVICE_TYPE_LOGICAL;
+ hv_devid.logical.id = (u64)segment << 16 | bdf.as_uint16;
+
+ return hv_devid.as_uint64;
+}
+
+u64 hv_build_devid_oftype(struct pci_dev *pdev, enum hv_device_type type)
+{
+ if (type == HV_DEVICE_TYPE_LOGICAL) {
+ if (hv_l1vh_partition())
+ return hv_pci_vmbus_device_id(pdev);
+ else
+ return hv_build_devid_type_logical(pdev);
+ } else if (type == HV_DEVICE_TYPE_PCI)
+#ifdef CONFIG_X86
+ return hv_build_devid_type_pci(pdev);
+#else
+ return 0;
+#endif
+ return 0;
+}
+EXPORT_SYMBOL_GPL(hv_build_devid_oftype);
+
+/* Create a new device domain in the hypervisor */
+static int hv_iommu_create_hyp_devdom(struct hv_domain *hvdom)
+{
+ u64 status;
+ struct hv_input_device_domain *ddp;
+ struct hv_input_create_device_domain *input;
+ unsigned long flags;
+
+ local_irq_save(flags);
+ input = *this_cpu_ptr(hyperv_pcpu_input_arg);
+ memset(input, 0, sizeof(*input));
+
+ ddp = &input->device_domain;
+ ddp->partition_id = HV_PARTITION_ID_SELF;
+ ddp->domain_id.type = HV_DEVICE_DOMAIN_TYPE_S2;
+ ddp->domain_id.id = hvdom->domid_num;
+
+ input->create_device_domain_flags.forward_progress_required = 1;
+ input->create_device_domain_flags.inherit_owning_vtl = 0;
+
+ status = hv_do_hypercall(HVCALL_CREATE_DEVICE_DOMAIN, input, NULL);
+
+ local_irq_restore(flags);
+
+ if (!hv_result_success(status))
+ hv_status_err(status, "\n");
+
+ return hv_result_to_errno(status);
+}
+
+/* During boot, all devices are attached to this */
+static struct iommu_domain *hv_iommu_domain_alloc_identity(struct device *dev)
+{
+ return &hv_def_identity_dom.iommu_dom;
+}
+
+static struct iommu_domain *hv_iommu_domain_alloc_paging(struct device *dev)
+{
+ struct hv_domain *hvdom;
+ int rc;
+
+ if (hv_l1vh_partition() && !hv_curr_thread_is_vmm()) {
+ pr_err("Hyper-V: l1vh iommu does not support host devices\n");
+ return NULL;
+ }
+
+ hvdom = kzalloc(sizeof(struct hv_domain), GFP_KERNEL);
+ if (hvdom == NULL)
+ return NULL;
+
+ spin_lock_init(&hvdom->mappings_lock);
+ hvdom->mappings_tree = RB_ROOT_CACHED;
+
+ /* Called under iommu group mutex, so single threaded */
+ if (++unique_id == HV_DEVICE_DOMAIN_ID_S2_DEFAULT) /* ie, 0 */
+ goto out_err;
+
+ hvdom->domid_num = unique_id;
+ hvdom->partid = hv_get_current_partid();
+ hvdom->iommu_dom.geometry = default_geometry;
+ hvdom->iommu_dom.pgsize_bitmap = HV_IOMMU_PGSIZES;
+
+ /* For guests, by default we do direct attaches, so no domain in hyp */
+ if (hv_dom_owner_is_vmm(hvdom) && !hv_no_attdev)
+ hvdom->attached_dom = true;
+ else {
+ rc = hv_iommu_create_hyp_devdom(hvdom);
+ if (rc)
+ goto out_err;
+ }
+
+ return &hvdom->iommu_dom;
+
+out_err:
+ unique_id--;
+ kfree(hvdom);
+ return NULL;
+}
+
+static void hv_iommu_domain_free(struct iommu_domain *immdom)
+{
+ struct hv_domain *hvdom = to_hv_domain(immdom);
+ unsigned long flags;
+ u64 status;
+ struct hv_input_delete_device_domain *input;
+
+ if (hv_special_domain(hvdom))
+ return;
+
+ if (!hv_dom_owner_is_vmm(hvdom) || hv_no_attdev) {
+ struct hv_input_device_domain *ddp;
+
+ local_irq_save(flags);
+ input = *this_cpu_ptr(hyperv_pcpu_input_arg);
+ ddp = &input->device_domain;
+ memset(input, 0, sizeof(*input));
+
+ ddp->partition_id = HV_PARTITION_ID_SELF;
+ ddp->domain_id.type = HV_DEVICE_DOMAIN_TYPE_S2;
+ ddp->domain_id.id = hvdom->domid_num;
+
+ status = hv_do_hypercall(HVCALL_DELETE_DEVICE_DOMAIN, input,
+ NULL);
+ local_irq_restore(flags);
+
+ if (!hv_result_success(status))
+ hv_status_err(status, "\n");
+ }
+
+ kfree(hvdom);
+}
+
+/* Attach a device to a domain previously created in the hypervisor */
+static int hv_iommu_att_dev2dom(struct hv_domain *hvdom, struct pci_dev *pdev)
+{
+ unsigned long flags;
+ u64 status;
+ enum hv_device_type dev_type;
+ struct hv_input_attach_device_domain *input;
+
+ local_irq_save(flags);
+ input = *this_cpu_ptr(hyperv_pcpu_input_arg);
+ memset(input, 0, sizeof(*input));
+
+ input->device_domain.partition_id = HV_PARTITION_ID_SELF;
+ input->device_domain.domain_id.type = HV_DEVICE_DOMAIN_TYPE_S2;
+ input->device_domain.domain_id.id = hvdom->domid_num;
+
+ /* NB: Upon guest shutdown, device is re-attached to the default domain
+ * without explicit detach.
+ */
+ if (hv_l1vh_partition())
+ dev_type = HV_DEVICE_TYPE_LOGICAL;
+ else
+ dev_type = HV_DEVICE_TYPE_PCI;
+
+ input->device_id.as_uint64 = hv_build_devid_oftype(pdev, dev_type);
+
+ status = hv_do_hypercall(HVCALL_ATTACH_DEVICE_DOMAIN, input, NULL);
+ local_irq_restore(flags);
+
+ if (!hv_result_success(status))
+ hv_status_err(status, "\n");
+
+ return hv_result_to_errno(status);
+}
+
+/* Caller must have validated that dev is a valid pci dev */
+static int hv_iommu_direct_attach_device(struct pci_dev *pdev, u64 ptid)
+{
+ struct hv_input_attach_device *input;
+ u64 status;
+ int rc;
+ unsigned long flags;
+ union hv_device_id host_devid;
+ enum hv_device_type dev_type;
+
+ if (ptid == HV_PARTITION_ID_INVALID) {
+ pr_err("Hyper-V: Invalid partition id in direct attach\n");
+ return -EINVAL;
+ }
+
+ if (hv_l1vh_partition())
+ dev_type = HV_DEVICE_TYPE_LOGICAL;
+ else
+ dev_type = HV_DEVICE_TYPE_PCI;
+
+ host_devid.as_uint64 = hv_build_devid_oftype(pdev, dev_type);
+
+ do {
+ local_irq_save(flags);
+ input = *this_cpu_ptr(hyperv_pcpu_input_arg);
+ memset(input, 0, sizeof(*input));
+ input->partition_id = ptid;
+ input->device_id = host_devid;
+
+ /* Hypervisor associates logical_id with this device, and in
+ * some hypercalls like retarget interrupts, logical_id must be
+ * used instead of the BDF. It is a required parameter.
+ */
+ input->attdev_flags.logical_id = 1;
+ input->logical_devid =
+ hv_build_devid_oftype(pdev, HV_DEVICE_TYPE_LOGICAL);
+
+ status = hv_do_hypercall(HVCALL_ATTACH_DEVICE, input, NULL);
+ local_irq_restore(flags);
+
+ if (hv_result(status) == HV_STATUS_INSUFFICIENT_MEMORY) {
+ rc = hv_call_deposit_pages(NUMA_NO_NODE, ptid, 1);
+ if (rc)
+ break;
+ }
+ } while (hv_result(status) == HV_STATUS_INSUFFICIENT_MEMORY);
+
+ if (!hv_result_success(status))
+ hv_status_err(status, "\n");
+
+ return hv_result_to_errno(status);
+}
+
+/* Attach a device for passthru to guest VMs, host apps like SPDK, etc */
+static int hv_iommu_attach_dev(struct iommu_domain *immdom, struct device *dev,
+ struct iommu_domain *old)
+{
+ struct pci_dev *pdev;
+ int rc;
+ struct hv_domain *hvdom_new = to_hv_domain(immdom);
+ struct hv_domain *hvdom_prev = dev_iommu_priv_get(dev);
+
+ /* Only allow PCI devices for now */
+ if (!dev_is_pci(dev))
+ return -EINVAL;
+
+ pdev = to_pci_dev(dev);
+
+ if (hv_l1vh_partition() && !hv_special_domain(hvdom_new) &&
+ !hvdom_new->attached_dom)
+ return -EINVAL;
+
+	/* VFIO does not do explicit detach calls, hence check if we need
+	 * to detach first. Also, in case of guest shutdown, it's the VMM
+ * thread that attaches it back to the hv_def_identity_dom, and
+ * hvdom_prev will not be null then. It is null during boot.
+ */
+ if (hvdom_prev)
+ if (!hv_l1vh_partition() || !hv_special_domain(hvdom_prev))
+ hv_iommu_detach_dev(&hvdom_prev->iommu_dom, dev);
+
+ if (hv_l1vh_partition() && hv_special_domain(hvdom_new)) {
+ dev_iommu_priv_set(dev, hvdom_new); /* sets "private" field */
+ return 0;
+ }
+
+ if (hvdom_new->attached_dom)
+ rc = hv_iommu_direct_attach_device(pdev, hvdom_new->partid);
+ else
+ rc = hv_iommu_att_dev2dom(hvdom_new, pdev);
+
+ if (rc == 0)
+ dev_iommu_priv_set(dev, hvdom_new); /* sets "private" field */
+
+ return rc;
+}
+
+static void hv_iommu_det_dev_from_guest(struct pci_dev *pdev, u64 ptid)
+{
+ struct hv_input_detach_device *input;
+ u64 status, log_devid;
+ unsigned long flags;
+
+ log_devid = hv_build_devid_oftype(pdev, HV_DEVICE_TYPE_LOGICAL);
+
+ local_irq_save(flags);
+ input = *this_cpu_ptr(hyperv_pcpu_input_arg);
+ memset(input, 0, sizeof(*input));
+
+ input->partition_id = ptid;
+ input->logical_devid = log_devid;
+ status = hv_do_hypercall(HVCALL_DETACH_DEVICE, input, NULL);
+ local_irq_restore(flags);
+
+ if (!hv_result_success(status))
+ hv_status_err(status, "\n");
+}
+
+static void hv_iommu_det_dev_from_dom(struct pci_dev *pdev)
+{
+ u64 status, devid;
+ unsigned long flags;
+ struct hv_input_detach_device_domain *input;
+
+ devid = hv_build_devid_oftype(pdev, HV_DEVICE_TYPE_PCI);
+
+ local_irq_save(flags);
+ input = *this_cpu_ptr(hyperv_pcpu_input_arg);
+ memset(input, 0, sizeof(*input));
+
+ input->partition_id = HV_PARTITION_ID_SELF;
+ input->device_id.as_uint64 = devid;
+ status = hv_do_hypercall(HVCALL_DETACH_DEVICE_DOMAIN, input, NULL);
+ local_irq_restore(flags);
+
+ if (!hv_result_success(status))
+ hv_status_err(status, "\n");
+}
+
+static void hv_iommu_detach_dev(struct iommu_domain *immdom, struct device *dev)
+{
+ struct pci_dev *pdev;
+ struct hv_domain *hvdom = to_hv_domain(immdom);
+
+ /* See the attach function, only PCI devices for now */
+ if (!dev_is_pci(dev))
+ return;
+
+ pdev = to_pci_dev(dev);
+
+	if (hvdom->attached_dom) {
+		/*
+		 * Do not reset attached_dom, hv_iommu_unmap_pages happens
+		 * next.
+		 */
+		hv_iommu_det_dev_from_guest(pdev, hvdom->partid);
+	} else {
+		hv_iommu_det_dev_from_dom(pdev);
+	}
+}
+
+static int hv_iommu_add_tree_mapping(struct hv_domain *hvdom,
+ unsigned long iova, phys_addr_t paddr,
+ size_t size, u32 flags)
+{
+ unsigned long irqflags;
+ struct hv_iommu_mapping *mapping;
+
+ mapping = kzalloc(sizeof(*mapping), GFP_ATOMIC);
+ if (!mapping)
+ return -ENOMEM;
+
+ mapping->paddr = paddr;
+ mapping->iova.start = iova;
+ mapping->iova.last = iova + size - 1;
+ mapping->flags = flags;
+
+ spin_lock_irqsave(&hvdom->mappings_lock, irqflags);
+ interval_tree_insert(&mapping->iova, &hvdom->mappings_tree);
+ spin_unlock_irqrestore(&hvdom->mappings_lock, irqflags);
+
+ return 0;
+}
+
+static size_t hv_iommu_del_tree_mappings(struct hv_domain *hvdom,
+ unsigned long iova, size_t size)
+{
+ unsigned long flags;
+ size_t unmapped = 0;
+ unsigned long last = iova + size - 1;
+ struct hv_iommu_mapping *mapping = NULL;
+ struct interval_tree_node *node, *next;
+
+ spin_lock_irqsave(&hvdom->mappings_lock, flags);
+ next = interval_tree_iter_first(&hvdom->mappings_tree, iova, last);
+ while (next) {
+ node = next;
+ mapping = container_of(node, struct hv_iommu_mapping, iova);
+ next = interval_tree_iter_next(node, iova, last);
+
+ /* Trying to split a mapping? Not supported for now. */
+ if (mapping->iova.start < iova)
+ break;
+
+ unmapped += mapping->iova.last - mapping->iova.start + 1;
+
+ interval_tree_remove(node, &hvdom->mappings_tree);
+ kfree(mapping);
+ }
+ spin_unlock_irqrestore(&hvdom->mappings_lock, flags);
+
+ return unmapped;
+}
+
+/* Return: must return exact status from the hypercall without changes */
+static u64 hv_iommu_map_pgs(struct hv_domain *hvdom,
+ unsigned long iova, phys_addr_t paddr,
+ unsigned long npages, u32 map_flags)
+{
+ u64 status;
+ int i;
+ struct hv_input_map_device_gpa_pages *input;
+ unsigned long flags, pfn;
+
+ local_irq_save(flags);
+ input = *this_cpu_ptr(hyperv_pcpu_input_arg);
+ memset(input, 0, sizeof(*input));
+
+ input->device_domain.partition_id = HV_PARTITION_ID_SELF;
+ input->device_domain.domain_id.type = HV_DEVICE_DOMAIN_TYPE_S2;
+ input->device_domain.domain_id.id = hvdom->domid_num;
+ input->map_flags = map_flags;
+ input->target_device_va_base = iova;
+
+ pfn = paddr >> HV_HYP_PAGE_SHIFT;
+ for (i = 0; i < npages; i++, pfn++)
+ input->gpa_page_list[i] = pfn;
+
+ status = hv_do_rep_hypercall(HVCALL_MAP_DEVICE_GPA_PAGES, npages, 0,
+ input, NULL);
+
+ local_irq_restore(flags);
+ return status;
+}
+
+/*
+ * The core VFIO code loops over memory ranges calling this function with
+ * the largest size from HV_IOMMU_PGSIZES. cond_resched() is in vfio_iommu_map.
+ */
+static int hv_iommu_map_pages(struct iommu_domain *immdom, ulong iova,
+ phys_addr_t paddr, size_t pgsize, size_t pgcount,
+ int prot, gfp_t gfp, size_t *mapped)
+{
+ u32 map_flags;
+ int ret;
+ u64 status;
+ unsigned long npages, done = 0;
+ struct hv_domain *hvdom = to_hv_domain(immdom);
+ size_t size = pgsize * pgcount;
+
+ map_flags = HV_MAP_GPA_READABLE; /* required */
+ map_flags |= prot & IOMMU_WRITE ? HV_MAP_GPA_WRITABLE : 0;
+
+ ret = hv_iommu_add_tree_mapping(hvdom, iova, paddr, size, map_flags);
+ if (ret)
+ return ret;
+
+ if (hvdom->attached_dom) {
+ *mapped = size;
+ return 0;
+ }
+
+ npages = size >> HV_HYP_PAGE_SHIFT;
+ while (done < npages) {
+ ulong completed, remain = npages - done;
+
+ status = hv_iommu_map_pgs(hvdom, iova, paddr, remain,
+ map_flags);
+
+ completed = hv_repcomp(status);
+ done = done + completed;
+ iova = iova + (completed << HV_HYP_PAGE_SHIFT);
+ paddr = paddr + (completed << HV_HYP_PAGE_SHIFT);
+
+ if (hv_result(status) == HV_STATUS_INSUFFICIENT_MEMORY) {
+ ret = hv_call_deposit_pages(NUMA_NO_NODE,
+ hv_current_partition_id,
+ 256);
+ if (ret)
+ break;
+ continue;
+ }
+ if (!hv_result_success(status))
+ break;
+ }
+
+ if (!hv_result_success(status)) {
+ size_t done_size = done << HV_HYP_PAGE_SHIFT;
+
+ hv_status_err(status, "pgs:%lx/%lx iova:%lx\n",
+ done, npages, iova);
+ /*
+ * lookup tree has all mappings [0 - size-1]. Below unmap will
+ * only remove from [0 - done], we need to remove second chunk
+ * [done+1 - size-1].
+ */
+ hv_iommu_del_tree_mappings(hvdom, iova, size - done_size);
+ hv_iommu_unmap_pages(immdom, iova - done_size, HV_HYP_PAGE_SIZE,
+ done, NULL);
+ if (mapped)
+ *mapped = 0;
+ } else
+ if (mapped)
+ *mapped = size;
+
+ return hv_result_to_errno(status);
+}
+
+static size_t hv_iommu_unmap_pages(struct iommu_domain *immdom, ulong iova,
+ size_t pgsize, size_t pgcount,
+ struct iommu_iotlb_gather *gather)
+{
+ unsigned long flags, npages;
+ struct hv_input_unmap_device_gpa_pages *input;
+ u64 status;
+ struct hv_domain *hvdom = to_hv_domain(immdom);
+ size_t unmapped, size = pgsize * pgcount;
+
+ unmapped = hv_iommu_del_tree_mappings(hvdom, iova, size);
+ if (unmapped < size)
+ pr_err("%s: could not delete all mappings (%lx:%lx/%lx)\n",
+ __func__, iova, unmapped, size);
+
+ if (hvdom->attached_dom)
+ return size;
+
+ npages = size >> HV_HYP_PAGE_SHIFT;
+
+ local_irq_save(flags);
+ input = *this_cpu_ptr(hyperv_pcpu_input_arg);
+ memset(input, 0, sizeof(*input));
+
+ input->device_domain.partition_id = HV_PARTITION_ID_SELF;
+ input->device_domain.domain_id.type = HV_DEVICE_DOMAIN_TYPE_S2;
+ input->device_domain.domain_id.id = hvdom->domid_num;
+ input->target_device_va_base = iova;
+
+ status = hv_do_rep_hypercall(HVCALL_UNMAP_DEVICE_GPA_PAGES, npages,
+ 0, input, NULL);
+ local_irq_restore(flags);
+
+ if (!hv_result_success(status))
+ hv_status_err(status, "\n");
+
+ return unmapped;
+}
+
+static phys_addr_t hv_iommu_iova_to_phys(struct iommu_domain *immdom,
+ dma_addr_t iova)
+{
+ unsigned long flags;
+ struct hv_iommu_mapping *mapping;
+ struct interval_tree_node *node;
+ u64 paddr = 0;
+ struct hv_domain *hvdom = to_hv_domain(immdom);
+
+ spin_lock_irqsave(&hvdom->mappings_lock, flags);
+ node = interval_tree_iter_first(&hvdom->mappings_tree, iova, iova);
+ if (node) {
+ mapping = container_of(node, struct hv_iommu_mapping, iova);
+ paddr = mapping->paddr + (iova - mapping->iova.start);
+ }
+ spin_unlock_irqrestore(&hvdom->mappings_lock, flags);
+
+ return paddr;
+}
+
+/*
+ * The hypervisor does not currently provide a way to dynamically
+ * discover the devices it is using, so allow users to manually list
+ * devices that should be skipped (e.g. a network device used by the
+ * hypervisor debugger).
+ */
+static struct iommu_device *hv_iommu_probe_device(struct device *dev)
+{
+ if (!dev_is_pci(dev))
+ return ERR_PTR(-ENODEV);
+
+ if (pci_devs_to_skip && *pci_devs_to_skip) {
+ int rc, pos = 0;
+ int parsed;
+ int segment, bus, slot, func;
+ struct pci_dev *pdev = to_pci_dev(dev);
+
+ do {
+ parsed = 0;
+
+ rc = sscanf(pci_devs_to_skip + pos, " (%x:%x:%x.%x) %n",
+ &segment, &bus, &slot, &func, &parsed);
+			if (rc != 4)
+				break;
+ if (parsed <= 0)
+ break;
+
+ if (pci_domain_nr(pdev->bus) == segment &&
+ pdev->bus->number == bus &&
+ PCI_SLOT(pdev->devfn) == slot &&
+ PCI_FUNC(pdev->devfn) == func) {
+
+ dev_info(dev, "skipped by Hyper-V IOMMU\n");
+ return ERR_PTR(-ENODEV);
+ }
+ pos += parsed;
+
+ } while (pci_devs_to_skip[pos]);
+ }
+
+	/*
+	 * The device will be explicitly attached to the default domain,
+	 * so there is no need to call dev_iommu_priv_set() here.
+	 */
+
+ return &hv_virt_iommu;
+}
+
+static void hv_iommu_probe_finalize(struct device *dev)
+{
+ struct iommu_domain *immdom = iommu_get_domain_for_dev(dev);
+
+ if (immdom && immdom->type == IOMMU_DOMAIN_DMA)
+ iommu_setup_dma_ops(dev, immdom);
+ else
+ set_dma_ops(dev, NULL);
+}
+
+static void hv_iommu_release_device(struct device *dev)
+{
+ struct hv_domain *hvdom = dev_iommu_priv_get(dev);
+
+ /* Need to detach device from device domain if necessary. */
+ if (hvdom)
+ hv_iommu_detach_dev(&hvdom->iommu_dom, dev);
+
+ dev_iommu_priv_set(dev, NULL);
+ set_dma_ops(dev, NULL);
+}
+
+static struct iommu_group *hv_iommu_device_group(struct device *dev)
+{
+ if (dev_is_pci(dev))
+ return pci_device_group(dev);
+ else
+ return generic_device_group(dev);
+}
+
+static int hv_iommu_def_domain_type(struct device *dev)
+{
+ /* The hypervisor always creates this by default during boot */
+ return IOMMU_DOMAIN_IDENTITY;
+}
+
+static struct iommu_ops hv_iommu_ops = {
+ .capable = hv_iommu_capable,
+ .domain_alloc_identity = hv_iommu_domain_alloc_identity,
+ .domain_alloc_paging = hv_iommu_domain_alloc_paging,
+ .probe_device = hv_iommu_probe_device,
+ .probe_finalize = hv_iommu_probe_finalize,
+ .release_device = hv_iommu_release_device,
+ .def_domain_type = hv_iommu_def_domain_type,
+ .device_group = hv_iommu_device_group,
+ .default_domain_ops = &(const struct iommu_domain_ops) {
+ .attach_dev = hv_iommu_attach_dev,
+ .map_pages = hv_iommu_map_pages,
+ .unmap_pages = hv_iommu_unmap_pages,
+ .iova_to_phys = hv_iommu_iova_to_phys,
+ .free = hv_iommu_domain_free,
+ },
+ .owner = THIS_MODULE,
+};
+
+static void __init hv_initialize_special_domains(void)
+{
+ hv_def_identity_dom.iommu_dom.geometry = default_geometry;
+ hv_def_identity_dom.domid_num = HV_DEVICE_DOMAIN_ID_S2_DEFAULT; /* 0 */
+}
+
+static int __init hv_iommu_init(void)
+{
+ int ret;
+ struct iommu_device *iommup = &hv_virt_iommu;
+
+ if (!hv_is_hyperv_initialized())
+ return -ENODEV;
+
+ ret = iommu_device_sysfs_add(iommup, NULL, NULL, "%s", "hyperv-iommu");
+ if (ret) {
+ pr_err("Hyper-V: iommu_device_sysfs_add failed: %d\n", ret);
+ return ret;
+ }
+
+	/*
+	 * This must come before iommu_device_register() because the
+	 * latter calls into the hooks.
+	 */
+ hv_initialize_special_domains();
+
+ ret = iommu_device_register(iommup, &hv_iommu_ops, NULL);
+ if (ret) {
+ pr_err("Hyper-V: iommu_device_register failed: %d\n", ret);
+ goto err_sysfs_remove;
+ }
+
+ pr_info("Hyper-V IOMMU initialized\n");
+
+ return 0;
+
+err_sysfs_remove:
+ iommu_device_sysfs_remove(iommup);
+ return ret;
+}
+
+void __init hv_iommu_detect(void)
+{
+ if (no_iommu || iommu_detected)
+ return;
+
+ /* For l1vh, always expose an iommu unit */
+ if (!hv_l1vh_partition())
+ if (!(ms_hyperv.misc_features & HV_DEVICE_DOMAIN_AVAILABLE))
+ return;
+
+ iommu_detected = 1;
+ x86_init.iommu.iommu_init = hv_iommu_init;
+
+ pci_request_acs();
+}
diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
index fe5ddd1c43ff..edbcfc2a9b60 100644
--- a/include/asm-generic/mshyperv.h
+++ b/include/asm-generic/mshyperv.h
@@ -331,11 +331,33 @@ static inline enum hv_isolation_type hv_get_isolation_type(void)
#if IS_ENABLED(CONFIG_PCI_HYPERV)
u64 hv_pci_vmbus_device_id(struct pci_dev *pdev);
-#else /* IS_ENABLED(CONFIG_PCI_HYPERV) */
+#else /* IS_ENABLED(CONFIG_PCI_HYPERV) */
static inline u64 hv_pci_vmbus_device_id(struct pci_dev *pdev)
{ return 0; }
#endif /* IS_ENABLED(CONFIG_PCI_HYPERV) */
+#if IS_ENABLED(CONFIG_HYPERV_IOMMU)
+u64 hv_get_current_partid(void);
+bool hv_pcidev_is_attached_dev(struct pci_dev *pdev);
+bool hv_pcidev_is_pthru_dev(struct pci_dev *pdev);
+u64 hv_build_devid_oftype(struct pci_dev *pdev, enum hv_device_type type);
+
+#else /* Stubs below; remove once the arm64 implementation is done */
+
+static inline bool hv_pcidev_is_attached_dev(struct pci_dev *pdev)
+{ return false; }
+
+static inline bool hv_pcidev_is_pthru_dev(struct pci_dev *pdev)
+{ return false; }
+
+static inline u64 hv_build_devid_oftype(struct pci_dev *pdev,
+ enum hv_device_type type)
+{ return 0; }
+
+static inline u64 hv_get_current_partid(void)
+{ return HV_PARTITION_ID_INVALID; }
+#endif /* IS_ENABLED(CONFIG_HYPERV_IOMMU) */
+
#if IS_ENABLED(CONFIG_MSHV_ROOT)
static inline bool hv_root_partition(void)
{
diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
index 5459e776ec17..6eee1cbf6f23 100644
--- a/include/linux/hyperv.h
+++ b/include/linux/hyperv.h
@@ -1769,4 +1769,10 @@ static inline unsigned long virt_to_hvpfn(void *addr)
#define HVPFN_DOWN(x) ((x) >> HV_HYP_PAGE_SHIFT)
#define page_to_hvpfn(page) (page_to_pfn(page) * NR_HV_HYP_PAGES_IN_PAGE)
+#ifdef CONFIG_HYPERV_IOMMU
+void __init hv_iommu_detect(void);
+#else
+static inline void hv_iommu_detect(void) { }
+#endif /* CONFIG_HYPERV_IOMMU */
+
#endif /* _HYPERV_H */
--
2.51.2.vfs.0.1
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH V1 12/13] mshv: Populate mmio mappings for PCI passthru
2026-04-22 2:32 [PATCH V1 00/13] PCI passthru on Hyper-V (Part I) Mukesh R
` (10 preceding siblings ...)
2026-04-22 2:32 ` [PATCH V1 11/13] x86/hyperv: Implement hyperv virtual iommu Mukesh R
@ 2026-04-22 2:32 ` Mukesh R
2026-04-22 2:32 ` [PATCH V1 13/13] mshv: pin all ram mem regions if partition has device passthru Mukesh R
12 siblings, 0 replies; 14+ messages in thread
From: Mukesh R @ 2026-04-22 2:32 UTC (permalink / raw)
To: hpa, robin.murphy, robh, wei.liu, mrathor, mhklinux, muislam,
namjain, magnuskulke, anbelski, linux-kernel, linux-hyperv, iommu,
linux-pci, linux-arch
Cc: kys, haiyangz, decui, longli, tglx, mingo, bp, dave.hansen, x86,
joro, will, lpieralisi, kwilczynski, bhelgaas, arnd
When a guest access hits a missing mmio mapping, the hypervisor
generates an unmapped gpa intercept. In this path, look up the PCI
resource pfn for the guest gpa and ask the hypervisor to map it via
hypercall. The PCI resource pfn is maintained by the VFIO driver and
is obtained via fixup_user_fault() (similar to KVM).
Also, VFIO no longer puts the mmio pfn in vma->vm_pgoff, so remove
the code that uses it to map mmio space; it is broken and will cause
a panic.
Signed-off-by: Mukesh R <mrathor@linux.microsoft.com>
---
drivers/hv/mshv_root_main.c | 113 ++++++++++++++++++++++++++++++------
1 file changed, 96 insertions(+), 17 deletions(-)
diff --git a/drivers/hv/mshv_root_main.c b/drivers/hv/mshv_root_main.c
index 6ceb5f608589..a7864463961b 100644
--- a/drivers/hv/mshv_root_main.c
+++ b/drivers/hv/mshv_root_main.c
@@ -46,6 +46,9 @@ MODULE_DESCRIPTION("Microsoft Hyper-V root partition VMM interface /dev/mshv");
#define HV_VP_COUNTER_ROOT_DISPATCH_THREAD_BLOCKED 95
#endif
+static bool hv_nofull_mmio; /* don't map entire mmio region upon fault */
+module_param(hv_nofull_mmio, bool, 0644);
+
struct mshv_root mshv_root;
enum hv_scheduler_type hv_scheduler_type;
@@ -641,6 +644,94 @@ mshv_partition_region_by_gfn_get(struct mshv_partition *p, u64 gfn)
return region;
}
+/*
+ * Check whether uaddr falls in an mmio range. If so, return 0 with
+ * *mmio_pfnp filled in; otherwise return -errno.
+ */
+static int mshv_chk_get_mmio_start_pfn(u64 uaddr, u64 *mmio_pfnp)
+{
+ struct vm_area_struct *vma;
+ bool is_mmio;
+ struct follow_pfnmap_args pfnmap_args;
+ int rc = -EINVAL;
+
+ mmap_read_lock(current->mm);
+ vma = vma_lookup(current->mm, uaddr);
+ is_mmio = vma ? !!(vma->vm_flags & (VM_IO | VM_PFNMAP)) : 0;
+ if (!is_mmio)
+ goto unlock_mmap_out;
+
+ pfnmap_args.vma = vma;
+ pfnmap_args.address = uaddr;
+
+ rc = follow_pfnmap_start(&pfnmap_args);
+ if (rc) {
+ rc = fixup_user_fault(current->mm, uaddr, FAULT_FLAG_WRITE,
+ NULL);
+ if (rc)
+ goto unlock_mmap_out;
+
+ rc = follow_pfnmap_start(&pfnmap_args);
+ if (rc)
+ goto unlock_mmap_out;
+ }
+
+ *mmio_pfnp = pfnmap_args.pfn;
+ follow_pfnmap_end(&pfnmap_args);
+
+unlock_mmap_out:
+ mmap_read_unlock(current->mm);
+ return rc;
+}
+
+/*
+ * Check if the unmapped gpa belongs to mmio space. If yes, resolve it.
+ *
+ * Returns: True if valid mmio intercept and handled, else false.
+ */
+static bool mshv_handle_unmapped_gpa(struct mshv_vp *vp)
+{
+ struct hv_message *hvmsg = vp->vp_intercept_msg_page;
+ u64 gfn, uaddr, mmio_spa, numpgs;
+ struct mshv_mem_region *rg;
+ int rc = -EINVAL;
+ struct mshv_partition *pt = vp->vp_partition;
+#if defined(CONFIG_X86_64)
+ struct hv_x64_memory_intercept_message *msg =
+ (struct hv_x64_memory_intercept_message *)hvmsg->u.payload;
+#elif defined(CONFIG_ARM64)
+ struct hv_arm64_memory_intercept_message *msg =
+ (struct hv_arm64_memory_intercept_message *)hvmsg->u.payload;
+#endif
+
+ gfn = msg->guest_physical_address >> HV_HYP_PAGE_SHIFT;
+
+ rg = mshv_partition_region_by_gfn_get(pt, gfn);
+	if (!rg)
+		return false;
+ if (rg->mreg_type != MSHV_REGION_TYPE_MMIO)
+ goto put_rg_out;
+
+ uaddr = rg->start_uaddr + ((gfn - rg->start_gfn) << HV_HYP_PAGE_SHIFT);
+
+ rc = mshv_chk_get_mmio_start_pfn(uaddr, &mmio_spa);
+ if (rc)
+ goto put_rg_out;
+
+	if (!hv_nofull_mmio) { /* default case */
+		mmio_spa = mmio_spa - (gfn - rg->start_gfn);
+		gfn = rg->start_gfn;
+		numpgs = rg->nr_pages;
+	} else {
+		numpgs = 1;
+	}
+
+ rc = hv_call_map_mmio_pages(pt->pt_id, gfn, mmio_spa, numpgs);
+
+put_rg_out:
+ mshv_region_put(rg);
+ return rc == 0;
+}
+
/**
* mshv_handle_gpa_intercept - Handle GPA (Guest Physical Address) intercepts.
* @vp: Pointer to the virtual processor structure.
@@ -699,6 +790,8 @@ static bool mshv_handle_gpa_intercept(struct mshv_vp *vp)
static bool mshv_vp_handle_intercept(struct mshv_vp *vp)
{
switch (vp->vp_intercept_msg_page->header.message_type) {
+ case HVMSG_UNMAPPED_GPA:
+ return mshv_handle_unmapped_gpa(vp);
case HVMSG_GPA_INTERCEPT:
return mshv_handle_gpa_intercept(vp);
}
@@ -1322,16 +1415,8 @@ static int mshv_prepare_pinned_region(struct mshv_mem_region *region)
}
/*
- * This maps two things: guest RAM and for pci passthru mmio space.
- *
- * mmio:
- * - vfio overloads vm_pgoff to store the mmio start pfn/spa.
- * - Two things need to happen for mapping mmio range:
- * 1. mapped in the uaddr so VMM can access it.
- * 2. mapped in the hwpt (gfn <-> mmio phys addr) so guest can access it.
- *
- * This function takes care of the second. The first one is managed by vfio,
- * and hence is taken care of via vfio_pci_mmap_fault().
+ * This is called for both user ram and mmio space. The mmio space is
+ * not mapped here; it is mapped later, on demand, from the unmapped
+ * gpa intercept.
*/
static long
mshv_map_user_memory(struct mshv_partition *partition,
@@ -1340,7 +1425,6 @@ mshv_map_user_memory(struct mshv_partition *partition,
struct mshv_mem_region *region;
struct vm_area_struct *vma;
bool is_mmio;
- ulong mmio_pfn;
long ret;
if (mem->flags & BIT(MSHV_SET_MEM_BIT_UNMAP) ||
@@ -1350,7 +1434,6 @@ mshv_map_user_memory(struct mshv_partition *partition,
mmap_read_lock(current->mm);
vma = vma_lookup(current->mm, mem->userspace_addr);
is_mmio = vma ? !!(vma->vm_flags & (VM_IO | VM_PFNMAP)) : 0;
- mmio_pfn = is_mmio ? vma->vm_pgoff : 0;
mmap_read_unlock(current->mm);
if (!vma)
@@ -1376,11 +1459,7 @@ mshv_map_user_memory(struct mshv_partition *partition,
region->nr_pages,
HV_MAP_GPA_NO_ACCESS, NULL);
break;
- case MSHV_REGION_TYPE_MMIO:
- ret = hv_call_map_mmio_pages(partition->pt_id,
- region->start_gfn,
- mmio_pfn,
- region->nr_pages);
+ default:
break;
}
--
2.51.2.vfs.0.1
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH V1 13/13] mshv: pin all ram mem regions if partition has device passthru
2026-04-22 2:32 [PATCH V1 00/13] PCI passthru on Hyper-V (Part I) Mukesh R
` (11 preceding siblings ...)
2026-04-22 2:32 ` [PATCH V1 12/13] mshv: Populate mmio mappings for PCI passthru Mukesh R
@ 2026-04-22 2:32 ` Mukesh R
12 siblings, 0 replies; 14+ messages in thread
From: Mukesh R @ 2026-04-22 2:32 UTC (permalink / raw)
To: hpa, robin.murphy, robh, wei.liu, mrathor, mhklinux, muislam,
namjain, magnuskulke, anbelski, linux-kernel, linux-hyperv, iommu,
linux-pci, linux-arch
Cc: kys, haiyangz, decui, longli, tglx, mingo, bp, dave.hansen, x86,
joro, will, lpieralisi, kwilczynski, bhelgaas, arnd
Sporadic errors have been seen, mostly with high-end devices, when a
PCI device is passed through to a VM whose memory regions are not
pinned. For now, force all regions of such a VM to be pinned. Even
though VFIO may pin them, the regions would still be marked movable,
so pin them upfront in mshv.
Signed-off-by: Mukesh R <mrathor@linux.microsoft.com>
---
drivers/hv/mshv_root.h | 6 ++++++
drivers/hv/mshv_root_main.c | 5 ++++-
2 files changed, 10 insertions(+), 1 deletion(-)
diff --git a/drivers/hv/mshv_root.h b/drivers/hv/mshv_root.h
index b9880d0bdc4d..32260df84f86 100644
--- a/drivers/hv/mshv_root.h
+++ b/drivers/hv/mshv_root.h
@@ -141,6 +141,7 @@ struct mshv_partition {
pid_t pt_vmm_tgid;
bool import_completed;
bool pt_initialized;
+ bool pt_regions_pinned;
#if IS_ENABLED(CONFIG_DEBUG_FS)
struct dentry *pt_stats_dentry;
struct dentry *pt_vp_dentry;
@@ -277,6 +278,11 @@ static inline bool mshv_partition_encrypted(struct mshv_partition *partition)
return partition->isolation_type == HV_PARTITION_ISOLATION_TYPE_SNP;
}
+static inline bool mshv_pt_regions_pinned(struct mshv_partition *pt)
+{
+ return pt->pt_regions_pinned || mshv_partition_encrypted(pt);
+}
+
struct mshv_partition *mshv_partition_get(struct mshv_partition *partition);
void mshv_partition_put(struct mshv_partition *partition);
struct mshv_partition *mshv_partition_find(u64 partition_id) __must_hold(RCU);
diff --git a/drivers/hv/mshv_root_main.c b/drivers/hv/mshv_root_main.c
index a7864463961b..251cf88a2b0b 100644
--- a/drivers/hv/mshv_root_main.c
+++ b/drivers/hv/mshv_root_main.c
@@ -1333,7 +1333,7 @@ static int mshv_partition_create_region(struct mshv_partition *partition,
if (is_mmio)
rg->mreg_type = MSHV_REGION_TYPE_MMIO;
- else if (mshv_partition_encrypted(partition) ||
+ else if (mshv_pt_regions_pinned(partition) ||
!mshv_region_movable_init(rg))
rg->mreg_type = MSHV_REGION_TYPE_MEM_PINNED;
else
@@ -1406,6 +1406,9 @@ static int mshv_prepare_pinned_region(struct mshv_mem_region *region)
goto err_out;
}
+ /* For now, all regions must be pinned if there is device passthru. */
+ partition->pt_regions_pinned = true;
+
return 0;
invalidate_region:
--
2.51.2.vfs.0.1