From: sashiko-bot@kernel.org
To: "Yu Zhang" <zhangyu1@linux.microsoft.com>
Cc: linux-pci@vger.kernel.org
Subject: Re: [PATCH v1 3/4] iommu/hyperv: Add para-virtualized IOMMU support for Hyper-V guest
Date: Tue, 12 May 2026 22:30:25 +0000
Message-ID: <20260512223026.072B1C2BCB8@smtp.kernel.org>
In-Reply-To: <20260511162408.1180069-4-zhangyu1@linux.microsoft.com>
Thank you for your contribution! Sashiko AI review found 5 potential issues to consider:
- [High] Sleeping in atomic context on PREEMPT_RT due to spin_lock() inside local_irq_save().
- [High] Global state corruption and dangling pointer on IOMMU initialization failure.
- [Medium] Undefined behavior and potential integer overflow in IOMMU geometry calculation.
- [Medium] IOMMU driver destructively clears generic dma_ops in release_device.
- [Low] Missing ida_destroy() on the error cleanup path in hv_iommu_init().
--
commit d1b032eaf8fbec1a7366bf47848a4dfd4a2d21b6
Author: Yu Zhang <zhangyu1@linux.microsoft.com>
Subject: iommu/hyperv: Add para-virtualized IOMMU support for Hyper-V guest
This patch adds a para-virtualized IOMMU driver for Linux guests on Hyper-V.
It implements stage-1 IO translation via hypercalls and manages x86-compatible
page tables in guest memory.
> diff --git a/drivers/iommu/hyperv/iommu.c b/drivers/iommu/hyperv/iommu.c
> new file mode 100644
> index 0000000000000..e5fc625314b55
> --- /dev/null
> +++ b/drivers/iommu/hyperv/iommu.c
[ ... ]
> @@ -215,14 +230,19 @@ static void hv_iommu_detach_dev(struct iommu_domain *domain, struct device *dev)
[ ... ]
> + local_irq_save(flags);
> +
> + input = *this_cpu_ptr(hyperv_pcpu_input_arg);
> + memset(input, 0, sizeof(*input));
> + input->partition_id = HV_PARTITION_ID_SELF;
> + if (hv_iommu_lookup_logical_dev_id(pdev, &input->device_id.as_uint64)) {
Could this trigger a "sleeping while atomic" BUG on PREEMPT_RT kernels?
With hardware interrupts disabled via local_irq_save(), this code calls
hv_iommu_lookup_logical_dev_id(), which takes:
spin_lock(&hv_iommu_pci_bus_lock);
On PREEMPT_RT, spinlock_t maps to an rt_mutex, which can sleep. Acquiring a
sleepable lock while hardware interrupts are disabled will crash there.
Would it be safer to hoist the ID lookup outside the local_irq_save() block,
since it does not appear to rely on the per-CPU input page state? Similar
logic is also present in hv_iommu_attach_dev() and
hv_iommu_get_logical_device_property().
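If the lookup really is independent of the per-CPU input page, the reordering might look roughly like this (an untested sketch against this patch's unmerged helpers; names are as posted, not verified):

```
	u64 device_id;

	/* Resolve the logical device ID first, while sleeping locks are
	 * still legal (on PREEMPT_RT, hv_iommu_pci_bus_lock may sleep).
	 */
	if (hv_iommu_lookup_logical_dev_id(pdev, &device_id))
		return;

	local_irq_save(flags);

	input = *this_cpu_ptr(hyperv_pcpu_input_arg);
	memset(input, 0, sizeof(*input));
	input->partition_id = HV_PARTITION_ID_SELF;
	input->device_id.as_uint64 = device_id;
```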
[ ... ]
> +static void hv_iommu_release_device(struct device *dev)
> +{
> + struct hv_iommu_endpoint *vdev = dev_iommu_priv_get(dev);
> + struct pci_dev *pdev = to_pci_dev(dev);
> +
> + if (pdev->ats_enabled)
> + pci_disable_ats(pdev);
> +
> + dev_iommu_priv_set(dev, NULL);
> + set_dma_ops(dev, NULL);
Is it safe to explicitly clear the generic dma_ops here?
A device's DMA operations are normally managed by the core DMA mapping
subsystem. Clearing them directly strips the device of fallback operations
such as SWIOTLB during teardown or unbind.
If there are outstanding DMA mappings, could this cause subsequent unmap
calls to run without valid operations, potentially leaking bounce buffers?
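One possible shape for the teardown, leaving dma_ops ownership to the DMA mapping core (a sketch only; if the driver installed custom ops at probe time, what needs restoring here may differ):

```
static void hv_iommu_release_device(struct device *dev)
{
	struct pci_dev *pdev = to_pci_dev(dev);

	if (pdev->ats_enabled)
		pci_disable_ats(pdev);

	dev_iommu_priv_set(dev, NULL);
	/* No set_dma_ops(dev, NULL) here: the DMA mapping core owns the
	 * device's dma_ops, and clearing them can strip the SWIOTLB
	 * fallback while mappings are still outstanding.
	 */
}
```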
[ ... ]
> +static void __init hv_init_iommu_device(struct hv_iommu_dev *hv_iommu,
> + struct hv_output_get_iommu_capabilities *hv_iommu_cap)
> +{
> + ida_init(&hv_iommu->domain_ids);
[ ... ]
> + hv_iommu->geometry = (struct iommu_domain_geometry) {
> + .aperture_start = 0,
> + .aperture_end = (((u64)1) << hv_iommu->max_iova_width) - 1,
Can this shift invoke undefined behavior if the hypervisor returns a
max_iova_width of 64 or greater?
If 5-level paging is supported, there does not appear to be an upper-bound
check on max_iova_width. Shifting a 64-bit value by 64 or more is undefined
in C; on x86 the shift count is masked modulo 64, so 1ULL << 64 evaluates
to 1 and aperture_end becomes 0.
Would it be better to cap max_iova_width at a safe maximum before
performing this calculation?
[ ... ]
> +int __init hv_iommu_init(void)
> +{
[ ... ]
> + iommu_detected = 1;
> + pci_request_acs();
> +
> + hv_iommu = kzalloc_obj(*hv_iommu, GFP_KERNEL);
> + if (!hv_iommu)
> + return -ENOMEM;
> +
> + hv_init_iommu_device(hv_iommu, &hv_iommu_cap);
> +
> + ret = hv_initialize_static_domains();
> + if (ret) {
> + pr_err("static domains init failed: %d\n", ret);
> + goto err_free;
> + }
[ ... ]
> +err_sysfs_remove:
> + iommu_device_sysfs_remove(&hv_iommu->iommu);
> +err_delete_static_domains:
> + hv_delete_device_domain(&hv_blocking_domain);
> + hv_delete_device_domain(&hv_identity_domain);
> +err_free:
> + kfree(hv_iommu);
> + return ret;
> +}
If initialization fails and we jump to err_free, does this leave global state
corrupted?
hv_init_iommu_device() assigns the global pointer hv_iommu_device to
hv_iommu, and iommu_detected has already been set to 1. When hv_iommu is
freed, neither global is reset.
Could this leave a dangling pointer and lead the arch code to believe a
hardware IOMMU is functional, potentially skipping fallback mechanisms such
as SWIOTLB?
Also, should this cleanup path call ida_destroy(&hv_iommu->domain_ids) to
free any internal xarray nodes? (ida_init() itself allocates nothing, but
hv_initialize_static_domains() may have allocated domain IDs before failing.)
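The unwinding might look something like this (a sketch; hv_iommu_device and the label names are taken from the quoted context and the surrounding series, not verified against the full file):

```
err_sysfs_remove:
	iommu_device_sysfs_remove(&hv_iommu->iommu);
err_delete_static_domains:
	hv_delete_device_domain(&hv_blocking_domain);
	hv_delete_device_domain(&hv_identity_domain);
err_free:
	ida_destroy(&hv_iommu->domain_ids);	/* free any cached xarray nodes */
	hv_iommu_device = NULL;			/* don't leave a dangling global */
	iommu_detected = 0;			/* let the arch fall back to SWIOTLB */
	kfree(hv_iommu);
	return ret;
```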
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260511162408.1180069-1-zhangyu1@linux.microsoft.com?part=3
Thread overview:
2026-05-11 16:24 [PATCH v1 0/4] Hyper-V: Add para-virtualized IOMMU support for Linux guests Yu Zhang
2026-05-11 16:24 ` [PATCH v1 1/4] iommu: Move Hyper-V IOMMU driver to its own subdirectory Yu Zhang
2026-05-11 16:24 ` [PATCH v1 2/4] hyperv: Introduce new hypercall interfaces used by Hyper-V guest IOMMU Yu Zhang
2026-05-12 21:24 ` sashiko-bot
2026-05-11 16:24 ` [PATCH v1 3/4] iommu/hyperv: Add para-virtualized IOMMU support for Hyper-V guest Yu Zhang
2026-05-12 22:30 ` sashiko-bot [this message]
2026-05-11 16:24 ` [PATCH v1 4/4] iommu/hyperv: Add page-selective IOTLB flush support Yu Zhang
2026-05-12 23:45 ` sashiko-bot