From: "Cédric Le Goater" <clg@redhat.com>
To: Zhenzhong Duan <zhenzhong.duan@intel.com>, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, eric.auger@redhat.com,
mst@redhat.com, jasowang@redhat.com, peterx@redhat.com,
ddutile@redhat.com, jgg@nvidia.com, nicolinc@nvidia.com,
shameerali.kolothum.thodi@huawei.com, joao.m.martins@oracle.com,
clement.mathieu--drif@eviden.com, kevin.tian@intel.com,
yi.l.liu@intel.com, chao.p.peng@intel.com
Subject: Re: [PATCH v1 1/6] backends/iommufd: Add a helper to invalidate user-managed HWPT
Date: Wed, 28 May 2025 11:59:25 +0200
Message-ID: <538e848b-148a-49f1-bf06-f534ff44bf87@redhat.com>
In-Reply-To: <20250528060409.3710008-2-zhenzhong.duan@intel.com>
Hello Zhenzhong,
On 5/28/25 08:04, Zhenzhong Duan wrote:
> This helper passes cache invalidation request from guest to invalidate
> stage-1 page table cache in host hardware.
>
> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> ---
> include/system/iommufd.h | 4 ++++
> backends/iommufd.c | 33 +++++++++++++++++++++++++++++++++
> backends/trace-events | 1 +
> 3 files changed, 38 insertions(+)
>
> diff --git a/include/system/iommufd.h b/include/system/iommufd.h
> index cbab75bfbf..5399519626 100644
> --- a/include/system/iommufd.h
> +++ b/include/system/iommufd.h
> @@ -61,6 +61,10 @@ bool iommufd_backend_get_dirty_bitmap(IOMMUFDBackend *be, uint32_t hwpt_id,
> uint64_t iova, ram_addr_t size,
> uint64_t page_size, uint64_t *data,
> Error **errp);
> +bool iommufd_backend_invalidate_cache(IOMMUFDBackend *be, uint32_t id,
> + uint32_t data_type, uint32_t entry_len,
> + uint32_t *entry_num, void *data_ptr,
> + Error **errp);
>
> #define TYPE_HOST_IOMMU_DEVICE_IOMMUFD TYPE_HOST_IOMMU_DEVICE "-iommufd"
> #endif
> diff --git a/backends/iommufd.c b/backends/iommufd.c
> index b73f75cd0b..c8788a6438 100644
> --- a/backends/iommufd.c
> +++ b/backends/iommufd.c
> @@ -311,6 +311,39 @@ bool iommufd_backend_get_device_info(IOMMUFDBackend *be, uint32_t devid,
> return true;
> }
>
> +bool iommufd_backend_invalidate_cache(IOMMUFDBackend *be, uint32_t id,
> + uint32_t data_type, uint32_t entry_len,
> + uint32_t *entry_num, void *data_ptr,
> + Error **errp)
> +{
> + int ret, fd = be->fd;
> + uint32_t total_entries = *entry_num;
> + struct iommu_hwpt_invalidate cache = {
> + .size = sizeof(cache),
> + .hwpt_id = id,
> + .data_type = data_type,
> + .entry_len = entry_len,
> + .entry_num = total_entries,
> + .data_uptr = (uintptr_t)data_ptr,
Minor: the other helpers use a 'data' variable name.
> + };
> +
> + ret = ioctl(fd, IOMMU_HWPT_INVALIDATE, &cache);
> + trace_iommufd_backend_invalidate_cache(fd, id, data_type, entry_len,
> + total_entries, cache.entry_num,
> + (uintptr_t)data_ptr,
> + ret ? errno : 0);
> + if (ret) {
> + *entry_num = cache.entry_num;
> + error_setg_errno(errp, errno, "IOMMU_HWPT_INVALIDATE failed:"
> + " totally %d entries, processed %d entries",
> + total_entries, cache.entry_num);
> + } else {
> + g_assert(total_entries == cache.entry_num);
Killing the VMM because a kernel device ioctl failed is brute force.
Can't we update the 'Error *' parameter instead, to report that the
invalidation was only partial or that something else went wrong?
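Something like this (rough and untested, reusing the names from your
patch) would report the problem through 'errp' and leave the decision
to the caller:

    ret = ioctl(fd, IOMMU_HWPT_INVALIDATE, &cache);
    trace_iommufd_backend_invalidate_cache(fd, id, data_type, entry_len,
                                           total_entries, cache.entry_num,
                                           (uintptr_t)data_ptr,
                                           ret ? errno : 0);
    *entry_num = cache.entry_num;
    if (ret) {
        error_setg_errno(errp, errno, "IOMMU_HWPT_INVALIDATE failed:"
                         " total %u entries, processed %u entries",
                         total_entries, cache.entry_num);
        return false;
    }
    if (total_entries != cache.entry_num) {
        /* Partial invalidation: report it instead of aborting the VMM. */
        error_setg(errp, "IOMMU_HWPT_INVALIDATE processed only %u of %u"
                   " entries", cache.entry_num, total_entries);
        return false;
    }
    return true;
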
What kind of errors are we trying to catch?
Looking at the kernel iommufd_hwpt_invalidate() routine and
intel_nested_cache_invalidate_user(), it doesn't seem possible for them
to return a different number of cache entries. Are you anticipating
other implementations (SMMU)?
Thanks,
C.