* [RFC PATCH 00/16] hw/arm: Introduce Tegra241 CMDQV support for accelerated SMMUv3
@ 2025-12-10 13:37 Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 01/16] backends/iommufd: Update iommufd_backend_get_device_info Shameer Kolothum
` (16 more replies)
0 siblings, 17 replies; 19+ messages in thread
From: Shameer Kolothum @ 2025-12-10 13:37 UTC (permalink / raw)
To: qemu-arm, qemu-devel
Cc: eric.auger, peter.maydell, nicolinc, nathanc, mochs, jgg,
jonathan.cameron, zhangfei.gao, zhenzhong.duan, kjaju
Hi,
This RFC series adds initial support for NVIDIA Tegra241 CMDQV
(Command Queue Virtualisation), an extension to ARM SMMUv3 that
provides hardware accelerated virtual command queues (VCMDQs) for
guests. CMDQV allows guests to issue SMMU invalidation commands
directly to hardware without VM exits, significantly reducing TLBI
overhead.
Thanks to Nicolin for the initial patches and testing on which this RFC
is based.
This is based on v6[0] of the SMMUv3 accel series, which is still under
review though nearing convergence. It is sent as an RFC to gather early
feedback on the CMDQV design and its integration with the SMMUv3
acceleration path.
Background:
Tegra241 CMDQV extends SMMUv3 by allocating per-VM "virtual interfaces"
(VINTFs), each hosting up to 128 VCMDQs.
Each VINTF exposes two 64KB MMIO pages:
- Page0 – guest-owned control and status registers (directly mapped
into the VM)
- Page1 – queue configuration registers (trapped/emulated by QEMU)
Unlike the standard SMMU CMDQ, a guest-owned Tegra241 VCMDQ does not
support the full command set. Only a subset, primarily invalidation
related commands, is accepted by the CMDQV hardware. For this reason,
a distinct CMDQV device must be exposed to the guest, and the guest OS
must include a Tegra241 CMDQV-aware driver to take advantage of the
hardware acceleration.
VCMDQ support is integrated via the IOMMU_HW_QUEUE_ALLOC mechanism,
allowing QEMU to attach guest-configured VCMDQ buffers to the
underlying CMDQV hardware through IOMMUFD. The Linux kernel already
supports the full CMDQV virtualisation model via IOMMUFD[1].
Summary of QEMU changes:
- Integrated into the existing SMMUv3 accel path via a
"tegra241-cmdqv" property.
- Support for allocating vIOMMU objects of type
IOMMU_VIOMMU_TYPE_TEGRA241_CMDQV.
- Mapping and emulation of the CMDQV MMIO register layout.
- VCMDQ/VINTF read/write handling and queue allocation using IOMMUFD
APIs.
- Reset and initialisation hooks, including checks for at least one
cold-plugged device.
- The CMDQV hardware reads guest queue memory using host physical
addresses provided through IOMMUFD, which requires the VCMDQ buffer
to be physically contiguous not only in guest PA space but also in
host PA space. When Tegra241 CMDQV is enabled, QEMU must therefore
only expose a queue size that the host can reliably back with
contiguous physical memory. Because of this constraint, backing
guest RAM with huge pages is recommended.
- ACPI DSDT node generation for CMDQV devices on the virt machine.
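As a concrete example of the huge-page backing recommended above, guest
RAM can be placed on hugetlbfs via a file-backed memory backend
(illustrative command line; sizes and paths are placeholders):

```
qemu-system-aarch64 -machine virt,memory-backend=mem0 -m 4G \
  -object memory-backend-file,id=mem0,mem-path=/dev/hugepages,size=4G,share=on
```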
These patches have been sanity tested on NVIDIA Grace platforms.
ToDo / revisit:
- Prevent hot-unplug of the last device associated with the vIOMMU,
as this might allow associating a different host SMMU/CMDQV.
- Locking requirements around error event propagation.
Feedback and testing are very welcome.
Thanks,
Shameer
[0] https://lore.kernel.org/qemu-devel/20251120132213.56581-1-skolothumtho@nvidia.com/
[1] https://lore.kernel.org/all/cover.1752126748.git.nicolinc@nvidia.com/
Nicolin Chen (12):
backends/iommufd: Update iommufd_backend_get_device_info
backends/iommufd: Update iommufd_backend_alloc_viommu to allow user
ptr
backends/iommufd: Introduce iommufd_backend_alloc_hw_queue
backends/iommufd: Introduce iommufd_backend_viommu_mmap
hw/arm/tegra241-cmdqv: Add initial Tegra241 CMDQ-Virtualisation
support
hw/arm/tegra241-cmdqv: Map VINTF Page0 into guest
hw/arm/tegra241-cmdqv: Add read emulation support for registers
system/physmem: Add helper to check whether a guest PA maps to RAM
hw/arm/tegra241-cmdqv: Add write emulation for registers
hw/arm/tegra241-cmdqv: Add reset handler
hw/arm/tegra241-cmdqv: Limit queue size based on backend page size
hw/arm/virt-acpi: Advertise Tegra241 CMDQV nodes in DSDT
Shameer Kolothum (4):
hw/arm/tegra241-cmdqv: Allocate vEVENTQ object
hw/arm/tegra241-cmdqv: Read and propagate Tegra241 CMDQV errors
virt-acpi-build: Rename AcpiIortSMMUv3Dev to AcpiSMMUv3Dev
hw/arm/smmuv3: Add tegra241-cmdqv property for SMMUv3 device
backends/iommufd.c | 65 ++++
backends/trace-events | 2 +
hw/arm/Kconfig | 5 +
hw/arm/meson.build | 1 +
hw/arm/smmuv3-accel.c | 16 +-
hw/arm/smmuv3.c | 18 +
hw/arm/tegra241-cmdqv.c | 759 ++++++++++++++++++++++++++++++++++++++
hw/arm/tegra241-cmdqv.h | 337 +++++++++++++++++
hw/arm/trace-events | 5 +
hw/arm/virt-acpi-build.c | 110 +++++-
hw/vfio/iommufd.c | 6 +-
include/exec/cpu-common.h | 2 +
include/hw/arm/smmuv3.h | 3 +
include/hw/arm/virt.h | 2 +
include/system/iommufd.h | 16 +
system/physmem.c | 12 +
16 files changed, 1332 insertions(+), 27 deletions(-)
create mode 100644 hw/arm/tegra241-cmdqv.c
create mode 100644 hw/arm/tegra241-cmdqv.h
--
2.43.0
^ permalink raw reply [flat|nested] 19+ messages in thread
* [RFC PATCH 01/16] backends/iommufd: Update iommufd_backend_get_device_info
2025-12-10 13:37 [RFC PATCH 00/16] hw/arm: Introduce Tegra241 CMDQV support for accelerated SMMUv3 Shameer Kolothum
@ 2025-12-10 13:37 ` Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 02/16] backends/iommufd: Update iommufd_backend_alloc_viommu to allow user ptr Shameer Kolothum
` (15 subsequent siblings)
16 siblings, 0 replies; 19+ messages in thread
From: Shameer Kolothum @ 2025-12-10 13:37 UTC (permalink / raw)
To: qemu-arm, qemu-devel
Cc: eric.auger, peter.maydell, nicolinc, nathanc, mochs, jgg,
jonathan.cameron, zhangfei.gao, zhenzhong.duan, kjaju
From: Nicolin Chen <nicolinc@nvidia.com>
The updated IOMMUFD uAPI introduces the ability for userspace to request
a specific hardware info data type via IOMMU_GET_HW_INFO. Update
iommufd_backend_get_device_info() to set IOMMU_HW_INFO_FLAG_INPUT_TYPE
when a non-zero type is supplied, and adjust all callers to pass an
explicitly initialised type value.
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Shameer Kolothum <skolothumtho@nvidia.com>
---
backends/iommufd.c | 7 +++++++
hw/arm/smmuv3-accel.c | 2 +-
hw/vfio/iommufd.c | 6 ++----
3 files changed, 10 insertions(+), 5 deletions(-)
diff --git a/backends/iommufd.c b/backends/iommufd.c
index 633aecd525..938c8fe669 100644
--- a/backends/iommufd.c
+++ b/backends/iommufd.c
@@ -386,16 +386,23 @@ bool iommufd_backend_get_dirty_bitmap(IOMMUFDBackend *be,
return true;
}
+/*
+ * @type can carry a desired HW info type defined in the uapi headers. If caller
+ * doesn't have one, indicating it wants the default type, then @type should be
+ * zeroed (i.e. IOMMU_HW_INFO_TYPE_DEFAULT).
+ */
bool iommufd_backend_get_device_info(IOMMUFDBackend *be, uint32_t devid,
uint32_t *type, void *data, uint32_t len,
uint64_t *caps, uint8_t *max_pasid_log2,
Error **errp)
{
struct iommu_hw_info info = {
+ .flags = (*type) ? IOMMU_HW_INFO_FLAG_INPUT_TYPE : 0,
.size = sizeof(info),
.dev_id = devid,
.data_len = len,
.data_uptr = (uintptr_t)data,
+ .in_data_type = *type,
};
if (ioctl(be->fd, IOMMU_GET_HW_INFO, &info)) {
diff --git a/hw/arm/smmuv3-accel.c b/hw/arm/smmuv3-accel.c
index d320c62b04..300c35ccb5 100644
--- a/hw/arm/smmuv3-accel.c
+++ b/hw/arm/smmuv3-accel.c
@@ -115,7 +115,7 @@ smmuv3_accel_hw_compatible(SMMUv3State *s, HostIOMMUDeviceIOMMUFD *idev,
Error **errp)
{
struct iommu_hw_info_arm_smmuv3 info;
- uint32_t data_type;
+ uint32_t data_type = 0;
uint64_t caps;
if (!iommufd_backend_get_device_info(idev->iommufd, idev->devid, &data_type,
diff --git a/hw/vfio/iommufd.c b/hw/vfio/iommufd.c
index bbe944d7cc..670bdfc53b 100644
--- a/hw/vfio/iommufd.c
+++ b/hw/vfio/iommufd.c
@@ -306,7 +306,7 @@ static bool iommufd_cdev_autodomains_get(VFIODevice *vbasedev,
ERRP_GUARD();
IOMMUFDBackend *iommufd = vbasedev->iommufd;
VFIOContainer *bcontainer = VFIO_IOMMU(container);
- uint32_t type, flags = 0;
+ uint32_t type = 0, flags = 0;
uint64_t hw_caps;
VFIOIOASHwpt *hwpt;
uint32_t hwpt_id;
@@ -631,8 +631,6 @@ skip_ioas_alloc:
bcontainer->initialized = true;
found_container:
- vbasedev->cpr.ioas_id = container->ioas_id;
-
ret = ioctl(devfd, VFIO_DEVICE_GET_INFO, &dev_info);
if (ret) {
error_setg_errno(errp, errno, "error getting device info");
@@ -889,7 +887,7 @@ static bool hiod_iommufd_vfio_realize(HostIOMMUDevice *hiod, void *opaque,
HostIOMMUDeviceIOMMUFD *idev;
HostIOMMUDeviceCaps *caps = &hiod->caps;
VendorCaps *vendor_caps = &caps->vendor_caps;
- enum iommu_hw_info_type type;
+ enum iommu_hw_info_type type = 0;
uint8_t max_pasid_log2;
uint64_t hw_caps;
--
2.43.0
* [RFC PATCH 02/16] backends/iommufd: Update iommufd_backend_alloc_viommu to allow user ptr
2025-12-10 13:37 [RFC PATCH 00/16] hw/arm: Introduce Tegra241 CMDQV support for accelerated SMMUv3 Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 01/16] backends/iommufd: Update iommufd_backend_get_device_info Shameer Kolothum
@ 2025-12-10 13:37 ` Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 03/16] backends/iommufd: Introduce iommufd_backend_alloc_hw_queue Shameer Kolothum
` (14 subsequent siblings)
16 siblings, 0 replies; 19+ messages in thread
From: Shameer Kolothum @ 2025-12-10 13:37 UTC (permalink / raw)
To: qemu-arm, qemu-devel
Cc: eric.auger, peter.maydell, nicolinc, nathanc, mochs, jgg,
jonathan.cameron, zhangfei.gao, zhenzhong.duan, kjaju
From: Nicolin Chen <nicolinc@nvidia.com>
The updated IOMMUFD VIOMMU_ALLOC uAPI allows userspace to provide a data
buffer when creating a vIOMMU (e.g. for Tegra241 CMDQV). Extend
iommufd_backend_alloc_viommu() to pass a user pointer and size to the
kernel.
Update the caller accordingly.
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Shameer Kolothum <skolothumtho@nvidia.com>
---
backends/iommufd.c | 3 +++
hw/arm/smmuv3-accel.c | 4 ++--
include/system/iommufd.h | 1 +
3 files changed, 6 insertions(+), 2 deletions(-)
diff --git a/backends/iommufd.c b/backends/iommufd.c
index 938c8fe669..2f6fa832a7 100644
--- a/backends/iommufd.c
+++ b/backends/iommufd.c
@@ -459,6 +459,7 @@ bool iommufd_backend_invalidate_cache(IOMMUFDBackend *be, uint32_t id,
bool iommufd_backend_alloc_viommu(IOMMUFDBackend *be, uint32_t dev_id,
uint32_t viommu_type, uint32_t hwpt_id,
+ void *data_ptr, uint32_t len,
uint32_t *out_viommu_id, Error **errp)
{
int ret;
@@ -467,6 +468,8 @@ bool iommufd_backend_alloc_viommu(IOMMUFDBackend *be, uint32_t dev_id,
.type = viommu_type,
.dev_id = dev_id,
.hwpt_id = hwpt_id,
+ .data_len = len,
+ .data_uptr = (uintptr_t)data_ptr,
};
ret = ioctl(be->fd, IOMMU_VIOMMU_ALLOC, &alloc_viommu);
diff --git a/hw/arm/smmuv3-accel.c b/hw/arm/smmuv3-accel.c
index 300c35ccb5..939898c9b0 100644
--- a/hw/arm/smmuv3-accel.c
+++ b/hw/arm/smmuv3-accel.c
@@ -503,8 +503,8 @@ smmuv3_accel_alloc_viommu(SMMUv3State *s, HostIOMMUDeviceIOMMUFD *idev,
SMMUv3AccelState *accel;
if (!iommufd_backend_alloc_viommu(idev->iommufd, idev->devid,
- IOMMU_VIOMMU_TYPE_ARM_SMMUV3,
- s2_hwpt_id, &viommu_id, errp)) {
+ IOMMU_VIOMMU_TYPE_ARM_SMMUV3, s2_hwpt_id,
+ NULL, 0, &viommu_id, errp)) {
return false;
}
diff --git a/include/system/iommufd.h b/include/system/iommufd.h
index 9770ff1484..a3e8087b3a 100644
--- a/include/system/iommufd.h
+++ b/include/system/iommufd.h
@@ -87,6 +87,7 @@ bool iommufd_backend_alloc_hwpt(IOMMUFDBackend *be, uint32_t dev_id,
Error **errp);
bool iommufd_backend_alloc_viommu(IOMMUFDBackend *be, uint32_t dev_id,
uint32_t viommu_type, uint32_t hwpt_id,
+ void *data_ptr, uint32_t len,
uint32_t *out_hwpt, Error **errp);
bool iommufd_backend_alloc_vdev(IOMMUFDBackend *be, uint32_t dev_id,
--
2.43.0
* [RFC PATCH 03/16] backends/iommufd: Introduce iommufd_backend_alloc_hw_queue
2025-12-10 13:37 [RFC PATCH 00/16] hw/arm: Introduce Tegra241 CMDQV support for accelerated SMMUv3 Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 01/16] backends/iommufd: Update iommufd_backend_get_device_info Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 02/16] backends/iommufd: Update iommufd_backend_alloc_viommu to allow user ptr Shameer Kolothum
@ 2025-12-10 13:37 ` Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 04/16] backends/iommufd: Introduce iommufd_backend_viommu_mmap Shameer Kolothum
` (13 subsequent siblings)
16 siblings, 0 replies; 19+ messages in thread
From: Shameer Kolothum @ 2025-12-10 13:37 UTC (permalink / raw)
To: qemu-arm, qemu-devel
Cc: eric.auger, peter.maydell, nicolinc, nathanc, mochs, jgg,
jonathan.cameron, zhangfei.gao, zhenzhong.duan, kjaju
From: Nicolin Chen <nicolinc@nvidia.com>
Add a helper to allocate an iommufd-backed HW queue for a vIOMMU.
While at it, define a struct IOMMUFDHWqueue for use by vendor
implementations.
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Shameer Kolothum <skolothumtho@nvidia.com>
---
backends/iommufd.c | 31 +++++++++++++++++++++++++++++++
backends/trace-events | 1 +
include/system/iommufd.h | 11 +++++++++++
3 files changed, 43 insertions(+)
diff --git a/backends/iommufd.c b/backends/iommufd.c
index 2f6fa832a7..a644763239 100644
--- a/backends/iommufd.c
+++ b/backends/iommufd.c
@@ -544,6 +544,37 @@ bool iommufd_backend_alloc_veventq(IOMMUFDBackend *be, uint32_t viommu_id,
return true;
}
+bool iommufd_backend_alloc_hw_queue(IOMMUFDBackend *be, uint32_t viommu_id,
+ uint32_t data_type, uint32_t index,
+ uint64_t addr, uint64_t length,
+ uint32_t *out_hw_queue_id, Error **errp)
+{
+ int ret;
+ struct iommu_hw_queue_alloc alloc_hw_queue = {
+ .size = sizeof(alloc_hw_queue),
+ .flags = 0,
+ .viommu_id = viommu_id,
+ .type = data_type,
+ .index = index,
+ .nesting_parent_iova = addr,
+ .length = length,
+ };
+
+ ret = ioctl(be->fd, IOMMU_HW_QUEUE_ALLOC, &alloc_hw_queue);
+
+ trace_iommufd_backend_alloc_hw_queue(be->fd, viommu_id, data_type,
+ index, addr, length,
+ alloc_hw_queue.out_hw_queue_id, ret);
+ if (ret) {
+ error_setg_errno(errp, errno, "IOMMU_HW_QUEUE_ALLOC failed");
+ return false;
+ }
+
+ g_assert(out_hw_queue_id);
+ *out_hw_queue_id = alloc_hw_queue.out_hw_queue_id;
+ return true;
+}
+
bool host_iommu_device_iommufd_attach_hwpt(HostIOMMUDeviceIOMMUFD *idev,
uint32_t hwpt_id, Error **errp)
{
diff --git a/backends/trace-events b/backends/trace-events
index 5afa7a40be..a22ad30e55 100644
--- a/backends/trace-events
+++ b/backends/trace-events
@@ -24,3 +24,4 @@ iommufd_backend_invalidate_cache(int iommufd, uint32_t id, uint32_t data_type, u
iommufd_backend_alloc_viommu(int iommufd, uint32_t dev_id, uint32_t type, uint32_t hwpt_id, uint32_t viommu_id, int ret) " iommufd=%d type=%u dev_id=%u hwpt_id=%u viommu_id=%u (%d)"
iommufd_backend_alloc_vdev(int iommufd, uint32_t dev_id, uint32_t viommu_id, uint64_t virt_id, uint32_t vdev_id, int ret) " iommufd=%d dev_id=%u viommu_id=%u virt_id=0x%"PRIx64" vdev_id=%u (%d)"
iommufd_viommu_alloc_eventq(int iommufd, uint32_t viommu_id, uint32_t type, uint32_t veventq_id, uint32_t veventq_fd, int ret) " iommufd=%d viommu_id=%u type=%u veventq_id=%u veventq_fd=%u (%d)"
+iommufd_backend_alloc_hw_queue(int iommufd, uint32_t viommu_id, uint32_t vqueue_type, uint32_t index, uint64_t addr, uint64_t size, uint32_t vqueue_id, int ret) " iommufd=%d viommu_id=%u vqueue_type=%u index=%u addr=0x%"PRIx64" size=0x%"PRIx64" vqueue_id=%u (%d)"
diff --git a/include/system/iommufd.h b/include/system/iommufd.h
index a3e8087b3a..9b8602a558 100644
--- a/include/system/iommufd.h
+++ b/include/system/iommufd.h
@@ -63,6 +63,12 @@ typedef struct IOMMUFDVeventq {
uint32_t veventq_fd;
} IOMMUFDVeventq;
+/* HW queue object for a vIOMMU-specific HW-accelerated queue */
+typedef struct IOMMUFDHWqueue {
+ IOMMUFDViommu *viommu;
+ uint32_t hw_queue_id;
+} IOMMUFDHWqueue;
+
bool iommufd_backend_connect(IOMMUFDBackend *be, Error **errp);
void iommufd_backend_disconnect(IOMMUFDBackend *be);
@@ -99,6 +105,11 @@ bool iommufd_backend_alloc_veventq(IOMMUFDBackend *be, uint32_t viommu_id,
uint32_t *out_veventq_id,
uint32_t *out_veventq_fd, Error **errp);
+bool iommufd_backend_alloc_hw_queue(IOMMUFDBackend *be, uint32_t viommu_id,
+ uint32_t data_type, uint32_t index,
+ uint64_t addr, uint64_t length,
+ uint32_t *out_hw_queue_id, Error **errp);
+
bool iommufd_backend_set_dirty_tracking(IOMMUFDBackend *be, uint32_t hwpt_id,
bool start, Error **errp);
bool iommufd_backend_get_dirty_bitmap(IOMMUFDBackend *be, uint32_t hwpt_id,
--
2.43.0
* [RFC PATCH 04/16] backends/iommufd: Introduce iommufd_backend_viommu_mmap
2025-12-10 13:37 [RFC PATCH 00/16] hw/arm: Introduce Tegra241 CMDQV support for accelerated SMMUv3 Shameer Kolothum
` (2 preceding siblings ...)
2025-12-10 13:37 ` [RFC PATCH 03/16] backends/iommufd: Introduce iommufd_backend_alloc_hw_queue Shameer Kolothum
@ 2025-12-10 13:37 ` Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 05/16] hw/arm/tegra241-cmdqv: Add initial Tegra241 CMDQ-Virtualisation support Shameer Kolothum
` (12 subsequent siblings)
16 siblings, 0 replies; 19+ messages in thread
From: Shameer Kolothum @ 2025-12-10 13:37 UTC (permalink / raw)
To: qemu-arm, qemu-devel
Cc: eric.auger, peter.maydell, nicolinc, nathanc, mochs, jgg,
jonathan.cameron, zhangfei.gao, zhenzhong.duan, kjaju
From: Nicolin Chen <nicolinc@nvidia.com>
Add a backend helper to mmap hardware MMIO regions exposed via iommufd
for a vIOMMU instance. This allows userspace to access HW-accelerated
MMIO pages provided by the vIOMMU.
The caller is responsible for unmapping the returned region.
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Shameer Kolothum <skolothumtho@nvidia.com>
---
backends/iommufd.c | 24 ++++++++++++++++++++++++
backends/trace-events | 1 +
include/system/iommufd.h | 4 ++++
3 files changed, 29 insertions(+)
diff --git a/backends/iommufd.c b/backends/iommufd.c
index a644763239..015e5249d6 100644
--- a/backends/iommufd.c
+++ b/backends/iommufd.c
@@ -575,6 +575,30 @@ bool iommufd_backend_alloc_hw_queue(IOMMUFDBackend *be, uint32_t viommu_id,
return true;
}
+/*
+ * Helper to mmap HW MMIO regions exposed via iommufd for a vIOMMU instance.
+ * The caller is responsible for unmapping the mapped region.
+ */
+bool iommufd_backend_viommu_mmap(IOMMUFDBackend *be, uint32_t viommu_id,
+ uint64_t size, off_t offset, void **out_ptr,
+ Error **errp)
+{
+ g_assert(viommu_id);
+ g_assert(out_ptr);
+
+ *out_ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, be->fd,
+ offset);
+ if (*out_ptr == MAP_FAILED) {
+ error_setg_errno(errp, errno, "failed to mmap (size=0x%" PRIx64
+ " offset=0x%" PRIx64 ") for viommu (id=%d)",
+ size, offset, viommu_id);
+ return false;
+ }
+
+ trace_iommufd_backend_viommu_mmap(be->fd, viommu_id, size, offset);
+ return true;
+}
+
bool host_iommu_device_iommufd_attach_hwpt(HostIOMMUDeviceIOMMUFD *idev,
uint32_t hwpt_id, Error **errp)
{
diff --git a/backends/trace-events b/backends/trace-events
index a22ad30e55..046f453caa 100644
--- a/backends/trace-events
+++ b/backends/trace-events
@@ -25,3 +25,4 @@ iommufd_backend_alloc_viommu(int iommufd, uint32_t dev_id, uint32_t type, uint32
iommufd_backend_alloc_vdev(int iommufd, uint32_t dev_id, uint32_t viommu_id, uint64_t virt_id, uint32_t vdev_id, int ret) " iommufd=%d dev_id=%u viommu_id=%u virt_id=0x%"PRIx64" vdev_id=%u (%d)"
iommufd_viommu_alloc_eventq(int iommufd, uint32_t viommu_id, uint32_t type, uint32_t veventq_id, uint32_t veventq_fd, int ret) " iommufd=%d viommu_id=%u type=%u veventq_id=%u veventq_fd=%u (%d)"
iommufd_backend_alloc_hw_queue(int iommufd, uint32_t viommu_id, uint32_t vqueue_type, uint32_t index, uint64_t addr, uint64_t size, uint32_t vqueue_id, int ret) " iommufd=%d viommu_id=%u vqueue_type=%u index=%u addr=0x%"PRIx64" size=0x%"PRIx64" vqueue_id=%u (%d)"
+iommufd_backend_viommu_mmap(int iommufd, uint32_t viommu_id, uint64_t size, uint64_t offset) " iommufd=%d viommu_id=%u size=0x%"PRIx64" offset=0x%"PRIx64
diff --git a/include/system/iommufd.h b/include/system/iommufd.h
index 9b8602a558..e3905c9a40 100644
--- a/include/system/iommufd.h
+++ b/include/system/iommufd.h
@@ -110,6 +110,10 @@ bool iommufd_backend_alloc_hw_queue(IOMMUFDBackend *be, uint32_t viommu_id,
uint64_t addr, uint64_t length,
uint32_t *out_hw_queue_id, Error **errp);
+bool iommufd_backend_viommu_mmap(IOMMUFDBackend *be, uint32_t viommu_id,
+ uint64_t size, off_t offset, void **out_ptr,
+ Error **errp);
+
bool iommufd_backend_set_dirty_tracking(IOMMUFDBackend *be, uint32_t hwpt_id,
bool start, Error **errp);
bool iommufd_backend_get_dirty_bitmap(IOMMUFDBackend *be, uint32_t hwpt_id,
--
2.43.0
* [RFC PATCH 05/16] hw/arm/tegra241-cmdqv: Add initial Tegra241 CMDQ-Virtualisation support
2025-12-10 13:37 [RFC PATCH 00/16] hw/arm: Introduce Tegra241 CMDQV support for accelerated SMMUv3 Shameer Kolothum
` (3 preceding siblings ...)
2025-12-10 13:37 ` [RFC PATCH 04/16] backends/iommufd: Introduce iommufd_backend_viommu_mmap Shameer Kolothum
@ 2025-12-10 13:37 ` Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 06/16] hw/arm/tegra241-cmdqv: Map VINTF Page0 into guest Shameer Kolothum
` (11 subsequent siblings)
16 siblings, 0 replies; 19+ messages in thread
From: Shameer Kolothum @ 2025-12-10 13:37 UTC (permalink / raw)
To: qemu-arm, qemu-devel
Cc: eric.auger, peter.maydell, nicolinc, nathanc, mochs, jgg,
jonathan.cameron, zhangfei.gao, zhenzhong.duan, kjaju
From: Nicolin Chen <nicolinc@nvidia.com>
Introduce initial support for NVIDIA Tegra241 CMDQ-Virtualisation (CMDQV),
an extension to SMMUv3 providing virtualisable hardware command queues.
This adds the basic MMIO handling and integration hooks in the SMMUv3
accelerated path. When enabled, the SMMUv3 backend allocates a Tegra241
specific vIOMMU object via IOMMUFD and exposes a CMDQV MMIO region and
IRQ to the guest.
The "tegra241-cmdqv" property isn't user-visible yet; it will be
introduced in a later patch once all the supporting pieces are ready.
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Shameer Kolothum <skolothumtho@nvidia.com>
---
hw/arm/Kconfig | 5 ++++
hw/arm/meson.build | 1 +
hw/arm/smmuv3-accel.c | 10 +++++--
hw/arm/smmuv3.c | 4 +++
hw/arm/tegra241-cmdqv.c | 65 +++++++++++++++++++++++++++++++++++++++++
hw/arm/tegra241-cmdqv.h | 40 +++++++++++++++++++++++++
include/hw/arm/smmuv3.h | 3 ++
7 files changed, 126 insertions(+), 2 deletions(-)
create mode 100644 hw/arm/tegra241-cmdqv.c
create mode 100644 hw/arm/tegra241-cmdqv.h
diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig
index 702b79a02b..42b6b95285 100644
--- a/hw/arm/Kconfig
+++ b/hw/arm/Kconfig
@@ -37,6 +37,7 @@ config ARM_VIRT
select VIRTIO_MEM_SUPPORTED
select ACPI_CXL
select ACPI_HMAT
+ select TEGRA241_CMDQV
config CUBIEBOARD
bool
@@ -634,6 +635,10 @@ config ARM_SMMUV3_ACCEL
bool
depends on ARM_SMMUV3 && IOMMUFD
+config TEGRA241_CMDQV
+ bool
+ depends on ARM_SMMUV3_ACCEL
+
config FSL_IMX6UL
bool
default y
diff --git a/hw/arm/meson.build b/hw/arm/meson.build
index c250487e64..4ec91db50a 100644
--- a/hw/arm/meson.build
+++ b/hw/arm/meson.build
@@ -86,6 +86,7 @@ arm_common_ss.add(when: 'CONFIG_FSL_IMX8MP', if_true: files('fsl-imx8mp.c'))
arm_common_ss.add(when: 'CONFIG_FSL_IMX8MP_EVK', if_true: files('imx8mp-evk.c'))
arm_ss.add(when: 'CONFIG_ARM_SMMUV3', if_true: files('smmuv3.c'))
arm_ss.add(when: 'CONFIG_ARM_SMMUV3_ACCEL', if_true: files('smmuv3-accel.c'))
+arm_ss.add(when: 'CONFIG_TEGRA241_CMDQV', if_true: files('tegra241-cmdqv.c'))
arm_common_ss.add(when: 'CONFIG_FSL_IMX6UL', if_true: files('fsl-imx6ul.c', 'mcimx6ul-evk.c'))
arm_common_ss.add(when: 'CONFIG_NRF51_SOC', if_true: files('nrf51_soc.c'))
arm_common_ss.add(when: 'CONFIG_XEN', if_true: files(
diff --git a/hw/arm/smmuv3-accel.c b/hw/arm/smmuv3-accel.c
index 939898c9b0..e50c4b3bb7 100644
--- a/hw/arm/smmuv3-accel.c
+++ b/hw/arm/smmuv3-accel.c
@@ -18,6 +18,7 @@
#include "smmuv3-internal.h"
#include "smmuv3-accel.h"
+#include "tegra241-cmdqv.h"
/*
* The root region aliases the global system memory, and shared_as_sysmem
@@ -499,10 +500,15 @@ smmuv3_accel_alloc_viommu(SMMUv3State *s, HostIOMMUDeviceIOMMUFD *idev,
.ste = { SMMU_STE_VALID, 0x0ULL },
};
uint32_t s2_hwpt_id = idev->hwpt_id;
- uint32_t viommu_id, hwpt_id;
+ uint32_t viommu_id = 0, hwpt_id;
SMMUv3AccelState *accel;
- if (!iommufd_backend_alloc_viommu(idev->iommufd, idev->devid,
+ if (s->tegra241_cmdqv && !tegra241_cmdqv_alloc_viommu(s, idev, &viommu_id,
+ errp)) {
+ return false;
+ }
+
+ if (!viommu_id && !iommufd_backend_alloc_viommu(idev->iommufd, idev->devid,
IOMMU_VIOMMU_TYPE_ARM_SMMUV3, s2_hwpt_id,
NULL, 0, &viommu_id, errp)) {
return false;
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index 9b7b85fb49..02e1a925a4 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -36,6 +36,7 @@
#include "smmuv3-accel.h"
#include "smmuv3-internal.h"
#include "smmu-internal.h"
+#include "tegra241-cmdqv.h"
#define PTW_RECORD_FAULT(ptw_info, cfg) (((ptw_info).stage == SMMU_STAGE_1 && \
(cfg)->record_faults) || \
@@ -2017,6 +2018,9 @@ static void smmu_realize(DeviceState *d, Error **errp)
smmu_init_irq(s, dev);
smmuv3_init_id_regs(s);
+ if (s->tegra241_cmdqv) {
+ tegra241_cmdqv_init(s);
+ }
}
static const VMStateDescription vmstate_smmuv3_queue = {
diff --git a/hw/arm/tegra241-cmdqv.c b/hw/arm/tegra241-cmdqv.c
new file mode 100644
index 0000000000..899325877e
--- /dev/null
+++ b/hw/arm/tegra241-cmdqv.c
@@ -0,0 +1,65 @@
+/*
+ * Copyright (C) 2025, NVIDIA CORPORATION
+ * NVIDIA Tegra241 CMDQ-Virtualization extension for SMMUv3
+ *
+ * Written by Nicolin Chen, Shameer Kolothum
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include "qemu/osdep.h"
+
+#include "hw/arm/smmuv3.h"
+#include "smmuv3-accel.h"
+#include "tegra241-cmdqv.h"
+
+static uint64_t tegra241_cmdqv_read(void *opaque, hwaddr offset, unsigned size)
+{
+ return 0;
+}
+
+static void tegra241_cmdqv_write(void *opaque, hwaddr offset, uint64_t value,
+ unsigned size)
+{
+}
+
+static const MemoryRegionOps mmio_cmdqv_ops = {
+ .read = tegra241_cmdqv_read,
+ .write = tegra241_cmdqv_write,
+ .endianness = DEVICE_LITTLE_ENDIAN,
+};
+
+bool tegra241_cmdqv_alloc_viommu(SMMUv3State *s, HostIOMMUDeviceIOMMUFD *idev,
+ uint32_t *out_viommu_id, Error **errp)
+{
+ Tegra241CMDQV *cmdqv = s->cmdqv;
+
+ if (!iommufd_backend_alloc_viommu(idev->iommufd, idev->devid,
+ IOMMU_VIOMMU_TYPE_TEGRA241_CMDQV,
+ idev->hwpt_id, &cmdqv->cmdqv_data,
+ sizeof(cmdqv->cmdqv_data), out_viommu_id,
+ errp)) {
+ error_append_hint(errp, "NVIDIA Tegra241 CMDQV is unsupported");
+ s->tegra241_cmdqv = false;
+ return false;
+ }
+ return true;
+}
+
+void tegra241_cmdqv_init(SMMUv3State *s)
+{
+ SysBusDevice *sbd = SYS_BUS_DEVICE(OBJECT(s));
+ Tegra241CMDQV *cmdqv;
+
+ if (!s->tegra241_cmdqv) {
+ return;
+ }
+
+ cmdqv = g_new0(Tegra241CMDQV, 1);
+ memory_region_init_io(&cmdqv->mmio_cmdqv, OBJECT(s), &mmio_cmdqv_ops, cmdqv,
+ "tegra241-cmdqv", TEGRA241_CMDQV_IO_LEN);
+ sysbus_init_mmio(sbd, &cmdqv->mmio_cmdqv);
+ sysbus_init_irq(sbd, &cmdqv->irq);
+ cmdqv->smmu = s;
+ s->cmdqv = cmdqv;
+}
diff --git a/hw/arm/tegra241-cmdqv.h b/hw/arm/tegra241-cmdqv.h
new file mode 100644
index 0000000000..9bc72b24d9
--- /dev/null
+++ b/hw/arm/tegra241-cmdqv.h
@@ -0,0 +1,40 @@
+/*
+ * Copyright (C) 2025, NVIDIA CORPORATION
+ * NVIDIA Tegra241 CMDQ-Virtualisation extension for SMMUv3
+ *
+ * Written by Nicolin Chen, Shameer Kolothum
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#ifndef HW_TEGRA241_CMDQV_H
+#define HW_TEGRA241_CMDQV_H
+
+#include CONFIG_DEVICES
+
+#define TEGRA241_CMDQV_IO_LEN 0x50000
+
+typedef struct Tegra241CMDQV {
+ struct iommu_viommu_tegra241_cmdqv cmdqv_data;
+ SMMUv3State *smmu;
+ MemoryRegion mmio_cmdqv;
+ qemu_irq irq;
+} Tegra241CMDQV;
+
+#ifdef CONFIG_TEGRA241_CMDQV
+bool tegra241_cmdqv_alloc_viommu(SMMUv3State *s, HostIOMMUDeviceIOMMUFD *idev,
+ uint32_t *out_viommu_id, Error **errp);
+void tegra241_cmdqv_init(SMMUv3State *s);
+#else
+static inline void tegra241_cmdqv_init(SMMUv3State *s)
+{
+}
+static inline bool
+tegra241_cmdqv_alloc_viommu(SMMUv3State *s, HostIOMMUDeviceIOMMUFD *idev,
+ uint32_t *out_viommu_id, Error **errp)
+{
+ return true;
+}
+#endif
+
+#endif /* HW_TEGRA241_CMDQV_H */
diff --git a/include/hw/arm/smmuv3.h b/include/hw/arm/smmuv3.h
index 2d4970fe19..8e56e480a0 100644
--- a/include/hw/arm/smmuv3.h
+++ b/include/hw/arm/smmuv3.h
@@ -73,6 +73,9 @@ struct SMMUv3State {
bool ats;
uint8_t oas;
bool pasid;
+ /* Support for NVIDIA Tegra241 SMMU CMDQV extension */
+ struct Tegra241CMDQV *cmdqv;
+ bool tegra241_cmdqv;
};
typedef enum {
--
2.43.0
* [RFC PATCH 06/16] hw/arm/tegra241-cmdqv: Map VINTF Page0 into guest
2025-12-10 13:37 [RFC PATCH 00/16] hw/arm: Introduce Tegra241 CMDQV support for accelerated SMMUv3 Shameer Kolothum
` (4 preceding siblings ...)
2025-12-10 13:37 ` [RFC PATCH 05/16] hw/arm/tegra241-cmdqv: Add initial Tegra241 CMDQ-Virtualisation support Shameer Kolothum
@ 2025-12-10 13:37 ` Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 07/16] hw/arm/tegra241-cmdqv: Add read emulation support for registers Shameer Kolothum
` (10 subsequent siblings)
16 siblings, 0 replies; 19+ messages in thread
From: Shameer Kolothum @ 2025-12-10 13:37 UTC (permalink / raw)
To: qemu-arm, qemu-devel
Cc: eric.auger, peter.maydell, nicolinc, nathanc, mochs, jgg,
jonathan.cameron, zhangfei.gao, zhenzhong.duan, kjaju
From: Nicolin Chen <nicolinc@nvidia.com>
Tegra241 CMDQV assigns each VINTF a 128KB MMIO region split into two
64KB pages:
- Page0: guest accessible control/status registers for all VCMDQs
- Page1: configuration registers (queue GPA/size) that must be trapped
by the VMM and translated before programming the HW queue.
This patch implements the Page0 handling in QEMU. Using the vintf offset
returned by IOMMUFD during VIOMMU allocation, QEMU maps Page0 into
guest physical address space and exposes it via two guest MMIO windows:
- 0x10000: VCMDQ registers
- 0x30000: VINTF registers
The mapping is lazily initialized on first read/write.
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Shameer Kolothum <skolothumtho@nvidia.com>
---
hw/arm/tegra241-cmdqv.c | 60 +++++++++++++++++++++++++++++++++++++++++
hw/arm/tegra241-cmdqv.h | 5 ++++
2 files changed, 65 insertions(+)
diff --git a/hw/arm/tegra241-cmdqv.c b/hw/arm/tegra241-cmdqv.c
index 899325877e..d8858322dc 100644
--- a/hw/arm/tegra241-cmdqv.c
+++ b/hw/arm/tegra241-cmdqv.c
@@ -13,14 +13,74 @@
#include "smmuv3-accel.h"
#include "tegra241-cmdqv.h"
+static bool tegra241_cmdqv_init_vcmdq_page0(Tegra241CMDQV *cmdqv, Error **errp)
+{
+ SMMUv3State *smmu = cmdqv->smmu;
+ SMMUv3AccelState *s_accel = smmu->s_accel;
+ IOMMUFDViommu *viommu;
+ char *name;
+
+ if (!s_accel) {
+ return true;
+ }
+
+ viommu = &s_accel->viommu;
+ if (!iommufd_backend_viommu_mmap(viommu->iommufd, viommu->viommu_id,
+ VCMDQ_REG_PAGE_SIZE,
+ cmdqv->cmdqv_data.out_vintf_mmap_offset,
+ &cmdqv->vcmdq_page0, errp)) {
+ cmdqv->vcmdq_page0 = NULL;
+ return false;
+ }
+
+ name = g_strdup_printf("%s vcmdq", memory_region_name(&cmdqv->mmio_cmdqv));
+ memory_region_init_ram_device_ptr(&cmdqv->mmio_vcmdq_page,
+ memory_region_owner(&cmdqv->mmio_cmdqv),
+ name, 0x10000, cmdqv->vcmdq_page0);
+ memory_region_add_subregion_overlap(&cmdqv->mmio_cmdqv, 0x10000,
+ &cmdqv->mmio_vcmdq_page, 1);
+ g_free(name);
+
+ name = g_strdup_printf("%s vintf", memory_region_name(&cmdqv->mmio_cmdqv));
+ memory_region_init_ram_device_ptr(&cmdqv->mmio_vintf_page,
+ memory_region_owner(&cmdqv->mmio_cmdqv),
+ name, 0x10000, cmdqv->vcmdq_page0);
+ memory_region_add_subregion_overlap(&cmdqv->mmio_cmdqv, 0x30000,
+ &cmdqv->mmio_vintf_page, 1);
+ g_free(name);
+
+ return true;
+}
+
static uint64_t tegra241_cmdqv_read(void *opaque, hwaddr offset, unsigned size)
{
+ Tegra241CMDQV *cmdqv = (Tegra241CMDQV *)opaque;
+ Error *local_err = NULL;
+
+ if (!cmdqv->vcmdq_page0) {
+ tegra241_cmdqv_init_vcmdq_page0(cmdqv, &local_err);
+ if (local_err) {
+ error_report_err(local_err);
+ local_err = NULL;
+ }
+ }
+
return 0;
}
static void tegra241_cmdqv_write(void *opaque, hwaddr offset, uint64_t value,
unsigned size)
{
+ Tegra241CMDQV *cmdqv = (Tegra241CMDQV *)opaque;
+ Error *local_err = NULL;
+
+ if (!cmdqv->vcmdq_page0) {
+ tegra241_cmdqv_init_vcmdq_page0(cmdqv, &local_err);
+ if (local_err) {
+ error_report_err(local_err);
+ local_err = NULL;
+ }
+ }
}
static const MemoryRegionOps mmio_cmdqv_ops = {
diff --git a/hw/arm/tegra241-cmdqv.h b/hw/arm/tegra241-cmdqv.h
index 9bc72b24d9..ccdf0651be 100644
--- a/hw/arm/tegra241-cmdqv.h
+++ b/hw/arm/tegra241-cmdqv.h
@@ -19,8 +19,13 @@ typedef struct Tegra241CMDQV {
SMMUv3State *smmu;
MemoryRegion mmio_cmdqv;
qemu_irq irq;
+ MemoryRegion mmio_vcmdq_page;
+ MemoryRegion mmio_vintf_page;
+ void *vcmdq_page0;
} Tegra241CMDQV;
+#define VCMDQ_REG_PAGE_SIZE 0x10000
+
#ifdef CONFIG_TEGRA241_CMDQV
bool tegra241_cmdqv_alloc_viommu(SMMUv3State *s, HostIOMMUDeviceIOMMUFD *idev,
uint32_t *out_viommu_id, Error **errp);
--
2.43.0
^ permalink raw reply related [flat|nested] 19+ messages in thread
* [RFC PATCH 07/16] hw/arm/tegra241-cmdqv: Add read emulation support for registers
2025-12-10 13:37 [RFC PATCH 00/16] hw/arm: Introduce Tegra241 CMDQV support for accelerated SMMUv3 Shameer Kolothum
` (5 preceding siblings ...)
2025-12-10 13:37 ` [RFC PATCH 06/16] hw/arm/tegra241-cmdqv: Map VINTF Page0 into guest Shameer Kolothum
@ 2025-12-10 13:37 ` Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 08/16] system/physmem: Add helper to check whether a guest PA maps to RAM Shameer Kolothum
` (9 subsequent siblings)
16 siblings, 0 replies; 19+ messages in thread
From: Shameer Kolothum @ 2025-12-10 13:37 UTC (permalink / raw)
To: qemu-arm, qemu-devel
Cc: eric.auger, peter.maydell, nicolinc, nathanc, mochs, jgg,
jonathan.cameron, zhangfei.gao, zhenzhong.duan, kjaju
From: Nicolin Chen <nicolinc@nvidia.com>
Implement read support for the Tegra241 CMDQV register blocks, including
the VINTF and per-VCMDQ register regions. The patch decodes offsets,
extracts queue indices, and returns the corresponding cached register
state.
A subsequent patch will add write support.
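The index extraction described above boils down to simple stride arithmetic: each VCMDQ occupies an 0x80-byte slot. A standalone sketch (constants from this patch; the helper name is hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* Per-VCMDQ registers are laid out in 0x80-byte strides starting at
 * 0x10000, so the queue index and the queue-0-relative offset can be
 * recovered from a raw MMIO offset. */
static void decode_vcmdq(uint64_t offset, int *index, uint64_t *reg_off)
{
    *index = (int)((offset - 0x10000) / 0x80);
    /* Subtracting index * 0x80 aligns the offset back down to the
     * queue-0 register block (0x10000..0x1007f). */
    *reg_off = offset - (uint64_t)*index * 0x80;
}
```

For example, VCMDQ127_CONS_INDX at 0x13f80 decodes to index 127 with a queue-0-relative offset of 0x10000.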
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Shameer Kolothum <skolothumtho@nvidia.com>
---
hw/arm/tegra241-cmdqv.c | 144 +++++++++++++++++++-
hw/arm/tegra241-cmdqv.h | 282 ++++++++++++++++++++++++++++++++++++++++
2 files changed, 425 insertions(+), 1 deletion(-)
diff --git a/hw/arm/tegra241-cmdqv.c b/hw/arm/tegra241-cmdqv.c
index d8858322dc..185ef957bc 100644
--- a/hw/arm/tegra241-cmdqv.c
+++ b/hw/arm/tegra241-cmdqv.c
@@ -8,6 +8,7 @@
*/
#include "qemu/osdep.h"
+#include "qemu/log.h"
#include "hw/arm/smmuv3.h"
#include "smmuv3-accel.h"
@@ -52,10 +53,94 @@ static bool tegra241_cmdqv_init_vcmdq_page0(Tegra241CMDQV *cmdqv, Error **errp)
return true;
}
+/* Note that offset aligns down to 0x1000 */
+static uint64_t tegra241_cmdqv_read_vintf(Tegra241CMDQV *cmdqv, hwaddr offset)
+{
+ int i;
+
+ switch (offset) {
+ case A_VINTF0_CONFIG:
+ return cmdqv->vintf_config;
+ case A_VINTF0_STATUS:
+ return cmdqv->vintf_status;
+ case A_VINTF0_LVCMDQ_ERR_MAP_0 ... A_VINTF0_LVCMDQ_ERR_MAP_3:
+ i = (offset - A_VINTF0_LVCMDQ_ERR_MAP_0) / 4;
+ return cmdqv->vintf_cmdq_err_map[i];
+ default:
+ qemu_log_mask(LOG_UNIMP, "%s unhandled read access at 0x%" PRIx64 "\n",
+ __func__, offset);
+ return 0;
+ }
+}
+
+/* Note that offset aligns down to 0x10000 */
+static uint64_t tegra241_cmdqv_read_vcmdq(Tegra241CMDQV *cmdqv, hwaddr offset,
+ int index)
+{
+ uint32_t *ptr = NULL;
+ uint64_t off;
+
+ /*
+ * Each VCMDQ instance occupies a 128 byte region (0x80).
+ * The hardware layout is:
+ * vcmdq_page0 + (index * 0x80) + (offset - 0x10000)
+ */
+ if (cmdqv->vcmdq_page0) {
+ off = (0x80 * index) + (offset - 0x10000);
+ ptr = (uint32_t *)(cmdqv->vcmdq_page0 + off);
+ }
+
+ switch (offset) {
+ case A_VCMDQ0_CONS_INDX:
+ if (ptr) {
+ cmdqv->vcmdq_cons_indx[index] = *ptr;
+ }
+ return cmdqv->vcmdq_cons_indx[index];
+ case A_VCMDQ0_PROD_INDX:
+ if (ptr) {
+ cmdqv->vcmdq_prod_indx[index] = *ptr;
+ }
+ return cmdqv->vcmdq_prod_indx[index];
+ case A_VCMDQ0_CONFIG:
+ if (ptr) {
+ cmdqv->vcmdq_config[index] = *ptr;
+ }
+ return cmdqv->vcmdq_config[index];
+ case A_VCMDQ0_STATUS:
+ if (ptr) {
+ cmdqv->vcmdq_status[index] = *ptr;
+ }
+ return cmdqv->vcmdq_status[index];
+ case A_VCMDQ0_GERROR:
+ if (ptr) {
+ cmdqv->vcmdq_gerror[index] = *ptr;
+ }
+ return cmdqv->vcmdq_gerror[index];
+ case A_VCMDQ0_GERRORN:
+ if (ptr) {
+ cmdqv->vcmdq_gerrorn[index] = *ptr;
+ }
+ return cmdqv->vcmdq_gerrorn[index];
+ case A_VCMDQ0_BASE_L:
+ return cmdqv->vcmdq_base[index];
+ case A_VCMDQ0_BASE_H:
+ return cmdqv->vcmdq_base[index] >> 32;
+ case A_VCMDQ0_CONS_INDX_BASE_DRAM_L:
+ return cmdqv->vcmdq_cons_indx_base[index];
+ case A_VCMDQ0_CONS_INDX_BASE_DRAM_H:
+ return cmdqv->vcmdq_cons_indx_base[index] >> 32;
+ default:
+ qemu_log_mask(LOG_UNIMP,
+ "%s unhandled read access at 0x%" PRIx64 "\n",
+ __func__, offset);
+ return 0;
+ }
+}
static uint64_t tegra241_cmdqv_read(void *opaque, hwaddr offset, unsigned size)
{
Tegra241CMDQV *cmdqv = (Tegra241CMDQV *)opaque;
Error *local_err = NULL;
+ int index;
if (!cmdqv->vcmdq_page0) {
tegra241_cmdqv_init_vcmdq_page0(cmdqv, &local_err);
@@ -65,7 +150,64 @@ static uint64_t tegra241_cmdqv_read(void *opaque, hwaddr offset, unsigned size)
}
}
- return 0;
+ if (offset > TEGRA241_CMDQV_IO_LEN) {
+ qemu_log_mask(LOG_UNIMP,
+ "%s offset 0x%" PRIx64 " off limit (0x50000)\n", __func__,
+ offset);
+ return 0;
+ }
+
+ /* Fallback to cached register values */
+ switch (offset) {
+ case A_CONFIG:
+ return cmdqv->config;
+ case A_PARAM:
+ return cmdqv->param;
+ case A_STATUS:
+ return cmdqv->status;
+ case A_VI_ERR_MAP ... A_VI_ERR_MAP_1:
+ return cmdqv->vi_err_map[(offset - A_VI_ERR_MAP) / 4];
+ case A_VI_INT_MASK ... A_VI_INT_MASK_1:
+ return cmdqv->vi_int_mask[(offset - A_VI_INT_MASK) / 4];
+ case A_CMDQ_ERR_MAP ... A_CMDQ_ERR_MAP_3:
+ return cmdqv->cmdq_err_map[(offset - A_CMDQ_ERR_MAP) / 4];
+ case A_CMDQ_ALLOC_MAP_0 ... A_CMDQ_ALLOC_MAP_127:
+ return cmdqv->cmdq_alloc_map[(offset - A_CMDQ_ALLOC_MAP_0) / 4];
+ case A_VINTF0_CONFIG ... A_VINTF0_LVCMDQ_ERR_MAP_3:
+ return tegra241_cmdqv_read_vintf(cmdqv, offset);
+ case A_VI_VCMDQ0_CONS_INDX ... A_VI_VCMDQ127_GERRORN:
+ offset -= 0x20000;
+ QEMU_FALLTHROUGH;
+ case A_VCMDQ0_CONS_INDX ... A_VCMDQ127_GERRORN:
+ /*
+ * Align offset down to 0x10000 while extracting the index:
+ * VCMDQ0_CONS_INDX (0x10000) => 0x10000, 0
+ * VCMDQ1_CONS_INDX (0x10080) => 0x10000, 1
+ * VCMDQ2_CONS_INDX (0x10100) => 0x10000, 2
+ * ...
+ * VCMDQ127_CONS_INDX (0x13f80) => 0x10000, 127
+ */
+ index = (offset - 0x10000) / 0x80;
+ return tegra241_cmdqv_read_vcmdq(cmdqv, offset - 0x80 * index, index);
+ case A_VI_VCMDQ0_BASE_L ... A_VI_VCMDQ127_CONS_INDX_BASE_DRAM_H:
+ offset -= 0x20000;
+ QEMU_FALLTHROUGH;
+ case A_VCMDQ0_BASE_L ... A_VCMDQ127_CONS_INDX_BASE_DRAM_H:
+ /*
+ * Align offset down to 0x20000 while extracting the index:
+ * VCMDQ0_BASE_L (0x20000) => 0x20000, 0
+ * VCMDQ1_BASE_L (0x20080) => 0x20000, 1
+ * VCMDQ2_BASE_L (0x20100) => 0x20000, 2
+ * ...
+ * VCMDQ127_BASE_L (0x23f80) => 0x20000, 127
+ */
+ index = (offset - 0x20000) / 0x80;
+ return tegra241_cmdqv_read_vcmdq(cmdqv, offset - 0x80 * index, index);
+ default:
+ qemu_log_mask(LOG_UNIMP, "%s unhandled read access at 0x%" PRIx64 "\n",
+ __func__, offset);
+ return 0;
+ }
}
static void tegra241_cmdqv_write(void *opaque, hwaddr offset, uint64_t value,
diff --git a/hw/arm/tegra241-cmdqv.h b/hw/arm/tegra241-cmdqv.h
index ccdf0651be..4972e367f6 100644
--- a/hw/arm/tegra241-cmdqv.h
+++ b/hw/arm/tegra241-cmdqv.h
@@ -10,6 +10,7 @@
#ifndef HW_TEGRA241_CMDQV_H
#define HW_TEGRA241_CMDQV_H
+#include "hw/registerfields.h"
#include CONFIG_DEVICES
#define TEGRA241_CMDQV_IO_LEN 0x50000
@@ -22,10 +23,291 @@ typedef struct Tegra241CMDQV {
MemoryRegion mmio_vcmdq_page;
MemoryRegion mmio_vintf_page;
void *vcmdq_page0;
+ IOMMUFDHWqueue *vcmdq[128];
+
+ /* Register Cache */
+ uint32_t config;
+ uint32_t param;
+ uint32_t status;
+ uint32_t vi_err_map[2];
+ uint32_t vi_int_mask[2];
+ uint32_t cmdq_err_map[4];
+ uint32_t cmdq_alloc_map[128];
+ uint32_t vintf_config;
+ uint32_t vintf_status;
+ uint32_t vintf_cmdq_err_map[4];
+ uint32_t vcmdq_cons_indx[128];
+ uint32_t vcmdq_prod_indx[128];
+ uint32_t vcmdq_config[128];
+ uint32_t vcmdq_status[128];
+ uint32_t vcmdq_gerror[128];
+ uint32_t vcmdq_gerrorn[128];
+ uint64_t vcmdq_base[128];
+ uint64_t vcmdq_cons_indx_base[128];
} Tegra241CMDQV;
+/* MMIO Registers */
+REG32(CONFIG, 0x0)
+FIELD(CONFIG, CMDQV_EN, 0, 1)
+FIELD(CONFIG, CMDQV_PER_CMD_OFFSET, 1, 3)
+FIELD(CONFIG, CMDQ_MAX_CLK_BATCH, 4, 8)
+FIELD(CONFIG, CMDQ_MAX_CMD_BATCH, 12, 8)
+FIELD(CONFIG, CONS_DRAM_EN, 20, 1)
+
+#define V_CONFIG_RESET 0x00020403
+
+REG32(PARAM, 0x4)
+FIELD(PARAM, CMDQV_VER, 0, 4)
+FIELD(PARAM, CMDQV_NUM_CMDQ_LOG2, 4, 4)
+FIELD(PARAM, CMDQV_NUM_VM_LOG2, 8, 4)
+FIELD(PARAM, CMDQV_NUM_SID_PER_VM_LOG2, 12, 4)
+
+#define V_PARAM_RESET 0x00004011
+
+REG32(STATUS, 0x8)
+FIELD(STATUS, CMDQV_ENABLED, 0, 1)
+
+#define A_VI_ERR_MAP 0x14
+#define A_VI_ERR_MAP_1 0x18
+#define V_VI_ERR_MAP_NO_ERROR (0)
+#define V_VI_ERR_MAP_ERROR (1)
+
+#define A_VI_INT_MASK 0x1c
+#define A_VI_INT_MASK_1 0x20
+#define V_VI_INT_MASK_NOT_MASKED (0)
+#define V_VI_INT_MASK_MASKED (1)
+
+#define A_CMDQ_ERR_MAP 0x24
+#define A_CMDQ_ERR_MAP_1 0x28
+#define A_CMDQ_ERR_MAP_2 0x2c
+#define A_CMDQ_ERR_MAP_3 0x30
+
+/* i = [0, 127] */
+#define A_CMDQ_ALLOC_MAP_(i) \
+ REG32(CMDQ_ALLOC_MAP_##i, 0x200 + i * 4) \
+ FIELD(CMDQ_ALLOC_MAP_##i, ALLOC, 0, 1) \
+ FIELD(CMDQ_ALLOC_MAP_##i, LVCMDQ, 1, 7) \
+ FIELD(CMDQ_ALLOC_MAP_##i, VIRT_INTF_INDX, 15, 6)
+
+A_CMDQ_ALLOC_MAP_(0)
+/* Omitting 1~126 as not being directly called */
+A_CMDQ_ALLOC_MAP_(127)
+
+/* i = [0, 0] */
+#define A_VINTFi_CONFIG(i) \
+ REG32(VINTF##i##_CONFIG, 0x1000 + i * 0x100) \
+ FIELD(VINTF##i##_CONFIG, ENABLE, 0, 1) \
+ FIELD(VINTF##i##_CONFIG, VMID, 1, 16) \
+ FIELD(VINTF##i##_CONFIG, HYP_OWN, 17, 1)
+
+A_VINTFi_CONFIG(0)
+
+#define A_VINTFi_STATUS(i) \
+ REG32(VINTF##i##_STATUS, 0x1004 + i * 0x100) \
+ FIELD(VINTF##i##_STATUS, ENABLE_OK, 0, 1) \
+ FIELD(VINTF##i##_STATUS, STATUS, 1, 3) \
+ FIELD(VINTF##i##_STATUS, VI_NUM_LVCMDQ, 16, 8)
+
+ A_VINTFi_STATUS(0)
+
+#define V_VINTF_STATUS_NO_ERROR (0 << 1)
+#define V_VINTF_STATUS_VCMDQ_EROR (1 << 1)
+
+/* i = [0, 0], j = [0, 3] */
+#define A_VINTFi_LVCMDQ_ERR_MAP_(i, j) \
+ REG32(VINTF##i##_LVCMDQ_ERR_MAP_##j, 0x10c0 + j * 4 + i * 0x100) \
+ FIELD(VINTF##i##_LVCMDQ_ERR_MAP_##j, LVCMDQ_ERR_MAP, 0, 32)
+
+ A_VINTFi_LVCMDQ_ERR_MAP_(0, 0)
+ /* Omitting [0][1~2] as not being directly called */
+ A_VINTFi_LVCMDQ_ERR_MAP_(0, 3)
+
+/* VCMDQ registers -- starting from 0x10000 with size 64KB * 2 (0x20000) */
+#define VCMDQ_REG_OFFSET 0x10000
#define VCMDQ_REG_PAGE_SIZE 0x10000
+#define A_VCMDQi_CONS_INDX(i) \
+ REG32(VCMDQ##i##_CONS_INDX, 0x10000 + i * 0x80) \
+ FIELD(VCMDQ##i##_CONS_INDX, RD, 0, 20) \
+ FIELD(VCMDQ##i##_CONS_INDX, ERR, 24, 7)
+
+ A_VCMDQi_CONS_INDX(0)
+ /* Omitting [1~126] as not being directly called */
+ A_VCMDQi_CONS_INDX(127)
+
+#define V_VCMDQ_CONS_INDX_ERR_CERROR_NONE 0
+#define V_VCMDQ_CONS_INDX_ERR_CERROR_ILL_OPCODE 1
+#define V_VCMDQ_CONS_INDX_ERR_CERROR_ABT 2
+#define V_VCMDQ_CONS_INDX_ERR_CERROR_ATC_INV_SYNC 3
+#define V_VCMDQ_CONS_INDX_ERR_CERROR_ILL_ACCESS 4
+
+#define A_VCMDQi_PROD_INDX(i) \
+ REG32(VCMDQ##i##_PROD_INDX, 0x10000 + 0x4 + i * 0x80) \
+ FIELD(VCMDQ##i##_PROD_INDX, WR, 0, 20)
+
+ A_VCMDQi_PROD_INDX(0)
+ /* Omitting [1~126] as not being directly called */
+ A_VCMDQi_PROD_INDX(127)
+
+#define A_VCMDQi_CONFIG(i) \
+ REG32(VCMDQ##i##_CONFIG, 0x10000 + 0x8 + i * 0x80) \
+ FIELD(VCMDQ##i##_CONFIG, CMDQ_EN, 0, 1)
+
+ A_VCMDQi_CONFIG(0)
+ /* Omitting [1~126] as not being directly called */
+ A_VCMDQi_CONFIG(127)
+
+#define A_VCMDQi_STATUS(i) \
+ REG32(VCMDQ##i##_STATUS, 0x10000 + 0xc + i * 0x80) \
+ FIELD(VCMDQ##i##_STATUS, CMDQ_EN_OK, 0, 1)
+
+ A_VCMDQi_STATUS(0)
+ /* Omitting [1~126] as not being directly called */
+ A_VCMDQi_STATUS(127)
+
+#define A_VCMDQi_GERROR(i) \
+ REG32(VCMDQ##i##_GERROR, 0x10000 + 0x10 + i * 0x80) \
+ FIELD(VCMDQ##i##_GERROR, CMDQ_ERR, 0, 1) \
+ FIELD(VCMDQ##i##_GERROR, CONS_DRAM_WR_ABT_ERR, 1, 1) \
+ FIELD(VCMDQ##i##_GERROR, CMDQ_INIT_ERR, 2, 1)
+
+ A_VCMDQi_GERROR(0)
+ /* Omitting [1~126] as not being directly called */
+ A_VCMDQi_GERROR(127)
+
+#define A_VCMDQi_GERRORN(i) \
+ REG32(VCMDQ##i##_GERRORN, 0x10000 + 0x14 + i * 0x80) \
+ FIELD(VCMDQ##i##_GERRORN, CMDQ_ERR, 0, 1) \
+ FIELD(VCMDQ##i##_GERRORN, CONS_DRAM_WR_ABT_ERR, 1, 1) \
+ FIELD(VCMDQ##i##_GERRORN, CMDQ_INIT_ERR, 2, 1)
+
+ A_VCMDQi_GERRORN(0)
+ /* Omitting [1~126] as not being directly called */
+ A_VCMDQi_GERRORN(127)
+
+#define A_VCMDQi_BASE_L(i) \
+ REG32(VCMDQ##i##_BASE_L, 0x20000 + i * 0x80) \
+ FIELD(VCMDQ##i##_BASE_L, LOG2SIZE, 0, 5) \
+ FIELD(VCMDQ##i##_BASE_L, ADDR, 5, 27)
+
+ A_VCMDQi_BASE_L(0)
+ /* Omitting [1~126] as not being directly called */
+ A_VCMDQi_BASE_L(127)
+
+#define A_VCMDQi_BASE_H(i) \
+ REG32(VCMDQ##i##_BASE_H, 0x20000 + 0x4 + i * 0x80) \
+ FIELD(VCMDQ##i##_BASE_H, ADDR, 0, 16)
+
+ A_VCMDQi_BASE_H(0)
+ /* Omitting [1~126] as not being directly called */
+ A_VCMDQi_BASE_H(127)
+
+#define A_VCMDQi_CONS_INDX_BASE_DRAM_L(i) \
+ REG32(VCMDQ##i##_CONS_INDX_BASE_DRAM_L, 0x20000 + 0x8 + i * 0x80) \
+ FIELD(VCMDQ##i##_CONS_INDX_BASE_DRAM_L, ADDR, 0, 32)
+
+ A_VCMDQi_CONS_INDX_BASE_DRAM_L(0)
+ /* Omitting [1~126] as not being directly called */
+ A_VCMDQi_CONS_INDX_BASE_DRAM_L(127)
+
+#define A_VCMDQi_CONS_INDX_BASE_DRAM_H(i) \
+ REG32(VCMDQ##i##_CONS_INDX_BASE_DRAM_H, 0x20000 + 0xc + i * 0x80) \
+ FIELD(VCMDQ##i##_CONS_INDX_BASE_DRAM_H, ADDR, 0, 16)
+
+ A_VCMDQi_CONS_INDX_BASE_DRAM_H(0)
+ /* Omitting [1~126] as not being directly called */
+ A_VCMDQi_CONS_INDX_BASE_DRAM_H(127)
+
+/*
+ * VINTF VI_VCMDQ registers -- starting from 0x30000 with size 64KB * 2
+ * (0x20000)
+ */
+#define A_VI_VCMDQi_CONS_INDX(i) \
+ REG32(VI_VCMDQ##i##_CONS_INDX, 0x30000 + i * 0x80) \
+ FIELD(VI_VCMDQ##i##_CONS_INDX, RD, 0, 20) \
+ FIELD(VI_VCMDQ##i##_CONS_INDX, ERR, 24, 7)
+
+ A_VI_VCMDQi_CONS_INDX(0)
+ /* Omitting [1~126] as not being directly called */
+ A_VI_VCMDQi_CONS_INDX(127)
+
+#define A_VI_VCMDQi_PROD_INDX(i) \
+ REG32(VI_VCMDQ##i##_PROD_INDX, 0x30000 + 0x4 + i * 0x80) \
+ FIELD(VI_VCMDQ##i##_PROD_INDX, WR, 0, 20)
+
+ A_VI_VCMDQi_PROD_INDX(0)
+ /* Omitting [1~126] as not being directly called */
+ A_VI_VCMDQi_PROD_INDX(127)
+
+#define A_VI_VCMDQi_CONFIG(i) \
+ REG32(VI_VCMDQ##i##_CONFIG, 0x30000 + 0x8 + i * 0x80) \
+ FIELD(VI_VCMDQ##i##_CONFIG, CMDQ_EN, 0, 1)
+
+ A_VI_VCMDQi_CONFIG(0)
+ /* Omitting [1~126] as not being directly called */
+ A_VI_VCMDQi_CONFIG(127)
+
+#define A_VI_VCMDQi_STATUS(i) \
+ REG32(VI_VCMDQ##i##_STATUS, 0x30000 + 0xc + i * 0x80) \
+ FIELD(VI_VCMDQ##i##_STATUS, CMDQ_EN_OK, 0, 1)
+
+ A_VI_VCMDQi_STATUS(0)
+ /* Omitting [1~126] as not being directly called */
+ A_VI_VCMDQi_STATUS(127)
+
+#define A_VI_VCMDQi_GERROR(i) \
+ REG32(VI_VCMDQ##i##_GERROR, 0x30000 + 0x10 + i * 0x80) \
+ FIELD(VI_VCMDQ##i##_GERROR, CMDQ_ERR, 0, 1) \
+ FIELD(VI_VCMDQ##i##_GERROR, CONS_DRAM_WR_ABT_ERR, 1, 1) \
+ FIELD(VI_VCMDQ##i##_GERROR, CMDQ_INIT_ERR, 2, 1)
+
+ A_VI_VCMDQi_GERROR(0)
+ /* Omitting [1~126] as not being directly called */
+ A_VI_VCMDQi_GERROR(127)
+
+#define A_VI_VCMDQi_GERRORN(i) \
+ REG32(VI_VCMDQ##i##_GERRORN, 0x30000 + 0x14 + i * 0x80) \
+ FIELD(VI_VCMDQ##i##_GERRORN, CMDQ_ERR, 0, 1) \
+ FIELD(VI_VCMDQ##i##_GERRORN, CONS_DRAM_WR_ABT_ERR, 1, 1) \
+ FIELD(VI_VCMDQ##i##_GERRORN, CMDQ_INIT_ERR, 2, 1)
+
+ A_VI_VCMDQi_GERRORN(0)
+ /* Omitting [1~126] as not being directly called */
+ A_VI_VCMDQi_GERRORN(127)
+
+#define A_VI_VCMDQi_BASE_L(i) \
+ REG32(VI_VCMDQ##i##_BASE_L, 0x40000 + i * 0x80) \
+ FIELD(VI_VCMDQ##i##_BASE_L, LOG2SIZE, 0, 5) \
+ FIELD(VI_VCMDQ##i##_BASE_L, ADDR, 5, 27)
+
+ A_VI_VCMDQi_BASE_L(0)
+ /* Omitting [1~126] as not being directly called */
+ A_VI_VCMDQi_BASE_L(127)
+
+#define A_VI_VCMDQi_BASE_H(i) \
+ REG32(VI_VCMDQ##i##_BASE_H, 0x40000 + 0x4 + i * 0x80) \
+ FIELD(VI_VCMDQ##i##_BASE_H, ADDR, 0, 16)
+
+ A_VI_VCMDQi_BASE_H(0)
+ /* Omitting [1~126] as not being directly called */
+ A_VI_VCMDQi_BASE_H(127)
+
+#define A_VI_VCMDQi_CONS_INDX_BASE_DRAM_L(i) \
+ REG32(VI_VCMDQ##i##_CONS_INDX_BASE_DRAM_L, 0x40000 + 0x8 + i * 0x80) \
+ FIELD(VI_VCMDQ##i##_CONS_INDX_BASE_DRAM_L, ADDR, 0, 32)
+
+ A_VI_VCMDQi_CONS_INDX_BASE_DRAM_L(0)
+ /* Omitting [1~126] as not being directly called */
+ A_VI_VCMDQi_CONS_INDX_BASE_DRAM_L(127)
+
+#define A_VI_VCMDQi_CONS_INDX_BASE_DRAM_H(i) \
+ REG32(VI_VCMDQ##i##_CONS_INDX_BASE_DRAM_H, 0x40000 + 0xc + i * 0x80) \
+ FIELD(VI_VCMDQ##i##_CONS_INDX_BASE_DRAM_H, ADDR, 0, 16)
+
+ A_VI_VCMDQi_CONS_INDX_BASE_DRAM_H(0)
+ /* Omitting [1~126] as not being directly called */
+ A_VI_VCMDQi_CONS_INDX_BASE_DRAM_H(127)
+
#ifdef CONFIG_TEGRA241_CMDQV
bool tegra241_cmdqv_alloc_viommu(SMMUv3State *s, HostIOMMUDeviceIOMMUFD *idev,
uint32_t *out_viommu_id, Error **errp);
--
2.43.0
* [RFC PATCH 08/16] system/physmem: Add helper to check whether a guest PA maps to RAM
2025-12-10 13:37 [RFC PATCH 00/16] hw/arm: Introduce Tegra241 CMDQV support for accelerated SMMUv3 Shameer Kolothum
` (6 preceding siblings ...)
2025-12-10 13:37 ` [RFC PATCH 07/16] hw/arm/tegra241-cmdqv: Add read emulation support for registers Shameer Kolothum
@ 2025-12-10 13:37 ` Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 09/16] hw/arm/tegra241-cmdqv: Add write emulation for registers Shameer Kolothum
` (8 subsequent siblings)
16 siblings, 0 replies; 19+ messages in thread
From: Shameer Kolothum @ 2025-12-10 13:37 UTC (permalink / raw)
To: qemu-arm, qemu-devel
Cc: eric.auger, peter.maydell, nicolinc, nathanc, mochs, jgg,
jonathan.cameron, zhangfei.gao, zhenzhong.duan, kjaju
From: Nicolin Chen <nicolinc@nvidia.com>
Introduce cpu_physical_memory_is_ram(), a helper that performs an
address_space translation and returns whether the resolved MemoryRegion is
backed by RAM.
This will be used by the upcoming Tegra241 CMDQV support to validate
guest-provided VCMDQ buffer addresses.
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Shameer Kolothum <skolothumtho@nvidia.com>
---
include/exec/cpu-common.h | 2 ++
system/physmem.c | 12 ++++++++++++
2 files changed, 14 insertions(+)
diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
index e0be4ee2b8..76b91d1b9b 100644
--- a/include/exec/cpu-common.h
+++ b/include/exec/cpu-common.h
@@ -148,6 +148,8 @@ void qemu_flush_coalesced_mmio_buffer(void);
typedef int (RAMBlockIterFunc)(RAMBlock *rb, void *opaque);
+bool cpu_physical_memory_is_ram(hwaddr phys_addr);
+
int qemu_ram_foreach_block(RAMBlockIterFunc func, void *opaque);
/* vl.c */
diff --git a/system/physmem.c b/system/physmem.c
index c9869e4049..1f6c821a0e 100644
--- a/system/physmem.c
+++ b/system/physmem.c
@@ -4068,6 +4068,18 @@ int cpu_memory_rw_debug(CPUState *cpu, vaddr addr,
return 0;
}
+bool cpu_physical_memory_is_ram(hwaddr phys_addr)
+{
+ MemoryRegion *mr;
+ hwaddr l = 1;
+
+ RCU_READ_LOCK_GUARD();
+ mr = address_space_translate(&address_space_memory, phys_addr, &phys_addr,
+ &l, false, MEMTXATTRS_UNSPECIFIED);
+
+ return memory_region_is_ram(mr);
+}
+
int qemu_ram_foreach_block(RAMBlockIterFunc func, void *opaque)
{
RAMBlock *block;
--
2.43.0
* [RFC PATCH 09/16] hw/arm/tegra241-cmdqv: Add write emulation for registers
2025-12-10 13:37 [RFC PATCH 00/16] hw/arm: Introduce Tegra241 CMDQV support for accelerated SMMUv3 Shameer Kolothum
` (7 preceding siblings ...)
2025-12-10 13:37 ` [RFC PATCH 08/16] system/physmem: Add helper to check whether a guest PA maps to RAM Shameer Kolothum
@ 2025-12-10 13:37 ` Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 10/16] hw/arm/tegra241-cmdqv: Allocate vEVENTQ object Shameer Kolothum
` (7 subsequent siblings)
16 siblings, 0 replies; 19+ messages in thread
From: Shameer Kolothum @ 2025-12-10 13:37 UTC (permalink / raw)
To: qemu-arm, qemu-devel
Cc: eric.auger, peter.maydell, nicolinc, nathanc, mochs, jgg,
jonathan.cameron, zhangfei.gao, zhenzhong.duan, kjaju
From: Nicolin Chen <nicolinc@nvidia.com>
Introduce write handling for the VINTF and VCMDQ MMIO regions, including
status/config updates, queue index tracking, and BASE_L/BASE_H
processing. Writes to VCMDQ BASE_L/BASE_H trigger allocation or
teardown of an IOMMUFD HW queue.
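As a rough model of the BASE_L/BASE_H handling (the mask and the size formula mirror this patch; the helper names are invented for the sketch), the emulation assembles a 64-bit base register from the guest's two 32-bit writes and derives the queue size from the LOG2SIZE field:

```c
#include <assert.h>
#include <stdint.h>

#define BASE_L_LOG2SIZE_MASK 0x1fULL  /* VCMDQ0_BASE_L LOG2SIZE, bits [4:0] */

/* A 32-bit write to BASE_L replaces the low half of the cached value. */
static uint64_t write_base_l(uint64_t reg, uint32_t v)
{
    return (reg & 0xffffffff00000000ULL) | v;
}

/* A 32-bit write to BASE_H replaces the high half. */
static uint64_t write_base_h(uint64_t reg, uint32_t v)
{
    return (reg & 0xffffffffULL) | ((uint64_t)v << 32);
}

/* Matches tegra241_cmdqv_setup_vcmdq(): size = 1 << (LOG2SIZE + 4) */
static uint64_t queue_size(uint64_t base)
{
    return 1ULL << ((base & BASE_L_LOG2SIZE_MASK) + 4);
}
```

Either half-write triggers the setup path, which is why the invalid-address check there silently ignores the intermediate state.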
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Shameer Kolothum <skolothumtho@nvidia.com>
---
hw/arm/tegra241-cmdqv.c | 213 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 213 insertions(+)
diff --git a/hw/arm/tegra241-cmdqv.c b/hw/arm/tegra241-cmdqv.c
index 185ef957bc..5e9a980d27 100644
--- a/hw/arm/tegra241-cmdqv.c
+++ b/hw/arm/tegra241-cmdqv.c
@@ -210,11 +210,158 @@ static uint64_t tegra241_cmdqv_read(void *opaque, hwaddr offset, unsigned size)
}
}
+/* Note that offset aligns down to 0x1000 */
+static void tegra241_cmdqv_write_vintf(Tegra241CMDQV *cmdqv, hwaddr offset,
+ uint64_t value, unsigned size)
+{
+ switch (offset) {
+ case A_VINTF0_CONFIG:
+ /* Strip off HYP_OWN setting from guest kernel */
+ value &= ~R_VINTF0_CONFIG_HYP_OWN_MASK;
+
+ cmdqv->vintf_config = value;
+ if (value & R_VINTF0_CONFIG_ENABLE_MASK) {
+ cmdqv->vintf_status |= R_VINTF0_STATUS_ENABLE_OK_MASK;
+ } else {
+ cmdqv->vintf_status &= ~R_VINTF0_STATUS_ENABLE_OK_MASK;
+ }
+ break;
+ default:
+ qemu_log_mask(LOG_UNIMP, "%s unhandled write access at 0x%" PRIx64 "\n",
+ __func__, offset);
+ return;
+ }
+}
+
+static bool tegra241_cmdqv_setup_vcmdq(Tegra241CMDQV *cmdqv, int index,
+ Error **errp)
+{
+ SMMUv3State *smmu = cmdqv->smmu;
+ SMMUv3AccelState *s_accel = smmu->s_accel;
+ uint64_t base_mask = (uint64_t)R_VCMDQ0_BASE_L_ADDR_MASK |
+ (uint64_t)R_VCMDQ0_BASE_H_ADDR_MASK << 32;
+ uint64_t addr = cmdqv->vcmdq_base[index] & base_mask;
+ uint64_t log2 = cmdqv->vcmdq_base[index] & R_VCMDQ0_BASE_L_LOG2SIZE_MASK;
+ uint64_t size = 1ULL << (log2 + 4);
+ IOMMUFDHWqueue *vcmdq = cmdqv->vcmdq[index];
+ IOMMUFDViommu *viommu;
+ IOMMUFDHWqueue *hw_queue;
+ uint32_t hw_queue_id;
+
+ /* Ignore any invalid address. This may come as part of reset etc */
+ if (!cpu_physical_memory_is_ram(addr)) {
+ return true;
+ }
+
+ if (vcmdq) {
+ iommufd_backend_free_id(s_accel->viommu.iommufd, vcmdq->hw_queue_id);
+ cmdqv->vcmdq[index] = NULL;
+ g_free(vcmdq);
+ }
+
+ viommu = &s_accel->viommu;
+ if (!iommufd_backend_alloc_hw_queue(viommu->iommufd, viommu->viommu_id,
+ IOMMU_HW_QUEUE_TYPE_TEGRA241_CMDQV,
+ index, addr, size, &hw_queue_id,
+ errp)) {
+ return false;
+ }
+ hw_queue = g_new(IOMMUFDHWqueue, 1);
+ hw_queue->hw_queue_id = hw_queue_id;
+ hw_queue->viommu = viommu;
+
+ cmdqv->vcmdq[index] = hw_queue;
+ return true;
+}
+
+/* Note that offset aligns down to 0x10000 */
+static void
+tegra241_cmdqv_write_vcmdq(Tegra241CMDQV *cmdqv, hwaddr offset, int index,
+ uint64_t value, unsigned size, Error **errp)
+{
+ uint32_t *ptr = NULL;
+ uint64_t off;
+
+ if (cmdqv->vcmdq_page0) {
+ off = (0x80 * index) + (offset - 0x10000);
+ ptr = (uint32_t *)(cmdqv->vcmdq_page0 + off);
+ }
+
+ switch (offset) {
+ case A_VCMDQ0_CONS_INDX:
+ if (ptr) {
+ *ptr = value;
+ }
+ cmdqv->vcmdq_cons_indx[index] = value;
+ return;
+ case A_VCMDQ0_PROD_INDX:
+ if (ptr) {
+ *ptr = value;
+ }
+ cmdqv->vcmdq_prod_indx[index] = (uint32_t)value;
+ return;
+ case A_VCMDQ0_CONFIG:
+ if (ptr) {
+ *ptr = (uint32_t)value;
+ } else {
+ if (value & R_VCMDQ0_CONFIG_CMDQ_EN_MASK) {
+ cmdqv->vcmdq_status[index] |= R_VCMDQ0_STATUS_CMDQ_EN_OK_MASK;
+ } else {
+ cmdqv->vcmdq_status[index] &= ~R_VCMDQ0_STATUS_CMDQ_EN_OK_MASK;
+ }
+ }
+ cmdqv->vcmdq_config[index] = (uint32_t)value;
+ return;
+ case A_VCMDQ0_GERRORN:
+ if (ptr) {
+ *ptr = (uint32_t)value;
+ }
+ cmdqv->vcmdq_gerrorn[index] = (uint32_t)value;
+ return;
+ case A_VCMDQ0_BASE_L:
+ if (size == 8) {
+ cmdqv->vcmdq_base[index] = value;
+ } else if (size == 4) {
+ cmdqv->vcmdq_base[index] =
+ (cmdqv->vcmdq_base[index] & 0xffffffff00000000ULL) |
+ (value & 0xffffffffULL);
+ }
+ tegra241_cmdqv_setup_vcmdq(cmdqv, index, errp);
+ return;
+ case A_VCMDQ0_BASE_H:
+ cmdqv->vcmdq_base[index] =
+ (cmdqv->vcmdq_base[index] & 0xffffffffULL) |
+ ((uint64_t)value << 32);
+ tegra241_cmdqv_setup_vcmdq(cmdqv, index, errp);
+ return;
+ case A_VCMDQ0_CONS_INDX_BASE_DRAM_L:
+ if (size == 8) {
+ cmdqv->vcmdq_cons_indx_base[index] = value;
+ } else if (size == 4) {
+ cmdqv->vcmdq_cons_indx_base[index] =
+ (cmdqv->vcmdq_cons_indx_base[index] & 0xffffffff00000000ULL) |
+ (value & 0xffffffffULL);
+ }
+ return;
+ case A_VCMDQ0_CONS_INDX_BASE_DRAM_H:
+ cmdqv->vcmdq_cons_indx_base[index] =
+ (cmdqv->vcmdq_cons_indx_base[index] & 0xffffffffULL) |
+ ((uint64_t)value << 32);
+ return;
+ default:
+ qemu_log_mask(LOG_UNIMP,
+ "%s unhandled write access at 0x%" PRIx64 "\n",
+ __func__, offset);
+ return;
+ }
+}
+
static void tegra241_cmdqv_write(void *opaque, hwaddr offset, uint64_t value,
unsigned size)
{
Tegra241CMDQV *cmdqv = (Tegra241CMDQV *)opaque;
Error *local_err = NULL;
+ int index;
if (!cmdqv->vcmdq_page0) {
tegra241_cmdqv_init_vcmdq_page0(cmdqv, &local_err);
@@ -223,6 +370,72 @@ static void tegra241_cmdqv_write(void *opaque, hwaddr offset, uint64_t value,
local_err = NULL;
}
}
+
+ if (offset > TEGRA241_CMDQV_IO_LEN) {
+ qemu_log_mask(LOG_UNIMP,
+ "%s offset 0x%" PRIx64 " off limit (0x50000)\n", __func__,
+ offset);
+ return;
+ }
+
+ switch (offset) {
+ case A_CONFIG:
+ cmdqv->config = value;
+ if (value & R_CONFIG_CMDQV_EN_MASK) {
+ cmdqv->status |= R_STATUS_CMDQV_ENABLED_MASK;
+ } else {
+ cmdqv->status &= ~R_STATUS_CMDQV_ENABLED_MASK;
+ }
+ break;
+ case A_VI_INT_MASK ... A_VI_INT_MASK_1:
+ cmdqv->vi_int_mask[(offset - A_VI_INT_MASK) / 4] = value;
+ break;
+ case A_CMDQ_ALLOC_MAP_0 ... A_CMDQ_ALLOC_MAP_127:
+ cmdqv->cmdq_alloc_map[(offset - A_CMDQ_ALLOC_MAP_0) / 4] = value;
+ break;
+ case A_VINTF0_CONFIG ... A_VINTF0_LVCMDQ_ERR_MAP_3:
+ tegra241_cmdqv_write_vintf(cmdqv, offset, value, size);
+ break;
+ case A_VI_VCMDQ0_CONS_INDX ... A_VI_VCMDQ127_GERRORN:
+ offset -= 0x20000;
+ QEMU_FALLTHROUGH;
+ case A_VCMDQ0_CONS_INDX ... A_VCMDQ127_GERRORN:
+ /*
+ * Align offset down to 0x10000 while extracting the index:
+ * VCMDQ0_CONS_INDX (0x10000) => 0x10000, 0
+ * VCMDQ1_CONS_INDX (0x10080) => 0x10000, 1
+ * VCMDQ2_CONS_INDX (0x10100) => 0x10000, 2
+ * ...
+ * VCMDQ127_CONS_INDX (0x13f80) => 0x10000, 127
+ */
+ index = (offset - 0x10000) / 0x80;
+ tegra241_cmdqv_write_vcmdq(cmdqv, offset - 0x80 * index, index, value,
+ size, &local_err);
+ break;
+ case A_VI_VCMDQ0_BASE_L ... A_VI_VCMDQ127_CONS_INDX_BASE_DRAM_H:
+ offset -= 0x20000;
+ QEMU_FALLTHROUGH;
+ case A_VCMDQ0_BASE_L ... A_VCMDQ127_CONS_INDX_BASE_DRAM_H:
+ /*
+ * Align offset down to 0x20000 while extracting the index:
+ * VCMDQ0_BASE_L (0x20000) => 0x20000, 0
+ * VCMDQ1_BASE_L (0x20080) => 0x20000, 1
+ * VCMDQ2_BASE_L (0x20100) => 0x20000, 2
+ * ...
+ * VCMDQ127_BASE_L (0x23f80) => 0x20000, 127
+ */
+ index = (offset - 0x20000) / 0x80;
+ tegra241_cmdqv_write_vcmdq(cmdqv, offset - 0x80 * index, index, value,
+ size, &local_err);
+ break;
+ default:
+ qemu_log_mask(LOG_UNIMP, "%s unhandled write access at 0x%" PRIx64 "\n",
+ __func__, offset);
+ }
+
+ if (local_err) {
+ error_report_err(local_err);
+ }
}
static const MemoryRegionOps mmio_cmdqv_ops = {
--
2.43.0
* [RFC PATCH 10/16] hw/arm/tegra241-cmdqv: Allocate vEVENTQ object
2025-12-10 13:37 [RFC PATCH 00/16] hw/arm: Introduce Tegra241 CMDQV support for accelerated SMMUv3 Shameer Kolothum
` (8 preceding siblings ...)
2025-12-10 13:37 ` [RFC PATCH 09/16] hw/arm/tegra241-cmdqv: Add write emulation for registers Shameer Kolothum
@ 2025-12-10 13:37 ` Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 11/16] hw/arm/tegra241-cmdqv: Read and propagate Tegra241 CMDQV errors Shameer Kolothum
` (6 subsequent siblings)
16 siblings, 0 replies; 19+ messages in thread
From: Shameer Kolothum @ 2025-12-10 13:37 UTC (permalink / raw)
To: qemu-arm, qemu-devel
Cc: eric.auger, peter.maydell, nicolinc, nathanc, mochs, jgg,
jonathan.cameron, zhangfei.gao, zhenzhong.duan, kjaju
Allocate a Tegra241 CMDQV-type vEVENTQ object so that any host-side
errors related to the CMDQV can be received and propagated back to the
guest.
Event read and propagation will be added in a later patch.
Signed-off-by: Shameer Kolothum <skolothumtho@nvidia.com>
---
hw/arm/tegra241-cmdqv.c | 51 +++++++++++++++++++++++++++++++++++++++++
hw/arm/tegra241-cmdqv.h | 1 +
2 files changed, 52 insertions(+)
diff --git a/hw/arm/tegra241-cmdqv.c b/hw/arm/tegra241-cmdqv.c
index 5e9a980d27..812b027923 100644
--- a/hw/arm/tegra241-cmdqv.c
+++ b/hw/arm/tegra241-cmdqv.c
@@ -136,6 +136,52 @@ static uint64_t tegra241_cmdqv_read_vcmdq(Tegra241CMDQV *cmdqv, hwaddr offset,
return 0;
}
}
+
+static void tegra241_cmdqv_free_veventq(Tegra241CMDQV *cmdqv)
+{
+ SMMUv3State *smmu = cmdqv->smmu;
+ SMMUv3AccelState *s_accel = smmu->s_accel;
+ IOMMUFDViommu *viommu = &s_accel->viommu;
+ IOMMUFDVeventq *veventq = cmdqv->veventq;
+
+ if (!veventq) {
+ return;
+ }
+
+ iommufd_backend_free_id(viommu->iommufd, veventq->veventq_id);
+ g_free(veventq);
+ cmdqv->veventq = NULL;
+}
+
+static bool tegra241_cmdqv_alloc_veventq(Tegra241CMDQV *cmdqv, Error **errp)
+{
+ SMMUv3State *smmu = cmdqv->smmu;
+ SMMUv3AccelState *s_accel = smmu->s_accel;
+ IOMMUFDViommu *viommu = &s_accel->viommu;
+ IOMMUFDVeventq *veventq;
+ uint32_t veventq_id;
+ uint32_t veventq_fd;
+
+ if (cmdqv->veventq) {
+ return true;
+ }
+
+ if (!iommufd_backend_alloc_veventq(viommu->iommufd, viommu->viommu_id,
+ IOMMU_VEVENTQ_TYPE_TEGRA241_CMDQV,
+ 1 << 16, &veventq_id, &veventq_fd,
+ errp)) {
+ error_append_hint(errp, "Tegra241 CMDQV: failed to alloc veventq");
+ return false;
+ }
+
+ veventq = g_new(IOMMUFDVeventq, 1);
+ veventq->veventq_id = veventq_id;
+ veventq->veventq_fd = veventq_fd;
+ veventq->viommu = viommu;
+ cmdqv->veventq = veventq;
+ return true;
+}
+
static uint64_t tegra241_cmdqv_read(void *opaque, hwaddr offset, unsigned size)
{
Tegra241CMDQV *cmdqv = (Tegra241CMDQV *)opaque;
@@ -259,11 +305,16 @@ static bool tegra241_cmdqv_setup_vcmdq(Tegra241CMDQV *cmdqv, int index,
g_free(vcmdq);
}
+ if (!tegra241_cmdqv_alloc_veventq(cmdqv, errp)) {
+ return false;
+ }
+
viommu = &s_accel->viommu;
if (!iommufd_backend_alloc_hw_queue(viommu->iommufd, viommu->viommu_id,
IOMMU_HW_QUEUE_TYPE_TEGRA241_CMDQV,
index, addr, size, &hw_queue_id,
errp)) {
+ tegra241_cmdqv_free_veventq(cmdqv);
return false;
}
hw_queue = g_new(IOMMUFDHWqueue, 1);
diff --git a/hw/arm/tegra241-cmdqv.h b/hw/arm/tegra241-cmdqv.h
index 4972e367f6..ba7f2a0b1b 100644
--- a/hw/arm/tegra241-cmdqv.h
+++ b/hw/arm/tegra241-cmdqv.h
@@ -24,6 +24,7 @@ typedef struct Tegra241CMDQV {
MemoryRegion mmio_vintf_page;
void *vcmdq_page0;
IOMMUFDHWqueue *vcmdq[128];
+ IOMMUFDVeventq *veventq;
/* Register Cache */
uint32_t config;
--
2.43.0
* [RFC PATCH 11/16] hw/arm/tegra241-cmdqv: Read and propagate Tegra241 CMDQV errors
2025-12-10 13:37 [RFC PATCH 00/16] hw/arm: Introduce Tegra241 CMDQV support for accelerated SMMUv3 Shameer Kolothum
` (9 preceding siblings ...)
2025-12-10 13:37 ` [RFC PATCH 10/16] hw/arm/tegra241-cmdqv: Allocate vEVENTQ object Shameer Kolothum
@ 2025-12-10 13:37 ` Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 12/16] hw/arm/tegra241-cmdqv: Add reset handler Shameer Kolothum
` (5 subsequent siblings)
16 siblings, 0 replies; 19+ messages in thread
From: Shameer Kolothum @ 2025-12-10 13:37 UTC (permalink / raw)
To: qemu-arm, qemu-devel
Cc: eric.auger, peter.maydell, nicolinc, nathanc, mochs, jgg,
jonathan.cameron, zhangfei.gao, zhenzhong.duan, kjaju
Install an event handler on the CMDQV vEVENTQ fd to read host-received
CMDQV errors and propagate them to the guest.
The handler runs in QEMU's main loop, using a non-blocking fd registered
via qemu_set_fd_handler().
Signed-off-by: Shameer Kolothum <skolothumtho@nvidia.com>
---
hw/arm/tegra241-cmdqv.c | 80 +++++++++++++++++++++++++++++++++++++++++
hw/arm/tegra241-cmdqv.h | 2 ++
hw/arm/trace-events | 3 ++
3 files changed, 85 insertions(+)
diff --git a/hw/arm/tegra241-cmdqv.c b/hw/arm/tegra241-cmdqv.c
index 812b027923..5b8a7bdff2 100644
--- a/hw/arm/tegra241-cmdqv.c
+++ b/hw/arm/tegra241-cmdqv.c
@@ -8,9 +8,12 @@
*/
#include "qemu/osdep.h"
+#include "qemu/error-report.h"
#include "qemu/log.h"
+#include "trace.h"
#include "hw/arm/smmuv3.h"
+#include "hw/irq.h"
#include "smmuv3-accel.h"
#include "tegra241-cmdqv.h"
@@ -137,6 +140,79 @@ static uint64_t tegra241_cmdqv_read_vcmdq(Tegra241CMDQV *cmdqv, hwaddr offset,
}
}
+static void tegra241_cmdqv_event_read(void *opaque)
+{
+ Tegra241CMDQV *cmdqv = opaque;
+ struct {
+ struct iommufd_vevent_header hdr;
+ struct iommu_vevent_tegra241_cmdqv vevent;
+ } buf;
+ ssize_t readsz = sizeof(buf);
+ uint32_t last_seq = cmdqv->last_event_seq;
+ ssize_t bytes;
+
+ bytes = read(cmdqv->veventq->veventq_fd, &buf, readsz);
+ if (bytes <= 0) {
+ if (errno == EAGAIN || errno == EINTR) {
+ return;
+ }
+ error_report("Tegra241 CMDQV: vEVENTQ: read failed (%s)",
+ strerror(errno));
+ return;
+ }
+
+ if (bytes < readsz) {
+ error_report("Tegra241 CMDQV: vEVENTQ: incomplete read (%zd/%zd bytes)",
+ bytes, readsz);
+ return;
+ }
+
+ if (buf.hdr.flags & IOMMU_VEVENTQ_FLAG_LOST_EVENTS) {
+ error_report("Tegra241 CMDQV: vEVENTQ has lost events");
+ return;
+ }
+
+ /* Check sequence in hdr for lost events if any */
+ if (cmdqv->event_start) {
+ uint32_t expected = (last_seq == INT_MAX) ? 0 : last_seq + 1;
+
+ if (buf.hdr.sequence != expected) {
+ uint32_t delta;
+
+ if (buf.hdr.sequence >= last_seq) {
+ delta = buf.hdr.sequence - last_seq;
+ } else {
+ /* Handle wraparound from INT_MAX */
+ delta = (INT_MAX - last_seq) + buf.hdr.sequence + 1;
+ }
+ error_report("Tegra241 CMDQV: vEVENTQ: detected lost %u event(s)",
+ delta - 1);
+ }
+ }
+
+ if (buf.vevent.lvcmdq_err_map[0] || buf.vevent.lvcmdq_err_map[1]) {
+ cmdqv->vintf_cmdq_err_map[0] =
+ buf.vevent.lvcmdq_err_map[0] & 0xffffffff;
+ cmdqv->vintf_cmdq_err_map[1] =
+ (buf.vevent.lvcmdq_err_map[0] >> 32) & 0xffffffff;
+ cmdqv->vintf_cmdq_err_map[2] =
+ buf.vevent.lvcmdq_err_map[1] & 0xffffffff;
+ cmdqv->vintf_cmdq_err_map[3] =
+ (buf.vevent.lvcmdq_err_map[1] >> 32) & 0xffffffff;
+ for (int i = 0; i < 4; i++) {
+ cmdqv->cmdq_err_map[i] = cmdqv->vintf_cmdq_err_map[i];
+ }
+ cmdqv->vi_err_map[0] |= 0x1;
+ qemu_irq_pulse(cmdqv->irq);
+ trace_tegra241_cmdqv_err_map(
+ cmdqv->vintf_cmdq_err_map[3], cmdqv->vintf_cmdq_err_map[2],
+ cmdqv->vintf_cmdq_err_map[1], cmdqv->vintf_cmdq_err_map[0]);
+ }
+
+ cmdqv->last_event_seq = buf.hdr.sequence;
+ cmdqv->event_start = true;
+}
+
static void tegra241_cmdqv_free_veventq(Tegra241CMDQV *cmdqv)
{
SMMUv3State *smmu = cmdqv->smmu;
@@ -179,6 +255,10 @@ static bool tegra241_cmdqv_alloc_veventq(Tegra241CMDQV *cmdqv, Error **errp)
veventq->veventq_fd = veventq_fd;
veventq->viommu = viommu;
cmdqv->veventq = veventq;
+
+ /* Set up event handler for veventq fd */
+ fcntl(veventq_fd, F_SETFL, O_NONBLOCK);
+ qemu_set_fd_handler(veventq_fd, tegra241_cmdqv_event_read, NULL, cmdqv);
return true;
}
diff --git a/hw/arm/tegra241-cmdqv.h b/hw/arm/tegra241-cmdqv.h
index ba7f2a0b1b..97eaef8a72 100644
--- a/hw/arm/tegra241-cmdqv.h
+++ b/hw/arm/tegra241-cmdqv.h
@@ -25,6 +25,8 @@ typedef struct Tegra241CMDQV {
void *vcmdq_page0;
IOMMUFDHWqueue *vcmdq[128];
IOMMUFDVeventq *veventq;
+ uint32_t last_event_seq;
+ bool event_start;
/* Register Cache */
uint32_t config;
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
index 3457536fb0..76bda0efef 100644
--- a/hw/arm/trace-events
+++ b/hw/arm/trace-events
@@ -72,6 +72,9 @@ smmuv3_accel_unset_iommu_device(int devfn, uint32_t devid) "devfn=0x%x (idev dev
smmuv3_accel_translate_ste(uint32_t vsid, uint32_t hwpt_id, uint64_t ste_1, uint64_t ste_0) "vSID=0x%x hwpt_id=0x%x ste=%"PRIx64":%"PRIx64
smmuv3_accel_install_ste(uint32_t vsid, const char * type, uint32_t hwpt_id) "vSID=0x%x ste type=%s hwpt_id=0x%x"
+# tegra241-cmdqv
+tegra241_cmdqv_err_map(uint32_t map3, uint32_t map2, uint32_t map1, uint32_t map0) "hw irq received. error (hex) maps: %04X:%04X:%04X:%04X"
+
# strongarm.c
strongarm_uart_update_parameters(const char *label, int speed, char parity, int data_bits, int stop_bits) "%s speed=%d parity=%c data=%d stop=%d"
strongarm_ssp_read_underrun(void) "SSP rx underrun"
--
2.43.0
* [RFC PATCH 12/16] hw/arm/tegra241-cmdqv: Add reset handler
2025-12-10 13:37 [RFC PATCH 00/16] hw/arm: Introduce Tegra241 CMDQV support for accelerated SMMUv3 Shameer Kolothum
` (10 preceding siblings ...)
2025-12-10 13:37 ` [RFC PATCH 11/16] hw/arm/tegra241-cmdqv: Read and propagate Tegra241 CMDQV errors Shameer Kolothum
@ 2025-12-10 13:37 ` Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 13/16] hw/arm/tegra241-cmdqv: Limit queue size based on backend page size Shameer Kolothum
` (4 subsequent siblings)
16 siblings, 0 replies; 19+ messages in thread
From: Shameer Kolothum @ 2025-12-10 13:37 UTC (permalink / raw)
To: qemu-arm, qemu-devel
Cc: eric.auger, peter.maydell, nicolinc, nathanc, mochs, jgg,
jonathan.cameron, zhangfei.gao, zhenzhong.duan, kjaju
From: Nicolin Chen <nicolinc@nvidia.com>
Introduce a reset handler for the Tegra241 CMDQV and initialize its
register state.
CMDQV is initialized early during guest boot, so the handler verifies
that at least one cold-plugged device is attached to the associated vIOMMU
before proceeding. This is required to retrieve the host CMDQV info and
validate it against what the QEMU implementation supports.
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Shameer Kolothum <skolothumtho@nvidia.com>
---
hw/arm/smmuv3.c | 1 +
hw/arm/tegra241-cmdqv.c | 105 ++++++++++++++++++++++++++++++++++++++++
hw/arm/tegra241-cmdqv.h | 7 +++
hw/arm/trace-events | 1 +
4 files changed, 114 insertions(+)
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index 02e1a925a4..ec8687d39a 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -1943,6 +1943,7 @@ static void smmu_reset_exit(Object *obj, ResetType type)
smmuv3_reset(s);
smmuv3_accel_reset(s);
+ tegra241_cmdqv_reset(s);
}
static bool smmu_validate_property(SMMUv3State *s, Error **errp)
diff --git a/hw/arm/tegra241-cmdqv.c b/hw/arm/tegra241-cmdqv.c
index 5b8a7bdff2..1f62b7627a 100644
--- a/hw/arm/tegra241-cmdqv.c
+++ b/hw/arm/tegra241-cmdqv.c
@@ -592,6 +592,111 @@ bool tegra241_cmdqv_alloc_viommu(SMMUv3State *s, HostIOMMUDeviceIOMMUFD *idev,
return true;
}
+static void tegra241_cmdqv_init_regs(SMMUv3State *s, Tegra241CMDQV *cmdqv)
+{
+ SMMUv3AccelState *s_accel = s->s_accel;
+ uint32_t data_type = IOMMU_HW_INFO_TYPE_TEGRA241_CMDQV;
+ struct iommu_hw_info_tegra241_cmdqv cmdqv_info;
+ SMMUv3AccelDevice *accel_dev;
+ Error *local_err = NULL;
+ uint64_t caps;
+ int i;
+
+ if (QLIST_EMPTY(&s_accel->device_list)) {
+ error_report("tegra241-cmdqv=on: requires at least one cold-plugged "
+ "vfio-pci device");
+ goto out_err;
+ }
+
+ accel_dev = QLIST_FIRST(&s_accel->device_list);
+ if (!iommufd_backend_get_device_info(accel_dev->idev->iommufd,
+ accel_dev->idev->devid,
+ &data_type, &cmdqv_info,
+ sizeof(cmdqv_info), &caps,
+ NULL, &local_err)) {
+ error_append_hint(&local_err, "Failed to get Host CMDQV device info");
+ error_report_err(local_err);
+ goto out_err;
+ }
+
+ if (data_type != IOMMU_HW_INFO_TYPE_TEGRA241_CMDQV) {
+ error_report("Wrong data type (%d) from Host CMDQV device info",
+ data_type);
+ goto out_err;
+ }
+ if (cmdqv_info.version != TEGRA241_CMDQV_VERSION) {
+ error_report("Wrong version (%d) from Host CMDQV device info",
+ cmdqv_info.version);
+ goto out_err;
+ }
+ if (cmdqv_info.log2vcmdqs != TEGRA241_CMDQV_NUM_CMDQ_LOG2) {
+ error_report("Wrong num of cmdqs (%d) from Host CMDQV device info",
cmdqv_info.log2vcmdqs);
+ goto out_err;
+ }
+ if (cmdqv_info.log2vsids != TEGRA241_CMDQV_NUM_SID_PER_VM_LOG2) {
+ error_report("Wrong num of SID per VM (%d) from Host CMDQV device info",
cmdqv_info.log2vsids);
+ goto out_err;
+ }
+
+ cmdqv->config = V_CONFIG_RESET;
+ cmdqv->param =
+ FIELD_DP32(cmdqv->param, PARAM, CMDQV_VER, TEGRA241_CMDQV_VERSION);
+ cmdqv->param = FIELD_DP32(cmdqv->param, PARAM, CMDQV_NUM_CMDQ_LOG2,
+ TEGRA241_CMDQV_NUM_CMDQ_LOG2);
+ cmdqv->param = FIELD_DP32(cmdqv->param, PARAM, CMDQV_NUM_SID_PER_VM_LOG2,
+ TEGRA241_CMDQV_NUM_SID_PER_VM_LOG2);
+ trace_tegra241_cmdqv_init_regs(cmdqv->param);
+ cmdqv->status = R_STATUS_CMDQV_ENABLED_MASK;
+ for (i = 0; i < 2; i++) {
+ cmdqv->vi_err_map[i] = 0;
+ cmdqv->vi_int_mask[i] = 0;
+ cmdqv->cmdq_err_map[i] = 0;
+ }
+ cmdqv->vintf_config = 0;
+ cmdqv->vintf_status = 0;
+ for (i = 0; i < 4; i++) {
+ cmdqv->vintf_cmdq_err_map[i] = 0;
+ }
+ for (i = 0; i < 128; i++) {
+ cmdqv->cmdq_alloc_map[i] = 0;
+ cmdqv->vcmdq_cons_indx[i] = 0;
+ cmdqv->vcmdq_prod_indx[i] = 0;
+ cmdqv->vcmdq_config[i] = 0;
+ cmdqv->vcmdq_status[i] = 0;
+ cmdqv->vcmdq_gerror[i] = 0;
+ cmdqv->vcmdq_gerrorn[i] = 0;
+ cmdqv->vcmdq_base[i] = 0;
+ cmdqv->vcmdq_cons_indx_base[i] = 0;
+ }
+ return;
+
+out_err:
+ exit(1);
+}
+
+void tegra241_cmdqv_reset(SMMUv3State *s)
+{
+ SMMUv3AccelState *s_accel = s->s_accel;
+ Tegra241CMDQV *cmdqv = s->cmdqv;
+ int i;
+
+ if (!s_accel || !cmdqv) {
+ return;
+ }
+
+ for (i = 127; i >= 0; i--) {
+ if (cmdqv->vcmdq[i]) {
+ iommufd_backend_free_id(s_accel->viommu.iommufd,
+ cmdqv->vcmdq[i]->hw_queue_id);
+ g_free(cmdqv->vcmdq[i]);
+ cmdqv->vcmdq[i] = NULL;
+ }
+ }
+ tegra241_cmdqv_init_regs(s, cmdqv);
+}
+
void tegra241_cmdqv_init(SMMUv3State *s)
{
SysBusDevice *sbd = SYS_BUS_DEVICE(OBJECT(s));
diff --git a/hw/arm/tegra241-cmdqv.h b/hw/arm/tegra241-cmdqv.h
index 97eaef8a72..0e8729c0b0 100644
--- a/hw/arm/tegra241-cmdqv.h
+++ b/hw/arm/tegra241-cmdqv.h
@@ -13,6 +13,9 @@
#include "hw/registerfields.h"
#include CONFIG_DEVICES
+#define TEGRA241_CMDQV_VERSION 0x1
+#define TEGRA241_CMDQV_NUM_CMDQ_LOG2 0x1
+#define TEGRA241_CMDQV_NUM_SID_PER_VM_LOG2 0x4
#define TEGRA241_CMDQV_IO_LEN 0x50000
typedef struct Tegra241CMDQV {
@@ -314,11 +317,15 @@ A_VINTFi_CONFIG(0)
#ifdef CONFIG_TEGRA241_CMDQV
bool tegra241_cmdqv_alloc_viommu(SMMUv3State *s, HostIOMMUDeviceIOMMUFD *idev,
uint32_t *out_viommu_id, Error **errp);
+void tegra241_cmdqv_reset(SMMUv3State *s);
void tegra241_cmdqv_init(SMMUv3State *s);
#else
static inline void tegra241_cmdqv_init(SMMUv3State *s)
{
}
+static inline void tegra241_cmdqv_reset(SMMUv3State *s)
+{
+}
static inline bool
tegra241_cmdqv_alloc_viommu(SMMUv3State *s, HostIOMMUDeviceIOMMUFD *idev,
uint32_t *out_viommu_id, Error **errp)
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
index 76bda0efef..ef495c040c 100644
--- a/hw/arm/trace-events
+++ b/hw/arm/trace-events
@@ -74,6 +74,7 @@ smmuv3_accel_install_ste(uint32_t vsid, const char * type, uint32_t hwpt_id) "vS
# tegra241-cmdqv
tegra241_cmdqv_err_map(uint32_t map3, uint32_t map2, uint32_t map1, uint32_t map0) "hw irq received. error (hex) maps: %04X:%04X:%04X:%04X"
+tegra241_cmdqv_init_regs(uint32_t param) "hw info received. param: 0x%04X"
# strongarm.c
strongarm_uart_update_parameters(const char *label, int speed, char parity, int data_bits, int stop_bits) "%s speed=%d parity=%c data=%d stop=%d"
--
2.43.0
* [RFC PATCH 13/16] hw/arm/tegra241-cmdqv: Limit queue size based on backend page size
2025-12-10 13:37 [RFC PATCH 00/16] hw/arm: Introduce Tegra241 CMDQV support for accelerated SMMUv3 Shameer Kolothum
` (11 preceding siblings ...)
2025-12-10 13:37 ` [RFC PATCH 12/16] hw/arm/tegra241-cmdqv: Add reset handler Shameer Kolothum
@ 2025-12-10 13:37 ` Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 14/16] virt-acpi-build: Rename AcpiIortSMMUv3Dev to AcpiSMMUv3Dev Shameer Kolothum
` (3 subsequent siblings)
16 siblings, 0 replies; 19+ messages in thread
From: Shameer Kolothum @ 2025-12-10 13:37 UTC (permalink / raw)
To: qemu-arm, qemu-devel
Cc: eric.auger, peter.maydell, nicolinc, nathanc, mochs, jgg,
jonathan.cameron, zhangfei.gao, zhenzhong.duan, kjaju
From: Nicolin Chen <nicolinc@nvidia.com>
CMDQV hardware reads guest queue memory using host physical addresses set
up via IOMMUFD. This requires that the guest queue memory be contiguous
not only in guest PA space but also in host PA space. With Tegra241 CMDQV
enabled, we must therefore only advertise a command queue size that the
host can safely back with physically contiguous memory. Allowing a queue
larger than the host backing page size could cause the hardware to DMA
across page boundaries, leading to faults.
Limit IDR1.CMDQS so the guest cannot configure a command queue that
exceeds the host's contiguous backing.
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Shameer Kolothum <skolothumtho@nvidia.com>
---
hw/arm/tegra241-cmdqv.c | 43 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 43 insertions(+)
diff --git a/hw/arm/tegra241-cmdqv.c b/hw/arm/tegra241-cmdqv.c
index 1f62b7627a..1996d899a1 100644
--- a/hw/arm/tegra241-cmdqv.c
+++ b/hw/arm/tegra241-cmdqv.c
@@ -11,10 +11,14 @@
#include "qemu/error-report.h"
#include "qemu/log.h"
#include "trace.h"
+#include <math.h>
#include "hw/arm/smmuv3.h"
#include "hw/irq.h"
#include "smmuv3-accel.h"
+#include "smmuv3-internal.h"
+#include "system/ramblock.h"
+#include "exec/ramlist.h"
#include "tegra241-cmdqv.h"
static bool tegra241_cmdqv_init_vcmdq_page0(Tegra241CMDQV *cmdqv, Error **errp)
@@ -592,6 +596,33 @@ bool tegra241_cmdqv_alloc_viommu(SMMUv3State *s, HostIOMMUDeviceIOMMUFD *idev,
return true;
}
+static size_t tegra241_cmdqv_min_ram_pagesize(void)
+{
+ RAMBlock *rb;
+ size_t pg, min_pg = SIZE_MAX;
+
+ RAMBLOCK_FOREACH(rb) {
+ MemoryRegion *mr = rb->mr;
+
+ /* Only consider real RAM regions */
+ if (!mr || !memory_region_is_ram(mr)) {
+ continue;
+ }
+
+ /* Skip RAM regions that are not backed by a memory-backend */
+ if (!object_dynamic_cast(mr->owner, TYPE_MEMORY_BACKEND)) {
+ continue;
+ }
+
+ pg = qemu_ram_pagesize(rb);
+ if (pg && pg < min_pg) {
+ min_pg = pg;
+ }
+ }
+
+ return (min_pg == SIZE_MAX) ? qemu_real_host_page_size() : min_pg;
+}
+
static void tegra241_cmdqv_init_regs(SMMUv3State *s, Tegra241CMDQV *cmdqv)
{
SMMUv3AccelState *s_accel = s->s_accel;
@@ -599,7 +630,9 @@ static void tegra241_cmdqv_init_regs(SMMUv3State *s, Tegra241CMDQV *cmdqv)
struct iommu_hw_info_tegra241_cmdqv cmdqv_info;
SMMUv3AccelDevice *accel_dev;
Error *local_err = NULL;
+ size_t pgsize;
uint64_t caps;
+ uint32_t val;
int i;
if (QLIST_EMPTY(&s_accel->device_list)) {
@@ -670,6 +703,16 @@ static void tegra241_cmdqv_init_regs(SMMUv3State *s, Tegra241CMDQV *cmdqv)
cmdqv->vcmdq_base[i] = 0;
cmdqv->vcmdq_cons_indx_base[i] = 0;
}
+
+ /*
+ * CMDQ must not cross a physical RAM backend page. Adjust CMDQS so the
+ * queue fits entirely within the smallest backend page size.
+ * FIXME: Migration support requires this to be taken care.
+ */
+ pgsize = tegra241_cmdqv_min_ram_pagesize();
+ val = FIELD_EX32(s->idr[1], IDR1, CMDQS);
+ s->idr[1] = FIELD_DP32(s->idr[1], IDR1, CMDQS, MIN(log2(pgsize) - 4, val));
+
return;
out_err:
--
2.43.0
* [RFC PATCH 14/16] virt-acpi-build: Rename AcpiIortSMMUv3Dev to AcpiSMMUv3Dev
2025-12-10 13:37 [RFC PATCH 00/16] hw/arm: Introduce Tegra241 CMDQV support for accelerated SMMUv3 Shameer Kolothum
` (12 preceding siblings ...)
2025-12-10 13:37 ` [RFC PATCH 13/16] hw/arm/tegra241-cmdqv: Limit queue size based on backend page size Shameer Kolothum
@ 2025-12-10 13:37 ` Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 15/16] hw/arm/virt-acpi: Advertise Tegra241 CMDQV nodes in DSDT Shameer Kolothum
` (2 subsequent siblings)
16 siblings, 0 replies; 19+ messages in thread
From: Shameer Kolothum @ 2025-12-10 13:37 UTC (permalink / raw)
To: qemu-arm, qemu-devel
Cc: eric.auger, peter.maydell, nicolinc, nathanc, mochs, jgg,
jonathan.cameron, zhangfei.gao, zhenzhong.duan, kjaju
Rename struct AcpiIortSMMUv3Dev to AcpiSMMUv3Dev so that it is not
specific to IORT. A subsequent Tegra241 CMDQV patch will reuse the same
struct when building the CMDQV DSDT nodes.
No functional changes intended.
Signed-off-by: Shameer Kolothum <skolothumtho@nvidia.com>
---
hw/arm/virt-acpi-build.c | 36 ++++++++++++++++++------------------
1 file changed, 18 insertions(+), 18 deletions(-)
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index 1e3779991e..4f8d36dae0 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -339,7 +339,7 @@ static int iort_idmap_compare(gconstpointer a, gconstpointer b)
return idmap_a->input_base - idmap_b->input_base;
}
-typedef struct AcpiIortSMMUv3Dev {
+typedef struct AcpiSMMUv3Dev {
int irq;
hwaddr base;
GArray *rc_smmu_idmaps;
@@ -347,16 +347,16 @@ typedef struct AcpiIortSMMUv3Dev {
size_t offset;
bool accel;
bool ats;
-} AcpiIortSMMUv3Dev;
+} AcpiSMMUv3Dev;
/*
- * Populate the struct AcpiIortSMMUv3Dev for the legacy SMMUv3 and
+ * Populate the struct AcpiSMMUv3Dev for the legacy SMMUv3 and
* return the total number of associated idmaps.
*/
static int populate_smmuv3_legacy_dev(GArray *sdev_blob)
{
VirtMachineState *vms = VIRT_MACHINE(qdev_get_machine());
- AcpiIortSMMUv3Dev sdev;
+ AcpiSMMUv3Dev sdev;
sdev.rc_smmu_idmaps = g_array_new(false, true, sizeof(AcpiIortIdMapping));
object_child_foreach_recursive(object_get_root(), iort_host_bridges,
@@ -376,8 +376,8 @@ static int populate_smmuv3_legacy_dev(GArray *sdev_blob)
static int smmuv3_dev_idmap_compare(gconstpointer a, gconstpointer b)
{
- AcpiIortSMMUv3Dev *sdev_a = (AcpiIortSMMUv3Dev *)a;
- AcpiIortSMMUv3Dev *sdev_b = (AcpiIortSMMUv3Dev *)b;
+ AcpiSMMUv3Dev *sdev_a = (AcpiSMMUv3Dev *)a;
+ AcpiSMMUv3Dev *sdev_b = (AcpiSMMUv3Dev *)b;
AcpiIortIdMapping *map_a = &g_array_index(sdev_a->rc_smmu_idmaps,
AcpiIortIdMapping, 0);
AcpiIortIdMapping *map_b = &g_array_index(sdev_b->rc_smmu_idmaps,
@@ -391,7 +391,7 @@ static int iort_smmuv3_devices(Object *obj, void *opaque)
GArray *sdev_blob = opaque;
AcpiIortIdMapping idmap;
PlatformBusDevice *pbus;
- AcpiIortSMMUv3Dev sdev;
+ AcpiSMMUv3Dev sdev;
int min_bus, max_bus;
SysBusDevice *sbdev;
PCIBus *bus;
@@ -421,7 +421,7 @@ static int iort_smmuv3_devices(Object *obj, void *opaque)
}
/*
- * Populate the struct AcpiIortSMMUv3Dev for all SMMUv3 devices and
+ * Populate the struct AcpiSMMUv3Dev for all SMMUv3 devices and
* return the total number of idmaps.
*/
static int populate_smmuv3_dev(GArray *sdev_blob)
@@ -442,10 +442,10 @@ static void create_rc_its_idmaps(GArray *its_idmaps, GArray *smmuv3_devs)
{
AcpiIortIdMapping *idmap;
AcpiIortIdMapping next_range = {0};
- AcpiIortSMMUv3Dev *sdev;
+ AcpiSMMUv3Dev *sdev;
for (int i = 0; i < smmuv3_devs->len; i++) {
- sdev = &g_array_index(smmuv3_devs, AcpiIortSMMUv3Dev, i);
+ sdev = &g_array_index(smmuv3_devs, AcpiSMMUv3Dev, i);
/*
* Based on the RID ranges that are directed to the SMMU, determine the
* bypassed RID ranges, i.e., the ones that are directed to the ITS
@@ -479,7 +479,7 @@ static void create_rc_its_idmaps(GArray *its_idmaps, GArray *smmuv3_devs)
static void
build_iort_rmr_nodes(GArray *table_data, GArray *smmuv3_devices, uint32_t *id)
{
- AcpiIortSMMUv3Dev *sdev;
+ AcpiSMMUv3Dev *sdev;
AcpiIortIdMapping *idmap;
int i;
@@ -487,7 +487,7 @@ build_iort_rmr_nodes(GArray *table_data, GArray *smmuv3_devices, uint32_t *id)
uint16_t rmr_len;
int bdf;
- sdev = &g_array_index(smmuv3_devices, AcpiIortSMMUv3Dev, i);
+ sdev = &g_array_index(smmuv3_devices, AcpiSMMUv3Dev, i);
if (!sdev->accel) {
continue;
}
@@ -544,13 +544,13 @@ static void
build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
{
int i, nb_nodes, rc_mapping_count;
- AcpiIortSMMUv3Dev *sdev;
+ AcpiSMMUv3Dev *sdev;
size_t node_size;
bool ats_needed = false;
int num_smmus = 0;
uint32_t id = 0;
int rc_smmu_idmaps_len = 0;
- GArray *smmuv3_devs = g_array_new(false, true, sizeof(AcpiIortSMMUv3Dev));
+ GArray *smmuv3_devs = g_array_new(false, true, sizeof(AcpiSMMUv3Dev));
GArray *rc_its_idmaps = g_array_new(false, true, sizeof(AcpiIortIdMapping));
AcpiTable table = { .sig = "IORT", .rev = 5, .oem_id = vms->oem_id,
@@ -581,7 +581,7 @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
}
/* Calculate RMR nodes required. One per SMMUv3 with accelerated mode */
for (i = 0; i < num_smmus; i++) {
- sdev = &g_array_index(smmuv3_devs, AcpiIortSMMUv3Dev, i);
+ sdev = &g_array_index(smmuv3_devs, AcpiSMMUv3Dev, i);
if (sdev->ats) {
ats_needed = true;
}
@@ -620,7 +620,7 @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
}
for (i = 0; i < num_smmus; i++) {
- sdev = &g_array_index(smmuv3_devs, AcpiIortSMMUv3Dev, i);
+ sdev = &g_array_index(smmuv3_devs, AcpiSMMUv3Dev, i);
int smmu_mapping_count, offset_to_id_array;
int irq = sdev->irq;
@@ -699,7 +699,7 @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
AcpiIortIdMapping *range;
for (i = 0; i < num_smmus; i++) {
- sdev = &g_array_index(smmuv3_devs, AcpiIortSMMUv3Dev, i);
+ sdev = &g_array_index(smmuv3_devs, AcpiSMMUv3Dev, i);
/*
* Map RIDs (input) from RC to SMMUv3 nodes: RC -> SMMUv3.
@@ -742,7 +742,7 @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
acpi_table_end(linker, &table);
g_array_free(rc_its_idmaps, true);
for (i = 0; i < num_smmus; i++) {
- sdev = &g_array_index(smmuv3_devs, AcpiIortSMMUv3Dev, i);
+ sdev = &g_array_index(smmuv3_devs, AcpiSMMUv3Dev, i);
g_array_free(sdev->rc_smmu_idmaps, true);
}
g_array_free(smmuv3_devs, true);
--
2.43.0
* [RFC PATCH 15/16] hw/arm/virt-acpi: Advertise Tegra241 CMDQV nodes in DSDT
2025-12-10 13:37 [RFC PATCH 00/16] hw/arm: Introduce Tegra241 CMDQV support for accelerated SMMUv3 Shameer Kolothum
` (13 preceding siblings ...)
2025-12-10 13:37 ` [RFC PATCH 14/16] virt-acpi-build: Rename AcpiIortSMMUv3Dev to AcpiSMMUv3Dev Shameer Kolothum
@ 2025-12-10 13:37 ` Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 16/16] hw/arm/smmuv3: Add tegra241-cmdqv property for SMMUv3 device Shameer Kolothum
2025-12-11 17:54 ` [RFC PATCH 00/16] hw/arm: Introduce Tegra241 CMDQV support for accelerated SMMUv3 Eric Auger
16 siblings, 0 replies; 19+ messages in thread
From: Shameer Kolothum @ 2025-12-10 13:37 UTC (permalink / raw)
To: qemu-arm, qemu-devel
Cc: eric.auger, peter.maydell, nicolinc, nathanc, mochs, jgg,
jonathan.cameron, zhangfei.gao, zhenzhong.duan, kjaju
From: Nicolin Chen <nicolinc@nvidia.com>
Add ACPI DSDT support for Tegra241 CMDQV when the SMMUv3 instance is
created with tegra241-cmdqv=on.
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Shameer Kolothum <skolothumtho@nvidia.com>
---
hw/arm/trace-events | 1 +
hw/arm/virt-acpi-build.c | 74 ++++++++++++++++++++++++++++++++++++++++
include/hw/arm/virt.h | 2 ++
3 files changed, 77 insertions(+)
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
index ef495c040c..e7e3ccfe9f 100644
--- a/hw/arm/trace-events
+++ b/hw/arm/trace-events
@@ -9,6 +9,7 @@ omap1_lpg_led(const char *onoff) "omap1 LPG: LED is %s"
# virt-acpi-build.c
virt_acpi_setup(void) "No fw cfg or ACPI disabled. Bailing out."
+virt_acpi_dsdt_tegra241_cmdqv(int smmu_id, uint64_t base, uint32_t irq) "DSDT: add cmdqv node for (id=%d), base=0x%" PRIx64 ", irq=%d"
# smmu-common.c
smmu_add_mr(const char *name) "%s"
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index 4f8d36dae0..11494b29ad 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -1115,6 +1115,78 @@ static void build_fadt_rev6(GArray *table_data, BIOSLinker *linker,
build_fadt(table_data, linker, &fadt, vms->oem_id, vms->oem_table_id);
}
+static int smmuv3_cmdqv_devices(Object *obj, void *opaque)
+{
+ VirtMachineState *vms = VIRT_MACHINE(qdev_get_machine());
+ GArray *sdev_blob = opaque;
+ PlatformBusDevice *pbus;
+ AcpiSMMUv3Dev sdev;
+ SysBusDevice *sbdev;
+
+ if (!object_dynamic_cast(obj, TYPE_ARM_SMMUV3)) {
+ return 0;
+ }
+
+ if (!object_property_get_bool(obj, "tegra241-cmdqv", NULL)) {
+ return 0;
+ }
+
+ pbus = PLATFORM_BUS_DEVICE(vms->platform_bus_dev);
+ sbdev = SYS_BUS_DEVICE(obj);
+ sdev.base = platform_bus_get_mmio_addr(pbus, sbdev, 1);
+ sdev.base += vms->memmap[VIRT_PLATFORM_BUS].base;
+ sdev.irq = platform_bus_get_irqn(pbus, sbdev, NUM_SMMU_IRQS);
+ sdev.irq += vms->irqmap[VIRT_PLATFORM_BUS];
+ sdev.irq += ARM_SPI_BASE;
+ g_array_append_val(sdev_blob, sdev);
+ return 0;
+}
+
+static void acpi_dsdt_add_tegra241_cmdqv(Aml *scope, VirtMachineState *vms)
+{
+ GArray *smmuv3_devs = g_array_new(false, true, sizeof(AcpiSMMUv3Dev));
+ int i;
+
+ if (vms->legacy_smmuv3_present) {
+ return;
+ }
+
+ object_child_foreach_recursive(object_get_root(), smmuv3_cmdqv_devices,
+ smmuv3_devs);
+
+ for (i = 0; i < smmuv3_devs->len; i++) {
+ uint32_t identifier = i;
+ AcpiSMMUv3Dev *sdev;
+ Aml *dev, *crs, *addr;
+
+ sdev = &g_array_index(smmuv3_devs, AcpiSMMUv3Dev, i);
+
+ dev = aml_device("CV%.02u", identifier);
+ aml_append(dev, aml_name_decl("_HID", aml_string("NVDA200C")));
+ if (vms->its) {
+ identifier++;
+ }
+ aml_append(dev, aml_name_decl("_UID", aml_int(identifier)));
+ aml_append(dev, aml_name_decl("_CCA", aml_int(1)));
+
+ crs = aml_resource_template();
+ addr = aml_qword_memory(AML_POS_DECODE, AML_MIN_FIXED, AML_MAX_FIXED,
+ AML_CACHEABLE, AML_READ_WRITE, 0x0, sdev->base,
+ sdev->base + TEGRA241_CMDQV_IO_LEN - 0x1, 0x0,
+ TEGRA241_CMDQV_IO_LEN);
+ aml_append(crs, addr);
+ aml_append(crs, aml_interrupt(AML_CONSUMER, AML_EDGE,
+ AML_ACTIVE_HIGH, AML_EXCLUSIVE,
+ (uint32_t *)&sdev->irq, 1));
+ aml_append(dev, aml_name_decl("_CRS", crs));
+
+ aml_append(scope, dev);
+
+ trace_virt_acpi_dsdt_tegra241_cmdqv(identifier, sdev->base, sdev->irq);
+ }
+ g_array_free(smmuv3_devs, true);
+}
+
/* DSDT */
static void
build_dsdt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
@@ -1179,6 +1251,8 @@ build_dsdt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
acpi_dsdt_add_tpm(scope, vms);
#endif
+ acpi_dsdt_add_tegra241_cmdqv(scope, vms);
+
aml_append(dsdt, scope);
pci0_scope = aml_scope("\\_SB.PCI0");
diff --git a/include/hw/arm/virt.h b/include/hw/arm/virt.h
index efbc1758c5..842143cc85 100644
--- a/include/hw/arm/virt.h
+++ b/include/hw/arm/virt.h
@@ -46,6 +46,8 @@
#define NUM_VIRTIO_TRANSPORTS 32
#define NUM_SMMU_IRQS 4
+#define TEGRA241_CMDQV_IO_LEN 0x50000
+
/* See Linux kernel arch/arm64/include/asm/pvclock-abi.h */
#define PVTIME_SIZE_PER_CPU 64
--
2.43.0
* [RFC PATCH 16/16] hw/arm/smmuv3: Add tegra241-cmdqv property for SMMUv3 device
2025-12-10 13:37 [RFC PATCH 00/16] hw/arm: Introduce Tegra241 CMDQV support for accelerated SMMUv3 Shameer Kolothum
` (14 preceding siblings ...)
2025-12-10 13:37 ` [RFC PATCH 15/16] hw/arm/virt-acpi: Advertise Tegra241 CMDQV nodes in DSDT Shameer Kolothum
@ 2025-12-10 13:37 ` Shameer Kolothum
2025-12-11 17:54 ` [RFC PATCH 00/16] hw/arm: Introduce Tegra241 CMDQV support for accelerated SMMUv3 Eric Auger
16 siblings, 0 replies; 19+ messages in thread
From: Shameer Kolothum @ 2025-12-10 13:37 UTC (permalink / raw)
To: qemu-arm, qemu-devel
Cc: eric.auger, peter.maydell, nicolinc, nathanc, mochs, jgg,
jonathan.cameron, zhangfei.gao, zhenzhong.duan, kjaju
Introduce a “tegra241-cmdqv” property to enable Tegra241 CMDQV
support. It can only be enabled on accelerated (accel=on) SMMUv3 devices.
Signed-off-by: Shameer Kolothum <skolothumtho@nvidia.com>
---
hw/arm/smmuv3.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index ec8687d39a..58c35c2af3 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -1953,6 +1953,12 @@ static bool smmu_validate_property(SMMUv3State *s, Error **errp)
error_setg(errp, "accel=on support not compiled in");
return false;
}
+#endif
+#ifndef CONFIG_TEGRA241_CMDQV
+ if (s->tegra241_cmdqv) {
+ error_setg(errp, "tegra241-cmdqv=on support not compiled in");
+ return false;
+ }
#endif
if (!s->accel) {
if (!s->ril) {
@@ -1971,6 +1977,10 @@ static bool smmu_validate_property(SMMUv3State *s, Error **errp)
error_setg(errp, "pasid can only be enabled if accel=on");
return false;
}
+ if (s->tegra241_cmdqv) {
+ error_setg(errp, "tegra241-cmdqv can only be enabled if accel=on");
+ return false;
+ }
return true;
}
@@ -2109,6 +2119,7 @@ static const Property smmuv3_properties[] = {
DEFINE_PROP_BOOL("ats", SMMUv3State, ats, false),
DEFINE_PROP_UINT8("oas", SMMUv3State, oas, 44),
DEFINE_PROP_BOOL("pasid", SMMUv3State, pasid, false),
+ DEFINE_PROP_BOOL("tegra241-cmdqv", SMMUv3State, tegra241_cmdqv, false),
};
static void smmuv3_instance_init(Object *obj)
@@ -2144,6 +2155,8 @@ static void smmuv3_class_init(ObjectClass *klass, const void *data)
"are 44 or 48 bits. Defaults to 44 bits");
object_class_property_set_description(klass, "pasid",
"Enable/disable PASID support (for accel=on)");
+ object_class_property_set_description(klass, "tegra241-cmdqv",
+ "Enable/disable Tegra241 CMDQ-Virtualisation support (for accel=on)");
}
static int smmuv3_notify_flag_changed(IOMMUMemoryRegion *iommu,
--
2.43.0
* Re: [RFC PATCH 00/16] hw/arm: Introduce Tegra241 CMDQV support for accelerated SMMUv3
2025-12-10 13:37 [RFC PATCH 00/16] hw/arm: Introduce Tegra241 CMDQV support for accelerated SMMUv3 Shameer Kolothum
` (15 preceding siblings ...)
2025-12-10 13:37 ` [RFC PATCH 16/16] hw/arm/smmuv3: Add tegra241-cmdqv property for SMMUv3 device Shameer Kolothum
@ 2025-12-11 17:54 ` Eric Auger
2025-12-12 0:23 ` Shameer Kolothum
16 siblings, 1 reply; 19+ messages in thread
From: Eric Auger @ 2025-12-11 17:54 UTC (permalink / raw)
To: Shameer Kolothum, qemu-arm, qemu-devel
Cc: peter.maydell, nicolinc, nathanc, mochs, jgg, jonathan.cameron,
zhangfei.gao, zhenzhong.duan, kjaju
Hi Shameer,
On 12/10/25 2:37 PM, Shameer Kolothum wrote:
> Hi,
>
> This RFC series adds initial support for NVIDIA Tegra241 CMDQV
> (Command Queue Virtualisation), an extension to ARM SMMUv3 that
> provides hardware accelerated virtual command queues (VCMDQs) for
> guests. CMDQV allows guests to issue SMMU invalidation commands
> directly to hardware without VM exits, significantly reducing TLBI
> overhead.
>
> Thanks to Nicolin for the initial patches and testing on which this RFC
> is based.
>
> This is based on v6[0] of the SMMUv3 accel series, which is still under
> review, though nearing convergence. This is sent as an RFC, with the goal
> of gathering early feedback on the CMDQV design and its integration with
> the SMMUv3 acceleration path.
>
> Background:
>
> Tegra241 CMDQV extends SMMUv3 by allocating per-VM "virtual interfaces"
> (VINTFs), each hosting up to 128 VCMDQs.
>
> Each VINTF exposes two 64KB MMIO pages:
> - Page0 – guest owned control and status registers (directly mapped
> into the VM)
> - Page1 – queue configuration registers (trapped/emulated by QEMU)
>
> Unlike the standard SMMU CMDQ, a guest owned Tegra241 VCMDQ does not
> support the full command set. Only a subset, primarily invalidation
> related commands, is accepted by the CMDQV hardware. For this reason,
> a distinct CMDQV device must be exposed to the guest, and the guest OS
> must include a Tegra241 CMDQV aware driver to take advantage of the
> hardware acceleration.
>
> VCMDQ support is integrated via the IOMMU_HW_QUEUE_ALLOC mechanism,
> allowing QEMU to attach guest configured VCMDQ buffers to the
> underlying CMDQV hardware through IOMMUFD. The Linux kernel already
> supports the full CMDQV virtualisation model via IOMMUFD[0].
>
> Summary of QEMU changes:
>
> - Integrated into the existing SMMUv3 accel path via a
> "tegra241-cmdqv" property.
> - Support for allocating vIOMMU objects of type
> IOMMU_VIOMMU_TYPE_TEGRA241_CMDQV.
> - Mapping and emulation of the CMDQV MMIO register layout.
> - VCMDQ/VINTF read/write handling and queue allocation using IOMMUFD
> APIs.
> - Reset and initialisation hooks, including checks for at least one
> cold-plugged device.
> - CMDQV hardware reads guest queue memory using host physical addresses
> provided through IOMMUFD. This requires the VCMDQ buffer to be
> physically contiguous not only in guest PA space but also in host
> PA space. When Tegra241 CMDQV is enabled, QEMU must therefore only
> expose a queue size that the host can reliably back with contiguous
> physical memory. Because of this constraint, backing guest RAM with
> huge pages is recommended.
> - ACPI DSDT node generation for CMDQV devices on the virt machine.
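The page-size constraint above can be illustrated with a small helper. The only hardware fact used is the 16-byte SMMUv3 command size; the helper name and the policy of clamping the queue to a single host backing page are assumptions of this sketch, not necessarily what the series implements.

```c
#include <assert.h>
#include <stdint.h>

#define SMMU_CMD_SIZE 16  /* each SMMUv3 command is 16 bytes */

/* Sketch: largest log2 number of VCMDQ entries such that the whole
 * ring fits in one host backing page (hence is physically contiguous
 * on the host), further clamped by the hardware-supported maximum. */
static uint32_t vcmdq_max_log2size(uint64_t host_page_size,
                                   uint32_t hw_log2size)
{
    uint32_t log2 = 0;

    while ((1ULL << (log2 + 1)) * SMMU_CMD_SIZE <= host_page_size)
        log2++;
    return log2 < hw_log2size ? log2 : hw_log2size;
}
```

With 4KB host pages this caps the ring at 2^8 entries, while 2MB huge pages allow up to 2^17, which is why huge-page backing matters for usable queue depths.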
>
> These patches have been sanity tested on NVIDIA Grace platforms.
>
> ToDo / revisit:
> - Prevent hot-unplug of the last device associated with the vIOMMU,
> as this might allow associating a different host SMMU/CMDQV.
> - Locking requirements around error event propagation.
>
> Feedback and testing are very welcome.
>
> Thanks,
> Shameer
> [0] https://lore.kernel.org/qemu-devel/20251120132213.56581-1-skolothumtho@nvidia.com/
> [1] https://lore.kernel.org/all/cover.1752126748.git.nicolinc@nvidia.com/
do you have a branch to share with all the bits?
Thanks
Eric
>
> Nicolin Chen (12):
> backends/iommufd: Update iommufd_backend_get_device_info
> backends/iommufd: Update iommufd_backend_alloc_viommu to allow user
> ptr
> backends/iommufd: Introduce iommufd_backend_alloc_hw_queue
> backends/iommufd: Introduce iommufd_backend_viommu_mmap
> hw/arm/tegra241-cmdqv: Add initial Tegra241 CMDQ-Virtualisation
> support
> hw/arm/tegra241-cmdqv: Map VINTF Page0 into guest
> hw/arm/tegra241-cmdqv: Add read emulation support for registers
> system/physmem: Add helper to check whether a guest PA maps to RAM
> hw/arm/tegra241-cmdqv: Add write emulation for registers
> hw/arm/tegra241-cmdqv: Add reset handler
> hw/arm/tegra241-cmdqv: Limit queue size based on backend page size
> hw/arm/virt-acpi: Advertise Tegra241 CMDQV nodes in DSDT
>
> Shameer Kolothum (4):
> hw/arm/tegra241-cmdqv: Allocate vEVENTQ object
> hw/arm/tegra241-cmdqv: Read and propagate Tegra241 CMDQV errors
> virt-acpi-build: Rename AcpiIortSMMUv3Dev to AcpiSMMUv3Dev
> hw/arm/smmuv3: Add tegra241-cmdqv property for SMMUv3 device
>
> backends/iommufd.c | 65 ++++
> backends/trace-events | 2 +
> hw/arm/Kconfig | 5 +
> hw/arm/meson.build | 1 +
> hw/arm/smmuv3-accel.c | 16 +-
> hw/arm/smmuv3.c | 18 +
> hw/arm/tegra241-cmdqv.c | 759 ++++++++++++++++++++++++++++++++++++++
> hw/arm/tegra241-cmdqv.h | 337 +++++++++++++++++
> hw/arm/trace-events | 5 +
> hw/arm/virt-acpi-build.c | 110 +++++-
> hw/vfio/iommufd.c | 6 +-
> include/exec/cpu-common.h | 2 +
> include/hw/arm/smmuv3.h | 3 +
> include/hw/arm/virt.h | 2 +
> include/system/iommufd.h | 16 +
> system/physmem.c | 12 +
> 16 files changed, 1332 insertions(+), 27 deletions(-)
> create mode 100644 hw/arm/tegra241-cmdqv.c
> create mode 100644 hw/arm/tegra241-cmdqv.h
>
* RE: [RFC PATCH 00/16] hw/arm: Introduce Tegra241 CMDQV support for accelerated SMMUv3
2025-12-11 17:54 ` [RFC PATCH 00/16] hw/arm: Introduce Tegra241 CMDQV support for accelerated SMMUv3 Eric Auger
@ 2025-12-12 0:23 ` Shameer Kolothum
0 siblings, 0 replies; 19+ messages in thread
From: Shameer Kolothum @ 2025-12-12 0:23 UTC (permalink / raw)
To: eric.auger@redhat.com, qemu-arm@nongnu.org, qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, Nicolin Chen, Nathan Chen, Matt Ochs,
Jason Gunthorpe, jonathan.cameron@huawei.com,
zhangfei.gao@linaro.org, zhenzhong.duan@intel.com,
Krishnakant Jaju
Hi Eric,
> -----Original Message-----
> From: Eric Auger <eric.auger@redhat.com>
> Sent: 11 December 2025 17:55
> To: Shameer Kolothum <skolothumtho@nvidia.com>; qemu-
> arm@nongnu.org; qemu-devel@nongnu.org
> Cc: peter.maydell@linaro.org; Nicolin Chen <nicolinc@nvidia.com>; Nathan
> Chen <nathanc@nvidia.com>; Matt Ochs <mochs@nvidia.com>; Jason
> Gunthorpe <jgg@nvidia.com>; jonathan.cameron@huawei.com;
> zhangfei.gao@linaro.org; zhenzhong.duan@intel.com; Krishnakant Jaju
> <kjaju@nvidia.com>
> Subject: Re: [RFC PATCH 00/16] hw/arm: Introduce Tegra241 CMDQV support
> for accelerated SMMUv3
>
[...]
>
> > [0] https://lore.kernel.org/qemu-devel/20251120132213.56581-1-
> skolothumtho@nvidia.com/
> > [1] https://lore.kernel.org/all/cover.1752126748.git.nicolinc@nvidia.com/
>
> do you have a branch to share with all the bits?
Here:
https://github.com/shamiali2008/qemu-master.git master-smmuv3-accel-v6-veventq-v2-vcmdq-rfcv1
Thanks,
Shameer
end of thread, other threads:[~2025-12-12 0:24 UTC | newest]
Thread overview: 19+ messages
2025-12-10 13:37 [RFC PATCH 00/16] hw/arm: Introduce Tegra241 CMDQV support for accelerated SMMUv3 Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 01/16] backends/iommufd: Update iommufd_backend_get_device_info Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 02/16] backends/iommufd: Update iommufd_backend_alloc_viommu to allow user ptr Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 03/16] backends/iommufd: Introduce iommufd_backend_alloc_hw_queue Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 04/16] backends/iommufd: Introduce iommufd_backend_viommu_mmap Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 05/16] hw/arm/tegra241-cmdqv: Add initial Tegra241 CMDQ-Virtualisation support Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 06/16] hw/arm/tegra241-cmdqv: Map VINTF Page0 into guest Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 07/16] hw/arm/tegra241-cmdqv: Add read emulation support for registers Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 08/16] system/physmem: Add helper to check whether a guest PA maps to RAM Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 09/16] hw/arm/tegra241-cmdqv: Add write emulation for registers Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 10/16] hw/arm/tegra241-cmdqv: Allocate vEVENTQ object Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 11/16] hw/arm/tegra241-cmdqv: Read and propagate Tegra241 CMDQV errors Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 12/16] hw/arm/tegra241-cmdqv: Add reset handler Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 13/16] hw/arm/tegra241-cmdqv: Limit queue size based on backend page size Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 14/16] virt-acpi-build: Rename AcpiIortSMMUv3Dev to AcpiSMMUv3Dev Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 15/16] hw/arm/virt-acpi: Advertise Tegra241 CMDQV nodes in DSDT Shameer Kolothum
2025-12-10 13:37 ` [RFC PATCH 16/16] hw/arm/smmuv3: Add tegra241-cmdqv property for SMMUv3 device Shameer Kolothum
2025-12-11 17:54 ` [RFC PATCH 00/16] hw/arm: Introduce Tegra241 CMDQV support for accelerated SMMUv3 Eric Auger
2025-12-12 0:23 ` Shameer Kolothum
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).