* [RFC PATCH 0/4] vEVENTQ support for accelerated SMMUv3 devices
@ 2025-11-05 15:46 Shameer Kolothum
2025-11-05 15:46 ` [RFC PATCH 1/4] backends/iommufd: Introduce iommufd_backend_alloc_veventq Shameer Kolothum
` (3 more replies)
0 siblings, 4 replies; 11+ messages in thread
From: Shameer Kolothum @ 2025-11-05 15:46 UTC (permalink / raw)
To: qemu-arm, qemu-devel
Cc: eric.auger, peter.maydell, nicolinc, nathanc, mochs,
jonathan.cameron, zhangfei.gao, zhenzhong.duan, jgg, kjaju
Hi,
When accel=on is enabled for an SMMUv3 instance, the host hardware SMMUv3
may generate Stage-1 (S1) fault or event notifications that are targeted
toward the vIOMMU instance in userspace.
This series adds support in QEMU to receive such host events through a
vEVENTQ object and propagate them to the guest. The implementation
leverages the vEVENTQ interface provided by the IOMMUFD kernel subsystem.
This is being sent as an RFC since it depends on the "Add support for
user-creatable accelerated SMMUv3" series which is currently under
discussion[0].
I have lightly tested this on a Grace platform with some hacks to generate
fault events. Further testing and feedback are welcome.
Thanks,
Shameer
[0] https://lore.kernel.org/qemu-devel/20251031105005.24618-1-skolothumtho@nvidia.com/
Nicolin Chen (2):
backends/iommufd: Introduce iommufd_backend_alloc_veventq
hw/arm/smmuv3-accel: Allocate vEVENTQ for accelerated SMMUv3 devices
Shameer Kolothum (2):
hw/arm/smmuv3: Introduce a helper function for event propagation
hw/arm/smmuv3-accel: Read and propagate host vIOMMU events
backends/iommufd.c | 31 ++++++++++
backends/trace-events | 1 +
hw/arm/smmuv3-accel.c | 123 +++++++++++++++++++++++++++++++++++++++
hw/arm/smmuv3-accel.h | 8 +++
hw/arm/smmuv3-internal.h | 4 ++
hw/arm/smmuv3.c | 28 +++++++--
hw/arm/trace-events | 2 +-
include/system/iommufd.h | 12 ++++
8 files changed, 202 insertions(+), 7 deletions(-)
--
2.43.0
^ permalink raw reply [flat|nested] 11+ messages in thread
* [RFC PATCH 1/4] backends/iommufd: Introduce iommufd_backend_alloc_veventq
2025-11-05 15:46 [RFC PATCH 0/4] vEVENTQ support for accelerated SMMUv3 devices Shameer Kolothum
@ 2025-11-05 15:46 ` Shameer Kolothum
2025-11-05 15:46 ` [RFC PATCH 2/4] hw/arm/smmuv3-accel: Allocate vEVENTQ for accelerated SMMUv3 devices Shameer Kolothum
` (2 subsequent siblings)
3 siblings, 0 replies; 11+ messages in thread
From: Shameer Kolothum @ 2025-11-05 15:46 UTC (permalink / raw)
To: qemu-arm, qemu-devel
Cc: eric.auger, peter.maydell, nicolinc, nathanc, mochs,
jonathan.cameron, zhangfei.gao, zhenzhong.duan, jgg, kjaju
From: Nicolin Chen <nicolinc@nvidia.com>
Add a new helper for IOMMU_VEVENTQ_ALLOC ioctl to allocate a virtual event
queue (vEVENTQ) for a vIOMMU object.
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Shameer Kolothum <skolothumtho@nvidia.com>
---
backends/iommufd.c | 31 +++++++++++++++++++++++++++++++
backends/trace-events | 1 +
include/system/iommufd.h | 12 ++++++++++++
3 files changed, 44 insertions(+)
diff --git a/backends/iommufd.c b/backends/iommufd.c
index 392f9cf2a8..4a6aebdb42 100644
--- a/backends/iommufd.c
+++ b/backends/iommufd.c
@@ -503,6 +503,37 @@ bool iommufd_backend_alloc_vdev(IOMMUFDBackend *be, uint32_t dev_id,
return true;
}
+bool iommufd_backend_alloc_veventq(IOMMUFDBackend *be, uint32_t viommu_id,
+ uint32_t type, uint32_t depth,
+ uint32_t *out_veventq_id,
+ uint32_t *out_veventq_fd, Error **errp)
+{
+ int ret;
+ struct iommu_veventq_alloc alloc_veventq = {
+ .size = sizeof(alloc_veventq),
+ .flags = 0,
+ .type = type,
+ .veventq_depth = depth,
+ .viommu_id = viommu_id,
+ };
+
+ ret = ioctl(be->fd, IOMMU_VEVENTQ_ALLOC, &alloc_veventq);
+
+ trace_iommufd_viommu_alloc_eventq(be->fd, viommu_id, type,
+ alloc_veventq.out_veventq_id,
+ alloc_veventq.out_veventq_fd, ret);
+ if (ret) {
+ error_setg_errno(errp, errno, "IOMMU_VEVENTQ_ALLOC failed");
+ return false;
+ }
+
+ g_assert(out_veventq_id);
+ g_assert(out_veventq_fd);
+ *out_veventq_id = alloc_veventq.out_veventq_id;
+ *out_veventq_fd = alloc_veventq.out_veventq_fd;
+ return true;
+}
+
bool host_iommu_device_iommufd_attach_hwpt(HostIOMMUDeviceIOMMUFD *idev,
uint32_t hwpt_id, Error **errp)
{
diff --git a/backends/trace-events b/backends/trace-events
index 8408dc8701..5afa7a40be 100644
--- a/backends/trace-events
+++ b/backends/trace-events
@@ -23,3 +23,4 @@ iommufd_backend_get_dirty_bitmap(int iommufd, uint32_t hwpt_id, uint64_t iova, u
iommufd_backend_invalidate_cache(int iommufd, uint32_t id, uint32_t data_type, uint32_t entry_len, uint32_t entry_num, uint32_t done_num, uint64_t data_ptr, int ret) " iommufd=%d id=%u data_type=%u entry_len=%u entry_num=%u done_num=%u data_ptr=0x%"PRIx64" (%d)"
iommufd_backend_alloc_viommu(int iommufd, uint32_t dev_id, uint32_t type, uint32_t hwpt_id, uint32_t viommu_id, int ret) " iommufd=%d type=%u dev_id=%u hwpt_id=%u viommu_id=%u (%d)"
iommufd_backend_alloc_vdev(int iommufd, uint32_t dev_id, uint32_t viommu_id, uint64_t virt_id, uint32_t vdev_id, int ret) " iommufd=%d dev_id=%u viommu_id=%u virt_id=0x%"PRIx64" vdev_id=%u (%d)"
+iommufd_viommu_alloc_eventq(int iommufd, uint32_t viommu_id, uint32_t type, uint32_t veventq_id, uint32_t veventq_fd, int ret) " iommufd=%d viommu_id=%u type=%u veventq_id=%u veventq_fd=%u (%d)"
diff --git a/include/system/iommufd.h b/include/system/iommufd.h
index aa78bf1e1d..9770ff1484 100644
--- a/include/system/iommufd.h
+++ b/include/system/iommufd.h
@@ -56,6 +56,13 @@ typedef struct IOMMUFDVdev {
uint32_t virt_id; /* virtual device ID */
} IOMMUFDVdev;
+/* Virtual event queue interface for a vIOMMU */
+typedef struct IOMMUFDVeventq {
+ IOMMUFDViommu *viommu;
+ uint32_t veventq_id;
+ uint32_t veventq_fd;
+} IOMMUFDVeventq;
+
bool iommufd_backend_connect(IOMMUFDBackend *be, Error **errp);
void iommufd_backend_disconnect(IOMMUFDBackend *be);
@@ -86,6 +93,11 @@ bool iommufd_backend_alloc_vdev(IOMMUFDBackend *be, uint32_t dev_id,
uint32_t viommu_id, uint64_t virt_id,
uint32_t *out_vdev_id, Error **errp);
+bool iommufd_backend_alloc_veventq(IOMMUFDBackend *be, uint32_t viommu_id,
+ uint32_t type, uint32_t depth,
+ uint32_t *out_veventq_id,
+ uint32_t *out_veventq_fd, Error **errp);
+
bool iommufd_backend_set_dirty_tracking(IOMMUFDBackend *be, uint32_t hwpt_id,
bool start, Error **errp);
bool iommufd_backend_get_dirty_bitmap(IOMMUFDBackend *be, uint32_t hwpt_id,
--
2.43.0
* [RFC PATCH 2/4] hw/arm/smmuv3-accel: Allocate vEVENTQ for accelerated SMMUv3 devices
2025-11-05 15:46 [RFC PATCH 0/4] vEVENTQ support for accelerated SMMUv3 devices Shameer Kolothum
2025-11-05 15:46 ` [RFC PATCH 1/4] backends/iommufd: Introduce iommufd_backend_alloc_veventq Shameer Kolothum
@ 2025-11-05 15:46 ` Shameer Kolothum
2025-11-05 15:46 ` [RFC PATCH 3/4] hw/arm/smmuv3: Introduce a helper function for event propagation Shameer Kolothum
2025-11-05 15:46 ` [RFC PATCH 4/4] hw/arm/smmuv3-accel: Read and propagate host vIOMMU events Shameer Kolothum
3 siblings, 0 replies; 11+ messages in thread
From: Shameer Kolothum @ 2025-11-05 15:46 UTC (permalink / raw)
To: qemu-arm, qemu-devel
Cc: eric.auger, peter.maydell, nicolinc, nathanc, mochs,
jonathan.cameron, zhangfei.gao, zhenzhong.duan, jgg, kjaju
From: Nicolin Chen <nicolinc@nvidia.com>
When the guest enables the Event Queue and a vIOMMU is present, allocate a
vEVENTQ object so that host-side events related to the vIOMMU can be
received and propagated back to the guest.
For cold-plugged devices using SMMUv3 acceleration, the vIOMMU is created
before the guest boots. In this case, the vEVENTQ is allocated when the
guest writes to SMMU_CR0 and sets EVENTQEN = 1.
If no cold-plugged device exists at boot (i.e. no vIOMMU initially), the
vEVENTQ is allocated when a vIOMMU is created, i.e. during the first
device hot-plug.
Event read and propagation will be added in a later patch.
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Shameer Kolothum <skolothumtho@nvidia.com>
---
hw/arm/smmuv3-accel.c | 61 +++++++++++++++++++++++++++++++++++++++++++
hw/arm/smmuv3-accel.h | 6 +++++
hw/arm/smmuv3.c | 7 +++++
3 files changed, 74 insertions(+)
diff --git a/hw/arm/smmuv3-accel.c b/hw/arm/smmuv3-accel.c
index 1f206be8e4..210e7ebf36 100644
--- a/hw/arm/smmuv3-accel.c
+++ b/hw/arm/smmuv3-accel.c
@@ -383,6 +383,59 @@ static SMMUv3AccelDevice *smmuv3_accel_get_dev(SMMUState *bs, SMMUPciBus *sbus,
return accel_dev;
}
+static void smmuv3_accel_free_veventq(SMMUViommu *vsmmu)
+{
+ IOMMUFDVeventq *veventq = vsmmu->veventq;
+
+ if (!veventq) {
+ return;
+ }
+ iommufd_backend_free_id(vsmmu->iommufd, veventq->veventq_id);
+ g_free(veventq);
+ vsmmu->veventq = NULL;
+}
+
+bool smmuv3_accel_alloc_veventq(SMMUv3State *s, Error **errp)
+{
+ SMMUv3AccelState *s_accel = s->s_accel;
+ IOMMUFDVeventq *veventq;
+ SMMUViommu *vsmmu;
+ uint32_t veventq_id;
+ uint32_t veventq_fd;
+
+ if (!s_accel || !s_accel->vsmmu) {
+ return true;
+ }
+
+ vsmmu = s_accel->vsmmu;
+ if (vsmmu->veventq) {
+ return true;
+ }
+
+ /*
+ * Check whether the guest has enabled the Event Queue. An enabled queue
+ * means EVENTQ_BASE has been programmed with a valid base address and
+ * size. If it is not yet configured, return and retry later.
+ */
+ if (!smmuv3_eventq_enabled(s)) {
+ return true;
+ }
+
+ if (!iommufd_backend_alloc_veventq(vsmmu->iommufd, vsmmu->viommu.viommu_id,
+ IOMMU_VEVENTQ_TYPE_ARM_SMMUV3,
+ 1 << s->eventq.log2size, &veventq_id,
+ &veventq_fd, errp)) {
+ return false;
+ }
+
+ veventq = g_new(IOMMUFDVeventq, 1);
+ veventq->veventq_id = veventq_id;
+ veventq->veventq_fd = veventq_fd;
+ veventq->viommu = &vsmmu->viommu;
+ vsmmu->veventq = veventq;
+ return true;
+}
+
static bool
smmuv3_accel_dev_alloc_viommu(SMMUv3AccelDevice *accel_dev,
HostIOMMUDeviceIOMMUFD *idev, Error **errp)
@@ -438,8 +491,15 @@ smmuv3_accel_dev_alloc_viommu(SMMUv3AccelDevice *accel_dev,
vsmmu->iommufd = idev->iommufd;
s_accel->vsmmu = vsmmu;
accel_dev->vsmmu = vsmmu;
+
+ /* Allocate a vEVENTQ if the guest has enabled the Event Queue */
+ if (!smmuv3_accel_alloc_veventq(s, errp)) {
+ goto free_bypass_hwpt;
+ }
return true;
+free_bypass_hwpt:
+ iommufd_backend_free_id(idev->iommufd, vsmmu->bypass_hwpt_id);
free_abort_hwpt:
iommufd_backend_free_id(idev->iommufd, vsmmu->abort_hwpt_id);
free_viommu:
@@ -536,6 +596,7 @@ static void smmuv3_accel_unset_iommu_device(PCIBus *bus, void *opaque,
}
if (QLIST_EMPTY(&vsmmu->device_list)) {
+ smmuv3_accel_free_veventq(vsmmu);
iommufd_backend_free_id(vsmmu->iommufd, vsmmu->bypass_hwpt_id);
iommufd_backend_free_id(vsmmu->iommufd, vsmmu->abort_hwpt_id);
iommufd_backend_free_id(vsmmu->iommufd, vsmmu->viommu.viommu_id);
diff --git a/hw/arm/smmuv3-accel.h b/hw/arm/smmuv3-accel.h
index 4f5b672712..740253bc34 100644
--- a/hw/arm/smmuv3-accel.h
+++ b/hw/arm/smmuv3-accel.h
@@ -22,6 +22,7 @@
typedef struct SMMUViommu {
IOMMUFDBackend *iommufd;
IOMMUFDViommu viommu;
+ IOMMUFDVeventq *veventq;
uint32_t bypass_hwpt_id;
uint32_t abort_hwpt_id;
QLIST_HEAD(, SMMUv3AccelDevice) device_list;
@@ -56,6 +57,7 @@ bool smmuv3_accel_issue_inv_cmd(SMMUv3State *s, void *cmd, SMMUDevice *sdev,
void smmuv3_accel_gbpa_update(SMMUv3State *s);
void smmuv3_accel_reset(SMMUv3State *s);
void smmuv3_accel_idr_override(SMMUv3State *s);
+bool smmuv3_accel_alloc_veventq(SMMUv3State *s, Error **errp);
#else
static inline void smmuv3_accel_init(SMMUv3State *s)
{
@@ -87,6 +89,10 @@ static inline void smmuv3_accel_reset(SMMUv3State *s)
static inline void smmuv3_accel_idr_override(SMMUv3State *s)
{
}
+static inline bool smmuv3_accel_alloc_veventq(SMMUv3State *s, Error **errp)
+{
+ return true;
+}
#endif
#endif /* HW_ARM_SMMUV3_ACCEL_H */
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index e1140fe087..976a436bd4 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -1616,12 +1616,19 @@ static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
uint64_t data, MemTxAttrs attrs)
{
+ Error *local_err = NULL;
+
switch (offset) {
case A_CR0:
s->cr[0] = data;
s->cr0ack = data & ~SMMU_CR0_RESERVED;
/* in case the command queue has been enabled */
smmuv3_cmdq_consume(s);
+ /* Allocate vEVENTQ if guest enables EventQ and vIOMMU is ready */
+ if (!smmuv3_accel_alloc_veventq(s, &local_err)) {
+ error_report_err(local_err);
+ /* TODO: Should we return err? */
+ }
return MEMTX_OK;
case A_CR1:
s->cr[1] = data;
--
2.43.0
* [RFC PATCH 3/4] hw/arm/smmuv3: Introduce a helper function for event propagation
2025-11-05 15:46 [RFC PATCH 0/4] vEVENTQ support for accelerated SMMUv3 devices Shameer Kolothum
2025-11-05 15:46 ` [RFC PATCH 1/4] backends/iommufd: Introduce iommufd_backend_alloc_veventq Shameer Kolothum
2025-11-05 15:46 ` [RFC PATCH 2/4] hw/arm/smmuv3-accel: Allocate vEVENTQ for accelerated SMMUv3 devices Shameer Kolothum
@ 2025-11-05 15:46 ` Shameer Kolothum
2025-11-05 15:46 ` [RFC PATCH 4/4] hw/arm/smmuv3-accel: Read and propagate host vIOMMU events Shameer Kolothum
3 siblings, 0 replies; 11+ messages in thread
From: Shameer Kolothum @ 2025-11-05 15:46 UTC (permalink / raw)
To: qemu-arm, qemu-devel
Cc: eric.auger, peter.maydell, nicolinc, nathanc, mochs,
jonathan.cameron, zhangfei.gao, zhenzhong.duan, jgg, kjaju
Factor out the code that propagates event records to the guest into a
helper function. The accelerated SMMUv3 path can use this to propagate
host events in a subsequent patch.
Since this helper may be called from outside the SMMUv3 core, take the
mutex before accessing the Event Queue.
No functional change intended.
Signed-off-by: Shameer Kolothum <skolothumtho@nvidia.com>
---
hw/arm/smmuv3-internal.h | 4 ++++
hw/arm/smmuv3.c | 21 +++++++++++++++------
hw/arm/trace-events | 2 +-
3 files changed, 20 insertions(+), 7 deletions(-)
diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index 2e0d8d538b..58dfa64eb3 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -525,7 +525,11 @@ typedef struct SMMUEventInfo {
(x)->word[6] = (uint32_t)(addr & 0xffffffff); \
} while (0)
+#define EVT_GET_TYPE(x) extract32((x)->word[0], 0, 8)
+#define EVT_GET_SID(x) ((x)->word[1])
+
void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *event);
+void smmuv3_propagate_event(SMMUv3State *s, Evt *evt);
/* Configuration Data */
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index 976a436bd4..43d297698b 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -168,10 +168,23 @@ static MemTxResult smmuv3_write_eventq(SMMUv3State *s, Evt *evt)
return MEMTX_OK;
}
+void smmuv3_propagate_event(SMMUv3State *s, Evt *evt)
+{
+ MemTxResult r;
+
+ trace_smmuv3_propagate_event(smmu_event_string(EVT_GET_TYPE(evt)),
+ EVT_GET_SID(evt));
+ qemu_mutex_lock(&s->mutex);
+ r = smmuv3_write_eventq(s, evt);
+ if (r != MEMTX_OK) {
+ smmuv3_trigger_irq(s, SMMU_IRQ_GERROR, R_GERROR_EVENTQ_ABT_ERR_MASK);
+ }
+ qemu_mutex_unlock(&s->mutex);
+}
+
void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *info)
{
Evt evt = {};
- MemTxResult r;
if (!smmuv3_eventq_enabled(s)) {
return;
@@ -251,11 +264,7 @@ void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *info)
g_assert_not_reached();
}
- trace_smmuv3_record_event(smmu_event_string(info->type), info->sid);
- r = smmuv3_write_eventq(s, &evt);
- if (r != MEMTX_OK) {
- smmuv3_trigger_irq(s, SMMU_IRQ_GERROR, R_GERROR_EVENTQ_ABT_ERR_MASK);
- }
+ smmuv3_propagate_event(s, &evt);
info->recorded = true;
}
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
index 2e0b1f8f6f..bbe989d042 100644
--- a/hw/arm/trace-events
+++ b/hw/arm/trace-events
@@ -40,7 +40,7 @@ smmuv3_cmdq_opcode(const char *opcode) "<--- %s"
smmuv3_cmdq_consume_out(uint32_t prod, uint32_t cons, uint8_t prod_wrap, uint8_t cons_wrap) "prod:%d, cons:%d, prod_wrap:%d, cons_wrap:%d "
smmuv3_cmdq_consume_error(const char *cmd_name, uint8_t cmd_error) "Error on %s command execution: %d"
smmuv3_write_mmio(uint64_t addr, uint64_t val, unsigned size, uint32_t r) "addr: 0x%"PRIx64" val:0x%"PRIx64" size: 0x%x(%d)"
-smmuv3_record_event(const char *type, uint32_t sid) "%s sid=0x%x"
+smmuv3_propagate_event(const char *type, uint32_t sid) "%s sid=0x%x"
smmuv3_find_ste(uint16_t sid, uint32_t features, uint16_t sid_split) "sid=0x%x features:0x%x, sid_split:0x%x"
smmuv3_find_ste_2lvl(uint64_t strtab_base, uint64_t l1ptr, int l1_ste_offset, uint64_t l2ptr, int l2_ste_offset, int max_l2_ste) "strtab_base:0x%"PRIx64" l1ptr:0x%"PRIx64" l1_off:0x%x, l2ptr:0x%"PRIx64" l2_off:0x%x max_l2_ste:%d"
smmuv3_get_ste(uint64_t addr) "STE addr: 0x%"PRIx64
--
2.43.0
* [RFC PATCH 4/4] hw/arm/smmuv3-accel: Read and propagate host vIOMMU events
2025-11-05 15:46 [RFC PATCH 0/4] vEVENTQ support for accelerated SMMUv3 devices Shameer Kolothum
` (2 preceding siblings ...)
2025-11-05 15:46 ` [RFC PATCH 3/4] hw/arm/smmuv3: Introduce a helper function for event propagation Shameer Kolothum
@ 2025-11-05 15:46 ` Shameer Kolothum
2025-11-11 13:29 ` Jonathan Cameron via
2025-11-13 11:59 ` Zhangfei Gao
3 siblings, 2 replies; 11+ messages in thread
From: Shameer Kolothum @ 2025-11-05 15:46 UTC (permalink / raw)
To: qemu-arm, qemu-devel
Cc: eric.auger, peter.maydell, nicolinc, nathanc, mochs,
jonathan.cameron, zhangfei.gao, zhenzhong.duan, jgg, kjaju
Install an event handler on the vEVENTQ fd to read and propagate host
generated vIOMMU events to the guest.
The handler runs in QEMU’s main loop, using a non-blocking fd registered
via qemu_set_fd_handler().
Signed-off-by: Shameer Kolothum <skolothumtho@nvidia.com>
---
hw/arm/smmuv3-accel.c | 62 +++++++++++++++++++++++++++++++++++++++++++
hw/arm/smmuv3-accel.h | 2 ++
2 files changed, 64 insertions(+)
diff --git a/hw/arm/smmuv3-accel.c b/hw/arm/smmuv3-accel.c
index 210e7ebf36..e6c81c4786 100644
--- a/hw/arm/smmuv3-accel.c
+++ b/hw/arm/smmuv3-accel.c
@@ -383,6 +383,62 @@ static SMMUv3AccelDevice *smmuv3_accel_get_dev(SMMUState *bs, SMMUPciBus *sbus,
return accel_dev;
}
+static void smmuv3_accel_event_read(void *opaque)
+{
+ SMMUv3State *s = opaque;
+ SMMUv3AccelState *s_accel = s->s_accel;
+ SMMUViommu *vsmmu = s_accel->vsmmu;
+ struct iommu_vevent_arm_smmuv3 *vevent;
+ struct iommufd_vevent_header *hdr;
+ ssize_t readsz = sizeof(*hdr) + sizeof(*vevent);
+ uint8_t buf[sizeof(*hdr) + sizeof(*vevent)];
+ uint32_t last_seq = vsmmu->last_event_seq;
+ ssize_t bytes;
+ Evt evt = {};
+
+ bytes = read(vsmmu->veventq->veventq_fd, buf, readsz);
+ if (bytes <= 0) {
+ if (errno == EAGAIN || errno == EINTR) {
+ return;
+ }
+ error_report("vEVENTQ: read failed (%s)", strerror(errno));
+ return;
+ }
+
+ if (bytes < readsz) {
+ error_report("vEVENTQ: incomplete read (%zd/%zd bytes)", bytes, readsz);
+ return;
+ }
+
+ hdr = (struct iommufd_vevent_header *)buf;
+ if (hdr->flags & IOMMU_VEVENTQ_FLAG_LOST_EVENTS) {
+ error_report("vEVENTQ has lost events");
+ return;
+ }
+
+ vevent = (struct iommu_vevent_arm_smmuv3 *)(buf + sizeof(*hdr));
+ /* Check sequence in hdr for lost events if any */
+ if (vsmmu->event_start) {
+ uint32_t expected = (last_seq == INT_MAX) ? 0 : last_seq + 1;
+
+ if (hdr->sequence != expected) {
+ uint32_t delta;
+
+ if (hdr->sequence >= last_seq) {
+ delta = hdr->sequence - last_seq;
+ } else {
+ /* Handle wraparound from INT_MAX */
+ delta = (INT_MAX - last_seq) + hdr->sequence + 1;
+ }
+ error_report("vEVENTQ: detected lost %u event(s)", delta - 1);
+ }
+ }
+ vsmmu->last_event_seq = hdr->sequence;
+ vsmmu->event_start = true;
+ memcpy(&evt, vevent, sizeof(evt));
+ smmuv3_propagate_event(s, &evt);
+}
+
static void smmuv3_accel_free_veventq(SMMUViommu *vsmmu)
{
IOMMUFDVeventq *veventq = vsmmu->veventq;
@@ -390,6 +446,8 @@ static void smmuv3_accel_free_veventq(SMMUViommu *vsmmu)
if (!veventq) {
return;
}
+ qemu_set_fd_handler(veventq->veventq_fd, NULL, NULL, NULL);
+ close(veventq->veventq_fd);
iommufd_backend_free_id(vsmmu->iommufd, veventq->veventq_id);
g_free(veventq);
vsmmu->veventq = NULL;
@@ -433,6 +491,10 @@ bool smmuv3_accel_alloc_veventq(SMMUv3State *s, Error **errp)
veventq->veventq_fd = veventq_fd;
veventq->viommu = &vsmmu->viommu;
vsmmu->veventq = veventq;
+
+ /* Set up event handler for veventq fd */
+ fcntl(veventq_fd, F_SETFL, O_NONBLOCK);
+ qemu_set_fd_handler(veventq_fd, smmuv3_accel_event_read, NULL, s);
return true;
}
diff --git a/hw/arm/smmuv3-accel.h b/hw/arm/smmuv3-accel.h
index 740253bc34..6ed5f3b821 100644
--- a/hw/arm/smmuv3-accel.h
+++ b/hw/arm/smmuv3-accel.h
@@ -23,6 +23,8 @@ typedef struct SMMUViommu {
IOMMUFDBackend *iommufd;
IOMMUFDViommu viommu;
IOMMUFDVeventq *veventq;
+ uint32_t last_event_seq;
+ bool event_start;
uint32_t bypass_hwpt_id;
uint32_t abort_hwpt_id;
QLIST_HEAD(, SMMUv3AccelDevice) device_list;
--
2.43.0
* Re: [RFC PATCH 4/4] hw/arm/smmuv3-accel: Read and propagate host vIOMMU events
2025-11-05 15:46 ` [RFC PATCH 4/4] hw/arm/smmuv3-accel: Read and propagate host vIOMMU events Shameer Kolothum
@ 2025-11-11 13:29 ` Jonathan Cameron via
2025-11-13 11:59 ` Zhangfei Gao
1 sibling, 0 replies; 11+ messages in thread
From: Jonathan Cameron via @ 2025-11-11 13:29 UTC (permalink / raw)
To: Shameer Kolothum
Cc: qemu-arm, qemu-devel, eric.auger, peter.maydell, nicolinc,
nathanc, mochs, zhangfei.gao, zhenzhong.duan, jgg, kjaju
On Wed, 5 Nov 2025 15:46:52 +0000
Shameer Kolothum <skolothumtho@nvidia.com> wrote:
> Install an event handler on the vEVENTQ fd to read and propagate host
> generated vIOMMU events to the guest.
>
> The handler runs in QEMU’s main loop, using a non-blocking fd registered
> via qemu_set_fd_handler().
>
> Signed-off-by: Shameer Kolothum <skolothumtho@nvidia.com>
A few minor suggestions inline. Otherwise the set looks good to me, though
I'm very far from an expert on this stuff!
Jonathan
> ---
> hw/arm/smmuv3-accel.c | 62 +++++++++++++++++++++++++++++++++++++++++++
> hw/arm/smmuv3-accel.h | 2 ++
> 2 files changed, 64 insertions(+)
>
> diff --git a/hw/arm/smmuv3-accel.c b/hw/arm/smmuv3-accel.c
> index 210e7ebf36..e6c81c4786 100644
> --- a/hw/arm/smmuv3-accel.c
> +++ b/hw/arm/smmuv3-accel.c
> @@ -383,6 +383,62 @@ static SMMUv3AccelDevice *smmuv3_accel_get_dev(SMMUState *bs, SMMUPciBus *sbus,
> return accel_dev;
> }
>
> +static void smmuv3_accel_event_read(void *opaque)
> +{
> + SMMUv3State *s = opaque;
> + SMMUv3AccelState *s_accel = s->s_accel;
> + SMMUViommu *vsmmu = s_accel->vsmmu;
> + struct iommu_vevent_arm_smmuv3 *vevent;
> + struct iommufd_vevent_header *hdr;
> + ssize_t readsz = sizeof(*hdr) + sizeof(*vevent);
> + uint8_t buf[sizeof(*hdr) + sizeof(*vevent)];
Could you wrap this up in a structure to make it a tiny
bit more obvious what is going on?
struct {
struct iommufd_vevent_header hdr;
struct iommu_vevent_arm_smmuv3 vevent;
} buf;
Should allow sizeof(buf);
and accessing elements directly without casts.
> + uint32_t last_seq = vsmmu->last_event_seq;
> + ssize_t bytes;
> + Evt evt = {};
Given you copy into this based on sizeof(evt), I can't see why you need
to initialize it.
> +
> + bytes = read(vsmmu->veventq->veventq_fd, buf, readsz);
> + if (bytes <= 0) {
> + if (errno == EAGAIN || errno == EINTR) {
> + return;
> + }
> + error_report("vEVENTQ: read failed (%s)", strerror(errno));
> + return;
> + }
> +
> + if (bytes < readsz) {
> + error_report("vEVENTQ: incomplete read (%zd/%zd bytes)", bytes, readsz);
> + return;
> + }
> +
> + hdr = (struct iommufd_vevent_header *)buf;
> + if (hdr->flags & IOMMU_VEVENTQ_FLAG_LOST_EVENTS) {
> + error_report("vEVENTQ has lost events");
> + return;
> + }
> +
> + vevent = (struct iommu_vevent_arm_smmuv3 *)(buf + sizeof(*hdr));
> + /* Check sequence in hdr for lost events if any */
> + if (vsmmu->event_start) {
> + uint32_t expected = (last_seq == INT_MAX) ? 0 : last_seq + 1;
> +
> + if (hdr->sequence != expected) {
> + uint32_t delta;
> +
> + if (hdr->sequence >= last_seq) {
> + delta = hdr->sequence - last_seq;
> + } else {
> + /* Handle wraparound from INT_MAX */
> + delta = (INT_MAX - last_seq) + hdr->sequence + 1;
> + }
> + error_report("vEVENTQ: detected lost %u event(s)", delta - 1);
> + }
> + }
> + vsmmu->last_event_seq = hdr->sequence;
> + vsmmu->event_start = true;
> + memcpy(&evt, vevent, sizeof(evt));
> + smmuv3_propagate_event(s, &evt);
Why is the copy needed? Can't you just use the vevent in place?
> +}
* Re: [RFC PATCH 4/4] hw/arm/smmuv3-accel: Read and propagate host vIOMMU events
2025-11-05 15:46 ` [RFC PATCH 4/4] hw/arm/smmuv3-accel: Read and propagate host vIOMMU events Shameer Kolothum
2025-11-11 13:29 ` Jonathan Cameron via
@ 2025-11-13 11:59 ` Zhangfei Gao
2025-11-13 13:07 ` Shameer Kolothum
1 sibling, 1 reply; 11+ messages in thread
From: Zhangfei Gao @ 2025-11-13 11:59 UTC (permalink / raw)
To: Shameer Kolothum
Cc: qemu-arm, qemu-devel, eric.auger, peter.maydell, nicolinc,
nathanc, mochs, jonathan.cameron, zhenzhong.duan, jgg, kjaju
Hi, Shameer
On Wed, 5 Nov 2025 at 23:49, Shameer Kolothum <skolothumtho@nvidia.com> wrote:
>
> Install an event handler on the vEVENTQ fd to read and propagate host
> generated vIOMMU events to the guest.
>
> The handler runs in QEMU’s main loop, using a non-blocking fd registered
> via qemu_set_fd_handler().
>
> Signed-off-by: Shameer Kolothum <skolothumtho@nvidia.com>
Still don't understand how to use this vevent.
Is it to replace the fault queue (IOMMU_FAULT_QUEUE_ALLOC)?
And only find read, no write, only receive events but no response
(from guest kernel)?
By the way, can we use vevent in user space application? not in qemu
environment.
Thanks
* RE: [RFC PATCH 4/4] hw/arm/smmuv3-accel: Read and propagate host vIOMMU events
2025-11-13 11:59 ` Zhangfei Gao
@ 2025-11-13 13:07 ` Shameer Kolothum
2025-11-13 17:44 ` Nicolin Chen
0 siblings, 1 reply; 11+ messages in thread
From: Shameer Kolothum @ 2025-11-13 13:07 UTC (permalink / raw)
To: Zhangfei Gao, Nicolin Chen
Cc: qemu-arm@nongnu.org, qemu-devel@nongnu.org, eric.auger@redhat.com,
peter.maydell@linaro.org, Nathan Chen, Matt Ochs,
jonathan.cameron@huawei.com, zhenzhong.duan@intel.com,
Jason Gunthorpe, Krishnakant Jaju
Hi Zhangfei,
> -----Original Message-----
> From: Zhangfei Gao <zhangfei.gao@linaro.org>
> Sent: 13 November 2025 11:59
> To: Shameer Kolothum <skolothumtho@nvidia.com>
> Cc: qemu-arm@nongnu.org; qemu-devel@nongnu.org;
> eric.auger@redhat.com; peter.maydell@linaro.org; Nicolin Chen
> <nicolinc@nvidia.com>; Nathan Chen <nathanc@nvidia.com>; Matt Ochs
> <mochs@nvidia.com>; jonathan.cameron@huawei.com;
> zhenzhong.duan@intel.com; Jason Gunthorpe <jgg@nvidia.com>;
> Krishnakant Jaju <kjaju@nvidia.com>
> Subject: Re: [RFC PATCH 4/4] hw/arm/smmuv3-accel: Read and propagate
> host vIOMMU events
>
>
> Hi, Shameer
>
> On Wed, 5 Nov 2025 at 23:49, Shameer Kolothum
> <skolothumtho@nvidia.com> wrote:
> >
> > Install an event handler on the vEVENTQ fd to read and propagate host
> > generated vIOMMU events to the guest.
> >
> > The handler runs in QEMU’s main loop, using a non-blocking fd registered
> > via qemu_set_fd_handler().
> >
> > Signed-off-by: Shameer Kolothum <skolothumtho@nvidia.com>
>
> Still don't understand how to use this vevent.
> Is it to replace the fault queue (IOMMU_FAULT_QUEUE_ALLOC)?
No. IIUC, IOMMU_FAULT_QUEUE_ALLOC is to handle I/O page faults
for any HWPT capable of handling page faults/response. The QEMU
SMMUv3 still doesn't support page fault handling.
The VEVENTQ, on the other hand, provides a way to report any
other s1 events to Guest.
See how events are reported in arm_smmu_handle_event():
if (event->stall)
ret = iommu_report_device_fault(master->dev, &fault_evt); //Page faults
else if (master->vmaster && !event->s2)
ret = arm_vmaster_report_event(master->vmaster, evt); //This series handles this case.
else
ret = -EOPNOTSUPP;
> And only find read, no write, only receive events but no response
> (from guest kernel)?
Yes. And I am not sure what the long term plan is. We can still use
IOMMU_FAULT_QUEUE_ALLOC for page fault handling or extend this
VEVENTQ to have write() support for responses
To me, from an implementation perspective, both this FAULT and
VEVENTQ look almost similar.
@Nicolin, any idea what's plan for page fault handling?
> By the way, can we use vevent in user space application? not in qemu
> environment.
I didn't get that. QEMU is userspace. Or you meant just to receive any events
from host SMMUv3 in user space?
Thanks,
Shameer
* Re: [RFC PATCH 4/4] hw/arm/smmuv3-accel: Read and propagate host vIOMMU events
2025-11-13 13:07 ` Shameer Kolothum
@ 2025-11-13 17:44 ` Nicolin Chen
2025-11-14 8:45 ` Zhangfei Gao
0 siblings, 1 reply; 11+ messages in thread
From: Nicolin Chen @ 2025-11-13 17:44 UTC (permalink / raw)
To: Shameer Kolothum
Cc: Zhangfei Gao, qemu-arm@nongnu.org, qemu-devel@nongnu.org,
eric.auger@redhat.com, peter.maydell@linaro.org, Nathan Chen,
Matt Ochs, jonathan.cameron@huawei.com, zhenzhong.duan@intel.com,
Jason Gunthorpe, Krishnakant Jaju
On Thu, Nov 13, 2025 at 05:07:50AM -0800, Shameer Kolothum wrote:
> > On Wed, 5 Nov 2025 at 23:49, Shameer Kolothum
> > <skolothumtho@nvidia.com> wrote:
> > >
> > > Install an event handler on the vEVENTQ fd to read and propagate host
> > > generated vIOMMU events to the guest.
> > >
> > > The handler runs in QEMU’s main loop, using a non-blocking fd registered
> > > via qemu_set_fd_handler().
> > >
> > > Signed-off-by: Shameer Kolothum <skolothumtho@nvidia.com>
> >
> > Still don't understand how to use this vevent.
> > Is it to replace the fault queue (IOMMU_FAULT_QUEUE_ALLOC)?
>
> No. IIUC, IOMMU_FAULT_QUEUE_ALLOC is to handle I/O page faults
> for any HWPT capable of handling page faults/response. The QEMU
> SMMUv3 still doesn't support page fault handling.
>
> The VEVENTQ, on the other hand, provides a way to report any
> other s1 events to Guest.
>
> See how events are reported in arm_smmu_handle_event():
>
> if (event->stall)
> ret = iommu_report_device_fault(master->dev, &fault_evt); //Page faults
> else if (master->vmaster && !event->s2)
> ret = arm_vmaster_report_event(master->vmaster, evt); //This series handles this case.
> else
> ret = -EOPNOTSUPP;
Yes. We can say that FAULT_QUEUE is exclusively for PRI while the
vEVENTQ is for other types of HW events (or IRQs) related to the
guest stage-1. They can be used together.
> > And only find read, no write, only receive events but no response
> > (from guest kernel)?
>
> Yes. And I am not sure what the long term plan is. We can still use
> IOMMU_FAULT_QUEUE_ALLOC for page fault handling or extend this
> VEVENTQ to have write() support for responses
>
> To me, from an implementation perspective, both this FAULT and
> VEVENTQ look almost similar.
>
> @Nicolin, any idea what's plan for page fault handling?
No. I think PRI should be done via FAULT_QUEUE.
> > By the way, can we use vevent in user space application? not in qemu
> > environment.
>
> I didn't get that. QEMU is userspace. Or you meant just to receive any events
> from host SMMUv3 in user space?
If a user space application follows the iommufd uAPI like QEMU does,
it can. I am not sure about the use case though.
Nicolin
* Re: [RFC PATCH 4/4] hw/arm/smmuv3-accel: Read and propagate host vIOMMU events
2025-11-13 17:44 ` Nicolin Chen
@ 2025-11-14 8:45 ` Zhangfei Gao
2025-11-14 8:55 ` Shameer Kolothum
0 siblings, 1 reply; 11+ messages in thread
From: Zhangfei Gao @ 2025-11-14 8:45 UTC (permalink / raw)
To: Nicolin Chen
Cc: Shameer Kolothum, qemu-arm@nongnu.org, qemu-devel@nongnu.org,
eric.auger@redhat.com, peter.maydell@linaro.org, Nathan Chen,
Matt Ochs, jonathan.cameron@huawei.com, zhenzhong.duan@intel.com,
Jason Gunthorpe, Krishnakant Jaju
On Fri, 14 Nov 2025 at 01:44, Nicolin Chen <nicolinc@nvidia.com> wrote:
>
> On Thu, Nov 13, 2025 at 05:07:50AM -0800, Shameer Kolothum wrote:
> > > On Wed, 5 Nov 2025 at 23:49, Shameer Kolothum
> > > <skolothumtho@nvidia.com> wrote:
> > > >
> > > > Install an event handler on the vEVENTQ fd to read and propagate host
> > > > generated vIOMMU events to the guest.
> > > >
> > > > The handler runs in QEMU’s main loop, using a non-blocking fd registered
> > > > via qemu_set_fd_handler().
> > > >
> > > > Signed-off-by: Shameer Kolothum <skolothumtho@nvidia.com>
> > >
> > > Still don't understand how to use this vevent.
> > > Is it to replace the fault queue (IOMMU_FAULT_QUEUE_ALLOC)?
> >
> > No. IIUC, IOMMU_FAULT_QUEUE_ALLOC is to handle I/O page faults
> > for any HWPT capable of handling page faults/responses. The QEMU
> > SMMUv3 still doesn't support page fault handling.
> >
> > The VEVENTQ, on the other hand, provides a way to report any
> > other S1 events to the guest.
> >
> > See how events are reported in arm_smmu_handle_event():
> >
> > if (event->stall)
> > ret = iommu_report_device_fault(master->dev, &fault_evt); //Page faults
> > else if (master->vmaster && !event->s2)
> > ret = arm_vmaster_report_event(master->vmaster, evt); //This series handles this case.
> > else
> > ret = -EOPNOTSUPP;
>
> Yes. We can say that FAULT_QUEUE is exclusively for PRI while the
> vEVENTQ is for other types of HW events (or IRQs) related to the
> guest stage-1. They can be used together.
>
> > > And it only supports read, no write: only receiving events, with no response
> > > (from the guest kernel)?
> >
> > Yes. And I am not sure what the long-term plan is. We can still use
> > IOMMU_FAULT_QUEUE_ALLOC for page fault handling, or extend this
> > VEVENTQ to have write() support for responses.
> >
> > To me, from an implementation perspective, both this FAULT and
> > VEVENTQ look almost similar.
> >
> > @Nicolin, any idea what's the plan for page fault handling?
>
> No. I think PRI should be done via FAULT_QUEUE.
Does that mean FAULT_QUEUE needs a response, so it is read + write,
while VEVENTQ only notifies, needs no response, and is read-only?
So page faults need FAULT_QUEUE to resume,
while recording dirty pages in userspace for live migration etc. would just use VEVENTQ?
>
> > > By the way, can we use vevent in a user space application, not in a QEMU
> > > environment?
> >
> > I didn't get that. QEMU is userspace. Or did you mean just receiving events
> > from the host SMMUv3 in user space?
>
> If user space application follows the iommufd uAPI like QEMU does,
> it can. I am not sure about the use case though.
Thanks Nico.
>
> Nicolin
^ permalink raw reply [flat|nested] 11+ messages in thread
* RE: [RFC PATCH 4/4] hw/arm/smmuv3-accel: Read and propagate host vIOMMU events
2025-11-14 8:45 ` Zhangfei Gao
@ 2025-11-14 8:55 ` Shameer Kolothum
0 siblings, 0 replies; 11+ messages in thread
From: Shameer Kolothum @ 2025-11-14 8:55 UTC (permalink / raw)
To: Zhangfei Gao, Nicolin Chen
Cc: qemu-arm@nongnu.org, qemu-devel@nongnu.org, eric.auger@redhat.com,
peter.maydell@linaro.org, Nathan Chen, Matt Ochs,
jonathan.cameron@huawei.com, zhenzhong.duan@intel.com,
Jason Gunthorpe, Krishnakant Jaju
> -----Original Message-----
> From: Zhangfei Gao <zhangfei.gao@linaro.org>
> Sent: 14 November 2025 08:45
> To: Nicolin Chen <nicolinc@nvidia.com>
> Cc: Shameer Kolothum <skolothumtho@nvidia.com>; qemu-
> arm@nongnu.org; qemu-devel@nongnu.org; eric.auger@redhat.com;
> peter.maydell@linaro.org; Nathan Chen <nathanc@nvidia.com>; Matt Ochs
> <mochs@nvidia.com>; jonathan.cameron@huawei.com;
> zhenzhong.duan@intel.com; Jason Gunthorpe <jgg@nvidia.com>;
> Krishnakant Jaju <kjaju@nvidia.com>
> Subject: Re: [RFC PATCH 4/4] hw/arm/smmuv3-accel: Read and propagate
> host vIOMMU events
>
> On Fri, 14 Nov 2025 at 01:44, Nicolin Chen <nicolinc@nvidia.com> wrote:
> >
> > On Thu, Nov 13, 2025 at 05:07:50AM -0800, Shameer Kolothum wrote:
> > > > On Wed, 5 Nov 2025 at 23:49, Shameer Kolothum
> > > > <skolothumtho@nvidia.com> wrote:
> > > > >
> > > > > Install an event handler on the vEVENTQ fd to read and propagate host
> > > > > generated vIOMMU events to the guest.
> > > > >
> > > > > The handler runs in QEMU’s main loop, using a non-blocking fd registered
> > > > > via qemu_set_fd_handler().
> > > > >
> > > > > Signed-off-by: Shameer Kolothum <skolothumtho@nvidia.com>
> > > >
> > > > Still don't understand how to use this vevent.
> > > > Is it to replace the fault queue (IOMMU_FAULT_QUEUE_ALLOC)?
> > >
> > > No. IIUC, IOMMU_FAULT_QUEUE_ALLOC is to handle I/O page faults
> > > for any HWPT capable of handling page faults/responses. The QEMU
> > > SMMUv3 still doesn't support page fault handling.
> > >
> > > The VEVENTQ, on the other hand, provides a way to report any
> > > other S1 events to the guest.
> > >
> > > See how events are reported in arm_smmu_handle_event():
> > >
> > > if (event->stall)
> > > ret = iommu_report_device_fault(master->dev, &fault_evt); //Page faults
> > > else if (master->vmaster && !event->s2)
> > > ret = arm_vmaster_report_event(master->vmaster, evt); //This series handles this case.
> > > else
> > > ret = -EOPNOTSUPP;
> >
> > Yes. We can say that FAULT_QUEUE is exclusively for PRI while the
> > vEVENTQ is for other types of HW events (or IRQs) related to the
> > guest stage-1. They can be used together.
> >
> > > > And it only supports read, no write: only receiving events, with no response
> > > > (from the guest kernel)?
> > >
> > > Yes. And I am not sure what the long-term plan is. We can still use
> > > IOMMU_FAULT_QUEUE_ALLOC for page fault handling, or extend this
> > > VEVENTQ to have write() support for responses.
> > >
> > > To me, from an implementation perspective, both this FAULT and
> > > VEVENTQ look almost similar.
> > >
> > > @Nicolin, any idea what's the plan for page fault handling?
> >
> > No. I think PRI should be done via FAULT_QUEUE.
>
> Does that mean FAULT_QUEUE needs a response, so it is read + write,
> while VEVENTQ only notifies, needs no response, and is read-only?
Of course. A page fault always needs a response, whether abort or
retry.
VEVENTQ is for every event except page faults, and those events
don't need any response back.
If you are after the STALL-based page fault handling on D06, you
could rebase the old page fault/response patches on top of this
and handle it, I guess.
Thanks,
Shameer
^ permalink raw reply [flat|nested] 11+ messages in thread
end of thread, other threads:[~2025-11-14 8:56 UTC | newest]
Thread overview: 11+ messages
-- links below jump to the message on this page --
2025-11-05 15:46 [RFC PATCH 0/4] vEVENTQ support for accelerated SMMUv3 devices Shameer Kolothum
2025-11-05 15:46 ` [RFC PATCH 1/4] backends/iommufd: Introduce iommufd_backend_alloc_veventq Shameer Kolothum
2025-11-05 15:46 ` [RFC PATCH 2/4] hw/arm/smmuv3-accel: Allocate vEVENTQ for accelerated SMMUv3 devices Shameer Kolothum
2025-11-05 15:46 ` [RFC PATCH 3/4] hw/arm/smmuv3: Introduce a helper function for event propagation Shameer Kolothum
2025-11-05 15:46 ` [RFC PATCH 4/4] hw/arm/smmuv3-accel: Read and propagate host vIOMMU events Shameer Kolothum
2025-11-11 13:29 ` Jonathan Cameron via
2025-11-13 11:59 ` Zhangfei Gao
2025-11-13 13:07 ` Shameer Kolothum
2025-11-13 17:44 ` Nicolin Chen
2025-11-14 8:45 ` Zhangfei Gao
2025-11-14 8:55 ` Shameer Kolothum