* [PATCH V3 01/16] i3c: mipi-i3c-hci: Fix suspend behavior when bus disable falls back to software reset
2026-05-04 11:33 [PATCH V3 00/16] i3c: mipi-i3c-hci: DMA abort, recovery and related improvements Adrian Hunter
@ 2026-05-04 11:33 ` Adrian Hunter
2026-05-04 11:33 ` [PATCH V3 02/16] i3c: mipi-i3c-hci: Preserve RUN bit when aborting DMA ring Adrian Hunter
` (14 subsequent siblings)
15 siblings, 0 replies; 17+ messages in thread
From: Adrian Hunter @ 2026-05-04 11:33 UTC (permalink / raw)
To: alexandre.belloni; +Cc: Frank.Li, linux-i3c, linux-kernel
Software reset was introduced as a fallback if bus disable failed. The
change was made in 2 places: the cleanup path and the suspend path.
For the cleanup path (i3c_hci_bus_cleanup()), after software reset the
function continues to do cleanup for the current I/O mode. For the
suspend path (i3c_hci_rpm_suspend()), after software reset the function
returns early. However software reset does not reset any Ring Headers in
the Host Controller, so returning early is not the right thing to do.
Instead, continue to call suspend for the current I/O mode, which for DMA
mode will reset any Ring Headers.
Note, although Ring Headers should not be active at this stage, performing
this reset follows the procedure defined by the specification and keeps
the suspend path consistent with the cleanup path.
Note also, i3c_hci_sync_irq_inactive() is still called via the PIO and DMA
hci->io->suspend() callbacks.
Always return 0 because the device is quiesced as much as possible and
returning a negative error code would unnecessarily prevent system suspend.
Fixes: 9a258d1336f7 ("i3c: mipi-i3c-hci: Fallback to software reset when bus disable fails")
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Frank Li <Frank.Li@nxp.com>
---
Changes in V3:
Add Frank's rev'd-by
Changes in V2:
Always return 0 from suspend callback
Amend commit message
drivers/i3c/master/mipi-i3c-hci/core.c | 11 +++--------
1 file changed, 3 insertions(+), 8 deletions(-)
diff --git a/drivers/i3c/master/mipi-i3c-hci/core.c b/drivers/i3c/master/mipi-i3c-hci/core.c
index b781dbed2165..afb0764b5e1f 100644
--- a/drivers/i3c/master/mipi-i3c-hci/core.c
+++ b/drivers/i3c/master/mipi-i3c-hci/core.c
@@ -762,15 +762,10 @@ static int i3c_hci_reset_and_init(struct i3c_hci *hci)
int i3c_hci_rpm_suspend(struct device *dev)
{
struct i3c_hci *hci = dev_get_drvdata(dev);
- int ret;
- ret = i3c_hci_bus_disable(hci);
- if (ret) {
- /* Fall back to software reset to disable the bus */
- ret = i3c_hci_software_reset(hci);
- i3c_hci_sync_irq_inactive(hci);
- return ret;
- }
+ /* Fall back to software reset to disable the bus */
+ if (i3c_hci_bus_disable(hci))
+ i3c_hci_software_reset(hci);
hci->io->suspend(hci);
--
2.51.0
* [PATCH V3 02/16] i3c: mipi-i3c-hci: Preserve RUN bit when aborting DMA ring
2026-05-04 11:33 [PATCH V3 00/16] i3c: mipi-i3c-hci: DMA abort, recovery and related improvements Adrian Hunter
2026-05-04 11:33 ` [PATCH V3 01/16] i3c: mipi-i3c-hci: Fix suspend behavior when bus disable falls back to software reset Adrian Hunter
@ 2026-05-04 11:33 ` Adrian Hunter
2026-05-04 11:33 ` [PATCH V3 03/16] i3c: mipi-i3c-hci: Prevent DMA enqueue while ring is aborting or in error Adrian Hunter
` (13 subsequent siblings)
15 siblings, 0 replies; 17+ messages in thread
From: Adrian Hunter @ 2026-05-04 11:33 UTC (permalink / raw)
To: alexandre.belloni; +Cc: Frank.Li, linux-i3c, linux-kernel
The MIPI I3C HCI specification does not require the DMA ring RUN bit
(RUN_STOP) to be cleared when issuing an ABORT. Leaving it set allows the
DMA ring to continue receiving IBIs; an IBI is not lost in any case, because
it can still be received once the ring restarts, provided the I3C device has
not given up.
Note, currently ABORT is only used on a timeout error path so the change
has very little effect in practice. In the more common case of a transfer
error, the ring (bundle) operation is halted by the controller anyway.
Adjust the RING_CONTROL handling to set ABORT without clearing RUN_STOP,
bringing the driver into alignment with the specification.
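For illustration, the difference between the old blanket write and the new
read-modify-write can be sketched in plain C. The bit positions here are
made up for the example; the driver's real RING_CTRL_* definitions live in
drivers/i3c/master/mipi-i3c-hci/dma.c:

```c
#include <stdint.h>

/* Illustrative bit positions only, not the driver's actual layout. */
#define RING_CTRL_ENABLE   (1u << 31)
#define RING_CTRL_ABORT    (1u << 1)
#define RING_CTRL_RUN_STOP (1u << 0)

/* Old behavior: a fixed value that implicitly clears RUN_STOP. */
static inline uint32_t abort_write_old(void)
{
	return RING_CTRL_ENABLE | RING_CTRL_ABORT;
}

/* New behavior: read-modify-write that only adds ABORT, so RUN_STOP
 * (and any other set bits) survive. */
static inline uint32_t abort_write_new(uint32_t ring_control)
{
	return ring_control | RING_CTRL_ABORT;
}
```

With RUN_STOP set in the current register value, only the second form
preserves it across the ABORT.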
Fixes: b795e68bf3073 ("i3c: mipi-i3c-hci: Correct RING_CTRL_ABORT handling in DMA dequeue")
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Frank Li <Frank.Li@nxp.com>
---
Changes in V3:
Add Frank's rev'd-by
Changes in V2:
Improve commit message
drivers/i3c/master/mipi-i3c-hci/dma.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/i3c/master/mipi-i3c-hci/dma.c b/drivers/i3c/master/mipi-i3c-hci/dma.c
index e487ef52f6b4..4cd32e3afa7b 100644
--- a/drivers/i3c/master/mipi-i3c-hci/dma.c
+++ b/drivers/i3c/master/mipi-i3c-hci/dma.c
@@ -554,7 +554,7 @@ static bool hci_dma_dequeue_xfer(struct i3c_hci *hci,
if (ring_status & RING_STATUS_RUNNING) {
/* stop the ring */
reinit_completion(&rh->op_done);
- rh_reg_write(RING_CONTROL, RING_CTRL_ENABLE | RING_CTRL_ABORT);
+ rh_reg_write(RING_CONTROL, rh_reg_read(RING_CONTROL) | RING_CTRL_ABORT);
wait_for_completion_timeout(&rh->op_done, HZ);
ring_status = rh_reg_read(RING_STATUS);
if (ring_status & RING_STATUS_RUNNING) {
--
2.51.0
* [PATCH V3 03/16] i3c: mipi-i3c-hci: Prevent DMA enqueue while ring is aborting or in error
2026-05-04 11:33 [PATCH V3 00/16] i3c: mipi-i3c-hci: DMA abort, recovery and related improvements Adrian Hunter
2026-05-04 11:33 ` [PATCH V3 01/16] i3c: mipi-i3c-hci: Fix suspend behavior when bus disable falls back to software reset Adrian Hunter
2026-05-04 11:33 ` [PATCH V3 02/16] i3c: mipi-i3c-hci: Preserve RUN bit when aborting DMA ring Adrian Hunter
@ 2026-05-04 11:33 ` Adrian Hunter
2026-05-04 11:33 ` [PATCH V3 04/16] i3c: mipi-i3c-hci: Wait for DMA ring restart to complete Adrian Hunter
` (12 subsequent siblings)
15 siblings, 0 replies; 17+ messages in thread
From: Adrian Hunter @ 2026-05-04 11:33 UTC (permalink / raw)
To: alexandre.belloni; +Cc: Frank.Li, linux-i3c, linux-kernel
Block the DMA enqueue path while a Ring abort is in progress or after an
error condition has been detected.
Previously, new transfers could be enqueued while the DMA Ring was being
aborted or while error handling was underway. This allowed enqueue and
error-recovery paths to run concurrently, potentially interfering with
each other and corrupting Ring state.
Introduce explicit enqueue blocking and a wait queue to serialize access:
enqueue operations now wait until abort or error handling has completed
before proceeding. Enqueue is unblocked once the Ring is safely restarted.
Note, there is only 1 ring bundle configured, and a transfer error causes
the controller to halt ring (bundle) operation, so there is only ever 1
outstanding error at a time. Furthermore, a later patch ensures that only
the currently active transfer list can time out. Consequently, the DMA
queue will not be unblocked while there are outstanding transfer errors or
timeouts.
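The serialization scheme can be modeled in userspace C, with a pthread
condition variable standing in for the kernel wait queue. Names and
structure are illustrative only, not the driver's actual code:

```c
#include <pthread.h>
#include <stdbool.h>

/* Userspace model of the enqueue gate: a blocked flag plus a wait
 * queue, here a pthread condition variable. */
struct enqueue_gate {
	pthread_mutex_t lock;   /* stands in for hci->lock */
	pthread_cond_t wait;    /* stands in for enqueue_wait_queue */
	bool blocked;           /* stands in for enqueue_blocked */
};

static void gate_init(struct enqueue_gate *g)
{
	pthread_mutex_init(&g->lock, NULL);
	pthread_cond_init(&g->wait, NULL);
	g->blocked = false;
}

/* Enqueue path: wait until abort/error recovery has finished. */
static void gate_wait_unblocked(struct enqueue_gate *g)
{
	pthread_mutex_lock(&g->lock);
	while (g->blocked)
		pthread_cond_wait(&g->wait, &g->lock);
	pthread_mutex_unlock(&g->lock);
}

/* Abort/error path: block new enqueues. */
static void gate_block(struct enqueue_gate *g)
{
	pthread_mutex_lock(&g->lock);
	g->blocked = true;
	pthread_mutex_unlock(&g->lock);
}

/* Ring safely restarted: let waiters proceed. */
static void gate_unblock(struct enqueue_gate *g)
{
	pthread_mutex_lock(&g->lock);
	if (g->blocked) {
		g->blocked = false;
		pthread_cond_broadcast(&g->wait);
	}
	pthread_mutex_unlock(&g->lock);
}
```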
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
Changes in V3:
None
Changes in V2:
Improve commit message
drivers/i3c/master/mipi-i3c-hci/core.c | 1 +
drivers/i3c/master/mipi-i3c-hci/dma.c | 25 +++++++++++++++++++++++--
drivers/i3c/master/mipi-i3c-hci/hci.h | 2 ++
3 files changed, 26 insertions(+), 2 deletions(-)
diff --git a/drivers/i3c/master/mipi-i3c-hci/core.c b/drivers/i3c/master/mipi-i3c-hci/core.c
index afb0764b5e1f..44617eb3a3f1 100644
--- a/drivers/i3c/master/mipi-i3c-hci/core.c
+++ b/drivers/i3c/master/mipi-i3c-hci/core.c
@@ -973,6 +973,7 @@ static int i3c_hci_probe(struct platform_device *pdev)
spin_lock_init(&hci->lock);
mutex_init(&hci->control_mutex);
+ init_waitqueue_head(&hci->enqueue_wait_queue);
/*
* Multi-bus instances share the same MMIO address range, but not
diff --git a/drivers/i3c/master/mipi-i3c-hci/dma.c b/drivers/i3c/master/mipi-i3c-hci/dma.c
index 4cd32e3afa7b..314635e6e190 100644
--- a/drivers/i3c/master/mipi-i3c-hci/dma.c
+++ b/drivers/i3c/master/mipi-i3c-hci/dma.c
@@ -484,6 +484,12 @@ static int hci_dma_queue_xfer(struct i3c_hci *hci,
spin_lock_irq(&hci->lock);
+ while (unlikely(hci->enqueue_blocked)) {
+ spin_unlock_irq(&hci->lock);
+ wait_event(hci->enqueue_wait_queue, !READ_ONCE(hci->enqueue_blocked));
+ spin_lock_irq(&hci->lock);
+ }
+
if (n > rh->xfer_space) {
spin_unlock_irq(&hci->lock);
hci_dma_unmap_xfer(hci, xfer_list, n);
@@ -539,6 +545,14 @@ static int hci_dma_queue_xfer(struct i3c_hci *hci,
return 0;
}
+static void hci_dma_unblock_enqueue(struct i3c_hci *hci)
+{
+ if (hci->enqueue_blocked) {
+ hci->enqueue_blocked = false;
+ wake_up_all(&hci->enqueue_wait_queue);
+ }
+}
+
static bool hci_dma_dequeue_xfer(struct i3c_hci *hci,
struct hci_xfer *xfer_list, int n)
{
@@ -550,12 +564,17 @@ static bool hci_dma_dequeue_xfer(struct i3c_hci *hci,
guard(mutex)(&hci->control_mutex);
+ spin_lock_irq(&hci->lock);
+
ring_status = rh_reg_read(RING_STATUS);
if (ring_status & RING_STATUS_RUNNING) {
+ hci->enqueue_blocked = true;
+ spin_unlock_irq(&hci->lock);
/* stop the ring */
reinit_completion(&rh->op_done);
rh_reg_write(RING_CONTROL, rh_reg_read(RING_CONTROL) | RING_CTRL_ABORT);
wait_for_completion_timeout(&rh->op_done, HZ);
+ spin_lock_irq(&hci->lock);
ring_status = rh_reg_read(RING_STATUS);
if (ring_status & RING_STATUS_RUNNING) {
/*
@@ -567,8 +586,6 @@ static bool hci_dma_dequeue_xfer(struct i3c_hci *hci,
}
}
- spin_lock_irq(&hci->lock);
-
for (i = 0; i < n; i++) {
struct hci_xfer *xfer = xfer_list + i;
int idx = xfer->ring_entry;
@@ -604,6 +621,8 @@ static bool hci_dma_dequeue_xfer(struct i3c_hci *hci,
rh_reg_write(RING_CONTROL, RING_CTRL_ENABLE);
rh_reg_write(RING_CONTROL, RING_CTRL_ENABLE | RING_CTRL_RUN_STOP);
+ hci_dma_unblock_enqueue(hci);
+
spin_unlock_irq(&hci->lock);
return did_unqueue;
@@ -647,6 +666,8 @@ static void hci_dma_xfer_done(struct i3c_hci *hci, struct hci_rh_data *rh)
}
if (xfer->completion)
complete(xfer->completion);
+ if (RESP_STATUS(resp))
+ hci->enqueue_blocked = true;
}
done_ptr = (done_ptr + 1) % rh->xfer_entries;
diff --git a/drivers/i3c/master/mipi-i3c-hci/hci.h b/drivers/i3c/master/mipi-i3c-hci/hci.h
index f17f43494c1b..d630400ec945 100644
--- a/drivers/i3c/master/mipi-i3c-hci/hci.h
+++ b/drivers/i3c/master/mipi-i3c-hci/hci.h
@@ -54,6 +54,8 @@ struct i3c_hci {
struct mutex control_mutex;
atomic_t next_cmd_tid;
bool irq_inactive;
+ bool enqueue_blocked;
+ wait_queue_head_t enqueue_wait_queue;
u32 caps;
unsigned int quirks;
unsigned int DAT_entries;
--
2.51.0
* [PATCH V3 04/16] i3c: mipi-i3c-hci: Wait for DMA ring restart to complete
2026-05-04 11:33 [PATCH V3 00/16] i3c: mipi-i3c-hci: DMA abort, recovery and related improvements Adrian Hunter
` (2 preceding siblings ...)
2026-05-04 11:33 ` [PATCH V3 03/16] i3c: mipi-i3c-hci: Prevent DMA enqueue while ring is aborting or in error Adrian Hunter
@ 2026-05-04 11:33 ` Adrian Hunter
2026-05-04 11:33 ` [PATCH V3 05/16] i3c: mipi-i3c-hci: Move hci_dma_xfer_done() definition Adrian Hunter
` (11 subsequent siblings)
15 siblings, 0 replies; 17+ messages in thread
From: Adrian Hunter @ 2026-05-04 11:33 UTC (permalink / raw)
To: alexandre.belloni; +Cc: Frank.Li, linux-i3c, linux-kernel
Although hci_dma_dequeue_xfer() is serialized against itself via
control_mutex, this does not guarantee that a DMA ring restart
triggered by a previous invocation has fully completed.
When the function is called again in rapid succession, the DMA ring may
still be transitioning back to the running state, which may confound or
disrupt further state changes.
Address this by waiting for the DMA ring restart to complete before
continuing.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
Changes in V2 and V3:
None
drivers/i3c/master/mipi-i3c-hci/dma.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/i3c/master/mipi-i3c-hci/dma.c b/drivers/i3c/master/mipi-i3c-hci/dma.c
index 314635e6e190..28614fdbf558 100644
--- a/drivers/i3c/master/mipi-i3c-hci/dma.c
+++ b/drivers/i3c/master/mipi-i3c-hci/dma.c
@@ -617,6 +617,7 @@ static bool hci_dma_dequeue_xfer(struct i3c_hci *hci,
}
/* restart the ring */
+ reinit_completion(&rh->op_done);
mipi_i3c_hci_resume(hci);
rh_reg_write(RING_CONTROL, RING_CTRL_ENABLE);
rh_reg_write(RING_CONTROL, RING_CTRL_ENABLE | RING_CTRL_RUN_STOP);
@@ -625,6 +626,8 @@ static bool hci_dma_dequeue_xfer(struct i3c_hci *hci,
spin_unlock_irq(&hci->lock);
+ wait_for_completion_timeout(&rh->op_done, HZ);
+
return did_unqueue;
}
--
2.51.0
* [PATCH V3 05/16] i3c: mipi-i3c-hci: Move hci_dma_xfer_done() definition
2026-05-04 11:33 [PATCH V3 00/16] i3c: mipi-i3c-hci: DMA abort, recovery and related improvements Adrian Hunter
` (3 preceding siblings ...)
2026-05-04 11:33 ` [PATCH V3 04/16] i3c: mipi-i3c-hci: Wait for DMA ring restart to complete Adrian Hunter
@ 2026-05-04 11:33 ` Adrian Hunter
2026-05-04 11:33 ` [PATCH V3 06/16] i3c: mipi-i3c-hci: Call hci_dma_xfer_done() from dequeue path Adrian Hunter
` (10 subsequent siblings)
15 siblings, 0 replies; 17+ messages in thread
From: Adrian Hunter @ 2026-05-04 11:33 UTC (permalink / raw)
To: alexandre.belloni; +Cc: Frank.Li, linux-i3c, linux-kernel
Move hci_dma_xfer_done() earlier in the file to avoid a forward
declaration needed by a subsequent change.
No functional change.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Frank Li <Frank.Li@nxp.com>
---
Changes in V3:
None
Changes in V2:
Added Frank's Rev'd-by
drivers/i3c/master/mipi-i3c-hci/dma.c | 98 +++++++++++++--------------
1 file changed, 49 insertions(+), 49 deletions(-)
diff --git a/drivers/i3c/master/mipi-i3c-hci/dma.c b/drivers/i3c/master/mipi-i3c-hci/dma.c
index 28614fdbf558..c9852b85d6b0 100644
--- a/drivers/i3c/master/mipi-i3c-hci/dma.c
+++ b/drivers/i3c/master/mipi-i3c-hci/dma.c
@@ -545,6 +545,55 @@ static int hci_dma_queue_xfer(struct i3c_hci *hci,
return 0;
}
+static void hci_dma_xfer_done(struct i3c_hci *hci, struct hci_rh_data *rh)
+{
+ u32 op1_val, op2_val, resp, *ring_resp;
+ unsigned int tid, done_ptr = rh->done_ptr;
+ unsigned int done_cnt = 0;
+ struct hci_xfer *xfer;
+
+ for (;;) {
+ op2_val = rh_reg_read(RING_OPERATION2);
+ if (done_ptr == FIELD_GET(RING_OP2_CR_DEQ_PTR, op2_val))
+ break;
+
+ ring_resp = rh->resp + rh->resp_struct_sz * done_ptr;
+ resp = *ring_resp;
+ tid = RESP_TID(resp);
+ dev_dbg(&hci->master.dev, "resp = 0x%08x", resp);
+
+ xfer = rh->src_xfers[done_ptr];
+ if (!xfer) {
+ dev_dbg(&hci->master.dev, "orphaned ring entry");
+ } else {
+ hci_dma_unmap_xfer(hci, xfer, 1);
+ rh->src_xfers[done_ptr] = NULL;
+ xfer->ring_entry = -1;
+ xfer->response = resp;
+ if (tid != xfer->cmd_tid) {
+ dev_err(&hci->master.dev,
+ "response tid=%d when expecting %d\n",
+ tid, xfer->cmd_tid);
+ /* TODO: do something about it? */
+ }
+ if (xfer->completion)
+ complete(xfer->completion);
+ if (RESP_STATUS(resp))
+ hci->enqueue_blocked = true;
+ }
+
+ done_ptr = (done_ptr + 1) % rh->xfer_entries;
+ rh->done_ptr = done_ptr;
+ done_cnt += 1;
+ }
+
+ rh->xfer_space += done_cnt;
+ op1_val = rh_reg_read(RING_OPERATION1);
+ op1_val &= ~RING_OP1_CR_SW_DEQ_PTR;
+ op1_val |= FIELD_PREP(RING_OP1_CR_SW_DEQ_PTR, done_ptr);
+ rh_reg_write(RING_OPERATION1, op1_val);
+}
+
static void hci_dma_unblock_enqueue(struct i3c_hci *hci)
{
if (hci->enqueue_blocked) {
@@ -636,55 +685,6 @@ static int hci_dma_handle_error(struct i3c_hci *hci, struct hci_xfer *xfer_list,
return hci_dma_dequeue_xfer(hci, xfer_list, n) ? -EIO : 0;
}
-static void hci_dma_xfer_done(struct i3c_hci *hci, struct hci_rh_data *rh)
-{
- u32 op1_val, op2_val, resp, *ring_resp;
- unsigned int tid, done_ptr = rh->done_ptr;
- unsigned int done_cnt = 0;
- struct hci_xfer *xfer;
-
- for (;;) {
- op2_val = rh_reg_read(RING_OPERATION2);
- if (done_ptr == FIELD_GET(RING_OP2_CR_DEQ_PTR, op2_val))
- break;
-
- ring_resp = rh->resp + rh->resp_struct_sz * done_ptr;
- resp = *ring_resp;
- tid = RESP_TID(resp);
- dev_dbg(&hci->master.dev, "resp = 0x%08x", resp);
-
- xfer = rh->src_xfers[done_ptr];
- if (!xfer) {
- dev_dbg(&hci->master.dev, "orphaned ring entry");
- } else {
- hci_dma_unmap_xfer(hci, xfer, 1);
- rh->src_xfers[done_ptr] = NULL;
- xfer->ring_entry = -1;
- xfer->response = resp;
- if (tid != xfer->cmd_tid) {
- dev_err(&hci->master.dev,
- "response tid=%d when expecting %d\n",
- tid, xfer->cmd_tid);
- /* TODO: do something about it? */
- }
- if (xfer->completion)
- complete(xfer->completion);
- if (RESP_STATUS(resp))
- hci->enqueue_blocked = true;
- }
-
- done_ptr = (done_ptr + 1) % rh->xfer_entries;
- rh->done_ptr = done_ptr;
- done_cnt += 1;
- }
-
- rh->xfer_space += done_cnt;
- op1_val = rh_reg_read(RING_OPERATION1);
- op1_val &= ~RING_OP1_CR_SW_DEQ_PTR;
- op1_val |= FIELD_PREP(RING_OP1_CR_SW_DEQ_PTR, done_ptr);
- rh_reg_write(RING_OPERATION1, op1_val);
-}
-
static int hci_dma_request_ibi(struct i3c_hci *hci, struct i3c_dev_desc *dev,
const struct i3c_ibi_setup *req)
{
--
2.51.0
* [PATCH V3 06/16] i3c: mipi-i3c-hci: Call hci_dma_xfer_done() from dequeue path
2026-05-04 11:33 [PATCH V3 00/16] i3c: mipi-i3c-hci: DMA abort, recovery and related improvements Adrian Hunter
` (4 preceding siblings ...)
2026-05-04 11:33 ` [PATCH V3 05/16] i3c: mipi-i3c-hci: Move hci_dma_xfer_done() definition Adrian Hunter
@ 2026-05-04 11:33 ` Adrian Hunter
2026-05-04 11:33 ` [PATCH V3 07/16] i3c: mipi-i3c-hci: Complete transfer lists immediately on error Adrian Hunter
` (9 subsequent siblings)
15 siblings, 0 replies; 17+ messages in thread
From: Adrian Hunter @ 2026-05-04 11:33 UTC (permalink / raw)
To: alexandre.belloni; +Cc: Frank.Li, linux-i3c, linux-kernel
hci_dma_dequeue_xfer() relies on state normally updated by the DMA
interrupt handler. Ensure that state is current by explicitly invoking
hci_dma_xfer_done() from the dequeue path.
This handles cases where the interrupt handler has not (yet) run.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Frank Li <Frank.Li@nxp.com>
---
Changes in V3:
None
Changes in V2:
Added Frank's Rev'd-by
drivers/i3c/master/mipi-i3c-hci/dma.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/i3c/master/mipi-i3c-hci/dma.c b/drivers/i3c/master/mipi-i3c-hci/dma.c
index c9852b85d6b0..28e4d38f55d3 100644
--- a/drivers/i3c/master/mipi-i3c-hci/dma.c
+++ b/drivers/i3c/master/mipi-i3c-hci/dma.c
@@ -635,6 +635,8 @@ static bool hci_dma_dequeue_xfer(struct i3c_hci *hci,
}
}
+ hci_dma_xfer_done(hci, rh);
+
for (i = 0; i < n; i++) {
struct hci_xfer *xfer = xfer_list + i;
int idx = xfer->ring_entry;
--
2.51.0
* [PATCH V3 07/16] i3c: mipi-i3c-hci: Complete transfer lists immediately on error
2026-05-04 11:33 [PATCH V3 00/16] i3c: mipi-i3c-hci: DMA abort, recovery and related improvements Adrian Hunter
` (5 preceding siblings ...)
2026-05-04 11:33 ` [PATCH V3 06/16] i3c: mipi-i3c-hci: Call hci_dma_xfer_done() from dequeue path Adrian Hunter
@ 2026-05-04 11:33 ` Adrian Hunter
2026-05-04 11:33 ` [PATCH V3 08/16] i3c: mipi-i3c-hci: Avoid restarting DMA ring after aborting wrong transfer Adrian Hunter
` (8 subsequent siblings)
15 siblings, 0 replies; 17+ messages in thread
From: Adrian Hunter @ 2026-05-04 11:33 UTC (permalink / raw)
To: alexandre.belloni; +Cc: Frank.Li, linux-i3c, linux-kernel
In DMA mode, transfer lists are currently completed only when the final
transfer in the list completes. If an earlier transfer fails, the list is
left incomplete and callers wait until timeout.
There is no need to wait for a timeout, as the completion path in
i3c_hci_process_xfer() already checks for error status. Complete the
transfer list as soon as any transfer in the list reports an error.
This avoids unnecessary delays and spurious timeouts on error.
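The new completion rule reduces to a small predicate. This sketch trims
struct hci_xfer down to the one field that matters here:

```c
#include <stdbool.h>
#include <stdint.h>

/* Trimmed-down stand-in for the driver's struct hci_xfer. */
struct hci_xfer {
	struct hci_xfer *final_xfer;  /* last transfer of the list */
};

/* Complete the list either at its final transfer, or as soon as any
 * transfer in the list reports a non-zero response status. */
static bool list_should_complete(const struct hci_xfer *xfer,
				 uint32_t resp_status)
{
	return xfer == xfer->final_xfer || resp_status != 0;
}
```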
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
Changes in V3:
None
Changes in V2:
Renamed completing_xfer to final_xfer
drivers/i3c/master/mipi-i3c-hci/dma.c | 6 ++++--
drivers/i3c/master/mipi-i3c-hci/hci.h | 1 +
2 files changed, 5 insertions(+), 2 deletions(-)
diff --git a/drivers/i3c/master/mipi-i3c-hci/dma.c b/drivers/i3c/master/mipi-i3c-hci/dma.c
index 28e4d38f55d3..899fdf6555a8 100644
--- a/drivers/i3c/master/mipi-i3c-hci/dma.c
+++ b/drivers/i3c/master/mipi-i3c-hci/dma.c
@@ -502,6 +502,8 @@ static int hci_dma_queue_xfer(struct i3c_hci *hci,
struct hci_xfer *xfer = xfer_list + i;
u32 *ring_data = rh->xfer + rh->xfer_struct_sz * enqueue_ptr;
+ xfer->final_xfer = xfer_list + n - 1;
+
/* store cmd descriptor */
*ring_data++ = xfer->cmd_desc[0];
*ring_data++ = xfer->cmd_desc[1];
@@ -576,8 +578,8 @@ static void hci_dma_xfer_done(struct i3c_hci *hci, struct hci_rh_data *rh)
tid, xfer->cmd_tid);
/* TODO: do something about it? */
}
- if (xfer->completion)
- complete(xfer->completion);
+ if (xfer == xfer->final_xfer || RESP_STATUS(resp))
+ complete(xfer->final_xfer->completion);
if (RESP_STATUS(resp))
hci->enqueue_blocked = true;
}
diff --git a/drivers/i3c/master/mipi-i3c-hci/hci.h b/drivers/i3c/master/mipi-i3c-hci/hci.h
index d630400ec945..f07fc627d4d2 100644
--- a/drivers/i3c/master/mipi-i3c-hci/hci.h
+++ b/drivers/i3c/master/mipi-i3c-hci/hci.h
@@ -104,6 +104,7 @@ struct hci_xfer {
struct {
/* DMA specific */
struct i3c_dma *dma;
+ struct hci_xfer *final_xfer;
int ring_number;
int ring_entry;
};
--
2.51.0
* [PATCH V3 08/16] i3c: mipi-i3c-hci: Avoid restarting DMA ring after aborting wrong transfer
2026-05-04 11:33 [PATCH V3 00/16] i3c: mipi-i3c-hci: DMA abort, recovery and related improvements Adrian Hunter
` (6 preceding siblings ...)
2026-05-04 11:33 ` [PATCH V3 07/16] i3c: mipi-i3c-hci: Complete transfer lists immediately on error Adrian Hunter
@ 2026-05-04 11:33 ` Adrian Hunter
2026-05-04 11:33 ` [PATCH V3 09/16] i3c: mipi-i3c-hci: Add DMA ring abort/reset quirk for Intel controllers Adrian Hunter
` (7 subsequent siblings)
15 siblings, 0 replies; 17+ messages in thread
From: Adrian Hunter @ 2026-05-04 11:33 UTC (permalink / raw)
To: alexandre.belloni; +Cc: Frank.Li, linux-i3c, linux-kernel
Software ABORT of the DMA ring is used to recover from transfer list
timeouts, but it is inherently racy. The intended transfer list may
complete just before the ABORT takes effect, causing the subsequent
transfer list to be aborted instead.
In this case, an incomplete transfer list may remain in the ring and has
not yet been processed by hci_dma_dequeue_xfer(). Restarting the DMA
ring at that point can lead to unpredictable results.
Detect when the next queued transfer is not the first entry of a transfer
list and does not belong to the list currently being dequeued. In that
case, skip restarting the DMA ring and defer recovery until a subsequent
call to hci_dma_dequeue_xfer(), which will safely restart the ring once
the incomplete list is handled.
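Condensed to a predicate, the check works like this. The sketch uses a
trimmed struct hci_xfer; field names follow the patch:

```c
#include <stdbool.h>
#include <stddef.h>

struct hci_xfer {
	struct hci_xfer *final_xfer;  /* last transfer of this list */
	int xfer_list_pos;            /* index within its transfer list */
};

/* Skip the ring restart when the next queued entry is mid-list and
 * belongs to a different transfer list than the one being dequeued:
 * that list was partially aborted and must be handled first. */
static bool defer_ring_restart(const struct hci_xfer *next,
			       const struct hci_xfer *dequeued_list)
{
	return next && next->xfer_list_pos != 0 &&
	       next->final_xfer != dequeued_list->final_xfer;
}
```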
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
Changes in V3:
None
Changes in V2:
Renamed completing_xfer to final_xfer
drivers/i3c/master/mipi-i3c-hci/dma.c | 15 +++++++++++++++
drivers/i3c/master/mipi-i3c-hci/hci.h | 1 +
2 files changed, 16 insertions(+)
diff --git a/drivers/i3c/master/mipi-i3c-hci/dma.c b/drivers/i3c/master/mipi-i3c-hci/dma.c
index 899fdf6555a8..268f54b32101 100644
--- a/drivers/i3c/master/mipi-i3c-hci/dma.c
+++ b/drivers/i3c/master/mipi-i3c-hci/dma.c
@@ -503,6 +503,7 @@ static int hci_dma_queue_xfer(struct i3c_hci *hci,
u32 *ring_data = rh->xfer + rh->xfer_struct_sz * enqueue_ptr;
xfer->final_xfer = xfer_list + n - 1;
+ xfer->xfer_list_pos = i;
/* store cmd descriptor */
*ring_data++ = xfer->cmd_desc[0];
@@ -669,6 +670,20 @@ static bool hci_dma_dequeue_xfer(struct i3c_hci *hci,
}
}
+ /*
+ * A software ABORT may race with transfer completion and abort the next
+ * transfer list instead. Detect that case, and do not restart the ring.
+ * It will be handled by a subsequent dequeue.
+ */
+ if (!did_unqueue) {
+ struct hci_xfer *xfer = rh->src_xfers[rh->done_ptr];
+
+ if (xfer && xfer->xfer_list_pos && xfer->final_xfer != xfer_list->final_xfer) {
+ spin_unlock_irq(&hci->lock);
+ return false;
+ }
+ }
+
/* restart the ring */
reinit_completion(&rh->op_done);
mipi_i3c_hci_resume(hci);
diff --git a/drivers/i3c/master/mipi-i3c-hci/hci.h b/drivers/i3c/master/mipi-i3c-hci/hci.h
index f07fc627d4d2..83d4f13a68a3 100644
--- a/drivers/i3c/master/mipi-i3c-hci/hci.h
+++ b/drivers/i3c/master/mipi-i3c-hci/hci.h
@@ -107,6 +107,7 @@ struct hci_xfer {
struct hci_xfer *final_xfer;
int ring_number;
int ring_entry;
+ int xfer_list_pos;
};
};
};
--
2.51.0
* [PATCH V3 09/16] i3c: mipi-i3c-hci: Add DMA ring abort/reset quirk for Intel controllers
2026-05-04 11:33 [PATCH V3 00/16] i3c: mipi-i3c-hci: DMA abort, recovery and related improvements Adrian Hunter
` (7 preceding siblings ...)
2026-05-04 11:33 ` [PATCH V3 08/16] i3c: mipi-i3c-hci: Avoid restarting DMA ring after aborting wrong transfer Adrian Hunter
@ 2026-05-04 11:33 ` Adrian Hunter
2026-05-04 11:33 ` [PATCH V3 10/16] i3c: mipi-i3c-hci: Add DMA ring abort " Adrian Hunter
` (6 subsequent siblings)
15 siblings, 0 replies; 17+ messages in thread
From: Adrian Hunter @ 2026-05-04 11:33 UTC (permalink / raw)
To: alexandre.belloni; +Cc: Frank.Li, linux-i3c, linux-kernel
Some Intel I3C HCI controllers cannot reliably restart a DMA ring after an
ABORT. Additional queue resets are required to recover, and must be
performed using PIO reset bits even while operating in DMA mode.
This behavior is non-standard. Introduce a controller quirk to opt into
the required PIO queue resets after a DMA ring abort, and enable it for
Intel LPSS I3C controllers.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
Changes in V2 and V3:
None
drivers/i3c/master/mipi-i3c-hci/core.c | 15 ++++++++++++++-
drivers/i3c/master/mipi-i3c-hci/dma.c | 9 +++++++++
drivers/i3c/master/mipi-i3c-hci/hci.h | 2 ++
3 files changed, 25 insertions(+), 1 deletion(-)
diff --git a/drivers/i3c/master/mipi-i3c-hci/core.c b/drivers/i3c/master/mipi-i3c-hci/core.c
index 44617eb3a3f1..770235ad6b25 100644
--- a/drivers/i3c/master/mipi-i3c-hci/core.c
+++ b/drivers/i3c/master/mipi-i3c-hci/core.c
@@ -240,6 +240,18 @@ void mipi_i3c_hci_pio_reset(struct i3c_hci *hci)
reg_write(RESET_CONTROL, RX_FIFO_RST | TX_FIFO_RST | RESP_QUEUE_RST);
}
+#define ALL_QUEUES_RST (CMD_QUEUE_RST | RESP_QUEUE_RST | RX_FIFO_RST | TX_FIFO_RST | IBI_QUEUE_RST)
+
+void mipi_i3c_hci_pio_reset_all_queues(struct i3c_hci *hci)
+{
+ u32 regval;
+
+ reg_write(RESET_CONTROL, ALL_QUEUES_RST);
+ if (readx_poll_timeout_atomic(reg_read, RESET_CONTROL, regval,
+ !(regval & ALL_QUEUES_RST), 0, 20))
+ dev_err(&hci->master.dev, "%s: Reset queues failed\n", __func__);
+}
+
/* located here rather than dct.c because needed bits are in core reg space */
void mipi_i3c_hci_dct_index_reset(struct i3c_hci *hci)
{
@@ -1040,7 +1052,8 @@ MODULE_DEVICE_TABLE(acpi, i3c_hci_acpi_match);
static const struct platform_device_id i3c_hci_driver_ids[] = {
{ .name = "intel-lpss-i3c", HCI_QUIRK_RPM_ALLOWED |
HCI_QUIRK_RPM_IBI_ALLOWED |
- HCI_QUIRK_RPM_PARENT_MANAGED },
+ HCI_QUIRK_RPM_PARENT_MANAGED |
+ HCI_QUIRK_DMA_ABORT_REQUIRES_PIO_RESET },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(platform, i3c_hci_driver_ids);
diff --git a/drivers/i3c/master/mipi-i3c-hci/dma.c b/drivers/i3c/master/mipi-i3c-hci/dma.c
index 268f54b32101..699c6d523eed 100644
--- a/drivers/i3c/master/mipi-i3c-hci/dma.c
+++ b/drivers/i3c/master/mipi-i3c-hci/dma.c
@@ -597,6 +597,13 @@ static void hci_dma_xfer_done(struct i3c_hci *hci, struct hci_rh_data *rh)
rh_reg_write(RING_OPERATION1, op1_val);
}
+static void hci_dma_abort_requires_pio_reset_quirk(struct i3c_hci *hci, struct hci_rh_data *rh)
+{
+ if ((hci->quirks & HCI_QUIRK_DMA_ABORT_REQUIRES_PIO_RESET) &&
+ (rh_reg_read(RING_STATUS) & RING_STATUS_ABORTED))
+ mipi_i3c_hci_pio_reset_all_queues(hci);
+}
+
static void hci_dma_unblock_enqueue(struct i3c_hci *hci)
{
if (hci->enqueue_blocked) {
@@ -638,6 +645,8 @@ static bool hci_dma_dequeue_xfer(struct i3c_hci *hci,
}
}
+ hci_dma_abort_requires_pio_reset_quirk(hci, rh);
+
hci_dma_xfer_done(hci, rh);
for (i = 0; i < n; i++) {
diff --git a/drivers/i3c/master/mipi-i3c-hci/hci.h b/drivers/i3c/master/mipi-i3c-hci/hci.h
index 83d4f13a68a3..01237b12d32e 100644
--- a/drivers/i3c/master/mipi-i3c-hci/hci.h
+++ b/drivers/i3c/master/mipi-i3c-hci/hci.h
@@ -156,10 +156,12 @@ struct i3c_hci_dev_data {
#define HCI_QUIRK_RPM_ALLOWED BIT(5) /* Runtime PM allowed */
#define HCI_QUIRK_RPM_IBI_ALLOWED BIT(6) /* IBI and Hot-Join allowed while runtime suspended */
#define HCI_QUIRK_RPM_PARENT_MANAGED BIT(7) /* Runtime PM managed by parent device */
+#define HCI_QUIRK_DMA_ABORT_REQUIRES_PIO_RESET BIT(8) /* Do PIO queue SW resets after DMA abort */
/* global functions */
void mipi_i3c_hci_resume(struct i3c_hci *hci);
void mipi_i3c_hci_pio_reset(struct i3c_hci *hci);
+void mipi_i3c_hci_pio_reset_all_queues(struct i3c_hci *hci);
void mipi_i3c_hci_dct_index_reset(struct i3c_hci *hci);
void amd_set_od_pp_timing(struct i3c_hci *hci);
void amd_set_resp_buf_thld(struct i3c_hci *hci);
--
2.51.0
* [PATCH V3 10/16] i3c: mipi-i3c-hci: Add DMA ring abort quirk for Intel controllers
2026-05-04 11:33 [PATCH V3 00/16] i3c: mipi-i3c-hci: DMA abort, recovery and related improvements Adrian Hunter
` (8 preceding siblings ...)
2026-05-04 11:33 ` [PATCH V3 09/16] i3c: mipi-i3c-hci: Add DMA ring abort/reset quirk for Intel controllers Adrian Hunter
@ 2026-05-04 11:33 ` Adrian Hunter
2026-05-04 11:33 ` [PATCH V3 11/16] i3c: mipi-i3c-hci: Factor out reset-and-restore helper Adrian Hunter
` (5 subsequent siblings)
15 siblings, 0 replies; 17+ messages in thread
From: Adrian Hunter @ 2026-05-04 11:33 UTC (permalink / raw)
To: alexandre.belloni; +Cc: Frank.Li, linux-i3c, linux-kernel
DMA rings can be aborted either per-ring via RING_CONTROL or globally
via HC_CONTROL_ABORT. The driver currently relies on the per-ring
mechanism.
Some Intel I3C HCI controllers require HC_CONTROL_ABORT to be asserted
before a DMA ring abort is effective. This behavior is non-standard.
Introduce a controller quirk to select the required abort method and
enable it for Intel LPSS I3C controllers.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
Changes in V2 and V3:
None
drivers/i3c/master/mipi-i3c-hci/core.c | 18 +++++++++++++++--
drivers/i3c/master/mipi-i3c-hci/dma.c | 27 +++++++++++++++++++++++---
drivers/i3c/master/mipi-i3c-hci/hci.h | 2 ++
3 files changed, 42 insertions(+), 5 deletions(-)
diff --git a/drivers/i3c/master/mipi-i3c-hci/core.c b/drivers/i3c/master/mipi-i3c-hci/core.c
index 770235ad6b25..8274c84b16be 100644
--- a/drivers/i3c/master/mipi-i3c-hci/core.c
+++ b/drivers/i3c/master/mipi-i3c-hci/core.c
@@ -231,7 +231,20 @@ static void i3c_hci_bus_cleanup(struct i3c_master_controller *m)
void mipi_i3c_hci_resume(struct i3c_hci *hci)
{
- reg_set(HC_CONTROL, HC_CONTROL_RESUME);
+ u32 reg = reg_read(HC_CONTROL);
+
+ reg |= HC_CONTROL_RESUME;
+ reg &= ~HC_CONTROL_ABORT;
+ reg_write(HC_CONTROL, reg);
+}
+
+void mipi_i3c_hci_abort(struct i3c_hci *hci)
+{
+ u32 reg = reg_read(HC_CONTROL);
+
+ reg &= ~HC_CONTROL_RESUME; /* Do not set resume */
+ reg |= HC_CONTROL_ABORT;
+ reg_write(HC_CONTROL, reg);
}
/* located here rather than pio.c because needed bits are in core reg space */
@@ -1053,7 +1066,8 @@ static const struct platform_device_id i3c_hci_driver_ids[] = {
{ .name = "intel-lpss-i3c", HCI_QUIRK_RPM_ALLOWED |
HCI_QUIRK_RPM_IBI_ALLOWED |
HCI_QUIRK_RPM_PARENT_MANAGED |
- HCI_QUIRK_DMA_ABORT_REQUIRES_PIO_RESET },
+ HCI_QUIRK_DMA_ABORT_REQUIRES_PIO_RESET |
+ HCI_QUIRK_DMA_REQUIRES_HC_ABORT },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(platform, i3c_hci_driver_ids);
diff --git a/drivers/i3c/master/mipi-i3c-hci/dma.c b/drivers/i3c/master/mipi-i3c-hci/dma.c
index 699c6d523eed..41bbd912df7f 100644
--- a/drivers/i3c/master/mipi-i3c-hci/dma.c
+++ b/drivers/i3c/master/mipi-i3c-hci/dma.c
@@ -597,6 +597,29 @@ static void hci_dma_xfer_done(struct i3c_hci *hci, struct hci_rh_data *rh)
rh_reg_write(RING_OPERATION1, op1_val);
}
+static bool hci_dma_requires_hc_abort_quirk(struct i3c_hci *hci, struct hci_rh_data *rh)
+{
+ if (!(hci->quirks & HCI_QUIRK_DMA_REQUIRES_HC_ABORT))
+ return false;
+
+ reinit_completion(&rh->op_done);
+ mipi_i3c_hci_abort(hci);
+ wait_for_completion_timeout(&rh->op_done, HZ);
+ rh_reg_write(RING_CONTROL, rh_reg_read(RING_CONTROL) | RING_CTRL_ABORT);
+
+ return true;
+}
+
+static void hci_dma_abort(struct i3c_hci *hci, struct hci_rh_data *rh)
+{
+ if (hci_dma_requires_hc_abort_quirk(hci, rh))
+ return;
+
+ reinit_completion(&rh->op_done);
+ rh_reg_write(RING_CONTROL, rh_reg_read(RING_CONTROL) | RING_CTRL_ABORT);
+ wait_for_completion_timeout(&rh->op_done, HZ);
+}
+
static void hci_dma_abort_requires_pio_reset_quirk(struct i3c_hci *hci, struct hci_rh_data *rh)
{
if ((hci->quirks & HCI_QUIRK_DMA_ABORT_REQUIRES_PIO_RESET) &&
@@ -630,9 +653,7 @@ static bool hci_dma_dequeue_xfer(struct i3c_hci *hci,
hci->enqueue_blocked = true;
spin_unlock_irq(&hci->lock);
/* stop the ring */
- reinit_completion(&rh->op_done);
- rh_reg_write(RING_CONTROL, rh_reg_read(RING_CONTROL) | RING_CTRL_ABORT);
- wait_for_completion_timeout(&rh->op_done, HZ);
+ hci_dma_abort(hci, rh);
spin_lock_irq(&hci->lock);
ring_status = rh_reg_read(RING_STATUS);
if (ring_status & RING_STATUS_RUNNING) {
diff --git a/drivers/i3c/master/mipi-i3c-hci/hci.h b/drivers/i3c/master/mipi-i3c-hci/hci.h
index 01237b12d32e..97c31a315a6e 100644
--- a/drivers/i3c/master/mipi-i3c-hci/hci.h
+++ b/drivers/i3c/master/mipi-i3c-hci/hci.h
@@ -157,9 +157,11 @@ struct i3c_hci_dev_data {
#define HCI_QUIRK_RPM_IBI_ALLOWED BIT(6) /* IBI and Hot-Join allowed while runtime suspended */
#define HCI_QUIRK_RPM_PARENT_MANAGED BIT(7) /* Runtime PM managed by parent device */
#define HCI_QUIRK_DMA_ABORT_REQUIRES_PIO_RESET BIT(8) /* Do PIO queue SW resets after DMA abort */
+#define HCI_QUIRK_DMA_REQUIRES_HC_ABORT BIT(9) /* Use HC_CONTROL ABORT to abort DMA */
/* global functions */
void mipi_i3c_hci_resume(struct i3c_hci *hci);
+void mipi_i3c_hci_abort(struct i3c_hci *hci);
void mipi_i3c_hci_pio_reset(struct i3c_hci *hci);
void mipi_i3c_hci_pio_reset_all_queues(struct i3c_hci *hci);
void mipi_i3c_hci_dct_index_reset(struct i3c_hci *hci);
--
2.51.0
* [PATCH V3 11/16] i3c: mipi-i3c-hci: Factor out reset-and-restore helper
From: Adrian Hunter @ 2026-05-04 11:33 UTC (permalink / raw)
To: alexandre.belloni; +Cc: Frank.Li, linux-i3c, linux-kernel
Factor the reset-and-restore sequence out of i3c_hci_rpm_resume() into
a separate helper.
This allows the same logic to be reused for recovery paths in subsequent
changes without duplicating suspend/resume handling.
No functional change.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
Changes in V3:
None
Changes in V2:
Drop redundant i3c_hci_sync_irq_inactive(hci)
from i3c_hci_reset_and_restore() because it is called by
hci->io->suspend() anyway
drivers/i3c/master/mipi-i3c-hci/core.c | 19 +++++++++++++++++--
drivers/i3c/master/mipi-i3c-hci/hci.h | 2 ++
2 files changed, 19 insertions(+), 2 deletions(-)
diff --git a/drivers/i3c/master/mipi-i3c-hci/core.c b/drivers/i3c/master/mipi-i3c-hci/core.c
index 8274c84b16be..12a0122fb709 100644
--- a/drivers/i3c/master/mipi-i3c-hci/core.c
+++ b/drivers/i3c/master/mipi-i3c-hci/core.c
@@ -798,9 +798,8 @@ int i3c_hci_rpm_suspend(struct device *dev)
}
EXPORT_SYMBOL_GPL(i3c_hci_rpm_suspend);
-int i3c_hci_rpm_resume(struct device *dev)
+static int i3c_hci_do_reset_and_restore(struct i3c_hci *hci)
{
- struct i3c_hci *hci = dev_get_drvdata(dev);
int ret;
ret = i3c_hci_reset_and_init(hci);
@@ -821,6 +820,22 @@ int i3c_hci_rpm_resume(struct device *dev)
return 0;
}
+
+int i3c_hci_reset_and_restore(struct i3c_hci *hci)
+{
+ i3c_hci_bus_disable(hci);
+
+ hci->io->suspend(hci);
+
+ return i3c_hci_do_reset_and_restore(hci);
+}
+
+int i3c_hci_rpm_resume(struct device *dev)
+{
+ struct i3c_hci *hci = dev_get_drvdata(dev);
+
+ return i3c_hci_do_reset_and_restore(hci);
+}
EXPORT_SYMBOL_GPL(i3c_hci_rpm_resume);
static int i3c_hci_runtime_suspend(struct device *dev)
diff --git a/drivers/i3c/master/mipi-i3c-hci/hci.h b/drivers/i3c/master/mipi-i3c-hci/hci.h
index 97c31a315a6e..a3151c26827e 100644
--- a/drivers/i3c/master/mipi-i3c-hci/hci.h
+++ b/drivers/i3c/master/mipi-i3c-hci/hci.h
@@ -175,4 +175,6 @@ int i3c_hci_process_xfer(struct i3c_hci *hci, struct hci_xfer *xfer, int n);
int i3c_hci_rpm_suspend(struct device *dev);
int i3c_hci_rpm_resume(struct device *dev);
+int i3c_hci_reset_and_restore(struct i3c_hci *hci);
+
#endif
--
2.51.0
* [PATCH V3 12/16] i3c: mipi-i3c-hci: Add DMA-mode recovery for internal controller errors
From: Adrian Hunter @ 2026-05-04 11:33 UTC (permalink / raw)
To: alexandre.belloni; +Cc: Frank.Li, linux-i3c, linux-kernel
Handle internal I3C HCI errors when operating in DMA mode by adding a
simple recovery mechanism.
On detection of an internal controller error, mark recovery as needed and
attempt to restore operation by performing a software reset followed by
state restore. To keep recovery straightforward on this unlikely error
path, all currently queued transfers are terminated and completed with an
error.
This allows the controller to resume operation after internal failures
rather than remaining permanently stuck.
Note: internal errors, indicated by INTR_HC_INTERNAL_ERR, cause the
controller to stop.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
Changes in V3:
When erroring out transfers, ensure the final transfer of a
transfer list is processed last
Changes in V2:
Rename completing_xfer to final_xfer
Add hci_dma_xfer_done() before checking for an already complete
transfer
Improve commit message
drivers/i3c/master/mipi-i3c-hci/cmd.h | 6 ++
drivers/i3c/master/mipi-i3c-hci/core.c | 1 +
drivers/i3c/master/mipi-i3c-hci/dma.c | 93 +++++++++++++++++++++++---
drivers/i3c/master/mipi-i3c-hci/hci.h | 1 +
4 files changed, 92 insertions(+), 9 deletions(-)
diff --git a/drivers/i3c/master/mipi-i3c-hci/cmd.h b/drivers/i3c/master/mipi-i3c-hci/cmd.h
index b1bf87daa651..7bada7b4b2de 100644
--- a/drivers/i3c/master/mipi-i3c-hci/cmd.h
+++ b/drivers/i3c/master/mipi-i3c-hci/cmd.h
@@ -65,4 +65,10 @@ struct hci_cmd_ops {
extern const struct hci_cmd_ops mipi_i3c_hci_cmd_v1;
extern const struct hci_cmd_ops mipi_i3c_hci_cmd_v2;
+static inline void hci_cmd_set_resp_err(u32 *response, int resp_err)
+{
+ *response &= ~RESP_ERR_FIELD;
+ *response |= FIELD_PREP(RESP_ERR_FIELD, resp_err);
+}
+
#endif
diff --git a/drivers/i3c/master/mipi-i3c-hci/core.c b/drivers/i3c/master/mipi-i3c-hci/core.c
index 12a0122fb709..69dcf5dad3a5 100644
--- a/drivers/i3c/master/mipi-i3c-hci/core.c
+++ b/drivers/i3c/master/mipi-i3c-hci/core.c
@@ -668,6 +668,7 @@ static irqreturn_t i3c_hci_irq_handler(int irq, void *dev_id)
if (val & INTR_HC_INTERNAL_ERR) {
dev_err(&hci->master.dev, "Host Controller Internal Error\n");
val &= ~INTR_HC_INTERNAL_ERR;
+ hci->recovery_needed = true;
}
if (val)
diff --git a/drivers/i3c/master/mipi-i3c-hci/dma.c b/drivers/i3c/master/mipi-i3c-hci/dma.c
index 41bbd912df7f..376062c0fcbf 100644
--- a/drivers/i3c/master/mipi-i3c-hci/dma.c
+++ b/drivers/i3c/master/mipi-i3c-hci/dma.c
@@ -9,6 +9,7 @@
*/
#include <linux/bitfield.h>
+#include <linux/delay.h>
#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/errno.h>
@@ -258,6 +259,10 @@ static void hci_dma_init_rh(struct i3c_hci *hci, struct hci_rh_data *rh, int i)
rh_reg_write(RING_CONTROL, RING_CTRL_ENABLE);
rh_reg_write(RING_CONTROL, RING_CTRL_ENABLE | RING_CTRL_RUN_STOP);
+ /*
+ * Do not clear the entries of rh->src_xfers because the recovery uses
+ * them. In other cases they should be NULL anyway.
+ */
rh->done_ptr = 0;
rh->ibi_chunk_ptr = 0;
rh->xfer_space = rh->xfer_entries;
@@ -362,7 +367,7 @@ static int hci_dma_init(struct i3c_hci *hci)
rh->resp = dma_alloc_coherent(rings->sysdev, resps_sz,
&rh->resp_dma, GFP_KERNEL);
rh->src_xfers =
- kmalloc_objs(*rh->src_xfers, rh->xfer_entries);
+ kzalloc_objs(*rh->src_xfers, rh->xfer_entries);
ret = -ENOMEM;
if (!rh->xfer || !rh->resp || !rh->src_xfers)
goto err_out;
@@ -572,13 +577,15 @@ static void hci_dma_xfer_done(struct i3c_hci *hci, struct hci_rh_data *rh)
hci_dma_unmap_xfer(hci, xfer, 1);
rh->src_xfers[done_ptr] = NULL;
xfer->ring_entry = -1;
- xfer->response = resp;
if (tid != xfer->cmd_tid) {
dev_err(&hci->master.dev,
"response tid=%d when expecting %d\n",
tid, xfer->cmd_tid);
- /* TODO: do something about it? */
+ hci->recovery_needed = true;
+ if (!RESP_STATUS(resp))
+ hci_cmd_set_resp_err(&resp, RESP_ERR_HC_TERMINATED);
}
+ xfer->response = resp;
if (xfer == xfer->final_xfer || RESP_STATUS(resp))
complete(xfer->final_xfer->completion);
if (RESP_STATUS(resp))
@@ -635,6 +642,60 @@ static void hci_dma_unblock_enqueue(struct i3c_hci *hci)
}
}
+static void hci_dma_error_out_rh(struct i3c_hci *hci, struct hci_rh_data *rh)
+{
+ /*
+ * The entries of rh->src_xfers are not cleared by
+ * i3c_hci_reset_and_restore(), so can be used here. Do 2 passes so
+ * that the final_xfer of an xfer list is always processed last.
+ */
+ for (int pass = 0; pass < 2; pass++)
+ for (int i = 0; i < rh->xfer_entries; i++) {
+ struct hci_xfer *xfer = rh->src_xfers[i];
+
+ if (!xfer || (!pass && xfer == xfer->final_xfer))
+ continue;
+ hci_dma_unmap_xfer(hci, xfer, 1);
+ rh->src_xfers[i] = NULL;
+ xfer->ring_entry = -1;
+ hci_cmd_set_resp_err(&xfer->response, RESP_ERR_HC_TERMINATED);
+ if (xfer == xfer->final_xfer)
+ complete(xfer->final_xfer->completion);
+ }
+}
+
+static void hci_dma_error_out_all(struct i3c_hci *hci)
+{
+ struct hci_rings_data *rings = hci->io_data;
+
+ for (int i = 0; i < rings->total; i++)
+ hci_dma_error_out_rh(hci, &rings->headers[i]);
+}
+
+static void hci_dma_recovery(struct i3c_hci *hci)
+{
+ int ret;
+
+ dev_err(&hci->master.dev, "Attempting to recover from internal errors\n");
+
+ for (int i = 0; i < 3; i++) {
+ ret = i3c_hci_reset_and_restore(hci);
+ if (!ret)
+ break;
+ dev_err(&hci->master.dev, "Reset and restore failed, error %d\n", ret);
+ /* Just in case the controller is busy, give it some time */
+ msleep(1000);
+ }
+
+ spin_lock_irq(&hci->lock);
+ hci_dma_error_out_all(hci);
+ hci_dma_unblock_enqueue(hci);
+ hci->recovery_needed = false;
+ spin_unlock_irq(&hci->lock);
+
+ dev_err(&hci->master.dev, "Recovery %s\n", ret ? "failed!" : "done");
+}
+
static bool hci_dma_dequeue_xfer(struct i3c_hci *hci,
struct hci_xfer *xfer_list, int n)
{
@@ -650,6 +711,17 @@ static bool hci_dma_dequeue_xfer(struct i3c_hci *hci,
ring_status = rh_reg_read(RING_STATUS);
if (ring_status & RING_STATUS_RUNNING) {
+ /*
+ * The transfer may have already completed, especially
+ * if recovery has just run. Do nothing in that case.
+ */
+ hci_dma_xfer_done(hci, rh);
+ if (xfer_list->final_xfer->ring_entry < 0 &&
+ !hci->recovery_needed && !hci->enqueue_blocked &&
+ ring_status == (RING_STATUS_ENABLED | RING_STATUS_RUNNING)) {
+ spin_unlock_irq(&hci->lock);
+ return false;
+ }
hci->enqueue_blocked = true;
spin_unlock_irq(&hci->lock);
/* stop the ring */
@@ -657,12 +729,8 @@ static bool hci_dma_dequeue_xfer(struct i3c_hci *hci,
spin_lock_irq(&hci->lock);
ring_status = rh_reg_read(RING_STATUS);
if (ring_status & RING_STATUS_RUNNING) {
- /*
- * We're deep in it if ever this condition is ever met.
- * Hardware might still be writing to memory, etc.
- */
- dev_crit(&hci->master.dev, "unable to abort the ring\n");
- WARN_ON(1);
+ dev_err(&hci->master.dev, "Unable to abort the DMA ring\n");
+ hci->recovery_needed = true;
}
}
@@ -670,6 +738,13 @@ static bool hci_dma_dequeue_xfer(struct i3c_hci *hci,
hci_dma_xfer_done(hci, rh);
+ if (hci->recovery_needed) {
+ hci->enqueue_blocked = true;
+ spin_unlock_irq(&hci->lock);
+ hci_dma_recovery(hci);
+ return true;
+ }
+
for (i = 0; i < n; i++) {
struct hci_xfer *xfer = xfer_list + i;
int idx = xfer->ring_entry;
diff --git a/drivers/i3c/master/mipi-i3c-hci/hci.h b/drivers/i3c/master/mipi-i3c-hci/hci.h
index a3151c26827e..4bf2c66c97b4 100644
--- a/drivers/i3c/master/mipi-i3c-hci/hci.h
+++ b/drivers/i3c/master/mipi-i3c-hci/hci.h
@@ -55,6 +55,7 @@ struct i3c_hci {
atomic_t next_cmd_tid;
bool irq_inactive;
bool enqueue_blocked;
+ bool recovery_needed;
wait_queue_head_t enqueue_wait_queue;
u32 caps;
unsigned int quirks;
--
2.51.0
* [PATCH V3 13/16] i3c: mipi-i3c-hci: Wait for NoOp commands to complete
From: Adrian Hunter @ 2026-05-04 11:33 UTC (permalink / raw)
To: alexandre.belloni; +Cc: Frank.Li, linux-i3c, linux-kernel
When a transfer list is only partially completed due to an error,
hci_dma_dequeue_xfer() overwrites the remaining DMA ring entries with
NoOp commands and restarts the ring to flush them out.
While NoOp commands are expected to complete successfully, they may still
fail to complete if the DMA ring is stuck. Explicitly wait for the NoOp
commands to finish, and trigger controller recovery if they do not
complete or report an error.
This ensures that partially completed transfer lists are reliably
resolved and that a stuck ring is recovered promptly.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
Changes in V3:
None
Changes in V2:
Rename completing_xfer to final_xfer
Add missing reinit_completion()
drivers/i3c/master/mipi-i3c-hci/dma.c | 39 ++++++++++++++++++++++-----
1 file changed, 33 insertions(+), 6 deletions(-)
diff --git a/drivers/i3c/master/mipi-i3c-hci/dma.c b/drivers/i3c/master/mipi-i3c-hci/dma.c
index 376062c0fcbf..90fa621c9d56 100644
--- a/drivers/i3c/master/mipi-i3c-hci/dma.c
+++ b/drivers/i3c/master/mipi-i3c-hci/dma.c
@@ -696,11 +696,33 @@ static void hci_dma_recovery(struct i3c_hci *hci)
dev_err(&hci->master.dev, "Recovery %s\n", ret ? "failed!" : "done");
}
+static bool hci_dma_wait_for_noop(struct i3c_hci *hci, struct hci_xfer *xfer_list, int n,
+ int noop_pos)
+{
+ struct completion *done = xfer_list->final_xfer->completion;
+ bool timeout = !wait_for_completion_timeout(done, HZ);
+ u32 error = timeout;
+
+ for (int i = noop_pos; i < n && !error; i++)
+ error = RESP_STATUS(xfer_list[i].response);
+
+ if (!error)
+ return true;
+
+ if (timeout)
+ dev_err(&hci->master.dev, "NoOp timeout error\n");
+ else
+ dev_err(&hci->master.dev, "NoOp error %u\n", error);
+
+ return false;
+}
+
static bool hci_dma_dequeue_xfer(struct i3c_hci *hci,
struct hci_xfer *xfer_list, int n)
{
struct hci_rings_data *rings = hci->io_data;
struct hci_rh_data *rh = &rings->headers[xfer_list[0].ring_number];
+ int noop_pos = -1;
unsigned int i;
bool did_unqueue = false;
u32 ring_status;
@@ -708,7 +730,7 @@ static bool hci_dma_dequeue_xfer(struct i3c_hci *hci,
guard(mutex)(&hci->control_mutex);
spin_lock_irq(&hci->lock);
-
+restart:
ring_status = rh_reg_read(RING_STATUS);
if (ring_status & RING_STATUS_RUNNING) {
/*
@@ -765,11 +787,10 @@ static bool hci_dma_dequeue_xfer(struct i3c_hci *hci,
*ring_data++ = 0;
}
- /* disassociate this xfer struct */
- rh->src_xfers[idx] = NULL;
-
- /* and unmap it */
- hci_dma_unmap_xfer(hci, xfer, 1);
+ if (noop_pos < 0) {
+ reinit_completion(xfer->final_xfer->completion);
+ noop_pos = i;
+ }
did_unqueue = true;
}
@@ -801,6 +822,12 @@ static bool hci_dma_dequeue_xfer(struct i3c_hci *hci,
wait_for_completion_timeout(&rh->op_done, HZ);
+ if (did_unqueue && !hci_dma_wait_for_noop(hci, xfer_list, n, noop_pos)) {
+ spin_lock_irq(&hci->lock);
+ hci->recovery_needed = true;
+ goto restart;
+ }
+
return did_unqueue;
}
--
2.51.0
* [PATCH V3 14/16] i3c: mipi-i3c-hci: Base timeouts on actual transfer start time
From: Adrian Hunter @ 2026-05-04 11:33 UTC (permalink / raw)
To: alexandre.belloni; +Cc: Frank.Li, linux-i3c, linux-kernel
Transfer timeouts are currently measured from the point where a transfer
list is queued to the controller. This can cause transfers to time out
before they have actually started, if earlier queued transfers consume
the timeout interval.
Fix this by recording when a transfer reaches the head of the queue and
adjusting the timeout calculation to start from that point. The existing
low-overhead completion-based timeout mechanism is preserved, but care is
taken to ensure the transfer start time is consistently recorded for both
PIO and DMA paths.
This prevents premature timeouts while retaining efficient timeout
handling.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
Changes in V3:
None
Changes in V2:
Do not flag the next transfer as started when there is an error
which halts the controller
Instead flag it started at the end of hci_dma_dequeue_xfer()
Use hci_start_xfer() in pio.c
drivers/i3c/master/mipi-i3c-hci/core.c | 19 ++++++++++++++++++-
drivers/i3c/master/mipi-i3c-hci/dma.c | 19 ++++++++++++++++++-
drivers/i3c/master/mipi-i3c-hci/hci.h | 11 +++++++++++
drivers/i3c/master/mipi-i3c-hci/pio.c | 1 +
4 files changed, 48 insertions(+), 2 deletions(-)
diff --git a/drivers/i3c/master/mipi-i3c-hci/core.c b/drivers/i3c/master/mipi-i3c-hci/core.c
index 69dcf5dad3a5..2866d599612a 100644
--- a/drivers/i3c/master/mipi-i3c-hci/core.c
+++ b/drivers/i3c/master/mipi-i3c-hci/core.c
@@ -275,13 +275,30 @@ int i3c_hci_process_xfer(struct i3c_hci *hci, struct hci_xfer *xfer, int n)
{
struct completion *done = xfer[n - 1].completion;
unsigned long timeout = xfer[n - 1].timeout;
+ unsigned long remaining_timeout = timeout;
+ long time_taken;
+ bool started;
int ret;
+ xfer[0].started = false;
+
ret = hci->io->queue_xfer(hci, xfer, n);
if (ret)
return ret;
- if (!wait_for_completion_timeout(done, timeout)) {
+ while (!wait_for_completion_timeout(done, remaining_timeout)) {
+ scoped_guard(spinlock_irqsave, &hci->lock) {
+ started = xfer[0].started;
+ time_taken = jiffies - xfer[0].start_time;
+ }
+ /* Keep waiting if xfer has not started */
+ if (!started)
+ continue;
+ /* Recalculate timeout based on actual start time */
+ if (time_taken < timeout) {
+ remaining_timeout = timeout - time_taken;
+ continue;
+ }
if (hci->io->dequeue_xfer(hci, xfer, n)) {
dev_err(&hci->master.dev, "%s: timeout error\n", __func__);
return -ETIMEDOUT;
diff --git a/drivers/i3c/master/mipi-i3c-hci/dma.c b/drivers/i3c/master/mipi-i3c-hci/dma.c
index 90fa621c9d56..6440302c63ca 100644
--- a/drivers/i3c/master/mipi-i3c-hci/dma.c
+++ b/drivers/i3c/master/mipi-i3c-hci/dma.c
@@ -543,6 +543,9 @@ static int hci_dma_queue_xfer(struct i3c_hci *hci,
enqueue_ptr = (enqueue_ptr + 1) % rh->xfer_entries;
}
+ if (rh->xfer_space == rh->xfer_entries)
+ hci_start_xfer(xfer_list);
+
rh->xfer_space -= n;
op1_val &= ~RING_OP1_CR_ENQ_PTR;
@@ -558,6 +561,7 @@ static void hci_dma_xfer_done(struct i3c_hci *hci, struct hci_rh_data *rh)
u32 op1_val, op2_val, resp, *ring_resp;
unsigned int tid, done_ptr = rh->done_ptr;
unsigned int done_cnt = 0;
+ bool start_next = false;
struct hci_xfer *xfer;
for (;;) {
@@ -588,8 +592,14 @@ static void hci_dma_xfer_done(struct i3c_hci *hci, struct hci_rh_data *rh)
xfer->response = resp;
if (xfer == xfer->final_xfer || RESP_STATUS(resp))
complete(xfer->final_xfer->completion);
- if (RESP_STATUS(resp))
+ else
+ hci_start_xfer(xfer);
+ if (RESP_STATUS(resp)) {
hci->enqueue_blocked = true;
+ start_next = false;
+ } else {
+ start_next = true;
+ }
}
done_ptr = (done_ptr + 1) % rh->xfer_entries;
@@ -598,6 +608,10 @@ static void hci_dma_xfer_done(struct i3c_hci *hci, struct hci_rh_data *rh)
}
rh->xfer_space += done_cnt;
+ if (start_next && rh->xfer_space < rh->xfer_entries) {
+ xfer = rh->src_xfers[done_ptr];
+ hci_start_xfer(xfer);
+ }
op1_val = rh_reg_read(RING_OPERATION1);
op1_val &= ~RING_OP1_CR_SW_DEQ_PTR;
op1_val |= FIELD_PREP(RING_OP1_CR_SW_DEQ_PTR, done_ptr);
@@ -818,6 +832,9 @@ static bool hci_dma_dequeue_xfer(struct i3c_hci *hci,
hci_dma_unblock_enqueue(hci);
+ if (rh->xfer_space < rh->xfer_entries)
+ hci_start_xfer(rh->src_xfers[rh->done_ptr]);
+
spin_unlock_irq(&hci->lock);
wait_for_completion_timeout(&rh->op_done, HZ);
diff --git a/drivers/i3c/master/mipi-i3c-hci/hci.h b/drivers/i3c/master/mipi-i3c-hci/hci.h
index 4bf2c66c97b4..243d7a67f6f6 100644
--- a/drivers/i3c/master/mipi-i3c-hci/hci.h
+++ b/drivers/i3c/master/mipi-i3c-hci/hci.h
@@ -11,6 +11,7 @@
#define HCI_H
#include <linux/io.h>
+#include <linux/jiffies.h>
/* 32-bit word aware bit and mask macros */
#define W0_MASK(h, l) GENMASK((h) - 0, (l) - 0)
@@ -88,11 +89,13 @@ struct hci_xfer {
u32 cmd_desc[4];
u32 response;
bool rnw;
+ bool started;
void *data;
unsigned int data_len;
unsigned int cmd_tid;
struct completion *completion;
unsigned long timeout;
+ unsigned long start_time;
union {
struct {
/* PIO specific */
@@ -123,6 +126,14 @@ static inline void hci_free_xfer(struct hci_xfer *xfer, unsigned int n)
kfree(xfer);
}
+static inline void hci_start_xfer(struct hci_xfer *xfer)
+{
+ if (!xfer->started) {
+ xfer->started = true;
+ xfer->start_time = jiffies;
+ }
+}
+
/* This abstracts PIO vs DMA operations */
struct hci_io_ops {
bool (*irq_handler)(struct i3c_hci *hci);
diff --git a/drivers/i3c/master/mipi-i3c-hci/pio.c b/drivers/i3c/master/mipi-i3c-hci/pio.c
index 8f48a81e65ab..6b8cc5f2b4d2 100644
--- a/drivers/i3c/master/mipi-i3c-hci/pio.c
+++ b/drivers/i3c/master/mipi-i3c-hci/pio.c
@@ -605,6 +605,7 @@ static bool hci_pio_process_cmd(struct i3c_hci *hci, struct hci_pio_data *pio)
* Finally send the command.
*/
hci_pio_write_cmd(hci, pio->curr_xfer);
+ hci_start_xfer(pio->curr_xfer);
/*
* And move on.
*/
--
2.51.0
* [PATCH V3 15/16] i3c: mipi-i3c-hci: Consolidate DMA ring allocation
From: Adrian Hunter @ 2026-05-04 11:33 UTC (permalink / raw)
To: alexandre.belloni; +Cc: Frank.Li, linux-i3c, linux-kernel
dma_alloc_coherent() allocates memory in whole pages, which can waste
space when command and response queues are allocated separately.
Allocate the DMA command and response queues from a single coherent
allocation instead, while preserving the required 4-byte alignment.
This reduces memory overhead without changing behavior.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
Changes in V3:
None
Changes in V2:
Check for failed allocation before assignments to avoid doing
arithmetic with NULL pointers
drivers/i3c/master/mipi-i3c-hci/dma.c | 24 +++++++++++-------------
1 file changed, 11 insertions(+), 13 deletions(-)
diff --git a/drivers/i3c/master/mipi-i3c-hci/dma.c b/drivers/i3c/master/mipi-i3c-hci/dma.c
index 6440302c63ca..4029d4d9e784 100644
--- a/drivers/i3c/master/mipi-i3c-hci/dma.c
+++ b/drivers/i3c/master/mipi-i3c-hci/dma.c
@@ -186,14 +186,12 @@ static void hci_dma_free(void *data)
for (int i = 0; i < rings->total; i++) {
rh = &rings->headers[i];
- if (rh->xfer)
- dma_free_coherent(rings->sysdev,
- rh->xfer_struct_sz * rh->xfer_entries,
- rh->xfer, rh->xfer_dma);
- if (rh->resp)
- dma_free_coherent(rings->sysdev,
- rh->resp_struct_sz * rh->xfer_entries,
- rh->resp, rh->resp_dma);
+ if (rh->xfer) {
+ size_t sz = round_up(rh->xfer_struct_sz * rh->xfer_entries, 4);
+
+ sz += rh->resp_struct_sz * rh->xfer_entries;
+ dma_free_coherent(rings->sysdev, sz, rh->xfer, rh->xfer_dma);
+ }
kfree(rh->src_xfers);
if (rh->ibi_status)
dma_free_coherent(rings->sysdev,
@@ -359,18 +357,18 @@ static int hci_dma_init(struct i3c_hci *hci)
dev_dbg(&hci->master.dev,
"xfer_struct_sz = %d, resp_struct_sz = %d",
rh->xfer_struct_sz, rh->resp_struct_sz);
- xfers_sz = rh->xfer_struct_sz * rh->xfer_entries;
+ xfers_sz = round_up(rh->xfer_struct_sz * rh->xfer_entries, 4);
resps_sz = rh->resp_struct_sz * rh->xfer_entries;
- rh->xfer = dma_alloc_coherent(rings->sysdev, xfers_sz,
+ rh->xfer = dma_alloc_coherent(rings->sysdev, xfers_sz + resps_sz,
&rh->xfer_dma, GFP_KERNEL);
- rh->resp = dma_alloc_coherent(rings->sysdev, resps_sz,
- &rh->resp_dma, GFP_KERNEL);
rh->src_xfers =
kzalloc_objs(*rh->src_xfers, rh->xfer_entries);
ret = -ENOMEM;
- if (!rh->xfer || !rh->resp || !rh->src_xfers)
+ if (!rh->xfer || !rh->src_xfers)
goto err_out;
+ rh->resp = rh->xfer + xfers_sz;
+ rh->resp_dma = rh->xfer_dma + xfers_sz;
/* IBIs */
--
2.51.0
* [PATCH V3 16/16] i3c: mipi-i3c-hci: Increase DMA transfer ring size to maximum
From: Adrian Hunter @ 2026-05-04 11:33 UTC (permalink / raw)
To: alexandre.belloni; +Cc: Frank.Li, linux-i3c, linux-kernel
The DMA transfer ring is currently limited to 16 entries, despite the
MIPI I3C HCI supporting up to 32 devices. When the ring lacks space for a
new transfer list, the driver returns -EBUSY, which can be unexpected
for clients.
Increase the DMA transfer ring size to the maximum supported value of
255 entries. This effectively eliminates ring-space exhaustion in
practice and avoids the complexity of adding secondary queuing
mechanisms.
Even at the maximum size, the memory overhead remains small
(approximately 24 bytes per entry by default).
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
Changes in V2 and V3:
None
drivers/i3c/master/mipi-i3c-hci/dma.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/i3c/master/mipi-i3c-hci/dma.c b/drivers/i3c/master/mipi-i3c-hci/dma.c
index 4029d4d9e784..9549d98add4b 100644
--- a/drivers/i3c/master/mipi-i3c-hci/dma.c
+++ b/drivers/i3c/master/mipi-i3c-hci/dma.c
@@ -27,7 +27,7 @@
*/
#define XFER_RINGS 1 /* max: 8 */
-#define XFER_RING_ENTRIES 16 /* max: 255 */
+#define XFER_RING_ENTRIES 255 /* max: 255 */
#define IBI_RINGS 1 /* max: 8 */
#define IBI_STATUS_RING_ENTRIES 32 /* max: 255 */
--
2.51.0