From mboxrd@z Thu Jan 1 00:00:00 1970
From: Adrian Hunter
To: alexandre.belloni@bootlin.com
Cc: Frank.Li@nxp.com, linux-i3c@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH V2 03/16] i3c: mipi-i3c-hci: Prevent DMA enqueue while ring is aborting or in error
Date: Tue, 21 Apr 2026 20:54:22 +0300
Message-ID: <20260421175435.122094-4-adrian.hunter@intel.com>
X-Mailer: git-send-email 2.51.0
In-Reply-To: <20260421175435.122094-1-adrian.hunter@intel.com>
References: <20260421175435.122094-1-adrian.hunter@intel.com>
MIME-Version: 1.0
Organization: Intel Finland Oy, Registered Address: c/o Alberga Business Park, 6 krs, Bertel Jungin Aukio 5, 02600 Espoo, Business Identity Code: 0357606 - 4, Domiciled in Helsinki
Content-Transfer-Encoding: 8bit

Block the DMA enqueue path while a Ring abort is in progress or after an
error condition has been detected.

Previously, new transfers could be enqueued while the DMA Ring was being
aborted or while error handling was underway. This allowed enqueue and
error-recovery paths to run concurrently, potentially interfering with
each other and corrupting Ring state.

Introduce explicit enqueue blocking and a wait queue to serialize
access: enqueue operations now wait until abort or error handling has
completed before proceeding. Enqueue is unblocked once the Ring is
safely restarted.

Note, there is only 1 ring bundle configured, and a transfer error
causes the controller to halt ring (bundle) operation, so there is only
ever 1 outstanding error at a time. Furthermore, a later patch ensures
that only the currently active transfer list can time out. Consequently,
the DMA queue will not be unblocked while there are outstanding transfer
errors or timeouts.

Signed-off-by: Adrian Hunter
---

Changes in V2:

      Improve commit message

 drivers/i3c/master/mipi-i3c-hci/core.c |  1 +
 drivers/i3c/master/mipi-i3c-hci/dma.c  | 25 +++++++++++++++++++++++--
 drivers/i3c/master/mipi-i3c-hci/hci.h  |  2 ++
 3 files changed, 26 insertions(+), 2 deletions(-)

diff --git a/drivers/i3c/master/mipi-i3c-hci/core.c b/drivers/i3c/master/mipi-i3c-hci/core.c
index afb0764b5e1f..44617eb3a3f1 100644
--- a/drivers/i3c/master/mipi-i3c-hci/core.c
+++ b/drivers/i3c/master/mipi-i3c-hci/core.c
@@ -973,6 +973,7 @@ static int i3c_hci_probe(struct platform_device *pdev)
 
 	spin_lock_init(&hci->lock);
 	mutex_init(&hci->control_mutex);
+	init_waitqueue_head(&hci->enqueue_wait_queue);
 
 	/*
 	 * Multi-bus instances share the same MMIO address range, but not
diff --git a/drivers/i3c/master/mipi-i3c-hci/dma.c b/drivers/i3c/master/mipi-i3c-hci/dma.c
index 4cd32e3afa7b..314635e6e190 100644
--- a/drivers/i3c/master/mipi-i3c-hci/dma.c
+++ b/drivers/i3c/master/mipi-i3c-hci/dma.c
@@ -484,6 +484,12 @@ static int hci_dma_queue_xfer(struct i3c_hci *hci,
 
 	spin_lock_irq(&hci->lock);
 
+	while (unlikely(hci->enqueue_blocked)) {
+		spin_unlock_irq(&hci->lock);
+		wait_event(hci->enqueue_wait_queue, !READ_ONCE(hci->enqueue_blocked));
+		spin_lock_irq(&hci->lock);
+	}
+
 	if (n > rh->xfer_space) {
 		spin_unlock_irq(&hci->lock);
 		hci_dma_unmap_xfer(hci, xfer_list, n);
@@ -539,6 +545,14 @@ static int hci_dma_queue_xfer(struct i3c_hci *hci,
 	return 0;
 }
 
+static void hci_dma_unblock_enqueue(struct i3c_hci *hci)
+{
+	if (hci->enqueue_blocked) {
+		hci->enqueue_blocked = false;
+		wake_up_all(&hci->enqueue_wait_queue);
+	}
+}
+
 static bool hci_dma_dequeue_xfer(struct i3c_hci *hci,
 				 struct hci_xfer *xfer_list, int n)
 {
@@ -550,12 +564,17 @@ static bool hci_dma_dequeue_xfer(struct i3c_hci *hci,
 
 	guard(mutex)(&hci->control_mutex);
 
+	spin_lock_irq(&hci->lock);
+
 	ring_status = rh_reg_read(RING_STATUS);
 	if (ring_status & RING_STATUS_RUNNING) {
+		hci->enqueue_blocked = true;
+		spin_unlock_irq(&hci->lock);
 		/* stop the ring */
 		reinit_completion(&rh->op_done);
 		rh_reg_write(RING_CONTROL, rh_reg_read(RING_CONTROL) |
					   RING_CTRL_ABORT);
 		wait_for_completion_timeout(&rh->op_done, HZ);
+		spin_lock_irq(&hci->lock);
 		ring_status = rh_reg_read(RING_STATUS);
 		if (ring_status & RING_STATUS_RUNNING) {
 			/*
@@ -567,8 +586,6 @@ static bool hci_dma_dequeue_xfer(struct i3c_hci *hci,
 		}
 	}
 
-	spin_lock_irq(&hci->lock);
-
 	for (i = 0; i < n; i++) {
 		struct hci_xfer *xfer = xfer_list + i;
 		int idx = xfer->ring_entry;
@@ -604,6 +621,8 @@ static bool hci_dma_dequeue_xfer(struct i3c_hci *hci,
 	rh_reg_write(RING_CONTROL, RING_CTRL_ENABLE);
 	rh_reg_write(RING_CONTROL, RING_CTRL_ENABLE | RING_CTRL_RUN_STOP);
 
+	hci_dma_unblock_enqueue(hci);
+
 	spin_unlock_irq(&hci->lock);
 
 	return did_unqueue;
@@ -647,6 +666,8 @@ static void hci_dma_xfer_done(struct i3c_hci *hci, struct hci_rh_data *rh)
 		}
 		if (xfer->completion)
 			complete(xfer->completion);
+		if (RESP_STATUS(resp))
+			hci->enqueue_blocked = true;
 	}
 
 	done_ptr = (done_ptr + 1) % rh->xfer_entries;
diff --git a/drivers/i3c/master/mipi-i3c-hci/hci.h b/drivers/i3c/master/mipi-i3c-hci/hci.h
index f17f43494c1b..d630400ec945 100644
--- a/drivers/i3c/master/mipi-i3c-hci/hci.h
+++ b/drivers/i3c/master/mipi-i3c-hci/hci.h
@@ -54,6 +54,8 @@ struct i3c_hci {
 	struct mutex control_mutex;
 	atomic_t next_cmd_tid;
 	bool irq_inactive;
+	bool enqueue_blocked;
+	wait_queue_head_t enqueue_wait_queue;
 	u32 caps;
 	unsigned int quirks;
 	unsigned int DAT_entries;
-- 
2.51.0