* Re: [PATCH 6.12.y] spi: tegra210-quad: Protect curr_xfer check in IRQ handler
2026-03-24 6:08 [PATCH 6.12.y] spi: tegra210-quad: Protect curr_xfer check in IRQ handler Jianqiang kang
@ 2026-03-25 11:12 ` Breno Leitao
2026-03-31 10:55 ` Patch "spi: tegra210-quad: Protect curr_xfer check in IRQ handler" has been added to the 6.12-stable tree gregkh
1 sibling, 0 replies; 3+ messages in thread
From: Breno Leitao @ 2026-03-25 11:12 UTC (permalink / raw)
To: Jianqiang kang
Cc: gregkh, stable, patches, linux-kernel, thierry.reding, jonathanh,
skomatineni, ldewangan, treding, broonie, va, linux-tegra,
linux-spi
On Tue, Mar 24, 2026 at 02:08:32PM +0800, Jianqiang kang wrote:
> From: Breno Leitao <leitao@debian.org>
>
> [ Upstream commit edf9088b6e1d6d88982db7eb5e736a0e4fbcc09e ]
>
> Now that all other accesses to curr_xfer are done under the lock,
> protect the curr_xfer NULL check in tegra_qspi_isr_thread() with the
> spinlock. Without this protection, the following race can occur:
>
> CPU0 (ISR thread)                CPU1 (timeout path)
> ----------------                 -------------------
> if (!tqspi->curr_xfer)
>   // sees non-NULL
>                                  spin_lock()
>                                  tqspi->curr_xfer = NULL
>                                  spin_unlock()
> handle_*_xfer()
>   spin_lock()
>   t = tqspi->curr_xfer  // NULL!
>   ... t->len ...        // NULL dereference!
>
> With this patch, all curr_xfer accesses are now properly synchronized.
>
> Although all accesses to curr_xfer are done under the lock, in
> tegra_qspi_isr_thread() it checks for NULL, releases the lock and
> reacquires it later in handle_cpu_based_xfer()/handle_dma_based_xfer().
> There is a potential for an update in between, which could cause a NULL
> pointer dereference.
>
> To handle this, add a NULL check inside the handlers after acquiring
> the lock. This ensures that if the timeout path has already cleared
> curr_xfer, the handler will safely return without dereferencing the
> NULL pointer.
>
> Fixes: b4e002d8a7ce ("spi: tegra210-quad: Fix timeout handling")
> Signed-off-by: Breno Leitao <leitao@debian.org>
> Tested-by: Jon Hunter <jonathanh@nvidia.com>
> Acked-by: Jon Hunter <jonathanh@nvidia.com>
> Acked-by: Thierry Reding <treding@nvidia.com>
> Link: https://patch.msgid.link/20260126-tegra_xfer-v2-6-6d2115e4f387@debian.org
> Signed-off-by: Mark Brown <broonie@kernel.org>
> [ Minor conflict resolved. ]
> Signed-off-by: Jianqiang kang <jianqkang@sina.cn>
Acked-by: Breno Leitao <leitao@debian.org>
^ permalink raw reply	[flat|nested] 3+ messages in thread

* Patch "spi: tegra210-quad: Protect curr_xfer check in IRQ handler" has been added to the 6.12-stable tree
2026-03-24 6:08 [PATCH 6.12.y] spi: tegra210-quad: Protect curr_xfer check in IRQ handler Jianqiang kang
2026-03-25 11:12 ` Breno Leitao
@ 2026-03-31 10:55 ` gregkh
1 sibling, 0 replies; 3+ messages in thread
From: gregkh @ 2026-03-31 10:55 UTC (permalink / raw)
To: broonie, gregkh, jianqkang, jonathanh, ldewangan, leitao, patches,
skomatineni, thierry.reding, treding, va
Cc: stable-commits
This is a note to let you know that I've just added the patch titled
spi: tegra210-quad: Protect curr_xfer check in IRQ handler
to the 6.12-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
The filename of the patch is:
spi-tegra210-quad-protect-curr_xfer-check-in-irq-handler.patch
and it can be found in the queue-6.12 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From jianqkang@sina.cn Tue Mar 24 07:08:53 2026
From: Jianqiang kang <jianqkang@sina.cn>
Date: Tue, 24 Mar 2026 14:08:32 +0800
Subject: spi: tegra210-quad: Protect curr_xfer check in IRQ handler
To: gregkh@linuxfoundation.org, stable@vger.kernel.org, leitao@debian.org
Cc: patches@lists.linux.dev, linux-kernel@vger.kernel.org, thierry.reding@gmail.com, jonathanh@nvidia.com, skomatineni@nvidia.com, ldewangan@nvidia.com, treding@nvidia.com, broonie@kernel.org, va@nvidia.com, linux-tegra@vger.kernel.org, linux-spi@vger.kernel.org
Message-ID: <20260324060832.724228-1-jianqkang@sina.cn>
From: Breno Leitao <leitao@debian.org>
[ Upstream commit edf9088b6e1d6d88982db7eb5e736a0e4fbcc09e ]
Now that all other accesses to curr_xfer are done under the lock,
protect the curr_xfer NULL check in tegra_qspi_isr_thread() with the
spinlock. Without this protection, the following race can occur:
CPU0 (ISR thread)                CPU1 (timeout path)
----------------                 -------------------
if (!tqspi->curr_xfer)
  // sees non-NULL
                                 spin_lock()
                                 tqspi->curr_xfer = NULL
                                 spin_unlock()
handle_*_xfer()
  spin_lock()
  t = tqspi->curr_xfer  // NULL!
  ... t->len ...        // NULL dereference!
With this patch, all curr_xfer accesses are now properly synchronized.
Although all accesses to curr_xfer are done under the lock, in
tegra_qspi_isr_thread() it checks for NULL, releases the lock and
reacquires it later in handle_cpu_based_xfer()/handle_dma_based_xfer().
There is a potential for an update in between, which could cause a NULL
pointer dereference.
To handle this, add a NULL check inside the handlers after acquiring
the lock. This ensures that if the timeout path has already cleared
curr_xfer, the handler will safely return without dereferencing the
NULL pointer.
Fixes: b4e002d8a7ce ("spi: tegra210-quad: Fix timeout handling")
Signed-off-by: Breno Leitao <leitao@debian.org>
Tested-by: Jon Hunter <jonathanh@nvidia.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Link: https://patch.msgid.link/20260126-tegra_xfer-v2-6-6d2115e4f387@debian.org
Signed-off-by: Mark Brown <broonie@kernel.org>
[ Minor conflict resolved. ]
Signed-off-by: Jianqiang kang <jianqkang@sina.cn>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
drivers/spi/spi-tegra210-quad.c | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
--- a/drivers/spi/spi-tegra210-quad.c
+++ b/drivers/spi/spi-tegra210-quad.c
@@ -1351,6 +1351,11 @@ static irqreturn_t handle_cpu_based_xfer
 
 	spin_lock_irqsave(&tqspi->lock, flags);
 	t = tqspi->curr_xfer;
+	if (!t) {
+		spin_unlock_irqrestore(&tqspi->lock, flags);
+		return IRQ_HANDLED;
+	}
+
 	if (tqspi->tx_status || tqspi->rx_status) {
 		tegra_qspi_handle_error(tqspi);
 		complete(&tqspi->xfer_completion);
@@ -1419,6 +1424,11 @@ static irqreturn_t handle_dma_based_xfer
 
 	spin_lock_irqsave(&tqspi->lock, flags);
 	t = tqspi->curr_xfer;
+	if (!t) {
+		spin_unlock_irqrestore(&tqspi->lock, flags);
+		return IRQ_HANDLED;
+	}
+
 	if (err) {
 		tegra_qspi_dma_unmap_xfer(tqspi, t);
 		tegra_qspi_handle_error(tqspi);
@@ -1457,6 +1467,7 @@ exit:
 static irqreturn_t tegra_qspi_isr_thread(int irq, void *context_data)
 {
 	struct tegra_qspi *tqspi = context_data;
+	unsigned long flags;
 	u32 status;
 
 	/*
@@ -1474,7 +1485,9 @@ static irqreturn_t tegra_qspi_isr_thread
 	 * If no transfer is in progress, check if this was a real interrupt
 	 * that the timeout handler already processed, or a spurious one.
 	 */
+	spin_lock_irqsave(&tqspi->lock, flags);
 	if (!tqspi->curr_xfer) {
+		spin_unlock_irqrestore(&tqspi->lock, flags);
 		/* Spurious interrupt - transfer not ready */
 		if (!(status & QSPI_RDY))
 			return IRQ_NONE;
@@ -1491,7 +1504,14 @@ static irqreturn_t tegra_qspi_isr_thread
 	tqspi->rx_status = tqspi->status_reg & (QSPI_RX_FIFO_OVF | QSPI_RX_FIFO_UNF);
 	tegra_qspi_mask_clear_irq(tqspi);
+	spin_unlock_irqrestore(&tqspi->lock, flags);
+	/*
+	 * Lock is released here but handlers safely re-check curr_xfer under
+	 * lock before dereferencing.
+	 * DMA handler also needs to sleep in wait_for_completion_*(), which
+	 * cannot be done while holding spinlock.
+	 */
 	if (!tqspi->is_curr_dma_xfer)
 		return handle_cpu_based_xfer(tqspi);
Patches currently in stable-queue which might be from jianqkang@sina.cn are
queue-6.12/spi-tegra210-quad-protect-curr_xfer-check-in-irq-handler.patch
^ permalink raw reply [flat|nested] 3+ messages in thread