From: Alex Bereza <alex@bereza.email>
To: Vinod Koul <vkoul@kernel.org>, Frank Li <Frank.Li@kernel.org>,
Michal Simek <michal.simek@amd.com>,
Geert Uytterhoeven <geert+renesas@glider.be>,
Ulf Hansson <ulf.hansson@linaro.org>,
Arnd Bergmann <arnd@arndb.de>, Tony Lindgren <tony@atomide.com>,
Kedareswara rao Appana <appana.durga.rao@xilinx.com>
Cc: dmaengine@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, Alex Bereza <alex@bereza.email>
Subject: [PATCH v3 1/2] dmaengine: xilinx_dma: Fix CPU stall in xilinx_dma_poll_timeout
Date: Wed, 01 Apr 2026 12:56:32 +0200
Message-ID: <20260401-fix-atomic-poll-timeout-regression-v3-1-85508f0aedde@bereza.email>
In-Reply-To: <20260401-fix-atomic-poll-timeout-regression-v3-0-85508f0aedde@bereza.email>

Currently, when xilinx_dma_poll_timeout() is called with delay_us=0 and
a condition that is never fulfilled, the CPU busy-waits for a prolonged
time and the timeout fires only after a massive delay, causing a CPU
stall.

This happens because poll_timeout_us_atomic hugely underestimates
wall-clock time. Commit 7349a69cf312 ("iopoll: Do not use timekeeping
in read_poll_timeout_atomic()") stopped using ktime_get at the cost of
underestimating wall-clock time, and the underestimation is extreme for
delay_us=0. Instead of timing out after approximately
XILINX_DMA_LOOP_COUNT microseconds, the timeout takes
XILINX_DMA_LOOP_COUNT * 1000 * (the per-iteration overhead of the for
loop in poll_timeout_us_atomic), which is in the range of several
minutes for XILINX_DMA_LOOP_COUNT=1000000.

Fix this by passing a non-zero delay_us. Use delay_us=10 to keep the
delay in the hot path of starting DMA transfers minimal while still
avoiding CPU stalls on unexpected hardware failures.
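
To illustrate the failure mode, here is a stand-alone model of the
iteration-counting behavior described above. This is a simplified
sketch, not the actual include/linux/iopoll.h code; it assumes, per the
description above, that each zero-delay pass is charged 1 ns, and the
~420 ns per-read cost is an assumed figure for an MMIO read over AXI,
not a measurement:

/*
 * Simplified model of the zero-delay poll budget -- NOT the actual
 * include/linux/iopoll.h implementation. With delay_us == 0 the loop
 * is bounded by iteration count alone, effectively charging 1 ns per
 * pass, while the real cost of a pass is dominated by the MMIO read.
 */
#include <stdio.h>

int main(void)
{
        const long long timeout_us = 1000000;  /* XILINX_DMA_LOOP_COUNT */
        const long long read_cost_ns = 420;    /* assumed AXI MMIO read cost */

        long long iterations = timeout_us * 1000;       /* 1e9 passes */
        long long wall_ns = iterations * read_cost_ns;  /* real elapsed time */

        printf("budgeted timeout: %lld s\n", timeout_us / 1000000);
        printf("actual wall time: ~%lld s (~%lld min)\n",
               wall_ns / 1000000000, wall_ns / 60000000000LL);
        return 0;
}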

A one-off measurement with delay_us=0 showed the CPU busy-waiting for
around 7 minutes in the timeout case. After applying this patch, with
delay_us=10, the measured timeout was 1053428 microseconds, which is
roughly the expected 1000000 microseconds specified by
XILINX_DMA_LOOP_COUNT.
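
As a back-of-envelope sanity check on the fixed path (the per-pass
overhead below is inferred from the measurement, not measured
separately):

  iterations = timeout_us / delay_us = 1000000 / 10 = 100000
  overhead   = (1053428 - 1000000) us / 100000 passes ~= 0.53 us/pass

which is consistent with a few hundred nanoseconds per AXI register
read on top of each udelay(10).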

Add a constant, XILINX_DMA_POLL_DELAY_US, for the delay_us value.

Fixes: 9495f2648287 ("dmaengine: xilinx_vdma: Use readl_poll_timeout instead of do while loop's")
Fixes: 7349a69cf312 ("iopoll: Do not use timekeeping in read_poll_timeout_atomic()")
Signed-off-by: Alex Bereza <alex@bereza.email>
---
drivers/dma/xilinx/xilinx_dma.c | 14 ++++++++++----
1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
index 02a05f215614..345a738bab2c 100644
--- a/drivers/dma/xilinx/xilinx_dma.c
+++ b/drivers/dma/xilinx/xilinx_dma.c
@@ -167,6 +167,8 @@
/* Delay loop counter to prevent hardware failure */
#define XILINX_DMA_LOOP_COUNT 1000000
+/* Delay between polls (avoid a delay of 0 to prevent CPU stalls) */
+#define XILINX_DMA_POLL_DELAY_US 10
/* AXI DMA Specific Registers/Offsets */
#define XILINX_DMA_REG_SRCDSTADDR 0x18
@@ -1332,7 +1334,8 @@ static int xilinx_dma_stop_transfer(struct xilinx_dma_chan *chan)
/* Wait for the hardware to halt */
return xilinx_dma_poll_timeout(chan, XILINX_DMA_REG_DMASR, val,
- val & XILINX_DMA_DMASR_HALTED, 0,
+ val & XILINX_DMA_DMASR_HALTED,
+ XILINX_DMA_POLL_DELAY_US,
XILINX_DMA_LOOP_COUNT);
}
@@ -1347,7 +1350,8 @@ static int xilinx_cdma_stop_transfer(struct xilinx_dma_chan *chan)
u32 val;
return xilinx_dma_poll_timeout(chan, XILINX_DMA_REG_DMASR, val,
- val & XILINX_DMA_DMASR_IDLE, 0,
+ val & XILINX_DMA_DMASR_IDLE,
+ XILINX_DMA_POLL_DELAY_US,
XILINX_DMA_LOOP_COUNT);
}
@@ -1364,7 +1368,8 @@ static void xilinx_dma_start(struct xilinx_dma_chan *chan)
/* Wait for the hardware to start */
err = xilinx_dma_poll_timeout(chan, XILINX_DMA_REG_DMASR, val,
- !(val & XILINX_DMA_DMASR_HALTED), 0,
+ !(val & XILINX_DMA_DMASR_HALTED),
+ XILINX_DMA_POLL_DELAY_US,
XILINX_DMA_LOOP_COUNT);
if (err) {
@@ -1780,7 +1785,8 @@ static int xilinx_dma_reset(struct xilinx_dma_chan *chan)
/* Wait for the hardware to finish reset */
err = xilinx_dma_poll_timeout(chan, XILINX_DMA_REG_DMACR, tmp,
- !(tmp & XILINX_DMA_DMACR_RESET), 0,
+ !(tmp & XILINX_DMA_DMACR_RESET),
+ XILINX_DMA_POLL_DELAY_US,
XILINX_DMA_LOOP_COUNT);
if (err) {
--
2.53.0