* [PATCH] blk-mq-dma: always initialize dma state
@ 2025-12-10 6:49 Keith Busch
2025-12-10 6:54 ` Christoph Hellwig
2025-12-10 10:38 ` Sebastian Ott
0 siblings, 2 replies; 5+ messages in thread
From: Keith Busch @ 2025-12-10 6:49 UTC (permalink / raw)
To: linux-block, axboe, hch; +Cc: Keith Busch, Sebastian Ott
From: Keith Busch <kbusch@kernel.org>
Ensure the dma state is initialized when we're not using the contiguous
iova, otherwise the caller may use a stale state left over from a
previous request that used the coalesced iova allocation.
Fixes: 2f6b2565d43cdb5 ("block: accumulate memory segment gaps per bio")
Reported-by: Sebastian Ott <sebott@redhat.com>
Signed-off-by: Keith Busch <kbusch@kernel.org>
---
block/blk-mq-dma.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/block/blk-mq-dma.c b/block/blk-mq-dma.c
index e9108ccaf4b06..4ca768e0cc7eb 100644
--- a/block/blk-mq-dma.c
+++ b/block/blk-mq-dma.c
@@ -196,8 +196,9 @@ static bool blk_dma_map_iter_start(struct request *req, struct device *dma_dev,
return false;
}
- if (blk_can_dma_map_iova(req, dma_dev) &&
- dma_iova_try_alloc(dma_dev, state, vec.paddr, total_len))
+ if (!blk_can_dma_map_iova(req, dma_dev))
+ memset(state, 0, sizeof(*state));
+ else if (dma_iova_try_alloc(dma_dev, state, vec.paddr, total_len))
return blk_rq_dma_map_iova(req, dma_dev, state, iter, &vec);
return blk_dma_map_direct(req, dma_dev, iter, &vec);
}
--
2.47.3
* Re: [PATCH] blk-mq-dma: always initialize dma state
2025-12-10 6:49 [PATCH] blk-mq-dma: always initialize dma state Keith Busch
@ 2025-12-10 6:54 ` Christoph Hellwig
2025-12-10 7:06 ` Keith Busch
2025-12-10 10:38 ` Sebastian Ott
1 sibling, 1 reply; 5+ messages in thread
From: Christoph Hellwig @ 2025-12-10 6:54 UTC (permalink / raw)
To: Keith Busch; +Cc: linux-block, axboe, hch, Keith Busch, Sebastian Ott
On Tue, Dec 09, 2025 at 10:49:15PM -0800, Keith Busch wrote:
> - if (blk_can_dma_map_iova(req, dma_dev) &&
> - dma_iova_try_alloc(dma_dev, state, vec.paddr, total_len))
> + if (!blk_can_dma_map_iova(req, dma_dev))
> + memset(state, 0, sizeof(*state));
> + else if (dma_iova_try_alloc(dma_dev, state, vec.paddr, total_len))
> return blk_rq_dma_map_iova(req, dma_dev, state, iter, &vec);
What about just doing the memset unconditionally? It's just two
64-bit fields so no real overhead, and it gives us a clean slate that
avoids introducing other bugs later on.
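Concretely, that would look something like this against the hunk above
(just a sketch, untested):

	memset(state, 0, sizeof(*state));
	if (blk_can_dma_map_iova(req, dma_dev) &&
	    dma_iova_try_alloc(dma_dev, state, vec.paddr, total_len))
		return blk_rq_dma_map_iova(req, dma_dev, state, iter, &vec);
	return blk_dma_map_direct(req, dma_dev, iter, &vec);

i.e. the original condition stays as-is and the clean slate is
established up front regardless of which mapping path is taken.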
* Re: [PATCH] blk-mq-dma: always initialize dma state
2025-12-10 6:54 ` Christoph Hellwig
@ 2025-12-10 7:06 ` Keith Busch
2025-12-10 8:25 ` Christoph Hellwig
0 siblings, 1 reply; 5+ messages in thread
From: Keith Busch @ 2025-12-10 7:06 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Keith Busch, linux-block, axboe, Sebastian Ott
On Wed, Dec 10, 2025 at 07:54:07AM +0100, Christoph Hellwig wrote:
> On Tue, Dec 09, 2025 at 10:49:15PM -0800, Keith Busch wrote:
> > - if (blk_can_dma_map_iova(req, dma_dev) &&
> > - dma_iova_try_alloc(dma_dev, state, vec.paddr, total_len))
> > + if (!blk_can_dma_map_iova(req, dma_dev))
> > + memset(state, 0, sizeof(*state));
> > + else if (dma_iova_try_alloc(dma_dev, state, vec.paddr, total_len))
> > return blk_rq_dma_map_iova(req, dma_dev, state, iter, &vec);
>
> What about just doing the memset unconditionally? It's just two
> 64-bit fields so no real overhead, and it gives us a clean slate that
> avoids introducing other bugs later on.
The reason I didn't do that is that dma_iova_try_alloc() also does it,
so we'd end up with two repeated and unnecessary memsets on the same
address. That feels undesirable no matter how small the struct is. I
could remove the memset in dma_iova_try_alloc instead and make the
caller initialize it. There are only two existing users of the API, so
it's not a big deal to change it.
* Re: [PATCH] blk-mq-dma: always initialize dma state
2025-12-10 7:06 ` Keith Busch
@ 2025-12-10 8:25 ` Christoph Hellwig
0 siblings, 0 replies; 5+ messages in thread
From: Christoph Hellwig @ 2025-12-10 8:25 UTC (permalink / raw)
To: Keith Busch
Cc: Christoph Hellwig, Keith Busch, linux-block, axboe, Sebastian Ott,
Leon Romanovsky
On Wed, Dec 10, 2025 at 04:06:57PM +0900, Keith Busch wrote:
> On Wed, Dec 10, 2025 at 07:54:07AM +0100, Christoph Hellwig wrote:
> > On Tue, Dec 09, 2025 at 10:49:15PM -0800, Keith Busch wrote:
> > > - if (blk_can_dma_map_iova(req, dma_dev) &&
> > > - dma_iova_try_alloc(dma_dev, state, vec.paddr, total_len))
> > > + if (!blk_can_dma_map_iova(req, dma_dev))
> > > + memset(state, 0, sizeof(*state));
> > > + else if (dma_iova_try_alloc(dma_dev, state, vec.paddr, total_len))
> > > return blk_rq_dma_map_iova(req, dma_dev, state, iter, &vec);
> >
> > What about just doing the memset unconditionally? It's just two
> > 64-bit fields so no real overhead, and it gives us a clean slate that
> > avoids introducing other bugs later on.
>
> The reason I didn't do that is that dma_iova_try_alloc() also does it,
> so we'd end up with two repeated and unnecessary memsets on the same
> address. That feels undesirable no matter how small the struct is. I
> could remove the memset in dma_iova_try_alloc instead and make the
> caller initialize it. There are only two existing users of the API, so
> it's not a big deal to change it.
If we move it out we'll need a separate dma_iova_init helper or similar
(which I think Leon had in an earlier version before I optimized it
away). Personally I'd prefer to just zero twice, as it's really cheap.
* Re: [PATCH] blk-mq-dma: always initialize dma state
2025-12-10 6:49 [PATCH] blk-mq-dma: always initialize dma state Keith Busch
2025-12-10 6:54 ` Christoph Hellwig
@ 2025-12-10 10:38 ` Sebastian Ott
1 sibling, 0 replies; 5+ messages in thread
From: Sebastian Ott @ 2025-12-10 10:38 UTC (permalink / raw)
To: Keith Busch; +Cc: linux-block, axboe, hch, Keith Busch
On Tue, 9 Dec 2025, Keith Busch wrote:
> From: Keith Busch <kbusch@kernel.org>
>
> Ensure the dma state is initialized when we're not using the contiguous
> iova, otherwise the caller may use a stale state left over from a
> previous request that used the coalesced iova allocation.
>
> Fixes: 2f6b2565d43cdb5 ("block: accumulate memory segment gaps per bio")
> Reported-by: Sebastian Ott <sebott@redhat.com>
> Signed-off-by: Keith Busch <kbusch@kernel.org>
> ---
> block/blk-mq-dma.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/block/blk-mq-dma.c b/block/blk-mq-dma.c
> index e9108ccaf4b06..4ca768e0cc7eb 100644
> --- a/block/blk-mq-dma.c
> +++ b/block/blk-mq-dma.c
> @@ -196,8 +196,9 @@ static bool blk_dma_map_iter_start(struct request *req, struct device *dma_dev,
> return false;
> }
>
> - if (blk_can_dma_map_iova(req, dma_dev) &&
> - dma_iova_try_alloc(dma_dev, state, vec.paddr, total_len))
> + if (!blk_can_dma_map_iova(req, dma_dev))
> + memset(state, 0, sizeof(*state));
> + else if (dma_iova_try_alloc(dma_dev, state, vec.paddr, total_len))
> return blk_rq_dma_map_iova(req, dma_dev, state, iter, &vec);
> return blk_dma_map_direct(req, dma_dev, iter, &vec);
> }
> --
Yes, that did the trick!
Tested on top of:
cc25df3e2e22 Merge tag 'for-6.19/block-20251201' of git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux
and on top of the latest upstream:
0048fbb4011e Merge tag 'locking-futex-2025-12-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Tested-by: Sebastian Ott <sebott@redhat.com>
Thanks,
Sebastian