* [PATCH for-next] RDMA/umem: Initialize iova for dmabuf umem
@ 2026-04-10 18:58 Dennis Dalessandro
From: Dennis Dalessandro @ 2026-04-10 18:58 UTC
To: jgg, leon; +Cc: Nick Child, linux-rdma
From: Nick Child <nchild@cornelisnetworks.com>
Ensure iova is initialized in ib_umem_dmabuf_get_with_dma_device(),
otherwise calculation errors can occur in ib_umem_num_dma_blocks() and
dependent functions like rdma_umem_for_each_dma_block() won't iterate
properly.
As of commit 4fbc3a52cd4d ("RDMA/core: Fix umem iterator when PAGE_SIZE is
greater then HCA pgsz") rdma_umem_for_each_dma_block() iterates at most
ib_umem_num_dma_blocks() times. ib_umem_num_dma_blocks() calculates the
number of blocks by extending iova + length to page boundaries.
Previously, a call to ib_umem_dmabuf_get_pinned_with_dma_device() followed
by rdma_umem_for_each_dma_block() would leave iova uninitialized, and
iteration would cover only a subset of the blocks if the memory did not
begin on a page boundary.
For example, if page size is 4096 and a dmabuf is registered with offset
512 and length 4096 then ib_umem_num_dma_blocks() would previously (and
incorrectly) return 1.
Defaulting the iova to the offset value is okay since it can be adjusted
later via ib_umem_find_best_pgsz().
Fixes: 4fbc3a52cd4d ("RDMA/core: Fix umem iterator when PAGE_SIZE is greater then HCA pgsz")
Signed-off-by: Nick Child <nchild@cornelisnetworks.com>
---
drivers/infiniband/core/umem_dmabuf.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/infiniband/core/umem_dmabuf.c b/drivers/infiniband/core/umem_dmabuf.c
index f5298c33e581..f6b7ae4ee2db 100644
--- a/drivers/infiniband/core/umem_dmabuf.c
+++ b/drivers/infiniband/core/umem_dmabuf.c
@@ -146,6 +146,11 @@ ib_umem_dmabuf_get_with_dma_device(struct ib_device *device,
umem->ibdev = device;
umem->length = size;
umem->address = offset;
+ /*
+ * Drivers should call ib_umem_find_best_pgsz() to set the iova
+ * correctly.
+ */
+ umem->iova = offset;
umem->writable = ib_access_writable(access);
umem->is_dmabuf = 1;
* Re: [PATCH for-next] RDMA/umem: Initialize iova for dmabuf umem
From: Jason Gunthorpe @ 2026-04-10 21:46 UTC
To: Dennis Dalessandro; +Cc: leon, Nick Child, linux-rdma
On Fri, Apr 10, 2026 at 02:58:25PM -0400, Dennis Dalessandro wrote:
> As of commit 4fbc3a52cd4d ("RDMA/core: Fix umem iterator when PAGE_SIZE is
> greater then HCA pgsz") rdma_umem_for_each_dma_block() iterates at most
> ib_umem_num_dma_blocks() times. ib_umem_num_dma_blocks() calculates the
> number of blocks by extending iova + length to page boundaries.
> Previously, a call to ib_umem_dmabuf_get_pinned_with_dma_device() followed
> by rdma_umem_for_each_dma_block() would leave iova uninitialized, and
> iteration would cover only a subset of the blocks if the memory did not
> begin on a page boundary.
Well that's illegal. Drivers must always call ib_umem_find_best_pgsz()
which always sets iova properly. Fix the driver that isn't doing this
sequence right.
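The driver-side sequence being described is roughly the following. This is
a non-compilable sketch, not a complete MR registration path: error
handling and MR setup are elided, and `hca_pgsz_bitmap` and `virt` stand
in for whatever supported-page-size bitmap and MR virtual address the
driver actually has.

```c
struct ib_umem_dmabuf *umem_dmabuf;
unsigned long pgsz;

umem_dmabuf = ib_umem_dmabuf_get_pinned_with_dma_device(device, dma_device,
							offset, size, fd,
							access);
if (IS_ERR(umem_dmabuf))
	return PTR_ERR(umem_dmabuf);

/* ib_umem_find_best_pgsz() picks a page size from the HCA's supported
 * set and sets umem->iova from the MR's virtual address; per Jason,
 * drivers must always make this call before iterating DMA blocks. */
pgsz = ib_umem_find_best_pgsz(&umem_dmabuf->umem, hca_pgsz_bitmap, virt);
if (!pgsz)
	goto err_release;
```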
Jason