* [PATCH rdma-next 0/2] RDMA: detect and handle CoCo DMA bounce buffering @ 2026-05-05 6:11 Jiri Pirko 2026-05-05 6:11 ` [PATCH rdma-next 1/2] RDMA/uverbs: expose CoCo DMA bounce requirement to userspace Jiri Pirko 2026-05-05 6:11 ` [PATCH rdma-next 2/2] RDMA/umem: block plain userspace memory registration under CoCo bounce Jiri Pirko 0 siblings, 2 replies; 13+ messages in thread From: Jiri Pirko @ 2026-05-05 6:11 UTC (permalink / raw) To: linux-rdma Cc: jgg, leon, edwards, kees, parav, mbloch, yishaih, lirongqing, huangjunxian6, liuy22 From: Jiri Pirko <jiri@nvidia.com> In Confidential Computing (CoCo) guests, the DMA mapping layer redirects all device DMA through swiotlb bounce buffers to keep guest memory encrypted. This is transparent for regular devices because the CPU copies data between the bounce buffer and the real buffer on every DMA map/unmap cycle. RDMA breaks this model. Once a memory region is registered, the device accesses the underlying pages directly for an extended period without CPU involvement. The swiotlb layer never gets a chance to synchronize, so the device operates on bounce buffer memory while the application works with its own pages - the two never see each other's updates. This series adds detection and handling of this condition. A new IB_UVERBS_DEVICE_CC_DMA_BOUNCE flag is exposed in device_cap_flags_ex so userspace libraries can detect the situation and switch to dmabuf-based memory registration using "system_cc_shared" heap where available. Plain __ib_umem_get_va() is made to fail early with -EOPNOTSUPP to prevent silent misfunction. --- based on top of: https://lore.kernel.org/all/20260504135731.2345383-1-jiri@resnulli.us/ Jiri Pirko (2): RDMA/uverbs: expose CoCo DMA bounce requirement to userspace RDMA/umem: block plain userspace memory registration under CoCo bounce drivers/infiniband/core/device.c | 6 ++++++ drivers/infiniband/core/umem.c | 3 +++ drivers/infiniband/core/uverbs_cmd.c | 2 ++ include/rdma/ib_verbs.h | 3 +++ include/uapi/rdma/ib_user_verbs.h | 2 ++ 5 files changed, 16 insertions(+) -- 2.53.0 ^ permalink raw reply [flat|nested] 13+ messages in thread
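To make the userspace side concrete, here is a minimal libibverbs sketch of the detection this series enables. The flag value is copied from the uapi hunk in patch 1 (installed headers will not carry it yet), and the dmabuf fallback itself is left out:

#include <stdint.h>
#include <infiniband/verbs.h>

/* Not in installed headers yet; value taken from this series' uapi change. */
#ifndef IB_UVERBS_DEVICE_CC_DMA_BOUNCE
#define IB_UVERBS_DEVICE_CC_DMA_BOUNCE (1ULL << 41)
#endif

static int needs_dmabuf_mr(struct ibv_context *ctx)
{
	struct ibv_device_attr_ex attr = {};

	/* Old kernels simply never report the flag. */
	if (ibv_query_device_ex(ctx, NULL, &attr))
		return 0;

	return !!(attr.device_cap_flags_ex & IB_UVERBS_DEVICE_CC_DMA_BOUNCE);
}

A library that sees the flag set would route registrations through a dma-buf allocated from a shared-capable heap (sketched later in the thread) instead of plain ibv_reg_mr().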
* [PATCH rdma-next 1/2] RDMA/uverbs: expose CoCo DMA bounce requirement to userspace 2026-05-05 6:11 [PATCH rdma-next 0/2] RDMA: detect and handle CoCo DMA bounce buffering Jiri Pirko @ 2026-05-05 6:11 ` Jiri Pirko 2026-05-05 6:11 ` [PATCH rdma-next 2/2] RDMA/umem: block plain userspace memory registration under CoCo bounce Jiri Pirko 1 sibling, 0 replies; 13+ messages in thread From: Jiri Pirko @ 2026-05-05 6:11 UTC (permalink / raw) To: linux-rdma Cc: jgg, leon, edwards, kees, parav, mbloch, yishaih, lirongqing, huangjunxian6, liuy22 From: Jiri Pirko <jiri@nvidia.com> In CoCo guests, device DMA to regular userspace memory does not work because the DMA mapping layer redirects all mappings through swiotlb bounce buffers. Since RDMA devices access registered memory directly without CPU involvement, there is no opportunity for swiotlb to synchronize between the bounce buffer and the original pages. Expose this condition to userspace as IB_UVERBS_DEVICE_CC_DMA_BOUNCE in device_cap_flags_ex. Signed-off-by: Jiri Pirko <jiri@nvidia.com> --- drivers/infiniband/core/device.c | 6 ++++++ drivers/infiniband/core/uverbs_cmd.c | 2 ++ include/rdma/ib_verbs.h | 3 +++ include/uapi/rdma/ib_user_verbs.h | 2 ++ 4 files changed, 13 insertions(+) diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c index b89efaaa81ec..ad3da92c9318 100644 --- a/drivers/infiniband/core/device.c +++ b/drivers/infiniband/core/device.c @@ -42,6 +42,8 @@ #include <linux/security.h> #include <linux/notifier.h> #include <linux/hashtable.h> +#include <linux/cc_platform.h> +#include <linux/swiotlb.h> #include <rdma/rdma_netlink.h> #include <rdma/ib_addr.h> #include <rdma/ib_cache.h> @@ -1419,6 +1421,10 @@ int ib_register_device(struct ib_device *device, const char *name, */ WARN_ON(dma_device && !dma_device->dma_parms); device->dma_device = dma_device; + if (dma_device && + cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT) && + is_swiotlb_force_bounce(dma_device)) + device->cc_dma_bounce = 1; ret = setup_device(device); if (ret) diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c index 240f8a0cfd86..2a70774c639a 100644 --- a/drivers/infiniband/core/uverbs_cmd.c +++ b/drivers/infiniband/core/uverbs_cmd.c @@ -3655,6 +3655,8 @@ static int ib_uverbs_ex_query_device(struct uverbs_attr_bundle *attrs) resp.timestamp_mask = attr.timestamp_mask; resp.hca_core_clock = attr.hca_core_clock; resp.device_cap_flags_ex = attr.device_cap_flags; + if (ib_dev->cc_dma_bounce) + resp.device_cap_flags_ex |= IB_UVERBS_DEVICE_CC_DMA_BOUNCE; resp.rss_caps.supported_qpts = attr.rss_caps.supported_qpts; resp.rss_caps.max_rwq_indirection_tables = attr.rss_caps.max_rwq_indirection_tables; diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h index 167fb924f0cf..d06071b87d96 100644 --- a/include/rdma/ib_verbs.h +++ b/include/rdma/ib_verbs.h @@ -275,6 +275,7 @@ enum ib_device_cap_flags { IB_DEVICE_FLUSH_GLOBAL = IB_UVERBS_DEVICE_FLUSH_GLOBAL, IB_DEVICE_FLUSH_PERSISTENT = IB_UVERBS_DEVICE_FLUSH_PERSISTENT, IB_DEVICE_ATOMIC_WRITE = IB_UVERBS_DEVICE_ATOMIC_WRITE, + IB_DEVICE_CC_DMA_BOUNCE = IB_UVERBS_DEVICE_CC_DMA_BOUNCE, }; enum ib_kernel_cap_flags { @@ -2950,6 +2951,8 @@ struct ib_device { u16 kverbs_provider:1; /* CQ adaptive moderation (RDMA DIM) */ u16 use_cq_dim:1; + /* CoCo guest with DMA bounce buffering required */ + u16 cc_dma_bounce:1; u8 node_type; u32 phys_port_cnt; struct ib_device_attr attrs; diff --git a/include/uapi/rdma/ib_user_verbs.h b/include/uapi/rdma/ib_user_verbs.h index 
3b7bd99813e9..d2aeadb6d2f9 100644 --- a/include/uapi/rdma/ib_user_verbs.h +++ b/include/uapi/rdma/ib_user_verbs.h @@ -1368,6 +1368,8 @@ enum ib_uverbs_device_cap_flags { IB_UVERBS_DEVICE_FLUSH_PERSISTENT = 1ULL << 39, /* Atomic write attributes */ IB_UVERBS_DEVICE_ATOMIC_WRITE = 1ULL << 40, + /* CoCo guest with DMA bounce buffering required */ + IB_UVERBS_DEVICE_CC_DMA_BOUNCE = 1ULL << 41, }; enum ib_uverbs_raw_packet_caps { -- 2.53.0 ^ permalink raw reply related [flat|nested] 13+ messages in thread
* [PATCH rdma-next 2/2] RDMA/umem: block plain userspace memory registration under CoCo bounce 2026-05-05 6:11 [PATCH rdma-next 0/2] RDMA: detect and handle CoCo DMA bounce buffering Jiri Pirko 2026-05-05 6:11 ` [PATCH rdma-next 1/2] RDMA/uverbs: expose CoCo DMA bounce requirement to userspace Jiri Pirko @ 2026-05-05 6:11 ` Jiri Pirko 2026-05-05 13:20 ` Jacob Moroni 1 sibling, 1 reply; 13+ messages in thread From: Jiri Pirko @ 2026-05-05 6:11 UTC (permalink / raw) To: linux-rdma Cc: jgg, leon, edwards, kees, parav, mbloch, yishaih, lirongqing, huangjunxian6, liuy22 From: Jiri Pirko <jiri@nvidia.com> When a device requires DMA bounce buffering inside a Confidential Computing guest, __ib_umem_get_va() cannot work. The DMA mapping layer redirects all mappings through swiotlb bounce buffers, so the device receives DMA addresses pointing to bounce buffer memory rather than the user's pages. Since RDMA devices access registered memory directly without CPU involvement, there is no opportunity for swiotlb to synchronize between the bounce buffer and the original pages. Fail early with -EOPNOTSUPP to let the user know instead of a silent misfunction. Signed-off-by: Jiri Pirko <jiri@nvidia.com> --- drivers/infiniband/core/umem.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c index 611d693eb9a2..b1877b83b021 100644 --- a/drivers/infiniband/core/umem.c +++ b/drivers/infiniband/core/umem.c @@ -167,6 +167,9 @@ static struct ib_umem *__ib_umem_get_va(struct ib_device *device, int pinned, ret; unsigned int gup_flags = FOLL_LONGTERM; + if (device->cc_dma_bounce) + return ERR_PTR(-EOPNOTSUPP); + /* * If the combination of the addr and size requested for this memory * region causes an integer overflow, return error. -- 2.53.0 ^ permalink raw reply related [flat|nested] 13+ messages in thread
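From userspace, the new failure mode is just an errno on the registration call. A minimal caller-side sketch; reg_mr_via_dmabuf() is a hypothetical fallback helper, not an existing API:

#include <errno.h>
#include <stddef.h>
#include <infiniband/verbs.h>

/* Hypothetical: allocate from a shared heap and register the fd instead. */
struct ibv_mr *reg_mr_via_dmabuf(struct ibv_pd *pd, void *buf, size_t len);

static struct ibv_mr *reg_mr_cc_aware(struct ibv_pd *pd, void *buf, size_t len)
{
	struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
				       IBV_ACCESS_LOCAL_WRITE |
				       IBV_ACCESS_REMOTE_WRITE);

	/* -EOPNOTSUPP from __ib_umem_get_va(): plain VA registration
	 * cannot work under CoCo bounce buffering, switch to dma-buf.
	 */
	if (!mr && errno == EOPNOTSUPP)
		mr = reg_mr_via_dmabuf(pd, buf, len);

	return mr;
}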
* Re: [PATCH rdma-next 2/2] RDMA/umem: block plain userspace memory registration under CoCo bounce 2026-05-05 6:11 ` [PATCH rdma-next 2/2] RDMA/umem: block plain userspace memory registration under CoCo bounce Jiri Pirko @ 2026-05-05 13:20 ` Jacob Moroni 2026-05-05 16:02 ` Jason Gunthorpe 0 siblings, 1 reply; 13+ messages in thread From: Jacob Moroni @ 2026-05-05 13:20 UTC (permalink / raw) To: Jiri Pirko Cc: linux-rdma, jgg, leon, edwards, kees, parav, mbloch, yishaih, lirongqing, huangjunxian6, liuy22 Hi, Out of curiosity, it seems like we set DMA_ATTR_REQUIRE_COHERENT, so would that have caused these registrations to fail anyway since it would be trying to use swiotlb if running in a CVM? It's not really related to your change but something else I was curious about is how to handle drivers that allocate their ring buffers in userspace and register them (irdma). I was hoping that the new cc_shared heap could be used without modifying the kernel driver by replacing the normal allocations in the provider with a dmabuf allocation+mmap and just passing the resulting pointer to reg_mr, but that won't work because it's a PFN mapping. The driver could be modified to accept the actual dmabuf instead for the QP/CQ rings, but I just wanted to see if that matches your vision here or if you had something else in mind. Another idea was to just allocate them in the kernel using the DMA allocator and map them into userspace but it would be a larger change. Thanks, Jake ^ permalink raw reply [flat|nested] 13+ messages in thread
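For reference, the fd-based flow under discussion looks roughly like this from userspace: allocate from the shared heap, mmap the fd for CPU access, and hand the device the fd rather than the (PFN-mapped) pointer. The heap name comes from the cover letter and may not exist on a given kernel; error and cleanup paths are trimmed:

#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/dma-heap.h>
#include <infiniband/verbs.h>

static struct ibv_mr *reg_cc_shared_mr(struct ibv_pd *pd, size_t len,
				       void **va)
{
	struct dma_heap_allocation_data alloc = {
		.len = len,
		.fd_flags = O_RDWR | O_CLOEXEC,
	};
	int heap = open("/dev/dma_heap/system_cc_shared",
			O_RDONLY | O_CLOEXEC);

	if (heap < 0 || ioctl(heap, DMA_HEAP_IOCTL_ALLOC, &alloc) < 0)
		return NULL;

	/* CPU access goes through the dmabuf's own mmap - a PFN mapping,
	 * which is exactly why this VA cannot be fed to plain reg_mr.
	 */
	*va = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
		   alloc.fd, 0);

	/* The device side registers the fd, not the VA. */
	return ibv_reg_dmabuf_mr(pd, 0, len, 0, alloc.fd,
				 IBV_ACCESS_LOCAL_WRITE);
}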
* Re: [PATCH rdma-next 2/2] RDMA/umem: block plain userspace memory registration under CoCo bounce 2026-05-05 13:20 ` Jacob Moroni @ 2026-05-05 16:02 ` Jason Gunthorpe 2026-05-05 18:17 ` Jacob Moroni ` (2 more replies) 0 siblings, 3 replies; 13+ messages in thread From: Jason Gunthorpe @ 2026-05-05 16:02 UTC (permalink / raw) To: Jacob Moroni Cc: Jiri Pirko, linux-rdma, leon, edwards, kees, parav, mbloch, yishaih, lirongqing, huangjunxian6, liuy22 On Tue, May 05, 2026 at 09:20:01AM -0400, Jacob Moroni wrote: > Hi, > > Out of curiosity, it seems like we set DMA_ATTR_REQUIRE_COHERENT, so > would that have caused these registrations to fail anyway since it would > be trying to use swiotlb if running in a CVM? It is supposed to, at least that is the intention. I think that new attribute overtook Jiri's patch here? > I was hoping that the new cc_shared heap could be used without > modifying the kernel driver by replacing the normal allocations in the provider > with a dmabuf allocation+mmap and just passing the resulting pointer to reg_mr, > but that won't work because it's a PFN mapping. > The driver could be modified to accept the actual dmabuf instead for the QP/CQ > rings, but I just wanted to see if that matches your vision here or if > you had something > else in mind. Jiri has been looking at both options, but kernel side irdma must be upgraded to accept a dmabuf for every kind of userspace memory. This is why we have been trying to centralize more of the umem logic because every driver should be upgraded to accept dmabuf for everything... > Another idea was to just allocate them in the kernel using the DMA > allocator and map them into userspace but it would be a larger change. This isn't the pattern we are using in rdma.. Jason ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH rdma-next 2/2] RDMA/umem: block plain userspace memory registration under CoCo bounce 2026-05-05 16:02 ` Jason Gunthorpe @ 2026-05-05 18:17 ` Jacob Moroni 2026-05-06 9:20 ` Jiri Pirko 2026-05-06 9:17 ` Jiri Pirko 2026-05-06 9:25 ` Jiri Pirko 2 siblings, 1 reply; 13+ messages in thread From: Jacob Moroni @ 2026-05-05 18:17 UTC (permalink / raw) To: Jason Gunthorpe Cc: Jiri Pirko, linux-rdma, leon, edwards, kees, parav, mbloch, yishaih, lirongqing, huangjunxian6, liuy22 > Jiri has been looking at both options, but kernel side irdma must be > upgraded to accept a dmabuf for every kind of userspace memory. I think changing the irdma kernel driver to support dmabufs for the rings may be a relatively straightforward change if we can adopt an approach similar to how it's currently done using normal mrs (which are explicitly registered during the QP/CQ creation process). If so, it may just amount to adding a ptr attr to pass a struct irdma_mem_reg_req and using ibv_cmd_reg_dmabuf_mr instead of ibv_cmd_reg_mr. Thanks, Jake ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH rdma-next 2/2] RDMA/umem: block plain userspace memory registration under CoCo bounce 2026-05-05 18:17 ` Jacob Moroni @ 2026-05-06 9:20 ` Jiri Pirko 0 siblings, 0 replies; 13+ messages in thread From: Jiri Pirko @ 2026-05-06 9:20 UTC (permalink / raw) To: Jacob Moroni Cc: Jason Gunthorpe, linux-rdma, leon, edwards, kees, parav, mbloch, yishaih, lirongqing, huangjunxian6, liuy22 Tue, May 05, 2026 at 08:17:06PM +0200, jmoroni@google.com wrote: >> Jiri has been looking at both options, but kernel side irdma must be >> upgraded to accept a dmabuf for every kind of userspace memory. > >I think changing the irdma kernel driver to support dmabufs for the rings may >be a relatively straightforward change if we can adopt an approach similar to >how it's currently done using normal mrs (which are explicitly registered during >the QP/CQ creation process). If so, it may just amount to adding a ptr attr to >pass a struct irdma_mem_reg_req and using ibv_cmd_reg_dmabuf_mr instead >of ibv_cmd_reg_mr. After this patchset merged https://lore.kernel.org/all/20260504135731.2345383-1-jiri@resnulli.us/ it should be very easy for irdma to add support for dma-buf backed qps/cqs ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH rdma-next 2/2] RDMA/umem: block plain userspace memory registration under CoCo bounce 2026-05-05 16:02 ` Jason Gunthorpe 2026-05-05 18:17 ` Jacob Moroni @ 2026-05-06 9:17 ` Jiri Pirko 2026-05-06 9:25 ` Jiri Pirko 0 siblings, 0 replies; 13+ messages in thread From: Jiri Pirko @ 2026-05-06 9:17 UTC (permalink / raw) To: Jason Gunthorpe Cc: Jacob Moroni, linux-rdma, leon, edwards, kees, parav, mbloch, yishaih, lirongqing, huangjunxian6, liuy22 Tue, May 05, 2026 at 06:02:50PM +0200, jgg@ziepe.ca wrote: >On Tue, May 05, 2026 at 09:20:01AM -0400, Jacob Moroni wrote: >> Hi, >> >> Out of curiosity, it seems like we set DMA_ATTR_REQUIRE_COHERENT, so >> would that have caused these registrations to fail anyway since it would >> be trying to use swiotlb if running in a CVM? > >It is supposed to, at least that is the intention. I think that >new attribute overtook Jiri's patch here? Yeah, my patch seems to be redundant now. > >> I was hoping that the new cc_shared heap could be used without >> modifying the kernel driver by replacing the normal allocations in the provider >> with a dmabuf allocation+mmap and just passing the resulting pointer to reg_mr, >> but that won't work because it's a PFN mapping. > >> The driver could be modified to accept the actual dmabuf instead for the QP/CQ >> rings, but I just wanted to see if that matches your vision here or if >> you had something >> else in mind. > >Jiri has been looking at both options, but kernel side irdma must be >upgraded to accept a dmabuf for every kind of userspace memory. Correct, the transparent dmabuf-backed-VA pinning is still in the pipeline, as it is based on https://lore.kernel.org/all/20260504135731.2345383-1-jiri@resnulli.us/ Check it out in my working branch here: https://github.com/jpirko/linux_mlxsw/commit/00c51b20a977bb63681d140d65d857f978b3b8a6 > >This is why we have been trying to centralize more of the umem logic >because every driver should be upgraded to accept dmabuf for >everything... > >> Another idea was to just allocate them in the kernel using the DMA >> allocator and map them into userspace but it would be a larger change. > >This isn't the pattern we are using in rdma.. Yeah, plus I'm missing the motivation: what would that help us achieve? ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH rdma-next 2/2] RDMA/umem: block plain userspace memory registration under CoCo bounce 2026-05-05 16:02 ` Jason Gunthorpe 2026-05-05 18:17 ` Jacob Moroni 2026-05-06 9:17 ` Jiri Pirko @ 2026-05-06 9:25 ` Jiri Pirko 2026-05-06 9:49 ` Jason Gunthorpe 2 siblings, 1 reply; 13+ messages in thread From: Jiri Pirko @ 2026-05-06 9:25 UTC (permalink / raw) To: Jason Gunthorpe Cc: Jacob Moroni, linux-rdma, leon, edwards, kees, parav, mbloch, yishaih, lirongqing, huangjunxian6, liuy22 Tue, May 05, 2026 at 06:02:50PM +0200, jgg@ziepe.ca wrote: >On Tue, May 05, 2026 at 09:20:01AM -0400, Jacob Moroni wrote: >> Hi, >> >> Out of curiosity, it seems like we set DMA_ATTR_REQUIRE_COHERENT, so >> would that have caused these registrations to fail anyway since it would >> be trying to use swiotlb if running in a CVM? > >It is supposed to, at least that is the intention. I think that >new attribute overtook Jiri's patch here? Hmm, wouldn't we rather want -EOPNOTSUPP instead of the very broad -EIO in this case? I think that might be better for the user. ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH rdma-next 2/2] RDMA/umem: block plain userspace memory registration under CoCo bounce 2026-05-06 9:25 ` Jiri Pirko @ 2026-05-06 9:49 ` Jason Gunthorpe 2026-05-06 10:54 ` Jiri Pirko 0 siblings, 1 reply; 13+ messages in thread From: Jason Gunthorpe @ 2026-05-06 9:49 UTC (permalink / raw) To: Jiri Pirko Cc: Jacob Moroni, linux-rdma, leon, edwards, kees, parav, mbloch, yishaih, lirongqing, huangjunxian6, liuy22 On Wed, May 06, 2026 at 11:25:13AM +0200, Jiri Pirko wrote: > Tue, May 05, 2026 at 06:02:50PM +0200, jgg@ziepe.ca wrote: > >On Tue, May 05, 2026 at 09:20:01AM -0400, Jacob Moroni wrote: > >> Hi, > >> > >> Out of curiosity, it seems like we set DMA_ATTR_REQUIRE_COHERENT, so > >> would that have caused these registrations to fail anyway since it would > >> be trying to use swiotlb if running in a CVM? > > > >It is supposed to, at least that is the intention. I think that > >new attribute overtook Jiri's patch here? > > Hmm, don't we want rather -EOPNOTSUPP instead of very wide -EIO in this > case? I think that might be better for the user. Yeah, I would prefer that also, it is a good enough reason for this patch. Jason ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH rdma-next 2/2] RDMA/umem: block plain userspace memory registration under CoCo bounce 2026-05-06 9:49 ` Jason Gunthorpe @ 2026-05-06 10:54 ` Jiri Pirko 2026-05-06 13:39 ` Jacob Moroni 0 siblings, 1 reply; 13+ messages in thread From: Jiri Pirko @ 2026-05-06 10:54 UTC (permalink / raw) To: Jason Gunthorpe Cc: Jacob Moroni, linux-rdma, leon, edwards, kees, parav, mbloch, yishaih, lirongqing, huangjunxian6, liuy22 Wed, May 06, 2026 at 11:49:40AM +0200, jgg@ziepe.ca wrote: >On Wed, May 06, 2026 at 11:25:13AM +0200, Jiri Pirko wrote: >> Tue, May 05, 2026 at 06:02:50PM +0200, jgg@ziepe.ca wrote: >> >On Tue, May 05, 2026 at 09:20:01AM -0400, Jacob Moroni wrote: >> >> Hi, >> >> >> >> Out of curiosity, it seems like we set DMA_ATTR_REQUIRE_COHERENT, so >> >> would that have caused these registrations to fail anyway since it would >> >> be trying to use swiotlb if running in a CVM? >> > >> >It is supposed to, at least that is the intention. I think that >> >new attribute overtook Jiri's patch here? >> >> Hmm, don't we want rather -EOPNOTSUPP instead of very wide -EIO in this >> case? I think that might be better for the user. > >Yeah, I would prefer that also, it is a good enough reason for this >patch. Good. Will send v2 with updated patch description. Thanks! ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH rdma-next 2/2] RDMA/umem: block plain userspace memory registration under CoCo bounce 2026-05-06 10:54 ` Jiri Pirko @ 2026-05-06 13:39 ` Jacob Moroni 2026-05-06 14:54 ` Jiri Pirko 0 siblings, 1 reply; 13+ messages in thread From: Jacob Moroni @ 2026-05-06 13:39 UTC (permalink / raw) To: Jiri Pirko Cc: Jason Gunthorpe, linux-rdma, leon, edwards, kees, parav, mbloch, yishaih, lirongqing, huangjunxian6, liuy22 > transparent dmabuf-backed-VA pinning Thanks! I took a look at your WIP code. It seems like it would really simplify things for irdma. Looking forward to it. Is there a WIP you can share for any rdma-core changes? For example, I am wondering if there will be some generic allocation helper for drivers to allocate umems for internal use (for QP rings, etc.). This helper would detect if it's running in a CVM and use the cc_shared heap or something. I'm mainly just curious how you see it being used on the userspace side. >>> Another idea was to just allocate them in the kernel using the DMA >>> allocator and map them into userspace but it would be a larger change. >>This isn't the pattern we are using in rdma.. > Yeah, plus I'm missing the motivation, what that would help us to > achieve? This would have been a driver hack and doesn't make sense compared to your current plan, but the idea would have been to use the DMA allocator in the kernel to allocate the QP rings. This would give us a public buffer, which could then be mapped into the process with dma_mmap_coherent which sets the pages to decrypted. I imagine this scheme would be needed for NICs that require physically contiguous ring buffers (if any exist, not sure). Thanks, Jake ^ permalink raw reply [flat|nested] 13+ messages in thread
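For completeness, the driver-side scheme described above would amount to something like the following, using only the standard DMA API (a rough sketch of the idea, not the pattern rdma is adopting, per Jason's earlier reply):

#include <linux/dma-mapping.h>
#include <linux/mm.h>

static int ring_alloc_and_mmap(struct device *dev,
			       struct vm_area_struct *vma, size_t size)
{
	dma_addr_t dma_handle;
	void *cpu_addr;

	/* In a CVM the coherent allocator hands back pages that are
	 * already shared (decrypted) with the host/device.
	 */
	cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL);
	if (!cpu_addr)
		return -ENOMEM;

	/* Map the very same pages into the process; dma_mmap_coherent()
	 * applies the matching (decrypted) pgprot for the user mapping.
	 */
	return dma_mmap_coherent(dev, vma, cpu_addr, dma_handle, size);
}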
* Re: [PATCH rdma-next 2/2] RDMA/umem: block plain userspace memory registration under CoCo bounce 2026-05-06 13:39 ` Jacob Moroni @ 2026-05-06 14:54 ` Jiri Pirko 0 siblings, 0 replies; 13+ messages in thread From: Jiri Pirko @ 2026-05-06 14:54 UTC (permalink / raw) To: Jacob Moroni Cc: Jason Gunthorpe, linux-rdma, leon, edwards, kees, parav, mbloch, yishaih, lirongqing, huangjunxian6, liuy22 Wed, May 06, 2026 at 03:39:32PM +0200, jmoroni@google.com wrote: >> transparent dmabuf-backed-VA pinning > >Thanks! I took a look at your WIP code. It seems like it would really simplify >things for irdma. Looking forward to it. > >Is there a WIP you can share for any rdma-core changes? For example, I >am wondering if there will be some generic allocation helper for drivers to >allocate umems for internal use (for QP rings, etc.). This helper would >detect if it's running in a CVM and use the cc_shared heap or something. > >I'm mainly just curious how you see it being used on the userspace side. https://github.com/jpirko/rdma-core/commits/wip_umem_bufs/ > >>>> Another idea was to just allocate them in the kernel using the DMA >>>> allocator and map them into userspace but it would be a larger change. > >>>This isn't the pattern we are using in rdma.. > >> Yeah, plus I'm missing the motivation, what that would help us to >> achieve? > >This would have been a driver hack and doesn't make sense compared to >your current plan, but the idea would have been to use the DMA allocator in >the kernel to allocate the QP rings. This would give us a public buffer, which >could then be mapped into the process with dma_mmap_coherent which >sets the pages to decrypted. I imagine this scheme would be needed for >NICs that require physically contiguous ring buffers (if any exist, not sure). > >Thanks, >Jake ^ permalink raw reply [flat|nested] 13+ messages in thread