* [PATCH rdma-next v2 1/2] RDMA/uverbs: expose CoCo DMA bounce requirement to userspace
2026-05-06 11:14 [PATCH rdma-next v2 0/2] RDMA: detect and handle CoCo DMA bounce buffering Jiri Pirko
@ 2026-05-06 11:14 ` Jiri Pirko
2026-05-12 13:03 ` Leon Romanovsky
2026-05-06 11:14 ` [PATCH rdma-next v2 2/2] RDMA/umem: block plain userspace memory registration under CoCo bounce Jiri Pirko
2026-05-06 12:52 ` [PATCH rdma-next v2 0/2] RDMA: detect and handle CoCo DMA bounce buffering Jacob Moroni
2 siblings, 1 reply; 12+ messages in thread
From: Jiri Pirko @ 2026-05-06 11:14 UTC (permalink / raw)
To: linux-rdma
Cc: jgg, leon, edwards, kees, parav, mbloch, yishaih, lirongqing,
huangjunxian6, liuy22, jmoroni
From: Jiri Pirko <jiri@nvidia.com>
In CoCo guests, device DMA to regular userspace memory does not work
because the DMA mapping layer redirects all mappings through swiotlb
bounce buffers. Since RDMA devices access registered memory directly
without CPU involvement, there is no opportunity for swiotlb to
synchronize between the bounce buffer and the original pages.
Expose this condition to userspace as IB_UVERBS_DEVICE_CC_DMA_BOUNCE
in device_cap_flags_ex.
Signed-off-by: Jiri Pirko <jiri@nvidia.com>
---
drivers/infiniband/core/device.c | 6 ++++++
drivers/infiniband/core/uverbs_cmd.c | 2 ++
include/rdma/ib_verbs.h | 3 +++
include/uapi/rdma/ib_user_verbs.h | 2 ++
4 files changed, 13 insertions(+)
diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
index b89efaaa81ec..ad3da92c9318 100644
--- a/drivers/infiniband/core/device.c
+++ b/drivers/infiniband/core/device.c
@@ -42,6 +42,8 @@
#include <linux/security.h>
#include <linux/notifier.h>
#include <linux/hashtable.h>
+#include <linux/cc_platform.h>
+#include <linux/swiotlb.h>
#include <rdma/rdma_netlink.h>
#include <rdma/ib_addr.h>
#include <rdma/ib_cache.h>
@@ -1419,6 +1421,10 @@ int ib_register_device(struct ib_device *device, const char *name,
*/
WARN_ON(dma_device && !dma_device->dma_parms);
device->dma_device = dma_device;
+ if (dma_device &&
+ cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT) &&
+ is_swiotlb_force_bounce(dma_device))
+ device->cc_dma_bounce = 1;
ret = setup_device(device);
if (ret)
diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
index 240f8a0cfd86..2a70774c639a 100644
--- a/drivers/infiniband/core/uverbs_cmd.c
+++ b/drivers/infiniband/core/uverbs_cmd.c
@@ -3655,6 +3655,8 @@ static int ib_uverbs_ex_query_device(struct uverbs_attr_bundle *attrs)
resp.timestamp_mask = attr.timestamp_mask;
resp.hca_core_clock = attr.hca_core_clock;
resp.device_cap_flags_ex = attr.device_cap_flags;
+ if (ib_dev->cc_dma_bounce)
+ resp.device_cap_flags_ex |= IB_UVERBS_DEVICE_CC_DMA_BOUNCE;
resp.rss_caps.supported_qpts = attr.rss_caps.supported_qpts;
resp.rss_caps.max_rwq_indirection_tables =
attr.rss_caps.max_rwq_indirection_tables;
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 167fb924f0cf..d06071b87d96 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -275,6 +275,7 @@ enum ib_device_cap_flags {
IB_DEVICE_FLUSH_GLOBAL = IB_UVERBS_DEVICE_FLUSH_GLOBAL,
IB_DEVICE_FLUSH_PERSISTENT = IB_UVERBS_DEVICE_FLUSH_PERSISTENT,
IB_DEVICE_ATOMIC_WRITE = IB_UVERBS_DEVICE_ATOMIC_WRITE,
+ IB_DEVICE_CC_DMA_BOUNCE = IB_UVERBS_DEVICE_CC_DMA_BOUNCE,
};
enum ib_kernel_cap_flags {
@@ -2950,6 +2951,8 @@ struct ib_device {
u16 kverbs_provider:1;
/* CQ adaptive moderation (RDMA DIM) */
u16 use_cq_dim:1;
+ /* CoCo guest with DMA bounce buffering required */
+ u16 cc_dma_bounce:1;
u8 node_type;
u32 phys_port_cnt;
struct ib_device_attr attrs;
diff --git a/include/uapi/rdma/ib_user_verbs.h b/include/uapi/rdma/ib_user_verbs.h
index 3b7bd99813e9..d2aeadb6d2f9 100644
--- a/include/uapi/rdma/ib_user_verbs.h
+++ b/include/uapi/rdma/ib_user_verbs.h
@@ -1368,6 +1368,8 @@ enum ib_uverbs_device_cap_flags {
IB_UVERBS_DEVICE_FLUSH_PERSISTENT = 1ULL << 39,
/* Atomic write attributes */
IB_UVERBS_DEVICE_ATOMIC_WRITE = 1ULL << 40,
+ /* CoCo guest with DMA bounce buffering required */
+ IB_UVERBS_DEVICE_CC_DMA_BOUNCE = 1ULL << 41,
};
enum ib_uverbs_raw_packet_caps {
--
2.53.0
^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [PATCH rdma-next v2 1/2] RDMA/uverbs: expose CoCo DMA bounce requirement to userspace
2026-05-06 11:14 ` [PATCH rdma-next v2 1/2] RDMA/uverbs: expose CoCo DMA bounce requirement to userspace Jiri Pirko
@ 2026-05-12 13:03 ` Leon Romanovsky
2026-05-12 14:03 ` Jiri Pirko
0 siblings, 1 reply; 12+ messages in thread
From: Leon Romanovsky @ 2026-05-12 13:03 UTC (permalink / raw)
To: Jiri Pirko
Cc: linux-rdma, jgg, edwards, kees, parav, mbloch, yishaih,
lirongqing, huangjunxian6, liuy22, jmoroni
On Wed, May 06, 2026 at 01:14:46PM +0200, Jiri Pirko wrote:
> From: Jiri Pirko <jiri@nvidia.com>
>
> In CoCo guests, device DMA to regular userspace memory does not work
> because the DMA mapping layer redirects all mappings through swiotlb
> bounce buffers. Since RDMA devices access registered memory directly
> without CPU involvement, there is no opportunity for swiotlb to
> synchronize between the bounce buffer and the original pages.
>
> Expose this condition to userspace as IB_UVERBS_DEVICE_CC_DMA_BOUNCE
> in device_cap_flags_ex.
>
> Signed-off-by: Jiri Pirko <jiri@nvidia.com>
> ---
> drivers/infiniband/core/device.c | 6 ++++++
> drivers/infiniband/core/uverbs_cmd.c | 2 ++
> include/rdma/ib_verbs.h | 3 +++
> include/uapi/rdma/ib_user_verbs.h | 2 ++
> 4 files changed, 13 insertions(+)
>
> diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
> index b89efaaa81ec..ad3da92c9318 100644
> --- a/drivers/infiniband/core/device.c
> +++ b/drivers/infiniband/core/device.c
> @@ -42,6 +42,8 @@
> #include <linux/security.h>
> #include <linux/notifier.h>
> #include <linux/hashtable.h>
> +#include <linux/cc_platform.h>
> +#include <linux/swiotlb.h>
> #include <rdma/rdma_netlink.h>
> #include <rdma/ib_addr.h>
> #include <rdma/ib_cache.h>
> @@ -1419,6 +1421,10 @@ int ib_register_device(struct ib_device *device, const char *name,
> */
> WARN_ON(dma_device && !dma_device->dma_parms);
> device->dma_device = dma_device;
> + if (dma_device &&
> + cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT) &&
> + is_swiotlb_force_bounce(dma_device))
It is the wrong place. When I worked on my DMA series, I tried something
similar (a call into SWIOTLB) to notify users that RDMA would not work.
The general feedback was that this is a layering violation, and that any
knowledge of SWIOTLB (and its API) should not leak out of the DMA API.
You shouldn't call is_swiotlb_force_bounce() here.
Thanks
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH rdma-next v2 1/2] RDMA/uverbs: expose CoCo DMA bounce requirement to userspace
2026-05-12 13:03 ` Leon Romanovsky
@ 2026-05-12 14:03 ` Jiri Pirko
2026-05-12 14:05 ` Jason Gunthorpe
0 siblings, 1 reply; 12+ messages in thread
From: Jiri Pirko @ 2026-05-12 14:03 UTC (permalink / raw)
To: Leon Romanovsky
Cc: linux-rdma, jgg, edwards, kees, parav, mbloch, yishaih,
lirongqing, huangjunxian6, liuy22, jmoroni
Tue, May 12, 2026 at 03:03:29PM CEST, leon@kernel.org wrote:
>On Wed, May 06, 2026 at 01:14:46PM +0200, Jiri Pirko wrote:
>> From: Jiri Pirko <jiri@nvidia.com>
>>
>> In CoCo guests, device DMA to regular userspace memory does not work
>> because the DMA mapping layer redirects all mappings through swiotlb
>> bounce buffers. Since RDMA devices access registered memory directly
>> without CPU involvement, there is no opportunity for swiotlb to
>> synchronize between the bounce buffer and the original pages.
>>
>> Expose this condition to userspace as IB_UVERBS_DEVICE_CC_DMA_BOUNCE
>> in device_cap_flags_ex.
>>
>> Signed-off-by: Jiri Pirko <jiri@nvidia.com>
>> ---
>> drivers/infiniband/core/device.c | 6 ++++++
>> drivers/infiniband/core/uverbs_cmd.c | 2 ++
>> include/rdma/ib_verbs.h | 3 +++
>> include/uapi/rdma/ib_user_verbs.h | 2 ++
>> 4 files changed, 13 insertions(+)
>>
>> diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
>> index b89efaaa81ec..ad3da92c9318 100644
>> --- a/drivers/infiniband/core/device.c
>> +++ b/drivers/infiniband/core/device.c
>> @@ -42,6 +42,8 @@
>> #include <linux/security.h>
>> #include <linux/notifier.h>
>> #include <linux/hashtable.h>
>> +#include <linux/cc_platform.h>
>> +#include <linux/swiotlb.h>
>> #include <rdma/rdma_netlink.h>
>> #include <rdma/ib_addr.h>
>> #include <rdma/ib_cache.h>
>> @@ -1419,6 +1421,10 @@ int ib_register_device(struct ib_device *device, const char *name,
>> */
>> WARN_ON(dma_device && !dma_device->dma_parms);
>> device->dma_device = dma_device;
>> + if (dma_device &&
>> + cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT) &&
>> + is_swiotlb_force_bounce(dma_device))
>
>It is the wrong place. When I worked on my DMA series, I tried something
>similar (a call into SWIOTLB) to notify users that RDMA would not work.
>
>The general feedback was that this is a layering violation, and that any
>knowledge of SWIOTLB (and its API) should not leak out of the DMA API.
>
>You shouldn't call to is_swiotlb_force_bounce() here.
What do you suggest as an alternative? We need to somehow tell the user
what the situation is.
>
>Thanks
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH rdma-next v2 1/2] RDMA/uverbs: expose CoCo DMA bounce requirement to userspace
2026-05-12 14:03 ` Jiri Pirko
@ 2026-05-12 14:05 ` Jason Gunthorpe
2026-05-12 14:08 ` Jiri Pirko
0 siblings, 1 reply; 12+ messages in thread
From: Jason Gunthorpe @ 2026-05-12 14:05 UTC (permalink / raw)
To: Jiri Pirko
Cc: Leon Romanovsky, linux-rdma, edwards, kees, parav, mbloch,
yishaih, lirongqing, huangjunxian6, liuy22, jmoroni
On Tue, May 12, 2026 at 04:03:07PM +0200, Jiri Pirko wrote:
> >> @@ -1419,6 +1421,10 @@ int ib_register_device(struct ib_device *device, const char *name,
> >> */
> >> WARN_ON(dma_device && !dma_device->dma_parms);
> >> device->dma_device = dma_device;
> >> + if (dma_device &&
> >> + cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT) &&
> >> + is_swiotlb_force_bounce(dma_device))
> >
> >It is the wrong place. When I worked on my DMA series, I tried something
> >similar (a call into SWIOTLB) to notify users that RDMA would not work.
> >
> >The general feedback was that this is a layering violation, and that any
> >knowledge of SWIOTLB (and its API) should not leak out of the DMA API.
> >
> >You shouldn't call to is_swiotlb_force_bounce() here.
>
> What do you suggest as alternative? We need to somehow tell the user
> what is the situation.
For now CC_ATTR_GUEST_MEM_ENCRYPT is likely sufficient.
Later we should be able to detect if the device is in T=1 mode
directly.
Jason
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH rdma-next v2 1/2] RDMA/uverbs: expose CoCo DMA bounce requirement to userspace
2026-05-12 14:05 ` Jason Gunthorpe
@ 2026-05-12 14:08 ` Jiri Pirko
2026-05-12 14:34 ` Jason Gunthorpe
0 siblings, 1 reply; 12+ messages in thread
From: Jiri Pirko @ 2026-05-12 14:08 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: Leon Romanovsky, linux-rdma, edwards, kees, parav, mbloch,
yishaih, lirongqing, huangjunxian6, liuy22, jmoroni
Tue, May 12, 2026 at 04:05:10PM CEST, jgg@ziepe.ca wrote:
>On Tue, May 12, 2026 at 04:03:07PM +0200, Jiri Pirko wrote:
>> >> @@ -1419,6 +1421,10 @@ int ib_register_device(struct ib_device *device, const char *name,
>> >> */
>> >> WARN_ON(dma_device && !dma_device->dma_parms);
>> >> device->dma_device = dma_device;
>> >> + if (dma_device &&
>> >> + cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT) &&
>> >> + is_swiotlb_force_bounce(dma_device))
>> >
>> >It is the wrong place. When I worked on my DMA series, I tried something
>> >similar (a call into SWIOTLB) to notify users that RDMA would not work.
>> >
>> >The general feedback was that this is a layering violation, and that any
>> >knowledge of SWIOTLB (and its API) should not leak out of the DMA API.
>> >
>> >You shouldn't call to is_swiotlb_force_bounce() here.
>>
>> What do you suggest as alternative? We need to somehow tell the user
>> what is the situation.
>
>For now CC_ATTR_GUEST_MEM_ENCRYPT is likely sufficient.
>
>Later we should be able to detect if the device is in T=1 mode
>directly.
Okay, so we assume for now that every device is T=0 (which I believe is
the reality). Once a T=1 device appears, this "if statement" changes.
Do I understand that correctly?
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH rdma-next v2 1/2] RDMA/uverbs: expose CoCo DMA bounce requirement to userspace
2026-05-12 14:08 ` Jiri Pirko
@ 2026-05-12 14:34 ` Jason Gunthorpe
2026-05-12 18:30 ` Jiri Pirko
0 siblings, 1 reply; 12+ messages in thread
From: Jason Gunthorpe @ 2026-05-12 14:34 UTC (permalink / raw)
To: Jiri Pirko
Cc: Leon Romanovsky, linux-rdma, edwards, kees, parav, mbloch,
yishaih, lirongqing, huangjunxian6, liuy22, jmoroni
On Tue, May 12, 2026 at 04:08:44PM +0200, Jiri Pirko wrote:
> Tue, May 12, 2026 at 04:05:10PM CEST, jgg@ziepe.ca wrote:
> >On Tue, May 12, 2026 at 04:03:07PM +0200, Jiri Pirko wrote:
> >> >> @@ -1419,6 +1421,10 @@ int ib_register_device(struct ib_device *device, const char *name,
> >> >> */
> >> >> WARN_ON(dma_device && !dma_device->dma_parms);
> >> >> device->dma_device = dma_device;
> >> >> + if (dma_device &&
> >> >> + cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT) &&
> >> >> + is_swiotlb_force_bounce(dma_device))
> >> >
> >> >It is the wrong place. When I worked on my DMA series, I tried something
> >> >similar (a call into SWIOTLB) to notify users that RDMA would not work.
> >> >
> >> >The general feedback was that this is a layering violation, and that any
> >> >knowledge of SWIOTLB (and its API) should not leak out of the DMA API.
> >> >
> >> >You shouldn't call to is_swiotlb_force_bounce() here.
> >>
> >> What do you suggest as alternative? We need to somehow tell the user
> >> what is the situation.
> >
> >For now CC_ATTR_GUEST_MEM_ENCRYPT is likely sufficient.
> >
> >Later we should be able to detect if the device is in T=1 mode
> >directly.
>
> Okay, so we assume for now that every device is T=0 (which I believe is
> the reality). Once T=1 device appears, it changes this "if statement".
> Do I understand that correctly?
Yes, that is what I was thinking
Jason
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH rdma-next v2 1/2] RDMA/uverbs: expose CoCo DMA bounce requirement to userspace
2026-05-12 14:34 ` Jason Gunthorpe
@ 2026-05-12 18:30 ` Jiri Pirko
0 siblings, 0 replies; 12+ messages in thread
From: Jiri Pirko @ 2026-05-12 18:30 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: Leon Romanovsky, linux-rdma, edwards, kees, parav, mbloch,
yishaih, lirongqing, huangjunxian6, liuy22, jmoroni
Tue, May 12, 2026 at 04:34:02PM CEST, jgg@ziepe.ca wrote:
>On Tue, May 12, 2026 at 04:08:44PM +0200, Jiri Pirko wrote:
>> Tue, May 12, 2026 at 04:05:10PM CEST, jgg@ziepe.ca wrote:
>> >On Tue, May 12, 2026 at 04:03:07PM +0200, Jiri Pirko wrote:
>> >> >> @@ -1419,6 +1421,10 @@ int ib_register_device(struct ib_device *device, const char *name,
>> >> >> */
>> >> >> WARN_ON(dma_device && !dma_device->dma_parms);
>> >> >> device->dma_device = dma_device;
>> >> >> + if (dma_device &&
>> >> >> + cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT) &&
>> >> >> + is_swiotlb_force_bounce(dma_device))
>> >> >
>> >> >It is the wrong place. When I worked on my DMA series, I tried something
>> >> >similar (a call into SWIOTLB) to notify users that RDMA would not work.
>> >> >
>> >> >The general feedback was that this is a layering violation, and that any
>> >> >knowledge of SWIOTLB (and its API) should not leak out of the DMA API.
>> >> >
>> >> >You shouldn't call to is_swiotlb_force_bounce() here.
>> >>
>> >> What do you suggest as alternative? We need to somehow tell the user
>> >> what is the situation.
>> >
>> >For now CC_ATTR_GUEST_MEM_ENCRYPT is likely sufficient.
>> >
>> >Later we should be able to detect if the device is in T=1 mode
>> >directly.
>>
>> Okay, so we assume for now that every device is T=0 (which I believe is
>> the reality). Once T=1 device appears, it changes this "if statement".
>> Do I understand that correctly?
>
>Yes, that is what I was thinking
Okay, I'll leave some comment for future generations. Thanks!
^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH rdma-next v2 2/2] RDMA/umem: block plain userspace memory registration under CoCo bounce
2026-05-06 11:14 [PATCH rdma-next v2 0/2] RDMA: detect and handle CoCo DMA bounce buffering Jiri Pirko
2026-05-06 11:14 ` [PATCH rdma-next v2 1/2] RDMA/uverbs: expose CoCo DMA bounce requirement to userspace Jiri Pirko
@ 2026-05-06 11:14 ` Jiri Pirko
2026-05-12 13:05 ` Leon Romanovsky
2026-05-06 12:52 ` [PATCH rdma-next v2 0/2] RDMA: detect and handle CoCo DMA bounce buffering Jacob Moroni
2 siblings, 1 reply; 12+ messages in thread
From: Jiri Pirko @ 2026-05-06 11:14 UTC (permalink / raw)
To: linux-rdma
Cc: jgg, leon, edwards, kees, parav, mbloch, yishaih, lirongqing,
huangjunxian6, liuy22, jmoroni
From: Jiri Pirko <jiri@nvidia.com>
When a device requires DMA bounce buffering inside a Confidential
Computing guest, __ib_umem_get_va() cannot work. The DMA mapping layer
redirects all mappings through swiotlb bounce buffers, so the device
receives DMA addresses pointing to bounce buffer memory rather than
the user's pages. Since RDMA devices access registered memory directly
without CPU involvement, there is no opportunity for swiotlb to
synchronize between the bounce buffer and the original pages.
The registration would already fail later on, since the umem mapping
is requested with DMA_ATTR_REQUIRE_COHERENT and gets rejected under
is_swiotlb_force_bounce() with -EIO. Fail early with -EOPNOTSUPP
instead, so the user gets a specific error code to react to.
Signed-off-by: Jiri Pirko <jiri@nvidia.com>
---
v1->v2:
- updated patch description with mention of DMA_ATTR_REQUIRE_COHERENT
---
drivers/infiniband/core/umem.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index 611d693eb9a2..b1877b83b021 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -167,6 +167,9 @@ static struct ib_umem *__ib_umem_get_va(struct ib_device *device,
int pinned, ret;
unsigned int gup_flags = FOLL_LONGTERM;
+ if (device->cc_dma_bounce)
+ return ERR_PTR(-EOPNOTSUPP);
+
/*
* If the combination of the addr and size requested for this memory
* region causes an integer overflow, return error.
--
2.53.0
^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [PATCH rdma-next v2 2/2] RDMA/umem: block plain userspace memory registration under CoCo bounce
2026-05-06 11:14 ` [PATCH rdma-next v2 2/2] RDMA/umem: block plain userspace memory registration under CoCo bounce Jiri Pirko
@ 2026-05-12 13:05 ` Leon Romanovsky
2026-05-12 14:04 ` Jiri Pirko
0 siblings, 1 reply; 12+ messages in thread
From: Leon Romanovsky @ 2026-05-12 13:05 UTC (permalink / raw)
To: Jiri Pirko
Cc: linux-rdma, jgg, edwards, kees, parav, mbloch, yishaih,
lirongqing, huangjunxian6, liuy22, jmoroni
On Wed, May 06, 2026 at 01:14:47PM +0200, Jiri Pirko wrote:
> From: Jiri Pirko <jiri@nvidia.com>
>
> When a device requires DMA bounce buffering inside a Confidential
> Computing guest, __ib_umem_get_va() cannot work. The DMA mapping layer
> redirects all mappings through swiotlb bounce buffers, so the device
> receives DMA addresses pointing to bounce buffer memory rather than
> the user's pages. Since RDMA devices access registered memory directly
> without CPU involvement, there is no opportunity for swiotlb to
> synchronize between the bounce buffer and the original pages.
>
> The registration would already fail later on, since the umem mapping
> is requested with DMA_ATTR_REQUIRE_COHERENT and gets rejected under
> is_swiotlb_force_bounce() with -EIO. Fail early with -EOPNOTSUPP
> instead, so the user gets a specific error code to react to.
DMA_ATTR_REQUIRE_COHERENT was our answer to the "layering violation" claim.
Thanks
>
> Signed-off-by: Jiri Pirko <jiri@nvidia.com>
> ---
> v1->v2:
> - updated patch description with mention of DMA_ATTR_REQUIRE_COHERENT
> ---
> drivers/infiniband/core/umem.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
> index 611d693eb9a2..b1877b83b021 100644
> --- a/drivers/infiniband/core/umem.c
> +++ b/drivers/infiniband/core/umem.c
> @@ -167,6 +167,9 @@ static struct ib_umem *__ib_umem_get_va(struct ib_device *device,
> int pinned, ret;
> unsigned int gup_flags = FOLL_LONGTERM;
>
> + if (device->cc_dma_bounce)
> + return ERR_PTR(-EOPNOTSUPP);
> +
> /*
> * If the combination of the addr and size requested for this memory
> * region causes an integer overflow, return error.
> --
> 2.53.0
>
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH rdma-next v2 2/2] RDMA/umem: block plain userspace memory registration under CoCo bounce
2026-05-12 13:05 ` Leon Romanovsky
@ 2026-05-12 14:04 ` Jiri Pirko
0 siblings, 0 replies; 12+ messages in thread
From: Jiri Pirko @ 2026-05-12 14:04 UTC (permalink / raw)
To: Leon Romanovsky
Cc: linux-rdma, jgg, edwards, kees, parav, mbloch, yishaih,
lirongqing, huangjunxian6, liuy22, jmoroni
Tue, May 12, 2026 at 03:05:15PM CEST, leon@kernel.org wrote:
>On Wed, May 06, 2026 at 01:14:47PM +0200, Jiri Pirko wrote:
>> From: Jiri Pirko <jiri@nvidia.com>
>>
>> When a device requires DMA bounce buffering inside a Confidential
>> Computing guest, __ib_umem_get_va() cannot work. The DMA mapping layer
>> redirects all mappings through swiotlb bounce buffers, so the device
>> receives DMA addresses pointing to bounce buffer memory rather than
>> the user's pages. Since RDMA devices access registered memory directly
>> without CPU involvement, there is no opportunity for swiotlb to
>> synchronize between the bounce buffer and the original pages.
>>
>> The registration would already fail later on, since the umem mapping
>> is requested with DMA_ATTR_REQUIRE_COHERENT and gets rejected under
>> is_swiotlb_force_bounce() with -EIO. Fail early with -EOPNOTSUPP
>> instead, so the user gets a specific error code to react to.
>
>DMA_ATTR_REQUIRE_COHERENT was our answer to "layering violation claim".
I'm not sure I follow. What's the issue you see?
>
>Thanks
>
>>
>> Signed-off-by: Jiri Pirko <jiri@nvidia.com>
>> ---
>> v1->v2:
>> - updated patch description with mention of DMA_ATTR_REQUIRE_COHERENT
>> ---
>> drivers/infiniband/core/umem.c | 3 +++
>> 1 file changed, 3 insertions(+)
>>
>> diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
>> index 611d693eb9a2..b1877b83b021 100644
>> --- a/drivers/infiniband/core/umem.c
>> +++ b/drivers/infiniband/core/umem.c
>> @@ -167,6 +167,9 @@ static struct ib_umem *__ib_umem_get_va(struct ib_device *device,
>> int pinned, ret;
>> unsigned int gup_flags = FOLL_LONGTERM;
>>
>> + if (device->cc_dma_bounce)
>> + return ERR_PTR(-EOPNOTSUPP);
>> +
>> /*
>> * If the combination of the addr and size requested for this memory
>> * region causes an integer overflow, return error.
>> --
>> 2.53.0
>>
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH rdma-next v2 0/2] RDMA: detect and handle CoCo DMA bounce buffering
2026-05-06 11:14 [PATCH rdma-next v2 0/2] RDMA: detect and handle CoCo DMA bounce buffering Jiri Pirko
2026-05-06 11:14 ` [PATCH rdma-next v2 1/2] RDMA/uverbs: expose CoCo DMA bounce requirement to userspace Jiri Pirko
2026-05-06 11:14 ` [PATCH rdma-next v2 2/2] RDMA/umem: block plain userspace memory registration under CoCo bounce Jiri Pirko
@ 2026-05-06 12:52 ` Jacob Moroni
2 siblings, 0 replies; 12+ messages in thread
From: Jacob Moroni @ 2026-05-06 12:52 UTC (permalink / raw)
To: Jiri Pirko
Cc: linux-rdma, jgg, leon, edwards, kees, parav, mbloch, yishaih,
lirongqing, huangjunxian6, liuy22
Reviewed-by: Jacob Moroni <jmoroni@google.com>
^ permalink raw reply [flat|nested] 12+ messages in thread