From: Jiri Pirko <jiri@resnulli.us>
To: Jacob Moroni <jmoroni@google.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>,
linux-rdma@vger.kernel.org, leon@kernel.org, edwards@nvidia.com,
kees@kernel.org, parav@nvidia.com, mbloch@nvidia.com,
yishaih@nvidia.com, lirongqing@baidu.com,
huangjunxian6@hisilicon.com, liuy22@mails.tsinghua.edu.cn
Subject: Re: [PATCH rdma-next 2/2] RDMA/umem: block plain userspace memory registration under CoCo bounce
Date: Wed, 6 May 2026 16:54:00 +0200 [thread overview]
Message-ID: <aftVhkIh4DtoH8zq@FV6GYCPJ69> (raw)
In-Reply-To: <CAHYDg1Tbrekfnd7RyHm07ctAP9DLtUHqQ5EsYMYJr=bjjHSMPg@mail.gmail.com>
Wed, May 06, 2026 at 03:39:32PM +0200, jmoroni@google.com wrote:
>> transparent dmabuf-backed-VA pinning
>
>Thanks! I took a look at your WIP code. It seems like it would really simplify
>things for irdma. Looking forward to it.
>
>Is there a WIP you can share for any rdma-core changes? For example, I
>am wondering if there will be some generic allocation helper for drivers to
>allocate umems for internal use (for QP rings, etc.). This helper would
>detect if it's running in a CVM and use the cc_shared heap or something.
>
>I'm mainly just curious how you see it being used on the userspace side.
https://github.com/jpirko/rdma-core/commits/wip_umem_bufs/
>
>>>> Another idea was to just allocate them in the kernel using the DMA
>>>> allocator and map them into userspace but it would be a larger change.
>
>>>This isn't the pattern we are using in rdma..
>
>> Yeah, plus I'm missing the motivation: what would that help us to
>> achieve?
>
>This would have been a driver hack and doesn't make sense compared to
>your current plan, but the idea was to use the DMA allocator in the
>kernel to allocate the QP rings. That would give us a shared (decrypted)
>buffer, which could then be mapped into the process with
>dma_mmap_coherent(). I imagine this scheme would be needed for NICs
>that require physically contiguous ring buffers (if any exist, not sure).
>
>Thanks,
>Jake
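
For reference, the rejected scheme Jake describes would have looked roughly
like this on the driver side (a sketch only, not actual driver code; the
mydev/mydrv names are placeholders). dma_alloc_coherent() already returns
memory set decrypted (shared) under SEV/TDX, and dma_mmap_coherent() exposes
that same buffer to the process, so the device could DMA to the ring without
SWIOTLB bouncing:

	static int mydrv_alloc_ring(struct mydev *dev, size_t size)
	{
		dev->ring_cpu = dma_alloc_coherent(dev->dma_dev, size,
						   &dev->ring_dma, GFP_KERNEL);
		if (!dev->ring_cpu)
			return -ENOMEM;
		dev->ring_size = size;
		return 0;
	}

	/* Called from the driver's mmap path for the ring region. */
	static int mydrv_mmap_ring(struct mydev *dev, struct vm_area_struct *vma)
	{
		return dma_mmap_coherent(dev->dma_dev, vma, dev->ring_cpu,
					 dev->ring_dma, dev->ring_size);
	}

As noted above, this only makes sense as a per-driver hack, which is why the
generic umem approach is preferable.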
Thread overview: 13+ messages
2026-05-05 6:11 [PATCH rdma-next 0/2] RDMA: detect and handle CoCo DMA bounce buffering Jiri Pirko
2026-05-05 6:11 ` [PATCH rdma-next 1/2] RDMA/uverbs: expose CoCo DMA bounce requirement to userspace Jiri Pirko
2026-05-05 6:11 ` [PATCH rdma-next 2/2] RDMA/umem: block plain userspace memory registration under CoCo bounce Jiri Pirko
2026-05-05 13:20 ` Jacob Moroni
2026-05-05 16:02 ` Jason Gunthorpe
2026-05-05 18:17 ` Jacob Moroni
2026-05-06 9:20 ` Jiri Pirko
2026-05-06 9:17 ` Jiri Pirko
2026-05-06 9:25 ` Jiri Pirko
2026-05-06 9:49 ` Jason Gunthorpe
2026-05-06 10:54 ` Jiri Pirko
2026-05-06 13:39 ` Jacob Moroni
2026-05-06 14:54 ` Jiri Pirko [this message]