From: Pavel Begunkov <asml.silence@gmail.com>
To: Mina Almasry <almasrymina@google.com>,
	David Ahern <dsahern@kernel.org>, David Wei <dw@davidwei.uk>
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-arch@vger.kernel.org, linux-kselftest@vger.kernel.org,
	linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org,
	"David S. Miller" <davem@davemloft.net>,
	"Eric Dumazet" <edumazet@google.com>,
	"Jakub Kicinski" <kuba@kernel.org>,
	"Paolo Abeni" <pabeni@redhat.com>,
	"Jesper Dangaard Brouer" <hawk@kernel.org>,
	"Ilias Apalodimas" <ilias.apalodimas@linaro.org>,
	"Arnd Bergmann" <arnd@arndb.de>,
	"Willem de Bruijn" <willemdebruijn.kernel@gmail.com>,
	"Shuah Khan" <shuah@kernel.org>,
	"Sumit Semwal" <sumit.semwal@linaro.org>,
	"Christian König" <christian.koenig@amd.com>,
	"Shakeel Butt" <shakeelb@google.com>,
	"Jeroen de Borst" <jeroendb@google.com>,
	"Praveen Kaligineedi" <pkaligineedi@google.com>,
	"Willem de Bruijn" <willemb@google.com>,
	"Kaiyuan Zhang" <kaiyuanz@google.com>
Subject: Re: [RFC PATCH v3 05/12] netdev: netdevice devmem allocator
Date: Fri, 10 Nov 2023 14:26:46 +0000
Message-ID: <3687e70e-29e6-34af-c943-8c0830ff92b8@gmail.com>
In-Reply-To: <CAHS8izMKDOw5_y2MLRfuJHs=ai+sZ6GF7Rg1NuR_JqONg-5u5Q@mail.gmail.com>

On 11/7/23 23:03, Mina Almasry wrote:
> On Tue, Nov 7, 2023 at 2:55 PM David Ahern <dsahern@kernel.org> wrote:
>>
>> On 11/7/23 3:10 PM, Mina Almasry wrote:
>>> On Mon, Nov 6, 2023 at 3:44 PM David Ahern <dsahern@kernel.org> wrote:
>>>>
>>>> On 11/5/23 7:44 PM, Mina Almasry wrote:
>>>>> diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
>>>>> index eeeda849115c..1c351c138a5b 100644
>>>>> --- a/include/linux/netdevice.h
>>>>> +++ b/include/linux/netdevice.h
>>>>> @@ -843,6 +843,9 @@ struct netdev_dmabuf_binding {
>>>>>   };
>>>>>
>>>>>   #ifdef CONFIG_DMA_SHARED_BUFFER
>>>>> +struct page_pool_iov *
>>>>> +netdev_alloc_devmem(struct netdev_dmabuf_binding *binding);
>>>>> +void netdev_free_devmem(struct page_pool_iov *ppiov);
>>>>
>>>> netdev_{alloc,free}_dmabuf?
>>>>
>>>
>>> Can do.
>>>
>>>> I say that because a dmabuf can be host memory, at least I am not aware
>>>> of a restriction that a dmabuf is device memory.
>>>>
>>>
>>> In my limited experience dma-buf is generally device memory, and
>>> that's really its use case. CONFIG_UDMABUF is a driver that mocks
>>> dma-buf with a memfd which I think is used for testing. But I can do
>>> the rename, it's more clear anyway, I think.
>>
>> config UDMABUF
>>          bool "userspace dmabuf misc driver"
>>          default n
>>          depends on DMA_SHARED_BUFFER
>>          depends on MEMFD_CREATE || COMPILE_TEST
>>          help
>>            A driver to let userspace turn memfd regions into dma-bufs.
>>            Qemu can use this to create host dmabufs for guest framebuffers.
>>
>>
>> Qemu is just a userspace process; it is in no way a special one.
>>
>> Treating host memory as a dmabuf should radically simplify the io_uring
>> extension of this set.
> 
> I agree actually, and I was about to make that comment to David Wei's
> series once I have the time.
> 
> David, your io_uring RX zerocopy proposal actually works with devmem
> TCP. If you're inclined to do that instead, what you'd do roughly is
> (I think):
That would be a Frankenstein's monster api with no good reason for it.
You bind memory via netlink because you don't have a proper context to
work with otherwise; io_uring serves as that context, with a separate
and precise abstraction around queues. Same with dmabufs: they make
total sense for device memory, but wrapping host memory into a file
just to immediately unwrap it back, with no particular benefit from
doing so, doesn't seem like good uapi. In the current design, the
difference would be hidden by io_uring anyway.

And we'd still need a hook in the page pool's page-allocation path to
grab buffers from the buffer ring instead of refilling via
SO_DEVMEM_DONTNEED, plus a callback for when skbs are dropped. It's
just that instead of a new set of pp ops it would be a branch in the
devmem path. io_uring might want to use the added iov format for device
memory in the future, or even before that; io_uring doesn't really care
whether the buffers are backed by pages or not.
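
For reference, the refill model I'm referring to looks roughly like
this from userspace, going by the uapi proposed in this series (the
token layout and names are from patch 11/12 of this RFC and may still
change):

#include <linux/types.h>
#include <sys/socket.h>

/* Token layout as proposed in patch 11/12 of this RFC: a range of RX
 * buffer tokens that were previously handed to userspace via cmsg.
 */
struct devmemtoken {
	__u32 token_start;	/* first token to give back */
	__u32 token_count;	/* number of consecutive tokens */
};

/* Hand a batch of consumed RX buffers back to the kernel so the
 * page pool can refill from them.
 */
static int devmem_dontneed(int sock, struct devmemtoken *tokens,
			   unsigned int n)
{
	return setsockopt(sock, SOL_SOCKET, SO_DEVMEM_DONTNEED,
			  tokens, n * sizeof(*tokens));
}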

It's also a big concern of mine how many optimisations it will fence us
off from. With the current io_uring RFC I can get rid of all per-buffer
atomic refcounting and replace it with a single percpu count per skb.
Hopefully that will still be doable after we place it on top of pp
providers.


> - Allocate a memfd,
> - Use CONFIG_UDMABUF to create a dma-buf out of that memfd.
> - Bind the dma-buf to the NIC using the netlink API in this RFC.
> - Your io_uring extensions and io_uring uapi should work as-is almost
> on top of this series, I think.
> 
> If you do this, the incoming packets should land in your memfd, which
> may or may not work for you. In the future if you feel inclined to use
> device memory, this approach that I'm describing here would be more
> extensible to device memory, because you'd already be using dma-bufs
> for your user memory; you'd just replace one kind of dma-buf (UDMABUF)
> with another.
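
For concreteness, the first two steps of that recipe come down to
something like this in userspace; a minimal sketch against the udmabuf
uapi (include/uapi/linux/udmabuf.h), with error handling omitted:

#define _GNU_SOURCE
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/udmabuf.h>

/* Turn size bytes of host memory into a dma-buf fd via /dev/udmabuf. */
static int host_mem_to_dmabuf(size_t size)
{
	struct udmabuf_create create;
	int memfd, devfd, buf_fd;

	/* udmabuf requires the memfd to be sealed against shrinking */
	memfd = memfd_create("rx-pool", MFD_ALLOW_SEALING);
	ftruncate(memfd, size);
	fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK);

	devfd = open("/dev/udmabuf", O_RDWR);
	memset(&create, 0, sizeof(create));
	create.memfd = memfd;
	create.offset = 0;	/* must be page-aligned */
	create.size = size;	/* must be a multiple of PAGE_SIZE */

	/* on success the ioctl returns a new dma-buf fd */
	buf_fd = ioctl(devfd, UDMABUF_CREATE, &create);
	close(devfd);
	return buf_fd;
}

The resulting fd is an ordinary dma-buf backed by host memory, so step
3 can bind it with the netlink API from this series unchanged.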
> 
>> That the io_uring set needs to dive into
>> page_pools is just wrong - complicating the design and code and pushing
>> io_uring into a realm it does not need to be involved in.

I disagree. How does it complicate things? io_uring would just be yet
another provider implementing the callbacks of an API created for
exactly such use cases, without touching common pp/net bits. The rest
of the code lives in io_uring, implementing the interaction with
userspace and other usability features; some amount of code is
unavoidable anyway if we want a convenient and performant api via
io_uring.
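
For the record, the callbacks in question are the memory-provider
hooks introduced in patch 02/12 of this series; roughly their shape as
proposed there (signatures may still change):

/* Provider hooks from patch 02/12 (as proposed; subject to change).
 * An io_uring provider would implement alloc_pages() to pull buffers
 * from its ring and release_page() to recycle them, leaving the
 * common pp/net code alone.
 */
struct pp_memory_provider_ops {
	int (*init)(struct page_pool *pool);
	void (*destroy)(struct page_pool *pool);
	struct page *(*alloc_pages)(struct page_pool *pool, gfp_t gfp);
	bool (*release_page)(struct page_pool *pool, struct page *page);
};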


>>
>> Most (all?) of this patch set can work with any memory; only device
>> memory is unreadable.

-- 
Pavel Begunkov
