* [PATCH RFC net-next v1 0/5] Device memory TCP TX
@ 2024-12-21  0:42 Mina Almasry
  2024-12-21  0:42 ` [PATCH RFC net-next v1 1/5] net: add devmem TCP TX documentation Mina Almasry
                   ` (5 more replies)
  0 siblings, 6 replies; 26+ messages in thread
From: Mina Almasry @ 2024-12-21  0:42 UTC (permalink / raw)
  To: netdev, linux-kernel, linux-doc, virtualization, kvm,
	linux-kselftest
  Cc: Mina Almasry, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Simon Horman, Donald Hunter, Jonathan Corbet,
	Andrew Lunn, David Ahern, Michael S. Tsirkin, Jason Wang,
	Xuan Zhuo, Eugenio Pérez, Stefan Hajnoczi,
	Stefano Garzarella, Shuah Khan, Kaiyuan Zhang, Pavel Begunkov,
	Willem de Bruijn, Samiullah Khawaja, Stanislav Fomichev,
	Joe Damato, dw

The TX path had been dropped from the Device Memory TCP patch series
post RFCv1 [1], to make that series slightly easier to review. This
series rebases the implementation of the TX path on top of the
net_iov/netmem framework agreed upon and merged. The motivation for
the feature is thoroughly described in the docs & cover letter of the
original proposal, so I don't repeat the lengthy descriptions here, but
they are available in [1].

Sending this series as RFC as the window closure is imminent. I plan on
reposting as non-RFC once the tree re-opens, addressing any feedback
I receive in the meantime.

Usage of the TX path is outlined in full in the documentation added in
the first patch.

A test example is available via the kselftest included in the series as well.

The series is relatively small, as the TX path for this feature largely
piggybacks on the existing MSG_ZEROCOPY implementation.
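
For orientation, below is a rough sketch of the userspace send flow this
series enables, condensed from the documentation and selftest patches
(tx_dmabuf_id comes from the netlink bind-tx call, len is the number of
bytes to send, and error handling is omitted):

    struct dmabuf_tx_cmsg ddmabuf = { .dmabuf_id = tx_dmabuf_id,
                                      .dmabuf_offset = 0 };
    char ctrl[CMSG_SPACE(sizeof(ddmabuf))] = {};
    struct iovec iov = { .iov_base = NULL, .iov_len = len };
    struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                          .msg_control = ctrl,
                          .msg_controllen = sizeof(ctrl) };
    struct cmsghdr *cmsg;
    int opt = 1;

    /* Devmem TX reuses the MSG_ZEROCOPY machinery, so SO_ZEROCOPY is
     * required on the socket.
     */
    setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &opt, sizeof(opt));

    cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_DEVMEM_DMABUF;
    cmsg->cmsg_len = CMSG_LEN(sizeof(ddmabuf));
    memcpy(CMSG_DATA(cmsg), &ddmabuf, sizeof(ddmabuf));

    /* len bytes are sent from offset 0 of the bound dmabuf; the buffer
     * must not be reused until the MSG_ERRQUEUE completion arrives.
     */
    sendmsg(fd, &msg, MSG_ZEROCOPY);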

Patch Overview:
---------------

1. Documentation & tests to give high level overview of the feature
   being added.

2. Add netmem refcounting needed for the TX path.

3. Devmem TX netlink API.

4. Devmem TX net stack implementation.

Testing:
--------

Testing is very similar to the devmem TCP RX path. The ncdevmem test used
for the RX path is now augmented with client functionality to test the TX
path.
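
As a rough illustration, the TX test boils down to running the server as
before on the receiving machine and piping data through the new client mode
on the sending machine. The option names below follow the existing ncdevmem
RX test and are assumptions here; see the selftest patch for the exact
invocation:

    # receiver (RX path, as before)
    ncdevmem -s <server IP> -c <client IP> -f eth1 -l -p 5201

    # sender (new devmem TX client)
    echo "hello world" | ncdevmem -s <server IP> -c <client IP> -f eth1 -p 5201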

* Test Setup:

Kernel: net-next with this RFC and memory provider API cherry-picked
locally.

Hardware: Google Cloud A3 VMs.

NIC: GVE with header split & RSS & flow steering support.

Performance results are not included with this version, unfortunately.
I'm having issues running the dma-buf exporter driver against the
upstream kernel on my test setup. The issues are specific to that
dma-buf exporter and do not affect this patch series. I plan to follow
up this series with perf fixes if the tests point to issues once they're
up and running.

Special thanks to Stan who took a stab at rebasing the TX implementation
on top of the merged netmem/net_iov framework. Parts of his proposal [2]
that are reused as-is are split out into their own patches to give him full
credit.

[1] https://lore.kernel.org/netdev/20240909054318.1809580-1-almasrymina@google.com/
[2] https://lore.kernel.org/netdev/20240913150913.1280238-2-sdf@fomichev.me/T/#m066dd407fbed108828e2c40ae50e3f4376ef57fd

Cc: sdf@fomichev.me
Cc: asml.silence@gmail.com
Cc: dw@davidwei.uk


Mina Almasry (4):
  net: add devmem TCP TX documentation
  selftests: ncdevmem: Implement devmem TCP TX
  net: add get_netmem/put_netmem support
  net: devmem: Implement TX path

Stanislav Fomichev (1):
  net: devmem TCP tx netlink api

 Documentation/netlink/specs/netdev.yaml       |  12 +
 Documentation/networking/devmem.rst           | 140 +++++++++-
 include/linux/skbuff.h                        |  13 +-
 include/linux/skbuff_ref.h                    |   4 +-
 include/net/netmem.h                          |   3 +
 include/net/sock.h                            |   2 +
 include/uapi/linux/netdev.h                   |   1 +
 include/uapi/linux/uio.h                      |   5 +
 net/core/datagram.c                           |  40 ++-
 net/core/devmem.c                             | 101 ++++++-
 net/core/devmem.h                             |  51 +++-
 net/core/netdev-genl-gen.c                    |  13 +
 net/core/netdev-genl-gen.h                    |   1 +
 net/core/netdev-genl.c                        |  67 ++++-
 net/core/skbuff.c                             |  38 ++-
 net/core/sock.c                               |   9 +
 net/ipv4/tcp.c                                |  36 ++-
 net/vmw_vsock/virtio_transport_common.c       |   4 +-
 tools/include/uapi/linux/netdev.h             |   1 +
 .../selftests/drivers/net/hw/ncdevmem.c       | 261 +++++++++++++++++-
 20 files changed, 764 insertions(+), 38 deletions(-)

-- 
2.47.1.613.gc27f4b7a9f-goog


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH RFC net-next v1 1/5] net: add devmem TCP TX documentation
  2024-12-21  0:42 [PATCH RFC net-next v1 0/5] Device memory TCP TX Mina Almasry
@ 2024-12-21  0:42 ` Mina Almasry
  2024-12-21  4:56   ` Stanislav Fomichev
  2024-12-21  0:42 ` [PATCH RFC net-next v1 2/5] selftests: ncdevmem: Implement devmem TCP TX Mina Almasry
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 26+ messages in thread
From: Mina Almasry @ 2024-12-21  0:42 UTC (permalink / raw)
  To: netdev, linux-kernel, linux-doc, virtualization, kvm,
	linux-kselftest
  Cc: Mina Almasry, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Simon Horman, Donald Hunter, Jonathan Corbet,
	Andrew Lunn, David Ahern, Michael S. Tsirkin, Jason Wang,
	Xuan Zhuo, Eugenio Pérez, Stefan Hajnoczi,
	Stefano Garzarella, Shuah Khan, Kaiyuan Zhang, Pavel Begunkov,
	Willem de Bruijn, Samiullah Khawaja, Stanislav Fomichev,
	Joe Damato, dw

Add documentation outlining the usage and details of the devmem TCP TX
API.

Signed-off-by: Mina Almasry <almasrymina@google.com>
---
 Documentation/networking/devmem.rst | 140 +++++++++++++++++++++++++++-
 1 file changed, 136 insertions(+), 4 deletions(-)

diff --git a/Documentation/networking/devmem.rst b/Documentation/networking/devmem.rst
index d95363645331..9be01cd96ee2 100644
--- a/Documentation/networking/devmem.rst
+++ b/Documentation/networking/devmem.rst
@@ -62,15 +62,15 @@ More Info
     https://lore.kernel.org/netdev/20240831004313.3713467-1-almasrymina@google.com/
 
 
-Interface
-=========
+RX Interface
+============
 
 
 Example
 -------
 
-tools/testing/selftests/net/ncdevmem.c:do_server shows an example of setting up
-the RX path of this API.
+./tools/testing/selftests/drivers/net/hw/ncdevmem:do_server shows an example of
+setting up the RX path of this API.
 
 
 NIC Setup
@@ -235,6 +235,138 @@ can be less than the tokens provided by the user in case of:
 (a) an internal kernel leak bug.
 (b) the user passed more than 1024 frags.
 
+TX Interface
+============
+
+
+Example
+-------
+
+./tools/testing/selftests/drivers/net/hw/ncdevmem:do_client shows an example of
+setting up the TX path of this API.
+
+
+NIC Setup
+---------
+
+The user must bind a TX dmabuf to a given NIC using the netlink API::
+
+        struct netdev_bind_tx_req *req = NULL;
+        struct netdev_bind_tx_rsp *rsp = NULL;
+        struct ynl_error yerr;
+
+        *ys = ynl_sock_create(&ynl_netdev_family, &yerr);
+
+        req = netdev_bind_tx_req_alloc();
+        netdev_bind_tx_req_set_ifindex(req, ifindex);
+        netdev_bind_tx_req_set_fd(req, dmabuf_fd);
+
+        rsp = netdev_bind_tx(*ys, req);
+
+        tx_dmabuf_id = rsp->id;
+
+
+The netlink API returns a dmabuf_id: a unique ID that refers to this dmabuf
+that has been bound.
+
+The user can unbind the dmabuf from the netdevice by closing the netlink socket
+that established the binding. We do this so that the dmabuf is automatically
+unbound even if the userspace process crashes.
+
+Note that any reasonably well-behaved dmabuf from any exporter should work with
+devmem TCP, even if the dmabuf is not actually backed by devmem. An example of
+this is udmabuf, which wraps user memory (non-devmem) in a dmabuf.
+
+Socket Setup
+------------
+
+The user application must use the MSG_ZEROCOPY flag when sending devmem TCP.
+Devmem cannot be copied by the kernel, so the semantics of devmem TX are
+similar to the semantics of MSG_ZEROCOPY::
+
+	ret = setsockopt(socket_fd, SOL_SOCKET, SO_ZEROCOPY, &opt, sizeof(opt));
+
+Sending data
+------------
+
+Devmem data is sent using the SCM_DEVMEM_DMABUF cmsg.
+
+The user should create a msghdr whose iovec has iov_base set to NULL and
+iov_len set to the number of bytes to be sent from the dmabuf.
+
+The user passes the dmabuf id via the dmabuf_tx_cmsg.dmabuf_id field, and the
+offset into the dmabuf at which to start sending via the
+dmabuf_tx_cmsg.dmabuf_offset field::
+
+        char ctrl_data[CMSG_SPACE(sizeof(struct dmabuf_tx_cmsg))];
+        struct dmabuf_tx_cmsg ddmabuf;
+        struct msghdr msg = {};
+        struct cmsghdr *cmsg;
+        uint64_t off = 100;
+        struct iovec iov;
+
+	iov.iov_base = NULL;
+	iov.iov_len = line_size;
+
+	msg.msg_iov = &iov;
+	msg.msg_iovlen = 1;
+
+	msg.msg_control = ctrl_data;
+	msg.msg_controllen = sizeof(ctrl_data);
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	cmsg->cmsg_level = SOL_SOCKET;
+	cmsg->cmsg_type = SCM_DEVMEM_DMABUF;
+	cmsg->cmsg_len = CMSG_LEN(sizeof(struct dmabuf_tx_cmsg));
+
+	ddmabuf.dmabuf_id = tx_dmabuf_id;
+	ddmabuf.dmabuf_offset = off;
+
+	*((struct dmabuf_tx_cmsg *)CMSG_DATA(cmsg)) = ddmabuf;
+
+	ret = sendmsg(socket_fd, &msg, MSG_ZEROCOPY);
+
+Reusing TX dmabufs
+------------------
+
+Similar to MSG_ZEROCOPY with regular memory, the user should not modify the
+contents of the dma-buf while a send operation is in progress. This is because
+the kernel does not keep a copy of the dmabuf contents. Instead, the kernel
+pins and sends data directly from the buffer shared with userspace.
+
+Just as with MSG_ZEROCOPY, the kernel notifies userspace of send completions
+using MSG_ERRQUEUE::
+
+        int64_t tstop = gettimeofday_ms() + waittime_ms;
+        char control[CMSG_SPACE(100)] = {};
+        struct sock_extended_err *serr;
+        struct msghdr msg = {};
+        struct cmsghdr *cm;
+        int retries = 10;
+        __u32 hi, lo;
+
+        msg.msg_control = control;
+        msg.msg_controllen = sizeof(control);
+
+        while (gettimeofday_ms() < tstop) {
+                if (!do_poll(fd)) continue;
+
+                ret = recvmsg(fd, &msg, MSG_ERRQUEUE);
+
+                for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
+                        serr = (void *)CMSG_DATA(cm);
+
+                        hi = serr->ee_data;
+                        lo = serr->ee_info;
+
+                        fprintf(stdout, "tx complete [%d,%d]\n", lo, hi);
+                }
+        }
+
+After the associated sendmsg has completed, the dmabuf can be reused by
+userspace.
+
+
 Implementation & Caveats
 ========================
 
-- 
2.47.1.613.gc27f4b7a9f-goog


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH RFC net-next v1 2/5] selftests: ncdevmem: Implement devmem TCP TX
  2024-12-21  0:42 [PATCH RFC net-next v1 0/5] Device memory TCP TX Mina Almasry
  2024-12-21  0:42 ` [PATCH RFC net-next v1 1/5] net: add devmem TCP TX documentation Mina Almasry
@ 2024-12-21  0:42 ` Mina Almasry
  2024-12-21  4:57   ` Stanislav Fomichev
  2024-12-26 21:24   ` Willem de Bruijn
  2024-12-21  0:42 ` [PATCH RFC net-next v1 3/5] net: add get_netmem/put_netmem support Mina Almasry
                   ` (3 subsequent siblings)
  5 siblings, 2 replies; 26+ messages in thread
From: Mina Almasry @ 2024-12-21  0:42 UTC (permalink / raw)
  To: netdev, linux-kernel, linux-doc, virtualization, kvm,
	linux-kselftest
  Cc: Mina Almasry, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Simon Horman, Donald Hunter, Jonathan Corbet,
	Andrew Lunn, David Ahern, Michael S. Tsirkin, Jason Wang,
	Xuan Zhuo, Eugenio Pérez, Stefan Hajnoczi,
	Stefano Garzarella, Shuah Khan, Kaiyuan Zhang, Pavel Begunkov,
	Willem de Bruijn, Samiullah Khawaja, Stanislav Fomichev,
	Joe Damato, dw

Add support for devmem TX in ncdevmem.

This combines the TX-path ncdevmem changes from the devmem TCP series
RFCv1 with Stan's work adding the netlink API, refactored on top of his
generic memory_provider support.

Signed-off-by: Mina Almasry <almasrymina@google.com>
Signed-off-by: Stanislav Fomichev <sdf@fomichev.me>
---
 .../selftests/drivers/net/hw/ncdevmem.c       | 261 +++++++++++++++++-
 1 file changed, 259 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/drivers/net/hw/ncdevmem.c b/tools/testing/selftests/drivers/net/hw/ncdevmem.c
index 19a6969643f4..c1cbe2e11230 100644
--- a/tools/testing/selftests/drivers/net/hw/ncdevmem.c
+++ b/tools/testing/selftests/drivers/net/hw/ncdevmem.c
@@ -40,15 +40,18 @@
 #include <fcntl.h>
 #include <malloc.h>
 #include <error.h>
+#include <poll.h>
 
 #include <arpa/inet.h>
 #include <sys/socket.h>
 #include <sys/mman.h>
 #include <sys/ioctl.h>
 #include <sys/syscall.h>
+#include <sys/time.h>
 
 #include <linux/memfd.h>
 #include <linux/dma-buf.h>
+#include <linux/errqueue.h>
 #include <linux/udmabuf.h>
 #include <libmnl/libmnl.h>
 #include <linux/types.h>
@@ -80,6 +83,8 @@ static int num_queues = -1;
 static char *ifname;
 static unsigned int ifindex;
 static unsigned int dmabuf_id;
+static uint32_t tx_dmabuf_id;
+static int waittime_ms = 500;
 
 struct memory_buffer {
 	int fd;
@@ -93,6 +98,8 @@ struct memory_buffer {
 struct memory_provider {
 	struct memory_buffer *(*alloc)(size_t size);
 	void (*free)(struct memory_buffer *ctx);
+	void (*memcpy_to_device)(struct memory_buffer *dst, size_t off,
+				 void *src, int n);
 	void (*memcpy_from_device)(void *dst, struct memory_buffer *src,
 				   size_t off, int n);
 };
@@ -153,6 +160,20 @@ static void udmabuf_free(struct memory_buffer *ctx)
 	free(ctx);
 }
 
+static void udmabuf_memcpy_to_device(struct memory_buffer *dst, size_t off,
+				     void *src, int n)
+{
+	struct dma_buf_sync sync = {};
+
+	sync.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_WRITE;
+	ioctl(dst->fd, DMA_BUF_IOCTL_SYNC, &sync);
+
+	memcpy(dst->buf_mem + off, src, n);
+
+	sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_WRITE;
+	ioctl(dst->fd, DMA_BUF_IOCTL_SYNC, &sync);
+}
+
 static void udmabuf_memcpy_from_device(void *dst, struct memory_buffer *src,
 				       size_t off, int n)
 {
@@ -170,6 +191,7 @@ static void udmabuf_memcpy_from_device(void *dst, struct memory_buffer *src,
 static struct memory_provider udmabuf_memory_provider = {
 	.alloc = udmabuf_alloc,
 	.free = udmabuf_free,
+	.memcpy_to_device = udmabuf_memcpy_to_device,
 	.memcpy_from_device = udmabuf_memcpy_from_device,
 };
 
@@ -394,6 +416,49 @@ static int bind_rx_queue(unsigned int ifindex, unsigned int dmabuf_fd,
 	return -1;
 }
 
+static int bind_tx_queue(unsigned int ifindex, unsigned int dmabuf_fd,
+			 struct ynl_sock **ys)
+{
+	struct netdev_bind_tx_req *req = NULL;
+	struct netdev_bind_tx_rsp *rsp = NULL;
+	struct ynl_error yerr;
+
+	*ys = ynl_sock_create(&ynl_netdev_family, &yerr);
+	if (!*ys) {
+		fprintf(stderr, "YNL: %s\n", yerr.msg);
+		return -1;
+	}
+
+	req = netdev_bind_tx_req_alloc();
+	netdev_bind_tx_req_set_ifindex(req, ifindex);
+	netdev_bind_tx_req_set_fd(req, dmabuf_fd);
+
+	rsp = netdev_bind_tx(*ys, req);
+	if (!rsp) {
+		perror("netdev_bind_tx");
+		goto err_close;
+	}
+
+	if (!rsp->_present.id) {
+		perror("id not present");
+		goto err_close;
+	}
+
+	fprintf(stderr, "got tx dmabuf id=%d\n", rsp->id);
+	tx_dmabuf_id = rsp->id;
+
+	netdev_bind_tx_req_free(req);
+	netdev_bind_tx_rsp_free(rsp);
+
+	return 0;
+
+err_close:
+	fprintf(stderr, "YNL failed: %s\n", (*ys)->err.msg);
+	netdev_bind_tx_req_free(req);
+	ynl_sock_destroy(*ys);
+	return -1;
+}
+
 static void enable_reuseaddr(int fd)
 {
 	int opt = 1;
@@ -432,7 +497,7 @@ static int parse_address(const char *str, int port, struct sockaddr_in6 *sin6)
 	return 0;
 }
 
-int do_server(struct memory_buffer *mem)
+static int do_server(struct memory_buffer *mem)
 {
 	char ctrl_data[sizeof(int) * 20000];
 	struct netdev_queue_id *queues;
@@ -686,6 +751,198 @@ void run_devmem_tests(void)
 	provider->free(mem);
 }
 
+static unsigned long gettimeofday_ms(void)
+{
+	struct timeval tv;
+
+	gettimeofday(&tv, NULL);
+	return (tv.tv_sec * 1000) + (tv.tv_usec / 1000);
+}
+
+static int do_poll(int fd)
+{
+	struct pollfd pfd;
+	int ret;
+
+	pfd.events = POLLERR;
+	pfd.revents = 0;
+	pfd.fd = fd;
+
+	ret = poll(&pfd, 1, waittime_ms);
+	if (ret == -1)
+		error(1, errno, "poll");
+
+	return ret && (pfd.revents & POLLERR);
+}
+
+static void wait_compl(int fd)
+{
+	int64_t tstop = gettimeofday_ms() + waittime_ms;
+	char control[CMSG_SPACE(100)] = {};
+	struct sock_extended_err *serr;
+	struct msghdr msg = {};
+	struct cmsghdr *cm;
+	int retries = 10;
+	__u32 hi, lo;
+	int ret;
+
+	msg.msg_control = control;
+	msg.msg_controllen = sizeof(control);
+
+	while (gettimeofday_ms() < tstop) {
+		if (!do_poll(fd))
+			continue;
+
+		ret = recvmsg(fd, &msg, MSG_ERRQUEUE);
+		if (ret < 0) {
+			if (errno == EAGAIN)
+				continue;
+			error(1, ret, "recvmsg(MSG_ERRQUEUE)");
+			return;
+		}
+		if (msg.msg_flags & MSG_CTRUNC)
+			error(1, 0, "MSG_CTRUNC\n");
+
+		for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
+			if (cm->cmsg_level != SOL_IP &&
+			    cm->cmsg_level != SOL_IPV6)
+				continue;
+			if (cm->cmsg_level == SOL_IP &&
+			    cm->cmsg_type != IP_RECVERR)
+				continue;
+			if (cm->cmsg_level == SOL_IPV6 &&
+			    cm->cmsg_type != IPV6_RECVERR)
+				continue;
+
+			serr = (void *)CMSG_DATA(cm);
+			if (serr->ee_origin != SO_EE_ORIGIN_ZEROCOPY)
+				error(1, 0, "wrong origin %u", serr->ee_origin);
+			if (serr->ee_errno != 0)
+				error(1, 0, "wrong errno %d", serr->ee_errno);
+
+			hi = serr->ee_data;
+			lo = serr->ee_info;
+
+			fprintf(stderr, "tx complete [%d,%d]\n", lo, hi);
+			return;
+		}
+	}
+
+	error(1, 0, "did not receive tx completion");
+}
+
+static int do_client(struct memory_buffer *mem)
+{
+	char ctrl_data[CMSG_SPACE(sizeof(struct dmabuf_tx_cmsg))];
+	struct sockaddr_in6 server_sin;
+	struct sockaddr_in6 client_sin;
+	struct dmabuf_tx_cmsg ddmabuf;
+	struct ynl_sock *ys = NULL;
+	struct msghdr msg = {};
+	ssize_t line_size = 0;
+	struct cmsghdr *cmsg;
+	uint64_t off = 100;
+	char *line = NULL;
+	struct iovec iov;
+	size_t len = 0;
+	int socket_fd;
+	int opt = 1;
+	int ret;
+
+	ret = parse_address(server_ip, atoi(port), &server_sin);
+	if (ret < 0)
+		error(1, 0, "parse server address");
+
+	socket_fd = socket(AF_INET6, SOCK_STREAM, 0);
+	if (socket_fd < 0)
+		error(1, socket_fd, "create socket");
+
+	enable_reuseaddr(socket_fd);
+
+	ret = setsockopt(socket_fd, SOL_SOCKET, SO_BINDTODEVICE, ifname,
+			 strlen(ifname) + 1);
+	if (ret)
+		error(1, ret, "bindtodevice");
+
+	if (bind_tx_queue(ifindex, mem->fd, &ys))
+		error(1, 0, "Failed to bind\n");
+
+	ret = parse_address(client_ip, atoi(port), &client_sin);
+	if (ret < 0)
+		error(1, 0, "parse client address");
+
+	ret = bind(socket_fd, &client_sin, sizeof(client_sin));
+	if (ret)
+		error(1, ret, "bind");
+
+	ret = setsockopt(socket_fd, SOL_SOCKET, SO_ZEROCOPY, &opt, sizeof(opt));
+	if (ret)
+		error(1, ret, "set sock opt");
+
+	fprintf(stderr, "Connect to %s %d (via %s)\n", server_ip,
+		ntohs(server_sin.sin6_port), ifname);
+
+	ret = connect(socket_fd, &server_sin, sizeof(server_sin));
+	if (ret)
+		error(1, ret, "connect");
+
+	while (1) {
+		free(line);
+		line = NULL;
+		line_size = getline(&line, &len, stdin);
+
+		if (line_size < 0)
+			break;
+
+		provider->memcpy_to_device(mem, off, line, line_size);
+
+		while (line_size) {
+			fprintf(stderr, "read line_size=%ld off=%d\n",
+				line_size, off);
+
+			iov.iov_base = NULL;
+			iov.iov_len = line_size;
+
+			msg.msg_iov = &iov;
+			msg.msg_iovlen = 1;
+
+			msg.msg_control = ctrl_data;
+			msg.msg_controllen = sizeof(ctrl_data);
+
+			cmsg = CMSG_FIRSTHDR(&msg);
+			cmsg->cmsg_level = SOL_SOCKET;
+			cmsg->cmsg_type = SCM_DEVMEM_DMABUF;
+			cmsg->cmsg_len = CMSG_LEN(sizeof(struct dmabuf_tx_cmsg));
+
+			ddmabuf.dmabuf_id = tx_dmabuf_id;
+			ddmabuf.dmabuf_offset = off;
+
+			*((struct dmabuf_tx_cmsg *)CMSG_DATA(cmsg)) = ddmabuf;
+
+			ret = sendmsg(socket_fd, &msg, MSG_ZEROCOPY);
+			if (ret < 0)
+				error(1, errno, "Failed sendmsg");
+
+			fprintf(stderr, "sendmsg_ret=%d\n", ret);
+
+			off += ret;
+			line_size -= ret;
+
+			wait_compl(socket_fd);
+		}
+	}
+
+	fprintf(stderr, "%s: tx ok\n", TEST_PREFIX);
+
+	free(line);
+	close(socket_fd);
+
+	if (ys)
+		ynl_sock_destroy(ys);
+
+	return 0;
+}
+
 int main(int argc, char *argv[])
 {
 	struct memory_buffer *mem;
@@ -779,7 +1036,7 @@ int main(int argc, char *argv[])
 		error(1, 0, "Missing -p argument\n");
 
 	mem = provider->alloc(getpagesize() * NUM_PAGES);
-	ret = is_server ? do_server(mem) : 1;
+	ret = is_server ? do_server(mem) : do_client(mem);
 	provider->free(mem);
 
 	return ret;
-- 
2.47.1.613.gc27f4b7a9f-goog


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH RFC net-next v1 3/5] net: add get_netmem/put_netmem support
  2024-12-21  0:42 [PATCH RFC net-next v1 0/5] Device memory TCP TX Mina Almasry
  2024-12-21  0:42 ` [PATCH RFC net-next v1 1/5] net: add devmem TCP TX documentation Mina Almasry
  2024-12-21  0:42 ` [PATCH RFC net-next v1 2/5] selftests: ncdevmem: Implement devmem TCP TX Mina Almasry
@ 2024-12-21  0:42 ` Mina Almasry
  2024-12-26 19:07   ` Stanislav Fomichev
  2024-12-21  0:42 ` [PATCH RFC net-next v1 4/5] net: devmem TCP tx netlink api Mina Almasry
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 26+ messages in thread
From: Mina Almasry @ 2024-12-21  0:42 UTC (permalink / raw)
  To: netdev, linux-kernel, linux-doc, virtualization, kvm,
	linux-kselftest
  Cc: Mina Almasry, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Simon Horman, Donald Hunter, Jonathan Corbet,
	Andrew Lunn, David Ahern, Michael S. Tsirkin, Jason Wang,
	Xuan Zhuo, Eugenio Pérez, Stefan Hajnoczi,
	Stefano Garzarella, Shuah Khan, Kaiyuan Zhang, Pavel Begunkov,
	Willem de Bruijn, Samiullah Khawaja, Stanislav Fomichev,
	Joe Damato, dw

Currently net_iovs support only pp ref counts, and do not support a
page ref equivalent.

This is fine for the RX path as net_iovs are used exclusively with the
pp and only pp refcounting is needed there. The TX path, however, does not
use pp ref counts; thus, support for a get_page/put_page equivalent is
needed for netmem.

Support get_netmem/put_netmem. Check the type of the netmem before
passing it to page or net_iov specific code to obtain a page ref
equivalent.

For dmabuf net_iovs, we obtain a ref on the underlying binding. This
ensures the entire binding doesn't disappear until all the net_iovs have
been put_netmem'ed. We do not need to track the refcount of individual
dmabuf net_iovs as we don't allocate/free them from a pool similar to
what the buddy allocator does for pages.

This code is written to be extensible by other net_iov implementers.
get_netmem/put_netmem will check the type of the netmem and route it to
the correct helper:

pages -> [get|put]_page()
dmabuf net_iovs -> net_devmem_[get|put]_net_iov()
new net_iovs ->	new helpers

Signed-off-by: Mina Almasry <almasrymina@google.com>

---
 include/linux/skbuff_ref.h |  4 ++--
 include/net/netmem.h       |  3 +++
 net/core/devmem.c          | 10 ++++++++++
 net/core/devmem.h          | 11 +++++++++++
 net/core/skbuff.c          | 30 ++++++++++++++++++++++++++++++
 5 files changed, 56 insertions(+), 2 deletions(-)

diff --git a/include/linux/skbuff_ref.h b/include/linux/skbuff_ref.h
index 0f3c58007488..9e49372ef1a0 100644
--- a/include/linux/skbuff_ref.h
+++ b/include/linux/skbuff_ref.h
@@ -17,7 +17,7 @@
  */
 static inline void __skb_frag_ref(skb_frag_t *frag)
 {
-	get_page(skb_frag_page(frag));
+	get_netmem(skb_frag_netmem(frag));
 }
 
 /**
@@ -40,7 +40,7 @@ static inline void skb_page_unref(netmem_ref netmem, bool recycle)
 	if (recycle && napi_pp_put_page(netmem))
 		return;
 #endif
-	put_page(netmem_to_page(netmem));
+	put_netmem(netmem);
 }
 
 /**
diff --git a/include/net/netmem.h b/include/net/netmem.h
index 1b58faa4f20f..d30f31878a09 100644
--- a/include/net/netmem.h
+++ b/include/net/netmem.h
@@ -245,4 +245,7 @@ static inline unsigned long netmem_get_dma_addr(netmem_ref netmem)
 	return __netmem_clear_lsb(netmem)->dma_addr;
 }
 
+void get_netmem(netmem_ref netmem);
+void put_netmem(netmem_ref netmem);
+
 #endif /* _NET_NETMEM_H */
diff --git a/net/core/devmem.c b/net/core/devmem.c
index 0b6ed7525b22..f7e06a8cba01 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -322,6 +322,16 @@ void dev_dmabuf_uninstall(struct net_device *dev)
 	}
 }
 
+void net_devmem_get_net_iov(struct net_iov *niov)
+{
+	net_devmem_dmabuf_binding_get(niov->owner->binding);
+}
+
+void net_devmem_put_net_iov(struct net_iov *niov)
+{
+	net_devmem_dmabuf_binding_put(niov->owner->binding);
+}
+
 /*** "Dmabuf devmem memory provider" ***/
 
 int mp_dmabuf_devmem_init(struct page_pool *pool)
diff --git a/net/core/devmem.h b/net/core/devmem.h
index 76099ef9c482..54e30fea80b3 100644
--- a/net/core/devmem.h
+++ b/net/core/devmem.h
@@ -119,6 +119,9 @@ net_devmem_dmabuf_binding_put(struct net_devmem_dmabuf_binding *binding)
 	__net_devmem_dmabuf_binding_free(binding);
 }
 
+void net_devmem_get_net_iov(struct net_iov *niov);
+void net_devmem_put_net_iov(struct net_iov *niov);
+
 struct net_iov *
 net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding);
 void net_devmem_free_dmabuf(struct net_iov *ppiov);
@@ -126,6 +129,14 @@ void net_devmem_free_dmabuf(struct net_iov *ppiov);
 #else
 struct net_devmem_dmabuf_binding;
 
+static inline void net_devmem_get_net_iov(struct net_iov *niov)
+{
+}
+
+static inline void net_devmem_put_net_iov(struct net_iov *niov)
+{
+}
+
 static inline void
 __net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding)
 {
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index a441613a1e6c..815245d5c36b 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -88,6 +88,7 @@
 #include <linux/textsearch.h>
 
 #include "dev.h"
+#include "devmem.h"
 #include "netmem_priv.h"
 #include "sock_destructor.h"
 
@@ -7290,3 +7291,32 @@ bool csum_and_copy_from_iter_full(void *addr, size_t bytes,
 	return false;
 }
 EXPORT_SYMBOL(csum_and_copy_from_iter_full);
+
+void get_netmem(netmem_ref netmem)
+{
+	if (netmem_is_net_iov(netmem)) {
+		/* Assume any net_iov is devmem and route it to
+		 * net_devmem_get_net_iov. As new net_iov types are added they
+		 * need to be checked here.
+		 */
+		net_devmem_get_net_iov(netmem_to_net_iov(netmem));
+		return;
+	}
+	get_page(netmem_to_page(netmem));
+}
+EXPORT_SYMBOL(get_netmem);
+
+void put_netmem(netmem_ref netmem)
+{
+	if (netmem_is_net_iov(netmem)) {
+		/* Assume any net_iov is devmem and route it to
+		 * net_devmem_put_net_iov. As new net_iov types are added they
+		 * need to be checked here.
+		 */
+		net_devmem_put_net_iov(netmem_to_net_iov(netmem));
+		return;
+	}
+
+	put_page(netmem_to_page(netmem));
+}
+EXPORT_SYMBOL(put_netmem);
-- 
2.47.1.613.gc27f4b7a9f-goog


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH RFC net-next v1 4/5] net: devmem TCP tx netlink api
  2024-12-21  0:42 [PATCH RFC net-next v1 0/5] Device memory TCP TX Mina Almasry
                   ` (2 preceding siblings ...)
  2024-12-21  0:42 ` [PATCH RFC net-next v1 3/5] net: add get_netmem/put_netmem support Mina Almasry
@ 2024-12-21  0:42 ` Mina Almasry
  2024-12-21  0:42 ` [PATCH RFC net-next v1 5/5] net: devmem: Implement TX path Mina Almasry
  2024-12-21  4:53 ` [PATCH RFC net-next v1 0/5] Device memory TCP TX Stanislav Fomichev
  5 siblings, 0 replies; 26+ messages in thread
From: Mina Almasry @ 2024-12-21  0:42 UTC (permalink / raw)
  To: netdev, linux-kernel, linux-doc, virtualization, kvm,
	linux-kselftest
  Cc: Mina Almasry, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Simon Horman, Donald Hunter, Jonathan Corbet,
	Andrew Lunn, David Ahern, Michael S. Tsirkin, Jason Wang,
	Xuan Zhuo, Eugenio Pérez, Stefan Hajnoczi,
	Stefano Garzarella, Shuah Khan, Kaiyuan Zhang, Pavel Begunkov,
	Willem de Bruijn, Samiullah Khawaja, Stanislav Fomichev,
	Joe Damato, dw

From: Stanislav Fomichev <sdf@fomichev.me>

Add a bind-tx netlink call to attach a dmabuf for TX; no queue is
required, only the ifindex and the dmabuf fd for attachment.
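
For reference, with the spec change below the new op can be exercised via
the YNL CLI along these lines (the ifindex and fd values are illustrative):

    ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
            --do bind-tx --json '{"ifindex": 2, "fd": 10}'

The reply carries the dmabuf id allocated for the binding.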

Signed-off-by: Stanislav Fomichev <sdf@fomichev.me>
Signed-off-by: Mina Almasry <almasrymina@google.com>

---
 Documentation/netlink/specs/netdev.yaml | 12 ++++++++++++
 include/uapi/linux/netdev.h             |  1 +
 net/core/netdev-genl-gen.c              | 13 +++++++++++++
 net/core/netdev-genl-gen.h              |  1 +
 net/core/netdev-genl.c                  |  6 ++++++
 tools/include/uapi/linux/netdev.h       |  1 +
 6 files changed, 34 insertions(+)

diff --git a/Documentation/netlink/specs/netdev.yaml b/Documentation/netlink/specs/netdev.yaml
index cbb544bd6c84..93f4333e7bc6 100644
--- a/Documentation/netlink/specs/netdev.yaml
+++ b/Documentation/netlink/specs/netdev.yaml
@@ -711,6 +711,18 @@ operations:
             - defer-hard-irqs
             - gro-flush-timeout
             - irq-suspend-timeout
+    -
+      name: bind-tx
+      doc: Bind dmabuf to netdev for TX
+      attribute-set: dmabuf
+      do:
+        request:
+          attributes:
+            - ifindex
+            - fd
+        reply:
+          attributes:
+            - id
 
 kernel-family:
   headers: [ "linux/list.h"]
diff --git a/include/uapi/linux/netdev.h b/include/uapi/linux/netdev.h
index e4be227d3ad6..04364ef5edbe 100644
--- a/include/uapi/linux/netdev.h
+++ b/include/uapi/linux/netdev.h
@@ -203,6 +203,7 @@ enum {
 	NETDEV_CMD_QSTATS_GET,
 	NETDEV_CMD_BIND_RX,
 	NETDEV_CMD_NAPI_SET,
+	NETDEV_CMD_BIND_TX,
 
 	__NETDEV_CMD_MAX,
 	NETDEV_CMD_MAX = (__NETDEV_CMD_MAX - 1)
diff --git a/net/core/netdev-genl-gen.c b/net/core/netdev-genl-gen.c
index a89cbd8d87c3..581b6b9935a5 100644
--- a/net/core/netdev-genl-gen.c
+++ b/net/core/netdev-genl-gen.c
@@ -99,6 +99,12 @@ static const struct nla_policy netdev_napi_set_nl_policy[NETDEV_A_NAPI_IRQ_SUSPE
 	[NETDEV_A_NAPI_IRQ_SUSPEND_TIMEOUT] = { .type = NLA_UINT, },
 };
 
+/* NETDEV_CMD_BIND_TX - do */
+static const struct nla_policy netdev_bind_tx_nl_policy[NETDEV_A_DMABUF_FD + 1] = {
+	[NETDEV_A_DMABUF_IFINDEX] = NLA_POLICY_MIN(NLA_U32, 1),
+	[NETDEV_A_DMABUF_FD] = { .type = NLA_U32, },
+};
+
 /* Ops table for netdev */
 static const struct genl_split_ops netdev_nl_ops[] = {
 	{
@@ -190,6 +196,13 @@ static const struct genl_split_ops netdev_nl_ops[] = {
 		.maxattr	= NETDEV_A_NAPI_IRQ_SUSPEND_TIMEOUT,
 		.flags		= GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
 	},
+	{
+		.cmd		= NETDEV_CMD_BIND_TX,
+		.doit		= netdev_nl_bind_tx_doit,
+		.policy		= netdev_bind_tx_nl_policy,
+		.maxattr	= NETDEV_A_DMABUF_FD,
+		.flags		= GENL_CMD_CAP_DO,
+	},
 };
 
 static const struct genl_multicast_group netdev_nl_mcgrps[] = {
diff --git a/net/core/netdev-genl-gen.h b/net/core/netdev-genl-gen.h
index e09dd7539ff2..c1fed66e92b9 100644
--- a/net/core/netdev-genl-gen.h
+++ b/net/core/netdev-genl-gen.h
@@ -34,6 +34,7 @@ int netdev_nl_qstats_get_dumpit(struct sk_buff *skb,
 				struct netlink_callback *cb);
 int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info);
 int netdev_nl_napi_set_doit(struct sk_buff *skb, struct genl_info *info);
+int netdev_nl_bind_tx_doit(struct sk_buff *skb, struct genl_info *info);
 
 enum {
 	NETDEV_NLGRP_MGMT,
diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
index 2d3ae0cd3ad2..00d3d5851487 100644
--- a/net/core/netdev-genl.c
+++ b/net/core/netdev-genl.c
@@ -907,6 +907,12 @@ int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
 	return err;
 }
 
+/* stub */
+int netdev_nl_bind_tx_doit(struct sk_buff *skb, struct genl_info *info)
+{
+	return 0;
+}
+
 void netdev_nl_sock_priv_init(struct list_head *priv)
 {
 	INIT_LIST_HEAD(priv);
diff --git a/tools/include/uapi/linux/netdev.h b/tools/include/uapi/linux/netdev.h
index e4be227d3ad6..04364ef5edbe 100644
--- a/tools/include/uapi/linux/netdev.h
+++ b/tools/include/uapi/linux/netdev.h
@@ -203,6 +203,7 @@ enum {
 	NETDEV_CMD_QSTATS_GET,
 	NETDEV_CMD_BIND_RX,
 	NETDEV_CMD_NAPI_SET,
+	NETDEV_CMD_BIND_TX,
 
 	__NETDEV_CMD_MAX,
 	NETDEV_CMD_MAX = (__NETDEV_CMD_MAX - 1)
-- 
2.47.1.613.gc27f4b7a9f-goog


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH RFC net-next v1 5/5] net: devmem: Implement TX path
  2024-12-21  0:42 [PATCH RFC net-next v1 0/5] Device memory TCP TX Mina Almasry
                   ` (3 preceding siblings ...)
  2024-12-21  0:42 ` [PATCH RFC net-next v1 4/5] net: devmem TCP tx netlink api Mina Almasry
@ 2024-12-21  0:42 ` Mina Almasry
  2024-12-21  5:09   ` Stanislav Fomichev
                     ` (2 more replies)
  2024-12-21  4:53 ` [PATCH RFC net-next v1 0/5] Device memory TCP TX Stanislav Fomichev
  5 siblings, 3 replies; 26+ messages in thread
From: Mina Almasry @ 2024-12-21  0:42 UTC (permalink / raw)
  To: netdev, linux-kernel, linux-doc, virtualization, kvm,
	linux-kselftest
  Cc: Mina Almasry, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Simon Horman, Donald Hunter, Jonathan Corbet,
	Andrew Lunn, David Ahern, Michael S. Tsirkin, Jason Wang,
	Xuan Zhuo, Eugenio Pérez, Stefan Hajnoczi,
	Stefano Garzarella, Shuah Khan, Kaiyuan Zhang, Pavel Begunkov,
	Willem de Bruijn, Samiullah Khawaja, Stanislav Fomichev,
	Joe Damato, dw

Augment the dmabuf binding to be able to handle TX. In addition to all
the RX binding steps, we also create the tx_vec and tx_iter needed for
the TX path.

Provide API for sendmsg to be able to send dmabufs bound to this device:

- Provide a new dmabuf_tx_cmsg which includes the dmabuf to send from,
  and the offset into the dmabuf to send from.
- MSG_ZEROCOPY with SCM_DEVMEM_DMABUF cmsg indicates send from dma-buf.

Devmem is uncopyable, so piggyback off the existing MSG_ZEROCOPY
implementation, while disabling instances where MSG_ZEROCOPY falls back
to copying.

We additionally look up the dmabuf to send from by id, then pipe the
binding down to the new zerocopy_fill_skb_from_devmem which fills a TX skb
with net_iov netmems instead of the traditional page netmems.

We also special case skb_frag_dma_map to return the dma-address of these
dmabuf net_iovs instead of attempting to map pages.

Based on work by Stanislav Fomichev <sdf@fomichev.me>. A lot of the meat
of the implementation came from devmem TCP RFC v1[1], which included the
TX path, but Stan did all the rebasing on top of netmem/net_iov.

Cc: Stanislav Fomichev <sdf@fomichev.me>
Signed-off-by: Kaiyuan Zhang <kaiyuanz@google.com>
Signed-off-by: Mina Almasry <almasrymina@google.com>

---
 include/linux/skbuff.h                  | 13 +++-
 include/net/sock.h                      |  2 +
 include/uapi/linux/uio.h                |  5 ++
 net/core/datagram.c                     | 40 ++++++++++-
 net/core/devmem.c                       | 91 +++++++++++++++++++++++--
 net/core/devmem.h                       | 40 +++++++++--
 net/core/netdev-genl.c                  | 65 +++++++++++++++++-
 net/core/skbuff.c                       |  8 ++-
 net/core/sock.c                         |  9 +++
 net/ipv4/tcp.c                          | 36 +++++++---
 net/vmw_vsock/virtio_transport_common.c |  4 +-
 11 files changed, 281 insertions(+), 32 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index bb2b751d274a..e90dc0c4d542 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -1711,9 +1711,10 @@ struct ubuf_info *msg_zerocopy_realloc(struct sock *sk, size_t size,
 
 void msg_zerocopy_put_abort(struct ubuf_info *uarg, bool have_uref);
 
+struct net_devmem_dmabuf_binding;
 int __zerocopy_sg_from_iter(struct msghdr *msg, struct sock *sk,
 			    struct sk_buff *skb, struct iov_iter *from,
-			    size_t length);
+			    size_t length, bool is_devmem);
 
 int zerocopy_fill_skb_from_iter(struct sk_buff *skb,
 				struct iov_iter *from, size_t length);
@@ -1721,12 +1722,14 @@ int zerocopy_fill_skb_from_iter(struct sk_buff *skb,
 static inline int skb_zerocopy_iter_dgram(struct sk_buff *skb,
 					  struct msghdr *msg, int len)
 {
-	return __zerocopy_sg_from_iter(msg, skb->sk, skb, &msg->msg_iter, len);
+	return __zerocopy_sg_from_iter(msg, skb->sk, skb, &msg->msg_iter, len,
+				       false);
 }
 
 int skb_zerocopy_iter_stream(struct sock *sk, struct sk_buff *skb,
 			     struct msghdr *msg, int len,
-			     struct ubuf_info *uarg);
+			     struct ubuf_info *uarg,
+			     struct net_devmem_dmabuf_binding *binding);
 
 /* Internal */
 #define skb_shinfo(SKB)	((struct skb_shared_info *)(skb_end_pointer(SKB)))
@@ -3697,6 +3700,10 @@ static inline dma_addr_t __skb_frag_dma_map(struct device *dev,
 					    size_t offset, size_t size,
 					    enum dma_data_direction dir)
 {
+	if (skb_frag_is_net_iov(frag)) {
+		return netmem_to_net_iov(frag->netmem)->dma_addr + offset +
+		       frag->offset;
+	}
 	return dma_map_page(dev, skb_frag_page(frag),
 			    skb_frag_off(frag) + offset, size, dir);
 }
diff --git a/include/net/sock.h b/include/net/sock.h
index d4bdd3286e03..75bd580fe9c6 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -1816,6 +1816,8 @@ struct sockcm_cookie {
 	u32 tsflags;
 	u32 ts_opt_id;
 	u32 priority;
+	u32 dmabuf_id;
+	u64 dmabuf_offset;
 };
 
 static inline void sockcm_init(struct sockcm_cookie *sockc,
diff --git a/include/uapi/linux/uio.h b/include/uapi/linux/uio.h
index 649739e0c404..41490cde95ad 100644
--- a/include/uapi/linux/uio.h
+++ b/include/uapi/linux/uio.h
@@ -38,6 +38,11 @@ struct dmabuf_token {
 	__u32 token_count;
 };
 
+struct dmabuf_tx_cmsg {
+	__u32 dmabuf_id;
+	__u64 dmabuf_offset;
+};
+
 /*
  *	UIO_MAXIOV shall be at least 16 1003.1g (5.4.1.1)
  */
diff --git a/net/core/datagram.c b/net/core/datagram.c
index f0693707aece..3b09995db894 100644
--- a/net/core/datagram.c
+++ b/net/core/datagram.c
@@ -63,6 +63,8 @@
 #include <net/busy_poll.h>
 #include <crypto/hash.h>
 
+#include "devmem.h"
+
 /*
  *	Is a socket 'connection oriented' ?
  */
@@ -692,9 +694,41 @@ int zerocopy_fill_skb_from_iter(struct sk_buff *skb,
 	return 0;
 }
 
+static int zerocopy_fill_skb_from_devmem(struct sk_buff *skb,
+					 struct msghdr *msg,
+					 struct iov_iter *from, int length)
+{
+	int i = skb_shinfo(skb)->nr_frags;
+	int orig_length = length;
+	netmem_ref netmem;
+	size_t size;
+
+	while (length && iov_iter_count(from)) {
+		if (i == MAX_SKB_FRAGS)
+			return -EMSGSIZE;
+
+		size = min_t(size_t, iter_iov_len(from), length);
+		if (!size)
+			return -EFAULT;
+
+		netmem = net_iov_to_netmem(iter_iov(from)->iov_base);
+		get_netmem(netmem);
+		skb_add_rx_frag_netmem(skb, i, netmem, from->iov_offset, size,
+				       PAGE_SIZE);
+
+		iov_iter_advance(from, size);
+		length -= size;
+		i++;
+	}
+
+	iov_iter_advance(&msg->msg_iter, orig_length);
+
+	return 0;
+}
+
 int __zerocopy_sg_from_iter(struct msghdr *msg, struct sock *sk,
 			    struct sk_buff *skb, struct iov_iter *from,
-			    size_t length)
+			    size_t length, bool is_devmem)
 {
 	unsigned long orig_size = skb->truesize;
 	unsigned long truesize;
@@ -702,6 +736,8 @@ int __zerocopy_sg_from_iter(struct msghdr *msg, struct sock *sk,
 
 	if (msg && msg->msg_ubuf && msg->sg_from_iter)
 		ret = msg->sg_from_iter(skb, from, length);
+	else if (unlikely(is_devmem))
+		ret = zerocopy_fill_skb_from_devmem(skb, msg, from, length);
 	else
 		ret = zerocopy_fill_skb_from_iter(skb, from, length);
 
@@ -735,7 +771,7 @@ int zerocopy_sg_from_iter(struct sk_buff *skb, struct iov_iter *from)
 	if (skb_copy_datagram_from_iter(skb, 0, from, copy))
 		return -EFAULT;
 
-	return __zerocopy_sg_from_iter(NULL, NULL, skb, from, ~0U);
+	return __zerocopy_sg_from_iter(NULL, NULL, skb, from, ~0U, NULL);
 }
 EXPORT_SYMBOL(zerocopy_sg_from_iter);
 
diff --git a/net/core/devmem.c b/net/core/devmem.c
index f7e06a8cba01..81f1b715cfa6 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -15,6 +15,7 @@
 #include <net/netdev_queues.h>
 #include <net/netdev_rx_queue.h>
 #include <net/page_pool/helpers.h>
+#include <net/sock.h>
 #include <trace/events/page_pool.h>
 
 #include "devmem.h"
@@ -63,8 +64,10 @@ void __net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding)
 	dma_buf_detach(binding->dmabuf, binding->attachment);
 	dma_buf_put(binding->dmabuf);
 	xa_destroy(&binding->bound_rxqs);
+	kfree(binding->tx_vec);
 	kfree(binding);
 }
+EXPORT_SYMBOL(__net_devmem_dmabuf_binding_free);
 
 struct net_iov *
 net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding)
@@ -109,6 +112,13 @@ void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding)
 	unsigned long xa_idx;
 	unsigned int rxq_idx;
 
+	xa_erase(&net_devmem_dmabuf_bindings, binding->id);
+
+	/* Ensure no TX net_devmem_lookup_dmabuf() calls are in flight after
+	 * the erase.
+	 */
+	synchronize_net();
+
 	if (binding->list.next)
 		list_del(&binding->list);
 
@@ -122,8 +132,6 @@ void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding)
 		WARN_ON(netdev_rx_queue_restart(binding->dev, rxq_idx));
 	}
 
-	xa_erase(&net_devmem_dmabuf_bindings, binding->id);
-
 	net_devmem_dmabuf_binding_put(binding);
 }
 
@@ -174,8 +182,9 @@ int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
 }
 
 struct net_devmem_dmabuf_binding *
-net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
-		       struct netlink_ext_ack *extack)
+net_devmem_bind_dmabuf(struct net_device *dev,
+		       enum dma_data_direction direction,
+		       unsigned int dmabuf_fd, struct netlink_ext_ack *extack)
 {
 	struct net_devmem_dmabuf_binding *binding;
 	static u32 id_alloc_next;
@@ -183,6 +192,7 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
 	struct dma_buf *dmabuf;
 	unsigned int sg_idx, i;
 	unsigned long virtual;
+	struct iovec *iov;
 	int err;
 
 	dmabuf = dma_buf_get(dmabuf_fd);
@@ -218,13 +228,19 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
 	}
 
 	binding->sgt = dma_buf_map_attachment_unlocked(binding->attachment,
-						       DMA_FROM_DEVICE);
+						       direction);
 	if (IS_ERR(binding->sgt)) {
 		err = PTR_ERR(binding->sgt);
 		NL_SET_ERR_MSG(extack, "Failed to map dmabuf attachment");
 		goto err_detach;
 	}
 
+	if (!binding->sgt || binding->sgt->nents == 0) {
+		err = -EINVAL;
+		NL_SET_ERR_MSG(extack, "Empty dmabuf attachment");
+		goto err_detach;
+	}
+
 	/* For simplicity we expect to make PAGE_SIZE allocations, but the
 	 * binding can be much more flexible than that. We may be able to
 	 * allocate MTU sized chunks here. Leave that for future work...
@@ -236,6 +252,19 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
 		goto err_unmap;
 	}
 
+	if (direction == DMA_TO_DEVICE) {
+		virtual = 0;
+		for_each_sgtable_dma_sg(binding->sgt, sg, sg_idx)
+			virtual += sg_dma_len(sg);
+
+		binding->tx_vec = kcalloc(virtual / PAGE_SIZE + 1,
+					  sizeof(struct iovec), GFP_KERNEL);
+		if (!binding->tx_vec) {
+			err = -ENOMEM;
+			goto err_unmap;
+		}
+	}
+
 	virtual = 0;
 	for_each_sgtable_dma_sg(binding->sgt, sg, sg_idx) {
 		dma_addr_t dma_addr = sg_dma_address(sg);
@@ -277,11 +306,21 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
 			niov->owner = owner;
 			page_pool_set_dma_addr_netmem(net_iov_to_netmem(niov),
 						      net_devmem_get_dma_addr(niov));
+
+			if (direction == DMA_TO_DEVICE) {
+				iov = &binding->tx_vec[virtual / PAGE_SIZE + i];
+				iov->iov_base = niov;
+				iov->iov_len = PAGE_SIZE;
+			}
 		}
 
 		virtual += len;
 	}
 
+	if (direction == DMA_TO_DEVICE)
+		iov_iter_init(&binding->tx_iter, WRITE, binding->tx_vec,
+			      virtual / PAGE_SIZE + 1, virtual);
+
 	return binding;
 
 err_free_chunks:
@@ -302,6 +341,21 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
 	return ERR_PTR(err);
 }
 
+struct net_devmem_dmabuf_binding *net_devmem_lookup_dmabuf(u32 id)
+{
+	struct net_devmem_dmabuf_binding *binding;
+
+	rcu_read_lock();
+	binding = xa_load(&net_devmem_dmabuf_bindings, id);
+	if (binding) {
+		if (!net_devmem_dmabuf_binding_get(binding))
+			binding = NULL;
+	}
+	rcu_read_unlock();
+
+	return binding;
+}
+
 void dev_dmabuf_uninstall(struct net_device *dev)
 {
 	struct net_devmem_dmabuf_binding *binding;
@@ -332,6 +386,33 @@ void net_devmem_put_net_iov(struct net_iov *niov)
 	net_devmem_dmabuf_binding_put(niov->owner->binding);
 }
 
+struct net_devmem_dmabuf_binding *
+net_devmem_get_sockc_binding(struct sock *sk, struct sockcm_cookie *sockc)
+{
+	struct net_devmem_dmabuf_binding *binding;
+	int err = 0;
+
+	binding = net_devmem_lookup_dmabuf(sockc->dmabuf_id);
+	if (!binding || !binding->tx_vec) {
+		err = -EINVAL;
+		goto out_err;
+	}
+
+	if (sock_net(sk) != dev_net(binding->dev)) {
+		err = -ENODEV;
+		goto out_err;
+	}
+
+	iov_iter_advance(&binding->tx_iter, sockc->dmabuf_offset);
+	return binding;
+
+out_err:
+	if (binding)
+		net_devmem_dmabuf_binding_put(binding);
+
+	return ERR_PTR(err);
+}
+
 /*** "Dmabuf devmem memory provider" ***/
 
 int mp_dmabuf_devmem_init(struct page_pool *pool)
diff --git a/net/core/devmem.h b/net/core/devmem.h
index 54e30fea80b3..f923c77d9c45 100644
--- a/net/core/devmem.h
+++ b/net/core/devmem.h
@@ -11,6 +11,8 @@
 #define _NET_DEVMEM_H
 
 struct netlink_ext_ack;
+struct sockcm_cookie;
+struct sock;
 
 struct net_devmem_dmabuf_binding {
 	struct dma_buf *dmabuf;
@@ -27,6 +29,10 @@ struct net_devmem_dmabuf_binding {
 	 * The binding undos itself and unmaps the underlying dmabuf once all
 	 * those refs are dropped and the binding is no longer desired or in
 	 * use.
+	 *
+	 * net_devmem_get_net_iov() on dmabuf net_iovs will increment this
+	 * reference, making sure that the binding remains alive until all
+	 * the net_iovs are no longer used.
 	 */
 	refcount_t ref;
 
@@ -42,6 +48,10 @@ struct net_devmem_dmabuf_binding {
 	 * active.
 	 */
 	u32 id;
+
+	/* iov_iter representing all possible net_iov chunks in the dmabuf. */
+	struct iov_iter tx_iter;
+	struct iovec *tx_vec;
 };
 
 #if defined(CONFIG_NET_DEVMEM)
@@ -66,8 +76,10 @@ struct dmabuf_genpool_chunk_owner {
 
 void __net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding);
 struct net_devmem_dmabuf_binding *
-net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
-		       struct netlink_ext_ack *extack);
+net_devmem_bind_dmabuf(struct net_device *dev,
+		       enum dma_data_direction direction,
+		       unsigned int dmabuf_fd, struct netlink_ext_ack *extack);
+struct net_devmem_dmabuf_binding *net_devmem_lookup_dmabuf(u32 id);
 void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding);
 int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
 				    struct net_devmem_dmabuf_binding *binding,
@@ -104,10 +116,10 @@ static inline u32 net_iov_binding_id(const struct net_iov *niov)
 	return net_iov_owner(niov)->binding->id;
 }
 
-static inline void
+static inline bool
 net_devmem_dmabuf_binding_get(struct net_devmem_dmabuf_binding *binding)
 {
-	refcount_inc(&binding->ref);
+	return refcount_inc_not_zero(&binding->ref);
 }
 
 static inline void
@@ -126,6 +138,9 @@ struct net_iov *
 net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding);
 void net_devmem_free_dmabuf(struct net_iov *ppiov);
 
+struct net_devmem_dmabuf_binding *
+net_devmem_get_sockc_binding(struct sock *sk, struct sockcm_cookie *sockc);
+
 #else
 struct net_devmem_dmabuf_binding;
 
@@ -144,11 +159,17 @@ __net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding)
 
 static inline struct net_devmem_dmabuf_binding *
 net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
+		       enum dma_data_direction direction,
 		       struct netlink_ext_ack *extack)
 {
 	return ERR_PTR(-EOPNOTSUPP);
 }
 
+static inline struct net_devmem_dmabuf_binding *net_devmem_lookup_dmabuf(u32 id)
+{
+	return NULL;
+}
+
 static inline void
 net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding)
 {
@@ -186,6 +207,17 @@ static inline u32 net_iov_binding_id(const struct net_iov *niov)
 {
 	return 0;
 }
+
+static inline void
+net_devmem_dmabuf_binding_put(struct net_devmem_dmabuf_binding *binding)
+{
+}
+
+static inline struct net_devmem_dmabuf_binding *
+net_devmem_get_sockc_binding(struct sock *sk, struct sockcm_cookie *sockc)
+{
+	return ERR_PTR(-EOPNOTSUPP);
+}
 #endif
 
 #endif /* _NET_DEVMEM_H */
diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
index 00d3d5851487..b9928bac94da 100644
--- a/net/core/netdev-genl.c
+++ b/net/core/netdev-genl.c
@@ -850,7 +850,8 @@ int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
 		goto err_unlock;
 	}
 
-	binding = net_devmem_bind_dmabuf(netdev, dmabuf_fd, info->extack);
+	binding = net_devmem_bind_dmabuf(netdev, DMA_FROM_DEVICE, dmabuf_fd,
+					 info->extack);
 	if (IS_ERR(binding)) {
 		err = PTR_ERR(binding);
 		goto err_unlock;
@@ -907,10 +908,68 @@ int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
 	return err;
 }
 
-/* stub */
 int netdev_nl_bind_tx_doit(struct sk_buff *skb, struct genl_info *info)
 {
-	return 0;
+	struct net_devmem_dmabuf_binding *binding;
+	struct list_head *sock_binding_list;
+	struct net_device *netdev;
+	u32 ifindex, dmabuf_fd;
+	struct sk_buff *rsp;
+	int err = 0;
+	void *hdr;
+
+	if (GENL_REQ_ATTR_CHECK(info, NETDEV_A_DEV_IFINDEX) ||
+	    GENL_REQ_ATTR_CHECK(info, NETDEV_A_DMABUF_FD))
+		return -EINVAL;
+
+	ifindex = nla_get_u32(info->attrs[NETDEV_A_DEV_IFINDEX]);
+	dmabuf_fd = nla_get_u32(info->attrs[NETDEV_A_DMABUF_FD]);
+
+	sock_binding_list =
+		genl_sk_priv_get(&netdev_nl_family, NETLINK_CB(skb).sk);
+	if (IS_ERR(sock_binding_list))
+		return PTR_ERR(sock_binding_list);
+
+	rsp = genlmsg_new(GENLMSG_DEFAULT_SIZE, GFP_KERNEL);
+	if (!rsp)
+		return -ENOMEM;
+
+	hdr = genlmsg_iput(rsp, info);
+	if (!hdr) {
+		err = -EMSGSIZE;
+		goto err_genlmsg_free;
+	}
+
+	rtnl_lock();
+
+	netdev = __dev_get_by_index(genl_info_net(info), ifindex);
+	if (!netdev || !netif_device_present(netdev)) {
+		err = -ENODEV;
+		goto err_unlock;
+	}
+
+	binding = net_devmem_bind_dmabuf(netdev, DMA_TO_DEVICE, dmabuf_fd,
+					 info->extack);
+	if (IS_ERR(binding)) {
+		err = PTR_ERR(binding);
+		goto err_unlock;
+	}
+
+	list_add(&binding->list, sock_binding_list);
+
+	nla_put_u32(rsp, NETDEV_A_DMABUF_ID, binding->id);
+	genlmsg_end(rsp, hdr);
+
+	rtnl_unlock();
+
+	return genlmsg_reply(rsp, info);
+
+	net_devmem_unbind_dmabuf(binding);
+err_unlock:
+	rtnl_unlock();
+err_genlmsg_free:
+	nlmsg_free(rsp);
+	return err;
 }
 
 void netdev_nl_sock_priv_init(struct list_head *priv)
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 815245d5c36b..eb6b41a32524 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -1882,8 +1882,10 @@ EXPORT_SYMBOL_GPL(msg_zerocopy_ubuf_ops);
 
 int skb_zerocopy_iter_stream(struct sock *sk, struct sk_buff *skb,
 			     struct msghdr *msg, int len,
-			     struct ubuf_info *uarg)
+			     struct ubuf_info *uarg,
+			     struct net_devmem_dmabuf_binding *binding)
 {
+	struct iov_iter *from = binding ? &binding->tx_iter : &msg->msg_iter;
 	int err, orig_len = skb->len;
 
 	if (uarg->ops->link_skb) {
@@ -1901,12 +1903,12 @@ int skb_zerocopy_iter_stream(struct sock *sk, struct sk_buff *skb,
 			return -EEXIST;
 	}
 
-	err = __zerocopy_sg_from_iter(msg, sk, skb, &msg->msg_iter, len);
+	err = __zerocopy_sg_from_iter(msg, sk, skb, from, len, binding != NULL);
 	if (err == -EFAULT || (err == -EMSGSIZE && skb->len == orig_len)) {
 		struct sock *save_sk = skb->sk;
 
 		/* Streams do not free skb on error. Reset to prev state. */
-		iov_iter_revert(&msg->msg_iter, skb->len - orig_len);
+		iov_iter_revert(from, skb->len - orig_len);
 		skb->sk = sk;
 		___pskb_trim(skb, orig_len);
 		skb->sk = save_sk;
diff --git a/net/core/sock.c b/net/core/sock.c
index e7bcc8952248..ed7089310f0d 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -2908,6 +2908,7 @@ EXPORT_SYMBOL(sock_alloc_send_pskb);
 int __sock_cmsg_send(struct sock *sk, struct cmsghdr *cmsg,
 		     struct sockcm_cookie *sockc)
 {
+	struct dmabuf_tx_cmsg dmabuf_tx;
 	u32 tsflags;
 
 	BUILD_BUG_ON(SOF_TIMESTAMPING_LAST == (1 << 31));
@@ -2961,6 +2962,14 @@ int __sock_cmsg_send(struct sock *sk, struct cmsghdr *cmsg,
 		if (!sk_set_prio_allowed(sk, *(u32 *)CMSG_DATA(cmsg)))
 			return -EPERM;
 		sockc->priority = *(u32 *)CMSG_DATA(cmsg);
+		break;
+	case SCM_DEVMEM_DMABUF:
+		if (cmsg->cmsg_len != CMSG_LEN(sizeof(struct dmabuf_tx_cmsg)))
+			return -EINVAL;
+		dmabuf_tx = *(struct dmabuf_tx_cmsg *)CMSG_DATA(cmsg);
+		sockc->dmabuf_id = dmabuf_tx.dmabuf_id;
+		sockc->dmabuf_offset = dmabuf_tx.dmabuf_offset;
+
 		break;
 	default:
 		return -EINVAL;
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 0d704bda6c41..406dc2993742 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1051,6 +1051,7 @@ int tcp_sendmsg_fastopen(struct sock *sk, struct msghdr *msg, int *copied,
 
 int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 {
+	struct net_devmem_dmabuf_binding *binding = NULL;
 	struct tcp_sock *tp = tcp_sk(sk);
 	struct ubuf_info *uarg = NULL;
 	struct sk_buff *skb;
@@ -1063,6 +1064,15 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 
 	flags = msg->msg_flags;
 
+	sockcm_init(&sockc, sk);
+	if (msg->msg_controllen) {
+		err = sock_cmsg_send(sk, msg, &sockc);
+		if (unlikely(err)) {
+			err = -EINVAL;
+			goto out_err;
+		}
+	}
+
 	if ((flags & MSG_ZEROCOPY) && size) {
 		if (msg->msg_ubuf) {
 			uarg = msg->msg_ubuf;
@@ -1080,6 +1090,15 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 			else
 				uarg_to_msgzc(uarg)->zerocopy = 0;
 		}
+
+		if (sockc.dmabuf_id != 0) {
+			binding = net_devmem_get_sockc_binding(sk, &sockc);
+			if (IS_ERR(binding)) {
+				err = PTR_ERR(binding);
+				binding = NULL;
+				goto out_err;
+			}
+		}
 	} else if (unlikely(msg->msg_flags & MSG_SPLICE_PAGES) && size) {
 		if (sk->sk_route_caps & NETIF_F_SG)
 			zc = MSG_SPLICE_PAGES;
@@ -1123,15 +1142,6 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 		/* 'common' sending to sendq */
 	}
 
-	sockcm_init(&sockc, sk);
-	if (msg->msg_controllen) {
-		err = sock_cmsg_send(sk, msg, &sockc);
-		if (unlikely(err)) {
-			err = -EINVAL;
-			goto out_err;
-		}
-	}
-
 	/* This should be in poll */
 	sk_clear_bit(SOCKWQ_ASYNC_NOSPACE, sk);
 
@@ -1248,7 +1258,8 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 					goto wait_for_space;
 			}
 
-			err = skb_zerocopy_iter_stream(sk, skb, msg, copy, uarg);
+			err = skb_zerocopy_iter_stream(sk, skb, msg, copy, uarg,
+						       binding);
 			if (err == -EMSGSIZE || err == -EEXIST) {
 				tcp_mark_push(tp, skb);
 				goto new_segment;
@@ -1329,6 +1340,8 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 	/* msg->msg_ubuf is pinned by the caller so we don't take extra refs */
 	if (uarg && !msg->msg_ubuf)
 		net_zcopy_put(uarg);
+	if (binding)
+		net_devmem_dmabuf_binding_put(binding);
 	return copied + copied_syn;
 
 do_error:
@@ -1346,6 +1359,9 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 		sk->sk_write_space(sk);
 		tcp_chrono_stop(sk, TCP_CHRONO_SNDBUF_LIMITED);
 	}
+	if (binding)
+		net_devmem_dmabuf_binding_put(binding);
+
 	return err;
 }
 EXPORT_SYMBOL_GPL(tcp_sendmsg_locked);
diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
index 9acc13ab3f82..286e6cd5ad34 100644
--- a/net/vmw_vsock/virtio_transport_common.c
+++ b/net/vmw_vsock/virtio_transport_common.c
@@ -104,8 +104,8 @@ static int virtio_transport_fill_skb(struct sk_buff *skb,
 {
 	if (zcopy)
 		return __zerocopy_sg_from_iter(info->msg, NULL, skb,
-					       &info->msg->msg_iter,
-					       len);
+					       &info->msg->msg_iter, len,
+					       false);
 
 	return memcpy_from_msg(skb_put(skb, len), info->msg, len);
 }
-- 
2.47.1.613.gc27f4b7a9f-goog


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [PATCH RFC net-next v1 0/5] Device memory TCP TX
  2024-12-21  0:42 [PATCH RFC net-next v1 0/5] Device memory TCP TX Mina Almasry
                   ` (4 preceding siblings ...)
  2024-12-21  0:42 ` [PATCH RFC net-next v1 5/5] net: devmem: Implement TX path Mina Almasry
@ 2024-12-21  4:53 ` Stanislav Fomichev
  5 siblings, 0 replies; 26+ messages in thread
From: Stanislav Fomichev @ 2024-12-21  4:53 UTC (permalink / raw)
  To: Mina Almasry
  Cc: netdev, linux-kernel, linux-doc, virtualization, kvm,
	linux-kselftest, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Simon Horman, Donald Hunter, Jonathan Corbet,
	Andrew Lunn, David Ahern, Michael S. Tsirkin, Jason Wang,
	Xuan Zhuo, Eugenio Pérez, Stefan Hajnoczi,
	Stefano Garzarella, Shuah Khan, Kaiyuan Zhang, Pavel Begunkov,
	Willem de Bruijn, Samiullah Khawaja, Stanislav Fomichev,
	Joe Damato, dw

On 12/21, Mina Almasry wrote:
> The TX path had been dropped from the Device Memory TCP patch series
> post RFCv1 [1], to make that series slightly easier to review. This
> series rebases the implementation of the TX path on top of the
> net_iov/netmem framework agreed upon and merged. The motivation for
> the feature is thoroughly described in the docs & cover letter of the
> original proposal, so I don't repeat the lengthy descriptions here, but
> they are available in [1].
> 
> Sending this series as RFC as the winder closure is immenient. I plan on
> reposting as non-RFC once the tree re-opens, addressing any feedback
> I receive in the meantime.

Thank you for squeezing it in before the window closure. I will be off
on Monday-Wednesday, but I'll try to test it on Thursday-Friday.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH RFC net-next v1 1/5] net: add devmem TCP TX documentation
  2024-12-21  0:42 ` [PATCH RFC net-next v1 1/5] net: add devmem TCP TX documentation Mina Almasry
@ 2024-12-21  4:56   ` Stanislav Fomichev
  2025-01-27 22:45     ` Mina Almasry
  0 siblings, 1 reply; 26+ messages in thread
From: Stanislav Fomichev @ 2024-12-21  4:56 UTC (permalink / raw)
  To: Mina Almasry
  Cc: netdev, linux-kernel, linux-doc, virtualization, kvm,
	linux-kselftest, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Simon Horman, Donald Hunter, Jonathan Corbet,
	Andrew Lunn, David Ahern, Michael S. Tsirkin, Jason Wang,
	Xuan Zhuo, Eugenio Pérez, Stefan Hajnoczi,
	Stefano Garzarella, Shuah Khan, Kaiyuan Zhang, Pavel Begunkov,
	Willem de Bruijn, Samiullah Khawaja, Stanislav Fomichev,
	Joe Damato, dw

On 12/21, Mina Almasry wrote:
> Add documentation outlining the usage and details of the devmem TCP TX
> API.
> 
> Signed-off-by: Mina Almasry <almasrymina@google.com>
> ---
>  Documentation/networking/devmem.rst | 140 +++++++++++++++++++++++++++-
>  1 file changed, 136 insertions(+), 4 deletions(-)
> 
> diff --git a/Documentation/networking/devmem.rst b/Documentation/networking/devmem.rst
> index d95363645331..9be01cd96ee2 100644
> --- a/Documentation/networking/devmem.rst
> +++ b/Documentation/networking/devmem.rst
> @@ -62,15 +62,15 @@ More Info
>      https://lore.kernel.org/netdev/20240831004313.3713467-1-almasrymina@google.com/
>  
>  
> -Interface
> -=========
> +RX Interface
> +============
>  
>  
>  Example
>  -------
>  
> -tools/testing/selftests/net/ncdevmem.c:do_server shows an example of setting up
> -the RX path of this API.
> +./tools/testing/selftests/drivers/net/hw/ncdevmem:do_server shows an example of
> +setting up the RX path of this API.
>  
>  
>  NIC Setup
> @@ -235,6 +235,138 @@ can be less than the tokens provided by the user in case of:
>  (a) an internal kernel leak bug.
>  (b) the user passed more than 1024 frags.
>  
> +TX Interface
> +============
> +
> +
> +Example
> +-------
> +
> +./tools/testing/selftests/drivers/net/hw/ncdevmem:do_client shows an example of
> +setting up the TX path of this API.
> +
> +
> +NIC Setup
> +---------
> +
> +The user must bind a TX dmabuf to a given NIC using the netlink API::
> +
> +        struct netdev_bind_tx_req *req = NULL;
> +        struct netdev_bind_tx_rsp *rsp = NULL;
> +        struct ynl_error yerr;
> +
> +        *ys = ynl_sock_create(&ynl_netdev_family, &yerr);
> +
> +        req = netdev_bind_tx_req_alloc();
> +        netdev_bind_tx_req_set_ifindex(req, ifindex);
> +        netdev_bind_tx_req_set_fd(req, dmabuf_fd);
> +
> +        rsp = netdev_bind_tx(*ys, req);
> +
> +        tx_dmabuf_id = rsp->id;
> +
> +
> +The netlink API returns a dmabuf_id: a unique ID that refers to this dmabuf
> +that has been bound.
> +
> +The user can unbind the dmabuf from the netdevice by closing the netlink socket
> +that established the binding. We do this so that the binding is automatically
> +unbound even if the userspace process crashes.
> +
> +Note that any reasonably well-behaved dmabuf from any exporter should work with
> +devmem TCP, even if the dmabuf is not actually backed by devmem. An example of
> +this is udmabuf, which wraps user memory (non-devmem) in a dmabuf.
> +
> +Socket Setup
> +------------
> +
> +The user application must use MSG_ZEROCOPY flag when sending devmem TCP. Devmem
> +cannot be copied by the kernel, so the semantics of the devmem TX are similar
> +to the semantics of MSG_ZEROCOPY.
> +
> +	ret = setsockopt(socket_fd, SOL_SOCKET, SO_ZEROCOPY, &opt, sizeof(opt));
> +
> +Sending data
> +--------------
> +
> +Devmem data is sent using the SCM_DEVMEM_DMABUF cmsg.
> +

[...]

> +The user should create a msghdr with iov_base set to NULL and iov_len set to the
> +number of bytes to be sent from the dmabuf.

Should we verify that iov_base is NULL in the kernel?

But also, alternatively, why not go with iov_base == offset? This way we
can support several offsets in a single message, just like regular
sendmsg with host memory. Any reason to not do that?

> +The user passes the dma-buf id via the dmabuf_tx_cmsg.dmabuf_id, and passes the
> +offset into the dmabuf from where to start sending using the
> +dmabuf_tx_cmsg.dmabuf_offset field::
> +

[...]

> +        char ctrl_data[CMSG_SPACE(sizeof(struct dmabuf_tx_cmsg))];
> +        struct dmabuf_tx_cmsg ddmabuf;
> +        struct msghdr msg = {};
> +        struct cmsghdr *cmsg;
> +        uint64_t off = 100;
> +        struct iovec iov;
> +
> +	iov.iov_base = NULL;
> +	iov.iov_len = line_size;

nit: indent seems to be different (tabs vs spaces)

^ permalink raw reply	[flat|nested] 26+ messages in thread
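
For readers skimming the archive, a minimal userspace sketch of the send path
described in the quoted documentation above. The struct layout, SCM_DEVMEM_DMABUF,
SO_ZEROCOPY and the NULL iov_base convention come from the series; the SOL_SOCKET
cmsg level, the fallback constant value, and the send_devmem() helper with its
arguments are assumptions for illustration only.

#include <errno.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/uio.h>

#ifndef SCM_DEVMEM_DMABUF
#define SCM_DEVMEM_DMABUF	79	/* SO_DEVMEM_DMABUF, asm-generic value; arch-specific elsewhere */
#endif

/* Mirrors the uapi struct added by the series (include/uapi/linux/uio.h). */
struct dmabuf_tx_cmsg {
	uint32_t dmabuf_id;
	uint64_t dmabuf_offset;
};

/* Send @len bytes starting at @off of the TX-bound dmabuf @dmabuf_id over an
 * already-connected TCP socket. Sketch only; the MSG_ZEROCOPY completion
 * notifications on the error queue are not handled here.
 */
static ssize_t send_devmem(int fd, uint32_t dmabuf_id, uint64_t off, size_t len)
{
	char ctrl[CMSG_SPACE(sizeof(struct dmabuf_tx_cmsg))] = {};
	struct dmabuf_tx_cmsg ddmabuf = {
		.dmabuf_id = dmabuf_id,
		.dmabuf_offset = off,
	};
	struct iovec iov = {
		.iov_base = NULL,	/* payload lives in the dmabuf, not user memory */
		.iov_len = len,
	};
	struct msghdr msg = {};
	struct cmsghdr *cmsg;
	int opt = 1;

	/* Devmem TX piggybacks on MSG_ZEROCOPY, so SO_ZEROCOPY is required. */
	if (setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &opt, sizeof(opt)))
		return -errno;

	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;
	msg.msg_control = ctrl;
	msg.msg_controllen = sizeof(ctrl);

	cmsg = CMSG_FIRSTHDR(&msg);
	cmsg->cmsg_level = SOL_SOCKET;		/* level assumed here */
	cmsg->cmsg_type = SCM_DEVMEM_DMABUF;
	cmsg->cmsg_len = CMSG_LEN(sizeof(ddmabuf));
	memcpy(CMSG_DATA(cmsg), &ddmabuf, sizeof(ddmabuf));

	return sendmsg(fd, &msg, MSG_ZEROCOPY);
}
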

* Re: [PATCH RFC net-next v1 2/5] selftests: ncdevmem: Implement devmem TCP TX
  2024-12-21  0:42 ` [PATCH RFC net-next v1 2/5] selftests: ncdevmem: Implement devmem TCP TX Mina Almasry
@ 2024-12-21  4:57   ` Stanislav Fomichev
  2024-12-26 21:24   ` Willem de Bruijn
  1 sibling, 0 replies; 26+ messages in thread
From: Stanislav Fomichev @ 2024-12-21  4:57 UTC (permalink / raw)
  To: Mina Almasry
  Cc: netdev, linux-kernel, linux-doc, virtualization, kvm,
	linux-kselftest, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Simon Horman, Donald Hunter, Jonathan Corbet,
	Andrew Lunn, David Ahern, Michael S. Tsirkin, Jason Wang,
	Xuan Zhuo, Eugenio Pérez, Stefan Hajnoczi,
	Stefano Garzarella, Shuah Khan, Kaiyuan Zhang, Pavel Begunkov,
	Willem de Bruijn, Samiullah Khawaja, Stanislav Fomichev,
	Joe Damato, dw

On 12/21, Mina Almasry wrote:
> Add support for devmem TX in ncdevmem.
> 
> This is a combination of the ncdevmem from the devmem TCP series RFCv1
> which included the TX path, and work by Stan to include the netlink API
> and refactored on top of his generic memory_provider support.

Do you plan to include the Python part for the non-RFC series [1]? Or should
I follow up separately?

1: https://github.com/fomichev/linux/commit/df5ef094db57f6c49603e6be5730782e379dd237

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH RFC net-next v1 5/5] net: devmem: Implement TX path
  2024-12-21  0:42 ` [PATCH RFC net-next v1 5/5] net: devmem: Implement TX path Mina Almasry
@ 2024-12-21  5:09   ` Stanislav Fomichev
  2024-12-26 19:10     ` Stanislav Fomichev
  2024-12-26 21:52   ` Willem de Bruijn
  2024-12-28 19:28   ` David Ahern
  2 siblings, 1 reply; 26+ messages in thread
From: Stanislav Fomichev @ 2024-12-21  5:09 UTC (permalink / raw)
  To: Mina Almasry
  Cc: netdev, linux-kernel, linux-doc, virtualization, kvm,
	linux-kselftest, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Simon Horman, Donald Hunter, Jonathan Corbet,
	Andrew Lunn, David Ahern, Michael S. Tsirkin, Jason Wang,
	Xuan Zhuo, Eugenio Pérez, Stefan Hajnoczi,
	Stefano Garzarella, Shuah Khan, Kaiyuan Zhang, Pavel Begunkov,
	Willem de Bruijn, Samiullah Khawaja, Stanislav Fomichev,
	Joe Damato, dw

On 12/21, Mina Almasry wrote:
> Augment dmabuf binding to be able to handle TX. Additional to all the RX
> binding, we also create tx_vec and tx_iter needed for the TX path.
> 
> Provide API for sendmsg to be able to send dmabufs bound to this device:
> 
> - Provide a new dmabuf_tx_cmsg which includes the dmabuf to send from,
>   and the offset into the dmabuf to send from.
> - MSG_ZEROCOPY with SCM_DEVMEM_DMABUF cmsg indicates send from dma-buf.
> 
> Devmem is uncopyable, so piggyback off the existing MSG_ZEROCOPY
> implementation, while disabling instances where MSG_ZEROCOPY falls back
> to copying.
> 
> We additionally look up the dmabuf to send from by id, then pipe the
> binding down to the new zerocopy_fill_skb_from_devmem which fills a TX skb
> with net_iov netmems instead of the traditional page netmems.
> 
> We also special case skb_frag_dma_map to return the dma-address of these
> dmabuf net_iovs instead of attempting to map pages.
> 
> Based on work by Stanislav Fomichev <sdf@fomichev.me>. A lot of the meat
> of the implementation came from devmem TCP RFC v1[1], which included the
> TX path, but Stan did all the rebasing on top of netmem/net_iov.
> 
> Cc: Stanislav Fomichev <sdf@fomichev.me>
> Signed-off-by: Kaiyuan Zhang <kaiyuanz@google.com>
> Signed-off-by: Mina Almasry <almasrymina@google.com>
> 
> ---
>  include/linux/skbuff.h                  | 13 +++-
>  include/net/sock.h                      |  2 +
>  include/uapi/linux/uio.h                |  5 ++
>  net/core/datagram.c                     | 40 ++++++++++-
>  net/core/devmem.c                       | 91 +++++++++++++++++++++++--
>  net/core/devmem.h                       | 40 +++++++++--
>  net/core/netdev-genl.c                  | 65 +++++++++++++++++-
>  net/core/skbuff.c                       |  8 ++-
>  net/core/sock.c                         |  9 +++
>  net/ipv4/tcp.c                          | 36 +++++++---
>  net/vmw_vsock/virtio_transport_common.c |  4 +-
>  11 files changed, 281 insertions(+), 32 deletions(-)
> 
> diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
> index bb2b751d274a..e90dc0c4d542 100644
> --- a/include/linux/skbuff.h
> +++ b/include/linux/skbuff.h
> @@ -1711,9 +1711,10 @@ struct ubuf_info *msg_zerocopy_realloc(struct sock *sk, size_t size,
>  
>  void msg_zerocopy_put_abort(struct ubuf_info *uarg, bool have_uref);
>  
> +struct net_devmem_dmabuf_binding;
>  int __zerocopy_sg_from_iter(struct msghdr *msg, struct sock *sk,
>  			    struct sk_buff *skb, struct iov_iter *from,
> -			    size_t length);
> +			    size_t length, bool is_devmem);
>  
>  int zerocopy_fill_skb_from_iter(struct sk_buff *skb,
>  				struct iov_iter *from, size_t length);
> @@ -1721,12 +1722,14 @@ int zerocopy_fill_skb_from_iter(struct sk_buff *skb,
>  static inline int skb_zerocopy_iter_dgram(struct sk_buff *skb,
>  					  struct msghdr *msg, int len)
>  {
> -	return __zerocopy_sg_from_iter(msg, skb->sk, skb, &msg->msg_iter, len);
> +	return __zerocopy_sg_from_iter(msg, skb->sk, skb, &msg->msg_iter, len,
> +				       false);
>  }
>  
>  int skb_zerocopy_iter_stream(struct sock *sk, struct sk_buff *skb,
>  			     struct msghdr *msg, int len,
> -			     struct ubuf_info *uarg);
> +			     struct ubuf_info *uarg,
> +			     struct net_devmem_dmabuf_binding *binding);
>  
>  /* Internal */
>  #define skb_shinfo(SKB)	((struct skb_shared_info *)(skb_end_pointer(SKB)))
> @@ -3697,6 +3700,10 @@ static inline dma_addr_t __skb_frag_dma_map(struct device *dev,
>  					    size_t offset, size_t size,
>  					    enum dma_data_direction dir)
>  {
> +	if (skb_frag_is_net_iov(frag)) {
> +		return netmem_to_net_iov(frag->netmem)->dma_addr + offset +
> +		       frag->offset;
> +	}
>  	return dma_map_page(dev, skb_frag_page(frag),
>  			    skb_frag_off(frag) + offset, size, dir);
>  }
> diff --git a/include/net/sock.h b/include/net/sock.h
> index d4bdd3286e03..75bd580fe9c6 100644
> --- a/include/net/sock.h
> +++ b/include/net/sock.h
> @@ -1816,6 +1816,8 @@ struct sockcm_cookie {
>  	u32 tsflags;
>  	u32 ts_opt_id;
>  	u32 priority;
> +	u32 dmabuf_id;
> +	u64 dmabuf_offset;
>  };
>  
>  static inline void sockcm_init(struct sockcm_cookie *sockc,
> diff --git a/include/uapi/linux/uio.h b/include/uapi/linux/uio.h
> index 649739e0c404..41490cde95ad 100644
> --- a/include/uapi/linux/uio.h
> +++ b/include/uapi/linux/uio.h
> @@ -38,6 +38,11 @@ struct dmabuf_token {
>  	__u32 token_count;
>  };
>  
> +struct dmabuf_tx_cmsg {
> +	__u32 dmabuf_id;
> +	__u64 dmabuf_offset;
> +};
> +
>  /*
>   *	UIO_MAXIOV shall be at least 16 1003.1g (5.4.1.1)
>   */
> diff --git a/net/core/datagram.c b/net/core/datagram.c
> index f0693707aece..3b09995db894 100644
> --- a/net/core/datagram.c
> +++ b/net/core/datagram.c
> @@ -63,6 +63,8 @@
>  #include <net/busy_poll.h>
>  #include <crypto/hash.h>
>  
> +#include "devmem.h"
> +
>  /*
>   *	Is a socket 'connection oriented' ?
>   */
> @@ -692,9 +694,41 @@ int zerocopy_fill_skb_from_iter(struct sk_buff *skb,
>  	return 0;
>  }
>  
> +static int zerocopy_fill_skb_from_devmem(struct sk_buff *skb,
> +					 struct msghdr *msg,
> +					 struct iov_iter *from, int length)
> +{
> +	int i = skb_shinfo(skb)->nr_frags;
> +	int orig_length = length;
> +	netmem_ref netmem;
> +	size_t size;
> +
> +	while (length && iov_iter_count(from)) {
> +		if (i == MAX_SKB_FRAGS)
> +			return -EMSGSIZE;
> +
> +		size = min_t(size_t, iter_iov_len(from), length);
> +		if (!size)
> +			return -EFAULT;
> +
> +		netmem = net_iov_to_netmem(iter_iov(from)->iov_base);
> +		get_netmem(netmem);
> +		skb_add_rx_frag_netmem(skb, i, netmem, from->iov_offset, size,
> +				       PAGE_SIZE);
> +
> +		iov_iter_advance(from, size);
> +		length -= size;
> +		i++;
> +	}
> +
> +	iov_iter_advance(&msg->msg_iter, orig_length);
> +
> +	return 0;
> +}
> +
>  int __zerocopy_sg_from_iter(struct msghdr *msg, struct sock *sk,
>  			    struct sk_buff *skb, struct iov_iter *from,
> -			    size_t length)
> +			    size_t length, bool is_devmem)
>  {
>  	unsigned long orig_size = skb->truesize;
>  	unsigned long truesize;
> @@ -702,6 +736,8 @@ int __zerocopy_sg_from_iter(struct msghdr *msg, struct sock *sk,
>  
>  	if (msg && msg->msg_ubuf && msg->sg_from_iter)
>  		ret = msg->sg_from_iter(skb, from, length);
> +	else if (unlikely(is_devmem))
> +		ret = zerocopy_fill_skb_from_devmem(skb, msg, from, length);
>  	else
>  		ret = zerocopy_fill_skb_from_iter(skb, from, length);
>  
> @@ -735,7 +771,7 @@ int zerocopy_sg_from_iter(struct sk_buff *skb, struct iov_iter *from)
>  	if (skb_copy_datagram_from_iter(skb, 0, from, copy))
>  		return -EFAULT;
>  
> -	return __zerocopy_sg_from_iter(NULL, NULL, skb, from, ~0U);
> +	return __zerocopy_sg_from_iter(NULL, NULL, skb, from, ~0U, NULL);
>  }
>  EXPORT_SYMBOL(zerocopy_sg_from_iter);
>  
> diff --git a/net/core/devmem.c b/net/core/devmem.c
> index f7e06a8cba01..81f1b715cfa6 100644
> --- a/net/core/devmem.c
> +++ b/net/core/devmem.c
> @@ -15,6 +15,7 @@
>  #include <net/netdev_queues.h>
>  #include <net/netdev_rx_queue.h>
>  #include <net/page_pool/helpers.h>
> +#include <net/sock.h>
>  #include <trace/events/page_pool.h>
>  
>  #include "devmem.h"
> @@ -63,8 +64,10 @@ void __net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding)
>  	dma_buf_detach(binding->dmabuf, binding->attachment);
>  	dma_buf_put(binding->dmabuf);
>  	xa_destroy(&binding->bound_rxqs);
> +	kfree(binding->tx_vec);
>  	kfree(binding);
>  }
> +EXPORT_SYMBOL(__net_devmem_dmabuf_binding_free);
>  
>  struct net_iov *
>  net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding)
> @@ -109,6 +112,13 @@ void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding)
>  	unsigned long xa_idx;
>  	unsigned int rxq_idx;
>  
> +	xa_erase(&net_devmem_dmabuf_bindings, binding->id);
> +
> +	/* Ensure no tx net_devmem_lookup_dmabuf() are in flight after the
> +	 * erase.
> +	 */
> +	synchronize_net();
> +
>  	if (binding->list.next)
>  		list_del(&binding->list);
>  
> @@ -122,8 +132,6 @@ void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding)
>  		WARN_ON(netdev_rx_queue_restart(binding->dev, rxq_idx));
>  	}
>  
> -	xa_erase(&net_devmem_dmabuf_bindings, binding->id);
> -
>  	net_devmem_dmabuf_binding_put(binding);
>  }
>  
> @@ -174,8 +182,9 @@ int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
>  }
>  
>  struct net_devmem_dmabuf_binding *
> -net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
> -		       struct netlink_ext_ack *extack)
> +net_devmem_bind_dmabuf(struct net_device *dev,
> +		       enum dma_data_direction direction,
> +		       unsigned int dmabuf_fd, struct netlink_ext_ack *extack)
>  {
>  	struct net_devmem_dmabuf_binding *binding;
>  	static u32 id_alloc_next;
> @@ -183,6 +192,7 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
>  	struct dma_buf *dmabuf;
>  	unsigned int sg_idx, i;
>  	unsigned long virtual;
> +	struct iovec *iov;
>  	int err;
>  
>  	dmabuf = dma_buf_get(dmabuf_fd);
> @@ -218,13 +228,19 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
>  	}
>  
>  	binding->sgt = dma_buf_map_attachment_unlocked(binding->attachment,
> -						       DMA_FROM_DEVICE);
> +						       direction);
>  	if (IS_ERR(binding->sgt)) {
>  		err = PTR_ERR(binding->sgt);
>  		NL_SET_ERR_MSG(extack, "Failed to map dmabuf attachment");
>  		goto err_detach;
>  	}
>  
> +	if (!binding->sgt || binding->sgt->nents == 0) {
> +		err = -EINVAL;
> +		NL_SET_ERR_MSG(extack, "Empty dmabuf attachment");
> +		goto err_detach;
> +	}
> +
>  	/* For simplicity we expect to make PAGE_SIZE allocations, but the
>  	 * binding can be much more flexible than that. We may be able to
>  	 * allocate MTU sized chunks here. Leave that for future work...
> @@ -236,6 +252,19 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
>  		goto err_unmap;
>  	}
>  
> +	if (direction == DMA_TO_DEVICE) {
> +		virtual = 0;
> +		for_each_sgtable_dma_sg(binding->sgt, sg, sg_idx)
> +			virtual += sg_dma_len(sg);
> +
> +		binding->tx_vec = kcalloc(virtual / PAGE_SIZE + 1,
> +					  sizeof(struct iovec), GFP_KERNEL);
> +		if (!binding->tx_vec) {
> +			err = -ENOMEM;
> +			goto err_unmap;
> +		}
> +	}
> +
>  	virtual = 0;
>  	for_each_sgtable_dma_sg(binding->sgt, sg, sg_idx) {
>  		dma_addr_t dma_addr = sg_dma_address(sg);
> @@ -277,11 +306,21 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
>  			niov->owner = owner;
>  			page_pool_set_dma_addr_netmem(net_iov_to_netmem(niov),
>  						      net_devmem_get_dma_addr(niov));
> +
> +			if (direction == DMA_TO_DEVICE) {
> +				iov = &binding->tx_vec[virtual / PAGE_SIZE + i];
> +				iov->iov_base = niov;
> +				iov->iov_len = PAGE_SIZE;
> +			}
>  		}
>  
>  		virtual += len;
>  	}
>  
> +	if (direction == DMA_TO_DEVICE)
> +		iov_iter_init(&binding->tx_iter, WRITE, binding->tx_vec,
> +			      virtual / PAGE_SIZE + 1, virtual);
> +
>  	return binding;
>  
>  err_free_chunks:
> @@ -302,6 +341,21 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
>  	return ERR_PTR(err);
>  }
>  
> +struct net_devmem_dmabuf_binding *net_devmem_lookup_dmabuf(u32 id)
> +{
> +	struct net_devmem_dmabuf_binding *binding;
> +
> +	rcu_read_lock();
> +	binding = xa_load(&net_devmem_dmabuf_bindings, id);
> +	if (binding) {
> +		if (!net_devmem_dmabuf_binding_get(binding))
> +			binding = NULL;
> +	}
> +	rcu_read_unlock();
> +
> +	return binding;
> +}
> +
>  void dev_dmabuf_uninstall(struct net_device *dev)
>  {
>  	struct net_devmem_dmabuf_binding *binding;
> @@ -332,6 +386,33 @@ void net_devmem_put_net_iov(struct net_iov *niov)
>  	net_devmem_dmabuf_binding_put(niov->owner->binding);
>  }
>  
> +struct net_devmem_dmabuf_binding *
> +net_devmem_get_sockc_binding(struct sock *sk, struct sockcm_cookie *sockc)
> +{
> +	struct net_devmem_dmabuf_binding *binding;
> +	int err = 0;
> +
> +	binding = net_devmem_lookup_dmabuf(sockc->dmabuf_id);
> +	if (!binding || !binding->tx_vec) {
> +		err = -EINVAL;
> +		goto out_err;
> +	}
> +
> +	if (sock_net(sk) != dev_net(binding->dev)) {
> +		err = -ENODEV;
> +		goto out_err;
> +	}
> +
> +	iov_iter_advance(&binding->tx_iter, sockc->dmabuf_offset);
> +	return binding;
> +
> +out_err:
> +	if (binding)
> +		net_devmem_dmabuf_binding_put(binding);
> +
> +	return ERR_PTR(err);
> +}
> +
>  /*** "Dmabuf devmem memory provider" ***/
>  
>  int mp_dmabuf_devmem_init(struct page_pool *pool)
> diff --git a/net/core/devmem.h b/net/core/devmem.h
> index 54e30fea80b3..f923c77d9c45 100644
> --- a/net/core/devmem.h
> +++ b/net/core/devmem.h
> @@ -11,6 +11,8 @@
>  #define _NET_DEVMEM_H
>  
>  struct netlink_ext_ack;
> +struct sockcm_cookie;
> +struct sock;
>  
>  struct net_devmem_dmabuf_binding {
>  	struct dma_buf *dmabuf;
> @@ -27,6 +29,10 @@ struct net_devmem_dmabuf_binding {
>  	 * The binding undos itself and unmaps the underlying dmabuf once all
>  	 * those refs are dropped and the binding is no longer desired or in
>  	 * use.
> +	 *
> +	 * net_devmem_get_net_iov() on dmabuf net_iovs will increment this
> +	 * reference, making sure that that the binding remains alive until all
> +	 * the net_iovs are no longer used.
>  	 */
>  	refcount_t ref;
>  
> @@ -42,6 +48,10 @@ struct net_devmem_dmabuf_binding {
>  	 * active.
>  	 */
>  	u32 id;
> +
> +	/* iov_iter representing all possible net_iov chunks in the dmabuf. */
> +	struct iov_iter tx_iter;
> +	struct iovec *tx_vec;
>  };
>  
>  #if defined(CONFIG_NET_DEVMEM)
> @@ -66,8 +76,10 @@ struct dmabuf_genpool_chunk_owner {
>  
>  void __net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding);
>  struct net_devmem_dmabuf_binding *
> -net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
> -		       struct netlink_ext_ack *extack);
> +net_devmem_bind_dmabuf(struct net_device *dev,
> +		       enum dma_data_direction direction,
> +		       unsigned int dmabuf_fd, struct netlink_ext_ack *extack);
> +struct net_devmem_dmabuf_binding *net_devmem_lookup_dmabuf(u32 id);
>  void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding);
>  int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
>  				    struct net_devmem_dmabuf_binding *binding,
> @@ -104,10 +116,10 @@ static inline u32 net_iov_binding_id(const struct net_iov *niov)
>  	return net_iov_owner(niov)->binding->id;
>  }
>  
> -static inline void
> +static inline bool
>  net_devmem_dmabuf_binding_get(struct net_devmem_dmabuf_binding *binding)
>  {
> -	refcount_inc(&binding->ref);
> +	return refcount_inc_not_zero(&binding->ref);
>  }
>  
>  static inline void
> @@ -126,6 +138,9 @@ struct net_iov *
>  net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding);
>  void net_devmem_free_dmabuf(struct net_iov *ppiov);
>  
> +struct net_devmem_dmabuf_binding *
> +net_devmem_get_sockc_binding(struct sock *sk, struct sockcm_cookie *sockc);
> +
>  #else
>  struct net_devmem_dmabuf_binding;
>  
> @@ -144,11 +159,17 @@ __net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding)
>  
>  static inline struct net_devmem_dmabuf_binding *
>  net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
> +		       enum dma_data_direction direction,
>  		       struct netlink_ext_ack *extack)
>  {
>  	return ERR_PTR(-EOPNOTSUPP);
>  }
>  
> +static inline struct net_devmem_dmabuf_binding *net_devmem_lookup_dmabuf(u32 id)
> +{
> +	return NULL;
> +}
> +
>  static inline void
>  net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding)
>  {
> @@ -186,6 +207,17 @@ static inline u32 net_iov_binding_id(const struct net_iov *niov)
>  {
>  	return 0;
>  }
> +
> +static inline void
> +net_devmem_dmabuf_binding_put(struct net_devmem_dmabuf_binding *binding)
> +{
> +}
> +
> +static inline struct net_devmem_dmabuf_binding *
> +net_devmem_get_sockc_binding(struct sock *sk, struct sockcm_cookie *sockc)
> +{
> +	return ERR_PTR(-EOPNOTSUPP);
> +}
>  #endif
>  
>  #endif /* _NET_DEVMEM_H */
> diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
> index 00d3d5851487..b9928bac94da 100644
> --- a/net/core/netdev-genl.c
> +++ b/net/core/netdev-genl.c
> @@ -850,7 +850,8 @@ int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
>  		goto err_unlock;
>  	}
>  
> -	binding = net_devmem_bind_dmabuf(netdev, dmabuf_fd, info->extack);
> +	binding = net_devmem_bind_dmabuf(netdev, DMA_FROM_DEVICE, dmabuf_fd,
> +					 info->extack);
>  	if (IS_ERR(binding)) {
>  		err = PTR_ERR(binding);
>  		goto err_unlock;
> @@ -907,10 +908,68 @@ int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
>  	return err;
>  }
>  
> -/* stub */
>  int netdev_nl_bind_tx_doit(struct sk_buff *skb, struct genl_info *info)
>  {
> -	return 0;
> +	struct net_devmem_dmabuf_binding *binding;
> +	struct list_head *sock_binding_list;
> +	struct net_device *netdev;
> +	u32 ifindex, dmabuf_fd;
> +	struct sk_buff *rsp;
> +	int err = 0;
> +	void *hdr;
> +
> +	if (GENL_REQ_ATTR_CHECK(info, NETDEV_A_DEV_IFINDEX) ||
> +	    GENL_REQ_ATTR_CHECK(info, NETDEV_A_DMABUF_FD))
> +		return -EINVAL;
> +
> +	ifindex = nla_get_u32(info->attrs[NETDEV_A_DEV_IFINDEX]);
> +	dmabuf_fd = nla_get_u32(info->attrs[NETDEV_A_DMABUF_FD]);
> +
> +	sock_binding_list =
> +		genl_sk_priv_get(&netdev_nl_family, NETLINK_CB(skb).sk);
> +	if (IS_ERR(sock_binding_list))
> +		return PTR_ERR(sock_binding_list);
> +
> +	rsp = genlmsg_new(GENLMSG_DEFAULT_SIZE, GFP_KERNEL);
> +	if (!rsp)
> +		return -ENOMEM;
> +
> +	hdr = genlmsg_iput(rsp, info);
> +	if (!hdr) {
> +		err = -EMSGSIZE;
> +		goto err_genlmsg_free;
> +	}
> +
> +	rtnl_lock();
> +
> +	netdev = __dev_get_by_index(genl_info_net(info), ifindex);
> +	if (!netdev || !netif_device_present(netdev)) {
> +		err = -ENODEV;
> +		goto err_unlock;
> +	}
> +
> +	binding = net_devmem_bind_dmabuf(netdev, DMA_TO_DEVICE, dmabuf_fd,
> +					 info->extack);
> +	if (IS_ERR(binding)) {
> +		err = PTR_ERR(binding);
> +		goto err_unlock;
> +	}
> +
> +	list_add(&binding->list, sock_binding_list);
> +
> +	nla_put_u32(rsp, NETDEV_A_DMABUF_ID, binding->id);
> +	genlmsg_end(rsp, hdr);
> +
> +	rtnl_unlock();
> +
> +	return genlmsg_reply(rsp, info);
> +
> +	net_devmem_unbind_dmabuf(binding);
> +err_unlock:
> +	rtnl_unlock();
> +err_genlmsg_free:
> +	nlmsg_free(rsp);
> +	return err;
>  }
>  
>  void netdev_nl_sock_priv_init(struct list_head *priv)
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index 815245d5c36b..eb6b41a32524 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -1882,8 +1882,10 @@ EXPORT_SYMBOL_GPL(msg_zerocopy_ubuf_ops);
>  
>  int skb_zerocopy_iter_stream(struct sock *sk, struct sk_buff *skb,
>  			     struct msghdr *msg, int len,
> -			     struct ubuf_info *uarg)
> +			     struct ubuf_info *uarg,
> +			     struct net_devmem_dmabuf_binding *binding)
>  {
> +	struct iov_iter *from = binding ? &binding->tx_iter : &msg->msg_iter;

For tx, I feel like this needs a copy of binding->tx_iter:

	struct iov_iter tx_iter = binding->tx_iter;
	struct iov_iter *from = binding ? &tx_iter : &msg->msg_iter;

Or something similar (rewind?). The tx_iter is advanced in
zerocopy_fill_skb_from_devmem but never reset back, it seems (or I'm
missing something). In your case, if you call sendmsg twice with the same
offset, the second one will copy from 2*offset.

But I'd vote to move away from iov_iter on the tx side. From my initial
testing, the perf wasn't great. It's basically O(n_chunks) on every
message (iov_iter_advance does linear walk over the chunks). So when we call
several sendmsg() with 4k chunks and increasing offset,
this blows up to O(n_chunks^2). I tried to "fix" it by caching previous iter
position so the next sendmsg can continue without rewinding, but it was
a bit messy.

I have this simplified constant time version in [1], pls take a look
and lmk what you think (zerocopy_fill_skb_from_devmem and
net_devmem_get_niov_at). It is significantly faster and much simpler.

1: https://github.com/fomichev/linux/commit/3b3ad4f36771a376c204727e5a167c4993d4c65a#diff-a3bb611c084dada9bcd3ef6c5f9a17f6c5d58eb8e168aa9b07241841e96f6b94R710

^ permalink raw reply	[flat|nested] 26+ messages in thread
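
For context on the constant-time suggestion above, a rough sketch of how an
offset-to-chunk lookup could avoid the iov_iter walk, given the flat tx_vec of
PAGE_SIZE chunks built in the quoted patch. The helper name and bounds check
here are illustrative and not necessarily what the code in [1] does.

/* Map a byte offset into the dmabuf to the net_iov backing that PAGE_SIZE
 * chunk: one division instead of an O(n_chunks) iov_iter_advance().
 */
static struct net_iov *
net_devmem_get_niov_at(struct net_devmem_dmabuf_binding *binding,
		       size_t virt_off)
{
	size_t idx = virt_off / PAGE_SIZE;

	if (virt_off >= binding->dmabuf->size)
		return NULL;

	/* tx_vec[idx].iov_base was set to the chunk's net_iov at bind time. */
	return binding->tx_vec[idx].iov_base;
}
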

* Re: [PATCH RFC net-next v1 3/5] net: add get_netmem/put_netmem support
  2024-12-21  0:42 ` [PATCH RFC net-next v1 3/5] net: add get_netmem/put_netmem support Mina Almasry
@ 2024-12-26 19:07   ` Stanislav Fomichev
  2025-01-27 22:47     ` Mina Almasry
  0 siblings, 1 reply; 26+ messages in thread
From: Stanislav Fomichev @ 2024-12-26 19:07 UTC (permalink / raw)
  To: Mina Almasry
  Cc: netdev, linux-kernel, linux-doc, virtualization, kvm,
	linux-kselftest, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Simon Horman, Donald Hunter, Jonathan Corbet,
	Andrew Lunn, David Ahern, Michael S. Tsirkin, Jason Wang,
	Xuan Zhuo, Eugenio Pérez, Stefan Hajnoczi,
	Stefano Garzarella, Shuah Khan, Kaiyuan Zhang, Pavel Begunkov,
	Willem de Bruijn, Samiullah Khawaja, Stanislav Fomichev,
	Joe Damato, dw

On 12/21, Mina Almasry wrote:
> Currently net_iovs support only pp ref counts, and do not support a
> page ref equivalent.
> 
> This is fine for the RX path as net_iovs are used exclusively with the
> pp and only pp refcounting is needed there. The TX path however does not
> use pp ref counts, thus, support for get_page/put_page equivalent is
> needed for netmem.
> 
> Support get_netmem/put_netmem. Check the type of the netmem before
> passing it to page or net_iov specific code to obtain a page ref
> equivalent.
> 
> For dmabuf net_iovs, we obtain a ref on the underlying binding. This
> ensures the entire binding doesn't disappear until all the net_iovs have
> been put_netmem'ed. We do not need to track the refcount of individual
> dmabuf net_iovs as we don't allocate/free them from a pool similar to
> what the buddy allocator does for pages.
> 
> This code is written to be extensible by other net_iov implementers.
> get_netmem/put_netmem will check the type of the netmem and route it to
> the correct helper:
> 
> pages -> [get|put]_page()
> dmabuf net_iovs -> net_devmem_[get|put]_net_iov()
> new net_iovs ->	new helpers
> 
> Signed-off-by: Mina Almasry <almasrymina@google.com>
> 
> ---
>  include/linux/skbuff_ref.h |  4 ++--
>  include/net/netmem.h       |  3 +++
>  net/core/devmem.c          | 10 ++++++++++
>  net/core/devmem.h          | 11 +++++++++++
>  net/core/skbuff.c          | 30 ++++++++++++++++++++++++++++++
>  5 files changed, 56 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/skbuff_ref.h b/include/linux/skbuff_ref.h
> index 0f3c58007488..9e49372ef1a0 100644
> --- a/include/linux/skbuff_ref.h
> +++ b/include/linux/skbuff_ref.h
> @@ -17,7 +17,7 @@
>   */
>  static inline void __skb_frag_ref(skb_frag_t *frag)
>  {
> -	get_page(skb_frag_page(frag));
> +	get_netmem(skb_frag_netmem(frag));
>  }
>  
>  /**
> @@ -40,7 +40,7 @@ static inline void skb_page_unref(netmem_ref netmem, bool recycle)
>  	if (recycle && napi_pp_put_page(netmem))
>  		return;
>  #endif

[..]

> -	put_page(netmem_to_page(netmem));
> +	put_netmem(netmem);

I moved the release operation onto a workqueue in my series [1] to avoid
calling dmabuf detach (which can sleep) from the socket close path
(which is called with bh disabled). You should probably do something similar,
see the trace attached below.

1: https://github.com/fomichev/linux/commit/3b3ad4f36771a376c204727e5a167c4993d4c65a#diff-3c58b866674b2f9beb5ac7349f81566e4df595c25c647710203549589d450f2dR436

(the condition to trigger that is to have an skb in the write queue
and call close from the userspace)

[    1.548495] BUG: sleeping function called from invalid context at drivers/dma-buf/dma-buf.c:1255
[    1.548741] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 149, name: ncdevmem
[    1.548926] preempt_count: 201, expected: 0
[    1.549026] RCU nest depth: 0, expected: 0
[    1.549197]
[    1.549237] =============================
[    1.549331] [ BUG: Invalid wait context ]
[    1.549425] 6.13.0-rc3-00770-gbc9ef9606dc9-dirty #15 Tainted: G        W
[    1.549609] -----------------------------
[    1.549704] ncdevmem/149 is trying to lock:
[    1.549801] ffff8880066701c0 (reservation_ww_class_mutex){+.+.}-{4:4}, at: dma_buf_unmap_attachment_unlocked+0x4b/0x90
[    1.550051] other info that might help us debug this:
[    1.550167] context-{5:5}
[    1.550229] 3 locks held by ncdevmem/149:
[    1.550322]  #0: ffff888005730208 (&sb->s_type->i_mutex_key#11){+.+.}-{4:4}, at: sock_close+0x40/0xf0
[    1.550530]  #1: ffff88800b148f98 (sk_lock-AF_INET6){+.+.}-{0:0}, at: tcp_close+0x19/0x80
[    1.550731]  #2: ffff88800b148f18 (slock-AF_INET6){+.-.}-{3:3}, at: __tcp_close+0x185/0x4b0
[    1.550921] stack backtrace:
[    1.550990] CPU: 0 UID: 0 PID: 149 Comm: ncdevmem Tainted: G        W          6.13.0-rc3-00770-gbc9ef9606dc9-dirty #15
[    1.551233] Tainted: [W]=WARN
[    1.551304] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Arch Linux 1.16.3-1-1 04/01/2014
[    1.551518] Call Trace:
[    1.551584]  <TASK>
[    1.551636]  dump_stack_lvl+0x86/0xc0
[    1.551723]  __lock_acquire+0xb0f/0xc30
[    1.551814]  ? dma_buf_unmap_attachment_unlocked+0x4b/0x90
[    1.551941]  lock_acquire+0xf1/0x2a0
[    1.552026]  ? dma_buf_unmap_attachment_unlocked+0x4b/0x90
[    1.552152]  ? dma_buf_unmap_attachment_unlocked+0x4b/0x90
[    1.552281]  ? dma_buf_unmap_attachment_unlocked+0x4b/0x90
[    1.552408]  __ww_mutex_lock+0x121/0x1060
[    1.552503]  ? dma_buf_unmap_attachment_unlocked+0x4b/0x90
[    1.552648]  ww_mutex_lock+0x3d/0xa0
[    1.552733]  dma_buf_unmap_attachment_unlocked+0x4b/0x90
[    1.552857]  __net_devmem_dmabuf_binding_free+0x56/0xb0
[    1.552979]  skb_release_data+0x120/0x1f0
[    1.553074]  __kfree_skb+0x29/0xa0
[    1.553156]  tcp_write_queue_purge+0x41/0x310
[    1.553259]  tcp_v4_destroy_sock+0x127/0x320
[    1.553363]  ? __tcp_close+0x169/0x4b0
[    1.553452]  inet_csk_destroy_sock+0x53/0x130
[    1.553560]  __tcp_close+0x421/0x4b0
[    1.553646]  tcp_close+0x24/0x80
[    1.553724]  inet_release+0x5d/0x90
[    1.553806]  sock_close+0x4a/0xf0
[    1.553886]  __fput+0x9c/0x2b0
[    1.553960]  task_work_run+0x89/0xc0
[    1.554046]  do_exit+0x27f/0x980
[    1.554125]  do_group_exit+0xa4/0xb0
[    1.554211]  __x64_sys_exit_group+0x17/0x20
[    1.554309]  x64_sys_call+0x21a0/0x21a0
[    1.554400]  do_syscall_64+0xec/0x1d0
[    1.554487]  ? exc_page_fault+0x8a/0xf0
[    1.554585]  entry_SYSCALL_64_after_hwframe+0x77/0x7f
[    1.554703] RIP: 0033:0x7f2f8a27abcd


^ permalink raw reply	[flat|nested] 26+ messages in thread
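
A minimal sketch of the deferred-release idea above, assuming a work_struct is
added to the binding; the unbind_w field and the helper names are made up for
illustration, the actual change is in [1].

static void net_devmem_dmabuf_free_work(struct work_struct *work)
{
	struct net_devmem_dmabuf_binding *binding =
		container_of(work, struct net_devmem_dmabuf_binding, unbind_w);

	/* Process context: dma_buf_unmap_attachment()/detach() may sleep here. */
	__net_devmem_dmabuf_binding_free(binding);
}

static void
net_devmem_dmabuf_binding_put_deferred(struct net_devmem_dmabuf_binding *binding)
{
	if (!refcount_dec_and_test(&binding->ref))
		return;

	/* The last put can happen with BH disabled (e.g. skb release on socket
	 * close), so punt the sleeping teardown to a workqueue.
	 */
	INIT_WORK(&binding->unbind_w, net_devmem_dmabuf_free_work);
	schedule_work(&binding->unbind_w);
}
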

* Re: [PATCH RFC net-next v1 5/5] net: devmem: Implement TX path
  2024-12-21  5:09   ` Stanislav Fomichev
@ 2024-12-26 19:10     ` Stanislav Fomichev
  2025-01-27 22:52       ` Mina Almasry
  0 siblings, 1 reply; 26+ messages in thread
From: Stanislav Fomichev @ 2024-12-26 19:10 UTC (permalink / raw)
  To: Mina Almasry
  Cc: netdev, linux-kernel, linux-doc, virtualization, kvm,
	linux-kselftest, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Simon Horman, Donald Hunter, Jonathan Corbet,
	Andrew Lunn, David Ahern, Michael S. Tsirkin, Jason Wang,
	Xuan Zhuo, Eugenio Pérez, Stefan Hajnoczi,
	Stefano Garzarella, Shuah Khan, Kaiyuan Zhang, Pavel Begunkov,
	Willem de Bruijn, Samiullah Khawaja, Stanislav Fomichev,
	Joe Damato, dw

On 12/20, Stanislav Fomichev wrote:
> On 12/21, Mina Almasry wrote:
> > [...]
> > diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> > index 815245d5c36b..eb6b41a32524 100644
> > --- a/net/core/skbuff.c
> > +++ b/net/core/skbuff.c
> > @@ -1882,8 +1882,10 @@ EXPORT_SYMBOL_GPL(msg_zerocopy_ubuf_ops);
> >  
> >  int skb_zerocopy_iter_stream(struct sock *sk, struct sk_buff *skb,
> >  			     struct msghdr *msg, int len,
> > -			     struct ubuf_info *uarg)
> > +			     struct ubuf_info *uarg,
> > +			     struct net_devmem_dmabuf_binding *binding)
> >  {
> > +	struct iov_iter *from = binding ? &binding->tx_iter : &msg->msg_iter;
> 
> For tx, I feel like this needs a copy of binding->tx_iter:
> 
> 	struct iov_iter tx_iter = binding->tx_iter;
> 	struct iov_iter *from = binding ? &tx_iter : &msg->msg_iter;
> 
> Or something similar (rewind?). The tx_iter is advanced in
> zerocopy_fill_skb_from_devmem but never reset back, it seems (or I'm
> missing something). In your case, if you call sendmsg twice with the same
> offset, the second one will copy from 2*offset.

Can confirm that it's broken. We should probably have a mode in ncdevmem
that calls sendmsg with fixed-size chunks, something like this:

@@ -912,7 +916,11 @@ static int do_client(struct memory_buffer *mem)
                                line_size, off);

                        iov.iov_base = NULL;
-                       iov.iov_len = line_size;
+                       iov.iov_len = line_size <= 4096 ? line_size : 4096;

                        msg.msg_iov = &iov;
                        msg.msg_iovlen = 1;
@@ -933,6 +941,8 @@ static int do_client(struct memory_buffer *mem)
                        ret = sendmsg(socket_fd, &msg, MSG_ZEROCOPY);
                        if (ret < 0)
                                error(1, errno, "Failed sendmsg");
+                       if (ret == 0)
+                               break;

                        fprintf(stderr, "sendmsg_ret=%d\n", ret);

I can put it on my todo to extend the selftests..

^ permalink raw reply	[flat|nested] 26+ messages in thread
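
Spelling the fixed-size-chunk mode out a bit more, a sketch of what such a test
loop could look like, assuming a send_devmem() helper along the lines of the
sketch earlier in the thread; total_len and the 4 KB chunk size are placeholders.

#include <errno.h>
#include <error.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>

/* Push the whole buffer in 4 KB pieces, bumping dmabuf_offset each time, so
 * repeated sendmsg() calls exercise the offset handling on the TX side.
 */
static void send_in_chunks(int fd, uint32_t dmabuf_id, size_t total_len)
{
	uint64_t off = 0;

	while (off < total_len) {
		size_t chunk = total_len - off;
		ssize_t ret;

		if (chunk > 4096)
			chunk = 4096;

		ret = send_devmem(fd, dmabuf_id, off, chunk);
		if (ret < 0)
			error(1, errno, "sendmsg");
		if (ret == 0)
			break;

		off += ret;
	}
}
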

* Re: [PATCH RFC net-next v1 2/5] selftests: ncdevmem: Implement devmem TCP TX
  2024-12-21  0:42 ` [PATCH RFC net-next v1 2/5] selftests: ncdevmem: Implement devmem TCP TX Mina Almasry
  2024-12-21  4:57   ` Stanislav Fomichev
@ 2024-12-26 21:24   ` Willem de Bruijn
  1 sibling, 0 replies; 26+ messages in thread
From: Willem de Bruijn @ 2024-12-26 21:24 UTC (permalink / raw)
  To: Mina Almasry, netdev, linux-kernel, linux-doc, virtualization,
	kvm, linux-kselftest
  Cc: Mina Almasry, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Simon Horman, Donald Hunter, Jonathan Corbet,
	Andrew Lunn, David Ahern, Michael S. Tsirkin, Jason Wang,
	Xuan Zhuo, Eugenio Pérez, Stefan Hajnoczi,
	Stefano Garzarella, Shuah Khan, Kaiyuan Zhang, Pavel Begunkov,
	Willem de Bruijn, Samiullah Khawaja, Stanislav Fomichev,
	Joe Damato, dw

Mina Almasry wrote:
> Add support for devmem TX in ncdevmem.
> 
> This is a combination of the ncdevmem from the devmem TCP series RFCv1
> which included the TX path, and work by Stan to include the netlink API,
> refactored on top of his generic memory_provider support.
> 
> Signed-off-by: Mina Almasry <almasrymina@google.com>
> Signed-off-by: Stanislav Fomichev <sdf@fomichev.me>
> ---
>  .../selftests/drivers/net/hw/ncdevmem.c       | 261 +++++++++++++++++-
>  1 file changed, 259 insertions(+), 2 deletions(-)
> 

> +static unsigned long gettimeofday_ms(void)
> +{
> +	struct timeval tv;
> +
> +	gettimeofday(&tv, NULL);
> +	return (tv.tv_sec * 1000) + (tv.tv_usec / 1000);
> +}

Consider uint64_t and 1000ULL to avoid overflow on 32-bit platforms in
2034 (currently at 1.7*10^9 sec since start of epoch).

Or use clock_gettime CLOCK_MONOTONIC.
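
For instance (untested sketch, helper name made up):

        #include <stdint.h>
        #include <time.h>

        static uint64_t gettime_mono_ms(void)
        {
                struct timespec ts;

                clock_gettime(CLOCK_MONOTONIC, &ts);
                return (uint64_t)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
        }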

> +
> +static int do_poll(int fd)
> +{
> +	struct pollfd pfd;
> +	int ret;
> +
> +	pfd.events = POLLERR;
> +	pfd.revents = 0;

Not important, but since this is demonstrator code:
no need to set POLLERR on events. Errors cannot be masked.

> +	pfd.fd = fd;
> +
> +	ret = poll(&pfd, 1, waittime_ms);
> +	if (ret == -1)
> +		error(1, errno, "poll");
> +
> +	return ret && (pfd.revents & POLLERR);
> +}
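
i.e., minimally something like (untested sketch):

        static int do_poll(int fd)
        {
                struct pollfd pfd = { .fd = fd };
                int ret;

                ret = poll(&pfd, 1, waittime_ms);
                if (ret == -1)
                        error(1, errno, "poll");

                return ret && (pfd.revents & POLLERR);
        }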

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH RFC net-next v1 5/5] net: devmem: Implement TX path
  2024-12-21  0:42 ` [PATCH RFC net-next v1 5/5] net: devmem: Implement TX path Mina Almasry
  2024-12-21  5:09   ` Stanislav Fomichev
@ 2024-12-26 21:52   ` Willem de Bruijn
  2025-01-28  0:06     ` Mina Almasry
  2024-12-28 19:28   ` David Ahern
  2 siblings, 1 reply; 26+ messages in thread
From: Willem de Bruijn @ 2024-12-26 21:52 UTC (permalink / raw)
  To: Mina Almasry, netdev, linux-kernel, linux-doc, virtualization,
	kvm, linux-kselftest
  Cc: Mina Almasry, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Simon Horman, Donald Hunter, Jonathan Corbet,
	Andrew Lunn, David Ahern, Michael S. Tsirkin, Jason Wang,
	Xuan Zhuo, Eugenio Pérez, Stefan Hajnoczi,
	Stefano Garzarella, Shuah Khan, Kaiyuan Zhang, Pavel Begunkov,
	Willem de Bruijn, Samiullah Khawaja, Stanislav Fomichev,
	Joe Damato, dw

Mina Almasry wrote:
> Augment dmabuf binding to be able to handle TX. In addition to all the RX
> binding state, we also create the tx_vec and tx_iter needed for the TX path.
> 
> Provide API for sendmsg to be able to send dmabufs bound to this device:
> 
> - Provide a new dmabuf_tx_cmsg which includes the dmabuf to send from,
>   and the offset into the dmabuf to send from.
> - MSG_ZEROCOPY with SCM_DEVMEM_DMABUF cmsg indicates send from dma-buf.
> 
> Devmem is uncopyable, so piggyback off the existing MSG_ZEROCOPY
> implementation, while disabling instances where MSG_ZEROCOPY falls back
> to copying.
> 
> We additionally look up the dmabuf to send from by id, then pipe the
> binding down to the new zerocopy_fill_skb_from_devmem which fills a TX skb
> with net_iov netmems instead of the traditional page netmems.
> 
> We also special case skb_frag_dma_map to return the dma-address of these
> dmabuf net_iovs instead of attempting to map pages.
> 
> Based on work by Stanislav Fomichev <sdf@fomichev.me>. A lot of the meat
> of the implementation came from devmem TCP RFC v1[1], which included the
> TX path, but Stan did all the rebasing on top of netmem/net_iov.
> 
> Cc: Stanislav Fomichev <sdf@fomichev.me>
> Signed-off-by: Kaiyuan Zhang <kaiyuanz@google.com>
> Signed-off-by: Mina Almasry <almasrymina@google.com>
> 
> ---
>  include/linux/skbuff.h                  | 13 +++-
>  include/net/sock.h                      |  2 +
>  include/uapi/linux/uio.h                |  5 ++
>  net/core/datagram.c                     | 40 ++++++++++-
>  net/core/devmem.c                       | 91 +++++++++++++++++++++++--
>  net/core/devmem.h                       | 40 +++++++++--
>  net/core/netdev-genl.c                  | 65 +++++++++++++++++-
>  net/core/skbuff.c                       |  8 ++-
>  net/core/sock.c                         |  9 +++
>  net/ipv4/tcp.c                          | 36 +++++++---
>  net/vmw_vsock/virtio_transport_common.c |  4 +-
>  11 files changed, 281 insertions(+), 32 deletions(-)
> 

> +static int zerocopy_fill_skb_from_devmem(struct sk_buff *skb,
> +					 struct msghdr *msg,
> +					 struct iov_iter *from, int length)
> +{
> +	int i = skb_shinfo(skb)->nr_frags;
> +	int orig_length = length;
> +	netmem_ref netmem;
> +	size_t size;
> +
> +	while (length && iov_iter_count(from)) {
> +		if (i == MAX_SKB_FRAGS)
> +			return -EMSGSIZE;
> +
> +		size = min_t(size_t, iter_iov_len(from), length);
> +		if (!size)
> +			return -EFAULT;

On error, should caller skb_zerocopy_iter_stream rewind from, rather
than (or as well as) msg->msg_iter?
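
e.g., for the devmem case, roughly mirroring the existing msg->msg_iter
revert in that caller (sketch only):

        if (binding)
                iov_iter_revert(from, skb->len - orig_len);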
> +
> +		netmem = net_iov_to_netmem(iter_iov(from)->iov_base);
> +		get_netmem(netmem);
> +		skb_add_rx_frag_netmem(skb, i, netmem, from->iov_offset, size,
> +				       PAGE_SIZE);
> +
> +		iov_iter_advance(from, size);
> +		length -= size;
> +		i++;
> +	}
> +
> +	iov_iter_advance(&msg->msg_iter, orig_length);

What does this do if sendmsg is called with NULL as buffer?
> +
> +	return 0;
> +}
> +
>  int __zerocopy_sg_from_iter(struct msghdr *msg, struct sock *sk,
>  			    struct sk_buff *skb, struct iov_iter *from,
> -			    size_t length)
> +			    size_t length, bool is_devmem)
>  {
>  	unsigned long orig_size = skb->truesize;
>  	unsigned long truesize;
> @@ -702,6 +736,8 @@ int __zerocopy_sg_from_iter(struct msghdr *msg, struct sock *sk,
>  
>  	if (msg && msg->msg_ubuf && msg->sg_from_iter)
>  		ret = msg->sg_from_iter(skb, from, length);
> +	else if (unlikely(is_devmem))
> +		ret = zerocopy_fill_skb_from_devmem(skb, msg, from, length);
>  	else
>  		ret = zerocopy_fill_skb_from_iter(skb, from, length);
>  
> @@ -735,7 +771,7 @@ int zerocopy_sg_from_iter(struct sk_buff *skb, struct iov_iter *from)
>  	if (skb_copy_datagram_from_iter(skb, 0, from, copy))
>  		return -EFAULT;
>  
> -	return __zerocopy_sg_from_iter(NULL, NULL, skb, from, ~0U);
> +	return __zerocopy_sg_from_iter(NULL, NULL, skb, from, ~0U, NULL);
>  }
>  EXPORT_SYMBOL(zerocopy_sg_from_iter);

>  struct net_iov *
>  net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding)
> @@ -109,6 +112,13 @@ void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding)
>  	unsigned long xa_idx;
>  	unsigned int rxq_idx;
>  
> +	xa_erase(&net_devmem_dmabuf_bindings, binding->id);
> +
> +	/* Ensure no tx net_devmem_lookup_dmabuf() are in flight after the
> +	 * erase.
> +	 */
> +	synchronize_net();
> +

What precisely does this protect?

synchronize_net() ensures no packet is in flight inside an rcu
readside section. But a packet can still be in flight, such as posted
to the device or queued in a qdisc.

>  	if (binding->list.next)
>  		list_del(&binding->list);
>  
> @@ -122,8 +132,6 @@ void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding)
>  		WARN_ON(netdev_rx_queue_restart(binding->dev, rxq_idx));
>  	}
>  
> -	xa_erase(&net_devmem_dmabuf_bindings, binding->id);
> -
>  	net_devmem_dmabuf_binding_put(binding);
>  }
>  
> @@ -174,8 +182,9 @@ int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
>  }
>  
>  struct net_devmem_dmabuf_binding *
> -net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
> -		       struct netlink_ext_ack *extack)
> +net_devmem_bind_dmabuf(struct net_device *dev,
> +		       enum dma_data_direction direction,
> +		       unsigned int dmabuf_fd, struct netlink_ext_ack *extack)
>  {
>  	struct net_devmem_dmabuf_binding *binding;
>  	static u32 id_alloc_next;
> @@ -183,6 +192,7 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
>  	struct dma_buf *dmabuf;
>  	unsigned int sg_idx, i;
>  	unsigned long virtual;
> +	struct iovec *iov;
>  	int err;
>  
>  	dmabuf = dma_buf_get(dmabuf_fd);
> @@ -218,13 +228,19 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
>  	}
>  
>  	binding->sgt = dma_buf_map_attachment_unlocked(binding->attachment,
> -						       DMA_FROM_DEVICE);
> +						       direction);
>  	if (IS_ERR(binding->sgt)) {
>  		err = PTR_ERR(binding->sgt);
>  		NL_SET_ERR_MSG(extack, "Failed to map dmabuf attachment");
>  		goto err_detach;
>  	}
>  
> +	if (!binding->sgt || binding->sgt->nents == 0) {
> +		err = -EINVAL;
> +		NL_SET_ERR_MSG(extack, "Empty dmabuf attachment");
> +		goto err_detach;
> +	}
> +
>  	/* For simplicity we expect to make PAGE_SIZE allocations, but the
>  	 * binding can be much more flexible than that. We may be able to
>  	 * allocate MTU sized chunks here. Leave that for future work...
> @@ -236,6 +252,19 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
>  		goto err_unmap;
>  	}
>  
> +	if (direction == DMA_TO_DEVICE) {
> +		virtual = 0;
> +		for_each_sgtable_dma_sg(binding->sgt, sg, sg_idx)
> +			virtual += sg_dma_len(sg);
> +
> +		binding->tx_vec = kcalloc(virtual / PAGE_SIZE + 1,

instead of open coding this computation repeatedly, consider a local
variable. And parentheses and maybe round_up().
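
e.g. (sketch, assuming the extra +1 element is not intentional):

        size_t tx_pages = DIV_ROUND_UP(virtual, PAGE_SIZE);

        binding->tx_vec = kcalloc(tx_pages, sizeof(struct iovec), GFP_KERNEL);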

> +					  sizeof(struct iovec), GFP_KERNEL);
> +		if (!binding->tx_vec) {
> +			err = -ENOMEM;
> +			goto err_unmap;
> +		}
> +	}
> +
>  	virtual = 0;
>  	for_each_sgtable_dma_sg(binding->sgt, sg, sg_idx) {
>  		dma_addr_t dma_addr = sg_dma_address(sg);
> @@ -277,11 +306,21 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
>  			niov->owner = owner;
>  			page_pool_set_dma_addr_netmem(net_iov_to_netmem(niov),
>  						      net_devmem_get_dma_addr(niov));
> +
> +			if (direction == DMA_TO_DEVICE) {
> +				iov = &binding->tx_vec[virtual / PAGE_SIZE + i];

why does this start counting at virtual / PAGE_SIZE, rather than 0?

> +				iov->iov_base = niov;
> +				iov->iov_len = PAGE_SIZE;
> +			}
>  		}
>  
>  		virtual += len;
>  	}
>  
> +	if (direction == DMA_TO_DEVICE)
> +		iov_iter_init(&binding->tx_iter, WRITE, binding->tx_vec,
> +			      virtual / PAGE_SIZE + 1, virtual);
> +
>  	return binding;
>  
>  err_free_chunks:
> @@ -302,6 +341,21 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
>  	return ERR_PTR(err);
>  }
>  
> +struct net_devmem_dmabuf_binding *net_devmem_lookup_dmabuf(u32 id)
> +{
> +	struct net_devmem_dmabuf_binding *binding;
> +
> +	rcu_read_lock();
> +	binding = xa_load(&net_devmem_dmabuf_bindings, id);
> +	if (binding) {
> +		if (!net_devmem_dmabuf_binding_get(binding))
> +			binding = NULL;
> +	}
> +	rcu_read_unlock();
> +
> +	return binding;
> +}
> +
>  void dev_dmabuf_uninstall(struct net_device *dev)
>  {
>  	struct net_devmem_dmabuf_binding *binding;
> @@ -332,6 +386,33 @@ void net_devmem_put_net_iov(struct net_iov *niov)
>  	net_devmem_dmabuf_binding_put(niov->owner->binding);
>  }
>  
> +struct net_devmem_dmabuf_binding *
> +net_devmem_get_sockc_binding(struct sock *sk, struct sockcm_cookie *sockc)
> +{
> +	struct net_devmem_dmabuf_binding *binding;
> +	int err = 0;
> +
> +	binding = net_devmem_lookup_dmabuf(sockc->dmabuf_id);

This lookup is from global xarray net_devmem_dmabuf_bindings.

Is there a check that the socket is sending out through the device
to which this dmabuf was bound with netlink? Should there be?
(e.g., SO_BINDTODEVICE).

> +	if (!binding || !binding->tx_vec) {
> +		err = -EINVAL;
> +		goto out_err;
> +	}
> +
> +	if (sock_net(sk) != dev_net(binding->dev)) {
> +		err = -ENODEV;
> +		goto out_err;
> +	}
> +
> +	iov_iter_advance(&binding->tx_iter, sockc->dmabuf_offset);
> +	return binding;
> +
> +out_err:
> +	if (binding)
> +		net_devmem_dmabuf_binding_put(binding);
> +
> +	return ERR_PTR(err);
> +}
> +
>  /*** "Dmabuf devmem memory provider" ***/
>  

> @@ -1063,6 +1064,15 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
>  
>  	flags = msg->msg_flags;
>  
> +	sockcm_init(&sockc, sk);
> +	if (msg->msg_controllen) {
> +		err = sock_cmsg_send(sk, msg, &sockc);
> +		if (unlikely(err)) {
> +			err = -EINVAL;
> +			goto out_err;
> +		}
> +	}
> +
>  	if ((flags & MSG_ZEROCOPY) && size) {
>  		if (msg->msg_ubuf) {
>  			uarg = msg->msg_ubuf;
> @@ -1080,6 +1090,15 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
>  			else
>  				uarg_to_msgzc(uarg)->zerocopy = 0;
>  		}
> +
> +		if (sockc.dmabuf_id != 0) {
> +			binding = net_devmem_get_sockc_binding(sk, &sockc);
> +			if (IS_ERR(binding)) {
> +				err = PTR_ERR(binding);
> +				binding = NULL;
> +				goto out_err;
> +			}
> +		}

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH RFC net-next v1 5/5] net: devmem: Implement TX path
  2024-12-21  0:42 ` [PATCH RFC net-next v1 5/5] net: devmem: Implement TX path Mina Almasry
  2024-12-21  5:09   ` Stanislav Fomichev
  2024-12-26 21:52   ` Willem de Bruijn
@ 2024-12-28 19:28   ` David Ahern
  2 siblings, 0 replies; 26+ messages in thread
From: David Ahern @ 2024-12-28 19:28 UTC (permalink / raw)
  To: Mina Almasry, netdev, linux-kernel, linux-doc, virtualization,
	kvm, linux-kselftest
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Simon Horman, Donald Hunter, Jonathan Corbet, Andrew Lunn,
	Michael S. Tsirkin, Jason Wang, Xuan Zhuo, Eugenio Pérez,
	Stefan Hajnoczi, Stefano Garzarella, Shuah Khan, Kaiyuan Zhang,
	Pavel Begunkov, Willem de Bruijn, Samiullah Khawaja,
	Stanislav Fomichev, Joe Damato, dw

On 12/20/24 5:42 PM, Mina Almasry wrote:
> diff --git a/include/uapi/linux/uio.h b/include/uapi/linux/uio.h
> index 649739e0c404..41490cde95ad 100644
> --- a/include/uapi/linux/uio.h
> +++ b/include/uapi/linux/uio.h
> @@ -38,6 +38,11 @@ struct dmabuf_token {
>  	__u32 token_count;
>  };
>  
> +struct dmabuf_tx_cmsg {
> +	__u32 dmabuf_id;

I believe you need to make sure the u64 is properly aligned:

	__u32 unused;  // and verify it is set to 0

> +	__u64 dmabuf_offset;
> +};
> +
>  /*
>   *	UIO_MAXIOV shall be at least 16 1003.1g (5.4.1.1)
>   */
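
i.e., something along these lines (sketch):

        struct dmabuf_tx_cmsg {
                __u32 dmabuf_id;
                __u32 unused;           /* pad to align the u64; must be 0 */
                __u64 dmabuf_offset;
        };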


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH RFC net-next v1 1/5] net: add devmem TCP TX documentation
  2024-12-21  4:56   ` Stanislav Fomichev
@ 2025-01-27 22:45     ` Mina Almasry
  2025-01-28  3:51       ` Stanislav Fomichev
  0 siblings, 1 reply; 26+ messages in thread
From: Mina Almasry @ 2025-01-27 22:45 UTC (permalink / raw)
  To: Stanislav Fomichev
  Cc: netdev, linux-kernel, linux-doc, virtualization, kvm,
	linux-kselftest, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Simon Horman, Donald Hunter, Jonathan Corbet,
	Andrew Lunn, David Ahern, Michael S. Tsirkin, Jason Wang,
	Xuan Zhuo, Eugenio Pérez, Stefan Hajnoczi,
	Stefano Garzarella, Shuah Khan, Kaiyuan Zhang, Pavel Begunkov,
	Willem de Bruijn, Samiullah Khawaja, Stanislav Fomichev,
	Joe Damato, dw

On Fri, Dec 20, 2024 at 8:56 PM Stanislav Fomichev <stfomichev@gmail.com> wrote:
>
> On 12/21, Mina Almasry wrote:
> > Add documentation outlining the usage and details of the devmem TCP TX
> > API.
> >
> > Signed-off-by: Mina Almasry <almasrymina@google.com>
> > ---
> >  Documentation/networking/devmem.rst | 140 +++++++++++++++++++++++++++-
> >  1 file changed, 136 insertions(+), 4 deletions(-)
> >
> > diff --git a/Documentation/networking/devmem.rst b/Documentation/networking/devmem.rst
> > index d95363645331..9be01cd96ee2 100644
> > --- a/Documentation/networking/devmem.rst
> > +++ b/Documentation/networking/devmem.rst
> > @@ -62,15 +62,15 @@ More Info
> >      https://lore.kernel.org/netdev/20240831004313.3713467-1-almasrymina@google.com/
> >
> >
> > -Interface
> > -=========
> > +RX Interface
> > +============
> >
> >
> >  Example
> >  -------
> >
> > -tools/testing/selftests/net/ncdevmem.c:do_server shows an example of setting up
> > -the RX path of this API.
> > +./tools/testing/selftests/drivers/net/hw/ncdevmem:do_server shows an example of
> > +setting up the RX path of this API.
> >
> >
> >  NIC Setup
> > @@ -235,6 +235,138 @@ can be less than the tokens provided by the user in case of:
> >  (a) an internal kernel leak bug.
> >  (b) the user passed more than 1024 frags.
> >
> > +TX Interface
> > +============
> > +
> > +
> > +Example
> > +-------
> > +
> > +./tools/testing/selftests/drivers/net/hw/ncdevmem:do_client shows an example of
> > +setting up the TX path of this API.
> > +
> > +
> > +NIC Setup
> > +---------
> > +
> > +The user must bind a TX dmabuf to a given NIC using the netlink API::
> > +
> > +        struct netdev_bind_tx_req *req = NULL;
> > +        struct netdev_bind_tx_rsp *rsp = NULL;
> > +        struct ynl_error yerr;
> > +
> > +        *ys = ynl_sock_create(&ynl_netdev_family, &yerr);
> > +
> > +        req = netdev_bind_tx_req_alloc();
> > +        netdev_bind_tx_req_set_ifindex(req, ifindex);
> > +        netdev_bind_tx_req_set_fd(req, dmabuf_fd);
> > +
> > +        rsp = netdev_bind_tx(*ys, req);
> > +
> > +        tx_dmabuf_id = rsp->id;
> > +
> > +
> > +The netlink API returns a dmabuf_id: a unique ID that refers to this dmabuf
> > +that has been bound.
> > +
> > +The user can unbind the dmabuf from the netdevice by closing the netlink socket
> > +that established the binding. We do this so that the binding is automatically
> > +unbound even if the userspace process crashes.
> > +
> > +Note that any reasonably well-behaved dmabuf from any exporter should work with
> > +devmem TCP, even if the dmabuf is not actually backed by devmem. An example of
> > +this is udmabuf, which wraps user memory (non-devmem) in a dmabuf.
> > +
> > +Socket Setup
> > +------------
> > +
> > +The user application must use MSG_ZEROCOPY flag when sending devmem TCP. Devmem
> > +cannot be copied by the kernel, so the semantics of the devmem TX are similar
> > +to the semantics of MSG_ZEROCOPY.
> > +
> > +     ret = setsockopt(socket_fd, SOL_SOCKET, SO_ZEROCOPY, &opt, sizeof(opt));
> > +
> > +Sending data
> > +--------------
> > +
> > +Devmem data is sent using the SCM_DEVMEM_DMABUF cmsg.
> > +
>
> [...]
>
> > +The user should create a msghdr with iov_base set to NULL and iov_len set to the
> > +number of bytes to be sent from the dmabuf.
>
> Should we verify that iov_base is NULL in the kernel?
>
> But also, alternatively, why not go with iov_base == offset? This way we
> can support several offsets in a single message, just like regular
> sendmsg with host memory. Any reason to not do that?
>

Sorry for the late reply. Some of these suggestions took a bit to
investigate and other priorities pulled me a bit from this.

I've prototyped using iov_base as offset with some help from your
published branch, and it works fine. It seems to me a big improvement
to the UAPI. Will reupload RFC v2 while the tree is closed with this
change.

-- 
Thanks,
Mina

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH RFC net-next v1 3/5] net: add get_netmem/put_netmem support
  2024-12-26 19:07   ` Stanislav Fomichev
@ 2025-01-27 22:47     ` Mina Almasry
  0 siblings, 0 replies; 26+ messages in thread
From: Mina Almasry @ 2025-01-27 22:47 UTC (permalink / raw)
  To: Stanislav Fomichev
  Cc: netdev, linux-kernel, linux-doc, virtualization, kvm,
	linux-kselftest, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Simon Horman, Donald Hunter, Jonathan Corbet,
	Andrew Lunn, David Ahern, Michael S. Tsirkin, Jason Wang,
	Xuan Zhuo, Eugenio Pérez, Stefan Hajnoczi,
	Stefano Garzarella, Shuah Khan, Kaiyuan Zhang, Pavel Begunkov,
	Willem de Bruijn, Samiullah Khawaja, Stanislav Fomichev,
	Joe Damato, dw

On Thu, Dec 26, 2024 at 11:07 AM Stanislav Fomichev
<stfomichev@gmail.com> wrote:
>
> On 12/21, Mina Almasry wrote:
> > Currently net_iovs support only pp ref counts, and do not support a
> > page ref equivalent.
> >
> > This is fine for the RX path as net_iovs are used exclusively with the
> > pp and only pp refcounting is needed there. The TX path however does not
> > use pp ref counts, thus, support for get_page/put_page equivalent is
> > needed for netmem.
> >
> > Support get_netmem/put_netmem. Check the type of the netmem before
> > passing it to page or net_iov specific code to obtain a page ref
> > equivalent.
> >
> > For dmabuf net_iovs, we obtain a ref on the underlying binding. This
> > ensures the entire binding doesn't disappear until all the net_iovs have
> > been put_netmem'ed. We do not need to track the refcount of individual
> > dmabuf net_iovs as we don't allocate/free them from a pool similar to
> > what the buddy allocator does for pages.
> >
> > This code is written to be extensible by other net_iov implementers.
> > get_netmem/put_netmem will check the type of the netmem and route it to
> > the correct helper:
> >
> > pages -> [get|put]_page()
> > dmabuf net_iovs -> net_devmem_[get|put]_net_iov()
> > new net_iovs ->       new helpers
> >
> > Signed-off-by: Mina Almasry <almasrymina@google.com>
> >
> > ---
> >  include/linux/skbuff_ref.h |  4 ++--
> >  include/net/netmem.h       |  3 +++
> >  net/core/devmem.c          | 10 ++++++++++
> >  net/core/devmem.h          | 11 +++++++++++
> >  net/core/skbuff.c          | 30 ++++++++++++++++++++++++++++++
> >  5 files changed, 56 insertions(+), 2 deletions(-)
> >
> > diff --git a/include/linux/skbuff_ref.h b/include/linux/skbuff_ref.h
> > index 0f3c58007488..9e49372ef1a0 100644
> > --- a/include/linux/skbuff_ref.h
> > +++ b/include/linux/skbuff_ref.h
> > @@ -17,7 +17,7 @@
> >   */
> >  static inline void __skb_frag_ref(skb_frag_t *frag)
> >  {
> > -     get_page(skb_frag_page(frag));
> > +     get_netmem(skb_frag_netmem(frag));
> >  }
> >
> >  /**
> > @@ -40,7 +40,7 @@ static inline void skb_page_unref(netmem_ref netmem, bool recycle)
> >       if (recycle && napi_pp_put_page(netmem))
> >               return;
> >  #endif
>
> [..]
>
> > -     put_page(netmem_to_page(netmem));
> > +     put_netmem(netmem);
>
> I moved the release operation onto a workqueue in my series [1] to avoid
> calling dmabuf detach (which can sleep) from the socket close path
> (which is called with bh disabled). You should probably do something similar,
> see the trace attached below.
>
> 1: https://github.com/fomichev/linux/commit/3b3ad4f36771a376c204727e5a167c4993d4c65a#diff-3c58b866674b2f9beb5ac7349f81566e4df595c25c647710203549589d450f2dR436
>
> (the condition to trigger that is to have an skb in the write queue
> and call close from the userspace)
>

Thanks for catching this indeed. I've also changed the unbinding to use
a scheduled work item, although I arrived at a slightly different
implementation that is simpler to my eye. I'll upload what I have for
review shortly as RFC.
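
The rough shape is to punt the actual free to a workqueue (sketch only;
the names here are placeholders, not necessarily what I'll post):

        static void net_devmem_unbind_work_fn(struct work_struct *work)
        {
                struct net_devmem_dmabuf_binding *binding =
                        container_of(work, struct net_devmem_dmabuf_binding,
                                     unbind_w);

                /* dma-buf unmap/detach can sleep, so do it from process
                 * context here rather than from the bh-disabled socket
                 * close path.
                 */
                __net_devmem_dmabuf_binding_free(binding);
        }

with the final net_devmem_dmabuf_binding_put() doing
schedule_work(&binding->unbind_w) instead of freeing inline.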

-- 
Thanks,
Mina

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH RFC net-next v1 5/5] net: devmem: Implement TX path
  2024-12-26 19:10     ` Stanislav Fomichev
@ 2025-01-27 22:52       ` Mina Almasry
  0 siblings, 0 replies; 26+ messages in thread
From: Mina Almasry @ 2025-01-27 22:52 UTC (permalink / raw)
  To: Stanislav Fomichev
  Cc: netdev, linux-kernel, linux-doc, virtualization, kvm,
	linux-kselftest, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Simon Horman, Donald Hunter, Jonathan Corbet,
	Andrew Lunn, David Ahern, Michael S. Tsirkin, Jason Wang,
	Xuan Zhuo, Eugenio Pérez, Stefan Hajnoczi,
	Stefano Garzarella, Shuah Khan, Kaiyuan Zhang, Pavel Begunkov,
	Willem de Bruijn, Samiullah Khawaja, Stanislav Fomichev,
	Joe Damato, dw

On Thu, Dec 26, 2024 at 11:10 AM Stanislav Fomichev
<stfomichev@gmail.com> wrote:
>
> On 12/20, Stanislav Fomichev wrote:
> > On 12/21, Mina Almasry wrote:
> > >  void netdev_nl_sock_priv_init(struct list_head *priv)
> > > diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> > > index 815245d5c36b..eb6b41a32524 100644
> > > --- a/net/core/skbuff.c
> > > +++ b/net/core/skbuff.c
> > > @@ -1882,8 +1882,10 @@ EXPORT_SYMBOL_GPL(msg_zerocopy_ubuf_ops);
> > >
> > >  int skb_zerocopy_iter_stream(struct sock *sk, struct sk_buff *skb,
> > >                          struct msghdr *msg, int len,
> > > -                        struct ubuf_info *uarg)
> > > +                        struct ubuf_info *uarg,
> > > +                        struct net_devmem_dmabuf_binding *binding)
> > >  {
> > > +   struct iov_iter *from = binding ? &binding->tx_iter : &msg->msg_iter;
> >
> > For tx, I feel like this needs a copy of binding->tx_iter:
> >
> >       struct iov_iter tx_iter = binding->tx_iter;
> >       struct iov_iter *from = binding ? &tx_iter : &msg->msg_iter;
> >
> > Or something similar (rewind?). The tx_iter is advanced in
> > zerocopy_fill_skb_from_devmem but never reset back, it seems (or I'm
> > missing something). In your case, if you call sendmsg twice with the same
> > offset, the second one will copy from 2*offset.
>
> Can confirm that it's broken. We should probably have a mode in ncdevmem
> to call sendmsg with fixed-size chunks, something like this:
>

Thanks for catching. Yes, I've been able to repro and I believe I
fixed it locally and will include a fix with the next iteration.

I also agree using a binding->tx_iter here is not necessary, and it
makes the code a bit confusing as there is an iterator in msg and
another one in binding and we have to be careful which to
advance/revert etc. I've prototyped implementation without
binding->tx_iter with help from your series on github and seems to
work fine in my tests.

> @@ -912,7 +916,11 @@ static int do_client(struct memory_buffer *mem)
>                                 line_size, off);
>
>                         iov.iov_base = NULL;
> -                       iov.iov_len = line_size;
> +                       iov.iov_len = line_size <= 4096 ? line_size : 4096;
>
>                         msg.msg_iov = &iov;
>                         msg.msg_iovlen = 1;
> @@ -933,6 +941,8 @@ static int do_client(struct memory_buffer *mem)
>                         ret = sendmsg(socket_fd, &msg, MSG_ZEROCOPY);
>                         if (ret < 0)
>                                 error(1, errno, "Failed sendmsg");
> +                       if (ret == 0)
> +                               break;
>
>                         fprintf(stderr, "sendmsg_ret=%d\n", ret);
>
> I can put it on my todo to extend the selftests..

FWIW I've been able to repro this and extended the tests to catch
this; those changes should come with the next iteration.

-- 
Thanks,
Mina

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH RFC net-next v1 5/5] net: devmem: Implement TX path
  2024-12-26 21:52   ` Willem de Bruijn
@ 2025-01-28  0:06     ` Mina Almasry
  2025-01-28 14:49       ` Willem de Bruijn
  0 siblings, 1 reply; 26+ messages in thread
From: Mina Almasry @ 2025-01-28  0:06 UTC (permalink / raw)
  To: Willem de Bruijn
  Cc: netdev, linux-kernel, linux-doc, virtualization, kvm,
	linux-kselftest, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Simon Horman, Donald Hunter, Jonathan Corbet,
	Andrew Lunn, David Ahern, Michael S. Tsirkin, Jason Wang,
	Xuan Zhuo, Eugenio Pérez, Stefan Hajnoczi,
	Stefano Garzarella, Shuah Khan, Kaiyuan Zhang, Pavel Begunkov,
	Willem de Bruijn, Samiullah Khawaja, Stanislav Fomichev,
	Joe Damato, dw

Hi Willem, sorry for the late reply; some holiday vacations and other
priorities pulled me away from this a bit. I'm getting ready to post
RFC V2, so answering some questions ahead of when I do that.

On Thu, Dec 26, 2024 at 1:52 PM Willem de Bruijn
<willemdebruijn.kernel@gmail.com> wrote:
...
> > +static int zerocopy_fill_skb_from_devmem(struct sk_buff *skb,
> > +                                      struct msghdr *msg,
> > +                                      struct iov_iter *from, int length)
> > +{
> > +     int i = skb_shinfo(skb)->nr_frags;
> > +     int orig_length = length;
> > +     netmem_ref netmem;
> > +     size_t size;
> > +
> > +     while (length && iov_iter_count(from)) {
> > +             if (i == MAX_SKB_FRAGS)
> > +                     return -EMSGSIZE;
> > +
> > +             size = min_t(size_t, iter_iov_len(from), length);
> > +             if (!size)
> > +                     return -EFAULT;
>
> On error, should caller skb_zerocopy_iter_stream rewind from, rather
> than (or as well as) msg->msg_iter?

Ah, so this was confusing, because there were 2 iterators to keep
track of: (a) binding->tx_iter, which is `from`, and (b)
msg->msg_iter.

Stan suggested removing binding->tx_iter entirely, so that we're back
to using only 1 iterator, which is msg->msg_iter. That does simplify
the code greatly, and I think addresses this comment as well, because
there will be no need to keep `from` advanced/reverted in sync with
msg->msg_iter.

> > +
> > +             netmem = net_iov_to_netmem(iter_iov(from)->iov_base);
> > +             get_netmem(netmem);
> > +             skb_add_rx_frag_netmem(skb, i, netmem, from->iov_offset, size,
> > +                                    PAGE_SIZE);
> > +
> > +             iov_iter_advance(from, size);
> > +             length -= size;
> > +             i++;
> > +     }
> > +
> > +     iov_iter_advance(&msg->msg_iter, orig_length);
>
> What does this do if sendmsg is called with NULL as buffer?

So even if iov_base == NULL, the iterator is created anyhow. The
iterator will be from addresses 0 -> iov_len.

In the next iteration, I've applied Stan's suggestion to use iov_base
as the offset into the dma-buf to send from. I think it ends up being
a much cleaner UAPI, but let me know what you think.
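
To illustrate, the userspace call then looks roughly like this (sketch;
assumes the cmsg shrinks to just the dmabuf id once the offset moves into
iov_base, with the exact layout still to be settled in v2):

        struct iovec iov = {
                .iov_base = (void *)(uintptr_t)offset_in_dmabuf, /* offset, not a pointer */
                .iov_len = bytes_to_send,
        };
        char ctrl[CMSG_SPACE(sizeof(__u32))] = {};
        struct msghdr msg = {
                .msg_iov = &iov,
                .msg_iovlen = 1,
                .msg_control = ctrl,
                .msg_controllen = sizeof(ctrl),
        };
        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_DEVMEM_DMABUF;
        cmsg->cmsg_len = CMSG_LEN(sizeof(__u32));
        *(__u32 *)CMSG_DATA(cmsg) = tx_dmabuf_id;

        ret = sendmsg(socket_fd, &msg, MSG_ZEROCOPY);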

...

> >  struct net_iov *
> >  net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding)
> > @@ -109,6 +112,13 @@ void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding)
> >       unsigned long xa_idx;
> >       unsigned int rxq_idx;
> >
> > +     xa_erase(&net_devmem_dmabuf_bindings, binding->id);
> > +
> > +     /* Ensure no tx net_devmem_lookup_dmabuf() are in flight after the
> > +      * erase.
> > +      */
> > +     synchronize_net();
> > +
>
> What precisely does this protect?
>
> synchronize_net() ensures no packet is in flight inside an rcu
> readside section. But a packet can still be in flight, such as posted
> to the device or queued in a qdisc.
>

The TX data path does a net_devmem_lookup_dmabuf() to lookup the
dmabuf_id provided by the user.

But that dmabuf_id may be unbind'd via net_devmem_unbind_dmabuf() by
the user at any time, so some synchronization is needed to make sure
we don't do a send from a dmabuf that is being freed in another
thread.

The synchronization in this patch is such that the lookup path obtains
a reference under rcu lock, and the unbind control path makes sure to
wait a full RCU grace period before dropping reference via
net_devmem_dmabuf_binding_put(). net_devmem_dmabuf_binding_put() will
trigger freeing the binding if the refcount hits zero.

...

> > +struct net_devmem_dmabuf_binding *
> > +net_devmem_get_sockc_binding(struct sock *sk, struct sockcm_cookie *sockc)
> > +{
> > +     struct net_devmem_dmabuf_binding *binding;
> > +     int err = 0;
> > +
> > +     binding = net_devmem_lookup_dmabuf(sockc->dmabuf_id);
>
> This lookup is from global xarray net_devmem_dmabuf_bindings.
>
> Is there a check that the socket is sending out through the device
> to which this dmabuf was bound with netlink? Should there be?
> (e.g., SO_BINDTODEVICE).
>

Yes, I think it may be an issue if the user triggers a send from a
different netdevice, because indeed when we bind a dmabuf we bind it
to a specific netdevice.

One option is as you say to require TX sockets to be bound and to
check that we're bound to the correct netdev. I also wonder if I can
make this work without SO_BINDTODEVICE, by querying the netdev the
sock is currently trying to send out on and doing a check in the
tcp_sendmsg. I'm not sure if this is possible but I'll give it a look.
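
Very roughly what I have in mind, e.g. in net_devmem_get_sockc_binding()
(sketch; whether the cached dst can be trusted at that point is exactly
what I need to check):

        struct dst_entry *dst = __sk_dst_get(sk);

        if (!dst || !dst->dev || dst->dev != binding->dev) {
                err = -ENODEV;
                goto out_err;
        }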

--
Thanks,
Mina

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH RFC net-next v1 1/5] net: add devmem TCP TX documentation
  2025-01-27 22:45     ` Mina Almasry
@ 2025-01-28  3:51       ` Stanislav Fomichev
  0 siblings, 0 replies; 26+ messages in thread
From: Stanislav Fomichev @ 2025-01-28  3:51 UTC (permalink / raw)
  To: Mina Almasry
  Cc: netdev, linux-kernel, linux-doc, virtualization, kvm,
	linux-kselftest, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Simon Horman, Donald Hunter, Jonathan Corbet,
	Andrew Lunn, David Ahern, Michael S. Tsirkin, Jason Wang,
	Xuan Zhuo, Eugenio Pérez, Stefan Hajnoczi,
	Stefano Garzarella, Shuah Khan, Kaiyuan Zhang, Pavel Begunkov,
	Willem de Bruijn, Samiullah Khawaja, Stanislav Fomichev,
	Joe Damato, dw

On 01/27, Mina Almasry wrote:
> On Fri, Dec 20, 2024 at 8:56 PM Stanislav Fomichev <stfomichev@gmail.com> wrote:
> >
> > On 12/21, Mina Almasry wrote:
> > > Add documentation outlining the usage and details of the devmem TCP TX
> > > API.
> > >
> > > Signed-off-by: Mina Almasry <almasrymina@google.com>
> > > ---
> > >  Documentation/networking/devmem.rst | 140 +++++++++++++++++++++++++++-
> > >  1 file changed, 136 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/Documentation/networking/devmem.rst b/Documentation/networking/devmem.rst
> > > index d95363645331..9be01cd96ee2 100644
> > > --- a/Documentation/networking/devmem.rst
> > > +++ b/Documentation/networking/devmem.rst
> > > @@ -62,15 +62,15 @@ More Info
> > >      https://lore.kernel.org/netdev/20240831004313.3713467-1-almasrymina@google.com/
> > >
> > >
> > > -Interface
> > > -=========
> > > +RX Interface
> > > +============
> > >
> > >
> > >  Example
> > >  -------
> > >
> > > -tools/testing/selftests/net/ncdevmem.c:do_server shows an example of setting up
> > > -the RX path of this API.
> > > +./tools/testing/selftests/drivers/net/hw/ncdevmem:do_server shows an example of
> > > +setting up the RX path of this API.
> > >
> > >
> > >  NIC Setup
> > > @@ -235,6 +235,138 @@ can be less than the tokens provided by the user in case of:
> > >  (a) an internal kernel leak bug.
> > >  (b) the user passed more than 1024 frags.
> > >
> > > +TX Interface
> > > +============
> > > +
> > > +
> > > +Example
> > > +-------
> > > +
> > > +./tools/testing/selftests/drivers/net/hw/ncdevmem:do_client shows an example of
> > > +setting up the TX path of this API.
> > > +
> > > +
> > > +NIC Setup
> > > +---------
> > > +
> > > +The user must bind a TX dmabuf to a given NIC using the netlink API::
> > > +
> > > +        struct netdev_bind_tx_req *req = NULL;
> > > +        struct netdev_bind_tx_rsp *rsp = NULL;
> > > +        struct ynl_error yerr;
> > > +
> > > +        *ys = ynl_sock_create(&ynl_netdev_family, &yerr);
> > > +
> > > +        req = netdev_bind_tx_req_alloc();
> > > +        netdev_bind_tx_req_set_ifindex(req, ifindex);
> > > +        netdev_bind_tx_req_set_fd(req, dmabuf_fd);
> > > +
> > > +        rsp = netdev_bind_tx(*ys, req);
> > > +
> > > +        tx_dmabuf_id = rsp->id;
> > > +
> > > +
> > > +The netlink API returns a dmabuf_id: a unique ID that refers to this dmabuf
> > > +that has been bound.
> > > +
> > > +The user can unbind the dmabuf from the netdevice by closing the netlink socket
> > > +that established the binding. We do this so that the binding is automatically
> > > +unbound even if the userspace process crashes.
> > > +
> > > +Note that any reasonably well-behaved dmabuf from any exporter should work with
> > > +devmem TCP, even if the dmabuf is not actually backed by devmem. An example of
> > > +this is udmabuf, which wraps user memory (non-devmem) in a dmabuf.
> > > +
> > > +Socket Setup
> > > +------------
> > > +
> > > +The user application must use MSG_ZEROCOPY flag when sending devmem TCP. Devmem
> > > +cannot be copied by the kernel, so the semantics of the devmem TX are similar
> > > +to the semantics of MSG_ZEROCOPY.
> > > +
> > > +     ret = setsockopt(socket_fd, SOL_SOCKET, SO_ZEROCOPY, &opt, sizeof(opt));
> > > +
> > > +Sending data
> > > +--------------
> > > +
> > > +Devmem data is sent using the SCM_DEVMEM_DMABUF cmsg.
> > > +
> >
> > [...]
> >
> > > +The user should create a msghdr with iov_base set to NULL and iov_len set to the
> > > +number of bytes to be sent from the dmabuf.
> >
> > Should we verify that iov_base is NULL in the kernel?
> >
> > But also, alternatively, why not go with iov_base == offset? This way we
> > can support several offsets in a single message, just like regular
> > sendmsg with host memory. Any reason to not do that?
> >
> 
> Sorry for the late reply. Some of these suggestions took a bit to
> investigate and other priorities pulled me a bit from this.
> 
> I've prototyped using iov_base as offset with some help from your
> published branch, and it works fine. It seems to me a big improvement
> to the UAPI. Will reupload RFC v2 while the tree is closed with this
> change.

Great, thanks for the update, looking forward!

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH RFC net-next v1 5/5] net: devmem: Implement TX path
  2025-01-28  0:06     ` Mina Almasry
@ 2025-01-28 14:49       ` Willem de Bruijn
  2025-02-05 12:41         ` Pavel Begunkov
  0 siblings, 1 reply; 26+ messages in thread
From: Willem de Bruijn @ 2025-01-28 14:49 UTC (permalink / raw)
  To: Mina Almasry, Willem de Bruijn
  Cc: netdev, linux-kernel, linux-doc, virtualization, kvm,
	linux-kselftest, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Simon Horman, Donald Hunter, Jonathan Corbet,
	Andrew Lunn, David Ahern, Michael S. Tsirkin, Jason Wang,
	Xuan Zhuo, Eugenio Pérez, Stefan Hajnoczi,
	Stefano Garzarella, Shuah Khan, Kaiyuan Zhang, Pavel Begunkov,
	Willem de Bruijn, Samiullah Khawaja, Stanislav Fomichev,
	Joe Damato, dw

> > > +struct net_devmem_dmabuf_binding *
> > > +net_devmem_get_sockc_binding(struct sock *sk, struct sockcm_cookie *sockc)
> > > +{
> > > +     struct net_devmem_dmabuf_binding *binding;
> > > +     int err = 0;
> > > +
> > > +     binding = net_devmem_lookup_dmabuf(sockc->dmabuf_id);
> >
> > This lookup is from global xarray net_devmem_dmabuf_bindings.
> >
> > Is there a check that the socket is sending out through the device
> > to which this dmabuf was bound with netlink? Should there be?
> > (e.g., SO_BINDTODEVICE).
> >
> 
> Yes, I think it may be an issue if the user triggers a send from a
> different netdevice, because indeed when we bind a dmabuf we bind it
> to a specific netdevice.
> 
> One option is as you say to require TX sockets to be bound and to
> check that we're bound to the correct netdev. I also wonder if I can
> make this work without SO_BINDTODEVICE, by querying the netdev the
> sock is currently trying to send out on and doing a check in the
> tcp_sendmsg. I'm not sure if this is possible but I'll give it a look.

I was a bit quick on mentioning SO_BINDTODEVICE. Agreed that it is
vastly preferable to not require that, but infer the device from
the connected TCP sock.


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH RFC net-next v1 5/5] net: devmem: Implement TX path
  2025-01-28 14:49       ` Willem de Bruijn
@ 2025-02-05 12:41         ` Pavel Begunkov
  2025-02-05 20:22           ` Mina Almasry
  0 siblings, 1 reply; 26+ messages in thread
From: Pavel Begunkov @ 2025-02-05 12:41 UTC (permalink / raw)
  To: Willem de Bruijn, Mina Almasry
  Cc: netdev, linux-kernel, linux-doc, virtualization, kvm,
	linux-kselftest, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Simon Horman, Donald Hunter, Jonathan Corbet,
	Andrew Lunn, David Ahern, Michael S. Tsirkin, Jason Wang,
	Xuan Zhuo, Eugenio Pérez, Stefan Hajnoczi,
	Stefano Garzarella, Shuah Khan, Kaiyuan Zhang, Willem de Bruijn,
	Samiullah Khawaja, Stanislav Fomichev, Joe Damato, dw

On 1/28/25 14:49, Willem de Bruijn wrote:
>>>> +struct net_devmem_dmabuf_binding *
>>>> +net_devmem_get_sockc_binding(struct sock *sk, struct sockcm_cookie *sockc)
>>>> +{
>>>> +     struct net_devmem_dmabuf_binding *binding;
>>>> +     int err = 0;
>>>> +
>>>> +     binding = net_devmem_lookup_dmabuf(sockc->dmabuf_id);
>>>
>>> This lookup is from global xarray net_devmem_dmabuf_bindings.
>>>
>>> Is there a check that the socket is sending out through the device
>>> to which this dmabuf was bound with netlink? Should there be?
>>> (e.g., SO_BINDTODEVICE).
>>>
>>
>> Yes, I think it may be an issue if the user triggers a send from a
>> different netdevice, because indeed when we bind a dmabuf we bind it
>> to a specific netdevice.
>>
>> One option is as you say to require TX sockets to be bound and to
>> check that we're bound to the correct netdev. I also wonder if I can
>> make this work without SO_BINDTODEVICE, by querying the netdev the
>> sock is currently trying to send out on and doing a check in the
>> tcp_sendmsg. I'm not sure if this is possible but I'll give it a look.
> 
> I was a bit quick on mentioning SO_BINDTODEVICE. Agreed that it is
> vastly preferable to not require that, but infer the device from
> the connected TCP sock.

I wonder why so? I'd imagine something like SO_BINDTODEVICE is a
better way to go. The user has to do it anyway, otherwise packets
might go to a different device and the user would suddenly start
getting errors with no good way to alleviate them (apart from
likes of SO_BINDTODEVICE). It's even worse if it works for a while
but starts to unpredictably fail as time passes. With binding at
least it'd fail fast if the setup is not done correctly.

-- 
Pavel Begunkov


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH RFC net-next v1 5/5] net: devmem: Implement TX path
  2025-02-05 12:41         ` Pavel Begunkov
@ 2025-02-05 20:22           ` Mina Almasry
  2025-02-05 22:16             ` Pavel Begunkov
  0 siblings, 1 reply; 26+ messages in thread
From: Mina Almasry @ 2025-02-05 20:22 UTC (permalink / raw)
  To: Pavel Begunkov
  Cc: Willem de Bruijn, netdev, linux-kernel, linux-doc, virtualization,
	kvm, linux-kselftest, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Simon Horman, Donald Hunter,
	Jonathan Corbet, Andrew Lunn, David Ahern, Michael S. Tsirkin,
	Jason Wang, Xuan Zhuo, Eugenio Pérez, Stefan Hajnoczi,
	Stefano Garzarella, Shuah Khan, Kaiyuan Zhang, Willem de Bruijn,
	Samiullah Khawaja, Stanislav Fomichev, Joe Damato, dw

On Wed, Feb 5, 2025 at 4:41 AM Pavel Begunkov <asml.silence@gmail.com> wrote:
>
> On 1/28/25 14:49, Willem de Bruijn wrote:
> >>>> +struct net_devmem_dmabuf_binding *
> >>>> +net_devmem_get_sockc_binding(struct sock *sk, struct sockcm_cookie *sockc)
> >>>> +{
> >>>> +     struct net_devmem_dmabuf_binding *binding;
> >>>> +     int err = 0;
> >>>> +
> >>>> +     binding = net_devmem_lookup_dmabuf(sockc->dmabuf_id);
> >>>
> >>> This lookup is from global xarray net_devmem_dmabuf_bindings.
> >>>
> >>> Is there a check that the socket is sending out through the device
> >>> to which this dmabuf was bound with netlink? Should there be?
> >>> (e.g., SO_BINDTODEVICE).
> >>>
> >>
> >> Yes, I think it may be an issue if the user triggers a send from a
> >> different netdevice, because indeed when we bind a dmabuf we bind it
> >> to a specific netdevice.
> >>
> >> One option is as you say to require TX sockets to be bound and to
> >> check that we're bound to the correct netdev. I also wonder if I can
> >> make this work without SO_BINDTODEVICE, by querying the netdev the
> >> sock is currently trying to send out on and doing a check in the
> >> tcp_sendmsg. I'm not sure if this is possible but I'll give it a look.
> >
> > I was a bit quick on mentioning SO_BINDTODEVICE. Agreed that it is
> > vastly preferable to not require that, but infer the device from
> > the connected TCP sock.
>
> I wonder why so? I'd imagine something like SO_BINDTODEVICE is a
> better way to go. The user has to do it anyway, otherwise packets
> might go to a different device and the user would suddenly start
> getting errors with no good way to alleviate them (apart from
> likes of SO_BINDTODEVICE). It's even worse if it works for a while
> but starts to unpredictably fail as time passes. With binding at
> least it'd fail fast if the setup is not done correctly.
>

I think there may be a misunderstanding. There is nothing preventing
the user from SO_BINDTODEVICE to make sure the socket is bound to the
ifindex, and the test changes in the latest series actually do this
binding.

It's just that on TX, we check what device we happen to be going out
over, and fail if we're going out of a different device.

There are setups where the device will always be correct even without
SO_BINDTODEVICE. Like if the host has only 1 interface or if the
egress IP is only reachable over 1 interface. I don't see much reason
to require the user to SO_BINDTODEVICE in these cases.

-- 
Thanks,
Mina

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH RFC net-next v1 5/5] net: devmem: Implement TX path
  2025-02-05 20:22           ` Mina Almasry
@ 2025-02-05 22:16             ` Pavel Begunkov
  2025-02-05 22:22               ` Pavel Begunkov
  0 siblings, 1 reply; 26+ messages in thread
From: Pavel Begunkov @ 2025-02-05 22:16 UTC (permalink / raw)
  To: Mina Almasry
  Cc: Willem de Bruijn, netdev, linux-kernel, linux-doc, virtualization,
	kvm, linux-kselftest, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Simon Horman, Donald Hunter,
	Jonathan Corbet, Andrew Lunn, David Ahern, Michael S. Tsirkin,
	Jason Wang, Xuan Zhuo, Eugenio Pérez, Stefan Hajnoczi,
	Stefano Garzarella, Shuah Khan, Kaiyuan Zhang, Willem de Bruijn,
	Samiullah Khawaja, Stanislav Fomichev, Joe Damato, dw

On 2/5/25 20:22, Mina Almasry wrote:
> On Wed, Feb 5, 2025 at 4:41 AM Pavel Begunkov <asml.silence@gmail.com> wrote:
>>
>> On 1/28/25 14:49, Willem de Bruijn wrote:
>>>>>> +struct net_devmem_dmabuf_binding *
>>>>>> +net_devmem_get_sockc_binding(struct sock *sk, struct sockcm_cookie *sockc)
>>>>>> +{
>>>>>> +     struct net_devmem_dmabuf_binding *binding;
>>>>>> +     int err = 0;
>>>>>> +
>>>>>> +     binding = net_devmem_lookup_dmabuf(sockc->dmabuf_id);
>>>>>
>>>>> This lookup is from global xarray net_devmem_dmabuf_bindings.
>>>>>
>>>>> Is there a check that the socket is sending out through the device
>>>>> to which this dmabuf was bound with netlink? Should there be?
>>>>> (e.g., SO_BINDTODEVICE).
>>>>>
>>>>
>>>> Yes, I think it may be an issue if the user triggers a send from a
>>>> different netdevice, because indeed when we bind a dmabuf we bind it
>>>> to a specific netdevice.
>>>>
>>>> One option is as you say to require TX sockets to be bound and to
>>>> check that we're bound to the correct netdev. I also wonder if I can
>>>> make this work without SO_BINDTODEVICE, by querying the netdev the
>>>> sock is currently trying to send out on and doing a check in the
>>>> tcp_sendmsg. I'm not sure if this is possible but I'll give it a look.
>>>
>>> I was a bit quick on mentioning SO_BINDTODEVICE. Agreed that it is
>>> vastly preferable to not require that, but infer the device from
>>> the connected TCP sock.
>>
>> I wonder why so? I'd imagine something like SO_BINDTODEVICE is a
>> better way to go. The user has to do it anyway, otherwise packets
>> might go to a different device and the user would suddenly start
>> getting errors with no good way to alleviate them (apart from
>> likes of SO_BINDTODEVICE). It's even worse if it works for a while
>> but starts to unpredictably fail as time passes. With binding at
>> least it'd fail fast if the setup is not done correctly.
>>
> 
> I think there may be a misunderstanding. There is nothing preventing
> the user from SO_BINDTODEVICE to make sure the socket is bound to the

Right, not arguing otherwise

> ifindex, and the test changes in the latest series actually do this
> binding.
> 
> It's just that on TX, we check what device we happen to be going out
> over, and fail if we're going out of a different device.
> 
> There are setups where the device will always be correct even without
> SO_BINDTODEVICE. Like if the host has only 1 interface or if the
> egress IP is only reachable over 1 interface. I don't see much reason
> to require the user to SO_BINDTODEVICE in these cases.

That's exactly the problem. People would test their code with one setup
where it works just fine, but then there will be a rare user of a
library used by some other framework or a lonely server where it starts
to fail for no apparent reason while "it worked before and nothing has
changed". It's more predictable if enforced.

I don't think we'd care about the setup overhead of one extra ioctl() here(?),
but with this option we'd need to be careful about not racing with
rebinding, if it's allowed.

-- 
Pavel Begunkov


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH RFC net-next v1 5/5] net: devmem: Implement TX path
  2025-02-05 22:16             ` Pavel Begunkov
@ 2025-02-05 22:22               ` Pavel Begunkov
  2025-02-10 21:14                 ` Mina Almasry
  0 siblings, 1 reply; 26+ messages in thread
From: Pavel Begunkov @ 2025-02-05 22:22 UTC (permalink / raw)
  To: Mina Almasry
  Cc: Willem de Bruijn, netdev, linux-kernel, linux-doc, virtualization,
	kvm, linux-kselftest, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Simon Horman, Donald Hunter,
	Jonathan Corbet, Andrew Lunn, David Ahern, Michael S. Tsirkin,
	Jason Wang, Xuan Zhuo, Eugenio Pérez, Stefan Hajnoczi,
	Stefano Garzarella, Shuah Khan, Kaiyuan Zhang, Willem de Bruijn,
	Samiullah Khawaja, Stanislav Fomichev, Joe Damato, dw

On 2/5/25 22:16, Pavel Begunkov wrote:
> On 2/5/25 20:22, Mina Almasry wrote:
>> On Wed, Feb 5, 2025 at 4:41 AM Pavel Begunkov <asml.silence@gmail.com> wrote:
>>>
>>> On 1/28/25 14:49, Willem de Bruijn wrote:
>>>>>>> +struct net_devmem_dmabuf_binding *
>>>>>>> +net_devmem_get_sockc_binding(struct sock *sk, struct sockcm_cookie *sockc)
>>>>>>> +{
>>>>>>> +     struct net_devmem_dmabuf_binding *binding;
>>>>>>> +     int err = 0;
>>>>>>> +
>>>>>>> +     binding = net_devmem_lookup_dmabuf(sockc->dmabuf_id);
>>>>>>
>>>>>> This lookup is from global xarray net_devmem_dmabuf_bindings.
>>>>>>
>>>>>> Is there a check that the socket is sending out through the device
>>>>>> to which this dmabuf was bound with netlink? Should there be?
>>>>>> (e.g., SO_BINDTODEVICE).
>>>>>>
>>>>>
>>>>> Yes, I think it may be an issue if the user triggers a send from a
>>>>> different netdevice, because indeed when we bind a dmabuf we bind it
>>>>> to a specific netdevice.
>>>>>
>>>>> One option is as you say to require TX sockets to be bound and to
>>>>> check that we're bound to the correct netdev. I also wonder if I can
>>>>> make this work without SO_BINDTODEVICE, by querying the netdev the
>>>>> sock is currently trying to send out on and doing a check in the
>>>>> tcp_sendmsg. I'm not sure if this is possible but I'll give it a look.
>>>>
>>>> I was a bit quick on mentioning SO_BINDTODEVICE. Agreed that it is
>>>> vastly preferable to not require that, but infer the device from
>>>> the connected TCP sock.
>>>
>>> I wonder why so? I'd imagine something like SO_BINDTODEVICE is a
>>> better way to go. The user has to do it anyway, otherwise packets
>>> might go to a different device and the user would suddenly start
>>> getting errors with no good way to alleviate them (apart from
>>> likes of SO_BINDTODEVICE). It's even worse if it works for a while
>>> but starts to unpredictably fail as time passes. With binding at
>>> least it'd fail fast if the setup is not done correctly.
>>>
>>
>> I think there may be a misunderstanding. There is nothing preventing
>> the user from SO_BINDTODEVICE to make sure the socket is bound to the
> 
> Right, not arguing otherwise
> 
>> ifindex, and the test changes in the latest series actually do this
>> binding.
>>
>> It's just that on TX, we check what device we happen to be going out
>> over, and fail if we're going out of a different device.
>>
>> There are setups where the device will always be correct even without
>> SO_BINDTODEVICE. Like if the host has only 1 interface or if the
>> egress IP is only reachable over 1 interface. I don't see much reason
>> to require the user to SO_BINDTODEVICE in these cases.
> 
> That's exactly the problem. People would test their code with one setup
> where it works just fine, but then there will be a rare user of a
> library used by some other framework or a lonely server where it starts
> to fail for no apparent reason while "it worked before and nothing has
> changed". It's more predictable if enforced.
> 
> I don't think we'd care about the setup overhead of one extra ioctl() here(?),
> but with this option we'd need to be careful about not racing with
> rebinding, if it's allowed.

FWIW, it's surely not a big deal, but it makes for a clearer API.
Hence my curiosity about what the other reasons are.

-- 
Pavel Begunkov


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH RFC net-next v1 5/5] net: devmem: Implement TX path
  2025-02-05 22:22               ` Pavel Begunkov
@ 2025-02-10 21:14                 ` Mina Almasry
  0 siblings, 0 replies; 26+ messages in thread
From: Mina Almasry @ 2025-02-10 21:14 UTC (permalink / raw)
  To: Pavel Begunkov
  Cc: Willem de Bruijn, netdev, linux-kernel, linux-doc, virtualization,
	kvm, linux-kselftest, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Simon Horman, Donald Hunter,
	Jonathan Corbet, Andrew Lunn, David Ahern, Michael S. Tsirkin,
	Jason Wang, Xuan Zhuo, Eugenio Pérez, Stefan Hajnoczi,
	Stefano Garzarella, Shuah Khan, Kaiyuan Zhang, Willem de Bruijn,
	Samiullah Khawaja, Stanislav Fomichev, Joe Damato, dw

On Wed, Feb 5, 2025 at 2:22 PM Pavel Begunkov <asml.silence@gmail.com> wrote:
>
> On 2/5/25 22:16, Pavel Begunkov wrote:
> > On 2/5/25 20:22, Mina Almasry wrote:
> >> On Wed, Feb 5, 2025 at 4:41 AM Pavel Begunkov <asml.silence@gmail.com> wrote:
> >>>
> >>> On 1/28/25 14:49, Willem de Bruijn wrote:
> >>>>>>> +struct net_devmem_dmabuf_binding *
> >>>>>>> +net_devmem_get_sockc_binding(struct sock *sk, struct sockcm_cookie *sockc)
> >>>>>>> +{
> >>>>>>> +     struct net_devmem_dmabuf_binding *binding;
> >>>>>>> +     int err = 0;
> >>>>>>> +
> >>>>>>> +     binding = net_devmem_lookup_dmabuf(sockc->dmabuf_id);
> >>>>>>
> >>>>>> This lookup is from global xarray net_devmem_dmabuf_bindings.
> >>>>>>
> >>>>>> Is there a check that the socket is sending out through the device
> >>>>>> to which this dmabuf was bound with netlink? Should there be?
> >>>>>> (e.g., SO_BINDTODEVICE).
> >>>>>>
> >>>>>
> >>>>> Yes, I think it may be an issue if the user triggers a send from a
> >>>>> different netdevice, because indeed when we bind a dmabuf we bind it
> >>>>> to a specific netdevice.
> >>>>>
> >>>>> One option is as you say to require TX sockets to be bound and to
> >>>>> check that we're bound to the correct netdev. I also wonder if I can
> >>>>> make this work without SO_BINDTODEVICE, by querying the netdev the
> >>>>> sock is currently trying to send out on and doing a check in the
> >>>>> tcp_sendmsg. I'm not sure if this is possible but I'll give it a look.
> >>>>
> >>>> I was a bit quick on mentioning SO_BINDTODEVICE. Agreed that it is
> >>>> vastly preferable to not require that, but infer the device from
> >>>> the connected TCP sock.
> >>>
> >>> I wonder why so? I'd imagine something like SO_BINDTODEVICE is a
> >>> better way to go. The user has to do it anyway, otherwise packets
> >>> might go to a different device and the user would suddenly start
> >>> getting errors with no good way to alleviate them (apart from
> >>> likes of SO_BINDTODEVICE). It's even worse if it works for a while
> >>> but starts to unpredictably fail as time passes. With binding at
> >>> least it'd fail fast if the setup is not done correctly.
> >>>
> >>
> >> I think there may be a misunderstanding. There is nothing preventing
> >> the user from SO_BINDTODEVICE to make sure the socket is bound to the
> >
> > Right, not arguing otherwise
> >
> >> ifindex, and the test changes in the latest series actually do this
> >> binding.
> >>
> >> It's just that on TX, we check what device we happen to be going out
> >> over, and fail if we're going out of a different device.
> >>
> >> There are setups where the device will always be correct even without
> >> SO_BINDTODEVICE. Like if the host has only 1 interface or if the
> >> egress IP is only reachable over 1 interface. I don't see much reason
> >> to require the user to SO_BINDTODEVICE in these cases.
> >

For my taste it's slightly too defensive for the kernel to fail a
perfectly valid operation because it detects that the user is not
"doing things properly". I can't think of a precedent for this in the
kernel.

Additionally there may be tricky implementation details. I think
sk->sk_bound_dev_if which SO_BINDTODEVICE set can be changed
concurrently.

FWIW I can add a line to the documentation saying it's recommended to
SO_BINDTODEVICE, and the selftest (that is demonstrator code) does
this anyway.

-- 
Thanks,
Mina

^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2025-02-10 21:14 UTC | newest]

Thread overview: 26+ messages
2024-12-21  0:42 [PATCH RFC net-next v1 0/5] Device memory TCP TX Mina Almasry
2024-12-21  0:42 ` [PATCH RFC net-next v1 1/5] net: add devmem TCP TX documentation Mina Almasry
2024-12-21  4:56   ` Stanislav Fomichev
2025-01-27 22:45     ` Mina Almasry
2025-01-28  3:51       ` Stanislav Fomichev
2024-12-21  0:42 ` [PATCH RFC net-next v1 2/5] selftests: ncdevmem: Implement devmem TCP TX Mina Almasry
2024-12-21  4:57   ` Stanislav Fomichev
2024-12-26 21:24   ` Willem de Bruijn
2024-12-21  0:42 ` [PATCH RFC net-next v1 3/5] net: add get_netmem/put_netmem support Mina Almasry
2024-12-26 19:07   ` Stanislav Fomichev
2025-01-27 22:47     ` Mina Almasry
2024-12-21  0:42 ` [PATCH RFC net-next v1 4/5] net: devmem TCP tx netlink api Mina Almasry
2024-12-21  0:42 ` [PATCH RFC net-next v1 5/5] net: devmem: Implement TX path Mina Almasry
2024-12-21  5:09   ` Stanislav Fomichev
2024-12-26 19:10     ` Stanislav Fomichev
2025-01-27 22:52       ` Mina Almasry
2024-12-26 21:52   ` Willem de Bruijn
2025-01-28  0:06     ` Mina Almasry
2025-01-28 14:49       ` Willem de Bruijn
2025-02-05 12:41         ` Pavel Begunkov
2025-02-05 20:22           ` Mina Almasry
2025-02-05 22:16             ` Pavel Begunkov
2025-02-05 22:22               ` Pavel Begunkov
2025-02-10 21:14                 ` Mina Almasry
2024-12-28 19:28   ` David Ahern
2024-12-21  4:53 ` [PATCH RFC net-next v1 0/5] Device memory TCP TX Stanislav Fomichev
