io-uring.vger.kernel.org archive mirror
* [PATCH v3 0/3] io_uring zcrx ifq sharing
@ 2025-10-26 17:34 David Wei
  2025-10-26 17:34 ` [PATCH v3 1/3] io_uring/rsrc: rename and export io_lock_two_rings() David Wei
                   ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: David Wei @ 2025-10-26 17:34 UTC (permalink / raw)
  To: io-uring, netdev; +Cc: Jens Axboe, Pavel Begunkov

Each ifq is bound to a HW RX queue, with no way to share it across
multiple rings. It is possible that one ring will not be able to fully
saturate an entire HW RX queue due to userspace work. There are two ways
to handle more work:

  1. Move work to other threads, but pay context switch overhead and
     suffer cold caches.
  2. Add more rings with ifqs, but HW RX queues are a limited resource.

This patchset adds a way for multiple rings to share the same underlying
src ifq that is bound to a HW RX queue. Rings with shared ifqs can issue
io_recvzc on zero copy sockets, just like the src ring.

Userspace is expected to create rings in separate threads, not separate
processes, so that all rings share the same address space. This is
because the sharing and synchronisation of refill rings is done purely
in userspace with no kernel involvement, e.g. dst rings do not mmap the
refill ring. Also, userspace must distribute zero copy sockets steered
into the same HW RX queue across the rings sharing the ifq.

v3:
- drop ifq->proxy
- use dec_and_test to clean up ifq

v2:
- split patch

David Wei (3):
  io_uring/rsrc: rename and export io_lock_two_rings()
  io_uring/zcrx: add refcount to struct io_zcrx_ifq
  io_uring/zcrx: share an ifq between rings

 include/uapi/linux/io_uring.h |  4 ++
 io_uring/rsrc.c               |  4 +-
 io_uring/rsrc.h               |  1 +
 io_uring/zcrx.c               | 92 +++++++++++++++++++++++++++++++++--
 io_uring/zcrx.h               |  2 +
 5 files changed, 97 insertions(+), 6 deletions(-)

-- 
2.47.3


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH v3 1/3] io_uring/rsrc: rename and export io_lock_two_rings()
  2025-10-26 17:34 [PATCH v3 0/3] io_uring zcrx ifq sharing David Wei
@ 2025-10-26 17:34 ` David Wei
  2025-10-27 10:04   ` Pavel Begunkov
  2025-10-26 17:34 ` [PATCH v3 2/3] io_uring/zcrx: add refcount to struct io_zcrx_ifq David Wei
  2025-10-26 17:34 ` [PATCH v3 3/3] io_uring/zcrx: share an ifq between rings David Wei
  2 siblings, 1 reply; 12+ messages in thread
From: David Wei @ 2025-10-26 17:34 UTC (permalink / raw)
  To: io-uring, netdev; +Cc: Jens Axboe, Pavel Begunkov

Rename lock_two_rings() to io_lock_two_rings() and export it. This will
be used when sharing a src ifq owned by one ring with another ring.
During this process both rings need to be locked in a deterministic
order, similar to the current user, io_clone_buffers().

Signed-off-by: David Wei <dw@davidwei.uk>
---
 io_uring/rsrc.c | 4 ++--
 io_uring/rsrc.h | 1 +
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index d787c16dc1c3..d245b7592eee 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -1148,7 +1148,7 @@ int io_import_reg_buf(struct io_kiocb *req, struct iov_iter *iter,
 }
 
 /* Lock two rings at once. The rings must be different! */
-static void lock_two_rings(struct io_ring_ctx *ctx1, struct io_ring_ctx *ctx2)
+void io_lock_two_rings(struct io_ring_ctx *ctx1, struct io_ring_ctx *ctx2)
 {
 	if (ctx1 > ctx2)
 		swap(ctx1, ctx2);
@@ -1299,7 +1299,7 @@ int io_register_clone_buffers(struct io_ring_ctx *ctx, void __user *arg)
 	src_ctx = file->private_data;
 	if (src_ctx != ctx) {
 		mutex_unlock(&ctx->uring_lock);
-		lock_two_rings(ctx, src_ctx);
+		io_lock_two_rings(ctx, src_ctx);
 
 		if (src_ctx->submitter_task &&
 		    src_ctx->submitter_task != current) {
diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h
index a3ca6ba66596..b002c4a5a8cd 100644
--- a/io_uring/rsrc.h
+++ b/io_uring/rsrc.h
@@ -70,6 +70,7 @@ int io_import_reg_vec(int ddir, struct iov_iter *iter,
 int io_prep_reg_iovec(struct io_kiocb *req, struct iou_vec *iv,
 			const struct iovec __user *uvec, size_t uvec_segs);
 
+void io_lock_two_rings(struct io_ring_ctx *ctx1, struct io_ring_ctx *ctx2);
 int io_register_clone_buffers(struct io_ring_ctx *ctx, void __user *arg);
 int io_sqe_buffers_unregister(struct io_ring_ctx *ctx);
 int io_sqe_buffers_register(struct io_ring_ctx *ctx, void __user *arg,
-- 
2.47.3



* [PATCH v3 2/3] io_uring/zcrx: add refcount to struct io_zcrx_ifq
  2025-10-26 17:34 [PATCH v3 0/3] io_uring zcrx ifq sharing David Wei
  2025-10-26 17:34 ` [PATCH v3 1/3] io_uring/rsrc: rename and export io_lock_two_rings() David Wei
@ 2025-10-26 17:34 ` David Wei
  2025-10-26 17:34 ` [PATCH v3 3/3] io_uring/zcrx: share an ifq between rings David Wei
  2 siblings, 0 replies; 12+ messages in thread
From: David Wei @ 2025-10-26 17:34 UTC (permalink / raw)
  To: io-uring, netdev; +Cc: Jens Axboe, Pavel Begunkov

Add a refcount to struct io_zcrx_ifq to track the number of rings that
share it. For now, this is only ever 1 i.e. not shared, but will be
larger once shared ifqs are added.

This ref is decremented and tested in io_shutdown_zcrx_ifqs() to ensure
that an ifq is not cleaned up while there are still rings using it.

It's important to note that io_shutdown_zcrx_ifqs() may be called in a
loop in io_ring_exit_work() while waiting for ctx->refs to drop to 0.
Use XArray marks to ensure that the refcount dec only happens once.

The cleanup functions io_zcrx_scrub() and io_close_queue() only take ifq
locks and do not need anything from the ring ctx, so it is safe to call
them from any ring.

Opted for a bog standard refcount_t. The inc and dec ops are expected to
happen during the slow setup/teardown paths only, and a src ifq is only
expected to be shared a handful of times at most.

Signed-off-by: David Wei <dw@davidwei.uk>
---
 io_uring/zcrx.c | 18 ++++++++++++++++--
 io_uring/zcrx.h |  2 ++
 2 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
index a816f5902091..569cc0338acb 100644
--- a/io_uring/zcrx.c
+++ b/io_uring/zcrx.c
@@ -587,6 +587,7 @@ int io_register_zcrx_ifq(struct io_ring_ctx *ctx,
 	if (!ifq)
 		return -ENOMEM;
 	ifq->rq_entries = reg.rq_entries;
+	refcount_set(&ifq->refs, 1);
 
 	scoped_guard(mutex, &ctx->mmap_lock) {
 		/* preallocate id */
@@ -730,8 +731,21 @@ void io_shutdown_zcrx_ifqs(struct io_ring_ctx *ctx)
 	lockdep_assert_held(&ctx->uring_lock);
 
 	xa_for_each(&ctx->zcrx_ctxs, index, ifq) {
-		io_zcrx_scrub(ifq);
-		io_close_queue(ifq);
+		if (xa_get_mark(&ctx->zcrx_ctxs, index, XA_MARK_0))
+			continue;
+
+		/* Safe to clean up from any ring. */
+		if (refcount_dec_and_test(&ifq->refs)) {
+			io_zcrx_scrub(ifq);
+			io_close_queue(ifq);
+		}
+
+		/*
+		 * This is called in a loop in io_ring_exit_work() until
+		 * ctx->refs drops to 0. Use marks to ensure refcounts are only
+		 * decremented once per ifq per ring.
+		 */
+		xa_set_mark(&ctx->zcrx_ctxs, index, XA_MARK_0);
 	}
 }
 
diff --git a/io_uring/zcrx.h b/io_uring/zcrx.h
index 33ef61503092..566d519cbaf6 100644
--- a/io_uring/zcrx.h
+++ b/io_uring/zcrx.h
@@ -60,6 +60,8 @@ struct io_zcrx_ifq {
 	 */
 	struct mutex			pp_lock;
 	struct io_mapped_region		region;
+
+	refcount_t			refs;
 };
 
 #if defined(CONFIG_IO_URING_ZCRX)
-- 
2.47.3



* [PATCH v3 3/3] io_uring/zcrx: share an ifq between rings
  2025-10-26 17:34 [PATCH v3 0/3] io_uring zcrx ifq sharing David Wei
  2025-10-26 17:34 ` [PATCH v3 1/3] io_uring/rsrc: rename and export io_lock_two_rings() David Wei
  2025-10-26 17:34 ` [PATCH v3 2/3] io_uring/zcrx: add refcount to struct io_zcrx_ifq David Wei
@ 2025-10-26 17:34 ` David Wei
  2025-10-27 10:20   ` Pavel Begunkov
  2 siblings, 1 reply; 12+ messages in thread
From: David Wei @ 2025-10-26 17:34 UTC (permalink / raw)
  To: io-uring, netdev; +Cc: Jens Axboe, Pavel Begunkov

Add a way to share an ifq owned by a src ring, i.e. one that is real and
bound to a HW RX queue, with other rings. This is done by passing a new
flag IORING_ZCRX_IFQ_REG_SHARE in the registration struct
io_uring_zcrx_ifq_reg, alongside the fd of the src ring and the id of
the ifq to be shared.

To prevent the src ring or ifq from being cleaned up or freed while
there are still shared ifqs, take the appropriate refs on the src ring
(ctx->refs) and src ifq (ifq->refs).

Signed-off-by: David Wei <dw@davidwei.uk>
---
 include/uapi/linux/io_uring.h |  4 ++
 io_uring/zcrx.c               | 74 ++++++++++++++++++++++++++++++++++-
 2 files changed, 76 insertions(+), 2 deletions(-)

diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index 04797a9b76bc..4da4552a4215 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -1063,6 +1063,10 @@ struct io_uring_zcrx_area_reg {
 	__u64	__resv2[2];
 };
 
+enum io_uring_zcrx_ifq_reg_flags {
+	IORING_ZCRX_IFQ_REG_SHARE	= 1,
+};
+
 /*
  * Argument for IORING_REGISTER_ZCRX_IFQ
  */
diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
index 569cc0338acb..7418c959390a 100644
--- a/io_uring/zcrx.c
+++ b/io_uring/zcrx.c
@@ -22,10 +22,10 @@
 #include <uapi/linux/io_uring.h>
 
 #include "io_uring.h"
-#include "kbuf.h"
 #include "memmap.h"
 #include "zcrx.h"
 #include "rsrc.h"
+#include "register.h"
 
 #define IO_ZCRX_AREA_SUPPORTED_FLAGS	(IORING_ZCRX_AREA_DMABUF)
 
@@ -541,6 +541,67 @@ struct io_mapped_region *io_zcrx_get_region(struct io_ring_ctx *ctx,
 	return ifq ? &ifq->region : NULL;
 }
 
+static int io_share_zcrx_ifq(struct io_ring_ctx *ctx,
+			     struct io_uring_zcrx_ifq_reg __user *arg,
+			     struct io_uring_zcrx_ifq_reg *reg)
+{
+	struct io_ring_ctx *src_ctx;
+	struct io_zcrx_ifq *src_ifq;
+	struct file *file;
+	int src_fd, ret;
+	u32 src_id, id;
+
+	src_fd = reg->if_idx;
+	src_id = reg->if_rxq;
+
+	file = io_uring_register_get_file(src_fd, false);
+	if (IS_ERR(file))
+		return PTR_ERR(file);
+
+	src_ctx = file->private_data;
+	if (src_ctx == ctx)
+		return -EBADFD;
+
+	mutex_unlock(&ctx->uring_lock);
+	io_lock_two_rings(ctx, src_ctx);
+
+	ret = -EINVAL;
+	src_ifq = xa_load(&src_ctx->zcrx_ctxs, src_id);
+	if (!src_ifq)
+		goto err_unlock;
+
+	percpu_ref_get(&src_ctx->refs);
+	refcount_inc(&src_ifq->refs);
+
+	scoped_guard(mutex, &ctx->mmap_lock) {
+		ret = xa_alloc(&ctx->zcrx_ctxs, &id, NULL, xa_limit_31b, GFP_KERNEL);
+		if (ret)
+			goto err_unlock;
+
+		ret = -ENOMEM;
+		if (xa_store(&ctx->zcrx_ctxs, id, src_ifq, GFP_KERNEL)) {
+			xa_erase(&ctx->zcrx_ctxs, id);
+			goto err_unlock;
+		}
+	}
+
+	reg->zcrx_id = id;
+	if (copy_to_user(arg, reg, sizeof(*reg))) {
+		ret = -EFAULT;
+		goto err;
+	}
+	mutex_unlock(&src_ctx->uring_lock);
+	fput(file);
+	return 0;
+err:
+	scoped_guard(mutex, &ctx->mmap_lock)
+		xa_erase(&ctx->zcrx_ctxs, id);
+err_unlock:
+	mutex_unlock(&src_ctx->uring_lock);
+	fput(file);
+	return ret;
+}
+
 int io_register_zcrx_ifq(struct io_ring_ctx *ctx,
 			  struct io_uring_zcrx_ifq_reg __user *arg)
 {
@@ -566,6 +627,8 @@ int io_register_zcrx_ifq(struct io_ring_ctx *ctx,
 		return -EINVAL;
 	if (copy_from_user(&reg, arg, sizeof(reg)))
 		return -EFAULT;
+	if (reg.flags & IORING_ZCRX_IFQ_REG_SHARE)
+		return io_share_zcrx_ifq(ctx, arg, &reg);
 	if (copy_from_user(&rd, u64_to_user_ptr(reg.region_ptr), sizeof(rd)))
 		return -EFAULT;
 	if (!mem_is_zero(&reg.__resv, sizeof(reg.__resv)) ||
@@ -663,7 +726,7 @@ void io_unregister_zcrx_ifqs(struct io_ring_ctx *ctx)
 			if (ifq)
 				xa_erase(&ctx->zcrx_ctxs, id);
 		}
-		if (!ifq)
+		if (!ifq || ctx != ifq->ctx)
 			break;
 		io_zcrx_ifq_free(ifq);
 	}
@@ -734,6 +797,13 @@ void io_shutdown_zcrx_ifqs(struct io_ring_ctx *ctx)
 		if (xa_get_mark(&ctx->zcrx_ctxs, index, XA_MARK_0))
 			continue;
 
+		/*
+		 * Only shared ifqs want to put ctx->refs on the owning ifq
+		 * ring. This matches the get in io_share_zcrx_ifq().
+		 */
+		if (ctx != ifq->ctx)
+			percpu_ref_put(&ifq->ctx->refs);
+
 		/* Safe to clean up from any ring. */
 		if (refcount_dec_and_test(&ifq->refs)) {
 			io_zcrx_scrub(ifq);
-- 
2.47.3



* Re: [PATCH v3 1/3] io_uring/rsrc: rename and export io_lock_two_rings()
  2025-10-26 17:34 ` [PATCH v3 1/3] io_uring/rsrc: rename and export io_lock_two_rings() David Wei
@ 2025-10-27 10:04   ` Pavel Begunkov
  2025-10-28 14:54     ` David Wei
  0 siblings, 1 reply; 12+ messages in thread
From: Pavel Begunkov @ 2025-10-27 10:04 UTC (permalink / raw)
  To: David Wei, io-uring, netdev; +Cc: Jens Axboe

On 10/26/25 17:34, David Wei wrote:
> Rename lock_two_rings() to io_lock_two_rings() and export. This will be
> used when sharing a src ifq owned by one ring with another ring. During
> this process both rings need to be locked in a deterministic order,
> similar to the current user io_clone_buffers().

unlock();
double_lock();

It's quite a bad pattern just like any temporary unlocks in the
registration path, it gives a lot of space for exploitation.

Ideally, it'd be

lock(ctx1);
zcrx = grab_zcrx(ctx1, id); // with some refcounting inside
unlock(ctx1);

lock(ctx2);
install(ctx2, zcrx);
unlock(ctx2);

And as discussed, we need to think about turning it into a temp
file, bc of sync, and it's also hard to send an io_uring fd.
Though, that'd need moving bits around to avoid refcounting
cycles.

-- 
Pavel Begunkov



* Re: [PATCH v3 3/3] io_uring/zcrx: share an ifq between rings
  2025-10-26 17:34 ` [PATCH v3 3/3] io_uring/zcrx: share an ifq between rings David Wei
@ 2025-10-27 10:20   ` Pavel Begunkov
  2025-10-27 11:47     ` Pavel Begunkov
  2025-10-28 14:55     ` David Wei
  0 siblings, 2 replies; 12+ messages in thread
From: Pavel Begunkov @ 2025-10-27 10:20 UTC (permalink / raw)
  To: David Wei, io-uring, netdev; +Cc: Jens Axboe

On 10/26/25 17:34, David Wei wrote:
> Add a way to share an ifq from a src ring that is real i.e. bound to a
> HW RX queue with other rings. This is done by passing a new flag
> IORING_ZCRX_IFQ_REG_SHARE in the registration struct
> io_uring_zcrx_ifq_reg, alongside the fd of the src ring and the ifq id
> to be shared.
> 
> To prevent the src ring or ifq from being cleaned up or freed while
> there are still shared ifqs, take the appropriate refs on the src ring
> (ctx->refs) and src ifq (ifq->refs).
> 
> Signed-off-by: David Wei <dw@davidwei.uk>
> ---
>   include/uapi/linux/io_uring.h |  4 ++
>   io_uring/zcrx.c               | 74 ++++++++++++++++++++++++++++++++++-
>   2 files changed, 76 insertions(+), 2 deletions(-)
> 
> diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
> index 04797a9b76bc..4da4552a4215 100644
> --- a/include/uapi/linux/io_uring.h
> +++ b/include/uapi/linux/io_uring.h
> @@ -1063,6 +1063,10 @@ struct io_uring_zcrx_area_reg {
>   	__u64	__resv2[2];
>   };
>   
> +enum io_uring_zcrx_ifq_reg_flags {
> +	IORING_ZCRX_IFQ_REG_SHARE	= 1,
> +};
> +
>   /*
>    * Argument for IORING_REGISTER_ZCRX_IFQ
>    */
> diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
> index 569cc0338acb..7418c959390a 100644
> --- a/io_uring/zcrx.c
> +++ b/io_uring/zcrx.c
> @@ -22,10 +22,10 @@
>   #include <uapi/linux/io_uring.h>
>   
>   #include "io_uring.h"
> -#include "kbuf.h"
>   #include "memmap.h"
>   #include "zcrx.h"
>   #include "rsrc.h"
> +#include "register.h"
>   
>   #define IO_ZCRX_AREA_SUPPORTED_FLAGS	(IORING_ZCRX_AREA_DMABUF)
>   
> @@ -541,6 +541,67 @@ struct io_mapped_region *io_zcrx_get_region(struct io_ring_ctx *ctx,
>   	return ifq ? &ifq->region : NULL;
>   }
>   
> +static int io_share_zcrx_ifq(struct io_ring_ctx *ctx,
> +			     struct io_uring_zcrx_ifq_reg __user *arg,
> +			     struct io_uring_zcrx_ifq_reg *reg)
> +{
> +	struct io_ring_ctx *src_ctx;
> +	struct io_zcrx_ifq *src_ifq;
> +	struct file *file;
> +	int src_fd, ret;
> +	u32 src_id, id;
> +
> +	src_fd = reg->if_idx;
> +	src_id = reg->if_rxq;
> +
> +	file = io_uring_register_get_file(src_fd, false);
> +	if (IS_ERR(file))
> +		return PTR_ERR(file);
> +
> +	src_ctx = file->private_data;
> +	if (src_ctx == ctx)
> +		return -EBADFD;
> +
> +	mutex_unlock(&ctx->uring_lock);
> +	io_lock_two_rings(ctx, src_ctx);
> +
> +	ret = -EINVAL;
> +	src_ifq = xa_load(&src_ctx->zcrx_ctxs, src_id);
> +	if (!src_ifq)
> +		goto err_unlock;
> +
> +	percpu_ref_get(&src_ctx->refs);
> +	refcount_inc(&src_ifq->refs);
> +
> +	scoped_guard(mutex, &ctx->mmap_lock) {
> +		ret = xa_alloc(&ctx->zcrx_ctxs, &id, NULL, xa_limit_31b, GFP_KERNEL);
> +		if (ret)
> +			goto err_unlock;
> +
> +		ret = -ENOMEM;
> +		if (xa_store(&ctx->zcrx_ctxs, id, src_ifq, GFP_KERNEL)) {
> +			xa_erase(&ctx->zcrx_ctxs, id);
> +			goto err_unlock;
> +		}

It's just xa_alloc(..., src_ifq, ...);

> +	}
> +
> +	reg->zcrx_id = id;
> +	if (copy_to_user(arg, reg, sizeof(*reg))) {
> +		ret = -EFAULT;
> +		goto err;
> +	}

Better to do that before publishing zcrx into ctx->zcrx_ctxs

> +	mutex_unlock(&src_ctx->uring_lock);
> +	fput(file);
> +	return 0;
> +err:
> +	scoped_guard(mutex, &ctx->mmap_lock)
> +		xa_erase(&ctx->zcrx_ctxs, id);
> +err_unlock:
> +	mutex_unlock(&src_ctx->uring_lock);
> +	fput(file);
> +	return ret;
> +}
> +
>   int io_register_zcrx_ifq(struct io_ring_ctx *ctx,
>   			  struct io_uring_zcrx_ifq_reg __user *arg)
>   {
> @@ -566,6 +627,8 @@ int io_register_zcrx_ifq(struct io_ring_ctx *ctx,
>   		return -EINVAL;
>   	if (copy_from_user(&reg, arg, sizeof(reg)))
>   		return -EFAULT;
> +	if (reg.flags & IORING_ZCRX_IFQ_REG_SHARE)
> +		return io_share_zcrx_ifq(ctx, arg, &reg);
>   	if (copy_from_user(&rd, u64_to_user_ptr(reg.region_ptr), sizeof(rd)))
>   		return -EFAULT;
>   	if (!mem_is_zero(&reg.__resv, sizeof(reg.__resv)) ||
> @@ -663,7 +726,7 @@ void io_unregister_zcrx_ifqs(struct io_ring_ctx *ctx)
>   			if (ifq)
>   				xa_erase(&ctx->zcrx_ctxs, id);
>   		}
> -		if (!ifq)
> +		if (!ifq || ctx != ifq->ctx)
>   			break;
>   		io_zcrx_ifq_free(ifq);
>   	}
> @@ -734,6 +797,13 @@ void io_shutdown_zcrx_ifqs(struct io_ring_ctx *ctx)
>   		if (xa_get_mark(&ctx->zcrx_ctxs, index, XA_MARK_0))
>   			continue;
>   
> +		/*
> +		 * Only shared ifqs want to put ctx->refs on the owning ifq
> +		 * ring. This matches the get in io_share_zcrx_ifq().
> +		 */
> +		if (ctx != ifq->ctx)
> +			percpu_ref_put(&ifq->ctx->refs);

After you put this and ifq->refs below down, the zcrx object can get
destroyed, but this ctx might still have requests using the object.
Waiting on ctx refs would ensure requests are killed, but that'd
create a cycle.

> +
>   		/* Safe to clean up from any ring. */
>   		if (refcount_dec_and_test(&ifq->refs)) {
>   			io_zcrx_scrub(ifq);

-- 
Pavel Begunkov



* Re: [PATCH v3 3/3] io_uring/zcrx: share an ifq between rings
  2025-10-27 10:20   ` Pavel Begunkov
@ 2025-10-27 11:47     ` Pavel Begunkov
  2025-10-27 15:10       ` David Wei
  2025-10-28 14:55     ` David Wei
  1 sibling, 1 reply; 12+ messages in thread
From: Pavel Begunkov @ 2025-10-27 11:47 UTC (permalink / raw)
  To: David Wei, io-uring, netdev; +Cc: Jens Axboe

On 10/27/25 10:20, Pavel Begunkov wrote:
> On 10/26/25 17:34, David Wei wrote:
>> Add a way to share an ifq from a src ring that is real i.e. bound to a
>> HW RX queue with other rings. This is done by passing a new flag
>> IORING_ZCRX_IFQ_REG_SHARE in the registration struct
>> io_uring_zcrx_ifq_reg, alongside the fd of the src ring and the ifq id
>> to be shared.
>>
>> To prevent the src ring or ifq from being cleaned up or freed while
>> there are still shared ifqs, take the appropriate refs on the src ring
>> (ctx->refs) and src ifq (ifq->refs).
>>
>> Signed-off-by: David Wei <dw@davidwei.uk>
>> ---
>>   include/uapi/linux/io_uring.h |  4 ++
>>   io_uring/zcrx.c               | 74 ++++++++++++++++++++++++++++++++++-
>>   2 files changed, 76 insertions(+), 2 deletions(-)
>>
>> diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
>> index 04797a9b76bc..4da4552a4215 100644
>> --- a/include/uapi/linux/io_uring.h
>> +++ b/include/uapi/linux/io_uring.h
>> @@ -1063,6 +1063,10 @@ struct io_uring_zcrx_area_reg {
>>       __u64    __resv2[2];
>>   };
>> +enum io_uring_zcrx_ifq_reg_flags {
>> +    IORING_ZCRX_IFQ_REG_SHARE    = 1,
>> +};
>> +
>>   /*
>>    * Argument for IORING_REGISTER_ZCRX_IFQ
>>    */
>> diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
>> index 569cc0338acb..7418c959390a 100644
>> --- a/io_uring/zcrx.c
>> +++ b/io_uring/zcrx.c
>> @@ -22,10 +22,10 @@
>>   #include <uapi/linux/io_uring.h>
>>   #include "io_uring.h"
>> -#include "kbuf.h"
>>   #include "memmap.h"
>>   #include "zcrx.h"
>>   #include "rsrc.h"
>> +#include "register.h"
>>   #define IO_ZCRX_AREA_SUPPORTED_FLAGS    (IORING_ZCRX_AREA_DMABUF)
>> @@ -541,6 +541,67 @@ struct io_mapped_region *io_zcrx_get_region(struct io_ring_ctx *ctx,
>>       return ifq ? &ifq->region : NULL;
>>   }
>> +static int io_share_zcrx_ifq(struct io_ring_ctx *ctx,
>> +                 struct io_uring_zcrx_ifq_reg __user *arg,
>> +                 struct io_uring_zcrx_ifq_reg *reg)
>> +{
>> +    struct io_ring_ctx *src_ctx;
>> +    struct io_zcrx_ifq *src_ifq;
>> +    struct file *file;
>> +    int src_fd, ret;
>> +    u32 src_id, id;
>> +
>> +    src_fd = reg->if_idx;
>> +    src_id = reg->if_rxq;
>> +
>> +    file = io_uring_register_get_file(src_fd, false);
>> +    if (IS_ERR(file))
>> +        return PTR_ERR(file);
>> +
>> +    src_ctx = file->private_data;
>> +    if (src_ctx == ctx)
>> +        return -EBADFD;
>> +
>> +    mutex_unlock(&ctx->uring_lock);
>> +    io_lock_two_rings(ctx, src_ctx);
>> +
>> +    ret = -EINVAL;
>> +    src_ifq = xa_load(&src_ctx->zcrx_ctxs, src_id);
>> +    if (!src_ifq)
>> +        goto err_unlock;
>> +
>> +    percpu_ref_get(&src_ctx->refs);
>> +    refcount_inc(&src_ifq->refs);
>> +
>> +    scoped_guard(mutex, &ctx->mmap_lock) {
>> +        ret = xa_alloc(&ctx->zcrx_ctxs, &id, NULL, xa_limit_31b, GFP_KERNEL);
>> +        if (ret)
>> +            goto err_unlock;
>> +
>> +        ret = -ENOMEM;
>> +        if (xa_store(&ctx->zcrx_ctxs, id, src_ifq, GFP_KERNEL)) {
>> +            xa_erase(&ctx->zcrx_ctxs, id);
>> +            goto err_unlock;
>> +        }
> 
> It's just xa_alloc(..., src_ifq, ...);
> 
>> +    }
>> +
>> +    reg->zcrx_id = id;
>> +    if (copy_to_user(arg, reg, sizeof(*reg))) {
>> +        ret = -EFAULT;
>> +        goto err;
>> +    }
> 
> Better to do that before publishing zcrx into ctx->zcrx_ctxs
> 
>> +    mutex_unlock(&src_ctx->uring_lock);
>> +    fput(file);
>> +    return 0;
>> +err:
>> +    scoped_guard(mutex, &ctx->mmap_lock)
>> +        xa_erase(&ctx->zcrx_ctxs, id);
>> +err_unlock:
>> +    mutex_unlock(&src_ctx->uring_lock);
>> +    fput(file);
>> +    return ret;
>> +}
>> +
>>   int io_register_zcrx_ifq(struct io_ring_ctx *ctx,
>>                 struct io_uring_zcrx_ifq_reg __user *arg)
>>   {
>> @@ -566,6 +627,8 @@ int io_register_zcrx_ifq(struct io_ring_ctx *ctx,
>>           return -EINVAL;
>>       if (copy_from_user(&reg, arg, sizeof(reg)))
>>           return -EFAULT;
>> +    if (reg.flags & IORING_ZCRX_IFQ_REG_SHARE)
>> +        return io_share_zcrx_ifq(ctx, arg, &reg);
>>       if (copy_from_user(&rd, u64_to_user_ptr(reg.region_ptr), sizeof(rd)))
>>           return -EFAULT;
>>       if (!mem_is_zero(&reg.__resv, sizeof(reg.__resv)) ||
>> @@ -663,7 +726,7 @@ void io_unregister_zcrx_ifqs(struct io_ring_ctx *ctx)
>>               if (ifq)
>>                   xa_erase(&ctx->zcrx_ctxs, id);
>>           }
>> -        if (!ifq)
>> +        if (!ifq || ctx != ifq->ctx)
>>               break;
>>           io_zcrx_ifq_free(ifq);
>>       }
>> @@ -734,6 +797,13 @@ void io_shutdown_zcrx_ifqs(struct io_ring_ctx *ctx)
>>           if (xa_get_mark(&ctx->zcrx_ctxs, index, XA_MARK_0))
>>               continue;
>> +        /*
>> +         * Only shared ifqs want to put ctx->refs on the owning ifq
>> +         * ring. This matches the get in io_share_zcrx_ifq().
>> +         */
>> +        if (ctx != ifq->ctx)
>> +            percpu_ref_put(&ifq->ctx->refs);
> 
> After you put this and ifq->refs below down, the zcrx object can get
> destroyed, but this ctx might still have requests using the object.
> Waiting on ctx refs would ensure requests are killed, but that'd
> create a cycle.

Another concerning part is long term cross ctx referencing,
which is even worse than pp locking it up. I mentioned
that it'd be great to reverse the refcounting relation,
but that'd also need additional ground work to break
dependencies.

> 
>> +
>>           /* Safe to clean up from any ring. */
>>           if (refcount_dec_and_test(&ifq->refs)) {
>>               io_zcrx_scrub(ifq);
> 

-- 
Pavel Begunkov



* Re: [PATCH v3 3/3] io_uring/zcrx: share an ifq between rings
  2025-10-27 11:47     ` Pavel Begunkov
@ 2025-10-27 15:10       ` David Wei
  0 siblings, 0 replies; 12+ messages in thread
From: David Wei @ 2025-10-27 15:10 UTC (permalink / raw)
  To: Pavel Begunkov, io-uring, netdev; +Cc: Jens Axboe

On 2025-10-27 04:47, Pavel Begunkov wrote:
> On 10/27/25 10:20, Pavel Begunkov wrote:
>> On 10/26/25 17:34, David Wei wrote:
>>> Add a way to share an ifq from a src ring that is real i.e. bound to a
>>> HW RX queue with other rings. This is done by passing a new flag
>>> IORING_ZCRX_IFQ_REG_SHARE in the registration struct
>>> io_uring_zcrx_ifq_reg, alongside the fd of the src ring and the ifq id
>>> to be shared.
>>>
>>> To prevent the src ring or ifq from being cleaned up or freed while
>>> there are still shared ifqs, take the appropriate refs on the src ring
>>> (ctx->refs) and src ifq (ifq->refs).
>>>
>>> Signed-off-by: David Wei <dw@davidwei.uk>
>>> ---
>>>   include/uapi/linux/io_uring.h |  4 ++
>>>   io_uring/zcrx.c               | 74 ++++++++++++++++++++++++++++++++++-
>>>   2 files changed, 76 insertions(+), 2 deletions(-)
>>>
[...]
>>> diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
>>> index 569cc0338acb..7418c959390a 100644
>>> --- a/io_uring/zcrx.c
>>> +++ b/io_uring/zcrx.c
[...]
>>> @@ -734,6 +797,13 @@ void io_shutdown_zcrx_ifqs(struct io_ring_ctx *ctx)
>>>           if (xa_get_mark(&ctx->zcrx_ctxs, index, XA_MARK_0))
>>>               continue;
>>> +        /*
>>> +         * Only shared ifqs want to put ctx->refs on the owning ifq
>>> +         * ring. This matches the get in io_share_zcrx_ifq().
>>> +         */
>>> +        if (ctx != ifq->ctx)
>>> +            percpu_ref_put(&ifq->ctx->refs);
>>
>> After you put this and ifq->refs below down, the zcrx object can get
>> destroyed, but this ctx might still have requests using the object.
>> Waiting on ctx refs would ensure requests are killed, but that'd
>> create a cycle.
> 
> Another concerning part is long term cross ctx referencing,
> which is even worse than pp locking it up. I mentioned
> that it'd be great to reverse the refcounting relation,
> but that'd also need additional ground work to break
> dependencies.

Yeah, Jens said the same. I did refactoring to break the dep, so now
rings take refs on ifqs that have an independent lifetime.
io_shutdown_zcrx_ifqs() is gone, and all cleanup is done after ctx->refs
drops to 0 in io_unregister_zcrx_ifqs(). From each ring's perspective,
the ifq remains alive until all of its requests are done, and the last
ring frees the ifq. I'll send it a bit later today.



* Re: [PATCH v3 1/3] io_uring/rsrc: rename and export io_lock_two_rings()
  2025-10-27 10:04   ` Pavel Begunkov
@ 2025-10-28 14:54     ` David Wei
  2025-10-28 15:19       ` Pavel Begunkov
  0 siblings, 1 reply; 12+ messages in thread
From: David Wei @ 2025-10-28 14:54 UTC (permalink / raw)
  To: Pavel Begunkov, io-uring, netdev; +Cc: Jens Axboe

On 2025-10-27 03:04, Pavel Begunkov wrote:
> On 10/26/25 17:34, David Wei wrote:
>> Rename lock_two_rings() to io_lock_two_rings() and export. This will be
>> used when sharing a src ifq owned by one ring with another ring. During
>> this process both rings need to be locked in a deterministic order,
>> similar to the current user io_clone_buffers().
> 
> unlock();
> double_lock();
> 
> It's quite a bad pattern just like any temporary unlocks in the
> registration path, it gives a lot of space for exploitation.
> 
> Ideally, it'd be
> 
> lock(ctx1);
> zcrx = grab_zcrx(ctx1, id); // with some refcounting inside
> unlock(ctx1);
> 
> lock(ctx2);
> install(ctx2, zcrx);
> unlock(ctx2);

Thanks, I've refactored this to lock rings in sequence instead of both
rings.

> 
> And as discussed, we need to think about turning it into a temp
> file, bc of sync, and it's also hard to send an io_uring fd.
> Though, that'd need moving bits around to avoid refcounting
> cycles.
> 

My next version of this adds a refcount to the ifq and decouples its
lifetime from the ring ctx as a first step. Could we defer turning the
ifq into a file to a follow up?


* Re: [PATCH v3 3/3] io_uring/zcrx: share an ifq between rings
  2025-10-27 10:20   ` Pavel Begunkov
  2025-10-27 11:47     ` Pavel Begunkov
@ 2025-10-28 14:55     ` David Wei
  2025-10-28 15:22       ` Pavel Begunkov
  1 sibling, 1 reply; 12+ messages in thread
From: David Wei @ 2025-10-28 14:55 UTC (permalink / raw)
  To: Pavel Begunkov, io-uring, netdev; +Cc: Jens Axboe

On 2025-10-27 03:20, Pavel Begunkov wrote:
> On 10/26/25 17:34, David Wei wrote:
>> Add a way to share an ifq from a src ring that is real i.e. bound to a
>> HW RX queue with other rings. This is done by passing a new flag
>> IORING_ZCRX_IFQ_REG_SHARE in the registration struct
>> io_uring_zcrx_ifq_reg, alongside the fd of the src ring and the ifq id
>> to be shared.
>>
>> To prevent the src ring or ifq from being cleaned up or freed while
>> there are still shared ifqs, take the appropriate refs on the src ring
>> (ctx->refs) and src ifq (ifq->refs).
>>
>> Signed-off-by: David Wei <dw@davidwei.uk>
>> ---
>>   include/uapi/linux/io_uring.h |  4 ++
>>   io_uring/zcrx.c               | 74 ++++++++++++++++++++++++++++++++++-
>>   2 files changed, 76 insertions(+), 2 deletions(-)
>>
>> diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
>> index 569cc0338acb..7418c959390a 100644
>> --- a/io_uring/zcrx.c
>> +++ b/io_uring/zcrx.c
[...]
>> @@ -541,6 +541,67 @@ struct io_mapped_region *io_zcrx_get_region(struct io_ring_ctx *ctx,
>>       return ifq ? &ifq->region : NULL;
>>   }
>> +static int io_share_zcrx_ifq(struct io_ring_ctx *ctx,
>> +                 struct io_uring_zcrx_ifq_reg __user *arg,
>> +                 struct io_uring_zcrx_ifq_reg *reg)
>> +{
>> +    struct io_ring_ctx *src_ctx;
>> +    struct io_zcrx_ifq *src_ifq;
>> +    struct file *file;
>> +    int src_fd, ret;
>> +    u32 src_id, id;
>> +
>> +    src_fd = reg->if_idx;
>> +    src_id = reg->if_rxq;
>> +
>> +    file = io_uring_register_get_file(src_fd, false);
>> +    if (IS_ERR(file))
>> +        return PTR_ERR(file);
>> +
>> +    src_ctx = file->private_data;
>> +    if (src_ctx == ctx)
>> +        return -EBADFD;
>> +
>> +    mutex_unlock(&ctx->uring_lock);
>> +    io_lock_two_rings(ctx, src_ctx);
>> +
>> +    ret = -EINVAL;
>> +    src_ifq = xa_load(&src_ctx->zcrx_ctxs, src_id);
>> +    if (!src_ifq)
>> +        goto err_unlock;
>> +
>> +    percpu_ref_get(&src_ctx->refs);
>> +    refcount_inc(&src_ifq->refs);
>> +
>> +    scoped_guard(mutex, &ctx->mmap_lock) {
>> +        ret = xa_alloc(&ctx->zcrx_ctxs, &id, NULL, xa_limit_31b, GFP_KERNEL);
>> +        if (ret)
>> +            goto err_unlock;
>> +
>> +        ret = -ENOMEM;
>> +        if (xa_store(&ctx->zcrx_ctxs, id, src_ifq, GFP_KERNEL)) {
>> +            xa_erase(&ctx->zcrx_ctxs, id);
>> +            goto err_unlock;
>> +        }
> 
> It's just xa_alloc(..., src_ifq, ...);
> 
>> +    }
>> +
>> +    reg->zcrx_id = id;
>> +    if (copy_to_user(arg, reg, sizeof(*reg))) {
>> +        ret = -EFAULT;
>> +        goto err;
>> +    }
> 
> Better to do that before publishing zcrx into ctx->zcrx_ctxs

I can only do one of the two suggestions above. No valid id until
xa_alloc() returns, so I either split xa_alloc()/xa_store() with
copy_to_user() in between, or I do a single xa_alloc() and
copy_to_user() after.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v3 1/3] io_uring/rsrc: rename and export io_lock_two_rings()
  2025-10-28 14:54     ` David Wei
@ 2025-10-28 15:19       ` Pavel Begunkov
  0 siblings, 0 replies; 12+ messages in thread
From: Pavel Begunkov @ 2025-10-28 15:19 UTC (permalink / raw)
  To: David Wei, io-uring, netdev; +Cc: Jens Axboe

On 10/28/25 14:54, David Wei wrote:
> On 2025-10-27 03:04, Pavel Begunkov wrote:
>> On 10/26/25 17:34, David Wei wrote:
>>> Rename lock_two_rings() to io_lock_two_rings() and export it. This will be
>>> used when sharing a src ifq owned by one ring with another ring. During
>>> this process both rings need to be locked in a deterministic order,
>>> similar to the current user io_clone_buffers().
>>
>> unlock();
>> double_lock();
>>
>> It's quite a bad pattern, just like any temporary unlock in the
>> registration path; it leaves a lot of room for exploitation.
>>
>> Ideally, it'd be
>>
>> lock(ctx1);
>> zcrx = grab_zcrx(ctx1, id); // with some refcounting inside
>> unlock(ctx1);
>>
>> lock(ctx2);
>> install(ctx2, zcrx);
>> unlock(ctx2);
> 
> Thanks, I've refactored this to lock rings in sequence instead of both
> rings.
> 
>>
>> And as discussed, we need to think about turning it into a temp
>> file, bc of sync, and it's also hard to send an io_uring fd.
>> Though, that'd need moving bits around to avoid refcounting
>> cycles.
>>
> 
> My next version of this adds a refcount to ifq and decouples its lifetime
> from the ring ctx as a first step. Could we defer turning ifq into a file
> to a follow-up?

The mentioned sync problem is about using a ring bound to another
task. Decoupling the zcrx object from the io_uring instance should
do here as well. Please send out the next version, since it sounds
like you already have it prepared, and we'll take it from there.

-- 
Pavel Begunkov


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v3 3/3] io_uring/zcrx: share an ifq between rings
  2025-10-28 14:55     ` David Wei
@ 2025-10-28 15:22       ` Pavel Begunkov
  0 siblings, 0 replies; 12+ messages in thread
From: Pavel Begunkov @ 2025-10-28 15:22 UTC (permalink / raw)
  To: David Wei, io-uring, netdev; +Cc: Jens Axboe

On 10/28/25 14:55, David Wei wrote:
> On 2025-10-27 03:20, Pavel Begunkov wrote:
>> On 10/26/25 17:34, David Wei wrote:
>>> Add a way to share an ifq from a src ring that is real, i.e. bound to a
>>> HW RX queue with other rings. This is done by passing a new flag
>>> IORING_ZCRX_IFQ_REG_SHARE in the registration struct
>>> io_uring_zcrx_ifq_reg, alongside the fd of the src ring and the ifq id
>>> to be shared.
>>>
>>> To prevent the src ring or ifq from being cleaned up or freed while
>>> there are still shared ifqs, take the appropriate refs on the src ring
>>> (ctx->refs) and src ifq (ifq->refs).
>>>
>>> Signed-off-by: David Wei <dw@davidwei.uk>
>>> ---
>>>   include/uapi/linux/io_uring.h |  4 ++
>>>   io_uring/zcrx.c               | 74 ++++++++++++++++++++++++++++++++++-
>>>   2 files changed, 76 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
>>> index 569cc0338acb..7418c959390a 100644
>>> --- a/io_uring/zcrx.c
>>> +++ b/io_uring/zcrx.c
> [...]
>>> @@ -541,6 +541,67 @@ struct io_mapped_region *io_zcrx_get_region(struct io_ring_ctx *ctx,
>>>       return ifq ? &ifq->region : NULL;
>>>   }
>>> +static int io_share_zcrx_ifq(struct io_ring_ctx *ctx,
>>> +                 struct io_uring_zcrx_ifq_reg __user *arg,
>>> +                 struct io_uring_zcrx_ifq_reg *reg)
>>> +{
>>> +    struct io_ring_ctx *src_ctx;
>>> +    struct io_zcrx_ifq *src_ifq;
>>> +    struct file *file;
>>> +    int src_fd, ret;
>>> +    u32 src_id, id;
>>> +
>>> +    src_fd = reg->if_idx;
>>> +    src_id = reg->if_rxq;
>>> +
>>> +    file = io_uring_register_get_file(src_fd, false);
>>> +    if (IS_ERR(file))
>>> +        return PTR_ERR(file);
>>> +
>>> +    src_ctx = file->private_data;
>>> +    if (src_ctx == ctx)
>>> +        return -EBADFD;
>>> +
>>> +    mutex_unlock(&ctx->uring_lock);
>>> +    io_lock_two_rings(ctx, src_ctx);
>>> +
>>> +    ret = -EINVAL;
>>> +    src_ifq = xa_load(&src_ctx->zcrx_ctxs, src_id);
>>> +    if (!src_ifq)
>>> +        goto err_unlock;
>>> +
>>> +    percpu_ref_get(&src_ctx->refs);
>>> +    refcount_inc(&src_ifq->refs);
>>> +
>>> +    scoped_guard(mutex, &ctx->mmap_lock) {
>>> +        ret = xa_alloc(&ctx->zcrx_ctxs, &id, NULL, xa_limit_31b, GFP_KERNEL);
>>> +        if (ret)
>>> +            goto err_unlock;
>>> +
>>> +        ret = -ENOMEM;
>>> +        if (xa_store(&ctx->zcrx_ctxs, id, src_ifq, GFP_KERNEL)) {
>>> +            xa_erase(&ctx->zcrx_ctxs, id);
>>> +            goto err_unlock;
>>> +        }
>>
>> It's just xa_alloc(..., src_ifq, ...);
>>
>>> +    }
>>> +
>>> +    reg->zcrx_id = id;
>>> +    if (copy_to_user(arg, reg, sizeof(*reg))) {
>>> +        ret = -EFAULT;
>>> +        goto err;
>>> +    }
>>
>> Better to do that before publishing zcrx into ctx->zcrx_ctxs
> 
> I can only do one of the two suggestions above. No valid id until
> xa_alloc() returns, so I either split xa_alloc()/xa_store() with
> copy_to_user() in between, or I do a single xa_alloc() and
> copy_to_user() after.

Makes sense, I'd do the splitting then; at least this way the ifq
isn't exposed to userspace even for a brief moment.

-- 
Pavel Begunkov


^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2025-10-28 15:22 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-10-26 17:34 [PATCH v3 0/3] io_uring zcrx ifq sharing David Wei
2025-10-26 17:34 ` [PATCH v3 1/3] io_uring/rsrc: rename and export io_lock_two_rings() David Wei
2025-10-27 10:04   ` Pavel Begunkov
2025-10-28 14:54     ` David Wei
2025-10-28 15:19       ` Pavel Begunkov
2025-10-26 17:34 ` [PATCH v3 2/3] io_uring/zcrx: add refcount to struct io_zcrx_ifq David Wei
2025-10-26 17:34 ` [PATCH v3 3/3] io_uring/zcrx: share an ifq between rings David Wei
2025-10-27 10:20   ` Pavel Begunkov
2025-10-27 11:47     ` Pavel Begunkov
2025-10-27 15:10       ` David Wei
2025-10-28 14:55     ` David Wei
2025-10-28 15:22       ` Pavel Begunkov

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).