From: "Michael S. Tsirkin" <mst@redhat.com>
To: Simon Schippers <simon.schippers@tu-dortmund.de>
Cc: willemdebruijn.kernel@gmail.com, jasowang@redhat.com,
andrew+netdev@lunn.ch, davem@davemloft.net, edumazet@google.com,
kuba@kernel.org, pabeni@redhat.com, eperezma@redhat.com,
leiyang@redhat.com, stephen@networkplumber.org, jon@nutanix.com,
tim.gebauer@tu-dortmund.de, netdev@vger.kernel.org,
linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
virtualization@lists.linux.dev
Subject: Re: [PATCH net-next v12 0/4] tun/tap & vhost-net: apply qdisc backpressure on full ptr_ring to reduce TX drops
Date: Mon, 11 May 2026 05:10:50 -0400 [thread overview]
Message-ID: <20260511051037-mutt-send-email-mst@kernel.org>
In-Reply-To: <20260510151529.43895-1-simon.schippers@tu-dortmund.de>
On Sun, May 10, 2026 at 05:15:25PM +0200, Simon Schippers wrote:
> This patch series deals with tun/tap & vhost-net which drop incoming
> SKBs whenever their internal ptr_ring buffer is full. Instead, with this
> patch series, the associated netdev queue is stopped - but only when a
> qdisc is attached. If no qdisc is present the existing behavior is
> preserved. The XDP transmit path is not affected. This patch series
> touches tun/tap and vhost-net, as they share common logic and must be
> updated together. Modifying only one of them would break the other.
>
> By applying proper backpressure, this change allows the connected qdisc to
> operate correctly, as reported in [1], and significantly improves
> performance in real-world scenarios, as demonstrated in our paper [2]. For
> example, we observed a 36% TCP throughput improvement for an OpenVPN
> connection between Germany and the USA.
>
> Synthetic pktgen benchmarks indicate a slight regression, and packet
> loss is reduced to near zero. Pktgen benchmarks are provided per commit,
> with the final commit showing the overall performance.
At v12, time to merge this.
Acked-by: Michael S. Tsirkin <mst@redhat.com>
> Thanks!
>
> [1] Link: https://unix.stackexchange.com/questions/762935/traffic-shaping-ineffective-on-tun-device
> [2] Link: https://cni.etit.tu-dortmund.de/storages/cni-etit/r/Research/Publications/2025/Gebauer_2025_VTCFall/Gebauer_VTCFall2025_AuthorsVersion.pdf
>
> ---
> Changelog:
> v12:
> Patch 1:
> - Revert tun_queue_purge() to plain ptr_ring_consume() and instead
> explicitly wake the queue in __tun_detach() for the ntfile taking
> over the queue slot (if its ring is empty).
> - Inlined tun_reset_cons_cnt(), because only tun_attach() uses it.
>
> - Patches 2-4 and cover letter unchanged.
> - Compiled and ran a short pktgen test.
>
> v11:
> - Renamed __ptr_ring_produce_peek() to __ptr_ring_check_produce().
>   (Sashiko)
> - Added return code -EINVAL to __ptr_ring_check_produce(), which lets
>   tun_net_xmit() stop the queue only on -ENOSPC. (MST)
> - Resolved a race on tfile->queue_index by locking tx_ring.consumer_lock
>   in __tun_detach(). (Sashiko)
> - Now wake the queue in tun_queue_resize() to avoid possible stalls.
> - Other minor adjustments; reran the benchmarks.
>
> v10: https://lore.kernel.org/netdev/20260506141033.180450-1-simon.schippers@tu-dortmund.de/
> - Changed the term "Transmitted" to "Received" in the benchmarks,
> as correctly pointed out by MST, and reran the benchmarks.
>
> Addressed the Sashiko AI review:
> - Avoid a data race on tfile->cons_cnt by always locking.
> - Correctly count the number of consumed packets for vhost-net.
> - Corrected a typo in the commit message of commit 3.
> - Added a missing barrier on the consumer side.
> --> The barriers now follow the "store buffering" principle.
> - No longer return NETDEV_TX_BUSY at all, because it is unsafe.
> --> Result: There are still a few drops with multiple senders, which
> would be avoided by disabling LLTX.
>
> V9: https://lore.kernel.org/netdev/20260428123859.19578-1-simon.schippers@tu-dortmund.de/
> - Addressed minor nit by MST in patches 1 and 2.
> - Rebased patch 3 because of commit d748047
> ("ptr_ring: disable KCSAN warnings").
> - Documented the pair of the smp_mb__after_atomic() in tun_net_xmit()
> with tun_ring_consume().
> --> It simply pairs with the test_and_clear_bit() inside of
> netif_wake_subqueue().
> - Use 1 ptr_ring consumer spinlock instead of 2.
> - Ran pktgen benchmarks with pg_set SHARED for 50 iterations on
> latest kernel
> --> No significant performance difference noticed
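[Editor's note: the "store buffering" barrier pairing mentioned in the v10/v9 entries can be modelled with C11 atomics. This is an illustrative user-space sketch under assumptions, not the kernel code; `tx_on_full()`, `rx_after_consume()`, and the two variables stand in for the real smp_mb()/smp_mb__after_atomic() pairing around the stop bit and the ring. Each side stores first, issues a full fence, then loads the other side's variable, so at least one side observes the other's store and a wakeup is never lost.]

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

#define RING_FULL 4

static atomic_bool queue_stopped;
static atomic_int ring_entries;

/* TX path: the ring was found full. Returns true if the queue stays
 * stopped, false if the consumer drained concurrently and we undo it. */
static bool tx_on_full(void)
{
	atomic_store(&queue_stopped, true);		/* store: stopped */
	atomic_thread_fence(memory_order_seq_cst);	/* full barrier */
	if (atomic_load(&ring_entries) < RING_FULL) {	/* recheck ring */
		atomic_store(&queue_stopped, false);
		return false;
	}
	return true;
}

/* RX path: one entry consumed. Returns true if it woke the queue. */
static bool rx_after_consume(void)
{
	atomic_fetch_sub(&ring_entries, 1);		/* store: free slot */
	atomic_thread_fence(memory_order_seq_cst);	/* pairs with TX fence */
	if (atomic_load(&queue_stopped)) {
		atomic_store(&queue_stopped, false);	/* wake */
		return true;
	}
	return false;
}
```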
>
> V8: https://lore.kernel.org/netdev/20260312130639.138988-1-simon.schippers@tu-dortmund.de/
> - Dropped code changes in drivers/net/tap.c; the code there deals with
>   ipvtap/macvtap, which are unrelated to the goal of this patch series,
>   and I had not realized that before
>   -> Greatly simplified logic, 4 instead of 9 commits
>   -> No more duplicated logic and no vhost-specific distinction required
> - Only wake after the queue stopped and half of the ring was consumed
> as suggested by MST
> -> Performance improvements for TAP, but still slightly slower
> - Better benchmarking with pinned threads, XDP drop program for
> tap+vhost-net and disabling CPU mitigations (and newer Ryzen 5 5600X
> processor) as suggested by Jason Wang
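[Editor's note: the "wake only after half of the ring was consumed" heuristic from the V8 entry can be sketched as follows. Illustrative only; `maybe_wake()` and `consumed_since_stop` are assumed names, not the actual kernel identifiers. The point of the threshold is to avoid a stop/wake ping-pong when the producer would immediately refill a single freed slot.]

```c
#include <assert.h>
#include <stdbool.h>

#define RING_SIZE 8

struct consumer_state {
	bool queue_stopped;
	unsigned int consumed_since_stop;	/* hypothetical counter */
};

/* Called once per consumed entry; wakes the queue (returns true) only
 * after at least half of the ring has been drained. */
static bool maybe_wake(struct consumer_state *s)
{
	if (!s->queue_stopped)
		return false;
	s->consumed_since_stop++;
	if (s->consumed_since_stop >= RING_SIZE / 2) {
		s->queue_stopped = false;	/* netif_wake_subqueue() here */
		s->consumed_since_stop = 0;
		return true;
	}
	return false;
}
```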
>
> V7: https://lore.kernel.org/netdev/20260107210448.37851-1-simon.schippers@tu-dortmund.de/
> - Switch to an approach similar to veth (excluding the recently fixed
> variant), as suggested by MST, with minor adjustments discussed in V6
> - Rename the cover-letter title
> - Add multithreaded pktgen and iperf3 benchmarks, as suggested by Jason
> Wang
> - Rework __ptr_ring_consume_created_space() so it can also be used after
> batched consume
>
> ...
>
> ---
>
> Simon Schippers (4):
> tun/tap: add ptr_ring consume helper with netdev queue wakeup
> vhost-net: wake queue of tun/tap after ptr_ring consume
> ptr_ring: move free-space check into separate helper
> tun/tap & vhost-net: avoid ptr_ring tail-drop when a qdisc is present
>
> drivers/net/tun.c | 109 ++++++++++++++++++++++++++++++++++++---
> drivers/vhost/net.c | 21 +++++---
> include/linux/if_tun.h | 3 ++
> include/linux/ptr_ring.h | 20 ++++++-
> 4 files changed, 139 insertions(+), 14 deletions(-)
>
> --
> 2.43.0
Thread overview:
2026-05-10 15:15 [PATCH net-next v12 0/4] tun/tap & vhost-net: apply qdisc backpressure on full ptr_ring to reduce TX drops Simon Schippers
2026-05-10 15:15 ` [PATCH net-next v12 1/4] tun/tap: add ptr_ring consume helper with netdev queue wakeup Simon Schippers
2026-05-10 15:15 ` [PATCH net-next v12 2/4] vhost-net: wake queue of tun/tap after ptr_ring consume Simon Schippers
2026-05-10 15:15 ` [PATCH net-next v12 3/4] ptr_ring: move free-space check into separate helper Simon Schippers
2026-05-10 15:15 ` [PATCH net-next v12 4/4] tun/tap & vhost-net: avoid ptr_ring tail-drop when a qdisc is present Simon Schippers
2026-05-11 9:10 ` Michael S. Tsirkin [this message]