From: Simon Schippers <simon.schippers@tu-dortmund.de>
To: willemdebruijn.kernel@gmail.com, jasowang@redhat.com,
andrew+netdev@lunn.ch, davem@davemloft.net, edumazet@google.com,
kuba@kernel.org, pabeni@redhat.com, mst@redhat.com,
eperezma@redhat.com, leiyang@redhat.com,
stephen@networkplumber.org, jon@nutanix.com,
tim.gebauer@tu-dortmund.de, simon.schippers@tu-dortmund.de,
netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
kvm@vger.kernel.org, virtualization@lists.linux.dev
Subject: [PATCH net-next v11 4/4] tun/tap & vhost-net: avoid ptr_ring tail-drop when a qdisc is present
Date: Fri, 8 May 2026 17:10:48 +0200
Message-ID: <20260508151048.183125-5-simon.schippers@tu-dortmund.de>
In-Reply-To: <20260508151048.183125-1-simon.schippers@tu-dortmund.de>
This commit prevents tail-drop when a qdisc is present and the ptr_ring
becomes full. Once the ring reaches capacity after a produce attempt,
the netdev queue is stopped instead of dropping subsequent packets.
If no qdisc is present, the previous tail-drop behavior is preserved.
If producing an entry fails anyway due to a race, tun_net_xmit() drops
the packet. Such races are expected because LLTX is enabled and the
transmit path operates without the usual locking.
The __tun_wake_queue() function of the consumer races with the producer
for waking/stopping the netdev queue, which could result in a stalled
queue. Therefore, an smp_mb__after_atomic() is introduced that pairs
with the smp_mb() of the consumer. It follows the principle of store
buffering described in tools/memory-model/Documentation/recipes.txt:
- The producer in tun_net_xmit() first sets __QUEUE_STATE_DRV_XOFF,
followed by an smp_mb__after_atomic() (= smp_mb()), and then reads the
ring with __ptr_ring_check_produce().
- The consumer in __tun_wake_queue() first writes zero to the ring in
__ptr_ring_consume(), followed by an smp_mb(), and then reads the queue
status with netif_tx_queue_stopped().
=> Following the aforementioned principle, it is impossible for the
producer to see a full ring (and therefore not wake the queue on the
re-check) while the consumer simultaneously fails to see a stopped
queue (and therefore also does not wake it).
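The stop/wake handshake above can be sketched in miniature as follows. This is a hypothetical single-threaded model for illustration, not the driver code: the helper names (xmit(), consume_and_wake(), the toy ring) are invented, the real implementation uses ptr_ring and netif_tx_stop_queue()/netif_tx_wake_queue(), and the commented barrier points mark where smp_mb__after_atomic() and smp_mb() enforce the store-buffering ordering between two actual CPUs:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define RING_SIZE 4

/* Toy ptr_ring: a NULL slot means free, like the real ptr_ring. */
struct ring { void *slot[RING_SIZE]; int head, tail; };
static bool queue_stopped; /* models __QUEUE_STATE_DRV_XOFF */

static bool ring_full(const struct ring *r)
{
	return r->slot[r->head] != NULL;
}

static int produce(struct ring *r, void *p)
{
	if (r->slot[r->head])
		return -1; /* -ENOSPC in the kernel */
	r->slot[r->head] = p;
	r->head = (r->head + 1) % RING_SIZE;
	return 0;
}

static void *consume(struct ring *r)
{
	void *p = r->slot[r->tail];

	if (!p)
		return NULL;
	r->slot[r->tail] = NULL; /* the "write zero to the ring" store */
	r->tail = (r->tail + 1) % RING_SIZE;
	return p;
}

/* Producer side, mirroring tun_net_xmit(): stop, barrier, re-check. */
static void xmit(struct ring *r, void *p)
{
	produce(r, p);
	if (ring_full(r)) {
		queue_stopped = true; /* netif_tx_stop_queue() */
		/* smp_mb__after_atomic() goes here in the real code */
		if (!ring_full(r))
			queue_stopped = false; /* netif_tx_wake_queue() */
	}
}

/* Consumer side, mirroring __tun_wake_queue(): consume, barrier, wake. */
static void *consume_and_wake(struct ring *r)
{
	void *p = consume(r);

	/* smp_mb() goes here in the real code */
	if (queue_stopped) /* netif_tx_queue_stopped() */
		queue_stopped = false; /* wake the stopped queue */
	return p;
}
```

With the two barriers in place, at least one side always observes the other's store: either the producer's re-check sees the freed slot, or the consumer sees the stopped queue, so the queue can never stay stopped with a non-full ring.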
Benchmarks:
The benchmarks show a slight regression in raw transmission performance
when using two sending threads. Packet loss also occurs only in the
two-thread sending case; no packet loss was observed with a single
sending thread.
Test setup:
AMD Ryzen 5 5600X at 4.3 GHz, 3200 MHz RAM, isolated QEMU threads;
Average over 50 runs @ 100,000,000 packets. SRSO and spectre v2
mitigations disabled.
Note for tap+vhost-net:
An XDP drop program is active in the VM, making this case ~2.5x faster
than plain tap, which is slowed by its higher syscall rate (high
utilization of entry_SYSRETQ_unsafe_stack in perf).
+--------------------------+--------------+----------------+----------+
| 1 thread | Stock | Patched with | diff |
| sending | | fq_codel qdisc | |
+------------+-------------+--------------+----------------+----------+
| TAP | Received | 1.132 Mpps | 1.123 Mpps | -0.8% |
| +-------------+--------------+----------------+----------+
| | Lost/s | 3.765 Mpps | 0 pps | |
+------------+-------------+--------------+----------------+----------+
| TAP | Received | 3.857 Mpps | 3.901 Mpps | +1.1% |
| +-------------+--------------+----------------+----------+
| +vhost-net | Lost/s | 0.802 Mpps | 0 pps | |
+------------+-------------+--------------+----------------+----------+
+--------------------------+--------------+----------------+----------+
| 2 threads | Stock | Patched with | diff |
| sending | | fq_codel qdisc | |
+------------+-------------+--------------+----------------+----------+
| TAP | Received | 1.115 Mpps | 1.081 Mpps | -3.0% |
| +-------------+--------------+----------------+----------+
| | Lost/s | 8.490 Mpps | 391 pps | |
+------------+-------------+--------------+----------------+----------+
| TAP | Received | 3.664 Mpps | 3.555 Mpps | -3.0% |
| +-------------+--------------+----------------+----------+
| +vhost-net | Lost/s | 5.330 Mpps | 938 pps | |
+------------+-------------+--------------+----------------+----------+
Co-developed-by: Tim Gebauer <tim.gebauer@tu-dortmund.de>
Signed-off-by: Tim Gebauer <tim.gebauer@tu-dortmund.de>
Signed-off-by: Simon Schippers <simon.schippers@tu-dortmund.de>
---
drivers/net/tun.c | 25 +++++++++++++++++++++++--
1 file changed, 23 insertions(+), 2 deletions(-)
diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index 4ee1ed6e815a..e56358878c36 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -1052,6 +1052,7 @@ static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev)
struct netdev_queue *queue;
struct tun_file *tfile;
int len = skb->len;
+ int ret;
rcu_read_lock();
tfile = rcu_dereference(tun->tfiles[txq]);
@@ -1106,13 +1107,33 @@ static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev)
nf_reset_ct(skb);
- if (ptr_ring_produce(&tfile->tx_ring, skb)) {
+ queue = netdev_get_tx_queue(dev, txq);
+
+ spin_lock(&tfile->tx_ring.producer_lock);
+ ret = __ptr_ring_produce(&tfile->tx_ring, skb);
+ if (!qdisc_txq_has_no_queue(queue) &&
+ __ptr_ring_check_produce(&tfile->tx_ring) == -ENOSPC) {
+ netif_tx_stop_queue(queue);
+ /* Paired with smp_mb() in __tun_wake_queue() */
+ smp_mb__after_atomic();
+ if (!__ptr_ring_check_produce(&tfile->tx_ring))
+ netif_tx_wake_queue(queue);
+ }
+ spin_unlock(&tfile->tx_ring.producer_lock);
+
+ if (ret) {
+ /* This should be a rare case if a qdisc is present, but
+ * can happen due to lltx.
+ * Since skb_tx_timestamp(), skb_orphan(),
+ * run_ebpf_filter() and pskb_trim() could have tinkered
+ * with the SKB, returning NETDEV_TX_BUSY is unsafe and
+ * we must drop instead.
+ */
drop_reason = SKB_DROP_REASON_FULL_RING;
goto drop;
}
/* dev->lltx requires to do our own update of trans_start */
- queue = netdev_get_tx_queue(dev, txq);
txq_trans_cond_update(queue);
/* Notify and wake up reader process */
--
2.43.0