From: hawk@kernel.org
To: netdev@vger.kernel.org
Cc: hawk@kernel.org, kernel-team@cloudflare.com,
"David S. Miller" <davem@davemloft.net>,
"Eric Dumazet" <edumazet@google.com>,
"Jakub Kicinski" <kuba@kernel.org>,
"Paolo Abeni" <pabeni@redhat.com>,
"Simon Horman" <horms@kernel.org>,
"Chris Arges" <carges@cloudflare.com>,
"Mike Freemon" <mfreemon@cloudflare.com>,
"Toke Høiland-Jørgensen" <toke@toke.dk>,
"Jonas Köppeler" <j.koeppeler@tu-berlin.de>,
"Breno Leitao" <leitao@debian.org>,
"Simon Schippers" <simon.schippers@tu-dortmund.de>,
"Alexei Starovoitov" <ast@kernel.org>,
"Daniel Borkmann" <daniel@iogearbox.net>,
"John Fastabend" <john.fastabend@gmail.com>,
"Stanislav Fomichev" <sdf@fomichev.me>,
bpf@vger.kernel.org
Subject: [PATCH net-next v5 0/5] veth: add Byte Queue Limits (BQL) support
Date: Tue, 5 May 2026 15:21:52 +0200
Message-ID: <20260505132159.241305-1-hawk@kernel.org>
From: Jesper Dangaard Brouer <hawk@kernel.org>

This series adds BQL (Byte Queue Limits) to the veth driver, reducing
latency by dynamically limiting in-flight packets in the ptr_ring and
moving buffering into the qdisc where AQM algorithms can act on it.

Problem:
veth's 256-entry ptr_ring acts as a "dark buffer" -- packets queued
there are invisible to the qdisc's AQM. Under load, the ring fills
completely (DRV_XOFF backpressure), adding up to 256 packets of
unmanaged latency before the qdisc even sees congestion.

Solution:
BQL (STACK_XOFF) dynamically limits in-flight packets, stopping the
queue before the ring fills. This keeps the ring shallow and pushes
excess packets into the qdisc, where sojourn-based AQM can measure
and drop them.

Test setup: veth pair, UDP flood, 13000 iptables rules in the consumer
namespace (slowing the NAPI-64 cycle to ~6-7 ms); ping measures RTT
under load.

                BQL off               BQL on
  fq_codel:     RTT ~22ms, 4% loss    RTT ~1.3ms, 0% loss
  sfq:          RTT ~24ms, 0% loss    RTT ~1.5ms, 0% loss

BQL reduces ping RTT by ~17x for both qdiscs. Consumer throughput is
unchanged (~10K pps) -- BQL adds no measurable overhead.

CoDel bug discovered during BQL development:
Our original motivation for BQL was fq_codel ping loss observed under
load (4-26% depending on NAPI cycle time). Investigating this led us
to discover a bug in the CoDel implementation: codel_dequeue() does
not reset vars->first_above_time when a flow goes empty, contrary to
the reference algorithm. This causes stale CoDel state to persist
across empty periods in fq_codel's per-flow queues, penalizing sparse
flows like ICMP ping. A fix for this has been applied to the net tree:
commit 815980fe6dbb ("net_sched: codel: fix stale state for empty flows in fq_codel")
BQL remains valuable independently: it reduces RTT by ~17x by moving
buffering from the dark ptr_ring into the qdisc. Additionally, BQL
clears STACK_XOFF per-SKB as each packet completes, rather than
batch-waking after 64 packets (DRV_XOFF). This keeps sojourn times
below fq_codel's target, preventing CoDel from entering dropping
state on non-congested flows in the first place.

Key design decisions:
- Charge-under-lock in veth_xdp_rx(): The BQL charge must precede
the ptr_ring produce, because the NAPI consumer can run on another
CPU and complete the SKB immediately after it becomes visible. To
avoid a pre-charge/undo pattern, the charge is done under the
ptr_ring producer_lock after confirming the ring is not full. BQL
is only charged when produce is guaranteed to succeed, keeping
  num_queued monotonically increasing. HARD_TX_LOCK already
  serializes dql_queued() (veth requires a qdisc for BQL); doing the
  charge under the ptr_ring lock would additionally allow noqueue to
  work correctly.
- Per-SKB BQL tracking via pointer tag: A VETH_BQL_FLAG bit in the
ptr_ring pointer records whether each SKB was BQL-charged. This is
necessary because the qdisc can be replaced live (noqueue->sfq or
vice versa) while SKBs are in-flight -- the completion side must
know the charge state that was decided at enqueue time.
- IFF_NO_QUEUE + BQL coexistence: A new dev->bql flag enables BQL
sysfs exposure for IFF_NO_QUEUE devices that opt in to DQL
accounting, without changing IFF_NO_QUEUE semantics.

Background and acknowledgments:
Mike Freemon reported the veth dark buffer problem internally at
Cloudflare and showed that recompiling the kernel with a ptr_ring
size of 30 (down from 256) made fq_codel work dramatically better.
This was the primary motivation for a proper BQL solution that
achieves the same effect dynamically without a kernel rebuild.
Chris Arges wrote a reproducer for the dark buffer latency problem:
https://github.com/netoptimizer/veth-backpressure-performance-testing
This is where we first observed ping packets being dropped under
fq_codel, which became our secondary motivation for BQL. In
production we switched to SFQ on veth devices as a workaround.
Jonas Köppeler provided extensive testing and code review.
Together we discovered that the fq_codel ping loss was actually a
12-year-old CoDel bug (stale first_above_time in empty flows), not
caused by the dark buffer itself.

Patch overview:
1. veth: fix OOB txq access in veth_poll() with asymmetric queue counts
2. net: add dev->bql flag to allow BQL sysfs for IFF_NO_QUEUE devices
3. veth: implement Byte Queue Limits (BQL) for latency reduction
4. veth: add tx_timeout watchdog as BQL safety net
5. net: sched: add timeout count to NETDEV WATCHDOG message
Jesper Dangaard Brouer (5):
veth: fix OOB txq access in veth_poll() with asymmetric queue counts
net: add dev->bql flag to allow BQL sysfs for IFF_NO_QUEUE devices
veth: implement Byte Queue Limits (BQL) for latency reduction
veth: add tx_timeout watchdog as BQL safety net
net: sched: add timeout count to NETDEV WATCHDOG message
.../networking/net_cachelines/net_device.rst | 1 +
drivers/net/veth.c | 107 ++++++++++++++++--
include/linux/netdevice.h | 2 +
net/core/net-sysfs.c | 8 +-
net/sched/sch_generic.c | 8 +-
5 files changed, 109 insertions(+), 17 deletions(-)

V4: https://lore.kernel.org/all/20260501071633.644353-1-hawk@kernel.org/
Changes since V4:
- New patch 1: fix OOB txq access in veth_poll() when veth peers have
asymmetric RX/TX queue counts. XDP redirect can deliver frames to
an RX queue index that exceeds the peer's TX queue count, causing
an out-of-bounds netdev_get_tx_queue() access. Found by sashiko-bot.
- Patch 3 (veth BQL): wake stopped peer txqs in veth_napi_del_range()
to clear DRV_XOFF after NAPI teardown. A concurrent veth_xmit()
can set DRV_XOFF between rcu_assign_pointer(napi, NULL) and
synchronize_net(); with NAPI gone, no veth_poll() clears it.
Guarded by netif_running() to skip during device close.
V3: https://lore.kernel.org/all/20260429172036.1028526-1-hawk@kernel.org/
Changes since V3:
- Drop selftest patch (patch 5 from V3) per maintainer request.
- Rebase on latest net-next.
V2: https://lore.kernel.org/all/20260413094442.1376022-1-hawk@kernel.org/
Changes since V2:
- Patch 2 (veth BQL): fix syzbot WARNING in veth_napi_del_range():
clamp BQL reset loop to peer's real_num_tx_queues. The loop was
iterating dev->real_num_rx_queues but indexing peer's txq[], which
goes out of bounds when the peer has fewer TX queues (e.g. veth
enslaved to a bond with XDP attached).
V1: https://lore.kernel.org/all/20260324174719.1224337-1-hawk@kernel.org/
Changes since V1:
- Patch 1 (dev->bql flag): add kdoc entry for @bql in struct net_device.
- Patch 2 (veth BQL): charge fixed VETH_BQL_UNIT (1) per packet instead
of skb->len. veth has no link speed; the ptr_ring is packet-indexed.
Byte-based charging lets small packets sneak many entries into the ring.
Testing: min-size packet flood causes 3.7x ping RTT degradation with
skb->len vs no change with fixed-unit charging.
- Patch 3 (tx_timeout watchdog): fix race with peer NAPI: replace
netdev_tx_reset_queue() with clear_bit(STACK_XOFF) + netif_tx_wake_queue()
to avoid dql_reset() racing with concurrent dql_completed().
- Cover letter: update CoDel fix reference to merged commit in net tree.
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: Paolo Abeni <pabeni@redhat.com>
Cc: Simon Horman <horms@kernel.org>
Cc: Chris Arges <carges@cloudflare.com>
Cc: Mike Freemon <mfreemon@cloudflare.com>
Cc: Toke Høiland-Jørgensen <toke@toke.dk>
Cc: Jonas Köppeler <j.koeppeler@tu-berlin.de>
Cc: Breno Leitao <leitao@debian.org>
Cc: Simon Schippers <simon.schippers@tu-dortmund.de>
Cc: kernel-team@cloudflare.com
--
2.43.0