* [PATCH net-next v2 0/5] veth: add Byte Queue Limits (BQL) support
@ 2026-04-13 9:44 hawk
2026-04-13 9:44 ` [PATCH net-next v2 2/5] veth: implement Byte Queue Limits (BQL) for latency reduction hawk
2026-04-13 19:49 ` [syzbot ci] Re: veth: add Byte Queue Limits (BQL) support syzbot ci
0 siblings, 2 replies; 5+ messages in thread
From: hawk @ 2026-04-13 9:44 UTC (permalink / raw)
To: netdev
Cc: kernel-team, Jesper Dangaard Brouer, Chris Arges, Mike Freemon,
Toke Høiland-Jørgensen, Shuah Khan, linux-kselftest,
Jonas Köppeler, Alexei Starovoitov, Daniel Borkmann,
David S. Miller, Jakub Kicinski, John Fastabend,
Stanislav Fomichev, bpf
From: Jesper Dangaard Brouer <hawk@kernel.org>
This series adds BQL (Byte Queue Limits) to the veth driver, reducing
latency by dynamically limiting in-flight packets in the ptr_ring and
moving buffering into the qdisc where AQM algorithms can act on it.
Problem:
veth's 256-entry ptr_ring acts as a "dark buffer" -- packets queued
there are invisible to the qdisc's AQM. Under load, the ring fills
completely (DRV_XOFF backpressure), adding up to 256 packets of
unmanaged latency before the qdisc even sees congestion.
Solution:
BQL (STACK_XOFF) dynamically limits in-flight packets, stopping the
queue before the ring fills. This keeps the ring shallow and pushes
excess packets into the qdisc, where sojourn-based AQM can measure
and drop them.
Test setup: veth pair, UDP flood, 13000 iptables rules in the consumer
namespace (slowing each 64-packet NAPI cycle to ~6-7ms); ping measures
RTT under load.
             BQL off               BQL on
fq_codel:    RTT ~22ms, 4% loss    RTT ~1.3ms, 0% loss
sfq:         RTT ~24ms, 0% loss    RTT ~1.5ms, 0% loss
BQL reduces ping RTT by ~17x for both qdiscs. Consumer throughput
is unchanged (~10K pps) -- BQL adds no overhead.
CoDel bug discovered during BQL development:
Our original motivation for BQL was fq_codel ping loss observed under
load (4-26% depending on NAPI cycle time). Investigating this led us
to discover a bug in the CoDel implementation: codel_dequeue() does
not reset vars->first_above_time when a flow goes empty, contrary to
the reference algorithm. This causes stale CoDel state to persist
across empty periods in fq_codel's per-flow queues, penalizing sparse
flows like ICMP ping. A fix for this has been applied to the net tree:
https://git.kernel.org/netdev/net/c/815980fe6dbb
BQL remains valuable independently: it reduces RTT by ~17x by moving
buffering from the dark ptr_ring into the qdisc. Additionally, BQL
clears STACK_XOFF per-SKB as each packet completes, rather than
batch-waking after 64 packets (DRV_XOFF). This keeps sojourn times
below fq_codel's target, preventing CoDel from entering dropping
state on non-congested flows in the first place.
Key design decisions:
- Charge-under-lock in veth_xdp_rx(): The BQL charge must precede
the ptr_ring produce, because the NAPI consumer can run on another
CPU and complete the SKB immediately after it becomes visible. To
avoid a pre-charge/undo pattern, the charge is done under the
ptr_ring producer_lock after confirming the ring is not full. BQL
is only charged when produce is guaranteed to succeed, keeping
num_queued monotonically increasing. HARD_TX_LOCK normally
serializes dql_queued(), but not for lltx devices like veth, which
is why veth requires a qdisc for BQL; the ptr_ring producer_lock
additionally provides the serialization that would let noqueue
work correctly.
- Per-SKB BQL tracking via pointer tag: A VETH_BQL_FLAG bit in the
ptr_ring pointer records whether each SKB was BQL-charged. This is
necessary because the qdisc can be replaced live (noqueue->sfq or
vice versa) while SKBs are in-flight -- the completion side must
know the charge state that was decided at enqueue time.
- IFF_NO_QUEUE + BQL coexistence: A new dev->bql flag enables BQL
sysfs exposure for IFF_NO_QUEUE devices that opt in to DQL
accounting, without changing IFF_NO_QUEUE semantics.
Background and acknowledgments:
Mike Freemon reported the veth dark buffer problem internally at
Cloudflare and showed that recompiling the kernel with a ptr_ring
size of 30 (down from 256) made fq_codel work dramatically better.
This was the primary motivation for a proper BQL solution that
achieves the same effect dynamically without a kernel rebuild.
Chris Arges wrote a reproducer for the dark buffer latency problem:
https://github.com/netoptimizer/veth-backpressure-performance-testing
This is where we first observed ping packets being dropped under
fq_codel, which became our secondary motivation for BQL. In
production we switched to SFQ on veth devices as a workaround.
Jonas Köppeler provided extensive testing and code review.
Together we discovered that the fq_codel ping loss was actually a
12-year-old CoDel bug (stale first_above_time in empty flows), not
caused by the dark buffer itself. A fix has been applied to the net tree:
https://git.kernel.org/netdev/net/c/815980fe6dbb
Patch overview:
1. net: add dev->bql flag to allow BQL sysfs for IFF_NO_QUEUE devices
2. veth: implement Byte Queue Limits (BQL) for latency reduction
3. veth: add tx_timeout watchdog as BQL safety net
4. net: sched: add timeout count to NETDEV WATCHDOG message
5. selftests: net: add veth BQL stress test
Jesper Dangaard Brouer (5):
net: add dev->bql flag to allow BQL sysfs for IFF_NO_QUEUE devices
veth: implement Byte Queue Limits (BQL) for latency reduction
veth: add tx_timeout watchdog as BQL safety net
net: sched: add timeout count to NETDEV WATCHDOG message
selftests: net: add veth BQL stress test
.../networking/net_cachelines/net_device.rst | 1 +
drivers/net/veth.c | 92 +-
include/linux/netdevice.h | 2 +
net/core/net-sysfs.c | 8 +-
net/sched/sch_generic.c | 8 +-
tools/testing/selftests/net/Makefile | 3 +
tools/testing/selftests/net/config | 1 +
tools/testing/selftests/net/napi_poll_hist.bt | 40 +
tools/testing/selftests/net/veth_bql_test.sh | 821 ++++++++++++++++++
.../selftests/net/veth_bql_test_virtme.sh | 124 +++
10 files changed, 1084 insertions(+), 16 deletions(-)
create mode 100644 tools/testing/selftests/net/napi_poll_hist.bt
create mode 100755 tools/testing/selftests/net/veth_bql_test.sh
create mode 100755 tools/testing/selftests/net/veth_bql_test_virtme.sh
V1: https://lore.kernel.org/all/20260324174719.1224337-1-hawk@kernel.org/
Changes since V1:
- Patch 1 (dev->bql flag): add kdoc entry for @bql in struct net_device.
- Patch 2 (veth BQL): charge fixed VETH_BQL_UNIT (1) per packet instead
of skb->len. veth has no link speed; the ptr_ring is packet-indexed.
Byte-based charging lets small packets sneak many entries into the ring.
Testing: min-size packet flood causes 3.7x ping RTT degradation with
skb->len vs no change with fixed-unit charging.
- Patch 3 (tx_timeout watchdog): fix race with peer NAPI: replace
netdev_tx_reset_queue() with clear_bit(STACK_XOFF) + netif_tx_wake_queue()
to avoid dql_reset() racing with concurrent dql_completed().
- Patch 5 (selftests): fix shellcheck warnings and infos:
- Quote variables passed to kill_process and exit.
- Declare and assign local variables separately (SC2155).
- Use read -r to avoid mangling backslashes (SC2162).
- Add shellcheck disable comments for intentional word splitting
(set -- $line, tc $qdisc $opts) and indirect invocation (trap).
- Make iptables-restore failure a hard FAIL instead of continuing.
- Add veth_bql_test.sh to TEST_PROGS in net/Makefile.
- Add veth_bql_test_virtme.sh to TEST_FILES (needs kernel build tree).
- Add napi_poll_hist.bt to TEST_FILES in net/Makefile.
- Add CONFIG_NET_SCH_SFQ=m to net/config (default qdisc is sfq).
- Reduce default test duration from 300s to 30s for kselftest CI.
- Fix virtme wrapper: empty args bug, check vmlinux instead of test path.
- Cover letter: update CoDel fix reference to merged commit in net tree.
Cc: Chris Arges <carges@cloudflare.com>
Cc: Mike Freemon <mfreemon@cloudflare.com>
Cc: Toke Høiland-Jørgensen <toke@toke.dk>
Cc: Shuah Khan <shuah@kernel.org>
Cc: linux-kselftest@vger.kernel.org
Cc: Jonas Köppeler <j.koeppeler@tu-berlin.de>
Cc: kernel-team@cloudflare.com
--
2.43.0
^ permalink raw reply [flat|nested] 5+ messages in thread
* [PATCH net-next v2 2/5] veth: implement Byte Queue Limits (BQL) for latency reduction
2026-04-13 9:44 [PATCH net-next v2 0/5] veth: add Byte Queue Limits (BQL) support hawk
@ 2026-04-13 9:44 ` hawk
2026-04-13 19:49 ` [syzbot ci] Re: veth: add Byte Queue Limits (BQL) support syzbot ci
1 sibling, 0 replies; 5+ messages in thread
From: hawk @ 2026-04-13 9:44 UTC (permalink / raw)
To: netdev
Cc: kernel-team, Jesper Dangaard Brouer, Andrew Lunn, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov,
Daniel Borkmann, John Fastabend, Stanislav Fomichev, linux-kernel,
bpf
From: Jesper Dangaard Brouer <hawk@kernel.org>
Commit dc82a33297fc ("veth: apply qdisc backpressure on full ptr_ring to
reduce TX drops") gave qdiscs control over veth by returning
NETDEV_TX_BUSY when the ptr_ring is full (DRV_XOFF). That commit noted
a known limitation: the 256-entry ptr_ring sits in front of the qdisc as
a dark buffer, adding base latency because the qdisc has no visibility
into how many bytes are already queued there.
Add BQL support so the qdisc gets feedback and can begin shaping traffic
before the ring fills. In testing with fq_codel, BQL reduces ping RTT
under UDP load from ~6.61ms to ~0.36ms (18x).
Charge a fixed VETH_BQL_UNIT (1) per packet rather than skb->len, so
the DQL limit tracks packets-in-flight. Unlike a physical NIC, veth
has no link speed -- the ptr_ring drains at CPU speed and is
packet-indexed, not byte-indexed, so bytes are not the natural unit.
With byte-based charging, small packets sneak many more entries into
the ring before STACK_XOFF fires, deepening the dark buffer under
mixed-size workloads. Testing with a concurrent min-size packet flood
shows 3.7x ping RTT degradation with skb->len charging versus no
change with fixed-unit charging.
Charge BQL inside veth_xdp_rx() under the ptr_ring producer_lock, after
confirming the ring is not full. The charge must precede the produce
because the NAPI consumer can run on another CPU and complete the SKB
the instant it becomes visible in the ring. Doing both under the same
lock avoids a pre-charge/undo pattern -- BQL is only charged when
produce is guaranteed to succeed.
BQL is enabled only when a real qdisc is attached (guarded by
!qdisc_txq_has_no_queue), because HARD_TX_LOCK normally provides
serialization for TXQ modifications such as dql_queued(), but for
lltx devices like veth that serialization is not provided. The
ptr_ring producer_lock adds enough serialization that BQL would
work correctly even with noqueue, though that combination is not
currently enabled, as the netstack will drop and warn.
Track per-SKB BQL state via a VETH_BQL_FLAG pointer tag in the ptr_ring
entry. This is necessary because the qdisc can be replaced live while
SKBs are in-flight -- each SKB must carry the charge decision made at
enqueue time rather than re-checking the peer's qdisc at completion.
Complete per-SKB in veth_xdp_rcv() rather than in bulk, so STACK_XOFF
clears promptly when producer and consumer run on different CPUs.
BQL introduces a second independent queue-stop mechanism (STACK_XOFF)
alongside the existing DRV_XOFF (ring full). Both must be clear for
the queue to transmit. Reset BQL state in veth_napi_del_range() after
synchronize_net() to avoid racing with in-flight veth_poll() calls.
Signed-off-by: Jesper Dangaard Brouer <hawk@kernel.org>
---
drivers/net/veth.c | 74 +++++++++++++++++++++++++++++++++++++++-------
1 file changed, 63 insertions(+), 11 deletions(-)
diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index e35df717e65e..6431dc40f9b4 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -34,9 +34,13 @@
#define DRV_VERSION "1.0"
#define VETH_XDP_FLAG BIT(0)
+#define VETH_BQL_FLAG BIT(1)
#define VETH_RING_SIZE 256
#define VETH_XDP_HEADROOM (XDP_PACKET_HEADROOM + NET_IP_ALIGN)
+/* Fixed BQL charge: DQL limit tracks packets-in-flight, not bytes */
+#define VETH_BQL_UNIT 1
+
#define VETH_XDP_TX_BULK_SIZE 16
#define VETH_XDP_BATCH 16
@@ -280,6 +284,21 @@ static bool veth_is_xdp_frame(void *ptr)
return (unsigned long)ptr & VETH_XDP_FLAG;
}
+static bool veth_ptr_is_bql(void *ptr)
+{
+ return (unsigned long)ptr & VETH_BQL_FLAG;
+}
+
+static struct sk_buff *veth_ptr_to_skb(void *ptr)
+{
+ return (void *)((unsigned long)ptr & ~VETH_BQL_FLAG);
+}
+
+static void *veth_skb_to_ptr(struct sk_buff *skb, bool bql)
+{
+ return bql ? (void *)((unsigned long)skb | VETH_BQL_FLAG) : skb;
+}
+
static struct xdp_frame *veth_ptr_to_xdp(void *ptr)
{
return (void *)((unsigned long)ptr & ~VETH_XDP_FLAG);
@@ -295,7 +314,7 @@ static void veth_ptr_free(void *ptr)
if (veth_is_xdp_frame(ptr))
xdp_return_frame(veth_ptr_to_xdp(ptr));
else
- kfree_skb(ptr);
+ kfree_skb(veth_ptr_to_skb(ptr));
}
static void __veth_xdp_flush(struct veth_rq *rq)
@@ -309,19 +328,33 @@ static void __veth_xdp_flush(struct veth_rq *rq)
}
}
-static int veth_xdp_rx(struct veth_rq *rq, struct sk_buff *skb)
+static int veth_xdp_rx(struct veth_rq *rq, struct sk_buff *skb, bool do_bql,
+ struct netdev_queue *txq)
{
- if (unlikely(ptr_ring_produce(&rq->xdp_ring, skb)))
+ struct ptr_ring *ring = &rq->xdp_ring;
+
+ spin_lock(&ring->producer_lock);
+ if (unlikely(!ring->size) || __ptr_ring_full(ring)) {
+ spin_unlock(&ring->producer_lock);
return NETDEV_TX_BUSY; /* signal qdisc layer */
+ }
+
+ /* BQL charge before produce; consumer cannot see entry yet */
+ if (do_bql)
+ netdev_tx_sent_queue(txq, VETH_BQL_UNIT);
+
+ __ptr_ring_produce(ring, veth_skb_to_ptr(skb, do_bql));
+ spin_unlock(&ring->producer_lock);
return NET_RX_SUCCESS; /* same as NETDEV_TX_OK */
}
static int veth_forward_skb(struct net_device *dev, struct sk_buff *skb,
- struct veth_rq *rq, bool xdp)
+ struct veth_rq *rq, bool xdp, bool do_bql,
+ struct netdev_queue *txq)
{
return __dev_forward_skb(dev, skb) ?: xdp ?
- veth_xdp_rx(rq, skb) :
+ veth_xdp_rx(rq, skb, do_bql, txq) :
__netif_rx(skb);
}
@@ -352,6 +385,7 @@ static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
struct net_device *rcv;
int length = skb->len;
bool use_napi = false;
+ bool do_bql = false;
int ret, rxq;
rcu_read_lock();
@@ -375,8 +409,11 @@ static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
}
skb_tx_timestamp(skb);
+ txq = netdev_get_tx_queue(dev, rxq);
- ret = veth_forward_skb(rcv, skb, rq, use_napi);
+ /* BQL charge happens inside veth_xdp_rx() under producer_lock */
+ do_bql = use_napi && !qdisc_txq_has_no_queue(txq);
+ ret = veth_forward_skb(rcv, skb, rq, use_napi, do_bql, txq);
switch (ret) {
case NET_RX_SUCCESS: /* same as NETDEV_TX_OK */
if (!use_napi)
@@ -388,8 +425,6 @@ static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
/* If a qdisc is attached to our virtual device, returning
* NETDEV_TX_BUSY is allowed.
*/
- txq = netdev_get_tx_queue(dev, rxq);
-
if (qdisc_txq_has_no_queue(txq)) {
dev_kfree_skb_any(skb);
goto drop;
@@ -412,6 +447,7 @@ static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
net_crit_ratelimited("%s(%s): Invalid return code(%d)",
__func__, dev->name, ret);
}
+
rcu_read_unlock();
return ret;
@@ -900,7 +936,8 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
static int veth_xdp_rcv(struct veth_rq *rq, int budget,
struct veth_xdp_tx_bq *bq,
- struct veth_stats *stats)
+ struct veth_stats *stats,
+ struct netdev_queue *peer_txq)
{
int i, done = 0, n_xdpf = 0;
void *xdpf[VETH_XDP_BATCH];
@@ -928,9 +965,13 @@ static int veth_xdp_rcv(struct veth_rq *rq, int budget,
}
} else {
/* ndo_start_xmit */
- struct sk_buff *skb = ptr;
+ bool bql_charged = veth_ptr_is_bql(ptr);
+ struct sk_buff *skb = veth_ptr_to_skb(ptr);
stats->xdp_bytes += skb->len;
+ if (peer_txq && bql_charged)
+ netdev_tx_completed_queue(peer_txq, 1, VETH_BQL_UNIT);
+
skb = veth_xdp_rcv_skb(rq, skb, bq, stats);
if (skb) {
if (skb_shared(skb) || skb_unclone(skb, GFP_ATOMIC))
@@ -975,7 +1016,7 @@ static int veth_poll(struct napi_struct *napi, int budget)
peer_txq = peer_dev ? netdev_get_tx_queue(peer_dev, queue_idx) : NULL;
xdp_set_return_frame_no_direct();
- done = veth_xdp_rcv(rq, budget, &bq, &stats);
+ done = veth_xdp_rcv(rq, budget, &bq, &stats, peer_txq);
if (stats.xdp_redirect > 0)
xdp_do_flush();
@@ -1073,6 +1114,7 @@ static int __veth_napi_enable(struct net_device *dev)
static void veth_napi_del_range(struct net_device *dev, int start, int end)
{
struct veth_priv *priv = netdev_priv(dev);
+ struct net_device *peer;
int i;
for (i = start; i < end; i++) {
@@ -1091,6 +1133,15 @@ static void veth_napi_del_range(struct net_device *dev, int start, int end)
ptr_ring_cleanup(&rq->xdp_ring, veth_ptr_free);
}
+ /* Reset BQL on peer's txqs: remaining ring items were freed above
+ * without BQL completion, so DQL state must be reset.
+ */
+ peer = rtnl_dereference(priv->peer);
+ if (peer) {
+ for (i = start; i < end; i++)
+ netdev_tx_reset_queue(netdev_get_tx_queue(peer, i));
+ }
+
for (i = start; i < end; i++) {
page_pool_destroy(priv->rq[i].page_pool);
priv->rq[i].page_pool = NULL;
@@ -1740,6 +1791,7 @@ static void veth_setup(struct net_device *dev)
dev->priv_flags |= IFF_PHONY_HEADROOM;
dev->priv_flags |= IFF_DISABLE_NETPOLL;
dev->lltx = true;
+ dev->bql = true;
dev->netdev_ops = &veth_netdev_ops;
dev->xdp_metadata_ops = &veth_xdp_metadata_ops;
--
2.43.0
* [syzbot ci] Re: veth: add Byte Queue Limits (BQL) support
2026-04-13 9:44 [PATCH net-next v2 0/5] veth: add Byte Queue Limits (BQL) support hawk
2026-04-13 9:44 ` [PATCH net-next v2 2/5] veth: implement Byte Queue Limits (BQL) for latency reduction hawk
@ 2026-04-13 19:49 ` syzbot ci
2026-04-14 8:06 ` Jesper Dangaard Brouer
1 sibling, 1 reply; 5+ messages in thread
From: syzbot ci @ 2026-04-13 19:49 UTC (permalink / raw)
To: andrew, ast, bpf, corbet, daniel, davem, edumazet, frederic, hawk,
horms, j.koeppeler, jhs, jiri, john.fastabend, kernel-team,
krikku, kuba, kuniyu, linux-doc, linux-kernel, linux-kselftest,
netdev, pabeni, sdf, shuah, skhan, yajun.deng
Cc: syzbot, syzkaller-bugs
syzbot ci has tested the following series
[v2] veth: add Byte Queue Limits (BQL) support
https://lore.kernel.org/all/20260413094442.1376022-1-hawk@kernel.org
* [PATCH net-next v2 1/5] net: add dev->bql flag to allow BQL sysfs for IFF_NO_QUEUE devices
* [PATCH net-next v2 2/5] veth: implement Byte Queue Limits (BQL) for latency reduction
* [PATCH net-next v2 3/5] veth: add tx_timeout watchdog as BQL safety net
* [PATCH net-next v2 4/5] net: sched: add timeout count to NETDEV WATCHDOG message
* [PATCH net-next v2 5/5] selftests: net: add veth BQL stress test
and found the following issue:
WARNING in veth_napi_del_range
Full report is available here:
https://ci.syzbot.org/series/ee732006-8545-4abd-a105-b4b1592a7baf
***
WARNING in veth_napi_del_range
tree: net-next
URL: https://kernel.googlesource.com/pub/scm/linux/kernel/git/netdev/net-next.git
base: 8806d502e0a7e7d895b74afbd24e8550a65a2b17
arch: amd64
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
config: https://ci.syzbot.org/builds/90743a26-f003-44cf-abcc-5991c47588b2/config
syz repro: https://ci.syzbot.org/findings/d068bfb2-9f8b-466a-95b4-cd7e7b00006c/syz_repro
------------[ cut here ]------------
index >= dev->num_tx_queues
WARNING: ./include/linux/netdevice.h:2672 at netdev_get_tx_queue include/linux/netdevice.h:2672 [inline], CPU#0: syz.1.27/6002
WARNING: ./include/linux/netdevice.h:2672 at veth_napi_del_range+0x3b7/0x4e0 drivers/net/veth.c:1142, CPU#0: syz.1.27/6002
Modules linked in:
CPU: 0 UID: 0 PID: 6002 Comm: syz.1.27 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
RIP: 0010:netdev_get_tx_queue include/linux/netdevice.h:2672 [inline]
RIP: 0010:veth_napi_del_range+0x3b7/0x4e0 drivers/net/veth.c:1142
Code: 00 e8 ad 96 69 fe 44 39 6c 24 10 74 5e e8 41 61 44 fb 41 ff c5 49 bc 00 00 00 00 00 fc ff df e9 6d ff ff ff e8 2a 61 44 fb 90 <0f> 0b 90 42 80 3c 23 00 75 8e eb 94 48 8b 0c 24 80 e1 07 80 c1 03
RSP: 0018:ffffc90003adf918 EFLAGS: 00010293
RAX: ffffffff86814ec6 RBX: 1ffff110227a6c03 RCX: ffff888103a857c0
RDX: 0000000000000000 RSI: 0000000000000002 RDI: 0000000000000002
RBP: 1ffff110227a6c9a R08: ffff888113f01ab7 R09: 0000000000000000
R10: ffff888113f01a98 R11: ffffed10227e0357 R12: dffffc0000000000
R13: 0000000000000002 R14: 0000000000000002 R15: ffff888113d36018
FS: 000055555ea16500(0000) GS:ffff88818de4a000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007efc287456b8 CR3: 000000010cdd0000 CR4: 00000000000006f0
Call Trace:
<TASK>
veth_napi_del drivers/net/veth.c:1153 [inline]
veth_disable_xdp+0x1b0/0x310 drivers/net/veth.c:1255
veth_xdp_set drivers/net/veth.c:1693 [inline]
veth_xdp+0x48e/0x730 drivers/net/veth.c:1717
dev_xdp_propagate+0x125/0x260 net/core/dev_api.c:348
bond_xdp_set drivers/net/bonding/bond_main.c:5715 [inline]
bond_xdp+0x3ca/0x830 drivers/net/bonding/bond_main.c:5761
dev_xdp_install+0x42c/0x600 net/core/dev.c:10387
dev_xdp_detach_link net/core/dev.c:10579 [inline]
bpf_xdp_link_release+0x362/0x540 net/core/dev.c:10595
bpf_link_free+0x103/0x480 kernel/bpf/syscall.c:3292
bpf_link_put_direct kernel/bpf/syscall.c:3344 [inline]
bpf_link_release+0x6b/0x80 kernel/bpf/syscall.c:3351
__fput+0x44f/0xa70 fs/file_table.c:469
task_work_run+0x1d9/0x270 kernel/task_work.c:233
resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
__exit_to_user_mode_loop kernel/entry/common.c:67 [inline]
exit_to_user_mode_loop+0xed/0x480 kernel/entry/common.c:98
__exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]
syscall_exit_to_user_mode include/linux/entry-common.h:325 [inline]
do_syscall_64+0x32d/0xf80 arch/x86/entry/syscall_64.c:100
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f5bda39c819
Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 e8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffdca2969e8 EFLAGS: 00000246 ORIG_RAX: 00000000000001b4
RAX: 0000000000000000 RBX: 00007f5bda617da0 RCX: 00007f5bda39c819
RDX: 0000000000000000 RSI: 000000000000001e RDI: 0000000000000003
RBP: 00007f5bda617da0 R08: 00007f5bda616128 R09: 0000000000000000
R10: 000000000003fd78 R11: 0000000000000246 R12: 0000000000010fb8
R13: 00007f5bda61609c R14: 0000000000010cdd R15: 00007ffdca296af0
</TASK>
***
If these findings have caused you to resend the series or submit a
separate fix, please add the following tag to your commit message:
Tested-by: syzbot@syzkaller.appspotmail.com
---
This report is generated by a bot. It may contain errors.
syzbot ci engineers can be reached at syzkaller@googlegroups.com.
To test a patch for this bug, please reply with `#syz test`
(should be on a separate line).
The patch should be attached to the email.
Note: arguments like custom git repos and branches are not supported.
* Re: [syzbot ci] Re: veth: add Byte Queue Limits (BQL) support
2026-04-13 19:49 ` [syzbot ci] Re: veth: add Byte Queue Limits (BQL) support syzbot ci
@ 2026-04-14 8:06 ` Jesper Dangaard Brouer
2026-04-14 8:08 ` syzbot ci
0 siblings, 1 reply; 5+ messages in thread
From: Jesper Dangaard Brouer @ 2026-04-14 8:06 UTC (permalink / raw)
To: syzbot ci, andrew, ast, bpf, corbet, daniel, davem, edumazet,
frederic, horms, j.koeppeler, john.fastabend, kernel-team, kuba,
linux-doc, linux-kernel, linux-kselftest, netdev, pabeni, sdf,
shuah
Cc: syzbot, syzkaller-bugs
[-- Attachment #1: Type: text/plain, Size: 4594 bytes --]
On 13/04/2026 21.49, syzbot ci wrote:
> syzbot ci has tested the following series
>
> [v2] veth: add Byte Queue Limits (BQL) support
> https://lore.kernel.org/all/20260413094442.1376022-1-hawk@kernel.org
> * [PATCH net-next v2 1/5] net: add dev->bql flag to allow BQL sysfs for IFF_NO_QUEUE devices
> * [PATCH net-next v2 2/5] veth: implement Byte Queue Limits (BQL) for latency reduction
> * [PATCH net-next v2 3/5] veth: add tx_timeout watchdog as BQL safety net
> * [PATCH net-next v2 4/5] net: sched: add timeout count to NETDEV WATCHDOG message
> * [PATCH net-next v2 5/5] selftests: net: add veth BQL stress test
>
> and found the following issue:
> WARNING in veth_napi_del_range
>
> Full report is available here:
> https://ci.syzbot.org/series/ee732006-8545-4abd-a105-b4b1592a7baf
>
> ***
>
> WARNING in veth_napi_del_range
>
I wrote and attached a reproducer myself. I have a V3 ready; see
below for the diff.
> tree: net-next
> URL: https://kernel.googlesource.com/pub/scm/linux/kernel/git/netdev/net-next.git
> base: 8806d502e0a7e7d895b74afbd24e8550a65a2b17
> arch: amd64
> compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
> config: https://ci.syzbot.org/builds/90743a26-f003-44cf-abcc-5991c47588b2/config
> syz repro: https://ci.syzbot.org/findings/d068bfb2-9f8b-466a-95b4-cd7e7b00006c/syz_repro
>
> ------------[ cut here ]------------
> index >= dev->num_tx_queues
> WARNING: ./include/linux/netdevice.h:2672 at netdev_get_tx_queue include/linux/netdevice.h:2672 [inline], CPU#0: syz.1.27/6002
> WARNING: ./include/linux/netdevice.h:2672 at veth_napi_del_range+0x3b7/0x4e0 drivers/net/veth.c:1142, CPU#0: syz.1.27/6002
> Modules linked in:
> CPU: 0 UID: 0 PID: 6002 Comm: syz.1.27 Not tainted syzkaller #0 PREEMPT(full)
> Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
> RIP: 0010:netdev_get_tx_queue include/linux/netdevice.h:2672 [inline]
> RIP: 0010:veth_napi_del_range+0x3b7/0x4e0 drivers/net/veth.c:1142
> Code: 00 e8 ad 96 69 fe 44 39 6c 24 10 74 5e e8 41 61 44 fb 41 ff c5 49 bc 00 00 00 00 00 fc ff df e9 6d ff ff ff e8 2a 61 44 fb 90 <0f> 0b 90 42 80 3c 23 00 75 8e eb 94 48 8b 0c 24 80 e1 07 80 c1 03
> RSP: 0018:ffffc90003adf918 EFLAGS: 00010293
> RAX: ffffffff86814ec6 RBX: 1ffff110227a6c03 RCX: ffff888103a857c0
> RDX: 0000000000000000 RSI: 0000000000000002 RDI: 0000000000000002
> RBP: 1ffff110227a6c9a R08: ffff888113f01ab7 R09: 0000000000000000
> R10: ffff888113f01a98 R11: ffffed10227e0357 R12: dffffc0000000000
> R13: 0000000000000002 R14: 0000000000000002 R15: ffff888113d36018
> FS: 000055555ea16500(0000) GS:ffff88818de4a000(0000) knlGS:0000000000000000
> CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 00007efc287456b8 CR3: 000000010cdd0000 CR4: 00000000000006f0
> Call Trace:
> <TASK>
> veth_napi_del drivers/net/veth.c:1153 [inline]
> veth_disable_xdp+0x1b0/0x310 drivers/net/veth.c:1255
> veth_xdp_set drivers/net/veth.c:1693 [inline]
> veth_xdp+0x48e/0x730 drivers/net/veth.c:1717
> dev_xdp_propagate+0x125/0x260 net/core/dev_api.c:348
> bond_xdp_set drivers/net/bonding/bond_main.c:5715 [inline]
> bond_xdp+0x3ca/0x830 drivers/net/bonding/bond_main.c:5761
> dev_xdp_install+0x42c/0x600 net/core/dev.c:10387
> dev_xdp_detach_link net/core/dev.c:10579 [inline]
> bpf_xdp_link_release+0x362/0x540 net/core/dev.c:10595
> bpf_link_free+0x103/0x480 kernel/bpf/syscall.c:3292
> bpf_link_put_direct kernel/bpf/syscall.c:3344 [inline]
> bpf_link_release+0x6b/0x80 kernel/bpf/syscall.c:3351
> __fput+0x44f/0xa70 fs/file_table.c:469
> task_work_run+0x1d9/0x270 kernel/task_work.c:233
The BQL reset loop in veth_napi_del_range() iterates over
dev->real_num_rx_queues but indexes into the peer's TX queues,
which goes out of bounds when the peer has fewer TX queues
(e.g. a veth enslaved to a bond with XDP).

The fix is to clamp the loop to the peer's real_num_tx_queues.
It will be included in the V3 submission.
#syz test
---
drivers/net/veth.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 911e7e36e166..9d7b085c9548 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -1138,7 +1138,9 @@ static void veth_napi_del_range(struct net_device *dev, int start, int end)
*/
peer = rtnl_dereference(priv->peer);
if (peer) {
- for (i = start; i < end; i++)
+ int peer_end = min(end, (int)peer->real_num_tx_queues);
+
+ for (i = start; i < peer_end; i++)
netdev_tx_reset_queue(netdev_get_tx_queue(peer, i));
}
[-- Attachment #2: repro-syzbot-veth-bql.sh --]
[-- Type: application/x-shellscript, Size: 2967 bytes --]
* Re: Re: [syzbot ci] Re: veth: add Byte Queue Limits (BQL) support
2026-04-14 8:06 ` Jesper Dangaard Brouer
@ 2026-04-14 8:08 ` syzbot ci
0 siblings, 0 replies; 5+ messages in thread
From: syzbot ci @ 2026-04-14 8:08 UTC (permalink / raw)
To: hawk
Cc: andrew, ast, bpf, corbet, daniel, davem, edumazet, frederic, hawk,
horms, j.koeppeler, john.fastabend, kernel-team, kuba, linux-doc,
linux-kernel, linux-kselftest, netdev, pabeni, sdf, shuah, syzbot,
syzkaller-bugs
Please attach the patch to act upon.