From: Jesper Dangaard Brouer <hawk@kernel.org>
To: syzbot ci <syzbot+cib904ea9ebb647254@syzkaller.appspotmail.com>,
andrew@lunn.ch, ast@kernel.org, bpf@vger.kernel.org,
corbet@lwn.net, daniel@iogearbox.net, davem@davemloft.net,
edumazet@google.com, frederic@kernel.org, horms@kernel.org,
j.koeppeler@tu-berlin.de, john.fastabend@gmail.com,
kernel-team@cloudflare.com, kuba@kernel.org,
linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-kselftest@vger.kernel.org, netdev@vger.kernel.org,
pabeni@redhat.com, sdf@fomichev.me, shuah@kernel.org
Cc: syzbot@lists.linux.dev, syzkaller-bugs@googlegroups.com
Subject: Re: [syzbot ci] Re: veth: add Byte Queue Limits (BQL) support
Date: Tue, 14 Apr 2026 10:06:54 +0200 [thread overview]
Message-ID: <41689f2e-8786-49a6-912d-f65e48245a61@kernel.org> (raw)
In-Reply-To: <69dd48c2.a00a0220.468cb.004e.GAE@google.com>
[-- Attachment #1: Type: text/plain, Size: 4594 bytes --]
On 13/04/2026 21.49, syzbot ci wrote:
> syzbot ci has tested the following series
>
> [v2] veth: add Byte Queue Limits (BQL) support
> https://lore.kernel.org/all/20260413094442.1376022-1-hawk@kernel.org
> * [PATCH net-next v2 1/5] net: add dev->bql flag to allow BQL sysfs for IFF_NO_QUEUE devices
> * [PATCH net-next v2 2/5] veth: implement Byte Queue Limits (BQL) for latency reduction
> * [PATCH net-next v2 3/5] veth: add tx_timeout watchdog as BQL safety net
> * [PATCH net-next v2 4/5] net: sched: add timeout count to NETDEV WATCHDOG message
> * [PATCH net-next v2 5/5] selftests: net: add veth BQL stress test
>
> and found the following issue:
> WARNING in veth_napi_del_range
>
> Full report is available here:
> https://ci.syzbot.org/series/ee732006-8545-4abd-a105-b4b1592a7baf
>
> ***
>
> WARNING in veth_napi_del_range
>
I have attached a reproducer script of my own (see attachment #2,
repro-syzbot-veth-bql.sh). I have a V3 ready; the diff with the fix
is included below.
> tree: net-next
> URL: https://kernel.googlesource.com/pub/scm/linux/kernel/git/netdev/net-next.git
> base: 8806d502e0a7e7d895b74afbd24e8550a65a2b17
> arch: amd64
> compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
> config: https://ci.syzbot.org/builds/90743a26-f003-44cf-abcc-5991c47588b2/config
> syz repro: https://ci.syzbot.org/findings/d068bfb2-9f8b-466a-95b4-cd7e7b00006c/syz_repro
>
> ------------[ cut here ]------------
> index >= dev->num_tx_queues
> WARNING: ./include/linux/netdevice.h:2672 at netdev_get_tx_queue include/linux/netdevice.h:2672 [inline], CPU#0: syz.1.27/6002
> WARNING: ./include/linux/netdevice.h:2672 at veth_napi_del_range+0x3b7/0x4e0 drivers/net/veth.c:1142, CPU#0: syz.1.27/6002
> Modules linked in:
> CPU: 0 UID: 0 PID: 6002 Comm: syz.1.27 Not tainted syzkaller #0 PREEMPT(full)
> Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
> RIP: 0010:netdev_get_tx_queue include/linux/netdevice.h:2672 [inline]
> RIP: 0010:veth_napi_del_range+0x3b7/0x4e0 drivers/net/veth.c:1142
> Code: 00 e8 ad 96 69 fe 44 39 6c 24 10 74 5e e8 41 61 44 fb 41 ff c5 49 bc 00 00 00 00 00 fc ff df e9 6d ff ff ff e8 2a 61 44 fb 90 <0f> 0b 90 42 80 3c 23 00 75 8e eb 94 48 8b 0c 24 80 e1 07 80 c1 03
> RSP: 0018:ffffc90003adf918 EFLAGS: 00010293
> RAX: ffffffff86814ec6 RBX: 1ffff110227a6c03 RCX: ffff888103a857c0
> RDX: 0000000000000000 RSI: 0000000000000002 RDI: 0000000000000002
> RBP: 1ffff110227a6c9a R08: ffff888113f01ab7 R09: 0000000000000000
> R10: ffff888113f01a98 R11: ffffed10227e0357 R12: dffffc0000000000
> R13: 0000000000000002 R14: 0000000000000002 R15: ffff888113d36018
> FS: 000055555ea16500(0000) GS:ffff88818de4a000(0000) knlGS:0000000000000000
> CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 00007efc287456b8 CR3: 000000010cdd0000 CR4: 00000000000006f0
> Call Trace:
> <TASK>
> veth_napi_del drivers/net/veth.c:1153 [inline]
> veth_disable_xdp+0x1b0/0x310 drivers/net/veth.c:1255
> veth_xdp_set drivers/net/veth.c:1693 [inline]
> veth_xdp+0x48e/0x730 drivers/net/veth.c:1717
> dev_xdp_propagate+0x125/0x260 net/core/dev_api.c:348
> bond_xdp_set drivers/net/bonding/bond_main.c:5715 [inline]
> bond_xdp+0x3ca/0x830 drivers/net/bonding/bond_main.c:5761
> dev_xdp_install+0x42c/0x600 net/core/dev.c:10387
> dev_xdp_detach_link net/core/dev.c:10579 [inline]
> bpf_xdp_link_release+0x362/0x540 net/core/dev.c:10595
> bpf_link_free+0x103/0x480 kernel/bpf/syscall.c:3292
> bpf_link_put_direct kernel/bpf/syscall.c:3344 [inline]
> bpf_link_release+0x6b/0x80 kernel/bpf/syscall.c:3351
> __fput+0x44f/0xa70 fs/file_table.c:469
> task_work_run+0x1d9/0x270 kernel/task_work.c:233
The BQL reset loop in veth_napi_del_range() iterates over
dev->real_num_rx_queues but indexes into the peer's TX queues,
which goes out of bounds when the peer has fewer TX queues
(e.g. a veth enslaved to a bond with XDP attached).
The fix is to clamp the loop bound to the peer's
real_num_tx_queues. This will be included in the V3 submission.
#syz test
---
drivers/net/veth.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 911e7e36e166..9d7b085c9548 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -1138,7 +1138,9 @@ static void veth_napi_del_range(struct net_device *dev, int start, int end)
*/
peer = rtnl_dereference(priv->peer);
if (peer) {
- for (i = start; i < end; i++)
+ int peer_end = min(end, (int)peer->real_num_tx_queues);
+
+ for (i = start; i < peer_end; i++)
netdev_tx_reset_queue(netdev_get_tx_queue(peer, i));
}
[-- Attachment #2: repro-syzbot-veth-bql.sh --]
[-- Type: application/x-shellscript, Size: 2967 bytes --]