* [PATCH net-next v5 1/5] veth: fix OOB txq access in veth_poll() with asymmetric queue counts
From: hawk @ 2026-05-05 13:21 UTC
To: netdev
Cc: hawk, kernel-team, Sashiko, Andrew Lunn, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov,
Daniel Borkmann, John Fastabend, Stanislav Fomichev,
Toshiaki Makita, linux-kernel, bpf
From: Jesper Dangaard Brouer <hawk@kernel.org>
XDP redirect into a veth device (via bpf_redirect()) calls
veth_xdp_xmit(), which enqueues frames into the peer's ptr_ring using
smp_processor_id() % peer->real_num_rx_queues
as the ring index. With an asymmetric veth pair where the peer has
fewer TX queues than RX queues, that index can equal or exceed
peer->real_num_tx_queues.
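
To make the arithmetic concrete (queue counts are illustrative, not
taken from the report): with 8 RX rings on the receiving side but only
4 TX queues on the peer, a redirect running on CPU 6 enqueues into
ring 6 % 8 == 6; the NAPI poll for that ring then derives
queue_idx == 6, which is out of range for the peer's 4 TX queues.
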
veth_poll() then resolves peer_txq for the ring via:
peer_txq = peer_dev ? netdev_get_tx_queue(peer_dev, queue_idx) : NULL;
where queue_idx = rq->xdp_rxq.queue_index. When queue_idx is greater
than or equal to peer_dev->real_num_tx_queues, this is an
out-of-bounds (OOB) access
into the peer's netdev_queue array, triggering DEBUG_NET_WARN_ON_ONCE
in netdev_get_tx_queue().
The normal ndo_start_xmit path is not affected: the stack clamps
skb->queue_mapping via netdev_cap_txqueue() before invoking
ndo_start_xmit, so rxq in veth_xmit() is always below real_num_tx_queues.
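
For reference, the stack-side cap behaves roughly like this
(paraphrased sketch of netdev_cap_txqueue(), from memory; not part of
this patch):

        if (unlikely(queue_index >= dev->real_num_tx_queues)) {
                net_warn_ratelimited(...);      /* warn about bad queue selection */
                return 0;                       /* fall back to TX queue 0 */
        }
        return queue_index;
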
Fix veth_poll() with a bounds check: only resolve peer_txq when
queue_idx is within the peer's TX queue range, and leave it NULL
otherwise. The out-of-range rings are fed
exclusively via XDP redirect (veth_xdp_xmit), never via ndo_start_xmit
(veth_xmit), so the peer txq was never stopped and there is nothing to
wake; NULL is the correct fallback.
Reported-by: Sashiko <sashiko-bot@kernel.org>
Closes: https://lore.kernel.org/all/20260502071828.616C3C19425@smtp.kernel.org/
Fixes: dc82a33297fc ("veth: apply qdisc backpressure on full ptr_ring to reduce TX drops")
Signed-off-by: Jesper Dangaard Brouer <hawk@kernel.org>
---
drivers/net/veth.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index e35df717e65e..0cfb19b760dd 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -972,7 +972,8 @@ static int veth_poll(struct napi_struct *napi, int budget)
/* NAPI functions as RCU section */
peer_dev = rcu_dereference_check(priv->peer, rcu_read_lock_bh_held());
- peer_txq = peer_dev ? netdev_get_tx_queue(peer_dev, queue_idx) : NULL;
+ peer_txq = (peer_dev && queue_idx < peer_dev->real_num_tx_queues) ?
+ netdev_get_tx_queue(peer_dev, queue_idx) : NULL;
xdp_set_return_frame_no_direct();
done = veth_xdp_rcv(rq, budget, &bq, &stats);
--
2.43.0

* [PATCH net-next v5 2/5] net: add dev->bql flag to allow BQL sysfs for IFF_NO_QUEUE devices
From: hawk @ 2026-05-05 13:21 UTC
To: netdev
Cc: hawk, kernel-team, Jonas Köppeler, Andrew Lunn,
David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
Simon Horman, Jonathan Corbet, Shuah Khan, Kuniyuki Iwashima,
Stanislav Fomichev, Christian Brauner, Yury Norov, Krishna Kumar,
Yajun Deng, linux-doc, linux-kernel
From: Jesper Dangaard Brouer <hawk@kernel.org>
Virtual devices with IFF_NO_QUEUE or lltx are excluded from BQL sysfs
by netdev_uses_bql(), since they traditionally lack real hardware
queues. However, some virtual devices like veth implement a real
ptr_ring FIFO with NAPI processing and benefit from BQL to limit
in-flight bytes and reduce latency.
Add a per-device 'bql' bitfield boolean in the priv_flags_slow section
of struct net_device. When set, it overrides the IFF_NO_QUEUE/lltx
exclusion and exposes the BQL sysfs entries under
/sys/class/net/<dev>/queues/tx-<n>/byte_queue_limits/. The flag is
still gated on CONFIG_BQL.
This allows drivers that use BQL despite being IFF_NO_QUEUE to opt in
to sysfs visibility for monitoring and debugging.
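
A minimal opt-in sketch for a hypothetical driver (mydev_setup() is
made up here; veth does the equivalent in a later patch of this
series):

        static void mydev_setup(struct net_device *dev)
        {
                dev->lltx = true;       /* would normally hide BQL sysfs */
                dev->bql = true;        /* opt in: keep BQL sysfs visible */
        }
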
Signed-off-by: Jesper Dangaard Brouer <hawk@kernel.org>
Tested-by: Jonas Köppeler <j.koeppeler@tu-berlin.de>
---
Documentation/networking/net_cachelines/net_device.rst | 1 +
include/linux/netdevice.h | 2 ++
net/core/net-sysfs.c | 8 +++++++-
3 files changed, 10 insertions(+), 1 deletion(-)
diff --git a/Documentation/networking/net_cachelines/net_device.rst b/Documentation/networking/net_cachelines/net_device.rst
index 1c19bb7705df..b775d3235a2d 100644
--- a/Documentation/networking/net_cachelines/net_device.rst
+++ b/Documentation/networking/net_cachelines/net_device.rst
@@ -170,6 +170,7 @@ unsigned_long:1 see_all_hwtstamp_requests
unsigned_long:1 change_proto_down
unsigned_long:1 netns_immutable
unsigned_long:1 fcoe_mtu
+unsigned_long:1 bql netdev_uses_bql(net-sysfs.c)
struct list_head net_notifier_list
struct macsec_ops* macsec_ops
struct udp_tunnel_nic_info* udp_tunnel_nic_info
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 744ffa243501..d4c7b020b6a7 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -2065,6 +2065,7 @@ enum netdev_reg_state {
* @change_proto_down: device supports setting carrier via IFLA_PROTO_DOWN
* @netns_immutable: interface can't change network namespaces
* @fcoe_mtu: device supports maximum FCoE MTU, 2158 bytes
+ * @bql: device uses BQL (DQL sysfs) despite having IFF_NO_QUEUE
*
* @net_notifier_list: List of per-net netdev notifier block
* that follow this device when it is moved
@@ -2479,6 +2480,7 @@ struct net_device {
unsigned long change_proto_down:1;
unsigned long netns_immutable:1;
unsigned long fcoe_mtu:1;
+ unsigned long bql:1;
struct list_head net_notifier_list;
diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
index 3318b5666e43..82833e5dae03 100644
--- a/net/core/net-sysfs.c
+++ b/net/core/net-sysfs.c
@@ -1945,10 +1945,16 @@ static const struct kobj_type netdev_queue_ktype = {
static bool netdev_uses_bql(const struct net_device *dev)
{
+ if (!IS_ENABLED(CONFIG_BQL))
+ return false;
+
+ if (dev->bql)
+ return true;
+
if (dev->lltx || (dev->priv_flags & IFF_NO_QUEUE))
return false;
- return IS_ENABLED(CONFIG_BQL);
+ return true;
}
static int netdev_queue_add_kobject(struct net_device *dev, int index)
--
2.43.0

* [PATCH net-next v5 3/5] veth: implement Byte Queue Limits (BQL) for latency reduction
From: hawk @ 2026-05-05 13:21 UTC
To: netdev
Cc: hawk, kernel-team, Andrew Lunn, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
John Fastabend, Stanislav Fomichev, linux-kernel, bpf
From: Jesper Dangaard Brouer <hawk@kernel.org>
Commit dc82a33297fc ("veth: apply qdisc backpressure on full ptr_ring to
reduce TX drops") gave qdiscs control over veth by returning
NETDEV_TX_BUSY when the ptr_ring is full (DRV_XOFF). That commit noted
a known limitation: the 256-entry ptr_ring sits below the qdisc as
a dark buffer, adding base latency because the qdisc has no visibility
into how many bytes are already queued there.
Add BQL support so the qdisc gets feedback and can begin shaping traffic
before the ring fills. In testing with fq_codel, BQL reduces ping RTT
under UDP load from ~6.61ms to ~0.36ms (18x).
Charge a fixed VETH_BQL_UNIT (1) per packet rather than skb->len, so
the DQL limit tracks packets-in-flight. Unlike a physical NIC, veth
has no link speed -- the ptr_ring drains at CPU speed and is
packet-indexed, not byte-indexed, so bytes are not the natural unit.
With byte-based charging, small packets sneak many more entries into
the ring before STACK_XOFF fires, deepening the dark buffer under
mixed-size workloads. Testing with a concurrent min-size packet flood
shows 3.7x ping RTT degradation with skb->len charging versus no
change with fixed-unit charging.
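
Illustrative arithmetic (the DQL limit value is made up): if DQL has
settled on a limit of ~64KB, skb->len charging stops the queue after
~43 full-size 1500-byte packets, but would need ~1000 64-byte packets
-- more than the 256-slot ring can hold -- so the ring fills and
DRV_XOFF fires before STACK_XOFF ever does. Charging VETH_BQL_UNIT
per packet makes the limit count ring entries directly, independent
of packet size.
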
Charge BQL inside veth_xdp_rx() under the ptr_ring producer_lock, after
confirming the ring is not full. The charge must precede the produce
because the NAPI consumer can run on another CPU and complete the SKB
the instant it becomes visible in the ring. Doing both under the same
lock avoids a pre-charge/undo pattern -- BQL is only charged when
produce is guaranteed to succeed.
BQL is enabled only when a real qdisc is attached (guarded by
!qdisc_txq_has_no_queue), because BQL state updates such as
dql_queued() are normally serialized by HARD_TX_LOCK, and for lltx
devices like veth that lock is not taken. The ptr_ring producer_lock
provides additional serialization that would let BQL work correctly
even with noqueue, but that combination is not enabled here, as the
netstack would drop the packet and warn.
Track per-SKB BQL state via a VETH_BQL_FLAG pointer tag in the ptr_ring
entry. This is necessary because the qdisc can be replaced live while
SKBs are in-flight -- each SKB must carry the charge decision made at
enqueue time rather than re-checking the peer's qdisc at completion.
Complete per-SKB in veth_xdp_rcv() rather than in bulk, so STACK_XOFF
clears promptly when producer and consumer run on different CPUs.
BQL introduces a second independent queue-stop mechanism (STACK_XOFF)
alongside the existing DRV_XOFF (ring full). Both must be clear for
the queue to transmit. Reset BQL state in veth_napi_del_range() after
synchronize_net() to avoid racing with in-flight veth_poll() calls.
Clamp the reset loop to the peer's real_num_tx_queues, since the peer
may have fewer TX queues than the local device has RX queues (e.g. when
veth is enslaved to a bond with XDP attached).
Signed-off-by: Jesper Dangaard Brouer <hawk@kernel.org>
---
drivers/net/veth.c | 86 ++++++++++++++++++++++++++++++++++++++++------
1 file changed, 75 insertions(+), 11 deletions(-)
diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 0cfb19b760dd..86b78900c48e 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -34,9 +34,13 @@
#define DRV_VERSION "1.0"
#define VETH_XDP_FLAG BIT(0)
+#define VETH_BQL_FLAG BIT(1)
#define VETH_RING_SIZE 256
#define VETH_XDP_HEADROOM (XDP_PACKET_HEADROOM + NET_IP_ALIGN)
+/* Fixed BQL charge: DQL limit tracks packets-in-flight, not bytes */
+#define VETH_BQL_UNIT 1
+
#define VETH_XDP_TX_BULK_SIZE 16
#define VETH_XDP_BATCH 16
@@ -280,6 +284,21 @@ static bool veth_is_xdp_frame(void *ptr)
return (unsigned long)ptr & VETH_XDP_FLAG;
}
+static bool veth_ptr_is_bql(void *ptr)
+{
+ return (unsigned long)ptr & VETH_BQL_FLAG;
+}
+
+static struct sk_buff *veth_ptr_to_skb(void *ptr)
+{
+ return (void *)((unsigned long)ptr & ~VETH_BQL_FLAG);
+}
+
+static void *veth_skb_to_ptr(struct sk_buff *skb, bool bql)
+{
+ return bql ? (void *)((unsigned long)skb | VETH_BQL_FLAG) : skb;
+}
+
static struct xdp_frame *veth_ptr_to_xdp(void *ptr)
{
return (void *)((unsigned long)ptr & ~VETH_XDP_FLAG);
@@ -295,7 +314,7 @@ static void veth_ptr_free(void *ptr)
if (veth_is_xdp_frame(ptr))
xdp_return_frame(veth_ptr_to_xdp(ptr));
else
- kfree_skb(ptr);
+ kfree_skb(veth_ptr_to_skb(ptr));
}
static void __veth_xdp_flush(struct veth_rq *rq)
@@ -309,19 +328,33 @@ static void __veth_xdp_flush(struct veth_rq *rq)
}
}
-static int veth_xdp_rx(struct veth_rq *rq, struct sk_buff *skb)
+static int veth_xdp_rx(struct veth_rq *rq, struct sk_buff *skb, bool do_bql,
+ struct netdev_queue *txq)
{
- if (unlikely(ptr_ring_produce(&rq->xdp_ring, skb)))
+ struct ptr_ring *ring = &rq->xdp_ring;
+
+ spin_lock(&ring->producer_lock);
+ if (unlikely(!ring->size) || __ptr_ring_full(ring)) {
+ spin_unlock(&ring->producer_lock);
return NETDEV_TX_BUSY; /* signal qdisc layer */
+ }
+
+ /* BQL charge before produce; consumer cannot see entry yet */
+ if (do_bql)
+ netdev_tx_sent_queue(txq, VETH_BQL_UNIT);
+
+ __ptr_ring_produce(ring, veth_skb_to_ptr(skb, do_bql));
+ spin_unlock(&ring->producer_lock);
return NET_RX_SUCCESS; /* same as NETDEV_TX_OK */
}
static int veth_forward_skb(struct net_device *dev, struct sk_buff *skb,
- struct veth_rq *rq, bool xdp)
+ struct veth_rq *rq, bool xdp, bool do_bql,
+ struct netdev_queue *txq)
{
return __dev_forward_skb(dev, skb) ?: xdp ?
- veth_xdp_rx(rq, skb) :
+ veth_xdp_rx(rq, skb, do_bql, txq) :
__netif_rx(skb);
}
@@ -348,10 +381,11 @@ static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
{
struct veth_priv *rcv_priv, *priv = netdev_priv(dev);
struct veth_rq *rq = NULL;
- struct netdev_queue *txq;
+ struct netdev_queue *txq = NULL;
struct net_device *rcv;
int length = skb->len;
bool use_napi = false;
+ bool do_bql = false;
int ret, rxq;
rcu_read_lock();
@@ -375,8 +409,12 @@ static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
}
skb_tx_timestamp(skb);
-
- ret = veth_forward_skb(rcv, skb, rq, use_napi);
+ if (rxq < dev->real_num_tx_queues) {
+ txq = netdev_get_tx_queue(dev, rxq);
+ /* BQL charge happens inside veth_xdp_rx() under producer_lock */
+ do_bql = use_napi && !qdisc_txq_has_no_queue(txq);
+ }
+ ret = veth_forward_skb(rcv, skb, rq, use_napi, do_bql, txq);
switch (ret) {
case NET_RX_SUCCESS: /* same as NETDEV_TX_OK */
if (!use_napi)
@@ -412,6 +450,7 @@ static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
net_crit_ratelimited("%s(%s): Invalid return code(%d)",
__func__, dev->name, ret);
}
+
rcu_read_unlock();
return ret;
@@ -900,7 +939,8 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
static int veth_xdp_rcv(struct veth_rq *rq, int budget,
struct veth_xdp_tx_bq *bq,
- struct veth_stats *stats)
+ struct veth_stats *stats,
+ struct netdev_queue *peer_txq)
{
int i, done = 0, n_xdpf = 0;
void *xdpf[VETH_XDP_BATCH];
@@ -928,9 +968,13 @@ static int veth_xdp_rcv(struct veth_rq *rq, int budget,
}
} else {
/* ndo_start_xmit */
- struct sk_buff *skb = ptr;
+ bool bql_charged = veth_ptr_is_bql(ptr);
+ struct sk_buff *skb = veth_ptr_to_skb(ptr);
stats->xdp_bytes += skb->len;
+ if (peer_txq && bql_charged)
+ netdev_tx_completed_queue(peer_txq, 1, VETH_BQL_UNIT);
+
skb = veth_xdp_rcv_skb(rq, skb, bq, stats);
if (skb) {
if (skb_shared(skb) || skb_unclone(skb, GFP_ATOMIC))
@@ -976,7 +1020,7 @@ static int veth_poll(struct napi_struct *napi, int budget)
netdev_get_tx_queue(peer_dev, queue_idx) : NULL;
xdp_set_return_frame_no_direct();
- done = veth_xdp_rcv(rq, budget, &bq, &stats);
+ done = veth_xdp_rcv(rq, budget, &bq, &stats, peer_txq);
if (stats.xdp_redirect > 0)
xdp_do_flush();
@@ -1074,6 +1118,7 @@ static int __veth_napi_enable(struct net_device *dev)
static void veth_napi_del_range(struct net_device *dev, int start, int end)
{
struct veth_priv *priv = netdev_priv(dev);
+ struct net_device *peer;
int i;
for (i = start; i < end; i++) {
@@ -1092,6 +1137,24 @@ static void veth_napi_del_range(struct net_device *dev, int start, int end)
ptr_ring_cleanup(&rq->xdp_ring, veth_ptr_free);
}
+ /* Reset BQL and wake stopped peer txqs. A concurrent veth_xmit()
+ * may have set DRV_XOFF between rcu_assign_pointer(napi, NULL) and
+ * synchronize_net(), and NAPI can no longer clear it.
+ * Only wake when the device is still up.
+ */
+ peer = rtnl_dereference(priv->peer);
+ if (peer) {
+ int peer_end = min_t(int, end, peer->real_num_tx_queues);
+
+ for (i = start; i < peer_end; i++) {
+ struct netdev_queue *txq = netdev_get_tx_queue(peer, i);
+
+ netdev_tx_reset_queue(txq);
+ if (netif_running(dev))
+ netif_tx_wake_queue(txq);
+ }
+ }
+
for (i = start; i < end; i++) {
page_pool_destroy(priv->rq[i].page_pool);
priv->rq[i].page_pool = NULL;
@@ -1741,6 +1804,7 @@ static void veth_setup(struct net_device *dev)
dev->priv_flags |= IFF_PHONY_HEADROOM;
dev->priv_flags |= IFF_DISABLE_NETPOLL;
dev->lltx = true;
+ dev->bql = true;
dev->netdev_ops = &veth_netdev_ops;
dev->xdp_metadata_ops = &veth_xdp_metadata_ops;
--
2.43.0

* [PATCH net-next v5 4/5] veth: add tx_timeout watchdog as BQL safety net
From: hawk @ 2026-05-05 13:21 UTC
To: netdev
Cc: hawk, kernel-team, Jonas Köppeler, Andrew Lunn,
David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
linux-kernel
From: Jesper Dangaard Brouer <hawk@kernel.org>
With the introduction of BQL (Byte Queue Limits) for veth, there are
now two independent mechanisms that can stop a transmit queue:
- DRV_XOFF: set by netif_tx_stop_queue() when the ptr_ring is full
- STACK_XOFF: set by BQL when the byte-in-flight limit is reached
If either mechanism stalls without a corresponding wake/completion,
the queue stops permanently. Enable the net device watchdog timer and
implement ndo_tx_timeout as a failsafe recovery.
The timeout handler resets BQL state (clearing STACK_XOFF) and wakes
the queue (clearing DRV_XOFF), covering both stop mechanisms. The
watchdog fires after 16 seconds, which accommodates worst-case NAPI
processing (budget=64 packets x 250ms per-packet consumer delay)
without false positives under normal backpressure.
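
(Worked out: 64 packets x 250 ms = 16,000 ms = 16 s, matching the
watchdog_timeo set below.)
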
Signed-off-by: Jesper Dangaard Brouer <hawk@kernel.org>
Tested-by: Jonas Köppeler <j.koeppeler@tu-berlin.de>
---
drivers/net/veth.c | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)
diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 86b78900c48e..4103d298aa9b 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -1442,6 +1442,22 @@ static int veth_set_channels(struct net_device *dev,
goto out;
}
+static void veth_tx_timeout(struct net_device *dev, unsigned int txqueue)
+{
+ struct netdev_queue *txq = netdev_get_tx_queue(dev, txqueue);
+
+ netdev_err(dev,
+ "veth backpressure(0x%lX) stalled(n:%ld) TXQ(%u) re-enable\n",
+ txq->state, atomic_long_read(&txq->trans_timeout), txqueue);
+
+ /* Cannot call netdev_tx_reset_queue(): dql_reset() races with
+ * peer NAPI calling dql_completed() concurrently.
+ * Just clear the stop bits; the qdisc will re-stop if still stuck.
+ */
+ clear_bit(__QUEUE_STATE_STACK_XOFF, &txq->state);
+ netif_tx_wake_queue(txq);
+}
+
static int veth_open(struct net_device *dev)
{
struct veth_priv *priv = netdev_priv(dev);
@@ -1780,6 +1796,7 @@ static const struct net_device_ops veth_netdev_ops = {
.ndo_bpf = veth_xdp,
.ndo_xdp_xmit = veth_ndo_xdp_xmit,
.ndo_get_peer_dev = veth_peer_dev,
+ .ndo_tx_timeout = veth_tx_timeout,
};
static const struct xdp_metadata_ops veth_xdp_metadata_ops = {
@@ -1819,6 +1836,7 @@ static void veth_setup(struct net_device *dev)
dev->priv_destructor = veth_dev_free;
dev->pcpu_stat_type = NETDEV_PCPU_STAT_TSTATS;
dev->max_mtu = ETH_MAX_MTU;
+ dev->watchdog_timeo = msecs_to_jiffies(16000);
dev->hw_features = VETH_FEATURES;
dev->hw_enc_features = VETH_FEATURES;
--
2.43.0

* [PATCH net-next v5 5/5] net: sched: add timeout count to NETDEV WATCHDOG message
From: hawk @ 2026-05-05 13:21 UTC
To: netdev
Cc: hawk, kernel-team, Jakub Kicinski, Jonas Köppeler,
Jamal Hadi Salim, Jiri Pirko, David S. Miller, Eric Dumazet,
Paolo Abeni, Simon Horman, linux-kernel
From: Jesper Dangaard Brouer <hawk@kernel.org>
Add the per-queue timeout counter (trans_timeout) to the core NETDEV
WATCHDOG log message. This makes it easy to determine how frequently
a particular queue is stalling from a single log line, without having
to search through and correlate spaced-out log entries.
Useful for production monitoring where timeouts are spaced by the
watchdog interval, making frequency hard to judge.
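
With this change a recurring stall logs roughly the following single
line (device name, CPU and counter values are illustrative):

        veth veth0: NETDEV WATCHDOG: CPU: 2: transmit queue 0 timed out 16012 ms (n:3)
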
Suggested-by: Jakub Kicinski <kuba@kernel.org>
Link: https://lore.kernel.org/all/20251107175445.58eba452@kernel.org/
Signed-off-by: Jesper Dangaard Brouer <hawk@kernel.org>
Tested-by: Jonas Köppeler <j.koeppeler@tu-berlin.de>
---
net/sched/sch_generic.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index a93321db8fd7..3e2e2e887a86 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -533,13 +533,12 @@ static void dev_watchdog(struct timer_list *t)
netif_running(dev) &&
netif_carrier_ok(dev)) {
unsigned int timedout_ms = 0;
+ struct netdev_queue *txq;
unsigned int i;
unsigned long trans_start;
unsigned long oldest_start = jiffies;
for (i = 0; i < dev->num_tx_queues; i++) {
- struct netdev_queue *txq;
-
txq = netdev_get_tx_queue(dev, i);
if (!netif_xmit_stopped(txq))
continue;
@@ -561,9 +560,10 @@ static void dev_watchdog(struct timer_list *t)
if (unlikely(timedout_ms)) {
trace_net_dev_xmit_timeout(dev, i);
- netdev_crit(dev, "NETDEV WATCHDOG: CPU: %d: transmit queue %u timed out %u ms\n",
+ netdev_crit(dev, "NETDEV WATCHDOG: CPU: %d: transmit queue %u timed out %u ms (n:%ld)\n",
raw_smp_processor_id(),
- i, timedout_ms);
+ i, timedout_ms,
+ atomic_long_read(&txq->trans_timeout));
netif_freeze_queues(dev);
dev->netdev_ops->ndo_tx_timeout(dev, i);
netif_unfreeze_queues(dev);
--
2.43.0