* [PATCH net-next 0/8] net/sched: prepare lockless qdisc dumps
@ 2026-05-07 22:19 Eric Dumazet
2026-05-07 22:19 ` [PATCH net-next 1/8] net/sched: add READ_ONCE() in gnet_stats_add_queue[_cpu] Eric Dumazet
` (7 more replies)
0 siblings, 8 replies; 12+ messages in thread
From: Eric Dumazet @ 2026-05-07 22:19 UTC
To: David S . Miller, Jakub Kicinski, Paolo Abeni
Cc: Simon Horman, Jamal Hadi Salim, Jiri Pirko, netdev, eric.dumazet,
Eric Dumazet
The goal is to no longer acquire RTNL in qdisc dumps.
This series annotates data-races, and changes mq and mq_prio to
no longer acquire child qdisc spinlocks.
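For context, the pattern used throughout the series pairs WRITE_ONCE()
on the (locked) updater side with READ_ONCE() on the lockless dump
side. A minimal sketch, not actual code from the series, assuming the
struct Qdisc definitions from net/sch_generic.h:

        /* Writer runs under the qdisc spinlock; reader holds only RCU. */
        static void writer_side(struct Qdisc *sch)
        {
                WRITE_ONCE(sch->q.qlen, sch->q.qlen + 1);  /* untorn store */
        }

        static unsigned int reader_side(const struct Qdisc *sch)
        {
                return READ_ONCE(sch->q.qlen);             /* untorn load */
        }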
Eric Dumazet (8):
net/sched: add READ_ONCE() in gnet_stats_add_queue[_cpu]
net/sched: add qdisc_qlen_inc() and qdisc_qlen_dec()
net/sched: annotate data-races around sch->qstats.backlog
net/sched: add qdisc_qlen_lockless() helper
net/sched: add const qualifiers to gnet_stats helpers
net/sched: mq: no longer acquire qdisc spinlocks in dump operations
net/sched: mq_prio: no longer acquire qdisc spinlocks in mqprio_dump()
net/sched: mq_prio: no longer acquire qdisc spinlocks in
mqprio_dump_class_stats()
include/net/gen_stats.h | 12 +++---
include/net/sch_generic.h | 56 ++++++++++++++++++++-----
net/core/gen_stats.c | 44 ++++++++++----------
net/sched/sch_api.c | 4 +-
net/sched/sch_cake.c | 15 ++++---
net/sched/sch_cbs.c | 6 +--
net/sched/sch_choke.c | 8 ++--
net/sched/sch_codel.c | 2 +-
net/sched/sch_drr.c | 6 +--
net/sched/sch_dualpi2.c | 6 +--
net/sched/sch_etf.c | 8 ++--
net/sched/sch_ets.c | 6 +--
net/sched/sch_fq.c | 8 ++--
net/sched/sch_fq_codel.c | 11 ++---
net/sched/sch_fq_pie.c | 8 ++--
net/sched/sch_generic.c | 12 +++---
net/sched/sch_gred.c | 2 +-
net/sched/sch_hfsc.c | 6 +--
net/sched/sch_hhf.c | 7 ++--
net/sched/sch_htb.c | 6 +--
net/sched/sch_mq.c | 35 +++++++++++-----
net/sched/sch_mqprio.c | 86 +++++++++++++++++++++------------------
net/sched/sch_multiq.c | 4 +-
net/sched/sch_netem.c | 12 +++---
net/sched/sch_prio.c | 6 +--
net/sched/sch_qfq.c | 8 ++--
net/sched/sch_red.c | 6 +--
net/sched/sch_sfb.c | 8 ++--
net/sched/sch_sfq.c | 11 ++---
net/sched/sch_skbprio.c | 4 +-
net/sched/sch_taprio.c | 4 +-
net/sched/sch_tbf.c | 10 ++---
net/sched/sch_teql.c | 2 +-
33 files changed, 242 insertions(+), 187 deletions(-)
--
2.54.0.563.g4f69b47b94-goog
* [PATCH net-next 1/8] net/sched: add READ_ONCE() in gnet_stats_add_queue[_cpu]
2026-05-07 22:19 [PATCH net-next 0/8] net/sched: prepare lockless qdisc dumps Eric Dumazet
@ 2026-05-07 22:19 ` Eric Dumazet
2026-05-07 22:19 ` [PATCH net-next 2/8] net/sched: add qdisc_qlen_inc() and qdisc_qlen_dec() Eric Dumazet
` (6 subsequent siblings)
7 siblings, 0 replies; 12+ messages in thread
From: Eric Dumazet @ 2026-05-07 22:19 UTC
To: David S . Miller, Jakub Kicinski, Paolo Abeni
Cc: Simon Horman, Jamal Hadi Salim, Jiri Pirko, netdev, eric.dumazet,
Eric Dumazet
Stats are read locklessly; add READ_ONCE() to prevent load-tearing.
Write side will be handled in separate patches.
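For illustration only (not part of this patch), a single annotated
load looks like this; without READ_ONCE() the compiler may re-load or
split the access while a writer updates the field concurrently:

        static u32 read_one_backlog(const struct gnet_stats_queue *qcpu)
        {
                /* One untorn load of a field a writer may be updating. */
                return READ_ONCE(qcpu->backlog);
        }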
Signed-off-by: Eric Dumazet <edumazet@google.com>
---
net/core/gen_stats.c | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/net/core/gen_stats.c b/net/core/gen_stats.c
index b71ccaec0991461333dbe465ee619bca4a06e75b..1a2380e74272de8eaf3d4ef453e56105a31e9edf 100644
--- a/net/core/gen_stats.c
+++ b/net/core/gen_stats.c
@@ -345,11 +345,11 @@ static void gnet_stats_add_queue_cpu(struct gnet_stats_queue *qstats,
for_each_possible_cpu(i) {
const struct gnet_stats_queue *qcpu = per_cpu_ptr(q, i);
- qstats->qlen += qcpu->qlen;
- qstats->backlog += qcpu->backlog;
- qstats->drops += qcpu->drops;
- qstats->requeues += qcpu->requeues;
- qstats->overlimits += qcpu->overlimits;
+ qstats->qlen += READ_ONCE(qcpu->qlen);
+ qstats->backlog += READ_ONCE(qcpu->backlog);
+ qstats->drops += READ_ONCE(qcpu->drops);
+ qstats->requeues += READ_ONCE(qcpu->requeues);
+ qstats->overlimits += READ_ONCE(qcpu->overlimits);
}
}
@@ -360,11 +360,11 @@ void gnet_stats_add_queue(struct gnet_stats_queue *qstats,
if (cpu) {
gnet_stats_add_queue_cpu(qstats, cpu);
} else {
- qstats->qlen += q->qlen;
- qstats->backlog += q->backlog;
- qstats->drops += q->drops;
- qstats->requeues += q->requeues;
- qstats->overlimits += q->overlimits;
+ qstats->qlen += READ_ONCE(q->qlen);
+ qstats->backlog += READ_ONCE(q->backlog);
+ qstats->drops += READ_ONCE(q->drops);
+ qstats->requeues += READ_ONCE(q->requeues);
+ qstats->overlimits += READ_ONCE(q->overlimits);
}
}
EXPORT_SYMBOL(gnet_stats_add_queue);
--
2.54.0.563.g4f69b47b94-goog
* [PATCH net-next 2/8] net/sched: add qdisc_qlen_inc() and qdisc_qlen_dec()
2026-05-07 22:19 [PATCH net-next 0/8] net/sched: prepare lockless qdisc dumps Eric Dumazet
2026-05-07 22:19 ` [PATCH net-next 1/8] net/sched: add READ_ONCE() in gnet_stats_add_queue[_cpu] Eric Dumazet
@ 2026-05-07 22:19 ` Eric Dumazet
2026-05-07 22:19 ` [PATCH net-next 3/8] net/sched: annotate data-races around sch->qstats.backlog Eric Dumazet
` (5 subsequent siblings)
7 siblings, 0 replies; 12+ messages in thread
From: Eric Dumazet @ 2026-05-07 22:19 UTC
To: David S . Miller, Jakub Kicinski, Paolo Abeni
Cc: Simon Horman, Jamal Hadi Salim, Jiri Pirko, netdev, eric.dumazet,
Eric Dumazet
Add helpers to increment or decrement sch->q.qlen, using
WRITE_ONCE() to prevent store-tearing.
Also add WRITE_ONCE() at the other places where sch->q.qlen is changed.
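As a usage sketch with a hypothetical qdisc (the toy_* names are
illustrative, not from this patch), per-packet accounting now goes
through the helpers:

        static void toy_account_enqueue(struct Qdisc *sch,
                                        const struct sk_buff *skb)
        {
                qdisc_qstats_backlog_inc(sch, skb); /* existing helper */
                qdisc_qlen_inc(sch);                /* new: WRITE_ONCE() inside */
        }

        static void toy_account_dequeue(struct Qdisc *sch,
                                        const struct sk_buff *skb)
        {
                qdisc_qstats_backlog_dec(sch, skb);
                qdisc_qlen_dec(sch);                /* new: WRITE_ONCE() inside */
        }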
Signed-off-by: Eric Dumazet <edumazet@google.com>
---
include/net/sch_generic.h | 26 ++++++++++++++++++--------
net/sched/sch_api.c | 2 +-
net/sched/sch_cake.c | 8 ++++----
net/sched/sch_cbs.c | 4 ++--
net/sched/sch_choke.c | 8 ++++----
net/sched/sch_drr.c | 4 ++--
net/sched/sch_dualpi2.c | 6 +++---
net/sched/sch_etf.c | 8 ++++----
net/sched/sch_ets.c | 4 ++--
net/sched/sch_fq.c | 6 +++---
net/sched/sch_fq_codel.c | 7 ++++---
net/sched/sch_fq_pie.c | 4 ++--
net/sched/sch_generic.c | 10 +++++-----
net/sched/sch_hfsc.c | 4 ++--
net/sched/sch_hhf.c | 7 ++++---
net/sched/sch_htb.c | 4 ++--
net/sched/sch_mq.c | 5 +++--
net/sched/sch_mqprio.c | 18 ++++++++++--------
net/sched/sch_multiq.c | 4 ++--
net/sched/sch_netem.c | 10 +++++-----
net/sched/sch_prio.c | 4 ++--
net/sched/sch_qfq.c | 6 +++---
net/sched/sch_red.c | 4 ++--
net/sched/sch_sfb.c | 4 ++--
net/sched/sch_sfq.c | 9 +++++----
net/sched/sch_skbprio.c | 4 ++--
net/sched/sch_taprio.c | 4 ++--
net/sched/sch_tbf.c | 6 +++---
net/sched/sch_teql.c | 2 +-
29 files changed, 104 insertions(+), 88 deletions(-)
diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index ccfabfac674ef8617faeabd2fcb15daf8a1ea17f..3893fbb29960d9b32042616b747168b689b355fd 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -542,6 +542,16 @@ static inline int qdisc_qlen(const struct Qdisc *q)
return q->q.qlen;
}
+static inline void qdisc_qlen_inc(struct Qdisc *q)
+{
+ WRITE_ONCE(q->q.qlen, q->q.qlen + 1);
+}
+
+static inline void qdisc_qlen_dec(struct Qdisc *q)
+{
+ WRITE_ONCE(q->q.qlen, q->q.qlen - 1);
+}
+
static inline int qdisc_qlen_sum(const struct Qdisc *q)
{
__u32 qlen = q->qstats.qlen;
@@ -549,9 +559,9 @@ static inline int qdisc_qlen_sum(const struct Qdisc *q)
if (qdisc_is_percpu_stats(q)) {
for_each_possible_cpu(i)
- qlen += per_cpu_ptr(q->cpu_qstats, i)->qlen;
+ qlen += READ_ONCE(per_cpu_ptr(q->cpu_qstats, i)->qlen);
} else {
- qlen += q->q.qlen;
+ qlen += READ_ONCE(q->q.qlen);
}
return qlen;
@@ -1110,7 +1120,7 @@ static inline struct sk_buff *qdisc_dequeue_internal(struct Qdisc *sch, bool dir
skb = __skb_dequeue(&sch->gso_skb);
if (skb) {
- sch->q.qlen--;
+ qdisc_qlen_dec(sch);
qdisc_qstats_backlog_dec(sch, skb);
return skb;
}
@@ -1266,7 +1276,7 @@ static inline struct sk_buff *qdisc_peek_dequeued(struct Qdisc *sch)
__skb_queue_head(&sch->gso_skb, skb);
/* it's still part of the queue */
qdisc_qstats_backlog_inc(sch, skb);
- sch->q.qlen++;
+ qdisc_qlen_inc(sch);
}
}
@@ -1283,7 +1293,7 @@ static inline void qdisc_update_stats_at_dequeue(struct Qdisc *sch,
} else {
qdisc_qstats_backlog_dec(sch, skb);
qdisc_bstats_update(sch, skb);
- sch->q.qlen--;
+ qdisc_qlen_dec(sch);
}
}
@@ -1295,7 +1305,7 @@ static inline void qdisc_update_stats_at_enqueue(struct Qdisc *sch,
this_cpu_add(sch->cpu_qstats->backlog, pkt_len);
} else {
sch->qstats.backlog += pkt_len;
- sch->q.qlen++;
+ qdisc_qlen_inc(sch);
}
}
@@ -1311,7 +1321,7 @@ static inline struct sk_buff *qdisc_dequeue_peeked(struct Qdisc *sch)
qdisc_qstats_cpu_qlen_dec(sch);
} else {
qdisc_qstats_backlog_dec(sch, skb);
- sch->q.qlen--;
+ qdisc_qlen_dec(sch);
}
} else {
skb = sch->dequeue(sch);
@@ -1332,7 +1342,7 @@ static inline void __qdisc_reset_queue(struct qdisc_skb_head *qh)
qh->head = NULL;
qh->tail = NULL;
- qh->qlen = 0;
+ WRITE_ONCE(qh->qlen, 0);
}
}
diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
index 6f7847c5536f16e6754954f0a606581e17257361..cefa2d8ac5ec00c78b08b520a11672120d10cdef 100644
--- a/net/sched/sch_api.c
+++ b/net/sched/sch_api.c
@@ -805,7 +805,7 @@ void qdisc_tree_reduce_backlog(struct Qdisc *sch, int n, int len)
cl = cops->find(sch, parentid);
cops->qlen_notify(sch, cl);
}
- sch->q.qlen -= n;
+ WRITE_ONCE(sch->q.qlen, sch->q.qlen - n);
sch->qstats.backlog -= len;
__qdisc_qstats_drop(sch, drops);
}
diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
index d931e8d51f723fdedea9f3f90efceec6e0a070d3..7ab75a52f7d1a46d87fc8f7c099c749a5331ccf6 100644
--- a/net/sched/sch_cake.c
+++ b/net/sched/sch_cake.c
@@ -1612,7 +1612,7 @@ static unsigned int cake_drop(struct Qdisc *sch, struct sk_buff **to_free)
cake_advance_shaper(q, b, skb, now, true);
qdisc_drop_reason(skb, sch, to_free, QDISC_DROP_OVERLIMIT);
- sch->q.qlen--;
+ qdisc_qlen_dec(sch);
cake_heapify(q, 0);
@@ -1822,7 +1822,7 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
segs);
flow_queue_add(flow, segs);
- sch->q.qlen++;
+ qdisc_qlen_inc(sch);
numsegs++;
slen += segs->len;
q->buffer_used += segs->truesize;
@@ -1861,7 +1861,7 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
qdisc_tree_reduce_backlog(sch, 1, ack_pkt_len);
consume_skb(ack);
} else {
- sch->q.qlen++;
+ qdisc_qlen_inc(sch);
q->buffer_used += skb->truesize;
}
@@ -1987,7 +1987,7 @@ static struct sk_buff *cake_dequeue_one(struct Qdisc *sch)
WRITE_ONCE(b->tin_backlog, b->tin_backlog - len);
sch->qstats.backlog -= len;
q->buffer_used -= skb->truesize;
- sch->q.qlen--;
+ qdisc_qlen_dec(sch);
if (q->overflow_timeout)
cake_heapify(q, b->overflow_idx[q->cur_flow]);
diff --git a/net/sched/sch_cbs.c b/net/sched/sch_cbs.c
index 8c9a0400c8622c652db290796f2dd338eb61799c..a75e58876797952f2218725f6da5cff29f330ae2 100644
--- a/net/sched/sch_cbs.c
+++ b/net/sched/sch_cbs.c
@@ -97,7 +97,7 @@ static int cbs_child_enqueue(struct sk_buff *skb, struct Qdisc *sch,
return err;
sch->qstats.backlog += len;
- sch->q.qlen++;
+ qdisc_qlen_inc(sch);
return NET_XMIT_SUCCESS;
}
@@ -168,7 +168,7 @@ static struct sk_buff *cbs_child_dequeue(struct Qdisc *sch, struct Qdisc *child)
qdisc_qstats_backlog_dec(sch, skb);
qdisc_bstats_update(sch, skb);
- sch->q.qlen--;
+ qdisc_qlen_dec(sch);
return skb;
}
diff --git a/net/sched/sch_choke.c b/net/sched/sch_choke.c
index 2875bcdb18a413075c795665e95f9dbbaac45962..73d3e673dc7b16cf2b9ac1d622da280c2ceb064a 100644
--- a/net/sched/sch_choke.c
+++ b/net/sched/sch_choke.c
@@ -123,7 +123,7 @@ static void choke_drop_by_idx(struct Qdisc *sch, unsigned int idx,
if (idx == q->tail)
choke_zap_tail_holes(q);
- --sch->q.qlen;
+ qdisc_qlen_dec(sch);
qdisc_qstats_backlog_dec(sch, skb);
qdisc_tree_reduce_backlog(sch, 1, qdisc_pkt_len(skb));
qdisc_drop(skb, sch, to_free);
@@ -271,7 +271,7 @@ static int choke_enqueue(struct sk_buff *skb, struct Qdisc *sch,
if (sch->q.qlen < q->limit) {
q->tab[q->tail] = skb;
q->tail = (q->tail + 1) & q->tab_mask;
- ++sch->q.qlen;
+ qdisc_qlen_inc(sch);
qdisc_qstats_backlog_inc(sch, skb);
return NET_XMIT_SUCCESS;
}
@@ -298,7 +298,7 @@ static struct sk_buff *choke_dequeue(struct Qdisc *sch)
skb = q->tab[q->head];
q->tab[q->head] = NULL;
choke_zap_head_holes(q);
- --sch->q.qlen;
+ qdisc_qlen_dec(sch);
qdisc_qstats_backlog_dec(sch, skb);
qdisc_bstats_update(sch, skb);
@@ -396,7 +396,7 @@ static int choke_change(struct Qdisc *sch, struct nlattr *opt,
}
dropped += qdisc_pkt_len(skb);
qdisc_qstats_backlog_dec(sch, skb);
- --sch->q.qlen;
+ qdisc_qlen_dec(sch);
rtnl_qdisc_drop(skb, sch);
}
qdisc_tree_reduce_backlog(sch, oqlen - sch->q.qlen, dropped);
diff --git a/net/sched/sch_drr.c b/net/sched/sch_drr.c
index 01335a49e091444747635ee8bc7e22ded504d571..925fa0cfd730ce72e45e8983ba02eb913afb1235 100644
--- a/net/sched/sch_drr.c
+++ b/net/sched/sch_drr.c
@@ -366,7 +366,7 @@ static int drr_enqueue(struct sk_buff *skb, struct Qdisc *sch,
}
sch->qstats.backlog += len;
- sch->q.qlen++;
+ qdisc_qlen_inc(sch);
return err;
}
@@ -399,7 +399,7 @@ static struct sk_buff *drr_dequeue(struct Qdisc *sch)
bstats_update(&cl->bstats, skb);
qdisc_bstats_update(sch, skb);
qdisc_qstats_backlog_dec(sch, skb);
- sch->q.qlen--;
+ qdisc_qlen_dec(sch);
return skb;
}
diff --git a/net/sched/sch_dualpi2.c b/net/sched/sch_dualpi2.c
index 241e6a46bd00e39820f5ba9dc71d559f205a4de0..c6416f09dddd8f170b92e50fb89377a15773c5bf 100644
--- a/net/sched/sch_dualpi2.c
+++ b/net/sched/sch_dualpi2.c
@@ -415,7 +415,7 @@ static int dualpi2_enqueue_skb(struct sk_buff *skb, struct Qdisc *sch,
dualpi2_skb_cb(skb)->apply_step = skb_apply_step(skb, q);
/* Keep the overall qdisc stats consistent */
- ++sch->q.qlen;
+ qdisc_qlen_inc(sch);
qdisc_qstats_backlog_inc(sch, skb);
++q->packets_in_l;
if (!q->l_head_ts)
@@ -530,7 +530,7 @@ static struct sk_buff *dequeue_packet(struct Qdisc *sch,
qdisc_qstats_backlog_dec(q->l_queue, skb);
/* Keep the global queue size consistent */
- --sch->q.qlen;
+ qdisc_qlen_dec(sch);
q->memory_used -= skb->truesize;
} else if (c_len) {
skb = __qdisc_dequeue_head(&sch->q);
@@ -888,7 +888,7 @@ static int dualpi2_change(struct Qdisc *sch, struct nlattr *opt,
* l_queue on enqueue; qdisc_dequeue_internal()
* handled l_queue, so we further account for sch.
*/
- --sch->q.qlen;
+ qdisc_qlen_dec(sch);
qdisc_qstats_backlog_dec(sch, skb);
q->memory_used -= skb->truesize;
rtnl_qdisc_drop(skb, q->l_queue);
diff --git a/net/sched/sch_etf.c b/net/sched/sch_etf.c
index c74d778c32a1eda639650df4d1d103c5338f14e6..ada87a81da6ac4c20e036b5391eb4efe9795ab91 100644
--- a/net/sched/sch_etf.c
+++ b/net/sched/sch_etf.c
@@ -189,7 +189,7 @@ static int etf_enqueue_timesortedlist(struct sk_buff *nskb, struct Qdisc *sch,
rb_insert_color_cached(&nskb->rbnode, &q->head, leftmost);
qdisc_qstats_backlog_inc(sch, nskb);
- sch->q.qlen++;
+ qdisc_qlen_inc(sch);
/* Now we may need to re-arm the qdisc watchdog for the next packet. */
reset_watchdog(sch);
@@ -222,7 +222,7 @@ static void timesortedlist_drop(struct Qdisc *sch, struct sk_buff *skb,
qdisc_qstats_backlog_dec(sch, skb);
qdisc_drop(skb, sch, &to_free);
qdisc_qstats_overlimit(sch);
- sch->q.qlen--;
+ qdisc_qlen_dec(sch);
}
kfree_skb_list(to_free);
@@ -247,7 +247,7 @@ static void timesortedlist_remove(struct Qdisc *sch, struct sk_buff *skb)
q->last = skb->tstamp;
- sch->q.qlen--;
+ qdisc_qlen_dec(sch);
}
static struct sk_buff *etf_dequeue_timesortedlist(struct Qdisc *sch)
@@ -426,7 +426,7 @@ static void timesortedlist_clear(struct Qdisc *sch)
rb_erase_cached(&skb->rbnode, &q->head);
rtnl_kfree_skbs(skb, skb);
- sch->q.qlen--;
+ qdisc_qlen_dec(sch);
}
}
diff --git a/net/sched/sch_ets.c b/net/sched/sch_ets.c
index a4b07b661b7756a675d22c0f84f8f0a713cdb7eb..c817e0a6c14653a35f5ebb9de1a5ccc44d1a2f98 100644
--- a/net/sched/sch_ets.c
+++ b/net/sched/sch_ets.c
@@ -449,7 +449,7 @@ static int ets_qdisc_enqueue(struct sk_buff *skb, struct Qdisc *sch,
}
sch->qstats.backlog += len;
- sch->q.qlen++;
+ qdisc_qlen_inc(sch);
return err;
}
@@ -458,7 +458,7 @@ ets_qdisc_dequeue_skb(struct Qdisc *sch, struct sk_buff *skb)
{
qdisc_bstats_update(sch, skb);
qdisc_qstats_backlog_dec(sch, skb);
- sch->q.qlen--;
+ qdisc_qlen_dec(sch);
return skb;
}
diff --git a/net/sched/sch_fq.c b/net/sched/sch_fq.c
index f2edcf872981fd8181dfb97a3bc665fd4a869115..1e34ac136b15cf24742f2810d201420cf763021a 100644
--- a/net/sched/sch_fq.c
+++ b/net/sched/sch_fq.c
@@ -497,7 +497,7 @@ static void fq_dequeue_skb(struct Qdisc *sch, struct fq_flow *flow,
fq_erase_head(sch, flow, skb);
skb_mark_not_on_list(skb);
qdisc_qstats_backlog_dec(sch, skb);
- sch->q.qlen--;
+ qdisc_qlen_dec(sch);
qdisc_bstats_update(sch, skb);
}
@@ -597,7 +597,7 @@ static int fq_enqueue(struct sk_buff *skb, struct Qdisc *sch,
flow_queue_add(f, skb);
qdisc_qstats_backlog_inc(sch, skb);
- sch->q.qlen++;
+ qdisc_qlen_inc(sch);
return NET_XMIT_SUCCESS;
}
@@ -801,7 +801,7 @@ static void fq_reset(struct Qdisc *sch)
struct fq_flow *f;
unsigned int idx;
- sch->q.qlen = 0;
+ WRITE_ONCE(sch->q.qlen, 0);
sch->qstats.backlog = 0;
fq_flow_purge(&q->internal);
diff --git a/net/sched/sch_fq_codel.c b/net/sched/sch_fq_codel.c
index ed42ce62a17f1de9516af90533d16b65657f86cd..cae8483fbb0c4f62f28dba4c15b4426485390bcf 100644
--- a/net/sched/sch_fq_codel.c
+++ b/net/sched/sch_fq_codel.c
@@ -178,7 +178,7 @@ static unsigned int fq_codel_drop(struct Qdisc *sch, unsigned int max_packets,
q->memory_usage -= mem;
__qdisc_qstats_drop(sch, i);
sch->qstats.backlog -= len;
- sch->q.qlen -= i;
+ WRITE_ONCE(sch->q.qlen, sch->q.qlen - i);
return idx;
}
@@ -215,7 +215,8 @@ static int fq_codel_enqueue(struct sk_buff *skb, struct Qdisc *sch,
get_codel_cb(skb)->mem_usage = skb->truesize;
q->memory_usage += get_codel_cb(skb)->mem_usage;
memory_limited = q->memory_usage > q->memory_limit;
- if (++sch->q.qlen <= sch->limit && !memory_limited)
+ qdisc_qlen_inc(sch);
+ if (sch->q.qlen <= sch->limit && !memory_limited)
return NET_XMIT_SUCCESS;
prev_backlog = sch->qstats.backlog;
@@ -266,7 +267,7 @@ static struct sk_buff *dequeue_func(struct codel_vars *vars, void *ctx)
WRITE_ONCE(q->backlogs[flow - q->flows],
q->backlogs[flow - q->flows] - qdisc_pkt_len(skb));
q->memory_usage -= get_codel_cb(skb)->mem_usage;
- sch->q.qlen--;
+ qdisc_qlen_dec(sch);
sch->qstats.backlog -= qdisc_pkt_len(skb);
}
return skb;
diff --git a/net/sched/sch_fq_pie.c b/net/sched/sch_fq_pie.c
index 7becbf5362b3165bac4517f32887386b01301612..0a4eca4ab086ebebbdba17784f12370c301bbac6 100644
--- a/net/sched/sch_fq_pie.c
+++ b/net/sched/sch_fq_pie.c
@@ -185,7 +185,7 @@ static int fq_pie_qdisc_enqueue(struct sk_buff *skb, struct Qdisc *sch,
q->stats.packets_in++;
q->memory_usage += skb->truesize;
sch->qstats.backlog += pkt_len;
- sch->q.qlen++;
+ qdisc_qlen_inc(sch);
flow_queue_add(sel_flow, skb);
if (list_empty(&sel_flow->flowchain)) {
list_add_tail(&sel_flow->flowchain, &q->new_flows);
@@ -263,7 +263,7 @@ static struct sk_buff *fq_pie_qdisc_dequeue(struct Qdisc *sch)
skb = dequeue_head(flow);
pkt_len = qdisc_pkt_len(skb);
sch->qstats.backlog -= pkt_len;
- sch->q.qlen--;
+ qdisc_qlen_dec(sch);
qdisc_bstats_update(sch, skb);
}
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index a93321db8fd75d30c61e146c290bbc139c37c913..e35d9c58850fa9d82471d64daedfdf8c47e92b68 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -118,7 +118,7 @@ static inline struct sk_buff *__skb_dequeue_bad_txq(struct Qdisc *q)
qdisc_qstats_cpu_qlen_dec(q);
} else {
qdisc_qstats_backlog_dec(q, skb);
- q->q.qlen--;
+ qdisc_qlen_dec(q);
}
} else {
skb = SKB_XOFF_MAGIC;
@@ -159,7 +159,7 @@ static inline void qdisc_enqueue_skb_bad_txq(struct Qdisc *q,
qdisc_qstats_cpu_qlen_inc(q);
} else {
qdisc_qstats_backlog_inc(q, skb);
- q->q.qlen++;
+ qdisc_qlen_inc(q);
}
if (lock)
@@ -188,7 +188,7 @@ static inline void dev_requeue_skb(struct sk_buff *skb, struct Qdisc *q)
} else {
q->qstats.requeues++;
qdisc_qstats_backlog_inc(q, skb);
- q->q.qlen++;
+ qdisc_qlen_inc(q);
}
skb = next;
@@ -294,7 +294,7 @@ static struct sk_buff *dequeue_skb(struct Qdisc *q, bool *validate,
qdisc_qstats_cpu_qlen_dec(q);
} else {
qdisc_qstats_backlog_dec(q, skb);
- q->q.qlen--;
+ qdisc_qlen_dec(q);
}
} else {
skb = NULL;
@@ -1059,7 +1059,7 @@ void qdisc_reset(struct Qdisc *qdisc)
__skb_queue_purge(&qdisc->gso_skb);
__skb_queue_purge(&qdisc->skb_bad_txq);
- qdisc->q.qlen = 0;
+ WRITE_ONCE(qdisc->q.qlen, 0);
qdisc->qstats.backlog = 0;
}
EXPORT_SYMBOL(qdisc_reset);
diff --git a/net/sched/sch_hfsc.c b/net/sched/sch_hfsc.c
index 83b2ca2e37fc82cfebf089e6c0e36f18af939887..e71a565100edf60881ca7542faa408c5bb1a0984 100644
--- a/net/sched/sch_hfsc.c
+++ b/net/sched/sch_hfsc.c
@@ -1561,7 +1561,7 @@ hfsc_enqueue(struct sk_buff *skb, struct Qdisc *sch, struct sk_buff **to_free)
}
sch->qstats.backlog += len;
- sch->q.qlen++;
+ qdisc_qlen_inc(sch);
if (first && !cl_in_el_or_vttree(cl)) {
if (cl->cl_flags & HFSC_RSC)
@@ -1650,7 +1650,7 @@ hfsc_dequeue(struct Qdisc *sch)
qdisc_bstats_update(sch, skb);
qdisc_qstats_backlog_dec(sch, skb);
- sch->q.qlen--;
+ qdisc_qlen_dec(sch);
return skb;
}
diff --git a/net/sched/sch_hhf.c b/net/sched/sch_hhf.c
index 96021f52d835b56339509565ca03fe796593e231..1e25b75daae2e5de31bd212dfa1f6d7aea927174 100644
--- a/net/sched/sch_hhf.c
+++ b/net/sched/sch_hhf.c
@@ -360,7 +360,7 @@ static unsigned int hhf_drop(struct Qdisc *sch, struct sk_buff **to_free)
if (bucket->head) {
struct sk_buff *skb = dequeue_head(bucket);
- sch->q.qlen--;
+ qdisc_qlen_dec(sch);
qdisc_qstats_backlog_dec(sch, skb);
qdisc_drop(skb, sch, to_free);
}
@@ -400,7 +400,8 @@ static int hhf_enqueue(struct sk_buff *skb, struct Qdisc *sch,
}
bucket->deficit = weight * q->quantum;
}
- if (++sch->q.qlen <= sch->limit)
+ qdisc_qlen_inc(sch);
+ if (sch->q.qlen <= sch->limit)
return NET_XMIT_SUCCESS;
prev_backlog = sch->qstats.backlog;
@@ -443,7 +444,7 @@ static struct sk_buff *hhf_dequeue(struct Qdisc *sch)
if (bucket->head) {
skb = dequeue_head(bucket);
- sch->q.qlen--;
+ qdisc_qlen_dec(sch);
qdisc_qstats_backlog_dec(sch, skb);
}
diff --git a/net/sched/sch_htb.c b/net/sched/sch_htb.c
index eb12381795ce1bb0f3b8c5f502e16ad64c4408c8..c22ccd8eae8c73323ccdf425e62857b3b851d74e 100644
--- a/net/sched/sch_htb.c
+++ b/net/sched/sch_htb.c
@@ -651,7 +651,7 @@ static int htb_enqueue(struct sk_buff *skb, struct Qdisc *sch,
}
sch->qstats.backlog += len;
- sch->q.qlen++;
+ qdisc_qlen_inc(sch);
return NET_XMIT_SUCCESS;
}
@@ -951,7 +951,7 @@ static struct sk_buff *htb_dequeue(struct Qdisc *sch)
ok:
qdisc_bstats_update(sch, skb);
qdisc_qstats_backlog_dec(sch, skb);
- sch->q.qlen--;
+ qdisc_qlen_dec(sch);
return skb;
}
diff --git a/net/sched/sch_mq.c b/net/sched/sch_mq.c
index a0133a7b9d3b09a0d2a6064234c8fdef60dbf955..ec8c91d3fde04e59daec2aecdb14d6bf50715e15 100644
--- a/net/sched/sch_mq.c
+++ b/net/sched/sch_mq.c
@@ -143,10 +143,10 @@ EXPORT_SYMBOL_NS_GPL(mq_attach, "NET_SCHED_INTERNAL");
void mq_dump_common(struct Qdisc *sch, struct sk_buff *skb)
{
struct net_device *dev = qdisc_dev(sch);
+ unsigned int qlen = 0;
struct Qdisc *qdisc;
unsigned int ntx;
- sch->q.qlen = 0;
gnet_stats_basic_sync_init(&sch->bstats);
memset(&sch->qstats, 0, sizeof(sch->qstats));
@@ -163,10 +163,11 @@ void mq_dump_common(struct Qdisc *sch, struct sk_buff *skb)
&qdisc->bstats, false);
gnet_stats_add_queue(&sch->qstats, qdisc->cpu_qstats,
&qdisc->qstats);
- sch->q.qlen += qdisc_qlen(qdisc);
+ qlen += qdisc_qlen(qdisc);
spin_unlock_bh(qdisc_lock(qdisc));
}
+ WRITE_ONCE(sch->q.qlen, qlen);
}
EXPORT_SYMBOL_NS_GPL(mq_dump_common, "NET_SCHED_INTERNAL");
diff --git a/net/sched/sch_mqprio.c b/net/sched/sch_mqprio.c
index 002add5ce9e0ab04a6260495d1bec02983c2a204..91a92992cd24ab6c30bf7db2288c08cd493c7bc3 100644
--- a/net/sched/sch_mqprio.c
+++ b/net/sched/sch_mqprio.c
@@ -555,10 +555,11 @@ static int mqprio_dump(struct Qdisc *sch, struct sk_buff *skb)
struct mqprio_sched *priv = qdisc_priv(sch);
struct nlattr *nla = (struct nlattr *)skb_tail_pointer(skb);
struct tc_mqprio_qopt opt = { 0 };
+ unsigned int qlen = 0;
struct Qdisc *qdisc;
unsigned int ntx;
- sch->q.qlen = 0;
+ qlen = 0;
gnet_stats_basic_sync_init(&sch->bstats);
memset(&sch->qstats, 0, sizeof(sch->qstats));
@@ -575,10 +576,11 @@ static int mqprio_dump(struct Qdisc *sch, struct sk_buff *skb)
&qdisc->bstats, false);
gnet_stats_add_queue(&sch->qstats, qdisc->cpu_qstats,
&qdisc->qstats);
- sch->q.qlen += qdisc_qlen(qdisc);
+ qlen += qdisc_qlen(qdisc);
spin_unlock_bh(qdisc_lock(qdisc));
}
+ WRITE_ONCE(sch->q.qlen, qlen);
mqprio_qopt_reconstruct(dev, &opt);
opt.hw = priv->hw_offload;
@@ -663,12 +665,12 @@ static int mqprio_dump_class_stats(struct Qdisc *sch, unsigned long cl,
__acquires(d->lock)
{
if (cl >= TC_H_MIN_PRIORITY) {
- int i;
- __u32 qlen;
- struct gnet_stats_queue qstats = {0};
- struct gnet_stats_basic_sync bstats;
struct net_device *dev = qdisc_dev(sch);
struct netdev_tc_txq tc = dev->tc_to_txq[cl & TC_BITMASK];
+ struct gnet_stats_queue qstats = {0};
+ struct gnet_stats_basic_sync bstats;
+ u32 qlen = 0;
+ int i;
gnet_stats_basic_sync_init(&bstats);
/* Drop lock here it will be reclaimed before touching
@@ -689,11 +691,11 @@ static int mqprio_dump_class_stats(struct Qdisc *sch, unsigned long cl,
&qdisc->bstats, false);
gnet_stats_add_queue(&qstats, qdisc->cpu_qstats,
&qdisc->qstats);
- sch->q.qlen += qdisc_qlen(qdisc);
+ qlen += qdisc_qlen(qdisc);
spin_unlock_bh(qdisc_lock(qdisc));
}
- qlen = qdisc_qlen(sch) + qstats.qlen;
+ qlen = qlen + qstats.qlen;
/* Reclaim root sleeping lock before completing stats */
if (d->lock)
diff --git a/net/sched/sch_multiq.c b/net/sched/sch_multiq.c
index 9f822fee113df6562ddac89092357434547a4599..4e465d11e3d75e36b875b66f8c8087c2e15cdad9 100644
--- a/net/sched/sch_multiq.c
+++ b/net/sched/sch_multiq.c
@@ -76,7 +76,7 @@ multiq_enqueue(struct sk_buff *skb, struct Qdisc *sch,
ret = qdisc_enqueue(skb, qdisc, to_free);
if (ret == NET_XMIT_SUCCESS) {
- sch->q.qlen++;
+ qdisc_qlen_inc(sch);
return NET_XMIT_SUCCESS;
}
if (net_xmit_drop_count(ret))
@@ -106,7 +106,7 @@ static struct sk_buff *multiq_dequeue(struct Qdisc *sch)
skb = qdisc->dequeue(qdisc);
if (skb) {
qdisc_bstats_update(sch, skb);
- sch->q.qlen--;
+ qdisc_qlen_dec(sch);
return skb;
}
}
diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
index bc18e1976b6e07f81f975ceeb35c8b1a5125e8df..57b12cbca45355c69780614fa87aaf37255d64cc 100644
--- a/net/sched/sch_netem.c
+++ b/net/sched/sch_netem.c
@@ -416,7 +416,7 @@ static void tfifo_enqueue(struct sk_buff *nskb, struct Qdisc *sch)
rb_insert_color(&nskb->rbnode, &q->t_root);
}
q->t_len++;
- sch->q.qlen++;
+ qdisc_qlen_inc(sch);
}
/* netem can't properly corrupt a megapacket (like we get from GSO), so instead
@@ -751,19 +751,19 @@ static struct sk_buff *netem_dequeue(struct Qdisc *sch)
if (net_xmit_drop_count(err))
qdisc_qstats_drop(sch);
sch->qstats.backlog -= pkt_len;
- sch->q.qlen--;
+ qdisc_qlen_dec(sch);
qdisc_tree_reduce_backlog(sch, 1, pkt_len);
}
goto tfifo_dequeue;
}
- sch->q.qlen--;
+ qdisc_qlen_dec(sch);
goto deliver;
}
if (q->qdisc) {
skb = q->qdisc->ops->dequeue(q->qdisc);
if (skb) {
- sch->q.qlen--;
+ qdisc_qlen_dec(sch);
goto deliver;
}
}
@@ -776,7 +776,7 @@ static struct sk_buff *netem_dequeue(struct Qdisc *sch)
if (q->qdisc) {
skb = q->qdisc->ops->dequeue(q->qdisc);
if (skb) {
- sch->q.qlen--;
+ qdisc_qlen_dec(sch);
goto deliver;
}
}
diff --git a/net/sched/sch_prio.c b/net/sched/sch_prio.c
index 9e2b9a490db23d858b27b7fc073b05a06535b05e..fe42ae3d6b696b2fc47f4d397af32e950eeec194 100644
--- a/net/sched/sch_prio.c
+++ b/net/sched/sch_prio.c
@@ -86,7 +86,7 @@ prio_enqueue(struct sk_buff *skb, struct Qdisc *sch, struct sk_buff **to_free)
ret = qdisc_enqueue(skb, qdisc, to_free);
if (ret == NET_XMIT_SUCCESS) {
sch->qstats.backlog += len;
- sch->q.qlen++;
+ qdisc_qlen_inc(sch);
return NET_XMIT_SUCCESS;
}
if (net_xmit_drop_count(ret))
@@ -119,7 +119,7 @@ static struct sk_buff *prio_dequeue(struct Qdisc *sch)
if (skb) {
qdisc_bstats_update(sch, skb);
qdisc_qstats_backlog_dec(sch, skb);
- sch->q.qlen--;
+ qdisc_qlen_dec(sch);
return skb;
}
}
diff --git a/net/sched/sch_qfq.c b/net/sched/sch_qfq.c
index 699e45873f86145e96abd0d9ca77a6d0ff763b1b..195c434aae5f7e03d1a1238ed73bb64b3f04e105 100644
--- a/net/sched/sch_qfq.c
+++ b/net/sched/sch_qfq.c
@@ -1152,12 +1152,12 @@ static struct sk_buff *qfq_dequeue(struct Qdisc *sch)
if (!skb)
return NULL;
- sch->q.qlen--;
+ qdisc_qlen_dec(sch);
skb = agg_dequeue(in_serv_agg, cl, len);
if (!skb) {
- sch->q.qlen++;
+ qdisc_qlen_inc(sch);
return NULL;
}
@@ -1265,7 +1265,7 @@ static int qfq_enqueue(struct sk_buff *skb, struct Qdisc *sch,
_bstats_update(&cl->bstats, len, gso_segs);
sch->qstats.backlog += len;
- ++sch->q.qlen;
+ qdisc_qlen_inc(sch);
agg = cl->agg;
/* if the class is active, then done here */
diff --git a/net/sched/sch_red.c b/net/sched/sch_red.c
index 4d0e44a2e7c664e1599699d21ef482529ee2b119..0719590dfd73b64d21f71ab00621f64ed0eefc89 100644
--- a/net/sched/sch_red.c
+++ b/net/sched/sch_red.c
@@ -139,7 +139,7 @@ static int red_enqueue(struct sk_buff *skb, struct Qdisc *sch,
ret = qdisc_enqueue(skb, child, to_free);
if (likely(ret == NET_XMIT_SUCCESS)) {
sch->qstats.backlog += len;
- sch->q.qlen++;
+ qdisc_qlen_inc(sch);
} else if (net_xmit_drop_count(ret)) {
WRITE_ONCE(q->stats.pdrop,
q->stats.pdrop + 1);
@@ -166,7 +166,7 @@ static struct sk_buff *red_dequeue(struct Qdisc *sch)
if (skb) {
qdisc_bstats_update(sch, skb);
qdisc_qstats_backlog_dec(sch, skb);
- sch->q.qlen--;
+ qdisc_qlen_dec(sch);
} else {
if (!red_is_idling(&q->vars))
red_start_of_idle_period(&q->vars);
diff --git a/net/sched/sch_sfb.c b/net/sched/sch_sfb.c
index d3ee8e5479b35e38b71b0979e78aeadb40eb1655..efd9251c3add317f3b817f08c732fca0c347bf35 100644
--- a/net/sched/sch_sfb.c
+++ b/net/sched/sch_sfb.c
@@ -416,7 +416,7 @@ static int sfb_enqueue(struct sk_buff *skb, struct Qdisc *sch,
ret = qdisc_enqueue(skb, child, to_free);
if (likely(ret == NET_XMIT_SUCCESS)) {
sch->qstats.backlog += len;
- sch->q.qlen++;
+ qdisc_qlen_inc(sch);
increment_qlen(&cb, q);
} else if (net_xmit_drop_count(ret)) {
WRITE_ONCE(q->stats.childdrop,
@@ -446,7 +446,7 @@ static struct sk_buff *sfb_dequeue(struct Qdisc *sch)
if (skb) {
qdisc_bstats_update(sch, skb);
qdisc_qstats_backlog_dec(sch, skb);
- sch->q.qlen--;
+ qdisc_qlen_dec(sch);
decrement_qlen(skb, q);
}
diff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c
index f39822babf88bee9d52cac9f39637d38ec36994f..f9807ee2cf6c72101ce39c4f43bf32c03c0a5f62 100644
--- a/net/sched/sch_sfq.c
+++ b/net/sched/sch_sfq.c
@@ -302,7 +302,7 @@ static unsigned int sfq_drop(struct Qdisc *sch, struct sk_buff **to_free)
len = qdisc_pkt_len(skb);
WRITE_ONCE(slot->backlog, slot->backlog - len);
sfq_dec(q, x);
- sch->q.qlen--;
+ qdisc_qlen_dec(sch);
qdisc_qstats_backlog_dec(sch, skb);
qdisc_drop_reason(skb, sch, to_free, QDISC_DROP_OVERLIMIT);
return len;
@@ -456,7 +456,8 @@ sfq_enqueue(struct sk_buff *skb, struct Qdisc *sch, struct sk_buff **to_free)
/* We could use a bigger initial quantum for new flows */
WRITE_ONCE(slot->allot, q->quantum);
}
- if (++sch->q.qlen <= q->limit)
+ qdisc_qlen_inc(sch);
+ if (sch->q.qlen <= q->limit)
return NET_XMIT_SUCCESS;
qlen = slot->qlen;
@@ -497,7 +498,7 @@ sfq_dequeue(struct Qdisc *sch)
skb = slot_dequeue_head(slot);
sfq_dec(q, a);
qdisc_bstats_update(sch, skb);
- sch->q.qlen--;
+ qdisc_qlen_dec(sch);
qdisc_qstats_backlog_dec(sch, skb);
WRITE_ONCE(slot->backlog, slot->backlog - qdisc_pkt_len(skb));
/* Is the slot empty? */
@@ -596,7 +597,7 @@ static void sfq_rehash(struct Qdisc *sch)
WRITE_ONCE(slot->allot, q->quantum);
}
}
- sch->q.qlen -= dropped;
+ WRITE_ONCE(sch->q.qlen, sch->q.qlen - dropped);
qdisc_tree_reduce_backlog(sch, dropped, drop_len);
}
diff --git a/net/sched/sch_skbprio.c b/net/sched/sch_skbprio.c
index f485f62ab721ab8cde21230c60514708fb479982..52abfb4015a36408046d96b349497419ab5dacf8 100644
--- a/net/sched/sch_skbprio.c
+++ b/net/sched/sch_skbprio.c
@@ -93,7 +93,7 @@ static int skbprio_enqueue(struct sk_buff *skb, struct Qdisc *sch,
if (prio < q->lowest_prio)
q->lowest_prio = prio;
- sch->q.qlen++;
+ qdisc_qlen_inc(sch);
return NET_XMIT_SUCCESS;
}
@@ -145,7 +145,7 @@ static struct sk_buff *skbprio_dequeue(struct Qdisc *sch)
if (unlikely(!skb))
return NULL;
- sch->q.qlen--;
+ qdisc_qlen_dec(sch);
qdisc_qstats_backlog_dec(sch, skb);
qdisc_bstats_update(sch, skb);
diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
index 71b690e1974dad8fbab7e12998e03f86a0847a98..d6b981e5df11cba060c9c92212479c0d5a058f5b 100644
--- a/net/sched/sch_taprio.c
+++ b/net/sched/sch_taprio.c
@@ -574,7 +574,7 @@ static int taprio_enqueue_one(struct sk_buff *skb, struct Qdisc *sch,
}
qdisc_qstats_backlog_inc(sch, skb);
- sch->q.qlen++;
+ qdisc_qlen_inc(sch);
return qdisc_enqueue(skb, child, to_free);
}
@@ -755,7 +755,7 @@ static struct sk_buff *taprio_dequeue_from_txq(struct Qdisc *sch, int txq,
qdisc_bstats_update(sch, skb);
qdisc_qstats_backlog_dec(sch, skb);
- sch->q.qlen--;
+ qdisc_qlen_dec(sch);
return skb;
}
diff --git a/net/sched/sch_tbf.c b/net/sched/sch_tbf.c
index f2340164f579a25431979e12ec3d23ab828edd16..25edf11a7d671fe63878b0995998c5920b86ef74 100644
--- a/net/sched/sch_tbf.c
+++ b/net/sched/sch_tbf.c
@@ -231,7 +231,7 @@ static int tbf_segment(struct sk_buff *skb, struct Qdisc *sch,
len += seg_len;
}
}
- sch->q.qlen += nb;
+ WRITE_ONCE(sch->q.qlen, sch->q.qlen + nb);
sch->qstats.backlog += len;
if (nb > 0) {
qdisc_tree_reduce_backlog(sch, 1 - nb, prev_len - len);
@@ -264,7 +264,7 @@ static int tbf_enqueue(struct sk_buff *skb, struct Qdisc *sch,
}
sch->qstats.backlog += len;
- sch->q.qlen++;
+ qdisc_qlen_inc(sch);
return NET_XMIT_SUCCESS;
}
@@ -309,7 +309,7 @@ static struct sk_buff *tbf_dequeue(struct Qdisc *sch)
q->tokens = toks;
q->ptokens = ptoks;
qdisc_qstats_backlog_dec(sch, skb);
- sch->q.qlen--;
+ qdisc_qlen_dec(sch);
qdisc_bstats_update(sch, skb);
return skb;
}
diff --git a/net/sched/sch_teql.c b/net/sched/sch_teql.c
index ec4039a201a2c2c502bc649fa5f6a0e4feee8fd5..bd10da46f5ddbc53f914648066dab526c8064e55 100644
--- a/net/sched/sch_teql.c
+++ b/net/sched/sch_teql.c
@@ -107,7 +107,7 @@ teql_dequeue(struct Qdisc *sch)
} else {
qdisc_bstats_update(sch, skb);
}
- sch->q.qlen = dat->q.qlen + q->q.qlen;
+ WRITE_ONCE(sch->q.qlen, dat->q.qlen + q->q.qlen);
return skb;
}
--
2.54.0.563.g4f69b47b94-goog
* [PATCH net-next 3/8] net/sched: annotate data-races around sch->qstats.backlog
2026-05-07 22:19 [PATCH net-next 0/8] net/sched: prepare lockless qdisc dumps Eric Dumazet
2026-05-07 22:19 ` [PATCH net-next 1/8] net/sched: add READ_ONCE() in gnet_stats_add_queue[_cpu] Eric Dumazet
2026-05-07 22:19 ` [PATCH net-next 2/8] net/sched: add qdisc_qlen_inc() and qdisc_qlen_dec() Eric Dumazet
@ 2026-05-07 22:19 ` Eric Dumazet
2026-05-07 22:19 ` [PATCH net-next 4/8] net/sched: add qdisc_qlen_lockless() helper Eric Dumazet
` (4 subsequent siblings)
7 siblings, 0 replies; 12+ messages in thread
From: Eric Dumazet @ 2026-05-07 22:19 UTC
To: David S . Miller, Jakub Kicinski, Paolo Abeni
Cc: Simon Horman, Jamal Hadi Salim, Jiri Pirko, netdev, eric.dumazet,
Eric Dumazet
Add qstats_backlog_sub() and qstats_backlog_add() helpers
and use them instead of open-coded updates.
These helpers use WRITE_ONCE() to prevent store-tearing.
Also use WRITE_ONCE() in fq_reset() and qdisc_reset()
when sch->qstats.backlog is cleared.
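As a sketch with a hypothetical caller (toy_*, not from this patch),
an open-coded update and its annotated replacement:

        static void toy_charge(struct Qdisc *sch, unsigned int len)
        {
                /* Before: sch->qstats.backlog += len, a plain store the
                 * compiler may tear. After: the helper wraps the update
                 * in WRITE_ONCE() so lockless readers see untorn values.
                 */
                qstats_backlog_add(sch, len);
        }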
Signed-off-by: Eric Dumazet <edumazet@google.com>
---
include/net/sch_generic.h | 16 +++++++++++++---
net/sched/sch_api.c | 2 +-
net/sched/sch_cake.c | 7 +++----
net/sched/sch_cbs.c | 2 +-
net/sched/sch_codel.c | 2 +-
net/sched/sch_drr.c | 2 +-
net/sched/sch_ets.c | 2 +-
net/sched/sch_fq.c | 2 +-
net/sched/sch_fq_codel.c | 4 ++--
net/sched/sch_fq_pie.c | 4 ++--
net/sched/sch_generic.c | 2 +-
net/sched/sch_gred.c | 2 +-
net/sched/sch_hfsc.c | 2 +-
net/sched/sch_htb.c | 2 +-
net/sched/sch_netem.c | 2 +-
net/sched/sch_prio.c | 2 +-
net/sched/sch_qfq.c | 2 +-
net/sched/sch_red.c | 2 +-
net/sched/sch_sfb.c | 4 ++--
net/sched/sch_sfq.c | 2 +-
net/sched/sch_tbf.c | 4 ++--
21 files changed, 39 insertions(+), 30 deletions(-)
diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index 3893fbb29960d9b32042616b747168b689b355fd..d147549169a4d43c80684db2e1815a8a0d6596c6 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -965,10 +965,15 @@ static inline void qdisc_bstats_update(struct Qdisc *sch,
bstats_update(&sch->bstats, skb);
}
+static inline void qstats_backlog_sub(struct Qdisc *sch, u32 val)
+{
+ WRITE_ONCE(sch->qstats.backlog, sch->qstats.backlog - val);
+}
+
static inline void qdisc_qstats_backlog_dec(struct Qdisc *sch,
const struct sk_buff *skb)
{
- sch->qstats.backlog -= qdisc_pkt_len(skb);
+ qstats_backlog_sub(sch, qdisc_pkt_len(skb));
}
static inline void qdisc_qstats_cpu_backlog_dec(struct Qdisc *sch,
@@ -977,10 +982,15 @@ static inline void qdisc_qstats_cpu_backlog_dec(struct Qdisc *sch,
this_cpu_sub(sch->cpu_qstats->backlog, qdisc_pkt_len(skb));
}
+static inline void qstats_backlog_add(struct Qdisc *sch, u32 val)
+{
+ WRITE_ONCE(sch->qstats.backlog, sch->qstats.backlog + val);
+}
+
static inline void qdisc_qstats_backlog_inc(struct Qdisc *sch,
const struct sk_buff *skb)
{
- sch->qstats.backlog += qdisc_pkt_len(skb);
+ qstats_backlog_add(sch, qdisc_pkt_len(skb));
}
static inline void qdisc_qstats_cpu_backlog_inc(struct Qdisc *sch,
@@ -1304,7 +1314,7 @@ static inline void qdisc_update_stats_at_enqueue(struct Qdisc *sch,
qdisc_qstats_cpu_qlen_inc(sch);
this_cpu_add(sch->cpu_qstats->backlog, pkt_len);
} else {
- sch->qstats.backlog += pkt_len;
+ qstats_backlog_add(sch, pkt_len);
qdisc_qlen_inc(sch);
}
}
diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
index cefa2d8ac5ec00c78b08b520a11672120d10cdef..3c779e5098efd6602ec4efb0abadb8dac21c4b44 100644
--- a/net/sched/sch_api.c
+++ b/net/sched/sch_api.c
@@ -806,7 +806,7 @@ void qdisc_tree_reduce_backlog(struct Qdisc *sch, int n, int len)
cops->qlen_notify(sch, cl);
}
WRITE_ONCE(sch->q.qlen, sch->q.qlen - n);
- sch->qstats.backlog -= len;
+ qstats_backlog_sub(sch, len);
__qdisc_qstats_drop(sch, drops);
}
rcu_read_unlock();
diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
index 7ab75a52f7d1a46d87fc8f7c099c749a5331ccf6..7d59f52a4617b7ca3adaf040457ca8d30aa44be7 100644
--- a/net/sched/sch_cake.c
+++ b/net/sched/sch_cake.c
@@ -1603,7 +1603,6 @@ static unsigned int cake_drop(struct Qdisc *sch, struct sk_buff **to_free)
q->buffer_used -= skb->truesize;
WRITE_ONCE(b->tin_backlog, b->tin_backlog - len);
WRITE_ONCE(b->backlogs[idx], b->backlogs[idx] - len);
- sch->qstats.backlog -= len;
WRITE_ONCE(flow->dropped, flow->dropped + 1);
WRITE_ONCE(b->tin_dropped, b->tin_dropped + 1);
@@ -1830,7 +1829,7 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
}
/* stats */
- sch->qstats.backlog += slen;
+ qstats_backlog_add(sch, slen);
q->avg_window_bytes += slen;
WRITE_ONCE(b->bytes, b->bytes + slen);
WRITE_ONCE(b->tin_backlog, b->tin_backlog + slen);
@@ -1867,7 +1866,7 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
/* stats */
WRITE_ONCE(b->packets, b->packets + 1);
- sch->qstats.backlog += len - ack_pkt_len;
+ qstats_backlog_add(sch, len - ack_pkt_len);
q->avg_window_bytes += len - ack_pkt_len;
WRITE_ONCE(b->bytes, b->bytes + len - ack_pkt_len);
WRITE_ONCE(b->tin_backlog, b->tin_backlog + len - ack_pkt_len);
@@ -1985,7 +1984,7 @@ static struct sk_buff *cake_dequeue_one(struct Qdisc *sch)
len = qdisc_pkt_len(skb);
WRITE_ONCE(b->backlogs[q->cur_flow], b->backlogs[q->cur_flow] - len);
WRITE_ONCE(b->tin_backlog, b->tin_backlog - len);
- sch->qstats.backlog -= len;
+ qstats_backlog_sub(sch, len);
q->buffer_used -= skb->truesize;
qdisc_qlen_dec(sch);
diff --git a/net/sched/sch_cbs.c b/net/sched/sch_cbs.c
index a75e58876797952f2218725f6da5cff29f330ae2..2cfa0fd92829ad7eba7454e09dc17eb8f22519b8 100644
--- a/net/sched/sch_cbs.c
+++ b/net/sched/sch_cbs.c
@@ -96,7 +96,7 @@ static int cbs_child_enqueue(struct sk_buff *skb, struct Qdisc *sch,
if (err != NET_XMIT_SUCCESS)
return err;
- sch->qstats.backlog += len;
+ qstats_backlog_add(sch, len);
qdisc_qlen_inc(sch);
return NET_XMIT_SUCCESS;
diff --git a/net/sched/sch_codel.c b/net/sched/sch_codel.c
index 317aae0ec7bd6aedb4bae09b18423c981fed16e7..91dd2e629af8f2d1a29f439a6dbb5c186fa01d33 100644
--- a/net/sched/sch_codel.c
+++ b/net/sched/sch_codel.c
@@ -42,7 +42,7 @@ static struct sk_buff *dequeue_func(struct codel_vars *vars, void *ctx)
struct sk_buff *skb = __qdisc_dequeue_head(&sch->q);
if (skb) {
- sch->qstats.backlog -= qdisc_pkt_len(skb);
+ qstats_backlog_sub(sch, qdisc_pkt_len(skb));
prefetch(&skb->end); /* we'll need skb_shinfo() */
}
return skb;
diff --git a/net/sched/sch_drr.c b/net/sched/sch_drr.c
index 925fa0cfd730ce72e45e8983ba02eb913afb1235..3f6687fa9666257952be5d44f9e3460845fe2a40 100644
--- a/net/sched/sch_drr.c
+++ b/net/sched/sch_drr.c
@@ -365,7 +365,7 @@ static int drr_enqueue(struct sk_buff *skb, struct Qdisc *sch,
cl->deficit = cl->quantum;
}
- sch->qstats.backlog += len;
+ qstats_backlog_add(sch, len);
qdisc_qlen_inc(sch);
return err;
}
diff --git a/net/sched/sch_ets.c b/net/sched/sch_ets.c
index c817e0a6c14653a35f5ebb9de1a5ccc44d1a2f98..1cc559634ed27ce5a6630186a51a8ac8180dad96 100644
--- a/net/sched/sch_ets.c
+++ b/net/sched/sch_ets.c
@@ -448,7 +448,7 @@ static int ets_qdisc_enqueue(struct sk_buff *skb, struct Qdisc *sch,
cl->deficit = cl->quantum;
}
- sch->qstats.backlog += len;
+ qstats_backlog_add(sch, len);
qdisc_qlen_inc(sch);
return err;
}
diff --git a/net/sched/sch_fq.c b/net/sched/sch_fq.c
index 1e34ac136b15cf24742f2810d201420cf763021a..796cb8046a902b94952a571b250813c5e557d600 100644
--- a/net/sched/sch_fq.c
+++ b/net/sched/sch_fq.c
@@ -802,7 +802,7 @@ static void fq_reset(struct Qdisc *sch)
unsigned int idx;
WRITE_ONCE(sch->q.qlen, 0);
- sch->qstats.backlog = 0;
+ WRITE_ONCE(sch->qstats.backlog, 0);
fq_flow_purge(&q->internal);
diff --git a/net/sched/sch_fq_codel.c b/net/sched/sch_fq_codel.c
index cae8483fbb0c4f62f28dba4c15b4426485390bcf..1b1de693d4c64a1f5f4e9e788371829dea91740e 100644
--- a/net/sched/sch_fq_codel.c
+++ b/net/sched/sch_fq_codel.c
@@ -177,7 +177,7 @@ static unsigned int fq_codel_drop(struct Qdisc *sch, unsigned int max_packets,
WRITE_ONCE(q->backlogs[idx], q->backlogs[idx] - len);
q->memory_usage -= mem;
__qdisc_qstats_drop(sch, i);
- sch->qstats.backlog -= len;
+ qstats_backlog_sub(sch, len);
WRITE_ONCE(sch->q.qlen, sch->q.qlen - i);
return idx;
}
@@ -268,7 +268,7 @@ static struct sk_buff *dequeue_func(struct codel_vars *vars, void *ctx)
q->backlogs[flow - q->flows] - qdisc_pkt_len(skb));
q->memory_usage -= get_codel_cb(skb)->mem_usage;
qdisc_qlen_dec(sch);
- sch->qstats.backlog -= qdisc_pkt_len(skb);
+ qdisc_qstats_backlog_dec(sch, skb);
}
return skb;
}
diff --git a/net/sched/sch_fq_pie.c b/net/sched/sch_fq_pie.c
index 0a4eca4ab086ebebbdba17784f12370c301bbac6..72f48fa4010bebbe6be212938b457db21ff3c5a0 100644
--- a/net/sched/sch_fq_pie.c
+++ b/net/sched/sch_fq_pie.c
@@ -184,7 +184,7 @@ static int fq_pie_qdisc_enqueue(struct sk_buff *skb, struct Qdisc *sch,
pkt_len = qdisc_pkt_len(skb);
q->stats.packets_in++;
q->memory_usage += skb->truesize;
- sch->qstats.backlog += pkt_len;
+ qstats_backlog_add(sch, pkt_len);
qdisc_qlen_inc(sch);
flow_queue_add(sel_flow, skb);
if (list_empty(&sel_flow->flowchain)) {
@@ -262,7 +262,7 @@ static struct sk_buff *fq_pie_qdisc_dequeue(struct Qdisc *sch)
if (flow->head) {
skb = dequeue_head(flow);
pkt_len = qdisc_pkt_len(skb);
- sch->qstats.backlog -= pkt_len;
+ qstats_backlog_sub(sch, pkt_len);
qdisc_qlen_dec(sch);
qdisc_bstats_update(sch, skb);
}
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index e35d9c58850fa9d82471d64daedfdf8c47e92b68..e8647a5c74af237d20fc73a05b27a03cc8b62427 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -1060,7 +1060,7 @@ void qdisc_reset(struct Qdisc *qdisc)
__skb_queue_purge(&qdisc->skb_bad_txq);
WRITE_ONCE(qdisc->q.qlen, 0);
- qdisc->qstats.backlog = 0;
+ WRITE_ONCE(qdisc->qstats.backlog, 0);
}
EXPORT_SYMBOL(qdisc_reset);
diff --git a/net/sched/sch_gred.c b/net/sched/sch_gred.c
index 8ae65572162c188cca5ac8f030dc6f2054a7fcd0..fcc1a4c0363624293986f221c70572ce6503e220 100644
--- a/net/sched/sch_gred.c
+++ b/net/sched/sch_gred.c
@@ -388,7 +388,7 @@ static int gred_offload_dump_stats(struct Qdisc *sch)
bytes += u64_stats_read(&hw_stats->stats.bstats[i].bytes);
packets += u64_stats_read(&hw_stats->stats.bstats[i].packets);
sch->qstats.qlen += hw_stats->stats.qstats[i].qlen;
- sch->qstats.backlog += hw_stats->stats.qstats[i].backlog;
+ qstats_backlog_add(sch, hw_stats->stats.qstats[i].backlog);
__qdisc_qstats_drop(sch, hw_stats->stats.qstats[i].drops);
sch->qstats.requeues += hw_stats->stats.qstats[i].requeues;
sch->qstats.overlimits += hw_stats->stats.qstats[i].overlimits;
diff --git a/net/sched/sch_hfsc.c b/net/sched/sch_hfsc.c
index e71a565100edf60881ca7542faa408c5bb1a0984..59409ee2d2ff9279d7439b744030c0e845386de0 100644
--- a/net/sched/sch_hfsc.c
+++ b/net/sched/sch_hfsc.c
@@ -1560,7 +1560,7 @@ hfsc_enqueue(struct sk_buff *skb, struct Qdisc *sch, struct sk_buff **to_free)
return err;
}
- sch->qstats.backlog += len;
+ qstats_backlog_add(sch, len);
qdisc_qlen_inc(sch);
if (first && !cl_in_el_or_vttree(cl)) {
diff --git a/net/sched/sch_htb.c b/net/sched/sch_htb.c
index c22ccd8eae8c73323ccdf425e62857b3b851d74e..1e600f65c8769a74286c4f060b0d45da9a13eeeb 100644
--- a/net/sched/sch_htb.c
+++ b/net/sched/sch_htb.c
@@ -650,7 +650,7 @@ static int htb_enqueue(struct sk_buff *skb, struct Qdisc *sch,
htb_activate(q, cl);
}
- sch->qstats.backlog += len;
+ qstats_backlog_add(sch, len);
qdisc_qlen_inc(sch);
return NET_XMIT_SUCCESS;
}
diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
index 57b12cbca45355c69780614fa87aaf37255d64cc..ddbfea9dd32a7cee381dc82e0291db709ee57f8a 100644
--- a/net/sched/sch_netem.c
+++ b/net/sched/sch_netem.c
@@ -750,7 +750,7 @@ static struct sk_buff *netem_dequeue(struct Qdisc *sch)
if (err != NET_XMIT_SUCCESS) {
if (net_xmit_drop_count(err))
qdisc_qstats_drop(sch);
- sch->qstats.backlog -= pkt_len;
+ qstats_backlog_sub(sch, pkt_len);
qdisc_qlen_dec(sch);
qdisc_tree_reduce_backlog(sch, 1, pkt_len);
}
diff --git a/net/sched/sch_prio.c b/net/sched/sch_prio.c
index fe42ae3d6b696b2fc47f4d397af32e950eeec194..e4dd56a890725b4c14d6715c96f5b3fa44a8f4f2 100644
--- a/net/sched/sch_prio.c
+++ b/net/sched/sch_prio.c
@@ -85,7 +85,7 @@ prio_enqueue(struct sk_buff *skb, struct Qdisc *sch, struct sk_buff **to_free)
ret = qdisc_enqueue(skb, qdisc, to_free);
if (ret == NET_XMIT_SUCCESS) {
- sch->qstats.backlog += len;
+ qstats_backlog_add(sch, len);
qdisc_qlen_inc(sch);
return NET_XMIT_SUCCESS;
}
diff --git a/net/sched/sch_qfq.c b/net/sched/sch_qfq.c
index 195c434aae5f7e03d1a1238ed73bb64b3f04e105..cb56787e1d258c06f2e86959c3b2cfaeb12df1ac 100644
--- a/net/sched/sch_qfq.c
+++ b/net/sched/sch_qfq.c
@@ -1264,7 +1264,7 @@ static int qfq_enqueue(struct sk_buff *skb, struct Qdisc *sch,
}
_bstats_update(&cl->bstats, len, gso_segs);
- sch->qstats.backlog += len;
+ qstats_backlog_add(sch, len);
qdisc_qlen_inc(sch);
agg = cl->agg;
diff --git a/net/sched/sch_red.c b/net/sched/sch_red.c
index 0719590dfd73b64d21f71ab00621f64ed0eefc89..d7598214270b8e5b6b818be37f1519f64ad537c4 100644
--- a/net/sched/sch_red.c
+++ b/net/sched/sch_red.c
@@ -138,7 +138,7 @@ static int red_enqueue(struct sk_buff *skb, struct Qdisc *sch,
len = qdisc_pkt_len(skb);
ret = qdisc_enqueue(skb, child, to_free);
if (likely(ret == NET_XMIT_SUCCESS)) {
- sch->qstats.backlog += len;
+ qstats_backlog_add(sch, len);
qdisc_qlen_inc(sch);
} else if (net_xmit_drop_count(ret)) {
WRITE_ONCE(q->stats.pdrop,
diff --git a/net/sched/sch_sfb.c b/net/sched/sch_sfb.c
index efd9251c3add317f3b817f08c732fca0c347bf35..b1d46509427692eeeabcfa19957c83fae3fa306e 100644
--- a/net/sched/sch_sfb.c
+++ b/net/sched/sch_sfb.c
@@ -415,7 +415,7 @@ static int sfb_enqueue(struct sk_buff *skb, struct Qdisc *sch,
memcpy(&cb, sfb_skb_cb(skb), sizeof(cb));
ret = qdisc_enqueue(skb, child, to_free);
if (likely(ret == NET_XMIT_SUCCESS)) {
- sch->qstats.backlog += len;
+ qstats_backlog_add(sch, len);
qdisc_qlen_inc(sch);
increment_qlen(&cb, q);
} else if (net_xmit_drop_count(ret)) {
@@ -592,7 +592,7 @@ static int sfb_dump(struct Qdisc *sch, struct sk_buff *skb)
.penalty_burst = q->penalty_burst,
};
- sch->qstats.backlog = q->qdisc->qstats.backlog;
+ WRITE_ONCE(sch->qstats.backlog, READ_ONCE(q->qdisc->qstats.backlog));
opts = nla_nest_start_noflag(skb, TCA_OPTIONS);
if (opts == NULL)
goto nla_put_failure;
diff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c
index f9807ee2cf6c72101ce39c4f43bf32c03c0a5f62..758b88f218652704454647f25da270a0254cafcf 100644
--- a/net/sched/sch_sfq.c
+++ b/net/sched/sch_sfq.c
@@ -427,7 +427,7 @@ sfq_enqueue(struct sk_buff *skb, struct Qdisc *sch, struct sk_buff **to_free)
/* We know we have at least one packet in queue */
head = slot_dequeue_head(slot);
delta = qdisc_pkt_len(head) - qdisc_pkt_len(skb);
- sch->qstats.backlog -= delta;
+ qstats_backlog_sub(sch, delta);
WRITE_ONCE(slot->backlog, slot->backlog - delta);
qdisc_drop_reason(head, sch, to_free, QDISC_DROP_FLOW_LIMIT);
diff --git a/net/sched/sch_tbf.c b/net/sched/sch_tbf.c
index 25edf11a7d671fe63878b0995998c5920b86ef74..67c7aaaf8f607e82ad13b7fdf177405a1dd075bb 100644
--- a/net/sched/sch_tbf.c
+++ b/net/sched/sch_tbf.c
@@ -232,7 +232,7 @@ static int tbf_segment(struct sk_buff *skb, struct Qdisc *sch,
}
}
WRITE_ONCE(sch->q.qlen, sch->q.qlen + nb);
- sch->qstats.backlog += len;
+ qstats_backlog_add(sch, len);
if (nb > 0) {
qdisc_tree_reduce_backlog(sch, 1 - nb, prev_len - len);
consume_skb(skb);
@@ -263,7 +263,7 @@ static int tbf_enqueue(struct sk_buff *skb, struct Qdisc *sch,
return ret;
}
- sch->qstats.backlog += len;
+ qstats_backlog_add(sch, len);
qdisc_qlen_inc(sch);
return NET_XMIT_SUCCESS;
}
--
2.54.0.563.g4f69b47b94-goog
* [PATCH net-next 4/8] net/sched: add qdisc_qlen_lockless() helper
2026-05-07 22:19 [PATCH net-next 0/8] net/sched: prepare lockless qdisc dumps Eric Dumazet
` (2 preceding siblings ...)
2026-05-07 22:19 ` [PATCH net-next 3/8] net/sched: annotate data-races around sch->qstats.backlog Eric Dumazet
@ 2026-05-07 22:19 ` Eric Dumazet
2026-05-07 22:19 ` [PATCH net-next 5/8] net/sched: add const qualifiers to gnet_stats helpers Eric Dumazet
` (3 subsequent siblings)
7 siblings, 0 replies; 12+ messages in thread
From: Eric Dumazet @ 2026-05-07 22:19 UTC
To: David S . Miller, Jakub Kicinski, Paolo Abeni
Cc: Simon Horman, Jamal Hadi Salim, Jiri Pirko, netdev, eric.dumazet,
Eric Dumazet
Used in contexts where the qdisc spinlock is not held.
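A usage sketch with a hypothetical caller (toy_*), summing child
qlen under RCU only:

        static unsigned int toy_sum_tx_qlens(struct net_device *dev)
        {
                unsigned int ntx, qlen = 0;

                rcu_read_lock();
                for (ntx = 0; ntx < dev->num_tx_queues; ntx++) {
                        const struct Qdisc *q;

                        q = rcu_dereference(netdev_get_tx_queue(dev, ntx)->qdisc);
                        if (q)
                                qlen += qdisc_qlen_lockless(q);
                }
                rcu_read_unlock();
                return qlen;
        }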
Signed-off-by: Eric Dumazet <edumazet@google.com>
---
include/net/sch_generic.h | 7 ++++++-
net/sched/sch_mq.c | 2 +-
net/sched/sch_mqprio.c | 4 ++--
3 files changed, 9 insertions(+), 4 deletions(-)
diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index d147549169a4d43c80684db2e1815a8a0d6596c6..3070a717bb98386838f3e8149f34d52572fe208f 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -542,6 +542,11 @@ static inline int qdisc_qlen(const struct Qdisc *q)
return q->q.qlen;
}
+static inline int qdisc_qlen_lockless(const struct Qdisc *q)
+{
+ return READ_ONCE(q->q.qlen);
+}
+
static inline void qdisc_qlen_inc(struct Qdisc *q)
{
WRITE_ONCE(q->q.qlen, q->q.qlen + 1);
@@ -561,7 +566,7 @@ static inline int qdisc_qlen_sum(const struct Qdisc *q)
for_each_possible_cpu(i)
qlen += READ_ONCE(per_cpu_ptr(q->cpu_qstats, i)->qlen);
} else {
- qlen += READ_ONCE(q->q.qlen);
+ qlen += qdisc_qlen_lockless(q);
}
return qlen;
diff --git a/net/sched/sch_mq.c b/net/sched/sch_mq.c
index ec8c91d3fde04e59daec2aecdb14d6bf50715e15..4172ec24a43d1c2fe56789986a46da93eb522721 100644
--- a/net/sched/sch_mq.c
+++ b/net/sched/sch_mq.c
@@ -163,7 +163,7 @@ void mq_dump_common(struct Qdisc *sch, struct sk_buff *skb)
&qdisc->bstats, false);
gnet_stats_add_queue(&sch->qstats, qdisc->cpu_qstats,
&qdisc->qstats);
- qlen += qdisc_qlen(qdisc);
+ qlen += qdisc_qlen_lockless(qdisc);
spin_unlock_bh(qdisc_lock(qdisc));
}
diff --git a/net/sched/sch_mqprio.c b/net/sched/sch_mqprio.c
index 91a92992cd24ab6c30bf7db2288c08cd493c7bc3..3b4881c389c535368687454ea268bec892ecb942 100644
--- a/net/sched/sch_mqprio.c
+++ b/net/sched/sch_mqprio.c
@@ -576,7 +576,7 @@ static int mqprio_dump(struct Qdisc *sch, struct sk_buff *skb)
&qdisc->bstats, false);
gnet_stats_add_queue(&sch->qstats, qdisc->cpu_qstats,
&qdisc->qstats);
- qlen += qdisc_qlen(qdisc);
+ qlen += qdisc_qlen_lockless(qdisc);
spin_unlock_bh(qdisc_lock(qdisc));
}
@@ -691,7 +691,7 @@ static int mqprio_dump_class_stats(struct Qdisc *sch, unsigned long cl,
&qdisc->bstats, false);
gnet_stats_add_queue(&qstats, qdisc->cpu_qstats,
&qdisc->qstats);
- qlen += qdisc_qlen(qdisc);
+ qlen += qdisc_qlen_lockless(qdisc);
spin_unlock_bh(qdisc_lock(qdisc));
}
--
2.54.0.563.g4f69b47b94-goog
* [PATCH net-next 5/8] net/sched: add const qualifiers to gnet_stats helpers
2026-05-07 22:19 [PATCH net-next 0/8] net/sched: prepare lockless qdisc dumps Eric Dumazet
` (3 preceding siblings ...)
2026-05-07 22:19 ` [PATCH net-next 4/8] net/sched: add qdisc_qlen_lockless() helper Eric Dumazet
@ 2026-05-07 22:19 ` Eric Dumazet
2026-05-08 18:33 ` Victor Nogueira
2026-05-07 22:19 ` [PATCH net-next 6/8] net/sched: mq: no longer acquire qdisc spinlocks in dump operations Eric Dumazet
` (2 subsequent siblings)
7 siblings, 1 reply; 12+ messages in thread
From: Eric Dumazet @ 2026-05-07 22:19 UTC
To: David S . Miller, Jakub Kicinski, Paolo Abeni
Cc: Simon Horman, Jamal Hadi Salim, Jiri Pirko, netdev, eric.dumazet,
Eric Dumazet
In preparation for lockless qdisc dumps, add const qualifiers to:
- gnet_stats_add_basic()
- gnet_stats_copy_basic()
- gnet_stats_copy_queue()
- gnet_stats_read_basic()
- ___gnet_stats_copy_basic()
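With the const qualifiers in place, a read-only dump helper can be
const end to end. A sketch with a hypothetical helper (toy_*):

        static int toy_copy_child_stats(struct gnet_dump *d,
                                        const struct Qdisc *q)
        {
                return gnet_stats_copy_queue(d, q->cpu_qstats, &q->qstats,
                                             qdisc_qlen_lockless(q));
        }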
Signed-off-by: Eric Dumazet <edumazet@google.com>
---
include/net/gen_stats.h | 12 ++++++------
net/core/gen_stats.c | 24 ++++++++++++------------
2 files changed, 18 insertions(+), 18 deletions(-)
diff --git a/include/net/gen_stats.h b/include/net/gen_stats.h
index 7aa2b8e1fb298c4f994a745b114fc4da785ddf4b..6e661b743bc35743de9c211bdf5c24d69be5c0f1 100644
--- a/include/net/gen_stats.h
+++ b/include/net/gen_stats.h
@@ -47,19 +47,19 @@ int gnet_stats_start_copy_compat(struct sk_buff *skb, int type,
int padattr);
int gnet_stats_copy_basic(struct gnet_dump *d,
- struct gnet_stats_basic_sync __percpu *cpu,
- struct gnet_stats_basic_sync *b, bool running);
+ const struct gnet_stats_basic_sync __percpu *cpu,
+ const struct gnet_stats_basic_sync *b, bool running);
void gnet_stats_add_basic(struct gnet_stats_basic_sync *bstats,
- struct gnet_stats_basic_sync __percpu *cpu,
- struct gnet_stats_basic_sync *b, bool running);
+ const struct gnet_stats_basic_sync __percpu *cpu,
+ const struct gnet_stats_basic_sync *b, bool running);
int gnet_stats_copy_basic_hw(struct gnet_dump *d,
struct gnet_stats_basic_sync __percpu *cpu,
struct gnet_stats_basic_sync *b, bool running);
int gnet_stats_copy_rate_est(struct gnet_dump *d,
struct net_rate_estimator __rcu **ptr);
int gnet_stats_copy_queue(struct gnet_dump *d,
- struct gnet_stats_queue __percpu *cpu_q,
- struct gnet_stats_queue *q, __u32 qlen);
+ const struct gnet_stats_queue __percpu *cpu_q,
+ const struct gnet_stats_queue *q, __u32 qlen);
void gnet_stats_add_queue(struct gnet_stats_queue *qstats,
const struct gnet_stats_queue __percpu *cpu_q,
const struct gnet_stats_queue *q);
diff --git a/net/core/gen_stats.c b/net/core/gen_stats.c
index 1a2380e74272de8eaf3d4ef453e56105a31e9edf..3b2f9ea2eb072dde792aad5b60cf00dcc2efa76d 100644
--- a/net/core/gen_stats.c
+++ b/net/core/gen_stats.c
@@ -124,7 +124,7 @@ void gnet_stats_basic_sync_init(struct gnet_stats_basic_sync *b)
EXPORT_SYMBOL(gnet_stats_basic_sync_init);
static void gnet_stats_add_basic_cpu(struct gnet_stats_basic_sync *bstats,
- struct gnet_stats_basic_sync __percpu *cpu)
+ const struct gnet_stats_basic_sync __percpu *cpu)
{
u64 t_bytes = 0, t_packets = 0;
int i;
@@ -147,8 +147,8 @@ static void gnet_stats_add_basic_cpu(struct gnet_stats_basic_sync *bstats,
}
void gnet_stats_add_basic(struct gnet_stats_basic_sync *bstats,
- struct gnet_stats_basic_sync __percpu *cpu,
- struct gnet_stats_basic_sync *b, bool running)
+ const struct gnet_stats_basic_sync __percpu *cpu,
+ const struct gnet_stats_basic_sync *b, bool running)
{
unsigned int start;
u64 bytes = 0;
@@ -172,8 +172,8 @@ void gnet_stats_add_basic(struct gnet_stats_basic_sync *bstats,
EXPORT_SYMBOL(gnet_stats_add_basic);
static void gnet_stats_read_basic(u64 *ret_bytes, u64 *ret_packets,
- struct gnet_stats_basic_sync __percpu *cpu,
- struct gnet_stats_basic_sync *b, bool running)
+ const struct gnet_stats_basic_sync __percpu *cpu,
+ const struct gnet_stats_basic_sync *b, bool running)
{
unsigned int start;
@@ -182,7 +182,7 @@ static void gnet_stats_read_basic(u64 *ret_bytes, u64 *ret_packets,
int i;
for_each_possible_cpu(i) {
- struct gnet_stats_basic_sync *bcpu = per_cpu_ptr(cpu, i);
+ const struct gnet_stats_basic_sync *bcpu = per_cpu_ptr(cpu, i);
unsigned int start;
u64 bytes, packets;
@@ -209,8 +209,8 @@ static void gnet_stats_read_basic(u64 *ret_bytes, u64 *ret_packets,
static int
___gnet_stats_copy_basic(struct gnet_dump *d,
- struct gnet_stats_basic_sync __percpu *cpu,
- struct gnet_stats_basic_sync *b,
+ const struct gnet_stats_basic_sync __percpu *cpu,
+ const struct gnet_stats_basic_sync *b,
int type, bool running)
{
u64 bstats_bytes, bstats_packets;
@@ -258,8 +258,8 @@ ___gnet_stats_copy_basic(struct gnet_dump *d,
*/
int
gnet_stats_copy_basic(struct gnet_dump *d,
- struct gnet_stats_basic_sync __percpu *cpu,
- struct gnet_stats_basic_sync *b,
+ const struct gnet_stats_basic_sync __percpu *cpu,
+ const struct gnet_stats_basic_sync *b,
bool running)
{
return ___gnet_stats_copy_basic(d, cpu, b, TCA_STATS_BASIC, running);
@@ -385,8 +385,8 @@ EXPORT_SYMBOL(gnet_stats_add_queue);
*/
int
gnet_stats_copy_queue(struct gnet_dump *d,
- struct gnet_stats_queue __percpu *cpu_q,
- struct gnet_stats_queue *q, __u32 qlen)
+ const struct gnet_stats_queue __percpu *cpu_q,
+ const struct gnet_stats_queue *q, __u32 qlen)
{
struct gnet_stats_queue qstats = {0};
--
2.54.0.563.g4f69b47b94-goog
^ permalink raw reply related [flat|nested] 12+ messages in thread
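For readers following along outside the tree, a minimal userspace sketch of
what the const-ification buys (illustrative toy types, not kernel code): the
dump path is a pure reader, and the qualifier turns any accidental write into
a compile error.

#include <stdio.h>

struct basic_stats {
	unsigned long long bytes;
	unsigned long long packets;
};

/* Read-only aggregation, mirroring the new gnet_stats_add_basic() contract. */
static void stats_add(struct basic_stats *sum, const struct basic_stats *b)
{
	sum->bytes += b->bytes;
	sum->packets += b->packets;
	/* b->packets = 0;  <- would no longer compile: *b is const */
}

int main(void)
{
	struct basic_stats sum = { 0, 0 };
	const struct basic_stats child = { 1500, 1 };

	stats_add(&sum, &child);
	printf("%llu bytes, %llu packets\n", sum.bytes, sum.packets);
	return 0;
}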
* [PATCH net-next 6/8] net/sched: mq: no longer acquire qdisc spinlocks in dump operations
2026-05-07 22:19 [PATCH net-next 0/8] net/sched: prepare lockless qdisc dumps Eric Dumazet
` (4 preceding siblings ...)
2026-05-07 22:19 ` [PATCH net-next 5/8] net/sched: add const qualifiers to gnet_stats helpers Eric Dumazet
@ 2026-05-07 22:19 ` Eric Dumazet
2026-05-07 22:19 ` [PATCH net-next 7/8] net/sched: mq_prio: no longer acquire qdisc spinlocks in mqprio_dump() Eric Dumazet
2026-05-07 22:19 ` [PATCH net-next 8/8] net/sched: mq_prio: no longer acquire qdisc spinlocks in mqprio_dump_class_stats() Eric Dumazet
7 siblings, 0 replies; 12+ messages in thread
From: Eric Dumazet @ 2026-05-07 22:19 UTC (permalink / raw)
To: David S . Miller, Jakub Kicinski, Paolo Abeni
Cc: Simon Horman, Jamal Hadi Salim, Jiri Pirko, netdev, eric.dumazet,
Eric Dumazet
Prepare mq_dump_common() for RTNL avoidance.
Use RCU instead of RTNL, and no longer acquire each child qdisc's spinlock.
Signed-off-by: Eric Dumazet <edumazet@google.com>
---
include/net/sch_generic.h | 9 +++++++++
net/sched/sch_mq.c | 30 +++++++++++++++++++++---------
2 files changed, 30 insertions(+), 9 deletions(-)
diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index 3070a717bb98386838f3e8149f34d52572fe208f..bfd1167ed575e5154c52a4491194e17e3998977c 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -952,6 +952,15 @@ static inline void _bstats_update(struct gnet_stats_basic_sync *bstats,
u64_stats_update_end(&bstats->syncp);
}
+static inline void _bstats_set(struct gnet_stats_basic_sync *bstats,
+ u64 bytes, u64 packets)
+{
+ u64_stats_update_begin(&bstats->syncp);
+ u64_stats_set(&bstats->bytes, bytes);
+ u64_stats_set(&bstats->packets, packets);
+ u64_stats_update_end(&bstats->syncp);
+}
+
static inline void bstats_update(struct gnet_stats_basic_sync *bstats,
const struct sk_buff *skb)
{
diff --git a/net/sched/sch_mq.c b/net/sched/sch_mq.c
index 4172ec24a43d1c2fe56789986a46da93eb522721..6bb1042e4595b50c6023d3ad81706ad7ab6fe0e5 100644
--- a/net/sched/sch_mq.c
+++ b/net/sched/sch_mq.c
@@ -143,30 +143,42 @@ EXPORT_SYMBOL_NS_GPL(mq_attach, "NET_SCHED_INTERNAL");
void mq_dump_common(struct Qdisc *sch, struct sk_buff *skb)
{
struct net_device *dev = qdisc_dev(sch);
+ struct gnet_stats_queue qstats = { 0 };
+ struct gnet_stats_basic_sync bstats;
+ const struct Qdisc *qdisc;
unsigned int qlen = 0;
- struct Qdisc *qdisc;
unsigned int ntx;
- gnet_stats_basic_sync_init(&sch->bstats);
- memset(&sch->qstats, 0, sizeof(sch->qstats));
+ gnet_stats_basic_sync_init(&bstats);
/* MQ supports lockless qdiscs. However, statistics accounting needs
* to account for all, none, or a mix of locked and unlocked child
* qdiscs. Percpu stats are added to counters in-band and locking
* qdisc totals are added at end.
*/
+ rcu_read_lock();
for (ntx = 0; ntx < dev->num_tx_queues; ntx++) {
- qdisc = rtnl_dereference(netdev_get_tx_queue(dev, ntx)->qdisc_sleeping);
- spin_lock_bh(qdisc_lock(qdisc));
+ qdisc = rcu_dereference(netdev_get_tx_queue(dev, ntx)->qdisc_sleeping);
- gnet_stats_add_basic(&sch->bstats, qdisc->cpu_bstats,
+ gnet_stats_add_basic(&bstats, qdisc->cpu_bstats,
&qdisc->bstats, false);
- gnet_stats_add_queue(&sch->qstats, qdisc->cpu_qstats,
+ gnet_stats_add_queue(&qstats, qdisc->cpu_qstats,
&qdisc->qstats);
qlen += qdisc_qlen_lockless(qdisc);
-
- spin_unlock_bh(qdisc_lock(qdisc));
}
+ rcu_read_unlock();
+
+ spin_lock_bh(qdisc_lock(sch));
+ _bstats_set(&sch->bstats, u64_stats_read(&bstats.bytes),
+ u64_stats_read(&bstats.packets));
+ spin_unlock_bh(qdisc_lock(sch));
+
+ WRITE_ONCE(sch->qstats.qlen, qstats.qlen);
+ WRITE_ONCE(sch->qstats.backlog, qstats.backlog);
+ WRITE_ONCE(sch->qstats.drops, qstats.drops);
+ WRITE_ONCE(sch->qstats.requeues, qstats.requeues);
+ WRITE_ONCE(sch->qstats.overlimits, qstats.overlimits);
+
WRITE_ONCE(sch->q.qlen, qlen);
}
EXPORT_SYMBOL_NS_GPL(mq_dump_common, "NET_SCHED_INTERNAL");
--
2.54.0.563.g4f69b47b94-goog
^ permalink raw reply related [flat|nested] 12+ messages in thread
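The _bstats_set() helper added above brackets the two 64-bit stores with
u64_stats_update_begin()/end() so a lockless reader never observes a torn or
mixed bytes/packets pair. A userspace analogue of that sequence-counter
discipline, with C11 atomics standing in for the kernel primitives (a sketch
under that substitution, not the kernel implementation; single writer
assumed, as in the patch where the caller holds qdisc_lock(sch)):

#include <stdatomic.h>
#include <stdio.h>

struct bstats {
	atomic_uint seq;			/* stand-in for ->syncp */
	_Atomic unsigned long long bytes;	/* relaxed ops ~ *_ONCE() */
	_Atomic unsigned long long packets;
};

/* Writer side, analogous to _bstats_set(): odd seq means write in flight. */
static void bstats_set(struct bstats *b, unsigned long long bytes,
		       unsigned long long packets)
{
	unsigned int s = atomic_load_explicit(&b->seq, memory_order_relaxed);

	atomic_store_explicit(&b->seq, s + 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_release);
	atomic_store_explicit(&b->bytes, bytes, memory_order_relaxed);
	atomic_store_explicit(&b->packets, packets, memory_order_relaxed);
	atomic_store_explicit(&b->seq, s + 2, memory_order_release);
}

/* Lockless reader: retry if a write was in flight or raced with us. */
static void bstats_read(struct bstats *b, unsigned long long *bytes,
			unsigned long long *packets)
{
	unsigned int s1, s2;

	do {
		s1 = atomic_load_explicit(&b->seq, memory_order_acquire);
		*bytes = atomic_load_explicit(&b->bytes, memory_order_relaxed);
		*packets = atomic_load_explicit(&b->packets, memory_order_relaxed);
		atomic_thread_fence(memory_order_acquire);
		s2 = atomic_load_explicit(&b->seq, memory_order_relaxed);
	} while (s1 != s2 || (s1 & 1));
}

int main(void)
{
	struct bstats b = { 0 };
	unsigned long long bytes, packets;

	bstats_set(&b, 1500, 1);
	bstats_read(&b, &bytes, &packets);
	printf("%llu bytes, %llu packets\n", bytes, packets);
	return 0;
}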
* [PATCH net-next 7/8] net/sched: mq_prio: no longer acquire qdisc spinlocks in mqprio_dump()
2026-05-07 22:19 [PATCH net-next 0/8] net/sched: prepare lockless qdisc dumps Eric Dumazet
` (5 preceding siblings ...)
2026-05-07 22:19 ` [PATCH net-next 6/8] net/sched: mq: no longer acquire qdisc spinlocks in dump operations Eric Dumazet
@ 2026-05-07 22:19 ` Eric Dumazet
2026-05-07 22:19 ` [PATCH net-next 8/8] net/sched: mq_prio: no longer acquire qdisc spinlocks in mqprio_dump_class_stats() Eric Dumazet
7 siblings, 0 replies; 12+ messages in thread
From: Eric Dumazet @ 2026-05-07 22:19 UTC (permalink / raw)
To: David S . Miller, Jakub Kicinski, Paolo Abeni
Cc: Simon Horman, Jamal Hadi Salim, Jiri Pirko, netdev, eric.dumazet,
Eric Dumazet
Prepare mqprio_dump() for RTNL avoidance.
Use RCU instead of RTNL, and no longer acquire each child qdisc's spinlock.
Signed-off-by: Eric Dumazet <edumazet@google.com>
---
net/sched/sch_mqprio.c | 35 +++++++++++++++++++++++------------
1 file changed, 23 insertions(+), 12 deletions(-)
diff --git a/net/sched/sch_mqprio.c b/net/sched/sch_mqprio.c
index 3b4881c389c535368687454ea268bec892ecb942..37756932d4917caa7c3b96dff1999e30623fe953 100644
--- a/net/sched/sch_mqprio.c
+++ b/net/sched/sch_mqprio.c
@@ -551,35 +551,46 @@ static int mqprio_dump_tc_entries(struct mqprio_sched *priv,
static int mqprio_dump(struct Qdisc *sch, struct sk_buff *skb)
{
- struct net_device *dev = qdisc_dev(sch);
- struct mqprio_sched *priv = qdisc_priv(sch);
struct nlattr *nla = (struct nlattr *)skb_tail_pointer(skb);
+ struct mqprio_sched *priv = qdisc_priv(sch);
+ struct net_device *dev = qdisc_dev(sch);
+ struct gnet_stats_queue qstats = { 0 };
+ struct gnet_stats_basic_sync bstats;
struct tc_mqprio_qopt opt = { 0 };
+ const struct Qdisc *qdisc;
unsigned int qlen = 0;
- struct Qdisc *qdisc;
unsigned int ntx;
- qlen = 0;
- gnet_stats_basic_sync_init(&sch->bstats);
- memset(&sch->qstats, 0, sizeof(sch->qstats));
+ gnet_stats_basic_sync_init(&bstats);
/* MQ supports lockless qdiscs. However, statistics accounting needs
* to account for all, none, or a mix of locked and unlocked child
* qdiscs. Percpu stats are added to counters in-band and locking
* qdisc totals are added at end.
*/
+ rcu_read_lock();
for (ntx = 0; ntx < dev->num_tx_queues; ntx++) {
- qdisc = rtnl_dereference(netdev_get_tx_queue(dev, ntx)->qdisc_sleeping);
- spin_lock_bh(qdisc_lock(qdisc));
+ qdisc = rcu_dereference(netdev_get_tx_queue(dev, ntx)->qdisc_sleeping);
- gnet_stats_add_basic(&sch->bstats, qdisc->cpu_bstats,
+ gnet_stats_add_basic(&bstats, qdisc->cpu_bstats,
&qdisc->bstats, false);
- gnet_stats_add_queue(&sch->qstats, qdisc->cpu_qstats,
+ gnet_stats_add_queue(&qstats, qdisc->cpu_qstats,
&qdisc->qstats);
qlen += qdisc_qlen_lockless(qdisc);
-
- spin_unlock_bh(qdisc_lock(qdisc));
}
+ rcu_read_unlock();
+
+ spin_lock_bh(qdisc_lock(sch));
+ _bstats_set(&sch->bstats, u64_stats_read(&bstats.bytes),
+ u64_stats_read(&bstats.packets));
+ spin_unlock_bh(qdisc_lock(sch));
+
+ WRITE_ONCE(sch->qstats.qlen, qstats.qlen);
+ WRITE_ONCE(sch->qstats.backlog, qstats.backlog);
+ WRITE_ONCE(sch->qstats.drops, qstats.drops);
+ WRITE_ONCE(sch->qstats.requeues, qstats.requeues);
+ WRITE_ONCE(sch->qstats.overlimits, qstats.overlimits);
+
WRITE_ONCE(sch->q.qlen, qlen);
mqprio_qopt_reconstruct(dev, &opt);
--
2.54.0.563.g4f69b47b94-goog
^ permalink raw reply related [flat|nested] 12+ messages in thread
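Note the shape of the publish step shared by the mq and mqprio patches:
totals are accumulated in on-stack variables under rcu_read_lock(), then
copied into the root qdisc with one WRITE_ONCE() per field. Readers may
still see a mix of old and new counters across fields, which is acceptable
for dump statistics; what the annotations rule out is a torn load of any
single field. A userspace sketch of that idea (toy names, relaxed C11
atomics standing in for WRITE_ONCE()):

#include <stdatomic.h>

/* Shared stats block read by lockless dumpers. */
struct qstats_shared {
	_Atomic unsigned int qlen;
	_Atomic unsigned int backlog;
	_Atomic unsigned int drops;
};

/* On-stack accumulator filled while walking the children. */
struct qstats_local {
	unsigned int qlen, backlog, drops;
};

/* Publish: one untorn store per field, like the WRITE_ONCE() block above. */
static void qstats_publish(struct qstats_shared *dst,
			   const struct qstats_local *src)
{
	atomic_store_explicit(&dst->qlen, src->qlen, memory_order_relaxed);
	atomic_store_explicit(&dst->backlog, src->backlog, memory_order_relaxed);
	atomic_store_explicit(&dst->drops, src->drops, memory_order_relaxed);
}

int main(void)
{
	struct qstats_shared shared = { 0 };
	struct qstats_local local = { .qlen = 3, .backlog = 4500, .drops = 1 };

	qstats_publish(&shared, &local);
	return 0;
}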
* [PATCH net-next 8/8] net/sched: mq_prio: no longer acquire qdisc spinlocks in mqprio_dump_class_stats()
2026-05-07 22:19 [PATCH net-next 0/8] net/sched: prepare lockless qdisc dumps Eric Dumazet
` (6 preceding siblings ...)
2026-05-07 22:19 ` [PATCH net-next 7/8] net/sched: mq_prio: no longer acquire qdisc spinlocks in mqprio_dump() Eric Dumazet
@ 2026-05-07 22:19 ` Eric Dumazet
7 siblings, 0 replies; 12+ messages in thread
From: Eric Dumazet @ 2026-05-07 22:19 UTC (permalink / raw)
To: David S . Miller, Jakub Kicinski, Paolo Abeni
Cc: Simon Horman, Jamal Hadi Salim, Jiri Pirko, netdev, eric.dumazet,
Eric Dumazet
Prepare mqprio_dump_class_stats() for RTNL avoidance.
Use RCU instead of RTNL, and no longer acquire each child qdisc's spinlock.
As a bonus, we no longer have to release and re-acquire d->lock.
Signed-off-by: Eric Dumazet <edumazet@google.com>
---
net/sched/sch_mqprio.c | 37 +++++++++++++++----------------------
1 file changed, 15 insertions(+), 22 deletions(-)
diff --git a/net/sched/sch_mqprio.c b/net/sched/sch_mqprio.c
index 37756932d4917caa7c3b96dff1999e30623fe953..6eb1db7b5d67548643b3e84f254cc1e034d1e6c7 100644
--- a/net/sched/sch_mqprio.c
+++ b/net/sched/sch_mqprio.c
@@ -672,9 +672,9 @@ static int mqprio_dump_class(struct Qdisc *sch, unsigned long cl,
static int mqprio_dump_class_stats(struct Qdisc *sch, unsigned long cl,
struct gnet_dump *d)
- __releases(d->lock)
- __acquires(d->lock)
{
+ const struct Qdisc *qdisc;
+
if (cl >= TC_H_MIN_PRIORITY) {
struct net_device *dev = qdisc_dev(sch);
struct netdev_tc_txq tc = dev->tc_to_txq[cl & TC_BITMASK];
@@ -684,44 +684,37 @@ static int mqprio_dump_class_stats(struct Qdisc *sch, unsigned long cl,
int i;
gnet_stats_basic_sync_init(&bstats);
- /* Drop lock here it will be reclaimed before touching
- * statistics this is required because the d->lock we
- * hold here is the look on dev_queue->qdisc_sleeping
- * also acquired below.
- */
- if (d->lock)
- spin_unlock_bh(d->lock);
+ rcu_read_lock();
for (i = tc.offset; i < tc.offset + tc.count; i++) {
struct netdev_queue *q = netdev_get_tx_queue(dev, i);
- struct Qdisc *qdisc = rtnl_dereference(q->qdisc);
-
- spin_lock_bh(qdisc_lock(qdisc));
+ qdisc = rcu_dereference(q->qdisc);
gnet_stats_add_basic(&bstats, qdisc->cpu_bstats,
&qdisc->bstats, false);
gnet_stats_add_queue(&qstats, qdisc->cpu_qstats,
&qdisc->qstats);
qlen += qdisc_qlen_lockless(qdisc);
-
- spin_unlock_bh(qdisc_lock(qdisc));
}
+ rcu_read_unlock();
+
qlen = qlen + qstats.qlen;
- /* Reclaim root sleeping lock before completing stats */
- if (d->lock)
- spin_lock_bh(d->lock);
if (gnet_stats_copy_basic(d, NULL, &bstats, false) < 0 ||
gnet_stats_copy_queue(d, NULL, &qstats, qlen) < 0)
return -1;
} else {
struct netdev_queue *dev_queue = mqprio_queue_get(sch, cl);
+ int res = 0;
- sch = rtnl_dereference(dev_queue->qdisc_sleeping);
- if (gnet_stats_copy_basic(d, sch->cpu_bstats,
- &sch->bstats, true) < 0 ||
+ rcu_read_lock();
+ qdisc = rcu_dereference(dev_queue->qdisc_sleeping);
+ if (gnet_stats_copy_basic(d, qdisc->cpu_bstats,
+ &qdisc->bstats, true) < 0 ||
- qdisc_qstats_copy(d, sch) < 0)
- return -1;
+ qdisc_qstats_copy(d, qdisc) < 0)
+ res = -1;
+ rcu_read_unlock();
+ return res;
}
return 0;
}
--
2.54.0.563.g4f69b47b94-goog
^ permalink raw reply related [flat|nested] 12+ messages in thread
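One detail worth calling out in the leaf-class branch above: with
rcu_read_lock() now bracketing the copy, the error can no longer be returned
from inside the critical section, so it is recorded in a local and returned
after the unlock. A generic sketch of that pattern (a pthread rwlock stands
in for the RCU read side here, and all names are illustrative):

#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t table_lock = PTHREAD_RWLOCK_INITIALIZER;

/* Toy stand-ins for the qdisc lookup and the stats copy. */
static const int toy_qdisc = 42;
static const int *lookup_qdisc(void) { return &toy_qdisc; }
static int copy_stats(const int *q) { return q ? 0 : -1; }

static int dump_leaf(void)
{
	int res = 0;

	pthread_rwlock_rdlock(&table_lock);
	if (copy_stats(lookup_qdisc()) < 0)
		res = -1;	/* record, do not return with the lock held */
	pthread_rwlock_unlock(&table_lock);

	return res;		/* every path releases the read side first */
}

int main(void)
{
	printf("dump_leaf() = %d\n", dump_leaf());
	return 0;
}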
* Re: [PATCH net-next 5/8] net/sched: add const qualifiers to gnet_stats helpers
2026-05-07 22:19 ` [PATCH net-next 5/8] net/sched: add const qualifiers to gnet_stats helpers Eric Dumazet
@ 2026-05-08 18:33 ` Victor Nogueira
2026-05-09 17:53 ` Eric Dumazet
0 siblings, 1 reply; 12+ messages in thread
From: Victor Nogueira @ 2026-05-08 18:33 UTC (permalink / raw)
To: Eric Dumazet, David S . Miller, Jakub Kicinski, Paolo Abeni
Cc: Simon Horman, Jamal Hadi Salim, Jiri Pirko, netdev, eric.dumazet
On 07/05/2026 19:19, Eric Dumazet wrote:
> In preparation of lockless qdisc dumps, add const qualifiers to:
>
> - gnet_stats_add_basic()
> - gnet_stats_copy_basic()
> - gnet_stats_copy_queue()
> - gnet_stats_read_basic()
> - ___gnet_stats_copy_basic()
>
> Signed-off-by: Eric Dumazet <edumazet@google.com>
> [...]
> diff --git a/net/core/gen_stats.c b/net/core/gen_stats.c
> index 1a2380e74272de8eaf3d4ef453e56105a31e9edf..3b2f9ea2eb072dde792aad5b60cf00dcc2efa76d 100644
> --- a/net/core/gen_stats.c
> +++ b/net/core/gen_stats.c
> @@ -124,7 +124,7 @@ void gnet_stats_basic_sync_init(struct gnet_stats_basic_sync *b)
> EXPORT_SYMBOL(gnet_stats_basic_sync_init);
>
> static void gnet_stats_add_basic_cpu(struct gnet_stats_basic_sync *bstats,
> - struct gnet_stats_basic_sync __percpu *cpu)
> + const struct gnet_stats_basic_sync __percpu *cpu)
This seems to be causing a compilation error:
net/core/gen_stats.c: In function ‘gnet_stats_add_basic_cpu’:
./include/linux/percpu-defs.h:238:1: error: initialization discards
‘const’ qualifier from pointer target type [-Werror=discarded-qualifiers]
238 | ({
\
| ^
net/core/gen_stats.c:133:54: note: in expansion of macro ‘per_cpu_ptr’
133 | struct gnet_stats_basic_sync *bcpu =
per_cpu_ptr(cpu, i);
cheers,
Victor
^ permalink raw reply [flat|nested] 12+ messages in thread
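The report reduces to a qualifier-propagation problem: per_cpu_ptr() hands
back a pointer derived from the now-const argument, and the local it is
assigned to in gnet_stats_add_basic_cpu() is still non-const. A standalone
reduction that triggers the same diagnostic on affected gcc versions (the
macro below is a toy stand-in, not the kernel's per_cpu_ptr()):

#include <stdio.h>

struct stats { unsigned long bytes; };

/* Toy stand-in for per_cpu_ptr(): plain pointer arithmetic. */
#define toy_per_cpu_ptr(ptr, cpu) (&(ptr)[cpu])

static unsigned long sum(const struct stats *percpu, int ncpus)
{
	unsigned long total = 0;

	for (int i = 0; i < ncpus; i++) {
		/* gcc -Werror=discarded-qualifiers rejects the non-const form:
		 * struct stats *s = toy_per_cpu_ptr(percpu, i);
		 * keeping the local const avoids the error: */
		const struct stats *s = toy_per_cpu_ptr(percpu, i);
		total += s->bytes;
	}
	return total;
}

int main(void)
{
	struct stats cpus[2] = { { 100 }, { 200 } };

	printf("%lu\n", sum(cpus, 2));
	return 0;
}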
* Re: [PATCH net-next 5/8] net/sched: add const qualifiers to gnet_stats helpers
2026-05-08 18:33 ` Victor Nogueira
@ 2026-05-09 17:53 ` Eric Dumazet
2026-05-09 21:03 ` Victor Nogueira
0 siblings, 1 reply; 12+ messages in thread
From: Eric Dumazet @ 2026-05-09 17:53 UTC (permalink / raw)
To: Victor Nogueira
Cc: David S . Miller, Jakub Kicinski, Paolo Abeni, Simon Horman,
Jamal Hadi Salim, Jiri Pirko, netdev, eric.dumazet
On Fri, May 8, 2026 at 11:33 AM Victor Nogueira <victor@mojatatu.com> wrote:
>
> On 07/05/2026 19:19, Eric Dumazet wrote:
> > In preparation of lockless qdisc dumps, add const qualifiers to:
> >
> > - gnet_stats_add_basic()
> > - gnet_stats_copy_basic()
> > - gnet_stats_copy_queue()
> > - gnet_stats_read_basic()
> > - ___gnet_stats_copy_basic()
> >
> > Signed-off-by: Eric Dumazet <edumazet@google.com>
> > [...]
> > diff --git a/net/core/gen_stats.c b/net/core/gen_stats.c
> > index 1a2380e74272de8eaf3d4ef453e56105a31e9edf..3b2f9ea2eb072dde792aad5b60cf00dcc2efa76d 100644
> > --- a/net/core/gen_stats.c
> > +++ b/net/core/gen_stats.c
> > @@ -124,7 +124,7 @@ void gnet_stats_basic_sync_init(struct gnet_stats_basic_sync *b)
> > EXPORT_SYMBOL(gnet_stats_basic_sync_init);
> >
> > static void gnet_stats_add_basic_cpu(struct gnet_stats_basic_sync *bstats,
> > - struct gnet_stats_basic_sync __percpu *cpu)
> > + const struct gnet_stats_basic_sync __percpu *cpu)
>
> This seems to be causing a compilation error:
>
> net/core/gen_stats.c: In function ‘gnet_stats_add_basic_cpu’:
> ./include/linux/percpu-defs.h:238:1: error: initialization discards
> ‘const’ qualifier from pointer target type [-Werror=discarded-qualifiers]
> 238 | ({
> \
> | ^
> net/core/gen_stats.c:133:54: note: in expansion of macro ‘per_cpu_ptr’
> 133 | struct gnet_stats_basic_sync *bcpu =
> per_cpu_ptr(cpu, i);
Interesting, no error on my side with gcc or clang.
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH net-next 5/8] net/sched: add const qualifiers to gnet_stats helpers
2026-05-09 17:53 ` Eric Dumazet
@ 2026-05-09 21:03 ` Victor Nogueira
0 siblings, 0 replies; 12+ messages in thread
From: Victor Nogueira @ 2026-05-09 21:03 UTC (permalink / raw)
To: Eric Dumazet
Cc: David S . Miller, Jakub Kicinski, Paolo Abeni, Simon Horman,
Jamal Hadi Salim, Jiri Pirko, netdev, eric.dumazet
On Sat, May 9, 2026 at 2:53 PM Eric Dumazet <edumazet@google.com> wrote:
>
> On Fri, May 8, 2026 at 11:33 AM Victor Nogueira <victor@mojatatu.com> wrote:
> >
> > On 07/05/2026 19:19, Eric Dumazet wrote:
> > > In preparation of lockless qdisc dumps, add const qualifiers to:
> > >
> > > - gnet_stats_add_basic()
> > > - gnet_stats_copy_basic()
> > > - gnet_stats_copy_queue()
> > > - gnet_stats_read_basic()
> > > - ___gnet_stats_copy_basic()
> > >
> > > Signed-off-by: Eric Dumazet <edumazet@google.com>
> > > [...]
> > > diff --git a/net/core/gen_stats.c b/net/core/gen_stats.c
> > > index 1a2380e74272de8eaf3d4ef453e56105a31e9edf..3b2f9ea2eb072dde792aad5b60cf00dcc2efa76d 100644
> > > --- a/net/core/gen_stats.c
> > > +++ b/net/core/gen_stats.c
> > > @@ -124,7 +124,7 @@ void gnet_stats_basic_sync_init(struct gnet_stats_basic_sync *b)
> > > EXPORT_SYMBOL(gnet_stats_basic_sync_init);
> > >
> > > static void gnet_stats_add_basic_cpu(struct gnet_stats_basic_sync *bstats,
> > > - struct gnet_stats_basic_sync __percpu *cpu)
> > > + const struct gnet_stats_basic_sync __percpu *cpu)
> >
> > This seems to be causing a compilation error:
> >
> > net/core/gen_stats.c: In function ‘gnet_stats_add_basic_cpu’:
> > ./include/linux/percpu-defs.h:238:1: error: initialization discards
> > ‘const’ qualifier from pointer target type [-Werror=discarded-qualifiers]
> > 238 | ({
> > \
> > | ^
> > net/core/gen_stats.c:133:54: note: in expansion of macro ‘per_cpu_ptr’
> > 133 | struct gnet_stats_basic_sync *bcpu =
> > per_cpu_ptr(cpu, i);
>
> Interesting, no error on my side wth gcc or clang.
Yes, it's weird.
I can reproduce with the gcc versions below:
gcc (Ubuntu 13.3.0-6ubuntu2~24.04.1) 13.3.0
gcc (Ubuntu 11.4.0-1ubuntu1~22.04.3) 11.4.0
However, with gcc 15 it's not reproducing;
there's probably been a change in gcc
that masks the issue now.
cheers,
Victor
^ permalink raw reply [flat|nested] 12+ messages in thread
end of thread
Thread overview: 12+ messages
2026-05-07 22:19 [PATCH net-next 0/8] net/sched: prepare lockless qdisc dumps Eric Dumazet
2026-05-07 22:19 ` [PATCH net-next 1/8] net/sched: add READ_ONCE() in gnet_stats_add_queue[_cpu] Eric Dumazet
2026-05-07 22:19 ` [PATCH net-next 2/8] net/sched: add qdisc_qlen_inc() and qdisc_qlen_dec() Eric Dumazet
2026-05-07 22:19 ` [PATCH net-next 3/8] net/sched: annotate data-races around sch->qstats.backlog Eric Dumazet
2026-05-07 22:19 ` [PATCH net-next 4/8] net/sched: add qdisc_qlen_lockless() helper Eric Dumazet
2026-05-07 22:19 ` [PATCH net-next 5/8] net/sched: add const qualifiers to gnet_stats helpers Eric Dumazet
2026-05-08 18:33 ` Victor Nogueira
2026-05-09 17:53 ` Eric Dumazet
2026-05-09 21:03 ` Victor Nogueira
2026-05-07 22:19 ` [PATCH net-next 6/8] net/sched: mq: no longer acquire qdisc spinlocks in dump operations Eric Dumazet
2026-05-07 22:19 ` [PATCH net-next 7/8] net/sched: mq_prio: no longer acquire qdisc spinlocks in mqprio_dump() Eric Dumazet
2026-05-07 22:19 ` [PATCH net-next 8/8] net/sched: mq_prio: no longer acquire qdisc spinlocks in mqprio_dump_class_stats() Eric Dumazet