From: Eric Dumazet <edumazet@google.com>
To: "David S . Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>,
	 Paolo Abeni <pabeni@redhat.com>
Cc: "Simon Horman" <horms@kernel.org>,
	"Jamal Hadi Salim" <jhs@mojatatu.com>,
	"Victor Nogueira" <victor@mojatatu.com>,
	"Jiri Pirko" <jiri@resnulli.us>,
	"Ido Schimmel" <idosch@nvidia.com>,
	"David Ahern" <dsahern@kernel.org>,
	"Toke Høiland-Jørgensen" <toke@toke.dk>,
	netdev@vger.kernel.org, eric.dumazet@gmail.com,
	"Eric Dumazet" <edumazet@google.com>
Subject: [PATCH net-next 1/3] net/sched: qdisc_qstats_qlen_backlog() runs locklessly
Date: Wed, 13 May 2026 08:08:51 +0000
Message-ID: <20260513080853.1383975-2-edumazet@google.com>
In-Reply-To: <20260513080853.1383975-1-edumazet@google.com>

qdisc_qstats_qlen_backlog() can be called without the qdisc spinlock being held.

Use qdisc_qlen_lockless() instead of qdisc_qlen().

Add a const qualifier to its first parameter (struct Qdisc *sch).
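
For reference, a minimal sketch of what the lockless helper is assumed to
look like (qdisc_qlen_lockless() is taken here to be the READ_ONCE() based
counterpart of qdisc_qlen(); the helper itself is not shown in this patch):

	/* Sketch only: read qlen without holding the qdisc spinlock.
	 * READ_ONCE() avoids a data race with writers updating
	 * sch->q.qlen under the qdisc lock.
	 */
	static inline int qdisc_qlen_lockless(const struct Qdisc *sch)
	{
		return READ_ONCE(sch->q.qlen);
	}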

Fixes: edb09eb17ed8 ("net: sched: do not acquire qdisc spinlock in qdisc/class stats dump")
Signed-off-by: Eric Dumazet <edumazet@google.com>
---
 include/net/sch_generic.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index 9e6ed92729d282642e17a72cc578f25e1d22e4d9..d0ca932b1871c086f0ba7aad11241742bba0a9d8 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -1070,13 +1070,13 @@ static inline int qdisc_qstats_copy(struct gnet_dump *d, const struct Qdisc *sch
 	return gnet_stats_copy_queue(d, sch->cpu_qstats, &sch->qstats, qlen);
 }
 
-static inline void qdisc_qstats_qlen_backlog(struct Qdisc *sch,  __u32 *qlen,
-					     __u32 *backlog)
+static inline void qdisc_qstats_qlen_backlog(const struct Qdisc *sch,
+					     u32 *qlen, u32 *backlog)
 {
 	struct gnet_stats_queue qstats = { 0 };
 
 	gnet_stats_add_queue(&qstats, sch->cpu_qstats, &sch->qstats);
-	*qlen = qstats.qlen + qdisc_qlen(sch);
+	*qlen = qstats.qlen + qdisc_qlen_lockless(sch);
 	*backlog = qstats.backlog;
 }
 
-- 
2.54.0.563.g4f69b47b94-goog

