Date: Wed, 8 Apr 2026 12:56:01 +0000
In-Reply-To: <20260408125611.3592751-1-edumazet@google.com>
X-Mailing-List: netdev@vger.kernel.org
Mime-Version: 1.0
References:
<20260408125611.3592751-1-edumazet@google.com>
X-Mailer: git-send-email 2.53.0.1213.gd9a14994de-goog
Message-ID: <20260408125611.3592751-6-edumazet@google.com>
Subject: [PATCH net-next 05/15] net/sched: annotate data-races around sch->qstats.backlog
From: Eric Dumazet
To: "David S . Miller", Jakub Kicinski, Paolo Abeni
Cc: Simon Horman, Jamal Hadi Salim, Jiri Pirko, Kuniyuki Iwashima,
 netdev@vger.kernel.org, eric.dumazet@gmail.com, Eric Dumazet
Content-Type: text/plain; charset="UTF-8"

Add qstats_backlog_sub() and qstats_backlog_add() helpers and use them
instead of open-coded updates of sch->qstats.backlog.

These helpers use WRITE_ONCE() to prevent store-tearing.

Signed-off-by: Eric Dumazet
---
 include/net/sch_generic.h | 17 ++++++++++++++---
 net/sched/sch_api.c       |  2 +-
 net/sched/sch_cake.c      |  8 ++++----
 net/sched/sch_cbs.c       |  2 +-
 net/sched/sch_codel.c     |  2 +-
 net/sched/sch_drr.c       |  2 +-
 net/sched/sch_ets.c       |  2 +-
 net/sched/sch_fq_codel.c  |  4 ++--
 net/sched/sch_fq_pie.c    |  4 ++--
 net/sched/sch_hfsc.c      |  2 +-
 net/sched/sch_htb.c       |  2 +-
 net/sched/sch_netem.c     |  2 +-
 net/sched/sch_prio.c      |  2 +-
 net/sched/sch_qfq.c       |  2 +-
 net/sched/sch_red.c       |  2 +-
 net/sched/sch_sfb.c       |  4 ++--
 net/sched/sch_sfq.c       |  2 +-
 net/sched/sch_tbf.c       |  4 ++--
 18 files changed, 38 insertions(+), 27 deletions(-)

diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index 84acf7ac42cb5173d151d98fa0ea603e9cc80d69..8e4e12692a1048f5e626b89c4db9e0339b16265d 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -965,10 +965,15 @@ static inline void qdisc_bstats_update(struct Qdisc *sch,
 	bstats_update(&sch->bstats, skb);
 }
 
+static inline void qstats_backlog_sub(struct Qdisc *sch, u32 val)
+{
+	WRITE_ONCE(sch->qstats.backlog, sch->qstats.backlog - val);
+}
+
 static inline void qdisc_qstats_backlog_dec(struct Qdisc *sch,
 					    const struct sk_buff *skb)
 {
-	sch->qstats.backlog -= qdisc_pkt_len(skb);
+	qstats_backlog_sub(sch, qdisc_pkt_len(skb));
 }
 
 static inline void qdisc_qstats_cpu_backlog_dec(struct Qdisc *sch,
@@ -977,10 +982,15 @@ static inline void qdisc_qstats_cpu_backlog_dec(struct Qdisc *sch,
 	this_cpu_sub(sch->cpu_qstats->backlog, qdisc_pkt_len(skb));
 }
 
+static inline void qstats_backlog_add(struct Qdisc *sch, u32 val)
+{
+	WRITE_ONCE(sch->qstats.backlog, sch->qstats.backlog + val);
+}
+
 static inline void qdisc_qstats_backlog_inc(struct Qdisc *sch,
 					    const struct sk_buff *skb)
 {
-	sch->qstats.backlog += qdisc_pkt_len(skb);
+	qstats_backlog_add(sch, qdisc_pkt_len(skb));
 }
 
 static inline void qdisc_qstats_cpu_backlog_inc(struct Qdisc *sch,
@@ -1294,7 +1304,8 @@ static inline void qdisc_update_stats_at_enqueue(struct Qdisc *sch,
 		qdisc_qstats_cpu_qlen_inc(sch);
 		this_cpu_add(sch->cpu_qstats->backlog, pkt_len);
 	} else {
-		sch->qstats.backlog += pkt_len;
+		WRITE_ONCE(sch->qstats.backlog,
+			   sch->qstats.backlog + pkt_len);
 		qdisc_qlen_inc(sch);
 	}
 }
diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
index 0dd3efd86393870e9695dddb4a471c5bf854f81e..292bc8bb7a79922a83865ed54083c04ff72742ff 100644
--- a/net/sched/sch_api.c
+++ b/net/sched/sch_api.c
@@ -806,7 +806,7 @@ void qdisc_tree_reduce_backlog(struct Qdisc *sch, int n, int len)
 			cops->qlen_notify(sch, cl);
 		}
 		WRITE_ONCE(sch->q.qlen, sch->q.qlen - n);
-		sch->qstats.backlog -= len;
+		qstats_backlog_sub(sch, len);
 		__qdisc_qstats_drop(sch, drops);
 	}
 	rcu_read_unlock();
diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
index 0a8b067ba8ecbb85bd6f96ee9e5e959ba5e2efae..0104c29b20f8e43ffa025f0eb58bfe4e2b801010 100644
--- a/net/sched/sch_cake.c
+++ b/net/sched/sch_cake.c
@@ -1596,7 +1596,7 @@ static unsigned int cake_drop(struct Qdisc *sch, struct sk_buff **to_free)
 	q->buffer_used      -= skb->truesize;
 	b->backlogs[idx]    -= len;
 	b->tin_backlog      -= len;
-	sch->qstats.backlog -= len;
+	qstats_backlog_sub(sch, len);
 	flow->dropped++;
 	b->tin_dropped++;
 
@@ -1826,7 +1826,7 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 		b->bytes	    += slen;
 		b->backlogs[idx]    += slen;
 		b->tin_backlog	    += slen;
-		sch->qstats.backlog += slen;
+		qstats_backlog_add(sch, slen);
 		q->avg_window_bytes += slen;
 
 		qdisc_tree_reduce_backlog(sch, 1-numsegs, len-slen);
@@ -1863,7 +1863,7 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 			b->bytes		+= len - ack_pkt_len;
 			b->backlogs[idx]	+= len - ack_pkt_len;
 			b->tin_backlog		+= len - ack_pkt_len;
-			sch->qstats.backlog	+= len - ack_pkt_len;
+			qstats_backlog_add(sch, len - ack_pkt_len);
 			q->avg_window_bytes	+= len - ack_pkt_len;
 		}
 
@@ -1978,7 +1978,7 @@ static struct sk_buff *cake_dequeue_one(struct Qdisc *sch)
 		len = qdisc_pkt_len(skb);
 		b->backlogs[q->cur_flow] -= len;
 		b->tin_backlog		 -= len;
-		sch->qstats.backlog	 -= len;
+		qstats_backlog_sub(sch, len);
 		q->buffer_used		 -= skb->truesize;
 		qdisc_qlen_dec(sch);
 
diff --git a/net/sched/sch_cbs.c b/net/sched/sch_cbs.c
index a75e58876797952f2218725f6da5cff29f330ae2..2cfa0fd92829ad7eba7454e09dc17eb8f22519b8 100644
--- a/net/sched/sch_cbs.c
+++ b/net/sched/sch_cbs.c
@@ -96,7 +96,7 @@ static int cbs_child_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 	if (err != NET_XMIT_SUCCESS)
 		return err;
 
-	sch->qstats.backlog += len;
+	qstats_backlog_add(sch, len);
 	qdisc_qlen_inc(sch);
 
 	return NET_XMIT_SUCCESS;
diff --git a/net/sched/sch_codel.c b/net/sched/sch_codel.c
index dc2be90666ffbd715550800790a0091acd28701d..0a0d07bc7055d29e56ef58c89a1b079967307177 100644
--- a/net/sched/sch_codel.c
+++ b/net/sched/sch_codel.c
@@ -42,7 +42,7 @@ static struct sk_buff *dequeue_func(struct codel_vars *vars, void *ctx)
 	struct sk_buff *skb = __qdisc_dequeue_head(&sch->q);
 
 	if (skb) {
-		sch->qstats.backlog -= qdisc_pkt_len(skb);
+		qstats_backlog_sub(sch, qdisc_pkt_len(skb));
 		prefetch(&skb->end); /* we'll need skb_shinfo() */
 	}
 	return skb;
diff --git a/net/sched/sch_drr.c b/net/sched/sch_drr.c
index 925fa0cfd730ce72e45e8983ba02eb913afb1235..3f6687fa9666257952be5d44f9e3460845fe2a40 100644
--- a/net/sched/sch_drr.c
+++ b/net/sched/sch_drr.c
@@ -365,7 +365,7 @@ static int drr_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 		cl->deficit = cl->quantum;
 	}
 
-	sch->qstats.backlog += len;
+	qstats_backlog_add(sch, len);
 	qdisc_qlen_inc(sch);
 	return err;
 }
diff --git a/net/sched/sch_ets.c b/net/sched/sch_ets.c
index c817e0a6c14653a35f5ebb9de1a5ccc44d1a2f98..1cc559634ed27ce5a6630186a51a8ac8180dad96 100644
--- a/net/sched/sch_ets.c
+++ b/net/sched/sch_ets.c
@@ -448,7 +448,7 @@ static int ets_qdisc_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 		cl->deficit = cl->quantum;
 	}
 
-	sch->qstats.backlog += len;
+	qstats_backlog_add(sch, len);
 	qdisc_qlen_inc(sch);
 	return err;
 }
diff --git a/net/sched/sch_fq_codel.c b/net/sched/sch_fq_codel.c
index 183b1ea9d2076d8f709d50a38b39d28a2b14bad8..5c88278a30b3cd92ae525d1a88eb692ff0e296e0 100644
--- a/net/sched/sch_fq_codel.c
+++ b/net/sched/sch_fq_codel.c
@@ -177,7 +177,7 @@ static unsigned int fq_codel_drop(struct Qdisc *sch, unsigned int max_packets,
 	q->backlogs[idx] -= len;
 	q->memory_usage -= mem;
 	sch->qstats.drops += i;
-	sch->qstats.backlog -= len;
+	qstats_backlog_sub(sch, len);
 	WRITE_ONCE(sch->q.qlen, sch->q.qlen - i);
 	return idx;
 }
@@ -267,7 +267,7 @@ static struct sk_buff *dequeue_func(struct codel_vars *vars, void *ctx)
 		q->backlogs[flow - q->flows] -= qdisc_pkt_len(skb);
 		q->memory_usage -= get_codel_cb(skb)->mem_usage;
 		qdisc_qlen_dec(sch);
-		sch->qstats.backlog -= qdisc_pkt_len(skb);
+		qdisc_qstats_backlog_dec(sch, skb);
 	}
 	return skb;
 }
diff --git a/net/sched/sch_fq_pie.c b/net/sched/sch_fq_pie.c
index dba49d44a5d2412b2deb983bf87428ade7944e51..197f0df0a6eb06ab4ce25eefe01d32a35dbd84af 100644
--- a/net/sched/sch_fq_pie.c
+++ b/net/sched/sch_fq_pie.c
@@ -184,7 +184,7 @@ static int fq_pie_qdisc_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 	pkt_len = qdisc_pkt_len(skb);
 	q->stats.packets_in++;
 	q->memory_usage += skb->truesize;
-	sch->qstats.backlog += pkt_len;
+	qstats_backlog_add(sch, pkt_len);
 	qdisc_qlen_inc(sch);
 	flow_queue_add(sel_flow, skb);
 	if (list_empty(&sel_flow->flowchain)) {
@@ -262,7 +262,7 @@ static struct sk_buff *fq_pie_qdisc_dequeue(struct Qdisc *sch)
 	if (flow->head) {
 		skb = dequeue_head(flow);
 		pkt_len = qdisc_pkt_len(skb);
-		sch->qstats.backlog -= pkt_len;
+		qstats_backlog_sub(sch, pkt_len);
 		qdisc_qlen_dec(sch);
 		qdisc_bstats_update(sch, skb);
 	}
diff --git a/net/sched/sch_hfsc.c b/net/sched/sch_hfsc.c
index e71a565100edf60881ca7542faa408c5bb1a0984..d61377e8551bcf60c8a47816112114684efe5e0b 100644
--- a/net/sched/sch_hfsc.c
+++ b/net/sched/sch_hfsc.c
@@ -1560,7 +1560,7 @@ hfsc_enqueue(struct sk_buff *skb, struct Qdisc *sch, struct sk_buff **to_free)
 		return err;
 	}
 
-	sch->qstats.backlog += len;
+	qstats_backlog_add(sch, len);
 	qdisc_qlen_inc(sch);
 
 	if (first && !cl_in_el_or_vttree(cl)) {
diff --git a/net/sched/sch_htb.c b/net/sched/sch_htb.c
index c22ccd8eae8c73323ccdf425e62857b3b851d74e..1e600f65c8769a74286c4f060b0d45da9a13eeeb 100644
--- a/net/sched/sch_htb.c
+++ b/net/sched/sch_htb.c
@@ -650,7 +650,7 @@ static int htb_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 		htb_activate(q, cl);
 	}
 
-	sch->qstats.backlog += len;
+	qstats_backlog_add(sch, len);
 	qdisc_qlen_inc(sch);
 	return NET_XMIT_SUCCESS;
 }
diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
index 4498dd440a02ea7a089c92ebc005d5064b87e2d2..2a2cdd1e4cc206ba00b8dd1821bef87156050950 100644
--- a/net/sched/sch_netem.c
+++ b/net/sched/sch_netem.c
@@ -751,7 +751,7 @@ static struct sk_buff *netem_dequeue(struct Qdisc *sch)
 			if (err != NET_XMIT_SUCCESS) {
 				if (net_xmit_drop_count(err))
 					qdisc_qstats_drop(sch);
-				sch->qstats.backlog -= pkt_len;
+				qstats_backlog_sub(sch, pkt_len);
 				qdisc_qlen_dec(sch);
 				qdisc_tree_reduce_backlog(sch, 1, pkt_len);
 			}
diff --git a/net/sched/sch_prio.c b/net/sched/sch_prio.c
index fe42ae3d6b696b2fc47f4d397af32e950eeec194..e4dd56a890725b4c14d6715c96f5b3fa44a8f4f2 100644
--- a/net/sched/sch_prio.c
+++ b/net/sched/sch_prio.c
@@ -85,7 +85,7 @@ prio_enqueue(struct sk_buff *skb, struct Qdisc *sch, struct sk_buff **to_free)
 
 	ret = qdisc_enqueue(skb, qdisc, to_free);
 	if (ret == NET_XMIT_SUCCESS) {
-		sch->qstats.backlog += len;
+		qstats_backlog_add(sch, len);
 		qdisc_qlen_inc(sch);
 		return NET_XMIT_SUCCESS;
 	}
diff --git a/net/sched/sch_qfq.c b/net/sched/sch_qfq.c
index 195c434aae5f7e03d1a1238ed73bb64b3f04e105..cb56787e1d258c06f2e86959c3b2cfaeb12df1ac 100644
--- a/net/sched/sch_qfq.c
+++ b/net/sched/sch_qfq.c
@@ -1264,7 +1264,7 @@ static int qfq_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 	}
 
 	_bstats_update(&cl->bstats, len, gso_segs);
-	sch->qstats.backlog += len;
+	qstats_backlog_add(sch, len);
 	qdisc_qlen_inc(sch);
 
 	agg = cl->agg;
diff --git a/net/sched/sch_red.c b/net/sched/sch_red.c
index 61b9064d39f222bdfe5021e93e8172b7ae60c408..7db97c96351309bc3e64fa50570a1928f2b2ce55 100644
--- a/net/sched/sch_red.c
+++ b/net/sched/sch_red.c
@@ -132,7 +132,7 @@ static int red_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 	len = qdisc_pkt_len(skb);
 	ret = qdisc_enqueue(skb, child, to_free);
 	if (likely(ret == NET_XMIT_SUCCESS)) {
-		sch->qstats.backlog += len;
+		qstats_backlog_add(sch, len);
 		qdisc_qlen_inc(sch);
 	} else if (net_xmit_drop_count(ret)) {
 		q->stats.pdrop++;
diff --git a/net/sched/sch_sfb.c b/net/sched/sch_sfb.c
index 17b6ce223ad3a6f2d289c3ebe27cce8168c66b2b..2258567cbcaf70863eace85d347efda882a00145 100644
--- a/net/sched/sch_sfb.c
+++ b/net/sched/sch_sfb.c
@@ -406,7 +406,7 @@ static int sfb_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 	memcpy(&cb, sfb_skb_cb(skb), sizeof(cb));
 	ret = qdisc_enqueue(skb, child, to_free);
 	if (likely(ret == NET_XMIT_SUCCESS)) {
-		sch->qstats.backlog += len;
+		qstats_backlog_add(sch, len);
 		qdisc_qlen_inc(sch);
 		increment_qlen(&cb, q);
 	} else if (net_xmit_drop_count(ret)) {
@@ -582,7 +582,7 @@ static int sfb_dump(struct Qdisc *sch, struct sk_buff *skb)
 		.penalty_burst = q->penalty_burst,
 	};
 
-	sch->qstats.backlog = q->qdisc->qstats.backlog;
+	WRITE_ONCE(sch->qstats.backlog, READ_ONCE(q->qdisc->qstats.backlog));
 	opts = nla_nest_start_noflag(skb, TCA_OPTIONS);
 	if (opts == NULL)
 		goto nla_put_failure;
diff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c
index 5eb6d8abd1c334938f72259f5fc41526597e792f..37410dfd7e3c554d3a8180b4887bdd5872ba8aab 100644
--- a/net/sched/sch_sfq.c
+++ b/net/sched/sch_sfq.c
@@ -425,7 +425,7 @@ sfq_enqueue(struct sk_buff *skb, struct Qdisc *sch, struct sk_buff **to_free)
 		/* We know we have at least one packet in queue */
 		head = slot_dequeue_head(slot);
 		delta = qdisc_pkt_len(head) - qdisc_pkt_len(skb);
-		sch->qstats.backlog -= delta;
+		qstats_backlog_sub(sch, delta);
 		slot->backlog -= delta;
 		qdisc_drop_reason(head, sch, to_free,
 				  QDISC_DROP_FLOW_LIMIT);
diff --git a/net/sched/sch_tbf.c b/net/sched/sch_tbf.c
index 25edf11a7d671fe63878b0995998c5920b86ef74..67c7aaaf8f607e82ad13b7fdf177405a1dd075bb 100644
--- a/net/sched/sch_tbf.c
+++ b/net/sched/sch_tbf.c
@@ -232,7 +232,7 @@ static int tbf_segment(struct sk_buff *skb, struct Qdisc *sch,
 		}
 	}
 	WRITE_ONCE(sch->q.qlen, sch->q.qlen + nb);
-	sch->qstats.backlog += len;
+	qstats_backlog_add(sch, len);
 	if (nb > 0) {
 		qdisc_tree_reduce_backlog(sch, 1 - nb, prev_len - len);
 		consume_skb(skb);
@@ -263,7 +263,7 @@ static int tbf_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 		return ret;
 	}
 
-	sch->qstats.backlog += len;
+	qstats_backlog_add(sch, len);
 	qdisc_qlen_inc(sch);
 	return NET_XMIT_SUCCESS;
 }
-- 
2.53.0.1213.gd9a14994de-goog