From: Jakub Kicinski <kuba@kernel.org>
To: edumazet@google.com
Cc: netdev@vger.kernel.org, davem@davemloft.net, pabeni@redhat.com,
Jakub Kicinski <kuba@kernel.org>,
jhs@mojatatu.com, xiyou.wangcong@gmail.com, jiri@resnulli.us
Subject: [PATCH net] net_sched: sch_fq: don't follow the fast path if Tx is behind now
Date: Fri, 22 Nov 2024 08:21:08 -0800
Message-ID: <20241122162108.2697803-1-kuba@kernel.org>

Recent kernels cause a lot of TCP retransmissions:

[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  2.24 GBytes  19.2 Gbits/sec  2767    442 KBytes
[  5]   1.00-2.00   sec  2.23 GBytes  19.1 Gbits/sec  2312    350 KBytes
                                                      ^^^^

Replacing the qdisc with pfifo makes the retransmissions go away.

It appears that a flow may get throttled with a very near unthrottle
time. Later we may get busy processing Rx and the unthrottle time
will pass, but we won't service Tx since the core is busy with Rx.
If Rx sees an ACK and we try to push more data for the throttled flow
we may fastpath the skb, not realizing that there are already "ready
to send" packets for this flow sitting in the qdisc.

At least this is my theory of what happens.

Don't trust the fastpath if we are "behind" according to the projected
unthrottle time for some flow waiting in the Qdisc.
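
To make the ordering hazard concrete, here is a minimal standalone
sketch (plain userspace C, not kernel code; the struct is a simplified
stand-in for fq_sched_data and the field names are borrowed only for
illustration) of the guard added below:

/* build with: gcc -Wall -o behind behind.c */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct fq_state {
	unsigned int throttled_flows;	 /* flows currently parked as throttled */
	uint64_t time_next_delayed_flow; /* earliest projected unthrottle time */
};

static bool fastpath_allowed(const struct fq_state *q, uint64_t now)
{
	/* We are "behind": some throttled flow should already have
	 * been serviced, so new skbs must take the normal path to
	 * preserve per-flow ordering.
	 */
	if (q->throttled_flows && q->time_next_delayed_flow <= now)
		return false;
	return true;
}

int main(void)
{
	struct fq_state q = { .throttled_flows = 1,
			      .time_next_delayed_flow = 1000 };

	printf("now=900:  fastpath %s\n",
	       fastpath_allowed(&q, 900) ? "ok" : "denied");
	printf("now=1100: fastpath %s\n",
	       fastpath_allowed(&q, 1100) ? "ok" : "denied");
	return 0;
}
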
Qdisc config:
qdisc fq 8001: dev eth0 parent 1234:1 limit 10000p flow_limit 100p \
buckets 32768 orphan_mask 1023 bands 3 \
priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1 \
weights 589824 196608 65536 quantum 3028b initial_quantum 15140b \
low_rate_threshold 550Kbit \
refill_delay 40ms timer_slack 10us horizon 10s horizon_drop

For iperf this change seems to do fine; the reordering is gone.
The fastpath still gets used most of the time:
gc 0 highprio 0 fastpath 142614 throttled 418309 latency 19.1us
xx_behind 2731
where "xx_behind" counts how many times we hit the new return false.
Fixes: 076433bd78d7 ("net_sched: sch_fq: add fast path for mostly idle qdisc")
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
CC: jhs@mojatatu.com
CC: xiyou.wangcong@gmail.com
CC: jiri@resnulli.us
---
net/sched/sch_fq.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/net/sched/sch_fq.c b/net/sched/sch_fq.c
index 19a49af5a9e5..3d932b262159 100644
--- a/net/sched/sch_fq.c
+++ b/net/sched/sch_fq.c
@@ -331,6 +331,12 @@ static bool fq_fastpath_check(const struct Qdisc *sch, struct sk_buff *skb,
 		 */
 		if (q->internal.qlen >= 8)
 			return false;
+
+		/* Ordering invariants fall apart if some throttled flows
+		 * are ready but we haven't serviced them, yet.
+		 */
+		if (q->throttled_flows && q->time_next_delayed_flow <= now)
+			return false;
 	}
 
 	sk = skb->sk;
--
2.47.0