* [PATCH net-next 0/2] Two small fq_codel optimizations
From: Dave Taht @ 2019-08-03 23:37 UTC
To: netdev; +Cc: Dave Taht
These two patches improve fq_codel performance
under extreme network loads. The first patch
more rapidly escalates the codel count under
overload; the second just kills a totally useless
statistic.
(sent together because they'd otherwise conflict)
Signed-off-by: Dave Taht <dave.taht@gmail.com>
Dave Taht (2):
Increase fq_codel count in the bulk dropper
fq_codel: Kill useless per-flow dropped statistic
net/sched/sch_fq_codel.c | 10 +++-------
1 file changed, 3 insertions(+), 7 deletions(-)
--
2.17.1
* [PATCH net-next 1/2] Increase fq_codel count in the bulk dropper
From: Dave Taht @ 2019-08-03 23:37 UTC
To: netdev; +Cc: Dave Taht
In the field, fq_codel is often used with a smaller memory or
packet limit than the default, and when the bulk dropper is hit,
the drop pattern bifurcates into one that increases the codel drop
rate too slowly and hits the bulk dropper more often than it should.
The scan through the 1024 queues then happens more often than it needs to.
This patch increases the codel count in the bulk dropper, but
does not change the drop rate there, relying on the next codel round
to deliver the next packet at the original drop rate
(after that burst of loss) and then escalate to a higher signaling rate.
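
For context: codel schedules each successive drop interval/sqrt(count)
after the previous one, so bumping the count shortens the inter-drop gap
on the following rounds. The standalone sketch below is an illustration
only, not kernel code, and assumes the 100 ms default interval; it just
shows how the signaling rate scales with the count:

#include <math.h>
#include <stdio.h>

/* CoDel control law: the next drop is scheduled interval/sqrt(count)
 * after the previous one, so a larger count means a stronger signal. */
static double next_drop_delay_ms(double interval_ms, unsigned int count)
{
	return interval_ms / sqrt((double)count);
}

int main(void)
{
	const double interval_ms = 100.0;	/* assumed default interval */
	const unsigned int counts[] = { 1, 4, 16, 64 };

	for (unsigned int i = 0; i < sizeof(counts) / sizeof(counts[0]); i++)
		printf("count=%2u -> next drop in %5.1f ms\n",
		       counts[i], next_drop_delay_ms(interval_ms, counts[i]));
	return 0;
}

With the count increased by the number of packets the bulk dropper just
freed, the next codel round starts from a much shorter inter-drop
interval instead of ramping up from scratch.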
Signed-off-by: Dave Taht <dave.taht@gmail.com>
---
net/sched/sch_fq_codel.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/net/sched/sch_fq_codel.c b/net/sched/sch_fq_codel.c
index d59fbcc745d1..d67b2c40e6e6 100644
--- a/net/sched/sch_fq_codel.c
+++ b/net/sched/sch_fq_codel.c
@@ -173,6 +173,8 @@ static unsigned int fq_codel_drop(struct Qdisc *sch, unsigned int max_packets,
__qdisc_drop(skb, to_free);
} while (++i < max_packets && len < threshold);
+ /* Tell codel to increase its signal strength also */
+ flow->cvars.count += i;
flow->dropped += i;
q->backlogs[idx] -= len;
q->memory_usage -= mem;
--
2.17.1
* [PATCH net-next 2/2] fq_codel: Kill useless per-flow dropped statistic
From: Dave Taht @ 2019-08-03 23:37 UTC
To: netdev; +Cc: Dave Taht
It is almost impossible to get anything other than 0 out of
the flow->dropped statistic with a tc class dump, as it resets to 0
on every round.
It also conflates ECN marks with drops.
It would have been useful had it kept a cumulative drop count, but
it doesn't. This patch doesn't change the API; it just stops
tracking a stat and state that is impossible to measure and that
nobody uses.
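
As a rough, self-contained illustration (not the kernel code paths;
the names are made up): a counter that is zeroed each time the flow
becomes active again, and that adds ECN marks to drops, can never
yield a useful cumulative figure at dump time:

#include <stdio.h>

/* Illustration only: mimics the old per-flow accounting pattern. */
struct flow_stat {
	unsigned int dropped;		/* old: reset per round, drops + ECN marks */
	unsigned long long total_drops;	/* what a useful stat would have needed */
};

static void flow_becomes_active(struct flow_stat *f)
{
	f->dropped = 0;			/* old behaviour: reset every round */
}

static void account(struct flow_stat *f, unsigned int drops,
		    unsigned int ecn_marks)
{
	f->dropped += drops + ecn_marks; /* old behaviour: conflates the two */
	f->total_drops += drops;	 /* a cumulative counter would not reset */
}

int main(void)
{
	struct flow_stat f = { 0, 0 };

	flow_becomes_active(&f);
	account(&f, 3, 2);
	flow_becomes_active(&f);	/* next round wipes the per-round value */
	printf("per-round dropped=%u, cumulative drops=%llu\n",
	       f.dropped, f.total_drops);
	return 0;
}

Since it is the per-round value that fq_codel_dump_class_stats reported,
it almost always read 0; dropping the field also keeps
struct fq_codel_flow smaller, which the code asks to stay at or
under 64 bytes.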
Signed-off-by: Dave Taht <dave.taht@gmail.com>
---
net/sched/sch_fq_codel.c | 8 +-------
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/net/sched/sch_fq_codel.c b/net/sched/sch_fq_codel.c
index d67b2c40e6e6..9edd0f495001 100644
--- a/net/sched/sch_fq_codel.c
+++ b/net/sched/sch_fq_codel.c
@@ -45,7 +45,6 @@ struct fq_codel_flow {
struct sk_buff *tail;
struct list_head flowchain;
int deficit;
- u32 dropped; /* number of drops (or ECN marks) on this flow */
struct codel_vars cvars;
}; /* please try to keep this structure <= 64 bytes */
@@ -175,7 +174,6 @@ static unsigned int fq_codel_drop(struct Qdisc *sch, unsigned int max_packets,
/* Tell codel to increase its signal strength also */
flow->cvars.count += i;
- flow->dropped += i;
q->backlogs[idx] -= len;
q->memory_usage -= mem;
sch->qstats.drops += i;
@@ -213,7 +211,6 @@ static int fq_codel_enqueue(struct sk_buff *skb, struct Qdisc *sch,
list_add_tail(&flow->flowchain, &q->new_flows);
q->new_flow_count++;
flow->deficit = q->quantum;
- flow->dropped = 0;
}
get_codel_cb(skb)->mem_usage = skb->truesize;
q->memory_usage += get_codel_cb(skb)->mem_usage;
@@ -312,9 +309,6 @@ static struct sk_buff *fq_codel_dequeue(struct Qdisc *sch)
&flow->cvars, &q->cstats, qdisc_pkt_len,
codel_get_enqueue_time, drop_func, dequeue_func);
- flow->dropped += q->cstats.drop_count - prev_drop_count;
- flow->dropped += q->cstats.ecn_mark - prev_ecn_mark;
-
if (!skb) {
/* force a pass through old_flows to prevent starvation */
if ((head == &q->new_flows) && !list_empty(&q->old_flows))
@@ -660,7 +654,7 @@ static int fq_codel_dump_class_stats(struct Qdisc *sch, unsigned long cl,
sch_tree_unlock(sch);
}
qs.backlog = q->backlogs[idx];
- qs.drops = flow->dropped;
+ qs.drops = 0;
}
if (gnet_stats_copy_queue(d, NULL, &qs, qs.qlen) < 0)
return -1;
--
2.17.1
* Re: [PATCH net-next 0/2] Two small fq_codel optimizations
From: David Miller @ 2019-08-06 21:18 UTC
To: dave.taht; +Cc: netdev
From: Dave Taht <dave.taht@gmail.com>
Date: Sat, 3 Aug 2019 16:37:27 -0700
> These two patches improve fq_codel performance
> under extreme network loads. The first patch
> more rapidly escalates the codel count under
> overload; the second just kills a totally useless
> statistic.
>
> (sent together because they'd otherwise conflict)
>
> Signed-off-by: Dave Taht <dave.taht@gmail.com>
Series applied.