* [PATCH net-next] net_sched: sch_fq: rework fq_gc() to avoid stack canary
From: Eric Dumazet @ 2026-02-04 19:00 UTC (permalink / raw)
To: David S . Miller, Jakub Kicinski, Paolo Abeni
Cc: Simon Horman, Jamal Hadi Salim, Jiri Pirko, netdev, eric.dumazet,
Eric Dumazet
Using kmem_cache_free_bulk() in fq_gc() was not optimal.
1) It needs an array.
2) It is only saving cpu cycles for large batches.
The automatic array forces a stack canary, which is expensive.
In practice, fq_gc() was finding zero, one, or two flows at most
per round.
Remove the array, use kmem_cache_free().
This makes fq_enqueue() smaller and faster.
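The reworked scheme can be sketched in plain userspace C. This is a hypothetical model, not the kernel code: `struct flow` and `gc_sketch()` stand in for `struct fq_flow` and `fq_gc()`, a flat array stands in for the rbtree walk, and `free()` stands in for `kmem_cache_free()`:

```c
#include <stdlib.h>

/* Hypothetical stand-in for struct fq_flow: only what the GC touches. */
struct flow {
	struct flow *next;	/* reused as the to-free list link */
	int gc_candidate;	/* stands in for fq_gc_candidate() */
};

/*
 * Collect candidates on an intrusive singly-linked list threaded
 * through the flows themselves, then free them while counting:
 * no on-stack array, no FQ_GC_MAX cap, no stack canary needed.
 */
int gc_sketch(struct flow **flows, int n)
{
	struct flow *tofree = NULL, *f;
	int i, fcnt = 0;

	for (i = 0; i < n; i++) {
		f = flows[i];
		if (f && f->gc_candidate) {
			f->next = tofree;	/* push onto to-free list */
			tofree = f;
			flows[i] = NULL;	/* stands in for rb_erase() */
		}
	}

	while (tofree) {
		f = tofree;
		tofree = f->next;
		free(f);		/* kmem_cache_free() in the kernel */
		fcnt++;
	}
	return fcnt;
}
```

Threading the to-free list through the dead flows themselves costs no extra memory, since those objects are about to be freed anyway; the only state left on the stack is two pointers and a counter.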
$ scripts/bloat-o-meter -t vmlinux.old vmlinux.new
add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-79 (-79)
Function old new delta
fq_enqueue 1629 1550 -79
Total: Before=24886583, After=24886504, chg -0.00%
Signed-off-by: Eric Dumazet <edumazet@google.com>
---
net/sched/sch_fq.c | 24 +++++++++++-------------
1 file changed, 11 insertions(+), 13 deletions(-)
diff --git a/net/sched/sch_fq.c b/net/sched/sch_fq.c
index d0200ec8ada62e86f10d823556bedcaefb470e6c..80235e85f8440ee83032f171cf28df6f161473db 100644
--- a/net/sched/sch_fq.c
+++ b/net/sched/sch_fq.c
@@ -245,8 +245,6 @@ static void fq_flow_set_throttled(struct fq_sched_data *q, struct fq_flow *f)
static struct kmem_cache *fq_flow_cachep __read_mostly;
-/* limit number of collected flows per round */
-#define FQ_GC_MAX 8
#define FQ_GC_AGE (3*HZ)
static bool fq_gc_candidate(const struct fq_flow *f)
@@ -259,10 +257,9 @@ static void fq_gc(struct fq_sched_data *q,
struct rb_root *root,
struct sock *sk)
{
+ struct fq_flow *f, *tofree = NULL;
struct rb_node **p, *parent;
- void *tofree[FQ_GC_MAX];
- struct fq_flow *f;
- int i, fcnt = 0;
+ int fcnt;
p = &root->rb_node;
parent = NULL;
@@ -274,9 +271,8 @@ static void fq_gc(struct fq_sched_data *q,
break;
if (fq_gc_candidate(f)) {
- tofree[fcnt++] = f;
- if (fcnt == FQ_GC_MAX)
- break;
+ f->next = tofree;
+ tofree = f;
}
if (f->sk > sk)
@@ -285,18 +281,20 @@ static void fq_gc(struct fq_sched_data *q,
p = &parent->rb_left;
}
- if (!fcnt)
+ if (!tofree)
return;
- for (i = fcnt; i > 0; ) {
- f = tofree[--i];
+ fcnt = 0;
+ while (tofree) {
+ f = tofree;
+ tofree = f->next;
rb_erase(&f->fq_node, root);
+ kmem_cache_free(fq_flow_cachep, f);
+ fcnt++;
}
q->flows -= fcnt;
q->inactive_flows -= fcnt;
q->stat_gc_flows += fcnt;
-
- kmem_cache_free_bulk(fq_flow_cachep, fcnt, tofree);
}
/* Fast path can be used if :
--
2.53.0.rc2.204.g2597b5adb4-goog
* Re: [PATCH net-next] net_sched: sch_fq: rework fq_gc() to avoid stack canary
From: patchwork-bot+netdevbpf @ 2026-02-07 4:20 UTC (permalink / raw)
To: Eric Dumazet; +Cc: davem, kuba, pabeni, horms, jhs, jiri, netdev, eric.dumazet
Hello:
This patch was applied to netdev/net-next.git (main)
by Jakub Kicinski <kuba@kernel.org>:
On Wed, 4 Feb 2026 19:00:34 +0000 you wrote:
> Using kmem_cache_free_bulk() in fq_gc() was not optimal.
>
> 1) It needs an array.
> 2) It is only saving cpu cycles for large batches.
>
> The automatic array forces a stack canary, which is expensive.
>
> [...]
Here is the summary with links:
- [net-next] net_sched: sch_fq: rework fq_gc() to avoid stack canary
https://git.kernel.org/netdev/net-next/c/2214aab26811
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html