From: Jakub Kicinski <kuba@kernel.org>
To: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: netdev@vger.kernel.org, davem@davemloft.net, edumazet@google.com,
pabeni@redhat.com, horms@kernel.org, jiri@resnulli.us,
toke@toke.dk, vinicius.gomes@intel.com,
stephen@networkplumber.org, vladbu@nvidia.com,
cake@lists.bufferbloat.net, bpf@vger.kernel.org,
ghandatmanas@gmail.com, km.kim1503@gmail.com,
security@kernel.org, Victor Nogueira <victor@mojatatu.com>
Subject: Re: [PATCH net] net/sched: Mark qdisc for deletion if graft cannot delete
Date: Wed, 11 Mar 2026 17:52:49 -0700 [thread overview]
Message-ID: <20260311175249.54abe1b6@kernel.org> (raw)
In-Reply-To: <CAM0EoM=TAAfwZc3q1=RKWPrS778R_B8uM18ev_jYYz8oj4YiBA@mail.gmail.com>
On Wed, 11 Mar 2026 12:22:41 -0400 Jamal Hadi Salim wrote:
> > On Sat, 7 Mar 2026 16:20:58 -0500 Jamal Hadi Salim wrote:
> > > Note: We tried a couple of different approaches that had smaller code
> > > footprint but were a bit fugly. The first approach was to use recursion
> > > on the qdisc hash table to iterate the descendants of the qdisc; however,
> > > the challenge here is if the graph depth is "high" - we may overflow the
> > > stack. The second approach was to use a breadth first search to achieve
> > > the same goal; the challenge there was that it was a quadratic algorithm.
> >
> > Lots of complexity when realistically only ingress/clsact support
> > the unlocked operations. Can we not just take rtnl before the
> > references and not bother all the real qdiscs with this @#%$ ?
> > (diff just to illustrate the point not even compiled)
>
> Two of the several (I think 4!) patches we had took a similar path. If
> I remember correctly, at least one variant was bad for performance and
> the other was unstable. Let's see if we can revive it and take a
> closer look. BTW - none were pretty; it was maybe half the lines of
> code but touched many things.
FWIW / of course, we'd have to apply a similar change to all(?) callers of
__tcf_qdisc_find in cls_api. So LoC-wise it may also end up pretty long.
And it's not going to help the already spaghetti-looking locking. But
even if it's more LoC I quite like the idea of containing the poopy
code to where the problems originate, which is the lockless filter handling.
Fingers crossed..
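[For context on the traversal concern quoted above: the stack-overflow risk of
a recursive walk can be avoided with an iterative traversal over an explicit
worklist, which stays linear in the number of nodes. A minimal, kernel-independent
sketch follows - the node structure and walk_descendants() are hypothetical
stand-ins, not the actual qdisc hash table code.]

```c
#include <stdlib.h>

/* Hypothetical tree node standing in for a qdisc; the real code
 * would iterate the per-device qdisc hash table instead. */
struct node {
	struct node **children;
	int n_children;
	int visited;
};

/* Iterative depth-first walk over an explicit heap-allocated stack:
 * linear in the number of nodes, and a deep graph cannot overflow
 * the C call stack the way recursion could. The caller supplies an
 * upper bound on the node count to size the worklist. */
static int walk_descendants(struct node *root, int max_nodes)
{
	struct node **stack = malloc(sizeof(*stack) * max_nodes);
	int top = 0, visited = 0;

	if (!stack)
		return -1;

	stack[top++] = root;
	while (top > 0) {
		struct node *n = stack[--top];

		n->visited = 1;
		visited++;
		for (int i = 0; i < n->n_children; i++)
			stack[top++] = n->children[i];
	}
	free(stack);
	return visited;
}
```

[The same worklist shape works for breadth-first order by popping from the
front instead of the top; the quadratic behaviour mentioned above would come
from re-scanning the hash table per level, not from the traversal itself.]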
Thread overview: 10+ messages
2026-03-07 21:20 [PATCH net] net/sched: Mark qdisc for deletion if graft cannot delete Jamal Hadi Salim
2026-03-11 1:47 ` Jakub Kicinski
2026-03-11 16:22 ` Jamal Hadi Salim
2026-03-12 0:52 ` Jakub Kicinski [this message]
2026-03-12 20:36 ` Jamal Hadi Salim
2026-03-12 23:51 ` Jakub Kicinski
2026-03-13 15:56 ` Jamal Hadi Salim
2026-03-13 19:36 ` Jamal Hadi Salim
2026-03-14 15:00 ` Jakub Kicinski
2026-03-15 15:56 ` Jamal Hadi Salim