From: Petr Machata <petrm@nvidia.com>
To: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: Ido Schimmel <idosch@idosch.org>, Jiri Pirko <jiri@resnulli.us>,
<netdev@vger.kernel.org>, <kuba@kernel.org>, <pabeni@redhat.com>,
<davem@davemloft.net>, <edumazet@google.com>,
<xiyou.wangcong@gmail.com>, <victor@mojatatu.com>,
<pctammela@mojatatu.com>, <mleitner@redhat.com>,
<vladbu@nvidia.com>, <paulb@nvidia.com>,
Petr Machata <petrm@nvidia.com>
Subject: Re: [patch net-next] net: sched: move block device tracking into tcf_block_get/put_ext()
Date: Thu, 11 Jan 2024 17:17:45 +0100
Message-ID: <878r4volo0.fsf@nvidia.com>
In-Reply-To: <CAM0EoMkpzsEWXMw27xgsfzwA2g4CNeDYQ9niTJAkgu3=Kgp81g@mail.gmail.com>

Jamal Hadi Salim <jhs@mojatatu.com> writes:
> On Thu, Jan 11, 2024 at 10:40 AM Jamal Hadi Salim <jhs@mojatatu.com> wrote:
>>
>> On Wed, Jan 10, 2024 at 7:10 AM Ido Schimmel <idosch@idosch.org> wrote:
>> >
>> > On Thu, Jan 04, 2024 at 01:58:44PM +0100, Jiri Pirko wrote:
>> > > diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
>> > > index adf5de1ff773..253b26f2eddd 100644
>> > > --- a/net/sched/cls_api.c
>> > > +++ b/net/sched/cls_api.c
>> > > @@ -1428,6 +1428,7 @@ int tcf_block_get_ext(struct tcf_block **p_block, struct Qdisc *q,
>> > > struct tcf_block_ext_info *ei,
>> > > struct netlink_ext_ack *extack)
>> > > {
>> > > + struct net_device *dev = qdisc_dev(q);
>> > > struct net *net = qdisc_net(q);
>> > > struct tcf_block *block = NULL;
>> > > int err;
>> > > @@ -1461,9 +1462,18 @@ int tcf_block_get_ext(struct tcf_block **p_block, struct Qdisc *q,
>> > > if (err)
>> > > goto err_block_offload_bind;
>> > >
>> > > + if (tcf_block_shared(block)) {
>> > > + err = xa_insert(&block->ports, dev->ifindex, dev, GFP_KERNEL);
>> > > + if (err) {
>> > > + NL_SET_ERR_MSG(extack, "block dev insert failed");
>> > > + goto err_dev_insert;
>> > > + }
>> > > + }
>> >
>> > While this patch fixes the original issue, it creates another one:
>> >
>> > # ip link add name swp1 type dummy
>> > # tc qdisc replace dev swp1 root handle 10: prio bands 8 priomap 7 6 5 4 3 2 1
>> > # tc qdisc add dev swp1 parent 10:8 handle 108: red limit 1000000 min 200000 max 200001 probability 1.0 avpkt 8000 burst 38 qevent early_drop block 10
>> > RED: set bandwidth to 10Mbit
>> > # tc qdisc add dev swp1 parent 10:7 handle 107: red limit 1000000 min 500000 max 500001 probability 1.0 avpkt 8000 burst 63 qevent early_drop block 10
>> > RED: set bandwidth to 10Mbit
>> > Error: block dev insert failed.
>> >
>>
>>
>> +cc Petr
>> We'll add a testcase to tdc - it doesn't seem we have any for qevents.
>> If you have others that are related, let us know.
>> But how does this work? I see no mention of a block in the RED code and
>> I see no mention of a block in the reproducer above.

Look for qe_early_drop and qe_mark in sch_red.c.
>
> Context: Yes, I see it in the RED setup but I don't see any block being set up.
qevents are binding locations for blocks, similar in principle to
clsact's ingress_block / egress_block. So the way to create a block is
the same: just mention the block number for the first time.
What qevents there are depends on the qdisc. They are supposed to
reflect events that are somehow interesting, from the point of view of
an skb within a qdisc. Thus RED has two qevents: early_drop for packets
that were chosen to be, well, dropped early, and mark for packets that
are ECN-marked. So when a packet is, say, early-dropped, the RED qdisc
passes it through the TC block bound at that qevent (if any).
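For illustration, binding and using a qevent block looks like this (the
interface names below are made up; the RED parameters are taken from the
reproducer above):

```shell
# Create a RED qdisc and bind shared block 10 at its early_drop
# qevent. The block springs into existence on first mention; no
# separate "create block" step is needed.
tc qdisc add dev eth0 root handle 1: red limit 1000000 \
    min 200000 max 200001 probability 1.0 avpkt 8000 burst 38 \
    qevent early_drop block 10

# Attach a filter to the block. Every packet that RED chooses to
# early-drop is passed through it, e.g. to mirror drops somewhere
# for observability:
tc filter add block 10 matchall \
    action mirred egress mirror dev eth1
```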
> Also: Is it only Red or other qdiscs could behave this way?
Currently only RED supports any qevents at all, but in principle the
mechanism is reusable. With my mlxsw hat on, an obvious next candidate
would be a tail_drop qevent on the FIFO qdisc.