From: Donald Hunter <donald.hunter@gmail.com>
To: chia-yu.chang@nokia-bell-labs.com
Cc: xandfury@gmail.com, netdev@vger.kernel.org,
dave.taht@gmail.com, pabeni@redhat.com, jhs@mojatatu.com,
kuba@kernel.org, stephen@networkplumber.org,
xiyou.wangcong@gmail.com, jiri@resnulli.us,
davem@davemloft.net, edumazet@google.com, horms@kernel.org,
andrew+netdev@lunn.ch, ast@fiberby.net, liuhangbin@gmail.com,
shuah@kernel.org, linux-kselftest@vger.kernel.org,
ij@kernel.org, ncardwell@google.com,
koen.de_schepper@nokia-bell-labs.com, g.white@cablelabs.com,
ingemar.s.johansson@ericsson.com, mirja.kuehlewind@ericsson.com,
cheshire@apple.com, rs.ietf@gmx.at,
Jason_Livingood@comcast.com, vidhi_goel@apple.com
Subject: Re: [PATCH v12 net-next 3/5] sched: Struct definition and parsing of dualpi2 qdisc
Date: Wed, 23 Apr 2025 13:03:28 +0100
Message-ID: <m2ikmvt78v.fsf@gmail.com>
In-Reply-To: <20250422201602.56368-4-chia-yu.chang@nokia-bell-labs.com> (chia-yu chang's message of "Tue, 22 Apr 2025 22:16:00 +0200")
chia-yu.chang@nokia-bell-labs.com writes:
> +
> +static const struct nla_policy dualpi2_policy[TCA_DUALPI2_MAX + 1] = {
> +        [TCA_DUALPI2_LIMIT] = NLA_POLICY_MIN(NLA_U32, 1),
> +        [TCA_DUALPI2_MEMORY_LIMIT] = NLA_POLICY_MIN(NLA_U32, 1),
> +        [TCA_DUALPI2_TARGET] = {.type = NLA_U32},
> +        [TCA_DUALPI2_TUPDATE] = NLA_POLICY_MIN(NLA_U32, 1),
> +        [TCA_DUALPI2_ALPHA] =
> +                NLA_POLICY_FULL_RANGE(NLA_U32, &dualpi2_alpha_beta_range),
> +        [TCA_DUALPI2_BETA] =
> +                NLA_POLICY_FULL_RANGE(NLA_U32, &dualpi2_alpha_beta_range),
> +        [TCA_DUALPI2_STEP_THRESH] = {.type = NLA_U32},
> +        [TCA_DUALPI2_STEP_PACKETS] = {.type = NLA_U8},
> +        [TCA_DUALPI2_MIN_QLEN_STEP] = {.type = NLA_U32},
> +        [TCA_DUALPI2_COUPLING] = NLA_POLICY_MIN(NLA_U8, 1),
> +        [TCA_DUALPI2_DROP_OVERLOAD] = {.type = NLA_U8},
> +        [TCA_DUALPI2_DROP_EARLY] = {.type = NLA_U8},
> +        [TCA_DUALPI2_C_PROTECTION] =
> +                NLA_POLICY_FULL_RANGE(NLA_U8, &dualpi2_wc_range),
> +        [TCA_DUALPI2_ECN_MASK] = {.type = NLA_U8},
> +        [TCA_DUALPI2_SPLIT_GSO] = {.type = NLA_U8},
> +};
> +
> +static int dualpi2_change(struct Qdisc *sch, struct nlattr *opt,
> +                          struct netlink_ext_ack *extack)
> +{
> +        struct nlattr *tb[TCA_DUALPI2_MAX + 1];
> +        struct dualpi2_sched_data *q;
> +        int old_backlog;
> +        int old_qlen;
> +        int err;
> +
> +        if (!opt)
> +                return -EINVAL;
> +        err = nla_parse_nested(tb, TCA_DUALPI2_MAX, opt, dualpi2_policy,
> +                               extack);
> +        if (err < 0)
> +                return err;
> +
> +        q = qdisc_priv(sch);
> +        sch_tree_lock(sch);
> +
> +        if (tb[TCA_DUALPI2_LIMIT]) {
> +                u32 limit = nla_get_u32(tb[TCA_DUALPI2_LIMIT]);
> +
> +                WRITE_ONCE(sch->limit, limit);
> +                WRITE_ONCE(q->memory_limit, get_memory_limit(sch, limit));
> +        }
> +
> +        if (tb[TCA_DUALPI2_MEMORY_LIMIT])
> +                WRITE_ONCE(q->memory_limit,
> +                           nla_get_u32(tb[TCA_DUALPI2_MEMORY_LIMIT]));
> +
> +        if (tb[TCA_DUALPI2_TARGET]) {
> +                u64 target = nla_get_u32(tb[TCA_DUALPI2_TARGET]);
> +
> +                WRITE_ONCE(q->pi2_target, target * NSEC_PER_USEC);
> +        }
> +
> +        if (tb[TCA_DUALPI2_TUPDATE]) {
> +                u64 tupdate = nla_get_u32(tb[TCA_DUALPI2_TUPDATE]);
> +
> +                WRITE_ONCE(q->pi2_tupdate, tupdate * NSEC_PER_USEC);
> +        }
> +
> +        if (tb[TCA_DUALPI2_ALPHA]) {
> +                u32 alpha = nla_get_u32(tb[TCA_DUALPI2_ALPHA]);
> +
> +                WRITE_ONCE(q->pi2_alpha, dualpi2_scale_alpha_beta(alpha));
> +        }
> +
> +        if (tb[TCA_DUALPI2_BETA]) {
> +                u32 beta = nla_get_u32(tb[TCA_DUALPI2_BETA]);
> +
> +                WRITE_ONCE(q->pi2_beta, dualpi2_scale_alpha_beta(beta));
> +        }
> +
> +        if (tb[TCA_DUALPI2_STEP_PACKETS]) {
> +                bool step_pkt = !!nla_get_u8(tb[TCA_DUALPI2_STEP_PACKETS]);
Would it be better to define TCA_DUALPI2_STEP_PACKETS as type NLA_FLAG
to avoid the u8 to bool conversion?
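For illustration, an NLA_FLAG-based variant could look roughly like this
(untested sketch, just to show what I mean), with the policy entry as:

        [TCA_DUALPI2_STEP_PACKETS] = { .type = NLA_FLAG },

and the corresponding part of dualpi2_change() reduced to:

        /* Attribute present => step threshold is expressed in packets. */
        if (nla_get_flag(tb[TCA_DUALPI2_STEP_PACKETS]))
                WRITE_ONCE(q->step_in_packets, true);

(with the time-to-packets unit handling for step_thresh adjusted
accordingly). One caveat: a flag can only express presence, so a later
change request could not explicitly switch back to a time-based
threshold; if that absent/0/1 tri-state is needed, NLA_U8 is the usual
choice after all.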
> +                u32 step_th = READ_ONCE(q->step_thresh);
> +
> +                WRITE_ONCE(q->step_in_packets, step_pkt);
> +                WRITE_ONCE(q->step_thresh,
> +                           step_pkt ? step_th : (step_th * NSEC_PER_USEC));
> +        }
> +
> +        if (tb[TCA_DUALPI2_STEP_THRESH]) {
> +                u32 step_th = nla_get_u32(tb[TCA_DUALPI2_STEP_THRESH]);
> +                bool step_pkt = READ_ONCE(q->step_in_packets);
> +
> +                WRITE_ONCE(q->step_thresh,
> +                           step_pkt ? step_th : (step_th * NSEC_PER_USEC));
> +        }
> +
> +        if (tb[TCA_DUALPI2_MIN_QLEN_STEP])
> +                WRITE_ONCE(q->min_qlen_step,
> +                           nla_get_u32(tb[TCA_DUALPI2_MIN_QLEN_STEP]));
> +
> +        if (tb[TCA_DUALPI2_COUPLING]) {
> +                u8 coupling = nla_get_u8(tb[TCA_DUALPI2_COUPLING]);
> +
> +                WRITE_ONCE(q->coupling_factor, coupling);
> +        }
> +
> +        if (tb[TCA_DUALPI2_DROP_OVERLOAD])
> +                WRITE_ONCE(q->drop_overload,
> +                           !!nla_get_u8(tb[TCA_DUALPI2_DROP_OVERLOAD]));
Type NLA_FLAG?
> +
> +        if (tb[TCA_DUALPI2_DROP_EARLY])
> +                WRITE_ONCE(q->drop_early,
> +                           !!nla_get_u8(tb[TCA_DUALPI2_DROP_EARLY]));
Type NLA_FLAG?
> +
> +        if (tb[TCA_DUALPI2_C_PROTECTION]) {
> +                u8 wc = nla_get_u8(tb[TCA_DUALPI2_C_PROTECTION]);
> +
> +                dualpi2_calculate_c_protection(sch, q, wc);
> +        }
> +
> +        if (tb[TCA_DUALPI2_ECN_MASK])
> +                WRITE_ONCE(q->ecn_mask,
> +                           nla_get_u8(tb[TCA_DUALPI2_ECN_MASK]));
> +
> +        if (tb[TCA_DUALPI2_SPLIT_GSO])
> +                WRITE_ONCE(q->split_gso,
> +                           !!nla_get_u8(tb[TCA_DUALPI2_SPLIT_GSO]));
Type NLA_FLAG?
> +
> +        old_qlen = qdisc_qlen(sch);
> +        old_backlog = sch->qstats.backlog;
> +        while (qdisc_qlen(sch) > sch->limit ||
> +               q->memory_used > q->memory_limit) {
> +                struct sk_buff *skb = __qdisc_dequeue_head(&sch->q);
> +
> +                q->memory_used -= skb->truesize;
> +                qdisc_qstats_backlog_dec(sch, skb);
> +                rtnl_qdisc_drop(skb, sch);
> +        }
> +        qdisc_tree_reduce_backlog(sch, old_qlen - qdisc_qlen(sch),
> +                                  old_backlog - sch->qstats.backlog);
> +
> +        sch_tree_unlock(sch);
> +        return 0;
> +}