From: Wei Fang <wei.fang@nxp.com>
To: "sashiko@lists.linux.dev" <sashiko@lists.linux.dev>
Cc: "robh@kernel.org" <robh@kernel.org>,
"krzk+dt@kernel.org" <krzk+dt@kernel.org>,
"conor+dt@kernel.org" <conor+dt@kernel.org>,
"devicetree@vger.kernel.org" <devicetree@vger.kernel.org>,
"Frank.Li@kernel.org" <Frank.Li@kernel.org>,
"imx@lists.linux.dev" <imx@lists.linux.dev>,
Networking <netdev@vger.kernel.org>
Subject: RE: [PATCH v6 net-next 13/15] net: dsa: netc: initialize buffer pool table and implement flow-control
Date: Mon, 11 May 2026 03:16:14 +0000 [thread overview]
Message-ID: <DBBPR04MB75003B8627FD78829C98433688382@DBBPR04MB7500.eurprd04.prod.outlook.com> (raw)
In-Reply-To: <20260510104839.696E9C2BCB8@smtp.kernel.org>
> Thank you for your contribution! Sashiko AI review found 1 potential issue(s) to
> consider:
> - [Medium] The IPV-to-buffer-pool mapping algorithm fails when a port is
> allocated more than 8 buffer pools, silently routing all traffic to a single pool.
> --
>
> commit 0b5e4aba36a49ce1de9eb84ec706f482331550cf
> Author: Wei Fang <wei.fang@nxp.com>
>
> net: dsa: netc: initialize buffer pool table and implement flow-control
>
> This commit initializes the buffer pool table and implements flow control
> for the NXP netc DSA driver. It adds logic to dynamically map the 8 internal
> priority values (IPVs) to the available buffer pools per port based on
> hardware capabilities.
>
> > diff --git a/drivers/net/dsa/netc/netc_main.c b/drivers/net/dsa/netc/netc_main.c
> > --- a/drivers/net/dsa/netc/netc_main.c
> > +++ b/drivers/net/dsa/netc/netc_main.c
>
> [ ... ]
>
> > +static void netc_ipv_to_buffer_pool_mapping(struct netc_switch *priv)
> > +{
> > + int bp_per_port = priv->num_bp / priv->info->num_ports;
> > + int q = NETC_IPV_NUM / bp_per_port;
> > + int r = NETC_IPV_NUM % bp_per_port;
> > + int num = q + r;
> > +
> > + /* IPV-to-buffer-pool mapping per port:
> > + * Each port is allocated 'bp_per_port' buffer pools and supports 8
> > + * IPVs, where a higher IPV indicates a higher frame priority. Each
> > + * IPV can be mapped to only one buffer pool.
> > + *
> > + * The mapping rule is as follows:
> > + * - The first 'num' IPVs share the port's first buffer pool (index
> > + * 'base_id').
> > + * - After that, every 'q' IPVs share one buffer pool, with pool
> > + * indices increasing sequentially.
> > + */
> > + for (int i = 0; i < priv->info->num_ports; i++) {
> > + u32 base_id = i * bp_per_port;
> > + u32 bp_id = base_id;
> > + u64 mapping = 0;
> > +
> > + for (int ipv = 0; ipv < NETC_IPV_NUM; ipv++) {
> > + /* Update the buffer pool index */
> > + if (ipv >= num)
> > + bp_id = base_id + ((ipv - num) / q) + 1;
>
> What happens if a future hardware revision allocates more than 8 buffer pools
> per port?
>
> If bp_per_port is greater than NETC_IPV_NUM (8), q evaluates to 0 and num
> evaluates to 8.
>
> Because ipv loops from 0 to 7, the condition ipv >= num will never be true.
> As a result, all 8 traffic priorities would be mapped to a single buffer pool
> (base_id), leaving the remaining pools unused instead of maximizing
> utilization.
>
> Should bp_per_port be clamped to NETC_IPV_NUM before calculating q to
> prevent this?
From a hardware perspective, each port has 8 IPVs and each IPV can be
mapped to only one buffer pool, so the hardware cannot provide more than
8 buffer pools per port. This is a false positive; please ignore it.