From: sashiko-bot@kernel.org
To: "Wei Fang" <wei.fang@nxp.com>
Cc: robh@kernel.org, krzk+dt@kernel.org, conor+dt@kernel.org,
	devicetree@vger.kernel.org, Frank.Li@kernel.org,
	imx@lists.linux.dev
Subject: Re: [PATCH v6 net-next 13/15] net: dsa: netc: initialize buffer pool table and implement flow-control
Date: Sun, 10 May 2026 10:48:38 +0000	[thread overview]
Message-ID: <20260510104839.696E9C2BCB8@smtp.kernel.org> (raw)
In-Reply-To: <20260509102954.4116624-14-wei.fang@nxp.com>

Thank you for your contribution! Sashiko AI review found 1 potential issue to consider:
- [Medium] The IPV-to-buffer-pool mapping algorithm fails when a port is allocated more than 8 buffer pools, silently routing all traffic to a single pool.
--

commit 0b5e4aba36a49ce1de9eb84ec706f482331550cf
Author: Wei Fang <wei.fang@nxp.com>

net: dsa: netc: initialize buffer pool table and implement flow-control

This commit initializes the buffer pool table and implements flow control
for the NXP NETC DSA driver. It adds logic to dynamically map the 8 internal
priority values (IPVs) to the buffer pools available to each port, based on
hardware capabilities.

> diff --git a/drivers/net/dsa/netc/netc_main.c b/drivers/net/dsa/netc/netc_main.c
> --- a/drivers/net/dsa/netc/netc_main.c
> +++ b/drivers/net/dsa/netc/netc_main.c

[ ... ]

> +static void netc_ipv_to_buffer_pool_mapping(struct netc_switch *priv)
> +{
> +	int bp_per_port = priv->num_bp / priv->info->num_ports;
> +	int q = NETC_IPV_NUM / bp_per_port;
> +	int r = NETC_IPV_NUM % bp_per_port;
> +	int num = q + r;
> +
> +	/* IPV-to-buffer-pool mapping per port:
> +	 * Each port is allocated 'bp_per_port' buffer pools and supports 8
> +	 * IPVs, where a higher IPV indicates a higher frame priority. Each
> +	 * IPV can be mapped to only one buffer pool.
> +	 *
> +	 * The mapping rule is as follows:
> +	 * - The first 'num' IPVs share the port's first buffer pool (index
> +	 * 'base_id').
> +	 * - After that, every 'q' IPVs share one buffer pool, with pool
> +	 * indices increasing sequentially.
> +	 */
> +	for (int i = 0; i < priv->info->num_ports; i++) {
> +		u32 base_id = i * bp_per_port;
> +		u32 bp_id = base_id;
> +		u64 mapping = 0;
> +
> +		for (int ipv = 0; ipv < NETC_IPV_NUM; ipv++) {
> +			/* Update the buffer pool index */
> +			if (ipv >= num)
> +				bp_id = base_id + ((ipv - num) / q) + 1;

What happens if a future hardware revision allocates more than 8 buffer pools
per port?

If bp_per_port is greater than NETC_IPV_NUM (8), q evaluates to 0 and r
(and therefore num) evaluates to 8.

Because ipv only loops from 0 to 7, the condition ipv >= num is never true.
That is the only thing preventing a division by zero in
'(ipv - num) / q', and it also means all 8 traffic priorities would be
mapped to a single buffer pool (base_id), leaving the remaining pools
unused instead of maximizing utilization.

Should bp_per_port be clamped to NETC_IPV_NUM before calculating q to
prevent this?

> +
> +			mapping |= (u64)bp_id << (ipv * 8);
> +		}
> +
> +		netc_port_set_pbpmcr(priv->ports[i], mapping);
> +	}
> +}

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260509102954.4116624-1-wei.fang@nxp.com?part=13

  reply	other threads:[~2026-05-10 10:48 UTC|newest]

Thread overview: 42+ messages
2026-05-09 10:29 [PATCH v6 net-next 00/15] Add preliminary NETC switch support for i.MX94 Wei Fang
2026-05-09 10:29 ` [PATCH v6 net-next 01/15] dt-bindings: net: dsa: update the description of 'dsa,member' property Wei Fang
2026-05-09 10:29 ` [PATCH v6 net-next 02/15] dt-bindings: net: dsa: add NETC switch Wei Fang
2026-05-10 10:48   ` sashiko-bot
2026-05-11  3:33     ` Wei Fang
2026-05-09 10:29 ` [PATCH v6 net-next 03/15] net: enetc: add pre-boot initialization for i.MX94 switch Wei Fang
2026-05-09 10:29 ` [PATCH v6 net-next 04/15] net: enetc: add basic operations to the FDB table Wei Fang
2026-05-09 10:29 ` [PATCH v6 net-next 05/15] net: enetc: add support for the "Add" operation to VLAN filter table Wei Fang
2026-05-10 10:48   ` sashiko-bot
2026-05-11  2:05     ` Wei Fang
2026-05-11  2:21       ` Wei Fang
2026-05-09 10:29 ` [PATCH v6 net-next 06/15] net: enetc: add support for the "Update" operation to buffer pool table Wei Fang
2026-05-10 10:48   ` sashiko-bot
2026-05-11  2:01     ` Wei Fang
2026-05-11  2:22       ` Wei Fang
2026-05-09 10:29 ` [PATCH v6 net-next 07/15] net: enetc: add support for "Add" and "Delete" operations to IPFT Wei Fang
2026-05-10 10:48   ` sashiko-bot
2026-05-11  2:11     ` Wei Fang
2026-05-11  2:21       ` Wei Fang
2026-05-09 10:29 ` [PATCH v6 net-next 08/15] net: enetc: add multiple command BD rings support Wei Fang
2026-05-09 10:29 ` [PATCH v6 net-next 09/15] net: dsa: add NETC switch tag support Wei Fang
2026-05-10 10:48   ` sashiko-bot
2026-05-11  2:18     ` Wei Fang
2026-05-09 10:29 ` [PATCH v6 net-next 10/15] net: dsa: netc: introduce NXP NETC switch driver for i.MX94 Wei Fang
2026-05-10 10:48   ` sashiko-bot
2026-05-11  7:17     ` Wei Fang
2026-05-09 10:29 ` [PATCH v6 net-next 11/15] net: dsa: netc: add phylink MAC operations Wei Fang
2026-05-10 10:48   ` sashiko-bot
2026-05-11  2:17     ` Wei Fang
2026-05-09 10:29 ` [PATCH v6 net-next 12/15] net: dsa: netc: add FDB, STP, MTU, port setup and host flooding support Wei Fang
2026-05-10 10:48   ` sashiko-bot
2026-05-11  3:14     ` Wei Fang
2026-05-09 10:29 ` [PATCH v6 net-next 13/15] net: dsa: netc: initialize buffer pool table and implement flow-control Wei Fang
2026-05-10 10:48   ` sashiko-bot [this message]
2026-05-11  3:16     ` Wei Fang
2026-05-09 10:29 ` [PATCH v6 net-next 14/15] net: dsa: netc: add support for the standardized counters Wei Fang
2026-05-10 10:48   ` sashiko-bot
2026-05-11  3:24     ` Wei Fang
2026-05-09 10:29 ` [PATCH v6 net-next 15/15] net: dsa: netc: add support for ethtool private statistics Wei Fang
2026-05-10 10:48   ` sashiko-bot
2026-05-11  3:26     ` Wei Fang
2026-05-12  6:00   ` Wei Fang

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20260510104839.696E9C2BCB8@smtp.kernel.org \
    --to=sashiko-bot@kernel.org \
    --cc=Frank.Li@kernel.org \
    --cc=conor+dt@kernel.org \
    --cc=devicetree@vger.kernel.org \
    --cc=imx@lists.linux.dev \
    --cc=krzk+dt@kernel.org \
    --cc=robh@kernel.org \
    --cc=sashiko@lists.linux.dev \
    --cc=wei.fang@nxp.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.