Netdev List
From: Wei Fang <wei.fang@nxp.com>
To: Claudiu Manoil <claudiu.manoil@nxp.com>,
	Vladimir Oltean <vladimir.oltean@nxp.com>,
	Clark Wang <xiaoning.wang@nxp.com>,
	"andrew+netdev@lunn.ch" <andrew+netdev@lunn.ch>,
	"davem@davemloft.net" <davem@davemloft.net>,
	"edumazet@google.com" <edumazet@google.com>,
	"kuba@kernel.org" <kuba@kernel.org>,
	"pabeni@redhat.com" <pabeni@redhat.com>,
	"robh@kernel.org" <robh@kernel.org>,
	"krzk+dt@kernel.org" <krzk+dt@kernel.org>,
	"conor+dt@kernel.org" <conor+dt@kernel.org>,
	"f.fainelli@gmail.com" <f.fainelli@gmail.com>,
	Frank Li <frank.li@nxp.com>,
	"chleroy@kernel.org" <chleroy@kernel.org>,
	"horms@kernel.org" <horms@kernel.org>,
	"linux@armlinux.org.uk" <linux@armlinux.org.uk>
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"devicetree@vger.kernel.org" <devicetree@vger.kernel.org>,
	"linuxppc-dev@lists.ozlabs.org" <linuxppc-dev@lists.ozlabs.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>,
	"imx@lists.linux.dev" <imx@lists.linux.dev>
Subject: RE: [PATCH v5 net-next 13/15] net: dsa: netc: initialize buffer pool table and implement flow-control
Date: Thu, 7 May 2026 02:27:11 +0000	[thread overview]
Message-ID: <DBBPR04MB7500CA94186081EB8039A2C8883C2@DBBPR04MB7500.eurprd04.prod.outlook.com> (raw)
In-Reply-To: <20260430024945.3413973-14-wei.fang@nxp.com>

> +static void netc_ipv_to_buffer_pool_mapping(struct netc_switch *priv)
> +{
> +	int bp_per_port = priv->num_bp / priv->info->num_ports;
> +	int q = NETC_IPV_NUM / bp_per_port;
> +	int r = NETC_IPV_NUM % bp_per_port;
> +	int num = q + r;
> +
> +	/* IPV to buffer pool mapping per port:
> +	 * Each port is allocated 'bp_per_port' buffer pools and supports 8
> +	 * IPVs, where a higher IPV indicates a higher frame priority. Each
> +	 * IPV can be mapped to only one buffer pool.
> +	 *
> +	 * The mapping rule is as follows:
> +	 * - The first 'num' IPVs share the port's first buffer pool (index
> +	 * 'base_id').
> +	 * - After that, every 'q' IPVs share one buffer pool, with pool
> +	 * indices increasing sequentially.
> +	 */
> +	for (int i = 0; i < priv->info->num_ports; i++) {
> +		u32 base_id = i * bp_per_port;
> +		u32 bp_id = base_id;
> +		u64 mapping = 0;
> +
> +		for (int ipv = 0; ipv < NETC_IPV_NUM; ipv++) {
> +			/* Update the buffer pool index */
> +			if (ipv >= num)
> +				bp_id = base_id + ((ipv - num) / q) + 1;
> +
> +			mapping |= (u64)bp_id << (ipv * 8);

Sashiko says:

If hardware ever provides more than 8 buffer pools per port (for example,
bp_per_port = 10), will this logic fail to utilize the extra pools?

With bp_per_port > 8, q evaluates to 0, r evaluates to 8, and num becomes 8.
The condition if (ipv >= num) then evaluates to if (ipv >= 8), which
is never met since the loop terminates at ipv < 8.

This would leave bp_id at base_id for all priorities, mapping them all to a
single buffer pool and leaving the rest unused. Should bp_per_port be capped
to NETC_IPV_NUM before calculating q to prevent this silent fallback?

From a hardware perspective, each port has 8 IPVs and each IPV can be
mapped to only one buffer pool, so it is impossible for the hardware to
provide more than 8 buffer pools per port. This is a false positive.



Thread overview: 29+ messages
2026-04-30  2:49 [PATCH v5 net-next 00/15] Add preliminary NETC switch support for i.MX94 Wei Fang
2026-04-30  2:49 ` [PATCH v5 net-next 01/15] dt-bindings: net: dsa: update the description of 'dsa,member' property Wei Fang
2026-04-30  2:49 ` [PATCH v5 net-next 02/15] dt-bindings: net: dsa: add NETC switch Wei Fang
2026-05-06 22:22   ` Rob Herring (Arm)
2026-04-30  2:49 ` [PATCH v5 net-next 03/15] net: enetc: add pre-boot initialization for i.MX94 switch Wei Fang
2026-04-30  2:49 ` [PATCH v5 net-next 04/15] net: enetc: add basic operations to the FDB table Wei Fang
2026-05-05  8:59   ` Paolo Abeni
2026-05-06  6:37     ` Wei Fang
2026-04-30  2:49 ` [PATCH v5 net-next 05/15] net: enetc: add support for the "Add" operation to VLAN filter table Wei Fang
2026-04-30  2:49 ` [PATCH v5 net-next 06/15] net: enetc: add support for the "Update" operation to buffer pool table Wei Fang
2026-05-06  7:21   ` Wei Fang
2026-04-30  2:49 ` [PATCH v5 net-next 07/15] net: enetc: add support for "Add" and "Delete" operations to IPFT Wei Fang
2026-04-30  2:49 ` [PATCH v5 net-next 08/15] net: enetc: add multiple command BD rings support Wei Fang
2026-04-30  2:49 ` [PATCH v5 net-next 09/15] net: dsa: add NETC switch tag support Wei Fang
2026-05-06  7:34   ` Wei Fang
2026-04-30  2:49 ` [PATCH v5 net-next 10/15] net: dsa: netc: introduce NXP NETC switch driver for i.MX94 Wei Fang
2026-05-06  8:03   ` Wei Fang
2026-04-30  2:49 ` [PATCH v5 net-next 11/15] net: dsa: netc: add phylink MAC operations Wei Fang
2026-05-06  8:20   ` Wei Fang
2026-05-07 12:44   ` Maxime Chevallier
2026-04-30  2:49 ` [PATCH v5 net-next 12/15] net: dsa: netc: add FDB, STP, MTU, port setup and host flooding support Wei Fang
2026-05-07  2:08   ` Wei Fang
2026-04-30  2:49 ` [PATCH v5 net-next 13/15] net: dsa: netc: initialize buffer pool table and implement flow-control Wei Fang
2026-05-07  2:27   ` Wei Fang [this message]
2026-04-30  2:49 ` [PATCH v5 net-next 14/15] net: dsa: netc: add support for the standardized counters Wei Fang
2026-05-07  2:41   ` Wei Fang
2026-04-30  2:49 ` [PATCH v5 net-next 15/15] net: dsa: netc: add support for ethtool private statistics Wei Fang
2026-05-05  9:43   ` Paolo Abeni
2026-05-06  7:06     ` Wei Fang
