public inbox for linux-rdma@vger.kernel.org
From: Tariq Toukan <tariqt@nvidia.com>
To: Eric Dumazet <edumazet@google.com>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
	Andrew Lunn <andrew+netdev@lunn.ch>,
	"David S. Miller" <davem@davemloft.net>
Cc: Saeed Mahameed <saeedm@nvidia.com>,
	Tariq Toukan <tariqt@nvidia.com>,
	"Mark Bloch" <mbloch@nvidia.com>,
	Leon Romanovsky <leon@kernel.org>, <netdev@vger.kernel.org>,
	<linux-rdma@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
	Gal Pressman <gal@nvidia.com>, Moshe Shemesh <moshe@nvidia.com>,
	Nimrod Oren <noren@nvidia.com>,
	Danielle Costantino <dcostantino@meta.com>
Subject: [PATCH net-next 4/4] net/mlx5e: Use standard unit definitions for bandwidth conversion
Date: Sun, 30 Nov 2025 12:25:34 +0200	[thread overview]
Message-ID: <1764498334-1327918-5-git-send-email-tariqt@nvidia.com> (raw)
In-Reply-To: <1764498334-1327918-1-git-send-email-tariqt@nvidia.com>

From: Gal Pressman <gal@nvidia.com>

The MLX5E_100MB and MLX5E_1GB defines are confusing: MLX5E_100MB is not
equal to 100 * MEGA, and MLX5E_1GB is not equal to GIGA, as they hide
the Kbps rate conversion required for ieee_maxrate.

Replace hardcoded bandwidth conversion values with standard unit
definitions from linux/units.h. Rename MLX5E_100MB/MLX5E_1GB to
MLX5E_100MB_TO_KB/MLX5E_1GB_TO_KB to clarify these are conversion
factors to Kbps, not absolute bandwidth values.

Signed-off-by: Gal Pressman <gal@nvidia.com>
Reviewed-by: Nimrod Oren <noren@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
---
 .../net/ethernet/mellanox/mlx5/core/en_dcbnl.c  | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c b/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
index 79f9d43b09b3..fddf7c207f8e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
@@ -31,14 +31,15 @@
  */
 #include <linux/device.h>
 #include <linux/netdevice.h>
+#include <linux/units.h>
 #include "en.h"
 #include "en/port.h"
 #include "en/port_buffer.h"
 
 #define MLX5E_MAX_BW_ALLOC 100 /* Max percentage of BW allocation */
 
-#define MLX5E_100MB (100000)
-#define MLX5E_1GB   (1000000)
+#define MLX5E_100MB_TO_KB (100 * MEGA / KILO)
+#define MLX5E_1GB_TO_KB   (GIGA / KILO)
 
 #define MLX5E_CEE_STATE_UP    1
 #define MLX5E_CEE_STATE_DOWN  0
@@ -572,10 +573,10 @@ static int mlx5e_dcbnl_ieee_getmaxrate(struct net_device *netdev,
 	for (i = 0; i <= mlx5_max_tc(mdev); i++) {
 		switch (max_bw_unit[i]) {
 		case MLX5_100_MBPS_UNIT:
-			maxrate->tc_maxrate[i] = max_bw_value[i] * MLX5E_100MB;
+			maxrate->tc_maxrate[i] = max_bw_value[i] * MLX5E_100MB_TO_KB;
 			break;
 		case MLX5_GBPS_UNIT:
-			maxrate->tc_maxrate[i] = max_bw_value[i] * MLX5E_1GB;
+			maxrate->tc_maxrate[i] = max_bw_value[i] * MLX5E_1GB_TO_KB;
 			break;
 		case MLX5_BW_NO_LIMIT:
 			break;
@@ -614,8 +615,8 @@ static int mlx5e_dcbnl_ieee_setmaxrate(struct net_device *netdev,
 
 	memset(max_bw_value, 0, sizeof(max_bw_value));
 	memset(max_bw_unit, 0, sizeof(max_bw_unit));
-	upper_limit_100mbps = U8_MAX * MLX5E_100MB;
-	upper_limit_gbps = U8_MAX * MLX5E_1GB;
+	upper_limit_100mbps = U8_MAX * MLX5E_100MB_TO_KB;
+	upper_limit_gbps = U8_MAX * MLX5E_1GB_TO_KB;
 
 	for (i = 0; i <= mlx5_max_tc(mdev); i++) {
 		if (!maxrate->tc_maxrate[i]) {
@@ -624,12 +625,12 @@ static int mlx5e_dcbnl_ieee_setmaxrate(struct net_device *netdev,
 		}
 		if (maxrate->tc_maxrate[i] <= upper_limit_100mbps) {
 			max_bw_value[i] = div_u64(maxrate->tc_maxrate[i],
-						  MLX5E_100MB);
+						  MLX5E_100MB_TO_KB);
 			max_bw_value[i] = max_bw_value[i] ? max_bw_value[i] : 1;
 			max_bw_unit[i]  = MLX5_100_MBPS_UNIT;
 		} else if (maxrate->tc_maxrate[i] <= upper_limit_gbps) {
 			max_bw_value[i] = div_u64(maxrate->tc_maxrate[i],
-						  MLX5E_1GB);
+						  MLX5E_1GB_TO_KB);
 			max_bw_unit[i]  = MLX5_GBPS_UNIT;
 		} else {
 			netdev_err(netdev,
-- 
2.31.1


Thread overview: 6+ messages
2025-11-30 10:25 [PATCH net-next 0/4] net/mlx5e: Enhance DCBNL get/set maxrate code Tariq Toukan
2025-11-30 10:25 ` [PATCH net-next 1/4] net/mlx5e: Use u64 instead of __u64 in ieee_setmaxrate Tariq Toukan
2025-11-30 10:25 ` [PATCH net-next 2/4] net/mlx5e: Rename upper_limit_mbps to upper_limit_100mbps Tariq Toukan
2025-11-30 10:25 ` [PATCH net-next 3/4] net/mlx5e: Use U8_MAX instead of hard coded magic number Tariq Toukan
2025-11-30 10:25 ` Tariq Toukan [this message]
2025-12-01 23:10 ` [PATCH net-next 0/4] net/mlx5e: Enhance DCBNL get/set maxrate code patchwork-bot+netdevbpf
