From: Saeed Mahameed <saeed@kernel.org>
To: "David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
	Eric Dumazet <edumazet@google.com>
Cc: Saeed Mahameed <saeedm@nvidia.com>,
	netdev@vger.kernel.org, Tariq Toukan <tariqt@nvidia.com>,
	Shay Drory <shayd@nvidia.com>, Mark Bloch <mbloch@nvidia.com>,
	Roi Dayan <roid@nvidia.com>
Subject: [net-next 06/14] net/mlx5: E-switch, enlarge peer miss group table
Date: Wed, 31 May 2023 23:01:10 -0700
Message-ID: <20230601060118.154015-7-saeed@kernel.org>
In-Reply-To: <20230601060118.154015-1-saeed@kernel.org>

From: Shay Drory <shayd@nvidia.com>

The code implicitly assumes that the peer miss group table needs to
handle only a single peer. It also assumes that the master's
total_vports is greater than or equal to the total_vports of each
peer.
Change the code to size the peer miss group for more than one peer.
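
Below is a minimal stand-alone sketch (not part of the patch) of the
new reservation arithmetic; the MLX5_MAX_PORTS value and the vport
count are assumptions chosen only for illustration:

	#include <stdio.h>

	#define MLX5_MAX_PORTS 4	/* assumed value, for illustration only */

	int main(void)
	{
		int total_vports = 100;	/* assumed example vport count */

		/* Old sizing: room for a single peer's miss entries. */
		int old_entries = total_vports;

		/* New sizing: up to MLX5_MAX_PORTS - 1 peers, each able to
		 * contribute total_vports - 1 entries; the group spans
		 * max_peer_ports + 1 flow indexes, as in the hunk below.
		 */
		int max_peer_ports = (total_vports - 1) * (MLX5_MAX_PORTS - 1);
		int new_entries = max_peer_ports + 1;

		printf("old=%d new=%d\n", old_entries, new_entries);
		return 0;
	}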

Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index a767f3d52c76..ca69ed487413 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -1573,6 +1573,7 @@ esw_create_peer_esw_miss_group(struct mlx5_eswitch *esw,
 			       u32 *flow_group_in,
 			       int *ix)
 {
+	int max_peer_ports = (esw->total_vports - 1) * (MLX5_MAX_PORTS - 1);
 	int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
 	struct mlx5_flow_group *g;
 	void *match_criteria;
@@ -1599,8 +1600,8 @@ esw_create_peer_esw_miss_group(struct mlx5_eswitch *esw,
 
 	MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, *ix);
 	MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index,
-		 *ix + esw->total_vports - 1);
-	*ix += esw->total_vports;
+		 *ix + max_peer_ports);
+	*ix += max_peer_ports + 1;
 
 	g = mlx5_create_flow_group(fdb, flow_group_in);
 	if (IS_ERR(g)) {
@@ -1702,7 +1703,7 @@ static int esw_create_offloads_fdb_tables(struct mlx5_eswitch *esw)
 	 * total vports of the peer (currently is also uses esw->total_vports).
 	 */
 	table_size = MLX5_MAX_PORTS * (esw->total_vports * MAX_SQ_NVPORTS + MAX_PF_SQ) +
-		     esw->total_vports * 2 + MLX5_ESW_MISS_FLOWS;
+		     esw->total_vports * MLX5_MAX_PORTS + MLX5_ESW_MISS_FLOWS;
 
 	/* create the slow path fdb with encap set, so further table instances
 	 * can be created at run time while VFs are probed if the FW allows that.
-- 
2.40.1
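
For intuition, a similar stand-alone sketch of the revised slow-path
FDB table_size budget from the last hunk above; every macro value here
is an assumption for illustration, not the driver's real definition:

	#include <stdio.h>

	#define MLX5_MAX_PORTS      4	/* assumed for illustration */
	#define MAX_SQ_NVPORTS      32	/* assumed for illustration */
	#define MAX_PF_SQ           256	/* assumed for illustration */
	#define MLX5_ESW_MISS_FLOWS 2	/* assumed for illustration */

	int main(void)
	{
		int total_vports = 100;	/* assumed example vport count */

		/* Before: the miss-flow budget covered one peer (x2). */
		int old_size =
			MLX5_MAX_PORTS * (total_vports * MAX_SQ_NVPORTS + MAX_PF_SQ) +
			total_vports * 2 + MLX5_ESW_MISS_FLOWS;

		/* After: the budget covers up to MLX5_MAX_PORTS - 1 peers. */
		int new_size =
			MLX5_MAX_PORTS * (total_vports * MAX_SQ_NVPORTS + MAX_PF_SQ) +
			total_vports * MLX5_MAX_PORTS + MLX5_ESW_MISS_FLOWS;

		printf("old=%d new=%d\n", old_size, new_size);
		return 0;
	}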



Thread overview: 18+ messages
2023-06-01  6:01 [pull request][net-next 00/14] mlx5 updates 2023-05-31 Saeed Mahameed
2023-06-01  6:01 ` [net-next 01/14] net/mlx5e: en_tc, Extend peer flows to a list Saeed Mahameed
2023-06-01  6:01 ` [net-next 02/14] net/mlx5e: tc, Refactor peer add/del flow Saeed Mahameed
2023-06-01  6:01 ` [net-next 03/14] net/mlx5e: rep, store send to vport rules per peer Saeed Mahameed
2023-06-02 16:02   ` Simon Horman
2023-06-02 18:46     ` Saeed Mahameed
2023-06-03  7:25       ` Simon Horman
2023-06-01  6:01 ` [net-next 04/14] net/mlx5e: en_tc, re-factor query route port Saeed Mahameed
2023-06-01  6:01 ` [net-next 05/14] net/mlx5e: Handle offloads flows per peer Saeed Mahameed
2023-06-01  6:01 ` Saeed Mahameed [this message]
2023-06-01  6:01 ` [net-next 07/14] net/mlx5: E-switch, refactor FDB miss rule add/remove Saeed Mahameed
2023-06-01  6:01 ` [net-next 08/14] net/mlx5: E-switch, Handle multiple master egress rules Saeed Mahameed
2023-06-01  6:01 ` [net-next 09/14] net/mlx5: E-switch, generalize shared FDB creation Saeed Mahameed
2023-06-01  6:01 ` [net-next 10/14] net/mlx5: DR, handle more than one peer domain Saeed Mahameed
2023-06-01  6:01 ` [net-next 11/14] net/mlx5: Devcom, Rename paired to ready Saeed Mahameed
2023-06-01  6:01 ` [net-next 12/14] net/mlx5: E-switch, mark devcom as not ready when all eswitches are unpaired Saeed Mahameed
2023-06-01  6:01 ` [net-next 13/14] net/mlx5: Devcom, introduce devcom_for_each_peer_entry Saeed Mahameed
2023-06-01  6:01 ` [net-next 14/14] net/mlx5: Devcom, extend mlx5_devcom_send_event to work with more than two devices Saeed Mahameed
