From: Saeed Mahameed <saeed@kernel.org>
To: "David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
	Eric Dumazet <edumazet@google.com>
Cc: Saeed Mahameed <saeedm@nvidia.com>,
	netdev@vger.kernel.org, Tariq Toukan <tariqt@nvidia.com>,
	Yevgeny Kliteynik <kliteyn@nvidia.com>,
	Alex Vesker <valex@nvidia.com>
Subject: [net-next 06/14] net/mlx5: DR, Initialize chunk's ste_arrays at chunk creation
Date: Mon, 24 Oct 2022 14:57:26 +0100	[thread overview]
Message-ID: <20221024135734.69673-7-saeed@kernel.org> (raw)
In-Reply-To: <20221024135734.69673-1-saeed@kernel.org>

From: Yevgeny Kliteynik <kliteyn@nvidia.com>

Rather than cleaning the corresponding chunk's section of ste_arrays on
chunk deletion, initialize these areas upon chunk creation.

Chunk destruction tends to come in large batches (during pool syncing).
To reduce the "hiccup" in such cases, move the ste_arrays init from
chunk destruction to chunk creation.
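
For illustration only, a minimal sketch of the resulting pattern, using
hypothetical, simplified types rather than the driver's real mlx5dr
structures:

	#include <stdlib.h>
	#include <string.h>

	struct ste { void *next_htbl; };      /* placeholder entry type */

	struct chunk {
		unsigned char *hw_ste_arr;    /* slice of the buddy's array */
		struct ste *ste_arr;          /* slice of the buddy's array */
		int num_entries;
		int ste_size;
	};

	static void chunk_init(struct chunk *c)
	{
		/* Zero this chunk's slice of the arrays up front, at
		 * creation time (moved here from the destroy path). */
		memset(c->hw_ste_arr, 0, c->num_entries * c->ste_size);
		memset(c->ste_arr, 0,
		       c->num_entries * sizeof(c->ste_arr[0]));
	}

	static void chunk_destroy(struct chunk *c)
	{
		/* Destruction, which tends to run in large batches, no
		 * longer does any memset work; it only frees the chunk. */
		free(c);
	}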

Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 .../mellanox/mlx5/core/steering/dr_icm_pool.c | 25 +++----------------
 1 file changed, 4 insertions(+), 21 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c
index 4cdc9e9a54e1..7ca1ef073f55 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c
@@ -177,42 +177,25 @@ static int dr_icm_buddy_get_ste_size(struct mlx5dr_icm_buddy_mem *buddy)
 
 static void dr_icm_chunk_ste_init(struct mlx5dr_icm_chunk *chunk, int offset)
 {
+	int num_of_entries = mlx5dr_icm_pool_get_chunk_num_of_entries(chunk);
 	struct mlx5dr_icm_buddy_mem *buddy = chunk->buddy_mem;
+	int ste_size = dr_icm_buddy_get_ste_size(buddy);
 	int index = offset / DR_STE_SIZE;
 
 	chunk->ste_arr = &buddy->ste_arr[index];
 	chunk->miss_list = &buddy->miss_list[index];
-	chunk->hw_ste_arr = buddy->hw_ste_arr +
-			    index * dr_icm_buddy_get_ste_size(buddy);
-}
-
-static void dr_icm_chunk_ste_cleanup(struct mlx5dr_icm_chunk *chunk)
-{
-	int num_of_entries = mlx5dr_icm_pool_get_chunk_num_of_entries(chunk);
-	struct mlx5dr_icm_buddy_mem *buddy = chunk->buddy_mem;
+	chunk->hw_ste_arr = buddy->hw_ste_arr + index * ste_size;
 
-	memset(chunk->hw_ste_arr, 0,
-	       num_of_entries * dr_icm_buddy_get_ste_size(buddy));
+	memset(chunk->hw_ste_arr, 0, num_of_entries * ste_size);
 	memset(chunk->ste_arr, 0,
 	       num_of_entries * sizeof(chunk->ste_arr[0]));
 }
 
-static enum mlx5dr_icm_type
-get_chunk_icm_type(struct mlx5dr_icm_chunk *chunk)
-{
-	return chunk->buddy_mem->pool->icm_type;
-}
-
 static void dr_icm_chunk_destroy(struct mlx5dr_icm_chunk *chunk)
 {
-	enum mlx5dr_icm_type icm_type = get_chunk_icm_type(chunk);
-
 	chunk->buddy_mem->used_memory -= mlx5dr_icm_pool_get_chunk_byte_size(chunk);
 	list_del(&chunk->chunk_list);
 
-	if (icm_type == DR_ICM_TYPE_STE)
-		dr_icm_chunk_ste_cleanup(chunk);
-
 	kvfree(chunk);
 }
 
-- 
2.37.3


Thread overview: 17+ messages
2022-10-24 13:57 [pull request][net-next 00/14] mlx5 updates 2022-10-24 Saeed Mahameed
2022-10-24 13:57 ` [net-next 01/14] net/mlx5: DR, In destroy flow, free resources even if FW command failed Saeed Mahameed
2022-10-24 13:57 ` [net-next 02/14] net/mlx5: DR, Fix the SMFS sync_steering for fast teardown Saeed Mahameed
2022-10-24 13:57 ` [net-next 03/14] net/mlx5: DR, Check device state when polling CQ Saeed Mahameed
2022-10-24 13:57 ` [net-next 04/14] net/mlx5: DR, Remove unneeded argument from dr_icm_chunk_destroy Saeed Mahameed
2022-10-24 13:57 ` [net-next 05/14] net/mlx5: DR, Allocate ste_arr on stack instead of dynamically Saeed Mahameed
2022-10-25  4:24   ` Jakub Kicinski
2022-10-25 10:22     ` Yevgeny Kliteynik
2022-10-24 13:57 ` [net-next 06/14] net/mlx5: DR, Initialize chunk's ste_arrays at chunk creation Saeed Mahameed [this message]
2022-10-24 13:57 ` [net-next 07/14] net/mlx5: DR, Handle domain memory resources init/uninit separately Saeed Mahameed
2022-10-24 13:57 ` [net-next 08/14] net/mlx5: DR, In rehash write the line in the entry immediately Saeed Mahameed
2022-10-24 13:57 ` [net-next 09/14] net/mlx5: DR, Manage STE send info objects in pool Saeed Mahameed
2022-10-24 13:57 ` [net-next 10/14] net/mlx5: DR, Allocate icm_chunks from their own slab allocator Saeed Mahameed
2022-10-24 13:57 ` [net-next 11/14] net/mlx5: DR, Allocate htbl from its own slab allocator Saeed Mahameed
2022-10-24 13:57 ` [net-next 12/14] net/mlx5: DR, Lower sync threshold for ICM hot memory Saeed Mahameed
2022-10-24 13:57 ` [net-next 13/14] net/mlx5: DR, Keep track of hot ICM chunks in an array instead of list Saeed Mahameed
2022-10-24 13:57 ` [net-next 14/14] net/mlx5: DR, Remove the buddy used_list Saeed Mahameed
