From: Simon Horman <horms@kernel.org>
To: Leon Romanovsky <leon@kernel.org>
Cc: Jason Gunthorpe <jgg@nvidia.com>, Shun Hao <shunh@nvidia.com>,
Eric Dumazet <edumazet@google.com>,
Jakub Kicinski <kuba@kernel.org>,
linux-rdma@vger.kernel.org, netdev@vger.kernel.org,
Paolo Abeni <pabeni@redhat.com>,
Saeed Mahameed <saeedm@nvidia.com>
Subject: Re: [PATCH mlx5-next 2/5] net/mlx5: Manage ICM type of SW encap
Date: Thu, 30 Nov 2023 18:16:11 +0000
Message-ID: <20231130181611.GL32077@kernel.org>
In-Reply-To: <37dc4fd78dfa3374ff53aa602f038a2ec76eb069.1701172481.git.leon@kernel.org>
On Tue, Nov 28, 2023 at 02:29:46PM +0200, Leon Romanovsky wrote:
> From: Shun Hao <shunh@nvidia.com>
>
> Support allocating and deallocating the new SW encap ICM type of memory.
> The new ICM type is used for encap contexts that are allocated and
> managed by SW instead of FW. This increases the maximum number of encap
> contexts and speeds up their allocation.
>
> Signed-off-by: Shun Hao <shunh@nvidia.com>
> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
...
> @@ -164,6 +188,13 @@ int mlx5_dm_sw_icm_alloc(struct mlx5_core_dev *dev, enum mlx5_sw_icm_type type,
> log_header_modify_pattern_sw_icm_size);
> block_map = dm->header_modify_pattern_sw_icm_alloc_blocks;
> break;
> + case MLX5_SW_ICM_TYPE_SW_ENCAP:
> + icm_start_addr = MLX5_CAP64_DEV_MEM(dev,
> + indirect_encap_sw_icm_start_address);
> + log_icm_size = MLX5_CAP_DEV_MEM(dev,
> + log_indirect_encap_sw_icm_size);
> + block_map = dm->header_encap_sw_icm_alloc_blocks;
> + break;
> default:
> return -EINVAL;
> }
> @@ -242,6 +273,11 @@ int mlx5_dm_sw_icm_dealloc(struct mlx5_core_dev *dev, enum mlx5_sw_icm_type type
> header_modify_pattern_sw_icm_start_address);
> block_map = dm->header_modify_pattern_sw_icm_alloc_blocks;
> break;
> + case MLX5_SW_ICM_TYPE_SW_ENCAP:
> + icm_start_addr = MLX5_CAP64_DEV_MEM(dev,
> + indirect_encap_sw_icm_start_address);
> + block_map = dm->header_encap_sw_icm_alloc_blocks;
> + break;
> default:
> return -EINVAL;
> }
Hi Leon and Shun,

a minor nit from my side: this patch uses MLX5_SW_ICM_TYPE_SW_ENCAP, but
that enum value isn't introduced until the following patch. As it stands,
this patch won't compile on its own, which would break bisection; the enum
addition probably belongs in this patch or an earlier one.
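
For reference, moving the value into this patch would look roughly like
the sketch below. I'm assuming the following patch simply appends it to
the existing enum mlx5_sw_icm_type in include/linux/mlx5/driver.h; I
haven't verified the exact context:

	/* include/linux/mlx5/driver.h (sketch only; exact context unverified) */
	enum mlx5_sw_icm_type {
		MLX5_SW_ICM_TYPE_STEERING,
		MLX5_SW_ICM_TYPE_HEADER_MODIFY,
		MLX5_SW_ICM_TYPE_HEADER_MODIFY_PATTERN,
		MLX5_SW_ICM_TYPE_SW_ENCAP,	/* new: SW-managed encap contexts */
	};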
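
And purely for illustration, here is how I'd expect the new type to be
exercised through the existing mlx5_dm_sw_icm_{alloc,dealloc}() entry
points. This is a sketch, not code from the series: the uid and alignment
values are made up, and I'm assuming the existing MLX5_SW_ICM_BLOCK_SIZE()
helper for the length:

	#include <linux/mlx5/driver.h>

	/* Sketch: allocate one block of SW encap ICM, then free it. */
	static int sw_encap_icm_smoke_test(struct mlx5_core_dev *dev, u16 uid)
	{
		phys_addr_t addr;
		u32 obj_id;
		int err;

		err = mlx5_dm_sw_icm_alloc(dev, MLX5_SW_ICM_TYPE_SW_ENCAP,
					   MLX5_SW_ICM_BLOCK_SIZE(dev),
					   0 /* log_alignment */, uid,
					   &addr, &obj_id);
		if (err)
			return err;

		/* SW steering would program encap contexts at 'addr' here. */

		return mlx5_dm_sw_icm_dealloc(dev, MLX5_SW_ICM_TYPE_SW_ENCAP,
					      MLX5_SW_ICM_BLOCK_SIZE(dev),
					      uid, addr, obj_id);
	}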
Thread overview: 8+ messages
2023-11-28 12:29 [PATCH mlx5-next 0/5] Expose c0 and SW encap ICM for RDMA Leon Romanovsky
2023-11-28 12:29 ` [PATCH mlx5-next 1/5] net/mlx5: Introduce indirect-sw-encap icm properties Leon Romanovsky
2023-11-28 12:29 ` [PATCH mlx5-next 2/5] net/mlx5: Manage ICM type of SW encap Leon Romanovsky
2023-11-30 18:16 ` Simon Horman [this message]
2023-12-01 17:15 ` Leon Romanovsky
2023-11-28 12:29 ` [PATCH mlx5-next 3/5] RDMA/mlx5: Support handling of SW encap ICM area Leon Romanovsky
2023-11-28 12:29 ` [PATCH mlx5-next 4/5] net/mlx5: E-Switch, expose eswitch manager vport Leon Romanovsky
2023-11-28 12:29 ` [PATCH mlx5-next 5/5] RDMA/mlx5: Expose register c0 for RDMA device Leon Romanovsky