From: Saeed Mahameed <saeed@kernel.org>
To: "David S. Miller" <davem@davemloft.net>,
Jakub Kicinski <kuba@kernel.org>
Cc: netdev@vger.kernel.org, Maher Sanalla <msanalla@nvidia.com>,
Maor Gottlieb <maorg@nvidia.com>,
Saeed Mahameed <saeedm@nvidia.com>
Subject: [PATCH net-next 07/15] net/mlx5: Update log_max_qp value to FW max capability
Date: Thu, 6 Jan 2022 16:29:48 -0800
Message-ID: <20220107002956.74849-8-saeed@kernel.org>
In-Reply-To: <20220107002956.74849-1-saeed@kernel.org>
From: Maher Sanalla <msanalla@nvidia.com>
log_max_qp in the driver's default profile #2 was set to 18, but FW
actually supports at most 17 - a mismatch that triggered this concerning
print every time the driver was loaded:
"log_max_qp value in current profile is 18, changing it to HCA capability
limit (17)"
The expected behavior of mlx5_profile #2 is to match the maximum FW
capability with regard to log_max_qp. Thus, initialize log_max_qp in
profile #2 to a defined sentinel value (0xff), which means that when this
profile is loaded, log_max_qp is set to whatever the currently installed
FW supports at most.
Signed-off-by: Maher Sanalla <msanalla@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
drivers/net/ethernet/mellanox/mlx5/core/main.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
index 6b225bb5751a..2c774f367199 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
@@ -98,6 +98,8 @@ enum {
MLX5_ATOMIC_REQ_MODE_HOST_ENDIANNESS = 0x1,
};
+#define LOG_MAX_SUPPORTED_QPS 0xff
+
static struct mlx5_profile profile[] = {
[0] = {
.mask = 0,
@@ -109,7 +111,7 @@ static struct mlx5_profile profile[] = {
[2] = {
.mask = MLX5_PROF_MASK_QP_SIZE |
MLX5_PROF_MASK_MR_CACHE,
- .log_max_qp = 18,
+ .log_max_qp = LOG_MAX_SUPPORTED_QPS,
.mr_cache[0] = {
.size = 500,
.limit = 250
@@ -523,7 +525,9 @@ static int handle_hca_cap(struct mlx5_core_dev *dev, void *set_ctx)
to_fw_pkey_sz(dev, 128));
/* Check log_max_qp from HCA caps to set in current profile */
- if (MLX5_CAP_GEN_MAX(dev, log_max_qp) < prof->log_max_qp) {
+ if (prof->log_max_qp == LOG_MAX_SUPPORTED_QPS) {
+ prof->log_max_qp = MLX5_CAP_GEN_MAX(dev, log_max_qp);
+ } else if (MLX5_CAP_GEN_MAX(dev, log_max_qp) < prof->log_max_qp) {
mlx5_core_warn(dev, "log_max_qp value in current profile is %d, changing it to HCA capability limit (%d)\n",
prof->log_max_qp,
MLX5_CAP_GEN_MAX(dev, log_max_qp));
--
2.33.1