From: Saeed Mahameed <saeed@kernel.org>
To: "David S. Miller" <davem@davemloft.net>,
Jakub Kicinski <kuba@kernel.org>
Cc: netdev@vger.kernel.org, Shay Drory <shayd@nvidia.com>,
Parav Pandit <parav@nvidia.com>,
Saeed Mahameed <saeedm@nvidia.com>
Subject: [net 08/10] net/mlx5: Fix setting number of EQs of SFs
Date: Thu, 30 Sep 2021 16:14:59 -0700 [thread overview]
Message-ID: <20210930231501.39062-9-saeed@kernel.org> (raw)
In-Reply-To: <20210930231501.39062-1-saeed@kernel.org>
From: Shay Drory <shayd@nvidia.com>
When setting the number of completion EQs of the SF, take the number of
online CPUs into account.
Without this, when fewer than 8 CPUs are online, 8 completion EQs are
still allocated even though fewer are needed.
Fixes: c36326d38d93 ("net/mlx5: Round-Robin EQs over IRQs")
Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Parav Pandit <parav@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
index df54f62a38ac..763c83a02380 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
@@ -633,8 +633,9 @@ void mlx5_irq_table_destroy(struct mlx5_core_dev *dev)
int mlx5_irq_table_get_sfs_vec(struct mlx5_irq_table *table)
{
if (table->sf_comp_pool)
- return table->sf_comp_pool->xa_num_irqs.max -
- table->sf_comp_pool->xa_num_irqs.min + 1;
+ return min_t(int, num_online_cpus(),
+ table->sf_comp_pool->xa_num_irqs.max -
+ table->sf_comp_pool->xa_num_irqs.min + 1);
else
return mlx5_irq_table_get_num_comp(table);
}
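
For illustration only, here is a minimal userspace C sketch of the clamp
this patch introduces. The struct, the pool size of 8, and the online-CPU
count are hypothetical stand-ins, not the driver's actual types:

	#include <stdio.h>

	#define MIN(a, b) ((a) < (b) ? (a) : (b))

	/* Stand-in for the SF completion IRQ pool's index range. */
	struct xa_range {
		int min;
		int max;
	};

	static int sfs_vec_before(const struct xa_range *r)
	{
		/* Old behavior: always return the full pool size. */
		return r->max - r->min + 1;
	}

	static int sfs_vec_after(const struct xa_range *r, int online_cpus)
	{
		/* Fixed behavior: never more EQs than online CPUs. */
		return MIN(online_cpus, r->max - r->min + 1);
	}

	int main(void)
	{
		struct xa_range pool = { .min = 0, .max = 7 }; /* 8 IRQs */
		int online_cpus = 4;                           /* assumed */

		printf("before: %d EQs\n", sfs_vec_before(&pool));             /* 8 */
		printf("after:  %d EQs\n", sfs_vec_after(&pool, online_cpus)); /* 4 */
		return 0;
	}

With 4 online CPUs the old computation still yields 8 EQs, while the
clamped version yields 4, matching the commit message.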
--
2.31.1
Thread overview: 14+ messages
2021-09-30 23:14 [pull request][net 00/10] mlx5 fixes 2021-09-30 Saeed Mahameed
2021-09-30 23:14 ` [net 01/10] net/mlx5e: IPSEC RX, enable checksum complete Saeed Mahameed
2021-10-01 13:10 ` patchwork-bot+netdevbpf
2021-10-01 18:27 ` Saeed Mahameed
2021-10-02 11:21 ` David Miller
2021-09-30 23:14 ` [net 02/10] net/mlx5e: Keep the value for maximum number of channels in-sync Saeed Mahameed
2021-09-30 23:14 ` [net 03/10] net/mlx5e: Improve MQPRIO resiliency Saeed Mahameed
2021-09-30 23:14 ` [net 04/10] net/mlx5: E-Switch, Fix double allocation of acl flow counter Saeed Mahameed
2021-09-30 23:14 ` [net 05/10] net/mlx5: Force round second at 1PPS out start time Saeed Mahameed
2021-09-30 23:14 ` [net 06/10] net/mlx5: Avoid generating event after PPS out in Real time mode Saeed Mahameed
2021-09-30 23:14 ` [net 07/10] net/mlx5: Fix length of irq_index in chars Saeed Mahameed
2021-09-30 23:14 ` Saeed Mahameed [this message]
2021-09-30 23:15 ` [net 09/10] net/mlx5e: Fix the presented RQ index in PTP stats Saeed Mahameed
2021-09-30 23:15 ` [net 10/10] net/mlx5e: Mutually exclude setting of TX-port-TS and MQPRIO in channel mode Saeed Mahameed