From: Saeed Mahameed <saeed@kernel.org>
To: "David S. Miller" <davem@davemloft.net>,
Jakub Kicinski <kuba@kernel.org>
Cc: netdev@vger.kernel.org, Tariq Toukan <tariqt@nvidia.com>,
Shay Drory <shayd@nvidia.com>, Moshe Shemesh <moshe@nvidia.com>,
Saeed Mahameed <saeedm@nvidia.com>
Subject: [net-next 15/15] net/mlx5: Enable single IRQ for PCI Function
Date: Mon, 4 Oct 2021 18:13:02 -0700 [thread overview]
Message-ID: <20211005011302.41793-16-saeed@kernel.org> (raw)
In-Reply-To: <20211005011302.41793-1-saeed@kernel.org>
From: Shay Drory <shayd@nvidia.com>
Prior to this patch the driver required two IRQs to function properly:
one IRQ for control and at least one IRQ for IO. This requirement can
now be relaxed to a single IRQ, since the driver supports IRQ sharing
and the control and IO EQs can share the same IRQ. This is needed to
support a large number of VFs.
Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
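Note (illustration only, not part of the patch): below is a minimal
userspace sketch of the naming logic the first pci_irq.c hunk adds.
The pool range struct and MAX_IRQ_NAME here are simplified stand-ins
for the driver's xa_num_irqs and MLX5_MAX_IRQ_NAME. When the pool owns
a single vector, its highest index is 0 and that one IRQ is named
"mlx5_combined<N>" because control and completion EQs share it.

/* Standalone sketch of the IRQ naming logic; stand-in types only. */
#include <stdio.h>

#define MAX_IRQ_NAME 32 /* stand-in for MLX5_MAX_IRQ_NAME */

struct irq_pool_range {
	int min;
	int max; /* highest vector index owned by the pool */
};

static void set_irq_name(const struct irq_pool_range *range,
			 char *name, int vecidx)
{
	if (!range->max) {
		/* single vector: control and completion EQs share it */
		snprintf(name, MAX_IRQ_NAME, "mlx5_combined%d", vecidx);
		return;
	}
	if (vecidx == range->max) {
		/* last vector is dedicated to control (async) events */
		snprintf(name, MAX_IRQ_NAME, "mlx5_async%d", vecidx);
		return;
	}
	snprintf(name, MAX_IRQ_NAME, "mlx5_comp%d", vecidx);
}

int main(void)
{
	char name[MAX_IRQ_NAME];
	struct irq_pool_range single = { .min = 0, .max = 0 };
	struct irq_pool_range multi  = { .min = 0, .max = 3 };

	set_irq_name(&single, name, 0);
	printf("%s\n", name); /* mlx5_combined0 */
	set_irq_name(&multi, name, 3);
	printf("%s\n", name); /* mlx5_async3 */
	set_irq_name(&multi, name, 1);
	printf("%s\n", name); /* mlx5_comp1 */
	return 0;
}

The single-vector case is detected purely from the pool's index range,
which is why the patch needs no extra flag in the pool structure.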
.../net/ethernet/mellanox/mlx5/core/pci_irq.c | 26 ++++++++++++++-----
include/linux/mlx5/eq.h | 1 -
2 files changed, 19 insertions(+), 8 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
index a66144b54fc8..830444f927d4 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
@@ -196,6 +196,12 @@ static void irq_sf_set_name(struct mlx5_irq_pool *pool, char *name, int vecidx)
static void irq_set_name(struct mlx5_irq_pool *pool, char *name, int vecidx)
{
+ if (!pool->xa_num_irqs.max) {
+ /* in case we only have a single irq for the device */
+ snprintf(name, MLX5_MAX_IRQ_NAME, "mlx5_combined%d", vecidx);
+ return;
+ }
+
if (vecidx == pool->xa_num_irqs.max) {
snprintf(name, MLX5_MAX_IRQ_NAME, "mlx5_async%d", vecidx);
return;
@@ -204,6 +210,11 @@ static void irq_set_name(struct mlx5_irq_pool *pool, char *name, int vecidx)
snprintf(name, MLX5_MAX_IRQ_NAME, "mlx5_comp%d", vecidx);
}
+static bool irq_pool_is_sf_pool(struct mlx5_irq_pool *pool)
+{
+ return !strncmp("mlx5_sf", pool->name, strlen("mlx5_sf"));
+}
+
static struct mlx5_irq *irq_request(struct mlx5_irq_pool *pool, int i)
{
struct mlx5_core_dev *dev = pool->dev;
@@ -215,7 +226,7 @@ static struct mlx5_irq *irq_request(struct mlx5_irq_pool *pool, int i)
if (!irq)
return ERR_PTR(-ENOMEM);
irq->irqn = pci_irq_vector(dev->pdev, i);
- if (!pool->name[0])
+ if (!irq_pool_is_sf_pool(pool))
irq_set_name(pool, name, i);
else
irq_sf_set_name(pool, name, i);
@@ -385,6 +396,9 @@ irq_pool_request_vector(struct mlx5_irq_pool *pool, int vecidx,
if (IS_ERR(irq) || !affinity)
goto unlock;
cpumask_copy(irq->mask, affinity);
+ if (!irq_pool_is_sf_pool(pool) && !pool->xa_num_irqs.max &&
+ cpumask_empty(irq->mask))
+ cpumask_set_cpu(0, irq->mask);
irq_set_affinity_hint(irq->irqn, irq->mask);
unlock:
mutex_unlock(&pool->lock);
@@ -577,6 +591,8 @@ void mlx5_irq_table_cleanup(struct mlx5_core_dev *dev)
int mlx5_irq_table_get_num_comp(struct mlx5_irq_table *table)
{
+ if (!table->pf_pool->xa_num_irqs.max)
+ return 1;
return table->pf_pool->xa_num_irqs.max - table->pf_pool->xa_num_irqs.min;
}
@@ -592,19 +608,15 @@ int mlx5_irq_table_create(struct mlx5_core_dev *dev)
if (mlx5_core_is_sf(dev))
return 0;
- pf_vec = MLX5_CAP_GEN(dev, num_ports) * num_online_cpus() +
- MLX5_IRQ_VEC_COMP_BASE;
+ pf_vec = MLX5_CAP_GEN(dev, num_ports) * num_online_cpus() + 1;
pf_vec = min_t(int, pf_vec, num_eqs);
- if (pf_vec <= MLX5_IRQ_VEC_COMP_BASE)
- return -ENOMEM;
total_vec = pf_vec;
if (mlx5_sf_max_functions(dev))
total_vec += MLX5_IRQ_CTRL_SF_MAX +
MLX5_COMP_EQS_PER_SF * mlx5_sf_max_functions(dev);
- total_vec = pci_alloc_irq_vectors(dev->pdev, MLX5_IRQ_VEC_COMP_BASE + 1,
- total_vec, PCI_IRQ_MSIX);
+ total_vec = pci_alloc_irq_vectors(dev->pdev, 1, total_vec, PCI_IRQ_MSIX);
if (total_vec < 0)
return total_vec;
pf_vec = min(pf_vec, total_vec);
diff --git a/include/linux/mlx5/eq.h b/include/linux/mlx5/eq.h
index cea6ecb4b73e..ea3ff5a8ced3 100644
--- a/include/linux/mlx5/eq.h
+++ b/include/linux/mlx5/eq.h
@@ -4,7 +4,6 @@
#ifndef MLX5_CORE_EQ_H
#define MLX5_CORE_EQ_H
-#define MLX5_IRQ_VEC_COMP_BASE 1
#define MLX5_NUM_CMD_EQE (32)
#define MLX5_NUM_ASYNC_EQE (0x1000)
#define MLX5_NUM_SPARE_EQE (0x80)
--
2.31.1
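Illustration only (simplified inputs, not the driver's real helpers): a
small sketch of the vector budget mlx5_irq_table_create() now computes.
The PF requests one completion vector per online CPU per port plus one
control vector, clamps that by the device's EQ count, and the minimum
passed to pci_alloc_irq_vectors() drops from two vectors to one, so
probing no longer fails with -ENOMEM when only a single vector is
available.

/* Standalone sketch of the PF IRQ vector budget after this patch. */
#include <stdio.h>

static int pf_vector_budget(int num_ports, int num_online_cpus,
			    int num_eqs)
{
	/* one comp vector per CPU per port, plus one control vector */
	int pf_vec = num_ports * num_online_cpus + 1;

	if (pf_vec > num_eqs)
		pf_vec = num_eqs;
	return pf_vec;
}

int main(void)
{
	/* A constrained function: 1 port, 4 CPUs, only 2 EQs exposed. */
	printf("requested: %d, minimum acceptable: 1\n",
	       pf_vector_budget(1, 4, 2));
	return 0;
}

With 1 port, 4 CPUs and 2 EQs the requested budget is 2, and a grant of
just one MSI-X vector is now accepted instead of failing the probe.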