From: Saeed Mahameed <saeed@kernel.org>
To: "David S. Miller" <davem@davemloft.net>,
Jakub Kicinski <kuba@kernel.org>
Cc: netdev@vger.kernel.org, Shay Drory <shayd@nvidia.com>,
Moshe Shemesh <moshe@nvidia.com>,
Saeed Mahameed <saeedm@nvidia.com>
Subject: [PATCH net-next 03/15] net/mlx5: Move affinity assignment into irq_request
Date: Thu, 6 Jan 2022 16:29:44 -0800
Message-ID: <20220107002956.74849-4-saeed@kernel.org>
In-Reply-To: <20220107002956.74849-1-saeed@kernel.org>
From: Shay Drory <shayd@nvidia.com>
Move the affinity binding of the IRQ into the irq_request() function, so
that the IRQ is bound before it is inserted into the xarray.
After this change, the IRQ is ready for use as soon as it is inserted
into the xarray.
Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
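With this change a caller no longer touches irq->mask itself: it passes the
desired mask to irq_request(), which binds it before the IRQ is inserted into
the xarray. Condensed from the hunks below (error handling omitted), the two
sides now read roughly:

	/* inside irq_request(), before the xarray insert */
	if (affinity) {
		cpumask_copy(irq->mask, affinity);
		irq_set_affinity_hint(irq->irqn, irq->mask);
	}

	/* caller side, e.g. irq_pool_create_irq(): just pass the mask through */
	return irq_request(pool, irq_index, affinity);
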
.../net/ethernet/mellanox/mlx5/core/pci_irq.c | 22 ++++++++-----------
1 file changed, 9 insertions(+), 13 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
index 510a9b91ff9a..656a55114600 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
@@ -215,7 +215,8 @@ static bool irq_pool_is_sf_pool(struct mlx5_irq_pool *pool)
return !strncmp("mlx5_sf", pool->name, strlen("mlx5_sf"));
}
-static struct mlx5_irq *irq_request(struct mlx5_irq_pool *pool, int i)
+static struct mlx5_irq *irq_request(struct mlx5_irq_pool *pool, int i,
+ struct cpumask *affinity)
{
struct mlx5_core_dev *dev = pool->dev;
char name[MLX5_MAX_IRQ_NAME];
@@ -244,6 +245,10 @@ static struct mlx5_irq *irq_request(struct mlx5_irq_pool *pool, int i)
err = -ENOMEM;
goto err_cpumask;
}
+ if (affinity) {
+ cpumask_copy(irq->mask, affinity);
+ irq_set_affinity_hint(irq->irqn, irq->mask);
+ }
irq->pool = pool;
irq->refcount = 1;
irq->index = i;
@@ -255,6 +260,7 @@ static struct mlx5_irq *irq_request(struct mlx5_irq_pool *pool, int i)
}
return irq;
err_xa:
+ irq_set_affinity_hint(irq->irqn, NULL);
free_cpumask_var(irq->mask);
err_cpumask:
free_irq(irq->irqn, &irq->nh);
@@ -304,7 +310,6 @@ int mlx5_irq_get_index(struct mlx5_irq *irq)
static struct mlx5_irq *irq_pool_create_irq(struct mlx5_irq_pool *pool,
struct cpumask *affinity)
{
- struct mlx5_irq *irq;
u32 irq_index;
int err;
@@ -312,12 +317,7 @@ static struct mlx5_irq *irq_pool_create_irq(struct mlx5_irq_pool *pool,
GFP_KERNEL);
if (err)
return ERR_PTR(err);
- irq = irq_request(pool, irq_index);
- if (IS_ERR(irq))
- return irq;
- cpumask_copy(irq->mask, affinity);
- irq_set_affinity_hint(irq->irqn, irq->mask);
- return irq;
+ return irq_request(pool, irq_index, affinity);
}
/* looking for the irq with the smallest refcount and the same affinity */
@@ -392,11 +392,7 @@ irq_pool_request_vector(struct mlx5_irq_pool *pool, int vecidx,
irq_get_locked(irq);
goto unlock;
}
- irq = irq_request(pool, vecidx);
- if (IS_ERR(irq) || !affinity)
- goto unlock;
- cpumask_copy(irq->mask, affinity);
- irq_set_affinity_hint(irq->irqn, irq->mask);
+ irq = irq_request(pool, vecidx, affinity);
unlock:
mutex_unlock(&pool->lock);
return irq;
--
2.33.1