From: Yajun Deng <yajun.deng@linux.dev>
To: saeedm@nvidia.com, leon@kernel.org, davem@davemloft.net,
edumazet@google.com, kuba@kernel.org, pabeni@redhat.com
Cc: netdev@vger.kernel.org, linux-rdma@vger.kernel.org,
linux-kernel@vger.kernel.org, Yajun Deng <yajun.deng@linux.dev>
Subject: [PATCH] net/mlx5: Add affinity for each irq
Date: Mon, 6 Jun 2022 15:13:51 +0800 [thread overview]
Message-ID: <20220606071351.3550997-1-yajun.deng@linux.dev> (raw)
mlx5 allocates at least one irq per cpu. Bind each irq to a dedicated
cpu to improve interrupt performance.
Signed-off-by: Yajun Deng <yajun.deng@linux.dev>
---
.../net/ethernet/mellanox/mlx5/core/pci_irq.c | 19 ++++++++++++++++++-
1 file changed, 18 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
index 662f1d55e30e..d13fc403fe78 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
@@ -624,11 +624,27 @@ int mlx5_irq_table_get_num_comp(struct mlx5_irq_table *table)
return table->pf_pool->xa_num_irqs.max - table->pf_pool->xa_num_irqs.min;
}
+static void mlx5_calc_sets(struct irq_affinity *affd, unsigned int nvecs)
+{
+ int i;
+
+ affd->nr_sets = (nvecs - 1) / num_possible_cpus() + 1;
+
+ for (i = 0; i < affd->nr_sets; i++) {
+ affd->set_size[i] = min(nvecs, num_possible_cpus());
+ nvecs -= num_possible_cpus();
+ }
+}
+
int mlx5_irq_table_create(struct mlx5_core_dev *dev)
{
int num_eqs = MLX5_CAP_GEN(dev, max_num_eqs) ?
MLX5_CAP_GEN(dev, max_num_eqs) :
1 << MLX5_CAP_GEN(dev, log_max_eq);
+ struct irq_affinity affd = {
+ .pre_vectors = 0,
+ .calc_sets = mlx5_calc_sets,
+ };
int total_vec;
int pf_vec;
int err;
@@ -644,7 +660,8 @@ int mlx5_irq_table_create(struct mlx5_core_dev *dev)
total_vec += MLX5_IRQ_CTRL_SF_MAX +
MLX5_COMP_EQS_PER_SF * mlx5_sf_max_functions(dev);
- total_vec = pci_alloc_irq_vectors(dev->pdev, 1, total_vec, PCI_IRQ_MSIX);
+ total_vec = pci_alloc_irq_vectors_affinity(dev->pdev, 1, total_vec,
+ PCI_IRQ_MSIX | PCI_IRQ_AFFINITY, &affd);
if (total_vec < 0)
return total_vec;
pf_vec = min(pf_vec, total_vec);
--
2.25.1