From: Selvin Xavier <selvin.xavier@broadcom.com>
To: leon@kernel.org, jgg@ziepe.ca
Cc: linux-rdma@vger.kernel.org, andrew.gospodarek@broadcom.com,
kalesh-anakkur.purayil@broadcom.com,
Selvin Xavier <selvin.xavier@broadcom.com>
Subject: [PATCH rdma-next v2 4/4] RDMA/bnxt_re: Cache MSIx info to a local structure
Date: Thu, 14 Nov 2024 01:49:08 -0800
Message-ID: <1731577748-1804-5-git-send-email-selvin.xavier@broadcom.com>
In-Reply-To: <1731577748-1804-1-git-send-email-selvin.xavier@broadcom.com>
From: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
The L2 driver allocates the MSI-X vectors for RoCE and passes them
to the RoCE driver through the en_dev structure. During probe, cache
the MSI-X related info in a local structure.
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
---
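Note for reviewers: below is a minimal standalone sketch of the
probe-time caching pattern this patch applies. Structure and field
names are taken from the diff; the helper bnxt_re_cache_msix() itself
is hypothetical, the patch open-codes the copy in bnxt_re_dev_init().

static void bnxt_re_cache_msix(struct bnxt_re_dev *rdev)
{
	struct bnxt_re_nq_record *nqr = rdev->nqr;

	/* Number of MSI-X vectors the L2 driver reserved for RoCE */
	nqr->num_msix = rdev->en_dev->ulp_tbl->msix_requested;

	/* Snapshot vector, db_offset and ring_idx into the driver-local
	 * record so later readers do not reach into en_dev.
	 */
	memcpy(nqr->msix_entries, rdev->en_dev->msix_entries,
	       sizeof(struct bnxt_msix_entry) * nqr->num_msix);
}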
drivers/infiniband/hw/bnxt_re/bnxt_re.h | 1 +
drivers/infiniband/hw/bnxt_re/main.c | 18 ++++++++++--------
2 files changed, 11 insertions(+), 8 deletions(-)
diff --git a/drivers/infiniband/hw/bnxt_re/bnxt_re.h b/drivers/infiniband/hw/bnxt_re/bnxt_re.h
index 5f64fc4..2975b11 100644
--- a/drivers/infiniband/hw/bnxt_re/bnxt_re.h
+++ b/drivers/infiniband/hw/bnxt_re/bnxt_re.h
@@ -157,6 +157,7 @@ struct bnxt_re_pacing {
#define BNXT_RE_MIN_MSIX 2
#define BNXT_RE_MAX_MSIX BNXT_MAX_ROCE_MSIX
struct bnxt_re_nq_record {
+ struct bnxt_msix_entry msix_entries[BNXT_RE_MAX_MSIX];
struct bnxt_qplib_nq nq[BNXT_RE_MAX_MSIX];
int num_msix;
/* serialize NQ access */
diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c
index fcaf2b3..533b9f1 100644
--- a/drivers/infiniband/hw/bnxt_re/main.c
+++ b/drivers/infiniband/hw/bnxt_re/main.c
@@ -347,7 +347,7 @@ static void bnxt_re_start_irq(void *handle, struct bnxt_msix_entry *ent)
return;
rdev = en_info->rdev;
- msix_ent = rdev->en_dev->msix_entries;
+ msix_ent = rdev->nqr->msix_entries;
rcfw = &rdev->rcfw;
if (!ent) {
/* Not setting the f/w timeout bit in rcfw.
@@ -363,7 +363,7 @@ static void bnxt_re_start_irq(void *handle, struct bnxt_msix_entry *ent)
* in device structure.
*/
for (indx = 0; indx < rdev->nqr->num_msix; indx++)
- rdev->en_dev->msix_entries[indx].vector = ent[indx].vector;
+ rdev->nqr->msix_entries[indx].vector = ent[indx].vector;
rc = bnxt_qplib_rcfw_start_irq(rcfw, msix_ent[BNXT_RE_AEQ_IDX].vector,
false);
@@ -1569,9 +1569,9 @@ static int bnxt_re_init_res(struct bnxt_re_dev *rdev)
mutex_init(&rdev->nqr->load_lock);
for (i = 1; i < rdev->nqr->num_msix ; i++) {
- db_offt = rdev->en_dev->msix_entries[i].db_offset;
+ db_offt = rdev->nqr->msix_entries[i].db_offset;
rc = bnxt_qplib_enable_nq(rdev->en_dev->pdev, &rdev->nqr->nq[i - 1],
- i - 1, rdev->en_dev->msix_entries[i].vector,
+ i - 1, rdev->nqr->msix_entries[i].vector,
db_offt, &bnxt_re_cqn_handler,
&bnxt_re_srqn_handler);
if (rc) {
@@ -1658,7 +1658,7 @@ static int bnxt_re_alloc_res(struct bnxt_re_dev *rdev)
rattr.type = type;
rattr.mode = RING_ALLOC_REQ_INT_MODE_MSIX;
rattr.depth = BNXT_QPLIB_NQE_MAX_CNT - 1;
- rattr.lrid = rdev->en_dev->msix_entries[i + 1].ring_idx;
+ rattr.lrid = rdev->nqr->msix_entries[i + 1].ring_idx;
rc = bnxt_re_net_ring_alloc(rdev, &rattr, &nq->ring_id);
if (rc) {
ibdev_err(&rdev->ibdev,
@@ -1983,6 +1983,8 @@ static int bnxt_re_dev_init(struct bnxt_re_dev *rdev, u8 op_type)
return rc;
}
rdev->nqr->num_msix = rdev->en_dev->ulp_tbl->msix_requested;
+ memcpy(rdev->nqr->msix_entries, rdev->en_dev->msix_entries,
+ sizeof(struct bnxt_msix_entry) * rdev->nqr->num_msix);
/* Check whether VF or PF */
bnxt_re_get_sriov_func_type(rdev);
@@ -2008,14 +2010,14 @@ static int bnxt_re_dev_init(struct bnxt_re_dev *rdev, u8 op_type)
rattr.type = type;
rattr.mode = RING_ALLOC_REQ_INT_MODE_MSIX;
rattr.depth = BNXT_QPLIB_CREQE_MAX_CNT - 1;
- rattr.lrid = rdev->en_dev->msix_entries[BNXT_RE_AEQ_IDX].ring_idx;
+ rattr.lrid = rdev->nqr->msix_entries[BNXT_RE_AEQ_IDX].ring_idx;
rc = bnxt_re_net_ring_alloc(rdev, &rattr, &creq->ring_id);
if (rc) {
ibdev_err(&rdev->ibdev, "Failed to allocate CREQ: %#x\n", rc);
goto free_rcfw;
}
- db_offt = rdev->en_dev->msix_entries[BNXT_RE_AEQ_IDX].db_offset;
- vid = rdev->en_dev->msix_entries[BNXT_RE_AEQ_IDX].vector;
+ db_offt = rdev->nqr->msix_entries[BNXT_RE_AEQ_IDX].db_offset;
+ vid = rdev->nqr->msix_entries[BNXT_RE_AEQ_IDX].vector;
rc = bnxt_qplib_enable_rcfw_channel(&rdev->rcfw,
vid, db_offt,
&bnxt_re_aeq_handler);
--
2.5.5