public inbox for linux-kernel@vger.kernel.org
 help / color / mirror / Atom feed
* [PATCH for-next 0/9] RDMA/hns: Bugfixes and one improvement
@ 2024-09-06  9:34 Junxian Huang
  2024-09-06  9:34 ` [PATCH for-next 1/9] RDMA/hns: Don't modify rq next block addr in HIP09 QPC Junxian Huang
                   ` (10 more replies)
  0 siblings, 11 replies; 19+ messages in thread
From: Junxian Huang @ 2024-09-06  9:34 UTC (permalink / raw)
  To: jgg, leon; +Cc: linux-rdma, linuxarm, linux-kernel, huangjunxian6

This is a series of hns patches. Patch #8 is an improvement to HEM
allocation performance, and the others are bugfixes.

Chengchang Tang (2):
  RDMA/hns: Fix spin_unlock_irqrestore() called with IRQs enabled
  RDMA/hns: Fix 1bit-ECC recovery address in non-4K OS

Feng Fang (1):
  RDMA/hns: Fix different dgids mapping to the same dip_idx

Junxian Huang (3):
  RDMA/hns: Don't modify rq next block addr in HIP09 QPC
  RDMA/hns: Fix VF triggering PF reset in abnormal interrupt handler
  RDMA/hns: Optimize hem allocation performance

wenglianfa (3):
  RDMA/hns: Fix Use-After-Free of rsv_qp on HIP08
  RDMA/hns: Fix cpu stuck caused by printings during reset
  RDMA/hns: Fix the overflow risk of hem_list_calc_ba_range()

 drivers/infiniband/hw/hns/hns_roce_cq.c     |   4 +-
 drivers/infiniband/hw/hns/hns_roce_device.h |   6 +-
 drivers/infiniband/hw/hns/hns_roce_hem.c    |  26 ++--
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c  | 156 +++++++++++++-------
 drivers/infiniband/hw/hns/hns_roce_hw_v2.h  |   1 +
 drivers/infiniband/hw/hns/hns_roce_mr.c     |   4 +-
 drivers/infiniband/hw/hns/hns_roce_qp.c     |  32 ++--
 drivers/infiniband/hw/hns/hns_roce_srq.c    |   4 +-
 8 files changed, 148 insertions(+), 85 deletions(-)

--
2.33.0


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH for-next 1/9] RDMA/hns: Don't modify rq next block addr in HIP09 QPC
  2024-09-06  9:34 [PATCH for-next 0/9] RDMA/hns: Bugfixes and one improvement Junxian Huang
@ 2024-09-06  9:34 ` Junxian Huang
  2024-09-06  9:34 ` [PATCH for-next 2/9] RDMA/hns: Fix Use-After-Free of rsv_qp on HIP08 Junxian Huang
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 19+ messages in thread
From: Junxian Huang @ 2024-09-06  9:34 UTC (permalink / raw)
  To: jgg, leon; +Cc: linux-rdma, linuxarm, linux-kernel, huangjunxian6

The field 'rq next block addr' in QPC can be updated by the driver only
on HIP08. On HIP09 this field is updated by HW, and the driver is not
allowed to modify it.

Fixes: 926a01dc000d ("RDMA/hns: Add QP operations support for hip08 SoC")
Signed-off-by: Junxian Huang <huangjunxian6@hisilicon.com>
---
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 621b057fb9da..a166b476977f 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -4423,12 +4423,14 @@ static int config_qp_rq_buf(struct hns_roce_dev *hr_dev,
 		     upper_32_bits(to_hr_hw_page_addr(mtts[0])));
 	hr_reg_clear(qpc_mask, QPC_RQ_CUR_BLK_ADDR_H);
 
-	context->rq_nxt_blk_addr = cpu_to_le32(to_hr_hw_page_addr(mtts[1]));
-	qpc_mask->rq_nxt_blk_addr = 0;
-
-	hr_reg_write(context, QPC_RQ_NXT_BLK_ADDR_H,
-		     upper_32_bits(to_hr_hw_page_addr(mtts[1])));
-	hr_reg_clear(qpc_mask, QPC_RQ_NXT_BLK_ADDR_H);
+	if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08) {
+		context->rq_nxt_blk_addr =
+				cpu_to_le32(to_hr_hw_page_addr(mtts[1]));
+		qpc_mask->rq_nxt_blk_addr = 0;
+		hr_reg_write(context, QPC_RQ_NXT_BLK_ADDR_H,
+			     upper_32_bits(to_hr_hw_page_addr(mtts[1])));
+		hr_reg_clear(qpc_mask, QPC_RQ_NXT_BLK_ADDR_H);
+	}
 
 	return 0;
 }
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH for-next 2/9] RDMA/hns: Fix Use-After-Free of rsv_qp on HIP08
  2024-09-06  9:34 [PATCH for-next 0/9] RDMA/hns: Bugfixes and one improvement Junxian Huang
  2024-09-06  9:34 ` [PATCH for-next 1/9] RDMA/hns: Don't modify rq next block addr in HIP09 QPC Junxian Huang
@ 2024-09-06  9:34 ` Junxian Huang
  2024-09-06  9:34 ` [PATCH for-next 3/9] RDMA/hns: Fix cpu stuck caused by printings during reset Junxian Huang
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 19+ messages in thread
From: Junxian Huang @ 2024-09-06  9:34 UTC (permalink / raw)
  To: jgg, leon; +Cc: linux-rdma, linuxarm, linux-kernel, huangjunxian6

From: wenglianfa <wenglianfa@huawei.com>

Currently rsv_qp is freed before ib_unregister_device() is called on
HIP08. During this interval, users can still dereg MRs, and rsv_qp is
used in that process, leading to a use-after-free. Move the release of
rsv_qp to after ib_unregister_device() is called to fix it.

Fixes: 70f92521584f ("RDMA/hns: Use the reserved loopback QPs to free MR before destroying MPT")
Signed-off-by: wenglianfa <wenglianfa@huawei.com>
Signed-off-by: Junxian Huang <huangjunxian6@hisilicon.com>
---
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index a166b476977f..2225c9cc6366 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -2972,6 +2972,9 @@ static int hns_roce_v2_init(struct hns_roce_dev *hr_dev)
 
 static void hns_roce_v2_exit(struct hns_roce_dev *hr_dev)
 {
+	if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08)
+		free_mr_exit(hr_dev);
+
 	hns_roce_function_clear(hr_dev);
 
 	if (!hr_dev->is_vf)
@@ -6951,9 +6954,6 @@ static void __hns_roce_hw_v2_uninit_instance(struct hnae3_handle *handle,
 	hr_dev->state = HNS_ROCE_DEVICE_STATE_UNINIT;
 	hns_roce_handle_device_err(hr_dev);
 
-	if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08)
-		free_mr_exit(hr_dev);
-
 	hns_roce_exit(hr_dev);
 	kfree(hr_dev->priv);
 	ib_dealloc_device(&hr_dev->ib_dev);
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH for-next 3/9] RDMA/hns: Fix cpu stuck caused by printings during reset
  2024-09-06  9:34 [PATCH for-next 0/9] RDMA/hns: Bugfixes and one improvement Junxian Huang
  2024-09-06  9:34 ` [PATCH for-next 1/9] RDMA/hns: Don't modify rq next block addr in HIP09 QPC Junxian Huang
  2024-09-06  9:34 ` [PATCH for-next 2/9] RDMA/hns: Fix Use-After-Free of rsv_qp on HIP08 Junxian Huang
@ 2024-09-06  9:34 ` Junxian Huang
  2024-09-10 13:09   ` Leon Romanovsky
  2024-09-06  9:34 ` [PATCH for-next 4/9] RDMA/hns: Fix the overflow risk of hem_list_calc_ba_range() Junxian Huang
                   ` (7 subsequent siblings)
  10 siblings, 1 reply; 19+ messages in thread
From: Junxian Huang @ 2024-09-06  9:34 UTC (permalink / raw)
  To: jgg, leon; +Cc: linux-rdma, linuxarm, linux-kernel, huangjunxian6

From: wenglianfa <wenglianfa@huawei.com>

During reset, the cmds to destroy resources such as QPs, CQs, and MRs
may fail, and error logs will be printed. When a large number of
resources are destroyed, there will be a flood of prints, which may
cause the CPU to get stuck. Replace the printing functions in these
paths with their ratelimited versions.

Fixes: 9a4435375cd1 ("IB/hns: Add driver files for hns RoCE driver")
Fixes: c7bcb13442e1 ("RDMA/hns: Add SRQ support for hip08 kernel mode")
Fixes: 70f92521584f ("RDMA/hns: Use the reserved loopback QPs to free MR before destroying MPT")
Fixes: 926a01dc000d ("RDMA/hns: Add QP operations support for hip08 SoC")
Signed-off-by: wenglianfa <wenglianfa@huawei.com>
Signed-off-by: Junxian Huang <huangjunxian6@hisilicon.com>
---
 drivers/infiniband/hw/hns/hns_roce_cq.c    |  4 +-
 drivers/infiniband/hw/hns/hns_roce_hem.c   |  4 +-
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 69 +++++++++++-----------
 drivers/infiniband/hw/hns/hns_roce_mr.c    |  4 +-
 drivers/infiniband/hw/hns/hns_roce_srq.c   |  4 +-
 5 files changed, 44 insertions(+), 41 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_cq.c b/drivers/infiniband/hw/hns/hns_roce_cq.c
index 4ec66611a143..4106423a1b39 100644
--- a/drivers/infiniband/hw/hns/hns_roce_cq.c
+++ b/drivers/infiniband/hw/hns/hns_roce_cq.c
@@ -179,8 +179,8 @@ static void free_cqc(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq)
 	ret = hns_roce_destroy_hw_ctx(hr_dev, HNS_ROCE_CMD_DESTROY_CQC,
 				      hr_cq->cqn);
 	if (ret)
-		dev_err(dev, "DESTROY_CQ failed (%d) for CQN %06lx\n", ret,
-			hr_cq->cqn);
+		dev_err_ratelimited(dev, "DESTROY_CQ failed (%d) for CQN %06lx\n",
+				    ret, hr_cq->cqn);
 
 	xa_erase_irq(&cq_table->array, hr_cq->cqn);
 
diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.c b/drivers/infiniband/hw/hns/hns_roce_hem.c
index 02baa853a76c..496584139240 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hem.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hem.c
@@ -672,8 +672,8 @@ void hns_roce_table_put(struct hns_roce_dev *hr_dev,
 
 	ret = hr_dev->hw->clear_hem(hr_dev, table, obj, HEM_HOP_STEP_DIRECT);
 	if (ret)
-		dev_warn(dev, "failed to clear HEM base address, ret = %d.\n",
-			 ret);
+		dev_warn_ratelimited(dev, "failed to clear HEM base address, ret = %d.\n",
+				     ret);
 
 	hns_roce_free_hem(hr_dev, table->hem[i]);
 	table->hem[i] = NULL;
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 2225c9cc6366..adcadd2495ab 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -378,12 +378,12 @@ static int check_send_valid(struct hns_roce_dev *hr_dev,
 	if (unlikely(hr_qp->state == IB_QPS_RESET ||
 		     hr_qp->state == IB_QPS_INIT ||
 		     hr_qp->state == IB_QPS_RTR)) {
-		ibdev_err(ibdev, "failed to post WQE, QP state %u!\n",
-			  hr_qp->state);
+		ibdev_err_ratelimited(ibdev, "failed to post WQE, QP state %u!\n",
+				      hr_qp->state);
 		return -EINVAL;
 	} else if (unlikely(hr_dev->state >= HNS_ROCE_DEVICE_STATE_RST_DOWN)) {
-		ibdev_err(ibdev, "failed to post WQE, dev state %d!\n",
-			  hr_dev->state);
+		ibdev_err_ratelimited(ibdev, "failed to post WQE, dev state %d!\n",
+				      hr_dev->state);
 		return -EIO;
 	}
 
@@ -2775,8 +2775,8 @@ static int free_mr_modify_rsv_qp(struct hns_roce_dev *hr_dev,
 	ret = hr_dev->hw->modify_qp(&hr_qp->ibqp, attr, mask, IB_QPS_INIT,
 				    IB_QPS_INIT, NULL);
 	if (ret) {
-		ibdev_err(ibdev, "failed to modify qp to init, ret = %d.\n",
-			  ret);
+		ibdev_err_ratelimited(ibdev, "failed to modify qp to init, ret = %d.\n",
+				      ret);
 		return ret;
 	}
 
@@ -3421,8 +3421,8 @@ static int free_mr_post_send_lp_wqe(struct hns_roce_qp *hr_qp)
 
 	ret = hns_roce_v2_post_send(&hr_qp->ibqp, send_wr, &bad_wr);
 	if (ret) {
-		ibdev_err(ibdev, "failed to post wqe for free mr, ret = %d.\n",
-			  ret);
+		ibdev_err_ratelimited(ibdev, "failed to post wqe for free mr, ret = %d.\n",
+				      ret);
 		return ret;
 	}
 
@@ -3461,9 +3461,9 @@ static void free_mr_send_cmd_to_hw(struct hns_roce_dev *hr_dev)
 
 		ret = free_mr_post_send_lp_wqe(hr_qp);
 		if (ret) {
-			ibdev_err(ibdev,
-				  "failed to send wqe (qp:0x%lx) for free mr, ret = %d.\n",
-				  hr_qp->qpn, ret);
+			ibdev_err_ratelimited(ibdev,
+					      "failed to send wqe (qp:0x%lx) for free mr, ret = %d.\n",
+					      hr_qp->qpn, ret);
 			break;
 		}
 
@@ -3474,16 +3474,16 @@ static void free_mr_send_cmd_to_hw(struct hns_roce_dev *hr_dev)
 	while (cqe_cnt) {
 		npolled = hns_roce_v2_poll_cq(&free_mr->rsv_cq->ib_cq, cqe_cnt, wc);
 		if (npolled < 0) {
-			ibdev_err(ibdev,
-				  "failed to poll cqe for free mr, remain %d cqe.\n",
-				  cqe_cnt);
+			ibdev_err_ratelimited(ibdev,
+					      "failed to poll cqe for free mr, remain %d cqe.\n",
+					      cqe_cnt);
 			goto out;
 		}
 
 		if (time_after(jiffies, end)) {
-			ibdev_err(ibdev,
-				  "failed to poll cqe for free mr and timeout, remain %d cqe.\n",
-				  cqe_cnt);
+			ibdev_err_ratelimited(ibdev,
+					      "failed to poll cqe for free mr and timeout, remain %d cqe.\n",
+					      cqe_cnt);
 			goto out;
 		}
 		cqe_cnt -= npolled;
@@ -5062,7 +5062,8 @@ static int hns_roce_v2_set_abs_fields(struct ib_qp *ibqp,
 	int ret = 0;
 
 	if (!check_qp_state(cur_state, new_state)) {
-		ibdev_err(&hr_dev->ib_dev, "Illegal state for QP!\n");
+		ibdev_err_ratelimited(&hr_dev->ib_dev,
+				      "Illegal state for QP!\n");
 		return -EINVAL;
 	}
 
@@ -5325,7 +5326,7 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp,
 	/* SW pass context to HW */
 	ret = hns_roce_v2_qp_modify(hr_dev, context, qpc_mask, hr_qp);
 	if (ret) {
-		ibdev_err(ibdev, "failed to modify QP, ret = %d.\n", ret);
+		ibdev_err_ratelimited(ibdev, "failed to modify QP, ret = %d.\n", ret);
 		goto out;
 	}
 
@@ -5463,7 +5464,9 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
 
 	ret = hns_roce_v2_query_qpc(hr_dev, hr_qp->qpn, &context);
 	if (ret) {
-		ibdev_err(ibdev, "failed to query QPC, ret = %d.\n", ret);
+		ibdev_err_ratelimited(ibdev,
+				      "failed to query QPC, ret = %d.\n",
+				      ret);
 		ret = -EINVAL;
 		goto out;
 	}
@@ -5471,7 +5474,7 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
 	state = hr_reg_read(&context, QPC_QP_ST);
 	tmp_qp_state = to_ib_qp_st((enum hns_roce_v2_qp_state)state);
 	if (tmp_qp_state == -1) {
-		ibdev_err(ibdev, "Illegal ib_qp_state\n");
+		ibdev_err_ratelimited(ibdev, "Illegal ib_qp_state\n");
 		ret = -EINVAL;
 		goto out;
 	}
@@ -5564,9 +5567,9 @@ static int hns_roce_v2_destroy_qp_common(struct hns_roce_dev *hr_dev,
 		ret = hns_roce_v2_modify_qp(&hr_qp->ibqp, NULL, 0,
 					    hr_qp->state, IB_QPS_RESET, udata);
 		if (ret)
-			ibdev_err(ibdev,
-				  "failed to modify QP to RST, ret = %d.\n",
-				  ret);
+			ibdev_err_ratelimited(ibdev,
+					      "failed to modify QP to RST, ret = %d.\n",
+					      ret);
 	}
 
 	send_cq = hr_qp->ibqp.send_cq ? to_hr_cq(hr_qp->ibqp.send_cq) : NULL;
@@ -5602,9 +5605,9 @@ int hns_roce_v2_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
 
 	ret = hns_roce_v2_destroy_qp_common(hr_dev, hr_qp, udata);
 	if (ret)
-		ibdev_err(&hr_dev->ib_dev,
-			  "failed to destroy QP, QPN = 0x%06lx, ret = %d.\n",
-			  hr_qp->qpn, ret);
+		ibdev_err_ratelimited(&hr_dev->ib_dev,
+				      "failed to destroy QP, QPN = 0x%06lx, ret = %d.\n",
+				      hr_qp->qpn, ret);
 
 	hns_roce_qp_destroy(hr_dev, hr_qp, udata);
 
@@ -5898,9 +5901,9 @@ static int hns_roce_v2_modify_cq(struct ib_cq *cq, u16 cq_count, u16 cq_period)
 				HNS_ROCE_CMD_MODIFY_CQC, hr_cq->cqn);
 	hns_roce_free_cmd_mailbox(hr_dev, mailbox);
 	if (ret)
-		ibdev_err(&hr_dev->ib_dev,
-			  "failed to process cmd when modifying CQ, ret = %d.\n",
-			  ret);
+		ibdev_err_ratelimited(&hr_dev->ib_dev,
+				      "failed to process cmd when modifying CQ, ret = %d.\n",
+				      ret);
 
 err_out:
 	if (ret)
@@ -5924,9 +5927,9 @@ static int hns_roce_v2_query_cqc(struct hns_roce_dev *hr_dev, u32 cqn,
 	ret = hns_roce_cmd_mbox(hr_dev, 0, mailbox->dma,
 				HNS_ROCE_CMD_QUERY_CQC, cqn);
 	if (ret) {
-		ibdev_err(&hr_dev->ib_dev,
-			  "failed to process cmd when querying CQ, ret = %d.\n",
-			  ret);
+		ibdev_err_ratelimited(&hr_dev->ib_dev,
+				      "failed to process cmd when querying CQ, ret = %d.\n",
+				      ret);
 		goto err_mailbox;
 	}
 
diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c
index 846da8c78b8b..b3f4327d0e64 100644
--- a/drivers/infiniband/hw/hns/hns_roce_mr.c
+++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
@@ -138,8 +138,8 @@ static void hns_roce_mr_free(struct hns_roce_dev *hr_dev, struct hns_roce_mr *mr
 					      key_to_hw_index(mr->key) &
 					      (hr_dev->caps.num_mtpts - 1));
 		if (ret)
-			ibdev_warn(ibdev, "failed to destroy mpt, ret = %d.\n",
-				   ret);
+			ibdev_warn_ratelimited(ibdev, "failed to destroy mpt, ret = %d.\n",
+					       ret);
 	}
 
 	free_mr_pbl(hr_dev, mr);
diff --git a/drivers/infiniband/hw/hns/hns_roce_srq.c b/drivers/infiniband/hw/hns/hns_roce_srq.c
index c9b8233f4b05..70c06ef65603 100644
--- a/drivers/infiniband/hw/hns/hns_roce_srq.c
+++ b/drivers/infiniband/hw/hns/hns_roce_srq.c
@@ -151,8 +151,8 @@ static void free_srqc(struct hns_roce_dev *hr_dev, struct hns_roce_srq *srq)
 	ret = hns_roce_destroy_hw_ctx(hr_dev, HNS_ROCE_CMD_DESTROY_SRQ,
 				      srq->srqn);
 	if (ret)
-		dev_err(hr_dev->dev, "DESTROY_SRQ failed (%d) for SRQN %06lx\n",
-			ret, srq->srqn);
+		dev_err_ratelimited(hr_dev->dev, "DESTROY_SRQ failed (%d) for SRQN %06lx\n",
+				    ret, srq->srqn);
 
 	xa_erase_irq(&srq_table->xa, srq->srqn);
 
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH for-next 4/9] RDMA/hns: Fix the overflow risk of hem_list_calc_ba_range()
  2024-09-06  9:34 [PATCH for-next 0/9] RDMA/hns: Bugfixes and one improvement Junxian Huang
                   ` (2 preceding siblings ...)
  2024-09-06  9:34 ` [PATCH for-next 3/9] RDMA/hns: Fix cpu stuck caused by printings during reset Junxian Huang
@ 2024-09-06  9:34 ` Junxian Huang
  2024-09-06  9:34 ` [PATCH for-next 5/9] RDMA/hns: Fix spin_unlock_irqrestore() called with IRQs enabled Junxian Huang
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 19+ messages in thread
From: Junxian Huang @ 2024-09-06  9:34 UTC (permalink / raw)
  To: jgg, leon; +Cc: linux-rdma, linuxarm, linux-kernel, huangjunxian6

From: wenglianfa <wenglianfa@huawei.com>

The max values of 'unit' and 'hop_num' are 2^24 and 2 respectively, so
the value of 'step' may exceed the range of u32. Change the type of
'step' to u64.

Fixes: 38389eaa4db1 ("RDMA/hns: Add mtr support for mixed multihop addressing")
Signed-off-by: wenglianfa <wenglianfa@huawei.com>
Signed-off-by: Junxian Huang <huangjunxian6@hisilicon.com>
---
 drivers/infiniband/hw/hns/hns_roce_hem.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.c b/drivers/infiniband/hw/hns/hns_roce_hem.c
index 496584139240..cade3ca68de1 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hem.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hem.c
@@ -1041,9 +1041,9 @@ static bool hem_list_is_bottom_bt(int hopnum, int bt_level)
  * @bt_level: base address table level
  * @unit: ba entries per bt page
  */
-static u32 hem_list_calc_ba_range(int hopnum, int bt_level, int unit)
+static u64 hem_list_calc_ba_range(int hopnum, int bt_level, int unit)
 {
-	u32 step;
+	u64 step;
 	int max;
 	int i;
 
@@ -1079,7 +1079,7 @@ int hns_roce_hem_list_calc_root_ba(const struct hns_roce_buf_region *regions,
 {
 	struct hns_roce_buf_region *r;
 	int total = 0;
-	int step;
+	u64 step;
 	int i;
 
 	for (i = 0; i < region_cnt; i++) {
@@ -1110,7 +1110,7 @@ static int hem_list_alloc_mid_bt(struct hns_roce_dev *hr_dev,
 	int ret = 0;
 	int max_ofs;
 	int level;
-	u32 step;
+	u64 step;
 	int end;
 
 	if (hopnum <= 1)
@@ -1147,7 +1147,7 @@ static int hem_list_alloc_mid_bt(struct hns_roce_dev *hr_dev,
 		}
 
 		start_aligned = (distance / step) * step + r->offset;
-		end = min_t(int, start_aligned + step - 1, max_ofs);
+		end = min_t(u64, start_aligned + step - 1, max_ofs);
 		cur = hem_list_alloc_item(hr_dev, start_aligned, end, unit,
 					  true);
 		if (!cur) {
@@ -1235,7 +1235,7 @@ static int setup_middle_bt(struct hns_roce_dev *hr_dev, void *cpu_base,
 	struct hns_roce_hem_item *hem, *temp_hem;
 	int total = 0;
 	int offset;
-	int step;
+	u64 step;
 
 	step = hem_list_calc_ba_range(r->hopnum, 1, unit);
 	if (step < 1)
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH for-next 5/9] RDMA/hns: Fix spin_unlock_irqrestore() called with IRQs enabled
  2024-09-06  9:34 [PATCH for-next 0/9] RDMA/hns: Bugfixes and one improvement Junxian Huang
                   ` (3 preceding siblings ...)
  2024-09-06  9:34 ` [PATCH for-next 4/9] RDMA/hns: Fix the overflow risk of hem_list_calc_ba_range() Junxian Huang
@ 2024-09-06  9:34 ` Junxian Huang
  2024-09-06  9:34 ` [PATCH for-next 6/9] RDMA/hns: Fix VF triggering PF reset in abnormal interrupt handler Junxian Huang
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 19+ messages in thread
From: Junxian Huang @ 2024-09-06  9:34 UTC (permalink / raw)
  To: jgg, leon; +Cc: linux-rdma, linuxarm, linux-kernel, huangjunxian6

From: Chengchang Tang <tangchengchang@huawei.com>

Fix misuse of spin_lock_irq()/spin_unlock_irq() in a section where
spin_lock_irqsave()/spin_unlock_irqrestore() is already in effect.

This was discovered through lock debugging, and the corresponding log
is as follows:

raw_local_irq_restore() called with IRQs enabled
WARNING: CPU: 96 PID: 2074 at kernel/locking/irqflag-debug.c:10 warn_bogus_irq_restore+0x30/0x40
...
Call trace:
 warn_bogus_irq_restore+0x30/0x40
 _raw_spin_unlock_irqrestore+0x84/0xc8
 add_qp_to_list+0x11c/0x148 [hns_roce_hw_v2]
 hns_roce_create_qp_common.constprop.0+0x240/0x780 [hns_roce_hw_v2]
 hns_roce_create_qp+0x98/0x160 [hns_roce_hw_v2]
 create_qp+0x138/0x258
 ib_create_qp_kernel+0x50/0xe8
 create_mad_qp+0xa8/0x128
 ib_mad_port_open+0x218/0x448
 ib_mad_init_device+0x70/0x1f8
 add_client_context+0xfc/0x220
 enable_device_and_get+0xd0/0x140
 ib_register_device.part.0+0xf4/0x1c8
 ib_register_device+0x34/0x50
 hns_roce_register_device+0x174/0x3d0 [hns_roce_hw_v2]
 hns_roce_init+0xfc/0x2c0 [hns_roce_hw_v2]
 __hns_roce_hw_v2_init_instance+0x7c/0x1d0 [hns_roce_hw_v2]
 hns_roce_hw_v2_init_instance+0x9c/0x180 [hns_roce_hw_v2]

Fixes: 9a4435375cd1 ("IB/hns: Add driver files for hns RoCE driver")
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Junxian Huang <huangjunxian6@hisilicon.com>
---
 drivers/infiniband/hw/hns/hns_roce_qp.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
index 1de384ce4d0e..6b03ba671ff8 100644
--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
+++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
@@ -1460,19 +1460,19 @@ void hns_roce_lock_cqs(struct hns_roce_cq *send_cq, struct hns_roce_cq *recv_cq)
 		__acquire(&send_cq->lock);
 		__acquire(&recv_cq->lock);
 	} else if (unlikely(send_cq != NULL && recv_cq == NULL)) {
-		spin_lock_irq(&send_cq->lock);
+		spin_lock(&send_cq->lock);
 		__acquire(&recv_cq->lock);
 	} else if (unlikely(send_cq == NULL && recv_cq != NULL)) {
-		spin_lock_irq(&recv_cq->lock);
+		spin_lock(&recv_cq->lock);
 		__acquire(&send_cq->lock);
 	} else if (send_cq == recv_cq) {
-		spin_lock_irq(&send_cq->lock);
+		spin_lock(&send_cq->lock);
 		__acquire(&recv_cq->lock);
 	} else if (send_cq->cqn < recv_cq->cqn) {
-		spin_lock_irq(&send_cq->lock);
+		spin_lock(&send_cq->lock);
 		spin_lock_nested(&recv_cq->lock, SINGLE_DEPTH_NESTING);
 	} else {
-		spin_lock_irq(&recv_cq->lock);
+		spin_lock(&recv_cq->lock);
 		spin_lock_nested(&send_cq->lock, SINGLE_DEPTH_NESTING);
 	}
 }
@@ -1492,13 +1492,13 @@ void hns_roce_unlock_cqs(struct hns_roce_cq *send_cq,
 		spin_unlock(&recv_cq->lock);
 	} else if (send_cq == recv_cq) {
 		__release(&recv_cq->lock);
-		spin_unlock_irq(&send_cq->lock);
+		spin_unlock(&send_cq->lock);
 	} else if (send_cq->cqn < recv_cq->cqn) {
 		spin_unlock(&recv_cq->lock);
-		spin_unlock_irq(&send_cq->lock);
+		spin_unlock(&send_cq->lock);
 	} else {
 		spin_unlock(&send_cq->lock);
-		spin_unlock_irq(&recv_cq->lock);
+		spin_unlock(&recv_cq->lock);
 	}
 }
 
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH for-next 6/9] RDMA/hns: Fix VF triggering PF reset in abnormal interrupt handler
  2024-09-06  9:34 [PATCH for-next 0/9] RDMA/hns: Bugfixes and one improvement Junxian Huang
                   ` (4 preceding siblings ...)
  2024-09-06  9:34 ` [PATCH for-next 5/9] RDMA/hns: Fix spin_unlock_irqrestore() called with IRQs enabled Junxian Huang
@ 2024-09-06  9:34 ` Junxian Huang
  2024-09-06  9:34 ` [PATCH for-next 7/9] RDMA/hns: Fix 1bit-ECC recovery address in non-4K OS Junxian Huang
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 19+ messages in thread
From: Junxian Huang @ 2024-09-06  9:34 UTC (permalink / raw)
  To: jgg, leon; +Cc: linux-rdma, linuxarm, linux-kernel, huangjunxian6

In the abnormal interrupt handler, a PF reset is triggered even if the
device is a VF. It should be a VF reset instead.

Fixes: 2b9acb9a97fe ("RDMA/hns: Add the process of AEQ overflow for hip08")
Signed-off-by: Junxian Huang <huangjunxian6@hisilicon.com>
---
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index adcadd2495ab..74bab07c10e5 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -6201,6 +6201,7 @@ static irqreturn_t abnormal_interrupt_basic(struct hns_roce_dev *hr_dev,
 	struct pci_dev *pdev = hr_dev->pci_dev;
 	struct hnae3_ae_dev *ae_dev = pci_get_drvdata(pdev);
 	const struct hnae3_ae_ops *ops = ae_dev->ops;
+	enum hnae3_reset_type reset_type;
 	irqreturn_t int_work = IRQ_NONE;
 	u32 int_en;
 
@@ -6212,10 +6213,12 @@ static irqreturn_t abnormal_interrupt_basic(struct hns_roce_dev *hr_dev,
 		roce_write(hr_dev, ROCEE_VF_ABN_INT_ST_REG,
 			   1 << HNS_ROCE_V2_VF_INT_ST_AEQ_OVERFLOW_S);
 
+		reset_type = hr_dev->is_vf ?
+			     HNAE3_VF_FUNC_RESET : HNAE3_FUNC_RESET;
+
 		/* Set reset level for reset_event() */
 		if (ops->set_default_reset_request)
-			ops->set_default_reset_request(ae_dev,
-						       HNAE3_FUNC_RESET);
+			ops->set_default_reset_request(ae_dev, reset_type);
 		if (ops->reset_event)
 			ops->reset_event(pdev, NULL);
 
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH for-next 7/9] RDMA/hns: Fix 1bit-ECC recovery address in non-4K OS
  2024-09-06  9:34 [PATCH for-next 0/9] RDMA/hns: Bugfixes and one improvement Junxian Huang
                   ` (5 preceding siblings ...)
  2024-09-06  9:34 ` [PATCH for-next 6/9] RDMA/hns: Fix VF triggering PF reset in abnormal interrupt handler Junxian Huang
@ 2024-09-06  9:34 ` Junxian Huang
  2024-09-06  9:34 ` [PATCH for-next 8/9] RDMA/hns: Optimize hem allocation performance Junxian Huang
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 19+ messages in thread
From: Junxian Huang @ 2024-09-06  9:34 UTC (permalink / raw)
  To: jgg, leon; +Cc: linux-rdma, linuxarm, linux-kernel, huangjunxian6

From: Chengchang Tang <tangchengchang@huawei.com>

The 1bit-ECC recovery address read from HW contains only bits 64:12,
so it should be left-shifted by a fixed 12 bits when used.

Currently, the driver shifts the address left by PAGE_SHIFT when using
it, which is wrong on a non-4K-page OS.

Fixes: 2de949abd6a5 ("RDMA/hns: Recover 1bit-ECC error of RAM on chip")
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Junxian Huang <huangjunxian6@hisilicon.com>
---
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 74bab07c10e5..085461713fa9 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -6288,7 +6288,7 @@ static u64 fmea_get_ram_res_addr(u32 res_type, __le64 *data)
 	    res_type == ECC_RESOURCE_SCCC)
 		return le64_to_cpu(*data);
 
-	return le64_to_cpu(*data) << PAGE_SHIFT;
+	return le64_to_cpu(*data) << HNS_HW_PAGE_SHIFT;
 }
 
 static int fmea_recover_others(struct hns_roce_dev *hr_dev, u32 res_type,
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH for-next 8/9] RDMA/hns: Optimize hem allocation performance
  2024-09-06  9:34 [PATCH for-next 0/9] RDMA/hns: Bugfixes and one improvement Junxian Huang
                   ` (6 preceding siblings ...)
  2024-09-06  9:34 ` [PATCH for-next 7/9] RDMA/hns: Fix 1bit-ECC recovery address in non-4K OS Junxian Huang
@ 2024-09-06  9:34 ` Junxian Huang
  2024-09-06  9:34 ` [PATCH for-next 9/9] RDMA/hns: Fix different dgids mapping to the same dip_idx Junxian Huang
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 19+ messages in thread
From: Junxian Huang @ 2024-09-06  9:34 UTC (permalink / raw)
  To: jgg, leon; +Cc: linux-rdma, linuxarm, linux-kernel, huangjunxian6

When allocating MTT HEM, for each hop level of each HEM being
allocated, the driver iterates the HEM list to check whether the BT
page for this hop level has already been allocated. If not, it
allocates a new one and splices it to the list. The time complexity is
O(n^2) in the worst case.

Currently the allocation for-loop uses 'unit' as the step size. This
already takes into account the reuse of last-hop-level MTT BT pages by
multiple buffer pages. Thus pages of the last hop level can never have
been allocated before, so there is no need to iterate the HEM list at
the last hop level.

Removing this unnecessary iteration reduces the time complexity to
O(n).

Fixes: 38389eaa4db1 ("RDMA/hns: Add mtr support for mixed multihop addressing")
Signed-off-by: Junxian Huang <huangjunxian6@hisilicon.com>
---
 drivers/infiniband/hw/hns/hns_roce_hem.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.c b/drivers/infiniband/hw/hns/hns_roce_hem.c
index cade3ca68de1..12a875d5b511 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hem.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hem.c
@@ -1134,10 +1134,12 @@ static int hem_list_alloc_mid_bt(struct hns_roce_dev *hr_dev,
 
 	/* config L1 bt to last bt and link them to corresponding parent */
 	for (level = 1; level < hopnum; level++) {
-		cur = hem_list_search_item(&mid_bt[level], offset);
-		if (cur) {
-			hem_ptrs[level] = cur;
-			continue;
+		if (!hem_list_is_bottom_bt(hopnum, level)) {
+			cur = hem_list_search_item(&mid_bt[level], offset);
+			if (cur) {
+				hem_ptrs[level] = cur;
+				continue;
+			}
 		}
 
 		step = hem_list_calc_ba_range(hopnum, level, unit);
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH for-next 9/9] RDMA/hns: Fix different dgids mapping to the same dip_idx
  2024-09-06  9:34 [PATCH for-next 0/9] RDMA/hns: Bugfixes and one improvement Junxian Huang
                   ` (7 preceding siblings ...)
  2024-09-06  9:34 ` [PATCH for-next 8/9] RDMA/hns: Optimize hem allocation performance Junxian Huang
@ 2024-09-06  9:34 ` Junxian Huang
  2024-09-10 13:12   ` Leon Romanovsky
  2024-09-10 13:13 ` [PATCH for-next 0/9] RDMA/hns: Bugfixes and one improvement Leon Romanovsky
  2024-09-10 13:13 ` (subset) " Leon Romanovsky
  10 siblings, 1 reply; 19+ messages in thread
From: Junxian Huang @ 2024-09-06  9:34 UTC (permalink / raw)
  To: jgg, leon; +Cc: linux-rdma, linuxarm, linux-kernel, huangjunxian6

From: Feng Fang <fangfeng4@huawei.com>

The DIP algorithm requires a one-to-one mapping between dgid and
dip_idx. Currently a queue 'spare_idx' is used to store the QPNs of QPs
that use the DIP algorithm. For a new dgid, a QPN is taken from
spare_idx as its dip_idx. This method lacks a mechanism to deduplicate
QPNs, so different dgids may end up sharing the same dip_idx, breaking
the one-to-one mapping requirement.

This patch replaces spare_idx with two new bitmaps: qpn_bitmap records
the QPNs that are not being used as a dip_idx, and dip_idx_map records
the QPNs that are. Besides, introduce a per-dip_idx reference count to
indicate the number of QPs using it. When creating a DIP QP with a new
dgid, set the corresponding bit in dip_idx_map; otherwise increment the
reference count of the reused dip_idx and set the QPN's bit in
qpn_bitmap. When destroying a DIP QP, decrement the reference count;
if it reaches 0, set the bit in qpn_bitmap and clear the bit in
dip_idx_map.

Fixes: eb653eda1e91 ("RDMA/hns: Bugfix for incorrect association between dip_idx and dgid")
Fixes: f91696f2f053 ("RDMA/hns: Support congestion control type selection according to the FW")
Signed-off-by: Feng Fang <fangfeng4@huawei.com>
Signed-off-by: Junxian Huang <huangjunxian6@hisilicon.com>
---
 drivers/infiniband/hw/hns/hns_roce_device.h |  6 +--
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c  | 58 ++++++++++++++++++---
 drivers/infiniband/hw/hns/hns_roce_hw_v2.h  |  1 +
 drivers/infiniband/hw/hns/hns_roce_qp.c     | 16 ++++--
 4 files changed, 67 insertions(+), 14 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
index 0b1e21cb6d2d..adc65d383cf1 100644
--- a/drivers/infiniband/hw/hns/hns_roce_device.h
+++ b/drivers/infiniband/hw/hns/hns_roce_device.h
@@ -490,9 +490,8 @@ struct hns_roce_bank {
 };
 
 struct hns_roce_idx_table {
-	u32 *spare_idx;
-	u32 head;
-	u32 tail;
+	unsigned long *qpn_bitmap;
+	unsigned long *dip_idx_bitmap;
 };
 
 struct hns_roce_qp_table {
@@ -656,6 +655,7 @@ struct hns_roce_qp {
 	enum hns_roce_cong_type	cong_type;
 	u8			tc_mode;
 	u8			priority;
+	struct hns_roce_dip *dip;
 };
 
 struct hns_roce_ib_iboe {
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 085461713fa9..19a4bf80a080 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -4706,21 +4706,24 @@ static int get_dip_ctx_idx(struct ib_qp *ibqp, const struct ib_qp_attr *attr,
 {
 	const struct ib_global_route *grh = rdma_ah_read_grh(&attr->ah_attr);
 	struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
-	u32 *spare_idx = hr_dev->qp_table.idx_table.spare_idx;
-	u32 *head =  &hr_dev->qp_table.idx_table.head;
-	u32 *tail =  &hr_dev->qp_table.idx_table.tail;
+	unsigned long *dip_idx_bitmap = hr_dev->qp_table.idx_table.dip_idx_bitmap;
+	unsigned long *qpn_bitmap = hr_dev->qp_table.idx_table.qpn_bitmap;
+	struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
 	struct hns_roce_dip *hr_dip;
 	unsigned long flags;
 	int ret = 0;
+	u32 idx;
 
 	spin_lock_irqsave(&hr_dev->dip_list_lock, flags);
 
-	spare_idx[*tail] = ibqp->qp_num;
-	*tail = (*tail == hr_dev->caps.num_qps - 1) ? 0 : (*tail + 1);
+	if (!test_bit(ibqp->qp_num, dip_idx_bitmap))
+		set_bit(ibqp->qp_num, qpn_bitmap);
 
 	list_for_each_entry(hr_dip, &hr_dev->dip_list, node) {
 		if (!memcmp(grh->dgid.raw, hr_dip->dgid, GID_LEN_V2)) {
 			*dip_idx = hr_dip->dip_idx;
+			hr_dip->qp_cnt++;
+			hr_qp->dip = hr_dip;
 			goto out;
 		}
 	}
@@ -4734,9 +4737,21 @@ static int get_dip_ctx_idx(struct ib_qp *ibqp, const struct ib_qp_attr *attr,
 		goto out;
 	}
 
+	idx = find_first_bit(qpn_bitmap, hr_dev->caps.num_qps);
+	if (idx < hr_dev->caps.num_qps) {
+		*dip_idx = idx;
+		clear_bit(idx, qpn_bitmap);
+		set_bit(idx, dip_idx_bitmap);
+	} else {
+		ret = -ENOENT;
+		kfree(hr_dip);
+		goto out;
+	}
+
 	memcpy(hr_dip->dgid, grh->dgid.raw, sizeof(grh->dgid.raw));
-	hr_dip->dip_idx = *dip_idx = spare_idx[*head];
-	*head = (*head == hr_dev->caps.num_qps - 1) ? 0 : (*head + 1);
+	hr_dip->dip_idx = *dip_idx;
+	hr_dip->qp_cnt++;
+	hr_qp->dip = hr_dip;
 	list_add_tail(&hr_dip->node, &hr_dev->dip_list);
 
 out:
@@ -5597,12 +5612,41 @@ static int hns_roce_v2_destroy_qp_common(struct hns_roce_dev *hr_dev,
 	return ret;
 }
 
+static void put_dip_ctx_idx(struct hns_roce_dev *hr_dev,
+			    struct hns_roce_qp *hr_qp)
+{
+	unsigned long *dip_idx_bitmap = hr_dev->qp_table.idx_table.dip_idx_bitmap;
+	unsigned long *qpn_bitmap = hr_dev->qp_table.idx_table.qpn_bitmap;
+	struct hns_roce_dip *hr_dip = hr_qp->dip;
+	unsigned long flags;
+
+	spin_lock_irqsave(&hr_dev->dip_list_lock, flags);
+
+	if (hr_dip) {
+		hr_dip->qp_cnt--;
+		if (!hr_dip->qp_cnt) {
+			clear_bit(hr_dip->dip_idx, dip_idx_bitmap);
+			set_bit(hr_dip->dip_idx, qpn_bitmap);
+
+			list_del(&hr_dip->node);
+		} else {
+			hr_dip = NULL;
+		}
+	}
+
+	spin_unlock_irqrestore(&hr_dev->dip_list_lock, flags);
+	kfree(hr_dip);
+}
+
 int hns_roce_v2_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
 {
 	struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
 	struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
 	int ret;
 
+	if (hr_qp->cong_type == CONG_TYPE_DIP)
+		put_dip_ctx_idx(hr_dev, hr_qp);
+
 	ret = hns_roce_v2_destroy_qp_common(hr_dev, hr_qp, udata);
 	if (ret)
 		ibdev_err_ratelimited(&hr_dev->ib_dev,
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
index c65f68a14a26..3804882bb5b4 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
@@ -1342,6 +1342,7 @@ struct hns_roce_v2_priv {
 struct hns_roce_dip {
 	u8 dgid[GID_LEN_V2];
 	u32 dip_idx;
+	u32 qp_cnt;
 	struct list_head node; /* all dips are on a list */
 };
 
diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
index 6b03ba671ff8..bc278b735736 100644
--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
+++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
@@ -1546,11 +1546,18 @@ int hns_roce_init_qp_table(struct hns_roce_dev *hr_dev)
 	unsigned int reserved_from_bot;
 	unsigned int i;
 
-	qp_table->idx_table.spare_idx = kcalloc(hr_dev->caps.num_qps,
-					sizeof(u32), GFP_KERNEL);
-	if (!qp_table->idx_table.spare_idx)
+	qp_table->idx_table.qpn_bitmap = bitmap_zalloc(hr_dev->caps.num_qps,
+						       GFP_KERNEL);
+	if (!qp_table->idx_table.qpn_bitmap)
 		return -ENOMEM;
 
+	qp_table->idx_table.dip_idx_bitmap = bitmap_zalloc(hr_dev->caps.num_qps,
+							   GFP_KERNEL);
+	if (!qp_table->idx_table.dip_idx_bitmap) {
+		bitmap_free(qp_table->idx_table.qpn_bitmap);
+		return -ENOMEM;
+	}
+
 	mutex_init(&qp_table->scc_mutex);
 	mutex_init(&qp_table->bank_mutex);
 	xa_init(&hr_dev->qp_table_xa);
@@ -1580,5 +1587,6 @@ void hns_roce_cleanup_qp_table(struct hns_roce_dev *hr_dev)
 		ida_destroy(&hr_dev->qp_table.bank[i].ida);
 	mutex_destroy(&hr_dev->qp_table.bank_mutex);
 	mutex_destroy(&hr_dev->qp_table.scc_mutex);
-	kfree(hr_dev->qp_table.idx_table.spare_idx);
+	bitmap_free(hr_dev->qp_table.idx_table.qpn_bitmap);
+	bitmap_free(hr_dev->qp_table.idx_table.dip_idx_bitmap);
 }
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 19+ messages in thread
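The bitmap-plus-refcount scheme the patch above describes can be sketched in plain userspace C. All names, the bitmap width, and the struct layout below are illustrative, not the driver's actual code:

```c
/*
 * Userspace sketch of the dip_idx scheme described in the patch above:
 * qpn_bitmap tracks QPNs that are spare (usable as a dip_idx),
 * dip_idx_bitmap tracks QPNs currently used as one, and each dip entry
 * carries a reference count. All names and sizes here are illustrative.
 */
#include <stdbool.h>
#include <stdint.h>

#define NUM_QPS 64

static uint64_t qpn_bitmap;     /* bit set => QPN spare               */
static uint64_t dip_idx_bitmap; /* bit set => QPN in use as a dip_idx */

struct dip {
	unsigned int dip_idx;
	unsigned int qp_cnt;    /* number of QPs sharing this dip_idx */
	bool used;
};

/* Allocate a dip_idx for a new dgid: claim the first spare QPN. */
static int get_dip_idx(struct dip *dip)
{
	for (unsigned int i = 0; i < NUM_QPS; i++) {
		if (qpn_bitmap & (1ULL << i)) {
			qpn_bitmap &= ~(1ULL << i);
			dip_idx_bitmap |= 1ULL << i;
			dip->dip_idx = i;
			dip->qp_cnt = 1;
			dip->used = true;
			return 0;
		}
	}
	return -1; /* no spare index left */
}

/* Drop one QP's reference; release the index once no QP uses it. */
static void put_dip_idx(struct dip *dip)
{
	if (--dip->qp_cnt == 0) {
		dip_idx_bitmap &= ~(1ULL << dip->dip_idx);
		qpn_bitmap |= 1ULL << dip->dip_idx;
		dip->used = false;
	}
}
```

Because an index is only returned to qpn_bitmap when its reference count reaches zero, two different dgids can never end up holding the same dip_idx, which is the one-to-one property the patch restores.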

* Re: [PATCH for-next 3/9] RDMA/hns: Fix cpu stuck caused by printings during reset
  2024-09-06  9:34 ` [PATCH for-next 3/9] RDMA/hns: Fix cpu stuck caused by printings during reset Junxian Huang
@ 2024-09-10 13:09   ` Leon Romanovsky
  2024-09-11  1:34     ` Junxian Huang
  0 siblings, 1 reply; 19+ messages in thread
From: Leon Romanovsky @ 2024-09-10 13:09 UTC (permalink / raw)
  To: Junxian Huang; +Cc: jgg, linux-rdma, linuxarm, linux-kernel

On Fri, Sep 06, 2024 at 05:34:38PM +0800, Junxian Huang wrote:
> From: wenglianfa <wenglianfa@huawei.com>
> 
> During reset, cmd to destroy resources such as qp, cq, and mr may
> fail, and error logs will be printed. When a large number of
> resources are destroyed, there will be lots of printings, and it
> may lead to a cpu stuck. Replace the printing functions in these
> paths with the ratelimited version.

At least some of them, if not most, should be deleted.

Thanks

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH for-next 9/9] RDMA/hns: Fix different dgids mapping to the same dip_idx
  2024-09-06  9:34 ` [PATCH for-next 9/9] RDMA/hns: Fix different dgids mapping to the same dip_idx Junxian Huang
@ 2024-09-10 13:12   ` Leon Romanovsky
  2024-10-17 13:21     ` Junxian Huang
  0 siblings, 1 reply; 19+ messages in thread
From: Leon Romanovsky @ 2024-09-10 13:12 UTC (permalink / raw)
  To: Junxian Huang; +Cc: jgg, linux-rdma, linuxarm, linux-kernel

On Fri, Sep 06, 2024 at 05:34:44PM +0800, Junxian Huang wrote:
> From: Feng Fang <fangfeng4@huawei.com>
> 
> DIP algorithm requires a one-to-one mapping between dgid and dip_idx.
> Currently a queue 'spare_idx' is used to store QPN of QPs that use
> DIP algorithm. For a new dgid, use a QPN from spare_idx as dip_idx.
> This method lacks a mechanism for deduplicating QPN, which may result
> in different dgids sharing the same dip_idx and break the one-to-one
> mapping requirement.
> 
> This patch replaces spare_idx with two new bitmaps: qpn_bitmap to record
> QPNs that are not being used as a dip_idx, and dip_idx_bitmap to record
> QPNs that are. Besides, introduce a reference count for each dip_idx to
> indicate the number of QPs using it. When creating a DIP QP, if it has a
> new dgid, allocate a spare QPN as its dip_idx and set the corresponding
> bit in dip_idx_bitmap; otherwise add 1 to the reference count of the
> reused dip_idx. When destroying a DIP QP, decrement the reference count
> by 1. Once it reaches 0, set the bit in qpn_bitmap and clear the bit in
> dip_idx_bitmap.
> 
> Fixes: eb653eda1e91 ("RDMA/hns: Bugfix for incorrect association between dip_idx and dgid")
> Fixes: f91696f2f053 ("RDMA/hns: Support congestion control type selection according to the FW")
> Signed-off-by: Feng Fang <fangfeng4@huawei.com>
> Signed-off-by: Junxian Huang <huangjunxian6@hisilicon.com>
> ---
>  drivers/infiniband/hw/hns/hns_roce_device.h |  6 +--
>  drivers/infiniband/hw/hns/hns_roce_hw_v2.c  | 58 ++++++++++++++++++---
>  drivers/infiniband/hw/hns/hns_roce_hw_v2.h  |  1 +
>  drivers/infiniband/hw/hns/hns_roce_qp.c     | 16 ++++--
>  4 files changed, 67 insertions(+), 14 deletions(-)

It is a strange implementation; the double bitmap and refcount look like
open-coding of some basic coding patterns. Let's hold off on applying it
for now.

Thanks

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH for-next 0/9] RDMA/hns: Bugfixes and one improvement
  2024-09-06  9:34 [PATCH for-next 0/9] RDMA/hns: Bugfixes and one improvement Junxian Huang
                   ` (8 preceding siblings ...)
  2024-09-06  9:34 ` [PATCH for-next 9/9] RDMA/hns: Fix different dgids mapping to the same dip_idx Junxian Huang
@ 2024-09-10 13:13 ` Leon Romanovsky
  2024-09-10 13:13 ` (subset) " Leon Romanovsky
  10 siblings, 0 replies; 19+ messages in thread
From: Leon Romanovsky @ 2024-09-10 13:13 UTC (permalink / raw)
  To: Junxian Huang; +Cc: jgg, linux-rdma, linuxarm, linux-kernel

On Fri, Sep 06, 2024 at 05:34:35PM +0800, Junxian Huang wrote:
> This is a series of hns patches. Patch #8 is an improvement for
> hem allocation performance, and the others are some fixes.
> 
>   RDMA/hns: Fix spin_unlock_irqrestore() called with IRQs enabled
>   RDMA/hns: Fix 1bit-ECC recovery address in non-4K OS
>   RDMA/hns: Don't modify rq next block addr in HIP09 QPC
>   RDMA/hns: Fix VF triggering PF reset in abnormal interrupt handler
>   RDMA/hns: Optimize hem allocation performance
>   RDMA/hns: Fix Use-After-Free of rsv_qp on HIP08
>   RDMA/hns: Fix the overflow risk of hem_list_calc_ba_range()

Applied

>   RDMA/hns: Fix cpu stuck caused by printings during reset
>   RDMA/hns: Fix different dgids mapping to the same dip_idx

Need some discussion.

Thanks

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: (subset) [PATCH for-next 0/9] RDMA/hns: Bugfixes and one improvement
  2024-09-06  9:34 [PATCH for-next 0/9] RDMA/hns: Bugfixes and one improvement Junxian Huang
                   ` (9 preceding siblings ...)
  2024-09-10 13:13 ` [PATCH for-next 0/9] RDMA/hns: Bugfixes and one improvement Leon Romanovsky
@ 2024-09-10 13:13 ` Leon Romanovsky
  10 siblings, 0 replies; 19+ messages in thread
From: Leon Romanovsky @ 2024-09-10 13:13 UTC (permalink / raw)
  To: jgg, Junxian Huang; +Cc: linux-rdma, linuxarm, linux-kernel


On Fri, 06 Sep 2024 17:34:35 +0800, Junxian Huang wrote:
> This is a series of hns patches. Patch #8 is an improvement for
> hem allocation performance, and the others are some fixes.
> 
> Chengchang Tang (2):
>   RDMA/hns: Fix spin_unlock_irqrestore() called with IRQs enabled
>   RDMA/hns: Fix 1bit-ECC recovery address in non-4K OS
> 
> [...]

Applied, thanks!

[1/9] RDMA/hns: Don't modify rq next block addr in HIP09 QPC
      https://git.kernel.org/rdma/rdma/c/6928d264e328e0
[2/9] RDMA/hns: Fix Use-After-Free of rsv_qp on HIP08
      https://git.kernel.org/rdma/rdma/c/fd8489294dd2be
[4/9] RDMA/hns: Fix the overflow risk of hem_list_calc_ba_range()
      https://git.kernel.org/rdma/rdma/c/d586628b169d14
[5/9] RDMA/hns: Fix spin_unlock_irqrestore() called with IRQs enabled
      https://git.kernel.org/rdma/rdma/c/74d315b5af1802
[6/9] RDMA/hns: Fix VF triggering PF reset in abnormal interrupt handler
      https://git.kernel.org/rdma/rdma/c/4321feefa5501a
[7/9] RDMA/hns: Fix 1bit-ECC recovery address in non-4K OS
      https://git.kernel.org/rdma/rdma/c/ce196f6297c7f3
[8/9] RDMA/hns: Optimize hem allocation performance
      https://git.kernel.org/rdma/rdma/c/fe51f6254d81f5

Best regards,
-- 
Leon Romanovsky <leon@kernel.org>


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH for-next 3/9] RDMA/hns: Fix cpu stuck caused by printings during reset
  2024-09-10 13:09   ` Leon Romanovsky
@ 2024-09-11  1:34     ` Junxian Huang
  2024-09-11 13:25       ` Leon Romanovsky
  0 siblings, 1 reply; 19+ messages in thread
From: Junxian Huang @ 2024-09-11  1:34 UTC (permalink / raw)
  To: Leon Romanovsky; +Cc: jgg, linux-rdma, linuxarm, linux-kernel



On 2024/9/10 21:09, Leon Romanovsky wrote:
> On Fri, Sep 06, 2024 at 05:34:38PM +0800, Junxian Huang wrote:
>> From: wenglianfa <wenglianfa@huawei.com>
>>
>> During reset, cmd to destroy resources such as qp, cq, and mr may
>> fail, and error logs will be printed. When a large number of
>> resources are destroyed, there will be lots of printings, and it
>> may lead to a cpu stuck. Replace the printing functions in these
>> paths with the ratelimited version.
> 
> At least some of them, if not most, should be deleted.
> 

Hi Leon, I wonder if there is a clear standard for when printing
can be added?

Thanks,
Junxian

> Thanks

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH for-next 3/9] RDMA/hns: Fix cpu stuck caused by printings during reset
  2024-09-11  1:34     ` Junxian Huang
@ 2024-09-11 13:25       ` Leon Romanovsky
  2024-09-12  1:04         ` Junxian Huang
  0 siblings, 1 reply; 19+ messages in thread
From: Leon Romanovsky @ 2024-09-11 13:25 UTC (permalink / raw)
  To: Junxian Huang; +Cc: jgg, linux-rdma, linuxarm, linux-kernel

On Wed, Sep 11, 2024 at 09:34:19AM +0800, Junxian Huang wrote:
> 
> 
> On 2024/9/10 21:09, Leon Romanovsky wrote:
> > On Fri, Sep 06, 2024 at 05:34:38PM +0800, Junxian Huang wrote:
> >> From: wenglianfa <wenglianfa@huawei.com>
> >>
> >> During reset, cmd to destroy resources such as qp, cq, and mr may
> >> fail, and error logs will be printed. When a large number of
> >> resources are destroyed, there will be lots of printings, and it
> >> may lead to a cpu stuck. Replace the printing functions in these
> >> paths with the ratelimited version.
> > 
> > At least some of them, if not most, should be deleted.
> > 
> 
> Hi Leon, I wonder if there is a clear standard for when printing
> can be added?

I don't think so, but there are some guidelines that can help you to do it:
1. Don't print error messages in the fast path.
2. Don't print error messages if another function down the stack already
   printed one.
3. Don't print error messages if it is possible to trigger them from an
unprivileged user.
...

> 
> Thanks,
> Junxian
> 
> > Thanks

^ permalink raw reply	[flat|nested] 19+ messages in thread
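The fix under discussion switches plain prints to their ratelimited variants; the idea behind `printk_ratelimited()` is a simple burst-per-interval limiter. A minimal userspace sketch of that idea follows — the names and constants are illustrative, not the kernel's `___ratelimit()` implementation:

```c
/*
 * Minimal sketch of the rate-limit idea behind printk_ratelimited():
 * emit at most `burst` messages per `interval` and suppress the rest.
 * This is illustrative userspace C, not the kernel's ___ratelimit().
 */
#include <stdbool.h>

struct ratelimit_state {
	long interval;  /* window length, in the caller's time unit */
	int burst;      /* messages allowed per window              */
	long begin;     /* start of the current window              */
	int printed;    /* messages emitted in this window          */
	int missed;     /* messages suppressed in this window       */
};

/* Return true if a message may be emitted at time `now`. */
static bool ratelimit_ok(struct ratelimit_state *rs, long now)
{
	if (now - rs->begin >= rs->interval) {
		rs->begin = now;   /* new window: reset the budget */
		rs->printed = 0;
	}
	if (rs->printed < rs->burst) {
		rs->printed++;
		return true;
	}
	rs->missed++;
	return false;
}
```

With such a limiter, a storm of failed destroy commands during reset produces a bounded number of log lines per window instead of flooding the console, which is what avoids the CPU getting stuck in printing.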

* Re: [PATCH for-next 3/9] RDMA/hns: Fix cpu stuck caused by printings during reset
  2024-09-11 13:25       ` Leon Romanovsky
@ 2024-09-12  1:04         ` Junxian Huang
  0 siblings, 0 replies; 19+ messages in thread
From: Junxian Huang @ 2024-09-12  1:04 UTC (permalink / raw)
  To: Leon Romanovsky; +Cc: jgg, linux-rdma, linuxarm, linux-kernel



On 2024/9/11 21:25, Leon Romanovsky wrote:
> On Wed, Sep 11, 2024 at 09:34:19AM +0800, Junxian Huang wrote:
>>
>>
>> On 2024/9/10 21:09, Leon Romanovsky wrote:
>>> On Fri, Sep 06, 2024 at 05:34:38PM +0800, Junxian Huang wrote:
>>>> From: wenglianfa <wenglianfa@huawei.com>
>>>>
>>>> During reset, cmd to destroy resources such as qp, cq, and mr may
>>>> fail, and error logs will be printed. When a large number of
>>>> resources are destroyed, there will be lots of printings, and it
>>>> may lead to a cpu stuck. Replace the printing functions in these
>>>> paths with the ratelimited version.
>>>
>>> At least some of them, if not most, should be deleted.
>>>
>>
>> Hi Leon,I wonder if there is a clear standard about whether printing
>> can be added?
> 
> I don't think so, but there are some guidelines that can help you to do it:
> 1. Don't print error messages in the fast path.
> 2. Don't print error messages if another function down the stack already
>    printed one.
> 3. Don't print error messages if it is possible to trigger them from an
> unprivileged user.
> ...
> 

Thanks

>>
>> Thanks,
>> Junxian
>>
>>> Thanks

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH for-next 9/9] RDMA/hns: Fix different dgids mapping to the same dip_idx
  2024-09-10 13:12   ` Leon Romanovsky
@ 2024-10-17 13:21     ` Junxian Huang
  2024-10-29 12:40       ` Leon Romanovsky
  0 siblings, 1 reply; 19+ messages in thread
From: Junxian Huang @ 2024-10-17 13:21 UTC (permalink / raw)
  To: Leon Romanovsky; +Cc: jgg, linux-rdma, linuxarm, linux-kernel



On 2024/9/10 21:12, Leon Romanovsky wrote:
> On Fri, Sep 06, 2024 at 05:34:44PM +0800, Junxian Huang wrote:
>> From: Feng Fang <fangfeng4@huawei.com>
>>
>> DIP algorithm requires a one-to-one mapping between dgid and dip_idx.
>> Currently a queue 'spare_idx' is used to store QPN of QPs that use
>> DIP algorithm. For a new dgid, use a QPN from spare_idx as dip_idx.
>> This method lacks a mechanism for deduplicating QPN, which may result
>> in different dgids sharing the same dip_idx and break the one-to-one
>> mapping requirement.
>>
>> This patch replaces spare_idx with two new bitmaps: qpn_bitmap to record
>> QPNs that are not being used as a dip_idx, and dip_idx_bitmap to record
>> QPNs that are. Besides, introduce a reference count for each dip_idx to
>> indicate the number of QPs using it. When creating a DIP QP, if it has a
>> new dgid, allocate a spare QPN as its dip_idx and set the corresponding
>> bit in dip_idx_bitmap; otherwise add 1 to the reference count of the
>> reused dip_idx. When destroying a DIP QP, decrement the reference count
>> by 1. Once it reaches 0, set the bit in qpn_bitmap and clear the bit in
>> dip_idx_bitmap.
>>
>> Fixes: eb653eda1e91 ("RDMA/hns: Bugfix for incorrect association between dip_idx and dgid")
>> Fixes: f91696f2f053 ("RDMA/hns: Support congestion control type selection according to the FW")
>> Signed-off-by: Feng Fang <fangfeng4@huawei.com>
>> Signed-off-by: Junxian Huang <huangjunxian6@hisilicon.com>
>> ---
>>  drivers/infiniband/hw/hns/hns_roce_device.h |  6 +--
>>  drivers/infiniband/hw/hns/hns_roce_hw_v2.c  | 58 ++++++++++++++++++---
>>  drivers/infiniband/hw/hns/hns_roce_hw_v2.h  |  1 +
>>  drivers/infiniband/hw/hns/hns_roce_qp.c     | 16 ++++--
>>  4 files changed, 67 insertions(+), 14 deletions(-)
> 
> It is a strange implementation; the double bitmap and refcount look like
> open-coding of some basic coding patterns. Let's hold off on applying it
> for now.
> 

Hi Leon, it's been a while since this patch was sent. Is it okay to be applied?

Regarding your question about the double bitmaps, that's because we have 3 states
to track:
1) the context hasn't been created
2) the context has been created but not used as dip_ctx
3) the context is being used as dip_ctx.

Junxian

> Thanks
> 

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH for-next 9/9] RDMA/hns: Fix different dgids mapping to the same dip_idx
  2024-10-17 13:21     ` Junxian Huang
@ 2024-10-29 12:40       ` Leon Romanovsky
  0 siblings, 0 replies; 19+ messages in thread
From: Leon Romanovsky @ 2024-10-29 12:40 UTC (permalink / raw)
  To: Junxian Huang; +Cc: jgg, linux-rdma, linuxarm, linux-kernel

On Thu, Oct 17, 2024 at 09:21:45PM +0800, Junxian Huang wrote:
> 
> 
> On 2024/9/10 21:12, Leon Romanovsky wrote:
> > On Fri, Sep 06, 2024 at 05:34:44PM +0800, Junxian Huang wrote:
> >> From: Feng Fang <fangfeng4@huawei.com>
> >>
> >> DIP algorithm requires a one-to-one mapping between dgid and dip_idx.
> >> Currently a queue 'spare_idx' is used to store QPN of QPs that use
> >> DIP algorithm. For a new dgid, use a QPN from spare_idx as dip_idx.
> >> This method lacks a mechanism for deduplicating QPN, which may result
> >> in different dgids sharing the same dip_idx and break the one-to-one
> >> mapping requirement.
> >>
> >> This patch replaces spare_idx with two new bitmaps: qpn_bitmap to record
> >> QPNs that are not being used as a dip_idx, and dip_idx_bitmap to record
> >> QPNs that are. Besides, introduce a reference count for each dip_idx to
> >> indicate the number of QPs using it. When creating a DIP QP, if it has a
> >> new dgid, allocate a spare QPN as its dip_idx and set the corresponding
> >> bit in dip_idx_bitmap; otherwise add 1 to the reference count of the
> >> reused dip_idx. When destroying a DIP QP, decrement the reference count
> >> by 1. Once it reaches 0, set the bit in qpn_bitmap and clear the bit in
> >> dip_idx_bitmap.
> >>
> >> Fixes: eb653eda1e91 ("RDMA/hns: Bugfix for incorrect association between dip_idx and dgid")
> >> Fixes: f91696f2f053 ("RDMA/hns: Support congestion control type selection according to the FW")
> >> Signed-off-by: Feng Fang <fangfeng4@huawei.com>
> >> Signed-off-by: Junxian Huang <huangjunxian6@hisilicon.com>
> >> ---
> >>  drivers/infiniband/hw/hns/hns_roce_device.h |  6 +--
> >>  drivers/infiniband/hw/hns/hns_roce_hw_v2.c  | 58 ++++++++++++++++++---
> >>  drivers/infiniband/hw/hns/hns_roce_hw_v2.h  |  1 +
> >>  drivers/infiniband/hw/hns/hns_roce_qp.c     | 16 ++++--
> >>  4 files changed, 67 insertions(+), 14 deletions(-)
> > 
> > It is a strange implementation; the double bitmap and refcount look like
> > open-coding of some basic coding patterns. Let's hold off on applying it
> > for now.
> > 
> 
> Hi Leon, it's been a while since this patch was sent. Is it okay to be applied?

Not in this implementation. I think that xarray + tag will give you what
you are looking for without the need to open-code it.

Thanks

> 
> Regarding your question about the double bitmaps, that's because we have 3 states
> to track:
> 1) the context hasn't been created
> 2) the context has been created but not used as dip_ctx
> 3) the context is being used as dip_ctx.
> 
> Junxian
> 
> > Thanks
> > 
> 

^ permalink raw reply	[flat|nested] 19+ messages in thread
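A rough userspace analogue of the refcounted-lookup pattern suggested in the reply above, using a single map from dgid to a refcounted entry instead of two bitmaps, might look like the following. It does not model the actual kernel xarray API, only the pattern:

```c
/*
 * Userspace analogue of the refcounted-lookup pattern suggested here:
 * a single table maps a dgid to a refcounted entry, replacing the
 * open-coded pair of bitmaps. Illustrative only; the kernel xarray
 * API (xa_alloc(), xa_mark_t tags) is not modeled.
 */
#include <stddef.h>
#include <string.h>

#define MAX_ENTRIES 16
#define GID_LEN 16

struct dip_entry {
	unsigned char dgid[GID_LEN];
	unsigned int idx;
	unsigned int refcnt;    /* 0 => slot free */
};

static struct dip_entry table[MAX_ENTRIES];

/* Look up dgid: reuse its entry or claim a free slot. Returns idx or -1. */
static int dip_get(const unsigned char *dgid)
{
	struct dip_entry *free_slot = NULL;

	for (unsigned int i = 0; i < MAX_ENTRIES; i++) {
		if (table[i].refcnt && !memcmp(table[i].dgid, dgid, GID_LEN)) {
			table[i].refcnt++;
			return (int)table[i].idx;
		}
		if (!table[i].refcnt && !free_slot)
			free_slot = &table[i];
	}
	if (!free_slot)
		return -1;      /* table full */
	memcpy(free_slot->dgid, dgid, GID_LEN);
	free_slot->idx = (unsigned int)(free_slot - table);
	free_slot->refcnt = 1;
	return (int)free_slot->idx;
}

/* Drop one reference; the slot becomes free when the count hits zero. */
static void dip_put(unsigned int idx)
{
	if (table[idx].refcnt)
		table[idx].refcnt--;
}
```

With the entry keyed directly by dgid and carrying its own refcount, the three states Junxian lists (not created / created but unused / in use as dip_ctx) fall out of one structure: absent, refcnt reaching zero, and refcnt positive.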

end of thread, other threads:[~2024-10-29 12:40 UTC | newest]

Thread overview: 19+ messages
2024-09-06  9:34 [PATCH for-next 0/9] RDMA/hns: Bugfixes and one improvement Junxian Huang
2024-09-06  9:34 ` [PATCH for-next 1/9] RDMA/hns: Don't modify rq next block addr in HIP09 QPC Junxian Huang
2024-09-06  9:34 ` [PATCH for-next 2/9] RDMA/hns: Fix Use-After-Free of rsv_qp on HIP08 Junxian Huang
2024-09-06  9:34 ` [PATCH for-next 3/9] RDMA/hns: Fix cpu stuck caused by printings during reset Junxian Huang
2024-09-10 13:09   ` Leon Romanovsky
2024-09-11  1:34     ` Junxian Huang
2024-09-11 13:25       ` Leon Romanovsky
2024-09-12  1:04         ` Junxian Huang
2024-09-06  9:34 ` [PATCH for-next 4/9] RDMA/hns: Fix the overflow risk of hem_list_calc_ba_range() Junxian Huang
2024-09-06  9:34 ` [PATCH for-next 5/9] RDMA/hns: Fix spin_unlock_irqrestore() called with IRQs enabled Junxian Huang
2024-09-06  9:34 ` [PATCH for-next 6/9] RDMA/hns: Fix VF triggering PF reset in abnormal interrupt handler Junxian Huang
2024-09-06  9:34 ` [PATCH for-next 7/9] RDMA/hns: Fix 1bit-ECC recovery address in non-4K OS Junxian Huang
2024-09-06  9:34 ` [PATCH for-next 8/9] RDMA/hns: Optimize hem allocation performance Junxian Huang
2024-09-06  9:34 ` [PATCH for-next 9/9] RDMA/hns: Fix different dgids mapping to the same dip_idx Junxian Huang
2024-09-10 13:12   ` Leon Romanovsky
2024-10-17 13:21     ` Junxian Huang
2024-10-29 12:40       ` Leon Romanovsky
2024-09-10 13:13 ` [PATCH for-next 0/9] RDMA/hns: Bugfixes and one improvement Leon Romanovsky
2024-09-10 13:13 ` (subset) " Leon Romanovsky
