linux-rdma.vger.kernel.org archive mirror
* [PATCH rdma-next v2 0/4] RDMA/bnxt_re: Support direct verbs
@ 2025-11-04  7:23 Sriharsha Basavapatna
  2025-11-04  7:23 ` [PATCH rdma-next v2 1/4] RDMA/bnxt_re: Move the UAPI methods to a dedicated file Sriharsha Basavapatna
                   ` (3 more replies)
  0 siblings, 4 replies; 12+ messages in thread
From: Sriharsha Basavapatna @ 2025-11-04  7:23 UTC (permalink / raw)
  To: leon, jgg
  Cc: linux-rdma, andrew.gospodarek, selvin.xavier,
	kalesh-anakkur.purayil, Sriharsha Basavapatna

Hi,

This patchset adds support for Direct Verbs in the bnxt_re driver.

This is required by vendor-specific applications that need to manage
the HW resources directly and implement the datapath in the
application.

To support this, the library and the driver are being enhanced to
provide Direct Verbs, which the application can use to allocate and
manage HW resources (queues, doorbells, etc.). The Direct Verbs
enable the application to implement the control path.

Patch#1 Move the UAPI methods to a separate file
Patch#2 Refactor the existing bnxt_qplib_create_qp() function
Patch#3 Support DBR and UMEM direct verbs
Patch#4 Support CQ and QP direct verbs

Thanks,
-Harsha

******

Changes:

v2:
- Fixed build warnings reported by test robot in patches 3 and 4.

v1: https://lore.kernel.org/linux-rdma/20251103105033.205586-1-sriharsha.basavapatna@broadcom.com/

******

Kalesh AP (3):
  RDMA/bnxt_re: Move the UAPI methods to a dedicated file
  RDMA/bnxt_re: Refactor bnxt_qplib_create_qp() function
  RDMA/bnxt_re: Direct Verbs: Support DBR and UMEM verbs

Sriharsha Basavapatna (1):
  RDMA/bnxt_re: Direct Verbs: Support CQ and QP verbs

 drivers/infiniband/hw/bnxt_re/Makefile    |    2 +-
 drivers/infiniband/hw/bnxt_re/bnxt_re.h   |   12 +-
 drivers/infiniband/hw/bnxt_re/dv.c        | 1816 +++++++++++++++++++++
 drivers/infiniband/hw/bnxt_re/ib_verbs.c  |  549 +++----
 drivers/infiniband/hw/bnxt_re/ib_verbs.h  |   23 +
 drivers/infiniband/hw/bnxt_re/qplib_fp.c  |  311 ++--
 drivers/infiniband/hw/bnxt_re/qplib_fp.h  |   10 +-
 drivers/infiniband/hw/bnxt_re/qplib_res.c |   43 +
 drivers/infiniband/hw/bnxt_re/qplib_res.h |   10 +
 include/uapi/rdma/bnxt_re-abi.h           |  142 ++
 10 files changed, 2376 insertions(+), 542 deletions(-)
 create mode 100644 drivers/infiniband/hw/bnxt_re/dv.c

-- 
2.51.2.636.ga99f379adf


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH rdma-next v2 1/4] RDMA/bnxt_re: Move the UAPI methods to a dedicated file
  2025-11-04  7:23 [PATCH rdma-next v2 0/4] RDMA/bnxt_re: Support direct verbs Sriharsha Basavapatna
@ 2025-11-04  7:23 ` Sriharsha Basavapatna
  2025-11-09  9:12   ` Leon Romanovsky
  2025-11-04  7:23 ` [PATCH rdma-next v2 2/4] RDMA/bnxt_re: Refactor bnxt_qplib_create_qp() function Sriharsha Basavapatna
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 12+ messages in thread
From: Sriharsha Basavapatna @ 2025-11-04  7:23 UTC (permalink / raw)
  To: leon, jgg
  Cc: linux-rdma, andrew.gospodarek, selvin.xavier,
	kalesh-anakkur.purayil, Sriharsha Basavapatna

From: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>

This is in preparation for upcoming patches in the series.
The driver has to support additional UAPIs for Direct Verbs.
Move the current UAPI implementation to a new file, dv.c.

Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Selvin Thyparampil Xavier <selvin.xavier@broadcom.com>
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
---
 drivers/infiniband/hw/bnxt_re/Makefile   |   2 +-
 drivers/infiniband/hw/bnxt_re/dv.c       | 356 +++++++++++++++++++++++
 drivers/infiniband/hw/bnxt_re/ib_verbs.c | 305 +------------------
 drivers/infiniband/hw/bnxt_re/ib_verbs.h |   3 +
 4 files changed, 361 insertions(+), 305 deletions(-)
 create mode 100644 drivers/infiniband/hw/bnxt_re/dv.c

diff --git a/drivers/infiniband/hw/bnxt_re/Makefile b/drivers/infiniband/hw/bnxt_re/Makefile
index f63417d2ccc6..b82d12df6269 100644
--- a/drivers/infiniband/hw/bnxt_re/Makefile
+++ b/drivers/infiniband/hw/bnxt_re/Makefile
@@ -5,4 +5,4 @@ obj-$(CONFIG_INFINIBAND_BNXT_RE) += bnxt_re.o
 bnxt_re-y := main.o ib_verbs.o \
 	     qplib_res.o qplib_rcfw.o	\
 	     qplib_sp.o qplib_fp.o  hw_counters.o	\
-	     debugfs.o
+	     debugfs.o dv.o
diff --git a/drivers/infiniband/hw/bnxt_re/dv.c b/drivers/infiniband/hw/bnxt_re/dv.c
new file mode 100644
index 000000000000..2b3e34b940b3
--- /dev/null
+++ b/drivers/infiniband/hw/bnxt_re/dv.c
@@ -0,0 +1,356 @@
+/*
+ * Broadcom NetXtreme-E RoCE driver.
+ *
+ * Copyright (c) 2025, Broadcom. All rights reserved.  The term
+ * Broadcom refers to Broadcom Inc. and/or its subsidiaries.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in
+ *    the documentation and/or other materials provided with the
+ *    distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
+ * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
+ * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * Description: Direct Verbs interpreter
+ */
+
+#include <rdma/ib_addr.h>
+#include <rdma/uverbs_types.h>
+#include <rdma/uverbs_std_types.h>
+#include <rdma/ib_user_ioctl_cmds.h>
+#define UVERBS_MODULE_NAME bnxt_re
+#include <rdma/uverbs_named_ioctl.h>
+#include <rdma/bnxt_re-abi.h>
+
+#include "roce_hsi.h"
+#include "qplib_res.h"
+#include "qplib_sp.h"
+#include "qplib_fp.h"
+#include "qplib_rcfw.h"
+#include "bnxt_re.h"
+#include "ib_verbs.h"
+
+static struct bnxt_re_cq *bnxt_re_search_for_cq(struct bnxt_re_dev *rdev, u32 cq_id)
+{
+	struct bnxt_re_cq *cq = NULL, *tmp_cq;
+
+	hash_for_each_possible(rdev->cq_hash, tmp_cq, hash_entry, cq_id) {
+		if (tmp_cq->qplib_cq.id == cq_id) {
+			cq = tmp_cq;
+			break;
+		}
+	}
+	return cq;
+}
+
+static struct bnxt_re_srq *bnxt_re_search_for_srq(struct bnxt_re_dev *rdev, u32 srq_id)
+{
+	struct bnxt_re_srq *srq = NULL, *tmp_srq;
+
+	hash_for_each_possible(rdev->srq_hash, tmp_srq, hash_entry, srq_id) {
+		if (tmp_srq->qplib_srq.id == srq_id) {
+			srq = tmp_srq;
+			break;
+		}
+	}
+	return srq;
+}
+
+static int UVERBS_HANDLER(BNXT_RE_METHOD_NOTIFY_DRV)(struct uverbs_attr_bundle *attrs)
+{
+	struct bnxt_re_ucontext *uctx;
+
+	uctx = container_of(ib_uverbs_get_ucontext(attrs), struct bnxt_re_ucontext, ib_uctx);
+	bnxt_re_pacing_alert(uctx->rdev);
+	return 0;
+}
+
+static int UVERBS_HANDLER(BNXT_RE_METHOD_ALLOC_PAGE)(struct uverbs_attr_bundle *attrs)
+{
+	struct ib_uobject *uobj = uverbs_attr_get_uobject(attrs, BNXT_RE_ALLOC_PAGE_HANDLE);
+	enum bnxt_re_alloc_page_type alloc_type;
+	struct bnxt_re_user_mmap_entry *entry;
+	enum bnxt_re_mmap_flag mmap_flag;
+	struct bnxt_qplib_chip_ctx *cctx;
+	struct bnxt_re_ucontext *uctx;
+	struct bnxt_re_dev *rdev;
+	u64 mmap_offset;
+	u32 length;
+	u32 dpi;
+	u64 addr;
+	int err;
+
+	uctx = container_of(ib_uverbs_get_ucontext(attrs), struct bnxt_re_ucontext, ib_uctx);
+	if (IS_ERR(uctx))
+		return PTR_ERR(uctx);
+
+	err = uverbs_get_const(&alloc_type, attrs, BNXT_RE_ALLOC_PAGE_TYPE);
+	if (err)
+		return err;
+
+	rdev = uctx->rdev;
+	cctx = rdev->chip_ctx;
+
+	switch (alloc_type) {
+	case BNXT_RE_ALLOC_WC_PAGE:
+		if (cctx->modes.db_push)  {
+			if (bnxt_qplib_alloc_dpi(&rdev->qplib_res, &uctx->wcdpi,
+						 uctx, BNXT_QPLIB_DPI_TYPE_WC))
+				return -ENOMEM;
+			length = PAGE_SIZE;
+			dpi = uctx->wcdpi.dpi;
+			addr = (u64)uctx->wcdpi.umdbr;
+			mmap_flag = BNXT_RE_MMAP_WC_DB;
+		} else {
+			return -EINVAL;
+		}
+
+		break;
+	case BNXT_RE_ALLOC_DBR_BAR_PAGE:
+		length = PAGE_SIZE;
+		addr = (u64)rdev->pacing.dbr_bar_addr;
+		mmap_flag = BNXT_RE_MMAP_DBR_BAR;
+		break;
+
+	case BNXT_RE_ALLOC_DBR_PAGE:
+		length = PAGE_SIZE;
+		addr = (u64)rdev->pacing.dbr_page;
+		mmap_flag = BNXT_RE_MMAP_DBR_PAGE;
+		break;
+
+	default:
+		return -EOPNOTSUPP;
+	}
+
+	entry = bnxt_re_mmap_entry_insert(uctx, addr, mmap_flag, &mmap_offset);
+	if (!entry)
+		return -ENOMEM;
+
+	uobj->object = entry;
+	uverbs_finalize_uobj_create(attrs, BNXT_RE_ALLOC_PAGE_HANDLE);
+	err = uverbs_copy_to(attrs, BNXT_RE_ALLOC_PAGE_MMAP_OFFSET,
+			     &mmap_offset, sizeof(mmap_offset));
+	if (err)
+		return err;
+
+	err = uverbs_copy_to(attrs, BNXT_RE_ALLOC_PAGE_MMAP_LENGTH,
+			     &length, sizeof(length));
+	if (err)
+		return err;
+
+	err = uverbs_copy_to(attrs, BNXT_RE_ALLOC_PAGE_DPI,
+			     &dpi, sizeof(dpi));
+	if (err)
+		return err;
+
+	return 0;
+}
+
+static int alloc_page_obj_cleanup(struct ib_uobject *uobject,
+				  enum rdma_remove_reason why,
+			    struct uverbs_attr_bundle *attrs)
+{
+	struct  bnxt_re_user_mmap_entry *entry = uobject->object;
+	struct bnxt_re_ucontext *uctx = entry->uctx;
+
+	switch (entry->mmap_flag) {
+	case BNXT_RE_MMAP_WC_DB:
+		if (uctx && uctx->wcdpi.dbr) {
+			struct bnxt_re_dev *rdev = uctx->rdev;
+
+			bnxt_qplib_dealloc_dpi(&rdev->qplib_res, &uctx->wcdpi);
+			uctx->wcdpi.dbr = NULL;
+		}
+		break;
+	case BNXT_RE_MMAP_DBR_BAR:
+	case BNXT_RE_MMAP_DBR_PAGE:
+		break;
+	default:
+		goto exit;
+	}
+	rdma_user_mmap_entry_remove(&entry->rdma_entry);
+exit:
+	return 0;
+}
+
+DECLARE_UVERBS_NAMED_METHOD(BNXT_RE_METHOD_ALLOC_PAGE,
+			    UVERBS_ATTR_IDR(BNXT_RE_ALLOC_PAGE_HANDLE,
+					    BNXT_RE_OBJECT_ALLOC_PAGE,
+					    UVERBS_ACCESS_NEW,
+					    UA_MANDATORY),
+			    UVERBS_ATTR_CONST_IN(BNXT_RE_ALLOC_PAGE_TYPE,
+						 enum bnxt_re_alloc_page_type,
+						 UA_MANDATORY),
+			    UVERBS_ATTR_PTR_OUT(BNXT_RE_ALLOC_PAGE_MMAP_OFFSET,
+						UVERBS_ATTR_TYPE(u64),
+						UA_MANDATORY),
+			    UVERBS_ATTR_PTR_OUT(BNXT_RE_ALLOC_PAGE_MMAP_LENGTH,
+						UVERBS_ATTR_TYPE(u32),
+						UA_MANDATORY),
+			    UVERBS_ATTR_PTR_OUT(BNXT_RE_ALLOC_PAGE_DPI,
+						UVERBS_ATTR_TYPE(u32),
+						UA_MANDATORY));
+
+DECLARE_UVERBS_NAMED_METHOD_DESTROY(BNXT_RE_METHOD_DESTROY_PAGE,
+				    UVERBS_ATTR_IDR(BNXT_RE_DESTROY_PAGE_HANDLE,
+						    BNXT_RE_OBJECT_ALLOC_PAGE,
+						    UVERBS_ACCESS_DESTROY,
+						    UA_MANDATORY));
+
+DECLARE_UVERBS_NAMED_OBJECT(BNXT_RE_OBJECT_ALLOC_PAGE,
+			    UVERBS_TYPE_ALLOC_IDR(alloc_page_obj_cleanup),
+			    &UVERBS_METHOD(BNXT_RE_METHOD_ALLOC_PAGE),
+			    &UVERBS_METHOD(BNXT_RE_METHOD_DESTROY_PAGE));
+
+DECLARE_UVERBS_NAMED_METHOD(BNXT_RE_METHOD_NOTIFY_DRV);
+
+DECLARE_UVERBS_GLOBAL_METHODS(BNXT_RE_OBJECT_NOTIFY_DRV,
+			      &UVERBS_METHOD(BNXT_RE_METHOD_NOTIFY_DRV));
+
+/* Toggle MEM */
+static int UVERBS_HANDLER(BNXT_RE_METHOD_GET_TOGGLE_MEM)(struct uverbs_attr_bundle *attrs)
+{
+	struct ib_uobject *uobj = uverbs_attr_get_uobject(attrs, BNXT_RE_TOGGLE_MEM_HANDLE);
+	enum bnxt_re_mmap_flag mmap_flag = BNXT_RE_MMAP_TOGGLE_PAGE;
+	enum bnxt_re_get_toggle_mem_type res_type;
+	struct bnxt_re_user_mmap_entry *entry;
+	struct bnxt_re_ucontext *uctx;
+	struct ib_ucontext *ib_uctx;
+	struct bnxt_re_dev *rdev;
+	struct bnxt_re_srq *srq;
+	u32 length = PAGE_SIZE;
+	struct bnxt_re_cq *cq;
+	u64 mem_offset;
+	u32 offset = 0;
+	u64 addr = 0;
+	u32 res_id;
+	int err;
+
+	ib_uctx = ib_uverbs_get_ucontext(attrs);
+	if (IS_ERR(ib_uctx))
+		return PTR_ERR(ib_uctx);
+
+	err = uverbs_get_const(&res_type, attrs, BNXT_RE_TOGGLE_MEM_TYPE);
+	if (err)
+		return err;
+
+	uctx = container_of(ib_uctx, struct bnxt_re_ucontext, ib_uctx);
+	rdev = uctx->rdev;
+	err = uverbs_copy_from(&res_id, attrs, BNXT_RE_TOGGLE_MEM_RES_ID);
+	if (err)
+		return err;
+
+	switch (res_type) {
+	case BNXT_RE_CQ_TOGGLE_MEM:
+		cq = bnxt_re_search_for_cq(rdev, res_id);
+		if (!cq)
+			return -EINVAL;
+
+		addr = (u64)cq->uctx_cq_page;
+		break;
+	case BNXT_RE_SRQ_TOGGLE_MEM:
+		srq = bnxt_re_search_for_srq(rdev, res_id);
+		if (!srq)
+			return -EINVAL;
+
+		addr = (u64)srq->uctx_srq_page;
+		break;
+
+	default:
+		return -EOPNOTSUPP;
+	}
+
+	entry = bnxt_re_mmap_entry_insert(uctx, addr, mmap_flag, &mem_offset);
+	if (!entry)
+		return -ENOMEM;
+
+	uobj->object = entry;
+	uverbs_finalize_uobj_create(attrs, BNXT_RE_TOGGLE_MEM_HANDLE);
+	err = uverbs_copy_to(attrs, BNXT_RE_TOGGLE_MEM_MMAP_PAGE,
+			     &mem_offset, sizeof(mem_offset));
+	if (err)
+		return err;
+
+	err = uverbs_copy_to(attrs, BNXT_RE_TOGGLE_MEM_MMAP_LENGTH,
+			     &length, sizeof(length));
+	if (err)
+		return err;
+
+	err = uverbs_copy_to(attrs, BNXT_RE_TOGGLE_MEM_MMAP_OFFSET,
+			     &offset, sizeof(offset));
+	if (err)
+		return err;
+
+	return 0;
+}
+
+static int get_toggle_mem_obj_cleanup(struct ib_uobject *uobject,
+				      enum rdma_remove_reason why,
+				      struct uverbs_attr_bundle *attrs)
+{
+	struct  bnxt_re_user_mmap_entry *entry = uobject->object;
+
+	rdma_user_mmap_entry_remove(&entry->rdma_entry);
+	return 0;
+}
+
+DECLARE_UVERBS_NAMED_METHOD(BNXT_RE_METHOD_GET_TOGGLE_MEM,
+			    UVERBS_ATTR_IDR(BNXT_RE_TOGGLE_MEM_HANDLE,
+					    BNXT_RE_OBJECT_GET_TOGGLE_MEM,
+					    UVERBS_ACCESS_NEW,
+					    UA_MANDATORY),
+			    UVERBS_ATTR_CONST_IN(BNXT_RE_TOGGLE_MEM_TYPE,
+						 enum bnxt_re_get_toggle_mem_type,
+						 UA_MANDATORY),
+			    UVERBS_ATTR_PTR_IN(BNXT_RE_TOGGLE_MEM_RES_ID,
+					       UVERBS_ATTR_TYPE(u32),
+					       UA_MANDATORY),
+			    UVERBS_ATTR_PTR_OUT(BNXT_RE_TOGGLE_MEM_MMAP_PAGE,
+						UVERBS_ATTR_TYPE(u64),
+						UA_MANDATORY),
+			    UVERBS_ATTR_PTR_OUT(BNXT_RE_TOGGLE_MEM_MMAP_OFFSET,
+						UVERBS_ATTR_TYPE(u32),
+						UA_MANDATORY),
+			    UVERBS_ATTR_PTR_OUT(BNXT_RE_TOGGLE_MEM_MMAP_LENGTH,
+						UVERBS_ATTR_TYPE(u32),
+						UA_MANDATORY));
+
+DECLARE_UVERBS_NAMED_METHOD_DESTROY(BNXT_RE_METHOD_RELEASE_TOGGLE_MEM,
+				    UVERBS_ATTR_IDR(BNXT_RE_RELEASE_TOGGLE_MEM_HANDLE,
+						    BNXT_RE_OBJECT_GET_TOGGLE_MEM,
+						    UVERBS_ACCESS_DESTROY,
+						    UA_MANDATORY));
+
+DECLARE_UVERBS_NAMED_OBJECT(BNXT_RE_OBJECT_GET_TOGGLE_MEM,
+			    UVERBS_TYPE_ALLOC_IDR(get_toggle_mem_obj_cleanup),
+			    &UVERBS_METHOD(BNXT_RE_METHOD_GET_TOGGLE_MEM),
+			    &UVERBS_METHOD(BNXT_RE_METHOD_RELEASE_TOGGLE_MEM));
+
+const struct uapi_definition bnxt_re_uapi_defs[] = {
+	UAPI_DEF_CHAIN_OBJ_TREE_NAMED(BNXT_RE_OBJECT_ALLOC_PAGE),
+	UAPI_DEF_CHAIN_OBJ_TREE_NAMED(BNXT_RE_OBJECT_NOTIFY_DRV),
+	UAPI_DEF_CHAIN_OBJ_TREE_NAMED(BNXT_RE_OBJECT_GET_TOGGLE_MEM),
+	{}
+};
diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
index 4dab5ca7362b..034f5744127f 100644
--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
@@ -626,7 +626,7 @@ static int bnxt_re_create_fence_mr(struct bnxt_re_pd *pd)
 	return rc;
 }
 
-static struct bnxt_re_user_mmap_entry*
+struct bnxt_re_user_mmap_entry*
 bnxt_re_mmap_entry_insert(struct bnxt_re_ucontext *uctx, u64 mem_offset,
 			  enum bnxt_re_mmap_flag mmap_flag, u64 *offset)
 {
@@ -4534,32 +4534,6 @@ int bnxt_re_destroy_flow(struct ib_flow *flow_id)
 	return rc;
 }
 
-static struct bnxt_re_cq *bnxt_re_search_for_cq(struct bnxt_re_dev *rdev, u32 cq_id)
-{
-	struct bnxt_re_cq *cq = NULL, *tmp_cq;
-
-	hash_for_each_possible(rdev->cq_hash, tmp_cq, hash_entry, cq_id) {
-		if (tmp_cq->qplib_cq.id == cq_id) {
-			cq = tmp_cq;
-			break;
-		}
-	}
-	return cq;
-}
-
-static struct bnxt_re_srq *bnxt_re_search_for_srq(struct bnxt_re_dev *rdev, u32 srq_id)
-{
-	struct bnxt_re_srq *srq = NULL, *tmp_srq;
-
-	hash_for_each_possible(rdev->srq_hash, tmp_srq, hash_entry, srq_id) {
-		if (tmp_srq->qplib_srq.id == srq_id) {
-			srq = tmp_srq;
-			break;
-		}
-	}
-	return srq;
-}
-
 /* Helper function to mmap the virtual memory from user app */
 int bnxt_re_mmap(struct ib_ucontext *ib_uctx, struct vm_area_struct *vma)
 {
@@ -4662,280 +4636,3 @@ int bnxt_re_process_mad(struct ib_device *ibdev, int mad_flags,
 	ret |= IB_MAD_RESULT_REPLY;
 	return ret;
 }
-
-static int UVERBS_HANDLER(BNXT_RE_METHOD_NOTIFY_DRV)(struct uverbs_attr_bundle *attrs)
-{
-	struct bnxt_re_ucontext *uctx;
-
-	uctx = container_of(ib_uverbs_get_ucontext(attrs), struct bnxt_re_ucontext, ib_uctx);
-	bnxt_re_pacing_alert(uctx->rdev);
-	return 0;
-}
-
-static int UVERBS_HANDLER(BNXT_RE_METHOD_ALLOC_PAGE)(struct uverbs_attr_bundle *attrs)
-{
-	struct ib_uobject *uobj = uverbs_attr_get_uobject(attrs, BNXT_RE_ALLOC_PAGE_HANDLE);
-	enum bnxt_re_alloc_page_type alloc_type;
-	struct bnxt_re_user_mmap_entry *entry;
-	enum bnxt_re_mmap_flag mmap_flag;
-	struct bnxt_qplib_chip_ctx *cctx;
-	struct bnxt_re_ucontext *uctx;
-	struct bnxt_re_dev *rdev;
-	u64 mmap_offset;
-	u32 length;
-	u32 dpi;
-	u64 addr;
-	int err;
-
-	uctx = container_of(ib_uverbs_get_ucontext(attrs), struct bnxt_re_ucontext, ib_uctx);
-	if (IS_ERR(uctx))
-		return PTR_ERR(uctx);
-
-	err = uverbs_get_const(&alloc_type, attrs, BNXT_RE_ALLOC_PAGE_TYPE);
-	if (err)
-		return err;
-
-	rdev = uctx->rdev;
-	cctx = rdev->chip_ctx;
-
-	switch (alloc_type) {
-	case BNXT_RE_ALLOC_WC_PAGE:
-		if (cctx->modes.db_push)  {
-			if (bnxt_qplib_alloc_dpi(&rdev->qplib_res, &uctx->wcdpi,
-						 uctx, BNXT_QPLIB_DPI_TYPE_WC))
-				return -ENOMEM;
-			length = PAGE_SIZE;
-			dpi = uctx->wcdpi.dpi;
-			addr = (u64)uctx->wcdpi.umdbr;
-			mmap_flag = BNXT_RE_MMAP_WC_DB;
-		} else {
-			return -EINVAL;
-		}
-
-		break;
-	case BNXT_RE_ALLOC_DBR_BAR_PAGE:
-		length = PAGE_SIZE;
-		addr = (u64)rdev->pacing.dbr_bar_addr;
-		mmap_flag = BNXT_RE_MMAP_DBR_BAR;
-		break;
-
-	case BNXT_RE_ALLOC_DBR_PAGE:
-		length = PAGE_SIZE;
-		addr = (u64)rdev->pacing.dbr_page;
-		mmap_flag = BNXT_RE_MMAP_DBR_PAGE;
-		break;
-
-	default:
-		return -EOPNOTSUPP;
-	}
-
-	entry = bnxt_re_mmap_entry_insert(uctx, addr, mmap_flag, &mmap_offset);
-	if (!entry)
-		return -ENOMEM;
-
-	uobj->object = entry;
-	uverbs_finalize_uobj_create(attrs, BNXT_RE_ALLOC_PAGE_HANDLE);
-	err = uverbs_copy_to(attrs, BNXT_RE_ALLOC_PAGE_MMAP_OFFSET,
-			     &mmap_offset, sizeof(mmap_offset));
-	if (err)
-		return err;
-
-	err = uverbs_copy_to(attrs, BNXT_RE_ALLOC_PAGE_MMAP_LENGTH,
-			     &length, sizeof(length));
-	if (err)
-		return err;
-
-	err = uverbs_copy_to(attrs, BNXT_RE_ALLOC_PAGE_DPI,
-			     &dpi, sizeof(dpi));
-	if (err)
-		return err;
-
-	return 0;
-}
-
-static int alloc_page_obj_cleanup(struct ib_uobject *uobject,
-				  enum rdma_remove_reason why,
-			    struct uverbs_attr_bundle *attrs)
-{
-	struct  bnxt_re_user_mmap_entry *entry = uobject->object;
-	struct bnxt_re_ucontext *uctx = entry->uctx;
-
-	switch (entry->mmap_flag) {
-	case BNXT_RE_MMAP_WC_DB:
-		if (uctx && uctx->wcdpi.dbr) {
-			struct bnxt_re_dev *rdev = uctx->rdev;
-
-			bnxt_qplib_dealloc_dpi(&rdev->qplib_res, &uctx->wcdpi);
-			uctx->wcdpi.dbr = NULL;
-		}
-		break;
-	case BNXT_RE_MMAP_DBR_BAR:
-	case BNXT_RE_MMAP_DBR_PAGE:
-		break;
-	default:
-		goto exit;
-	}
-	rdma_user_mmap_entry_remove(&entry->rdma_entry);
-exit:
-	return 0;
-}
-
-DECLARE_UVERBS_NAMED_METHOD(BNXT_RE_METHOD_ALLOC_PAGE,
-			    UVERBS_ATTR_IDR(BNXT_RE_ALLOC_PAGE_HANDLE,
-					    BNXT_RE_OBJECT_ALLOC_PAGE,
-					    UVERBS_ACCESS_NEW,
-					    UA_MANDATORY),
-			    UVERBS_ATTR_CONST_IN(BNXT_RE_ALLOC_PAGE_TYPE,
-						 enum bnxt_re_alloc_page_type,
-						 UA_MANDATORY),
-			    UVERBS_ATTR_PTR_OUT(BNXT_RE_ALLOC_PAGE_MMAP_OFFSET,
-						UVERBS_ATTR_TYPE(u64),
-						UA_MANDATORY),
-			    UVERBS_ATTR_PTR_OUT(BNXT_RE_ALLOC_PAGE_MMAP_LENGTH,
-						UVERBS_ATTR_TYPE(u32),
-						UA_MANDATORY),
-			    UVERBS_ATTR_PTR_OUT(BNXT_RE_ALLOC_PAGE_DPI,
-						UVERBS_ATTR_TYPE(u32),
-						UA_MANDATORY));
-
-DECLARE_UVERBS_NAMED_METHOD_DESTROY(BNXT_RE_METHOD_DESTROY_PAGE,
-				    UVERBS_ATTR_IDR(BNXT_RE_DESTROY_PAGE_HANDLE,
-						    BNXT_RE_OBJECT_ALLOC_PAGE,
-						    UVERBS_ACCESS_DESTROY,
-						    UA_MANDATORY));
-
-DECLARE_UVERBS_NAMED_OBJECT(BNXT_RE_OBJECT_ALLOC_PAGE,
-			    UVERBS_TYPE_ALLOC_IDR(alloc_page_obj_cleanup),
-			    &UVERBS_METHOD(BNXT_RE_METHOD_ALLOC_PAGE),
-			    &UVERBS_METHOD(BNXT_RE_METHOD_DESTROY_PAGE));
-
-DECLARE_UVERBS_NAMED_METHOD(BNXT_RE_METHOD_NOTIFY_DRV);
-
-DECLARE_UVERBS_GLOBAL_METHODS(BNXT_RE_OBJECT_NOTIFY_DRV,
-			      &UVERBS_METHOD(BNXT_RE_METHOD_NOTIFY_DRV));
-
-/* Toggle MEM */
-static int UVERBS_HANDLER(BNXT_RE_METHOD_GET_TOGGLE_MEM)(struct uverbs_attr_bundle *attrs)
-{
-	struct ib_uobject *uobj = uverbs_attr_get_uobject(attrs, BNXT_RE_TOGGLE_MEM_HANDLE);
-	enum bnxt_re_mmap_flag mmap_flag = BNXT_RE_MMAP_TOGGLE_PAGE;
-	enum bnxt_re_get_toggle_mem_type res_type;
-	struct bnxt_re_user_mmap_entry *entry;
-	struct bnxt_re_ucontext *uctx;
-	struct ib_ucontext *ib_uctx;
-	struct bnxt_re_dev *rdev;
-	struct bnxt_re_srq *srq;
-	u32 length = PAGE_SIZE;
-	struct bnxt_re_cq *cq;
-	u64 mem_offset;
-	u32 offset = 0;
-	u64 addr = 0;
-	u32 res_id;
-	int err;
-
-	ib_uctx = ib_uverbs_get_ucontext(attrs);
-	if (IS_ERR(ib_uctx))
-		return PTR_ERR(ib_uctx);
-
-	err = uverbs_get_const(&res_type, attrs, BNXT_RE_TOGGLE_MEM_TYPE);
-	if (err)
-		return err;
-
-	uctx = container_of(ib_uctx, struct bnxt_re_ucontext, ib_uctx);
-	rdev = uctx->rdev;
-	err = uverbs_copy_from(&res_id, attrs, BNXT_RE_TOGGLE_MEM_RES_ID);
-	if (err)
-		return err;
-
-	switch (res_type) {
-	case BNXT_RE_CQ_TOGGLE_MEM:
-		cq = bnxt_re_search_for_cq(rdev, res_id);
-		if (!cq)
-			return -EINVAL;
-
-		addr = (u64)cq->uctx_cq_page;
-		break;
-	case BNXT_RE_SRQ_TOGGLE_MEM:
-		srq = bnxt_re_search_for_srq(rdev, res_id);
-		if (!srq)
-			return -EINVAL;
-
-		addr = (u64)srq->uctx_srq_page;
-		break;
-
-	default:
-		return -EOPNOTSUPP;
-	}
-
-	entry = bnxt_re_mmap_entry_insert(uctx, addr, mmap_flag, &mem_offset);
-	if (!entry)
-		return -ENOMEM;
-
-	uobj->object = entry;
-	uverbs_finalize_uobj_create(attrs, BNXT_RE_TOGGLE_MEM_HANDLE);
-	err = uverbs_copy_to(attrs, BNXT_RE_TOGGLE_MEM_MMAP_PAGE,
-			     &mem_offset, sizeof(mem_offset));
-	if (err)
-		return err;
-
-	err = uverbs_copy_to(attrs, BNXT_RE_TOGGLE_MEM_MMAP_LENGTH,
-			     &length, sizeof(length));
-	if (err)
-		return err;
-
-	err = uverbs_copy_to(attrs, BNXT_RE_TOGGLE_MEM_MMAP_OFFSET,
-			     &offset, sizeof(offset));
-	if (err)
-		return err;
-
-	return 0;
-}
-
-static int get_toggle_mem_obj_cleanup(struct ib_uobject *uobject,
-				      enum rdma_remove_reason why,
-				      struct uverbs_attr_bundle *attrs)
-{
-	struct  bnxt_re_user_mmap_entry *entry = uobject->object;
-
-	rdma_user_mmap_entry_remove(&entry->rdma_entry);
-	return 0;
-}
-
-DECLARE_UVERBS_NAMED_METHOD(BNXT_RE_METHOD_GET_TOGGLE_MEM,
-			    UVERBS_ATTR_IDR(BNXT_RE_TOGGLE_MEM_HANDLE,
-					    BNXT_RE_OBJECT_GET_TOGGLE_MEM,
-					    UVERBS_ACCESS_NEW,
-					    UA_MANDATORY),
-			    UVERBS_ATTR_CONST_IN(BNXT_RE_TOGGLE_MEM_TYPE,
-						 enum bnxt_re_get_toggle_mem_type,
-						 UA_MANDATORY),
-			    UVERBS_ATTR_PTR_IN(BNXT_RE_TOGGLE_MEM_RES_ID,
-					       UVERBS_ATTR_TYPE(u32),
-					       UA_MANDATORY),
-			    UVERBS_ATTR_PTR_OUT(BNXT_RE_TOGGLE_MEM_MMAP_PAGE,
-						UVERBS_ATTR_TYPE(u64),
-						UA_MANDATORY),
-			    UVERBS_ATTR_PTR_OUT(BNXT_RE_TOGGLE_MEM_MMAP_OFFSET,
-						UVERBS_ATTR_TYPE(u32),
-						UA_MANDATORY),
-			    UVERBS_ATTR_PTR_OUT(BNXT_RE_TOGGLE_MEM_MMAP_LENGTH,
-						UVERBS_ATTR_TYPE(u32),
-						UA_MANDATORY));
-
-DECLARE_UVERBS_NAMED_METHOD_DESTROY(BNXT_RE_METHOD_RELEASE_TOGGLE_MEM,
-				    UVERBS_ATTR_IDR(BNXT_RE_RELEASE_TOGGLE_MEM_HANDLE,
-						    BNXT_RE_OBJECT_GET_TOGGLE_MEM,
-						    UVERBS_ACCESS_DESTROY,
-						    UA_MANDATORY));
-
-DECLARE_UVERBS_NAMED_OBJECT(BNXT_RE_OBJECT_GET_TOGGLE_MEM,
-			    UVERBS_TYPE_ALLOC_IDR(get_toggle_mem_obj_cleanup),
-			    &UVERBS_METHOD(BNXT_RE_METHOD_GET_TOGGLE_MEM),
-			    &UVERBS_METHOD(BNXT_RE_METHOD_RELEASE_TOGGLE_MEM));
-
-const struct uapi_definition bnxt_re_uapi_defs[] = {
-	UAPI_DEF_CHAIN_OBJ_TREE_NAMED(BNXT_RE_OBJECT_ALLOC_PAGE),
-	UAPI_DEF_CHAIN_OBJ_TREE_NAMED(BNXT_RE_OBJECT_NOTIFY_DRV),
-	UAPI_DEF_CHAIN_OBJ_TREE_NAMED(BNXT_RE_OBJECT_GET_TOGGLE_MEM),
-	{}
-};
diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.h b/drivers/infiniband/hw/bnxt_re/ib_verbs.h
index 76ba9ab04d5c..a11f56730a31 100644
--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.h
+++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.h
@@ -293,4 +293,7 @@ static inline u32 __to_ib_port_num(u16 port_id)
 
 unsigned long bnxt_re_lock_cqs(struct bnxt_re_qp *qp);
 void bnxt_re_unlock_cqs(struct bnxt_re_qp *qp, unsigned long flags);
+struct bnxt_re_user_mmap_entry*
+bnxt_re_mmap_entry_insert(struct bnxt_re_ucontext *uctx, u64 mem_offset,
+			  enum bnxt_re_mmap_flag mmap_flag, u64 *offset);
 #endif /* __BNXT_RE_IB_VERBS_H__ */
-- 
2.51.2.636.ga99f379adf


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH rdma-next v2 2/4] RDMA/bnxt_re: Refactor bnxt_qplib_create_qp() function
  2025-11-04  7:23 [PATCH rdma-next v2 0/4] RDMA/bnxt_re: Support direct verbs Sriharsha Basavapatna
  2025-11-04  7:23 ` [PATCH rdma-next v2 1/4] RDMA/bnxt_re: Move the UAPI methods to a dedicated file Sriharsha Basavapatna
@ 2025-11-04  7:23 ` Sriharsha Basavapatna
  2025-11-09  9:21   ` Leon Romanovsky
  2025-11-04  7:23 ` [PATCH rdma-next v2 3/4] RDMA/bnxt_re: Direct Verbs: Support DBR and UMEM verbs Sriharsha Basavapatna
  2025-11-04  7:23 ` [PATCH rdma-next v2 4/4] RDMA/bnxt_re: Direct Verbs: Support CQ and QP verbs Sriharsha Basavapatna
  3 siblings, 1 reply; 12+ messages in thread
From: Sriharsha Basavapatna @ 2025-11-04  7:23 UTC (permalink / raw)
  To: leon, jgg
  Cc: linux-rdma, andrew.gospodarek, selvin.xavier,
	kalesh-anakkur.purayil, Sriharsha Basavapatna

From: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>

Inside bnxt_qplib_create_qp(), the driver currently does a lot of
things, such as allocating HWQ memory for SQ/RQ/ORRQ/IRRQ and
initializing several qplib_qp fields.

Refactor the code so that all HWQ memory allocation is moved to the
bnxt_re_init_qp_attr() function, while bnxt_qplib_create_qp() just
initializes the request structure and issues the HWRM command to
firmware.

Introduce a couple of new functions, bnxt_re_setup_qp_hwqs() and
bnxt_re_setup_qp_swqs(), and move the HWQ and SWQ memory allocation
logic there.

This patch also changes how the PD is tracked: instead of keeping a
pointer to "struct bnxt_qplib_pd", store the PD id directly in
"struct bnxt_qplib_qp". This is needed for a subsequent patch in this
series, where the PD id is used by the new DV implementation of
create_qp(). There is no functional change.

Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Selvin Thyparampil Xavier <selvin.xavier@broadcom.com>
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
---
 drivers/infiniband/hw/bnxt_re/ib_verbs.c  | 207 ++++++++++++--
 drivers/infiniband/hw/bnxt_re/qplib_fp.c  | 311 +++++++---------------
 drivers/infiniband/hw/bnxt_re/qplib_fp.h  |  10 +-
 drivers/infiniband/hw/bnxt_re/qplib_res.h |   6 +
 4 files changed, 304 insertions(+), 230 deletions(-)

diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
index 034f5744127f..272934c33c6b 100644
--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
@@ -971,6 +971,12 @@ static void bnxt_re_del_unique_gid(struct bnxt_re_dev *rdev)
 		dev_err(rdev_to_dev(rdev), "Failed to delete unique GID, rc: %d\n", rc);
 }
 
+static void bnxt_re_qp_free_umem(struct bnxt_re_qp *qp)
+{
+	ib_umem_release(qp->rumem);
+	ib_umem_release(qp->sumem);
+}
+
 /* Queue Pairs */
 int bnxt_re_destroy_qp(struct ib_qp *ib_qp, struct ib_udata *udata)
 {
@@ -1013,8 +1019,7 @@ int bnxt_re_destroy_qp(struct ib_qp *ib_qp, struct ib_udata *udata)
 	if (qp->qplib_qp.type == CMDQ_CREATE_QP_TYPE_RAW_ETHERTYPE)
 		bnxt_re_del_unique_gid(rdev);
 
-	ib_umem_release(qp->rumem);
-	ib_umem_release(qp->sumem);
+	bnxt_re_qp_free_umem(qp);
 
 	/* Flush all the entries of notification queue associated with
 	 * given qp.
@@ -1158,6 +1163,7 @@ static int bnxt_re_init_user_qp(struct bnxt_re_dev *rdev, struct bnxt_re_pd *pd,
 	}
 
 	qplib_qp->dpi = &cntx->dpi;
+	qplib_qp->is_user = true;
 	return 0;
 rqfail:
 	ib_umem_release(qp->sumem);
@@ -1215,6 +1221,114 @@ static struct bnxt_re_ah *bnxt_re_create_shadow_qp_ah
 	return NULL;
 }
 
+static int bnxt_re_qp_alloc_init_xrrq(struct bnxt_re_qp *qp)
+{
+	struct bnxt_qplib_res *res = &qp->rdev->qplib_res;
+	struct bnxt_qplib_qp *qplib_qp = &qp->qplib_qp;
+	struct bnxt_qplib_hwq_attr hwq_attr = {};
+	struct bnxt_qplib_sg_info sginfo = {};
+	struct bnxt_qplib_hwq *irrq, *orrq;
+	int rc, req_size;
+
+	orrq = &qplib_qp->orrq;
+	orrq->max_elements =
+		ORD_LIMIT_TO_ORRQ_SLOTS(qplib_qp->max_rd_atomic);
+	req_size = orrq->max_elements *
+		BNXT_QPLIB_MAX_ORRQE_ENTRY_SIZE + PAGE_SIZE - 1;
+	req_size &= ~(PAGE_SIZE - 1);
+	sginfo.pgsize = req_size;
+	sginfo.pgshft = PAGE_SHIFT;
+
+	hwq_attr.res = res;
+	hwq_attr.sginfo = &sginfo;
+	hwq_attr.depth = orrq->max_elements;
+	hwq_attr.stride = BNXT_QPLIB_MAX_ORRQE_ENTRY_SIZE;
+	hwq_attr.aux_stride = 0;
+	hwq_attr.aux_depth = 0;
+	hwq_attr.type = HWQ_TYPE_CTX;
+	rc = bnxt_qplib_alloc_init_hwq(orrq, &hwq_attr);
+	if (rc)
+		return rc;
+
+	irrq = &qplib_qp->irrq;
+	irrq->max_elements =
+		IRD_LIMIT_TO_IRRQ_SLOTS(qplib_qp->max_dest_rd_atomic);
+	req_size = irrq->max_elements *
+		BNXT_QPLIB_MAX_IRRQE_ENTRY_SIZE + PAGE_SIZE - 1;
+	req_size &= ~(PAGE_SIZE - 1);
+	sginfo.pgsize = req_size;
+	hwq_attr.sginfo = &sginfo;
+	hwq_attr.depth =  irrq->max_elements;
+	hwq_attr.stride = BNXT_QPLIB_MAX_IRRQE_ENTRY_SIZE;
+	rc = bnxt_qplib_alloc_init_hwq(irrq, &hwq_attr);
+	if (rc)
+		goto free_orrq_hwq;
+	return 0;
+free_orrq_hwq:
+	bnxt_qplib_free_hwq(res, orrq);
+	return rc;
+}
+
+static int bnxt_re_setup_qp_hwqs(struct bnxt_re_qp *qp)
+{
+	struct bnxt_qplib_res *res = &qp->rdev->qplib_res;
+	struct bnxt_qplib_qp *qplib_qp = &qp->qplib_qp;
+	struct bnxt_qplib_hwq_attr hwq_attr = {};
+	struct bnxt_qplib_q *sq = &qplib_qp->sq;
+	struct bnxt_qplib_q *rq = &qplib_qp->rq;
+	u8 wqe_mode = qplib_qp->wqe_mode;
+	u8 pg_sz_lvl;
+	int rc;
+
+	hwq_attr.res = res;
+	hwq_attr.sginfo = &sq->sg_info;
+	hwq_attr.stride = bnxt_qplib_get_stride();
+	hwq_attr.depth = bnxt_qplib_get_depth(sq, wqe_mode, true);
+	hwq_attr.aux_stride = qplib_qp->psn_sz;
+	hwq_attr.aux_depth = (qplib_qp->psn_sz) ?
+		bnxt_qplib_set_sq_size(sq, wqe_mode) : 0;
+	if (qplib_qp->is_host_msn_tbl && qplib_qp->psn_sz)
+		hwq_attr.aux_depth = qplib_qp->msn_tbl_sz;
+	hwq_attr.type = HWQ_TYPE_QUEUE;
+	rc = bnxt_qplib_alloc_init_hwq(&sq->hwq, &hwq_attr);
+	if (rc)
+		return rc;
+
+	pg_sz_lvl = bnxt_qplib_base_pg_size(&sq->hwq) << CMDQ_CREATE_QP_SQ_PG_SIZE_SFT;
+	pg_sz_lvl |= ((sq->hwq.level & CMDQ_CREATE_QP_SQ_LVL_MASK) <<
+		      CMDQ_CREATE_QP_SQ_LVL_SFT);
+	sq->hwq.pg_sz_lvl = pg_sz_lvl;
+
+	hwq_attr.res = res;
+	hwq_attr.sginfo = &rq->sg_info;
+	hwq_attr.stride = bnxt_qplib_get_stride();
+	hwq_attr.depth = bnxt_qplib_get_depth(rq, qplib_qp->wqe_mode, false);
+	hwq_attr.aux_stride = 0;
+	hwq_attr.aux_depth = 0;
+	hwq_attr.type = HWQ_TYPE_QUEUE;
+	rc = bnxt_qplib_alloc_init_hwq(&rq->hwq, &hwq_attr);
+	if (rc)
+		goto free_sq_hwq;
+	pg_sz_lvl = bnxt_qplib_base_pg_size(&rq->hwq) <<
+		CMDQ_CREATE_QP_RQ_PG_SIZE_SFT;
+	pg_sz_lvl |= ((rq->hwq.level & CMDQ_CREATE_QP_RQ_LVL_MASK) <<
+		      CMDQ_CREATE_QP_RQ_LVL_SFT);
+	rq->hwq.pg_sz_lvl = pg_sz_lvl;
+
+	if (qplib_qp->psn_sz) {
+		rc = bnxt_re_qp_alloc_init_xrrq(qp);
+		if (rc)
+			goto free_rq_hwq;
+	}
+
+	return 0;
+free_rq_hwq:
+	bnxt_qplib_free_hwq(res, &rq->hwq);
+free_sq_hwq:
+	bnxt_qplib_free_hwq(res, &sq->hwq);
+	return rc;
+}
+
 static struct bnxt_re_qp *bnxt_re_create_shadow_qp
 				(struct bnxt_re_pd *pd,
 				 struct bnxt_qplib_res *qp1_res,
@@ -1233,9 +1347,10 @@ static struct bnxt_re_qp *bnxt_re_create_shadow_qp
 	/* Initialize the shadow QP structure from the QP1 values */
 	ether_addr_copy(qp->qplib_qp.smac, rdev->netdev->dev_addr);
 
-	qp->qplib_qp.pd = &pd->qplib_pd;
+	qp->qplib_qp.pd_id = pd->qplib_pd.id;
 	qp->qplib_qp.qp_handle = (u64)(unsigned long)(&qp->qplib_qp);
 	qp->qplib_qp.type = IB_QPT_UD;
+	qp->qplib_qp.cctx = rdev->chip_ctx;
 
 	qp->qplib_qp.max_inline_data = 0;
 	qp->qplib_qp.sig_type = true;
@@ -1268,10 +1383,14 @@ static struct bnxt_re_qp *bnxt_re_create_shadow_qp
 	qp->qplib_qp.rq_hdr_buf_size = BNXT_QPLIB_MAX_GRH_HDR_SIZE_IPV6;
 	qp->qplib_qp.dpi = &rdev->dpi_privileged;
 
-	rc = bnxt_qplib_create_qp(qp1_res, &qp->qplib_qp);
+	rc = bnxt_re_setup_qp_hwqs(qp);
 	if (rc)
 		goto fail;
 
+	rc = bnxt_qplib_create_qp(qp1_res, &qp->qplib_qp);
+	if (rc)
+		goto free_hwq;
+
 	spin_lock_init(&qp->sq_lock);
 	INIT_LIST_HEAD(&qp->list);
 	mutex_lock(&rdev->qp_lock);
@@ -1279,6 +1398,9 @@ static struct bnxt_re_qp *bnxt_re_create_shadow_qp
 	atomic_inc(&rdev->stats.res.qp_count);
 	mutex_unlock(&rdev->qp_lock);
 	return qp;
+
+free_hwq:
+	bnxt_qplib_free_qp_res(&rdev->qplib_res, &qp->qplib_qp);
 fail:
 	kfree(qp);
 	return NULL;
@@ -1449,6 +1571,39 @@ static int bnxt_re_init_qp_type(struct bnxt_re_dev *rdev,
 	return qptype;
 }
 
+static void bnxt_re_qp_calculate_msn_psn_size(struct bnxt_re_qp *qp)
+{
+	struct bnxt_qplib_qp *qplib_qp = &qp->qplib_qp;
+	struct bnxt_qplib_q *sq = &qplib_qp->sq;
+	struct bnxt_re_dev *rdev = qp->rdev;
+	u8 wqe_mode = qplib_qp->wqe_mode;
+
+	if (rdev->dev_attr)
+		qplib_qp->is_host_msn_tbl =
+			_is_host_msn_table(rdev->dev_attr->dev_cap_flags2);
+
+	if (qplib_qp->type == CMDQ_CREATE_QP_TYPE_RC) {
+		qplib_qp->psn_sz = bnxt_qplib_is_chip_gen_p5_p7(rdev->chip_ctx) ?
+			sizeof(struct sq_psn_search_ext) :
+			sizeof(struct sq_psn_search);
+		if (qplib_qp->is_host_msn_tbl) {
+			qplib_qp->psn_sz = sizeof(struct sq_msn_search);
+			qplib_qp->msn = 0;
+		}
+	}
+
+	/* Update msn tbl size */
+	if (qplib_qp->is_host_msn_tbl && qplib_qp->psn_sz) {
+		if (wqe_mode == BNXT_QPLIB_WQE_MODE_STATIC)
+			qplib_qp->msn_tbl_sz =
+				roundup_pow_of_two(bnxt_qplib_set_sq_size(sq, wqe_mode));
+		else
+			qplib_qp->msn_tbl_sz =
+				roundup_pow_of_two(bnxt_qplib_set_sq_size(sq, wqe_mode)) / 2;
+		qplib_qp->msn = 0;
+	}
+}
+
 static int bnxt_re_init_qp_attr(struct bnxt_re_qp *qp, struct bnxt_re_pd *pd,
 				struct ib_qp_init_attr *init_attr,
 				struct bnxt_re_ucontext *uctx,
@@ -1466,17 +1621,17 @@ static int bnxt_re_init_qp_attr(struct bnxt_re_qp *qp, struct bnxt_re_pd *pd,
 
 	/* Setup misc params */
 	ether_addr_copy(qplqp->smac, rdev->netdev->dev_addr);
-	qplqp->pd = &pd->qplib_pd;
+	qplqp->pd_id = pd->qplib_pd.id;
 	qplqp->qp_handle = (u64)qplqp;
 	qplqp->max_inline_data = init_attr->cap.max_inline_data;
 	qplqp->sig_type = init_attr->sq_sig_type == IB_SIGNAL_ALL_WR;
 	qptype = bnxt_re_init_qp_type(rdev, init_attr);
-	if (qptype < 0) {
-		rc = qptype;
-		goto out;
-	}
+	if (qptype < 0)
+		return qptype;
 	qplqp->type = (u8)qptype;
 	qplqp->wqe_mode = bnxt_re_is_var_size_supported(rdev, uctx);
+	qplqp->dev_cap_flags = dev_attr->dev_cap_flags;
+	qplqp->cctx = rdev->chip_ctx;
 	if (init_attr->qp_type == IB_QPT_RC) {
 		qplqp->max_rd_atomic = dev_attr->max_qp_rd_atom;
 		qplqp->max_dest_rd_atomic = dev_attr->max_qp_init_rd_atom;
@@ -1506,20 +1661,33 @@ static int bnxt_re_init_qp_attr(struct bnxt_re_qp *qp, struct bnxt_re_pd *pd,
 	/* Setup RQ/SRQ */
 	rc = bnxt_re_init_rq_attr(qp, init_attr, uctx);
 	if (rc)
-		goto out;
+		return rc;
 	if (init_attr->qp_type == IB_QPT_GSI)
 		bnxt_re_adjust_gsi_rq_attr(qp);
 
 	/* Setup SQ */
 	rc = bnxt_re_init_sq_attr(qp, init_attr, uctx, ureq);
 	if (rc)
-		goto out;
+		return rc;
 	if (init_attr->qp_type == IB_QPT_GSI)
 		bnxt_re_adjust_gsi_sq_attr(qp, init_attr, uctx);
 
-	if (uctx) /* This will update DPI and qp_handle */
+	if (uctx) { /* This will update DPI and qp_handle */
 		rc = bnxt_re_init_user_qp(rdev, pd, qp, uctx, ureq);
-out:
+		if (rc)
+			return rc;
+	}
+
+	bnxt_re_qp_calculate_msn_psn_size(qp);
+
+	rc = bnxt_re_setup_qp_hwqs(qp);
+	if (rc)
+		goto free_umem;
+
+	return 0;
+free_umem:
+	if (uctx)
+		bnxt_re_qp_free_umem(qp);
 	return rc;
 }
 
@@ -1577,6 +1745,7 @@ static int bnxt_re_create_gsi_qp(struct bnxt_re_qp *qp, struct bnxt_re_pd *pd,
 
 	rdev = qp->rdev;
 	qplqp = &qp->qplib_qp;
+	qplqp->cctx = rdev->chip_ctx;
 
 	qplqp->rq_hdr_buf_size = BNXT_QPLIB_MAX_QP1_RQ_HDR_SIZE_V2;
 	qplqp->sq_hdr_buf_size = BNXT_QPLIB_MAX_QP1_SQ_HDR_SIZE_V2;
@@ -1680,13 +1849,14 @@ int bnxt_re_create_qp(struct ib_qp *ib_qp, struct ib_qp_init_attr *qp_init_attr,
 		if (rc == -ENODEV)
 			goto qp_destroy;
 		if (rc)
-			goto fail;
+			goto free_hwq;
 	} else {
 		rc = bnxt_qplib_create_qp(&rdev->qplib_res, &qp->qplib_qp);
 		if (rc) {
 			ibdev_err(&rdev->ibdev, "Failed to create HW QP");
-			goto free_umem;
+			goto free_hwq;
 		}
+
 		if (udata) {
 			struct bnxt_re_qp_resp resp;
 
@@ -1737,9 +1907,10 @@ int bnxt_re_create_qp(struct ib_qp *ib_qp, struct ib_qp_init_attr *qp_init_attr,
 	return 0;
 qp_destroy:
 	bnxt_qplib_destroy_qp(&rdev->qplib_res, &qp->qplib_qp);
-free_umem:
-	ib_umem_release(qp->rumem);
-	ib_umem_release(qp->sumem);
+free_hwq:
+	bnxt_qplib_free_qp_res(&rdev->qplib_res, &qp->qplib_qp);
+	if (udata)
+		bnxt_re_qp_free_umem(qp);
 fail:
 	return rc;
 }
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
index ce90d3d834d4..23132c5d7f89 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
@@ -793,8 +793,6 @@ int bnxt_qplib_post_srq_recv(struct bnxt_qplib_srq *srq,
 	return 0;
 }
 
-/* QP */
-
 static int bnxt_qplib_alloc_init_swq(struct bnxt_qplib_q *que)
 {
 	int indx;
@@ -813,9 +811,72 @@ static int bnxt_qplib_alloc_init_swq(struct bnxt_qplib_q *que)
 	return 0;
 }
 
+static int bnxt_re_setup_qp_swqs(struct bnxt_qplib_qp *qplqp)
+{
+	struct bnxt_qplib_q *sq = &qplqp->sq;
+	struct bnxt_qplib_q *rq = &qplqp->rq;
+	int rc;
+
+	if (qplqp->is_user)
+		return 0;
+
+	rc = bnxt_qplib_alloc_init_swq(sq);
+	if (rc)
+		return rc;
+
+	if (!qplqp->srq) {
+		rc = bnxt_qplib_alloc_init_swq(rq);
+		if (rc)
+			goto free_sq_swq;
+	}
+
+	return 0;
+free_sq_swq:
+	kfree(sq->swq);
+	sq->swq = NULL;
+	return rc;
+}
+
+static void bnxt_qp_init_dbinfo(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
+{
+	struct bnxt_qplib_q *sq = &qp->sq;
+	struct bnxt_qplib_q *rq = &qp->rq;
+
+	sq->dbinfo.hwq = &sq->hwq;
+	sq->dbinfo.xid = qp->id;
+	sq->dbinfo.db = qp->dpi->dbr;
+	sq->dbinfo.max_slot = bnxt_qplib_set_sq_max_slot(qp->wqe_mode);
+	sq->dbinfo.flags = 0;
+	if (rq->max_wqe) {
+		rq->dbinfo.hwq = &rq->hwq;
+		rq->dbinfo.xid = qp->id;
+		rq->dbinfo.db = qp->dpi->dbr;
+		rq->dbinfo.max_slot = bnxt_qplib_set_rq_max_slot(rq->wqe_size);
+		rq->dbinfo.flags = 0;
+	}
+}
+
+static void bnxt_qplib_init_psn_ptr(struct bnxt_qplib_qp *qp, int size)
+{
+	struct bnxt_qplib_hwq *sq_hwq;
+	struct bnxt_qplib_q *sq;
+	u64 fpsne, psn_pg;
+	u16 indx_pad = 0;
+
+	sq = &qp->sq;
+	sq_hwq = &sq->hwq;
+	/* First psn entry */
+	fpsne = (u64)bnxt_qplib_get_qe(sq_hwq, sq_hwq->depth, &psn_pg);
+	if (!IS_ALIGNED(fpsne, PAGE_SIZE))
+		indx_pad = (fpsne & ~PAGE_MASK) / size;
+	sq_hwq->pad_pgofft = indx_pad;
+	sq_hwq->pad_pg = (u64 *)psn_pg;
+	sq_hwq->pad_stride = size;
+}
+
+/* QP */
 int bnxt_qplib_create_qp1(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
 {
-	struct bnxt_qplib_hwq_attr hwq_attr = {};
 	struct bnxt_qplib_rcfw *rcfw = res->rcfw;
 	struct creq_create_qp1_resp resp = {};
 	struct bnxt_qplib_cmdqmsg msg = {};
@@ -824,7 +885,6 @@ int bnxt_qplib_create_qp1(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
 	struct cmdq_create_qp1 req = {};
 	struct bnxt_qplib_pbl *pbl;
 	u32 qp_flags = 0;
-	u8 pg_sz_lvl;
 	u32 tbl_indx;
 	int rc;
 
@@ -838,26 +898,12 @@ int bnxt_qplib_create_qp1(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
 	req.qp_handle = cpu_to_le64(qp->qp_handle);
 
 	/* SQ */
-	hwq_attr.res = res;
-	hwq_attr.sginfo = &sq->sg_info;
-	hwq_attr.stride = sizeof(struct sq_sge);
-	hwq_attr.depth = bnxt_qplib_get_depth(sq, qp->wqe_mode, false);
-	hwq_attr.type = HWQ_TYPE_QUEUE;
-	rc = bnxt_qplib_alloc_init_hwq(&sq->hwq, &hwq_attr);
-	if (rc)
-		return rc;
-
-	rc = bnxt_qplib_alloc_init_swq(sq);
-	if (rc)
-		goto fail_sq;
+	sq->max_sw_wqe = bnxt_qplib_get_depth(sq, qp->wqe_mode, true);
+	req.sq_size = cpu_to_le32(sq->max_sw_wqe);
+	req.sq_pg_size_sq_lvl = sq->hwq.pg_sz_lvl;
 
-	req.sq_size = cpu_to_le32(bnxt_qplib_set_sq_size(sq, qp->wqe_mode));
 	pbl = &sq->hwq.pbl[PBL_LVL_0];
 	req.sq_pbl = cpu_to_le64(pbl->pg_map_arr[0]);
-	pg_sz_lvl = (bnxt_qplib_base_pg_size(&sq->hwq) <<
-		     CMDQ_CREATE_QP1_SQ_PG_SIZE_SFT);
-	pg_sz_lvl |= (sq->hwq.level & CMDQ_CREATE_QP1_SQ_LVL_MASK);
-	req.sq_pg_size_sq_lvl = pg_sz_lvl;
 	req.sq_fwo_sq_sge =
 		cpu_to_le16((sq->max_sge & CMDQ_CREATE_QP1_SQ_SGE_MASK) <<
 			     CMDQ_CREATE_QP1_SQ_SGE_SFT);
@@ -866,24 +912,10 @@ int bnxt_qplib_create_qp1(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
 	/* RQ */
 	if (rq->max_wqe) {
 		rq->dbinfo.flags = 0;
-		hwq_attr.res = res;
-		hwq_attr.sginfo = &rq->sg_info;
-		hwq_attr.stride = sizeof(struct sq_sge);
-		hwq_attr.depth = bnxt_qplib_get_depth(rq, qp->wqe_mode, false);
-		hwq_attr.type = HWQ_TYPE_QUEUE;
-		rc = bnxt_qplib_alloc_init_hwq(&rq->hwq, &hwq_attr);
-		if (rc)
-			goto sq_swq;
-		rc = bnxt_qplib_alloc_init_swq(rq);
-		if (rc)
-			goto fail_rq;
 		req.rq_size = cpu_to_le32(rq->max_wqe);
 		pbl = &rq->hwq.pbl[PBL_LVL_0];
 		req.rq_pbl = cpu_to_le64(pbl->pg_map_arr[0]);
-		pg_sz_lvl = (bnxt_qplib_base_pg_size(&rq->hwq) <<
-			     CMDQ_CREATE_QP1_RQ_PG_SIZE_SFT);
-		pg_sz_lvl |= (rq->hwq.level & CMDQ_CREATE_QP1_RQ_LVL_MASK);
-		req.rq_pg_size_rq_lvl = pg_sz_lvl;
+		req.rq_pg_size_rq_lvl = rq->hwq.pg_sz_lvl;
 		req.rq_fwo_rq_sge =
 			cpu_to_le16((rq->max_sge &
 				     CMDQ_CREATE_QP1_RQ_SGE_MASK) <<
@@ -894,11 +926,11 @@ int bnxt_qplib_create_qp1(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
 	rc = bnxt_qplib_alloc_qp_hdr_buf(res, qp);
 	if (rc) {
 		rc = -ENOMEM;
-		goto rq_rwq;
+		return rc;
 	}
 	qp_flags |= CMDQ_CREATE_QP1_QP_FLAGS_RESERVED_LKEY_ENABLE;
 	req.qp_flags = cpu_to_le32(qp_flags);
-	req.pd_id = cpu_to_le32(qp->pd->id);
+	req.pd_id = cpu_to_le32(qp->pd_id);
 
 	bnxt_qplib_fill_cmdqmsg(&msg, &req, &resp, NULL, sizeof(req), sizeof(resp), 0);
 	rc = bnxt_qplib_rcfw_send_message(rcfw, &msg);
@@ -907,73 +939,39 @@ int bnxt_qplib_create_qp1(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
 
 	qp->id = le32_to_cpu(resp.xid);
 	qp->cur_qp_state = CMDQ_MODIFY_QP_NEW_STATE_RESET;
-	qp->cctx = res->cctx;
-	sq->dbinfo.hwq = &sq->hwq;
-	sq->dbinfo.xid = qp->id;
-	sq->dbinfo.db = qp->dpi->dbr;
-	sq->dbinfo.max_slot = bnxt_qplib_set_sq_max_slot(qp->wqe_mode);
-	if (rq->max_wqe) {
-		rq->dbinfo.hwq = &rq->hwq;
-		rq->dbinfo.xid = qp->id;
-		rq->dbinfo.db = qp->dpi->dbr;
-		rq->dbinfo.max_slot = bnxt_qplib_set_rq_max_slot(rq->wqe_size);
-	}
+
+	rc = bnxt_re_setup_qp_swqs(qp);
+	if (rc)
+		goto destroy_qp;
+	bnxt_qp_init_dbinfo(res, qp);
+
 	tbl_indx = map_qp_id_to_tbl_indx(qp->id, rcfw);
 	rcfw->qp_tbl[tbl_indx].qp_id = qp->id;
 	rcfw->qp_tbl[tbl_indx].qp_handle = (void *)qp;
 
 	return 0;
 
+destroy_qp:
+	bnxt_qplib_destroy_qp(res, qp);
 fail:
 	bnxt_qplib_free_qp_hdr_buf(res, qp);
-rq_rwq:
-	kfree(rq->swq);
-fail_rq:
-	bnxt_qplib_free_hwq(res, &rq->hwq);
-sq_swq:
-	kfree(sq->swq);
-fail_sq:
-	bnxt_qplib_free_hwq(res, &sq->hwq);
 	return rc;
 }
 
-static void bnxt_qplib_init_psn_ptr(struct bnxt_qplib_qp *qp, int size)
-{
-	struct bnxt_qplib_hwq *hwq;
-	struct bnxt_qplib_q *sq;
-	u64 fpsne, psn_pg;
-	u16 indx_pad = 0;
-
-	sq = &qp->sq;
-	hwq = &sq->hwq;
-	/* First psn entry */
-	fpsne = (u64)bnxt_qplib_get_qe(hwq, hwq->depth, &psn_pg);
-	if (!IS_ALIGNED(fpsne, PAGE_SIZE))
-		indx_pad = (fpsne & ~PAGE_MASK) / size;
-	hwq->pad_pgofft = indx_pad;
-	hwq->pad_pg = (u64 *)psn_pg;
-	hwq->pad_stride = size;
-}
-
 int bnxt_qplib_create_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
 {
 	struct bnxt_qplib_rcfw *rcfw = res->rcfw;
-	struct bnxt_qplib_hwq_attr hwq_attr = {};
-	struct bnxt_qplib_sg_info sginfo = {};
 	struct creq_create_qp_resp resp = {};
 	struct bnxt_qplib_cmdqmsg msg = {};
 	struct bnxt_qplib_q *sq = &qp->sq;
 	struct bnxt_qplib_q *rq = &qp->rq;
 	struct cmdq_create_qp req = {};
-	int rc, req_size, psn_sz = 0;
-	struct bnxt_qplib_hwq *xrrq;
 	struct bnxt_qplib_pbl *pbl;
 	u32 qp_flags = 0;
-	u8 pg_sz_lvl;
 	u32 tbl_indx;
 	u16 nsge;
+	int rc;
 
-	qp->is_host_msn_tbl = _is_host_msn_table(res->dattr->dev_cap_flags2);
 	sq->dbinfo.flags = 0;
 	bnxt_qplib_rcfw_cmd_prep((struct cmdq_base *)&req,
 				 CMDQ_BASE_OPCODE_CREATE_QP,
@@ -985,56 +983,10 @@ int bnxt_qplib_create_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
 	req.qp_handle = cpu_to_le64(qp->qp_handle);
 
 	/* SQ */
-	if (qp->type == CMDQ_CREATE_QP_TYPE_RC) {
-		psn_sz = bnxt_qplib_is_chip_gen_p5_p7(res->cctx) ?
-			 sizeof(struct sq_psn_search_ext) :
-			 sizeof(struct sq_psn_search);
-
-		if (qp->is_host_msn_tbl) {
-			psn_sz = sizeof(struct sq_msn_search);
-			qp->msn = 0;
-		}
-	}
-
-	hwq_attr.res = res;
-	hwq_attr.sginfo = &sq->sg_info;
-	hwq_attr.stride = sizeof(struct sq_sge);
-	hwq_attr.depth = bnxt_qplib_get_depth(sq, qp->wqe_mode, true);
-	hwq_attr.aux_stride = psn_sz;
-	hwq_attr.aux_depth = psn_sz ? bnxt_qplib_set_sq_size(sq, qp->wqe_mode)
-				    : 0;
-	/* Update msn tbl size */
-	if (qp->is_host_msn_tbl && psn_sz) {
-		if (qp->wqe_mode == BNXT_QPLIB_WQE_MODE_STATIC)
-			hwq_attr.aux_depth =
-				roundup_pow_of_two(bnxt_qplib_set_sq_size(sq, qp->wqe_mode));
-		else
-			hwq_attr.aux_depth =
-				roundup_pow_of_two(bnxt_qplib_set_sq_size(sq, qp->wqe_mode)) / 2;
-		qp->msn_tbl_sz = hwq_attr.aux_depth;
-		qp->msn = 0;
-	}
-
-	hwq_attr.type = HWQ_TYPE_QUEUE;
-	rc = bnxt_qplib_alloc_init_hwq(&sq->hwq, &hwq_attr);
-	if (rc)
-		return rc;
-
-	if (!sq->hwq.is_user) {
-		rc = bnxt_qplib_alloc_init_swq(sq);
-		if (rc)
-			goto fail_sq;
-
-		if (psn_sz)
-			bnxt_qplib_init_psn_ptr(qp, psn_sz);
-	}
-	req.sq_size = cpu_to_le32(bnxt_qplib_set_sq_size(sq, qp->wqe_mode));
+	req.sq_size = cpu_to_le32(sq->max_sw_wqe);
 	pbl = &sq->hwq.pbl[PBL_LVL_0];
 	req.sq_pbl = cpu_to_le64(pbl->pg_map_arr[0]);
-	pg_sz_lvl = (bnxt_qplib_base_pg_size(&sq->hwq) <<
-		     CMDQ_CREATE_QP_SQ_PG_SIZE_SFT);
-	pg_sz_lvl |= (sq->hwq.level & CMDQ_CREATE_QP_SQ_LVL_MASK);
-	req.sq_pg_size_sq_lvl = pg_sz_lvl;
+	req.sq_pg_size_sq_lvl = sq->hwq.pg_sz_lvl;
 	req.sq_fwo_sq_sge =
 		cpu_to_le16(((sq->max_sge & CMDQ_CREATE_QP_SQ_SGE_MASK) <<
 			     CMDQ_CREATE_QP_SQ_SGE_SFT) | 0);
@@ -1043,29 +995,10 @@ int bnxt_qplib_create_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
 	/* RQ */
 	if (!qp->srq) {
 		rq->dbinfo.flags = 0;
-		hwq_attr.res = res;
-		hwq_attr.sginfo = &rq->sg_info;
-		hwq_attr.stride = sizeof(struct sq_sge);
-		hwq_attr.depth = bnxt_qplib_get_depth(rq, qp->wqe_mode, false);
-		hwq_attr.aux_stride = 0;
-		hwq_attr.aux_depth = 0;
-		hwq_attr.type = HWQ_TYPE_QUEUE;
-		rc = bnxt_qplib_alloc_init_hwq(&rq->hwq, &hwq_attr);
-		if (rc)
-			goto sq_swq;
-		if (!rq->hwq.is_user) {
-			rc = bnxt_qplib_alloc_init_swq(rq);
-			if (rc)
-				goto fail_rq;
-		}
-
 		req.rq_size = cpu_to_le32(rq->max_wqe);
 		pbl = &rq->hwq.pbl[PBL_LVL_0];
 		req.rq_pbl = cpu_to_le64(pbl->pg_map_arr[0]);
-		pg_sz_lvl = (bnxt_qplib_base_pg_size(&rq->hwq) <<
-			     CMDQ_CREATE_QP_RQ_PG_SIZE_SFT);
-		pg_sz_lvl |= (rq->hwq.level & CMDQ_CREATE_QP_RQ_LVL_MASK);
-		req.rq_pg_size_rq_lvl = pg_sz_lvl;
+		req.rq_pg_size_rq_lvl = rq->hwq.pg_sz_lvl;
 		nsge = (qp->wqe_mode == BNXT_QPLIB_WQE_MODE_STATIC) ?
 			6 : rq->max_sge;
 		req.rq_fwo_rq_sge =
@@ -1091,68 +1024,34 @@ int bnxt_qplib_create_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
 	req.qp_flags = cpu_to_le32(qp_flags);
 
 	/* ORRQ and IRRQ */
-	if (psn_sz) {
-		xrrq = &qp->orrq;
-		xrrq->max_elements =
-			ORD_LIMIT_TO_ORRQ_SLOTS(qp->max_rd_atomic);
-		req_size = xrrq->max_elements *
-			   BNXT_QPLIB_MAX_ORRQE_ENTRY_SIZE + PAGE_SIZE - 1;
-		req_size &= ~(PAGE_SIZE - 1);
-		sginfo.pgsize = req_size;
-		sginfo.pgshft = PAGE_SHIFT;
-
-		hwq_attr.res = res;
-		hwq_attr.sginfo = &sginfo;
-		hwq_attr.depth = xrrq->max_elements;
-		hwq_attr.stride = BNXT_QPLIB_MAX_ORRQE_ENTRY_SIZE;
-		hwq_attr.aux_stride = 0;
-		hwq_attr.aux_depth = 0;
-		hwq_attr.type = HWQ_TYPE_CTX;
-		rc = bnxt_qplib_alloc_init_hwq(xrrq, &hwq_attr);
-		if (rc)
-			goto rq_swq;
-		pbl = &xrrq->pbl[PBL_LVL_0];
-		req.orrq_addr = cpu_to_le64(pbl->pg_map_arr[0]);
-
-		xrrq = &qp->irrq;
-		xrrq->max_elements = IRD_LIMIT_TO_IRRQ_SLOTS(
-						qp->max_dest_rd_atomic);
-		req_size = xrrq->max_elements *
-			   BNXT_QPLIB_MAX_IRRQE_ENTRY_SIZE + PAGE_SIZE - 1;
-		req_size &= ~(PAGE_SIZE - 1);
-		sginfo.pgsize = req_size;
-		hwq_attr.depth =  xrrq->max_elements;
-		hwq_attr.stride = BNXT_QPLIB_MAX_IRRQE_ENTRY_SIZE;
-		rc = bnxt_qplib_alloc_init_hwq(xrrq, &hwq_attr);
-		if (rc)
-			goto fail_orrq;
-
-		pbl = &xrrq->pbl[PBL_LVL_0];
-		req.irrq_addr = cpu_to_le64(pbl->pg_map_arr[0]);
+	if (qp->psn_sz) {
+		req.orrq_addr = cpu_to_le64(bnxt_qplib_get_base_addr(&qp->orrq));
+		req.irrq_addr = cpu_to_le64(bnxt_qplib_get_base_addr(&qp->irrq));
 	}
-	req.pd_id = cpu_to_le32(qp->pd->id);
+
+	req.pd_id = cpu_to_le32(qp->pd_id);
 
 	bnxt_qplib_fill_cmdqmsg(&msg, &req, &resp, NULL, sizeof(req),
 				sizeof(resp), 0);
 	rc = bnxt_qplib_rcfw_send_message(rcfw, &msg);
 	if (rc)
-		goto fail;
+		return rc;
 
 	qp->id = le32_to_cpu(resp.xid);
+
+	if (!qp->is_user) {
+		rc = bnxt_re_setup_qp_swqs(qp);
+		if (rc)
+			goto destroy_qp;
+	}
+	bnxt_qp_init_dbinfo(res, qp);
+	if (qp->psn_sz)
+		bnxt_qplib_init_psn_ptr(qp, qp->psn_sz);
+
 	qp->cur_qp_state = CMDQ_MODIFY_QP_NEW_STATE_RESET;
 	INIT_LIST_HEAD(&qp->sq_flush);
 	INIT_LIST_HEAD(&qp->rq_flush);
 	qp->cctx = res->cctx;
-	sq->dbinfo.hwq = &sq->hwq;
-	sq->dbinfo.xid = qp->id;
-	sq->dbinfo.db = qp->dpi->dbr;
-	sq->dbinfo.max_slot = bnxt_qplib_set_sq_max_slot(qp->wqe_mode);
-	if (rq->max_wqe) {
-		rq->dbinfo.hwq = &rq->hwq;
-		rq->dbinfo.xid = qp->id;
-		rq->dbinfo.db = qp->dpi->dbr;
-		rq->dbinfo.max_slot = bnxt_qplib_set_rq_max_slot(rq->wqe_size);
-	}
 	spin_lock_bh(&rcfw->tbl_lock);
 	tbl_indx = map_qp_id_to_tbl_indx(qp->id, rcfw);
 	rcfw->qp_tbl[tbl_indx].qp_id = qp->id;
@@ -1160,18 +1059,8 @@ int bnxt_qplib_create_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
 	spin_unlock_bh(&rcfw->tbl_lock);
 
 	return 0;
-fail:
-	bnxt_qplib_free_hwq(res, &qp->irrq);
-fail_orrq:
-	bnxt_qplib_free_hwq(res, &qp->orrq);
-rq_swq:
-	kfree(rq->swq);
-fail_rq:
-	bnxt_qplib_free_hwq(res, &rq->hwq);
-sq_swq:
-	kfree(sq->swq);
-fail_sq:
-	bnxt_qplib_free_hwq(res, &sq->hwq);
+destroy_qp:
+	bnxt_qplib_destroy_qp(res, qp);
 	return rc;
 }
 
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.h b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
index b990d0c0ce1a..1fd4deac3eff 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.h
+++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
@@ -268,7 +268,7 @@ struct bnxt_qplib_q {
 };
 
 struct bnxt_qplib_qp {
-	struct bnxt_qplib_pd		*pd;
+	u32				pd_id;
 	struct bnxt_qplib_dpi		*dpi;
 	struct bnxt_qplib_chip_ctx	*cctx;
 	u64				qp_handle;
@@ -279,6 +279,7 @@ struct bnxt_qplib_qp {
 	u8				wqe_mode;
 	u8				state;
 	u8				cur_qp_state;
+	u8				is_user;
 	u64				modify_flags;
 	u32				max_inline_data;
 	u32				mtu;
@@ -343,9 +344,11 @@ struct bnxt_qplib_qp {
 	struct list_head		rq_flush;
 	u32				msn;
 	u32				msn_tbl_sz;
+	u32				psn_sz;
 	bool				is_host_msn_tbl;
 	u8				tos_dscp;
 	u32				ugid_index;
+	u16				dev_cap_flags;
 };
 
 #define BNXT_RE_MAX_MSG_SIZE	0x80000000
@@ -613,6 +616,11 @@ static inline void bnxt_qplib_swq_mod_start(struct bnxt_qplib_q *que, u32 idx)
 	que->swq_start = que->swq[idx].next_idx;
 }
 
+static inline u32 bnxt_qplib_get_stride(void)
+{
+	return sizeof(struct sq_sge);
+}
+
 static inline u32 bnxt_qplib_get_depth(struct bnxt_qplib_q *que, u8 wqe_mode, bool is_sq)
 {
 	u32 slots;
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_res.h b/drivers/infiniband/hw/bnxt_re/qplib_res.h
index 2ea3b7f232a3..ccdab938d707 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_res.h
+++ b/drivers/infiniband/hw/bnxt_re/qplib_res.h
@@ -198,6 +198,7 @@ struct bnxt_qplib_hwq {
 	u32				cons;		/* raw */
 	u8				cp_bit;
 	u8				is_user;
+	u8				pg_sz_lvl;
 	u64				*pad_pg;
 	u32				pad_stride;
 	u32				pad_pgofft;
@@ -358,6 +359,11 @@ static inline u8 bnxt_qplib_get_ring_type(struct bnxt_qplib_chip_ctx *cctx)
 	       RING_ALLOC_REQ_RING_TYPE_ROCE_CMPL;
 }
 
+static inline u64 bnxt_qplib_get_base_addr(struct bnxt_qplib_hwq *hwq)
+{
+	return hwq->pbl[PBL_LVL_0].pg_map_arr[0];
+}
+
 static inline u8 bnxt_qplib_base_pg_size(struct bnxt_qplib_hwq *hwq)
 {
 	u8 pg_size = BNXT_QPLIB_HWRM_PG_SIZE_4K;
-- 
2.51.2.636.ga99f379adf


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH rdma-next v2 3/4] RDMA/bnxt_re: Direct Verbs: Support DBR and UMEM verbs
  2025-11-04  7:23 [PATCH rdma-next v2 0/4] RDMA/bnxt_re: Support direct verbs Sriharsha Basavapatna
  2025-11-04  7:23 ` [PATCH rdma-next v2 1/4] RDMA/bnxt_re: Move the UAPI methods to a dedicated file Sriharsha Basavapatna
  2025-11-04  7:23 ` [PATCH rdma-next v2 2/4] RDMA/bnxt_re: Refactor bnxt_qplib_create_qp() function Sriharsha Basavapatna
@ 2025-11-04  7:23 ` Sriharsha Basavapatna
  2025-11-04  7:23 ` [PATCH rdma-next v2 4/4] RDMA/bnxt_re: Direct Verbs: Support CQ and QP verbs Sriharsha Basavapatna
  3 siblings, 0 replies; 12+ messages in thread
From: Sriharsha Basavapatna @ 2025-11-04  7:23 UTC (permalink / raw)
  To: leon, jgg
  Cc: linux-rdma, andrew.gospodarek, selvin.xavier,
	kalesh-anakkur.purayil, Sriharsha Basavapatna

From: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>

The following Direct Verb (DV) methods are implemented in this patch
(a rough usage sketch follows the descriptions below).

Doorbell Region Direct Verbs:
-----------------------------
- BNXT_RE_METHOD_DBR_ALLOC:
  This will allow the application to create extra doorbell regions,
  use the associated doorbell page index in DV_CREATE_QP, and use the
  associated DB address while ringing the doorbell.

- BNXT_RE_METHOD_DBR_FREE:
  Free the allocated doorbell region.

- BNXT_RE_METHOD_DBR_QUERY:
  Return the default doorbell page index and doorbell page address
  associated with the ucontext.

Umem Registration Direct Verbs:
-------------------------------
- BNXT_RE_METHOD_UMEM_REG:
  Register the user memory to be used by the application with
  the driver. Application can register a large chunk of memory and
  use it during subsequent resource creation DV APIs.

  Note that the driver doesn't actually map/pin the user memory
  during dv_umem_reg(). The application-specified memory parameters
  (addr, len) are saved and a corresponding umem-handle is returned.

  This memory is mapped/pinned when the application subsequently
  creates the required resources (CQ/QP) using the respective direct
  verbs. This is implemented in the next patch in this series.

- BNXT_RE_METHOD_UMEM_DEREG:
  Deregister the user memory specified by the umem-handle.
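
For illustration only, below is a rough user-space sketch of how an
application might drive these verbs once the corresponding provider
library support is in place. The bnxt_re_dv_*() helpers are
hypothetical stand-ins for whatever API the userspace library
eventually exposes on top of these kernel methods; they are not part
of this series.

#include <stdint.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

/*
 * Hypothetical DV wrappers (illustrative names/signatures only) built
 * on the kernel methods added here: BNXT_RE_METHOD_DBR_ALLOC/FREE and
 * BNXT_RE_METHOD_UMEM_REG/DEREG.
 */
struct bnxt_re_dv_db_region;
struct bnxt_re_dv_umem;

extern struct bnxt_re_dv_db_region *
bnxt_re_dv_dbr_alloc(struct ibv_context *ctx);
extern void bnxt_re_dv_dbr_free(struct bnxt_re_dv_db_region *dbr);
extern struct bnxt_re_dv_umem *
bnxt_re_dv_umem_reg(struct ibv_context *ctx, void *addr, size_t len,
		    uint32_t access);
extern void bnxt_re_dv_umem_dereg(struct bnxt_re_dv_umem *umem);

static int example_dv_flow(struct ibv_context *ctx)
{
	struct bnxt_re_dv_db_region *dbr;
	struct bnxt_re_dv_umem *umem;
	void *buf;

	/* Extra doorbell region; its DPI is later passed to DV_CREATE_QP. */
	dbr = bnxt_re_dv_dbr_alloc(ctx);
	if (!dbr)
		return -1;

	/*
	 * Register one large buffer up front. As described above, only
	 * (addr, len, access) are recorded here; the memory is pinned when
	 * a CQ/QP is later created over a slice of this umem (next patch).
	 */
	if (posix_memalign(&buf, 4096, 1 << 20))
		goto free_dbr;
	umem = bnxt_re_dv_umem_reg(ctx, buf, 1 << 20, IBV_ACCESS_LOCAL_WRITE);
	if (!umem)
		goto free_buf;

	/* ... create DV CQs/QPs backed by 'umem' and 'dbr', run datapath ... */

	bnxt_re_dv_umem_dereg(umem);
	free(buf);
	bnxt_re_dv_dbr_free(dbr);
	return 0;

free_buf:
	free(buf);
free_dbr:
	bnxt_re_dv_dbr_free(dbr);
	return -1;
}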

Co-developed-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Selvin Thyparampil Xavier <selvin.xavier@broadcom.com>
---
 drivers/infiniband/hw/bnxt_re/dv.c        | 252 ++++++++++++++++++++++
 drivers/infiniband/hw/bnxt_re/ib_verbs.h  |   8 +
 drivers/infiniband/hw/bnxt_re/qplib_res.c |  43 ++++
 drivers/infiniband/hw/bnxt_re/qplib_res.h |   4 +
 include/uapi/rdma/bnxt_re-abi.h           |  49 +++++
 5 files changed, 356 insertions(+)

diff --git a/drivers/infiniband/hw/bnxt_re/dv.c b/drivers/infiniband/hw/bnxt_re/dv.c
index 2b3e34b940b3..f40c0478f00d 100644
--- a/drivers/infiniband/hw/bnxt_re/dv.c
+++ b/drivers/infiniband/hw/bnxt_re/dv.c
@@ -42,6 +42,7 @@
 #include <rdma/ib_user_ioctl_cmds.h>
 #define UVERBS_MODULE_NAME bnxt_re
 #include <rdma/uverbs_named_ioctl.h>
+#include <rdma/ib_umem.h>
 #include <rdma/bnxt_re-abi.h>
 
 #include "roce_hsi.h"
@@ -52,6 +53,15 @@
 #include "bnxt_re.h"
 #include "ib_verbs.h"
 
+struct bnxt_re_dv_umem {
+	struct bnxt_re_dev *rdev;
+	struct ib_umem *umem;
+	__u64 addr;
+	size_t size;
+	__u32 access;
+	int dmabuf_fd;
+};
+
 static struct bnxt_re_cq *bnxt_re_search_for_cq(struct bnxt_re_dev *rdev, u32 cq_id)
 {
 	struct bnxt_re_cq *cq = NULL, *tmp_cq;
@@ -348,9 +358,251 @@ DECLARE_UVERBS_NAMED_OBJECT(BNXT_RE_OBJECT_GET_TOGGLE_MEM,
 			    &UVERBS_METHOD(BNXT_RE_METHOD_GET_TOGGLE_MEM),
 			    &UVERBS_METHOD(BNXT_RE_METHOD_RELEASE_TOGGLE_MEM));
 
+static int bnxt_re_dv_validate_umem_attr(struct bnxt_re_dev *rdev,
+					 struct uverbs_attr_bundle *attrs,
+					 struct bnxt_re_dv_umem *obj)
+{
+	int dmabuf_fd = 0;
+	u32 access_flags;
+	size_t size;
+	u64 addr;
+	int err;
+
+	err = uverbs_get_flags32(&access_flags, attrs,
+				 BNXT_RE_UMEM_OBJ_REG_ACCESS,
+				 IB_ACCESS_LOCAL_WRITE |
+				 IB_ACCESS_REMOTE_WRITE |
+				 IB_ACCESS_REMOTE_READ);
+	if (err)
+		return err;
+
+	err = ib_check_mr_access(&rdev->ibdev, access_flags);
+	if (err)
+		return err;
+
+	if (uverbs_copy_from(&addr, attrs, BNXT_RE_UMEM_OBJ_REG_ADDR) ||
+	    uverbs_copy_from(&size, attrs, BNXT_RE_UMEM_OBJ_REG_LEN))
+		return -EFAULT;
+	if (uverbs_attr_is_valid(attrs, BNXT_RE_UMEM_OBJ_REG_DMABUF_FD)) {
+		if (uverbs_get_raw_fd(&dmabuf_fd, attrs,
+				      BNXT_RE_UMEM_OBJ_REG_DMABUF_FD))
+			return -EFAULT;
+	}
+	obj->addr = addr;
+	obj->size = size;
+	obj->access = access_flags;
+	obj->dmabuf_fd = dmabuf_fd;
+
+	return 0;
+}
+
+static int bnxt_re_dv_umem_cleanup(struct ib_uobject *uobject,
+				   enum rdma_remove_reason why,
+				   struct uverbs_attr_bundle *attrs)
+{
+	kfree(uobject->object);
+	return 0;
+}
+
+static int UVERBS_HANDLER(BNXT_RE_METHOD_UMEM_REG)(struct uverbs_attr_bundle *attrs)
+{
+	struct ib_uobject *uobj =
+		uverbs_attr_get_uobject(attrs, BNXT_RE_UMEM_OBJ_REG_HANDLE);
+	struct bnxt_re_ucontext *uctx;
+	struct ib_ucontext *ib_uctx;
+	struct bnxt_re_dv_umem *obj;
+	struct bnxt_re_dev *rdev;
+	int err;
+
+	ib_uctx = ib_uverbs_get_ucontext(attrs);
+	if (IS_ERR(ib_uctx))
+		return PTR_ERR(ib_uctx);
+
+	uctx = container_of(ib_uctx, struct bnxt_re_ucontext, ib_uctx);
+	rdev = uctx->rdev;
+
+	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
+	if (!obj)
+		return -ENOMEM;
+
+	obj->rdev = rdev;
+	err = bnxt_re_dv_validate_umem_attr(rdev, attrs, obj);
+	if (err)
+		goto free_mem;
+
+	obj->umem = NULL;
+	uobj->object = obj;
+	uverbs_finalize_uobj_create(attrs, BNXT_RE_UMEM_OBJ_REG_HANDLE);
+
+	return 0;
+free_mem:
+	kfree(obj);
+	return err;
+}
+
+DECLARE_UVERBS_NAMED_METHOD(BNXT_RE_METHOD_UMEM_REG,
+			    UVERBS_ATTR_IDR(BNXT_RE_UMEM_OBJ_REG_HANDLE,
+					    BNXT_RE_OBJECT_UMEM,
+					    UVERBS_ACCESS_NEW,
+					    UA_MANDATORY),
+			    UVERBS_ATTR_PTR_IN(BNXT_RE_UMEM_OBJ_REG_ADDR,
+					       UVERBS_ATTR_TYPE(u64),
+					       UA_MANDATORY),
+			    UVERBS_ATTR_PTR_IN(BNXT_RE_UMEM_OBJ_REG_LEN,
+					       UVERBS_ATTR_TYPE(u64),
+					       UA_MANDATORY),
+			    UVERBS_ATTR_FLAGS_IN(BNXT_RE_UMEM_OBJ_REG_ACCESS,
+						 enum ib_access_flags),
+			    UVERBS_ATTR_RAW_FD(BNXT_RE_UMEM_OBJ_REG_DMABUF_FD,
+					       UA_OPTIONAL),
+			    UVERBS_ATTR_CONST_IN(BNXT_RE_UMEM_OBJ_REG_PGSZ_BITMAP,
+						 u64));
+
+DECLARE_UVERBS_NAMED_METHOD_DESTROY(BNXT_RE_METHOD_UMEM_DEREG,
+				    UVERBS_ATTR_IDR(BNXT_RE_UMEM_OBJ_DEREG_HANDLE,
+						    BNXT_RE_OBJECT_UMEM,
+						    UVERBS_ACCESS_DESTROY,
+						    UA_MANDATORY));
+
+DECLARE_UVERBS_NAMED_OBJECT(BNXT_RE_OBJECT_UMEM,
+			    UVERBS_TYPE_ALLOC_IDR(bnxt_re_dv_umem_cleanup),
+			    &UVERBS_METHOD(BNXT_RE_METHOD_UMEM_REG),
+			    &UVERBS_METHOD(BNXT_RE_METHOD_UMEM_DEREG));
+
+static int UVERBS_HANDLER(BNXT_RE_METHOD_DBR_ALLOC)(struct uverbs_attr_bundle *attrs)
+{
+	struct bnxt_re_dv_db_region dbr = {};
+	struct bnxt_re_alloc_dbr_obj *obj;
+	struct bnxt_re_ucontext *uctx;
+	struct ib_ucontext *ib_uctx;
+	struct bnxt_qplib_dpi *dpi;
+	struct bnxt_re_dev *rdev;
+	struct ib_uobject *uobj;
+	u64 mmap_offset;
+	int ret;
+
+	ib_uctx = ib_uverbs_get_ucontext(attrs);
+	if (IS_ERR(ib_uctx))
+		return PTR_ERR(ib_uctx);
+
+	uctx = container_of(ib_uctx, struct bnxt_re_ucontext, ib_uctx);
+	rdev = uctx->rdev;
+	uobj = uverbs_attr_get_uobject(attrs, BNXT_RE_DV_ALLOC_DBR_HANDLE);
+
+	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
+	if (!obj)
+		return -ENOMEM;
+
+	dpi = &obj->dpi;
+	ret = bnxt_qplib_alloc_uc_dpi(&rdev->qplib_res, dpi);
+	if (ret)
+		goto free_mem;
+
+	obj->entry = bnxt_re_mmap_entry_insert(uctx, 0, BNXT_RE_MMAP_UC_DB,
+					       &mmap_offset);
+	if (!obj->entry) {
+		ret = -ENOMEM;
+		goto free_dpi;
+	}
+
+	obj->rdev = rdev;
+	dbr.umdbr = dpi->umdbr;
+	dbr.dpi = dpi->dpi;
+
+	ret = uverbs_copy_to_struct_or_zero(attrs, BNXT_RE_DV_ALLOC_DBR_ATTR,
+					    &dbr, sizeof(dbr));
+	if (ret)
+		goto free_entry;
+
+	ret = uverbs_copy_to(attrs, BNXT_RE_DV_ALLOC_DBR_OFFSET,
+			     &mmap_offset, sizeof(mmap_offset));
+	if (ret)
+		goto free_entry;
+
+	uobj->object = obj;
+	uverbs_finalize_uobj_create(attrs, BNXT_RE_DV_ALLOC_DBR_HANDLE);
+	return 0;
+free_entry:
+	rdma_user_mmap_entry_remove(&obj->entry->rdma_entry);
+free_dpi:
+	bnxt_qplib_free_uc_dpi(&rdev->qplib_res, dpi);
+free_mem:
+	kfree(obj);
+	return ret;
+}
+
+static int bnxt_re_dv_dbr_cleanup(struct ib_uobject *uobject,
+				  enum rdma_remove_reason why,
+				  struct uverbs_attr_bundle *attrs)
+{
+	struct bnxt_re_alloc_dbr_obj *obj = uobject->object;
+	struct bnxt_re_dev *rdev = obj->rdev;
+
+	rdma_user_mmap_entry_remove(&obj->entry->rdma_entry);
+	bnxt_qplib_free_uc_dpi(&rdev->qplib_res, &obj->dpi);
+	kfree(obj);
+	return 0;
+}
+
+static int UVERBS_HANDLER(BNXT_RE_METHOD_DBR_QUERY)(struct uverbs_attr_bundle *attrs)
+{
+	struct bnxt_re_dv_db_region dpi = {};
+	struct bnxt_re_ucontext *uctx;
+	struct ib_ucontext *ib_uctx;
+	int ret;
+
+	ib_uctx = ib_uverbs_get_ucontext(attrs);
+	if (IS_ERR(ib_uctx))
+		return PTR_ERR(ib_uctx);
+
+	uctx = container_of(ib_uctx, struct bnxt_re_ucontext, ib_uctx);
+	dpi.umdbr = uctx->dpi.umdbr;
+	dpi.dpi = uctx->dpi.dpi;
+
+	ret = uverbs_copy_to_struct_or_zero(attrs, BNXT_RE_DV_QUERY_DBR_ATTR,
+					    &dpi, sizeof(dpi));
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+DECLARE_UVERBS_NAMED_METHOD(BNXT_RE_METHOD_DBR_ALLOC,
+			    UVERBS_ATTR_IDR(BNXT_RE_DV_ALLOC_DBR_HANDLE,
+					    BNXT_RE_OBJECT_DBR,
+					    UVERBS_ACCESS_NEW,
+					    UA_MANDATORY),
+			    UVERBS_ATTR_PTR_OUT(BNXT_RE_DV_ALLOC_DBR_ATTR,
+						UVERBS_ATTR_STRUCT(struct bnxt_re_dv_db_region,
+								   dbr),
+								   UA_MANDATORY),
+			    UVERBS_ATTR_PTR_IN(BNXT_RE_DV_ALLOC_DBR_OFFSET,
+					       UVERBS_ATTR_TYPE(u64),
+					       UA_MANDATORY));
+
+DECLARE_UVERBS_NAMED_METHOD(BNXT_RE_METHOD_DBR_QUERY,
+			    UVERBS_ATTR_PTR_OUT(BNXT_RE_DV_QUERY_DBR_ATTR,
+						UVERBS_ATTR_STRUCT(struct bnxt_re_dv_db_region,
+								   dbr),
+						UA_MANDATORY));
+
+DECLARE_UVERBS_NAMED_METHOD_DESTROY(BNXT_RE_METHOD_DBR_FREE,
+				    UVERBS_ATTR_IDR(BNXT_RE_DV_FREE_DBR_HANDLE,
+						    BNXT_RE_OBJECT_DBR,
+						    UVERBS_ACCESS_DESTROY,
+						    UA_MANDATORY));
+
+DECLARE_UVERBS_NAMED_OBJECT(BNXT_RE_OBJECT_DBR,
+			    UVERBS_TYPE_ALLOC_IDR(bnxt_re_dv_dbr_cleanup),
+			    &UVERBS_METHOD(BNXT_RE_METHOD_DBR_ALLOC),
+			    &UVERBS_METHOD(BNXT_RE_METHOD_DBR_FREE),
+			    &UVERBS_METHOD(BNXT_RE_METHOD_DBR_QUERY));
+
 const struct uapi_definition bnxt_re_uapi_defs[] = {
 	UAPI_DEF_CHAIN_OBJ_TREE_NAMED(BNXT_RE_OBJECT_ALLOC_PAGE),
 	UAPI_DEF_CHAIN_OBJ_TREE_NAMED(BNXT_RE_OBJECT_NOTIFY_DRV),
 	UAPI_DEF_CHAIN_OBJ_TREE_NAMED(BNXT_RE_OBJECT_GET_TOGGLE_MEM),
+	UAPI_DEF_CHAIN_OBJ_TREE_NAMED(BNXT_RE_OBJECT_DBR),
+	UAPI_DEF_CHAIN_OBJ_TREE_NAMED(BNXT_RE_OBJECT_UMEM),
 	{}
 };
diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.h b/drivers/infiniband/hw/bnxt_re/ib_verbs.h
index a11f56730a31..1ff89192a728 100644
--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.h
+++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.h
@@ -113,6 +113,7 @@ struct bnxt_re_cq {
 	int			resize_cqe;
 	void			*uctx_cq_page;
 	struct hlist_node	hash_entry;
+	struct bnxt_re_ucontext *uctx;
 };
 
 struct bnxt_re_mr {
@@ -164,6 +165,13 @@ struct bnxt_re_user_mmap_entry {
 	u8 mmap_flag;
 };
 
+struct bnxt_re_alloc_dbr_obj {
+	struct bnxt_re_dev *rdev;
+	struct bnxt_re_dv_db_region attr;
+	struct bnxt_qplib_dpi dpi;
+	struct bnxt_re_user_mmap_entry *entry;
+};
+
 struct bnxt_re_flow {
 	struct ib_flow		ib_flow;
 	struct bnxt_re_dev	*rdev;
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_res.c b/drivers/infiniband/hw/bnxt_re/qplib_res.c
index 875d7b52c06a..30cc2d64a9ae 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_res.c
+++ b/drivers/infiniband/hw/bnxt_re/qplib_res.c
@@ -685,6 +685,49 @@ static int bnxt_qplib_alloc_pd_tbl(struct bnxt_qplib_res *res,
 }
 
 /* DPIs */
+int bnxt_qplib_alloc_uc_dpi(struct bnxt_qplib_res *res, struct bnxt_qplib_dpi *dpi)
+{
+	struct bnxt_qplib_dpi_tbl *dpit = &res->dpi_tbl;
+	struct bnxt_qplib_reg_desc *reg;
+	u32 bit_num;
+	int rc = 0;
+
+	reg = &dpit->wcreg;
+	mutex_lock(&res->dpi_tbl_lock);
+	bit_num = find_first_bit(dpit->tbl, dpit->max);
+	if (bit_num >= dpit->max) {
+		rc = -ENOMEM;
+		goto unlock;
+	}
+	/* Found unused DPI */
+	clear_bit(bit_num, dpit->tbl);
+	dpi->bit = bit_num;
+	dpi->dpi = bit_num + (reg->offset - dpit->ucreg.offset) / PAGE_SIZE;
+	dpi->umdbr = reg->bar_base + reg->offset + bit_num * PAGE_SIZE;
+unlock:
+	mutex_unlock(&res->dpi_tbl_lock);
+	return rc;
+}
+
+int bnxt_qplib_free_uc_dpi(struct bnxt_qplib_res *res, struct bnxt_qplib_dpi *dpi)
+{
+	struct bnxt_qplib_dpi_tbl *dpit = &res->dpi_tbl;
+	int rc = 0;
+
+	mutex_lock(&res->dpi_tbl_lock);
+	if (dpi->bit >= dpit->max) {
+		rc = -EINVAL;
+		goto unlock;
+	}
+
+	if (test_and_set_bit(dpi->bit, dpit->tbl))
+		rc = -EINVAL;
+	memset(dpi, 0, sizeof(*dpi));
+unlock:
+	mutex_unlock(&res->dpi_tbl_lock);
+	return rc;
+}
+
 int bnxt_qplib_alloc_dpi(struct bnxt_qplib_res *res,
 			 struct bnxt_qplib_dpi *dpi,
 			 void *app, u8 type)
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_res.h b/drivers/infiniband/hw/bnxt_re/qplib_res.h
index ccdab938d707..3a8162ef4c33 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_res.h
+++ b/drivers/infiniband/hw/bnxt_re/qplib_res.h
@@ -436,6 +436,10 @@ int bnxt_qplib_alloc_dpi(struct bnxt_qplib_res *res,
 			 void *app, u8 type);
 int bnxt_qplib_dealloc_dpi(struct bnxt_qplib_res *res,
 			   struct bnxt_qplib_dpi *dpi);
+int bnxt_qplib_alloc_uc_dpi(struct bnxt_qplib_res *res,
+			    struct bnxt_qplib_dpi *dpi);
+int bnxt_qplib_free_uc_dpi(struct bnxt_qplib_res *res,
+			   struct bnxt_qplib_dpi *dpi);
 void bnxt_qplib_cleanup_res(struct bnxt_qplib_res *res);
 int bnxt_qplib_init_res(struct bnxt_qplib_res *res);
 void bnxt_qplib_free_res(struct bnxt_qplib_res *res);
diff --git a/include/uapi/rdma/bnxt_re-abi.h b/include/uapi/rdma/bnxt_re-abi.h
index faa9d62b3b30..59a0b030de04 100644
--- a/include/uapi/rdma/bnxt_re-abi.h
+++ b/include/uapi/rdma/bnxt_re-abi.h
@@ -162,6 +162,8 @@ enum bnxt_re_objects {
 	BNXT_RE_OBJECT_ALLOC_PAGE = (1U << UVERBS_ID_NS_SHIFT),
 	BNXT_RE_OBJECT_NOTIFY_DRV,
 	BNXT_RE_OBJECT_GET_TOGGLE_MEM,
+	BNXT_RE_OBJECT_DBR,
+	BNXT_RE_OBJECT_UMEM,
 };
 
 enum bnxt_re_alloc_page_type {
@@ -215,4 +217,51 @@ enum bnxt_re_toggle_mem_methods {
 	BNXT_RE_METHOD_GET_TOGGLE_MEM = (1U << UVERBS_ID_NS_SHIFT),
 	BNXT_RE_METHOD_RELEASE_TOGGLE_MEM,
 };
+
+struct bnxt_re_dv_db_region {
+	__u32 dbr_handle;
+	__u32 dpi;
+	__u64 umdbr;
+	void *dbr;
+	__aligned_u64 comp_mask;
+};
+
+enum bnxt_re_obj_dbr_alloc_attrs {
+	BNXT_RE_DV_ALLOC_DBR_HANDLE = (1U << UVERBS_ID_NS_SHIFT),
+	BNXT_RE_DV_ALLOC_DBR_ATTR,
+	BNXT_RE_DV_ALLOC_DBR_OFFSET,
+};
+
+enum bnxt_re_obj_dbr_free_attrs {
+	BNXT_RE_DV_FREE_DBR_HANDLE = (1U << UVERBS_ID_NS_SHIFT),
+};
+
+enum bnxt_re_obj_dbr_query_attrs {
+	BNXT_RE_DV_QUERY_DBR_ATTR = (1U << UVERBS_ID_NS_SHIFT),
+};
+
+enum bnxt_re_obj_dpi_methods {
+	BNXT_RE_METHOD_DBR_ALLOC = (1U << UVERBS_ID_NS_SHIFT),
+	BNXT_RE_METHOD_DBR_FREE,
+	BNXT_RE_METHOD_DBR_QUERY,
+};
+
+enum bnxt_re_dv_umem_reg_attrs {
+	BNXT_RE_UMEM_OBJ_REG_HANDLE = (1U << UVERBS_ID_NS_SHIFT),
+	BNXT_RE_UMEM_OBJ_REG_ADDR,
+	BNXT_RE_UMEM_OBJ_REG_LEN,
+	BNXT_RE_UMEM_OBJ_REG_ACCESS,
+	BNXT_RE_UMEM_OBJ_REG_DMABUF_FD,
+	BNXT_RE_UMEM_OBJ_REG_PGSZ_BITMAP,
+};
+
+enum bnxt_re_dv_umem_dereg_attrs {
+	BNXT_RE_UMEM_OBJ_DEREG_HANDLE = (1U << UVERBS_ID_NS_SHIFT),
+};
+
+enum bnxt_re_dv_umem_methods {
+	BNXT_RE_METHOD_UMEM_REG = (1U << UVERBS_ID_NS_SHIFT),
+	BNXT_RE_METHOD_UMEM_DEREG,
+};
+
 #endif /* __BNXT_RE_UVERBS_ABI_H__*/
-- 
2.51.2.636.ga99f379adf


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH rdma-next v2 4/4] RDMA/bnxt_re: Direct Verbs: Support CQ and QP verbs
  2025-11-04  7:23 [PATCH rdma-next v2 0/4] RDMA/bnxt_re: Support direct verbs Sriharsha Basavapatna
                   ` (2 preceding siblings ...)
  2025-11-04  7:23 ` [PATCH rdma-next v2 3/4] RDMA/bnxt_re: Direct Verbs: Support DBR and UMEM verbs Sriharsha Basavapatna
@ 2025-11-04  7:23 ` Sriharsha Basavapatna
  2025-11-09  9:49   ` Leon Romanovsky
  3 siblings, 1 reply; 12+ messages in thread
From: Sriharsha Basavapatna @ 2025-11-04  7:23 UTC (permalink / raw)
  To: leon, jgg
  Cc: linux-rdma, andrew.gospodarek, selvin.xavier,
	kalesh-anakkur.purayil, Sriharsha Basavapatna

Implement the following Direct Verb (DV) methods in this patch.

CQ Direct Verbs:
----------------
- BNXT_RE_METHOD_DV_CREATE_CQ:
  Create a CQ of the requested size (cqe) on user memory that the
  application has already registered with the driver using
  DV_UMEM_REG. The CQ umem-handle and umem-offset are passed to the
  driver, which then maps/pins the CQ user memory and registers it
  with the hardware. The driver returns a CQ-handle to the
  application (a creation sketch follows below).

- BNXT_RE_METHOD_DV_DESTROY_CQ:
  Destroy the DV_CQ specified by the CQ-handle; unmap the user memory.
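
A minimal sketch of the CQ creation call, again assuming rdma-core's
command-buffer helpers; the attribute IDs, the 'ncqe' field and the
response layout (cqid/tail/phase) come from this patch, the helper
below is illustrative only.

static int dv_create_cq(struct ibv_context *ctx, uint32_t umem_handle,
			uint64_t umem_offset, uint32_t ncqe,
			struct bnxt_re_dv_cq_resp *resp,
			uint32_t *cq_handle)
{
	DECLARE_COMMAND_BUFFER(cmd, BNXT_RE_OBJECT_DV_CQ,
			       BNXT_RE_METHOD_DV_CREATE_CQ, 5);
	struct bnxt_re_dv_cq_req req = { .ncqe = ncqe };
	struct ib_uverbs_attr *handle;
	int ret;

	handle = fill_attr_out_obj(cmd, BNXT_RE_DV_CREATE_CQ_HANDLE);
	fill_attr_in_ptr(cmd, BNXT_RE_DV_CREATE_CQ_REQ, &req);
	fill_attr_in_obj(cmd, BNXT_RE_DV_CREATE_CQ_UMEM_HANDLE,
			 umem_handle);
	fill_attr_in_uint64(cmd, BNXT_RE_DV_CREATE_CQ_UMEM_OFFSET,
			    umem_offset);
	fill_attr_out_ptr(cmd, BNXT_RE_DV_CREATE_CQ_RESP, resp);

	/* The kernel pins ncqe * sizeof(struct cq_base) bytes starting
	 * at umem_offset inside the registered umem and fills in
	 * cqid/tail/phase on success.
	 */
	ret = execute_ioctl(ctx, cmd);
	if (ret)
		return ret;

	*cq_handle = read_attr_obj(BNXT_RE_DV_CREATE_CQ_HANDLE, handle);
	return 0;
}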

QP Direct Verbs:
----------------
- BNXT_RE_METHOD_DV_CREATE_QP:
  Create a QP using the specified parameters
  (struct bnxt_re_dv_create_qp_req). The application must have
  already registered the SQ/RQ memory with the driver using
  DV_UMEM_REG. The SQ/RQ umem-handle and umem-offset are passed to
  the driver, which then maps/pins the SQ/RQ user memory and
  registers it with the hardware. The driver returns a QP-handle to
  the application.

- BNXT_RE_METHOD_DV_DESTROY_QP:
  Destroy the DV_QP specified by the QP-handle; unmap SQ/RQ user memory.

- BNXT_RE_METHOD_DV_MODIFY_QP:
  Modify the QP attributes of the DV_QP specified by the QP-handle;
  wrapper functions resolve the dmac/smac using rdma_resolve_ip()
  (a state-transition sketch follows this section).

- BNXT_RE_METHOD_DV_QUERY_QP:
  Return QP attributes for the DV_QP specified by the QP-handle.
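
For reference, a sketch of the expected modify sequence for an RC
DV QP, using the standard verbs attribute masks. The wrapper
bnxt_re_dv_modify_qp() is an assumption: it is expected to translate
struct ibv_qp_attr into the struct ib_uverbs_qp_attr carried by
BNXT_RE_DV_MODIFY_QP_REQ; only the method itself comes from this
patch.

#include <stdint.h>
#include <infiniband/verbs.h>

/* Assumed provider wrapper around BNXT_RE_METHOD_DV_MODIFY_QP */
int bnxt_re_dv_modify_qp(struct ibv_context *ctx, uint32_t qp_handle,
			 struct ibv_qp_attr *attr, int attr_mask);

static int dv_qp_to_rts(struct ibv_context *ctx, uint32_t qp_handle,
			uint32_t dest_qpn, uint32_t rq_psn,
			uint32_t sq_psn)
{
	struct ibv_qp_attr attr = {
		.qp_state = IBV_QPS_INIT,
		.port_num = 1,
		.qp_access_flags = IBV_ACCESS_REMOTE_WRITE,
	};
	int ret;

	ret = bnxt_re_dv_modify_qp(ctx, qp_handle, &attr,
				   IBV_QP_STATE | IBV_QP_PKEY_INDEX |
				   IBV_QP_PORT | IBV_QP_ACCESS_FLAGS);
	if (ret)
		return ret;

	attr.qp_state = IBV_QPS_RTR;
	attr.path_mtu = IBV_MTU_1024;
	attr.dest_qp_num = dest_qpn;
	attr.rq_psn = rq_psn;
	attr.max_dest_rd_atomic = 1;
	attr.min_rnr_timer = 12;
	/* attr.ah_attr must carry the dgid/sgid_index; the kernel then
	 * resolves dmac/smac via rdma_resolve_ip() as described above.
	 */
	ret = bnxt_re_dv_modify_qp(ctx, qp_handle, &attr,
				   IBV_QP_STATE | IBV_QP_AV |
				   IBV_QP_PATH_MTU | IBV_QP_DEST_QPN |
				   IBV_QP_RQ_PSN |
				   IBV_QP_MAX_DEST_RD_ATOMIC |
				   IBV_QP_MIN_RNR_TIMER);
	if (ret)
		return ret;

	attr.qp_state = IBV_QPS_RTS;
	attr.sq_psn = sq_psn;
	attr.timeout = 14;
	attr.retry_cnt = 7;
	attr.rnr_retry = 7;
	attr.max_rd_atomic = 1;
	return bnxt_re_dv_modify_qp(ctx, qp_handle, &attr,
				    IBV_QP_STATE | IBV_QP_SQ_PSN |
				    IBV_QP_TIMEOUT | IBV_QP_RETRY_CNT |
				    IBV_QP_RNR_RETRY |
				    IBV_QP_MAX_QP_RD_ATOMIC);
}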

Note:
-----
Some applications might want to allocate memory for all resources of a
given type (CQ/QP) in one big chunk and then register that entire memory
once using DV_UMEM_REG. At the time of creating each individual
resource, the application passes a specific offset/length within the
registered umem.

- The DV_UMEM_REG handler (previous patch) only creates a dv_umem object
  and saves the user memory parameters; it does not actually map/pin
  this memory.
- The mapping would be done at the time of creating individual objects.
- This actual mapping of specific umem offsets is implemented by the
  function bnxt_re_dv_umem_get(). This function validates the
  umem-offset and size parameters passed during CQ/QP creation. If the
  request is valid, it maps the specified offset/length within the umem
  registered memory.
- The CQ and QP creation DV handlers call bnxt_re_dv_umem_get() to map
  offsets/sizes specific to each individual object. This means each
  object gets its own mapped dv_umem object that is distinct from the
  main dv_umem object created during DV_UMEM_REG.
- The object-specific dv_umem is unmapped when the object is destroyed
  (a suballocation sketch follows below).
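
A short sketch of such a suballocation scheme, mirroring the checks in
bnxt_re_dv_is_valid_umem(): the per-object offset must be page aligned
and offset + size must stay within the registered umem. The policy
itself (bump allocation below) is an application choice, not something
this patch dictates.

#include <stdint.h>
#include <unistd.h>

static int place_ring_in_umem(uint64_t umem_len, uint64_t *next_free,
			      uint64_t ring_bytes, uint64_t *ring_offset)
{
	uint64_t page = (uint64_t)sysconf(_SC_PAGESIZE);
	uint64_t offset = (*next_free + page - 1) & ~(page - 1);

	/* Would fail the kernel's bounds check in bnxt_re_dv_umem_get() */
	if (offset + ring_bytes > umem_len)
		return -1;

	*ring_offset = offset;		/* page aligned, as required */
	*next_free = offset + ring_bytes;
	return 0;
}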

Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Co-developed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Co-developed-by: Selvin Xavier <selvin.xavier@broadcom.com>
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
---
 drivers/infiniband/hw/bnxt_re/bnxt_re.h  |   12 +-
 drivers/infiniband/hw/bnxt_re/dv.c       | 1208 ++++++++++++++++++++++
 drivers/infiniband/hw/bnxt_re/ib_verbs.c |   55 +-
 drivers/infiniband/hw/bnxt_re/ib_verbs.h |   12 +
 include/uapi/rdma/bnxt_re-abi.h          |   93 ++
 5 files changed, 1364 insertions(+), 16 deletions(-)

diff --git a/drivers/infiniband/hw/bnxt_re/bnxt_re.h b/drivers/infiniband/hw/bnxt_re/bnxt_re.h
index 3485e495ac6a..44d4a4a83bfe 100644
--- a/drivers/infiniband/hw/bnxt_re/bnxt_re.h
+++ b/drivers/infiniband/hw/bnxt_re/bnxt_re.h
@@ -167,6 +167,12 @@ static inline bool bnxt_re_chip_gen_p7(u16 chip_num)
 		chip_num == CHIP_NUM_57608);
 }
 
+enum {
+	BNXT_RE_DV_RES_TYPE_QP = 0,
+	BNXT_RE_DV_RES_TYPE_CQ,
+	BNXT_RE_DV_RES_TYPE_MAX
+};
+
 struct bnxt_re_dev {
 	struct ib_device		ibdev;
 	struct list_head		list;
@@ -231,6 +237,8 @@ struct bnxt_re_dev {
 	union ib_gid ugid;
 	u32 ugid_index;
 	u8 sniffer_flow_created : 1;
+	atomic_t		dv_cq_count;
+	atomic_t		dv_qp_count;
 };
 
 #define to_bnxt_re_dev(ptr, member)	\
@@ -274,6 +282,9 @@ static inline int bnxt_re_read_context_allowed(struct bnxt_re_dev *rdev)
 	return 0;
 }
 
+struct bnxt_qplib_nq *bnxt_re_get_nq(struct bnxt_re_dev *rdev);
+void bnxt_re_put_nq(struct bnxt_re_dev *rdev, struct bnxt_qplib_nq *nq);
+
 #define BNXT_RE_CONTEXT_TYPE_QPC_SIZE_P5	1088
 #define BNXT_RE_CONTEXT_TYPE_CQ_SIZE_P5		128
 #define BNXT_RE_CONTEXT_TYPE_MRW_SIZE_P5	128
@@ -286,5 +297,4 @@ static inline int bnxt_re_read_context_allowed(struct bnxt_re_dev *rdev)
 
 #define BNXT_RE_HWRM_CMD_TIMEOUT(rdev)		\
 		((rdev)->chip_ctx->hwrm_cmd_max_timeout * 1000)
-
 #endif
diff --git a/drivers/infiniband/hw/bnxt_re/dv.c b/drivers/infiniband/hw/bnxt_re/dv.c
index f40c0478f00d..0c30cec8cfe7 100644
--- a/drivers/infiniband/hw/bnxt_re/dv.c
+++ b/drivers/infiniband/hw/bnxt_re/dv.c
@@ -44,6 +44,7 @@
 #include <rdma/uverbs_named_ioctl.h>
 #include <rdma/ib_umem.h>
 #include <rdma/bnxt_re-abi.h>
+#include <rdma/ib_cache.h>
 
 #include "roce_hsi.h"
 #include "qplib_res.h"
@@ -396,6 +397,88 @@ static int bnxt_re_dv_validate_umem_attr(struct bnxt_re_dev *rdev,
 	return 0;
 }
 
+static bool bnxt_re_dv_is_valid_umem(struct bnxt_re_dv_umem *umem,
+				     u64 offset, u32 size)
+{
+	return ((offset == ALIGN(offset, PAGE_SIZE)) &&
+		(offset + size <= umem->size));
+}
+
+static struct bnxt_re_dv_umem *bnxt_re_dv_umem_get(struct bnxt_re_dev *rdev,
+						   struct ib_ucontext *ib_uctx,
+						   struct bnxt_re_dv_umem *obj,
+						   u64 umem_offset, u64 size,
+						   struct bnxt_qplib_sg_info *sg)
+{
+	struct bnxt_re_dv_umem *dv_umem;
+	struct ib_umem *umem;
+	int umem_pgs, rc;
+
+	if (!bnxt_re_dv_is_valid_umem(obj, umem_offset, size))
+		return ERR_PTR(-EINVAL);
+
+	dv_umem = kzalloc(sizeof(*dv_umem), GFP_KERNEL);
+	if (!dv_umem)
+		return ERR_PTR(-ENOMEM);
+
+	dv_umem->addr = obj->addr + umem_offset;
+	dv_umem->size = size;
+	dv_umem->rdev = obj->rdev;
+	dv_umem->dmabuf_fd = obj->dmabuf_fd;
+	dv_umem->access = obj->access;
+
+	if (obj->dmabuf_fd) {
+		struct ib_umem_dmabuf *umem_dmabuf;
+
+		umem_dmabuf = ib_umem_dmabuf_get_pinned(&rdev->ibdev, dv_umem->addr,
+							dv_umem->size, dv_umem->dmabuf_fd,
+							dv_umem->access);
+		if (IS_ERR(umem_dmabuf)) {
+			rc = PTR_ERR(umem_dmabuf);
+			dev_err(rdev_to_dev(rdev),
+				"%s: failed to get umem dmabuf : %d\n",
+				__func__, rc);
+			goto free_umem;
+		}
+		umem = &umem_dmabuf->umem;
+	} else {
+		umem = ib_umem_get(&rdev->ibdev, (unsigned long)dv_umem->addr,
+				   dv_umem->size, dv_umem->access);
+		if (IS_ERR(umem)) {
+			rc = PTR_ERR(umem);
+			dev_err(rdev_to_dev(rdev),
+				"%s: ib_umem_get failed! rc = %d\n",
+				__func__, rc);
+			goto free_umem;
+		}
+	}
+
+	dv_umem->umem = umem;
+
+	umem_pgs = ib_umem_num_dma_blocks(umem, PAGE_SIZE);
+	if (!umem_pgs) {
+		dev_err(rdev_to_dev(rdev), "%s: umem is invalid!", __func__);
+		rc = -EINVAL;
+		goto rel_umem;
+	}
+	sg->npages = ib_umem_num_dma_blocks(umem, PAGE_SIZE);
+	sg->pgshft = PAGE_SHIFT;
+	sg->pgsize = PAGE_SIZE;
+	sg->umem = umem;
+
+	dev_dbg(rdev_to_dev(rdev), "%s: umem: 0x%llx va: 0x%llx size: %lu\n",
+		__func__, (u64)umem, dv_umem->addr, dv_umem->size);
+	dev_dbg(rdev_to_dev(rdev), "\tpgsize: %d pgshft: %d npages: %d\n",
+		sg->pgsize, sg->pgshft, sg->npages);
+	return dv_umem;
+
+rel_umem:
+	ib_umem_release(umem);
+free_umem:
+	kfree(dv_umem);
+	return ERR_PTR(rc);
+}
+
 static int bnxt_re_dv_umem_cleanup(struct ib_uobject *uobject,
 				   enum rdma_remove_reason why,
 				   struct uverbs_attr_bundle *attrs)
@@ -598,11 +681,1136 @@ DECLARE_UVERBS_NAMED_OBJECT(BNXT_RE_OBJECT_DBR,
 			    &UVERBS_METHOD(BNXT_RE_METHOD_DBR_FREE),
 			    &UVERBS_METHOD(BNXT_RE_METHOD_DBR_QUERY));
 
+static int bnxt_re_dv_create_cq_resp(struct bnxt_re_dev *rdev,
+				     struct bnxt_re_cq *cq,
+				     struct bnxt_re_dv_cq_resp *resp)
+{
+	struct bnxt_qplib_cq *qplcq = &cq->qplib_cq;
+
+	resp->cqid = qplcq->id;
+	resp->tail = qplcq->hwq.cons;
+	resp->phase = qplcq->period;
+	resp->comp_mask = 0;
+
+	dev_dbg(rdev_to_dev(rdev),
+		"%s: cqid: 0x%x tail: 0x%x phase: 0x%x comp_mask: 0x%llx\n",
+		__func__, resp->cqid, resp->tail, resp->phase,
+		resp->comp_mask);
+	return 0;
+}
+
+static struct bnxt_re_cq *
+bnxt_re_dv_create_qplib_cq(struct bnxt_re_dev *rdev,
+			   struct bnxt_re_ucontext *re_uctx,
+			   struct bnxt_re_dv_cq_req *req,
+			   struct bnxt_re_dv_umem *umem_handle,
+			   u64 umem_offset)
+{
+	struct bnxt_qplib_dev_attr *dev_attr = rdev->dev_attr;
+	struct bnxt_re_dv_umem *dv_umem;
+	struct bnxt_qplib_cq *qplcq;
+	struct bnxt_re_cq *cq = NULL;
+	int cqe = req->ncqe;
+	u32 max_active_cqs;
+	int rc = 0;
+
+	if (atomic_read(&rdev->stats.res.cq_count) >= dev_attr->max_cq) {
+		dev_err(rdev_to_dev(rdev),
+			"Create CQ failed - max exceeded(CQs)");
+		return NULL;
+	}
+
+	/* Validate CQ fields */
+	if (cqe < 1 || cqe > dev_attr->max_cq_wqes) {
+		dev_err(rdev_to_dev(rdev),
+			"Create CQ failed - max exceeded(CQ_WQs)");
+		return NULL;
+	}
+
+	cq = kzalloc(sizeof(*cq), GFP_KERNEL);
+	if (!cq)
+		return NULL;
+
+	cq->rdev = rdev;
+	cq->uctx = re_uctx;
+	qplcq = &cq->qplib_cq;
+	qplcq->cq_handle = (u64)qplcq;
+	dev_dbg(rdev_to_dev(rdev), "%s: umem_va: 0x%llx umem_offset: 0x%llx\n",
+		__func__, umem_handle->addr, umem_offset);
+	dv_umem = bnxt_re_dv_umem_get(rdev, &re_uctx->ib_uctx, umem_handle,
+				      umem_offset, cqe * sizeof(struct cq_base),
+				      &qplcq->sg_info);
+	if (IS_ERR(dv_umem)) {
+		rc = PTR_ERR(dv_umem);
+		dev_err(rdev_to_dev(rdev), "%s: bnxt_re_dv_umem_get() failed! rc = %d\n",
+			__func__, rc);
+		goto fail_umem;
+	}
+	cq->umem = dv_umem->umem;
+	cq->umem_handle = dv_umem;
+	dev_dbg(rdev_to_dev(rdev), "%s: cq->umem: %llx\n", __func__, (u64)cq->umem);
+
+	qplcq->dpi = &re_uctx->dpi;
+	qplcq->max_wqe = cqe;
+	qplcq->nq = bnxt_re_get_nq(rdev);
+	qplcq->cnq_hw_ring_id = qplcq->nq->ring_id;
+	qplcq->coalescing = &rdev->cq_coalescing;
+	rc = bnxt_qplib_create_cq(&rdev->qplib_res, qplcq);
+	if (rc) {
+		dev_err(rdev_to_dev(rdev), "Create HW CQ failed!");
+		goto fail_qpl;
+	}
+
+	cq->ib_cq.cqe = cqe;
+	cq->cq_period = qplcq->period;
+
+	atomic_inc(&rdev->stats.res.cq_count);
+	max_active_cqs = atomic_read(&rdev->stats.res.cq_count);
+	if (max_active_cqs > rdev->stats.res.cq_watermark)
+		rdev->stats.res.cq_watermark = max_active_cqs;
+	spin_lock_init(&cq->cq_lock);
+
+	return cq;
+
+fail_qpl:
+	ib_umem_release(cq->umem);
+	kfree(cq->umem_handle);
+fail_umem:
+	kfree(cq);
+	return NULL;
+}
+
+static int bnxt_re_dv_uverbs_copy_to(struct bnxt_re_dev *rdev,
+				     struct uverbs_attr_bundle *attrs,
+				     int attr, void *from, size_t size)
+{
+	int ret;
+
+	ret = uverbs_copy_to_struct_or_zero(attrs, attr, from, size);
+	if (ret) {
+		dev_err(rdev_to_dev(rdev), "%s: uverbs_copy_to() failed: %d\n",
+			__func__, ret);
+		return ret;
+	}
+
+	dev_dbg(rdev_to_dev(rdev),
+		"%s: Copied to user from: 0x%llx size: 0x%lx\n",
+		__func__, (u64)from, size);
+	return ret;
+}
+
+static void bnxt_re_dv_finalize_uobj(struct ib_uobject *uobj, void *priv_obj,
+				     struct uverbs_attr_bundle *attrs, int attr)
+{
+	uobj->object = priv_obj;
+	uverbs_finalize_uobj_create(attrs, attr);
+}
+
+static void bnxt_re_dv_init_ib_cq(struct bnxt_re_dev *rdev,
+				  struct bnxt_re_cq *re_cq)
+{
+	struct ib_cq *ib_cq;
+
+	ib_cq = &re_cq->ib_cq;
+	ib_cq->device = &rdev->ibdev;
+	ib_cq->uobject = NULL;
+	ib_cq->comp_handler  = NULL;
+	ib_cq->event_handler = NULL;
+	atomic_set(&ib_cq->usecnt, 0);
+}
+
+static int UVERBS_HANDLER(BNXT_RE_METHOD_DV_CREATE_CQ)(struct uverbs_attr_bundle *attrs)
+{
+	struct ib_uobject *uobj =
+		uverbs_attr_get_uobject(attrs, BNXT_RE_DV_CREATE_CQ_HANDLE);
+	struct bnxt_re_dv_umem *umem_handle = NULL;
+	struct bnxt_re_dv_cq_resp resp = {};
+	struct bnxt_re_dv_cq_req req = {};
+	struct bnxt_re_ucontext *re_uctx;
+	struct ib_ucontext *ib_uctx;
+	struct bnxt_re_dev *rdev;
+	struct bnxt_re_cq *re_cq;
+	u64 offset;
+	int ret;
+
+	ib_uctx = ib_uverbs_get_ucontext(attrs);
+	if (IS_ERR(ib_uctx))
+		return PTR_ERR(ib_uctx);
+
+	re_uctx = container_of(ib_uctx, struct bnxt_re_ucontext, ib_uctx);
+	rdev = re_uctx->rdev;
+
+	ret = uverbs_copy_from_or_zero(&req, attrs, BNXT_RE_DV_CREATE_CQ_REQ);
+	if (ret) {
+		dev_err(rdev_to_dev(rdev), "%s: Failed to copy request: %d\n",
+			__func__, ret);
+		return ret;
+	}
+
+	umem_handle = uverbs_attr_get_obj(attrs, BNXT_RE_DV_CREATE_CQ_UMEM_HANDLE);
+	if (IS_ERR(umem_handle)) {
+		dev_err(rdev_to_dev(rdev),
+			"%s: BNXT_RE_DV_CREATE_CQ_UMEM_HANDLE is not valid\n",
+			__func__);
+		return PTR_ERR(umem_handle);
+	}
+
+	ret = uverbs_copy_from(&offset, attrs, BNXT_RE_DV_CREATE_CQ_UMEM_OFFSET);
+	if (ret) {
+		dev_err(rdev_to_dev(rdev), "%s: Failed to copy umem offset: %d\n",
+			__func__, ret);
+		return ret;
+	}
+
+	re_cq = bnxt_re_dv_create_qplib_cq(rdev, re_uctx, &req, umem_handle, offset);
+	if (!re_cq) {
+		dev_err(rdev_to_dev(rdev), "%s: Failed to create qplib cq\n",
+			__func__);
+		return -EIO;
+	}
+
+	ret = bnxt_re_dv_create_cq_resp(rdev, re_cq, &resp);
+	if (ret) {
+		dev_err(rdev_to_dev(rdev),
+			"%s: Failed to create cq response\n", __func__);
+		goto fail_resp;
+	}
+
+	ret = bnxt_re_dv_uverbs_copy_to(rdev, attrs, BNXT_RE_DV_CREATE_CQ_RESP,
+					&resp, sizeof(resp));
+	if (ret) {
+		dev_err(rdev_to_dev(rdev),
+			"%s: Failed to copy cq response: %d\n", __func__, ret);
+		goto fail_resp;
+	}
+
+	bnxt_re_dv_finalize_uobj(uobj, re_cq, attrs, BNXT_RE_DV_CREATE_CQ_HANDLE);
+	bnxt_re_dv_init_ib_cq(rdev, re_cq);
+	re_cq->is_dv_cq = true;
+	atomic_inc(&rdev->dv_cq_count);
+
+	dev_dbg(rdev_to_dev(rdev), "%s: Created CQ: 0x%llx, handle: 0x%x\n",
+		__func__, (u64)re_cq, uobj->id);
+
+	return 0;
+
+fail_resp:
+	bnxt_qplib_destroy_cq(&rdev->qplib_res, &re_cq->qplib_cq);
+	bnxt_re_put_nq(rdev, re_cq->qplib_cq.nq);
+	if (re_cq->umem_handle) {
+		ib_umem_release(re_cq->umem);
+		kfree(re_cq->umem_handle);
+	}
+	kfree(re_cq);
+	return ret;
+};
+
+static int bnxt_re_dv_free_cq(struct ib_uobject *uobj,
+			      enum rdma_remove_reason why,
+			      struct uverbs_attr_bundle *attrs)
+{
+	struct bnxt_re_cq *cq = uobj->object;
+	struct bnxt_re_dev *rdev = cq->rdev;
+	int rc;
+
+	dev_dbg(rdev_to_dev(rdev), "%s: Destroy CQ: 0x%llx, handle: 0x%x\n",
+		__func__, (u64)cq, uobj->id);
+
+	rc = bnxt_qplib_destroy_cq(&rdev->qplib_res, &cq->qplib_cq);
+	if (rc)
+		dev_err_ratelimited(rdev_to_dev(rdev),
+				    "%s id = %d failed rc = %d",
+				    __func__, cq->qplib_cq.id, rc);
+
+	bnxt_re_put_nq(rdev, cq->qplib_cq.nq);
+	if (cq->umem_handle) {
+		ib_umem_release(cq->umem);
+		kfree(cq->umem_handle);
+	}
+	atomic_dec(&rdev->stats.res.cq_count);
+	atomic_dec(&rdev->dv_cq_count);
+	kfree(cq);
+	uobj->object = NULL;
+	return 0;
+}
+
+DECLARE_UVERBS_NAMED_METHOD(BNXT_RE_METHOD_DV_CREATE_CQ,
+			    UVERBS_ATTR_IDR(BNXT_RE_DV_CREATE_CQ_HANDLE,
+					    BNXT_RE_OBJECT_DV_CQ,
+					    UVERBS_ACCESS_NEW,
+					    UA_MANDATORY),
+			    UVERBS_ATTR_PTR_IN(BNXT_RE_DV_CREATE_CQ_REQ,
+					       UVERBS_ATTR_STRUCT(struct bnxt_re_dv_cq_req,
+								  comp_mask),
+								  UA_MANDATORY),
+			    UVERBS_ATTR_IDR(BNXT_RE_DV_CREATE_CQ_UMEM_HANDLE,
+					    BNXT_RE_OBJECT_UMEM,
+					    UVERBS_ACCESS_READ,
+					    UA_MANDATORY),
+			    UVERBS_ATTR_PTR_IN(BNXT_RE_DV_CREATE_CQ_UMEM_OFFSET,
+					       UVERBS_ATTR_TYPE(u64),
+					       UA_MANDATORY),
+			    UVERBS_ATTR_PTR_OUT(BNXT_RE_DV_CREATE_CQ_RESP,
+						UVERBS_ATTR_STRUCT(struct bnxt_re_dv_cq_resp,
+								   comp_mask),
+								   UA_MANDATORY));
+
+DECLARE_UVERBS_NAMED_METHOD_DESTROY(BNXT_RE_METHOD_DV_DESTROY_CQ,
+				    UVERBS_ATTR_IDR(BNXT_RE_DV_DESTROY_CQ_HANDLE,
+						    BNXT_RE_OBJECT_DV_CQ,
+						    UVERBS_ACCESS_DESTROY,
+						    UA_MANDATORY));
+
+DECLARE_UVERBS_NAMED_OBJECT(BNXT_RE_OBJECT_DV_CQ,
+			    UVERBS_TYPE_ALLOC_IDR(bnxt_re_dv_free_cq),
+			    &UVERBS_METHOD(BNXT_RE_METHOD_DV_CREATE_CQ),
+			    &UVERBS_METHOD(BNXT_RE_METHOD_DV_DESTROY_CQ));
+
+static void
+bnxt_re_print_dv_qp_attr(struct bnxt_re_dev *rdev,
+			 struct bnxt_re_cq *send_cq,
+			 struct bnxt_re_cq *recv_cq,
+			 struct  bnxt_re_dv_create_qp_req *req)
+{
+	dev_dbg(rdev_to_dev(rdev), "DV_QP_ATTR:\n");
+	dev_dbg(rdev_to_dev(rdev),
+		"\t qp_type: 0x%x pdid: 0x%x qp_handle: 0x%llx\n",
+		req->qp_type, req->pd_id, req->qp_handle);
+
+	dev_dbg(rdev_to_dev(rdev), "\t SQ ATTR:\n");
+	dev_dbg(rdev_to_dev(rdev),
+		"\t\t max_send_wr: 0x%x max_send_sge: 0x%x\n",
+		req->max_send_wr, req->max_send_sge);
+	dev_dbg(rdev_to_dev(rdev),
+		"\t\t va: 0x%llx len: 0x%x slots: 0x%x wqe_sz: 0x%x\n",
+		req->sq_va, req->sq_len, req->sq_slots, req->sq_wqe_sz);
+	dev_dbg(rdev_to_dev(rdev), "\t\t psn_sz: 0x%x npsn: 0x%x\n",
+		req->sq_psn_sz, req->sq_npsn);
+	dev_dbg(rdev_to_dev(rdev),
+		"\t\t send_cq_id: 0x%x\n", send_cq->qplib_cq.id);
+
+	dev_dbg(rdev_to_dev(rdev), "\t RQ ATTR:\n");
+	dev_dbg(rdev_to_dev(rdev),
+		"\t\t max_recv_wr: 0x%x max_recv_sge: 0x%x\n",
+		req->max_recv_wr, req->max_recv_sge);
+	dev_dbg(rdev_to_dev(rdev),
+		"\t\t va: 0x%llx len: 0x%x slots: 0x%x wqe_sz: 0x%x\n",
+		req->rq_va, req->rq_len, req->rq_slots, req->rq_wqe_sz);
+	dev_dbg(rdev_to_dev(rdev),
+		"\t\t recv_cq_id: 0x%x\n", recv_cq->qplib_cq.id);
+}
+
+static int bnxt_re_dv_init_qp_attr(struct bnxt_re_qp *qp,
+				   struct ib_ucontext *context,
+				   struct bnxt_re_cq *send_cq,
+				   struct bnxt_re_cq *recv_cq,
+				   struct bnxt_re_srq *srq,
+				   struct bnxt_re_alloc_dbr_obj *dbr_obj,
+				   struct bnxt_re_dv_create_qp_req *init_attr)
+{
+	struct bnxt_qplib_dev_attr *dev_attr;
+	struct bnxt_re_ucontext *cntx = NULL;
+	struct bnxt_qplib_qp *qplqp;
+	struct bnxt_re_dev *rdev;
+	struct bnxt_qplib_q *rq;
+	struct bnxt_qplib_q *sq;
+	u32 slot_size;
+	int qptype;
+
+	rdev = qp->rdev;
+	qplqp = &qp->qplib_qp;
+	dev_attr = rdev->dev_attr;
+	cntx = container_of(context, struct bnxt_re_ucontext, ib_uctx);
+
+	/* Setup misc params */
+	qplqp->is_user = true;
+	qplqp->pd_id = init_attr->pd_id;
+	qplqp->qp_handle = (u64)qplqp;
+	qplqp->sig_type = false;
+	qptype = __from_ib_qp_type(init_attr->qp_type);
+	if (qptype < 0)
+		return qptype;
+	qplqp->type = (u8)qptype;
+	qplqp->wqe_mode = rdev->chip_ctx->modes.wqe_mode;
+	ether_addr_copy(qplqp->smac, rdev->netdev->dev_addr);
+	qplqp->dev_cap_flags = dev_attr->dev_cap_flags;
+	qplqp->cctx = rdev->chip_ctx;
+
+	if (init_attr->qp_type == IB_QPT_RC) {
+		qplqp->max_rd_atomic = dev_attr->max_qp_rd_atom;
+		qplqp->max_dest_rd_atomic = dev_attr->max_qp_init_rd_atom;
+	}
+	qplqp->mtu = ib_mtu_enum_to_int(iboe_get_mtu(rdev->netdev->mtu));
+	if (dbr_obj)
+		qplqp->dpi = &dbr_obj->dpi;
+	else
+		qplqp->dpi = &cntx->dpi;
+
+	/* Setup CQs */
+	if (!send_cq) {
+		dev_err(rdev_to_dev(rdev), "Send CQ not found");
+		return -EINVAL;
+	}
+	qplqp->scq = &send_cq->qplib_cq;
+	qp->scq = send_cq;
+
+	if (!recv_cq) {
+		dev_err(rdev_to_dev(rdev), "Receive CQ not found");
+		return -EINVAL;
+	}
+	qplqp->rcq = &recv_cq->qplib_cq;
+	qp->rcq = recv_cq;
+
+	if (!srq) {
+		/* Setup RQ */
+		slot_size = bnxt_qplib_get_stride();
+		rq = &qplqp->rq;
+		rq->max_sge = init_attr->max_recv_sge;
+		rq->wqe_size = init_attr->rq_wqe_sz;
+		rq->max_wqe = (init_attr->rq_slots * slot_size) /
+				init_attr->rq_wqe_sz;
+		rq->max_sw_wqe = rq->max_wqe;
+		rq->q_full_delta = 0;
+		rq->sg_info.pgsize = PAGE_SIZE;
+		rq->sg_info.pgshft = PAGE_SHIFT;
+	}
+
+	/* Setup SQ */
+	sq = &qplqp->sq;
+	sq->max_sge = init_attr->max_send_sge;
+	sq->wqe_size = init_attr->sq_wqe_sz;
+	sq->max_wqe = init_attr->sq_slots; /* SQ in var-wqe mode */
+	sq->max_sw_wqe = sq->max_wqe;
+	sq->q_full_delta = 0;
+	sq->sg_info.pgsize = PAGE_SIZE;
+	sq->sg_info.pgshft = PAGE_SHIFT;
+
+	return 0;
+}
+
+static int bnxt_re_dv_init_user_qp(struct bnxt_re_dev *rdev,
+				   struct ib_ucontext *context,
+				   struct bnxt_re_qp *qp,
+				   struct bnxt_re_srq *srq,
+				   struct bnxt_re_dv_create_qp_req *init_attr,
+				   struct bnxt_re_dv_umem *sq_umem,
+				   struct bnxt_re_dv_umem *rq_umem)
+{
+	struct bnxt_qplib_sg_info *sginfo;
+	struct bnxt_re_dv_umem *dv_umem;
+	struct bnxt_qplib_qp *qplib_qp;
+	int rc = -EINVAL;
+
+	if (!sq_umem || (!srq && !rq_umem))
+		return rc;
+
+	qplib_qp = &qp->qplib_qp;
+	qplib_qp->qp_handle = init_attr->qp_handle;
+	sginfo = &qplib_qp->sq.sg_info;
+
+	/* SQ */
+	dv_umem = bnxt_re_dv_umem_get(rdev, context, sq_umem,
+				      init_attr->sq_umem_offset,
+				      init_attr->sq_len, sginfo);
+	if (IS_ERR(dv_umem)) {
+		rc = PTR_ERR(dv_umem);
+		dev_err(rdev_to_dev(rdev), "%s: bnxt_re_dv_umem_get() failed! rc = %d\n",
+			__func__, rc);
+		return rc;
+	}
+	qp->sq_umem = dv_umem;
+	qp->sumem = dv_umem->umem;
+	dev_dbg(rdev_to_dev(rdev),
+		"%s: umem: 0x%llx npages: %d page_size: %d page_shift: %d\n",
+		__func__, (u64)(dv_umem->umem), sginfo->npages, sginfo->pgsize, sginfo->pgshft);
+
+	/* SRQ */
+	if (srq) {
+		qplib_qp->srq = &srq->qplib_srq;
+		goto done;
+	}
+
+	/* RQ */
+	sginfo = &qplib_qp->rq.sg_info;
+	dv_umem = bnxt_re_dv_umem_get(rdev, context, rq_umem,
+				      init_attr->rq_umem_offset,
+				      init_attr->rq_len, sginfo);
+	if (IS_ERR(dv_umem)) {
+		rc = PTR_ERR(dv_umem);
+		dev_err(rdev_to_dev(rdev), "%s: bnxt_re_dv_umem_get() failed! rc = %d\n",
+			__func__, rc);
+		goto rqfail;
+	}
+	qp->rq_umem = dv_umem;
+	qp->rumem = dv_umem->umem;
+	dev_dbg(rdev_to_dev(rdev),
+		"%s: umem: 0x%llx npages: %d page_size: %d page_shift: %d\n",
+		__func__, (u64)(dv_umem->umem), sginfo->npages, sginfo->pgsize, sginfo->pgshft);
+
+done:
+	qplib_qp->is_user = true;
+	return 0;
+rqfail:
+	ib_umem_release(qp->sumem);
+	kfree(qp->sq_umem);
+	qplib_qp->sq.sg_info.umem = NULL;
+	return rc;
+}
+
+static void
+bnxt_re_dv_qp_init_msn(struct bnxt_re_qp *qp,
+		       struct bnxt_re_dv_create_qp_req *req)
+{
+	struct bnxt_qplib_qp *qplib_qp = &qp->qplib_qp;
+
+	qplib_qp->is_host_msn_tbl = true;
+	qplib_qp->msn = 0;
+	qplib_qp->psn_sz = req->sq_psn_sz;
+	qplib_qp->msn_tbl_sz = req->sq_psn_sz * req->sq_npsn;
+}
+
+/* Init some members of ib_qp for now; this may
+ * not be needed in the final DV implementation.
+ * Reference code: core/verbs.c::create_qp()
+ */
+static void bnxt_re_dv_init_ib_qp(struct bnxt_re_dev *rdev,
+				  struct bnxt_re_qp *re_qp)
+{
+	struct bnxt_qplib_qp *qplib_qp = &re_qp->qplib_qp;
+	struct ib_qp *ib_qp = &re_qp->ib_qp;
+
+	ib_qp->device = &rdev->ibdev;
+	ib_qp->qp_num = qplib_qp->id;
+	ib_qp->real_qp = ib_qp;
+	ib_qp->qp_type = IB_QPT_RC;
+	ib_qp->send_cq = &re_qp->scq->ib_cq;
+	ib_qp->recv_cq = &re_qp->rcq->ib_cq;
+}
+
+static void bnxt_re_dv_init_qp(struct bnxt_re_dev *rdev,
+			       struct bnxt_re_qp *qp)
+{
+	u32 active_qps, tmp_qps;
+
+	spin_lock_init(&qp->sq_lock);
+	spin_lock_init(&qp->rq_lock);
+	INIT_LIST_HEAD(&qp->list);
+	mutex_lock(&rdev->qp_lock);
+	list_add_tail(&qp->list, &rdev->qp_list);
+	mutex_unlock(&rdev->qp_lock);
+	atomic_inc(&rdev->stats.res.qp_count);
+	active_qps = atomic_read(&rdev->stats.res.qp_count);
+	if (active_qps > rdev->stats.res.qp_watermark)
+		rdev->stats.res.qp_watermark = active_qps;
+
+	/* Get the counters for RC QPs */
+	tmp_qps = atomic_inc_return(&rdev->stats.res.rc_qp_count);
+	if (tmp_qps > rdev->stats.res.rc_qp_watermark)
+		rdev->stats.res.rc_qp_watermark = tmp_qps;
+
+	bnxt_re_dv_init_ib_qp(rdev, qp);
+}
+
+static int UVERBS_HANDLER(BNXT_RE_METHOD_DV_CREATE_QP)(struct uverbs_attr_bundle *attrs)
+{
+	struct ib_uobject *uobj =
+		uverbs_attr_get_uobject(attrs, BNXT_RE_DV_CREATE_QP_HANDLE);
+	struct bnxt_re_alloc_dbr_obj *dbr_obj = NULL;
+	struct bnxt_re_dv_create_qp_resp resp = {};
+	struct bnxt_re_dv_create_qp_req req = {};
+	struct bnxt_re_dv_umem *sq_umem = NULL;
+	struct bnxt_re_dv_umem *rq_umem = NULL;
+	struct bnxt_re_ucontext *re_uctx;
+	struct bnxt_re_srq *srq = NULL;
+	struct bnxt_re_dv_umem *umem;
+	struct ib_ucontext *ib_uctx;
+	struct bnxt_re_cq *send_cq;
+	struct bnxt_re_cq *recv_cq;
+	struct bnxt_re_dev *rdev;
+	struct bnxt_re_qp *re_qp;
+	struct ib_srq *ib_srq;
+	int ret;
+
+	if (IS_ERR(uobj))
+		return PTR_ERR(uobj);
+
+	ib_uctx = ib_uverbs_get_ucontext(attrs);
+	if (IS_ERR(ib_uctx))
+		return PTR_ERR(ib_uctx);
+
+	re_uctx = container_of(ib_uctx, struct bnxt_re_ucontext, ib_uctx);
+	rdev = re_uctx->rdev;
+
+	ret = uverbs_copy_from_or_zero(&req, attrs, BNXT_RE_DV_CREATE_QP_REQ);
+	if (ret) {
+		dev_err(rdev_to_dev(rdev), "%s: uverbs_copy_from() failed: %d\n",
+			__func__, ret);
+		return ret;
+	}
+
+	send_cq = uverbs_attr_get_obj(attrs,
+				      BNXT_RE_DV_CREATE_QP_SEND_CQ_HANDLE);
+	if (IS_ERR(send_cq))
+		return PTR_ERR(send_cq);
+
+	recv_cq = uverbs_attr_get_obj(attrs,
+				      BNXT_RE_DV_CREATE_QP_RECV_CQ_HANDLE);
+	if (IS_ERR(recv_cq))
+		return PTR_ERR(recv_cq);
+
+	bnxt_re_print_dv_qp_attr(rdev, send_cq, recv_cq, &req);
+
+	re_qp = kzalloc(sizeof(*re_qp), GFP_KERNEL);
+	if (!re_qp)
+		return -ENOMEM;
+
+	re_qp->rdev = rdev;
+	umem = uverbs_attr_get_obj(attrs, BNXT_RE_DV_CREATE_QP_SQ_UMEM_HANDLE);
+	if (!IS_ERR(umem))
+		sq_umem = umem;
+
+	umem = uverbs_attr_get_obj(attrs, BNXT_RE_DV_CREATE_QP_RQ_UMEM_HANDLE);
+	if (!IS_ERR(umem))
+		rq_umem = umem;
+
+	ib_srq = uverbs_attr_get_obj(attrs, BNXT_RE_DV_CREATE_QP_SRQ_HANDLE);
+	if (!IS_ERR(ib_srq))
+		srq = container_of(ib_srq, struct bnxt_re_srq, ib_srq);
+
+	if (uverbs_attr_is_valid(attrs, BNXT_RE_DV_CREATE_QP_DBR_HANDLE))
+		dbr_obj = uverbs_attr_get_obj(attrs, BNXT_RE_DV_CREATE_QP_DBR_HANDLE);
+
+	ret = bnxt_re_dv_init_qp_attr(re_qp, ib_uctx, send_cq, recv_cq, srq,
+				      dbr_obj, &req);
+	if (ret) {
+		dev_err(rdev_to_dev(rdev), "Failed to initialize qp attr");
+		return ret;
+	}
+
+	ret = bnxt_re_dv_init_user_qp(rdev, ib_uctx, re_qp, srq, &req, sq_umem, rq_umem);
+	if (ret)
+		return ret;
+
+	bnxt_re_dv_qp_init_msn(re_qp, &req);
+
+	ret = bnxt_re_setup_qp_hwqs(re_qp, true);
+	if (ret)
+		goto free_umem;
+
+	ret = bnxt_qplib_create_qp(&rdev->qplib_res, &re_qp->qplib_qp);
+	if (ret) {
+		dev_err(rdev_to_dev(rdev), "create HW QP failed!");
+		goto free_hwq;
+	}
+
+	resp.qpid = re_qp->qplib_qp.id;
+	ret = bnxt_re_dv_uverbs_copy_to(rdev, attrs, BNXT_RE_DV_CREATE_QP_RESP,
+					&resp, sizeof(resp));
+	if (ret) {
+		dev_err(rdev_to_dev(rdev),
+			"%s: Failed to copy qp response: %d\n", __func__, ret);
+		goto free_qplib;
+	}
+
+	bnxt_re_dv_finalize_uobj(uobj, re_qp, attrs, BNXT_RE_DV_CREATE_QP_HANDLE);
+	bnxt_re_dv_init_qp(rdev, re_qp);
+	re_qp->is_dv_qp = true;
+	atomic_inc(&rdev->dv_qp_count);
+	dev_dbg(rdev_to_dev(rdev), "%s: Created QP: 0x%llx, handle: 0x%x\n",
+		__func__, (u64)re_qp, uobj->id);
+	if (dbr_obj)
+		dev_dbg(rdev_to_dev(rdev), "%s: QP DPI index: 0x%x\n",
+			__func__, re_qp->qplib_qp.dpi->dpi);
+
+	return 0;
+
+free_qplib:
+	bnxt_qplib_destroy_qp(&rdev->qplib_res, &re_qp->qplib_qp);
+free_hwq:
+	bnxt_qplib_free_qp_res(&rdev->qplib_res, &re_qp->qplib_qp);
+free_umem:
+	bnxt_re_qp_free_umem(re_qp);
+	return ret;
+}
+
+static int bnxt_re_dv_free_qp(struct ib_uobject *uobj,
+			      enum rdma_remove_reason why,
+			      struct uverbs_attr_bundle *attrs)
+{
+	struct bnxt_re_qp *qp = uobj->object;
+	struct bnxt_re_dev *rdev = qp->rdev;
+	struct bnxt_qplib_qp *qplib_qp = &qp->qplib_qp;
+	struct bnxt_qplib_nq *scq_nq = NULL;
+	struct bnxt_qplib_nq *rcq_nq = NULL;
+	int rc;
+
+	dev_dbg(rdev_to_dev(rdev), "%s: Destroy QP: 0x%llx, handle: 0x%x\n",
+		__func__, (u64)qp, uobj->id);
+
+	mutex_lock(&rdev->qp_lock);
+	list_del(&qp->list);
+	atomic_dec_return(&rdev->stats.res.qp_count);
+	if (qp->qplib_qp.type == CMDQ_CREATE_QP_TYPE_RC)
+		atomic_dec(&rdev->stats.res.rc_qp_count);
+	mutex_unlock(&rdev->qp_lock);
+
+	rc = bnxt_qplib_destroy_qp(&rdev->qplib_res, qplib_qp);
+	if (rc)
+		dev_err_ratelimited(rdev_to_dev(rdev), "%s id = %d failed rc = %d",
+				    __func__, qplib_qp->id, rc);
+
+	bnxt_qplib_free_qp_res(&rdev->qplib_res, qplib_qp);
+	bnxt_re_qp_free_umem(qp);
+
+	/* Flush all the entries of notification queue associated with
+	 * given qp.
+	 */
+	scq_nq = qplib_qp->scq->nq;
+	rcq_nq = qplib_qp->rcq->nq;
+	bnxt_re_synchronize_nq(scq_nq);
+	if (scq_nq != rcq_nq)
+		bnxt_re_synchronize_nq(rcq_nq);
+
+	if (qp->sgid_attr)
+		rdma_put_gid_attr(qp->sgid_attr);
+	atomic_dec(&rdev->dv_qp_count);
+	kfree(qp);
+	uobj->object = NULL;
+	return 0;
+}
+
+static void bnxt_re_copyout_ah_attr(struct ib_uverbs_ah_attr *dattr,
+				    struct rdma_ah_attr *sattr)
+{
+	dattr->sl               = sattr->sl;
+	memcpy(dattr->grh.dgid, &sattr->grh.dgid, 16);
+	dattr->grh.flow_label = sattr->grh.flow_label;
+	dattr->grh.hop_limit = sattr->grh.hop_limit;
+	dattr->grh.sgid_index = sattr->grh.sgid_index;
+	dattr->grh.traffic_class = sattr->grh.traffic_class;
+}
+
+static void bnxt_re_dv_copy_qp_attr_out(struct bnxt_re_dev *rdev,
+					struct ib_uverbs_qp_attr *out,
+					struct ib_qp_attr *qp_attr,
+					struct ib_qp_init_attr *qp_init_attr)
+{
+	out->qp_state = qp_attr->qp_state;
+	out->cur_qp_state = qp_attr->cur_qp_state;
+	out->path_mtu = qp_attr->path_mtu;
+	out->path_mig_state = qp_attr->path_mig_state;
+	out->qkey = qp_attr->qkey;
+	out->rq_psn = qp_attr->rq_psn;
+	out->sq_psn = qp_attr->sq_psn;
+	out->dest_qp_num = qp_attr->dest_qp_num;
+	out->qp_access_flags = qp_attr->qp_access_flags;
+	out->max_send_wr = qp_attr->cap.max_send_wr;
+	out->max_recv_wr = qp_attr->cap.max_recv_wr;
+	out->max_send_sge = qp_attr->cap.max_send_sge;
+	out->max_recv_sge = qp_attr->cap.max_recv_sge;
+	out->max_inline_data = qp_attr->cap.max_inline_data;
+	out->pkey_index = qp_attr->pkey_index;
+	out->alt_pkey_index = qp_attr->alt_pkey_index;
+	out->en_sqd_async_notify = qp_attr->en_sqd_async_notify;
+	out->sq_draining = qp_attr->sq_draining;
+	out->max_rd_atomic = qp_attr->max_rd_atomic;
+	out->max_dest_rd_atomic = qp_attr->max_dest_rd_atomic;
+	out->min_rnr_timer = qp_attr->min_rnr_timer;
+	out->port_num = qp_attr->port_num;
+	out->timeout = qp_attr->timeout;
+	out->retry_cnt = qp_attr->retry_cnt;
+	out->rnr_retry = qp_attr->rnr_retry;
+	out->alt_port_num = qp_attr->alt_port_num;
+	out->alt_timeout = qp_attr->alt_timeout;
+
+	bnxt_re_copyout_ah_attr(&out->ah_attr, &qp_attr->ah_attr);
+	bnxt_re_copyout_ah_attr(&out->alt_ah_attr, &qp_attr->alt_ah_attr);
+}
+
+static int UVERBS_HANDLER(BNXT_RE_METHOD_DV_QUERY_QP)(struct uverbs_attr_bundle *attrs)
+{
+	struct ib_qp_init_attr qp_init_attr = {};
+	struct ib_uverbs_qp_attr attr = {};
+	struct bnxt_re_ucontext *re_uctx;
+	struct ib_qp_attr qp_attr = {};
+	struct ib_ucontext *ib_uctx;
+	struct bnxt_re_dev *rdev;
+	struct ib_uobject *uobj;
+	struct bnxt_re_qp *qp;
+	int ret;
+
+	uobj = uverbs_attr_get_uobject(attrs, BNXT_RE_DV_QUERY_QP_HANDLE);
+	qp = uobj->object;
+
+	ib_uctx = ib_uverbs_get_ucontext(attrs);
+	if (IS_ERR(ib_uctx))
+		return PTR_ERR(ib_uctx);
+
+	re_uctx = container_of(ib_uctx, struct bnxt_re_ucontext, ib_uctx);
+	rdev = re_uctx->rdev;
+
+	ret = bnxt_re_query_qp_attr(qp, &qp_attr, 0, &qp_init_attr);
+	if (ret)
+		return ret;
+
+	bnxt_re_dv_copy_qp_attr_out(rdev, &attr, &qp_attr, &qp_init_attr);
+
+	ret = uverbs_copy_to(attrs, BNXT_RE_DV_QUERY_QP_ATTR, &attr,
+			     sizeof(attr));
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+struct bnxt_re_resolve_cb_context {
+	struct completion comp;
+	int status;
+};
+
+static void bnxt_re_resolve_cb(int status, struct sockaddr *src_addr,
+			       struct rdma_dev_addr *addr, void *context)
+{
+	struct bnxt_re_resolve_cb_context *ctx = context;
+
+	ctx->status = status;
+	complete(&ctx->comp);
+}
+
+static int bnxt_re_resolve_eth_dmac_by_grh(const struct ib_gid_attr *sgid_attr,
+					   const union ib_gid *sgid,
+					   const union ib_gid *dgid,
+					   u8 *dmac, u8 *smac,
+					   int *hoplimit)
+{
+	struct bnxt_re_resolve_cb_context ctx;
+	struct rdma_dev_addr dev_addr;
+	union {
+		struct sockaddr_in  _sockaddr_in;
+		struct sockaddr_in6 _sockaddr_in6;
+	} sgid_addr, dgid_addr;
+	int rc;
+
+	rdma_gid2ip((struct sockaddr *)&sgid_addr, sgid);
+	rdma_gid2ip((struct sockaddr *)&dgid_addr, dgid);
+
+	memset(&dev_addr, 0, sizeof(dev_addr));
+	dev_addr.sgid_attr = sgid_attr;
+	dev_addr.net = &init_net;
+
+	init_completion(&ctx.comp);
+	rc = rdma_resolve_ip((struct sockaddr *)&sgid_addr,
+			     (struct sockaddr *)&dgid_addr, &dev_addr, 1000,
+			     bnxt_re_resolve_cb, true, &ctx);
+	if (rc)
+		return rc;
+
+	wait_for_completion(&ctx.comp);
+
+	rc = ctx.status;
+	if (rc)
+		return rc;
+
+	memcpy(dmac, dev_addr.dst_dev_addr, ETH_ALEN);
+	memcpy(smac, dev_addr.src_dev_addr, ETH_ALEN);
+	*hoplimit = dev_addr.hoplimit;
+	return 0;
+}
+
+static int bnxt_re_resolve_gid_dmac(struct ib_device *device,
+				    struct rdma_ah_attr *ah_attr)
+{
+	struct bnxt_re_dev *rdev = to_bnxt_re_dev(device, ibdev);
+	const struct ib_gid_attr *sgid_attr;
+	struct ib_global_route *grh;
+	u8 smac[ETH_ALEN] = {};
+	int hop_limit = 0xff;
+	int rc = 0;
+
+	grh = rdma_ah_retrieve_grh(ah_attr);
+	if (!grh)
+		return -EINVAL;
+
+	sgid_attr = grh->sgid_attr;
+	if (!sgid_attr)
+		return -EINVAL;
+
+	/*  Link local destination and RoCEv1 SGID */
+	if (rdma_link_local_addr((struct in6_addr *)grh->dgid.raw) &&
+	    sgid_attr->gid_type == IB_GID_TYPE_ROCE) {
+		rdma_get_ll_mac((struct in6_addr *)grh->dgid.raw,
+				ah_attr->roce.dmac);
+		return rc;
+	}
+
+	dev_dbg(rdev_to_dev(rdev),
+		"%s: netdev: %s sgid: %pI6 dgid: %pI6 gid_type: %d gid_index: %d\n",
+		__func__, rdev->netdev ? rdev->netdev->name : "NULL",
+		&sgid_attr->gid, &grh->dgid, sgid_attr->gid_type, grh->sgid_index);
+
+	rc = bnxt_re_resolve_eth_dmac_by_grh(sgid_attr, &sgid_attr->gid,
+					     &grh->dgid, ah_attr->roce.dmac,
+					     smac, &hop_limit);
+	if (!rc) {
+		grh->hop_limit = hop_limit;
+		dev_dbg(rdev_to_dev(rdev), "%s: Resolved: dmac: %pM smac: %pM\n",
+			__func__, ah_attr->roce.dmac, smac);
+	}
+	return rc;
+}
+
+static int bnxt_re_resolve_eth_dmac(struct ib_device *device,
+				    struct rdma_ah_attr *ah_attr)
+{
+	int rc = 0;
+
+	/* unicast */
+	if (!rdma_is_multicast_addr((struct in6_addr *)ah_attr->grh.dgid.raw)) {
+		rc = bnxt_re_resolve_gid_dmac(device, ah_attr);
+		if (rc) {
+			dev_err(&device->dev, "%s: Failed to resolve gid dmac: %d\n",
+				__func__, rc);
+		}
+		return rc;
+	}
+
+	/* multicast */
+	if (ipv6_addr_v4mapped((struct in6_addr *)ah_attr->grh.dgid.raw)) {
+		__be32 addr = 0;
+
+		memcpy(&addr, ah_attr->grh.dgid.raw + 12, 4);
+		ip_eth_mc_map(addr, (char *)ah_attr->roce.dmac);
+	} else {
+		ipv6_eth_mc_map((struct in6_addr *)ah_attr->grh.dgid.raw,
+				(char *)ah_attr->roce.dmac);
+	}
+	return rc;
+}
+
+static int bnxt_re_copyin_ah_attr(struct ib_device *device,
+				  struct rdma_ah_attr *dattr,
+				  struct ib_uverbs_ah_attr *sattr)
+{
+	const struct ib_gid_attr *sgid_attr;
+	struct ib_global_route *grh;
+	int rc;
+
+	dattr->sl		= sattr->sl;
+	dattr->static_rate	= sattr->static_rate;
+	dattr->port_num		= sattr->port_num;
+
+	if (!sattr->is_global)
+		return 0;
+
+	grh = &dattr->grh;
+	if (grh->sgid_attr)
+		return 0;
+
+	sgid_attr = rdma_get_gid_attr(device, sattr->port_num,
+				      sattr->grh.sgid_index);
+	if (IS_ERR(sgid_attr))
+		return PTR_ERR(sgid_attr);
+	grh->sgid_attr = sgid_attr;
+
+	memcpy(&grh->dgid, sattr->grh.dgid, 16);
+	grh->flow_label = sattr->grh.flow_label;
+	grh->hop_limit = sattr->grh.hop_limit;
+	grh->sgid_index = sattr->grh.sgid_index;
+	grh->traffic_class = sattr->grh.traffic_class;
+
+	rc = bnxt_re_resolve_eth_dmac(device, dattr);
+	if (rc)
+		rdma_put_gid_attr(sgid_attr);
+	return rc;
+}
+
+static int bnxt_re_dv_copy_qp_attr(struct bnxt_re_dev *rdev,
+				   struct ib_qp_attr *dst,
+				   struct ib_uverbs_qp_attr *src)
+{
+	int rc;
+
+	if (src->qp_attr_mask & IB_QP_ALT_PATH)
+		return -EINVAL;
+
+	dst->qp_state           = src->qp_state;
+	dst->cur_qp_state       = src->cur_qp_state;
+	dst->path_mtu           = src->path_mtu;
+	dst->path_mig_state     = src->path_mig_state;
+	dst->qkey               = src->qkey;
+	dst->rq_psn             = src->rq_psn;
+	dst->sq_psn             = src->sq_psn;
+	dst->dest_qp_num        = src->dest_qp_num;
+	dst->qp_access_flags    = src->qp_access_flags;
+
+	dst->cap.max_send_wr        = src->max_send_wr;
+	dst->cap.max_recv_wr        = src->max_recv_wr;
+	dst->cap.max_send_sge       = src->max_send_sge;
+	dst->cap.max_recv_sge       = src->max_recv_sge;
+	dst->cap.max_inline_data    = src->max_inline_data;
+
+	if (src->qp_attr_mask & IB_QP_AV) {
+		rc = bnxt_re_copyin_ah_attr(&rdev->ibdev, &dst->ah_attr,
+					    &src->ah_attr);
+		if (rc)
+			return rc;
+	}
+
+	dst->pkey_index         = src->pkey_index;
+	dst->alt_pkey_index     = src->alt_pkey_index;
+	dst->en_sqd_async_notify = src->en_sqd_async_notify;
+	dst->sq_draining        = src->sq_draining;
+	dst->max_rd_atomic      = src->max_rd_atomic;
+	dst->max_dest_rd_atomic = src->max_dest_rd_atomic;
+	dst->min_rnr_timer      = src->min_rnr_timer;
+	dst->port_num           = src->port_num;
+	dst->timeout            = src->timeout;
+	dst->retry_cnt          = src->retry_cnt;
+	dst->rnr_retry          = src->rnr_retry;
+	dst->alt_port_num       = src->alt_port_num;
+	dst->alt_timeout        = src->alt_timeout;
+
+	return 0;
+}
+
+static int bnxt_re_dv_modify_qp(struct ib_uobject *uobj,
+				struct uverbs_attr_bundle *attrs)
+{
+	struct ib_uverbs_qp_attr qp_u_attr = {};
+	struct bnxt_re_ucontext *re_uctx;
+	struct ib_qp_attr qp_attr = {};
+	struct ib_ucontext *ib_uctx;
+	struct bnxt_re_dev *rdev;
+	struct bnxt_re_qp *qp;
+	int err;
+
+	qp = uobj->object;
+
+	ib_uctx = ib_uverbs_get_ucontext(attrs);
+	if (IS_ERR(ib_uctx))
+		return PTR_ERR(ib_uctx);
+
+	re_uctx = container_of(ib_uctx, struct bnxt_re_ucontext, ib_uctx);
+	rdev = re_uctx->rdev;
+
+	err = uverbs_copy_from_or_zero(&qp_u_attr, attrs, BNXT_RE_DV_MODIFY_QP_REQ);
+	if (err) {
+		dev_err(rdev_to_dev(rdev), "%s: uverbs_copy_from() failed: %d\n",
+			__func__, err);
+		return err;
+	}
+
+	err = bnxt_re_dv_copy_qp_attr(rdev, &qp_attr, &qp_u_attr);
+	if (err) {
+		dev_err(rdev_to_dev(rdev), "%s: Failed to copy qp_u_attr: %d\n",
+			__func__, err);
+		return err;
+	}
+
+	err = bnxt_re_modify_qp(&qp->ib_qp, &qp_attr, qp_u_attr.qp_attr_mask, NULL);
+	if (err) {
+		dev_err(rdev_to_dev(rdev),
+			"%s: Modify QP failed: 0x%llx, handle: 0x%x\n",
+			 __func__, (u64)qp, uobj->id);
+		if (qp_u_attr.qp_attr_mask & IB_QP_AV)
+			rdma_put_gid_attr(qp_attr.ah_attr.grh.sgid_attr);
+	} else {
+		dev_dbg(rdev_to_dev(rdev),
+			"%s: Modified QP: 0x%llx, handle: 0x%x\n",
+			__func__, (u64)qp, uobj->id);
+		if (qp_u_attr.qp_attr_mask & IB_QP_AV)
+			qp->sgid_attr = qp_attr.ah_attr.grh.sgid_attr;
+	}
+	return err;
+}
+
+static int UVERBS_HANDLER(BNXT_RE_METHOD_DV_MODIFY_QP)(struct uverbs_attr_bundle *attrs)
+{
+	struct ib_uobject *uobj;
+
+	uobj = uverbs_attr_get_uobject(attrs, BNXT_RE_DV_MODIFY_QP_HANDLE);
+	return bnxt_re_dv_modify_qp(uobj, attrs);
+}
+
+DECLARE_UVERBS_NAMED_METHOD(BNXT_RE_METHOD_DV_CREATE_QP,
+			    UVERBS_ATTR_IDR(BNXT_RE_DV_CREATE_QP_HANDLE,
+					    BNXT_RE_OBJECT_DV_QP,
+					    UVERBS_ACCESS_NEW,
+					    UA_MANDATORY),
+			    UVERBS_ATTR_PTR_IN(BNXT_RE_DV_CREATE_QP_REQ,
+					       UVERBS_ATTR_STRUCT(struct bnxt_re_dv_create_qp_req,
+								  rq_slots),
+					       UA_MANDATORY),
+			    UVERBS_ATTR_IDR(BNXT_RE_DV_CREATE_QP_SEND_CQ_HANDLE,
+					    UVERBS_IDR_ANY_OBJECT,
+					    UVERBS_ACCESS_READ,
+					    UA_MANDATORY),
+			    UVERBS_ATTR_IDR(BNXT_RE_DV_CREATE_QP_RECV_CQ_HANDLE,
+					    UVERBS_IDR_ANY_OBJECT,
+					    UVERBS_ACCESS_READ,
+					    UA_MANDATORY),
+			    UVERBS_ATTR_IDR(BNXT_RE_DV_CREATE_QP_SQ_UMEM_HANDLE,
+					    BNXT_RE_OBJECT_UMEM,
+					    UVERBS_ACCESS_READ,
+					    UA_OPTIONAL),
+			    UVERBS_ATTR_IDR(BNXT_RE_DV_CREATE_QP_RQ_UMEM_HANDLE,
+					    BNXT_RE_OBJECT_UMEM,
+					    UVERBS_ACCESS_READ,
+					    UA_OPTIONAL),
+			    UVERBS_ATTR_IDR(BNXT_RE_DV_CREATE_QP_SRQ_HANDLE,
+					    UVERBS_OBJECT_SRQ,
+					    UVERBS_ACCESS_READ,
+					    UA_OPTIONAL),
+			    UVERBS_ATTR_IDR(BNXT_RE_DV_CREATE_QP_DBR_HANDLE,
+					    BNXT_RE_OBJECT_DBR,
+					    UVERBS_ACCESS_READ,
+					    UA_OPTIONAL),
+			    UVERBS_ATTR_PTR_OUT(BNXT_RE_DV_CREATE_QP_RESP,
+						UVERBS_ATTR_STRUCT(struct bnxt_re_dv_create_qp_resp,
+								   qpid),
+						UA_MANDATORY));
+
+DECLARE_UVERBS_NAMED_METHOD_DESTROY(BNXT_RE_METHOD_DV_DESTROY_QP,
+				    UVERBS_ATTR_IDR(BNXT_RE_DV_DESTROY_QP_HANDLE,
+						    BNXT_RE_OBJECT_DV_QP,
+						    UVERBS_ACCESS_DESTROY,
+						    UA_MANDATORY));
+
+DECLARE_UVERBS_NAMED_METHOD(BNXT_RE_METHOD_DV_QUERY_QP,
+			    UVERBS_ATTR_IDR(BNXT_RE_DV_QUERY_QP_HANDLE,
+					    BNXT_RE_OBJECT_DV_QP,
+					    UVERBS_ACCESS_READ,
+					    UA_MANDATORY),
+			    UVERBS_ATTR_PTR_OUT(BNXT_RE_DV_QUERY_QP_ATTR,
+						UVERBS_ATTR_STRUCT(struct ib_uverbs_qp_attr,
+								   reserved),
+						UA_MANDATORY));
+
+DECLARE_UVERBS_NAMED_METHOD(BNXT_RE_METHOD_DV_MODIFY_QP,
+			    UVERBS_ATTR_IDR(BNXT_RE_DV_MODIFY_QP_HANDLE,
+					    UVERBS_IDR_ANY_OBJECT,
+					    UVERBS_ACCESS_READ,
+					    UA_MANDATORY),
+			    UVERBS_ATTR_PTR_IN(BNXT_RE_DV_MODIFY_QP_REQ,
+					       UVERBS_ATTR_STRUCT(struct ib_uverbs_qp_attr,
+								  reserved),
+					       UA_OPTIONAL));
+
+DECLARE_UVERBS_NAMED_OBJECT(BNXT_RE_OBJECT_DV_QP,
+			    UVERBS_TYPE_ALLOC_IDR(bnxt_re_dv_free_qp),
+			    &UVERBS_METHOD(BNXT_RE_METHOD_DV_CREATE_QP),
+			    &UVERBS_METHOD(BNXT_RE_METHOD_DV_DESTROY_QP),
+			    &UVERBS_METHOD(BNXT_RE_METHOD_DV_MODIFY_QP),
+			    &UVERBS_METHOD(BNXT_RE_METHOD_DV_QUERY_QP),
+);
+
 const struct uapi_definition bnxt_re_uapi_defs[] = {
 	UAPI_DEF_CHAIN_OBJ_TREE_NAMED(BNXT_RE_OBJECT_ALLOC_PAGE),
 	UAPI_DEF_CHAIN_OBJ_TREE_NAMED(BNXT_RE_OBJECT_NOTIFY_DRV),
 	UAPI_DEF_CHAIN_OBJ_TREE_NAMED(BNXT_RE_OBJECT_GET_TOGGLE_MEM),
 	UAPI_DEF_CHAIN_OBJ_TREE_NAMED(BNXT_RE_OBJECT_DBR),
 	UAPI_DEF_CHAIN_OBJ_TREE_NAMED(BNXT_RE_OBJECT_UMEM),
+	UAPI_DEF_CHAIN_OBJ_TREE_NAMED(BNXT_RE_OBJECT_DV_CQ),
+	UAPI_DEF_CHAIN_OBJ_TREE_NAMED(BNXT_RE_OBJECT_DV_QP),
 	{}
 };
diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
index 272934c33c6b..4e50c0b6e253 100644
--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
@@ -971,10 +971,12 @@ static void bnxt_re_del_unique_gid(struct bnxt_re_dev *rdev)
 		dev_err(rdev_to_dev(rdev), "Failed to delete unique GID, rc: %d\n", rc);
 }
 
-static void bnxt_re_qp_free_umem(struct bnxt_re_qp *qp)
+void bnxt_re_qp_free_umem(struct bnxt_re_qp *qp)
 {
 	ib_umem_release(qp->rumem);
+	kfree(qp->rq_umem);
 	ib_umem_release(qp->sumem);
+	kfree(qp->sq_umem);
 }
 
 /* Queue Pairs */
@@ -1033,7 +1035,7 @@ int bnxt_re_destroy_qp(struct ib_qp *ib_qp, struct ib_udata *udata)
 	return 0;
 }
 
-static u8 __from_ib_qp_type(enum ib_qp_type type)
+u8 __from_ib_qp_type(enum ib_qp_type type)
 {
 	switch (type) {
 	case IB_QPT_GSI:
@@ -1269,7 +1271,7 @@ static int bnxt_re_qp_alloc_init_xrrq(struct bnxt_re_qp *qp)
 	return rc;
 }
 
-static int bnxt_re_setup_qp_hwqs(struct bnxt_re_qp *qp)
+int bnxt_re_setup_qp_hwqs(struct bnxt_re_qp *qp, bool is_dv_qp)
 {
 	struct bnxt_qplib_res *res = &qp->rdev->qplib_res;
 	struct bnxt_qplib_qp *qplib_qp = &qp->qplib_qp;
@@ -1283,12 +1285,17 @@ static int bnxt_re_setup_qp_hwqs(struct bnxt_re_qp *qp)
 	hwq_attr.res = res;
 	hwq_attr.sginfo = &sq->sg_info;
 	hwq_attr.stride = bnxt_qplib_get_stride();
-	hwq_attr.depth = bnxt_qplib_get_depth(sq, wqe_mode, true);
 	hwq_attr.aux_stride = qplib_qp->psn_sz;
-	hwq_attr.aux_depth = (qplib_qp->psn_sz) ?
-		bnxt_qplib_set_sq_size(sq, wqe_mode) : 0;
-	if (qplib_qp->is_host_msn_tbl && qplib_qp->psn_sz)
+	if (!is_dv_qp) {
+		hwq_attr.depth = bnxt_qplib_get_depth(sq, wqe_mode, true);
+		hwq_attr.aux_depth = (qplib_qp->psn_sz) ?
+				bnxt_qplib_set_sq_size(sq, wqe_mode) : 0;
+		if (qplib_qp->is_host_msn_tbl && qplib_qp->psn_sz)
+			hwq_attr.aux_depth = qplib_qp->msn_tbl_sz;
+	} else {
+		hwq_attr.depth = sq->max_wqe;
 		hwq_attr.aux_depth = qplib_qp->msn_tbl_sz;
+	}
 	hwq_attr.type = HWQ_TYPE_QUEUE;
 	rc = bnxt_qplib_alloc_init_hwq(&sq->hwq, &hwq_attr);
 	if (rc)
@@ -1299,10 +1306,16 @@ static int bnxt_re_setup_qp_hwqs(struct bnxt_re_qp *qp)
 		      CMDQ_CREATE_QP_SQ_LVL_SFT);
 	sq->hwq.pg_sz_lvl = pg_sz_lvl;
 
+	if (qplib_qp->srq)
+		goto done;
+
 	hwq_attr.res = res;
 	hwq_attr.sginfo = &rq->sg_info;
 	hwq_attr.stride = bnxt_qplib_get_stride();
-	hwq_attr.depth = bnxt_qplib_get_depth(rq, qplib_qp->wqe_mode, false);
+	if (!is_dv_qp)
+		hwq_attr.depth = bnxt_qplib_get_depth(rq, qplib_qp->wqe_mode, false);
+	else
+		hwq_attr.depth = rq->max_wqe * 3;
 	hwq_attr.aux_stride = 0;
 	hwq_attr.aux_depth = 0;
 	hwq_attr.type = HWQ_TYPE_QUEUE;
@@ -1315,6 +1328,7 @@ static int bnxt_re_setup_qp_hwqs(struct bnxt_re_qp *qp)
 		      CMDQ_CREATE_QP_RQ_LVL_SFT);
 	rq->hwq.pg_sz_lvl = pg_sz_lvl;
 
+done:
 	if (qplib_qp->psn_sz) {
 		rc = bnxt_re_qp_alloc_init_xrrq(qp);
 		if (rc)
@@ -1383,7 +1397,7 @@ static struct bnxt_re_qp *bnxt_re_create_shadow_qp
 	qp->qplib_qp.rq_hdr_buf_size = BNXT_QPLIB_MAX_GRH_HDR_SIZE_IPV6;
 	qp->qplib_qp.dpi = &rdev->dpi_privileged;
 
-	rc = bnxt_re_setup_qp_hwqs(qp);
+	rc = bnxt_re_setup_qp_hwqs(qp, false);
 	if (rc)
 		goto fail;
 
@@ -1680,7 +1694,7 @@ static int bnxt_re_init_qp_attr(struct bnxt_re_qp *qp, struct bnxt_re_pd *pd,
 
 	bnxt_re_qp_calculate_msn_psn_size(qp);
 
-	rc = bnxt_re_setup_qp_hwqs(qp);
+	rc = bnxt_re_setup_qp_hwqs(qp, false);
 	if (rc)
 		goto free_umem;
 
@@ -2499,10 +2513,9 @@ int bnxt_re_modify_qp(struct ib_qp *ib_qp, struct ib_qp_attr *qp_attr,
 	return rc;
 }
 
-int bnxt_re_query_qp(struct ib_qp *ib_qp, struct ib_qp_attr *qp_attr,
-		     int qp_attr_mask, struct ib_qp_init_attr *qp_init_attr)
+int bnxt_re_query_qp_attr(struct bnxt_re_qp *qp, struct ib_qp_attr *qp_attr,
+			  int attr_mask, struct ib_qp_init_attr *qp_init_attr)
 {
-	struct bnxt_re_qp *qp = container_of(ib_qp, struct bnxt_re_qp, ib_qp);
 	struct bnxt_re_dev *rdev = qp->rdev;
 	struct bnxt_qplib_qp *qplib_qp;
 	int rc;
@@ -2560,6 +2573,18 @@ int bnxt_re_query_qp(struct ib_qp *ib_qp, struct ib_qp_attr *qp_attr,
 	return rc;
 }
 
+int bnxt_re_query_qp(struct ib_qp *ib_qp, struct ib_qp_attr *qp_attr,
+		     int qp_attr_mask, struct ib_qp_init_attr *qp_init_attr)
+{
+	struct bnxt_re_qp *qp = container_of(ib_qp, struct bnxt_re_qp, ib_qp);
+
+	/* Not all of output fields are applicable, make sure to zero them */
+	memset(qp_init_attr, 0, sizeof(*qp_init_attr));
+	memset(qp_attr, 0, sizeof(*qp_attr));
+
+	return bnxt_re_query_qp_attr(qp, qp_attr, qp_attr_mask, qp_init_attr);
+}
+
 /* Routine for sending QP1 packets for RoCE V1 an V2
  */
 static int bnxt_re_build_qp1_send_v2(struct bnxt_re_qp *qp,
@@ -3247,7 +3272,7 @@ int bnxt_re_post_recv(struct ib_qp *ib_qp, const struct ib_recv_wr *wr,
 	return rc;
 }
 
-static struct bnxt_qplib_nq *bnxt_re_get_nq(struct bnxt_re_dev *rdev)
+struct bnxt_qplib_nq *bnxt_re_get_nq(struct bnxt_re_dev *rdev)
 {
 	int min, indx;
 
@@ -3262,7 +3287,7 @@ static struct bnxt_qplib_nq *bnxt_re_get_nq(struct bnxt_re_dev *rdev)
 	return &rdev->nqr->nq[min];
 }
 
-static void bnxt_re_put_nq(struct bnxt_re_dev *rdev, struct bnxt_qplib_nq *nq)
+void bnxt_re_put_nq(struct bnxt_re_dev *rdev, struct bnxt_qplib_nq *nq)
 {
 	mutex_lock(&rdev->nqr->load_lock);
 	nq->load--;
diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.h b/drivers/infiniband/hw/bnxt_re/ib_verbs.h
index 1ff89192a728..e48c2cb2e02b 100644
--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.h
+++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.h
@@ -96,6 +96,11 @@ struct bnxt_re_qp {
 	struct bnxt_re_cq	*scq;
 	struct bnxt_re_cq	*rcq;
 	struct dentry		*dentry;
+	/* Below members added for DV support */
+	bool			is_dv_qp;
+	struct bnxt_re_dv_umem *sq_umem;
+	struct bnxt_re_dv_umem *rq_umem;
+	const struct ib_gid_attr *sgid_attr;
 };
 
 struct bnxt_re_cq {
@@ -113,6 +118,8 @@ struct bnxt_re_cq {
 	int			resize_cqe;
 	void			*uctx_cq_page;
 	struct hlist_node	hash_entry;
+	bool			is_dv_cq;
+	struct bnxt_re_dv_umem	*umem_handle;
 	struct bnxt_re_ucontext *uctx;
 };
 
@@ -304,4 +311,9 @@ void bnxt_re_unlock_cqs(struct bnxt_re_qp *qp, unsigned long flags);
 struct bnxt_re_user_mmap_entry*
 bnxt_re_mmap_entry_insert(struct bnxt_re_ucontext *uctx, u64 mem_offset,
 			  enum bnxt_re_mmap_flag mmap_flag, u64 *offset);
+u8 __from_ib_qp_type(enum ib_qp_type type);
+int bnxt_re_setup_qp_hwqs(struct bnxt_re_qp *qp, bool is_dv_qp);
+void bnxt_re_qp_free_umem(struct bnxt_re_qp *qp);
+int bnxt_re_query_qp_attr(struct bnxt_re_qp *qp, struct ib_qp_attr *qp_attr,
+			  int attr_mask, struct ib_qp_init_attr *qp_init_attr);
 #endif /* __BNXT_RE_IB_VERBS_H__ */
diff --git a/include/uapi/rdma/bnxt_re-abi.h b/include/uapi/rdma/bnxt_re-abi.h
index 59a0b030de04..7baf30b7b1b0 100644
--- a/include/uapi/rdma/bnxt_re-abi.h
+++ b/include/uapi/rdma/bnxt_re-abi.h
@@ -164,6 +164,8 @@ enum bnxt_re_objects {
 	BNXT_RE_OBJECT_GET_TOGGLE_MEM,
 	BNXT_RE_OBJECT_DBR,
 	BNXT_RE_OBJECT_UMEM,
+	BNXT_RE_OBJECT_DV_CQ,
+	BNXT_RE_OBJECT_DV_QP,
 };
 
 enum bnxt_re_alloc_page_type {
@@ -264,4 +266,95 @@ enum bnxt_re_dv_umem_methods {
 	BNXT_RE_METHOD_UMEM_DEREG,
 };
 
+struct bnxt_re_dv_cq_req {
+	__u32 ncqe;
+	__aligned_u64 va;
+	__aligned_u64 comp_mask;
+};
+
+struct bnxt_re_dv_cq_resp {
+	__u32 cqid;
+	__u32 tail;
+	__u32 phase;
+	__u32 rsvd;
+	__aligned_u64 comp_mask;
+};
+
+enum bnxt_re_dv_create_cq_attrs {
+	BNXT_RE_DV_CREATE_CQ_HANDLE = (1U << UVERBS_ID_NS_SHIFT),
+	BNXT_RE_DV_CREATE_CQ_REQ,
+	BNXT_RE_DV_CREATE_CQ_UMEM_HANDLE,
+	BNXT_RE_DV_CREATE_CQ_UMEM_OFFSET,
+	BNXT_RE_DV_CREATE_CQ_RESP,
+};
+
+enum bnxt_re_dv_cq_methods {
+	BNXT_RE_METHOD_DV_CREATE_CQ = (1U << UVERBS_ID_NS_SHIFT),
+	BNXT_RE_METHOD_DV_DESTROY_CQ,
+};
+
+enum bnxt_re_dv_destroy_cq_attrs {
+	BNXT_RE_DV_DESTROY_CQ_HANDLE = (1U << UVERBS_ID_NS_SHIFT),
+};
+
+struct bnxt_re_dv_create_qp_req {
+	int qp_type;
+	__u32 max_send_wr;
+	__u32 max_recv_wr;
+	__u32 max_send_sge;
+	__u32 max_recv_sge;
+	__u32 max_inline_data;
+	__u32 pd_id;
+	__aligned_u64 qp_handle;
+	__aligned_u64 sq_va;
+	__u32 sq_umem_offset;
+	__u32 sq_len;   /* total len including MSN area */
+	__u32 sq_slots;
+	__u32 sq_wqe_sz;
+	__u32 sq_psn_sz;
+	__u32 sq_npsn;
+	__aligned_u64 rq_va;
+	__u32 rq_umem_offset;
+	__u32 rq_len;
+	__u32 rq_slots; /* == max_recv_wr */
+	__u32 rq_wqe_sz;
+};
+
+struct bnxt_re_dv_create_qp_resp {
+	__u32 qpid;
+};
+
+enum bnxt_re_dv_create_qp_attrs {
+	BNXT_RE_DV_CREATE_QP_HANDLE = (1U << UVERBS_ID_NS_SHIFT),
+	BNXT_RE_DV_CREATE_QP_REQ,
+	BNXT_RE_DV_CREATE_QP_SEND_CQ_HANDLE,
+	BNXT_RE_DV_CREATE_QP_RECV_CQ_HANDLE,
+	BNXT_RE_DV_CREATE_QP_SQ_UMEM_HANDLE,
+	BNXT_RE_DV_CREATE_QP_RQ_UMEM_HANDLE,
+	BNXT_RE_DV_CREATE_QP_SRQ_HANDLE,
+	BNXT_RE_DV_CREATE_QP_DBR_HANDLE,
+	BNXT_RE_DV_CREATE_QP_RESP
+};
+
+enum bnxt_re_dv_qp_methods {
+	BNXT_RE_METHOD_DV_CREATE_QP = (1U << UVERBS_ID_NS_SHIFT),
+	BNXT_RE_METHOD_DV_DESTROY_QP,
+	BNXT_RE_METHOD_DV_MODIFY_QP,
+	BNXT_RE_METHOD_DV_QUERY_QP,
+};
+
+enum bnxt_re_dv_destroy_qp_attrs {
+	BNXT_RE_DV_DESTROY_QP_HANDLE = (1U << UVERBS_ID_NS_SHIFT),
+};
+
+enum bnxt_re_var_dv_modify_qp_attrs {
+	BNXT_RE_DV_MODIFY_QP_HANDLE = (1U << UVERBS_ID_NS_SHIFT),
+	BNXT_RE_DV_MODIFY_QP_REQ,
+};
+
+enum bnxt_re_dv_query_qp_attrs {
+	BNXT_RE_DV_QUERY_QP_HANDLE = (1U << UVERBS_ID_NS_SHIFT),
+	BNXT_RE_DV_QUERY_QP_ATTR,
+};
+
 #endif /* __BNXT_RE_UVERBS_ABI_H__*/
-- 
2.51.2.636.ga99f379adf


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [PATCH rdma-next v2 1/4] RDMA/bnxt_re: Move the UAPI methods to a dedicated file
  2025-11-04  7:23 ` [PATCH rdma-next v2 1/4] RDMA/bnxt_re: Move the UAPI methods to a dedicated file Sriharsha Basavapatna
@ 2025-11-09  9:12   ` Leon Romanovsky
  2025-11-10 14:43     ` Sriharsha Basavapatna
  0 siblings, 1 reply; 12+ messages in thread
From: Leon Romanovsky @ 2025-11-09  9:12 UTC (permalink / raw)
  To: Sriharsha Basavapatna
  Cc: jgg, linux-rdma, andrew.gospodarek, selvin.xavier,
	kalesh-anakkur.purayil

On Tue, Nov 04, 2025 at 12:53:17PM +0530, Sriharsha Basavapatna wrote:
> From: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
> 
> This is in preparation for upcoming patches in the series.
> Driver has to support additional UAPIs for Direct verbs.
> Moving current UAPI implementation to a new file, dv.c.
> 
> Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
> Reviewed-by: Selvin Thyparampil Xavier <selvin.xavier@broadcom.com>
> Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
> ---
>  drivers/infiniband/hw/bnxt_re/Makefile   |   2 +-
>  drivers/infiniband/hw/bnxt_re/dv.c       | 356 +++++++++++++++++++++++
>  drivers/infiniband/hw/bnxt_re/ib_verbs.c | 305 +------------------
>  drivers/infiniband/hw/bnxt_re/ib_verbs.h |   3 +
>  4 files changed, 361 insertions(+), 305 deletions(-)
>  create mode 100644 drivers/infiniband/hw/bnxt_re/dv.c

<...>

> +++ b/drivers/infiniband/hw/bnxt_re/dv.c
> @@ -0,0 +1,356 @@
> +/*
> + * Broadcom NetXtreme-E RoCE driver.
> + *
> + * Copyright (c) 2025, Broadcom. All rights reserved.  The term
> + * Broadcom refers to Broadcom Inc. and/or its subsidiaries.
> + *
> + * This software is available to you under a choice of one of two
> + * licenses.  You may choose to be licensed under the terms of the GNU
> + * General Public License (GPL) Version 2, available from the file
> + * COPYING in the main directory of this source tree, or the
> + * BSD license below:
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + *
> + * 1. Redistributions of source code must retain the above copyright
> + *    notice, this list of conditions and the following disclaimer.
> + * 2. Redistributions in binary form must reproduce the above copyright
> + *    notice, this list of conditions and the following disclaimer in
> + *    the documentation and/or other materials provided with the
> + *    distribution.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS''
> + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
> + * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
> + * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS
> + * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
> + * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
> + * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
> + * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
> + * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
> + * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
> + * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + *
> + * Description: Direct Verbs interpreter
> + */

Please remove all this boilerplate and use an SPDX tag instead.
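
For example (illustration only; pick whichever SPDX expression actually
matches the dual GPL/BSD text being removed, e.g.):

	// SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
	/*
	 * Broadcom NetXtreme-E RoCE driver.
	 * Copyright (c) 2025, Broadcom. All rights reserved.
	 *
	 * Description: Direct Verbs interpreter
	 */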

> +
> +#include <rdma/ib_addr.h>
> +#include <rdma/uverbs_types.h>
> +#include <rdma/uverbs_std_types.h>
> +#include <rdma/ib_user_ioctl_cmds.h>
> +#define UVERBS_MODULE_NAME bnxt_re
> +#include <rdma/uverbs_named_ioctl.h>
> +#include <rdma/bnxt_re-abi.h>

<...>

> +	uctx = container_of(ib_uverbs_get_ucontext(attrs), struct bnxt_re_ucontext, ib_uctx);
> +	if (IS_ERR(uctx))

This can't be right; you should check ib_uverbs_get_ucontext() for an
error first, before doing container_of().

> +		return PTR_ERR(uctx);
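
Something along these lines instead (untested sketch, reusing the names
from the quoted lines above):

	struct bnxt_re_ucontext *uctx;
	struct ib_ucontext *ib_uctx;

	ib_uctx = ib_uverbs_get_ucontext(attrs);
	if (IS_ERR(ib_uctx))
		return PTR_ERR(ib_uctx);
	uctx = container_of(ib_uctx, struct bnxt_re_ucontext, ib_uctx);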

Thanks

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH rdma-next v2 2/4] RDMA/bnxt_re: Refactor bnxt_qplib_create_qp() function
  2025-11-04  7:23 ` [PATCH rdma-next v2 2/4] RDMA/bnxt_re: Refactor bnxt_qplib_create_qp() function Sriharsha Basavapatna
@ 2025-11-09  9:21   ` Leon Romanovsky
  2025-11-10 14:49     ` Sriharsha Basavapatna
  0 siblings, 1 reply; 12+ messages in thread
From: Leon Romanovsky @ 2025-11-09  9:21 UTC (permalink / raw)
  To: Sriharsha Basavapatna
  Cc: jgg, linux-rdma, andrew.gospodarek, selvin.xavier,
	kalesh-anakkur.purayil

On Tue, Nov 04, 2025 at 12:53:18PM +0530, Sriharsha Basavapatna wrote:
> From: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
> 
> Inside bnxt_qplib_create_qp(), driver currently is doing
> a lot of things like allocating HWQ memory for SQ/RQ/ORRQ/IRRQ,
> initializing few of qplib_qp fields etc.
> 
> Refactored the code such that all memory allocation for HWQs
> have been moved to bnxt_re_init_qp_attr() function and inside
> bnxt_qplib_create_qp() function just initialize the request
> structure and issue the HWRM command to firmware.
> 
> Introduced couple of new functions bnxt_re_setup_qp_hwqs() and
> bnxt_re_setup_qp_swqs() moved the hwq and swq memory allocation
> logic there.
> 
> This patch also introduces a change to store the PD id in
> bnxt_qplib_qp. Instead of keeping a pointer to "struct
> bnxt_qplib_pd", store PD id directly in "struct bnxt_qplib_qp".
> This change is needed for a subsequent change in this patch
> series. This PD ID value will be used in new DV implementation
> for create_qp(). There is no functional change.
> 
> Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
> Reviewed-by: Selvin Thyparampil Xavier <selvin.xavier@broadcom.com>
> Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
> ---
>  drivers/infiniband/hw/bnxt_re/ib_verbs.c  | 207 ++++++++++++--
>  drivers/infiniband/hw/bnxt_re/qplib_fp.c  | 311 +++++++---------------
>  drivers/infiniband/hw/bnxt_re/qplib_fp.h  |  10 +-
>  drivers/infiniband/hw/bnxt_re/qplib_res.h |   6 +
>  4 files changed, 304 insertions(+), 230 deletions(-)

<...>

> +free_umem:
> +	if (uctx)
> +		bnxt_re_qp_free_umem(qp);

<...>

> +	if (udata)
> +		bnxt_re_qp_free_umem(qp);

<...>

Do you need to have if (..) here?
ib_umem_release() does nothing if pointer is NULL.
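
IOW the error paths could call it unconditionally (untested sketch; assumes
the umem pointers in bnxt_re_qp start out NULL for kernel QPs, so
bnxt_re_qp_free_umem() is a no-op there):

free_umem:
	bnxt_re_qp_free_umem(qp);
	return rc;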


> +	kfree(sq->swq);
> +	sq->swq = NULL;

Is this SQ reused?

> +	return rc;
> +}

<...>

>  struct bnxt_qplib_qp {
> -	struct bnxt_qplib_pd		*pd;
> +	u32				pd_id;
>  	struct bnxt_qplib_dpi		*dpi;
>  	struct bnxt_qplib_chip_ctx	*cctx;
>  	u64				qp_handle;
> @@ -279,6 +279,7 @@ struct bnxt_qplib_qp {
>  	u8				wqe_mode;
>  	u8				state;
>  	u8				cur_qp_state;
> +	u8				is_user;

This is already known to IB/core, use rdma_is_kernel_res().
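
E.g. at the ib_verbs.c level, where the ib_qp is in scope, something like
this (untested sketch) gives the same information without a private flag:

	bool is_user = !rdma_is_kernel_res(&qp->ib_qp.res);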

Thanks

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH rdma-next v2 4/4] RDMA/bnxt_re: Direct Verbs: Support CQ and QP verbs
  2025-11-04  7:23 ` [PATCH rdma-next v2 4/4] RDMA/bnxt_re: Direct Verbs: Support CQ and QP verbs Sriharsha Basavapatna
@ 2025-11-09  9:49   ` Leon Romanovsky
  2025-11-10 14:58     ` Sriharsha Basavapatna
  0 siblings, 1 reply; 12+ messages in thread
From: Leon Romanovsky @ 2025-11-09  9:49 UTC (permalink / raw)
  To: Sriharsha Basavapatna
  Cc: jgg, linux-rdma, andrew.gospodarek, selvin.xavier,
	kalesh-anakkur.purayil

On Tue, Nov 04, 2025 at 12:53:20PM +0530, Sriharsha Basavapatna wrote:
> The following Direct Verb (DV) methods have been implemented in
> this patch.
> 
> CQ Direct Verbs:
> ----------------
> - BNXT_RE_METHOD_DV_CREATE_CQ:
>   Create a CQ of requested size (cqe). The application must have
>   already registered this memory with the driver using DV_UMEM_REG.
>   The CQ umem-handle and umem-offset are passed to the driver. The
>   driver now maps/pins the CQ user memory and registers it with the
>   hardware. The driver returns a CQ-handle to the application.
> 
> - BNXT_RE_METHOD_DV_DESTROY_CQ:
>   Destroy the DV_CQ specified by the CQ-handle; unmap the user memory.
> 
> QP Direct Verbs:
> ----------------
> - BNXT_RE_METHOD_DV_CREATE_QP:
>   Create a QP using specified params (struct bnxt_re_dv_create_qp_req).
>   The application must have already registered SQ/RQ memory with the
>   driver using DV_UMEM_REG. The SQ/RQ umem-handle and umem-offset are
>   passed to the driver. The driver now maps/pins the SQ/RQ user memory
>   and registers it with the hardware. The driver returns a QP-handle to
>   the application.
> 
> - BNXT_RE_METHOD_DV_DESTROY_QP:
>   Destroy the DV_QP specified by the QP-handle; unmap SQ/RQ user memory.
> 
> - BNXT_RE_METHOD_DV_MODIFY_QP:
>   Modify QP attributes for the DV_QP specified by the QP-handle;
>   wrapper functions have been implemented to resolve dmac/smac using
>   rdma_resolve_ip().
> 
> - BNXT_RE_METHOD_DV_QUERY_QP:
>   Return QP attributes for the DV_QP specified by the QP-handle.
> 
> Note:
> -----
> Some applications might want to allocate memory for all resources of a
> given type (CQ/QP) in one big chunk and then register that entire memory
> once using DV_UMEM_REG. At the time of creating each individual
> resource, the application would pass a specific offset/length in the
> umem registered memory.
> 
> - The DV_UMEM_REG handler (previous patch) only creates a dv_umem object
>   and saves user memory parameters, but doesn't really map/pin this
>   memory.
> - The mapping would be done at the time of creating individual objects.
> - This actual mapping of specific umem offsets is implemented by the
>   function bnxt_re_dv_umem_get(). This function validates the
>   umem-offset and size parameters passed during CQ/QP creation. If the
>   request is valid, it maps the specified offset/length within the umem
>   registered memory.
> - The CQ and QP creation DV handlers call bnxt_re_dv_umem_get() to map
>   offsets/sizes specific to each individual object. This means each
>   object gets its own mapped dv_umem object that is distinct from the
>   main dv_umem object created during DV_UMEM_REG.
> - The object specific dv_umem is unmapped when the object is destroyed.
> 
> Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
> Co-developed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
> Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
> Co-developed-by: Selvin Xavier <selvin.xavier@broadcom.com>
> Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
> ---
>  drivers/infiniband/hw/bnxt_re/bnxt_re.h  |   12 +-
>  drivers/infiniband/hw/bnxt_re/dv.c       | 1208 ++++++++++++++++++++++
>  drivers/infiniband/hw/bnxt_re/ib_verbs.c |   55 +-
>  drivers/infiniband/hw/bnxt_re/ib_verbs.h |   12 +
>  include/uapi/rdma/bnxt_re-abi.h          |   93 ++
>  5 files changed, 1364 insertions(+), 16 deletions(-)

<...>

> +		if (IS_ERR(umem_dmabuf)) {
> +			rc = PTR_ERR(umem_dmabuf);
> +			dev_err(rdev_to_dev(rdev),
> +				"%s: failed to get umem dmabuf : %d\n",
> +				__func__, rc);

All these dev_XXX() lines should go. They can be used before the IB device
is created; after that, you are invited to use the ibdev_XXX() helpers.
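
E.g. (sketch only, assuming rdev->ibdev is the registered ib_device here):

	ibdev_err(&rdev->ibdev, "failed to get umem dmabuf: %d\n", rc);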

> +			goto free_umem;

<...>

> +static void
> +bnxt_re_print_dv_qp_attr(struct bnxt_re_dev *rdev,
> +			 struct bnxt_re_cq *send_cq,
> +			 struct bnxt_re_cq *recv_cq,
> +			 struct  bnxt_re_dv_create_qp_req *req)
> +{
> +	dev_dbg(rdev_to_dev(rdev), "DV_QP_ATTR:\n");
> +	dev_dbg(rdev_to_dev(rdev),
> +		"\t qp_type: 0x%x pdid: 0x%x qp_handle: 0x%llx\n",
> +		req->qp_type, req->pd_id, req->qp_handle);
> +
> +	dev_dbg(rdev_to_dev(rdev), "\t SQ ATTR:\n");
> +	dev_dbg(rdev_to_dev(rdev),
> +		"\t\t max_send_wr: 0x%x max_send_sge: 0x%x\n",
> +		req->max_send_wr, req->max_send_sge);
> +	dev_dbg(rdev_to_dev(rdev),
> +		"\t\t va: 0x%llx len: 0x%x slots: 0x%x wqe_sz: 0x%x\n",
> +		req->sq_va, req->sq_len, req->sq_slots, req->sq_wqe_sz);
> +	dev_dbg(rdev_to_dev(rdev), "\t\t psn_sz: 0x%x npsn: 0x%x\n",
> +		req->sq_psn_sz, req->sq_npsn);
> +	dev_dbg(rdev_to_dev(rdev),
> +		"\t\t send_cq_id: 0x%x\n", send_cq->qplib_cq.id);
> +
> +	dev_dbg(rdev_to_dev(rdev), "\t RQ ATTR:\n");
> +	dev_dbg(rdev_to_dev(rdev),
> +		"\t\t max_recv_wr: 0x%x max_recv_sge: 0x%x\n",
> +		req->max_recv_wr, req->max_recv_sge);
> +	dev_dbg(rdev_to_dev(rdev),
> +		"\t\t va: 0x%llx len: 0x%x slots: 0x%x wqe_sz: 0x%x\n",
> +		req->rq_va, req->rq_len, req->rq_slots, req->rq_wqe_sz);
> +	dev_dbg(rdev_to_dev(rdev),
> +		"\t\t recv_cq_id: 0x%x\n", recv_cq->qplib_cq.id);
> +}

And I'm afraid you went too far with debug prints in this patch.
Please remove ALL of them and leave only a minimal number of error prints.

Thanks

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH rdma-next v2 1/4] RDMA/bnxt_re: Move the UAPI methods to a dedicated file
  2025-11-09  9:12   ` Leon Romanovsky
@ 2025-11-10 14:43     ` Sriharsha Basavapatna
  0 siblings, 0 replies; 12+ messages in thread
From: Sriharsha Basavapatna @ 2025-11-10 14:43 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: jgg, linux-rdma, andrew.gospodarek, selvin.xavier,
	kalesh-anakkur.purayil, Sriharsha Basavapatna


On Sun, Nov 9, 2025 at 2:42 PM Leon Romanovsky <leon@kernel.org> wrote:
>
> On Tue, Nov 04, 2025 at 12:53:17PM +0530, Sriharsha Basavapatna wrote:
> > From: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
> >
> > This is in preparation for upcoming patches in the series.
> > Driver has to support additional UAPIs for Direct verbs.
> > Moving current UAPI implementation to a new file, dv.c.
> >
> > Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
> > Reviewed-by: Selvin Thyparampil Xavier <selvin.xavier@broadcom.com>
> > Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
> > ---
> >  drivers/infiniband/hw/bnxt_re/Makefile   |   2 +-
> >  drivers/infiniband/hw/bnxt_re/dv.c       | 356 +++++++++++++++++++++++
> >  drivers/infiniband/hw/bnxt_re/ib_verbs.c | 305 +------------------
> >  drivers/infiniband/hw/bnxt_re/ib_verbs.h |   3 +
> >  4 files changed, 361 insertions(+), 305 deletions(-)
> >  create mode 100644 drivers/infiniband/hw/bnxt_re/dv.c
>
> <...>
>
> > +++ b/drivers/infiniband/hw/bnxt_re/dv.c
> > @@ -0,0 +1,356 @@
> > +/*
> > + * Broadcom NetXtreme-E RoCE driver.
> > + *
> > + * Copyright (c) 2025, Broadcom. All rights reserved.  The term
> > + * Broadcom refers to Broadcom Inc. and/or its subsidiaries.
> > + *
> > + * This software is available to you under a choice of one of two
> > + * licenses.  You may choose to be licensed under the terms of the GNU
> > + * General Public License (GPL) Version 2, available from the file
> > + * COPYING in the main directory of this source tree, or the
> > + * BSD license below:
> > + *
> > + * Redistribution and use in source and binary forms, with or without
> > + * modification, are permitted provided that the following conditions
> > + * are met:
> > + *
> > + * 1. Redistributions of source code must retain the above copyright
> > + *    notice, this list of conditions and the following disclaimer.
> > + * 2. Redistributions in binary form must reproduce the above copyright
> > + *    notice, this list of conditions and the following disclaimer in
> > + *    the documentation and/or other materials provided with the
> > + *    distribution.
> > + *
> > + * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS''
> > + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
> > + * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
> > + * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS
> > + * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
> > + * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
> > + * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
> > + * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
> > + * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
> > + * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
> > + * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> > + *
> > + * Description: Direct Verbs interpreter
> > + */
>
> Please remove all this boilerplate and use an SPDX tag instead.
ack.
>
> > +
> > +#include <rdma/ib_addr.h>
> > +#include <rdma/uverbs_types.h>
> > +#include <rdma/uverbs_std_types.h>
> > +#include <rdma/ib_user_ioctl_cmds.h>
> > +#define UVERBS_MODULE_NAME bnxt_re
> > +#include <rdma/uverbs_named_ioctl.h>
> > +#include <rdma/bnxt_re-abi.h>
>
> <...>
>
> > +     uctx = container_of(ib_uverbs_get_ucontext(attrs), struct bnxt_re_ucontext, ib_uctx);
> > +     if (IS_ERR(uctx))
>
> This can't be right; you should check ib_uverbs_get_ucontext() for an
> error first, before doing container_of().
>
> > +             return PTR_ERR(uctx);
ack.
>
> Thanks


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH rdma-next v2 2/4] RDMA/bnxt_re: Refactor bnxt_qplib_create_qp() function
  2025-11-09  9:21   ` Leon Romanovsky
@ 2025-11-10 14:49     ` Sriharsha Basavapatna
  2025-11-11 10:14       ` Leon Romanovsky
  0 siblings, 1 reply; 12+ messages in thread
From: Sriharsha Basavapatna @ 2025-11-10 14:49 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: jgg, linux-rdma, andrew.gospodarek, selvin.xavier,
	kalesh-anakkur.purayil, Sriharsha Basavapatna


On Sun, Nov 9, 2025 at 2:51 PM Leon Romanovsky <leon@kernel.org> wrote:
>
> On Tue, Nov 04, 2025 at 12:53:18PM +0530, Sriharsha Basavapatna wrote:
> > From: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
> >
> > Inside bnxt_qplib_create_qp(), driver currently is doing
> > a lot of things like allocating HWQ memory for SQ/RQ/ORRQ/IRRQ,
> > initializing few of qplib_qp fields etc.
> >
> > Refactored the code such that all memory allocation for HWQs
> > have been moved to bnxt_re_init_qp_attr() function and inside
> > bnxt_qplib_create_qp() function just initialize the request
> > structure and issue the HWRM command to firmware.
> >
> > Introduced couple of new functions bnxt_re_setup_qp_hwqs() and
> > bnxt_re_setup_qp_swqs() moved the hwq and swq memory allocation
> > logic there.
> >
> > This patch also introduces a change to store the PD id in
> > bnxt_qplib_qp. Instead of keeping a pointer to "struct
> > bnxt_qplib_pd", store PD id directly in "struct bnxt_qplib_qp".
> > This change is needed for a subsequent change in this patch
> > series. This PD ID value will be used in new DV implementation
> > for create_qp(). There is no functional change.
> >
> > Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
> > Reviewed-by: Selvin Thyparampil Xavier <selvin.xavier@broadcom.com>
> > Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
> > ---
> >  drivers/infiniband/hw/bnxt_re/ib_verbs.c  | 207 ++++++++++++--
> >  drivers/infiniband/hw/bnxt_re/qplib_fp.c  | 311 +++++++---------------
> >  drivers/infiniband/hw/bnxt_re/qplib_fp.h  |  10 +-
> >  drivers/infiniband/hw/bnxt_re/qplib_res.h |   6 +
> >  4 files changed, 304 insertions(+), 230 deletions(-)
>
> <...>
>
> > +free_umem:
> > +     if (uctx)
> > +             bnxt_re_qp_free_umem(qp);
>
> <...>
>
> > +     if (udata)
> > +             bnxt_re_qp_free_umem(qp);
>
> <...>
>
> Do you need to have if (..) here?
> ib_umem_release() does nothing if pointer is NULL.
Agreed, no need to have that if() check.
>
>
> > +     kfree(sq->swq);
> > +     sq->swq = NULL;
>
> Is this SQ reused?
The SQ is not reused after this cleanup, so there is no need to reset the
pointer; I will delete that line.
>
> > +     return rc;
> > +}
>
> <...>
>
> >  struct bnxt_qplib_qp {
> > -     struct bnxt_qplib_pd            *pd;
> > +     u32                             pd_id;
> >       struct bnxt_qplib_dpi           *dpi;
> >       struct bnxt_qplib_chip_ctx      *cctx;
> >       u64                             qp_handle;
> > @@ -279,6 +279,7 @@ struct bnxt_qplib_qp {
> >       u8                              wqe_mode;
> >       u8                              state;
> >       u8                              cur_qp_state;
> > +     u8                              is_user;
>
> This is already known to IB/core, use rdma_is_kernel_res().
This one is used in the qplib (fw interface) layer in the driver where
we don't have the ib context, so I'd prefer to retain it.
Thanks,
-Harsha
>
> Thanks


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH rdma-next v2 4/4] RDMA/bnxt_re: Direct Verbs: Support CQ and QP verbs
  2025-11-09  9:49   ` Leon Romanovsky
@ 2025-11-10 14:58     ` Sriharsha Basavapatna
  0 siblings, 0 replies; 12+ messages in thread
From: Sriharsha Basavapatna @ 2025-11-10 14:58 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: jgg, linux-rdma, andrew.gospodarek, selvin.xavier,
	kalesh-anakkur.purayil, Sriharsha Basavapatna


On Sun, Nov 9, 2025 at 3:19 PM Leon Romanovsky <leon@kernel.org> wrote:
>
> On Tue, Nov 04, 2025 at 12:53:20PM +0530, Sriharsha Basavapatna wrote:
> > The following Direct Verb (DV) methods have been implemented in
> > this patch.
> >
> > CQ Direct Verbs:
> > ----------------
> > - BNXT_RE_METHOD_DV_CREATE_CQ:
> >   Create a CQ of requested size (cqe). The application must have
> >   already registered this memory with the driver using DV_UMEM_REG.
> >   The CQ umem-handle and umem-offset are passed to the driver. The
> >   driver now maps/pins the CQ user memory and registers it with the
> >   hardware. The driver returns a CQ-handle to the application.
> >
> > - BNXT_RE_METHOD_DV_DESTROY_CQ:
> >   Destroy the DV_CQ specified by the CQ-handle; unmap the user memory.
> >
> > QP Direct Verbs:
> > ----------------
> > - BNXT_RE_METHOD_DV_CREATE_QP:
> >   Create a QP using specified params (struct bnxt_re_dv_create_qp_req).
> >   The application must have already registered SQ/RQ memory with the
> >   driver using DV_UMEM_REG. The SQ/RQ umem-handle and umem-offset are
> >   passed to the driver. The driver now maps/pins the SQ/RQ user memory
> >   and registers it with the hardware. The driver returns a QP-handle to
> >   the application.
> >
> > - BNXT_RE_METHOD_DV_DESTROY_QP:
> >   Destroy the DV_QP specified by the QP-handle; unmap SQ/RQ user memory.
> >
> > - BNXT_RE_METHOD_DV_MODIFY_QP:
> >   Modify QP attributes for the DV_QP specified by the QP-handle;
> >   wrapper functions have been implemented to resolve dmac/smac using
> >   rdma_resolve_ip().
> >
> > - BNXT_RE_METHOD_DV_QUERY_QP:
> >   Return QP attributes for the DV_QP specified by the QP-handle.
> >
> > Note:
> > -----
> > Some applications might want to allocate memory for all resources of a
> > given type (CQ/QP) in one big chunk and then register that entire memory
> > once using DV_UMEM_REG. At the time of creating each individual
> > resource, the application would pass a specific offset/length in the
> > umem registered memory.
> >
> > - The DV_UMEM_REG handler (previous patch) only creates a dv_umem object
> >   and saves user memory parameters, but doesn't really map/pin this
> >   memory.
> > - The mapping would be done at the time of creating individual objects.
> > - This actual mapping of specific umem offsets is implemented by the
> >   function bnxt_re_dv_umem_get(). This function validates the
> >   umem-offset and size parameters passed during CQ/QP creation. If the
> >   request is valid, it maps the specified offset/length within the umem
> >   registered memory.
> > - The CQ and QP creation DV handlers call bnxt_re_dv_umem_get() to map
> >   offsets/sizes specific to each individual object. This means each
> >   object gets its own mapped dv_umem object that is distinct from the
> >   main dv_umem object created during DV_UMEM_REG.
> > - The object specific dv_umem is unmapped when the object is destroyed.
> >
> > Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
> > Co-developed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
> > Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
> > Co-developed-by: Selvin Xavier <selvin.xavier@broadcom.com>
> > Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
> > ---
> >  drivers/infiniband/hw/bnxt_re/bnxt_re.h  |   12 +-
> >  drivers/infiniband/hw/bnxt_re/dv.c       | 1208 ++++++++++++++++++++++
> >  drivers/infiniband/hw/bnxt_re/ib_verbs.c |   55 +-
> >  drivers/infiniband/hw/bnxt_re/ib_verbs.h |   12 +
> >  include/uapi/rdma/bnxt_re-abi.h          |   93 ++
> >  5 files changed, 1364 insertions(+), 16 deletions(-)
>
> <...>
>
> > +             if (IS_ERR(umem_dmabuf)) {
> > +                     rc = PTR_ERR(umem_dmabuf);
> > +                     dev_err(rdev_to_dev(rdev),
> > +                             "%s: failed to get umem dmabuf : %d\n",
> > +                             __func__, rc);
>
> All these dev_XXX() lines should go. They can be used before the IB device
> is created; after that, you are invited to use the ibdev_XXX() helpers.
Thanks, will change them to ibdev_XXX().
>
> > +                     goto free_umem;
>
> <...>
>
> > +static void
> > +bnxt_re_print_dv_qp_attr(struct bnxt_re_dev *rdev,
> > +                      struct bnxt_re_cq *send_cq,
> > +                      struct bnxt_re_cq *recv_cq,
> > +                      struct  bnxt_re_dv_create_qp_req *req)
> > +{
> > +     dev_dbg(rdev_to_dev(rdev), "DV_QP_ATTR:\n");
> > +     dev_dbg(rdev_to_dev(rdev),
> > +             "\t qp_type: 0x%x pdid: 0x%x qp_handle: 0x%llx\n",
> > +             req->qp_type, req->pd_id, req->qp_handle);
> > +
> > +     dev_dbg(rdev_to_dev(rdev), "\t SQ ATTR:\n");
> > +     dev_dbg(rdev_to_dev(rdev),
> > +             "\t\t max_send_wr: 0x%x max_send_sge: 0x%x\n",
> > +             req->max_send_wr, req->max_send_sge);
> > +     dev_dbg(rdev_to_dev(rdev),
> > +             "\t\t va: 0x%llx len: 0x%x slots: 0x%x wqe_sz: 0x%x\n",
> > +             req->sq_va, req->sq_len, req->sq_slots, req->sq_wqe_sz);
> > +     dev_dbg(rdev_to_dev(rdev), "\t\t psn_sz: 0x%x npsn: 0x%x\n",
> > +             req->sq_psn_sz, req->sq_npsn);
> > +     dev_dbg(rdev_to_dev(rdev),
> > +             "\t\t send_cq_id: 0x%x\n", send_cq->qplib_cq.id);
> > +
> > +     dev_dbg(rdev_to_dev(rdev), "\t RQ ATTR:\n");
> > +     dev_dbg(rdev_to_dev(rdev),
> > +             "\t\t max_recv_wr: 0x%x max_recv_sge: 0x%x\n",
> > +             req->max_recv_wr, req->max_recv_sge);
> > +     dev_dbg(rdev_to_dev(rdev),
> > +             "\t\t va: 0x%llx len: 0x%x slots: 0x%x wqe_sz: 0x%x\n",
> > +             req->rq_va, req->rq_len, req->rq_slots, req->rq_wqe_sz);
> > +     dev_dbg(rdev_to_dev(rdev),
> > +             "\t\t recv_cq_id: 0x%x\n", recv_cq->qplib_cq.id);
> > +}
>
> And I'm afraid you went too far with debug prints in this patch.
> Please remove ALL of them and leave only a minimal number of error prints.
I'll remove the above function, which was needed during the early
development cycle. I will also reduce both the error and debug prints,
keeping only a small set of debug messages that we have found useful.
Thanks,
-Harsha
>
> Thanks


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH rdma-next v2 2/4] RDMA/bnxt_re: Refactor bnxt_qplib_create_qp() function
  2025-11-10 14:49     ` Sriharsha Basavapatna
@ 2025-11-11 10:14       ` Leon Romanovsky
  0 siblings, 0 replies; 12+ messages in thread
From: Leon Romanovsky @ 2025-11-11 10:14 UTC (permalink / raw)
  To: Sriharsha Basavapatna
  Cc: jgg, linux-rdma, andrew.gospodarek, selvin.xavier,
	kalesh-anakkur.purayil

On Mon, Nov 10, 2025 at 08:19:45PM +0530, Sriharsha Basavapatna wrote:
> On Sun, Nov 9, 2025 at 2:51 PM Leon Romanovsky <leon@kernel.org> wrote:
> >
> > On Tue, Nov 04, 2025 at 12:53:18PM +0530, Sriharsha Basavapatna wrote:
> > > From: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
> > >
> > > Inside bnxt_qplib_create_qp(), driver currently is doing
> > > a lot of things like allocating HWQ memory for SQ/RQ/ORRQ/IRRQ,
> > > initializing few of qplib_qp fields etc.
> > >
> > > Refactored the code such that all memory allocation for HWQs
> > > have been moved to bnxt_re_init_qp_attr() function and inside
> > > bnxt_qplib_create_qp() function just initialize the request
> > > structure and issue the HWRM command to firmware.
> > >
> > > Introduced couple of new functions bnxt_re_setup_qp_hwqs() and
> > > bnxt_re_setup_qp_swqs() moved the hwq and swq memory allocation
> > > logic there.
> > >
> > > This patch also introduces a change to store the PD id in
> > > bnxt_qplib_qp. Instead of keeping a pointer to "struct
> > > bnxt_qplib_pd", store PD id directly in "struct bnxt_qplib_qp".
> > > This change is needed for a subsequent change in this patch
> > > series. This PD ID value will be used in new DV implementation
> > > for create_qp(). There is no functional change.
> > >
> > > Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
> > > Reviewed-by: Selvin Thyparampil Xavier <selvin.xavier@broadcom.com>
> > > Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
> > > ---
> > >  drivers/infiniband/hw/bnxt_re/ib_verbs.c  | 207 ++++++++++++--
> > >  drivers/infiniband/hw/bnxt_re/qplib_fp.c  | 311 +++++++---------------
> > >  drivers/infiniband/hw/bnxt_re/qplib_fp.h  |  10 +-
> > >  drivers/infiniband/hw/bnxt_re/qplib_res.h |   6 +
> > >  4 files changed, 304 insertions(+), 230 deletions(-)
> >
> > <...>
> >
> > > +free_umem:
> > > +     if (uctx)
> > > +             bnxt_re_qp_free_umem(qp);
> >
> > <...>
> >
> > > +     if (udata)
> > > +             bnxt_re_qp_free_umem(qp);
> >
> > <...>
> >
> > Do you need to have if (..) here?
> > ib_umem_release() does nothing if pointer is NULL.
> Agreed, no need to have that if() check.
> >
> >
> > > +     kfree(sq->swq);
> > > +     sq->swq = NULL;
> >
> > Is this SQ reused?
> SQ is not reused after this clean up, no need to reset the pointer,
> will delete that line.
> >
> > > +     return rc;
> > > +}
> >
> > <...>
> >
> > >  struct bnxt_qplib_qp {
> > > -     struct bnxt_qplib_pd            *pd;
> > > +     u32                             pd_id;
> > >       struct bnxt_qplib_dpi           *dpi;
> > >       struct bnxt_qplib_chip_ctx      *cctx;
> > >       u64                             qp_handle;
> > > @@ -279,6 +279,7 @@ struct bnxt_qplib_qp {
> > >       u8                              wqe_mode;
> > >       u8                              state;
> > >       u8                              cur_qp_state;
> > > +     u8                              is_user;
> >
> > This is already known to IB/core, use rdma_is_kernel_res().
> This one is used in the qplib (fw interface) layer in the driver where
> we don't have the ib context, so I'd prefer to retain it.

My old plan was to rely on restrack for everything related to that,
together with removal of custom book-keeping logic.

This is why mlx5/mlx5 uses rdma_restrack_no_track() for internal objects.
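
I.e. something like this for driver-internal resources (untested sketch,
assuming the restrack entry is reachable as qp->ib_qp.res), combined with
rdma_is_kernel_res() instead of a private is_user flag:

	rdma_restrack_no_track(&qp->ib_qp.res);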

Thanks

> Thanks,
> -Harsha
> >
> > Thanks



^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2025-11-11 10:14 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-11-04  7:23 [PATCH rdma-next v2 0/4] RDMA/bnxt_re: Support direct verbs Sriharsha Basavapatna
2025-11-04  7:23 ` [PATCH rdma-next v2 1/4] RDMA/bnxt_re: Move the UAPI methods to a dedicated file Sriharsha Basavapatna
2025-11-09  9:12   ` Leon Romanovsky
2025-11-10 14:43     ` Sriharsha Basavapatna
2025-11-04  7:23 ` [PATCH rdma-next v2 2/4] RDMA/bnxt_re: Refactor bnxt_qplib_create_qp() function Sriharsha Basavapatna
2025-11-09  9:21   ` Leon Romanovsky
2025-11-10 14:49     ` Sriharsha Basavapatna
2025-11-11 10:14       ` Leon Romanovsky
2025-11-04  7:23 ` [PATCH rdma-next v2 3/4] RDMA/bnxt_re: Direct Verbs: Support DBR and UMEM verbs Sriharsha Basavapatna
2025-11-04  7:23 ` [PATCH rdma-next v2 4/4] RDMA/bnxt_re: Direct Verbs: Support CQ and QP verbs Sriharsha Basavapatna
2025-11-09  9:49   ` Leon Romanovsky
2025-11-10 14:58     ` Sriharsha Basavapatna
