* [PATCH rdma-core v1 0/2] vmw_pvrdma: Use physical resource ids from device
@ 2019-09-19 20:26 Adit Ranadive
2019-09-19 20:26 ` [PATCH rdma-core v1 1/2] vmw_pvrdma: Update kernel header for QP handle/num Adit Ranadive
2019-09-19 20:26 ` [PATCH rdma-core v1 2/2] vmw_pvrdma: Use resource ids from physical device if available Adit Ranadive
0 siblings, 2 replies; 3+ messages in thread
From: Adit Ranadive @ 2019-09-19 20:26 UTC (permalink / raw)
To: jgg@mellanox.com, leon@kernel.org, linux-rdma@vger.kernel.org
Cc: Adit Ranadive
Changelog:
v1:
- Added a separate patch for the kernel header
- Dropped the ABI version check
- Added a create qp flag and an in-band check to indicate support
v0:
- https://patchwork.kernel.org/patch/10946987/
Hi,
Here is a patchset that enables exposing physical resource IDs in the
vmw_pvrdma userspace provider.
rdma-core PR:
- https://github.com/linux-rdma/rdma-core/pull/581
Thanks,
Adit
Adit Ranadive (1):
vmw_pvrdma: Update kernel header for QP handle/num
Bryan Tan (1):
vmw_pvrdma: Use resource ids from physical device if available
kernel-headers/rdma/vmw_pvrdma-abi.h | 13 +++++++++++++
providers/vmw_pvrdma/pvrdma-abi.h | 2 +-
providers/vmw_pvrdma/pvrdma.h | 1 +
providers/vmw_pvrdma/qp.c | 24 +++++++++++++-----------
4 files changed, 28 insertions(+), 12 deletions(-)
--
1.8.3.1
* [PATCH rdma-core v1 1/2] vmw_pvrdma: Update kernel header for QP handle/num
2019-09-19 20:26 [PATCH rdma-core v1 0/2] vmw_pvrdma: Use physical resource ids from device Adit Ranadive
@ 2019-09-19 20:26 ` Adit Ranadive
2019-09-19 20:26 ` [PATCH rdma-core v1 2/2] vmw_pvrdma: Use resource ids from physical device if available Adit Ranadive
1 sibling, 0 replies; 3+ messages in thread
From: Adit Ranadive @ 2019-09-19 20:26 UTC (permalink / raw)
To: jgg@mellanox.com, leon@kernel.org, linux-rdma@vger.kernel.org
Cc: Adit Ranadive
Add support in the kernel header to reflect back a different QP handle
and/or QP number to userspace: a new create-QP flag lets userspace request
this, and a new response struct carries the QPN, the QP handle, and a
validity marker separately.
Reviewed-by: Bryan Tan <bryantan@vmware.com>
Reviewed-by: Jorgen Hansen <jhansen@vmware.com>
Signed-off-by: Adit Ranadive <aditr@vmware.com>
---
kernel-headers/rdma/vmw_pvrdma-abi.h | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/kernel-headers/rdma/vmw_pvrdma-abi.h b/kernel-headers/rdma/vmw_pvrdma-abi.h
index 6e73f0274e41..1d339285550e 100644
--- a/kernel-headers/rdma/vmw_pvrdma-abi.h
+++ b/kernel-headers/rdma/vmw_pvrdma-abi.h
@@ -133,6 +133,10 @@ enum pvrdma_wc_flags {
PVRDMA_WC_FLAGS_MAX = PVRDMA_WC_WITH_NETWORK_HDR_TYPE,
};
+enum pvrdma_user_qp_create_flags {
+ PVRDMA_USER_QP_CREATE_USE_RESP = 1 << 0,
+};
+
struct pvrdma_alloc_ucontext_resp {
__u32 qp_tab_size;
__u32 reserved;
@@ -177,6 +181,15 @@ struct pvrdma_create_qp {
__u32 rbuf_size;
__u32 sbuf_size;
__aligned_u64 qp_addr;
+ __u32 flags;
+ __u32 reserved;
+};
+
+struct pvrdma_create_qp_resp {
+ __u32 qpn;
+ __u32 qp_handle;
+ __u32 qpn_valid;
+ __u32 reserved;
};
/* PVRDMA masked atomic compare and swap */
--
1.8.3.1
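For illustration, here is a minimal sketch of the handshake this header
defines, mirroring what patch 2/2 does inside pvrdma_create_qp(); the
struct and field names are taken from the diffs in this series, while the
surrounding pd/attr/qp variables and the error path are assumed from that
function:

	struct user_pvrdma_create_qp cmd = {};
	struct user_pvrdma_create_qp_resp resp = {};

	/* Ask the driver to report the physical QPN and the
	 * hypervisor-generated QP handle separately. */
	cmd.flags = PVRDMA_USER_QP_CREATE_USE_RESP;

	if (ibv_cmd_create_qp(pd, &qp->ibv_qp, attr, &cmd.ibv_cmd,
			      sizeof(cmd), &resp.ibv_resp, sizeof(resp)))
		goto err_free;

	if (resp.drv_payload.qpn_valid == PVRDMA_USER_QP_CREATE_USE_RESP)
		/* Updated driver: keep the hypervisor handle for
		 * doorbell writes and QP-table lookups. */
		qp->qp_handle = resp.drv_payload.qp_handle;
	else
		/* Older driver: the QPN doubles as the handle. */
		qp->qp_handle = qp->ibv_qp.qp_num;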
* [PATCH rdma-core v1 2/2] vmw_pvrdma: Use resource ids from physical device if available
2019-09-19 20:26 [PATCH rdma-core v1 0/2] vmw_pvrdma: Use physical resource ids from device Adit Ranadive
2019-09-19 20:26 ` [PATCH rdma-core v1 1/2] vmw_pvrdma: Update kernel header for QP handle/num Adit Ranadive
@ 2019-09-19 20:26 ` Adit Ranadive
1 sibling, 0 replies; 3+ messages in thread
From: Adit Ranadive @ 2019-09-19 20:26 UTC (permalink / raw)
To: jgg@mellanox.com, leon@kernel.org, linux-rdma@vger.kernel.org
Cc: Bryan Tan, Adit Ranadive
From: Bryan Tan <bryantan@vmware.com>
This is the accompanying userspace change that allows applications to
use physical resource IDs when the driver/device provides them.
The create QP command is modified to set a flag that tells the driver
to send back the physical HCA's QPN and the hypervisor-generated QP
handle separately; the provider keeps the handle for doorbell writes
and QP-table lookups and reports the QPN to the application.
Reviewed-by: Jorgen Hansen <jhansen@vmware.com>
Signed-off-by: Adit Ranadive <aditr@vmware.com>
Signed-off-by: Bryan Tan <bryantan@vmware.com>
---
providers/vmw_pvrdma/pvrdma-abi.h | 2 +-
providers/vmw_pvrdma/pvrdma.h | 1 +
providers/vmw_pvrdma/qp.c | 24 +++++++++++++-----------
3 files changed, 15 insertions(+), 12 deletions(-)
diff --git a/providers/vmw_pvrdma/pvrdma-abi.h b/providers/vmw_pvrdma/pvrdma-abi.h
index 77db9ddd1bb7..1a4c3c8a98f2 100644
--- a/providers/vmw_pvrdma/pvrdma-abi.h
+++ b/providers/vmw_pvrdma/pvrdma-abi.h
@@ -55,7 +55,7 @@ DECLARE_DRV_CMD(user_pvrdma_alloc_pd, IB_USER_VERBS_CMD_ALLOC_PD,
DECLARE_DRV_CMD(user_pvrdma_create_cq, IB_USER_VERBS_CMD_CREATE_CQ,
pvrdma_create_cq, pvrdma_create_cq_resp);
DECLARE_DRV_CMD(user_pvrdma_create_qp, IB_USER_VERBS_CMD_CREATE_QP,
- pvrdma_create_qp, empty);
+ pvrdma_create_qp, pvrdma_create_qp_resp);
DECLARE_DRV_CMD(user_pvrdma_create_srq, IB_USER_VERBS_CMD_CREATE_SRQ,
pvrdma_create_srq, pvrdma_create_srq_resp);
DECLARE_DRV_CMD(user_pvrdma_alloc_ucontext, IB_USER_VERBS_CMD_GET_CONTEXT,
diff --git a/providers/vmw_pvrdma/pvrdma.h b/providers/vmw_pvrdma/pvrdma.h
index d90bd8096664..bb4a2db08d14 100644
--- a/providers/vmw_pvrdma/pvrdma.h
+++ b/providers/vmw_pvrdma/pvrdma.h
@@ -170,6 +170,7 @@ struct pvrdma_qp {
struct pvrdma_wq sq;
struct pvrdma_wq rq;
int is_srq;
+ uint32_t qp_handle;
};
struct pvrdma_ah {
diff --git a/providers/vmw_pvrdma/qp.c b/providers/vmw_pvrdma/qp.c
index ef429db93a43..966480f5abaa 100644
--- a/providers/vmw_pvrdma/qp.c
+++ b/providers/vmw_pvrdma/qp.c
@@ -211,9 +211,8 @@ struct ibv_qp *pvrdma_create_qp(struct ibv_pd *pd,
{
struct pvrdma_device *dev = to_vdev(pd->context->device);
struct user_pvrdma_create_qp cmd;
- struct ib_uverbs_create_qp_resp resp;
+ struct user_pvrdma_create_qp_resp resp = {};
struct pvrdma_qp *qp;
- int ret;
int is_srq = !!(attr->srq);
attr->cap.max_send_sge = max_t(uint32_t, 1U, attr->cap.max_send_sge);
@@ -281,15 +280,18 @@ struct ibv_qp *pvrdma_create_qp(struct ibv_pd *pd,
cmd.rbuf_addr = (uintptr_t)qp->rbuf.buf;
cmd.rbuf_size = qp->rbuf.length;
cmd.qp_addr = (uintptr_t) qp;
+ cmd.flags = PVRDMA_USER_QP_CREATE_USE_RESP;
- ret = ibv_cmd_create_qp(pd, &qp->ibv_qp, attr,
- &cmd.ibv_cmd, sizeof(cmd),
- &resp, sizeof(resp));
-
- if (ret)
+ if (ibv_cmd_create_qp(pd, &qp->ibv_qp, attr, &cmd.ibv_cmd, sizeof(cmd),
+ &resp.ibv_resp, sizeof(resp)))
goto err_free;
- to_vctx(pd->context)->qp_tbl[qp->ibv_qp.qp_num & 0xFFFF] = qp;
+ if (resp.drv_payload.qpn_valid == PVRDMA_USER_QP_CREATE_USE_RESP)
+ qp->qp_handle = resp.drv_payload.qp_handle;
+ else
+ qp->qp_handle = qp->ibv_qp.qp_num;
+
+ to_vctx(pd->context)->qp_tbl[qp->qp_handle & 0xFFFF] = qp;
/* If set, each WR submitted to the SQ generate a completion entry */
if (attr->sq_sig_all)
@@ -414,7 +416,7 @@ int pvrdma_destroy_qp(struct ibv_qp *ibqp)
free(qp->rq.wrid);
pvrdma_free_buf(&qp->rbuf);
pvrdma_free_buf(&qp->sbuf);
- ctx->qp_tbl[ibqp->qp_num & 0xFFFF] = NULL;
+ ctx->qp_tbl[qp->qp_handle & 0xFFFF] = NULL;
free(qp);
return 0;
@@ -547,7 +549,7 @@ out:
if (nreq) {
udma_to_device_barrier();
pvrdma_write_uar_qp(ctx->uar,
- PVRDMA_UAR_QP_SEND | ibqp->qp_num);
+ PVRDMA_UAR_QP_SEND | qp->qp_handle);
}
pthread_spin_unlock(&qp->sq.lock);
@@ -630,7 +632,7 @@ int pvrdma_post_recv(struct ibv_qp *ibqp, struct ibv_recv_wr *wr,
out:
if (nreq)
pvrdma_write_uar_qp(ctx->uar,
- PVRDMA_UAR_QP_RECV | ibqp->qp_num);
+ PVRDMA_UAR_QP_RECV | qp->qp_handle);
pthread_spin_unlock(&qp->rq.lock);
return ret;
--
1.8.3.1
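Note that nothing changes for applications at the API level: a standard
verbs QP create is enough, and with a supporting device and kernel the
reported qp_num now carries the physical HCA's QPN rather than a
hypervisor-local id. A minimal sketch using only stock libibverbs calls
(the pd and qp_init_attr setup is elided):

	struct ibv_qp *qp = ibv_create_qp(pd, &qp_init_attr);

	if (qp)
		/* With this series and a supporting kernel, qp->qp_num
		 * is the physical HCA QPN, so it can be exchanged with
		 * a remote peer as usual. */
		printf("qpn = 0x%x\n", qp->qp_num);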