linux-rdma.vger.kernel.org archive mirror
* [PATCH for-next 0/6] Misc cleanups and implement rereg_user_mr
@ 2023-05-30 22:13 Bob Pearson
  2023-05-30 22:13 ` [PATCH for-next 1/6] RDMA/rxe: Rename IB_ACCESS_REMOTE Bob Pearson
                   ` (6 more replies)
  0 siblings, 7 replies; 8+ messages in thread
From: Bob Pearson @ 2023-05-30 22:13 UTC (permalink / raw)
  To: jgg, zyjzyj2000, edwards, linux-rdma; +Cc: Bob Pearson

This patch set performs some preparatory cleanups and then implements the
two simple cases of ib_rereg_user_mr: changing an MR's protection domain
and changing its access flags.

Bob Pearson (6):
  RDMA/rxe: Rename IB_ACCESS_REMOTE
  rdma/rxe: Optimize send path in rxe_resp.c
  RDMA/RXE: Fix access checks in rxe_check_bind_mw
  RDMA/RXE: Introduce rxe access supported flags
  RDMA/RXE: Let rkey == lkey for local access
  RDMA/RXE: Implement rereg_user_mr

 drivers/infiniband/sw/rxe/rxe_mr.c     | 21 ++++++-------
 drivers/infiniband/sw/rxe/rxe_mw.c     | 22 +++++++++-----
 drivers/infiniband/sw/rxe/rxe_opcode.h |  3 ++
 drivers/infiniband/sw/rxe/rxe_qp.c     |  7 +++++
 drivers/infiniband/sw/rxe/rxe_resp.c   | 12 ++++++--
 drivers/infiniband/sw/rxe/rxe_verbs.c  | 41 ++++++++++++++++++++++++++
 drivers/infiniband/sw/rxe/rxe_verbs.h  | 20 +++++++++++++
 7 files changed, 104 insertions(+), 22 deletions(-)


base-commit: 8c1ee346da583718fb0a7791a1f84bdafb103caf
-- 
2.39.2


* [PATCH for-next 1/6] RDMA/rxe: Rename IB_ACCESS_REMOTE
  2023-05-30 22:13 [PATCH for-next 0/6] Misc cleanups and implement rereg_user_mr Bob Pearson
@ 2023-05-30 22:13 ` Bob Pearson
  2023-05-30 22:13 ` [PATCH for-next 2/6] rdma/rxe: Optimize send path in rxe_resp.c Bob Pearson
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Bob Pearson @ 2023-05-30 22:13 UTC (permalink / raw)
  To: jgg, zyjzyj2000, edwards, linux-rdma; +Cc: Bob Pearson

Rename IB_ACCESS_REMOTE to RXE_ACCESS_REMOTE and move it to
rxe_verbs.h as an enum instead of a #define. The IB_xxx namespace
should not be used for rxe-specific symbols.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_mr.c    | 10 +++-------
 drivers/infiniband/sw/rxe/rxe_verbs.h |  6 ++++++
 2 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 0e538fafcc20..b3bc4ac5fedd 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -45,14 +45,10 @@ int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length)
 	}
 }
 
-#define IB_ACCESS_REMOTE	(IB_ACCESS_REMOTE_READ		\
-				| IB_ACCESS_REMOTE_WRITE	\
-				| IB_ACCESS_REMOTE_ATOMIC)
-
 static void rxe_mr_init(int access, struct rxe_mr *mr)
 {
 	u32 lkey = mr->elem.index << 8 | rxe_get_next_key(-1);
-	u32 rkey = (access & IB_ACCESS_REMOTE) ? lkey : 0;
+	u32 rkey = (access & RXE_ACCESS_REMOTE) ? lkey : 0;
 
 	/* set ibmr->l/rkey and also copy into private l/rkey
 	 * for user MRs these will always be the same
@@ -195,7 +191,7 @@ int rxe_mr_init_fast(int max_pages, struct rxe_mr *mr)
 	int err;
 
 	/* always allow remote access for FMRs */
-	rxe_mr_init(IB_ACCESS_REMOTE, mr);
+	rxe_mr_init(RXE_ACCESS_REMOTE, mr);
 
 	err = rxe_mr_alloc(mr, max_pages);
 	if (err)
@@ -715,7 +711,7 @@ int rxe_reg_fast_mr(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
 
 	mr->access = access;
 	mr->lkey = key;
-	mr->rkey = (access & IB_ACCESS_REMOTE) ? key : 0;
+	mr->rkey = (access & RXE_ACCESS_REMOTE) ? key : 0;
 	mr->ibmr.iova = wqe->wr.wr.reg.mr->iova;
 	mr->state = RXE_MR_STATE_VALID;
 
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index 26a20f088692..0a2b7343e38f 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -253,6 +253,12 @@ struct rxe_qp {
 	struct execute_work	cleanup_work;
 };
 
+enum rxe_access {
+	RXE_ACCESS_REMOTE	= (IB_ACCESS_REMOTE_READ
+				| IB_ACCESS_REMOTE_WRITE
+				| IB_ACCESS_REMOTE_ATOMIC),
+};
+
 enum rxe_mr_state {
 	RXE_MR_STATE_INVALID,
 	RXE_MR_STATE_FREE,
-- 
2.39.2


* [PATCH for-next 2/6] rdma/rxe: Optimize send path in rxe_resp.c
  2023-05-30 22:13 [PATCH for-next 0/6] Misc cleanups and implement rereg_user_mr Bob Pearson
  2023-05-30 22:13 ` [PATCH for-next 1/6] RDMA/rxe: Rename IB_ACCESS_REMOTE Bob Pearson
@ 2023-05-30 22:13 ` Bob Pearson
  2023-05-30 22:13 ` [PATCH for-next 3/6] RDMA/rxe: Fix access checks in rxe_check_bind_mw Bob Pearson
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Bob Pearson @ 2023-05-30 22:13 UTC (permalink / raw)
  To: jgg, zyjzyj2000, edwards, linux-rdma; +Cc: Bob Pearson

Bypass the call to check_rkey() in rxe_resp.c for non-RDMA (send)
packets, which do not need a remote key check.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_opcode.h |  3 +++
 drivers/infiniband/sw/rxe/rxe_resp.c   | 12 ++++++++++--
 2 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.h b/drivers/infiniband/sw/rxe/rxe_opcode.h
index cea4e0a63919..5686b691d6b8 100644
--- a/drivers/infiniband/sw/rxe/rxe_opcode.h
+++ b/drivers/infiniband/sw/rxe/rxe_opcode.h
@@ -91,6 +91,9 @@ enum rxe_hdr_mask {
 	RXE_READ_OR_ATOMIC_MASK	= (RXE_READ_MASK | RXE_ATOMIC_MASK),
 	RXE_WRITE_OR_SEND_MASK	= (RXE_WRITE_MASK | RXE_SEND_MASK),
 	RXE_READ_OR_WRITE_MASK	= (RXE_READ_MASK | RXE_WRITE_MASK),
+	RXE_RDMA_OP_MASK	= (RXE_READ_MASK | RXE_WRITE_MASK |
+				   RXE_ATOMIC_WRITE_MASK | RXE_FLUSH_MASK |
+				   RXE_ATOMIC_MASK),
 };
 
 #define OPCODE_NONE		(-1)
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index b92c41cdb620..07299205242e 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -387,7 +387,10 @@ static enum resp_states rxe_resp_check_length(struct rxe_qp *qp,
 		}
 	}
 
-	return RESPST_CHK_RKEY;
+	if (pkt->mask & RXE_RDMA_OP_MASK)
+		return RESPST_CHK_RKEY;
+	else
+		return RESPST_EXECUTE;
 }
 
 /* if the reth length field is zero we can assume nothing
@@ -434,6 +437,10 @@ static enum resp_states check_rkey(struct rxe_qp *qp,
 	enum resp_states state;
 	int access = 0;
 
+	/* parse RETH or ATMETH header for first/only packets
+	 * for va, length, rkey, etc. or use current value for
+	 * middle/last packets.
+	 */
 	if (pkt->mask & (RXE_READ_OR_WRITE_MASK | RXE_ATOMIC_WRITE_MASK)) {
 		if (pkt->mask & RXE_RETH_MASK)
 			qp_resp_from_reth(qp, pkt);
@@ -454,7 +461,8 @@ static enum resp_states check_rkey(struct rxe_qp *qp,
 		qp_resp_from_atmeth(qp, pkt);
 		access = IB_ACCESS_REMOTE_ATOMIC;
 	} else {
-		return RESPST_EXECUTE;
+		/* shouldn't happen */
+		WARN_ON(1);
 	}
 
 	/* A zero-byte read or write op is not required to
-- 
2.39.2


* [PATCH for-next 3/6] RDMA/rxe: Fix access checks in rxe_check_bind_mw
  2023-05-30 22:13 [PATCH for-next 0/6] Misc cleanups and implement rereg_user_mr Bob Pearson
  2023-05-30 22:13 ` [PATCH for-next 1/6] RDMA/rxe: Rename IB_ACCESS_REMOTE Bob Pearson
  2023-05-30 22:13 ` [PATCH for-next 2/6] rdma/rxe: Optimize send path in rxe_resp.c Bob Pearson
@ 2023-05-30 22:13 ` Bob Pearson
  2023-05-30 22:13 ` [PATCH for-next 4/6] RDMA/rxe: Introduce rxe access supported flags Bob Pearson
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Bob Pearson @ 2023-05-30 22:13 UTC (permalink / raw)
  To: jgg, zyjzyj2000, edwards, linux-rdma; +Cc: Bob Pearson

The subroutine rxe_check_bind_mw() in rxe_mw.c checks the MW access
flags in mw->access before they have been set from the work request,
so the checks always trivially succeed; mw->access is only assigned
later, in rxe_do_bind_mw(). Check the access flags carried in the
send wqe instead.

Fixes: 32a577b4c3a9 ("RDMA/rxe: Add support for bind MW work requests")
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_mw.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
index afa5ce1a7116..a7ec57ab8fad 100644
--- a/drivers/infiniband/sw/rxe/rxe_mw.c
+++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -48,7 +48,7 @@ int rxe_dealloc_mw(struct ib_mw *ibmw)
 }
 
 static int rxe_check_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
-			 struct rxe_mw *mw, struct rxe_mr *mr)
+			 struct rxe_mw *mw, struct rxe_mr *mr, int access)
 {
 	if (mw->ibmw.type == IB_MW_TYPE_1) {
 		if (unlikely(mw->state != RXE_MW_STATE_VALID)) {
@@ -58,7 +58,7 @@ static int rxe_check_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
 		}
 
 		/* o10-36.2.2 */
-		if (unlikely((mw->access & IB_ZERO_BASED))) {
+		if (unlikely((access & IB_ZERO_BASED))) {
 			rxe_dbg_mw(mw, "attempt to bind a zero based type 1 MW\n");
 			return -EINVAL;
 		}
@@ -104,7 +104,7 @@ static int rxe_check_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
 	}
 
 	/* C10-74 */
-	if (unlikely((mw->access &
+	if (unlikely((access &
 		      (IB_ACCESS_REMOTE_WRITE | IB_ACCESS_REMOTE_ATOMIC)) &&
 		     !(mr->access & IB_ACCESS_LOCAL_WRITE))) {
 		rxe_dbg_mw(mw,
@@ -113,7 +113,7 @@ static int rxe_check_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
 	}
 
 	/* C10-75 */
-	if (mw->access & IB_ZERO_BASED) {
+	if (access & IB_ZERO_BASED) {
 		if (unlikely(wqe->wr.wr.mw.length > mr->ibmr.length)) {
 			rxe_dbg_mw(mw,
 				"attempt to bind a ZB MW outside of the MR\n");
@@ -133,12 +133,12 @@ static int rxe_check_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
 }
 
 static void rxe_do_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
-		      struct rxe_mw *mw, struct rxe_mr *mr)
+		      struct rxe_mw *mw, struct rxe_mr *mr, int access)
 {
 	u32 key = wqe->wr.wr.mw.rkey & 0xff;
 
 	mw->rkey = (mw->rkey & ~0xff) | key;
-	mw->access = wqe->wr.wr.mw.access;
+	mw->access = access;
 	mw->state = RXE_MW_STATE_VALID;
 	mw->addr = wqe->wr.wr.mw.addr;
 	mw->length = wqe->wr.wr.mw.length;
@@ -169,6 +169,7 @@ int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
 	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
 	u32 mw_rkey = wqe->wr.wr.mw.mw_rkey;
 	u32 mr_lkey = wqe->wr.wr.mw.mr_lkey;
+	int access = wqe->wr.wr.mw.access;
 
 	mw = rxe_pool_get_index(&rxe->mw_pool, mw_rkey >> 8);
 	if (unlikely(!mw)) {
@@ -198,11 +199,11 @@ int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
 
 	spin_lock_bh(&mw->lock);
 
-	ret = rxe_check_bind_mw(qp, wqe, mw, mr);
+	ret = rxe_check_bind_mw(qp, wqe, mw, mr, access);
 	if (ret)
 		goto err_unlock;
 
-	rxe_do_bind_mw(qp, wqe, mw, mr);
+	rxe_do_bind_mw(qp, wqe, mw, mr, access);
 err_unlock:
 	spin_unlock_bh(&mw->lock);
 err_drop_mr:
-- 
2.39.2


* [PATCH for-next 4/6] RDMA/rxe: Introduce rxe access supported flags
  2023-05-30 22:13 [PATCH for-next 0/6] Misc cleanups and implement rereg_user_mr Bob Pearson
                   ` (2 preceding siblings ...)
  2023-05-30 22:13 ` [PATCH for-next 3/6] RDMA/rxe: Fix access checks in rxe_check_bind_mw Bob Pearson
@ 2023-05-30 22:13 ` Bob Pearson
  2023-05-30 22:13 ` [PATCH for-next 5/6] RDMA/rxe: Let rkey == lkey for local access Bob Pearson
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Bob Pearson @ 2023-05-30 22:13 UTC (permalink / raw)
  To: jgg, zyjzyj2000, edwards, linux-rdma; +Cc: Bob Pearson

Introduce bit masks of the access flags supported for MWs, MRs, and
QPs, and check the requested flags against them when the attributes
are set, rejecting unsupported flags.
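
For reference, a hedged userspace-level sketch (not part of this patch) of
one path the new checks cover: an application sets a QP's remote-access
permissions with ibv_modify_qp(), and rxe now validates the requested flags
against RXE_ACCESS_SUPPORTED_QP in rxe_qp_chk_attr(). The flag values below
are illustrative examples only.

  #include <infiniband/verbs.h>

  /* Request remote read/write permission on an already-created RC QP.
   * Returns 0 on success or an error code from ibv_modify_qp().
   */
  static int set_qp_remote_access(struct ibv_qp *qp)
  {
          struct ibv_qp_attr attr = {
                  .qp_access_flags = IBV_ACCESS_REMOTE_READ |
                                     IBV_ACCESS_REMOTE_WRITE,
          };

          return ibv_modify_qp(qp, &attr, IBV_QP_ACCESS_FLAGS);
  }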

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_mw.c    |  5 +++++
 drivers/infiniband/sw/rxe/rxe_qp.c    |  7 +++++++
 drivers/infiniband/sw/rxe/rxe_verbs.c |  6 ++++++
 drivers/infiniband/sw/rxe/rxe_verbs.h | 15 ++++++++++++---
 4 files changed, 30 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
index a7ec57ab8fad..d8a43d87de93 100644
--- a/drivers/infiniband/sw/rxe/rxe_mw.c
+++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -197,6 +197,11 @@ int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
 		mr = NULL;
 	}
 
+	if (access & ~RXE_ACCESS_SUPPORTED_MW) {
+		rxe_err_mw(mw, "access %#x not supported", access);
+		return -EOPNOTSUPP;
+	}
+
 	spin_lock_bh(&mw->lock);
 
 	ret = rxe_check_bind_mw(qp, wqe, mw, mr, access);
diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
index c5451a4488ca..95d4a6760c33 100644
--- a/drivers/infiniband/sw/rxe/rxe_qp.c
+++ b/drivers/infiniband/sw/rxe/rxe_qp.c
@@ -392,6 +392,13 @@ int rxe_qp_chk_attr(struct rxe_dev *rxe, struct rxe_qp *qp,
 	if (mask & IB_QP_CAP && rxe_qp_chk_cap(rxe, &attr->cap, !!qp->srq))
 		goto err1;
 
+	if (mask & IB_QP_ACCESS_FLAGS) {
+		if (!(qp_type(qp) == IB_QPT_RC || qp_type(qp) == IB_QPT_UC))
+			goto err1;
+		if (attr->qp_access_flags & ~RXE_ACCESS_SUPPORTED_QP)
+			goto err1;
+	}
+
 	if (mask & IB_QP_AV && rxe_av_chk_attr(qp, &attr->ah_attr))
 		goto err1;
 
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index dea605b7f683..bb2b9d40e242 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -1260,6 +1260,12 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd, u64 start,
 	struct rxe_mr *mr;
 	int err, cleanup_err;
 
+	if (access & ~RXE_ACCESS_SUPPORTED_MR) {
+		rxe_err_pd(pd, "access = %#x not supported (%#x)", access,
+				RXE_ACCESS_SUPPORTED_MR);
+		return ERR_PTR(-EOPNOTSUPP);
+	}
+
 	mr = kzalloc(sizeof(*mr), GFP_KERNEL);
 	if (!mr)
 		return ERR_PTR(-ENOMEM);
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index 0a2b7343e38f..2f2dc67f03dd 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -253,10 +253,19 @@ struct rxe_qp {
 	struct execute_work	cleanup_work;
 };
 
-enum rxe_access {
-	RXE_ACCESS_REMOTE	= (IB_ACCESS_REMOTE_READ
+enum {
+	RXE_ACCESS_REMOTE	= IB_ACCESS_REMOTE_READ
 				| IB_ACCESS_REMOTE_WRITE
-				| IB_ACCESS_REMOTE_ATOMIC),
+				| IB_ACCESS_REMOTE_ATOMIC,
+	RXE_ACCESS_SUPPORTED_MR	= RXE_ACCESS_REMOTE
+				| IB_ACCESS_LOCAL_WRITE
+				| IB_ACCESS_MW_BIND
+				| IB_ACCESS_ON_DEMAND
+				| IB_ACCESS_FLUSH_GLOBAL
+				| IB_ACCESS_FLUSH_PERSISTENT,
+	RXE_ACCESS_SUPPORTED_QP	= RXE_ACCESS_SUPPORTED_MR,
+	RXE_ACCESS_SUPPORTED_MW	= RXE_ACCESS_SUPPORTED_MR
+				| IB_ZERO_BASED,
 };
 
 enum rxe_mr_state {
-- 
2.39.2


* [PATCH for-next 5/6] RDMA/rxe: Let rkey == lkey for local access
  2023-05-30 22:13 [PATCH for-next 0/6] Misc cleanups and implement rereg_user_mr Bob Pearson
                   ` (3 preceding siblings ...)
  2023-05-30 22:13 ` [PATCH for-next 4/6] RDMA/rxe: Introduce rxe access supported flags Bob Pearson
@ 2023-05-30 22:13 ` Bob Pearson
  2023-05-30 22:13 ` [PATCH for-next 6/6] RDMA/rxe: Implement rereg_user_mr Bob Pearson
  2023-06-09 16:22 ` [PATCH for-next 0/6] Misc cleanups and implement rereg_user_mr Jason Gunthorpe
  6 siblings, 0 replies; 8+ messages in thread
From: Bob Pearson @ 2023-05-30 22:13 UTC (permalink / raw)
  To: jgg, zyjzyj2000, edwards, linux-rdma; +Cc: Bob Pearson

To conform with other drivers, stop using rkey == 0 as an indication
that no remote access flags are set. Set rkey == lkey by default for
all MRs and test the MR's access flags instead when remote access is
required.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_mr.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index b3bc4ac5fedd..f54042e9aeb2 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -47,16 +47,15 @@ int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length)
 
 static void rxe_mr_init(int access, struct rxe_mr *mr)
 {
-	u32 lkey = mr->elem.index << 8 | rxe_get_next_key(-1);
-	u32 rkey = (access & RXE_ACCESS_REMOTE) ? lkey : 0;
+	u32 key = mr->elem.index << 8 | rxe_get_next_key(-1);
 
 	/* set ibmr->l/rkey and also copy into private l/rkey
 	 * for user MRs these will always be the same
 	 * for cases where caller 'owns' the key portion
 	 * they may be different until REG_MR WQE is executed.
 	 */
-	mr->lkey = mr->ibmr.lkey = lkey;
-	mr->rkey = mr->ibmr.rkey = rkey;
+	mr->lkey = mr->ibmr.lkey = key;
+	mr->rkey = mr->ibmr.rkey = key;
 
 	mr->access = access;
 	mr->ibmr.page_size = PAGE_SIZE;
@@ -640,6 +639,7 @@ int rxe_invalidate_mr(struct rxe_qp *qp, u32 key)
 {
 	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
 	struct rxe_mr *mr;
+	int remote;
 	int ret;
 
 	mr = rxe_pool_get_index(&rxe->mr_pool, key >> 8);
@@ -649,9 +649,10 @@ int rxe_invalidate_mr(struct rxe_qp *qp, u32 key)
 		goto err;
 	}
 
-	if (mr->rkey ? (key != mr->rkey) : (key != mr->lkey)) {
+	remote = mr->access & RXE_ACCESS_REMOTE;
+	if (remote ? (key != mr->rkey) : (key != mr->lkey)) {
 		rxe_dbg_mr(mr, "wr key (%#x) doesn't match mr key (%#x)\n",
-			key, (mr->rkey ? mr->rkey : mr->lkey));
+			key, (remote ? mr->rkey : mr->lkey));
 		ret = -EINVAL;
 		goto err_drop_ref;
 	}
@@ -711,7 +712,7 @@ int rxe_reg_fast_mr(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
 
 	mr->access = access;
 	mr->lkey = key;
-	mr->rkey = (access & RXE_ACCESS_REMOTE) ? key : 0;
+	mr->rkey = key;
 	mr->ibmr.iova = wqe->wr.wr.reg.mr->iova;
 	mr->state = RXE_MR_STATE_VALID;
 
-- 
2.39.2


* [PATCH for-next 6/6] RDMA/rxe: Implement rereg_user_mr
  2023-05-30 22:13 [PATCH for-next 0/6] Misc cleanups and implement rereg_user_mr Bob Pearson
                   ` (4 preceding siblings ...)
  2023-05-30 22:13 ` [PATCH for-next 5/6] RDMA/rxe: Let rkey == lkey for local access Bob Pearson
@ 2023-05-30 22:13 ` Bob Pearson
  2023-06-09 16:22 ` [PATCH for-next 0/6] Misc cleanups and implement rereg_user_mr Jason Gunthorpe
  6 siblings, 0 replies; 8+ messages in thread
From: Bob Pearson @ 2023-05-30 22:13 UTC (permalink / raw)
  To: jgg, zyjzyj2000, edwards, linux-rdma; +Cc: Bob Pearson

Implement the two simple cases of ib_rereg_user_mr: changing the MR's
protection domain (IB_MR_REREG_PD) and changing its access flags
(IB_MR_REREG_ACCESS).
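
For reference, a hedged userspace-level sketch (not part of this patch) of
how an application could exercise the two supported cases through
libibverbs' ibv_rereg_mr(); 'mr' and 'new_pd' stand for objects the
application already owns, and the access flags shown are illustrative only.

  #include <infiniband/verbs.h>

  /* Case 1: move an existing MR to a different protection domain.
   * Case 2: change the MR's access flags.
   * Either call returns 0 on success or a nonzero error code.
   */
  static int rereg_examples(struct ibv_mr *mr, struct ibv_pd *new_pd)
  {
          int ret;

          ret = ibv_rereg_mr(mr, IBV_REREG_MR_CHANGE_PD, new_pd,
                             NULL, 0, 0);
          if (ret)
                  return ret;

          return ibv_rereg_mr(mr, IBV_REREG_MR_CHANGE_ACCESS, mr->pd,
                              NULL, 0,
                              IBV_ACCESS_LOCAL_WRITE |
                              IBV_ACCESS_REMOTE_READ);
  }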

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_verbs.c | 35 +++++++++++++++++++++++++++
 drivers/infiniband/sw/rxe/rxe_verbs.h |  5 ++++
 2 files changed, 40 insertions(+)

diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index bb2b9d40e242..f6396333bcef 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -1299,6 +1299,40 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd, u64 start,
 	return ERR_PTR(err);
 }
 
+static struct ib_mr *rxe_rereg_user_mr(struct ib_mr *ibmr, int flags,
+				       u64 start, u64 length, u64 iova,
+				       int access, struct ib_pd *ibpd,
+				       struct ib_udata *udata)
+{
+	struct rxe_mr *mr = to_rmr(ibmr);
+	struct rxe_pd *old_pd = to_rpd(ibmr->pd);
+	struct rxe_pd *pd = to_rpd(ibpd);
+
+	/* for now only support the two easy cases:
+	 * rereg_pd and rereg_access
+	 */
+	if (flags & ~RXE_MR_REREG_SUPPORTED) {
+		rxe_err_mr(mr, "flags = %#x not supported", flags);
+		return ERR_PTR(-EOPNOTSUPP);
+	}
+
+	if (flags & IB_MR_REREG_PD) {
+		rxe_put(old_pd);
+		rxe_get(pd);
+		mr->ibmr.pd = ibpd;
+	}
+
+	if (flags & IB_MR_REREG_ACCESS) {
+		if (access & ~RXE_ACCESS_SUPPORTED_MR) {
+			rxe_err_mr(mr, "access = %#x not supported", access);
+			return ERR_PTR(-EOPNOTSUPP);
+		}
+		mr->access = access;
+	}
+
+	return NULL;
+}
+
 static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type,
 				  u32 max_num_sg)
 {
@@ -1451,6 +1485,7 @@ static const struct ib_device_ops rxe_dev_ops = {
 	.query_srq = rxe_query_srq,
 	.reg_user_mr = rxe_reg_user_mr,
 	.req_notify_cq = rxe_req_notify_cq,
+	.rereg_user_mr = rxe_rereg_user_mr,
 	.resize_cq = rxe_resize_cq,
 
 	INIT_RDMA_OBJ_SIZE(ib_ah, rxe_ah, ibah),
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index 2f2dc67f03dd..cb18b83b73c1 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -284,6 +284,11 @@ enum rxe_mr_lookup_type {
 	RXE_LOOKUP_REMOTE,
 };
 
+enum rxe_rereg {
+	RXE_MR_REREG_SUPPORTED	= IB_MR_REREG_PD
+				| IB_MR_REREG_ACCESS,
+};
+
 static inline int rkey_is_mw(u32 rkey)
 {
 	u32 index = rkey >> 8;
-- 
2.39.2


* Re: [PATCH for-next 0/6] Misc cleanups and implement rereg_user_mr
  2023-05-30 22:13 [PATCH for-next 0/6] Misc cleanups and implement rereg_user_mr Bob Pearson
                   ` (5 preceding siblings ...)
  2023-05-30 22:13 ` [PATCH for-next 6/6] RDMA/rxe: Implement rereg_user_mr Bob Pearson
@ 2023-06-09 16:22 ` Jason Gunthorpe
  6 siblings, 0 replies; 8+ messages in thread
From: Jason Gunthorpe @ 2023-06-09 16:22 UTC (permalink / raw)
  To: Bob Pearson; +Cc: zyjzyj2000, edwards, linux-rdma

On Tue, May 30, 2023 at 05:13:29PM -0500, Bob Pearson wrote:
> This patch set does some preparatory cleanups and then implements the
> two simple cases of ib_rereg_user_mr.
> 
> Bob Pearson (6):
>   RDMA/rxe: Rename IB_ACCESS_REMOTE
>   rdma/rxe: Optimize send path in rxe_resp.c
>   RDMA/RXE: Fix access checks in rxe_check_bind_mw
>   RDMA/RXE: Introduce rxe access supported flags
>   RDMA/RXE: Let rkey == lkey for local access
>   RDMA/RXE: Implement rereg_user_mr

Applied to for-next, thanks

Jason
