* [PATCH for-next v3 0/8] RDMA/rxe: Misc cleanups
@ 2023-07-27 19:28 Bob Pearson
From: Bob Pearson @ 2023-07-27 19:28 UTC
To: jgg, zyjzyj2000, linux-rdma, jhack; +Cc: Bob Pearson
This patch set is a collection of cleanup patches previously
posted as part of a larger set that included support for
nonlinear or fragmented packets. It has been rebased to the
current for-next branch after the application of three previous
patch sets:
RDMA/rxe: Fix incomplete state save in rxe_requester
RDMA/rxe: Misc fixes and cleanups
Enable rcu locking of verbs objects
These changes are a prerequisite for a patch set to follow
which implements support for nonlinear packets. They are
mainly a code cleanup of rxe_req.c.
Bob Pearson (8):
RDMA/rxe: Add pad size to struct rxe_pkt_info
RDMA/rxe: Isolate code to fill request roce headers
RDMA/rxe: Isolate request payload code in a subroutine
RDMA/rxe: Remove paylen parameter from rxe_init_packet
RDMA/rxe: Isolate code to build request packet
RDMA/rxe: Put fake udp send code in a subroutine
RDMA/rxe: Combine setting pkt info
RDMA/rxe: Move next_opcode to rxe_opcode.c
drivers/infiniband/sw/rxe/rxe_hdr.h | 1 +
drivers/infiniband/sw/rxe/rxe_icrc.c | 4 +-
drivers/infiniband/sw/rxe/rxe_loc.h | 2 +-
drivers/infiniband/sw/rxe/rxe_net.c | 11 +-
drivers/infiniband/sw/rxe/rxe_opcode.c | 176 +++++++++-
drivers/infiniband/sw/rxe/rxe_opcode.h | 4 +
drivers/infiniband/sw/rxe/rxe_recv.c | 1 +
drivers/infiniband/sw/rxe/rxe_req.c | 451 ++++++++-----------------
drivers/infiniband/sw/rxe/rxe_resp.c | 36 +-
9 files changed, 354 insertions(+), 332 deletions(-)
base-commit: 693e1cdebb50d2aa67406411ca6d5be195d62771
prerequisite-patch-id: c3994e7a93e37e0ce4f50e0c768f3c1a0059a02f
prerequisite-patch-id: 48e13f6ccb560fdeacbd20aaf6696782c23d1190
prerequisite-patch-id: da75fb8eaa863df840e7b392b5048fcc72b0bef3
prerequisite-patch-id: d0877649e2edaf00585a0a6a80391fe0d7bbc13b
prerequisite-patch-id: 6495b1d1f664f8ab91ed9ef9d2ca5b3b27d7df35
prerequisite-patch-id: a6367b8fedd0d8999139c8b857ebbd3ce5c72245
prerequisite-patch-id: 78c95e90a5e49b15b7af8ef57130739c143e88b5
prerequisite-patch-id: 7c65a01066c0418de6897bc8b5f44d078d21b0ec
prerequisite-patch-id: 8ab09f93c23c7875e56c597e69236c30464723b6
prerequisite-patch-id: ca9d84b34873b49048e42fb4c13a2a097c215c46
prerequisite-patch-id: 0f6a587501c8246e1185dfd0cbf5e2044c5f9b13
prerequisite-patch-id: 5246df93137429916d76e75b9a13a4ad5ceb0bad
--
2.39.2
* [PATCH for-next v3 1/8] RDMA/rxe: Add pad size to struct rxe_pkt_info
From: Bob Pearson @ 2023-07-27 19:28 UTC
To: jgg, zyjzyj2000, linux-rdma, jhack; +Cc: Bob Pearson
Add the packet pad size to struct rxe_pkt_info and use this to
simplify references to pad size in the rxe driver.
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
drivers/infiniband/sw/rxe/rxe_hdr.h | 1 +
drivers/infiniband/sw/rxe/rxe_icrc.c | 4 ++--
drivers/infiniband/sw/rxe/rxe_recv.c | 1 +
drivers/infiniband/sw/rxe/rxe_req.c | 20 ++++++++++----------
drivers/infiniband/sw/rxe/rxe_resp.c | 24 +++++++++++-------------
5 files changed, 25 insertions(+), 25 deletions(-)
diff --git a/drivers/infiniband/sw/rxe/rxe_hdr.h b/drivers/infiniband/sw/rxe/rxe_hdr.h
index 46f82b27fcd2..1dcdb87fa01a 100644
--- a/drivers/infiniband/sw/rxe/rxe_hdr.h
+++ b/drivers/infiniband/sw/rxe/rxe_hdr.h
@@ -22,6 +22,7 @@ struct rxe_pkt_info {
u16 paylen; /* length of bth - icrc */
u8 port_num; /* port pkt received on */
u8 opcode; /* bth opcode of packet */
+ u8 pad; /* pad size of packet */
};
/* Macros should be used only for received skb */
diff --git a/drivers/infiniband/sw/rxe/rxe_icrc.c b/drivers/infiniband/sw/rxe/rxe_icrc.c
index fdf5f08cd8f1..c9aa0995e900 100644
--- a/drivers/infiniband/sw/rxe/rxe_icrc.c
+++ b/drivers/infiniband/sw/rxe/rxe_icrc.c
@@ -148,7 +148,7 @@ int rxe_icrc_check(struct sk_buff *skb, struct rxe_pkt_info *pkt)
icrc = rxe_icrc_hdr(skb, pkt);
icrc = rxe_crc32(pkt->rxe, icrc, (u8 *)payload_addr(pkt),
- payload_size(pkt) + bth_pad(pkt));
+ payload_size(pkt) + pkt->pad);
icrc = ~icrc;
if (unlikely(icrc != pkt_icrc))
@@ -170,6 +170,6 @@ void rxe_icrc_generate(struct sk_buff *skb, struct rxe_pkt_info *pkt)
icrcp = (__be32 *)(pkt->hdr + pkt->paylen - RXE_ICRC_SIZE);
icrc = rxe_icrc_hdr(skb, pkt);
icrc = rxe_crc32(pkt->rxe, icrc, (u8 *)payload_addr(pkt),
- payload_size(pkt) + bth_pad(pkt));
+ payload_size(pkt) + pkt->pad);
*icrcp = ~icrc;
}
diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c
index 5861e4244049..f912a913f89a 100644
--- a/drivers/infiniband/sw/rxe/rxe_recv.c
+++ b/drivers/infiniband/sw/rxe/rxe_recv.c
@@ -329,6 +329,7 @@ void rxe_rcv(struct sk_buff *skb)
pkt->psn = bth_psn(pkt);
pkt->qp = NULL;
pkt->mask |= rxe_opcode[pkt->opcode].mask;
+ pkt->pad = bth_pad(pkt);
if (unlikely(skb->len < header_size(pkt)))
goto drop;
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index d8c41fd626a9..31858761ca1e 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -420,18 +420,17 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp,
struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
struct sk_buff *skb;
struct rxe_send_wr *ibwr = &wqe->wr;
- int pad = (-payload) & 0x3;
- int paylen;
int solicited;
u32 qp_num;
int ack_req;
/* length from start of bth to end of icrc */
- paylen = rxe_opcode[opcode].length + payload + pad + RXE_ICRC_SIZE;
- pkt->paylen = paylen;
+ pkt->pad = (-payload) & 0x3;
+ pkt->paylen = rxe_opcode[opcode].length + payload +
+ pkt->pad + RXE_ICRC_SIZE;
/* init skb */
- skb = rxe_init_packet(rxe, av, paylen, pkt);
+ skb = rxe_init_packet(rxe, av, pkt->paylen, pkt);
if (unlikely(!skb))
return NULL;
@@ -450,7 +449,8 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp,
if (ack_req)
qp->req.noack_pkts = 0;
- bth_init(pkt, pkt->opcode, solicited, 0, pad, IB_DEFAULT_PKEY_FULL, qp_num,
+ bth_init(pkt, pkt->opcode, solicited, 0, pkt->pad,
+ IB_DEFAULT_PKEY_FULL, qp_num,
ack_req, pkt->psn);
/* init optional headers */
@@ -499,6 +499,7 @@ static int finish_packet(struct rxe_qp *qp, struct rxe_av *av,
struct rxe_send_wqe *wqe, struct rxe_pkt_info *pkt,
struct sk_buff *skb, u32 payload)
{
+ u8 *pad_addr;
int err;
err = rxe_prepare(av, pkt, skb);
@@ -520,10 +521,9 @@ static int finish_packet(struct rxe_qp *qp, struct rxe_av *av,
if (err)
return err;
}
- if (bth_pad(pkt)) {
- u8 *pad = payload_addr(pkt) + payload;
-
- memset(pad, 0, bth_pad(pkt));
+ if (pkt->pad) {
+ pad_addr = payload_addr(pkt) + payload;
+ memset(pad_addr, 0, pkt->pad);
}
} else if (pkt->mask & RXE_FLUSH_MASK) {
/* oA19-2: shall have no payload. */
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index 64c64f5f36a8..fc2f55329fa2 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -525,7 +525,7 @@ static enum resp_states check_rkey(struct rxe_qp *qp,
skip_check_range:
if (pkt->mask & (RXE_WRITE_MASK | RXE_ATOMIC_WRITE_MASK)) {
if (resid > mtu) {
- if (pktlen != mtu || bth_pad(pkt)) {
+ if (pktlen != mtu || pkt->pad) {
state = RESPST_ERR_LENGTH;
goto err;
}
@@ -534,7 +534,7 @@ static enum resp_states check_rkey(struct rxe_qp *qp,
state = RESPST_ERR_LENGTH;
goto err;
}
- if ((bth_pad(pkt) != (0x3 & (-resid)))) {
+ if ((pkt->pad != (0x3 & (-resid)))) {
/* This case may not be exactly that
* but nothing else fits.
*/
@@ -766,27 +766,25 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp,
{
struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
struct sk_buff *skb;
- int paylen;
- int pad;
int err;
/*
* allocate packet
*/
- pad = (-payload) & 0x3;
- paylen = rxe_opcode[opcode].length + payload + pad + RXE_ICRC_SIZE;
+ ack->pad = (-payload) & 0x3;
+ ack->paylen = rxe_opcode[opcode].length + payload +
+ ack->pad + RXE_ICRC_SIZE;
- skb = rxe_init_packet(rxe, &qp->pri_av, paylen, ack);
+ skb = rxe_init_packet(rxe, &qp->pri_av, ack->paylen, ack);
if (!skb)
return NULL;
ack->qp = qp;
ack->opcode = opcode;
ack->mask = rxe_opcode[opcode].mask;
- ack->paylen = paylen;
ack->psn = psn;
- bth_init(ack, opcode, 0, 0, pad, IB_DEFAULT_PKEY_FULL,
+ bth_init(ack, opcode, 0, 0, ack->pad, IB_DEFAULT_PKEY_FULL,
qp->attr.dest_qp_num, 0, psn);
if (ack->mask & RXE_AETH_MASK) {
@@ -874,6 +872,7 @@ static enum resp_states read_reply(struct rxe_qp *qp,
int err;
struct resp_res *res = qp->resp.res;
struct rxe_mr *mr;
+ u8 *pad_addr;
if (!res) {
res = rxe_prepare_res(qp, req_pkt, RXE_READ_MASK);
@@ -932,10 +931,9 @@ static enum resp_states read_reply(struct rxe_qp *qp,
goto err_out;
}
- if (bth_pad(&ack_pkt)) {
- u8 *pad = payload_addr(&ack_pkt) + payload;
-
- memset(pad, 0, bth_pad(&ack_pkt));
+ if (ack_pkt.pad) {
+ pad_addr = payload_addr(&ack_pkt) + payload;
+ memset(pad_addr, 0, ack_pkt.pad);
}
/* rxe_xmit_packet always consumes the skb */
--
2.39.2
* [PATCH for-next v3 2/8] RDMA/rxe: Isolate code to fill request roce headers
From: Bob Pearson @ 2023-07-27 19:28 UTC
To: jgg, zyjzyj2000, linux-rdma, jhack; +Cc: Bob Pearson
Isolate the code that fills in the RoCE headers of a request
packet into a subroutine named rxe_init_roce_hdrs().
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
drivers/infiniband/sw/rxe/rxe_req.c | 108 +++++++++++++++-------------
1 file changed, 57 insertions(+), 51 deletions(-)
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index 31858761ca1e..6e9c8da001a4 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -411,86 +411,92 @@ static inline int get_mtu(struct rxe_qp *qp)
return rxe->port.mtu_cap;
}
-static struct sk_buff *init_req_packet(struct rxe_qp *qp,
- struct rxe_av *av,
- struct rxe_send_wqe *wqe,
- int opcode, u32 payload,
- struct rxe_pkt_info *pkt)
+static void rxe_init_roce_hdrs(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
+ struct rxe_pkt_info *pkt)
{
- struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
- struct sk_buff *skb;
- struct rxe_send_wr *ibwr = &wqe->wr;
- int solicited;
- u32 qp_num;
- int ack_req;
-
- /* length from start of bth to end of icrc */
- pkt->pad = (-payload) & 0x3;
- pkt->paylen = rxe_opcode[opcode].length + payload +
- pkt->pad + RXE_ICRC_SIZE;
-
- /* init skb */
- skb = rxe_init_packet(rxe, av, pkt->paylen, pkt);
- if (unlikely(!skb))
- return NULL;
+ struct rxe_send_wr *wr = &wqe->wr;
+ int is_send;
+ int is_write_imm;
+ int is_end;
+ int solicited;
+ u32 dst_qpn;
+ u32 qkey;
+ int ack_req;
/* init bth */
- solicited = (ibwr->send_flags & IB_SEND_SOLICITED) &&
- (pkt->mask & RXE_END_MASK) &&
- ((pkt->mask & (RXE_SEND_MASK)) ||
- (pkt->mask & (RXE_WRITE_MASK | RXE_IMMDT_MASK)) ==
- (RXE_WRITE_MASK | RXE_IMMDT_MASK));
-
- qp_num = (pkt->mask & RXE_DETH_MASK) ? ibwr->wr.ud.remote_qpn :
- qp->attr.dest_qp_num;
-
- ack_req = ((pkt->mask & RXE_END_MASK) ||
- (qp->req.noack_pkts++ > RXE_MAX_PKT_PER_ACK));
+ is_send = pkt->mask & RXE_SEND_MASK;
+ is_write_imm = (pkt->mask & RXE_WRITE_MASK) &&
+ (pkt->mask & RXE_IMMDT_MASK);
+ is_end = pkt->mask & RXE_END_MASK;
+ solicited = (wr->send_flags & IB_SEND_SOLICITED) && is_end &&
+ (is_send || is_write_imm);
+ dst_qpn = (pkt->mask & RXE_DETH_MASK) ? wr->wr.ud.remote_qpn :
+ qp->attr.dest_qp_num;
+ ack_req = is_end || (qp->req.noack_pkts++ > RXE_MAX_PKT_PER_ACK);
if (ack_req)
qp->req.noack_pkts = 0;
bth_init(pkt, pkt->opcode, solicited, 0, pkt->pad,
- IB_DEFAULT_PKEY_FULL, qp_num,
- ack_req, pkt->psn);
+ IB_DEFAULT_PKEY_FULL, dst_qpn, ack_req, pkt->psn);
- /* init optional headers */
+ /* init extended headers */
if (pkt->mask & RXE_RETH_MASK) {
if (pkt->mask & RXE_FETH_MASK)
- reth_set_rkey(pkt, ibwr->wr.flush.rkey);
+ reth_set_rkey(pkt, wr->wr.flush.rkey);
else
- reth_set_rkey(pkt, ibwr->wr.rdma.rkey);
+ reth_set_rkey(pkt, wr->wr.rdma.rkey);
reth_set_va(pkt, wqe->iova);
reth_set_len(pkt, wqe->dma.resid);
}
- /* Fill Flush Extension Transport Header */
if (pkt->mask & RXE_FETH_MASK)
- feth_init(pkt, ibwr->wr.flush.type, ibwr->wr.flush.level);
+ feth_init(pkt, wr->wr.flush.type, wr->wr.flush.level);
if (pkt->mask & RXE_IMMDT_MASK)
- immdt_set_imm(pkt, ibwr->ex.imm_data);
+ immdt_set_imm(pkt, wr->ex.imm_data);
if (pkt->mask & RXE_IETH_MASK)
- ieth_set_rkey(pkt, ibwr->ex.invalidate_rkey);
+ ieth_set_rkey(pkt, wr->ex.invalidate_rkey);
if (pkt->mask & RXE_ATMETH_MASK) {
atmeth_set_va(pkt, wqe->iova);
- if (opcode == IB_OPCODE_RC_COMPARE_SWAP) {
- atmeth_set_swap_add(pkt, ibwr->wr.atomic.swap);
- atmeth_set_comp(pkt, ibwr->wr.atomic.compare_add);
+ if (pkt->opcode == IB_OPCODE_RC_COMPARE_SWAP) {
+ atmeth_set_swap_add(pkt, wr->wr.atomic.swap);
+ atmeth_set_comp(pkt, wr->wr.atomic.compare_add);
} else {
- atmeth_set_swap_add(pkt, ibwr->wr.atomic.compare_add);
+ atmeth_set_swap_add(pkt, wr->wr.atomic.compare_add);
}
- atmeth_set_rkey(pkt, ibwr->wr.atomic.rkey);
+ atmeth_set_rkey(pkt, wr->wr.atomic.rkey);
}
if (pkt->mask & RXE_DETH_MASK) {
- if (qp->ibqp.qp_num == 1)
- deth_set_qkey(pkt, GSI_QKEY);
- else
- deth_set_qkey(pkt, ibwr->wr.ud.remote_qkey);
- deth_set_sqp(pkt, qp->ibqp.qp_num);
+ qkey = (qp->ibqp.qp_num == 1) ? GSI_QKEY :
+ wr->wr.ud.remote_qkey;
+ deth_set_qkey(pkt, qkey);
+ deth_set_sqp(pkt, qp_num(qp));
}
+}
+
+static struct sk_buff *init_req_packet(struct rxe_qp *qp,
+ struct rxe_av *av,
+ struct rxe_send_wqe *wqe,
+ int opcode, u32 payload,
+ struct rxe_pkt_info *pkt)
+{
+ struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
+ struct sk_buff *skb;
+
+ /* length from start of bth to end of icrc */
+ pkt->pad = (-payload) & 0x3;
+ pkt->paylen = rxe_opcode[opcode].length + payload +
+ pkt->pad + RXE_ICRC_SIZE;
+
+ /* init skb */
+ skb = rxe_init_packet(rxe, av, pkt->paylen, pkt);
+ if (unlikely(!skb))
+ return NULL;
+
+ rxe_init_roce_hdrs(qp, wqe, pkt);
return skb;
}
--
2.39.2
* [PATCH for-next v3 3/8] RDMA/rxe: Isolate request payload code in a subroutine
From: Bob Pearson @ 2023-07-27 19:28 UTC
To: jgg, zyjzyj2000, linux-rdma, jhack; +Cc: Bob Pearson
Isolate the code that fills the payload of a request packet into
a subroutine named rxe_init_payload().
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
drivers/infiniband/sw/rxe/rxe_req.c | 34 +++++++++++++++++------------
1 file changed, 20 insertions(+), 14 deletions(-)
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index 6e9c8da001a4..c92e561b8a0b 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -477,6 +477,25 @@ static void rxe_init_roce_hdrs(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
}
}
+static int rxe_init_payload(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
+ struct rxe_pkt_info *pkt, u32 payload)
+{
+ void *data;
+ int err = 0;
+
+ if (wqe->wr.send_flags & IB_SEND_INLINE) {
+ data = &wqe->dma.inline_data[wqe->dma.sge_offset];
+ memcpy(payload_addr(pkt), data, payload);
+ wqe->dma.resid -= payload;
+ wqe->dma.sge_offset += payload;
+ } else {
+ err = copy_data(qp->pd, 0, &wqe->dma, payload_addr(pkt),
+ payload, RXE_FROM_MR_OBJ);
+ }
+
+ return err;
+}
+
static struct sk_buff *init_req_packet(struct rxe_qp *qp,
struct rxe_av *av,
struct rxe_send_wqe *wqe,
@@ -513,20 +532,7 @@ static int finish_packet(struct rxe_qp *qp, struct rxe_av *av,
return err;
if (pkt->mask & RXE_WRITE_OR_SEND_MASK) {
- if (wqe->wr.send_flags & IB_SEND_INLINE) {
- u8 *tmp = &wqe->dma.inline_data[wqe->dma.sge_offset];
-
- memcpy(payload_addr(pkt), tmp, payload);
-
- wqe->dma.resid -= payload;
- wqe->dma.sge_offset += payload;
- } else {
- err = copy_data(qp->pd, 0, &wqe->dma,
- payload_addr(pkt), payload,
- RXE_FROM_MR_OBJ);
- if (err)
- return err;
- }
+ err = rxe_init_payload(qp, wqe, pkt, payload);
if (pkt->pad) {
pad_addr = payload_addr(pkt) + payload;
memset(pad_addr, 0, pkt->pad);
--
2.39.2
* [PATCH for-next v3 4/8] RDMA/rxe: Remove paylen parameter from rxe_init_packet
From: Bob Pearson @ 2023-07-27 19:28 UTC
To: jgg, zyjzyj2000, linux-rdma, jhack; +Cc: Bob Pearson
Remove paylen as a parameter to rxe_init_packet() since it
is already available as pkt->paylen.
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
drivers/infiniband/sw/rxe/rxe_loc.h | 2 +-
drivers/infiniband/sw/rxe/rxe_net.c | 7 ++++---
drivers/infiniband/sw/rxe/rxe_req.c | 2 +-
drivers/infiniband/sw/rxe/rxe_resp.c | 2 +-
4 files changed, 7 insertions(+), 6 deletions(-)
diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 666e06a82bc9..cf38f4dcff78 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -90,7 +90,7 @@ void rxe_mw_cleanup(struct rxe_pool_elem *elem);
/* rxe_net.c */
struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
- int paylen, struct rxe_pkt_info *pkt);
+ struct rxe_pkt_info *pkt);
int rxe_prepare(struct rxe_av *av, struct rxe_pkt_info *pkt,
struct sk_buff *skb);
int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
index 0e447420a441..006c2d60f04d 100644
--- a/drivers/infiniband/sw/rxe/rxe_net.c
+++ b/drivers/infiniband/sw/rxe/rxe_net.c
@@ -511,7 +511,7 @@ int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
}
struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
- int paylen, struct rxe_pkt_info *pkt)
+ struct rxe_pkt_info *pkt)
{
unsigned int hdr_len;
struct sk_buff *skb = NULL;
@@ -525,7 +525,8 @@ struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
hdr_len = ETH_HLEN + sizeof(struct udphdr) +
sizeof(struct ipv6hdr);
- skb = alloc_skb(paylen + hdr_len + LL_RESERVED_SPACE(ndev), GFP_ATOMIC);
+ skb = alloc_skb(pkt->paylen + hdr_len + LL_RESERVED_SPACE(ndev),
+ GFP_ATOMIC);
if (unlikely(!skb))
goto out;
@@ -541,7 +542,7 @@ struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
pkt->rxe = rxe;
pkt->port_num = port_num;
- pkt->hdr = skb_put(skb, paylen);
+ pkt->hdr = skb_put(skb, pkt->paylen);
pkt->mask |= RXE_GRH_MASK;
out:
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index c92e561b8a0b..e444e1f91523 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -511,7 +511,7 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp,
pkt->pad + RXE_ICRC_SIZE;
/* init skb */
- skb = rxe_init_packet(rxe, av, pkt->paylen, pkt);
+ skb = rxe_init_packet(rxe, av, pkt);
if (unlikely(!skb))
return NULL;
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index fc2f55329fa2..7e79d3e4d64e 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -775,7 +775,7 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp,
ack->paylen = rxe_opcode[opcode].length + payload +
ack->pad + RXE_ICRC_SIZE;
- skb = rxe_init_packet(rxe, &qp->pri_av, ack->paylen, ack);
+ skb = rxe_init_packet(rxe, &qp->pri_av, ack);
if (!skb)
return NULL;
--
2.39.2
* [PATCH for-next v3 5/8] RDMA/rxe: Isolate code to build request packet
From: Bob Pearson @ 2023-07-27 19:28 UTC
To: jgg, zyjzyj2000, linux-rdma, jhack; +Cc: Bob Pearson
Isolate the code to build a request packet into a single
subroutine called rxe_init_req_packet().
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
drivers/infiniband/sw/rxe/rxe_req.c | 127 +++++++++++++---------------
1 file changed, 60 insertions(+), 67 deletions(-)
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index e444e1f91523..27be1a946d62 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -496,14 +496,32 @@ static int rxe_init_payload(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
return err;
}
-static struct sk_buff *init_req_packet(struct rxe_qp *qp,
- struct rxe_av *av,
- struct rxe_send_wqe *wqe,
- int opcode, u32 payload,
- struct rxe_pkt_info *pkt)
+static struct sk_buff *rxe_init_req_packet(struct rxe_qp *qp,
+ struct rxe_send_wqe *wqe,
+ int opcode, u32 payload,
+ struct rxe_pkt_info *pkt)
{
struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
- struct sk_buff *skb;
+ struct sk_buff *skb = NULL;
+ struct rxe_av *av;
+ struct rxe_ah *ah = NULL;
+ u8 *pad_addr;
+ int err;
+
+ pkt->rxe = rxe;
+ pkt->opcode = opcode;
+ pkt->qp = qp;
+ pkt->psn = qp->req.psn;
+ pkt->mask = rxe_opcode[opcode].mask;
+ pkt->wqe = wqe;
+ pkt->port_num = 1;
+
+ /* get address vector and address handle for UD qps only */
+ av = rxe_get_av(pkt, &ah);
+ if (unlikely(!av)) {
+ err = -EINVAL;
+ goto err_out;
+ }
/* length from start of bth to end of icrc */
pkt->pad = (-payload) & 0x3;
@@ -512,31 +530,19 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp,
/* init skb */
skb = rxe_init_packet(rxe, av, pkt);
- if (unlikely(!skb))
- return NULL;
+ if (unlikely(!skb)) {
+ err = -ENOMEM;
+ goto err_out;
+ }
+ /* init roce headers */
rxe_init_roce_hdrs(qp, wqe, pkt);
- return skb;
-}
-
-static int finish_packet(struct rxe_qp *qp, struct rxe_av *av,
- struct rxe_send_wqe *wqe, struct rxe_pkt_info *pkt,
- struct sk_buff *skb, u32 payload)
-{
- u8 *pad_addr;
- int err;
-
- err = rxe_prepare(av, pkt, skb);
- if (err)
- return err;
-
+ /* init payload if any */
if (pkt->mask & RXE_WRITE_OR_SEND_MASK) {
err = rxe_init_payload(qp, wqe, pkt, payload);
- if (pkt->pad) {
- pad_addr = payload_addr(pkt) + payload;
- memset(pad_addr, 0, pkt->pad);
- }
+ if (unlikely(err))
+ goto err_out;
} else if (pkt->mask & RXE_FLUSH_MASK) {
/* oA19-2: shall have no payload. */
wqe->dma.resid = 0;
@@ -547,7 +553,32 @@ static int finish_packet(struct rxe_qp *qp, struct rxe_av *av,
wqe->dma.resid -= payload;
}
- return 0;
+ /* init pad and icrc */
+ if (pkt->pad) {
+ pad_addr = payload_addr(pkt) + payload;
+ memset(pad_addr, 0, pkt->pad);
+ }
+
+ /* init IP and UDP network headers */
+ err = rxe_prepare(av, pkt, skb);
+ if (unlikely(err))
+ goto err_out;
+
+ if (ah)
+ rxe_put(ah);
+
+ return skb;
+
+err_out:
+ if (err == -EFAULT)
+ wqe->status = IB_WC_LOC_PROT_ERR;
+ else
+ wqe->status = IB_WC_LOC_QP_OP_ERR;
+ if (skb)
+ kfree_skb(skb);
+ if (ah)
+ rxe_put(ah);
+ return NULL;
}
static void update_wqe_state(struct rxe_qp *qp,
@@ -678,7 +709,6 @@ static int rxe_do_local_ops(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
int rxe_requester(struct rxe_qp *qp)
{
- struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
struct rxe_pkt_info pkt;
struct sk_buff *skb;
struct rxe_send_wqe *wqe;
@@ -691,8 +721,6 @@ int rxe_requester(struct rxe_qp *qp)
struct rxe_send_wqe rollback_wqe;
u32 rollback_psn;
struct rxe_queue *q = qp->sq.queue;
- struct rxe_ah *ah;
- struct rxe_av *av;
unsigned long flags;
spin_lock_irqsave(&qp->state_lock, flags);
@@ -804,47 +832,12 @@ int rxe_requester(struct rxe_qp *qp)
payload = mtu;
}
- pkt.rxe = rxe;
- pkt.opcode = opcode;
- pkt.qp = qp;
- pkt.psn = qp->req.psn;
- pkt.mask = rxe_opcode[opcode].mask;
- pkt.wqe = wqe;
-
/* save wqe state before we build and send packet */
save_state(wqe, qp, &rollback_wqe, &rollback_psn);
- av = rxe_get_av(&pkt, &ah);
- if (unlikely(!av)) {
- rxe_dbg_qp(qp, "Failed no address vector\n");
- wqe->status = IB_WC_LOC_QP_OP_ERR;
- goto err;
- }
-
- skb = init_req_packet(qp, av, wqe, opcode, payload, &pkt);
- if (unlikely(!skb)) {
- rxe_dbg_qp(qp, "Failed allocating skb\n");
- wqe->status = IB_WC_LOC_QP_OP_ERR;
- if (ah)
- rxe_put(ah);
- goto err;
- }
-
- err = finish_packet(qp, av, wqe, &pkt, skb, payload);
- if (unlikely(err)) {
- rxe_dbg_qp(qp, "Error during finish packet\n");
- if (err == -EFAULT)
- wqe->status = IB_WC_LOC_PROT_ERR;
- else
- wqe->status = IB_WC_LOC_QP_OP_ERR;
- kfree_skb(skb);
- if (ah)
- rxe_put(ah);
+ skb = rxe_init_req_packet(qp, wqe, opcode, payload, &pkt);
+ if (!skb)
goto err;
- }
-
- if (ah)
- rxe_put(ah);
/* update wqe state as though we had sent it */
update_wqe_state(qp, wqe, &pkt);
--
2.39.2
* [PATCH for-next v3 6/8] RDMA/rxe: Put fake udp send code in a subroutine
From: Bob Pearson @ 2023-07-27 19:28 UTC
To: jgg, zyjzyj2000, linux-rdma, jhack; +Cc: Bob Pearson
Isolate the code that handles the case of an overlong UD send
into a subroutine named fake_udp_send().
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
drivers/infiniband/sw/rxe/rxe_req.c | 37 ++++++++++++++++-------------
1 file changed, 20 insertions(+), 17 deletions(-)
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index 27be1a946d62..8423d259f26a 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -707,6 +707,24 @@ static int rxe_do_local_ops(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
return 0;
}
+/* C10-93.1.1: If the total sum of all the buffer lengths specified for a
+ * UD message exceeds the MTU of the port as returned by QueryHCA, the CI
+ * shall not emit any packets for this message. Further, the CI shall not
+ * generate an error due to this condition.
+ */
+static void fake_udp_send(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
+{
+ wqe->first_psn = qp->req.psn;
+ wqe->last_psn = qp->req.psn;
+ qp->req.psn = (qp->req.psn + 1) & BTH_PSN_MASK;
+ qp->req.opcode = IB_OPCODE_UD_SEND_ONLY;
+ qp->req.wqe_index = queue_next_index(qp->sq.queue,
+ qp->req.wqe_index);
+ wqe->state = wqe_state_done;
+ wqe->status = IB_WC_SUCCESS;
+ rxe_run_task(&qp->comp.task);
+}
+
int rxe_requester(struct rxe_qp *qp)
{
struct rxe_pkt_info pkt;
@@ -810,23 +828,8 @@ int rxe_requester(struct rxe_qp *qp)
payload = (mask & (RXE_WRITE_OR_SEND_MASK | RXE_ATOMIC_WRITE_MASK)) ?
wqe->dma.resid : 0;
if (payload > mtu) {
- if (qp_type(qp) == IB_QPT_UD) {
- /* C10-93.1.1: If the total sum of all the buffer lengths specified for a
- * UD message exceeds the MTU of the port as returned by QueryHCA, the CI
- * shall not emit any packets for this message. Further, the CI shall not
- * generate an error due to this condition.
- */
-
- /* fake a successful UD send */
- wqe->first_psn = qp->req.psn;
- wqe->last_psn = qp->req.psn;
- qp->req.psn = (qp->req.psn + 1) & BTH_PSN_MASK;
- qp->req.opcode = IB_OPCODE_UD_SEND_ONLY;
- qp->req.wqe_index = queue_next_index(qp->sq.queue,
- qp->req.wqe_index);
- wqe->state = wqe_state_done;
- wqe->status = IB_WC_SUCCESS;
- rxe_sched_task(&qp->comp.task);
+ if (unlikely(qp_type(qp) == IB_QPT_UD)) {
+ fake_udp_send(qp, wqe);
goto done;
}
payload = mtu;
--
2.39.2
* [PATCH for-next v3 7/8] RDMA/rxe: Combine setting pkt info
2023-08-09 19:20 ` [PATCH for-next v3 0/7] RDMA/rxe: Misc cleanups Jason Gunthorpe
From: Bob Pearson @ 2023-07-27 19:28 UTC
To: jgg, zyjzyj2000, linux-rdma, jhack; +Cc: Bob Pearson
Move the setting of some rxe_pkt_info fields out of rxe_init_packet()
to join the rest of the field initialization in rxe_init_req_packet()
and prepare_ack_packet().
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
drivers/infiniband/sw/rxe/rxe_net.c | 6 ------
drivers/infiniband/sw/rxe/rxe_req.c | 4 +++-
drivers/infiniband/sw/rxe/rxe_resp.c | 12 ++++++++----
3 files changed, 11 insertions(+), 11 deletions(-)
diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
index 006c2d60f04d..94e347a7f386 100644
--- a/drivers/infiniband/sw/rxe/rxe_net.c
+++ b/drivers/infiniband/sw/rxe/rxe_net.c
@@ -516,7 +516,6 @@ struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
unsigned int hdr_len;
struct sk_buff *skb = NULL;
struct net_device *ndev = rxe->ndev;
- const int port_num = 1;
if (av->network_type == RXE_NETWORK_TYPE_IPV4)
hdr_len = ETH_HLEN + sizeof(struct udphdr) +
@@ -540,11 +539,6 @@ struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
else
skb->protocol = htons(ETH_P_IPV6);
- pkt->rxe = rxe;
- pkt->port_num = port_num;
- pkt->hdr = skb_put(skb, pkt->paylen);
- pkt->mask |= RXE_GRH_MASK;
-
out:
return skb;
}
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index 8423d259f26a..4db1bacdfdb8 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -512,7 +512,7 @@ static struct sk_buff *rxe_init_req_packet(struct rxe_qp *qp,
pkt->opcode = opcode;
pkt->qp = qp;
pkt->psn = qp->req.psn;
- pkt->mask = rxe_opcode[opcode].mask;
+ pkt->mask = rxe_opcode[opcode].mask | RXE_GRH_MASK;
pkt->wqe = wqe;
pkt->port_num = 1;
@@ -535,6 +535,8 @@ static struct sk_buff *rxe_init_req_packet(struct rxe_qp *qp,
goto err_out;
}
+ pkt->hdr = skb_put(skb, pkt->paylen);
+
/* init roce headers */
rxe_init_roce_hdrs(qp, wqe, pkt);
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index 7e79d3e4d64e..8a25c56dfd86 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -768,6 +768,13 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp,
struct sk_buff *skb;
int err;
+ ack->rxe = rxe;
+ ack->qp = qp;
+ ack->opcode = opcode;
+ ack->mask = rxe_opcode[opcode].mask | RXE_GRH_MASK;
+ ack->psn = psn;
+ ack->port_num = 1;
+
/*
* allocate packet
*/
@@ -779,10 +786,7 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp,
if (!skb)
return NULL;
- ack->qp = qp;
- ack->opcode = opcode;
- ack->mask = rxe_opcode[opcode].mask;
- ack->psn = psn;
+ ack->hdr = skb_put(skb, ack->paylen);
bth_init(ack, opcode, 0, 0, ack->pad, IB_DEFAULT_PKEY_FULL,
qp->attr.dest_qp_num, 0, psn);
--
2.39.2
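The pattern this patch applies can be illustrated with a standalone sketch: rather than letting a low-level allocation helper fill in part of the packet metadata, the caller sets all metadata fields together in one place, folding the GRH flag into the mask at the same time. All names here (`pkt_meta`, `init_meta`, `GRH_MASK`) are illustrative, not the driver's actual identifiers.

```c
#include <assert.h>
#include <string.h>

/* Simplified stand-in for struct rxe_pkt_info; field names are
 * illustrative only.
 */
struct pkt_meta {
	int opcode;
	int psn;
	int port_num;
	unsigned int mask;
};

#define GRH_MASK 0x100u	/* hypothetical flag bit, analogous to RXE_GRH_MASK */

/* Before the patch, a low-level helper set port_num and the GRH bit;
 * after it, the caller initializes everything in one place.
 */
static void init_meta(struct pkt_meta *m, int opcode, int psn,
		      unsigned int opcode_mask)
{
	memset(m, 0, sizeof(*m));
	m->opcode = opcode;
	m->psn = psn;
	m->port_num = 1;
	m->mask = opcode_mask | GRH_MASK;	/* GRH bit folded in here */
}
```

Grouping the assignments this way means a reader of the request (or response) path sees every metadata field initialized at a single site, instead of having to chase which fields the allocation helper fills in.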
* [PATCH for-next v3 8/8] RDMA/rxe: Move next_opcode to rxe_opcode.c
From: Bob Pearson @ 2023-07-27 19:28 UTC (permalink / raw)
To: jgg, zyjzyj2000, linux-rdma, jhack; +Cc: Bob Pearson
Localize opcode-specific code to rxe_opcode.c by moving next_opcode()
there and renaming it next_req_opcode().
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
drivers/infiniband/sw/rxe/rxe_opcode.c | 176 ++++++++++++++++++++++++-
drivers/infiniband/sw/rxe/rxe_opcode.h | 4 +
drivers/infiniband/sw/rxe/rxe_req.c | 173 +-----------------------
3 files changed, 183 insertions(+), 170 deletions(-)
diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.c b/drivers/infiniband/sw/rxe/rxe_opcode.c
index 5c0d5c6ffda4..f358b732a751 100644
--- a/drivers/infiniband/sw/rxe/rxe_opcode.c
+++ b/drivers/infiniband/sw/rxe/rxe_opcode.c
@@ -5,8 +5,8 @@
*/
#include <rdma/ib_pack.h>
-#include "rxe_opcode.h"
-#include "rxe_hdr.h"
+
+#include "rxe.h"
/* useful information about work request opcodes and pkt opcodes in
* table form
@@ -973,3 +973,175 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
},
};
+
+static int next_opcode_rc(int last_opcode, u32 wr_opcode, bool fits)
+{
+ switch (wr_opcode) {
+ case IB_WR_RDMA_WRITE:
+ if (last_opcode == IB_OPCODE_RC_RDMA_WRITE_FIRST ||
+ last_opcode == IB_OPCODE_RC_RDMA_WRITE_MIDDLE)
+ return fits ?
+ IB_OPCODE_RC_RDMA_WRITE_LAST :
+ IB_OPCODE_RC_RDMA_WRITE_MIDDLE;
+ else
+ return fits ?
+ IB_OPCODE_RC_RDMA_WRITE_ONLY :
+ IB_OPCODE_RC_RDMA_WRITE_FIRST;
+
+ case IB_WR_RDMA_WRITE_WITH_IMM:
+ if (last_opcode == IB_OPCODE_RC_RDMA_WRITE_FIRST ||
+ last_opcode == IB_OPCODE_RC_RDMA_WRITE_MIDDLE)
+ return fits ?
+ IB_OPCODE_RC_RDMA_WRITE_LAST_WITH_IMMEDIATE :
+ IB_OPCODE_RC_RDMA_WRITE_MIDDLE;
+ else
+ return fits ?
+ IB_OPCODE_RC_RDMA_WRITE_ONLY_WITH_IMMEDIATE :
+ IB_OPCODE_RC_RDMA_WRITE_FIRST;
+
+ case IB_WR_SEND:
+ if (last_opcode == IB_OPCODE_RC_SEND_FIRST ||
+ last_opcode == IB_OPCODE_RC_SEND_MIDDLE)
+ return fits ?
+ IB_OPCODE_RC_SEND_LAST :
+ IB_OPCODE_RC_SEND_MIDDLE;
+ else
+ return fits ?
+ IB_OPCODE_RC_SEND_ONLY :
+ IB_OPCODE_RC_SEND_FIRST;
+
+ case IB_WR_SEND_WITH_IMM:
+ if (last_opcode == IB_OPCODE_RC_SEND_FIRST ||
+ last_opcode == IB_OPCODE_RC_SEND_MIDDLE)
+ return fits ?
+ IB_OPCODE_RC_SEND_LAST_WITH_IMMEDIATE :
+ IB_OPCODE_RC_SEND_MIDDLE;
+ else
+ return fits ?
+ IB_OPCODE_RC_SEND_ONLY_WITH_IMMEDIATE :
+ IB_OPCODE_RC_SEND_FIRST;
+
+ case IB_WR_SEND_WITH_INV:
+ if (last_opcode == IB_OPCODE_RC_SEND_FIRST ||
+ last_opcode == IB_OPCODE_RC_SEND_MIDDLE)
+ return fits ?
+ IB_OPCODE_RC_SEND_LAST_WITH_INVALIDATE :
+ IB_OPCODE_RC_SEND_MIDDLE;
+ else
+ return fits ?
+ IB_OPCODE_RC_SEND_ONLY_WITH_INVALIDATE :
+ IB_OPCODE_RC_SEND_FIRST;
+
+ case IB_WR_FLUSH:
+ return IB_OPCODE_RC_FLUSH;
+
+ case IB_WR_RDMA_READ:
+ return IB_OPCODE_RC_RDMA_READ_REQUEST;
+
+ case IB_WR_ATOMIC_CMP_AND_SWP:
+ return IB_OPCODE_RC_COMPARE_SWAP;
+
+ case IB_WR_ATOMIC_FETCH_AND_ADD:
+ return IB_OPCODE_RC_FETCH_ADD;
+
+ case IB_WR_ATOMIC_WRITE:
+ return IB_OPCODE_RC_ATOMIC_WRITE;
+
+ case IB_WR_REG_MR:
+ case IB_WR_LOCAL_INV:
+ return OPCODE_NONE; /* not used */
+ }
+
+ return -EINVAL;
+}
+
+static int next_opcode_uc(int last_opcode, u32 wr_opcode, bool fits)
+{
+ switch (wr_opcode) {
+ case IB_WR_RDMA_WRITE:
+ if (last_opcode == IB_OPCODE_UC_RDMA_WRITE_FIRST ||
+ last_opcode == IB_OPCODE_UC_RDMA_WRITE_MIDDLE)
+ return fits ?
+ IB_OPCODE_UC_RDMA_WRITE_LAST :
+ IB_OPCODE_UC_RDMA_WRITE_MIDDLE;
+ else
+ return fits ?
+ IB_OPCODE_UC_RDMA_WRITE_ONLY :
+ IB_OPCODE_UC_RDMA_WRITE_FIRST;
+
+ case IB_WR_RDMA_WRITE_WITH_IMM:
+ if (last_opcode == IB_OPCODE_UC_RDMA_WRITE_FIRST ||
+ last_opcode == IB_OPCODE_UC_RDMA_WRITE_MIDDLE)
+ return fits ?
+ IB_OPCODE_UC_RDMA_WRITE_LAST_WITH_IMMEDIATE :
+ IB_OPCODE_UC_RDMA_WRITE_MIDDLE;
+ else
+ return fits ?
+ IB_OPCODE_UC_RDMA_WRITE_ONLY_WITH_IMMEDIATE :
+ IB_OPCODE_UC_RDMA_WRITE_FIRST;
+
+ case IB_WR_SEND:
+ if (last_opcode == IB_OPCODE_UC_SEND_FIRST ||
+ last_opcode == IB_OPCODE_UC_SEND_MIDDLE)
+ return fits ?
+ IB_OPCODE_UC_SEND_LAST :
+ IB_OPCODE_UC_SEND_MIDDLE;
+ else
+ return fits ?
+ IB_OPCODE_UC_SEND_ONLY :
+ IB_OPCODE_UC_SEND_FIRST;
+
+ case IB_WR_SEND_WITH_IMM:
+ if (last_opcode == IB_OPCODE_UC_SEND_FIRST ||
+ last_opcode == IB_OPCODE_UC_SEND_MIDDLE)
+ return fits ?
+ IB_OPCODE_UC_SEND_LAST_WITH_IMMEDIATE :
+ IB_OPCODE_UC_SEND_MIDDLE;
+ else
+ return fits ?
+ IB_OPCODE_UC_SEND_ONLY_WITH_IMMEDIATE :
+ IB_OPCODE_UC_SEND_FIRST;
+ }
+
+ return -EINVAL;
+}
+
+/* compute next requester packet opcode
+ * assumes caller is following the sequence rules
+ */
+int next_req_opcode(struct rxe_qp *qp, int resid, u32 wr_opcode)
+{
+ int fits = resid <= qp->mtu;
+ int last_opcode = qp->req.opcode;
+ int ret;
+
+ switch (qp_type(qp)) {
+ case IB_QPT_RC:
+ ret = next_opcode_rc(last_opcode, wr_opcode, fits);
+ break;
+ case IB_QPT_UC:
+ ret = next_opcode_uc(last_opcode, wr_opcode, fits);
+ break;
+ case IB_QPT_UD:
+ case IB_QPT_GSI:
+ switch (wr_opcode) {
+ case IB_WR_SEND:
+ ret = IB_OPCODE_UD_SEND_ONLY;
+ break;
+ case IB_WR_SEND_WITH_IMM:
+ ret = IB_OPCODE_UD_SEND_ONLY_WITH_IMMEDIATE;
+ break;
+ default:
+ ret = -EINVAL;
+ break;
+ }
+ break;
+ default:
+ ret = -EINVAL;
+ break;
+ }
+
+ if (ret == -EINVAL)
+ rxe_err_qp(qp, "unable to compute next opcode");
+ return ret;
+}
diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.h b/drivers/infiniband/sw/rxe/rxe_opcode.h
index 5686b691d6b8..61030d9c299f 100644
--- a/drivers/infiniband/sw/rxe/rxe_opcode.h
+++ b/drivers/infiniband/sw/rxe/rxe_opcode.h
@@ -7,6 +7,8 @@
#ifndef RXE_OPCODE_H
#define RXE_OPCODE_H
+struct rxe_qp;
+
/*
* contains header bit mask definitions and header lengths
* declaration of the rxe_opcode_info struct and
@@ -108,4 +110,6 @@ struct rxe_opcode_info {
extern struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE];
+int next_req_opcode(struct rxe_qp *qp, int resid, u32 wr_opcode);
+
#endif /* RXE_OPCODE_H */
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index 4db1bacdfdb8..51b781ac2844 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -11,9 +11,6 @@
#include "rxe_loc.h"
#include "rxe_queue.h"
-static int next_opcode(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
- u32 opcode);
-
static inline void retry_first_write_send(struct rxe_qp *qp,
struct rxe_send_wqe *wqe, int npsn)
{
@@ -23,8 +20,8 @@ static inline void retry_first_write_send(struct rxe_qp *qp,
int to_send = (wqe->dma.resid > qp->mtu) ?
qp->mtu : wqe->dma.resid;
- qp->req.opcode = next_opcode(qp, wqe,
- wqe->wr.opcode);
+ qp->req.opcode = next_req_opcode(qp, wqe->dma.resid,
+ wqe->wr.opcode);
if (wqe->wr.send_flags & IB_SEND_INLINE) {
wqe->dma.resid -= to_send;
@@ -51,7 +48,7 @@ static void req_retry(struct rxe_qp *qp)
qp->req.wqe_index = cons;
qp->req.psn = qp->comp.psn;
- qp->req.opcode = -1;
+ qp->req.opcode = OPCODE_NONE;
for (wqe_index = cons; wqe_index != prod;
wqe_index = queue_next_index(q, wqe_index)) {
@@ -221,166 +218,6 @@ static int rxe_wqe_is_fenced(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
atomic_read(&qp->req.rd_atomic) != qp->attr.max_rd_atomic;
}
-static int next_opcode_rc(struct rxe_qp *qp, u32 opcode, int fits)
-{
- switch (opcode) {
- case IB_WR_RDMA_WRITE:
- if (qp->req.opcode == IB_OPCODE_RC_RDMA_WRITE_FIRST ||
- qp->req.opcode == IB_OPCODE_RC_RDMA_WRITE_MIDDLE)
- return fits ?
- IB_OPCODE_RC_RDMA_WRITE_LAST :
- IB_OPCODE_RC_RDMA_WRITE_MIDDLE;
- else
- return fits ?
- IB_OPCODE_RC_RDMA_WRITE_ONLY :
- IB_OPCODE_RC_RDMA_WRITE_FIRST;
-
- case IB_WR_RDMA_WRITE_WITH_IMM:
- if (qp->req.opcode == IB_OPCODE_RC_RDMA_WRITE_FIRST ||
- qp->req.opcode == IB_OPCODE_RC_RDMA_WRITE_MIDDLE)
- return fits ?
- IB_OPCODE_RC_RDMA_WRITE_LAST_WITH_IMMEDIATE :
- IB_OPCODE_RC_RDMA_WRITE_MIDDLE;
- else
- return fits ?
- IB_OPCODE_RC_RDMA_WRITE_ONLY_WITH_IMMEDIATE :
- IB_OPCODE_RC_RDMA_WRITE_FIRST;
-
- case IB_WR_SEND:
- if (qp->req.opcode == IB_OPCODE_RC_SEND_FIRST ||
- qp->req.opcode == IB_OPCODE_RC_SEND_MIDDLE)
- return fits ?
- IB_OPCODE_RC_SEND_LAST :
- IB_OPCODE_RC_SEND_MIDDLE;
- else
- return fits ?
- IB_OPCODE_RC_SEND_ONLY :
- IB_OPCODE_RC_SEND_FIRST;
-
- case IB_WR_SEND_WITH_IMM:
- if (qp->req.opcode == IB_OPCODE_RC_SEND_FIRST ||
- qp->req.opcode == IB_OPCODE_RC_SEND_MIDDLE)
- return fits ?
- IB_OPCODE_RC_SEND_LAST_WITH_IMMEDIATE :
- IB_OPCODE_RC_SEND_MIDDLE;
- else
- return fits ?
- IB_OPCODE_RC_SEND_ONLY_WITH_IMMEDIATE :
- IB_OPCODE_RC_SEND_FIRST;
-
- case IB_WR_FLUSH:
- return IB_OPCODE_RC_FLUSH;
-
- case IB_WR_RDMA_READ:
- return IB_OPCODE_RC_RDMA_READ_REQUEST;
-
- case IB_WR_ATOMIC_CMP_AND_SWP:
- return IB_OPCODE_RC_COMPARE_SWAP;
-
- case IB_WR_ATOMIC_FETCH_AND_ADD:
- return IB_OPCODE_RC_FETCH_ADD;
-
- case IB_WR_SEND_WITH_INV:
- if (qp->req.opcode == IB_OPCODE_RC_SEND_FIRST ||
- qp->req.opcode == IB_OPCODE_RC_SEND_MIDDLE)
- return fits ? IB_OPCODE_RC_SEND_LAST_WITH_INVALIDATE :
- IB_OPCODE_RC_SEND_MIDDLE;
- else
- return fits ? IB_OPCODE_RC_SEND_ONLY_WITH_INVALIDATE :
- IB_OPCODE_RC_SEND_FIRST;
-
- case IB_WR_ATOMIC_WRITE:
- return IB_OPCODE_RC_ATOMIC_WRITE;
-
- case IB_WR_REG_MR:
- case IB_WR_LOCAL_INV:
- return opcode;
- }
-
- return -EINVAL;
-}
-
-static int next_opcode_uc(struct rxe_qp *qp, u32 opcode, int fits)
-{
- switch (opcode) {
- case IB_WR_RDMA_WRITE:
- if (qp->req.opcode == IB_OPCODE_UC_RDMA_WRITE_FIRST ||
- qp->req.opcode == IB_OPCODE_UC_RDMA_WRITE_MIDDLE)
- return fits ?
- IB_OPCODE_UC_RDMA_WRITE_LAST :
- IB_OPCODE_UC_RDMA_WRITE_MIDDLE;
- else
- return fits ?
- IB_OPCODE_UC_RDMA_WRITE_ONLY :
- IB_OPCODE_UC_RDMA_WRITE_FIRST;
-
- case IB_WR_RDMA_WRITE_WITH_IMM:
- if (qp->req.opcode == IB_OPCODE_UC_RDMA_WRITE_FIRST ||
- qp->req.opcode == IB_OPCODE_UC_RDMA_WRITE_MIDDLE)
- return fits ?
- IB_OPCODE_UC_RDMA_WRITE_LAST_WITH_IMMEDIATE :
- IB_OPCODE_UC_RDMA_WRITE_MIDDLE;
- else
- return fits ?
- IB_OPCODE_UC_RDMA_WRITE_ONLY_WITH_IMMEDIATE :
- IB_OPCODE_UC_RDMA_WRITE_FIRST;
-
- case IB_WR_SEND:
- if (qp->req.opcode == IB_OPCODE_UC_SEND_FIRST ||
- qp->req.opcode == IB_OPCODE_UC_SEND_MIDDLE)
- return fits ?
- IB_OPCODE_UC_SEND_LAST :
- IB_OPCODE_UC_SEND_MIDDLE;
- else
- return fits ?
- IB_OPCODE_UC_SEND_ONLY :
- IB_OPCODE_UC_SEND_FIRST;
-
- case IB_WR_SEND_WITH_IMM:
- if (qp->req.opcode == IB_OPCODE_UC_SEND_FIRST ||
- qp->req.opcode == IB_OPCODE_UC_SEND_MIDDLE)
- return fits ?
- IB_OPCODE_UC_SEND_LAST_WITH_IMMEDIATE :
- IB_OPCODE_UC_SEND_MIDDLE;
- else
- return fits ?
- IB_OPCODE_UC_SEND_ONLY_WITH_IMMEDIATE :
- IB_OPCODE_UC_SEND_FIRST;
- }
-
- return -EINVAL;
-}
-
-static int next_opcode(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
- u32 opcode)
-{
- int fits = (wqe->dma.resid <= qp->mtu);
-
- switch (qp_type(qp)) {
- case IB_QPT_RC:
- return next_opcode_rc(qp, opcode, fits);
-
- case IB_QPT_UC:
- return next_opcode_uc(qp, opcode, fits);
-
- case IB_QPT_UD:
- case IB_QPT_GSI:
- switch (opcode) {
- case IB_WR_SEND:
- return IB_OPCODE_UD_SEND_ONLY;
-
- case IB_WR_SEND_WITH_IMM:
- return IB_OPCODE_UD_SEND_ONLY_WITH_IMMEDIATE;
- }
- break;
-
- default:
- break;
- }
-
- return -EINVAL;
-}
-
static inline int check_init_depth(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
{
int depth;
@@ -761,7 +598,7 @@ int rxe_requester(struct rxe_qp *qp)
if (unlikely(qp_state(qp) == IB_QPS_RESET)) {
qp->req.wqe_index = queue_get_consumer(q,
QUEUE_TYPE_FROM_CLIENT);
- qp->req.opcode = -1;
+ qp->req.opcode = OPCODE_NONE;
qp->req.need_rd_atomic = 0;
qp->req.wait_psn = 0;
qp->req.need_retry = 0;
@@ -813,7 +650,7 @@ int rxe_requester(struct rxe_qp *qp)
goto exit;
}
- opcode = next_opcode(qp, wqe, wqe->wr.opcode);
+ opcode = next_req_opcode(qp, wqe->dma.resid, wqe->wr.opcode);
if (unlikely(opcode < 0)) {
wqe->status = IB_WC_LOC_QP_OP_ERR;
goto err;
--
2.39.2
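The FIRST/MIDDLE/LAST/ONLY progression that next_req_opcode() implements for multi-packet sends and RDMA writes can be sketched in a standalone form. The names below are illustrative, not the driver's actual identifiers; the driver applies this rule per work-request opcode against the IB_OPCODE_* constants.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical, simplified model of the packet-series state machine
 * behind next_req_opcode(): a message that spans several MTU-sized
 * packets is emitted as FIRST, MIDDLE..., LAST; a message that fits
 * in one packet is emitted as ONLY.
 */
enum frag { FRAG_FIRST, FRAG_MIDDLE, FRAG_LAST, FRAG_ONLY };

/* last: fragment type of the previous packet of this message, or -1
 * if no packet has been sent yet; fits: remaining payload fits in a
 * single MTU.
 */
static enum frag next_frag(int last, bool fits)
{
	if (last == FRAG_FIRST || last == FRAG_MIDDLE)
		return fits ? FRAG_LAST : FRAG_MIDDLE;	/* continue series */
	return fits ? FRAG_ONLY : FRAG_FIRST;		/* start new series */
}
```

Moving this logic into rxe_opcode.c keeps the table of opcode properties and the code that walks between opcodes in one file, which is the stated point of the patch.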
* Re: [PATCH for-next v3 0/7] RDMA/rxe: Misc cleanups
From: Jason Gunthorpe @ 2023-08-09 19:20 UTC (permalink / raw)
To: Bob Pearson; +Cc: zyjzyj2000, linux-rdma, jhack
On Thu, Jul 27, 2023 at 02:28:24PM -0500, Bob Pearson wrote:
> This patch set is a collection of cleanup patches previously
> posted as part of a larger set that included support for
> nonlinear or fragmented packets. It has been rebased to the
> current for-next branch after the application of three previous
> patch sets:
> RDMA/rxe: Fix incomplete state save in rxe_requester
> RDMA/rxe: Misc fixes and cleanups
> Enable rcu locking of verbs objects
>
> These changes are a pre-requisite for a patch set to follow
> which implements support for nonlinear packets. They are
> mainly a code cleanup of rxe_req.c.
>
> Bob Pearson (8):
> RDMA/rxe: Add pad size to struct rxe_pkt_info
> RDMA/rxe: Isolate code to fill request roce headers
> RDMA/rxe: Isolate request payload code in a subroutine
> RDMA/rxe: Remove paylen parameter from rxe_init_packet
> RDMA/rxe: Isolate code to build request packet
> RDMA/rxe: Put fake udp send code in a subroutine
> RDMA/rxe: Combine setting pkt info
> RDMA/rxe: Move next_opcode to rxe_opcode.c
This doesn't apply to anything I have, so I'm going to drop it for now;
resend it when it can be applied.
I didn't notice anything troubling in it.
Thanks,
Jason