* [PATCH for-next 0/6] Add CM support to hip08
@ 2018-01-04 4:19 Lijun Ou
0 siblings, 1 reply; 13+ messages in thread
From: Lijun Ou @ 2018-01-04 4:19 UTC (permalink / raw)
To: dledford-H+wXaHxf7aLQT0dZR+AlfA, jgg-uk2M96/98Pc
Cc: leon-DgEjT+Ai2ygdnm+yROfE0A, linux-rdma-u79uwXL29TY76Z2rM5mHXA
This patch series adds CM (Connection Management) support
to the hip08 RoCE driver. The changes primarily add
support for the required APIs in the IB device, along with
some fixes to the original driver so that it works with
RDMA CM. The series also adjusts some code style in order
to fill the SQ WQE of UD type.
Lijun Ou (6):
RDMA/hns: Create gsi qp in hip08
RDMA/hns: Add gsi qp support for modifying qp in hip08
RDMA/hns: Fill sq wqe context of ud type in hip08
RDMA/hns: Assign zero for pkey_index of wc in hip08
RDMA/hns: Update the verbs of polling for completion
RDMA/hns: Set the guid for hip08 RoCE device
drivers/infiniband/hw/hns/hns_roce_device.h | 1 +
drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 547 ++++++++++++++++++++--------
drivers/infiniband/hw/hns/hns_roce_hw_v2.h | 86 ++++-
drivers/infiniband/hw/hns/hns_roce_qp.c | 18 +-
4 files changed, 487 insertions(+), 165 deletions(-)
--
1.9.1
--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
* [PATCH for-next 1/6] RDMA/hns: Create gsi qp in hip08
@ 2018-01-04 4:19 Lijun Ou
From: Lijun Ou @ 2018-01-04 4:19 UTC (permalink / raw)
To: dledford-H+wXaHxf7aLQT0dZR+AlfA, jgg-uk2M96/98Pc
Cc: leon-DgEjT+Ai2ygdnm+yROfE0A, linux-rdma-u79uwXL29TY76Z2rM5mHXA
The GSI QP and the RC QP share the same QP context
structure and creation flow; they are differentiated
only by QPN and QP type.
Signed-off-by: Lijun Ou <oulijun-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
Signed-off-by: Yixian Liu <liuyixian-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
---
drivers/infiniband/hw/hns/hns_roce_qp.c | 18 ++++++++++++++++--
1 file changed, 16 insertions(+), 2 deletions(-)
diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
index 351fa31..4414cea 100644
--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
+++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
@@ -455,6 +455,13 @@ static int hns_roce_set_kernel_sq_size(struct hns_roce_dev *hr_dev,
hr_qp->sge.sge_shift = 4;
}
+ /* ud sqwqe's sge use extend sge */
+ if (hr_dev->caps.max_sq_sg > 2 && hr_qp->ibqp.qp_type == IB_QPT_GSI) {
+ hr_qp->sge.sge_cnt = roundup_pow_of_two(hr_qp->sq.wqe_cnt *
+ hr_qp->sq.max_gs);
+ hr_qp->sge.sge_shift = 4;
+ }
+
/* Get buf size, SQ and RQ are aligned to PAGE_SIZE */
page_size = 1 << (hr_dev->caps.mtt_buf_pg_sz + PAGE_SHIFT);
hr_qp->sq.offset = 0;
@@ -502,6 +509,8 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev,
hr_qp->state = IB_QPS_RESET;
+ hr_qp->ibqp.qp_type = init_attr->qp_type;
+
if (init_attr->sq_sig_type == IB_SIGNAL_ALL_WR)
hr_qp->sq_signal_bits = IB_SIGNAL_ALL_WR;
else
@@ -764,8 +773,13 @@ struct ib_qp *hns_roce_create_qp(struct ib_pd *pd,
hr_qp = &hr_sqp->hr_qp;
hr_qp->port = init_attr->port_num - 1;
hr_qp->phy_port = hr_dev->iboe.phy_port[hr_qp->port];
- hr_qp->ibqp.qp_num = HNS_ROCE_MAX_PORTS +
- hr_dev->iboe.phy_port[hr_qp->port];
+
+ /* when hw version is v1, the sqpn is allocated */
+ if (hr_dev->caps.max_sq_sg <= 2)
+ hr_qp->ibqp.qp_num = HNS_ROCE_MAX_PORTS +
+ hr_dev->iboe.phy_port[hr_qp->port];
+ else
+ hr_qp->ibqp.qp_num = 1;
ret = hns_roce_create_qp_common(hr_dev, pd, init_attr, udata,
hr_qp->ibqp.qp_num, hr_qp);
--
1.9.1
* [PATCH for-next 2/6] RDMA/hns: Add gsi qp support for modifying qp in hip08
@ 2018-01-04 4:19 Lijun Ou
From: Lijun Ou @ 2018-01-04 4:19 UTC (permalink / raw)
To: dledford-H+wXaHxf7aLQT0dZR+AlfA, jgg-uk2M96/98Pc
Cc: leon-DgEjT+Ai2ygdnm+yROfE0A, linux-rdma-u79uwXL29TY76Z2rM5mHXA
When the QP type is GSI on hip08, some fields in the QP
context need to be assigned differently.
Signed-off-by: Lijun Ou <oulijun-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
Signed-off-by: Yixian Liu <liuyixian-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
---
drivers/infiniband/hw/hns/hns_roce_device.h | 1 +
drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 66 +++++++++++++++++++++--------
2 files changed, 50 insertions(+), 17 deletions(-)
diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
index 4afa070..42c3b5a 100644
--- a/drivers/infiniband/hw/hns/hns_roce_device.h
+++ b/drivers/infiniband/hw/hns/hns_roce_device.h
@@ -485,6 +485,7 @@ struct hns_roce_qp {
u32 access_flags;
u32 atomic_rd_en;
u32 pkey_index;
+ u32 qkey;
void (*event)(struct hns_roce_qp *,
enum hns_roce_event);
unsigned long qpn;
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 2ca35e3..e53cd7d 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -1975,6 +1975,7 @@ static void set_access_flags(struct hns_roce_qp *hr_qp,
static void modify_qp_reset_to_init(struct ib_qp *ibqp,
const struct ib_qp_attr *attr,
+ int attr_mask,
struct hns_roce_v2_qp_context *context,
struct hns_roce_v2_qp_context *qpc_mask)
{
@@ -1991,9 +1992,18 @@ static void modify_qp_reset_to_init(struct ib_qp *ibqp,
roce_set_field(qpc_mask->byte_4_sqpn_tst, V2_QPC_BYTE_4_TST_M,
V2_QPC_BYTE_4_TST_S, 0);
- roce_set_field(context->byte_4_sqpn_tst, V2_QPC_BYTE_4_SGE_SHIFT_M,
- V2_QPC_BYTE_4_SGE_SHIFT_S, hr_qp->sq.max_gs > 2 ?
- ilog2((unsigned int)hr_qp->sge.sge_cnt) : 0);
+ if (ibqp->qp_type == IB_QPT_GSI)
+ roce_set_field(context->byte_4_sqpn_tst,
+ V2_QPC_BYTE_4_SGE_SHIFT_M,
+ V2_QPC_BYTE_4_SGE_SHIFT_S,
+ ilog2((unsigned int)hr_qp->sge.sge_cnt));
+ else
+ roce_set_field(context->byte_4_sqpn_tst,
+ V2_QPC_BYTE_4_SGE_SHIFT_M,
+ V2_QPC_BYTE_4_SGE_SHIFT_S,
+ hr_qp->sq.max_gs > 2 ?
+ ilog2((unsigned int)hr_qp->sge.sge_cnt) : 0);
+
roce_set_field(qpc_mask->byte_4_sqpn_tst, V2_QPC_BYTE_4_SGE_SHIFT_M,
V2_QPC_BYTE_4_SGE_SHIFT_S, 0);
@@ -2058,6 +2068,12 @@ static void modify_qp_reset_to_init(struct ib_qp *ibqp,
roce_set_bit(qpc_mask->byte_28_at_fl, V2_QPC_BYTE_28_CNP_TX_FLAG_S, 0);
roce_set_bit(qpc_mask->byte_28_at_fl, V2_QPC_BYTE_28_CE_FLAG_S, 0);
+ if (attr_mask & IB_QP_QKEY) {
+ context->qkey_xrcd = attr->qkey;
+ qpc_mask->qkey_xrcd = 0;
+ hr_qp->qkey = attr->qkey;
+ }
+
roce_set_bit(context->byte_76_srqn_op_en, V2_QPC_BYTE_76_RQIE_S, 1);
roce_set_bit(qpc_mask->byte_76_srqn_op_en, V2_QPC_BYTE_76_RQIE_S, 0);
@@ -2279,9 +2295,17 @@ static void modify_qp_init_to_init(struct ib_qp *ibqp,
roce_set_field(qpc_mask->byte_4_sqpn_tst, V2_QPC_BYTE_4_TST_M,
V2_QPC_BYTE_4_TST_S, 0);
- roce_set_field(context->byte_4_sqpn_tst, V2_QPC_BYTE_4_SGE_SHIFT_M,
- V2_QPC_BYTE_4_SGE_SHIFT_S, hr_qp->sq.max_gs > 2 ?
- ilog2((unsigned int)hr_qp->sge.sge_cnt) : 0);
+ if (ibqp->qp_type == IB_QPT_GSI)
+ roce_set_field(context->byte_4_sqpn_tst,
+ V2_QPC_BYTE_4_SGE_SHIFT_M,
+ V2_QPC_BYTE_4_SGE_SHIFT_S,
+ ilog2((unsigned int)hr_qp->sge.sge_cnt));
+ else
+ roce_set_field(context->byte_4_sqpn_tst,
+ V2_QPC_BYTE_4_SGE_SHIFT_M,
+ V2_QPC_BYTE_4_SGE_SHIFT_S, hr_qp->sq.max_gs > 2 ?
+ ilog2((unsigned int)hr_qp->sge.sge_cnt) : 0);
+
roce_set_field(qpc_mask->byte_4_sqpn_tst, V2_QPC_BYTE_4_SGE_SHIFT_M,
V2_QPC_BYTE_4_SGE_SHIFT_S, 0);
@@ -2342,7 +2366,7 @@ static void modify_qp_init_to_init(struct ib_qp *ibqp,
V2_QPC_BYTE_80_RX_CQN_S, 0);
roce_set_field(context->byte_252_err_txcqn, V2_QPC_BYTE_252_TX_CQN_M,
- V2_QPC_BYTE_252_TX_CQN_S, to_hr_cq(ibqp->recv_cq)->cqn);
+ V2_QPC_BYTE_252_TX_CQN_S, to_hr_cq(ibqp->send_cq)->cqn);
roce_set_field(qpc_mask->byte_252_err_txcqn, V2_QPC_BYTE_252_TX_CQN_M,
V2_QPC_BYTE_252_TX_CQN_S, 0);
@@ -2358,10 +2382,10 @@ static void modify_qp_init_to_init(struct ib_qp *ibqp,
V2_QPC_BYTE_76_SRQN_M, V2_QPC_BYTE_76_SRQN_S, 0);
}
- if (attr_mask & IB_QP_PKEY_INDEX)
- context->qkey_xrcd = attr->pkey_index;
- else
- context->qkey_xrcd = hr_qp->pkey_index;
+ if (attr_mask & IB_QP_QKEY) {
+ context->qkey_xrcd = attr->qkey;
+ qpc_mask->qkey_xrcd = 0;
+ }
roce_set_field(context->byte_4_sqpn_tst, V2_QPC_BYTE_4_SQPN_M,
V2_QPC_BYTE_4_SQPN_S, hr_qp->qpn);
@@ -2457,7 +2481,8 @@ static int modify_qp_init_to_rtr(struct ib_qp *ibqp,
roce_set_field(context->byte_20_smac_sgid_idx,
V2_QPC_BYTE_20_SGE_HOP_NUM_M,
V2_QPC_BYTE_20_SGE_HOP_NUM_S,
- hr_qp->sq.max_gs > 2 ? hr_dev->caps.mtt_hop_num : 0);
+ ((ibqp->qp_type == IB_QPT_GSI) || hr_qp->sq.max_gs > 2) ?
+ hr_dev->caps.mtt_hop_num : 0);
roce_set_field(qpc_mask->byte_20_smac_sgid_idx,
V2_QPC_BYTE_20_SGE_HOP_NUM_M,
V2_QPC_BYTE_20_SGE_HOP_NUM_S, 0);
@@ -2617,8 +2642,13 @@ static int modify_qp_init_to_rtr(struct ib_qp *ibqp,
roce_set_field(qpc_mask->byte_24_mtu_tc, V2_QPC_BYTE_24_TC_M,
V2_QPC_BYTE_24_TC_S, 0);
- roce_set_field(context->byte_24_mtu_tc, V2_QPC_BYTE_24_MTU_M,
- V2_QPC_BYTE_24_MTU_S, attr->path_mtu);
+ if (ibqp->qp_type == IB_QPT_GSI || ibqp->qp_type == IB_QPT_UD)
+ roce_set_field(context->byte_24_mtu_tc, V2_QPC_BYTE_24_MTU_M,
+ V2_QPC_BYTE_24_MTU_S, IB_MTU_4096);
+ else
+ roce_set_field(context->byte_24_mtu_tc, V2_QPC_BYTE_24_MTU_M,
+ V2_QPC_BYTE_24_MTU_S, attr->path_mtu);
+
roce_set_field(qpc_mask->byte_24_mtu_tc, V2_QPC_BYTE_24_MTU_M,
V2_QPC_BYTE_24_MTU_S, 0);
@@ -2725,13 +2755,14 @@ static int modify_qp_rtr_to_rts(struct ib_qp *ibqp,
V2_QPC_BYTE_168_SQ_CUR_BLK_ADDR_S, 0);
page_size = 1 << (hr_dev->caps.mtt_buf_pg_sz + PAGE_SHIFT);
- context->sq_cur_sge_blk_addr = hr_qp->sq.max_gs > 2 ?
+ context->sq_cur_sge_blk_addr =
+ ((ibqp->qp_type == IB_QPT_GSI) || hr_qp->sq.max_gs > 2) ?
((u32)(mtts[hr_qp->sge.offset / page_size]
>> PAGE_ADDR_SHIFT)) : 0;
roce_set_field(context->byte_184_irrl_idx,
V2_QPC_BYTE_184_SQ_CUR_SGE_BLK_ADDR_M,
V2_QPC_BYTE_184_SQ_CUR_SGE_BLK_ADDR_S,
- hr_qp->sq.max_gs > 2 ?
+ ((ibqp->qp_type == IB_QPT_GSI) || hr_qp->sq.max_gs > 2) ?
(mtts[hr_qp->sge.offset / page_size] >>
(32 + PAGE_ADDR_SHIFT)) : 0);
qpc_mask->sq_cur_sge_blk_addr = 0;
@@ -2902,7 +2933,8 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp,
*/
memset(qpc_mask, 0xff, sizeof(*qpc_mask));
if (cur_state == IB_QPS_RESET && new_state == IB_QPS_INIT) {
- modify_qp_reset_to_init(ibqp, attr, context, qpc_mask);
+ modify_qp_reset_to_init(ibqp, attr, attr_mask, context,
+ qpc_mask);
} else if (cur_state == IB_QPS_INIT && new_state == IB_QPS_INIT) {
modify_qp_init_to_init(ibqp, attr, attr_mask, context,
qpc_mask);
--
1.9.1
* [PATCH for-next 3/6] RDMA/hns: Fill sq wqe context of ud type in hip08
@ 2018-01-04 4:19 Lijun Ou
From: Lijun Ou @ 2018-01-04 4:19 UTC (permalink / raw)
To: dledford-H+wXaHxf7aLQT0dZR+AlfA, jgg-uk2M96/98Pc
Cc: leon-DgEjT+Ai2ygdnm+yROfE0A, linux-rdma-u79uwXL29TY76Z2rM5mHXA
This patch mainly configures the fields of the UD-type
SQ WQE when posting a WR on a GSI QP.
Signed-off-by: Lijun Ou <oulijun-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
Signed-off-by: Yixian Liu <liuyixian-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
---
drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 447 +++++++++++++++++++----------
drivers/infiniband/hw/hns/hns_roce_hw_v2.h | 84 ++++++
2 files changed, 386 insertions(+), 145 deletions(-)
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index e53cd7d..0c30998 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -51,26 +51,101 @@ static void set_data_seg_v2(struct hns_roce_v2_wqe_data_seg *dseg,
dseg->len = cpu_to_le32(sg->length);
}
+static int set_rwqe_data_seg(struct ib_qp *ibqp, struct ib_send_wr *wr,
+ struct hns_roce_v2_rc_send_wqe *rc_sq_wqe,
+ void *wqe, unsigned int *sge_ind,
+ struct ib_send_wr **bad_wr)
+{
+ struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
+ struct hns_roce_v2_wqe_data_seg *dseg = wqe;
+ struct hns_roce_qp *qp = to_hr_qp(ibqp);
+ int ret = 0;
+ int i;
+
+ if (wr->send_flags & IB_SEND_INLINE && wr->num_sge) {
+ if (rc_sq_wqe->msg_len > hr_dev->caps.max_sq_inline) {
+ ret = -EINVAL;
+ *bad_wr = wr;
+ dev_err(hr_dev->dev, "inline len(1-%d)=%d, illegal",
+ rc_sq_wqe->msg_len, hr_dev->caps.max_sq_inline);
+ return ret;
+ }
+
+ for (i = 0; i < wr->num_sge; i++) {
+ memcpy(wqe, ((void *)wr->sg_list[i].addr),
+ wr->sg_list[i].length);
+ wqe += wr->sg_list[i].length;
+ }
+
+ roce_set_bit(rc_sq_wqe->byte_4, V2_RC_SEND_WQE_BYTE_4_INLINE_S,
+ 1);
+ } else {
+ if (wr->num_sge <= 2) {
+ for (i = 0; i < wr->num_sge; i++) {
+ if (likely(wr->sg_list[i].length)) {
+ set_data_seg_v2(dseg, wr->sg_list + i);
+ dseg++;
+ }
+ }
+ } else {
+ roce_set_field(rc_sq_wqe->byte_20,
+ V2_RC_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_M,
+ V2_RC_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_S,
+ (*sge_ind) & (qp->sge.sge_cnt - 1));
+
+ for (i = 0; i < 2; i++) {
+ if (likely(wr->sg_list[i].length)) {
+ set_data_seg_v2(dseg, wr->sg_list + i);
+ dseg++;
+ }
+ }
+
+ dseg = get_send_extend_sge(qp,
+ (*sge_ind) & (qp->sge.sge_cnt - 1));
+
+ for (i = 0; i < wr->num_sge - 2; i++) {
+ if (likely(wr->sg_list[i + 2].length)) {
+ set_data_seg_v2(dseg,
+ wr->sg_list + 2 + i);
+ dseg++;
+ (*sge_ind)++;
+ }
+ }
+ }
+
+ roce_set_field(rc_sq_wqe->byte_16,
+ V2_RC_SEND_WQE_BYTE_16_SGE_NUM_M,
+ V2_RC_SEND_WQE_BYTE_16_SGE_NUM_S, wr->num_sge);
+ }
+
+ return ret;
+}
+
static int hns_roce_v2_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
struct ib_send_wr **bad_wr)
{
struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
+ struct hns_roce_ah *ah = to_hr_ah(ud_wr(wr)->ah);
+ struct hns_roce_v2_ud_send_wqe *ud_sq_wqe;
struct hns_roce_v2_rc_send_wqe *rc_sq_wqe;
struct hns_roce_qp *qp = to_hr_qp(ibqp);
struct hns_roce_v2_wqe_data_seg *dseg;
struct device *dev = hr_dev->dev;
struct hns_roce_v2_db sq_db;
unsigned int sge_ind = 0;
- unsigned int wqe_sz = 0;
unsigned int owner_bit;
unsigned long flags;
unsigned int ind;
void *wqe = NULL;
+ bool loopback;
int ret = 0;
+ u8 *smac;
int nreq;
int i;
- if (unlikely(ibqp->qp_type != IB_QPT_RC)) {
+ if (unlikely(ibqp->qp_type != IB_QPT_RC &&
+ ibqp->qp_type != IB_QPT_GSI &&
+ ibqp->qp_type != IB_QPT_UD)) {
dev_err(dev, "Not supported QP(0x%x)type!\n", ibqp->qp_type);
*bad_wr = NULL;
return -EOPNOTSUPP;
@@ -107,172 +182,254 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
wr->wr_id;
owner_bit = ~(qp->sq.head >> ilog2(qp->sq.wqe_cnt)) & 0x1;
- rc_sq_wqe = wqe;
- memset(rc_sq_wqe, 0, sizeof(*rc_sq_wqe));
- for (i = 0; i < wr->num_sge; i++)
- rc_sq_wqe->msg_len += wr->sg_list[i].length;
- rc_sq_wqe->inv_key_immtdata = send_ieth(wr);
+ /* Corresponding to the QP type, wqe process separately */
+ if (ibqp->qp_type == IB_QPT_GSI) {
+ ud_sq_wqe = wqe;
+ memset(ud_sq_wqe, 0, sizeof(*ud_sq_wqe));
+
+ roce_set_field(ud_sq_wqe->dmac, V2_UD_SEND_WQE_DMAC_0_M,
+ V2_UD_SEND_WQE_DMAC_0_S, ah->av.mac[0]);
+ roce_set_field(ud_sq_wqe->dmac, V2_UD_SEND_WQE_DMAC_1_M,
+ V2_UD_SEND_WQE_DMAC_1_S, ah->av.mac[1]);
+ roce_set_field(ud_sq_wqe->dmac, V2_UD_SEND_WQE_DMAC_2_M,
+ V2_UD_SEND_WQE_DMAC_2_S, ah->av.mac[2]);
+ roce_set_field(ud_sq_wqe->dmac, V2_UD_SEND_WQE_DMAC_3_M,
+ V2_UD_SEND_WQE_DMAC_3_S, ah->av.mac[3]);
+ roce_set_field(ud_sq_wqe->byte_48,
+ V2_UD_SEND_WQE_BYTE_48_DMAC_4_M,
+ V2_UD_SEND_WQE_BYTE_48_DMAC_4_S,
+ ah->av.mac[4]);
+ roce_set_field(ud_sq_wqe->byte_48,
+ V2_UD_SEND_WQE_BYTE_48_DMAC_5_M,
+ V2_UD_SEND_WQE_BYTE_48_DMAC_5_S,
+ ah->av.mac[5]);
+
+ /* MAC loopback */
+ smac = (u8 *)hr_dev->dev_addr[qp->port];
+ loopback = ether_addr_equal_unaligned(ah->av.mac,
+ smac) ? 1 : 0;
+
+ roce_set_bit(ud_sq_wqe->byte_40,
+ V2_UD_SEND_WQE_BYTE_40_LBI_S, loopback);
+
+ roce_set_field(ud_sq_wqe->byte_4,
+ V2_UD_SEND_WQE_BYTE_4_OPCODE_M,
+ V2_UD_SEND_WQE_BYTE_4_OPCODE_S,
+ HNS_ROCE_V2_WQE_OP_SEND);
- roce_set_bit(rc_sq_wqe->byte_4, V2_RC_SEND_WQE_BYTE_4_FENCE_S,
- (wr->send_flags & IB_SEND_FENCE) ? 1 : 0);
+ for (i = 0; i < wr->num_sge; i++)
+ ud_sq_wqe->msg_len += wr->sg_list[i].length;
- roce_set_bit(rc_sq_wqe->byte_4, V2_RC_SEND_WQE_BYTE_4_SE_S,
- (wr->send_flags & IB_SEND_SOLICITED) ? 1 : 0);
+ ud_sq_wqe->immtdata = send_ieth(wr);
- roce_set_bit(rc_sq_wqe->byte_4, V2_RC_SEND_WQE_BYTE_4_CQE_S,
- (wr->send_flags & IB_SEND_SIGNALED) ? 1 : 0);
+ /* Set sig attr */
+ roce_set_bit(ud_sq_wqe->byte_4,
+ V2_UD_SEND_WQE_BYTE_4_CQE_S,
+ (wr->send_flags & IB_SEND_SIGNALED) ? 1 : 0);
- roce_set_bit(rc_sq_wqe->byte_4, V2_RC_SEND_WQE_BYTE_4_OWNER_S,
- owner_bit);
+ /* Set se attr */
+ roce_set_bit(ud_sq_wqe->byte_4,
+ V2_UD_SEND_WQE_BYTE_4_SE_S,
+ (wr->send_flags & IB_SEND_SOLICITED) ? 1 : 0);
- switch (wr->opcode) {
- case IB_WR_RDMA_READ:
- roce_set_field(rc_sq_wqe->byte_4,
- V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
- V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
- HNS_ROCE_V2_WQE_OP_RDMA_READ);
- rc_sq_wqe->rkey = cpu_to_le32(rdma_wr(wr)->rkey);
- rc_sq_wqe->va = cpu_to_le64(rdma_wr(wr)->remote_addr);
- break;
- case IB_WR_RDMA_WRITE:
- roce_set_field(rc_sq_wqe->byte_4,
- V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
- V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
- HNS_ROCE_V2_WQE_OP_RDMA_WRITE);
- rc_sq_wqe->rkey = cpu_to_le32(rdma_wr(wr)->rkey);
- rc_sq_wqe->va = cpu_to_le64(rdma_wr(wr)->remote_addr);
- break;
- case IB_WR_RDMA_WRITE_WITH_IMM:
- roce_set_field(rc_sq_wqe->byte_4,
+ roce_set_bit(ud_sq_wqe->byte_4,
+ V2_UD_SEND_WQE_BYTE_4_OWNER_S, owner_bit);
+
+ roce_set_field(ud_sq_wqe->byte_16,
+ V2_UD_SEND_WQE_BYTE_16_PD_M,
+ V2_UD_SEND_WQE_BYTE_16_PD_S,
+ to_hr_pd(ibqp->pd)->pdn);
+
+ roce_set_field(ud_sq_wqe->byte_16,
+ V2_UD_SEND_WQE_BYTE_16_SGE_NUM_M,
+ V2_UD_SEND_WQE_BYTE_16_SGE_NUM_S,
+ wr->num_sge);
+
+ roce_set_field(ud_sq_wqe->byte_20,
+ V2_UD_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_M,
+ V2_UD_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_S,
+ sge_ind & (qp->sge.sge_cnt - 1));
+
+ roce_set_field(ud_sq_wqe->byte_24,
+ V2_UD_SEND_WQE_BYTE_24_UDPSPN_M,
+ V2_UD_SEND_WQE_BYTE_24_UDPSPN_S, 0);
+ ud_sq_wqe->qkey =
+ cpu_to_be32(ud_wr(wr)->remote_qkey & 0x80000000) ?
+ qp->qkey : ud_wr(wr)->remote_qkey;
+ roce_set_field(ud_sq_wqe->byte_32,
+ V2_UD_SEND_WQE_BYTE_32_DQPN_M,
+ V2_UD_SEND_WQE_BYTE_32_DQPN_S,
+ ud_wr(wr)->remote_qpn);
+
+ roce_set_field(ud_sq_wqe->byte_36,
+ V2_UD_SEND_WQE_BYTE_36_VLAN_M,
+ V2_UD_SEND_WQE_BYTE_36_VLAN_S,
+ ah->av.vlan);
+ roce_set_field(ud_sq_wqe->byte_36,
+ V2_UD_SEND_WQE_BYTE_36_HOPLIMIT_M,
+ V2_UD_SEND_WQE_BYTE_36_HOPLIMIT_S,
+ ah->av.hop_limit);
+ roce_set_field(ud_sq_wqe->byte_36,
+ V2_UD_SEND_WQE_BYTE_36_TCLASS_M,
+ V2_UD_SEND_WQE_BYTE_36_TCLASS_S,
+ 0);
+ roce_set_field(ud_sq_wqe->byte_36,
+ V2_UD_SEND_WQE_BYTE_36_TCLASS_M,
+ V2_UD_SEND_WQE_BYTE_36_TCLASS_S,
+ 0);
+ roce_set_field(ud_sq_wqe->byte_40,
+ V2_UD_SEND_WQE_BYTE_40_FLOW_LABEL_M,
+ V2_UD_SEND_WQE_BYTE_40_FLOW_LABEL_S, 0);
+ roce_set_field(ud_sq_wqe->byte_40,
+ V2_UD_SEND_WQE_BYTE_40_SL_M,
+ V2_UD_SEND_WQE_BYTE_40_SL_S,
+ ah->av.sl_tclass_flowlabel >>
+ HNS_ROCE_SL_SHIFT);
+ roce_set_field(ud_sq_wqe->byte_40,
+ V2_UD_SEND_WQE_BYTE_40_PORTN_M,
+ V2_UD_SEND_WQE_BYTE_40_PORTN_S,
+ qp->port);
+
+ roce_set_field(ud_sq_wqe->byte_48,
+ V2_UD_SEND_WQE_BYTE_48_SGID_INDX_M,
+ V2_UD_SEND_WQE_BYTE_48_SGID_INDX_S,
+ hns_get_gid_index(hr_dev, qp->phy_port,
+ ah->av.gid_index));
+
+ memcpy(&ud_sq_wqe->dgid[0], &ah->av.dgid[0],
+ GID_LEN_V2);
+
+ dseg = get_send_extend_sge(qp,
+ sge_ind & (qp->sge.sge_cnt - 1));
+ for (i = 0; i < wr->num_sge; i++) {
+ set_data_seg_v2(dseg + i, wr->sg_list + i);
+ sge_ind++;
+ }
+
+ ind++;
+ } else if (ibqp->qp_type == IB_QPT_RC) {
+ rc_sq_wqe = wqe;
+ memset(rc_sq_wqe, 0, sizeof(*rc_sq_wqe));
+ for (i = 0; i < wr->num_sge; i++)
+ rc_sq_wqe->msg_len += wr->sg_list[i].length;
+
+ rc_sq_wqe->inv_key_immtdata = send_ieth(wr);
+
+ roce_set_bit(rc_sq_wqe->byte_4,
+ V2_RC_SEND_WQE_BYTE_4_FENCE_S,
+ (wr->send_flags & IB_SEND_FENCE) ? 1 : 0);
+
+ roce_set_bit(rc_sq_wqe->byte_4,
+ V2_RC_SEND_WQE_BYTE_4_SE_S,
+ (wr->send_flags & IB_SEND_SOLICITED) ? 1 : 0);
+
+ roce_set_bit(rc_sq_wqe->byte_4,
+ V2_RC_SEND_WQE_BYTE_4_CQE_S,
+ (wr->send_flags & IB_SEND_SIGNALED) ? 1 : 0);
+
+ roce_set_bit(rc_sq_wqe->byte_4,
+ V2_RC_SEND_WQE_BYTE_4_OWNER_S, owner_bit);
+
+ switch (wr->opcode) {
+ case IB_WR_RDMA_READ:
+ roce_set_field(rc_sq_wqe->byte_4,
+ V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
+ V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
+ HNS_ROCE_V2_WQE_OP_RDMA_READ);
+ rc_sq_wqe->rkey =
+ cpu_to_le32(rdma_wr(wr)->rkey);
+ rc_sq_wqe->va =
+ cpu_to_le64(rdma_wr(wr)->remote_addr);
+ break;
+ case IB_WR_RDMA_WRITE:
+ roce_set_field(rc_sq_wqe->byte_4,
+ V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
+ V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
+ HNS_ROCE_V2_WQE_OP_RDMA_WRITE);
+ rc_sq_wqe->rkey =
+ cpu_to_le32(rdma_wr(wr)->rkey);
+ rc_sq_wqe->va =
+ cpu_to_le64(rdma_wr(wr)->remote_addr);
+ break;
+ case IB_WR_RDMA_WRITE_WITH_IMM:
+ roce_set_field(rc_sq_wqe->byte_4,
V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
HNS_ROCE_V2_WQE_OP_RDMA_WRITE_WITH_IMM);
- rc_sq_wqe->rkey = cpu_to_le32(rdma_wr(wr)->rkey);
- rc_sq_wqe->va = cpu_to_le64(rdma_wr(wr)->remote_addr);
- break;
- case IB_WR_SEND:
- roce_set_field(rc_sq_wqe->byte_4,
- V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
- V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
- HNS_ROCE_V2_WQE_OP_SEND);
- break;
- case IB_WR_SEND_WITH_INV:
- roce_set_field(rc_sq_wqe->byte_4,
+ rc_sq_wqe->rkey =
+ cpu_to_le32(rdma_wr(wr)->rkey);
+ rc_sq_wqe->va =
+ cpu_to_le64(rdma_wr(wr)->remote_addr);
+ break;
+ case IB_WR_SEND:
+ roce_set_field(rc_sq_wqe->byte_4,
+ V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
+ V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
+ HNS_ROCE_V2_WQE_OP_SEND);
+ break;
+ case IB_WR_SEND_WITH_INV:
+ roce_set_field(rc_sq_wqe->byte_4,
V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
HNS_ROCE_V2_WQE_OP_SEND_WITH_INV);
- break;
- case IB_WR_SEND_WITH_IMM:
- roce_set_field(rc_sq_wqe->byte_4,
- V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
- V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
- HNS_ROCE_V2_WQE_OP_SEND_WITH_IMM);
- break;
- case IB_WR_LOCAL_INV:
- roce_set_field(rc_sq_wqe->byte_4,
- V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
- V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
- HNS_ROCE_V2_WQE_OP_LOCAL_INV);
- break;
- case IB_WR_ATOMIC_CMP_AND_SWP:
- roce_set_field(rc_sq_wqe->byte_4,
- V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
- V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
- HNS_ROCE_V2_WQE_OP_ATOM_CMP_AND_SWAP);
- break;
- case IB_WR_ATOMIC_FETCH_AND_ADD:
- roce_set_field(rc_sq_wqe->byte_4,
- V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
- V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
- HNS_ROCE_V2_WQE_OP_ATOM_FETCH_AND_ADD);
- break;
- case IB_WR_MASKED_ATOMIC_CMP_AND_SWP:
- roce_set_field(rc_sq_wqe->byte_4,
+ break;
+ case IB_WR_SEND_WITH_IMM:
+ roce_set_field(rc_sq_wqe->byte_4,
+ V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
+ V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
+ HNS_ROCE_V2_WQE_OP_SEND_WITH_IMM);
+ break;
+ case IB_WR_LOCAL_INV:
+ roce_set_field(rc_sq_wqe->byte_4,
+ V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
+ V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
+ HNS_ROCE_V2_WQE_OP_LOCAL_INV);
+ break;
+ case IB_WR_ATOMIC_CMP_AND_SWP:
+ roce_set_field(rc_sq_wqe->byte_4,
+ V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
+ V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
+ HNS_ROCE_V2_WQE_OP_ATOM_CMP_AND_SWAP);
+ break;
+ case IB_WR_ATOMIC_FETCH_AND_ADD:
+ roce_set_field(rc_sq_wqe->byte_4,
+ V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
+ V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
+ HNS_ROCE_V2_WQE_OP_ATOM_FETCH_AND_ADD);
+ break;
+ case IB_WR_MASKED_ATOMIC_CMP_AND_SWP:
+ roce_set_field(rc_sq_wqe->byte_4,
V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
HNS_ROCE_V2_WQE_OP_ATOM_MSK_CMP_AND_SWAP);
- break;
- case IB_WR_MASKED_ATOMIC_FETCH_AND_ADD:
- roce_set_field(rc_sq_wqe->byte_4,
+ break;
+ case IB_WR_MASKED_ATOMIC_FETCH_AND_ADD:
+ roce_set_field(rc_sq_wqe->byte_4,
V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
HNS_ROCE_V2_WQE_OP_ATOM_MSK_FETCH_AND_ADD);
- break;
- default:
- roce_set_field(rc_sq_wqe->byte_4,
- V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
- V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
- HNS_ROCE_V2_WQE_OP_MASK);
- break;
- }
-
- wqe += sizeof(struct hns_roce_v2_rc_send_wqe);
- dseg = wqe;
- if (wr->send_flags & IB_SEND_INLINE && wr->num_sge) {
- if (rc_sq_wqe->msg_len >
- hr_dev->caps.max_sq_inline) {
- ret = -EINVAL;
- *bad_wr = wr;
- dev_err(dev, "inline len(1-%d)=%d, illegal",
- rc_sq_wqe->msg_len,
- hr_dev->caps.max_sq_inline);
- goto out;
+ break;
+ default:
+ roce_set_field(rc_sq_wqe->byte_4,
+ V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
+ V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
+ HNS_ROCE_V2_WQE_OP_MASK);
+ break;
}
- for (i = 0; i < wr->num_sge; i++) {
- memcpy(wqe, ((void *)wr->sg_list[i].addr),
- wr->sg_list[i].length);
- wqe += wr->sg_list[i].length;
- wqe_sz += wr->sg_list[i].length;
- }
+ wqe += sizeof(struct hns_roce_v2_rc_send_wqe);
+ dseg = wqe;
- roce_set_bit(rc_sq_wqe->byte_4,
- V2_RC_SEND_WQE_BYTE_4_INLINE_S, 1);
+ ret = set_rwqe_data_seg(ibqp, wr, rc_sq_wqe, wqe,
+ &sge_ind, bad_wr);
+ if (ret)
+ goto out;
+ ind++;
} else {
- if (wr->num_sge <= 2) {
- for (i = 0; i < wr->num_sge; i++) {
- if (likely(wr->sg_list[i].length)) {
- set_data_seg_v2(dseg,
- wr->sg_list + i);
- dseg++;
- }
- }
- } else {
- roce_set_field(rc_sq_wqe->byte_20,
- V2_RC_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_M,
- V2_RC_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_S,
- sge_ind & (qp->sge.sge_cnt - 1));
-
- for (i = 0; i < 2; i++) {
- if (likely(wr->sg_list[i].length)) {
- set_data_seg_v2(dseg,
- wr->sg_list + i);
- dseg++;
- }
- }
-
- dseg = get_send_extend_sge(qp,
- sge_ind & (qp->sge.sge_cnt - 1));
-
- for (i = 0; i < wr->num_sge - 2; i++) {
- if (likely(wr->sg_list[i + 2].length)) {
- set_data_seg_v2(dseg,
- wr->sg_list + 2 + i);
- dseg++;
- sge_ind++;
- }
- }
- }
-
- roce_set_field(rc_sq_wqe->byte_16,
- V2_RC_SEND_WQE_BYTE_16_SGE_NUM_M,
- V2_RC_SEND_WQE_BYTE_16_SGE_NUM_S,
- wr->num_sge);
- wqe_sz += wr->num_sge *
- sizeof(struct hns_roce_v2_wqe_data_seg);
+ dev_err(dev, "Illegal qp_type(0x%x)\n", ibqp->qp_type);
+ return -EOPNOTSUPP;
}
- ind++;
}
out:
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
index 463edab..c11b253 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
@@ -916,6 +916,90 @@ struct hns_roce_v2_cq_db {
#define V2_CQ_DB_PARAMETER_NOTIFY_S 24
+struct hns_roce_v2_ud_send_wqe {
+ u32 byte_4;
+ u32 msg_len;
+ u32 immtdata;
+ u32 byte_16;
+ u32 byte_20;
+ u32 byte_24;
+ u32 qkey;
+ u32 byte_32;
+ u32 byte_36;
+ u32 byte_40;
+ u32 dmac;
+ u32 byte_48;
+ u8 dgid[GID_LEN_V2];
+
+};
+#define V2_UD_SEND_WQE_BYTE_4_OPCODE_S 0
+#define V2_UD_SEND_WQE_BYTE_4_OPCODE_M GENMASK(4, 0)
+
+#define V2_UD_SEND_WQE_BYTE_4_OWNER_S 7
+
+#define V2_UD_SEND_WQE_BYTE_4_CQE_S 8
+
+#define V2_UD_SEND_WQE_BYTE_4_SE_S 11
+
+#define V2_UD_SEND_WQE_BYTE_16_PD_S 0
+#define V2_UD_SEND_WQE_BYTE_16_PD_M GENMASK(23, 0)
+
+#define V2_UD_SEND_WQE_BYTE_16_SGE_NUM_S 24
+#define V2_UD_SEND_WQE_BYTE_16_SGE_NUM_M GENMASK(31, 24)
+
+#define V2_UD_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_S 0
+#define V2_UD_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_M GENMASK(23, 0)
+
+#define V2_UD_SEND_WQE_BYTE_24_UDPSPN_S 16
+#define V2_UD_SEND_WQE_BYTE_24_UDPSPN_M GENMASK(31, 16)
+
+#define V2_UD_SEND_WQE_BYTE_32_DQPN_S 0
+#define V2_UD_SEND_WQE_BYTE_32_DQPN_M GENMASK(23, 0)
+
+#define V2_UD_SEND_WQE_BYTE_36_VLAN_S 0
+#define V2_UD_SEND_WQE_BYTE_36_VLAN_M GENMASK(15, 0)
+
+#define V2_UD_SEND_WQE_BYTE_36_HOPLIMIT_S 16
+#define V2_UD_SEND_WQE_BYTE_36_HOPLIMIT_M GENMASK(23, 16)
+
+#define V2_UD_SEND_WQE_BYTE_36_TCLASS_S 24
+#define V2_UD_SEND_WQE_BYTE_36_TCLASS_M GENMASK(31, 24)
+
+#define V2_UD_SEND_WQE_BYTE_40_FLOW_LABEL_S 0
+#define V2_UD_SEND_WQE_BYTE_40_FLOW_LABEL_M GENMASK(19, 0)
+
+#define V2_UD_SEND_WQE_BYTE_40_SL_S 20
+#define V2_UD_SEND_WQE_BYTE_40_SL_M GENMASK(23, 20)
+
+#define V2_UD_SEND_WQE_BYTE_40_PORTN_S 24
+#define V2_UD_SEND_WQE_BYTE_40_PORTN_M GENMASK(26, 24)
+
+#define V2_UD_SEND_WQE_BYTE_40_LBI_S 31
+
+#define V2_UD_SEND_WQE_DMAC_0_S 0
+#define V2_UD_SEND_WQE_DMAC_0_M GENMASK(7, 0)
+
+#define V2_UD_SEND_WQE_DMAC_1_S 8
+#define V2_UD_SEND_WQE_DMAC_1_M GENMASK(15, 8)
+
+#define V2_UD_SEND_WQE_DMAC_2_S 16
+#define V2_UD_SEND_WQE_DMAC_2_M GENMASK(23, 16)
+
+#define V2_UD_SEND_WQE_DMAC_3_S 24
+#define V2_UD_SEND_WQE_DMAC_3_M GENMASK(31, 24)
+
+#define V2_UD_SEND_WQE_BYTE_48_DMAC_4_S 0
+#define V2_UD_SEND_WQE_BYTE_48_DMAC_4_M GENMASK(7, 0)
+
+#define V2_UD_SEND_WQE_BYTE_48_DMAC_5_S 8
+#define V2_UD_SEND_WQE_BYTE_48_DMAC_5_M GENMASK(15, 8)
+
+#define V2_UD_SEND_WQE_BYTE_48_SGID_INDX_S 16
+#define V2_UD_SEND_WQE_BYTE_48_SGID_INDX_M GENMASK(23, 16)
+
+#define V2_UD_SEND_WQE_BYTE_48_SMAC_INDX_S 24
+#define V2_UD_SEND_WQE_BYTE_48_SMAC_INDX_M GENMASK(31, 24)
+
struct hns_roce_v2_rc_send_wqe {
u32 byte_4;
u32 msg_len;
--
1.9.1
* [PATCH for-next 4/6] RDMA/hns: Assign zero for pkey_index of wc in hip08
@ 2018-01-04 4:19 Lijun Ou
From: Lijun Ou @ 2018-01-04 4:19 UTC (permalink / raw)
To: dledford-H+wXaHxf7aLQT0dZR+AlfA, jgg-uk2M96/98Pc
Cc: leon-DgEjT+Ai2ygdnm+yROfE0A, linux-rdma-u79uwXL29TY76Z2rM5mHXA
Because the pkey is fixed for hip08 RoCE, the pkey_index
of the wc needs to be set to zero; otherwise, an error occurs
when a connection is established through the communication
management mechanism.
Signed-off-by: Lijun Ou <oulijun-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
Signed-off-by: Yixian Liu <liuyixian-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
---
drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 0c30998..4a902c4 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -1911,6 +1911,9 @@ static int hns_roce_v2_poll_one(struct hns_roce_cq *hr_cq,
wc->wc_flags |= (roce_get_bit(cqe->byte_32,
V2_CQE_BYTE_32_GRH_S) ?
IB_WC_GRH : 0);
+ wc->port_num = roce_get_field(cqe->byte_32,
+ V2_CQE_BYTE_32_PORTN_M, V2_CQE_BYTE_32_PORTN_S);
+ wc->pkey_index = 0;
}
return 0;
--
1.9.1
--
^ permalink raw reply related [flat|nested] 13+ messages in thread
* [PATCH for-next 5/6] RDMA/hns: Update the verbs of polling for completion
[not found] ` <1515039563-73084-1-git-send-email-oulijun-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
` (3 preceding siblings ...)
2018-01-04 4:19 ` [PATCH for-next 4/6] RDMA/hns: Assign zero for pkey_index of wc " Lijun Ou
@ 2018-01-04 4:19 ` Lijun Ou
2018-01-04 4:19 ` [PATCH for-next 6/6] RDMA/hns: Set the guid for hip08 RoCE device Lijun Ou
5 siblings, 0 replies; 13+ messages in thread
From: Lijun Ou @ 2018-01-04 4:19 UTC (permalink / raw)
To: dledford-H+wXaHxf7aLQT0dZR+AlfA, jgg-uk2M96/98Pc
Cc: leon-DgEjT+Ai2ygdnm+yROfE0A, linux-rdma-u79uwXL29TY76Z2rM5mHXA
If the port is a RoCEv2 port, the remote port address and QP
information returned in UD work completions are updated accordingly.
Signed-off-by: Lijun Ou <oulijun-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
Signed-off-by: Yixian Liu <liuyixian-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
---
drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 12 ++++++++++++
drivers/infiniband/hw/hns/hns_roce_hw_v2.h | 2 +-
2 files changed, 13 insertions(+), 1 deletion(-)
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 4a902c4..6b4474d 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -1914,6 +1914,18 @@ static int hns_roce_v2_poll_one(struct hns_roce_cq *hr_cq,
wc->port_num = roce_get_field(cqe->byte_32,
V2_CQE_BYTE_32_PORTN_M, V2_CQE_BYTE_32_PORTN_S);
wc->pkey_index = 0;
+ memcpy(wc->smac, cqe->smac, 4);
+ wc->smac[4] = roce_get_field(cqe->byte_28,
+ V2_CQE_BYTE_28_SMAC_4_M,
+ V2_CQE_BYTE_28_SMAC_4_S);
+ wc->smac[5] = roce_get_field(cqe->byte_28,
+ V2_CQE_BYTE_28_SMAC_5_M,
+ V2_CQE_BYTE_28_SMAC_5_S);
+ wc->vlan_id = 0xffff;
+ wc->wc_flags |= (IB_WC_WITH_VLAN | IB_WC_WITH_SMAC);
+ wc->network_hdr_type = roce_get_field(cqe->byte_28,
+ V2_CQE_BYTE_28_PORT_TYPE_M,
+ V2_CQE_BYTE_28_PORT_TYPE_S);
}
return 0;
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
index c11b253..ce52c73 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
@@ -766,7 +766,7 @@ struct hns_roce_v2_cqe {
u32 byte_12;
u32 byte_16;
u32 byte_cnt;
- u32 smac;
+ u8 smac[4];
u32 byte_28;
u32 byte_32;
};
--
1.9.1
--
^ permalink raw reply related [flat|nested] 13+ messages in thread
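Since the patch narrows `smac` to a 4-byte array and moves the last two source-MAC bytes into `byte_28` bitfields, the poll path reassembles the full 6-byte MAC as in the hunk above. A userspace sketch of that reassembly (the SMAC_4/SMAC_5 offsets below are assumed for illustration; the real ones are the `V2_CQE_BYTE_28_SMAC_{4,5}_{S,M}` defines, which are not shown in this hunk):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define GENMASK(h, l) (((~0u) << (l)) & (~0u >> (31 - (h))))

/* Assumed field positions, for illustration only */
#define SMAC_4_S 0
#define SMAC_4_M GENMASK(7, 0)
#define SMAC_5_S 8
#define SMAC_5_M GENMASK(15, 8)

/* Rebuild the 6-byte source MAC from the CQE layout the patch creates */
static void cqe_smac(const uint8_t smac[4], uint32_t byte_28, uint8_t out[6])
{
	memcpy(out, smac, 4);                        /* first 4 bytes come raw */
	out[4] = (byte_28 & SMAC_4_M) >> SMAC_4_S;   /* 5th byte from bitfield */
	out[5] = (byte_28 & SMAC_5_M) >> SMAC_5_S;   /* 6th byte from bitfield */
}
```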
* [PATCH for-next 6/6] RDMA/hns: Set the guid for hip08 RoCE device
[not found] ` <1515039563-73084-1-git-send-email-oulijun-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
` (4 preceding siblings ...)
2018-01-04 4:19 ` [PATCH for-next 5/6] RDMA/hns: Update the verbs of polling for completion Lijun Ou
@ 2018-01-04 4:19 ` Lijun Ou
[not found] ` <1515039563-73084-7-git-send-email-oulijun-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
5 siblings, 1 reply; 13+ messages in thread
From: Lijun Ou @ 2018-01-04 4:19 UTC (permalink / raw)
To: dledford-H+wXaHxf7aLQT0dZR+AlfA, jgg-uk2M96/98Pc
Cc: leon-DgEjT+Ai2ygdnm+yROfE0A, linux-rdma-u79uwXL29TY76Z2rM5mHXA
This patch assigns a GUID (Globally Unique Identifier) value
to the hip08 device.
Signed-off-by: Lijun Ou <oulijun-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
Signed-off-by: Yixian Liu <liuyixian-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
---
drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 6b4474d..6a6f355 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -4658,6 +4658,22 @@ static void hns_roce_v2_cleanup_eq_table(struct hns_roce_dev *hr_dev)
{0, }
};
+static void hns_roce_get_guid(u8 *dev_addr, u8 *guid)
+{
+ u8 mac[ETH_ALEN];
+
+ /* MAC-48 to EUI-64 mapping */
+ memcpy(mac, dev_addr, ETH_ALEN);
+ guid[0] = mac[0] ^ 2;
+ guid[1] = mac[1];
+ guid[2] = mac[2];
+ guid[3] = 0xff;
+ guid[4] = 0xfe;
+ guid[5] = mac[3];
+ guid[6] = mac[4];
+ guid[7] = mac[5];
+}
+
static int hns_roce_hw_v2_get_cfg(struct hns_roce_dev *hr_dev,
struct hnae3_handle *handle)
{
@@ -4680,6 +4696,9 @@ static int hns_roce_hw_v2_get_cfg(struct hns_roce_dev *hr_dev,
hr_dev->iboe.netdevs[0] = handle->rinfo.netdev;
hr_dev->iboe.phy_port[0] = 0;
+ hns_roce_get_guid(hr_dev->iboe.netdevs[0]->dev_addr,
+ (u8 *)&hr_dev->ib_dev.node_guid);
+
for (i = 0; i < HNS_ROCE_V2_MAX_IRQ_NUM; i++)
hr_dev->irq[i] = pci_irq_vector(handle->pdev,
i + handle->rinfo.base_vector);
--
1.9.1
--
^ permalink raw reply related [flat|nested] 13+ messages in thread
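The MAC-48 to modified EUI-64 mapping that `hns_roce_get_guid()` performs can be exercised standalone; this sketch mirrors the byte assignments in the diff above (the function name here is illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

/* Map a 48-bit MAC to a modified EUI-64, as the patch does:
 * flip the universal/local bit and insert 0xff, 0xfe in the middle. */
static void mac_to_eui64(const uint8_t mac[6], uint8_t guid[8])
{
	guid[0] = mac[0] ^ 2;   /* toggle the U/L bit */
	guid[1] = mac[1];
	guid[2] = mac[2];
	guid[3] = 0xff;
	guid[4] = 0xfe;
	guid[5] = mac[3];
	guid[6] = mac[4];
	guid[7] = mac[5];
}
```

For example, MAC 02:00:00:00:00:01 maps to the GUID bytes 00:00:00:ff:fe:00:00:01.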
* Re: [PATCH for-next 2/6] RDMA/hns: Add gsi qp support for modifying qp in hip08
[not found] ` <1515039563-73084-3-git-send-email-oulijun-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
@ 2018-01-08 21:20 ` Doug Ledford
[not found] ` <1515446459.3403.94.camel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
0 siblings, 1 reply; 13+ messages in thread
From: Doug Ledford @ 2018-01-08 21:20 UTC (permalink / raw)
To: Lijun Ou, jgg-uk2M96/98Pc
Cc: leon-DgEjT+Ai2ygdnm+yROfE0A, linux-rdma-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: Type: text/plain, Size: 680 bytes --]
On Thu, 2018-01-04 at 12:19 +0800, Lijun Ou wrote:
> @@ -2342,7 +2366,7 @@ static void modify_qp_init_to_init(struct ib_qp *ibqp,
> V2_QPC_BYTE_80_RX_CQN_S, 0);
>
> roce_set_field(context->byte_252_err_txcqn, V2_QPC_BYTE_252_TX_CQN_M,
> - V2_QPC_BYTE_252_TX_CQN_S, to_hr_cq(ibqp->recv_cq)->cqn);
> + V2_QPC_BYTE_252_TX_CQN_S, to_hr_cq(ibqp->send_cq)->cqn);
This looks like a bugfix unrelated to the rest of the patch.
--
Doug Ledford <dledford-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
GPG KeyID: B826A3330E572FDD
Key fingerprint = AE6B 1BDA 122B 23B4 265B 1274 B826 A333 0E57 2FDD
[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH for-next 2/6] RDMA/hns: Add gsi qp support for modifying qp in hip08
[not found] ` <1515446459.3403.94.camel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2018-01-09 1:04 ` oulijun
[not found] ` <66c69e6b-8f74-be2e-1404-3e0c8c4a4024-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
0 siblings, 1 reply; 13+ messages in thread
From: oulijun @ 2018-01-09 1:04 UTC (permalink / raw)
To: Doug Ledford, jgg-uk2M96/98Pc
Cc: leon-DgEjT+Ai2ygdnm+yROfE0A, linux-rdma-u79uwXL29TY76Z2rM5mHXA
在 2018/1/9 5:20, Doug Ledford 写道:
> On Thu, 2018-01-04 at 12:19 +0800, Lijun Ou wrote:
>> @@ -2342,7 +2366,7 @@ static void modify_qp_init_to_init(struct ib_qp *ibqp,
>> V2_QPC_BYTE_80_RX_CQN_S, 0);
>>
>> roce_set_field(context->byte_252_err_txcqn, V2_QPC_BYTE_252_TX_CQN_M,
>> - V2_QPC_BYTE_252_TX_CQN_S, to_hr_cq(ibqp->recv_cq)->cqn);
>> + V2_QPC_BYTE_252_TX_CQN_S, to_hr_cq(ibqp->send_cq)->cqn);
>
> This looks like a bugfix unrelated to the rest of the patch.
>
Sure. This was found while debugging CM, and the other qp context
modifications in this patch are all unified for CM, so I put it into the
CM patch set.
Do I need to send a PATCH v2?
thanks
Lijun Ou
--
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH for-next 3/6] RDMA/hns: Fill sq wqe context of ud type in hip08
[not found] ` <1515039563-73084-4-git-send-email-oulijun-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
@ 2018-01-09 14:33 ` Leon Romanovsky
0 siblings, 0 replies; 13+ messages in thread
From: Leon Romanovsky @ 2018-01-09 14:33 UTC (permalink / raw)
To: Lijun Ou
Cc: dledford-H+wXaHxf7aLQT0dZR+AlfA, jgg-uk2M96/98Pc,
linux-rdma-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: Type: text/plain, Size: 21562 bytes --]
On Thu, Jan 04, 2018 at 12:19:20PM +0800, Lijun Ou wrote:
> This patch mainly configures the fields of the sq wqe of ud
> type when posting a wr on a gsi qp.
>
> Signed-off-by: Lijun Ou <oulijun-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
> Signed-off-by: Yixian Liu <liuyixian-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
> Signed-off-by: Wei Hu (Xavier) <xavier.huwei-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
> ---
> drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 447 +++++++++++++++++++----------
> drivers/infiniband/hw/hns/hns_roce_hw_v2.h | 84 ++++++
> 2 files changed, 386 insertions(+), 145 deletions(-)
>
> diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
> index e53cd7d..0c30998 100644
> --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
> +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
> @@ -51,26 +51,101 @@ static void set_data_seg_v2(struct hns_roce_v2_wqe_data_seg *dseg,
> dseg->len = cpu_to_le32(sg->length);
> }
>
> +static int set_rwqe_data_seg(struct ib_qp *ibqp, struct ib_send_wr *wr,
> + struct hns_roce_v2_rc_send_wqe *rc_sq_wqe,
> + void *wqe, unsigned int *sge_ind,
> + struct ib_send_wr **bad_wr)
> +{
> + struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
> + struct hns_roce_v2_wqe_data_seg *dseg = wqe;
> + struct hns_roce_qp *qp = to_hr_qp(ibqp);
> + int ret = 0;
> + int i;
> +
> + if (wr->send_flags & IB_SEND_INLINE && wr->num_sge) {
> + if (rc_sq_wqe->msg_len > hr_dev->caps.max_sq_inline) {
> + ret = -EINVAL;
This assignment is not needed; you can return directly.
> + *bad_wr = wr;
> + dev_err(hr_dev->dev, "inline len(1-%d)=%d, illegal",
> + rc_sq_wqe->msg_len, hr_dev->caps.max_sq_inline);
> + return ret;
> + }
> +
> + for (i = 0; i < wr->num_sge; i++) {
> + memcpy(wqe, ((void *)wr->sg_list[i].addr),
> + wr->sg_list[i].length);
> + wqe += wr->sg_list[i].length;
> + }
> +
> + roce_set_bit(rc_sq_wqe->byte_4, V2_RC_SEND_WQE_BYTE_4_INLINE_S,
> + 1);
> + } else {
> + if (wr->num_sge <= 2) {
> + for (i = 0; i < wr->num_sge; i++) {
> + if (likely(wr->sg_list[i].length)) {
> + set_data_seg_v2(dseg, wr->sg_list + i);
> + dseg++;
> + }
> + }
> + } else {
> + roce_set_field(rc_sq_wqe->byte_20,
> + V2_RC_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_M,
> + V2_RC_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_S,
> + (*sge_ind) & (qp->sge.sge_cnt - 1));
> +
> + for (i = 0; i < 2; i++) {
> + if (likely(wr->sg_list[i].length)) {
> + set_data_seg_v2(dseg, wr->sg_list + i);
> + dseg++;
> + }
> + }
> +
> + dseg = get_send_extend_sge(qp,
> + (*sge_ind) & (qp->sge.sge_cnt - 1));
> +
> + for (i = 0; i < wr->num_sge - 2; i++) {
> + if (likely(wr->sg_list[i + 2].length)) {
> + set_data_seg_v2(dseg,
> + wr->sg_list + 2 + i);
> + dseg++;
> + (*sge_ind)++;
> + }
> + }
> + }
> +
> + roce_set_field(rc_sq_wqe->byte_16,
> + V2_RC_SEND_WQE_BYTE_16_SGE_NUM_M,
> + V2_RC_SEND_WQE_BYTE_16_SGE_NUM_S, wr->num_sge);
> + }
> +
> + return ret;
You initialized this at the beginning, but never actually set it in the code.
You can drop the "int ret = 0" line.
> +}
> +
> static int hns_roce_v2_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
> struct ib_send_wr **bad_wr)
> {
> struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
> + struct hns_roce_ah *ah = to_hr_ah(ud_wr(wr)->ah);
> + struct hns_roce_v2_ud_send_wqe *ud_sq_wqe;
> struct hns_roce_v2_rc_send_wqe *rc_sq_wqe;
> struct hns_roce_qp *qp = to_hr_qp(ibqp);
> struct hns_roce_v2_wqe_data_seg *dseg;
> struct device *dev = hr_dev->dev;
> struct hns_roce_v2_db sq_db;
> unsigned int sge_ind = 0;
> - unsigned int wqe_sz = 0;
> unsigned int owner_bit;
> unsigned long flags;
> unsigned int ind;
> void *wqe = NULL;
> + bool loopback;
> int ret = 0;
> + u8 *smac;
> int nreq;
> int i;
>
> - if (unlikely(ibqp->qp_type != IB_QPT_RC)) {
> + if (unlikely(ibqp->qp_type != IB_QPT_RC &&
> + ibqp->qp_type != IB_QPT_GSI &&
> + ibqp->qp_type != IB_QPT_UD)) {
> dev_err(dev, "Not supported QP(0x%x)type!\n", ibqp->qp_type);
> *bad_wr = NULL;
> return -EOPNOTSUPP;
> @@ -107,172 +182,254 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
> wr->wr_id;
>
> owner_bit = ~(qp->sq.head >> ilog2(qp->sq.wqe_cnt)) & 0x1;
> - rc_sq_wqe = wqe;
> - memset(rc_sq_wqe, 0, sizeof(*rc_sq_wqe));
> - for (i = 0; i < wr->num_sge; i++)
> - rc_sq_wqe->msg_len += wr->sg_list[i].length;
>
> - rc_sq_wqe->inv_key_immtdata = send_ieth(wr);
> + /* Corresponding to the QP type, wqe process separately */
> + if (ibqp->qp_type == IB_QPT_GSI) {
> + ud_sq_wqe = wqe;
> + memset(ud_sq_wqe, 0, sizeof(*ud_sq_wqe));
> +
> + roce_set_field(ud_sq_wqe->dmac, V2_UD_SEND_WQE_DMAC_0_M,
> + V2_UD_SEND_WQE_DMAC_0_S, ah->av.mac[0]);
> + roce_set_field(ud_sq_wqe->dmac, V2_UD_SEND_WQE_DMAC_1_M,
> + V2_UD_SEND_WQE_DMAC_1_S, ah->av.mac[1]);
> + roce_set_field(ud_sq_wqe->dmac, V2_UD_SEND_WQE_DMAC_2_M,
> + V2_UD_SEND_WQE_DMAC_2_S, ah->av.mac[2]);
> + roce_set_field(ud_sq_wqe->dmac, V2_UD_SEND_WQE_DMAC_3_M,
> + V2_UD_SEND_WQE_DMAC_3_S, ah->av.mac[3]);
> + roce_set_field(ud_sq_wqe->byte_48,
> + V2_UD_SEND_WQE_BYTE_48_DMAC_4_M,
> + V2_UD_SEND_WQE_BYTE_48_DMAC_4_S,
> + ah->av.mac[4]);
> + roce_set_field(ud_sq_wqe->byte_48,
> + V2_UD_SEND_WQE_BYTE_48_DMAC_5_M,
> + V2_UD_SEND_WQE_BYTE_48_DMAC_5_S,
> + ah->av.mac[5]);
> +
> + /* MAC loopback */
> + smac = (u8 *)hr_dev->dev_addr[qp->port];
> + loopback = ether_addr_equal_unaligned(ah->av.mac,
> + smac) ? 1 : 0;
> +
> + roce_set_bit(ud_sq_wqe->byte_40,
> + V2_UD_SEND_WQE_BYTE_40_LBI_S, loopback);
> +
> + roce_set_field(ud_sq_wqe->byte_4,
> + V2_UD_SEND_WQE_BYTE_4_OPCODE_M,
> + V2_UD_SEND_WQE_BYTE_4_OPCODE_S,
> + HNS_ROCE_V2_WQE_OP_SEND);
>
> - roce_set_bit(rc_sq_wqe->byte_4, V2_RC_SEND_WQE_BYTE_4_FENCE_S,
> - (wr->send_flags & IB_SEND_FENCE) ? 1 : 0);
> + for (i = 0; i < wr->num_sge; i++)
> + ud_sq_wqe->msg_len += wr->sg_list[i].length;
>
> - roce_set_bit(rc_sq_wqe->byte_4, V2_RC_SEND_WQE_BYTE_4_SE_S,
> - (wr->send_flags & IB_SEND_SOLICITED) ? 1 : 0);
> + ud_sq_wqe->immtdata = send_ieth(wr);
>
> - roce_set_bit(rc_sq_wqe->byte_4, V2_RC_SEND_WQE_BYTE_4_CQE_S,
> - (wr->send_flags & IB_SEND_SIGNALED) ? 1 : 0);
> + /* Set sig attr */
> + roce_set_bit(ud_sq_wqe->byte_4,
> + V2_UD_SEND_WQE_BYTE_4_CQE_S,
> + (wr->send_flags & IB_SEND_SIGNALED) ? 1 : 0);
>
> - roce_set_bit(rc_sq_wqe->byte_4, V2_RC_SEND_WQE_BYTE_4_OWNER_S,
> - owner_bit);
> + /* Set se attr */
> + roce_set_bit(ud_sq_wqe->byte_4,
> + V2_UD_SEND_WQE_BYTE_4_SE_S,
> + (wr->send_flags & IB_SEND_SOLICITED) ? 1 : 0);
>
> - switch (wr->opcode) {
> - case IB_WR_RDMA_READ:
> - roce_set_field(rc_sq_wqe->byte_4,
> - V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
> - V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
> - HNS_ROCE_V2_WQE_OP_RDMA_READ);
> - rc_sq_wqe->rkey = cpu_to_le32(rdma_wr(wr)->rkey);
> - rc_sq_wqe->va = cpu_to_le64(rdma_wr(wr)->remote_addr);
> - break;
> - case IB_WR_RDMA_WRITE:
> - roce_set_field(rc_sq_wqe->byte_4,
> - V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
> - V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
> - HNS_ROCE_V2_WQE_OP_RDMA_WRITE);
> - rc_sq_wqe->rkey = cpu_to_le32(rdma_wr(wr)->rkey);
> - rc_sq_wqe->va = cpu_to_le64(rdma_wr(wr)->remote_addr);
> - break;
> - case IB_WR_RDMA_WRITE_WITH_IMM:
> - roce_set_field(rc_sq_wqe->byte_4,
> + roce_set_bit(ud_sq_wqe->byte_4,
> + V2_UD_SEND_WQE_BYTE_4_OWNER_S, owner_bit);
> +
> + roce_set_field(ud_sq_wqe->byte_16,
> + V2_UD_SEND_WQE_BYTE_16_PD_M,
> + V2_UD_SEND_WQE_BYTE_16_PD_S,
> + to_hr_pd(ibqp->pd)->pdn);
> +
> + roce_set_field(ud_sq_wqe->byte_16,
> + V2_UD_SEND_WQE_BYTE_16_SGE_NUM_M,
> + V2_UD_SEND_WQE_BYTE_16_SGE_NUM_S,
> + wr->num_sge);
> +
> + roce_set_field(ud_sq_wqe->byte_20,
> + V2_UD_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_M,
> + V2_UD_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_S,
> + sge_ind & (qp->sge.sge_cnt - 1));
> +
> + roce_set_field(ud_sq_wqe->byte_24,
> + V2_UD_SEND_WQE_BYTE_24_UDPSPN_M,
> + V2_UD_SEND_WQE_BYTE_24_UDPSPN_S, 0);
> + ud_sq_wqe->qkey =
> + cpu_to_be32(ud_wr(wr)->remote_qkey & 0x80000000) ?
> + qp->qkey : ud_wr(wr)->remote_qkey;
> + roce_set_field(ud_sq_wqe->byte_32,
> + V2_UD_SEND_WQE_BYTE_32_DQPN_M,
> + V2_UD_SEND_WQE_BYTE_32_DQPN_S,
> + ud_wr(wr)->remote_qpn);
> +
> + roce_set_field(ud_sq_wqe->byte_36,
> + V2_UD_SEND_WQE_BYTE_36_VLAN_M,
> + V2_UD_SEND_WQE_BYTE_36_VLAN_S,
> + ah->av.vlan);
> + roce_set_field(ud_sq_wqe->byte_36,
> + V2_UD_SEND_WQE_BYTE_36_HOPLIMIT_M,
> + V2_UD_SEND_WQE_BYTE_36_HOPLIMIT_S,
> + ah->av.hop_limit);
> + roce_set_field(ud_sq_wqe->byte_36,
> + V2_UD_SEND_WQE_BYTE_36_TCLASS_M,
> + V2_UD_SEND_WQE_BYTE_36_TCLASS_S,
> + 0);
> + roce_set_field(ud_sq_wqe->byte_36,
> + V2_UD_SEND_WQE_BYTE_36_TCLASS_M,
> + V2_UD_SEND_WQE_BYTE_36_TCLASS_S,
> + 0);
> + roce_set_field(ud_sq_wqe->byte_40,
> + V2_UD_SEND_WQE_BYTE_40_FLOW_LABEL_M,
> + V2_UD_SEND_WQE_BYTE_40_FLOW_LABEL_S, 0);
> + roce_set_field(ud_sq_wqe->byte_40,
> + V2_UD_SEND_WQE_BYTE_40_SL_M,
> + V2_UD_SEND_WQE_BYTE_40_SL_S,
> + ah->av.sl_tclass_flowlabel >>
> + HNS_ROCE_SL_SHIFT);
> + roce_set_field(ud_sq_wqe->byte_40,
> + V2_UD_SEND_WQE_BYTE_40_PORTN_M,
> + V2_UD_SEND_WQE_BYTE_40_PORTN_S,
> + qp->port);
> +
> + roce_set_field(ud_sq_wqe->byte_48,
> + V2_UD_SEND_WQE_BYTE_48_SGID_INDX_M,
> + V2_UD_SEND_WQE_BYTE_48_SGID_INDX_S,
> + hns_get_gid_index(hr_dev, qp->phy_port,
> + ah->av.gid_index));
> +
> + memcpy(&ud_sq_wqe->dgid[0], &ah->av.dgid[0],
> + GID_LEN_V2);
> +
> + dseg = get_send_extend_sge(qp,
> + sge_ind & (qp->sge.sge_cnt - 1));
> + for (i = 0; i < wr->num_sge; i++) {
> + set_data_seg_v2(dseg + i, wr->sg_list + i);
> + sge_ind++;
> + }
> +
> + ind++;
> + } else if (ibqp->qp_type == IB_QPT_RC) {
> + rc_sq_wqe = wqe;
> + memset(rc_sq_wqe, 0, sizeof(*rc_sq_wqe));
> + for (i = 0; i < wr->num_sge; i++)
> + rc_sq_wqe->msg_len += wr->sg_list[i].length;
> +
> + rc_sq_wqe->inv_key_immtdata = send_ieth(wr);
> +
> + roce_set_bit(rc_sq_wqe->byte_4,
> + V2_RC_SEND_WQE_BYTE_4_FENCE_S,
> + (wr->send_flags & IB_SEND_FENCE) ? 1 : 0);
> +
> + roce_set_bit(rc_sq_wqe->byte_4,
> + V2_RC_SEND_WQE_BYTE_4_SE_S,
> + (wr->send_flags & IB_SEND_SOLICITED) ? 1 : 0);
> +
> + roce_set_bit(rc_sq_wqe->byte_4,
> + V2_RC_SEND_WQE_BYTE_4_CQE_S,
> + (wr->send_flags & IB_SEND_SIGNALED) ? 1 : 0);
> +
> + roce_set_bit(rc_sq_wqe->byte_4,
> + V2_RC_SEND_WQE_BYTE_4_OWNER_S, owner_bit);
> +
> + switch (wr->opcode) {
> + case IB_WR_RDMA_READ:
> + roce_set_field(rc_sq_wqe->byte_4,
> + V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
> + V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
> + HNS_ROCE_V2_WQE_OP_RDMA_READ);
> + rc_sq_wqe->rkey =
> + cpu_to_le32(rdma_wr(wr)->rkey);
> + rc_sq_wqe->va =
> + cpu_to_le64(rdma_wr(wr)->remote_addr);
> + break;
> + case IB_WR_RDMA_WRITE:
> + roce_set_field(rc_sq_wqe->byte_4,
> + V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
> + V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
> + HNS_ROCE_V2_WQE_OP_RDMA_WRITE);
> + rc_sq_wqe->rkey =
> + cpu_to_le32(rdma_wr(wr)->rkey);
> + rc_sq_wqe->va =
> + cpu_to_le64(rdma_wr(wr)->remote_addr);
> + break;
> + case IB_WR_RDMA_WRITE_WITH_IMM:
> + roce_set_field(rc_sq_wqe->byte_4,
> V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
> V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
> HNS_ROCE_V2_WQE_OP_RDMA_WRITE_WITH_IMM);
> - rc_sq_wqe->rkey = cpu_to_le32(rdma_wr(wr)->rkey);
> - rc_sq_wqe->va = cpu_to_le64(rdma_wr(wr)->remote_addr);
> - break;
> - case IB_WR_SEND:
> - roce_set_field(rc_sq_wqe->byte_4,
> - V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
> - V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
> - HNS_ROCE_V2_WQE_OP_SEND);
> - break;
> - case IB_WR_SEND_WITH_INV:
> - roce_set_field(rc_sq_wqe->byte_4,
> + rc_sq_wqe->rkey =
> + cpu_to_le32(rdma_wr(wr)->rkey);
> + rc_sq_wqe->va =
> + cpu_to_le64(rdma_wr(wr)->remote_addr);
> + break;
> + case IB_WR_SEND:
> + roce_set_field(rc_sq_wqe->byte_4,
> + V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
> + V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
> + HNS_ROCE_V2_WQE_OP_SEND);
> + break;
> + case IB_WR_SEND_WITH_INV:
> + roce_set_field(rc_sq_wqe->byte_4,
> V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
> V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
> HNS_ROCE_V2_WQE_OP_SEND_WITH_INV);
> - break;
> - case IB_WR_SEND_WITH_IMM:
> - roce_set_field(rc_sq_wqe->byte_4,
> - V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
> - V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
> - HNS_ROCE_V2_WQE_OP_SEND_WITH_IMM);
> - break;
> - case IB_WR_LOCAL_INV:
> - roce_set_field(rc_sq_wqe->byte_4,
> - V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
> - V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
> - HNS_ROCE_V2_WQE_OP_LOCAL_INV);
> - break;
> - case IB_WR_ATOMIC_CMP_AND_SWP:
> - roce_set_field(rc_sq_wqe->byte_4,
> - V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
> - V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
> - HNS_ROCE_V2_WQE_OP_ATOM_CMP_AND_SWAP);
> - break;
> - case IB_WR_ATOMIC_FETCH_AND_ADD:
> - roce_set_field(rc_sq_wqe->byte_4,
> - V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
> - V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
> - HNS_ROCE_V2_WQE_OP_ATOM_FETCH_AND_ADD);
> - break;
> - case IB_WR_MASKED_ATOMIC_CMP_AND_SWP:
> - roce_set_field(rc_sq_wqe->byte_4,
> + break;
> + case IB_WR_SEND_WITH_IMM:
> + roce_set_field(rc_sq_wqe->byte_4,
> + V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
> + V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
> + HNS_ROCE_V2_WQE_OP_SEND_WITH_IMM);
> + break;
> + case IB_WR_LOCAL_INV:
> + roce_set_field(rc_sq_wqe->byte_4,
> + V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
> + V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
> + HNS_ROCE_V2_WQE_OP_LOCAL_INV);
> + break;
> + case IB_WR_ATOMIC_CMP_AND_SWP:
> + roce_set_field(rc_sq_wqe->byte_4,
> + V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
> + V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
> + HNS_ROCE_V2_WQE_OP_ATOM_CMP_AND_SWAP);
> + break;
> + case IB_WR_ATOMIC_FETCH_AND_ADD:
> + roce_set_field(rc_sq_wqe->byte_4,
> + V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
> + V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
> + HNS_ROCE_V2_WQE_OP_ATOM_FETCH_AND_ADD);
> + break;
> + case IB_WR_MASKED_ATOMIC_CMP_AND_SWP:
> + roce_set_field(rc_sq_wqe->byte_4,
> V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
> V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
> HNS_ROCE_V2_WQE_OP_ATOM_MSK_CMP_AND_SWAP);
> - break;
> - case IB_WR_MASKED_ATOMIC_FETCH_AND_ADD:
> - roce_set_field(rc_sq_wqe->byte_4,
> + break;
> + case IB_WR_MASKED_ATOMIC_FETCH_AND_ADD:
> + roce_set_field(rc_sq_wqe->byte_4,
> V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
> V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
> HNS_ROCE_V2_WQE_OP_ATOM_MSK_FETCH_AND_ADD);
> - break;
> - default:
> - roce_set_field(rc_sq_wqe->byte_4,
> - V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
> - V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
> - HNS_ROCE_V2_WQE_OP_MASK);
> - break;
> - }
> -
> - wqe += sizeof(struct hns_roce_v2_rc_send_wqe);
> - dseg = wqe;
> - if (wr->send_flags & IB_SEND_INLINE && wr->num_sge) {
> - if (rc_sq_wqe->msg_len >
> - hr_dev->caps.max_sq_inline) {
> - ret = -EINVAL;
> - *bad_wr = wr;
> - dev_err(dev, "inline len(1-%d)=%d, illegal",
> - rc_sq_wqe->msg_len,
> - hr_dev->caps.max_sq_inline);
> - goto out;
> + break;
> + default:
> + roce_set_field(rc_sq_wqe->byte_4,
> + V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
> + V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
> + HNS_ROCE_V2_WQE_OP_MASK);
> + break;
> }
>
> - for (i = 0; i < wr->num_sge; i++) {
> - memcpy(wqe, ((void *)wr->sg_list[i].addr),
> - wr->sg_list[i].length);
> - wqe += wr->sg_list[i].length;
> - wqe_sz += wr->sg_list[i].length;
> - }
> + wqe += sizeof(struct hns_roce_v2_rc_send_wqe);
> + dseg = wqe;
>
> - roce_set_bit(rc_sq_wqe->byte_4,
> - V2_RC_SEND_WQE_BYTE_4_INLINE_S, 1);
> + ret = set_rwqe_data_seg(ibqp, wr, rc_sq_wqe, wqe,
> + &sge_ind, bad_wr);
> + if (ret)
> + goto out;
> + ind++;
> } else {
> - if (wr->num_sge <= 2) {
> - for (i = 0; i < wr->num_sge; i++) {
> - if (likely(wr->sg_list[i].length)) {
> - set_data_seg_v2(dseg,
> - wr->sg_list + i);
> - dseg++;
> - }
> - }
> - } else {
> - roce_set_field(rc_sq_wqe->byte_20,
> - V2_RC_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_M,
> - V2_RC_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_S,
> - sge_ind & (qp->sge.sge_cnt - 1));
> -
> - for (i = 0; i < 2; i++) {
> - if (likely(wr->sg_list[i].length)) {
> - set_data_seg_v2(dseg,
> - wr->sg_list + i);
> - dseg++;
> - }
> - }
> -
> - dseg = get_send_extend_sge(qp,
> - sge_ind & (qp->sge.sge_cnt - 1));
> -
> - for (i = 0; i < wr->num_sge - 2; i++) {
> - if (likely(wr->sg_list[i + 2].length)) {
> - set_data_seg_v2(dseg,
> - wr->sg_list + 2 + i);
> - dseg++;
> - sge_ind++;
> - }
> - }
> - }
> -
> - roce_set_field(rc_sq_wqe->byte_16,
> - V2_RC_SEND_WQE_BYTE_16_SGE_NUM_M,
> - V2_RC_SEND_WQE_BYTE_16_SGE_NUM_S,
> - wr->num_sge);
> - wqe_sz += wr->num_sge *
> - sizeof(struct hns_roce_v2_wqe_data_seg);
> + dev_err(dev, "Illegal qp_type(0x%x)\n", ibqp->qp_type);
> + return -EOPNOTSUPP;
> }
> - ind++;
> }
>
> out:
> diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
> index 463edab..c11b253 100644
> --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
> +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
> @@ -916,6 +916,90 @@ struct hns_roce_v2_cq_db {
>
> #define V2_CQ_DB_PARAMETER_NOTIFY_S 24
>
> +struct hns_roce_v2_ud_send_wqe {
> + u32 byte_4;
> + u32 msg_len;
> + u32 immtdata;
> + u32 byte_16;
> + u32 byte_20;
> + u32 byte_24;
> + u32 qkey;
> + u32 byte_32;
> + u32 byte_36;
> + u32 byte_40;
> + u32 dmac;
> + u32 byte_48;
> + u8 dgid[GID_LEN_V2];
> +
> +};
> +#define V2_UD_SEND_WQE_BYTE_4_OPCODE_S 0
> +#define V2_UD_SEND_WQE_BYTE_4_OPCODE_M GENMASK(4, 0)
> +
> +#define V2_UD_SEND_WQE_BYTE_4_OWNER_S 7
> +
> +#define V2_UD_SEND_WQE_BYTE_4_CQE_S 8
> +
> +#define V2_UD_SEND_WQE_BYTE_4_SE_S 11
> +
> +#define V2_UD_SEND_WQE_BYTE_16_PD_S 0
> +#define V2_UD_SEND_WQE_BYTE_16_PD_M GENMASK(23, 0)
> +
> +#define V2_UD_SEND_WQE_BYTE_16_SGE_NUM_S 24
> +#define V2_UD_SEND_WQE_BYTE_16_SGE_NUM_M GENMASK(31, 24)
> +
> +#define V2_UD_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_S 0
> +#define V2_UD_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_M GENMASK(23, 0)
> +
> +#define V2_UD_SEND_WQE_BYTE_24_UDPSPN_S 16
> +#define V2_UD_SEND_WQE_BYTE_24_UDPSPN_M GENMASK(31, 16)
> +
> +#define V2_UD_SEND_WQE_BYTE_32_DQPN_S 0
> +#define V2_UD_SEND_WQE_BYTE_32_DQPN_M GENMASK(23, 0)
> +
> +#define V2_UD_SEND_WQE_BYTE_36_VLAN_S 0
> +#define V2_UD_SEND_WQE_BYTE_36_VLAN_M GENMASK(15, 0)
> +
> +#define V2_UD_SEND_WQE_BYTE_36_HOPLIMIT_S 16
> +#define V2_UD_SEND_WQE_BYTE_36_HOPLIMIT_M GENMASK(23, 16)
> +
> +#define V2_UD_SEND_WQE_BYTE_36_TCLASS_S 24
> +#define V2_UD_SEND_WQE_BYTE_36_TCLASS_M GENMASK(31, 24)
> +
> +#define V2_UD_SEND_WQE_BYTE_40_FLOW_LABEL_S 0
> +#define V2_UD_SEND_WQE_BYTE_40_FLOW_LABEL_M GENMASK(19, 0)
> +
> +#define V2_UD_SEND_WQE_BYTE_40_SL_S 20
> +#define V2_UD_SEND_WQE_BYTE_40_SL_M GENMASK(23, 20)
> +
> +#define V2_UD_SEND_WQE_BYTE_40_PORTN_S 24
> +#define V2_UD_SEND_WQE_BYTE_40_PORTN_M GENMASK(26, 24)
> +
> +#define V2_UD_SEND_WQE_BYTE_40_LBI_S 31
> +
> +#define V2_UD_SEND_WQE_DMAC_0_S 0
> +#define V2_UD_SEND_WQE_DMAC_0_M GENMASK(7, 0)
> +
> +#define V2_UD_SEND_WQE_DMAC_1_S 8
> +#define V2_UD_SEND_WQE_DMAC_1_M GENMASK(15, 8)
> +
> +#define V2_UD_SEND_WQE_DMAC_2_S 16
> +#define V2_UD_SEND_WQE_DMAC_2_M GENMASK(23, 16)
> +
> +#define V2_UD_SEND_WQE_DMAC_3_S 24
> +#define V2_UD_SEND_WQE_DMAC_3_M GENMASK(31, 24)
> +
> +#define V2_UD_SEND_WQE_BYTE_48_DMAC_4_S 0
> +#define V2_UD_SEND_WQE_BYTE_48_DMAC_4_M GENMASK(7, 0)
> +
> +#define V2_UD_SEND_WQE_BYTE_48_DMAC_5_S 8
> +#define V2_UD_SEND_WQE_BYTE_48_DMAC_5_M GENMASK(15, 8)
> +
> +#define V2_UD_SEND_WQE_BYTE_48_SGID_INDX_S 16
> +#define V2_UD_SEND_WQE_BYTE_48_SGID_INDX_M GENMASK(23, 16)
> +
> +#define V2_UD_SEND_WQE_BYTE_48_SMAC_INDX_S 24
> +#define V2_UD_SEND_WQE_BYTE_48_SMAC_INDX_M GENMASK(31, 24)
> +
> struct hns_roce_v2_rc_send_wqe {
> u32 byte_4;
> u32 msg_len;
> --
> 1.9.1
>
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
^ permalink raw reply [flat|nested] 13+ messages in thread
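The `IB_SEND_INLINE` branch reviewed above copies each scatter-gather entry's payload straight into the WQE buffer after checking the total length against `max_sq_inline`. Schematically, with types simplified for a standalone sketch:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

struct sge { const void *addr; uint32_t length; };

/* Copy all SGEs inline into the WQE, failing when the total message
 * length exceeds the device cap — mirrors the checked branch above. */
static int copy_inline(uint8_t *wqe, const struct sge *sgl, int num_sge,
		       uint32_t msg_len, uint32_t max_sq_inline)
{
	int i;

	if (msg_len > max_sq_inline)
		return -1;              /* -EINVAL in the driver */

	for (i = 0; i < num_sge; i++) {
		memcpy(wqe, sgl[i].addr, sgl[i].length);
		wqe += sgl[i].length;
	}
	return 0;
}
```

In the driver the success path additionally sets the INLINE bit in byte_4 so the hardware interprets the WQE payload as data rather than data segments.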
* Re: [PATCH for-next 6/6] RDMA/hns: Set the guid for hip08 RoCE device
[not found] ` <1515039563-73084-7-git-send-email-oulijun-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
@ 2018-01-09 14:45 ` Leon Romanovsky
0 siblings, 0 replies; 13+ messages in thread
From: Leon Romanovsky @ 2018-01-09 14:45 UTC (permalink / raw)
To: Lijun Ou
Cc: dledford-H+wXaHxf7aLQT0dZR+AlfA, jgg-uk2M96/98Pc,
linux-rdma-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: Type: text/plain, Size: 1907 bytes --]
On Thu, Jan 04, 2018 at 12:19:23PM +0800, Lijun Ou wrote:
> This patch assigns a GUID (Globally Unique Identifier) value
> to the hip08 device.
>
> Signed-off-by: Lijun Ou <oulijun-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
> Signed-off-by: Yixian Liu <liuyixian-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
> Signed-off-by: Wei Hu (Xavier) <xavier.huwei-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
> ---
> drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 19 +++++++++++++++++++
> 1 file changed, 19 insertions(+)
>
> diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
> index 6b4474d..6a6f355 100644
> --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
> +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
> @@ -4658,6 +4658,22 @@ static void hns_roce_v2_cleanup_eq_table(struct hns_roce_dev *hr_dev)
> {0, }
> };
>
> +static void hns_roce_get_guid(u8 *dev_addr, u8 *guid)
> +{
> + u8 mac[ETH_ALEN];
> +
> + /* MAC-48 to EUI-64 mapping */
> + memcpy(mac, dev_addr, ETH_ALEN);
> + guid[0] = mac[0] ^ 2;
> + guid[1] = mac[1];
> + guid[2] = mac[2];
> + guid[3] = 0xff;
> + guid[4] = 0xfe;
> + guid[5] = mac[3];
> + guid[6] = mac[4];
> + guid[7] = mac[5];
> +}
Please take a look at commit 4d6f28591fe4
("{net,IB}/{rxe,usnic}: Utilize generic mac to eui32 function").
It shows the correct way to set the guid.
Thanks
> +
> static int hns_roce_hw_v2_get_cfg(struct hns_roce_dev *hr_dev,
> struct hnae3_handle *handle)
> {
> @@ -4680,6 +4696,9 @@ static int hns_roce_hw_v2_get_cfg(struct hns_roce_dev *hr_dev,
> hr_dev->iboe.netdevs[0] = handle->rinfo.netdev;
> hr_dev->iboe.phy_port[0] = 0;
>
> + hns_roce_get_guid(hr_dev->iboe.netdevs[0]->dev_addr,
> + (u8 *)&hr_dev->ib_dev.node_guid);
> +
> for (i = 0; i < HNS_ROCE_V2_MAX_IRQ_NUM; i++)
> hr_dev->irq[i] = pci_irq_vector(handle->pdev,
> i + handle->rinfo.base_vector);
> --
> 1.9.1
>
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH for-next 2/6] RDMA/hns: Add gsi qp support for modifying qp in hip08
[not found] ` <66c69e6b-8f74-be2e-1404-3e0c8c4a4024-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
@ 2018-01-09 15:12 ` Doug Ledford
[not found] ` <1515510778.3403.140.camel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
0 siblings, 1 reply; 13+ messages in thread
From: Doug Ledford @ 2018-01-09 15:12 UTC (permalink / raw)
To: oulijun, jgg-uk2M96/98Pc
Cc: leon-DgEjT+Ai2ygdnm+yROfE0A, linux-rdma-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: Type: text/plain, Size: 1370 bytes --]
On Tue, 2018-01-09 at 09:04 +0800, oulijun wrote:
> 在 2018/1/9 5:20, Doug Ledford 写道:
> > On Thu, 2018-01-04 at 12:19 +0800, Lijun Ou wrote:
> > > @@ -2342,7 +2366,7 @@ static void modify_qp_init_to_init(struct ib_qp *ibqp,
> > > V2_QPC_BYTE_80_RX_CQN_S, 0);
> > >
> > > roce_set_field(context->byte_252_err_txcqn, V2_QPC_BYTE_252_TX_CQN_M,
> > > - V2_QPC_BYTE_252_TX_CQN_S, to_hr_cq(ibqp->recv_cq)->cqn);
> > > + V2_QPC_BYTE_252_TX_CQN_S, to_hr_cq(ibqp->send_cq)->cqn);
> >
> > This looks like a bugfix unrelated to the rest of the patch.
> >
>
> Sure. This was found while debugging CM, and the other qp context
> modifications in this patch are all for CM support. As a result, I put
> it into the CM patch set.
>
> Do I need to send a PATCH v2?
That depends. What's the effect of this bug? Is it something that
should be sent to stable? If the common case is that the send and recv
cq sizes are the same, and this bug is mostly never an issue, then no,
no v2 is necessary. If this is something we should send to stable, then
yes, pull out the bugfix, tag it for stable, and submit v2.
--
Doug Ledford <dledford-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
GPG KeyID: B826A3330E572FDD
Key fingerprint = AE6B 1BDA 122B 23B4 265B 1274 B826 A333 0E57 2FDD
* Re: [PATCH for-next 2/6] RDMA/hns: Add gsi qp support for modifying qp in hip08
@ 2018-01-10 1:59 ` oulijun
0 siblings, 0 replies; 13+ messages in thread
From: oulijun @ 2018-01-10 1:59 UTC (permalink / raw)
To: Doug Ledford, jgg-uk2M96/98Pc
Cc: leon-DgEjT+Ai2ygdnm+yROfE0A, linux-rdma-u79uwXL29TY76Z2rM5mHXA
On 2018/1/9 23:12, Doug Ledford wrote:
> On Tue, 2018-01-09 at 09:04 +0800, oulijun wrote:
>> On 2018/1/9 5:20, Doug Ledford wrote:
>>> On Thu, 2018-01-04 at 12:19 +0800, Lijun Ou wrote:
>>>> @@ -2342,7 +2366,7 @@ static void modify_qp_init_to_init(struct ib_qp *ibqp,
>>>> V2_QPC_BYTE_80_RX_CQN_S, 0);
>>>>
>>>> roce_set_field(context->byte_252_err_txcqn, V2_QPC_BYTE_252_TX_CQN_M,
>>>> - V2_QPC_BYTE_252_TX_CQN_S, to_hr_cq(ibqp->recv_cq)->cqn);
>>>> + V2_QPC_BYTE_252_TX_CQN_S, to_hr_cq(ibqp->send_cq)->cqn);
>>>
>>> This looks like a bugfix unrelated to the rest of the patch.
>>>
>>
>> Sure. This was found while debugging CM, and the other qp context
>> modifications in this patch are all for CM support. As a result, I put
>> it into the CM patch set.
>>
>> Do I need to send a PATCH v2?
>
> That depends. What's the effect of this bug? Is it something that
> should be sent to stable? If the common case is that the send and recv
> cq sizes are the same, and this bug is mostly never an issue, then no,
> no v2 is necessary. If this is something we should send to stable, then
> yes, pull out the bugfix, tag it for stable, and submit v2.
>
Yes, this bug affects CM only. I will send it as a separate patch.
--
End of thread (newest: 2018-01-10 1:59 UTC)
Thread overview: 13+ messages
2018-01-04  4:19 [PATCH for-next 0/6] Add CM support to hip08 Lijun Ou
2018-01-04  4:19 ` [PATCH for-next 1/6] RDMA/hns: Create gsi qp in hip08 Lijun Ou
2018-01-04  4:19 ` [PATCH for-next 2/6] RDMA/hns: Add gsi qp support for modifying qp in hip08 Lijun Ou
2018-01-08 21:20   ` Doug Ledford
2018-01-09  1:04     ` oulijun
2018-01-09 15:12       ` Doug Ledford
2018-01-10  1:59         ` oulijun
2018-01-04  4:19 ` [PATCH for-next 3/6] RDMA/hns: Fill sq wqe context of ud type in hip08 Lijun Ou
2018-01-09 14:33   ` Leon Romanovsky
2018-01-04  4:19 ` [PATCH for-next 4/6] RDMA/hns: Assign zero for pkey_index of wc in hip08 Lijun Ou
2018-01-04  4:19 ` [PATCH for-next 5/6] RDMA/hns: Update the verbs of polling for completion Lijun Ou
2018-01-04  4:19 ` [PATCH for-next 6/6] RDMA/hns: Set the guid for hip08 RoCE device Lijun Ou
2018-01-09 14:45   ` Leon Romanovsky