public inbox for linux-rdma@vger.kernel.org
* [PATCH rdma-core 00/10] i40iw: Fixes and optimizations
@ 2016-12-08 18:16 Tatyana Nikolova
       [not found] ` <1481221001-1044-1-git-send-email-tatyana.e.nikolova-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
  0 siblings, 1 reply; 13+ messages in thread
From: Tatyana Nikolova @ 2016-12-08 18:16 UTC (permalink / raw)
  To: jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/,
	dledford-H+wXaHxf7aLQT0dZR+AlfA, leonro-VPRAkNaXOzVWk0Htik3J/w
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	e1000-rdma-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

This patch series includes fixes, optimizations, and
cleanup for the user-space i40iw provider.

Mustafa Ismail (4):
  i40iw: Optimize setting fragments
  i40iw: Remove unnecessary parameter
  i40iw: Optimize inline data copy
  i40iw: Remove unnecessary check for moving CQ head

Shiraz Saleem (6):
  i40iw: Fix incorrect assignment of SQ head
  i40iw: Return correct error codes for destroy QP and CQ
  i40iw: Do not destroy QP/CQ if lock is held
  i40iw: Control debug error prints using env variable
  i40iw: Use 2M huge pages for CQ/QP memory if available
  i40iw: Remove SQ size constraint

 providers/i40iw/i40iw_d.h      |   6 +-
 providers/i40iw/i40iw_uk.c     |  62 ++++++++------------
 providers/i40iw/i40iw_umain.c  |  21 ++++++-
 providers/i40iw/i40iw_umain.h  |   9 +++
 providers/i40iw/i40iw_user.h   |   2 +-
 providers/i40iw/i40iw_uverbs.c | 128 ++++++++++++++++++++++++++++++-----------
 6 files changed, 151 insertions(+), 77 deletions(-)

-- 
1.8.5.2

--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH rdma-core 01/10] i40iw: Optimize setting fragments
       [not found] ` <1481221001-1044-1-git-send-email-tatyana.e.nikolova-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
@ 2016-12-08 18:16   ` Tatyana Nikolova
  2016-12-08 18:16   ` [PATCH rdma-core 02/10] i40iw: Remove unnecessary parameter Tatyana Nikolova
                     ` (8 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Tatyana Nikolova @ 2016-12-08 18:16 UTC (permalink / raw)
  To: jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/,
	dledford-H+wXaHxf7aLQT0dZR+AlfA, leonro-VPRAkNaXOzVWk0Htik3J/w
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	e1000-rdma-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

From: Mustafa Ismail <mustafa.ismail-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>

Small optimization: replace the per-iteration subtract and
multiply in the fragment offset calculation with a running addition.

Signed-off-by: Mustafa Ismail <mustafa.ismail-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
---
 providers/i40iw/i40iw_uk.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/providers/i40iw/i40iw_uk.c b/providers/i40iw/i40iw_uk.c
index aff3de6..4d353b5 100644
--- a/providers/i40iw/i40iw_uk.c
+++ b/providers/i40iw/i40iw_uk.c
@@ -291,9 +291,9 @@ static enum i40iw_status_code i40iw_rdma_write(struct i40iw_qp_uk *qp,
 
 	i40iw_set_fragment(wqe, I40IW_BYTE_0, op_info->lo_sg_list);
 
-	for (i = 1; i < op_info->num_lo_sges; i++) {
-		byte_off = I40IW_BYTE_32 + (i - 1) * 16;
+	for (i = 1, byte_off = I40IW_BYTE_32; i < op_info->num_lo_sges; i++) {
 		i40iw_set_fragment(wqe, byte_off, &op_info->lo_sg_list[i]);
+		byte_off += 16;
 	}
 
 	i40iw_wmb(); /* make sure WQE is populated before valid bit is set */
@@ -404,9 +404,9 @@ static enum i40iw_status_code i40iw_send(struct i40iw_qp_uk *qp,
 
 	i40iw_set_fragment(wqe, I40IW_BYTE_0, op_info->sg_list);
 
-	for (i = 1; i < op_info->num_sges; i++) {
-		byte_off = I40IW_BYTE_32 + (i - 1) * 16;
+	for (i = 1, byte_off = I40IW_BYTE_32; i < op_info->num_sges; i++) {
 		i40iw_set_fragment(wqe, byte_off, &op_info->sg_list[i]);
+		byte_off += 16;
 	}
 
 	i40iw_wmb(); /* make sure WQE is populated before valid bit is set */
@@ -692,9 +692,9 @@ static enum i40iw_status_code i40iw_post_receive(struct i40iw_qp_uk *qp,
 
 	i40iw_set_fragment(wqe, I40IW_BYTE_0, info->sg_list);
 
-	for (i = 1; i < info->num_sges; i++) {
-		byte_off = I40IW_BYTE_32 + (i - 1) * 16;
+	for (i = 1, byte_off = I40IW_BYTE_32; i < info->num_sges; i++) {
 		i40iw_set_fragment(wqe, byte_off, &info->sg_list[i]);
+		byte_off += 16;
 	}
 
 	i40iw_wmb(); /* make sure WQE is populated before valid bit is set */
-- 
1.8.5.2


* [PATCH rdma-core 02/10] i40iw: Remove unnecessary parameter
       [not found] ` <1481221001-1044-1-git-send-email-tatyana.e.nikolova-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
  2016-12-08 18:16   ` [PATCH rdma-core 01/10] i40iw: Optimize setting fragments Tatyana Nikolova
@ 2016-12-08 18:16   ` Tatyana Nikolova
  2016-12-08 18:16   ` [PATCH rdma-core 03/10] i40iw: Fix incorrect assignment of SQ head Tatyana Nikolova
                     ` (7 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Tatyana Nikolova @ 2016-12-08 18:16 UTC (permalink / raw)
  To: jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/,
	dledford-H+wXaHxf7aLQT0dZR+AlfA, leonro-VPRAkNaXOzVWk0Htik3J/w
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	e1000-rdma-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

From: Mustafa Ismail <mustafa.ismail-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>

Remove the unnecessary post_cq parameter of
i40iw_cq_poll_completion(), which is always true.

Signed-off-by: Mustafa Ismail <mustafa.ismail-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
---
 providers/i40iw/i40iw_uk.c     | 11 ++++-------
 providers/i40iw/i40iw_user.h   |  2 +-
 providers/i40iw/i40iw_uverbs.c |  2 +-
 3 files changed, 6 insertions(+), 9 deletions(-)

diff --git a/providers/i40iw/i40iw_uk.c b/providers/i40iw/i40iw_uk.c
index 4d353b5..341772e 100644
--- a/providers/i40iw/i40iw_uk.c
+++ b/providers/i40iw/i40iw_uk.c
@@ -760,8 +760,7 @@ static enum i40iw_status_code i40iw_cq_post_entries(struct i40iw_cq_uk *cq,
  * @post_cq: update cq tail
  */
 static enum i40iw_status_code i40iw_cq_poll_completion(struct i40iw_cq_uk *cq,
-						       struct i40iw_cq_poll_info *info,
-						       bool post_cq)
+						       struct i40iw_cq_poll_info *info)
 {
 	u64 comp_ctx, qword0, qword2, qword3, wqe_qword;
 	u64 *cqe, *sw_wqe;
@@ -885,11 +884,9 @@ exit:
 		if (I40IW_RING_GETCURRENT_HEAD(cq->cq_ring) == 0)
 			cq->polarity ^= 1;
 
-		if (post_cq) {
-			I40IW_RING_MOVE_TAIL(cq->cq_ring);
-			set_64bit_val(cq->shadow_area, I40IW_BYTE_0,
-				      I40IW_RING_GETCURRENT_HEAD(cq->cq_ring));
-		}
+		I40IW_RING_MOVE_TAIL(cq->cq_ring);
+		set_64bit_val(cq->shadow_area, I40IW_BYTE_0,
+			      I40IW_RING_GETCURRENT_HEAD(cq->cq_ring));
 	} else {
 		if (info->is_srq)
 			return ret_code;
diff --git a/providers/i40iw/i40iw_user.h b/providers/i40iw/i40iw_user.h
index ab96ed3..8d345b3 100644
--- a/providers/i40iw/i40iw_user.h
+++ b/providers/i40iw/i40iw_user.h
@@ -327,7 +327,7 @@ struct i40iw_cq_ops {
 	void (*iw_cq_request_notification)(struct i40iw_cq_uk *,
 					   enum i40iw_completion_notify);
 	enum i40iw_status_code (*iw_cq_poll_completion)(struct i40iw_cq_uk *,
-							struct i40iw_cq_poll_info *, bool);
+							struct i40iw_cq_poll_info *);
 	enum i40iw_status_code (*iw_cq_post_entries)(struct i40iw_cq_uk *, u8 count);
 	void (*iw_cq_clean)(void *, struct i40iw_cq_uk *);
 };
diff --git a/providers/i40iw/i40iw_uverbs.c b/providers/i40iw/i40iw_uverbs.c
index a117b53..651a91d 100644
--- a/providers/i40iw/i40iw_uverbs.c
+++ b/providers/i40iw/i40iw_uverbs.c
@@ -337,7 +337,7 @@ int i40iw_upoll_cq(struct ibv_cq *cq, int num_entries, struct ibv_wc *entry)
 	if (ret)
 		return ret;
 	while (cqe_count < num_entries) {
-		ret = iwucq->cq.ops.iw_cq_poll_completion(&iwucq->cq, &cq_poll_info, true);
+		ret = iwucq->cq.ops.iw_cq_poll_completion(&iwucq->cq, &cq_poll_info);
 		if (ret == I40IW_ERR_QUEUE_EMPTY) {
 			break;
 		} else if (ret) {
-- 
1.8.5.2


* [PATCH rdma-core 03/10] i40iw: Fix incorrect assignment of SQ head
       [not found] ` <1481221001-1044-1-git-send-email-tatyana.e.nikolova-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
  2016-12-08 18:16   ` [PATCH rdma-core 01/10] i40iw: Optimize setting fragments Tatyana Nikolova
  2016-12-08 18:16   ` [PATCH rdma-core 02/10] i40iw: Remove unnecessary parameter Tatyana Nikolova
@ 2016-12-08 18:16   ` Tatyana Nikolova
  2016-12-08 18:16   ` [PATCH rdma-core 04/10] i40iw: Optimize inline data copy Tatyana Nikolova
                     ` (6 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Tatyana Nikolova @ 2016-12-08 18:16 UTC (permalink / raw)
  To: jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/,
	dledford-H+wXaHxf7aLQT0dZR+AlfA, leonro-VPRAkNaXOzVWk0Htik3J/w
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	e1000-rdma-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

From: Shiraz Saleem <shiraz.saleem-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>

The SQ head is incorrectly incremented when the number
of WQEs required is greater than the number available.
Fix this by using the I40IW_RING_MOVE_HEAD_BY_COUNT
macro, which checks for the SQ-full condition first and
moves the head only if the SQ has room for the request.

Signed-off-by: Shiraz Saleem <shiraz.saleem-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
---
 providers/i40iw/i40iw_uk.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/providers/i40iw/i40iw_uk.c b/providers/i40iw/i40iw_uk.c
index 341772e..9b1a618 100644
--- a/providers/i40iw/i40iw_uk.c
+++ b/providers/i40iw/i40iw_uk.c
@@ -175,11 +175,10 @@ u64 *i40iw_qp_get_next_send_wqe(struct i40iw_qp_uk *qp,
 			qp->swqe_polarity = !qp->swqe_polarity;
 	}
 
-	for (i = 0; i < wqe_size / I40IW_QP_WQE_MIN_SIZE; i++) {
-		I40IW_RING_MOVE_HEAD(qp->sq_ring, ret_code);
-		if (ret_code)
-			return NULL;
-	}
+	I40IW_RING_MOVE_HEAD_BY_COUNT(qp->sq_ring,
+				      (wqe_size / I40IW_QP_WQE_MIN_SIZE), ret_code);
+	if (ret_code)
+		return NULL;
 
 	wqe = qp->sq_base[*wqe_idx].elem;
 
-- 
1.8.5.2


* [PATCH rdma-core 04/10] i40iw: Optimize inline data copy
       [not found] ` <1481221001-1044-1-git-send-email-tatyana.e.nikolova-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
                     ` (2 preceding siblings ...)
  2016-12-08 18:16   ` [PATCH rdma-core 03/10] i40iw: Fix incorrect assignment of SQ head Tatyana Nikolova
@ 2016-12-08 18:16   ` Tatyana Nikolova
  2016-12-08 18:16   ` [PATCH rdma-core 05/10] i40iw: Remove unnecessary check for moving CQ head Tatyana Nikolova
                     ` (5 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Tatyana Nikolova @ 2016-12-08 18:16 UTC (permalink / raw)
  To: jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/,
	dledford-H+wXaHxf7aLQT0dZR+AlfA, leonro-VPRAkNaXOzVWk0Htik3J/w
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	e1000-rdma-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

From: Mustafa Ismail <mustafa.ismail-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>

Use memcpy() instead of a byte-by-byte copy for inline
data in sends and RDMA writes.

Signed-off-by: Mustafa Ismail <mustafa.ismail-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
---
 providers/i40iw/i40iw_uk.c | 24 ++++++++++--------------
 1 file changed, 10 insertions(+), 14 deletions(-)

diff --git a/providers/i40iw/i40iw_uk.c b/providers/i40iw/i40iw_uk.c
index 9b1a618..bf194f3 100644
--- a/providers/i40iw/i40iw_uk.c
+++ b/providers/i40iw/i40iw_uk.c
@@ -432,7 +432,7 @@ static enum i40iw_status_code i40iw_inline_rdma_write(struct i40iw_qp_uk *qp,
 	struct i40iw_inline_rdma_write *op_info;
 	u64 *push;
 	u64 header = 0;
-	u32 i, wqe_idx;
+	u32 wqe_idx;
 	enum i40iw_status_code ret_code;
 	bool read_fence = false;
 	u8 wqe_size;
@@ -468,14 +468,12 @@ static enum i40iw_status_code i40iw_inline_rdma_write(struct i40iw_qp_uk *qp,
 	src = (u8 *)(op_info->data);
 
 	if (op_info->len <= I40IW_BYTE_16) {
-		for (i = 0; i < op_info->len; i++, src++, dest++)
-			*dest = *src;
+		memcpy(dest, src, op_info->len);
 	} else {
-		for (i = 0; i < I40IW_BYTE_16; i++, src++, dest++)
-			*dest = *src;
+		memcpy(dest, src, I40IW_BYTE_16);
+		src += I40IW_BYTE_16;
 		dest = (u8 *)wqe + I40IW_BYTE_32;
-		for (; i < op_info->len; i++, src++, dest++)
-			*dest = *src;
+		memcpy(dest, src, op_info->len - I40IW_BYTE_16);
 	}
 
 	i40iw_wmb(); /* make sure WQE is populated before valid bit is set */
@@ -510,7 +508,7 @@ static enum i40iw_status_code i40iw_inline_send(struct i40iw_qp_uk *qp,
 	u8 *dest, *src;
 	struct i40iw_post_inline_send *op_info;
 	u64 header;
-	u32 wqe_idx, i;
+	u32 wqe_idx;
 	enum i40iw_status_code ret_code;
 	bool read_fence = false;
 	u8 wqe_size;
@@ -544,14 +542,12 @@ static enum i40iw_status_code i40iw_inline_send(struct i40iw_qp_uk *qp,
 	src = (u8 *)(op_info->data);
 
 	if (op_info->len <= I40IW_BYTE_16) {
-		for (i = 0; i < op_info->len; i++, src++, dest++)
-			*dest = *src;
+		memcpy(dest, src, op_info->len);
 	} else {
-		for (i = 0; i < I40IW_BYTE_16; i++, src++, dest++)
-			*dest = *src;
+		memcpy(dest, src, I40IW_BYTE_16);
+		src += I40IW_BYTE_16;
 		dest = (u8 *)wqe + I40IW_BYTE_32;
-		for (; i < op_info->len; i++, src++, dest++)
-			*dest = *src;
+		memcpy(dest, src, op_info->len - I40IW_BYTE_16);
 	}
 
 	i40iw_wmb(); /* make sure WQE is populated before valid bit is set */
-- 
1.8.5.2


* [PATCH rdma-core 05/10] i40iw: Remove unnecessary check for moving CQ head
       [not found] ` <1481221001-1044-1-git-send-email-tatyana.e.nikolova-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
                     ` (3 preceding siblings ...)
  2016-12-08 18:16   ` [PATCH rdma-core 04/10] i40iw: Optimize inline data copy Tatyana Nikolova
@ 2016-12-08 18:16   ` Tatyana Nikolova
  2016-12-08 18:16   ` [PATCH rdma-core 06/10] i40iw: Return correct error codes for destroy QP and CQ Tatyana Nikolova
                     ` (4 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Tatyana Nikolova @ 2016-12-08 18:16 UTC (permalink / raw)
  To: jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/,
	dledford-H+wXaHxf7aLQT0dZR+AlfA, leonro-VPRAkNaXOzVWk0Htik3J/w
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	e1000-rdma-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

From: Mustafa Ismail <mustafa.ismail-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>

In i40iw_cq_poll_completion(), we always move the tail,
so there is no reason to check for overflow every time
we move the head.

Signed-off-by: Mustafa Ismail <mustafa.ismail-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
---
 providers/i40iw/i40iw_d.h  | 3 +++
 providers/i40iw/i40iw_uk.c | 6 +-----
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/providers/i40iw/i40iw_d.h b/providers/i40iw/i40iw_d.h
index 7b58199..1906cbf 100644
--- a/providers/i40iw/i40iw_d.h
+++ b/providers/i40iw/i40iw_d.h
@@ -1586,6 +1586,9 @@ enum i40iw_alignment {
 #define I40IW_RING_MOVE_TAIL(_ring) \
 	(_ring).tail = ((_ring).tail + 1) % (_ring).size
 
+#define I40IW_RING_MOVE_HEAD_NOCHECK(_ring) \
+	(_ring).head = ((_ring).head + 1) % (_ring).size
+
 #define I40IW_RING_MOVE_TAIL_BY_COUNT(_ring, _count) \
 	(_ring).tail = ((_ring).tail + (_count)) % (_ring).size
 
diff --git a/providers/i40iw/i40iw_uk.c b/providers/i40iw/i40iw_uk.c
index bf194f3..392e858 100644
--- a/providers/i40iw/i40iw_uk.c
+++ b/providers/i40iw/i40iw_uk.c
@@ -763,7 +763,6 @@ static enum i40iw_status_code i40iw_cq_poll_completion(struct i40iw_cq_uk *cq,
 	struct i40iw_ring *pring = NULL;
 	u32 wqe_idx, q_type, array_idx = 0;
 	enum i40iw_status_code ret_code = 0;
-	enum i40iw_status_code ret_code2 = 0;
 	bool move_cq_head = true;
 	u8 polarity;
 	u8 addl_wqes = 0;
@@ -871,10 +870,7 @@ exit:
 			move_cq_head = false;
 
 	if (move_cq_head) {
-		I40IW_RING_MOVE_HEAD(cq->cq_ring, ret_code2);
-
-		if (ret_code2 && !ret_code)
-			ret_code = ret_code2;
+		I40IW_RING_MOVE_HEAD_NOCHECK(cq->cq_ring);
 
 		if (I40IW_RING_GETCURRENT_HEAD(cq->cq_ring) == 0)
 			cq->polarity ^= 1;
-- 
1.8.5.2


* [PATCH rdma-core 06/10] i40iw: Return correct error codes for destroy QP and CQ
       [not found] ` <1481221001-1044-1-git-send-email-tatyana.e.nikolova-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
                     ` (4 preceding siblings ...)
  2016-12-08 18:16   ` [PATCH rdma-core 05/10] i40iw: Remove unnecessary check for moving CQ head Tatyana Nikolova
@ 2016-12-08 18:16   ` Tatyana Nikolova
  2016-12-08 18:16   ` [PATCH rdma-core 07/10] i40iw: Do not destroy QP/CQ if lock is held Tatyana Nikolova
                     ` (3 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Tatyana Nikolova @ 2016-12-08 18:16 UTC (permalink / raw)
  To: jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/,
	dledford-H+wXaHxf7aLQT0dZR+AlfA, leonro-VPRAkNaXOzVWk0Htik3J/w
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	e1000-rdma-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

From: Shiraz Saleem <shiraz.saleem-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>

Return correct error codes to the application if
destroying a QP or CQ fails, and deregister memory only
if the destroy succeeds.

Signed-off-by: Shiraz Saleem <shiraz.saleem-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
---
 providers/i40iw/i40iw_uverbs.c | 22 +++++++++++++++++-----
 1 file changed, 17 insertions(+), 5 deletions(-)

diff --git a/providers/i40iw/i40iw_uverbs.c b/providers/i40iw/i40iw_uverbs.c
index 651a91d..ef381ed 100644
--- a/providers/i40iw/i40iw_uverbs.c
+++ b/providers/i40iw/i40iw_uverbs.c
@@ -305,9 +305,11 @@ int i40iw_udestroy_cq(struct ibv_cq *cq)
 	struct i40iw_ucq *iwucq = to_i40iw_ucq(cq);
 	int ret;
 
+	ret = ibv_cmd_destroy_cq(cq);
+	if (ret)
+		return ret;
+
 	ibv_cmd_dereg_mr(&iwucq->mr);
-	if (ibv_cmd_destroy_cq(cq))
-		fprintf(stderr, PFX "%s: failed to destroy CQ\n", __func__);
 
 	free(iwucq->cq.cq_base);
 	ret = pthread_spin_destroy(&iwucq->lock);
@@ -460,16 +462,24 @@ void i40iw_cq_event(struct ibv_cq *cq)
 	pthread_spin_unlock(&iwucq->lock);
 }
 
-static void i40iw_destroy_vmapped_qp(struct i40iw_uqp *iwuqp,
+static int i40iw_destroy_vmapped_qp(struct i40iw_uqp *iwuqp,
 					struct i40iw_qp_quanta *sq_base)
 {
+	int ret;
+
+	ret = ibv_cmd_destroy_qp(&iwuqp->ibv_qp);
+	if (ret)
+		return ret;
+
 	if (iwuqp->push_db)
 		munmap(iwuqp->push_db, I40IW_HW_PAGE_SIZE);
 	if (iwuqp->push_wqe)
 		munmap(iwuqp->push_wqe, I40IW_HW_PAGE_SIZE);
-	ibv_cmd_destroy_qp(&iwuqp->ibv_qp);
+
 	ibv_cmd_dereg_mr(&iwuqp->mr);
 	free((void *)sq_base);
+
+	return 0;
 }
 
 /**
@@ -759,7 +769,9 @@ int i40iw_udestroy_qp(struct ibv_qp *qp)
 	struct i40iw_uqp *iwuqp = to_i40iw_uqp(qp);
 	int ret;
 
-	i40iw_destroy_vmapped_qp(iwuqp, iwuqp->qp.sq_base);
+	ret = i40iw_destroy_vmapped_qp(iwuqp, iwuqp->qp.sq_base);
+	if (ret)
+		return ret;
 
 	if (iwuqp->qp.sq_wrtrk_array)
 		free(iwuqp->qp.sq_wrtrk_array);
-- 
1.8.5.2


* [PATCH rdma-core 07/10] i40iw: Do not destroy QP/CQ if lock is held
       [not found] ` <1481221001-1044-1-git-send-email-tatyana.e.nikolova-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
                     ` (5 preceding siblings ...)
  2016-12-08 18:16   ` [PATCH rdma-core 06/10] i40iw: Return correct error codes for destroy QP and CQ Tatyana Nikolova
@ 2016-12-08 18:16   ` Tatyana Nikolova
  2016-12-08 18:16   ` [PATCH rdma-core 08/10] i40iw: Control debug error prints using env variable Tatyana Nikolova
                     ` (2 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Tatyana Nikolova @ 2016-12-08 18:16 UTC (permalink / raw)
  To: jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/,
	dledford-H+wXaHxf7aLQT0dZR+AlfA, leonro-VPRAkNaXOzVWk0Htik3J/w
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	e1000-rdma-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

From: Shiraz Saleem <shiraz.saleem-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>

Destroy the QP/CQ only if its lock can be destroyed first.

Signed-off-by: Shiraz Saleem <shiraz.saleem-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
---
 providers/i40iw/i40iw_uverbs.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/providers/i40iw/i40iw_uverbs.c b/providers/i40iw/i40iw_uverbs.c
index ef381ed..f6d9196 100644
--- a/providers/i40iw/i40iw_uverbs.c
+++ b/providers/i40iw/i40iw_uverbs.c
@@ -305,6 +305,10 @@ int i40iw_udestroy_cq(struct ibv_cq *cq)
 	struct i40iw_ucq *iwucq = to_i40iw_ucq(cq);
 	int ret;
 
+	ret = pthread_spin_destroy(&iwucq->lock);
+	if (ret)
+		return ret;
+
 	ret = ibv_cmd_destroy_cq(cq);
 	if (ret)
 		return ret;
@@ -312,9 +316,6 @@ int i40iw_udestroy_cq(struct ibv_cq *cq)
 	ibv_cmd_dereg_mr(&iwucq->mr);
 
 	free(iwucq->cq.cq_base);
-	ret = pthread_spin_destroy(&iwucq->lock);
-	if (ret)
-		return ret;
 	free(iwucq);
 
 	return 0;
@@ -769,6 +770,10 @@ int i40iw_udestroy_qp(struct ibv_qp *qp)
 	struct i40iw_uqp *iwuqp = to_i40iw_uqp(qp);
 	int ret;
 
+	ret = pthread_spin_destroy(&iwuqp->lock);
+	if (ret)
+		return ret;
+
 	ret = i40iw_destroy_vmapped_qp(iwuqp, iwuqp->qp.sq_base);
 	if (ret)
 		return ret;
@@ -784,10 +789,9 @@ int i40iw_udestroy_qp(struct ibv_qp *qp)
 	if ((iwuqp->recv_cq) && (iwuqp->recv_cq != iwuqp->send_cq))
 		i40iw_clean_cq((void *)&iwuqp->qp, &iwuqp->recv_cq->cq);
 
-	ret = pthread_spin_destroy(&iwuqp->lock);
 	free(iwuqp);
 
-	return ret;
+	return 0;
 }
 
 /**
-- 
1.8.5.2


* [PATCH rdma-core 08/10] i40iw: Control debug error prints using env variable
       [not found] ` <1481221001-1044-1-git-send-email-tatyana.e.nikolova-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
                     ` (6 preceding siblings ...)
  2016-12-08 18:16   ` [PATCH rdma-core 07/10] i40iw: Do not destroy QP/CQ if lock is held Tatyana Nikolova
@ 2016-12-08 18:16   ` Tatyana Nikolova
       [not found]     ` <1481221001-1044-9-git-send-email-tatyana.e.nikolova-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
  2016-12-08 18:16   ` [PATCH rdma-core 09/10] i40iw: Use 2M huge pages for CQ/QP memory if available Tatyana Nikolova
  2016-12-08 18:16   ` [PATCH rdma-core 10/10] i40iw: Remove SQ size constraint Tatyana Nikolova
  9 siblings, 1 reply; 13+ messages in thread
From: Tatyana Nikolova @ 2016-12-08 18:16 UTC (permalink / raw)
  To: jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/,
	dledford-H+wXaHxf7aLQT0dZR+AlfA, leonro-VPRAkNaXOzVWk0Htik3J/w
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	e1000-rdma-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

From: Shiraz Saleem <shiraz.saleem-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>

Debug prints for error paths are off by default. They
can be enabled by setting the I40IW_DEBUG environment
variable.

Signed-off-by: Shiraz Saleem <shiraz.saleem-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
---
 providers/i40iw/i40iw_umain.c  | 11 ++++++---
 providers/i40iw/i40iw_umain.h  |  5 ++++
 providers/i40iw/i40iw_uverbs.c | 52 +++++++++++++++++++++++-------------------
 3 files changed, 42 insertions(+), 26 deletions(-)

diff --git a/providers/i40iw/i40iw_umain.c b/providers/i40iw/i40iw_umain.c
index 1756e65..a204859 100644
--- a/providers/i40iw/i40iw_umain.c
+++ b/providers/i40iw/i40iw_umain.c
@@ -46,7 +46,7 @@
 #include "i40iw_umain.h"
 #include "i40iw-abi.h"
 
-unsigned int i40iw_debug_level;
+unsigned int i40iw_dbg;
 
 #include <sys/types.h>
 #include <sys/stat.h>
@@ -184,7 +184,7 @@ static struct ibv_context *i40iw_ualloc_context(struct ibv_device *ibdev, int cm
 	return &iwvctx->ibv_ctx;
 
 err_free:
-	fprintf(stderr, PFX "%s: failed to allocate context for device.\n", __func__);
+	i40iw_debug("failed to allocate context for device.\n");
 	free(iwvctx);
 
 	return NULL;
@@ -216,6 +216,7 @@ static struct ibv_device_ops i40iw_udev_ops = {
 struct ibv_device *i40iw_driver_init(const char *uverbs_sys_path, int abi_version)
 {
 	char value[16];
+	char *env_val;
 	struct i40iw_udevice *dev;
 	unsigned int vendor, device;
 	int i;
@@ -236,9 +237,13 @@ struct ibv_device *i40iw_driver_init(const char *uverbs_sys_path, int abi_versio
 
 	return NULL;
 found:
+	env_val = getenv("I40IW_DEBUG");
+	if (env_val)
+		i40iw_dbg = atoi(env_val);
+
 	dev = malloc(sizeof(*dev));
 	if (!dev) {
-		fprintf(stderr, PFX "%s: failed to allocate memory for device object\n", __func__);
+		i40iw_debug("failed to allocate memory for device object\n");
 		return NULL;
 	}
 
diff --git a/providers/i40iw/i40iw_umain.h b/providers/i40iw/i40iw_umain.h
index 13d3da8..2a8370b 100644
--- a/providers/i40iw/i40iw_umain.h
+++ b/providers/i40iw/i40iw_umain.h
@@ -72,6 +72,11 @@
 #define I40E_DB_SHADOW_AREA_SIZE 64
 #define I40E_DB_CQ_OFFSET 0x40
 
+extern unsigned int i40iw_dbg;
+#define i40iw_debug(fmt, args...) \
+	if (i40iw_dbg) \
+		fprintf(stderr, PFX "%s: " fmt, __FUNCTION__, ##args)
+
 enum i40iw_uhca_type {
 	INTEL_i40iw
 };
diff --git a/providers/i40iw/i40iw_uverbs.c b/providers/i40iw/i40iw_uverbs.c
index f6d9196..464900b 100644
--- a/providers/i40iw/i40iw_uverbs.c
+++ b/providers/i40iw/i40iw_uverbs.c
@@ -65,7 +65,7 @@ int i40iw_uquery_device(struct ibv_context *context, struct ibv_device_attr *att
 
 	ret = ibv_cmd_query_device(context, attr, &i40iw_fw_ver, &cmd, sizeof(cmd));
 	if (ret) {
-		fprintf(stderr, PFX "%s: query device failed and returned status code: %d\n", __func__, ret);
+		i40iw_debug("query device failed and returned status code: %d\n", ret);
 		return ret;
 	}
 
@@ -165,7 +165,7 @@ struct ibv_mr *i40iw_ureg_mr(struct ibv_pd *pd, void *addr, size_t length, int a
 	if (ibv_cmd_reg_mr(pd, addr, length, (uintptr_t)addr,
 			   access, mr, &cmd.ibv_cmd, sizeof(cmd),
 			   &resp, sizeof(resp))) {
-		fprintf(stderr, PFX "%s: Failed to register memory\n", __func__);
+		i40iw_debug("Failed to register memory\n");
 		free(mr);
 		return NULL;
 	}
@@ -264,7 +264,7 @@ struct ibv_cq *i40iw_ucreate_cq(struct ibv_context *context, int cqe,
 			     &iwucq->mr, &reg_mr_cmd.ibv_cmd, sizeof(reg_mr_cmd), &reg_mr_resp,
 			     sizeof(reg_mr_resp));
 	if (ret) {
-		fprintf(stderr, PFX "%s: failed to pin memory for CQ\n", __func__);
+		i40iw_debug("failed to pin memory for CQ\n");
 		goto err;
 	}
 
@@ -274,7 +274,7 @@ struct ibv_cq *i40iw_ucreate_cq(struct ibv_context *context, int cqe,
 				&resp.ibv_resp, sizeof(resp));
 	if (ret) {
 		ibv_cmd_dereg_mr(&iwucq->mr);
-		fprintf(stderr, PFX "%s: failed to create CQ\n", __func__);
+		i40iw_debug("failed to create CQ\n");
 		goto err;
 	}
 
@@ -286,7 +286,7 @@ struct ibv_cq *i40iw_ucreate_cq(struct ibv_context *context, int cqe,
 	if (!ret)
 		return &iwucq->ibv_cq;
 	else
-		fprintf(stderr, PFX "%s: failed to initialze CQ, status %d\n", __func__, ret);
+		i40iw_debug("failed to initialze CQ, status %d\n", ret);
 err:
 	if (info.cq_base)
 		free(info.cq_base);
@@ -307,11 +307,11 @@ int i40iw_udestroy_cq(struct ibv_cq *cq)
 
 	ret = pthread_spin_destroy(&iwucq->lock);
 	if (ret)
-		return ret;
+		goto err;
 
 	ret = ibv_cmd_destroy_cq(cq);
 	if (ret)
-		return ret;
+		goto err;
 
 	ibv_cmd_dereg_mr(&iwucq->mr);
 
@@ -319,6 +319,9 @@ int i40iw_udestroy_cq(struct ibv_cq *cq)
 	free(iwucq);
 
 	return 0;
+err:
+	i40iw_debug("failed to destroy CQ, status %d\n", ret);
+	return ret;
 }
 
 /**
@@ -344,7 +347,7 @@ int i40iw_upoll_cq(struct ibv_cq *cq, int num_entries, struct ibv_wc *entry)
 		if (ret == I40IW_ERR_QUEUE_EMPTY) {
 			break;
 		} else if (ret) {
-			fprintf(stderr, PFX "%s: Error polling CQ, status %d\n", __func__, ret);
+			i40iw_debug("Error polling CQ, status %d\n", ret);
 			if (!cqe_count)
 				/* Indicate error */
 				cqe_count = -1;
@@ -519,7 +522,7 @@ static int i40iw_vmapped_qp(struct i40iw_uqp *iwuqp, struct ibv_pd *pd,
 	info->sq = memalign(I40IW_HW_PAGE_SIZE, totalqpsize);
 
 	if (!info->sq) {
-		fprintf(stderr, PFX "%s: failed to allocate memory for SQ\n", __func__);
+		i40iw_debug("failed to allocate memory for SQ\n");
 		return 0;
 	}
 
@@ -535,7 +538,7 @@ static int i40iw_vmapped_qp(struct i40iw_uqp *iwuqp, struct ibv_pd *pd,
 			     IBV_ACCESS_LOCAL_WRITE, &iwuqp->mr, &reg_mr_cmd.ibv_cmd,
 			     sizeof(reg_mr_cmd), &reg_mr_resp, sizeof(reg_mr_resp));
 	if (ret) {
-		fprintf(stderr, PFX "%s: failed to pin memory for SQ\n", __func__);
+		i40iw_debug("failed to pin memory for SQ\n");
 		free(info->sq);
 		return 0;
 	}
@@ -545,7 +548,7 @@ static int i40iw_vmapped_qp(struct i40iw_uqp *iwuqp, struct ibv_pd *pd,
 	ret = ibv_cmd_create_qp(pd, &iwuqp->ibv_qp, attr, &cmd.ibv_cmd, sizeof(cmd),
 				&resp->ibv_resp, sizeof(struct i40iw_ucreate_qp_resp));
 	if (ret) {
-		fprintf(stderr, PFX "%s: failed to create QP, status %d\n", __func__, ret);
+		i40iw_debug("failed to create QP, status %d\n", ret);
 		ibv_cmd_dereg_mr(&iwuqp->mr);
 		free(info->sq);
 		return 0;
@@ -565,7 +568,7 @@ static int i40iw_vmapped_qp(struct i40iw_uqp *iwuqp, struct ibv_pd *pd,
 		map = mmap(NULL, I40IW_HW_PAGE_SIZE, PROT_WRITE | PROT_READ, MAP_SHARED,
 			   pd->context->cmd_fd, offset);
 		if (map == MAP_FAILED) {
-			fprintf(stderr, PFX "%s: failed to map push page, errno %d\n", __func__, errno);
+			i40iw_debug("failed to map push page, errno %d\n", errno);
 			info->push_wqe = NULL;
 			info->push_db = NULL;
 		} else {
@@ -575,7 +578,7 @@ static int i40iw_vmapped_qp(struct i40iw_uqp *iwuqp, struct ibv_pd *pd,
 			map = mmap(NULL, I40IW_HW_PAGE_SIZE, PROT_WRITE | PROT_READ, MAP_SHARED,
 				   pd->context->cmd_fd, offset);
 			if (map == MAP_FAILED) {
-				fprintf(stderr, PFX "%s: failed to map push doorbell, errno %d\n", __func__, errno);
+				i40iw_debug("failed to map push doorbell, errno %d\n", errno);
 				munmap(info->push_wqe, I40IW_HW_PAGE_SIZE);
 				info->push_wqe = NULL;
 				info->push_db = NULL;
@@ -639,7 +642,7 @@ struct ibv_qp *i40iw_ucreate_qp(struct ibv_pd *pd, struct ibv_qp_init_attr *attr
 	int sq_attr, rq_attr;
 
 	if (attr->qp_type != IBV_QPT_RC) {
-		fprintf(stderr, PFX "%s: failed to create QP, unsupported QP type: 0x%x\n", __func__, attr->qp_type);
+		i40iw_debug("failed to create QP, unsupported QP type: 0x%x\n", attr->qp_type);
 		return NULL;
 	}
 
@@ -658,8 +661,8 @@ struct ibv_qp *i40iw_ucreate_qp(struct ibv_pd *pd, struct ibv_qp_init_attr *attr
 	/* Sanity check QP size before proceeding */
 	sqdepth = i40iw_qp_get_qdepth(sq_attr, attr->cap.max_send_sge, attr->cap.max_inline_data);
 	if (!sqdepth) {
-		fprintf(stderr, PFX "%s: invalid SQ attributes, max_send_wr=%d max_send_sge=%d\n",
-			__func__, attr->cap.max_send_wr, attr->cap.max_send_sge);
+		i40iw_debug("invalid SQ attributes, max_send_wr=%d max_send_sge=%d\n",
+			attr->cap.max_send_wr, attr->cap.max_send_sge);
 		return NULL;
 	}
 
@@ -691,13 +694,13 @@ struct ibv_qp *i40iw_ucreate_qp(struct ibv_pd *pd, struct ibv_qp_init_attr *attr
 	info.sq_wrtrk_array = calloc(sqdepth, sizeof(*info.sq_wrtrk_array));
 
 	if (!info.sq_wrtrk_array) {
-		fprintf(stderr, PFX "%s: failed to allocate memory for SQ work array\n", __func__);
+		i40iw_debug("failed to allocate memory for SQ work array\n");
 		goto err_destroy_lock;
 	}
 
 	info.rq_wrid_array = calloc(rqdepth, sizeof(*info.rq_wrid_array));
 	if (!info.rq_wrid_array) {
-		fprintf(stderr, PFX "%s: failed to allocate memory for RQ work array\n", __func__);
+		i40iw_debug("failed to allocate memory for RQ work array\n");
 		goto err_free_sq_wrtrk;
 	}
 
@@ -706,7 +709,7 @@ struct ibv_qp *i40iw_ucreate_qp(struct ibv_pd *pd, struct ibv_qp_init_attr *attr
 	status = i40iw_vmapped_qp(iwuqp, pd, attr, &resp, sqdepth, rqdepth, &info);
 
 	if (!status) {
-		fprintf(stderr, PFX "%s: failed to map QP\n", __func__);
+		i40iw_debug("failed to map QP\n");
 		goto err_free_rq_wrid;
 	}
 	info.qp_id = resp.qp_id;
@@ -772,11 +775,11 @@ int i40iw_udestroy_qp(struct ibv_qp *qp)
 
 	ret = pthread_spin_destroy(&iwuqp->lock);
 	if (ret)
-		return ret;
+		goto err;
 
 	ret = i40iw_destroy_vmapped_qp(iwuqp, iwuqp->qp.sq_base);
 	if (ret)
-		return ret;
+		goto err;
 
 	if (iwuqp->qp.sq_wrtrk_array)
 		free(iwuqp->qp.sq_wrtrk_array);
@@ -792,6 +795,9 @@ int i40iw_udestroy_qp(struct ibv_qp *qp)
 	free(iwuqp);
 
 	return 0;
+err:
+	i40iw_debug("failed to destroy QP, status %d\n", ret);
+	return ret;
 }
 
 /**
@@ -916,7 +922,7 @@ int i40iw_upost_send(struct ibv_qp *ib_qp, struct ibv_send_wr *ib_wr, struct ibv
 		default:
 			/* error */
 			err = -EINVAL;
-			fprintf(stderr, PFX "%s: post work request failed, invalid opcode: 0x%x\n", __func__, ib_wr->opcode);
+			i40iw_debug("post work request failed, invalid opcode: 0x%x\n", ib_wr->opcode);
 			break;
 		}
 
@@ -960,7 +966,7 @@ int i40iw_upost_recv(struct ibv_qp *ib_qp, struct ibv_recv_wr *ib_wr, struct ibv
 		post_recv.sg_list = sg_list;
 		ret = iwuqp->qp.ops.iw_post_receive(&iwuqp->qp, &post_recv);
 		if (ret) {
-			fprintf(stderr, PFX "%s: failed to post receives, status %d\n", __func__, ret);
+			i40iw_debug("failed to post receives, status %d\n", ret);
 			if (ret == I40IW_ERR_QP_TOOMANY_WRS_POSTED)
 				err = -ENOMEM;
 			else
-- 
1.8.5.2


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH rdma-core 09/10] i40iw: Use 2M huge pages for CQ/QP memory if available
       [not found] ` <1481221001-1044-1-git-send-email-tatyana.e.nikolova-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
                     ` (7 preceding siblings ...)
  2016-12-08 18:16   ` [PATCH rdma-core 08/10] i40iw: Control debug error prints using env variable Tatyana Nikolova
@ 2016-12-08 18:16   ` Tatyana Nikolova
  2016-12-08 18:16   ` [PATCH rdma-core 10/10] i40iw: Remove SQ size constraint Tatyana Nikolova
  9 siblings, 0 replies; 13+ messages in thread
From: Tatyana Nikolova @ 2016-12-08 18:16 UTC (permalink / raw)
  To: jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/,
	dledford-H+wXaHxf7aLQT0dZR+AlfA, leonro-VPRAkNaXOzVWk0Htik3J/w
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	e1000-rdma-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

From: Shiraz Saleem <shiraz.saleem-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>

Attempt to allocate a 2M huge page from the pool
for QP/CQ memory when the required size is greater
than 4K and at most 2M. This yields physically
contiguous QP/CQ memory and avoids the use of PBLs.
The total number of 2M huge pages available for this
feature is controlled by a user environment variable,
I40IW_MAX_HUGEPGCNT, with an upper limit of 100; the
count stays at 0 if an out-of-range value is given.

Signed-off-by: Shiraz Saleem <shiraz.saleem-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
---
 providers/i40iw/i40iw_umain.c  | 10 +++++++++
 providers/i40iw/i40iw_umain.h  |  4 ++++
 providers/i40iw/i40iw_uverbs.c | 46 ++++++++++++++++++++++++++++++++++++++++--
 3 files changed, 58 insertions(+), 2 deletions(-)

diff --git a/providers/i40iw/i40iw_umain.c b/providers/i40iw/i40iw_umain.c
index a204859..b2e6236 100644
--- a/providers/i40iw/i40iw_umain.c
+++ b/providers/i40iw/i40iw_umain.c
@@ -47,6 +47,7 @@
 #include "i40iw-abi.h"
 
 unsigned int i40iw_dbg;
+unsigned int i40iw_max_hugepgcnt;
 
 #include <sys/types.h>
 #include <sys/stat.h>
@@ -241,6 +242,15 @@ found:
 	if (env_val)
 		i40iw_dbg = atoi(env_val);
 
+	env_val = getenv("I40IW_MAX_HUGEPGCNT");
+	if (env_val) {
+		if ((atoi(env_val) < 0) || (atoi(env_val) > 100))
+			fprintf(stderr, PFX "%s: Valid range for Max Huge Page Count is 0 to 100. Setting to 0\n",
+				__func__);
+		else
+			i40iw_max_hugepgcnt = atoi(env_val);
+	}
+
 	dev = malloc(sizeof(*dev));
 	if (!dev) {
 		i40iw_debug("failed to allocate memory for device object\n");
diff --git a/providers/i40iw/i40iw_umain.h b/providers/i40iw/i40iw_umain.h
index 2a8370b..3b66388 100644
--- a/providers/i40iw/i40iw_umain.h
+++ b/providers/i40iw/i40iw_umain.h
@@ -72,6 +72,8 @@
 #define I40E_DB_SHADOW_AREA_SIZE 64
 #define I40E_DB_CQ_OFFSET 0x40
 
+#define I40IW_HPAGE_SIZE_2M (2 * 1024 * 1024)
+
 extern unsigned int i40iw_dbg;
 #define i40iw_debug(fmt, args...) \
 	if (i40iw_dbg) \
@@ -118,6 +120,7 @@ struct i40iw_ucq {
 	int comp_vector;
 	struct i40iw_uqp *udqp;
 	struct i40iw_cq_uk cq;
+	bool is_hugetlb;
 };
 
 struct i40iw_uqp {
@@ -136,6 +139,7 @@ struct i40iw_uqp {
 	uint32_t wq_size;
 	struct ibv_recv_wr *pend_rx_wr;
 	struct i40iw_qp_uk qp;
+	bool is_hugetlb;
 
 };
 
diff --git a/providers/i40iw/i40iw_uverbs.c b/providers/i40iw/i40iw_uverbs.c
index 464900b..85ed77c 100644
--- a/providers/i40iw/i40iw_uverbs.c
+++ b/providers/i40iw/i40iw_uverbs.c
@@ -51,6 +51,9 @@
 #include "i40iw_umain.h"
 #include "i40iw-abi.h"
 
+unsigned int i40iw_hugepgcnt;
+extern unsigned int i40iw_max_hugepgcnt;
+
 /**
  * i40iw_uquery_device - call driver to query device for max resources
  * @context: user context for the device
@@ -248,11 +251,24 @@ struct ibv_cq *i40iw_ucreate_cq(struct ibv_context *context, int cqe,
 	cq_pages = i40iw_num_of_pages(info.cq_size * cqe_struct_size);
 	totalsize = (cq_pages << 12) + I40E_DB_SHADOW_AREA_SIZE;
 
+	if ((totalsize > I40IW_HW_PAGE_SIZE) && (totalsize <= I40IW_HPAGE_SIZE_2M)) {
+		if (i40iw_hugepgcnt < i40iw_max_hugepgcnt) {
+			info.cq_base = mmap(NULL, I40IW_HPAGE_SIZE_2M, PROT_READ | PROT_WRITE,
+					MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, 0, 0);
+			if (info.cq_base != MAP_FAILED) {
+				iwucq->is_hugetlb = true;
+				i40iw_hugepgcnt++;
+				goto cqmem_alloc_done;
+			}
+		}
+	}
+
 	info.cq_base = memalign(I40IW_HW_PAGE_SIZE, totalsize);
 
 	if (!info.cq_base)
 		goto err;
 
+cqmem_alloc_done:
 	memset(info.cq_base, 0, totalsize);
 	info.shadow_area = (u64 *)((u8 *)info.cq_base + (cq_pages << 12));
 	reg_mr_cmd.reg_type = I40IW_UMEMREG_TYPE_CQ;
@@ -315,7 +331,13 @@ int i40iw_udestroy_cq(struct ibv_cq *cq)
 
 	ibv_cmd_dereg_mr(&iwucq->mr);
 
-	free(iwucq->cq.cq_base);
+	if (iwucq->is_hugetlb) {
+		munmap(iwucq->cq.cq_base, I40IW_HPAGE_SIZE_2M);
+		i40iw_hugepgcnt--;
+	} else {
+		free(iwucq->cq.cq_base);
+	}
+
 	free(iwucq);
 
 	return 0;
@@ -481,7 +503,13 @@ static int i40iw_destroy_vmapped_qp(struct i40iw_uqp *iwuqp,
 		munmap(iwuqp->push_wqe, I40IW_HW_PAGE_SIZE);
 
 	ibv_cmd_dereg_mr(&iwuqp->mr);
-	free((void *)sq_base);
+
+	if (iwuqp->is_hugetlb) {
+		munmap((void *)sq_base, I40IW_HPAGE_SIZE_2M);
+		i40iw_hugepgcnt--;
+	} else {
+		free((void *)sq_base);
+	}
 
 	return 0;
 }
@@ -519,6 +547,19 @@ static int i40iw_vmapped_qp(struct i40iw_uqp *iwuqp, struct ibv_pd *pd,
 	sqsize = sq_pages << 12;
 	rqsize = rq_pages << 12;
 	totalqpsize = rqsize + sqsize + I40E_DB_SHADOW_AREA_SIZE;
+
+	if ((totalqpsize > I40IW_HW_PAGE_SIZE) && (totalqpsize <= I40IW_HPAGE_SIZE_2M)) {
+		if (i40iw_hugepgcnt < i40iw_max_hugepgcnt) {
+			info->sq = mmap(NULL, I40IW_HPAGE_SIZE_2M, PROT_READ | PROT_WRITE,
+					MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, 0, 0);
+			if (info->sq != MAP_FAILED) {
+				iwuqp->is_hugetlb = true;
+				i40iw_hugepgcnt++;
+				goto qpmem_alloc_done;
+			}
+		}
+	}
+
 	info->sq = memalign(I40IW_HW_PAGE_SIZE, totalqpsize);
 
 	if (!info->sq) {
@@ -526,6 +567,7 @@ static int i40iw_vmapped_qp(struct i40iw_uqp *iwuqp, struct ibv_pd *pd,
 		return 0;
 	}
 
+qpmem_alloc_done:
 	memset(info->sq, 0, totalqpsize);
 	info->rq = &info->sq[sqsize / I40IW_QP_WQE_MIN_SIZE];
 	info->shadow_area = info->rq[rqsize / I40IW_QP_WQE_MIN_SIZE].elem;
-- 
1.8.5.2


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH rdma-core 10/10] i40iw: Remove SQ size constraint
       [not found] ` <1481221001-1044-1-git-send-email-tatyana.e.nikolova-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
                     ` (8 preceding siblings ...)
  2016-12-08 18:16   ` [PATCH rdma-core 09/10] i40iw: Use 2M huge pages for CQ/QP memory if available Tatyana Nikolova
@ 2016-12-08 18:16   ` Tatyana Nikolova
  9 siblings, 0 replies; 13+ messages in thread
From: Tatyana Nikolova @ 2016-12-08 18:16 UTC (permalink / raw)
  To: jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/,
	dledford-H+wXaHxf7aLQT0dZR+AlfA, leonro-VPRAkNaXOzVWk0Htik3J/w
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	e1000-rdma-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

From: Shiraz Saleem <shiraz.saleem-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>

Pre-production firmware does not invalidate RQ PBL indexes
if the RQ base is not cache aligned. Remove the workaround
which constrained the SQ depth to a multiple of 1024.
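For reference, the deleted workaround was a plain round-up-to-the-next-multiple-of-1024 on the SQ depth; a standalone sketch of the removed lines (the function name is ours, not the driver's):

```c
/* Pad sqdepth up to the next multiple of 1024 WQEs -- the workaround
 * this patch removes from i40iw_vmapped_qp(). */
static unsigned int round_up_1024(unsigned int sqdepth)
{
	if (sqdepth % 1024)
		sqdepth = sqdepth + 1024 - (sqdepth % 1024);
	return sqdepth;
}
```

With the workaround gone, the SQ size is simply sqdepth * I40IW_QP_WQE_MIN_SIZE, so small QPs no longer waste up to 1023 WQE slots of queue memory.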

Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
Signed-off-by: Shiraz Saleem <shiraz.saleem-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
---
 providers/i40iw/i40iw_d.h      | 3 ---
 providers/i40iw/i40iw_uverbs.c | 2 --
 2 files changed, 5 deletions(-)

diff --git a/providers/i40iw/i40iw_d.h b/providers/i40iw/i40iw_d.h
index 1906cbf..174115c 100644
--- a/providers/i40iw/i40iw_d.h
+++ b/providers/i40iw/i40iw_d.h
@@ -1318,9 +1318,6 @@
 /* wqe size considering 32 bytes per wqe*/
 #define I40IWQP_SW_MIN_WQSIZE 4		/* 128 bytes */
 #define I40IWQP_SW_MAX_WQSIZE 2048	/* 2048 bytes */
-
-#define I40IWQP_SW_WQSIZE_1024 1024
-
 #define I40IWQP_OP_RDMA_WRITE 0
 #define I40IWQP_OP_RDMA_READ 1
 #define I40IWQP_OP_RDMA_SEND 3
diff --git a/providers/i40iw/i40iw_uverbs.c b/providers/i40iw/i40iw_uverbs.c
index 85ed77c..75b7cb0 100644
--- a/providers/i40iw/i40iw_uverbs.c
+++ b/providers/i40iw/i40iw_uverbs.c
@@ -537,8 +537,6 @@ static int i40iw_vmapped_qp(struct i40iw_uqp *iwuqp, struct ibv_pd *pd,
 	struct ibv_reg_mr_resp reg_mr_resp;
 
 	memset(&reg_mr_cmd, 0, sizeof(reg_mr_cmd));
-	if ((sqdepth % I40IWQP_SW_WQSIZE_1024))
-		sqdepth = sqdepth + I40IWQP_SW_WQSIZE_1024 - (sqdepth % I40IWQP_SW_WQSIZE_1024);
 	sqsize = sqdepth * I40IW_QP_WQE_MIN_SIZE;
 	rqsize = rqdepth * I40IW_QP_WQE_MIN_SIZE;
 
-- 
1.8.5.2


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [PATCH rdma-core 08/10] i40iw: Control debug error prints using env variable
       [not found]     ` <1481221001-1044-9-git-send-email-tatyana.e.nikolova-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
@ 2016-12-08 18:22       ` Jason Gunthorpe
       [not found]         ` <20161208182225.GA32232-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
  0 siblings, 1 reply; 13+ messages in thread
From: Jason Gunthorpe @ 2016-12-08 18:22 UTC (permalink / raw)
  To: Tatyana Nikolova
  Cc: dledford-H+wXaHxf7aLQT0dZR+AlfA, leonro-VPRAkNaXOzVWk0Htik3J/w,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	e1000-rdma-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

On Thu, Dec 08, 2016 at 12:16:39PM -0600, Tatyana Nikolova wrote:

> +extern unsigned int i40iw_dbg;
> +#define i40iw_debug(fmt, args...) \
> +	if (i40iw_dbg) \
> +		fprintf(stderr, PFX "%s: " fmt, __FUNCTION__, ##args)
> +

No, this is an unsafe way to use defines, wrap it in do / while (0)
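(The hazard Jason is pointing at is the classic dangling-else problem. A minimal illustration, with made-up names rather than the driver's macro:)

```c
static int dbg;               /* stands in for i40iw_dbg */
static int printed, elsed;

/* Unsafe form: the macro body is a bare if-statement. */
#define DBG_UNSAFE(x)      \
	if (dbg)           \
		printed += (x)

/* Safe form: do/while(0) turns the body into one single statement. */
#define DBG_SAFE(x)                     \
	do {                            \
		if (dbg)                \
			printed += (x); \
	} while (0)

static void use_unsafe(int cond)
{
	if (cond)
		DBG_UNSAFE(1);
	else            /* intended for !cond, but binds to the macro's if! */
		elsed++;
}

static void use_safe(int cond)
{
	if (cond)
		DBG_SAFE(1);
	else            /* now really runs only when !cond */
		elsed++;
}
```

With dbg == 0, calling use_unsafe(1) executes the else branch even though cond is true, because the else attaches to the `if (dbg)` hidden inside the macro; use_safe(1) does not.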

I would also welcome providing general infrastructure for this  -
every driver seems to have this same basic approach.

Jason

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH rdma-core 08/10] i40iw: Control debug error prints using env variable
       [not found]         ` <20161208182225.GA32232-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
@ 2016-12-08 18:48           ` Leon Romanovsky
  0 siblings, 0 replies; 13+ messages in thread
From: Leon Romanovsky @ 2016-12-08 18:48 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Tatyana Nikolova, dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	e1000-rdma-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

[-- Attachment #1: Type: text/plain, Size: 822 bytes --]

On Thu, Dec 08, 2016 at 11:22:25AM -0700, Jason Gunthorpe wrote:
> On Thu, Dec 08, 2016 at 12:16:39PM -0600, Tatyana Nikolova wrote:
> 
> > +extern unsigned int i40iw_dbg;
> > +#define i40iw_debug(fmt, args...) \
> > +	if (i40iw_dbg) \
> > +		fprintf(stderr, PFX "%s: " fmt, __FUNCTION__, ##args)
> > +
> 
> No, this is an unsafe way to use defines, wrap it in do / while (0)
> 
> I would also welcome providing general infrastructure for this  -
> every drvier seems to have this same basic approach.

+1,
The amount of copypasta in rdma-core drives me crazy.

> 
> Jason

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2016-12-08 18:48 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-12-08 18:16 [PATCH rdma-core 00/10] i40iw: Fixes and optimizations Tatyana Nikolova
     [not found] ` <1481221001-1044-1-git-send-email-tatyana.e.nikolova-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
2016-12-08 18:16   ` [PATCH rdma-core 01/10] i40iw: Optimize setting fragments Tatyana Nikolova
2016-12-08 18:16   ` [PATCH rdma-core 02/10] i40iw: Remove unnecessary parameter Tatyana Nikolova
2016-12-08 18:16   ` [PATCH rdma-core 03/10] i40iw: Fix incorrect assignment of SQ head Tatyana Nikolova
2016-12-08 18:16   ` [PATCH rdma-core 04/10] i40iw: Optimize inline data copy Tatyana Nikolova
2016-12-08 18:16   ` [PATCH rdma-core 05/10] i40iw: Remove unnecessary check for moving CQ head Tatyana Nikolova
2016-12-08 18:16   ` [PATCH rdma-core 06/10] i40iw: Return correct error codes for destroy QP and CQ Tatyana Nikolova
2016-12-08 18:16   ` [PATCH rdma-core 07/10] i40iw: Do not destroy QP/CQ if lock is held Tatyana Nikolova
2016-12-08 18:16   ` [PATCH rdma-core 08/10] i40iw: Control debug error prints using env variable Tatyana Nikolova
     [not found]     ` <1481221001-1044-9-git-send-email-tatyana.e.nikolova-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
2016-12-08 18:22       ` Jason Gunthorpe
     [not found]         ` <20161208182225.GA32232-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
2016-12-08 18:48           ` Leon Romanovsky
2016-12-08 18:16   ` [PATCH rdma-core 09/10] i40iw: Use 2M huge pages for CQ/QP memory if available Tatyana Nikolova
2016-12-08 18:16   ` [PATCH rdma-core 10/10] i40iw: Remove SQ size constraint Tatyana Nikolova

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox