* [PATCH librdmacm 0/3] no IBV_SEND_INLINE thus memory registration required on QLogic/Intel HCA
From: Yann Droneaud @ 2013-08-17 13:38 UTC
To: Sean Hefty, infinipath-ral2JQCrhuEAvxtiuMwx3w
Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, Yann Droneaud
While doing some tests, I found that rdma_client fails
on my QLogic/Intel QLE7340 / QLE7342 HCAs:
# rdma_client
rdma_client: start
rdma_post_send 22
rdma_client: end -1
I had a deeper look at the examples and found that max_inline_data was returned as 0,
thus IBV_SEND_INLINE is not available and memory must be registered to provide a valid
lkey in the WR.
[BTW, registering the memory and still using IBV_SEND_INLINE worked;
shouldn't that be an error?]
Please find some patches to:
- document that the memory region is optional only with IBV_SEND_INLINE,
- add a query function to check max_inline_data on QPs created by the CMA,
- fix the examples so they use IBV_SEND_INLINE only when max_inline_data is
  large enough (the resulting pattern is sketched below).
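For illustration, the send path in the fixed examples ends up looking roughly
like this condensed sketch (rdma_query_qp() is the new function from patch 2;
error handling and MR cleanup are shortened):

    struct ibv_qp_init_attr attr;
    struct ibv_mr *send_mr = NULL;
    int flags = 0;

    if (rdma_query_qp(id, &attr))
            return -1;

    if (attr.cap.max_inline_data >= sizeof send_msg) {
            /* the HCA accepts this payload inline, no lkey needed */
            flags = IBV_SEND_INLINE;
    } else {
            /* fall back to a registered MR so the WR carries a valid lkey */
            send_mr = rdma_reg_msgs(id, send_msg, sizeof send_msg);
            if (!send_mr)
                    return -1;
    }

    return rdma_post_send(id, NULL, send_msg, sizeof send_msg, send_mr, flags);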
Yann Droneaud (3):
man: rdma_post_*(): memory region is optional only with
IBV_SEND_INLINE.
adds rdma_query_qp() function.
examples: use IBV_SEND_INLINE if supported
examples/rdma_client.c | 25 +++++++++++++++++++------
examples/rdma_server.c | 31 +++++++++++++++++++++++++------
examples/rdma_xclient.c | 42 ++++++++++++++++++++++++++++++++++--------
examples/rdma_xserver.c | 39 +++++++++++++++++++++++++++++----------
include/rdma/rdma_cma.h | 16 ++++++++++++++++
man/rdma_post_send.3 | 2 +-
man/rdma_post_ud_send.3 | 2 +-
man/rdma_post_write.3 | 2 +-
src/cma.c | 24 ++++++++++++++++++++++++
src/librdmacm.map | 1 +
10 files changed, 151 insertions(+), 33 deletions(-)
--
1.8.1.4
* [PATCH librdmacm 1/3] man: rdma_post_*(): memory region is optional only with IBV_SEND_INLINE.
From: Yann Droneaud @ 2013-08-17 13:38 UTC
To: Sean Hefty, infinipath-ral2JQCrhuEAvxtiuMwx3w
Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, Yann Droneaud
State explicitly when the MR argument is optional: unless IBV_SEND_INLINE is used,
the MR argument to the rdma_post_*() functions is mandatory.
Signed-off-by: Yann Droneaud <ydroneaud-RlY5vtjFyJ3QT0dZR+AlfA@public.gmane.org>
---
man/rdma_post_send.3 | 2 +-
man/rdma_post_ud_send.3 | 2 +-
man/rdma_post_write.3 | 2 +-
3 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/man/rdma_post_send.3 b/man/rdma_post_send.3
index 82bcf37..bf35716 100644
--- a/man/rdma_post_send.3
+++ b/man/rdma_post_send.3
@@ -22,7 +22,7 @@ The address of the memory buffer to post.
.IP "length" 12
The length of the memory buffer.
.IP "mr" 12
-Optional registered memory region associated with the posted buffer.
+Registered memory region associated with the posted buffer. Optional if IBV_SEND_INLINE flag is used.
.IP "flags" 12
Optional flags used to control the send operation.
.SH "DESCRIPTION"
diff --git a/man/rdma_post_ud_send.3 b/man/rdma_post_ud_send.3
index f8e2ada..dd40b8a 100644
--- a/man/rdma_post_ud_send.3
+++ b/man/rdma_post_ud_send.3
@@ -24,7 +24,7 @@ The address of the memory buffer to post.
.IP "length" 12
The length of the memory buffer.
.IP "mr" 12
-Optional registered memory region associated with the posted buffer.
+Registered memory region associated with the posted buffer. Optional if IBV_SEND_INLINE_FLAG is used.
.IP "flags" 12
Optional flags used to control the send operation.
.IP "ah" 12
diff --git a/man/rdma_post_write.3 b/man/rdma_post_write.3
index 896996c..3fbd8a1 100644
--- a/man/rdma_post_write.3
+++ b/man/rdma_post_write.3
@@ -24,7 +24,7 @@ The local address of the source of the write request.
.IP "length" 12
The length of the write operation.
.IP "mr" 12
-Optional memory region associated with the local buffer.
+Registered memory region associated with the local buffer. Optional if IBV_SEND_INLINE flag is used.
.IP "flags" 12
Optional flags used to control the write operation.
.IP "remote_addr" 12
--
1.8.1.4
* [PATCH librdmacm 2/3] adds rdma_query_qp() function.
From: Yann Droneaud @ 2013-08-17 13:38 UTC
To: Sean Hefty, infinipath-ral2JQCrhuEAvxtiuMwx3w
Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, Yann Droneaud
This patch adds a new function to retrieve a QP's initial parameters
and capabilities.
This function is especially useful for QPs created by rdma_get_request(): their
parameters and capabilities might be slightly different
from those requested through rdma_create_ep().
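A minimal caller-side sketch (assuming the rdma_cm_id already has a QP bound,
e.g. after rdma_create_ep() or rdma_get_request()):

    struct ibv_qp_init_attr actual;

    if (rdma_query_qp(id, &actual))
            return -1;      /* errno is set on failure */

    /* the provider may have granted caps different from those requested,
     * e.g. max_inline_data clamped to the hardware maximum */
    printf("max_inline_data: %u\n", actual.cap.max_inline_data);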
Signed-off-by: Yann Droneaud <ydroneaud-RlY5vtjFyJ3QT0dZR+AlfA@public.gmane.org>
---
include/rdma/rdma_cma.h | 16 ++++++++++++++++
src/cma.c | 24 ++++++++++++++++++++++++
src/librdmacm.map | 1 +
3 files changed, 41 insertions(+)
diff --git a/include/rdma/rdma_cma.h b/include/rdma/rdma_cma.h
index 4c4a057..5b433ab 100644
--- a/include/rdma/rdma_cma.h
+++ b/include/rdma/rdma_cma.h
@@ -386,6 +386,22 @@ int rdma_create_qp(struct rdma_cm_id *id, struct ibv_pd *pd,
struct ibv_qp_init_attr *qp_init_attr);
/**
+ * rdma_query_qp - Query a QP.
+ * @id: RDMA identifier.
+ * @qp_init_attr: initial QP attributes.
+ * Description:
+ * Query QP init attributes.
+ * Notes:
+ * The rdma_cm_id must be bound to a QP before calling this function.
+ * This function should be used to check QP capabilities.
+ * See also:
+ * rdma_create_ep, rdma_get_request, rdma_create_qp, ibv_create_qp,
+ * ibv_query_qp
+ */
+int rdma_query_qp(struct rdma_cm_id *id,
+ struct ibv_qp_init_attr *qp_init_attr);
+
+/**
* rdma_destroy_qp - Deallocate a QP.
* @id: RDMA identifier.
* Description:
diff --git a/src/cma.c b/src/cma.c
index 374844c..24fd86b 100644
--- a/src/cma.c
+++ b/src/cma.c
@@ -1265,6 +1265,30 @@ err1:
return ret;
}
+int rdma_query_qp(struct rdma_cm_id *id,
+ struct ibv_qp_init_attr *qp_init_attr)
+{
+ struct ibv_qp_attr attr;
+ int ret;
+
+ memset(&attr, 0, sizeof attr);
+ memset(qp_init_attr, 0, sizeof *qp_init_attr);
+
+ ret = ibv_query_qp(id->qp, &attr, IBV_QP_CAP, qp_init_attr);
+ if (ret)
+ return ERR(ret);
+
+ assert(attr.cap.max_send_wr == qp_init_attr->cap.max_send_wr);
+ assert(attr.cap.max_recv_wr == qp_init_attr->cap.max_recv_wr);
+
+ assert(attr.cap.max_send_sge == qp_init_attr->cap.max_send_sge);
+ assert(attr.cap.max_recv_sge == qp_init_attr->cap.max_recv_sge);
+
+ assert(attr.cap.max_inline_data == qp_init_attr->cap.max_inline_data);
+
+ return 0;
+}
+
void rdma_destroy_qp(struct rdma_cm_id *id)
{
ibv_destroy_qp(id->qp);
diff --git a/src/librdmacm.map b/src/librdmacm.map
index d5ef736..fd2daef 100644
--- a/src/librdmacm.map
+++ b/src/librdmacm.map
@@ -8,6 +8,7 @@ RDMACM_1.0 {
rdma_resolve_addr;
rdma_resolve_route;
rdma_create_qp;
+ rdma_query_qp;
rdma_destroy_qp;
rdma_connect;
rdma_listen;
--
1.8.1.4
* [PATCH librdmacm 3/3] examples: use IBV_SEND_INLINE if supported
From: Yann Droneaud @ 2013-08-17 13:38 UTC
To: Sean Hefty, infinipath-ral2JQCrhuEAvxtiuMwx3w
Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, Yann Droneaud
On QLogic InfiniPath / Intel True Scale Fabric HCAs,
using the ipath userspace driver (libipathverbs) with the qib kernel module,
IBV_SEND_INLINE cannot be used because
init_attr.cap.max_inline_data is set to 0 upon QP creation.
rdma_client and rdma_server try to set max inline data to 16,
but don't check the value after rdma_create_ep() and rdma_get_request().
This patch adds:
- check for current max inline data,
- memory registration if max inline data is less than requested,
- conditional use of IBV_SEND_INLINE.
Tested with QLE7340-CK and QLE7342-CK,
on Fedora 18 and Fedora 19 with kernel v3.11-rc5,
using libipathverbs 1.2,
libibverbs 1.1.17,
and librdmacm 1.0.17.
Signed-off-by: Yann Droneaud <ydroneaud-RlY5vtjFyJ3QT0dZR+AlfA@public.gmane.org>
---
examples/rdma_client.c | 25 +++++++++++++++++++------
examples/rdma_server.c | 31 +++++++++++++++++++++++++------
examples/rdma_xclient.c | 42 ++++++++++++++++++++++++++++++++++--------
examples/rdma_xserver.c | 39 +++++++++++++++++++++++++++++----------
4 files changed, 107 insertions(+), 30 deletions(-)
diff --git a/examples/rdma_client.c b/examples/rdma_client.c
index 7a59d97..830bdc2 100644
--- a/examples/rdma_client.c
+++ b/examples/rdma_client.c
@@ -39,7 +39,8 @@ static char *server = "127.0.0.1";
static char *port = "7471";
struct rdma_cm_id *id;
-struct ibv_mr *mr;
+struct ibv_mr *send_mr;
+struct ibv_mr *recv_mr;
uint8_t send_msg[16];
uint8_t recv_msg[16];
@@ -71,13 +72,13 @@ static int run(void)
return ret;
}
- mr = rdma_reg_msgs(id, recv_msg, 16);
- if (!mr) {
+ recv_mr = rdma_reg_msgs(id, recv_msg, 16);
+ if (!recv_mr) {
printf("rdma_reg_msgs %d\n", errno);
return ret;
}
- ret = rdma_post_recv(id, NULL, recv_msg, 16, mr);
+ ret = rdma_post_recv(id, NULL, recv_msg, 16, recv_mr);
if (ret) {
printf("rdma_post_recv %d\n", errno);
return ret;
@@ -89,7 +90,17 @@ static int run(void)
return ret;
}
- ret = rdma_post_send(id, NULL, send_msg, 16, NULL, IBV_SEND_INLINE);
+ if (attr.cap.max_inline_data < 16) {
+ send_mr = rdma_reg_msgs(id, send_msg, 16);
+ if (!send_mr) {
+ printf("rdma_reg_msgs %d\n", errno);
+ return ret;
+ }
+ }
+
+ ret = rdma_post_send(id, NULL, send_msg, 16,
+ attr.cap.max_inline_data < 16 ? send_mr : NULL,
+ attr.cap.max_inline_data < 16 ? 0 : IBV_SEND_INLINE);
if (ret) {
printf("rdma_post_send %d\n", errno);
return ret;
@@ -102,7 +113,9 @@ static int run(void)
}
rdma_disconnect(id);
- rdma_dereg_mr(mr);
+ rdma_dereg_mr(recv_mr);
+ if (attr.cap.max_inline_data < 16)
+ rdma_dereg_mr(send_mr);
rdma_destroy_ep(id);
return 0;
}
diff --git a/examples/rdma_server.c b/examples/rdma_server.c
index 5b9e16d..459cb38 100644
--- a/examples/rdma_server.c
+++ b/examples/rdma_server.c
@@ -39,7 +39,8 @@
static char *port = "7471";
struct rdma_cm_id *listen_id, *id;
-struct ibv_mr *mr;
+struct ibv_mr *send_mr;
+struct ibv_mr *recv_mr;
uint8_t send_msg[16];
uint8_t recv_msg[16];
@@ -83,13 +84,13 @@ static int run(void)
return ret;
}
- mr = rdma_reg_msgs(id, recv_msg, 16);
- if (!mr) {
+ recv_mr = rdma_reg_msgs(id, recv_msg, 16);
+ if (!recv_mr) {
printf("rdma_reg_msgs %d\n", errno);
return ret;
}
- ret = rdma_post_recv(id, NULL, recv_msg, 16, mr);
+ ret = rdma_post_recv(id, NULL, recv_msg, 16, recv_mr);
if (ret) {
printf("rdma_post_recv %d\n", errno);
return ret;
@@ -107,7 +108,23 @@ static int run(void)
return ret;
}
- ret = rdma_post_send(id, NULL, send_msg, 16, NULL, IBV_SEND_INLINE);
+ ret = rdma_query_qp(id, &attr);
+ if (ret) {
+ printf("rdma_query_qp %d\n", errno);
+ return ret;
+ }
+
+ if (attr.cap.max_inline_data < 16) {
+ send_mr = rdma_reg_msgs(id, send_msg, 16);
+ if (!send_mr) {
+ printf("rdma_reg_msgs %d\n", errno);
+ return ret;
+ }
+ }
+
+ ret = rdma_post_send(id, NULL, send_msg, 16,
+ attr.cap.max_inline_data < 16 ? send_mr : NULL,
+ attr.cap.max_inline_data < 16 ? 0 : IBV_SEND_INLINE);
if (ret) {
printf("rdma_post_send %d\n", errno);
return ret;
@@ -120,7 +137,9 @@ static int run(void)
}
rdma_disconnect(id);
- rdma_dereg_mr(mr);
+ rdma_dereg_mr(recv_mr);
+ if (attr.cap.max_inline_data < 16)
+ rdma_dereg_mr(send_mr);
rdma_destroy_ep(id);
rdma_destroy_ep(listen_id);
return 0;
diff --git a/examples/rdma_xclient.c b/examples/rdma_xclient.c
index e192290..d93b40c 100644
--- a/examples/rdma_xclient.c
+++ b/examples/rdma_xclient.c
@@ -41,7 +41,8 @@ static char port[6] = "7471";
static int (*run_func)() = NULL;
struct rdma_cm_id *id;
-struct ibv_mr *mr;
+struct ibv_mr *send_mr;
+struct ibv_mr *recv_mr;
enum ibv_qp_type qpt = IBV_QPT_RC;
#define MSG_SIZE 16
@@ -131,6 +132,7 @@ static int xrc_resolve_srqn(void)
static int xrc_test(void)
{
+ struct ibv_qp_init_attr attr;
struct ibv_send_wr wr, *bad;
struct ibv_sge sge;
struct ibv_wc wc;
@@ -144,15 +146,25 @@ static int xrc_test(void)
if (ret)
return ret;
+ ret = rdma_query_qp(id, &attr);
+ if (ret)
+ return ret;
+
+ if (attr.cap.max_inline_data < sizeof send_msg) {
+ send_mr = rdma_reg_msgs(id, send_msg, sizeof send_msg);
+ if (!send_mr)
+ return -1;
+ }
+
sge.addr = (uint64_t) (uintptr_t) send_msg;
sge.length = (uint32_t) sizeof send_msg;
- sge.lkey = 0;
+ sge.lkey = attr.cap.max_inline_data < sizeof send_msg ? send_mr->lkey : 0;
wr.wr_id = (uintptr_t) NULL;
wr.next = NULL;
wr.sg_list = &sge;
wr.num_sge = 1;
wr.opcode = IBV_WR_SEND;
- wr.send_flags = IBV_SEND_INLINE;
+ wr.send_flags = attr.cap.max_inline_data < sizeof send_msg ? 0 : IBV_SEND_INLINE;
wr.wr.xrc.remote_srqn = srqn;
ret = ibv_post_send(id->qp, &wr, &bad);
@@ -168,6 +180,8 @@ static int xrc_test(void)
}
rdma_disconnect(id);
+ if (attr.cap.max_inline_data < sizeof send_msg)
+ rdma_dereg_mr(send_mr);
rdma_destroy_ep(id);
return 0;
}
@@ -212,13 +226,13 @@ static int rc_test(void)
return ret;
}
- mr = rdma_reg_msgs(id, recv_msg, sizeof recv_msg);
- if (!mr) {
+ recv_mr = rdma_reg_msgs(id, recv_msg, sizeof recv_msg);
+ if (!recv_mr) {
printf("rdma_reg_msgs %d\n", errno);
return ret;
}
- ret = rdma_post_recv(id, NULL, recv_msg, sizeof recv_msg, mr);
+ ret = rdma_post_recv(id, NULL, recv_msg, sizeof recv_msg, recv_mr);
if (ret) {
printf("rdma_post_recv %d\n", errno);
return ret;
@@ -230,7 +244,17 @@ static int rc_test(void)
return ret;
}
- ret = rdma_post_send(id, NULL, send_msg, sizeof send_msg, NULL, IBV_SEND_INLINE);
+ if (attr.cap.max_inline_data < sizeof send_msg) {
+ send_mr = rdma_reg_msgs(id, send_msg, sizeof send_msg);
+ if (!send_mr) {
+ printf("rdma_reg_msgs %d\n", errno);
+ return ret;
+ }
+ }
+
+ ret = rdma_post_send(id, NULL, send_msg, sizeof send_msg,
+ attr.cap.max_inline_data < sizeof send_msg ? send_mr : NULL,
+ attr.cap.max_inline_data < sizeof send_msg ? 0 : IBV_SEND_INLINE);
if (ret) {
printf("rdma_post_send %d\n", errno);
return ret;
@@ -243,7 +267,9 @@ static int rc_test(void)
}
rdma_disconnect(id);
- rdma_dereg_mr(mr);
+ rdma_dereg_mr(recv_mr);
+ if (attr.cap.max_inline_data < sizeof send_msg)
+ rdma_dereg_mr(send_mr);
rdma_destroy_ep(id);
return 0;
}
diff --git a/examples/rdma_xserver.c b/examples/rdma_xserver.c
index df3e665..32a119a 100644
--- a/examples/rdma_xserver.c
+++ b/examples/rdma_xserver.c
@@ -42,7 +42,8 @@ static char *port = "7471";
static int (*run_func)();
struct rdma_cm_id *listen_id, *id;
-struct ibv_mr *mr;
+struct ibv_mr *send_mr;
+struct ibv_mr *recv_mr;
enum ibv_qp_type qpt = IBV_QPT_RC;
#define MSG_SIZE 16
@@ -192,13 +193,13 @@ static int xrc_test(void)
return ret;
}
- mr = rdma_reg_msgs(srq_id, recv_msg, sizeof recv_msg);
- if (!mr) {
+ recv_mr = rdma_reg_msgs(srq_id, recv_msg, sizeof recv_msg);
+ if (!recv_mr) {
printf("ibv_reg_msgs %d\n", errno);
return ret;
}
- ret = rdma_post_recv(srq_id, NULL, recv_msg, sizeof recv_msg, mr);
+ ret = rdma_post_recv(srq_id, NULL, recv_msg, sizeof recv_msg, recv_mr);
if (ret) {
printf("rdma_post_recv %d\n", errno);
return ret;
@@ -229,7 +230,7 @@ static int xrc_test(void)
rdma_ack_cm_event(event);
rdma_disconnect(conn_id);
rdma_destroy_ep(conn_id);
- rdma_dereg_mr(mr);
+ rdma_dereg_mr(recv_mr);
rdma_destroy_ep(srq_id);
rdma_destroy_ep(listen_id);
return 0;
@@ -288,13 +289,13 @@ static int rc_test(void)
return ret;
}
- mr = rdma_reg_msgs(id, recv_msg, sizeof recv_msg);
- if (!mr) {
+ recv_mr = rdma_reg_msgs(id, recv_msg, sizeof recv_msg);
+ if (!recv_mr) {
printf("rdma_reg_msgs %d\n", errno);
return ret;
}
- ret = rdma_post_recv(id, NULL, recv_msg, sizeof recv_msg, mr);
+ ret = rdma_post_recv(id, NULL, recv_msg, sizeof recv_msg, recv_mr);
if (ret) {
printf("rdma_post_recv %d\n", errno);
return ret;
@@ -312,7 +313,23 @@ static int rc_test(void)
return ret;
}
- ret = rdma_post_send(id, NULL, send_msg, sizeof send_msg, NULL, IBV_SEND_INLINE);
+ ret = rdma_query_qp(id, &attr);
+ if (ret) {
+ printf("rdma_query_qp %d\n", errno);
+ return ret;
+ }
+
+ if (attr.cap.max_inline_data < sizeof send_msg) {
+ send_mr = rdma_reg_msgs(id, send_msg, sizeof send_msg);
+ if (!send_mr) {
+ printf("rdma_reg_msgs %d\n", errno);
+ return ret;
+ }
+ }
+
+ ret = rdma_post_send(id, NULL, send_msg, sizeof send_msg,
+ attr.cap.max_inline_data < sizeof send_msg ? send_mr : NULL,
+ attr.cap.max_inline_data < sizeof send_msg ? 0 : IBV_SEND_INLINE);
if (ret) {
printf("rdma_post_send %d\n", errno);
return ret;
@@ -325,7 +342,9 @@ static int rc_test(void)
}
rdma_disconnect(id);
- rdma_dereg_mr(mr);
+ rdma_dereg_mr(recv_mr);
+ if (attr.cap.max_inline_data < sizeof send_msg)
+ rdma_dereg_mr(send_mr);
rdma_destroy_ep(id);
rdma_destroy_ep(listen_id);
return 0;
--
1.8.1.4
* Re: [PATCH librdmacm 1/3] man: rdma_post_*(): memory region is optional only with IBV_SEND_INLINE.
From: Yann Droneaud @ 2013-08-17 13:51 UTC
To: Sean Hefty; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA
On 17.08.2013 15:38, Yann Droneaud wrote:
> State explicitly when MR argument is optional.
>
> MR should be a mandatory argument to rdma_post_*() functions.
>
> Signed-off-by: Yann Droneaud <ydroneaud-RlY5vtjFyJ3QT0dZR+AlfA@public.gmane.org>
> ---
...
> man/rdma_post_ud_send.3 | 2 +-
...
> diff --git a/man/rdma_post_ud_send.3 b/man/rdma_post_ud_send.3
> index f8e2ada..dd40b8a 100644
> --- a/man/rdma_post_ud_send.3
> +++ b/man/rdma_post_ud_send.3
> @@ -24,7 +24,7 @@ The address of the memory buffer to post.
> .IP "length" 12
> The length of the memory buffer.
> .IP "mr" 12
> -Optional registered memory region associated with the posted buffer.
> +Registered memory region associated with the posted buffer. Optional
> if IBV_SEND_INLINE_FLAG is used.
s/_FLAG/ flag/
--
Yann Droneaud
* RE: [PATCH librdmacm 0/3] no IBV_SEND_INLINE thus memory registration required on QLogic/Intel HCA
From: Hefty, Sean @ 2013-08-27 18:36 UTC
To: Yann Droneaud, infinipath
Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> While doing some tests, I've found that rdma_client failed
> on my QLogic/Intel QLE7340 / QLE7342 HCA:
>
> # rdma_client
> rdma_client: start
> rdma_post_send 22
> rdma_client: end -1
>
> I had a deeper look on the examples and found that max_inline_data was
> returned as 0,
The man page for ibv_create_qp() states:
The function ibv_create_qp() will update the qp_init_attr->cap struct
with the actual QP values of the QP that was created; the values will
be greater than or equal to the values requested.
From this, it sounds like there is a bug in the Intel provider. The ibv_create_qp call should have failed.
- Sean
* RE: [PATCH librdmacm 0/3] no IBV_SEND_INLINE thus memory registration required on QLogic/Intel HCA
From: Marciniszyn, Mike @ 2013-08-28 14:09 UTC
To: Hefty, Sean, Yann Droneaud, infinipath
Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> The function ibv_create_qp() will update the qp_init_attr->cap struct
> with the actual QP values of the QP that was created; the values will
> be greater than or equal to the values requested.
>
You are correct. For qib, this is controlled by the kernel driver.
QP creation will succeed, but it updates caps.max_inline_data with the value of zero, which is its maximum.
Looking at the kernel's drivers/infiniband/hw directory, this is implemented inconsistently:
- ehca, nes, and ipath (not surprising) behave like qib and return their max
- max for ehca, ipath, and qib is 0
- nes inline is either 0 or 64 depending on a module parameter
- mlx5 appears (I'm not 100% sure) to behave like qib and returns its supported value
- mthca, mlx4, and cxgb[34] test and fail with -EINVAL, as you suggest
None of the -EINVAL failure cases write back the caps.max_inline_data.
Has librdmacm tried this across all the 'hw' providers?
Perhaps it would be best to have ULPs ask for 0 and have providers return their >= 0 max, which would implement this as the man page suggests?
Mike
* RE: [PATCH librdmacm 0/3] no IBV_SEND_INLINE thus memory registration required on QLogic/Intel HCA
From: Hefty, Sean @ 2013-08-28 16:00 UTC
To: Marciniszyn, Mike, 'Yann Droneaud', infinipath
Cc: 'linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org'
> Perhaps it would be best to have ULP's ask for 0 and have providers return
> their >= 0 max, which would implement this as the man page suggests?
Based on performance tests, increasing the maximum inline can result in a significant decrease in overall bandwidth. So, I don't believe that we want providers to increase max_inline automatically to what could be supported.
I'm not sure what to do here, given that the documentation and behaviors do not match. But having a consistent behavior would help. I guess user space providers could work around the issue by allocating internal buffers, registering them, and copying the user data into them on inline sends.
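As a rough illustration of that workaround (a sketch only; every name below is
hypothetical, not an existing provider API), an inline send could be emulated
with a pre-registered bounce buffer:

    /* hypothetical per-QP bounce buffer, registered once at QP creation */
    struct bounce_buf {
            struct ibv_mr *mr;
            uint8_t       *buf;
            size_t         size;
    };

    static int post_emulated_inline(struct ibv_qp *qp, struct bounce_buf *bb,
                                    struct ibv_send_wr *wr,
                                    struct ibv_send_wr **bad)
    {
            size_t len = wr->sg_list[0].length;

            if (wr->num_sge != 1 || len > bb->size)
                    return EINVAL;

            /* copy the payload so the caller may reuse its buffer at once,
             * which is the visible semantic of IBV_SEND_INLINE */
            memcpy(bb->buf, (void *)(uintptr_t)wr->sg_list[0].addr, len);
            wr->sg_list[0].addr = (uintptr_t)bb->buf;
            wr->sg_list[0].lkey = bb->mr->lkey;
            wr->send_flags &= ~IBV_SEND_INLINE;    /* ordinary lkey-based send */

            return ibv_post_send(qp, wr, bad);
    }

A real implementation would need one bounce buffer per outstanding WR, recycled
on completion, since a single buffer would be overwritten by the next send.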
- Sean
* RE: [PATCH librdmacm 0/3] no IBV_SEND_INLINE thus memory registration required on QLogic/Intel HCA
From: Marciniszyn, Mike @ 2013-08-28 16:14 UTC
To: Hefty, Sean, 'Yann Droneaud', infinipath
Cc: 'linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org'
> Based on performance tests, increasing the maximum inline can result in a
> significant decrease in overall bandwidth. So, I don't believe that we want
> providers to increase max_inline automatically to what could be supported.
>
So why would providers return a max_inline value that invites non-optimal performance?
Mike
* RE: [PATCH librdmacm 0/3] no IBV_SEND_INLINE thus memory registration required on QLogic/Intel HCA
From: Hefty, Sean @ 2013-08-28 17:02 UTC
To: Marciniszyn, Mike, 'Yann Droneaud', infinipath
Cc: 'linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org'
> > Based on performance tests, increasing the maximum inline can result in a
> > significant decrease in overall bandwidth. So, I don't believe that we
> want
> > providers to increase max_inline automatically to what could be
> supported.
> >
>
> So why would providers return a max_inline value that invites non-optimal
> performance?
It can improve small message latency.
I think you want the application (or administrator) to specify the desired inline size based on its needs. The other trade-off is that the amount of memory allocated for the QP increases with inline size.
- Sean