* rdma_lat woes
From: Or Gerlitz @ 2010-11-17 10:54 UTC (permalink / raw)
To: Jack Morgenstein, Ido Shamai, Vladimir Sokolovsky, linux-rdma,
Moni Shoua
Cc: Sean Hefty
Hi Ido,
We ran into a situation where running rdma_lat with vs. without the -c flag (that is, with or without the rdma-cm) shows a notable ~1us latency difference for 1K messages: ~3.0us without the rdma-cm and ~3.9us with it.
I have reproduced that now with the latest code from your git tree and also with the RHEL-provided perftest-1.2.3-1.el5 package; see the results below. Also, your tree is not available through the OFA git web service; Vlad, can you help set this up?
Now, Jack, using this patch,
Index: perftest/rdma_lat.c
===================================================================
--- perftest.orig/rdma_lat.c
+++ perftest/rdma_lat.c
@@ -666,7 +666,7 @@ static int pp_connect_ctx(struct pingpon
{
struct ibv_qp_attr attr = {
.qp_state = IBV_QPS_RTR,
- .path_mtu = IBV_MTU_256,
+ .path_mtu = IBV_MTU_2048,
.dest_qp_num = data->rem_dest->qpn,
.rq_psn = data->rem_dest->psn,
.max_dest_rd_atomic = 1,
I could get rdma_lat, which doesn't use the rdma-cm and instead sets all the low-level QP parameters
by hand, to produce the SAME result of 3.9us as with the rdma-cm. As you can see, it's a one-line
patch that uses a higher MTU of 2048 instead of the hard-coded MTU of 256. This is quite
counter-intuitive for packets whose size is > 256, correct? Is there any known issue that can
explain it? The SA is convinced that 2048 (0x84) is the best MTU for that path; both nodes
have ConnectX DDR HCAs with firmware 2.7.0.
Or.
> # saquery -p --src-to-dst 1:14
> Path record for 1 -> 14
> PathRecord dump:
> service_id..............0x0000000000000000
> dgid....................fe80::8:f104:399:3c91
> sgid....................fe80::2:c903:2:6be3
> dlid....................0xE
> slid....................0x1
> hop_flow_raw............0x0
> tclass..................0x0
> num_path_revers.........0x80
> pkey....................0xFFFF
> qos_class...............0x0
> sl......................0x0
> mtu.....................0x84
> rate....................0x86
> pkt_life................0x92
> preference..............0x0
> resv2...................0x0
> resv3...................0x0
before the patch
active side, without rdma-cm
> # rdma_lat 192.168.20.15 -s 1024 -n 10000
> 26113:pp_client_connect: Couldn't connect to 192.168.20.15:18515
> [root@nsg1 ~]# rdma_lat 192.168.20.15 -s 1024 -n 10000
> local address: LID 0x0e QPN 0x1c004d PSN 0x3a3dca RKey 0x48002600 VAddr 0x00000008a71400
> remote address: LID 0x04 QPN 0x20004c PSN 0x27973 RKey 0x50042700 VAddr 0x0000001b724400
> Latency typical: 3.01932 usec
> Latency best : 2.97582 usec
> Latency worst : 11.3183 usec
passive side, without rdma-cm
> # rdma_lat -s 1024 -n 10000
> local address: LID 0x04 QPN 0x20004c PSN 0x27973 RKey 0x50042700 VAddr 0x0000001b724400
> remote address: LID 0x0e QPN 0x1c004d PSN 0x3a3dca RKey 0x48002600 VAddr 0x00000008a71400
> Latency typical: 3.02386 usec
> Latency best : 2.97436 usec
> Latency worst : 6.63569 usec
active side, with rdma-cm (-c)
> # rdma_lat 192.168.20.15 -s 1024 -n 10000 -c
> 26133: Local address: LID 0000, QPN 000000, PSN 0xa12538 RKey 0x50002600 VAddr 0x00000013d27400
> 26133: Remote address: LID 0000, QPN 000000, PSN 0x5c01e8, RKey 0x58042700 VAddr 0x00000006dbb400
>
> Latency typical: 3.89977 usec
> Latency best : 3.83227 usec
> Latency worst : 13.6462 usec
passive side, with rdma-cm (-c)
> # rdma_lat -s 1024 -n 10000 -c
> 21826: Local address: LID 0000, QPN 000000, PSN 0x5c01e8 RKey 0x58042700 VAddr 0x00000006dbb400
> 21826: Remote address: LID 0000, QPN 000000, PSN 0xa12538, RKey 0x50002600 VAddr 0x00000013d27400
>
> Latency typical: 3.89982 usec
> Latency best : 3.83082 usec
> Latency worst : 13.6974 usec
After the patch, the result without -c and with MTU=2048 becomes 3.9us as well:
> /home/ogerlitz/linux/tools/perftest/rdma_lat 192.168.20.15 -s 1024 -n 10000
> local address: LID 0x0e QPN 0x3c004d PSN 0x14ff1e RKey 0x68002600 VAddr 0x00000016c5d400
> remote address: LID 0x04 QPN 0x40004c PSN 0xba137e RKey 0x70042700 VAddr 0x0000001f259400
> Latency typical: 3.88327 usec
> Latency best : 3.80378 usec
> Latency worst : 8.27951 usec
--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
* Re: rdma_lat woes
From: Or Gerlitz @ 2010-11-17 11:30 UTC (permalink / raw)
To: Jack Morgenstein, Ido Shamai, Vladimir Sokolovsky, linux-rdma,
Moni Shoua
Cc: Sean Hefty
Or Gerlitz wrote:
> Now, Jack, using this patch,
> I could get rdma_lat which doesn't use the rdma-cm, which means setting all the low-level QP params
> "by the hand" to produce the SAME result of 3.9us as with the rdma-cm, as you can see its one liner
> patch which uses higher MTU of 2048 vs the hard coded MTU of 256 used in the code. This is quite counter
> intuitive, for packets whose size is > 256, correct? is there any known issue that can
> explain that?! The SA is convinced that 2048 (0x84) is the best MTU for that path, both nodes
> have ConnectX DDR with firmware 2.7.0
Running with mtu=256 vs mtu=2048 at various msg sizes, you can see this phenomenon in action:
2.70us for a 1K msg with the LOWER mtu vs 3.54us with mtu=2048.
> # ib_send_lat -a -m 256 celery
> ------------------------------------------------------------------
> Send Latency Test
> Inline data is used up to 400 bytes message
> Connection type : RC
> local address: LID 0x04 QPN 0x4c004b PSN 0xf91b6e
> remote address: LID 0x04 QPN 0x4c004a PSN 0xb9e19a
> Mtu : 256
> ------------------------------------------------------------------
> #bytes #iterations t_min[usec] t_max[usec] t_typical[usec]
> 2 1000 1.04 10.33 1.06
> 4 1000 1.03 9.80 1.07
> 8 1000 1.03 7.89 1.07
> 16 1000 1.05 7.81 1.09
> 32 1000 1.09 7.77 1.11
> 64 1000 1.18 7.99 1.21
> 128 1000 1.48 8.05 1.51
> 256 1000 2.06 8.67 2.19
> 512 1000 2.31 14.38 2.38
> 1024 1000 2.62 14.54 2.70
> 2048 1000 3.26 14.90 3.36
> 4096 1000 4.54 28.84 4.64
> 8192 1000 7.19 18.48 7.30
> 16384 1000 12.61 24.15 12.75
> 32768 1000 23.46 34.77 24.30
> 65536 1000 45.19 56.48 45.75
> 131072 1000 88.64 102.99 89.22
> 262144 1000 175.78 186.59 175.97
> 524288 1000 349.16 359.92 349.35
> 1048576 1000 695.80 706.90 696.22
> 2097152 1000 1389.41 1400.87 1389.95
> 4194304 1000 2777.21 2799.41 2778.08
> 8388608 1000 5691.77 5752.48 5737.45
> ------------------------------------------------------------------
> All resources were Released successfully
> # ib_send_lat -a -m 2048 celery
> ------------------------------------------------------------------
> Send Latency Test
> Inline data is used up to 400 bytes message
> Connection type : RC
> local address: LID 0x04 QPN 0x54004b PSN 0x59384f
> remote address: LID 0x04 QPN 0x54004a PSN 0xcac60e
> Mtu : 2048
> ------------------------------------------------------------------
> #bytes #iterations t_min[usec] t_max[usec] t_typical[usec]
> 2 1000 1.02 10.66 1.06
> 4 1000 1.03 54.53 1.06
> 8 1000 1.04 36.84 1.06
> 16 1000 1.05 34.58 1.08
> 32 1000 1.08 40.70 1.11
> 64 1000 1.18 66.86 1.21
> 128 1000 1.48 38.76 1.51
> 256 1000 2.07 39.15 2.18
> 512 1000 2.58 45.90 2.67
> 1024 1000 3.46 38.07 3.54
> 2048 1000 5.22 40.55 5.31
> 4096 1000 6.53 39.17 6.61
> 8192 1000 9.16 38.85 9.29
> 16384 1000 14.52 50.14 14.99
> 32768 1000 25.54 51.20 25.64
> 65536 1000 47.18 59.76 47.30
> 131072 1000 90.87 96.92 90.99
> 262144 1000 178.22 183.27 178.37
> 524288 1000 352.93 358.01 353.13
> 1048576 1000 702.37 707.69 702.66
> 2097152 1000 1401.30 1411.63 1401.66
> 4194304 1000 2799.16 2845.59 2801.92
> 8388608 1000 5833.30 5911.70 5886.55
> ------------------------------------------------------------------
* Re: rdma_lat woes
From: Ido Shamai @ 2010-11-18 10:36 UTC (permalink / raw)
To: Or Gerlitz
Cc: Jack Morgenstein, Vladimir Sokolovsky, linux-rdma, Moni Shoua,
Sean Hefty
Hello Or,
I will check the issue now.
The latest git tree is available at git://git.openfabrics.org/~shamoya/perftest.git;
I've just checked it and it works.
Will get back to you soon.
Ido
On 11/17/2010 1:30 PM, Or Gerlitz wrote:
> Or Gerlitz wrote:
>> Now, Jack, using this patch,
>> I could get rdma_lat which doesn't use the rdma-cm, which means setting all the low-level QP params
>> "by the hand" to produce the SAME result of 3.9us as with the rdma-cm, as you can see its one liner
>> patch which uses higher MTU of 2048 vs the hard coded MTU of 256 used in the code. This is quite counter
>> intuitive, for packets whose size is> 256, correct? is there any known issue that can
>> explain that?! The SA is convinced that 2048 (0x84) is the best MTU for that path, both nodes
>> have ConnectX DDR with firmware 2.7.0
> running with mtu=256 vs mtu=2048 on various msg size, you can see this phenomena in action
> 2.70us for 1k msg with LOWER mtu vs 3.54us with mtu=2048
>> [ib_send_lat output for mtu=256 and mtu=2048 snipped; quoted in full in the previous message]
* Re: rdma_lat woes
From: Or Gerlitz @ 2010-11-18 11:44 UTC (permalink / raw)
To: Ido Shamai; +Cc: Jack Morgenstein, Vladimir Sokolovsky, linux-rdma, Moni Shoua
Ido Shamai wrote:
> The latest git tree is available at git://git.openfabrics.org/~shamoya/perftest.git
> I've just checked it and it works.
Indeed it's available, but not through the OFA gitweb at http://git.openfabrics.org/git/, so the commits you are making can't be browsed; Vlad can help you fix that up.
Or.
* Re: rdma_lat woes
From: Vladimir Sokolovsky @ 2010-11-18 11:49 UTC (permalink / raw)
To: Or Gerlitz; +Cc: Ido Shamai, Jack Morgenstein, linux-rdma, Moni Shoua
On 11/18/2010 01:44 PM, Or Gerlitz wrote:
> Ido Shamai wrote:
>> The latest git tree is available at git://git.openfabrics.org/~shamoya/perftest.git
>> I've just checked it and it works.
>
> Indeed its available but not through the ofa gitweb @ http://git.openfabrics.org/git/ so commits you are doing can't be browsed out, Vlad can help you fix that up.
>
> Or.
Hi Or,
perftest is available under:
http://git.openfabrics.org/git?p=~shamoya/perftest.git;a=summary
Regards,
Vladimir
* Re: rdma_lat woes
From: Or Gerlitz @ 2010-11-18 12:38 UTC (permalink / raw)
To: Vladimir Sokolovsky; +Cc: Ido Shamai, linux-rdma
Vladimir Sokolovsky wrote:
> perftest is available under:
> http://git.openfabrics.org/git?p=~shamoya/perftest.git;a=summary
Got that, but it says "Unnamed repository; edit this file to name it for gitweb." and doesn't have an owner; can you fix that, please?
Or.
* Re: rdma_lat woes
From: Or Gerlitz @ 2010-11-24 15:00 UTC (permalink / raw)
To: Ido Shamai, Jack Morgenstein
Cc: linux-rdma, Moni Shoua, Sean Hefty, Alex Rosenbaum
Ido Shamai wrote:
> I will check the issue now.
Thinking a little further on this and talking with some colleagues, a question came up:
could the issue be related to IB credits/starvation and/or protocol/interoperability
with switches?
In an attempt to rule that out, I repeated the test, this time in loopback: I ran ib_send_lat once with mtu=256 and once with mtu=2048, with both client and server running on the same node/HCA. Same result: mtu=256 produces much better latency for large messages, 1K and onward; for example, for msg size=8K the latency is ~7us with mtu=256 and ~9us with mtu=2K.
I recall that on the 2nd-generation HCA (Tavor), if mtu=2048 is used the bandwidth is severely degraded (hence the OFED Tavor quirk and friends). Can the problem I see on the 4th-generation HCA be somehow related?
Or.
* Re: rdma_lat woes
From: Or Gerlitz @ 2010-12-09 10:33 UTC (permalink / raw)
To: Ido Shamai; +Cc: Jack Morgenstein, linux-rdma, Moni Shoua
Ido Shamai wrote:
> The latest git tree is available at git://git.openfabrics.org/~shamoya/perftest.git
Ido, on a related issue: I'm trying to run ib_send_lat in an IBoE environment and it fails.
I'm using the latest cut of the perftest sources from git; for the other components (libibverbs, libmlx4, kernel, FW and HW) see below. It's a system of two nodes connected back-to-back, with port 1 being IB and port 2 being Eth; the same perftest code works okay on IB / p1-p1. I have ping working fine over mlx4_en, so basically things are okay. I think you commented a few weeks ago that perftest should now work with IBoE, so I wonder what goes wrong here?
client side:
> ib_send_lat -d mlx4_0 -i 2 boo1
> ------------------------------------------------------------------
> Send Latency Test
> Connection type : RC
> Inline data is used up to 400 bytes message
> Mtu : 1024
> Link type is Ethernet
> Using gid index 0 as source GID
> local address: LID 0000 QPN 0x44004f PSN 0x6b567a
> GID: 254:128:00:00:00:00:00:00:02:02:201:255:254:07:237:03
> remote address: LID 0000 QPN 0x48004f PSN 0x3e78fc
> GID: 254:128:00:00:00:00:00:00:02:02:201:255:254:07:236:243
> ------------------------------------------------------------------
> #bytes #iterations t_min[usec] t_max[usec] t_typical[usec]
> Completion with error at server
> Failed status 5: wr_id 0 syndrom 0xf4
> rcnt=0
server side:
> ib_send_lat -d mlx4_0 -i 2
> ------------------------------------------------------------------
> Send Latency Test
> Connection type : RC
> Inline data is used up to 400 bytes message
> Mtu : 1024
> Link type is Ethernet
> local address: LID 0000 QPN 0x48004f PSN 0x3e78fc
> GID: 254:128:00:00:00:00:00:00:02:02:201:255:254:07:236:243
> remote address: LID 0000 QPN 0x44004f PSN 0x6b567a
> GID: 254:128:00:00:00:00:00:00:02:02:201:255:254:07:237:03
> ------------------------------------------------------------------
> #bytes #iterations t_min[usec] t_max[usec] t_typical[usec]
Or.
It's ConnectX-2 on both sides with firmware 2.7.700:
> ibv_devinfo
> hca_id: mlx4_0
> transport: InfiniBand (0)
> fw_ver: 2.7.700
> node_guid: 0002:c903:0007:ed02
> sys_image_guid: 0002:c903:0007:ed05
> vendor_id: 0x02c9
> vendor_part_id: 26428
> hw_ver: 0xB0
> board_id: MT_0DD0120009
> phys_port_cnt: 2
> port: 1
> state: PORT_ACTIVE (4)
> max_mtu: 2048 (4)
> active_mtu: 2048 (4)
> sm_lid: 12
> port_lid: 9
> port_lmc: 0x00
> link_layer: IB
>
> port: 2
> state: PORT_ACTIVE (4)
> max_mtu: 2048 (4)
> active_mtu: 1024 (3)
> sm_lid: 0
> port_lid: 0
> port_lmc: 0x00
> link_layer: Ethernet
> ofa_kernel:
> git://git.openfabrics.org/ofed_1_5/linux-2.6.git ofed_kernel_1_5
> commit 21556e24411b4e4b0694f70244d4a33a454ddbf5
> libibverbs:
> http://www.openfabrics.org/downloads/libibverbs/libibverbs-1.1.4-0.14.gb6c138b.tar.gz
> libmlx4:
> http://www.openfabrics.org/downloads/libmlx4/libmlx4-1.0-0.13.g4e5c43f.tar.gz
* Re: rdma_lat woes
From: Or Gerlitz @ 2010-12-09 10:39 UTC (permalink / raw)
To: Ido Shamai; +Cc: linux-rdma
>> local address: LID 0000 QPN 0x44004f PSN 0x6b567a
>> GID: 254:128:00:00:00:00:00:00:02:02:201:255:254:07:237:03
Also, it would be much easier to track/debug if the GID octets were printed in hexadecimal; can you do that?
Or
* Re: rdma_lat woes
From: Or Gerlitz @ 2010-12-09 12:20 UTC (permalink / raw)
To: Ido Shamai; +Cc: linux-rdma
Or Gerlitz wrote:
> Ido Shamai wrote:
> I'm trying to run ib_send_lat in IBoE environment and it fails.
I got this to work now by specifying the IP address associated with the relevant mlx4_en network device on the server side; is this documented anywhere?
Or.
* Re: rdma_lat woes
From: Jason Gunthorpe @ 2010-12-09 17:28 UTC (permalink / raw)
To: Or Gerlitz; +Cc: Ido Shamai, linux-rdma
On Thu, Dec 09, 2010 at 12:39:00PM +0200, Or Gerlitz wrote:
> >> local address: LID 0000 QPN 0x44004f PSN 0x6b567a
> >> GID: 254:128:00:00:00:00:00:00:02:02:201:255:254:07:237:03
>
> Also, it would be much easier to track/debug if the GID octets will be printed in hexadecimal, can you?
GIDs should be printed with inet_ntop(AF_INET6)
Jason