public inbox for linux-rdma@vger.kernel.org
 help / color / mirror / Atom feed
* IPoIB to Ethernet routing performance
@ 2010-12-06 10:24 sebastien dugue
       [not found] ` <20101206112454.76bb85f1-xRPE6/W2vR9iM9LT7/dT9zWMkbuR3peG@public.gmane.org>
  0 siblings, 1 reply; 29+ messages in thread
From: sebastien dugue @ 2010-12-06 10:24 UTC (permalink / raw)
  To: OF EWG; +Cc: linux-rdma


  Hi,

  I know this might be off topic, but somebody may already have run into the
same problem.

  I'm trying to use a server as a router between an IB fabric and an Ethernet network.

  The router is fitted with one ConnectX-2 QDR HCA and one dual-port Myricom 10G
Ethernet adapter.

  I did some bandwidth measurements using iperf with the following setup:

  +---------+               +---------+               +---------+
  |         |               |         |   10G Eth     |         |
  |         |    QDR IB     |         +---------------+         |
  | client  +---------------+  Router |   10G Eth     |  Server |
  |         |               |         +---------------+         |
  |         |               |         |               |         |
  +---------+               +---------+               +---------+

  
  However, the routing performance is far from what I would have expected.

  Here are some numbers:

  - 1 IPoIB stream between client and router: 20 Gbits/sec

    Looks OK.

  - 2 Ethernet streams between router and server: 19.5 Gbits/sec

    Looks OK.

  - routing 1 IPoIB stream to 1 Ethernet stream from client to server: 9.8 Gbits/sec

    We manage to saturate the Ethernet link, looks good so far.

  - routing 2 IPoIB streams to 2 Ethernet streams from client to server: 9.3 Gbits/sec

    Argh, even less than when routing a single stream. I would have expected
    a bit more than this.
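
  For reference, iperf invocations along these lines produce this kind of
measurement (host names and durations here are illustrative assumptions,
not the exact commands used):

```shell
# On the receiving end of the leg being measured (router or server)
iperf -s

# Single stream from the client to the router over IPoIB
iperf -c router-ib -t 60

# Two parallel streams from the client to the server through the router
iperf -c server-eth -P 2 -t 60
```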


  Has anybody ever tried to do some routing between an IB fabric and an Ethernet
network and achieved some sensible bandwidth figures?

  Are there any known limitations in what I'm trying to achieve?


  Thanks,

  Sébastien.




--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: IPoIB to Ethernet routing performance
       [not found] ` <20101206112454.76bb85f1-xRPE6/W2vR9iM9LT7/dT9zWMkbuR3peG@public.gmane.org>
@ 2010-12-06 10:49   ` Richard Croucher
  2010-12-06 11:40     ` [ewg] " sebastien dugue
  2010-12-06 20:47   ` Jabe
  1 sibling, 1 reply; 29+ messages in thread
From: Richard Croucher @ 2010-12-06 10:49 UTC (permalink / raw)
  To: 'sebastien dugue', 'OF EWG'; +Cc: 'linux-rdma'

You may be able to improve things by doing some OS tuning.  All this data should stay in kernel mode, but there are lots of bottlenecks in the TCP/IP stack that limit scalability.  The IPoIB code has not been optimized for this use case.

You don't mention what server, kernel and OFED distro you are running.

The best performance is achieved using InfiniBand/Ethernet hardware gateways.  Most of these provide virtual Ethernet NICs to InfiniBand hosts, but the Voltaire 4036E does provide an IPoIB to Ethernet gateway capability.  It is FPGA based, so it provides much higher performance than you will achieve with a standard server solution.
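
A typical starting point for that kind of OS tuning on the router would be
something like the following (illustrative sysctls and values only, to be
adjusted per kernel and workload; nothing here was measured on this setup):

```shell
# Make sure the box forwards IPv4 traffic between the two fabrics
sysctl -w net.ipv4.ip_forward=1

# Raise socket buffer limits so a single TCP stream can fill a
# high-bandwidth path (values are examples, not recommendations)
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"

# Deepen the per-device receive backlog, which matters when one
# interface is feeding another at 10G+ rates
sysctl -w net.core.netdev_max_backlog=30000
```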



* Re: [ewg] IPoIB to Ethernet routing performance
  2010-12-06 10:49   ` Richard Croucher
@ 2010-12-06 11:40     ` sebastien dugue
       [not found]       ` <20101206124023.025c2f88-xRPE6/W2vR9iM9LT7/dT9zWMkbuR3peG@public.gmane.org>
  0 siblings, 1 reply; 29+ messages in thread
From: sebastien dugue @ 2010-12-06 11:40 UTC (permalink / raw)
  To: Richard Croucher; +Cc: 'OF EWG', 'linux-rdma'

On Mon, 6 Dec 2010 10:49:58 -0000
"Richard Croucher" <richard-jNDFPZUTrfRJpuwtbJ71GdBPR1lH4CV8@public.gmane.org> wrote:

> You may be able to improve by doing some OS tuning.

  Right, I tried a few things concerning TCP/IP stack tuning, but nothing
really came out of it.

>  All this data should stay in kernel mode but there are lots of bottlenecks in
> the TCP/IP stack that limit scalability.

  That may be my problem in fact.

>  The IPoIB code has not been optimized for this use case.

  I don't think IPoIB is the bottleneck in this case, as I managed to feed
2 IPoIB streams between the client and the router, yielding about 40 Gbit/s
of aggregate bandwidth.

> 
> You don't mention what Server, kernel and OFED distro you are running.

  Right, sorry. The router is one of our 4-socket Nehalem-EX boxes with 2 IOHs,
running OFED 1.5.2.

> 
> The best performance is achieved using InfiniBand/Ethernet hardware gateways.
> Most of these provide virtual Ethernet NICs to InfiniBand hosts, but the Voltaire
> 4036E does provide a  IPoIB to Ethernet gateway capability.  This is FPGA based
> so does provide much higher performance than you will achieve using a standard server solution.

  That may be a solution indeed. Are there any real world figures out there
concerning the 4036E performance?

  Thanks Richard,

  Sébastien.



* RE: [ewg] IPoIB to Ethernet routing performance
       [not found]       ` <20101206124023.025c2f88-xRPE6/W2vR9iM9LT7/dT9zWMkbuR3peG@public.gmane.org>
@ 2010-12-06 12:08         ` Richard Croucher
  2010-12-06 13:05           ` sebastien dugue
  2010-12-07 14:42           ` Or Gerlitz
  0 siblings, 2 replies; 29+ messages in thread
From: Richard Croucher @ 2010-12-06 12:08 UTC (permalink / raw)
  To: 'sebastien dugue'; +Cc: 'OF EWG', 'linux-rdma'

Unfortunately, the 4036E only has two 10G Ethernet ports, which will ultimately limit the throughput.

The Mellanox BridgeX looks like a better hardware solution, with 12x 10GE ports, but when I tested it they could only provide vNIC functionality and would not commit to adding an IPoIB gateway to their roadmap.

QLogic also offers the 12400 Gateway, which has 6x 10GE ports.  However, like the Mellanox, I understand it only provides host vNIC support.

I'll leave it to representatives from Voltaire, Mellanox and QLogic to update us, particularly on support for an InfiniBand to Ethernet gateway for RoCEE.  This is needed so that RDMA sessions can be run between InfiniBand- and RoCEE-connected hosts.  I don't believe this will work over any of today's available products.

Richard


* Re: [ewg] IPoIB to Ethernet routing performance
  2010-12-06 12:08         ` Richard Croucher
@ 2010-12-06 13:05           ` sebastien dugue
       [not found]             ` <20101206140505.20cfc9e2-xRPE6/W2vR9iM9LT7/dT9zWMkbuR3peG@public.gmane.org>
  2010-12-07 14:42           ` Or Gerlitz
  1 sibling, 1 reply; 29+ messages in thread
From: sebastien dugue @ 2010-12-06 13:05 UTC (permalink / raw)
  To: Richard Croucher; +Cc: 'linux-rdma', 'OF EWG'

On Mon, 6 Dec 2010 12:08:43 -0000
"Richard Croucher" <richard-jNDFPZUTrfRJpuwtbJ71GdBPR1lH4CV8@public.gmane.org> wrote:

> Unfortunately, the 4036E only has two 10G Ethernet ports which will ultimately limit the throughput.

  I'll need to look into this option.

> 
> The Mellanox BridgeX looks a better hardware solution with 12x 10Ge ports but when I tested this they could only provide vNIC functionality and would not commit to adding IPoIB gateway on their roadmap.

  Right, we did some evaluation of it and this was really a showstopper.

  Thanks,

  Sébastien.


* Re: IPoIB to Ethernet routing performance
       [not found] ` <20101206112454.76bb85f1-xRPE6/W2vR9iM9LT7/dT9zWMkbuR3peG@public.gmane.org>
  2010-12-06 10:49   ` Richard Croucher
@ 2010-12-06 20:47   ` Jabe
       [not found]     ` <4CFD4BEE.5070205-9AbUPqfR1/2XDw4h08c5KA@public.gmane.org>
  1 sibling, 1 reply; 29+ messages in thread
From: Jabe @ 2010-12-06 20:47 UTC (permalink / raw)
  To: sebastien dugue; +Cc: OF EWG, linux-rdma


>    The router is fitted with one ConnectX2 QDR HCA and one dual port Myricom 10G
> Ethernet adapter.
>
> ...
>
>    Here are some numbers:
>
>    - 1 IPoIB stream between client and router: 20 Gbits/sec
>
>      Looks OK.
>
>    - 2 Ethernet streams between router and server: 19.5 Gbits/sec
>
>      Looks OK.
>    


Actually I am amazed you can get such speed with IPoIB. Trying with
NPtcp on my DDR InfiniBand I can only obtain about 4.6 Gbit/sec at the
best packet size (that is 1/4 of the InfiniBand bandwidth) with this
chip embedded in the mainboard: Mellanox Technologies MT25204
[InfiniHost III Lx HCA], and dual E5430 Xeons (not Nehalem).
That's with a 2.6.37 kernel and the vanilla ib_ipoib module. What's wrong
with my setup?
I always assumed that such a slow speed was due to the lack of the
offloading capabilities you get with Ethernet cards, but maybe I was
wrong...?
Also, what application did you use for the benchmark?
Thank you

* Re: IPoIB to Ethernet routing performance
       [not found]     ` <4CFD4BEE.5070205-9AbUPqfR1/2XDw4h08c5KA@public.gmane.org>
@ 2010-12-06 21:27       ` Jason Gunthorpe
       [not found]         ` <20101206212759.GB16788-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
  2010-12-07  7:36       ` sebastien dugue
  1 sibling, 1 reply; 29+ messages in thread
From: Jason Gunthorpe @ 2010-12-06 21:27 UTC (permalink / raw)
  To: Jabe; +Cc: sebastien dugue, OF EWG, linux-rdma

On Mon, Dec 06, 2010 at 09:47:42PM +0100, Jabe wrote:

> Technologies MT25204 [InfiniHost III Lx HCA]; and dual E5430 xeon
> (not nehalem).

Newer Mellanox cards have most of the offloads you see on Ethernet NICs,
so they get better performance. Plus, Nehalem and later are just better at
TCP in the first place.

Jason

* Re: IPoIB to Ethernet routing performance
       [not found]     ` <4CFD4BEE.5070205-9AbUPqfR1/2XDw4h08c5KA@public.gmane.org>
  2010-12-06 21:27       ` Jason Gunthorpe
@ 2010-12-07  7:36       ` sebastien dugue
  1 sibling, 0 replies; 29+ messages in thread
From: sebastien dugue @ 2010-12-07  7:36 UTC (permalink / raw)
  To: Jabe; +Cc: OF EWG, linux-rdma


  Hi Jabe,

On Mon, 06 Dec 2010 21:47:42 +0100
Jabe <jabe.chapman-9AbUPqfR1/2XDw4h08c5KA@public.gmane.org> wrote:

> 
> >    The router is fitted with one ConnectX2 QDR HCA and one dual port Myricom 10G
> > Ethernet adapter.
> >
> > ...
> >
> >    Here are some numbers:
> >
> >    - 1 IPoIB stream between client and router: 20 Gbits/sec
> >
> >      Looks OK.
> >
> >    - 2 Ethernet streams between router and server: 19.5 Gbits/sec
> >
> >      Looks OK.
> >    
> 
> 
> Actually I am amazed you can get such a speed with IPoIB. Trying with 
> NPtcp on my DDR infiniband I can only obtain about 4.6Gbit/sec at the 
> best packet size (that is 1/4 of the infiniband bandwidth) with this 
> chip embedded in the mainboard: InfiniBand: Mellanox Technologies 
> MT25204 [InfiniHost III Lx HCA]; and dual E5430 xeon (not nehalem).
> That's with 2.6.37 kernel and vanilla ib_ipoib module. What's wrong with 
> my setup?
> I always assumed that such a slow speed was due to the lack of 
> offloading capabilities you get with ethernet cards, but maybe I was 
> wrong...?
> Also what application did you use for the benchmark?

 I'm using iperf.

  Sébastien.

> Thank you
> 

* Re: IPoIB to Ethernet routing performance
       [not found]         ` <20101206212759.GB16788-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
@ 2010-12-07  7:39           ` sebastien dugue
       [not found]             ` <20101207083911.2ab47a59-xRPE6/W2vR9iM9LT7/dT9zWMkbuR3peG@public.gmane.org>
  2010-12-13 16:02           ` Jabe
  1 sibling, 1 reply; 29+ messages in thread
From: sebastien dugue @ 2010-12-07  7:39 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: Jabe, OF EWG, linux-rdma


 Hi Jason,

On Mon, 6 Dec 2010 14:27:59 -0700
Jason Gunthorpe <jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org> wrote:

> On Mon, Dec 06, 2010 at 09:47:42PM +0100, Jabe wrote:
> 
> > Technologies MT25204 [InfiniHost III Lx HCA]; and dual E5430 xeon
> > (not nehalem).
> 
> Newer Mellanox cards have most of the offloads you see for ethernet so
> they get better performance.

  What kind of offload capabilities are you referring to for IPoIB?

> Plus > Nehalem is just better at TCP in
> the first place..

  Well, that depends on which Nehalem we're talking about. I've found that the EX
performs worse than the EP, though I didn't dig enough to find out why.

  Sébastien.

* Re: IPoIB to Ethernet routing performance
       [not found]             ` <20101207083911.2ab47a59-xRPE6/W2vR9iM9LT7/dT9zWMkbuR3peG@public.gmane.org>
@ 2010-12-07 10:02               ` Or Gerlitz
       [not found]                 ` <4CFE0638.2040105-smomgflXvOZWk0Htik3J/w@public.gmane.org>
  0 siblings, 1 reply; 29+ messages in thread
From: Or Gerlitz @ 2010-12-07 10:02 UTC (permalink / raw)
  To: sebastien dugue; +Cc: Jason Gunthorpe, Jabe, linux-rdma

sebastien dugue wrote:
>   What kind of offload capabilities are you referring to for IPoIB?
>
TCP stateless offloads: checksum, LSO and LRO (the latter to be replaced
by GRO at some point, the sooner the better).

Or.

* Re: IPoIB to Ethernet routing performance
       [not found]                 ` <4CFE0638.2040105-smomgflXvOZWk0Htik3J/w@public.gmane.org>
@ 2010-12-07 10:27                   ` sebastien dugue
       [not found]                     ` <20101207112739.0c95db46-xRPE6/W2vR9iM9LT7/dT9zWMkbuR3peG@public.gmane.org>
  0 siblings, 1 reply; 29+ messages in thread
From: sebastien dugue @ 2010-12-07 10:27 UTC (permalink / raw)
  To: Or Gerlitz; +Cc: Jason Gunthorpe, Jabe, linux-rdma

On Tue, 7 Dec 2010 12:02:32 +0200
Or Gerlitz <ogerlitz-smomgflXvOZWk0Htik3J/w@public.gmane.org> wrote:

> sebastien dugue wrote:
> >   What kind of offload capabilities are you referring to for IPoIB?
> >
> TCP stateless offloads: checksum, LSO, LRO (to be replaced to GRO, 
> sometime, sooner the better)
> 
> Or.
> 

  Huh? How do you do that?

# ethtool -k ib0
Offload parameters for ib0:
rx-checksumming: off
tx-checksumming: off
scatter-gather: off
tcp-segmentation-offload: off
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: off
large-receive-offload: off

# ethtool -K ib0 rx on
Cannot set device rx csum settings: Operation not supported

# ethtool -K ib0 tx on
Cannot set device tx csum settings: Operation not supported

# ethtool -K ib0 sg on
Cannot set device scatter-gather settings: Operation not supported

# ethtool -K ib0 tso on
Cannot set device tcp segmentation offload settings: Operation not supported

# ethtool -K ib0 lro on
--> Looks like this is the only knob available


# ibstat
CA 'mlx4_0'
	CA type: MT26428
	Number of ports: 2
	Firmware version: 2.7.0
	Hardware version: a0
	Node GUID: 0x0002c90300048914
	System image GUID: 0x0002c90300048917
	Port 1:
		State: Active
		Physical state: LinkUp
		Rate: 40
		Base lid: 1
		LMC: 0
		SM lid: 1
		Capability mask: 0x0251086a
		Port GUID: 0x0002c90300048915
	Port 2:
		State: Active
		Physical state: LinkUp
		Rate: 40
		Base lid: 9
		LMC: 0
		SM lid: 1
		Capability mask: 0x02510868
		Port GUID: 0x0002c90300048916


  Sébastien.

* Re: IPoIB to Ethernet routing performance
       [not found]                     ` <20101207112739.0c95db46-xRPE6/W2vR9iM9LT7/dT9zWMkbuR3peG@public.gmane.org>
@ 2010-12-07 10:33                       ` Or Gerlitz
       [not found]                         ` <4CFE0D7C.4020904-smomgflXvOZWk0Htik3J/w@public.gmane.org>
  0 siblings, 1 reply; 29+ messages in thread
From: Or Gerlitz @ 2010-12-07 10:33 UTC (permalink / raw)
  To: sebastien dugue; +Cc: Jason Gunthorpe, Jabe, linux-rdma

On 12/7/2010 12:27 PM, sebastien dugue wrote:
>
>    Huh? How do you do that?
See Documentation/infiniband/ipoib.txt in your clone of Linus' tree, or if 
you're not a developer, see
http://lxr.linux.no/#linux+v2.6.36/Documentation/infiniband/ipoib.txt
The HCA you're using does support offloads, so you're probably running 
connected mode; use datagram instead.
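ipoib.txt documents a sysfs knob for this; as a minimal sketch (assuming ib0 is the IPoIB interface, and that your kernel exposes the `mode` attribute described there):

```shell
# Check the current IPoIB transport mode (connected or datagram),
# per Documentation/infiniband/ipoib.txt.
cat /sys/class/net/ib0/mode

# Switch to datagram mode (on some kernels the interface must be
# brought down first).
echo datagram > /sys/class/net/ib0/mode
```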

Or.



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: IPoIB to Ethernet routing performance
       [not found]                         ` <4CFE0D7C.4020904-smomgflXvOZWk0Htik3J/w@public.gmane.org>
@ 2010-12-07 11:48                           ` sebastien dugue
       [not found]                             ` <20101207124805.1fede78f-xRPE6/W2vR9iM9LT7/dT9zWMkbuR3peG@public.gmane.org>
  0 siblings, 1 reply; 29+ messages in thread
From: sebastien dugue @ 2010-12-07 11:48 UTC (permalink / raw)
  To: Or Gerlitz; +Cc: Jason Gunthorpe, Jabe, linux-rdma

On Tue, 7 Dec 2010 12:33:32 +0200
Or Gerlitz <ogerlitz-smomgflXvOZWk0Htik3J/w@public.gmane.org> wrote:

> On 12/7/2010 12:27 PM, sebastien dugue wrote:
> >
> >    Huh? How do you do that?
> see Documentation/infiniband/ipoib.txt on your clone of Linus tree or if 
> you're not a developer, see
> http://lxr.linux.no/#linux+v2.6.36/Documentation/infiniband/ipoib.txt 
> the HCA you're using does support offloads, so you're probably running 
> connected mode, use datagram

  Right, I'm running in connected mode. Last time I checked, datagram mode
was far behind connected mode performance-wise.

  Sébastien.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: IPoIB to Ethernet routing performance
       [not found]                             ` <20101207124805.1fede78f-xRPE6/W2vR9iM9LT7/dT9zWMkbuR3peG@public.gmane.org>
@ 2010-12-07 12:32                               ` Or Gerlitz
       [not found]                                 ` <4CFE2953.2020807-smomgflXvOZWk0Htik3J/w@public.gmane.org>
  2010-12-07 13:01                               ` Hiroyuki Sato
  1 sibling, 1 reply; 29+ messages in thread
From: Or Gerlitz @ 2010-12-07 12:32 UTC (permalink / raw)
  To: sebastien dugue; +Cc: Jason Gunthorpe, Jabe, linux-rdma

On 12/7/2010 1:48 PM, sebastien dugue wrote:
> I'm running in connected mode. Last time I checked, datagram mode was 
> far from connected mode performance wise.

I believe that datagram mode can bring you to, or close to, 20 Gb/s 
without too much pain.

Or.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: IPoIB to Ethernet routing performance
       [not found]                             ` <20101207124805.1fede78f-xRPE6/W2vR9iM9LT7/dT9zWMkbuR3peG@public.gmane.org>
  2010-12-07 12:32                               ` Or Gerlitz
@ 2010-12-07 13:01                               ` Hiroyuki Sato
       [not found]                                 ` <AANLkTik2XaM6qyUUc+uzYCzKL+zKbN5kHz2Kf8So8arC-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  1 sibling, 1 reply; 29+ messages in thread
From: Hiroyuki Sato @ 2010-12-07 13:01 UTC (permalink / raw)
  To: sebastien dugue; +Cc: Or Gerlitz, Jason Gunthorpe, Jabe, linux-rdma

A basic question: do the MTU sizes match on the 10GbE and InfiniBand sides?

BTW, could you tell me more about your 10GbE configuration? What kind of
teaming are you using (e.g. IEEE 802.3ad)? (Out of personal interest.)
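A quick way to compare them, as a rough sketch (the interface names ib0 and eth2 are hypothetical; substitute your own):

```shell
# Print the configured MTU on each side of the router.
ip -o link show ib0  | grep -o 'mtu [0-9]*'
ip -o link show eth2 | grep -o 'mtu [0-9]*'
```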

--
Hiroyuki Sato.



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: IPoIB to Ethernet routing performance
       [not found]                                 ` <4CFE2953.2020807-smomgflXvOZWk0Htik3J/w@public.gmane.org>
@ 2010-12-07 13:19                                   ` sebastien dugue
  0 siblings, 0 replies; 29+ messages in thread
From: sebastien dugue @ 2010-12-07 13:19 UTC (permalink / raw)
  To: Or Gerlitz; +Cc: Jason Gunthorpe, Jabe, linux-rdma

On Tue, 7 Dec 2010 14:32:19 +0200
Or Gerlitz <ogerlitz-smomgflXvOZWk0Htik3J/w@public.gmane.org> wrote:

> On 12/7/2010 1:48 PM, sebastien dugue wrote:
> > I'm running in connected mode. Last time I checked, datagram mode was 
> > far from connected mode performance wise.
> 
> I believe that datagram mode can bring you to or closely to 20gbs 
> without too much pain

  OK, I will try that.

  Thanks,

  Sébastien.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: IPoIB to Ethernet routing performance
       [not found]                                 ` <AANLkTik2XaM6qyUUc+uzYCzKL+zKbN5kHz2Kf8So8arC-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2010-12-07 13:25                                   ` sebastien dugue
  0 siblings, 0 replies; 29+ messages in thread
From: sebastien dugue @ 2010-12-07 13:25 UTC (permalink / raw)
  To: Hiroyuki Sato; +Cc: Or Gerlitz, Jason Gunthorpe, Jabe, linux-rdma


  Hiroyuki,

On Tue, 7 Dec 2010 22:01:51 +0900
Hiroyuki Sato <hiroysato-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:

> Basic Question.
>   MTU size matched?? on 10GbE and Infiniband

  Well, I found that the best end-to-end IPoIB MTU for routing to 10G Ethernet is
8176 (in connected mode). That way the IB packet is 8192 bytes, which splits nicely
into the IB MTU of 2048 and at the same time fits in an Ethernet jumbo frame (9000).
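The arithmetic above can be sanity-checked with the numbers as stated (the exact per-packet header overhead is not spelled out in this thread, so this only checks the stated figures against each other):

```python
# Values taken from the message above.
IPOIB_MTU = 8176   # chosen end-to-end IPoIB MTU (connected mode)
IB_PACKET = 8192   # resulting IB packet size, as stated
IB_MTU = 2048      # link-level InfiniBand MTU
ETH_JUMBO = 9000   # Ethernet jumbo-frame MTU

def splits_evenly(packet, mtu):
    """True if the packet divides into an exact number of link MTUs."""
    return packet % mtu == 0

print(IB_PACKET // IB_MTU)                # → 4 link-level fragments
print(splits_evenly(IB_PACKET, IB_MTU))   # → True
print(IPOIB_MTU <= ETH_JUMBO)             # → True, fits a jumbo frame
```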

> 
> BTW, could you tell me more detail about 10GbE configuration.
> what kind of teaming are you using?? (ex IEEE802.3ad)
> (It is my interest.)

  I'm not using any teaming right now. I'm just trying to pass two streams over
a single IB link that then get routed through two Ethernet links.

  Sébastien.


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: IPoIB to Ethernet routing performance
  2010-12-06 12:08         ` Richard Croucher
  2010-12-06 13:05           ` sebastien dugue
@ 2010-12-07 14:42           ` Or Gerlitz
  1 sibling, 0 replies; 29+ messages in thread
From: Or Gerlitz @ 2010-12-07 14:42 UTC (permalink / raw)
  To: richard.croucher-jNDFPZUTrfRJpuwtbJ71GdBPR1lH4CV8,
	'Paul Chan', Liran Liss
  Cc: 'sebastien dugue', 'linux-rdma', Jason Gunthorpe

Richard Croucher wrote:
> Particularly on support for InfiniBand to Ethernet Gateway for RoCEE.  
> This is needed so that RDMA sessions can be run between InfiniBand and RoCEE connected hosts.  

I would be happy to hear what a real-life use case for such a configuration would be. I tend to think you'll need an IB router for this to work. Paul? Liran?

Or.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [ewg] IPoIB to Ethernet routing performance
       [not found]             ` <20101206140505.20cfc9e2-xRPE6/W2vR9iM9LT7/dT9zWMkbuR3peG@public.gmane.org>
@ 2010-12-09 23:46               ` Christoph Lameter
       [not found]                 ` <alpine.DEB.2.00.1012091745070.29367-sBS69tsa9Uj/9pzu0YdTqQ@public.gmane.org>
  0 siblings, 1 reply; 29+ messages in thread
From: Christoph Lameter @ 2010-12-09 23:46 UTC (permalink / raw)
  To: sebastien dugue; +Cc: Richard Croucher, 'linux-rdma', 'OF EWG'

On Mon, 6 Dec 2010, sebastien dugue wrote:

> > The Mellanox BridgeX looks a better hardware solution with 12x 10Ge
> > ports but when I tested this they could only provide vNIC
> > functionality and would not commit to adding IPoIB gateway on their
> > roadmap.
>
>   Right, we did some evaluation on it and this was really a show stopper.

Did the same thing here and came to the same conclusions.

> > Qlogic also offer the 12400 Gateway.  This has 6x 10ge ports.
> > However, like the Mellanox, I understand they only provide host vNIC
> > support.

Really? I was hoping that they would have something worth looking at.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: IPoIB to Ethernet routing performance
       [not found]         ` <20101206212759.GB16788-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
  2010-12-07  7:39           ` sebastien dugue
@ 2010-12-13 16:02           ` Jabe
       [not found]             ` <4D06438C.9040307-9AbUPqfR1/2XDw4h08c5KA@public.gmane.org>
  1 sibling, 1 reply; 29+ messages in thread
From: Jabe @ 2010-12-13 16:02 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: OF EWG, linux-rdma

On 12/06/2010 10:27 PM, Jason Gunthorpe wrote:
> On Mon, Dec 06, 2010 at 09:47:42PM +0100, Jabe wrote:
>    
>> Technologies MT25204 [InfiniHost III Lx HCA]; and dual E5430 xeon
>> (not nehalem).
>>      
> Newer Mellanox cards have most of the offloads you see for Ethernet, so
> they get better performance. Plus, Nehalem is just better at TCP in
> the first place.
>

Very interesting.

Do you know whether new QLogic IB cards like the QLE7340 also have such offloads?

In general, which brand would you recommend for IB and for IPoIB?

Thank you

^ permalink raw reply	[flat|nested] 29+ messages in thread

* RE: [ewg] IPoIB to Ethernet routing performance
       [not found]             ` <4D06438C.9040307-9AbUPqfR1/2XDw4h08c5KA@public.gmane.org>
@ 2010-12-14 16:35               ` Richard Croucher
  0 siblings, 0 replies; 29+ messages in thread
From: Richard Croucher @ 2010-12-14 16:35 UTC (permalink / raw)
  To: 'Jabe', 'Jason Gunthorpe'
  Cc: 'linux-rdma', 'OF EWG'

QLogic claim their QLE7340 has the lowest latency for MPI applications, but it
is restricted to a single port.   I've not carried out IPoIB testing of this
card.  There are plenty of published results for the Mellanox ConnectX cards,
which account for the majority of HCAs deployed.

My opinion is that the Mellanox ConnectX has more capabilities on board and
is probably the best all-rounder, but the QLogic TrueScale lets certain apps
get closer to the metal and therefore achieve lower latency.

You really have to carry out your own testing, since it depends on what you
consider important.

Richard


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [ewg] IPoIB to Ethernet routing performance
       [not found]                 ` <alpine.DEB.2.00.1012091745070.29367-sBS69tsa9Uj/9pzu0YdTqQ@public.gmane.org>
@ 2010-12-26  7:43                   ` Ali Ayoub
       [not found]                     ` <AANLkTi=2-FJtJCY0+79wyXRszwrhRwQStDTNjRCjr66F-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 29+ messages in thread
From: Ali Ayoub @ 2010-12-26  7:43 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: sebastien dugue, linux-rdma, Richard Croucher, OF EWG

On Thu, Dec 9, 2010 at 3:46 PM, Christoph Lameter <cl-vYTEC60ixJUAvxtiuMwx3w@public.gmane.org> wrote:
> On Mon, 6 Dec 2010, sebastien dugue wrote:
>
>> > The Mellanox BridgeX looks a better hardware solution with 12x 10Ge
>> > ports but when I tested this they could only provide vNIC
>> > functionality and would not commit to adding IPoIB gateway on their
>> > roadmap.
>>
>>   Right, we did some evaluation on it and this was really a show stopper.
>
> Did the same thing here came to the same conclusions.

May I ask why you need IPoIB when you have EoIB (the vNic driver)?
Why is it a show stopper?

^ permalink raw reply	[flat|nested] 29+ messages in thread

* RE: [ewg] IPoIB to Ethernet routing performance
       [not found]                     ` <AANLkTi=2-FJtJCY0+79wyXRszwrhRwQStDTNjRCjr66F-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2010-12-26 10:57                       ` Richard Croucher
  2010-12-27 11:51                         ` Jabe
  2010-12-28  0:06                         ` Ali Ayoub
  2011-01-03 22:33                       ` Christoph Lameter
  1 sibling, 2 replies; 29+ messages in thread
From: Richard Croucher @ 2010-12-26 10:57 UTC (permalink / raw)
  To: 'Ali Ayoub', 'Christoph Lameter'
  Cc: 'linux-rdma', 'sebastien dugue', 'OF EWG'

The vNIC driver only works when you have Ethernet/InfiniBand hardware
gateways in your environment.   It is useful when you have external hosts to
communicate with that do not have direct InfiniBand connectivity.
IPoIB is still heavily used in these environments to provide TCP/IP
connectivity within the InfiniBand fabric.
The primary use case for vNICs is probably virtualization servers, so that
individual guests can be presented with a virtual Ethernet NIC and do not
need to load any InfiniBand drivers.  Only the hypervisor needs to have
the InfiniBand software stack loaded.
I've also used vNICs in the financial services arena, for connectivity to
external TCP/IP services, but there the IPoIB gateway function is arguably
more useful.

The whole vNIC arena is complicated by different, incompatible
implementations from each of Qlogic and Mellanox.

Richard


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [ewg] IPoIB to Ethernet routing performance
  2010-12-26 10:57                       ` Richard Croucher
@ 2010-12-27 11:51                         ` Jabe
       [not found]                           ` <4D187DB1.5020005-9AbUPqfR1/2XDw4h08c5KA@public.gmane.org>
  2010-12-28  0:06                         ` Ali Ayoub
  1 sibling, 1 reply; 29+ messages in thread
From: Jabe @ 2010-12-27 11:51 UTC (permalink / raw)
  To: richard.croucher-jNDFPZUTrfRJpuwtbJ71GdBPR1lH4CV8
  Cc: Richard Croucher, 'Ali Ayoub',
	'Christoph Lameter', 'linux-rdma',
	'sebastien dugue', 'OF EWG'

On 12/26/2010 11:57 AM, Richard Croucher wrote:
> The vNIC driver only works when you have Ethernet/InfiniBand hardware
> gateways in your environment.   It is useful when you have external hosts to
> communicate with which do not have direct InfiniBand connectivity.
> IPoIB is still heavily used in these environments to provide TCP/IP
> connectivity within the InfiniBand fabric.
> The primary Use Case for vNICs is probably for virtualization servers, so
> that individual Guests can be presented with a virtual Ethernet NIC and do
> not lead to load any InfiniBand drivers.  Only the hypervisor needs to have
> the InfiniBand software stack loaded.
> I've also applied vNICs in the Financial Services arena, for connectivity to
> external TCP/IP services but there the IPoIB gateway function is arguably
> more useful.
>
> The whole vNIC arena is complicated by different, incompatible
> implementations from each of Qlogic and Mellanox.
>
> Richard
>    


Richard, with your explanation I understand why vNIC / EoIB is used in 
the case you cite, but I don't understand why it is NOT used in the 
other cases (like Ali says).

I can *guess* it's probably because with a virtual Ethernet fabric you 
have to run the whole IP stack in software, probably without even the 
stateless offloads (so it would be a performance reason). Is that the 
reason?

Thank you

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [ewg] IPoIB to Ethernet routing performance
  2010-12-26 10:57                       ` Richard Croucher
  2010-12-27 11:51                         ` Jabe
@ 2010-12-28  0:06                         ` Ali Ayoub
       [not found]                           ` <AANLkTi=ausgOuEMRe622dNRzVyhN3pxPWeTAJiJihssW-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  1 sibling, 1 reply; 29+ messages in thread
From: Ali Ayoub @ 2010-12-28  0:06 UTC (permalink / raw)
  To: richard.croucher-jNDFPZUTrfRJpuwtbJ71GdBPR1lH4CV8
  Cc: Christoph Lameter, linux-rdma, sebastien dugue, OF EWG

On Sun, Dec 26, 2010 at 2:57 AM, Richard Croucher
<richard-jNDFPZUTrfRJpuwtbJ71GdBPR1lH4CV8@public.gmane.org> wrote:
> The vNIC driver only works when you have Ethernet/InfiniBand hardware
> gateways in your environment.   It is useful when you have external hosts to
> communicate with which do not have direct InfiniBand connectivity.
> IPoIB is still heavily used in these environments to provide TCP/IP
> connectivity within the InfiniBand fabric.

Once you have the BridgeX HW, the Mellanox vNic (EoIB) driver provides
IB-to-EN connectivity as well as IB-to-IB connectivity.
Note that IB-to-IB connectivity doesn't involve the bridge HW (it is
peer-to-peer communication), so packets sent internally within the
IB fabric never reach the bridge HW.
Today EoIB requires the BridgeX HW; in the future, Mellanox may
support a "bridge-less" mode where it can work without the bridge HW.

> The primary Use Case for vNICs is probably for virtualization servers, so
> that individual Guests can be presented with a virtual Ethernet NIC and do
> not lead to load any InfiniBand drivers.  Only the hypervisor needs to have
> the InfiniBand software stack loaded.

EoIB's primary use is not virtualization, although it can support it in
better ways than other ULPs.
FYI, today a fully or para-virtualized driver in the guest OS is
needed for IPoIB as well.
Only when a platform-virtualization solution is available will the
guest OS run the IB stack (for any ULP).


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [ewg] IPoIB to Ethernet routing performance
       [not found]                           ` <AANLkTi=ausgOuEMRe622dNRzVyhN3pxPWeTAJiJihssW-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2010-12-28 15:30                             ` Reeted
       [not found]                               ` <4D1A02A8.1060104-9AbUPqfR1/2XDw4h08c5KA@public.gmane.org>
  0 siblings, 1 reply; 29+ messages in thread
From: Reeted @ 2010-12-28 15:30 UTC (permalink / raw)
  To: Ali Ayoub, richard.croucher-jNDFPZUTrfRJpuwtbJ71GdBPR1lH4CV8
  Cc: Christoph Lameter, linux-rdma, sebastien dugue, OF EWG

On 12/28/2010 01:06 AM, Ali Ayoub wrote:
> EoIB primary use is not virtualization, although it can support it in
> better ways than other ULPs.
> FYI, today running full/para virtualized driver in the Guest OS is
> needed also for IPoIB.
> Only when platform-virtualization solution is available, the GOS will
> run IB stack (for any ULP).
>    

You and Richard seem to have good experience with InfiniBand in 
virtualized environments, so may I ask one thing?
We were thinking about buying some Mellanox ConnectX-2 cards for use with 
SR-IOV (hardware virtualization for PCI bypass, supposedly supported by 
ConnectX-2) under KVM (which also supposedly supports SR-IOV and PCI bypass).
Do you know whether this will work, in KVM or other hypervisors?
I asked on the KVM mailing list, but they have not tried this card (which 
is the only SR-IOV card among InfiniBand ones, so they have not tried 
InfiniBand).
We are interested in both native InfiniBand and IPoIB support.
Thank you.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [ewg] IPoIB to Ethernet routing performance
       [not found]                               ` <4D1A02A8.1060104-9AbUPqfR1/2XDw4h08c5KA@public.gmane.org>
@ 2010-12-29 15:18                                 ` Tziporet Koren
  0 siblings, 0 replies; 29+ messages in thread
From: Tziporet Koren @ 2010-12-29 15:18 UTC (permalink / raw)
  To: Reeted
  Cc: Ali Ayoub, richard.croucher-jNDFPZUTrfRJpuwtbJ71GdBPR1lH4CV8,
	linux-rdma, Christoph Lameter, sebastien dugue, OF EWG

  On 12/28/2010 5:30 PM, Reeted wrote:
>
> You and Richard seem to have good experience of infiniband in 
> virtualized environments. May I ask one thing?
> We were thinking about buying some Mellanox Connectx-2 for use with 
> SR-IOV (hardware virtualization for PCI bypass, supposedly supported 
> by connectx-2) in KVM (also supposedly supports SR-IOV and PCI bypass).
> Do you have info if this will work, in KVM or other hypervisors?
> I asked in KVM mailing list but they have not tried this card (which 
> is the only SR-IOV card among Infiniband ones, so they have not tried 
> infiniband).
>
We are working on enabling SR-IOV on ConnectX-2 cards.
Once we have it working with KVM we will submit the patches to the 
linux-rdma list.
It should be in a few months - but don't ask how many is "a few" :-)

Tziporet


^ permalink raw reply	[flat|nested] 29+ messages in thread

* RE: [ewg] IPoIB to Ethernet routing performance
       [not found]                           ` <4D187DB1.5020005-9AbUPqfR1/2XDw4h08c5KA@public.gmane.org>
@ 2010-12-30 17:37                             ` Richard Croucher
  0 siblings, 0 replies; 29+ messages in thread
From: Richard Croucher @ 2010-12-30 17:37 UTC (permalink / raw)
  To: 'Jabe', richard.croucher-jNDFPZUTrfRJpuwtbJ71GdBPR1lH4CV8
  Cc: 'Ali Ayoub', 'Christoph Lameter',
	'linux-rdma', 'sebastien dugue', 'OF EWG'

IPoIB is far easier to use and does not carry the additional management
burden of vNICs.

With vNICs you have to manage the MAC address mapping to the Ethernet gateway
port. In some situations, such as when multiple gateways are used for
resiliency, this can amount to a lot of separate vNICs to manage on each
server.  In a small configuration I had, we ended up with 6 vNICs per server
to manage.  On a large configuration this additional management would be a
big burden.

My experience with IPoIB has always been very positive. All my existing
socket programs have worked, even some esoteric ioctls I use for multicast
and buffer management.
Performance could always be better, but in my experience it's not great for
the vNICs either.   Latency in particular was very disappointing when I
tested.
If you want high performance you have to avoid TCP/IP.


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [ewg] IPoIB to Ethernet routing performance
       [not found]                     ` <AANLkTi=2-FJtJCY0+79wyXRszwrhRwQStDTNjRCjr66F-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  2010-12-26 10:57                       ` Richard Croucher
@ 2011-01-03 22:33                       ` Christoph Lameter
  1 sibling, 0 replies; 29+ messages in thread
From: Christoph Lameter @ 2011-01-03 22:33 UTC (permalink / raw)
  To: Ali Ayoub; +Cc: sebastien dugue, linux-rdma, Richard Croucher, OF EWG

On Sat, 25 Dec 2010, Ali Ayoub wrote:

> On Thu, Dec 9, 2010 at 3:46 PM, Christoph Lameter <cl-vYTEC60ixJUAvxtiuMwx3w@public.gmane.org> wrote:
> > On Mon, 6 Dec 2010, sebastien dugue wrote:
> >
> >> > The Mellanox BridgeX looks a better hardware solution with 12x 10Ge
> >> > ports but when I tested this they could only provide vNIC
> >> > functionality and would not commit to adding IPoIB gateway on their
> >> > roadmap.
> >>
> >>   Right, we did some evaluation on it and this was really a show stopper.
> >
> > Did the same thing here and came to the same conclusions.
>
> May I ask why you need IPoIB when you have EoIB (the vNic driver)?
> Why is it a show stopper?

EoIB is immature for some use cases, financial services for example.
There is no multicast support: all multicast becomes broadcast. IPoIB,
by contrast, has extensive multicast support, and the various gotchas
and hiccups that were there initially have mostly been worked out.
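[Editor's note: the cost of degrading multicast to broadcast, as described
above for EoIB, can be quantified with a rough sketch. The port and
subscriber counts below are hypothetical, chosen only for illustration.]

```python
# Rough sketch (hypothetical numbers): copies the fabric must deliver for
# one datagram, comparing true multicast (only subscribed ports receive
# it, as with IPoIB) against multicast degraded to broadcast (every port
# receives it, as described for EoIB above).

def copies_delivered(total_ports: int, subscribers: int,
                     degraded_to_broadcast: bool) -> int:
    """Number of copies delivered for a single multicast datagram."""
    if degraded_to_broadcast:
        # Broadcast: every port on the fabric receives the frame,
        # whether it subscribed to the group or not.
        return total_ports
    # True multicast: only the subscribed ports receive the frame.
    return subscribers

# Example: a 288-port fabric, a market-data group with 16 subscribers.
ports, subs = 288, 16
print(copies_delivered(ports, subs, degraded_to_broadcast=False))  # 16
print(copies_delivered(ports, subs, degraded_to_broadcast=True))   # 288
```

For a latency-sensitive financial workload, the broadcast case also means
every non-subscribed host pays the cost of receiving and discarding the
traffic, which is why the lack of real multicast matters there.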


^ permalink raw reply	[flat|nested] 29+ messages in thread

end of thread, other threads:[~2011-01-03 22:33 UTC | newest]

Thread overview: 29+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-12-06 10:24 IPoIB to Ethernet routing performance sebastien dugue
     [not found] ` <20101206112454.76bb85f1-xRPE6/W2vR9iM9LT7/dT9zWMkbuR3peG@public.gmane.org>
2010-12-06 10:49   ` Richard Croucher
2010-12-06 11:40     ` [ewg] " sebastien dugue
     [not found]       ` <20101206124023.025c2f88-xRPE6/W2vR9iM9LT7/dT9zWMkbuR3peG@public.gmane.org>
2010-12-06 12:08         ` Richard Croucher
2010-12-06 13:05           ` sebastien dugue
     [not found]             ` <20101206140505.20cfc9e2-xRPE6/W2vR9iM9LT7/dT9zWMkbuR3peG@public.gmane.org>
2010-12-09 23:46               ` Christoph Lameter
     [not found]                 ` <alpine.DEB.2.00.1012091745070.29367-sBS69tsa9Uj/9pzu0YdTqQ@public.gmane.org>
2010-12-26  7:43                   ` Ali Ayoub
     [not found]                     ` <AANLkTi=2-FJtJCY0+79wyXRszwrhRwQStDTNjRCjr66F-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2010-12-26 10:57                       ` Richard Croucher
2010-12-27 11:51                         ` Jabe
     [not found]                           ` <4D187DB1.5020005-9AbUPqfR1/2XDw4h08c5KA@public.gmane.org>
2010-12-30 17:37                             ` Richard Croucher
2010-12-28  0:06                         ` Ali Ayoub
     [not found]                           ` <AANLkTi=ausgOuEMRe622dNRzVyhN3pxPWeTAJiJihssW-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2010-12-28 15:30                             ` Reeted
     [not found]                               ` <4D1A02A8.1060104-9AbUPqfR1/2XDw4h08c5KA@public.gmane.org>
2010-12-29 15:18                                 ` Tziporet Koren
2011-01-03 22:33                       ` Christoph Lameter
2010-12-07 14:42           ` Or Gerlitz
2010-12-06 20:47   ` Jabe
     [not found]     ` <4CFD4BEE.5070205-9AbUPqfR1/2XDw4h08c5KA@public.gmane.org>
2010-12-06 21:27       ` Jason Gunthorpe
     [not found]         ` <20101206212759.GB16788-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
2010-12-07  7:39           ` sebastien dugue
     [not found]             ` <20101207083911.2ab47a59-xRPE6/W2vR9iM9LT7/dT9zWMkbuR3peG@public.gmane.org>
2010-12-07 10:02               ` Or Gerlitz
     [not found]                 ` <4CFE0638.2040105-smomgflXvOZWk0Htik3J/w@public.gmane.org>
2010-12-07 10:27                   ` sebastien dugue
     [not found]                     ` <20101207112739.0c95db46-xRPE6/W2vR9iM9LT7/dT9zWMkbuR3peG@public.gmane.org>
2010-12-07 10:33                       ` Or Gerlitz
     [not found]                         ` <4CFE0D7C.4020904-smomgflXvOZWk0Htik3J/w@public.gmane.org>
2010-12-07 11:48                           ` sebastien dugue
     [not found]                             ` <20101207124805.1fede78f-xRPE6/W2vR9iM9LT7/dT9zWMkbuR3peG@public.gmane.org>
2010-12-07 12:32                               ` Or Gerlitz
     [not found]                                 ` <4CFE2953.2020807-smomgflXvOZWk0Htik3J/w@public.gmane.org>
2010-12-07 13:19                                   ` sebastien dugue
2010-12-07 13:01                               ` Hiroyuki Sato
     [not found]                                 ` <AANLkTik2XaM6qyUUc+uzYCzKL+zKbN5kHz2Kf8So8arC-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2010-12-07 13:25                                   ` sebastien dugue
2010-12-13 16:02           ` Jabe
     [not found]             ` <4D06438C.9040307-9AbUPqfR1/2XDw4h08c5KA@public.gmane.org>
2010-12-14 16:35               ` [ewg] " Richard Croucher
2010-12-07  7:36       ` sebastien dugue

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox