netdev.vger.kernel.org archive mirror
* RE: bonding vs 802.3ad/Cisco EtherChannel link aggregation
@ 2002-09-13  1:30 Feldman, Scott
  2002-09-13 14:50 ` Boris Protopopov
  0 siblings, 1 reply; 19+ messages in thread
From: Feldman, Scott @ 2002-09-13  1:30 UTC (permalink / raw)
  To: 'David S. Miller', borisp; +Cc: linux-net, netdev

> Bonding does not help with single stream performance.
> You have to have multiple apps generating multiple streams
> of data before you'll realize any improvement.
> Therefore netpipe is a bad test for what you're doing.

The analogy that I like is to imagine a culvert under your driveway and you
want to fill up the ditch on the other side, so you stick your garden hose
in the culvert.  The rate of water flow is good, but you're just not
utilizing the volume (bandwidth) of the culvert.  So you stick your
neighbor's garden hose in the culvert.  And so on.  Now the ditch is filling
up.

So stick a bunch of garden hoses (streams) into that culvert (gigabit) and
flood it to the point of saturation, and now measure the efficiency of the
system (CPU %).  How much CPU is left to do other useful work?  The lower the
CPU utilization, the higher the efficiency of the system.

Ok, the analogy is corny, and it doesn't have anything to do with bonding,
but you'd be surprised how often this question comes up.

-scott
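
To make the multiple-streams point concrete: the extra bandwidth of a bonded or
aggregated pair only shows up when more than one conversation is in flight. A
minimal sketch of such a multi-stream sender is below (the receiver address,
port, and stream count are made-up examples; several parallel netperf or ttcp
instances would exercise the link the same way):

/*
 * Minimal multi-stream TCP blaster: fork several senders so that an
 * aggregated link has more than one conversation to spread across its
 * slaves.  The address, port and stream count are hypothetical.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/wait.h>

#define NSTREAMS 4              /* e.g. one stream per CPU or per slave */
#define CHUNK    (64 * 1024)

static void blast(const char *ip, int port)
{
    struct sockaddr_in sa;
    char buf[CHUNK];
    int s = socket(AF_INET, SOCK_STREAM, 0);

    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(port);
    inet_pton(AF_INET, ip, &sa.sin_addr);

    if (s < 0 || connect(s, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        perror("connect");
        exit(1);
    }
    memset(buf, 0xa5, sizeof(buf));
    for (;;)                    /* push data until the sender is killed */
        if (write(s, buf, sizeof(buf)) < 0)
            break;
    close(s);
}

int main(void)
{
    int i;

    for (i = 0; i < NSTREAMS; i++)
        if (fork() == 0) {
            blast("192.168.1.2", 5001);     /* hypothetical receiver */
            return 0;
        }
    for (i = 0; i < NSTREAMS; i++)
        wait(NULL);
    return 0;
}

Watching the per-slave ifconfig counters and the CPU utilization while
something like this runs shows whether the aggregate is actually being used and
how much CPU headroom is left.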

* RE: bonding vs 802.3ad/Cisco EtherChannel link aggregation
@ 2002-09-16 20:12 Yan-Fa Li
  0 siblings, 0 replies; 19+ messages in thread
From: Yan-Fa Li @ 2002-09-16 20:12 UTC (permalink / raw)
  To: 'Ben Greear', Chris Friesen; +Cc: Cacophonix, linux-net, netdev

I had similar problems with NAPI and DL2K.  I was only able to "resolve" the
issue by forcing my application and the NIC to a single CPU using CPU affinity
hacks.
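
For reference, the process half of such an affinity hack needs only a few lines
once the kernel exposes sched_setaffinity() (assuming a kernel and glibc recent
enough to have it); the NIC half is handled separately by writing a CPU mask to
/proc/irq/<irq>/smp_affinity.  A minimal sketch that pins the calling process
to CPU 0:

/*
 * Pin the calling process to CPU 0.  Assumes a kernel/glibc that
 * provide sched_setaffinity() and the CPU_SET macros (_GNU_SOURCE).
 * The NIC's interrupt is steered separately, e.g. by writing a mask
 * to /proc/irq/<irq>/smp_affinity.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t mask;

    CPU_ZERO(&mask);
    CPU_SET(0, &mask);                  /* allow CPU 0 only */

    if (sched_setaffinity(0, sizeof(mask), &mask) < 0) {
        perror("sched_setaffinity");
        return 1;
    }

    /* ... start or exec the network application from here ... */
    return 0;
}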

-----Original Message-----
From: Ben Greear [mailto:greearb@candelatech.com] 
Sent: Monday, September 16, 2002 9:10 AM
To: Chris Friesen
Cc: Cacophonix; linux-net@vger.kernel.org; netdev@oss.sgi.com
Subject: Re: bonding vs 802.3ad/Cisco EtherChannel link aggregation


Chris Friesen wrote:
> Cacophonix wrote:
> 
>>--- Chris Friesen <cfriesen@nortelnetworks.com> wrote:
> 
> 
>>>This has always confused me.  Why doesn't the bonding driver try and 
>>>spread all the traffic over all the links?
>>
>>Because then you risk heavy packet reordering within an individual 
>>flow, which can be detrimental in some cases. --karthik
> 
> 
> I can see how it could make the receiving host work more on 
> reassembly, but if throughput is key, wouldn't you still end up better 
> if you can push twice as many packets through the pipe?
> 
> Chris

Also, I notice lots of out-of-order packets on a single gigE link when
running at high speeds (SMP machine), so the kernel is still having to
reorder quite a few packets. Has anyone done any tests to see how much worse
it is with dual-port bonding?

NAPI helps my problem, but does not make it go away entirely.

Ben

> 


-- 
Ben Greear <greearb@candelatech.com>       <Ben_Greear AT excite.com>
President of Candela Technologies Inc      http://www.candelatech.com
ScryMUD:  http://scry.wanfear.com     http://scry.wanfear.com/~greear

* bonding vs 802.3ad/Cisco EtherChannel link aggregation
@ 2002-09-12 18:39 Boris Protopopov
  2002-09-12 23:34 ` David S. Miller
  0 siblings, 1 reply; 19+ messages in thread
From: Boris Protopopov @ 2002-09-12 18:39 UTC (permalink / raw)
  To: linux-net, netdev

Hi, I have a couple of questions about bonding vs. link aggregation in the GigE
space. I may be doing something wrong at the user level, or there might be
system-level issues I am not aware of.

I am trying to increase the bandwidth of the GigE connections between boxes in
my cluster by bonding or aggregating several GigE links together. The simplest
setup I have is two boxes with dual-port GigE e1000 cards connected directly to
each other with crossover cables. The boxes are running Red Hat 7.3. I can
successfully bond the links; however, I do not see any increase in Netpipe
bandwidth compared to a single GigE link. My CPU utilization is around 20%, and
I can get ~900 Mbit/s over a single link (MTU 7500). With bonding, CPU
utilization is marginally higher, and there is no increase (or even some
decrease) in the Netpipe numbers. Looking at the ifconfig eth* statistics, I
can see that the traffic is distributed evenly between the links. The cards are
plugged into a 64-bit/100 MHz PCI-X slot. Memory bandwidth seems to be
sufficient (estimated with "stream" at ~1200 MB/s). Can anybody offer an
explanation?
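
For what it is worth, evenly balanced per-slave counters are what the bonding
driver's default round-robin transmit policy would be expected to produce: it
simply cycles through the slaves packet by packet, along the lines of the
simplified sketch below (an illustration of the idea, not the actual kernel
code). Spreading a single TCP stream packet by packet tends to buy reordering
at the receiver rather than a clean doubling in a single-stream test such as
Netpipe.

#include <stdio.h>

/*
 * Simplified sketch of per-packet round-robin slave selection, in the
 * spirit of the bonding driver's default transmit mode (illustration
 * only, not the kernel code): each packet goes out the "next" slave,
 * so per-slave byte counters stay even regardless of how many TCP
 * streams are running.
 */
struct slave {
    const char *name;               /* e.g. "eth0", "eth1" */
};

struct bond {
    struct slave *slaves;
    int nslaves;
    int cur;                        /* index of the next slave to use */
};

static struct slave *rr_pick_slave(struct bond *bond)
{
    struct slave *s = &bond->slaves[bond->cur];

    bond->cur = (bond->cur + 1) % bond->nslaves;
    return s;
}

int main(void)
{
    struct slave eths[] = { { "eth0" }, { "eth1" } };
    struct bond bond = { eths, 2, 0 };
    int i;

    for (i = 0; i < 6; i++)         /* six "packets" */
        printf("packet %d -> %s\n", i, rr_pick_slave(&bond)->name);
    return 0;
}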

Another question is about link aggregation. Intel offers a "teaming" option
(with the driver and utilities) for aggregating e1000 cards in an EtherChannel-
or 802.3ad-compatible way. From the few docs I could find that have any
technical merit (the 802.3ad document is on order), it appears that
EtherChannel and 802.3ad impose some sort of Ethernet frame ordering
restrictions on the traffic spread across the aggregated links. In my case, I
can see (from the ifconfig statistics) that one link is always sending GigE
frames, whereas the other link is always receiving. Obviously, in this
arrangement, Netpipe will not benefit from the aggregation. A response I got
from Intel customer support indicated that "this is how EtherChannel works".
Can someone explain why these ordering restrictions exist? If I am not
mistaken, TCP/IP would handle out-of-order frames transparently, because it is
designed to do so.
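
For context on where the ordering restriction comes from: EtherChannel-class
aggregation keeps every frame of a given conversation on a single physical
link, typically by hashing address bits of the frame to pick the outgoing port,
so frames of one conversation can never pass each other on different links. The
sketch below shows the commonly described XOR-of-low-bits style of hash purely
as an illustration; the real hash input (MAC, IP, or L4 ports) depends on the
device and its configuration. With only one MAC pair on a back-to-back bundle,
everything in a given direction hashes to the same port, which would be
consistent with one link doing all the sending.

/*
 * Illustrative EtherChannel-style port selection: hash the low-order
 * bits of the source and destination MAC addresses and use the result
 * to choose a link.  The actual hash is device/configuration dependent;
 * this only shows why one conversation sticks to one link.
 */
#include <stdio.h>

static int pick_link(const unsigned char *src_mac,
                     const unsigned char *dst_mac, int nlinks)
{
    /* nlinks is assumed to be a power of two (2/4/8-port bundles) */
    return (src_mac[5] ^ dst_mac[5]) & (nlinks - 1);
}

int main(void)
{
    unsigned char a[6] = { 0x00, 0x02, 0xb3, 0x11, 0x22, 0x30 };
    unsigned char b[6] = { 0x00, 0x02, 0xb3, 0x44, 0x55, 0x67 };

    /* Every frame between this MAC pair maps to the same port, so
     * frame order within the conversation is preserved. */
    printf("a->b uses link %d of 2\n", pick_link(a, b, 2));
    printf("b->a uses link %d of 2\n", pick_link(b, a, 2));
    return 0;
}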

Thanks in advance for your help,
Boris Protopopov.


Thread overview: 19+ messages
2002-09-13  1:30 bonding vs 802.3ad/Cisco EtherChannel link aggregation Feldman, Scott
2002-09-13 14:50 ` Boris Protopopov
2002-09-16 20:12 Yan-Fa Li
2002-09-12 18:39 Boris Protopopov
2002-09-12 23:34 ` David S. Miller
2002-09-13 14:29   ` Chris Friesen
2002-09-13 22:22     ` Cacophonix
2002-09-16 13:23       ` Chris Friesen
2002-09-16 16:09         ` Ben Greear
2002-09-16 19:55           ` David S. Miller
2002-09-16 21:10             ` Chris Friesen
2002-09-16 21:04               ` David S. Miller
2002-09-16 21:22                 ` Chris Friesen
2002-09-16 21:17                   ` David S. Miller
2002-09-17 10:16           ` jamal
2002-09-17 16:43             ` Ben Greear
2002-09-18  1:07               ` jamal
2002-09-18  4:06                 ` Ben Greear
2002-09-18 11:48                   ` jamal
