From: Boris Protopopov <borisp@mc.com>
To: linux-net@vger.kernel.org, netdev@oss.sgi.com
Subject: bonding vs 802.3ad/Cisco EtherChannel link aggregation
Date: Thu, 12 Sep 2002 14:39:11 -0400
Message-ID: <3D80DF4F.5B218F19@mc.com>
Hi, I have a couple of questions about bonding vs. link aggregation in GigE space. I may
be doing something wrong at the user level, or there might be system-level
issues I am not aware of.
I am trying to increase the bandwidth of GigE connections between boxes in my
cluster by bonding or aggregating several GigE links together. The
simplest setup I have is two boxes with dual GigE e1000 cards connected directly to
each other with crossover cables. The boxes run Red Hat
7.3. I can successfully bond the links; however, I see no increase in
Netpipe bandwidth compared to a single GigE link. My CPU
utilization is around 20%, and I can get ~900 Mbit/s over a single link (MTU
7500). With bonding, CPU utilization is marginally higher, and there is no
increase (or even a slight decrease) in the Netpipe numbers. Looking at the ifconfig eth*
statistics, I can see that the traffic is distributed evenly between the
links. The cards sit on a 100 MHz, 64-bit PCI-X bus. Memory bandwidth seems
sufficient (estimated with "stream" at ~1200 MB/s). Can anybody offer an
explanation?
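One possible explanation (a hypothesis, not a diagnosis): the bonding driver's default round-robin mode stripes the packets of a single TCP stream across both links, and any latency skew between the links reorders segments at the receiver. TCP interprets reordering as loss (duplicate ACKs trigger fast retransmit and shrink the congestion window), which can eat the bandwidth the second link should add. A toy simulation of that effect, with purely illustrative latency numbers:

```python
# Toy model: round-robin striping of one stream across two links with
# slightly different per-packet latency. The receiver sees packets in
# arrival-time order, not sequence order, so reordering appears even
# though each link individually preserves order.

def stripe_round_robin(n_packets, link_latency):
    """Assign packet i to link i % len(link_latency); return the
    sequence numbers in the order the receiver sees them."""
    arrivals = []
    for seq in range(n_packets):
        link = seq % len(link_latency)
        # arrival time = send slot + that link's latency
        arrivals.append((seq + link_latency[link], seq))
    arrivals.sort()  # receiver drains packets in arrival-time order
    return [seq for _, seq in arrivals]

received = stripe_round_robin(10, link_latency=[0.0, 1.5])
out_of_order = sum(1 for a, b in zip(received, received[1:]) if a > b)
print(received)                         # e.g. [0, 2, 1, 4, 3, ...]
print("out-of-order events:", out_of_order)
```

With zero skew the stream arrives in order; any skew above one inter-packet gap produces the alternating reorder pattern above, and a real TCP receiver would generate duplicate ACKs for each such event.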
The other question is about link aggregation. Intel offers a "teaming" option (in
the driver and utilities) for aggregating e1000 cards in an EtherChannel- or
802.3ad-compatible way. From the few docs I could find with any technical merit
(the 802.3ad document is on order), it appears that EtherChannel and 802.3ad
impose some sort of Ethernet frame ordering restriction on the traffic spread
across the aggregated links. In my case I can see (from the ifconfig statistics)
that one link is always sending frames while the other link is always
receiving, so Netpipe obviously cannot benefit from the
aggregation. The response I got from Intel customer support was that
"this is how EtherChannel works". Can someone explain why these
ordering restrictions exist? If I am not mistaken, TCP/IP handles out-of-order
frames transparently, because it is designed to do so.
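For context, the ordering restriction exists because EtherChannel and 802.3ad guarantee per-conversation frame order: the transmitter hashes each frame's addresses to pick a member link, so all frames of one conversation always take the same link and can never be reordered by the aggregate. A sketch of that kind of distribution policy, assuming a simple MAC-XOR hash in the spirit of Cisco's default (the exact hash is implementation-defined; 802.3ad only requires that a conversation not be split across links):

```python
def choose_link(src_mac: str, dst_mac: str, n_links: int) -> int:
    """Pick a member link by XOR-ing the low bytes of the source and
    destination MAC addresses. The same (src, dst) pair always maps to
    the same link, so frames within a conversation stay in order.
    Illustrative only; real implementations choose their own hash."""
    src_low = int(src_mac.split(":")[-1], 16)
    dst_low = int(dst_mac.split(":")[-1], 16)
    return (src_low ^ dst_low) % n_links

# Two hosts connected back to back: there is only one MAC pair, so
# every frame between them hashes to the same link, and a single
# Netpipe stream can never use more than one link's worth of bandwidth.
link = choose_link("00:02:b3:01:02:03", "00:02:b3:0a:0b:0c", 2)
print("all frames for this pair use link", link)
```

This also explains the asymmetry in the ifconfig statistics: each end computes its own hash over the same address pair, so each direction is pinned to one link, and the two ends may happen to pin to different links. Aggregation of this kind only spreads load across many conversations, not within one.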
Thanks in advance for your help,
Boris Protopopov.
Thread overview: 19+ messages
2002-09-12 18:39 Boris Protopopov [this message]
2002-09-12 23:34 ` bonding vs 802.3ad/Cisco EtherChannel link aggregation David S. Miller
2002-09-13 14:29 ` Chris Friesen
2002-09-13 22:22 ` Cacophonix
2002-09-16 13:23 ` Chris Friesen
2002-09-16 16:09 ` Ben Greear
2002-09-16 19:55 ` David S. Miller
2002-09-16 21:10 ` Chris Friesen
2002-09-16 21:04 ` David S. Miller
2002-09-16 21:22 ` Chris Friesen
2002-09-16 21:17 ` David S. Miller
2002-09-17 10:16 ` jamal
2002-09-17 16:43 ` Ben Greear
2002-09-18 1:07 ` jamal
2002-09-18 4:06 ` Ben Greear
2002-09-18 11:48 ` jamal
-- strict thread matches above, loose matches on Subject: below --
2002-09-13 1:30 Feldman, Scott
2002-09-13 14:50 ` Boris Protopopov
2002-09-16 20:12 Yan-Fa Li