From: Eric Dumazet <dada1@cosmosbay.com>
To: Vladimir Ivashchenko <hazard@francoudi.com>
Cc: netdev@vger.kernel.org
Subject: Re: bond + tc regression ?
Date: Tue, 05 May 2009 18:31:47 +0200
Message-ID: <4A0069F3.5030607@cosmosbay.com>
In-Reply-To: <1241538358.27647.9.camel@hazard2.francoudi.com>
Vladimir Ivashchenko wrote:
> Hi,
>
> I have a traffic policing setup running on Linux, serving about 800 Mbps
> of traffic. Due to traffic growth I decided to employ network
> interface bonding to scale beyond a single GigE link.
>
> The Sun X4150 server has 2xIntel E5450 QuadCore CPUs and a total of four
> built-in e1000e interfaces, which I grouped into two bond interfaces.
>
> With kernel 2.6.23.1, everything worked fine, but the system locked up
> after a few days.
>
> With kernel 2.6.28.7/2.6.29.1, I get 10-20% packet loss. The loss starts as
> soon as I attach a classful qdisc, even a plain prio without any
> classes or filters. The tc statistics for the prio qdisc report lots of
> drops, around 10k per sec. With exactly the same setup on 2.6.23, the
> number of drops is only 50 per sec.
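I am assuming the qdisc was attached with something like this (bond0 and the
handle 1: are my guesses, please post the exact commands you used):

  tc qdisc add dev bond0 root handle 1: prio
  tc -s -d qdisc show dev bond0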
>
> On both kernels, the system is running with at least 70% idle CPU.
> The network interrupts are distributed across the cores.
You should not distribute interrupts, but rather bind each NIC to one CPU.
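For example, assuming eth0's rx interrupt shows up as IRQ 30 in
/proc/interrupts (the number is just a guess, check yours), something like:

  # stop irqbalance so it does not rewrite the affinity mask
  service irqbalance stop
  # pin IRQ 30 to CPU0 (bitmask 0x1)
  echo 1 > /proc/irq/30/smp_affinity

one NIC per CPU, instead of letting the interrupts spread over all cores.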
>
> I thought it was an e1000e driver issue, but tweaking the e1000e ring buffers
> didn't help. I tried using e1000 on 2.6.28 by adding the necessary PCI IDs,
> I tried running on a different server with bnx cards, and I tried disabling
> NO_HZ and HRTICK, but I still have the same problem.
>
> However, if I don't use bonding but just apply the same rules on the plain
> ethX interfaces, there is no packet loss with 2.6.28/29.
>
> So, the problem appears only when I use the 2.6.28/29 + bond + classful tc
> combination.
>
> Any ideas?
>
Yes, we need much more information :)
Is it a forwarding-only setup? Also, please post the output of:
cat /proc/interrupts
cat /proc/net/bonding/bond0
cat /proc/net/bonding/bond1
tc -s -d qdisc
mpstat -P ALL 10
ifconfig -a
and so on ...
Thread overview: 27+ messages
2009-05-05 15:45 bond + tc regression ? Vladimir Ivashchenko
2009-05-05 16:25 ` Denys Fedoryschenko
2009-05-05 16:31 ` Eric Dumazet [this message]
2009-05-05 17:41 ` Vladimir Ivashchenko
2009-05-05 18:50 ` Eric Dumazet
2009-05-05 23:50 ` Vladimir Ivashchenko
2009-05-05 23:52 ` Stephen Hemminger
2009-05-06 3:36 ` Eric Dumazet
2009-05-06 10:28 ` Vladimir Ivashchenko
2009-05-06 10:41 ` Eric Dumazet
2009-05-06 10:49 ` Denys Fedoryschenko
2009-05-06 18:45 ` Vladimir Ivashchenko
2009-05-06 19:30 ` Denys Fedoryschenko
2009-05-06 20:47 ` Vladimir Ivashchenko
2009-05-06 21:46 ` Denys Fedoryschenko
2009-05-08 20:46 ` Vladimir Ivashchenko
2009-05-08 21:05 ` Denys Fedoryschenko
2009-05-08 22:07 ` Vladimir Ivashchenko
2009-05-08 22:42 ` Denys Fedoryschenko
2009-05-17 18:46 ` Vladimir Ivashchenko
2009-05-18 8:51 ` Jarek Poplawski
2009-05-06 8:03 ` Ingo Molnar
2009-05-06 6:10 ` Jarek Poplawski
2009-05-06 10:36 ` Vladimir Ivashchenko
2009-05-06 10:48 ` Jarek Poplawski
2009-05-06 13:11 ` Vladimir Ivashchenko
2009-05-06 13:31 ` Patrick McHardy