From: Neil Horman <nhorman@tuxdriver.com>
To: Kenny Chang <kchang@athenacr.com>
Cc: netdev@vger.kernel.org
Subject: Re: Multicast packet loss
Date: Tue, 3 Feb 2009 20:21:44 -0500
Message-ID: <20090204012144.GC3650@localhost.localdomain>
In-Reply-To: <4988803E.2020009@athenacr.com>
On Tue, Feb 03, 2009 at 12:34:54PM -0500, Kenny Chang wrote:
> Eric Dumazet wrote:
>> Wes Chow wrote:
>>
>>> Eric Dumazet wrote:
>>>
>>>> Wes Chow wrote:
>>>>
>>>>> (I'm Kenny's colleague, and I've been doing the kernel builds)
>>>>>
>>>>> First I'd like to note that there were a lot of bnx2 NAPI changes
>>>>> between 2.6.21 and 2.6.22. As a reminder, 2.6.21 shows tiny amounts
>>>>> of packet loss, whereas loss in 2.6.22 is significant.
>>>>>
>>>>> Second, some CPU affinity info: if I do like Eric and pin all of the
>>>>> apps onto a single CPU, I see no packet loss. Also, I do *not* see
>>>>> ksoftirqd show up on top at all!
>>>>>
>>>>> If I pin half the processes on one CPU and the other half on another
>>>>> CPU, one ksoftirqd process shows up in top and completely pegs one
>>>>> CPU. My packet loss in that case is significant (25%).
>>>>>
>>>>> Now, the strange case: if I pin 3 processes to one CPU and 1 process
>>>>> to another, I get about 25% packet loss and ksoftirqd pins one CPU.
>>>>> However, one of the apps takes significantly less CPU than the others,
>>>>> and all apps lose the *exact same number of packets*. In all other
>>>>> situations where we see packet loss, the actual number lost per
>>>>> application instance appears random.
>>>>>
>>>> You see the same number of packets lost because they are lost at the NIC level
>>>>
>>> Understood.
>>>
>>> I have a new observation: if I pin processes to just CPUs 0 and 1, I see
>>> no packet loss. Pinning to 0 and 2, I do see packet loss. Pinning 2 and
>>> 3, no packet loss. 4 & 5 - no packet loss, 6 & 7 - no packet loss. Any
>>> other combination appears to produce loss (though I have not tried all
>>> 28 combinations, this seems to be the case).
>>>
>>> At first I thought maybe it had to do with processes pinned to the same
>>> physical CPU but different cores. The machine is a dual quad core, which
>>> means that CPUs 0-3 should be one physical CPU, correct? Pinning to 0/2
>>> and 0/3 produces packet loss.
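For reference, pinning of this kind can be done with taskset(1) from the
shell, or from inside the test program via sched_setaffinity(2). A minimal
sketch (the CPU number is a placeholder, not something from this thread):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

/* Pin the calling process to a single CPU; equivalent to starting
 * the app under "taskset -c <cpu>". */
static int pin_to_cpu(int cpu)
{
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        if (sched_setaffinity(0, sizeof(set), &set) == -1) { /* pid 0 == self */
                perror("sched_setaffinity");
                return -1;
        }
        return 0;
}

int main(int argc, char **argv)
{
        int cpu = argc > 1 ? atoi(argv[1]) : 0; /* CPU id from argv, default 0 */

        if (pin_to_cpu(cpu))
                return 1;
        /* ... run the multicast receive loop pinned to this CPU ... */
        return 0;
}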
>>>
>>
>> a quad core is really 2 x 2 cores
>>
>> The L2 cache is split into two blocks, one block used by CPU0/1, the
>> other by CPU2/3
>>
>> You are at the limit of the machine with such a workload, so as soon as
>> your CPUs have to transfer 64-byte cache lines between those two L2
>> blocks, you lose.
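The split Eric describes can be checked on kernels that expose cache
topology through sysfs; a sketch that prints which CPUs share each of
CPU0's caches (assuming the standard cacheinfo layout):

#include <stdio.h>

int main(void)
{
        char path[128], buf[64];
        int idx;

        for (idx = 0; ; idx++) {
                FILE *f;

                snprintf(path, sizeof(path),
                         "/sys/devices/system/cpu/cpu0/cache/index%d/level",
                         idx);
                f = fopen(path, "r");
                if (!f)
                        break;                        /* no more cache indices */
                if (fgets(buf, sizeof(buf), f))
                        printf("index%d: L%s", idx, buf); /* file holds e.g. "2" */
                fclose(f);

                snprintf(path, sizeof(path),
                         "/sys/devices/system/cpu/cpu0/cache/index%d/shared_cpu_map",
                         idx);
                f = fopen(path, "r");
                if (f) {
                        if (fgets(buf, sizeof(buf), f))
                                printf("  shared_cpu_map: %s", buf); /* hex CPU bitmask */
                        fclose(f);
                }
        }
        return 0;
}

On the dual quad core above, an L2 entry whose mask covers CPU0/1 but not
CPU2/3 would confirm the 2 x 2 layout.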
>>
>>
>>
>>> I've also noticed that it does not matter which of the working pairs I
>>> pin to. For example, pinning 5 processes in any combination across 0/1
>>> produces no packet loss, and pinning all 5 to just CPU 0 also produces
>>> no packet loss.
>>>
>>> The failures are also sudden. In all of the working cases mentioned
>>> above, I don't see ksoftirqd on top at all. But when I run 6 processes
>>> on a single CPU, ksoftirqd shoots up to 100% and I lose a huge number of
>>> packets.
>>>
>>>
>>>> Normally, softirq runs on the same CPU (the one handling the hard irq)
>>>>
>>> What determines which CPU the hard irq occurs on?
>>>
>>>
>>
>> Check /proc/irq/{irqnumber}/smp_affinity
>>
>> If you want IRQ 16 served only by CPU0:
>>
>> echo 1 >/proc/irq/16/smp_affinity
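The value is a hex bitmask with bit n selecting CPU n, so the same thing
can be done from C. A sketch (needs root, and note that a running
irqbalance daemon may later overwrite whatever you write here):

#include <stdio.h>

static int set_irq_affinity(int irq, unsigned int cpu)
{
        char path[64];
        FILE *f;

        snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
        f = fopen(path, "w");
        if (!f) {
                perror(path);
                return -1;
        }
        fprintf(f, "%x\n", 1u << cpu);  /* one-bit mask: this CPU only */
        fclose(f);
        return 0;
}

int main(void)
{
        return set_irq_affinity(16, 0); /* IRQ 16 -> CPU0, as in the echo above */
}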
>>
>>
> Hi everyone,
>
> First, thanks for all the effort so far. I think we've learned more
> about the problem in the last couple of days than we had in the previous
> month.
>
> Just to summarize where we are:
>
> * pinning processes to specific cores/CPUs alleviates the problem
> * issues exist from 2.6.22 up to 2.6.29-rc3
> * the issue does not appear to be isolated to 64-bit; 32-bit has
> problems too.
> * I'm attaching an updated test program with the PR_SET_TIMERSLACK call
> added (a minimal sketch of such a receiver follows this list).
> * on troubled machines, we are seeing a high number of context switches
> and interrupts.
> * we've ordered an Intel card to try in our machine to see if we can
> circumvent the issue with a different driver.
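For readers without the attachment: a minimal multicast receive loop of
the same general shape might look like the sketch below. This is not the
attached test program; the group address and port are placeholders, and
error handling is mostly elided.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/prctl.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in addr;
        struct ip_mreq mreq;
        char buf[2048];
        unsigned long count = 0;

#ifdef PR_SET_TIMERSLACK
        prctl(PR_SET_TIMERSLACK, 1);  /* the call added to the updated program */
#endif

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(12345);                   /* placeholder port */
        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) == -1)
                return 1;

        mreq.imr_multiaddr.s_addr = inet_addr("239.1.1.1"); /* placeholder group */
        mreq.imr_interface.s_addr = htonl(INADDR_ANY);
        setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));

        while (recv(fd, buf, sizeof(buf), 0) > 0)       /* count datagrams */
                if (++count % 100000 == 0)
                        printf("%lu packets\n", count);
        close(fd);
        return 0;
}

Comparing received counts against a sender's sequence numbers is what makes
the loss measurable per application instance.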
>
> Kernel Version   Has Problem?   Notes
> --------------   ------------   ---------------------------------------
> 2.6.15.x         N
> 2.6.16.x         -
> 2.6.17.x         -              Doesn't build on Hardy
> 2.6.18.x         -              Doesn't boot (kernel panic)
> 2.6.19.7         N              ksoftirqd is up there, but not pegging
>                                 a CPU; takes roughly the same amount of
>                                 CPU as the other processes, all of which
>                                 are at 20-40%
> 2.6.20.21        N
> 2.6.21.7         N              sort of lopsided load, but no load from
>                                 ksoftirqd -- strange
> 2.6.22.19        Y              First broken kernel
> 2.6.23.x         -
> 2.6.24-19        Y              (from Hardy)
> 2.6.25.x         -
> 2.6.26.x         -
> 2.6.27.x         Y              (from Intrepid)
> 2.6.28.1         Y
> 2.6.29-rc        Y
>
>
> Correct me if I'm wrong, but from what we've seen, it looks like it's
> pointing to some inefficiency in the softirq handling. The question is
> whether it's something in the driver or in the kernel. If we can isolate
> that, maybe we can take some action to get it fixed.
>
I don't think it's softirq inefficiencies (oprofile would have shown that). I
know I keep harping on this, but I still think irq affinity is your problem.
I'd be interested in knowing what your /proc/interrupts file looked like on
each of the above kernels. Perhaps it's not that the bnx2 card you have can't
handle the setting of MSI interrupt affinities, but rather that something
changed to break irq affinity on this card.
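Something as simple as "grep eth /proc/interrupts" captures this; the sketch
below does the same from C ("eth" is a placeholder for however the bnx2
interface is named on your machines):

#include <stdio.h>
#include <string.h>

int main(void)
{
        char line[512];
        FILE *f = fopen("/proc/interrupts", "r");

        if (!f)
                return 1;
        while (fgets(line, sizeof(line), f))
                if (strstr(line, "eth"))
                        fputs(line, stdout); /* one count column per CPU */
        fclose(f);
        return 0;
}

Comparing those per-CPU columns across the kernels in your table would show
whether the interrupts stay on one CPU or start bouncing around.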
Neil
>