netdev.vger.kernel.org archive mirror
* via-rhine interrupts
@ 2010-07-29 11:03 Jakub Ružička
  2010-08-15 20:04 ` Jarek Poplawski
  0 siblings, 1 reply; 4+ messages in thread
From: Jakub Ružička @ 2010-07-29 11:03 UTC (permalink / raw)
  To: netdev

[-- Attachment #1: Type: text/plain, Size: 735 bytes --]

Hello,

the cards driven by the via-rhine driver generate a very large number
of interrupts, almost one per packet (11429 interrupts for 8210 incoming
and 2475 outgoing packets per second at full 100 Mbps load). This is
observed on multiple different machines (embedded and desktop) and
kernels (2.6.25 with and without NAPI, 2.6.30, 2.6.32 and 2.6.33). Do
you have any idea why polling isn't used, or what I can try in order to
find out what's wrong?

I have tested sending to/from the machines with nc and scp, and
measured interrupts and load with atop. A few of these measurements on
an embedded device (where the interrupt handling is a problem) are
attached.

I'm not a subscriber, please Cc me if needed.

Thanks,
Jakub Ružička

[-- Attachment #2: atop_on_2.6.25.20_no_NAPI --]
[-- Type: application/octet-stream, Size: 1013 bytes --]

PRC | sys   0.06s | user   0.00s | #proc     44 | #zombie    0 | #exit    0/s
CPU | sys      0% | user      0% | irq      45% | idle     54% | wait   0% |
CPL | avg1   0.00 | avg5    0.00 | avg15   0.00 | csw     13/s | intr 10757/s
MEM | tot  247.4M | free  183.1M | cache  38.4M | buff    4.8M | slab    6.2M
SWP | tot    0.0M | free    0.0M |              | vmcom  46.0M | vmlim 123.7M
PAG | scan    0/s | stall    0/s |              | swin     0/s | swout    0/s
DSK |         sda | busy      0% | read     0/s | write    0/s | avio    0 ms
NET | transport   | tcpi     1/s | tcpo     1/s | udpi     0/s | udpo     0/s
NET | network     | ipi      1/s | ipo      1/s | ipfrw    0/s | deliv    1/s
NET | eth2    98% | pcki  8112/s | pcko     0/s | si   98 Mbps | so    0 Kbps
NET | eth1     2% | pcki  3838/s | pcko     0/s | si 2032 Kbps | so    0 Kbps
NET | eth0     0% | pcki     7/s | pcko     1/s | si    3 Kbps | so    1 Kbps
NET | lo     ---- | pcki     0/s | pcko     0/s | si    0 Kbps | so    0 Kbps

[-- Attachment #3: atop_on_2.6.25.20_with_NAPI --]
[-- Type: application/octet-stream, Size: 1014 bytes --]

PRC | sys   0.08s | user   0.00s | #proc     42 | #zombie    0 | #exit    0/s
CPU | sys      1% | user      2% | irq      47% | idle     50% | wait   0% |
CPL | avg1   0.06 | avg5    0.23 | avg15   0.12 | csw     12/s | intr 11902/s
MEM | tot  247.4M | free  189.1M | cache  35.5M | buff    2.8M | slab    6.2M
SWP | tot    0.0M | free    0.0M |              | vmcom  42.0M | vmlim 123.7M
PAG | scan    0/s | stall    0/s |              | swin     0/s | swout    0/s
DSK |         sda | busy      0% | read     0/s | write    2/s | avio    2 ms
NET | transport   | tcpi     1/s | tcpo     1/s | udpi     0/s | udpo     0/s
NET | network     | ipi      1/s | ipo      1/s | ipfrw    0/s | deliv    1/s
NET | eth2    98% | pcki  8133/s | pcko     0/s | si   98 Mbps | so    0 Kbps
NET | eth1     2% | pcki  3983/s | pcko     0/s | si 2104 Kbps | so    0 Kbps
NET | eth0     0% | pcki     3/s | pcko     1/s | si    1 Kbps | so    1 Kbps
NET | lo     ---- | pcki     0/s | pcko     0/s | si    0 Kbps | so    0 Kbps


[-- Attachment #4: atop_on_2.6.32 --]
[-- Type: application/octet-stream, Size: 1014 bytes --]

PRC | sys   0.16s | user   0.01s | #proc     50 | #zombie    0 | #exit    0/s
CPU | sys      3% | user      0% | irq      69% | idle     28% | wait      0%
CPL | avg1   0.03 | avg5    0.26 | avg15   0.15 | csw     11/s | intr 11342/s
MEM | tot  243.2M | free  180.1M | cache  36.8M | buff    2.8M | slab    6.8M
SWP | tot    0.0M | free    0.0M |              | vmcom  45.8M | vmlim 121.6M
PAG | scan    0/s | stall    0/s |              | swin     0/s | swout    0/s
DSK |         sda | busy      0% | read     0/s | write    1/s | avio    8 ms
NET | transport   | tcpi     1/s | tcpo     1/s | udpi     0/s | udpo     0/s
NET | network     | ipi      1/s | ipo      1/s | ipfrw    0/s | deliv    1/s
NET | eth2    98% | pcki  8126/s | pcko     0/s | si   98 Mbps | so    0 Kbps
NET | eth1     2% | pcki  3887/s | pcko     0/s | si 2057 Kbps | so    0 Kbps
NET | eth0     0% | pcki     1/s | pcko     1/s | si    0 Kbps | so    1 Kbps
NET | lo     ---- | pcki     0/s | pcko     0/s | si    0 Kbps | so    0 Kbps

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: via-rhine interrupts
  2010-07-29 11:03 via-rhine interrupts Jakub Ružička
@ 2010-08-15 20:04 ` Jarek Poplawski
  2010-08-19  6:49   ` David Miller
  0 siblings, 1 reply; 4+ messages in thread
From: Jarek Poplawski @ 2010-08-15 20:04 UTC (permalink / raw)
  To: Jakub Ružička; +Cc: netdev

Jakub Ružička wrote, On 29.07.2010 13:03:

> Hello,
Hi,

> the cards driven by the via-rhine driver generate a very large number
> of interrupts, almost one per packet (11429 interrupts for 8210 incoming
> and 2475 outgoing packets per second at full 100 Mbps load). This is
> observed on multiple different machines (embedded and desktop) and
> kernels (2.6.25 with and without NAPI, 2.6.30, 2.6.32 and 2.6.33). Do
> you have any idea why polling isn't used, or what I can try in order to
> find out what's wrong?
> 
> I have tested sending to/from the machines with nc and scp, and
> measured interrupts and load with atop. A few of these measurements on
> an embedded device (where the interrupt handling is a problem) are
> attached.

I've just tested this with the simplistic patch below, which skips
some NAPI receive processing by running it only on every second
jiffy (even ones), and I got around 30% fewer interrupts from
via-rhine. This suggests NAPI works OK, but the traffic is too low
(or the soft-interrupt handling too fast) to affect the hard
interrupts. (Btw, CONFIG_HZ probably matters a bit here too;
I tested with 1000.)

Cheers,
Jarek P.

--- (patch only for testing)

diff -Nurp a/net/core/dev.c b/net/core/dev.c
--- a/net/core/dev.c	2010-08-15 20:29:58.000000000 +0200
+++ b/net/core/dev.c	2010-08-15 21:15:04.000000000 +0200
@@ -3495,6 +3495,9 @@ static void net_rx_action(struct softirq
 		if (unlikely(budget <= 0 || time_after(jiffies, time_limit)))
 			goto softnet_break;
 
+		if (jiffies & 1)
+			goto softnet_break;
+
 		local_irq_enable();
 
 		/* Even though interrupts have been re-enabled, this



* Re: via-rhine interrupts
  2010-08-15 20:04 ` Jarek Poplawski
@ 2010-08-19  6:49   ` David Miller
  2010-08-19 17:09     ` Jarek Poplawski
  0 siblings, 1 reply; 4+ messages in thread
From: David Miller @ 2010-08-19  6:49 UTC (permalink / raw)
  To: jarkao2; +Cc: ruzicka.jakub, netdev

From: Jarek Poplawski <jarkao2@gmail.com>
Date: Sun, 15 Aug 2010 22:04:31 +0200

> I've just tested this with the simplistic patch below, which skips
> some NAPI receive processing by running it only on every second
> jiffy (even ones), and I got around 30% fewer interrupts from
> via-rhine. This suggests NAPI works OK, but the traffic is too low
> (or the soft-interrupt handling too fast) to affect the hard
> interrupts. (Btw, CONFIG_HZ probably matters a bit here too;
> I tested with 1000.)

100Mbit on any modern system isn't going to trigger NAPI much at all
even with near full link utilization.

The CPU simply processes the packets too fast for them to accumulate
much at all.

Some improvement in polling could be gained if the via-rhine has some
HW interrupt mitigation settings.  However after a quick perusal of
the driver I don't see anything about this.  The mitigation ethtool
ops aren't implemented either, so I'm not optimistic :-)





* Re: via-rhine interrupts
  2010-08-19  6:49   ` David Miller
@ 2010-08-19 17:09     ` Jarek Poplawski
  0 siblings, 0 replies; 4+ messages in thread
From: Jarek Poplawski @ 2010-08-19 17:09 UTC (permalink / raw)
  To: David Miller; +Cc: ruzicka.jakub, netdev

On Wed, Aug 18, 2010 at 11:49:39PM -0700, David Miller wrote:
> From: Jarek Poplawski <jarkao2@gmail.com>
> Date: Sun, 15 Aug 2010 22:04:31 +0200
> 
> > I've just tested this with the simplistic patch below, which skips
> > some NAPI receive processing by running it only on every second
> > jiffy (even ones), and I got around 30% fewer interrupts from
> > via-rhine. This suggests NAPI works OK, but the traffic is too low
> > (or the soft-interrupt handling too fast) to affect the hard
> > interrupts. (Btw, CONFIG_HZ probably matters a bit here too;
> > I tested with 1000.)
> 
> 100Mbit on any modern system isn't going to trigger NAPI much at all
> even with near full link utilization.
> 
> The CPU simply processes the packets too fast for them to accumulate
> much at all.
> 
> Some improvement in polling could be gained if the via-rhine has some
> HW interrupt mitigation settings.  However after a quick perusal of
> the driver I don't see anything about this.  The mitigation ethtool
> ops aren't implemented either, so I'm not optimistic :-)
> 

Yes, now I see that the old comments (pre 2.6.27) mentioned a 10 kpps
threshold, which wasn't reached in Jakub's example.

http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=32b0f53e5bc80b87fd20d4d78a0e0cb602c9157a

Jarek P.

