* NAPI interrupt data
From: Jeff Garzik @ 2003-02-15 7:16 UTC
To: netdev, linux-net
[-- Attachment #1: Type: text/plain, Size: 817 bytes --]
I looked at my latest tg3 driver's activity in /proc/interrupts and was
a bit surprised. Using "ttcp" to send 500,000 bursts from a
uniprocessor P3 ("hum") to a dual Athlon ("crumb"), I recorded the
interrupts using the simple
while true
do
        cat /proc/interrupts >> data
        sleep 1
done
method. On hum, eth0 shared interrupts with acpi. On crumb, eth0
shared interrupts with the potentially-skewing aic7xxx. The results
for tg3 [NAPI], with one ttcp process on otherwise unloaded boxes, are
the following, in "approximate packets per second":
bash-2.05b$ ./x.pl data.crumb
135 samples, 21578 avg
bash-2.05b$ ./x.pl data.hum
130 samples, 11213 avg
The raw sample data and the compute-the-average Perl script were so small
that I simply attached them to this email. Feel free to check my math
for something dumb.
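The script itself travels only as an attachment; a minimal C sketch of
what an x.pl-style averager has to compute, assuming the samples are
taken one second apart (the counter values below are made up, not taken
from data.crumb or data.hum):

/* average interrupts-per-second from successive /proc/interrupts
 * snapshots of the eth0 counter, sampled ~1 second apart */
#include <stdio.h>

static double avg_rate(const unsigned long *counts, int n)
{
	double sum = 0.0;
	int i;

	if (n < 2)
		return 0.0;
	for (i = 1; i < n; i++)		/* mean of the successive deltas... */
		sum += counts[i] - counts[i - 1];
	return sum / (n - 1);		/* ...equals (last - first) / (n - 1) */
}

int main(void)
{
	/* made-up sample counters, not from the attached data files */
	unsigned long counts[] = { 100000, 121500, 143200, 164800 };
	int n = sizeof(counts) / sizeof(counts[0]);

	printf("%d samples, %.0f avg\n", n, avg_rate(counts, n));
	return 0;
}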
Jeff
[-- Attachment #2: interrupt-data.tar.bz2 --]
[-- Type: application/octet-stream, Size: 1528 bytes --]
* Re: NAPI interrupt data
From: Jeff Garzik @ 2003-02-15 7:24 UTC
To: netdev, linux-net
Jeff Garzik wrote:
> "approximate packets per second":
>
> bash-2.05b$ ./x.pl data.crumb
> 135 samples, 21578 avg
> bash-2.05b$ ./x.pl data.hum
> 130 samples, 11213 avg
Er, I meant _interrupts_ per second.
* Re: NAPI interrupt data
From: jamal @ 2003-02-15 14:34 UTC
To: Jeff Garzik; +Cc: netdev, linux-net
On Sat, 15 Feb 2003, Jeff Garzik wrote:
> bash-2.05b$ ./x.pl data.crumb
> 135 samples, 21578 avg
> bash-2.05b$ ./x.pl data.hum
> 130 samples, 11213 avg
>
Probably the first 5-10 samples as well as the last 5-10 samples to get
more accuracy.
This data looks fine, no? Definitely the SCSI device is skewing things
(you are writing data to disk, for example).
- The 500Kpps from ttcp doesn't sound right; TCP will slow you down.
Perhaps use ttcp to send UDP packets to get a more interesting view.
cheers,
jamal
* Re: NAPI interrupt data
From: Jeff Garzik @ 2003-02-15 18:55 UTC
To: jamal; +Cc: netdev, linux-net
jamal wrote:
>
> On Sat, 15 Feb 2003, Jeff Garzik wrote:
>
>
>>bash-2.05b$ ./x.pl data.crumb
>>135 samples, 21578 avg
>>bash-2.05b$ ./x.pl data.hum
>>130 samples, 11213 avg
>>
>
>
> Probably the first 5-10 samples as well as the last 5-10 samples to get
> more accuracy.
>
> This data looks fine, no?
Over 4000 interrupts per second was not something I was hoping for, to
be honest. ttcp did not even report 50% CPU utilization, so I reach the
conclusion that both machines can handle well in excess of 4,000
interrupts per second... but overall I do not like the unbounded nature
of the interrupt rate. This data makes me lean towards a software[NAPI]
+ hardware mitigation solution, as opposed to totally depending on
software interrupt mitigation.
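A rough sketch of such a combination, in 2.5-era driver terms, follows;
netif_rx_schedule()/netif_rx_complete() are the NAPI entry points of the
day, while every mynic_* function and the coalescing numbers are
hypothetical placeholders, not tg3 code:

#include <linux/netdevice.h>
#include <linux/interrupt.h>

/* hypothetical hardware helpers -- stand-ins for real register I/O */
static void mynic_disable_rx_irq(struct net_device *dev);
static void mynic_enable_rx_irq(struct net_device *dev);
static void mynic_set_rx_coalesce(struct net_device *dev, int frames, int usecs);
static int mynic_rx(struct net_device *dev, int limit);

static void mynic_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
	struct net_device *dev = dev_id;

	mynic_disable_rx_irq(dev);	/* mask further RX interrupts */
	netif_rx_schedule(dev);		/* defer the work to dev->poll */
}

static int mynic_poll(struct net_device *dev, int *budget)
{
	int limit = *budget < dev->quota ? *budget : dev->quota;
	int done = mynic_rx(dev, limit);	/* drain up to 'limit' frames */

	*budget -= done;
	dev->quota -= done;

	if (done < limit) {
		netif_rx_complete(dev);
		/* the hardware-mitigation half: once interrupts are back
		 * on, ask the NIC for at most one IRQ per N frames or per
		 * T microseconds (both numbers arbitrary here) */
		mynic_set_rx_coalesce(dev, 10 /* frames */, 150 /* usecs */);
		mynic_enable_rx_irq(dev);
		return 0;		/* ring drained: back to interrupt mode */
	}
	return 1;			/* still busy: ask to be polled again */
}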
> Definitely the SCSI device is skewing things
> (you are writing data to disk, for example).
Yes, though only once every 5 seconds, when ext3 flushes. With nothing else
going on but "ttcp" and "cat /proc/interrupts >> data ; sleep 1" there
should be very little disk I/O. I agree it is skewing by an unknown
factor, however.
> - The 500Kpps from ttcp doesn't sound right; TCP will slow you down.
> Perhaps use ttcp to send UDP packets to get a more interesting view.
No, I ran 500,000 buffer I/Os total from ttcp ("-n 500000"). That
doesn't really say anything about packets per second. The only thing I
measured was interrupts per second. It was my mistake to type "packets"
in the first email :/
Jeff
* Re: NAPI interrupt data
From: jamal @ 2003-02-15 22:14 UTC
To: Jeff Garzik; +Cc: netdev, linux-net
On Sat, 15 Feb 2003, Jeff Garzik wrote:
> jamal wrote:
> >
> > On Sat, 15 Feb 2003, Jeff Garzik wrote:
> >
> >
> > Probably the first 5-10 samples as well as the last 5-10 samples to get
> > more accuracy.
> >
I actually meant to say ignore those first 5-10 and last 5-10 samples --
looking at your data that wouldn't have made a big difference.
> > This data looks fine, no?
>
> Over 4000 interrupts per second was not something I was hoping for, to
> be honest. ttcp did not even report 50% CPU utilization, so I reach the
> conclusion that both machines can handle well in excess of 4,000
> interrupts per second... but overall I do not like the unbounded nature
> of the interrupt rate. This data makes me lean towards a software[NAPI]
> + hardware mitigation solution, as opposed to totally depending on
> software interrupt mitigation.
>
Well, it is not "unbounded" per se.
It scales according to the CPU capacity. For any CPU there is an upper
limit on the input rate beyond which the device would forever remain in
polling mode. If this limit is exceeded, say on bootup, and a million
packets are received in a burst, then you'll probably see only one
interrupt for the million packets. If you remove that processor and put
a faster one in the same motherboard, you should see more than one
interrupt being processed.
Therefore there is an upper bound on the interrupt rate, and it is
dependent on the CPU capacity (not to ignore other factors like PCI bus
speed and memory bandwidth; CPU capacity plays a much bigger role
though).
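A toy model of that bound (the numbers and the saturation behaviour are
illustrative assumptions, not measurements):

#include <stdio.h>

/* below the CPU's poll service rate, the worst case is one interrupt
 * per packet; at or above it the device never leaves polling mode, so
 * the interrupt rate collapses instead of growing without bound */
static double napi_irq_rate(double input_pps, double service_pps)
{
	if (input_pps >= service_pps)
		return 1.0;	/* roughly one interrupt per long burst */
	return input_pps;	/* worst case below saturation */
}

int main(void)
{
	printf("4 Kpps into a 50 Kpps CPU:  ~%.0f irq/sec\n",
	       napi_irq_rate(4000, 50000));
	printf("80 Kpps into a 50 Kpps CPU: ~%.0f irq/sec\n",
	       napi_irq_rate(80000, 50000));
	return 0;
}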
Mitigation is valuable when the cost of PCI I/O per packet is something
that is bothersome. It becomes bothersome if the rate of input packets is
such that you end up processing one packet per interrupt; as you
yourself have pointed out in the past, the cost of PCI I/O per packet is
high with NAPI.
Of course, the cost of PCI I/O per packet shows up in the observed CPU
load. On slow CPUs this is clearly visible; Manfred's results, for
example, demonstrated this. I also saw up to 8% more CPU with NAPI at a
10 Kpps input rate. On a fast CPU that will probably show up as 0.5% more
load (so the question is: who cares?).
What mitigation would do in the above case is amortize the cost of
PCI I/O per packet. Instead of one packet, for the same PCI cost you now
get 2, etc.
Mitigation becomes useless at higher input rates.
In summary: adding mitigation helps in the low-rate case and doesn't harm
in the high-rate case.
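The arithmetic behind the amortization claim, with an assumed (not
measured) per-interrupt PCI cost:

#include <stdio.h>

int main(void)
{
	double cost_per_irq_us = 10.0;	/* assumed PCI I/O cost, usecs */
	int pkts;

	/* each doubling of packets-per-interrupt halves the per-packet
	 * share of the PCI I/O overhead */
	for (pkts = 1; pkts <= 8; pkts *= 2)
		printf("%d pkt/irq -> %.2f usec PCI overhead per packet\n",
		       pkts, cost_per_irq_us / pkts);
	return 0;
}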
BTW, 4k interrupts/sec is a very small rate.
Try sending 5 or 6 ttcp flows instead of one and observe.
>
> > Definitely the SCSI device is skewing things
> > (you are writing data to disk, for example).
>
> Yes, though only once every 5 seconds, when ext3 flushes. With nothing else
> going on but "ttcp" and "cat /proc/interrupts >> data ; sleep 1" there
> should be very little disk I/O. I agree it is skewing by an unknown
> factor, however.
>
There aren't that many interrupts, so nothing to worry about there.
Of course, if you want cleaner results, don't share interrupts, or
collect the data from the driver instead.
>
> > - The 500Kpps from ttcp doesn't sound right; TCP will slow you down.
> > Perhaps use ttcp to send UDP packets to get a more interesting view.
>
>
> No, I ran 500,000 buffer I/Os total from ttcp ("-n 500000"). That
> doesn't really say anything about packets per second. The only thing I
> measured was interrupts per second. It was my mistake to type "packets"
> in the first email :/
>
Hit it with 10 ttcps instead, or send 2 or so UDP ttcp flows. It starts
getting interesting then...
cheers,
jamal