From: jamal <hadi@cyberus.ca>
To: James Chapman <jchapman@katalix.com>
Cc: netdev@vger.kernel.org, davem@davemloft.net, jeff@garzik.org,
	mandeep.baines@gmail.com, ossthema@de.ibm.com,
	Stephen Hemminger <shemminger@osdl.org>
Subject: Re: RFC: possible NAPI improvements to reduce interrupt rates for low traffic rates
Date: Fri, 07 Sep 2007 09:22:50 -0400
Message-ID: <1189171370.4234.38.camel@localhost>
In-Reply-To: <46E11A61.9030409@katalix.com>

On Fri, 2007-09-07 at 10:31 +0100, James Chapman wrote:
> Not really. I used 3-year-old, single CPU x86 boxes with e100 
> interfaces. 
> The idle poll change keeps them in polled mode. Without idle 
> poll, I get twice as many interrupts as packets, one for txdone and one 
> for rx. NAPI is continuously scheduled in/out.

Your boxes are certainly faster than the machine in the paper (which was
about two years old in 2005). I could never get ping -f to do that on my
hardware, so things must be getting worse with newer machines, then.
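
For anyone who has not looked at a NAPI driver lately, the pattern James
is describing looks roughly like this - an illustrative sketch only, not
his patch; the foo_* names are invented placeholders, and it uses today's
napi_* names rather than the netif_rx_* names of that era:

#include <linux/kernel.h>
#include <linux/netdevice.h>

struct foo_adapter {
        struct napi_struct napi;
        /* ... rings, registers, etc. ... */
};

/* Placeholders standing in for real driver code. */
static int foo_clean_rx(struct foo_adapter *adap, int budget);
static void foo_clean_tx(struct foo_adapter *adap);
static void foo_irq_enable(struct foo_adapter *adap);

static int foo_poll(struct napi_struct *napi, int budget)
{
        struct foo_adapter *adap =
                container_of(napi, struct foo_adapter, napi);
        int work_done;

        work_done = foo_clean_rx(adap, budget); /* rx up to budget packets */
        foo_clean_tx(adap);                     /* reap tx-done work */

        if (work_done < budget) {
                /* Ring empty: leave polled mode and unmask the IRQ.
                 * At low rates this branch runs for nearly every packet,
                 * which is where the interrupt-per-packet (plus txdone)
                 * cost comes from; the idle-poll change keeps the device
                 * in polled mode for a while instead of completing here. */
                napi_complete(napi);
                foo_irq_enable(adap);
        }
        return work_done;
}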

> No. Since I did a flood ping from the machine under test, the improved 
> latency meant that the ping response was handled more quickly, causing 
> the next packet to be sent sooner. So more packets were transmitted in 
> the allotted time (10 seconds).

ok.

> With current NAPI:
> rtt min/avg/max/mdev = 0.902/1.843/101.727/4.659 ms, pipe 9, ipg/ewma 1.611/1.421 ms
> 
> With idle poll changes:
> rtt min/avg/max/mdev = 0.898/1.117/28.371/0.689 ms, pipe 3, ipg/ewma 1.175/1.236 ms

Not bad in terms of latency. The deviation certainly looks better.
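
A rough cross-check, taking the ipg above as the mean inter-packet gap
over the 10 second run mentioned earlier:

  current NAPI:      10 s / 1.611 ms ~= 6200 echo requests
  idle poll changes: 10 s / 1.175 ms ~= 8500 echo requests

so roughly a third more packets in the same window, which lines up with
"more packets were transmitted in the allotted time".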

> But the CPU has done more work. 

I am going to play devil's advocate[1]:
If the problem I am trying to solve is "reduce CPU use at low rates",
then this is not the right answer, because your CPU use has gone up.
Your latency numbers have not improved that much (looking at the avg),
and your throughput is not that much higher. Am I willing to pay more
CPU (on top of NAPI's already piggish CPU use at that rate, with 2
interrupts per packet)?

Another test: try a simple (non-flood) ping and compare the RTTs.

> The problem I started thinking about was the one where NAPI thrashes 
> in/out of polled mode at higher and higher rates as network interface 
> speeds and CPU speeds increase. A flood ping demonstrates this even on 
> 100M links on my boxes. 

The state of average hardware out there must be getting worse, then.
It would be a worthwhile exercise to compare on an even faster machine
and see what transpires there.
 
> Networking boxes want consistent 
> performance/latency for all traffic patterns and they need to avoid 
> interrupt livelock. Current practice seems to be to use hardware 
> interrupt mitigation or timers to limit interrupt rate but this just 
> hurts latency, as you noted. So I'm trying to find a way to limit the 
> NAPI interrupt rate without increasing latency. My comment about this 
> approach being suitable for routers and networked servers is that these 
> boxes care more about minimizing packet latency than they do about 
> wasting CPU cycles by polling idle devices.

I think the argument of "who cares about a little more CPU" is valid
for the case of routers. It is a double-edged sword, though, because it
applies both to "who cares if NAPI uses a little more CPU at low rates"
and to "who cares if James turns on polling and uses even more CPU".
Since NAPI is the incumbent, the onus is on you to do better. You must
do better, sir!

Look at the timers, she said - that way you may be able to cut the CPU
abuse.
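
To make that concrete, one possible reading (an illustrative sketch only,
not a tested patch; again the foo_* names are invented placeholders, and
the hrtimer/NAPI calls are today's stock API): instead of staying in
polled mode while idle, complete NAPI but hold the interrupt masked for a
short period with a timer, which caps the interrupt rate without burning
CPU in an idle poll loop.

#include <linux/hrtimer.h>
#include <linux/ktime.h>
#include <linux/kernel.h>
#include <linux/netdevice.h>

struct foo_adapter {
        struct napi_struct napi;
        struct hrtimer irq_holdoff;     /* assumed set up elsewhere with
                                         * hrtimer_init() and its callback
                                         * pointed at foo_holdoff_expired */
        /* ... device state ... */
};

static int foo_clean_rx(struct foo_adapter *adap, int budget);  /* placeholder */
static void foo_irq_enable(struct foo_adapter *adap);           /* placeholder */

static enum hrtimer_restart foo_holdoff_expired(struct hrtimer *t)
{
        struct foo_adapter *adap =
                container_of(t, struct foo_adapter, irq_holdoff);

        /* Holdoff over: unmask the interrupt.  A real driver might instead
         * napi_schedule() here to catch anything that arrived while the
         * IRQ was masked. */
        foo_irq_enable(adap);
        return HRTIMER_NORESTART;
}

static int foo_poll(struct napi_struct *napi, int budget)
{
        struct foo_adapter *adap =
                container_of(napi, struct foo_adapter, napi);
        int work_done = foo_clean_rx(adap, budget);

        if (work_done < budget) {
                napi_complete(napi);
                /* Leave the IRQ masked for a while instead of unmasking it
                 * immediately; 100 us is an arbitrary example value, and
                 * this delay is exactly the latency/CPU trade-off being
                 * argued about in this thread. */
                hrtimer_start(&adap->irq_holdoff, ns_to_ktime(100 * 1000),
                              HRTIMER_MODE_REL);
        }
        return work_done;
}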

cheers,
jamal

[1] historically the devil's advocate was a farce really ;->

