From: Jeff Garzik <jgarzik@pobox.com>
To: netdev@oss.sgi.com, linux-net@vger.kernel.org
Cc: davem@redhat.com
Subject: NAPI vs. interrupts
Date: Sat, 11 Jan 2003 17:08:26 -0500
Message-ID: <20030111220826.GA10085@gtf.org>
I am occasionally seeing tg3 reports that show a fair number of
interrupts per second, even though tg3 is 100% NAPI.

It seems to me that as machines get faster and amounts of memory
increase [xlat: less waiting for free RAM in all parts of the kernel,
and fewer GFP_ATOMIC allocation failures], the likelihood increases
that a NAPI driver can process 100% of the RX and TX work without
having to request subsequent iterations of dev->poll().
NAPI's benefits kick in when there is some amount of system load.
However, if the box is fast enough to eliminate the cases where system
load would otherwise exist (interrupt and packet processing overhead),
the NAPI "worst case" kicks in, where a NAPI driver _always_ does

	ack some irqs
	mask irqs
	ack some more irqs
	process events
	unmask irqs
whereas a non-NAPI driver _always_ does

	ack irqs
	process events
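
To make the contrast concrete, here is a minimal sketch of the two
handlers against the 2.5 driver API. The register offsets, the
nic_priv layout, and the nic_rx()/nic_tx() helpers are hypothetical
placeholders, not any real NIC's programming model:

#include <linux/netdevice.h>
#include <linux/interrupt.h>
#include <asm/io.h>

#define REG_IRQ_STATUS	0x00	/* hypothetical register layout */
#define REG_IRQ_MASK	0x04

struct nic_priv {
	void *mmio;		/* mapped register window */
};

static void nic_rx(struct net_device *dev);	/* hypothetical ring walkers */
static void nic_tx(struct net_device *dev);

/* NAPI path: ack + mask here, defer event processing to dev->poll(),
 * which unmasks REG_IRQ_MASK again once it runs out of events. */
static void napi_nic_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
	struct net_device *dev = dev_id;
	struct nic_priv *np = dev->priv;
	u32 status = readl(np->mmio + REG_IRQ_STATUS);

	if (!status)
		return;
	writel(status, np->mmio + REG_IRQ_STATUS);	/* ack */
	writel(0, np->mmio + REG_IRQ_MASK);		/* mask */
	netif_rx_schedule(dev);		/* poll later, then unmask */
}

/* Classical path: ack, then process events right here. */
static void old_nic_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
	struct net_device *dev = dev_id;
	struct nic_priv *np = dev->priv;
	u32 status = readl(np->mmio + REG_IRQ_STATUS);

	if (!status)
		return;
	writel(status, np->mmio + REG_IRQ_STATUS);	/* ack */
	nic_rx(dev);
	nic_tx(dev);
}

The point is structural: the NAPI handler always takes the
mask/schedule/unmask round trip, even when there is only one packet's
worth of work waiting.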
When there is load, the obvious NAPI benefits kick in. However, on
super-fast servers, SMP boxes, etc., it seems likely to me that one can
receive well in excess of 1,000 interrupts per second, simply because
the box is so fast that it can run thousands of iterations of the NAPI
"worst case" above.
The purpose of this email is to solicit suggestions for a strategy to
fix what I believe is a problem with NAPI.
Here are some comments of mine:
1) Can this problem be alleviated entirely without driver changes? For
example, would it be reasonable to do pkts-per-second sampling in the
net core, and enable software mitigation based on that? (A rough
sketch follows.)
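
One off-the-cuff shape this could take, if the core kept a per-device
packets-per-second estimate. All names here (pps_sample,
PKTS_SEC_THRESH, the helpers) are hypothetical, not existing net core
API:

#include <linux/jiffies.h>

#define PKTS_SEC_THRESH	1000	/* arbitrary; would need tuning */

struct pps_sample {
	unsigned long stamp;	/* start of current 1-second window */
	unsigned long pkts;	/* packets seen in this window */
	unsigned long rate;	/* pkts/sec measured in the last window */
};

/* Called once per received packet from the softirq path. */
static inline void pps_update(struct pps_sample *s)
{
	if (time_after(jiffies, s->stamp + HZ)) {
		s->rate = s->pkts;
		s->pkts = 0;
		s->stamp = jiffies;
	}
	s->pkts++;
}

/* The core could consult this to decide whether dev->poll() should
 * re-enable interrupts or stay in polled mode for another round. */
static inline int pps_wants_polling(struct pps_sample *s)
{
	return s->rate > PKTS_SEC_THRESH;
}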
2) Implement hardware interrupt mitigation in addition to NAPI. Either
the driver does adaptive sampling, or it simply hard-locks the
mitigation settings at something that averages out to N pkts per
second. (Sketch below.)
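
The hard-lock variant is just a one-time register setup at open() time.
This reuses the hypothetical nic_priv and mmio window from the first
sketch; the coalescing registers and values below are likewise made up:

#define REG_RX_COAL_TICKS	0x08	/* usec to wait before raising irq */
#define REG_RX_COAL_FRAMES	0x0c	/* ...or after this many frames */

static void nic_lock_mitigation(struct nic_priv *np)
{
	/* e.g. at most one RX irq per ~150us, or per 10 frames,
	 * whichever comes first; "reasonable" values come from testing */
	writel(150, np->mmio + REG_RX_COAL_TICKS);
	writel(10, np->mmio + REG_RX_COAL_FRAMES);
}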
3) Implement an alternate driver path that follows the classical,
non-NAPI interrupt handling path in addition to NAPI, by logic similar
to this [warning: off the cuff and not analyzed... i.e. just an idea]:

	ack irqs
	call dev->poll() from irq handler
		[processes events until budget runs out,
		 or available events are all processed]
	if budget ran out,
		mask irqs
		netif_rx_schedule()
[this, #3, does not address the irq-per-sec problem directly, but it
does lessen the effect of the "worst case"; a sketch follows]
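
Against the 2.5 API, where dev->poll() takes a budget pointer and
returns nonzero while work remains, the idea might look like the code
below. Again this reuses the hypothetical nic_priv/REG_* names, and
note that real dev->poll() implementations assume softirq context, so
this is strictly the unanalyzed idea above, not working code:

static void hybrid_nic_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
	struct net_device *dev = dev_id;
	struct nic_priv *np = dev->priv;
	int budget = dev->weight;
	u32 status = readl(np->mmio + REG_IRQ_STATUS);

	if (!status)
		return;
	writel(status, np->mmio + REG_IRQ_STATUS);	/* ack */

	/* nonzero return: budget ran out with events still pending */
	if (dev->poll(dev, &budget)) {
		writel(0, np->mmio + REG_IRQ_MASK);	/* mask */
		netif_rx_schedule(dev);		/* finish via softirq */
	}
}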
Anyway, for tg3 specifically, I am leaning towards the latter part of
#2: hard-locking the mitigation settings at something that tests prove
is "reasonable". In heavy-load situations NAPI will kick in as
expected, and perform its magic ;-)
Comments/feedback requested.
Jeff