From: Grant Grundler <grundler@parisc-linux.org>
To: jamal <hadi@cyberus.ca>
Cc: Lennert Buytenhek <buytenh@wantstofly.org>,
Robert Olsson <Robert.Olsson@data.slu.se>,
netdev@oss.sgi.com, Grant Grundler <grundler@parisc-linux.org>
Subject: Re: pktgen
Date: Mon, 29 Nov 2004 09:57:50 -0700
Message-ID: <20041129165750.GA11413@colo.lackof.org>
In-Reply-To: <1101736191.1044.196.camel@jzny.localdomain>
On Mon, Nov 29, 2004 at 08:49:52AM -0500, jamal wrote:
> On Sun, 2004-11-28 at 13:31, Lennert Buytenhek wrote:
> > Indeed. Right now it feels like I'm just poking around in the dark. I'm
> > really interested by now in finding out exactly what part of packet TX is
> > taking how long and where all my cycles are going.
The ia64 PMU can measure exactly where and why the CPU is stalling.
MMIO reads are by far the worst offenders - but not the only ones.
Pipeline "bubbles" can be caused by lots of other kinds of
stalls and will hurt CPU utilization as well.
A very nice description of CPU stalls caused by the memory subsystem is here:
http://www.gelato.org/pdf/mysql_itanium2_perf.pdf
Gelato.org, sgi.com, intel.com, and hp.com have more white papers
on ia64 performance tools and tuning.
> > I don't have an Itanic but it's still possible to instrument the driver
> > and do some stuff Grant talks about in his OLS paper, something like the
> > attached. (Exports # of MMIO reads/writes/flushes in the RX frame/
> > TX carrier/collision stats field. Beware, flushes are double-counted
> > as reads. Produces lots of output.)
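(For anyone who wants to reproduce that kind of instrumentation: the sketch
below is my own illustration of the idea, not Lennert's attached patch -
the counter and wrapper names are made up. The counts can then be copied
into otherwise-unused net_device_stats fields, as described above.)

#include <linux/types.h>
#include <asm/io.h>

/* Illustration only: count MMIO accesses so they can be exported
 * through otherwise-unused stats fields.  Not SMP-safe, but good
 * enough for rough per-run counts. */
static unsigned long mmio_reads, mmio_writes, mmio_flushes;

static inline u32 counted_readl(void __iomem *addr)
{
	mmio_reads++;
	return readl(addr);
}

static inline void counted_writel(u32 val, void __iomem *addr)
{
	mmio_writes++;
	writel(val, addr);
}

static inline void counted_flushl(void __iomem *addr)
{
	mmio_flushes++;
	(void)counted_readl(addr);	/* flushes are double-counted as reads */
}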
I'd be happy to give you access to an IA64 machine to poke at.
If you can send me the following, I'll set you up:
o preferred login
o public ssh key
o work telephone #
BTW, Jamal, I'm expecting we'll be able to get Robur an RX2600
to play with this quarter. I need to ask about that again.
> > During a 10Mpkt pktgen session (~16 seconds), I'm seeing:
> > - 131757 interrupts, ~8k ints/sec, ~76 pkts/int
> > - 131789 pure MMIO reads (i.e. not counting MMIO reads intended as write
> > flushes), which is E1000_READ_REG(icr) in the irq handler
> > - 10263536 MMIO writes (which would be 1 per packet plus 2 per interrupt)
> > - 131757 MMIO write flushes (readl() of the e1000 status register after
> > re-enabling IRQs in dev->poll())
> >
> > Pretty consistent with what Grant was seeing.
yup.
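(Sanity-checking those numbers, just arithmetic on what's quoted above:
131757 ints / ~16 sec is ~8.2k ints/sec, and 10M pkts / 131757 ints is
~76 pkts/int.  10,000,000 writes (1/pkt) + 2*131,757 (2/int) = 10,263,514,
within a couple dozen of the observed 10,263,536; the pure-read count is
within a few dozen of one ICR read per interrupt; and the flush count is
exactly one per interrupt.)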
> >
> > MMIO reads from the e1000 are somewhere between 2000 and 3000 cycles a
> > pop on my hardware. 2400MHz CPU -> ~1us/each. (Reading netdevice stats
> > does ~50 of those in a row.)
> >
>
> Reads are known to be expensive. Good to see how much they are reduced.
> Not sure if this applies to MMIO reads though. Grant?
I don't differentiate between "pure" MMIO reads and posted MMIO write
flushes. They cost the same AFAIK. If one can tweak the algorithm so
that either can be avoided, it's a win.
But I didn't see any opportunity to do that in the e1000 driver.
There is such an opportunity in tg3, though. I just won't have
a chance to pursue it. :^(
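(For readers who haven't run into the idiom: a posted MMIO write flush is
just a read issued after a write, so the CPU waits until the posted write
has actually reached the device.  Generic sketch below, with placeholder
register names - not the actual e1000 or tg3 code:)

/* PCI MMIO writes are posted: writel() returns as soon as the write is
 * queued toward the device.  When the driver must know the write has
 * landed (e.g. before re-enabling interrupts in dev->poll()), it reads
 * any device register back - and pays the full MMIO read latency.
 * MY_* offsets/masks are placeholders, not real e1000 definitions. */
static inline void enable_irqs_and_flush(void __iomem *regs)
{
	writel(MY_INTR_MASK, regs + MY_REG_IMS);  /* posted, returns immediately */
	(void)readl(regs + MY_REG_STATUS);        /* flush: stalls until device replies */
}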
The absolute cost in CPU cycles of an MMIO read will depend on the
chipset, the CPU speed, and the number of bridges the transaction has
to cross. On an idle 1GHz system, I've measured ~1000-1200 cycles.
Measured in time (not CPU cycles), the cost hasn't changed much
in the past 6-8 years (mostly 66MHz PCI buses).
Adding or removing a PCI-PCI bridge is the biggest variable in
absolute time.
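(To put rough numbers on it: at 2.4GHz, 2400 cycles is 1us, so Lennert's
2000-3000 cycles per read works out to ~0.8-1.25us, and a stats dump doing
~50 back-to-back reads stalls for roughly 50us.  My ~1000-1200 cycles on a
1GHz box is the same ~1-1.2us - the time is set mostly by the bus and
bridges, not the CPU clock.)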
thanks,
grant