public inbox for linux-kernel@vger.kernel.org
From: Ingo Molnar <mingo@elte.hu>
To: Karim Yaghmour <karim@opersys.com>
Cc: Kristian Benoit <kbenoit@opersys.com>,
	linux-kernel@vger.kernel.org, paulmck@us.ibm.com, bhuey@lnxw.com,
	tglx@linutronix.de, pmarques@grupopie.com, bruce@andrew.cmu.edu,
	nickpiggin@yahoo.com.au, ak@muc.de, sdietrich@mvista.com,
	dwalker@mvista.com, hch@infradead.org, akpm@osdl.org,
	rpm@xenomai.org
Subject: Re: PREEMPT_RT and I-PIPE: the numbers, part 4
Date: Mon, 11 Jul 2005 09:05:16 +0200	[thread overview]
Message-ID: <20050711070516.GA2238@elte.hu> (raw)
In-Reply-To: <42CFEFC9.7070007@opersys.com>


* Karim Yaghmour <karim@opersys.com> wrote:

> With ping floods, as with other things, there is room for improvement, 
> but keep in mind that these are standard tests [...]

The problem is that 'ping -f' isn't what it used to be. If you are using a 
recent distribution with an updated ping utility, these days the 
equivalent of 'ping -f' is something like:

	ping -q -l 500 -A -s 10 <target>

and even this variant (and the old variant) needs to be carefully 
validated for the actual workload it generates. Note that this is true 
for workloads against vanilla kernels too. (Also note that I did not 
claim that the flood-ping workload you used is invalid - you have not 
published packet rates or interrupt rates that could help us judge how 
constant the workload was. I only said that according to my measurements 
it's quite unstable, and that you should double-check it. Just running 
it and confirming that the packet rates are stable and identical amongst 
all of these kernels would be enough to put this concern to rest.)
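One quick way to check how constant the workload is would be to sample 
the NIC's interrupt count from /proc/interrupts during the run. (A 
sketch only - "eth0" and the one-second interval are assumptions; 
substitute the interface actually taking the pings.)

```shell
# Sample the NIC's interrupt count twice, one second apart.
# "eth0" is an assumption; use the interface receiving the flood ping.
before=$(awk '/eth0/ { sum += $2 } END { print sum + 0 }' /proc/interrupts)
sleep 1
after=$(awk '/eth0/ { sum += $2 } END { print sum + 0 }' /proc/interrupts)
echo "interrupts/sec: $((after - before))"
```

Repeating this during the benchmark on each kernel and comparing the 
rates would show whether the flood-ping load really was identical.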

To see why I think there might be something wrong with the measurement, 
just look at the raw numbers:

 LMbench running times:
 +--------------------+-------+-------+-------+-------+-------+
 | Kernel             | plain | IRQ   | ping  | IRQ & | IRQ & |
 |                    |       | test  | flood | ping  |  hd   |
 +====================+=======+=======+=======+=======+=======+
 | Vanilla-2.6.12     | 152 s | 150 s | 188 s | 185 s | 239 s |
 +====================+=======+=======+=======+=======+=======+
 | with RT-V0.7.51-02 | 152 s | 153 s | 203 s | 201 s | 239 s |
 +====================+=======+=======+=======+=======+=======+

Note that both the 'IRQ' and the 'IRQ & hd' tests involve interrupts, 
and there PREEMPT_RT's overhead is within statistical error - but the 
'flood ping' workload alone created a ~8% slowdown.

My own testing (for whatever it's worth) shows that during flood pings 
the maximum overhead PREEMPT_RT caused was 4%. I.e. PREEMPT_RT used 4% 
more system time than the vanilla UP kernel when the CPU was 99% 
dedicated to handling ping replies. But in your tests the full CPU was 
not dedicated to flood-ping replies (of course). Your numbers above 
suggest that under the vanilla kernel 23% of CPU time was used up by 
flood pinging (188/152 == +23.6%).

Under PREEMPT_RT, my tentative guesstimate would be that it should go 
from 23.6% to 24.8% - i.e. 1.2% less CPU time for lmbench - which turns 
into roughly +1 second of lmbench wall-clock slowdown. Not 15 seconds, 
as your test suggests. So there's more than an order-of-magnitude 
difference in the numbers, which I felt was worth sharing :)
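The arithmetic above can be checked in a couple of lines (a sketch; the 
1.248 factor is the guesstimated PREEMPT_RT flood-ping share from the 
previous paragraph, not a measured value):

```shell
# Sanity-check the percentages quoted above (times in seconds).
plain=152; flood=188

# Extra wall-clock time the flood ping costs the vanilla kernel:
awk -v p="$plain" -v f="$flood" \
    'BEGIN { printf "vanilla flood-ping overhead: +%.2f%%\n", (f / p - 1) * 100 }'

# If PREEMPT_RT pushes that share from ~23.6% to ~24.8% (the
# guesstimate above), the expected flood-ping run time would be:
awk -v p="$plain" 'BEGIN { printf "expected RT flood time: %.1f s\n", p * 1.248 }'
```

That predicts roughly 189.7 s for the PREEMPT_RT flood-ping run - about 
1-2 seconds over vanilla's 188 s, versus the 203 s actually reported.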

_And_ your own hd and stable-rate IRQ workloads suggest that PREEMPT_RT 
and vanilla are very close to each other. Let me repeat the table, with 
only those numbers included where no flood pinging was going on:

 LMbench running times:
 +--------------------+-------+-------+-------+-------+-------+
 | Kernel             | plain | IRQ   |       |       | IRQ & |
 |                    |       | test  |       |       |  hd   |
 +====================+=======+=======+=======+=======+=======+
 | Vanilla-2.6.12     | 152 s | 150 s |       |       | 239 s |
 +====================+=======+=======+=======+=======+=======+
 | with RT-V0.7.51-02 | 152 s | 153 s |       |       | 239 s |
 +====================+=======+=======+=======+=======+=======+
 | with Ipipe-0.7     | 149 s | 150 s |       |       | 236 s |
 +====================+=======+=======+=======+=======+=======+

These numbers suggest that outside of ping flooding, all IRQ-overhead 
results are within statistical error.

So why do your "ping flood" results show such a difference? It really 
is just another type of interrupt workload; there is nothing special 
about it.

> but keep in mind that these are standard tests used as-is by others 
> [...]

Are you suggesting this is not really a benchmark but a way to test how 
well a particular system holds up under extreme external load?

> For one thing, the heavy fluctuation in ping packets may actually 
> induce a state in the monitored kernel which is more akin to the one 
> we want to measure than if we had a steady flow of packets.

So you can see ping-packet flow fluctuations in your tests? Then you 
cannot use those results as any sort of benchmark metric.

Under PREEMPT_RT, if you wish to tone down the effects of an interrupt 
source, then all you have to do is something like:

 # find the PID of the threaded IRQ handler for eth1 (named "IRQ <nr>")
 P=$(pidof "IRQ "$(grep eth1 /proc/interrupts | cut -d: -f1 | xargs echo))

 chrt -o -p 0 $P   # net irq thread
 renice -n 19 $P
 chrt -o -p 0 5    # softirq-tx
 renice -n 19 5
 chrt -o -p 0 6    # softirq-rx
 renice -n 19 6

and from this point on you should see zero lmbench overhead from flood 
pinging. Can vanilla or I-PIPE do that?
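To confirm that the retuning actually took effect, the thread's policy 
and nice value can be inspected afterwards. (A sketch - here $$, the 
current shell, stands in for the real IRQ/softirq thread PIDs above.)

```shell
# Inspect a task's scheduling policy and nice value after retuning.
# $$ (this shell) stands in for the IRQ/softirq thread PID.
pid=$$
chrt -p "$pid"                 # prints the scheduling policy and rt priority
ps -o pid,ni,comm -p "$pid"    # prints the nice value
```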

	Ingo


Thread overview: 14+ messages
2005-07-08 23:01 PREEMPT_RT and I-PIPE: the numbers, part 4 Kristian Benoit
2005-07-09  1:28 ` Karim Yaghmour
2005-07-09  7:19 ` Ingo Molnar
2005-07-09 15:39   ` Karim Yaghmour
2005-07-09 15:53     ` Karim Yaghmour
2005-07-09 15:53       ` Karim Yaghmour
2005-07-11  7:05     ` Ingo Molnar [this message]
2005-07-11 11:25       ` Karim Yaghmour
2005-07-09 17:22   ` Daniel Walker
2005-07-09 23:37     ` Bill Huey
2005-07-09  9:01 ` Paul Rolland
2005-07-09 14:47   ` Karim Yaghmour
2005-07-09 15:22   ` Ingo Molnar
2005-07-11  5:24 ` Ingo Molnar
