* realtime-preempt-2.6.12-final-V0.7.51-11 glitches
[not found] ` <1120428465.21398.2.camel@cmn37.stanford.edu>
@ 2005-07-07 18:46 ` Rui Nuno Capela
2005-07-07 18:53 ` realtime-preempt-2.6.12-final-V0.7.51-11 glitches [2] Rui Nuno Capela
2005-07-07 19:49 ` realtime-preempt-2.6.12-final-V0.7.51-11 glitches Ingo Molnar
0 siblings, 2 replies; 14+ messages in thread
From: Rui Nuno Capela @ 2005-07-07 18:46 UTC (permalink / raw)
To: Ingo Molnar; +Cc: linux-kernel
[-- Attachment #1: Type: text/plain, Size: 2054 bytes --]
Hi all,
Here are some of my latest consolidated results from running (my)
jack_test4.2 suite against a couple of 2.6.12 kernels patched for
PREEMPT_RT, on my P4@2.5GHz/UP laptop.
See anything funny?
It seems kernel latency performance has regressed; I've been
experiencing this unsatisfactory behavior ever since the
RT-V0.7.50-xx patch series.
------------------------------  -------------  -------------
                                RT-V0.7.51-11  RT-V0.7.49-01
------------------------------  -------------  -------------
Total seconds ran . . . . . . :       900            900
Number of clients . . . . . . :        14             14
Ports per client  . . . . . . :         4              4
Frames per buffer . . . . . . :        64             64
Number of runs  . . . . . . . :         1              1
------------------------------  -------------  -------------
Failure Rate  . . . . . . . . :     (  0.0 )      (  0.0 )   /hour
XRUN Rate . . . . . . . . . . :     373.3            0.0     /hour
Delay Rate (>spare time)  . . :     220.0            0.0     /hour
Delay Rate (>1000 usecs)  . . :       0.0            0.0     /hour
Delay Maximum . . . . . . . . :      7853            295     usecs
Cycle Maximum . . . . . . . . :       852            943     usecs
Average DSP Load  . . . . . . :      41.8           44.4     %
Average CPU System Load . . . :       6.8           16.3     %
Average CPU User Load . . . . :      28.8           30.1     %
Average CPU Nice Load . . . . :       0.0            0.0     %
Average CPU I/O Wait Load . . :       0.0            0.1     %
Average CPU IRQ Load  . . . . :       0.0            0.0     %
Average CPU Soft-IRQ Load . . :       0.0            0.0     %
Average Interrupt Rate  . . . :    1679.3         1680.4     /sec
Average Context-Switch Rate . :   12508.6        14463.2     /sec
------------------------------  -------------  -------------
JFYI, the respective kernel configs are also attached.
Cheers.
--
rncbc aka Rui Nuno Capela
rncbc@rncbc.org
[-- Attachment #2: jack_test4.2.tar.gz --]
[-- Type: application/x-gzip-compressed, Size: 14204 bytes --]
[-- Attachment #3: config-2.6.12-RT-V0.7.49-01.0.gz --]
[-- Type: application/octet-stream, Size: 0 bytes --]
[-- Attachment #4: config-2.6.12-RT-V0.7.51-11.0.gz --]
[-- Type: application/octet-stream, Size: 0 bytes --]
^ permalink raw reply [flat|nested] 14+ messages in thread
* realtime-preempt-2.6.12-final-V0.7.51-11 glitches [2]
2005-07-07 18:46 ` realtime-preempt-2.6.12-final-V0.7.51-11 glitches Rui Nuno Capela
@ 2005-07-07 18:53 ` Rui Nuno Capela
2005-07-07 19:49 ` realtime-preempt-2.6.12-final-V0.7.51-11 glitches Ingo Molnar
1 sibling, 0 replies; 14+ messages in thread
From: Rui Nuno Capela @ 2005-07-07 18:53 UTC (permalink / raw)
To: Ingo Molnar; +Cc: linux-kernel
[-- Attachment #1: Type: text/plain, Size: 2217 bytes --]
> Hi all,
>
> Here are some of my latest consolidated results from running (my)
> jack_test4.2 suite against a couple of 2.6.12 kernels patched for
> PREEMPT_RT, on my P4@2.5GHz/UP laptop.
>
> See anything funny?
>
> It seems kernel latency performance has regressed; I've been
> experiencing this unsatisfactory behavior ever since the
> RT-V0.7.50-xx patch series.
>
> ------------------------------  -------------  -------------
>                                 RT-V0.7.51-11  RT-V0.7.49-01
> ------------------------------  -------------  -------------
> Total seconds ran . . . . . . :       900            900
> Number of clients . . . . . . :        14             14
> Ports per client  . . . . . . :         4              4
> Frames per buffer . . . . . . :        64             64
> Number of runs  . . . . . . . :         1              1
> ------------------------------  -------------  -------------
> Failure Rate  . . . . . . . . :     (  0.0 )      (  0.0 )   /hour
> XRUN Rate . . . . . . . . . . :     373.3            0.0     /hour
> Delay Rate (>spare time)  . . :     220.0            0.0     /hour
> Delay Rate (>1000 usecs)  . . :       0.0            0.0     /hour
> Delay Maximum . . . . . . . . :      7853            295     usecs
> Cycle Maximum . . . . . . . . :       852            943     usecs
> Average DSP Load  . . . . . . :      41.8           44.4     %
> Average CPU System Load . . . :       6.8           16.3     %
> Average CPU User Load . . . . :      28.8           30.1     %
> Average CPU Nice Load . . . . :       0.0            0.0     %
> Average CPU I/O Wait Load . . :       0.0            0.1     %
> Average CPU IRQ Load  . . . . :       0.0            0.0     %
> Average CPU Soft-IRQ Load . . :       0.0            0.0     %
> Average Interrupt Rate  . . . :    1679.3         1680.4     /sec
> Average Context-Switch Rate . :   12508.6        14463.2     /sec
> ------------------------------  -------------  -------------
>
> JFYI, the respective kernel configs are also attached.
>
Sorry, my last post failed to include those configs. Here they go
again, hopefully.
Cheers.
--
rncbc aka Rui Nuno Capela
rncbc@rncbc.org
[-- Attachment #2: config-2.6.12-RT-V0.7.49-01.0.gz --]
[-- Type: application/x-gzip-compressed, Size: 8823 bytes --]
[-- Attachment #3: config-2.6.12-RT-V0.7.51-11.0.gz --]
[-- Type: application/x-gzip-compressed, Size: 8844 bytes --]
* Re: realtime-preempt-2.6.12-final-V0.7.51-11 glitches
2005-07-07 18:46 ` realtime-preempt-2.6.12-final-V0.7.51-11 glitches Rui Nuno Capela
2005-07-07 18:53 ` realtime-preempt-2.6.12-final-V0.7.51-11 glitches [2] Rui Nuno Capela
@ 2005-07-07 19:49 ` Ingo Molnar
2005-07-07 19:59 ` Ingo Molnar
2005-07-07 23:19 ` Rui Nuno Capela
1 sibling, 2 replies; 14+ messages in thread
From: Ingo Molnar @ 2005-07-07 19:49 UTC (permalink / raw)
To: Rui Nuno Capela; +Cc: linux-kernel
* Rui Nuno Capela <rncbc@rncbc.org> wrote:
> Hi all,
>
> Here are some of my latest consolidated results from running (my)
> jack_test4.2 suite against a couple of 2.6.12 kernels patched for
> PREEMPT_RT, on my P4@2.5GHz/UP laptop.
>
> See anything funny?
hm, you don't seem to have PREEMPT_RT enabled in your .config - it's set
to PREEMPT_DESKTOP (config-2.6.12-RT-V0.7.51-11.0). OTOH, your 49-01
config has PREEMPT_RT enabled, so it's not an apples-to-apples
comparison. Just to make sure, could you check 51-11 with PREEMPT_RT
enabled too?
i have just done a jack_test4.1 run, and indeed larger latencies seem to
have crept in. (But i forgot to chrt the sound IRQ above the network
IRQ, so i'll retest.)
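(For reference, "chrt-ing the sound IRQ above the network IRQ" can be
sketched roughly as below on a PREEMPT_RT kernel, where every hardirq
runs as a schedulable kernel thread. The IRQ numbers and the "IRQ <n>"
thread-name pattern are assumptions; check /proc/interrupts and ps on
the target box:)

```shell
#!/bin/bash
# Hypothetical IRQ numbers -- read the real ones from /proc/interrupts.
SOUND_IRQ=5
NET_IRQ=11

# On PREEMPT_RT each hardirq runs as a kernel thread (named e.g. "IRQ 5"),
# so it can be reprioritized with ordinary tools like chrt.
sound_pid=$(pgrep -f "IRQ $SOUND_IRQ" | head -n 1)
net_pid=$(pgrep -f "IRQ $NET_IRQ" | head -n 1)

# SCHED_FIFO prio 90 for sound, 80 for network: audio preempts networking.
if [ -n "$sound_pid" ]; then chrt -f -p 90 "$sound_pid"; fi
if [ -n "$net_pid"   ]; then chrt -f -p 80 "$net_pid"; fi

echo "sound IRQ thread: ${sound_pid:-not found}"
echo "net IRQ thread:   ${net_pid:-not found}"
```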
Ingo
* Re: realtime-preempt-2.6.12-final-V0.7.51-11 glitches
2005-07-07 19:49 ` realtime-preempt-2.6.12-final-V0.7.51-11 glitches Ingo Molnar
@ 2005-07-07 19:59 ` Ingo Molnar
2005-07-07 23:19 ` Rui Nuno Capela
1 sibling, 0 replies; 14+ messages in thread
From: Ingo Molnar @ 2005-07-07 19:59 UTC (permalink / raw)
To: Rui Nuno Capela; +Cc: linux-kernel
* Ingo Molnar <mingo@elte.hu> wrote:
> i have just done a jack_test4.1 run, and indeed larger latencies seem
> to have crept in. (But i forgot to chrt the sound IRQ above the
> network IRQ, so i'll retest.)
with the sound irq chrt-ed to prio 90 the latencies look pretty good:
************* SUMMARY RESULT ****************
Total seconds ran . . . . . . : 300
Number of clients . . . . . . : 14
Ports per client . . . . . . : 4
Frames per buffer . . . . . . : 64
Number of runs  . . . . . . . : (   1)
*********************************************
Timeout Count . . . . . . . . : (   0)
XRUN Count . . . . . . . . . : 0
Delay Count (>spare time) . . : 0
Delay Count (>1000 usecs) . . : 0
Delay Maximum . . . . . . . . : 18 usecs
Cycle Maximum . . . . . . . . : 637 usecs
Average DSP Load. . . . . . . : 36.3 %
Average CPU System Load . . . : 18.8 %
Average CPU User Load . . . . : 29.3 %
Average CPU Nice Load . . . . : 0.0 %
Average CPU I/O Wait Load . . : 0.0 %
Average CPU IRQ Load . . . . : 0.0 %
Average CPU Soft-IRQ Load . . : 0.0 %
Average Interrupt Rate . . . : 1680.1 /sec
Average Context-Switch Rate . : 14361.3 /sec
*********************************************
Delta Maximum . . . . . . . . : 0.00000
*********************************************
and this was with LATENCY_TRACE turned on.
Ingo
* Re: realtime-preempt-2.6.12-final-V0.7.51-11 glitches
2005-07-07 19:49 ` realtime-preempt-2.6.12-final-V0.7.51-11 glitches Ingo Molnar
2005-07-07 19:59 ` Ingo Molnar
@ 2005-07-07 23:19 ` Rui Nuno Capela
2005-07-08 8:46 ` realtime-preempt-2.6.12-final-V0.7.51-11 glitches [no more] Rui Nuno Capela
1 sibling, 1 reply; 14+ messages in thread
From: Rui Nuno Capela @ 2005-07-07 23:19 UTC (permalink / raw)
To: Ingo Molnar; +Cc: linux-kernel
> * Rui Nuno Capela <rncbc@rncbc.org> wrote:
>
>> Hi all,
>>
>> Here are some of my latest consolidated results from running (my)
>> jack_test4.2 suite against a couple of 2.6.12 kernels patched for
>> PREEMPT_RT, on my P4@2.5GHz/UP laptop.
>>
>> See anything funny?
>
> hm, you don't seem to have PREEMPT_RT enabled in your .config - it's set
> to PREEMPT_DESKTOP (config-2.6.12-RT-V0.7.51-11.0). OTOH, your 49-01
> config has PREEMPT_RT enabled, so it's not an apples-to-apples
> comparison. Just to make sure, could you check 51-11 with PREEMPT_RT
> enabled too?
>
Damn. You're right... grrr! I gotta spank myself later... I've been
running PREEMPT_DESKTOP all along since V0.7.5x. Anyway, this
incidentally yields some telling figures on the difference between
PREEMPT_DESKTOP and PREEMPT_RT performance :)
> i have just done a jack_test4.1 run, and indeed larger latencies seem to
> have crept in. (But i forgot to chrt the sound IRQ above the network
> IRQ, so i'll retest.)
>
I will do my homework too ;)
--
rncbc aka Rui Nuno Capela
rncbc@rncbc.org
* realtime-preempt-2.6.12-final-V0.7.51-11 glitches [no more]
2005-07-07 23:19 ` Rui Nuno Capela
@ 2005-07-08 8:46 ` Rui Nuno Capela
2005-07-08 8:52 ` Ingo Molnar
0 siblings, 1 reply; 14+ messages in thread
From: Rui Nuno Capela @ 2005-07-08 8:46 UTC (permalink / raw)
To: Ingo Molnar; +Cc: linux-kernel
OK.
Just as a heads-up, here are today's summary results for my
jack_test4.2 test suite against 2.6.12 kernels configured with
PREEMPT_RT, but... now with 99.9% certainty :)
------------------------------  -------------  -------------
                                RT-V0.7.51-13  RT-V0.7.49-01
------------------------------  -------------  -------------
Total seconds ran . . . . . . :       900            900
Number of clients . . . . . . :        14             14
Ports per client  . . . . . . :         4              4
Frames per buffer . . . . . . :        64             64
Number of runs  . . . . . . . :         1              1
------------------------------  -------------  -------------
Failure Rate  . . . . . . . . :     (  0.0 )      (  0.0 )   /hour
XRUN Rate . . . . . . . . . . :       0.0            0.0     /hour
Delay Rate (>spare time)  . . :       0.0            0.0     /hour
Delay Rate (>1000 usecs)  . . :       0.0            0.0     /hour
Delay Maximum . . . . . . . . :       333            295     usecs
Cycle Maximum . . . . . . . . :       970            943     usecs
Average DSP Load  . . . . . . :      45.7           44.4     %
Average CPU System Load . . . :      15.6           16.3     %
Average CPU User Load . . . . :      32.0           30.1     %
Average CPU Nice Load . . . . :       0.0            0.0     %
Average CPU I/O Wait Load . . :       0.0            0.1     %
Average CPU IRQ Load  . . . . :       0.0            0.0     %
Average CPU Soft-IRQ Load . . :       0.0            0.0     %
Average Interrupt Rate  . . . :    1678.6         1680.4     /sec
Average Context-Switch Rate . :   14446.4        14463.2     /sec
------------------------------  -------------  -------------
In my defense, the mistake was useful after all: one can incidentally
see the significantly lower performance of PREEMPT_DESKTOP for audio
work, where PREEMPT_RT is the clear winner, as it has been for quite
some time. No more worries ;)
Thanks,
--
rncbc aka Rui Nuno Capela
rncbc@rncbc.org
* Re: realtime-preempt-2.6.12-final-V0.7.51-11 glitches [no more]
2005-07-08 8:46 ` realtime-preempt-2.6.12-final-V0.7.51-11 glitches [no more] Rui Nuno Capela
@ 2005-07-08 8:52 ` Ingo Molnar
2005-07-08 9:40 ` Rui Nuno Capela
0 siblings, 1 reply; 14+ messages in thread
From: Ingo Molnar @ 2005-07-08 8:52 UTC (permalink / raw)
To: Rui Nuno Capela; +Cc: linux-kernel
* Rui Nuno Capela <rncbc@rncbc.org> wrote:
> OK.
>
> Just as a heads-up, here are today's summary results for my
> jack_test4.2 test suite against 2.6.12 kernels configured with
> PREEMPT_RT, but... now with 99.9% certainty :)
thanks for the testing!
> ------------------------------  -------------  -------------
>                                 RT-V0.7.51-13  RT-V0.7.49-01
> ------------------------------  -------------  -------------
> Delay Maximum . . . . . . . . :       333            295     usecs
> Cycle Maximum . . . . . . . . :       970            943     usecs
> Average DSP Load  . . . . . . :      45.7           44.4     %
> Average CPU System Load . . . :      15.6           16.3     %
> Average CPU User Load . . . . :      32.0           30.1     %
i'm wondering - is this slight increase in CPU utilization (and
latencies) due to natural fluctuations, or is it a genuine overhead
increase?
Ingo
* Re: realtime-preempt-2.6.12-final-V0.7.51-11 glitches [no more]
2005-07-08 8:52 ` Ingo Molnar
@ 2005-07-08 9:40 ` Rui Nuno Capela
2005-07-08 9:56 ` Ingo Molnar
0 siblings, 1 reply; 14+ messages in thread
From: Rui Nuno Capela @ 2005-07-08 9:40 UTC (permalink / raw)
To: Ingo Molnar; +Cc: linux-kernel
[-- Attachment #1: Type: text/plain, Size: 1838 bytes --]
>> ------------------------------  -------------  -------------
>>                                 RT-V0.7.51-13  RT-V0.7.49-01
>> ------------------------------  -------------  -------------
>
>> Delay Maximum . . . . . . . . :       333            295     usecs
>> Cycle Maximum . . . . . . . . :       970            943     usecs
>> Average DSP Load  . . . . . . :      45.7           44.4     %
>> Average CPU System Load . . . :      15.6           16.3     %
>> Average CPU User Load . . . . :      32.0           30.1     %
>
> i'm wondering - is this slight increase in CPU utilization (and
> latencies) due to natural fluctuations, or is it a genuine overhead
> increase?
>
I'm pretty comfortable saying that both results are almost equivalent,
with the slight differences probably due to environmental effects. All
tests were conducted from a regular user konsole screen, under KDE 3.3
and a perfectly functional X.org desktop session; runlevel 5, but with
the network down. Oh, and gkrellm is also on screen :)
OTOH, I'll take this chance to show you something that has been
annoying me for quite some time. Just look at the attached chart,
where I've marked the spot with an arrow and a question mark. It's
just one example of a strange behavior/phenomenon while running the
jack_test4.2 test on PREEMPT_RT kernels: the CPU usage, which
normally stays around 50%, suddenly jumps to a steady 60%, starting
at random points in time but always some time after the test has
started. Note that this randomness surely adds to the slight
differences found in the above results.
My first suspicions went to CPU frequency scaling/throttling, but I
have that disabled on every kernel I build. Denormals? I don't think
so, as jack_test_client is specifically coded to treat floats as zero
when below 1E-6.
Any hints?
--
rncbc aka Rui Nuno Capela
rncbc@rncbc.org
[-- Attachment #2: jack_test4-2.6.12-RT-V0.7.51-13.0-200507080917.png --]
[-- Type: image/x-png, Size: 7101 bytes --]
* Re: realtime-preempt-2.6.12-final-V0.7.51-11 glitches [no more]
2005-07-08 9:40 ` Rui Nuno Capela
@ 2005-07-08 9:56 ` Ingo Molnar
2005-07-08 10:53 ` Rui Nuno Capela
2005-07-11 15:12 ` Rui Nuno Capela
0 siblings, 2 replies; 14+ messages in thread
From: Ingo Molnar @ 2005-07-08 9:56 UTC (permalink / raw)
To: Rui Nuno Capela; +Cc: linux-kernel
[-- Attachment #1: Type: text/plain, Size: 2609 bytes --]
* Rui Nuno Capela <rncbc@rncbc.org> wrote:
> OTOH, I'll take this chance to show you something that has been
> annoying me for quite some time. Just look at the attached chart,
> where I've marked the spot with an arrow and a question mark. It's
> just one example of a strange behavior/phenomenon while running the
> jack_test4.2 test on PREEMPT_RT kernels: the CPU usage, which
> normally stays around 50%, suddenly jumps to a steady 60%, starting
> at random points in time but always some time after the test has
> started. Note that this randomness surely adds to the slight
> differences found in the above results.
how long does this condition persist? Firstly, please upgrade to the
-51-16 kernel, previous kernels had a condition where interrupt storms
(or repeat interrupts) could occur. (Your irqs/sec values don't suggest
such a condition, but it could still occur.)
Then could you enable profiling (CONFIG_PROFILING=y and profile=1 boot
parameter), and create a script like this to capture a kernel profile
for a fixed amount of time:
#!/bin/bash
readprofile -r # reset profile
sleep 10
readprofile -n -m /home/mingo/System.map | sort -n
and start it manually when the anomaly triggers. Also start it during a
'normal' period of the test. The output should give us a rough idea of
what is happening. This type of profiling is very low-overhead so it
won't disturb the condition.
Note that you can increase the frequency and the quality of profiling by
enabling the NMI watchdog (LOCAL_APIC in the .config and nmi_watchdog=2
boot option), in the -RT kernel it will automatically switch the
profiling tick to occur from NMI context. Such tracing will also show
overhead occurring in irqs-off functions.
a more intrusive method would be to capture a trace of the anomalous
workload. For that you'll need to enable WAKEUP_TIMING+LATENCY_TRACING
(this is the lowest-overhead tracing variant), and run the attached
trace-it utility during the anomaly:
trace-it > anomaly.log
and during the normal period as well:
trace-it > normal.log
note that to capture a significant amount of system activity you might
need to increase MAX_TRACE in kernel/latency.c (e.g. 8-fold should be
enough to capture a second worth of system activity). If you generate
such large traces then please bzip2 them and send them to me privately.
You might also need to increase the usleep() period in trace-it.c, to
capture longer periods of time.
note that the enabling of tracing could also make the anomalous
condition go away, so it might not be the right method.
Ingo
[-- Attachment #2: trace-it.c --]
[-- Type: text/plain, Size: 1183 bytes --]
/*
 * Copyright (C) 1999, Ingo Molnar <mingo@redhat.com>
 */
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <linux/unistd.h>

int main (int argc, char **argv)
{
	int ret;

	if (getuid() != 0) {
		fprintf(stderr, "needs to run as root.\n");
		exit(1);
	}
	ret = system("cat /proc/sys/kernel/mcount_enabled >/dev/null 2>/dev/null");
	if (ret) {
		fprintf(stderr, "CONFIG_LATENCY_TRACING not enabled?\n");
		exit(1);
	}
	system("echo 1 > /proc/sys/kernel/trace_all_cpus");
	system("echo 1 > /proc/sys/kernel/trace_enabled");
	system("echo 0 > /proc/sys/kernel/trace_freerunning");
	system("echo 0 > /proc/sys/kernel/trace_print_at_crash");
	system("echo 1 > /proc/sys/kernel/trace_user_triggered");
	system("echo 0 > /proc/sys/kernel/trace_verbose");
	system("echo 0 > /proc/sys/kernel/preempt_max_latency");
	system("echo 0 > /proc/sys/kernel/preempt_thresh");
	system("[ -e /proc/sys/kernel/wakeup_timing ] && echo 0 > /proc/sys/kernel/wakeup_timing");
	system("echo 1 > /proc/sys/kernel/mcount_enabled");

	/*
	 * gettimeofday(0, 1) / gettimeofday(0, 0) are the magic
	 * user-triggered calls that the -RT latency tracer recognizes
	 * to start and stop the trace.
	 */
	gettimeofday(0, (void *)1);
	usleep(100000);
	gettimeofday(0, 0);

	system("cat /proc/latency_trace");

	return 0;
}
[-- Attachment #3: trace-it --]
[-- Type: application/octet-stream, Size: 6183 bytes --]
* Re: realtime-preempt-2.6.12-final-V0.7.51-11 glitches [no more]
2005-07-08 9:56 ` Ingo Molnar
@ 2005-07-08 10:53 ` Rui Nuno Capela
2005-07-11 15:12 ` Rui Nuno Capela
1 sibling, 0 replies; 14+ messages in thread
From: Rui Nuno Capela @ 2005-07-08 10:53 UTC (permalink / raw)
To: Ingo Molnar; +Cc: linux-kernel
> * Rui Nuno Capela <rncbc@rncbc.org> wrote:
>
>> OTOH, I'll take this chance to show you something that has been
>> annoying me for quite some time. Just look at the attached chart,
>> where I've marked the spot with an arrow and a question mark. It's
>> just one example of a strange behavior/phenomenon while running the
>> jack_test4.2 test on PREEMPT_RT kernels: the CPU usage, which
>> normally stays around 50%, suddenly jumps to a steady 60%, starting
>> at random points in time but always some time after the test has
>> started. Note that this randomness surely adds to the slight
>> differences found in the above results.
>
> how long does this condition persist?
In all of my extensive tests it persists for the remainder of the
jack_test4 time span, no matter how long that is.
I'll look into your profiling and tracing instructions later. Many
thanks for the directions.
Seeya.
--
rncbc aka Rui Nuno Capela
rncbc@rncbc.org
* Re: realtime-preempt-2.6.12-final-V0.7.51-11 glitches [no more]
2005-07-08 9:56 ` Ingo Molnar
2005-07-08 10:53 ` Rui Nuno Capela
@ 2005-07-11 15:12 ` Rui Nuno Capela
2005-07-11 15:55 ` Ingo Molnar
1 sibling, 1 reply; 14+ messages in thread
From: Rui Nuno Capela @ 2005-07-11 15:12 UTC (permalink / raw)
To: Ingo Molnar; +Cc: linux-kernel
> * Rui Nuno Capela <rncbc@rncbc.org> wrote:
>
>> OTOH, I'll take this chance to show you something that has been
>> annoying me for quite some time. Just look at the attached chart,
>> where I've marked the spot with an arrow and a question mark. It's
>> just one example of a strange behavior/phenomenon while running the
>> jack_test4.2 test on PREEMPT_RT kernels: the CPU usage, which
>> normally stays around 50%, suddenly jumps to a steady 60%, starting
>> at random points in time but always some time after the test has
>> started. Note that this randomness surely adds to the slight
>> differences found in the above results.
>
> how long does this condition persist? Firstly, please upgrade to the
> -51-16 kernel, previous kernels had a condition where interrupt storms
> (or repeat interrupts) could occur. (Your irqs/sec values don't suggest
> such a condition, but it could still occur.)
>
> Then could you enable profiling (CONFIG_PROFILING=y and profile=1 boot
> parameter), and create a script like this to capture a kernel profile
> for a fixed amount of time:
>
> #!/bin/bash
>
> readprofile -r # reset profile
> sleep 10
> readprofile -n -m /home/mingo/System.map | sort -n
>
> and start it manually when the anomaly triggers. Also start it during a
> 'normal' period of the test. The output should give us a rough idea of
> what is happening. This type of profiling is very low-overhead so it
> won't disturb the condition.
>
> Note that you can increase the frequency and the quality of profiling by
> enabling the NMI watchdog (LOCAL_APIC in the .config and nmi_watchdog=2
> boot option), in the -RT kernel it will automatically switch the
> profiling tick to occur from NMI context. Such tracing will also show
> overhead occurring in irqs-off functions.
>
After several trials, with CONFIG_PROFILING=y and profile=1 nmi_watchdog=2
as boot parameters, I'm almost convinced I'm doing something wrong :)
- `readprofile` always just outputs one line:
0 total 0.0000
- `readprofile -a` gives the whole kernel symbol list, all with zero times.
Is there anything else I can check around here?
--
rncbc aka Rui Nuno Capela
rncbc@rncbc.org
* Re: realtime-preempt-2.6.12-final-V0.7.51-11 glitches [no more]
2005-07-11 15:12 ` Rui Nuno Capela
@ 2005-07-11 15:55 ` Ingo Molnar
2005-07-11 22:00 ` Rui Nuno Capela
0 siblings, 1 reply; 14+ messages in thread
From: Ingo Molnar @ 2005-07-11 15:55 UTC (permalink / raw)
To: Rui Nuno Capela; +Cc: linux-kernel
* Rui Nuno Capela <rncbc@rncbc.org> wrote:
> After several trials, with CONFIG_PROFILING=y and profile=1
> nmi_watchdog=2 as boot parameters, I'm almost convinced I'm doing
> something wrong :)
>
> - `readprofile` always just outputs one line:
>
> 0 total 0.0000
>
> - `readprofile -a` gives the whole kernel symbol list, all with zero times.
>
> Is there anything else I can check around here?
it means that the NMI watchdog was not activated - i.e. the 'NMI' counts
in /proc/interrupts do not increase. Do you have LOCAL_APIC enabled in
the .config? If yes and if nmi_watchdog=1 does not work either then it's
probably not possible to activate the NMI watchdog on your box. In that
case try nmi_watchdog=0, that should activate normal profiling. (unless
i've broken it via the profile-via-NMI changes ...)
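(A quick way to check this, as a shell sketch: sample the NMI counts
twice and compare. The 'NMI' row in /proc/interrupts is standard,
though its exact formatting varies by architecture:)

```shell
# Sample the per-CPU NMI counts twice, a few seconds apart.  If the
# numbers do not change, the NMI watchdog is not generating ticks.
before=$(grep NMI /proc/interrupts 2>/dev/null || echo "no NMI line")
sleep 5
after=$(grep NMI /proc/interrupts 2>/dev/null || echo "no NMI line")
echo "before: $before"
echo "after:  $after"
```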
Ingo
* Re: realtime-preempt-2.6.12-final-V0.7.51-11 glitches [no more]
2005-07-11 15:55 ` Ingo Molnar
@ 2005-07-11 22:00 ` Rui Nuno Capela
2005-07-11 22:09 ` Rui Nuno Capela
0 siblings, 1 reply; 14+ messages in thread
From: Rui Nuno Capela @ 2005-07-11 22:00 UTC (permalink / raw)
To: Ingo Molnar; +Cc: linux-kernel
> * Rui Nuno Capela <rncbc@rncbc.org> wrote:
>
>> After several trials, with CONFIG_PROFILING=y and profile=1
>> nmi_watchdog=2 as boot parameters, I'm almost convinced I'm doing
>> something wrong :)
>>
>> - `readprofile` always just outputs one line:
>>
>> 0 total 0.0000
>>
>> - `readprofile -a` gives the whole kernel symbol list, all with zero
>> times.
>>
>> Is there anything else I can check around here?
>
> it means that the NMI watchdog was not activated - i.e. the 'NMI' counts
> in /proc/interrupts do not increase. Do you have LOCAL_APIC enabled in
> the .config? If yes and if nmi_watchdog=1 does not work either then it's
> probably not possible to activate the NMI watchdog on your box. In that
> case try nmi_watchdog=0, that should activate normal profiling. (unless
> i've broken it via the profile-via-NMI changes ...)
>
I've tried checking whether nmi_watchdog has any influence, to no
distinguishable result; readprofile always reports zero counts. And I'm
sure I have LOCAL_APIC=y (see attached config.gz)
--
rncbc aka Rui Nuno Capela
rncbc@rncbc.org
* Re: realtime-preempt-2.6.12-final-V0.7.51-11 glitches [no more]
2005-07-11 22:00 ` Rui Nuno Capela
@ 2005-07-11 22:09 ` Rui Nuno Capela
0 siblings, 0 replies; 14+ messages in thread
From: Rui Nuno Capela @ 2005-07-11 22:09 UTC (permalink / raw)
To: Ingo Molnar; +Cc: linux-kernel
[-- Attachment #1: Type: text/plain, Size: 285 bytes --]
> I've tried checking whether nmi_watchdog has any influence, to no
> distinguishable result; readprofile always reports zero counts.
> And I'm sure I have LOCAL_APIC=y (see attached config.gz)
Damn, forgot the attachment. Here it goes.
Sorry.
--
rncbc aka Rui Nuno Capela
rncbc@rncbc.org
[-- Attachment #2: config-2.6.12-RT-V0.7.51-26.0.gz --]
[-- Type: application/x-gzip, Size: 8847 bytes --]
Thread overview: 14+ messages
[not found] <1119299227.20873.113.camel@cmn37.stanford.edu>
[not found] ` <20050621105954.GA18765@elte.hu>
[not found] ` <1119370868.26957.9.camel@cmn37.stanford.edu>
[not found] ` <20050621164622.GA30225@elte.hu>
[not found] ` <1119375988.28018.44.camel@cmn37.stanford.edu>
[not found] ` <1120256404.22902.46.camel@cmn37.stanford.edu>
[not found] ` <20050703133738.GB14260@elte.hu>
[not found] ` <1120428465.21398.2.camel@cmn37.stanford.edu>
2005-07-07 18:46 ` realtime-preempt-2.6.12-final-V0.7.51-11 glitches Rui Nuno Capela
2005-07-07 18:53 ` realtime-preempt-2.6.12-final-V0.7.51-11 glitches [2] Rui Nuno Capela
2005-07-07 19:49 ` realtime-preempt-2.6.12-final-V0.7.51-11 glitches Ingo Molnar
2005-07-07 19:59 ` Ingo Molnar
2005-07-07 23:19 ` Rui Nuno Capela
2005-07-08 8:46 ` realtime-preempt-2.6.12-final-V0.7.51-11 glitches [no more] Rui Nuno Capela
2005-07-08 8:52 ` Ingo Molnar
2005-07-08 9:40 ` Rui Nuno Capela
2005-07-08 9:56 ` Ingo Molnar
2005-07-08 10:53 ` Rui Nuno Capela
2005-07-11 15:12 ` Rui Nuno Capela
2005-07-11 15:55 ` Ingo Molnar
2005-07-11 22:00 ` Rui Nuno Capela
2005-07-11 22:09 ` Rui Nuno Capela