* Re: [patch] voluntary-preempt-2.6.9-rc1-bk4-Q8
@ 2004-09-02 7:57 mika.penttila
2004-09-02 8:32 ` Ingo Molnar
0 siblings, 1 reply; 15+ messages in thread
From: mika.penttila @ 2004-09-02 7:57 UTC (permalink / raw)
To: Ingo Molnar; +Cc: linux-kernel
Ingo,
I think there might be a problem with voluntary-preempt's handling of softirqs. Namely, in cond_resched_softirq(), you do __local_bh_enable() and local_bh_disable(). But it may be the case that the softirq is handled from ksoftirqd, and then the preempt_count isn't elevated with SOFTIRQ_OFFSET (only PF_SOFTIRQ is set). So the __local_bh_enable() actually makes preempt_count negative, which might have bad effects. Or am I missing something?
Mika
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [patch] voluntary-preempt-2.6.9-rc1-bk4-Q8
2004-09-02 7:57 [patch] voluntary-preempt-2.6.9-rc1-bk4-Q8 mika.penttila
@ 2004-09-02 8:32 ` Ingo Molnar
2004-09-02 9:06 ` Peter Zijlstra
0 siblings, 1 reply; 15+ messages in thread
From: Ingo Molnar @ 2004-09-02 8:32 UTC (permalink / raw)
To: mika.penttila; +Cc: linux-kernel
* mika.penttila@kolumbus.fi <mika.penttila@kolumbus.fi> wrote:
> Ingo,
>
> I think there might be a problem with voluntary-preempt's handling of
> softirqs. Namely, in cond_resched_softirq(), you do
> __local_bh_enable() and local_bh_disable(). But it may be the case
> that the softirq is handled from ksoftirqd, and then the preempt_count
> isn't elevated with SOFTIRQ_OFFSET (only PF_SOFTIRQ is set). So the
> __local_bh_enable() actually makes preempt_count negative, which might
> have bad effects. Or am I missing something?
you are right. Fortunately the main use of cond_resched_softirq() is via
cond_resched_all() - which is safe because it uses softirq_count(). But
the kernel/timer.c explicit call to cond_resched_softirq() is unsafe.
I've fixed this in my tree and i've added an assert to catch the
underflow when it happens.
Ingo
* Re: [patch] voluntary-preempt-2.6.9-rc1-bk4-Q8
2004-09-02 8:32 ` Ingo Molnar
@ 2004-09-02 9:06 ` Peter Zijlstra
0 siblings, 0 replies; 15+ messages in thread
From: Peter Zijlstra @ 2004-09-02 9:06 UTC (permalink / raw)
To: Ingo Molnar; +Cc: mika.penttila, linux-kernel
On Thu, 2004-09-02 at 10:32, Ingo Molnar wrote:
> * mika.penttila@kolumbus.fi <mika.penttila@kolumbus.fi> wrote:
>
> > Ingo,
> >
> > I think there might be a problem with voluntary-preempt's handling of
> > softirqs. Namely, in cond_resched_softirq(), you do
> > __local_bh_enable() and local_bh_disable(). But it may be the case
> > that the softirq is handled from ksoftirqd, and then the preempt_count
> > isn't elevated with SOFTIRQ_OFFSET (only PF_SOFTIRQ is set). So the
> > __local_bh_enable() actually makes preempt_count negative, which might
> > have bad effects. Or am I missing something?
>
> you are right. Fortunately the main use of cond_resched_softirq() is via
> cond_resched_all() - which is safe because it uses softirq_count(). But
> the kernel/timer.c explicit call to cond_resched_softirq() is unsafe.
> I've fixed this in my tree and i've added an assert to catch the
> underflow when it happens.
>
> Ingo
I've had linux-2.6.9-rc1-bk8-Q7 lock up on me this morning, not long
after starting a glibc compile (triggered by: emerge -uo gnome),
although it did survive a make World on xorg-cvs.
Could this have been caused by the bug under discussion?
Unfortunately I don't have much testing time before I go on holidays,
so for now I went back to linux-2.6.9-rc1-bk6-Q5, which on my machine is
rock solid.
Peter
* Re: [patch] voluntary-preempt-2.6.9-rc1-bk4-Q5
@ 2004-08-30 19:13 Mark_H_Johnson
2004-09-02 6:33 ` Ingo Molnar
0 siblings, 1 reply; 15+ messages in thread
From: Mark_H_Johnson @ 2004-08-30 19:13 UTC (permalink / raw)
To: Ingo Molnar
Cc: K.R. Foley, linux-kernel, Felipe Alfaro Solana, Daniel Schmitt,
Lee Revell
>i've uploaded -Q5 to:
> [snip the rest...]
Thanks.
This appears to be the first 2.6.x kernel I've run that has results
comparable to 2.4.x kernels with low latency patches and kernel preemption.
The few remaining symptoms I see include:
- a few long (> 1 msec) delays in the real time CPU loop (no system calls)
- varying time to complete the write system call (for audio) - much
different than 2.4
- a couple latency traces (> 700 usec) in the network driver
For reference, these tests were performed on the following SMP system:
Dual 866 MHz Pentium III
512 Mbyte memory
IDE system disk (DMA enabled)
The basic test is Benno's latency test
(http://www.gardena.net/benno/linux/audio) with some slight modifications
to the tests to keep the second CPU busy (non real time CPU burner) and to
add network I/O tests. The 2.4 tests were run with 2.4.20; the 2.6 tests
were run with 2.6.9-rc1-Q5. On 2.6, voluntary_preemption,
kernel_preemption, hardirq_preemption, and softirq_preemption are all 1. I
also set
/sys/block/hda/queue/max_sectors_kb = 32
/sys/block/hda/queue/read_ahead_kb = 32
/proc/sys/net/core/netdev_max_backlog = 8
and the audio driver was set to be non-threaded.
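[Editorial note: for readers reproducing this setup, the three settings listed above can be applied like so. This is a sketch, not part of the original mail; "hda" is the system disk from this test setup, so substitute your own device.]

```shell
# Apply the I/O and network tunings described in the test setup (run as root).
echo 32 > /sys/block/hda/queue/max_sectors_kb
echo 32 > /sys/block/hda/queue/read_ahead_kb
echo 8  > /proc/sys/net/core/netdev_max_backlog
```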
BASIC RESULTS
=============
Comparison of results between 2.6.x and 2.4.x; values in milliseconds.
The nominal value for the write operation is 1.45 msec; for the CPU loop,
1.16 msec.
Test       Max CPU Delta     Max Write Delta
           2.4.x    2.6.x    2.4.x    2.6.x
X11        0.10     0.16     0.05     0.65
/proc      0.07     0.17     0.05     0.65
net out    0.15     0.19     0.05     0.75
net in     0.17     0.23     0.05     0.95
dsk wrt    0.49     0.18     0.25     1.05
dsk copy   2.48     0.68     2.25     1.25
disk rd    3.03     1.61     2.75     1.35
LONG DELAYS
===========
Note I still see over 110% worst-case overhead on a max-priority real-time
CPU task (no system calls) when doing heavy disk I/O on 2.6. That is much
better than 2.4, but still disturbing. What I would hope to happen on a
dual-CPU system like mine is that the real-time task stays on one CPU
while the other system activity tends to stay on the other CPU.
However, the results do not seem to show that behavior.
VARYING SYSTEM CALL TIMES
=========================
In 2.4, it appears that the duration of the write system call is basically
fixed and dependent on the duration of the audio fragment. In 2.6, this
behavior is now different. If I look at the chart in detail, it appears the
system is queueing up several write operations during the first few seconds
of testing. You can see this by consistently low elapsed times for the
write system call. Then the elapsed time for the write bounces up / down in
a sawtooth pattern over a 1 msec range. Could someone explain the cause of
this new behavior and if there is a setting to restore the old behavior? I
am concerned that this queueing adds latency to audio operations (when
trying to synchronize audio with other real time behavior).
LONG NETWORK LATENCIES
======================
In about 25 minutes of heavy testing, I had two latency traces with
/proc/sys/kernel/preempt_max_latency set to 700. They had the same start /
end location with the long delay as follows:
730 us, entries: 361
...
started at rtl8139_poll+0x3c/0x160
ended at rtl8139_poll+0x100/0x160
00000001 0.000ms (+0.000ms): rtl8139_poll (net_rx_action)
00000001 0.140ms (+0.140ms): rtl8139_rx (rtl8139_poll)
00000001 0.556ms (+0.416ms): alloc_skb (rtl8139_rx)
... remaining items all > +0.005ms ...
731 us, entries: 360
...
started at rtl8139_poll+0x3c/0x160
ended at rtl8139_poll+0x100/0x160
00000001 0.000ms (+0.000ms): rtl8139_poll (net_rx_action)
00000001 0.000ms (+0.000ms): rtl8139_rx (rtl8139_poll)
00000001 0.002ms (+0.001ms): alloc_skb (rtl8139_rx)
00000001 0.141ms (+0.139ms): kmem_cache_alloc (alloc_skb)
00000001 0.211ms (+0.070ms): __kmalloc (alloc_skb)
00000001 0.496ms (+0.284ms): eth_type_trans (rtl8139_rx)
00000001 0.565ms (+0.068ms): netif_receive_skb (rtl8139_rx)
... remaining items all > +0.005ms ...
Still much better than my previous results (before setting
netdev_max_backlog).
I will be running some additional tests
- reducing preempt_max_latency
- running with softirq_preemption=0 and hardirq_preemption=0
to see if these uncover any further problems.
Thanks again for the good work.
--Mark H Johnson
<mailto:Mark_H_Johnson@raytheon.com>
* Re: [patch] voluntary-preempt-2.6.9-rc1-bk4-Q5
2004-08-30 19:13 [patch] voluntary-preempt-2.6.9-rc1-bk4-Q5 Mark_H_Johnson
@ 2004-09-02 6:33 ` Ingo Molnar
2004-09-02 6:55 ` [patch] voluntary-preempt-2.6.9-rc1-bk4-Q8 Ingo Molnar
0 siblings, 1 reply; 15+ messages in thread
From: Ingo Molnar @ 2004-09-02 6:33 UTC (permalink / raw)
To: Mark_H_Johnson
Cc: K.R. Foley, linux-kernel, Felipe Alfaro Solana, Daniel Schmitt,
Lee Revell
* Mark_H_Johnson@raytheon.com <Mark_H_Johnson@raytheon.com> wrote:
> In 2.4, it appears that the duration of the write system call is
> basically fixed and dependent on the duration of the audio fragment.
> In 2.6, this behavior is now different. If I look at the chart in
> detail, it appears the system is queueing up several write operations
> during the first few seconds of testing. You can see this by
> consistently low elapsed times for the write system call. Then the
> elapsed time for the write bounces up / down in a sawtooth pattern
> over a 1 msec range. Could someone explain the cause of this new
> behavior and if there is a setting to restore the old behavior? I am
> concerned that this queueing adds latency to audio operations (when
> trying to synchronize audio with other real time behavior).
i think i found the reason for the sawtooth: it's a bug in hardirq
redirection. In certain situations we can end up not waking up softirqd,
resulting in a random 0-1msec latency between hardirq arrival and
softirq execution. We don't see higher latencies because timer IRQs
always wake up softirqd which hides the bug to a certain degree.
I'll fix this in -Q8.
Ingo
* [patch] voluntary-preempt-2.6.9-rc1-bk4-Q8
2004-09-02 6:33 ` Ingo Molnar
@ 2004-09-02 6:55 ` Ingo Molnar
2004-09-02 7:04 ` Lee Revell
2004-09-02 8:23 ` Ingo Molnar
0 siblings, 2 replies; 15+ messages in thread
From: Ingo Molnar @ 2004-09-02 6:55 UTC (permalink / raw)
To: Mark_H_Johnson
Cc: K.R. Foley, linux-kernel, Felipe Alfaro Solana, Daniel Schmitt,
Lee Revell, alsa-devel
i've released the -Q8 patch:
http://redhat.com/~mingo/voluntary-preempt/voluntary-preempt-2.6.9-rc1-bk4-Q8
ontop of:
http://redhat.com/~mingo/voluntary-preempt/diff-bk-040828-2.6.8.1.bz2
this release fixes an artificial 0-1msec delay between hardirq arrival
and softirq invocation. This should solve some of the ALSA artifacts
reported by Mark H Johnson. It should also solve the rtl8139 problems -
i've put such a card into a testbox and with -Q7 i had similar packet
latency problems while with -Q8 it works just fine.
So netdev_backlog_granularity still has a value of 1 in -Q8; please check
whether the networking problems (bootup, service startup and latency)
are resolved. (And increase this value in case there are still
problems.)
Ingo
* Re: [patch] voluntary-preempt-2.6.9-rc1-bk4-Q8
2004-09-02 6:55 ` [patch] voluntary-preempt-2.6.9-rc1-bk4-Q8 Ingo Molnar
@ 2004-09-02 7:04 ` Lee Revell
2004-09-02 7:15 ` Ingo Molnar
2004-09-02 7:17 ` Ingo Molnar
2004-09-02 8:23 ` Ingo Molnar
1 sibling, 2 replies; 15+ messages in thread
From: Lee Revell @ 2004-09-02 7:04 UTC (permalink / raw)
To: Ingo Molnar
Cc: Mark_H_Johnson, K.R. Foley, linux-kernel, Felipe Alfaro Solana,
Daniel Schmitt, alsa-devel
On Thu, 2004-09-02 at 02:55, Ingo Molnar wrote:
> i've released the -Q8 patch:
>
> http://redhat.com/~mingo/voluntary-preempt/voluntary-preempt-2.6.9-rc1-bk4-Q8
>
> ontop of:
>
> http://redhat.com/~mingo/voluntary-preempt/diff-bk-040828-2.6.8.1.bz2
>
Here are traces of 145, 190, and 217 usec latencies in
netif_receive_skb:
http://krustophenia.net/testresults.php?dataset=2.6.9-rc1-Q6#/var/www/2.6.9-rc1-Q6/trace2.txt
http://krustophenia.net/testresults.php?dataset=2.6.9-rc1-Q6#/var/www/2.6.9-rc1-Q6/trace3.txt
http://krustophenia.net/testresults.php?dataset=2.6.9-rc1-Q6#/var/www/2.6.9-rc1-Q6/trace4.txt
Some of these are with ip_conntrack enabled, at the request of another
poster; this does not make much of a difference, increasing the worst-case
latency by 20 usec or so.
Also there is the rt_garbage_collect issue, previously reported. I have
not seen this lately but I do not remember seeing that it was fixed.
Lee
* Re: [patch] voluntary-preempt-2.6.9-rc1-bk4-Q8
2004-09-02 7:04 ` Lee Revell
@ 2004-09-02 7:15 ` Ingo Molnar
2004-09-02 7:31 ` Lee Revell
2004-09-02 23:25 ` Lee Revell
2004-09-02 7:17 ` Ingo Molnar
1 sibling, 2 replies; 15+ messages in thread
From: Ingo Molnar @ 2004-09-02 7:15 UTC (permalink / raw)
To: Lee Revell
Cc: Mark_H_Johnson, K.R. Foley, linux-kernel, Felipe Alfaro Solana,
Daniel Schmitt, alsa-devel
* Lee Revell <rlrevell@joe-job.com> wrote:
> Here are traces of a 145, 190, and 217 usec latencies in
> netif_receive_skb:
>
> http://krustophenia.net/testresults.php?dataset=2.6.9-rc1-Q6#/var/www/2.6.9-rc1-Q6/trace2.txt
> http://krustophenia.net/testresults.php?dataset=2.6.9-rc1-Q6#/var/www/2.6.9-rc1-Q6/trace3.txt
> http://krustophenia.net/testresults.php?dataset=2.6.9-rc1-Q6#/var/www/2.6.9-rc1-Q6/trace4.txt
these all seem to be single-packet processing latencies - it would be
quite hard to make those codepaths preemptible.
i'd suggest to turn off things like netfilter and ip_conntrack (and
other optional networking features that show up in the trace), they can
only increase latency:
00000001 0.016ms (+0.000ms): ip_rcv (netif_receive_skb)
00000001 0.019ms (+0.002ms): nf_hook_slow (ip_rcv)
00000002 0.019ms (+0.000ms): nf_iterate (nf_hook_slow)
00000002 0.021ms (+0.001ms): ip_conntrack_defrag (nf_iterate)
00000002 0.022ms (+0.000ms): ip_conntrack_in (nf_iterate)
00000002 0.022ms (+0.000ms): ip_ct_find_proto (ip_conntrack_in)
00000103 0.023ms (+0.000ms): __ip_ct_find_proto (ip_ct_find_proto)
00000102 0.024ms (+0.000ms): local_bh_enable (ip_ct_find_proto)
00000002 0.025ms (+0.001ms): tcp_error (ip_conntrack_in)
Ingo
* Re: [patch] voluntary-preempt-2.6.9-rc1-bk4-Q8
2004-09-02 7:15 ` Ingo Molnar
@ 2004-09-02 7:31 ` Lee Revell
2004-09-02 7:46 ` Ingo Molnar
2004-09-02 23:25 ` Lee Revell
1 sibling, 1 reply; 15+ messages in thread
From: Lee Revell @ 2004-09-02 7:31 UTC (permalink / raw)
To: Ingo Molnar
Cc: Mark_H_Johnson, K.R. Foley, linux-kernel, Felipe Alfaro Solana,
Daniel Schmitt, alsa-devel
On Thu, 2004-09-02 at 03:15, Ingo Molnar wrote:
> * Lee Revell <rlrevell@joe-job.com> wrote:
>
> > Here are traces of a 145, 190, and 217 usec latencies in
> > netif_receive_skb:
> >
> > http://krustophenia.net/testresults.php?dataset=2.6.9-rc1-Q6#/var/www/2.6.9-rc1-Q6/trace2.txt
> > http://krustophenia.net/testresults.php?dataset=2.6.9-rc1-Q6#/var/www/2.6.9-rc1-Q6/trace3.txt
> > http://krustophenia.net/testresults.php?dataset=2.6.9-rc1-Q6#/var/www/2.6.9-rc1-Q6/trace4.txt
>
> these all seem to be single-packet processing latencies - it would be
> quite hard to make those codepaths preemptible.
>
I suspected as much; these are not a problem. The large latencies from
reading the /proc filesystem are a bit worrisome (trace1.txt); I will
report these again if they still happen with Q8.
Lee
* Re: [patch] voluntary-preempt-2.6.9-rc1-bk4-Q8
2004-09-02 7:31 ` Lee Revell
@ 2004-09-02 7:46 ` Ingo Molnar
2004-09-03 1:10 ` Rusty Russell
0 siblings, 1 reply; 15+ messages in thread
From: Ingo Molnar @ 2004-09-02 7:46 UTC (permalink / raw)
To: Lee Revell
Cc: Mark_H_Johnson, K.R. Foley, linux-kernel, Felipe Alfaro Solana,
Daniel Schmitt, alsa-devel, Rusty Russell, netfilter-devel
* Lee Revell <rlrevell@joe-job.com> wrote:
> > these all seem to be single-packet processing latencies - it would be
> > quite hard to make those codepaths preemptible.
>
> I suspected as much, these are not a problem. The large latencies
> from reading the /proc filesystem are a bit worrisome (trace1.txt), I
> will report these again if they still happen with Q8.
conntrack's ct_seq ops indeed seem to have latency problems - the quick
workaround is to disable conntrack.
The reason for the latency is that ct_seq_start() does a read_lock() on
ip_conntrack_lock and only ct_seq_stop() releases it - possibly
milliseconds later. But the whole conntrack /proc code is quite flawed:
	READ_LOCK(&ip_conntrack_lock);
	if (*pos >= ip_conntrack_htable_size)
		return NULL;

	bucket = kmalloc(sizeof(unsigned int), GFP_KERNEL);
	if (!bucket) {
		return ERR_PTR(-ENOMEM);
	}

	*bucket = *pos;
	return bucket;
#1: we kmalloc(GFP_KERNEL) with a spinlock held and softirqs off - ouch!
#2: why does it do the kmalloc() anyway? It could store the position in
the seq pointer just fine. No need to alloc an integer pointer to
store the value in ...
#3: to fix the latency, ct_seq_show() could take the ip_conntrack_lock
and could check the current index against ip_conntrack_htable_size.
There's not much point in making this non-preemptible, there's
a 4K granularity anyway.
Rusty, what's going on in this code?
Ingo
^ permalink raw reply [flat|nested] 15+ messages in thread* Re: [patch] voluntary-preempt-2.6.9-rc1-bk4-Q8
2004-09-02 7:46 ` Ingo Molnar
@ 2004-09-03 1:10 ` Rusty Russell
0 siblings, 0 replies; 15+ messages in thread
From: Rusty Russell @ 2004-09-03 1:10 UTC (permalink / raw)
To: Ingo Molnar
Cc: Daniel Schmitt, netfilter-devel, linux-kernel, K.R. Foley,
Lee Revell, Mark_H_Johnson, alsa-devel, Felipe Alfaro Solana
On Thu, 2004-09-02 at 17:46, Ingo Molnar wrote:
> Rusty, what's going on in this code?
Good question! Not my code, fortunately...
> #1: we kmalloc(GFP_KERNEL) with a spinlock held and softirqs off - ouch!
>
> #2: why does it do the kmalloc() anyway? It could store the position in
> the seq pointer just fine. No need to alloc an integer pointer to
> store the value in ...
>
> #3: to fix the latency, ct_seq_show() could take the ip_conntrack_lock
> and could check the current index against ip_conntrack_htable_size.
> There's not much point in making this non-preemptible, there's
> a 4K granularity anyway.
The code tries to put an entire hash bucket into a single seq_read().
That's not going to work if the hash is really deep. On the other hand,
not much will, and it's simple.
The lock is only needed on traversing: htable_size can't change after
init anyway, so it should be done in ct_seq_show.
Fix should be fairly simple...
Rusty.
--
Anyone who quotes me in their signature is an idiot -- Rusty Russell
* Re: [patch] voluntary-preempt-2.6.9-rc1-bk4-Q8
2004-09-02 7:15 ` Ingo Molnar
2004-09-02 7:31 ` Lee Revell
@ 2004-09-02 23:25 ` Lee Revell
2004-09-02 23:28 ` Ingo Molnar
1 sibling, 1 reply; 15+ messages in thread
From: Lee Revell @ 2004-09-02 23:25 UTC (permalink / raw)
To: Ingo Molnar
Cc: Mark_H_Johnson, K.R. Foley, linux-kernel, Felipe Alfaro Solana,
Daniel Schmitt, alsa-devel
On Thu, 2004-09-02 at 03:15, Ingo Molnar wrote:
> * Lee Revell <rlrevell@joe-job.com> wrote:
>
> > Here are traces of a 145, 190, and 217 usec latencies in
> > netif_receive_skb:
> >
> > http://krustophenia.net/testresults.php?dataset=2.6.9-rc1-Q6#/var/www/2.6.9-rc1-Q6/trace2.txt
> > http://krustophenia.net/testresults.php?dataset=2.6.9-rc1-Q6#/var/www/2.6.9-rc1-Q6/trace3.txt
> > http://krustophenia.net/testresults.php?dataset=2.6.9-rc1-Q6#/var/www/2.6.9-rc1-Q6/trace4.txt
>
> these all seem to be single-packet processing latencies - it would be
> quite hard to make those codepaths preemptible.
>
> i'd suggest to turn off things like netfilter and ip_conntrack (and
> other optional networking features that show up in the trace), they can
> only increase latency:
>
Do you see any optional networking features in the trace (other than
ip_conntrack)? I was under the impression that I had everything
optional disabled.
Lee
* Re: [patch] voluntary-preempt-2.6.9-rc1-bk4-Q8
2004-09-02 23:25 ` Lee Revell
@ 2004-09-02 23:28 ` Ingo Molnar
2004-09-02 23:32 ` Lee Revell
0 siblings, 1 reply; 15+ messages in thread
From: Ingo Molnar @ 2004-09-02 23:28 UTC (permalink / raw)
To: Lee Revell
Cc: Mark_H_Johnson, K.R. Foley, linux-kernel, Felipe Alfaro Solana,
Daniel Schmitt, alsa-devel
* Lee Revell <rlrevell@joe-job.com> wrote:
> Do you see any optional networking features in the trace (other than
> ip_conntrack)? I was under the impression that I had everything
> optional disabled.
yeah, it seems to be only ip_conntrack and netfilter (which conntrack
relies on).
Ingo
* Re: [patch] voluntary-preempt-2.6.9-rc1-bk4-Q8
2004-09-02 23:28 ` Ingo Molnar
@ 2004-09-02 23:32 ` Lee Revell
0 siblings, 0 replies; 15+ messages in thread
From: Lee Revell @ 2004-09-02 23:32 UTC (permalink / raw)
To: Ingo Molnar
Cc: Mark_H_Johnson, K.R. Foley, linux-kernel, Felipe Alfaro Solana,
Daniel Schmitt, alsa-devel
On Thu, 2004-09-02 at 19:28, Ingo Molnar wrote:
> * Lee Revell <rlrevell@joe-job.com> wrote:
>
> > Do you see any optional networking features in the trace (other than
> > ip_conntrack)? I was under the impression that I had everything
> > optional disabled.
>
> yeah, it seems to be only ip_conntrack and netfilter (which conntrack
> relies on).
>
FWIW these seem to slow down the single-packet path by only about 10%.
This is pretty good.
Lee
* Re: [patch] voluntary-preempt-2.6.9-rc1-bk4-Q8
2004-09-02 7:04 ` Lee Revell
2004-09-02 7:15 ` Ingo Molnar
@ 2004-09-02 7:17 ` Ingo Molnar
1 sibling, 0 replies; 15+ messages in thread
From: Ingo Molnar @ 2004-09-02 7:17 UTC (permalink / raw)
To: Lee Revell
Cc: Mark_H_Johnson, K.R. Foley, linux-kernel, Felipe Alfaro Solana,
Daniel Schmitt
* Lee Revell <rlrevell@joe-job.com> wrote:
> Also there is the rt_garbage_collect issue, previously reported. I
> have not seen this lately but I do not remember seeing that it was
> fixed.
i don't think it's fixed - please re-report it if it occurs again, as there
have been many changes.
Ingo
* Re: [patch] voluntary-preempt-2.6.9-rc1-bk4-Q8
2004-09-02 6:55 ` [patch] voluntary-preempt-2.6.9-rc1-bk4-Q8 Ingo Molnar
2004-09-02 7:04 ` Lee Revell
@ 2004-09-02 8:23 ` Ingo Molnar
1 sibling, 0 replies; 15+ messages in thread
From: Ingo Molnar @ 2004-09-02 8:23 UTC (permalink / raw)
To: Mark_H_Johnson
Cc: K.R. Foley, linux-kernel, Felipe Alfaro Solana, Daniel Schmitt,
Lee Revell, alsa-devel
* Ingo Molnar <mingo@elte.hu> wrote:
> this release fixes an artificial 0-1msec delay between hardirq arrival
> and softirq invocation. This should solve some of the ALSA artifacts
> reported by Mark H Johnson. It should also solve the rtl8139 problems
> - i've put such a card into a testbox and with -Q7 i had similar
> packet latency problems while with -Q8 it works just fine.
the rtl8139 problems are not fixed yet - i can still reproduce the
delayed packet issues.
Ingo
Thread overview: 15+ messages
2004-09-02 7:57 [patch] voluntary-preempt-2.6.9-rc1-bk4-Q8 mika.penttila
2004-09-02 8:32 ` Ingo Molnar
2004-09-02 9:06 ` Peter Zijlstra
-- strict thread matches above, loose matches on Subject: below --
2004-08-30 19:13 [patch] voluntary-preempt-2.6.9-rc1-bk4-Q5 Mark_H_Johnson
2004-09-02 6:33 ` Ingo Molnar
2004-09-02 6:55 ` [patch] voluntary-preempt-2.6.9-rc1-bk4-Q8 Ingo Molnar
2004-09-02 7:04 ` Lee Revell
2004-09-02 7:15 ` Ingo Molnar
2004-09-02 7:31 ` Lee Revell
2004-09-02 7:46 ` Ingo Molnar
2004-09-03 1:10 ` Rusty Russell
2004-09-02 23:25 ` Lee Revell
2004-09-02 23:28 ` Ingo Molnar
2004-09-02 23:32 ` Lee Revell
2004-09-02 7:17 ` Ingo Molnar
2004-09-02 8:23 ` Ingo Molnar