public inbox for linux-kernel@vger.kernel.org
* Re: [patch] voluntary-preempt-2.6.9-rc1-bk4-Q5
@ 2004-08-30 19:13 Mark_H_Johnson
  2004-08-30 19:21 ` Ingo Molnar
                   ` (2 more replies)
  0 siblings, 3 replies; 76+ messages in thread
From: Mark_H_Johnson @ 2004-08-30 19:13 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: K.R. Foley, linux-kernel, Felipe Alfaro Solana, Daniel Schmitt,
	Lee Revell

>i've uploaded -Q5 to:
> [snip the rest...]

Thanks.

This appears to be the first 2.6.x kernel I've run that has results
comparable to 2.4.x kernels with low latency patches and kernel preemption.
The few remaining symptoms I see include:

 - a few long (> 1 msec) delays in the real time CPU loop (no system calls)
 - varying time to complete the write system call (for audio) - much
different from 2.4
 - a couple of latency traces (> 700 usec) in the network driver

For reference, these tests were performed on the following SMP system:
  Dual 866 MHz Pentium III
  512 Mbyte memory
  IDE system disk (DMA enabled)
The basic test is Benno's latency test
(http://www.gardena.net/benno/linux/audio) with some slight modifications
to the tests to keep the second CPU busy (non real time CPU burner) and to
add network I/O tests. The 2.4 tests were run with 2.4.20, the 2.6 tests
were run with 2.6.9-rc1-Q5. On 2.6, voluntary_preemption,
kernel_preemption, hardirq_preemption, and softirq_preemption are all 1. I
also set
  /sys/block/hda/queue/max_sectors_kb = 32
  /sys/block/hda/queue/read_ahead_kb = 32
  /proc/sys/net/core/netdev_max_backlog = 8
and the audio driver was set to be non-threaded.

BASIC RESULTS
=============
Comparison of results between 2.6.x and 2.4.x; values in milliseconds.
The nominal value for the write operation is 1.45 msec; for the CPU loop,
1.16 msec.

          Max CPU Delta     Max Write Delta
Test     2.4.x     2.6.x    2.4.x     2.6.x
X11       0.10      0.16     0.05      0.65
/proc     0.07      0.17     0.05      0.65
net out   0.15      0.19     0.05      0.75
net in    0.17      0.23     0.05      0.95
dsk wrt   0.49      0.18     0.25      1.05
dsk copy  2.48      0.68     2.25      1.25
disk rd   3.03      1.61     2.75      1.35

LONG DELAYS
===========

Note that I still see over 110% worst-case overhead on a max priority real
time CPU task (no system calls) when doing heavy disk I/O on 2.6. That is
much better than 2.4, but still disturbing. On a dual CPU system like
mine, I would hope that the real time task would tend to stay on one CPU
while the other system activity stayed on the other CPU. However, the
results do not seem to indicate that behavior.

VARYING SYSTEM CALL TIMES
=========================

In 2.4, the duration of the write system call appears to be essentially
fixed, determined by the duration of the audio fragment. In 2.6, this
behavior is different. Looking at the chart in detail, it appears the
system queues up several write operations during the first few seconds of
testing; you can see this in the consistently low elapsed times for the
write system call. After that, the elapsed time for the write bounces up
and down in a sawtooth pattern over a 1 msec range. Could someone explain
the cause of this new behavior, and whether there is a setting to restore
the old behavior? I am concerned that this queueing adds latency to audio
operations (when trying to synchronize audio with other real time
behavior).

LONG NETWORK LATENCIES
======================

In about 25 minutes of heavy testing, I had two latency traces with
/proc/sys/kernel/preempt_max_latency set to 700. They had the same start /
end location with the long delay as follows:
  730 us, entries: 361
  ...
  started at rtl8139_poll+0x3c/0x160
  ended at   rtl8139_poll+0x100/0x160
  00000001 0.000ms (+0.000ms): rtl8139_poll (net_rx_action)
  00000001 0.140ms (+0.140ms): rtl8139_rx (rtl8139_poll)
  00000001 0.556ms (+0.416ms): alloc_skb (rtl8139_rx)
  ... remaining items all > +0.005ms ...

  731 us, entries: 360
  ...
  started at rtl8139_poll+0x3c/0x160
  ended at   rtl8139_poll+0x100/0x160
  00000001 0.000ms (+0.000ms): rtl8139_poll (net_rx_action)
  00000001 0.000ms (+0.000ms): rtl8139_rx (rtl8139_poll)
  00000001 0.002ms (+0.001ms): alloc_skb (rtl8139_rx)
  00000001 0.141ms (+0.139ms): kmem_cache_alloc (alloc_skb)
  00000001 0.211ms (+0.070ms): __kmalloc (alloc_skb)
  00000001 0.496ms (+0.284ms): eth_type_trans (rtl8139_rx)
  00000001 0.565ms (+0.068ms): netif_receive_skb (rtl8139_rx)
  ... remaining items all > +0.005ms ...

Still much better than my previous results (before setting
netdev_max_backlog).

I will be running some additional tests
 - reducing preempt_max_latency
 - running with softirq_preemption and hardirq_preemption = 0
to see if these uncover any further problems.

Thanks again for the good work.
--Mark H Johnson
  <mailto:Mark_H_Johnson@raytheon.com>


* Re: [patch] voluntary-preempt-2.6.9-rc1-bk4-Q8
@ 2004-09-02  7:57 mika.penttila
  2004-09-02  8:32 ` Ingo Molnar
  0 siblings, 1 reply; 76+ messages in thread
From: mika.penttila @ 2004-09-02  7:57 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: linux-kernel

Ingo,

I think there might be a problem with voluntary-preempt's handling of
softirqs. Namely, in cond_resched_softirq() you do __local_bh_enable() and
local_bh_disable(). But it may be the case that the softirq is handled
from ksoftirqd, and then preempt_count isn't elevated by SOFTIRQ_OFFSET
(only PF_SOFTIRQ is set). So the __local_bh_enable() actually makes
preempt_count negative, which might have bad effects. Or am I missing
something?

Mika




end of thread, other threads:[~2004-09-04 20:04 UTC | newest]

Thread overview: 76+ messages
2004-08-30 19:13 [patch] voluntary-preempt-2.6.9-rc1-bk4-Q5 Mark_H_Johnson
2004-08-30 19:21 ` Ingo Molnar
2004-09-01 12:31   ` [patch] voluntary-preempt-2.6.9-rc1-bk4-Q5 - netdev_max_back_log is too small P.O. Gaillard
2004-09-01 13:05     ` Ingo Molnar
2004-09-02 11:24       ` [patch] voluntary-preempt-2.6.9-rc1-bk4-Q6 - network is no longer smooth P.O. Gaillard
2004-09-02 11:28         ` Ingo Molnar
2004-09-02 15:26           ` P.O. Gaillard
2004-08-31  8:49 ` [patch] voluntary-preempt-2.6.9-rc1-bk4-Q5 Ingo Molnar
2004-09-02  6:33 ` Ingo Molnar
2004-09-02  6:55   ` [patch] voluntary-preempt-2.6.9-rc1-bk4-Q8 Ingo Molnar
2004-09-02  7:04     ` Lee Revell
2004-09-02  7:15       ` Ingo Molnar
2004-09-02  7:31         ` Lee Revell
2004-09-02  7:46           ` Ingo Molnar
2004-09-03  1:10             ` Rusty Russell
2004-09-02 23:25         ` Lee Revell
2004-09-02 23:28           ` Ingo Molnar
2004-09-02 23:32             ` Lee Revell
2004-09-02  7:17       ` Ingo Molnar
2004-09-02  8:23     ` Ingo Molnar
2004-09-02 11:10     ` [patch] voluntary-preempt-2.6.9-rc1-bk4-Q9 Ingo Molnar
2004-09-02 12:14       ` Thomas Charbonnel
2004-09-02 13:16       ` Thomas Charbonnel
2004-09-02 13:23         ` Ingo Molnar
2004-09-02 14:38           ` Thomas Charbonnel
2004-09-02 21:57       ` [patch] voluntary-preempt-2.6.9-rc1-bk4-R0 Ingo Molnar
2004-09-02 22:06         ` Lee Revell
2004-09-02 22:14           ` Ingo Molnar
2004-09-02 22:15             ` Lee Revell
2004-09-03  0:24             ` Lee Revell
2004-09-03  3:17               ` Eric St-Laurent
2004-09-03  6:26                 ` Lee Revell
2004-09-03  6:36                   ` Ingo Molnar
2004-09-03  6:49                     ` Lee Revell
2004-09-03  7:01                       ` Ingo Molnar
2004-09-03  7:05                       ` Ingo Molnar
2004-09-03  7:40                         ` Lee Revell
2004-09-03  7:50                           ` Free Ekanayaka
2004-09-03  8:05                             ` Lee Revell
2004-09-03  9:05                               ` Free Ekanayaka
2004-09-03  9:25                               ` [patch] voluntary-preempt-2.6.9-rc1-bk4-R1 Ingo Molnar
2004-09-03  9:50                                 ` Luke Yelavich
2004-09-03 10:29                                   ` Ingo Molnar
2004-09-03 10:43                                     ` Luke Yelavich
2004-09-03 11:33                                 ` Thomas Charbonnel
2004-09-03 11:49                                   ` Ingo Molnar
2004-09-03 12:05                                     ` Thomas Charbonnel
2004-09-03 16:14                                     ` Thomas Charbonnel
2004-09-03 17:36                                       ` Thomas Charbonnel
2004-09-03 11:36                                 ` [patch] voluntary-preempt-2.6.9-rc1-bk4-R2 Ingo Molnar
2004-09-03  8:09                           ` [patch] voluntary-preempt-2.6.9-rc1-bk4-R0 Luke Yelavich
2004-09-03  8:13                             ` Lee Revell
2004-09-03  8:21                               ` Luke Yelavich
2004-09-03 12:52                               ` Luke Yelavich
2004-09-03 18:09                         ` K.R. Foley
2004-09-03 11:04         ` K.R. Foley
2004-09-03 17:02         ` K.R. Foley
2004-09-03 20:40           ` Lee Revell
2004-09-03 17:10         ` K.R. Foley
2004-09-03 18:17           ` Ingo Molnar
2004-09-03 18:36             ` K.R. Foley
2004-09-03 19:30             ` [patch] voluntary-preempt-2.6.9-rc1-bk4-R3 Ingo Molnar
2004-09-03 19:49               ` K.R. Foley
2004-09-04  3:39               ` K.R. Foley
2004-09-04  3:43               ` K.R. Foley
2004-09-04  6:41                 ` Ingo Molnar
2004-09-04 12:28                   ` K.R. Foley
2004-09-04  8:57                 ` Ingo Molnar
2004-09-04 10:16                   ` Lee Revell
2004-09-04 14:35                   ` K.R. Foley
2004-09-04 20:05                     ` Ingo Molnar
2004-09-03 18:39           ` [patch] voluntary-preempt-2.6.9-rc1-bk4-R0 Ingo Molnar
2004-09-03 18:41             ` K.R. Foley
  -- strict thread matches above, loose matches on Subject: below --
2004-09-02  7:57 [patch] voluntary-preempt-2.6.9-rc1-bk4-Q8 mika.penttila
2004-09-02  8:32 ` Ingo Molnar
2004-09-02  9:06   ` Peter Zijlstra
