public inbox for linux-kernel@vger.kernel.org
* Re: [patch] voluntary-preempt-2.6.9-rc1-bk4-Q5
@ 2004-08-31 15:17 Mark_H_Johnson
  2004-08-31 17:20 ` Lee Revell
  0 siblings, 1 reply; 57+ messages in thread
From: Mark_H_Johnson @ 2004-08-31 15:17 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Mark_H_Johnson, K.R. Foley, linux-kernel, Felipe Alfaro Solana,
	Daniel Schmitt, Lee Revell

>I will be running some additional tests
>- reducing preempt_max_latency
>- running with softirq and hardirq_preemption=0
>to see if these uncover any further problems.

Same system / kernel combination as described previously.

With preempt_max_latency=500, I went from a handful of traces to 63 in
a 25 minute test run. Most traces occurred during the last half of the
test, while the disk was active (write, copy, read). I will send a copy
of the full traces separately (not to linux-kernel), but here is a
summary of the information gathered. If anyone else wants the full
traces, please send me an email separately.

Note - I did not repeat the network poll / number of read cycles
problem since we are already working on that one.

Cascade
=======
Occurred five times, with latencies of
 latency: 964 us, entries: 285 (285)
 latency: 964 us, entries: 285 (285)
 latency: 1827 us, entries: 454 (454)
 latency: 1111 us, entries: 318 (318)
 latency: 969 us, entries: 279 (279)

Starts / ends at
 => started at: run_timer_softirq+0x12f/0x2a0
 => ended at:   run_timer_softirq+0x10a/0x2a0

For example:
00000001 0.000ms (+0.000ms): run_timer_softirq (___do_softirq)
00000001 0.000ms (+0.000ms): cascade (run_timer_softirq)
00000001 0.005ms (+0.004ms): cascade (run_timer_softirq)
00000001 0.009ms (+0.004ms): cascade (run_timer_softirq)
00000001 0.013ms (+0.004ms): cascade (run_timer_softirq)
00000001 0.018ms (+0.004ms): cascade (run_timer_softirq)
...
00000001 0.891ms (+0.004ms): cascade (run_timer_softirq)
00000001 0.895ms (+0.004ms): cascade (run_timer_softirq)
00000001 0.896ms (+0.000ms): internal_add_timer (cascade)
00010001 0.899ms (+0.003ms): do_IRQ (run_timer_softirq)
00010002 0.899ms (+0.000ms): mask_and_ack_level_ioapic_irq (do_IRQ)
00010002 0.899ms (+0.000ms): mask_IO_APIC_irq
(mask_and_ack_level_ioapic_irq)
00010003 0.900ms (+0.000ms): __mask_IO_APIC_irq (mask_IO_APIC_irq)
00010003 0.900ms (+0.000ms): __modify_IO_APIC_irq (__mask_IO_APIC_irq)
00010002 0.914ms (+0.013ms): generic_redirect_hardirq (do_IRQ)
...
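The repeating cascade entries above suggest the cost is proportional to
how many timers sit in the coarse wheel bucket being cascaded down. For
what it's worth, here is a toy userspace model of that pattern (names,
sizes, and hashing are illustrative only, not the kernel's actual
timer-wheel code):

```c
#include <stddef.h>

#define FINE_SLOTS 256

struct toy_timer {
    unsigned long expires;
    struct toy_timer *next;
};

/* stands in for internal_add_timer(): hash one timer into the fine wheel */
static void add_fine(struct toy_timer *fine[], struct toy_timer *t)
{
    unsigned long idx = t->expires & (FINE_SLOTS - 1);
    t->next = fine[idx];
    fine[idx] = t;
}

/* stands in for cascade(): empty one coarse bucket into the fine wheel.
 * Returns the number of timers moved - all of this work happens inside
 * one run_timer_softirq pass, which is why the bucket population
 * (285, 454, ...) tracks the entry counts and latencies above. */
int cascade_bucket(struct toy_timer *fine[], struct toy_timer **coarse_bucket)
{
    int moved = 0;
    struct toy_timer *t = *coarse_bucket;

    *coarse_bucket = NULL;
    while (t != NULL) {
        struct toy_timer *next = t->next;
        add_fine(fine, t);
        moved++;
        t = next;
    }
    return moved;
}
```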


Long Duration Trace Entries
===========================

Each of these traces had a delay of about 1/2 msec at one step.

#1 - audio driver
 latency: 621 us, entries: 28 (28)
    -----------------
    | task: latencytest/11492, uid:0 nice:0 policy:1 rt_prio:99
    -----------------
 => started at: snd_ensoniq_playback1_prepare+0x74/0x180
 => ended at:   snd_ensoniq_playback1_prepare+0x11d/0x180
=======>
00000001 0.000ms (+0.000ms): snd_ensoniq_playback1_prepare
(snd_pcm_do_prepare)
00000001 0.014ms (+0.014ms): snd_es1371_dac1_rate
(snd_ensoniq_playback1_prepare)
00000001 0.014ms (+0.000ms): snd_es1371_wait_src_ready
(snd_es1371_dac1_rate)
00000001 0.562ms (+0.548ms): snd_es1371_src_read (snd_es1371_dac1_rate)
00000001 0.562ms (+0.000ms): snd_es1371_wait_src_ready
(snd_es1371_src_read)
00000001 0.578ms (+0.015ms): snd_es1371_wait_src_ready
(snd_es1371_src_read)
00000001 0.585ms (+0.006ms): snd_es1371_src_write (snd_es1371_dac1_rate)
00000001 0.585ms (+0.000ms): snd_es1371_wait_src_ready
(snd_es1371_src_write)
00000001 0.601ms (+0.015ms): snd_es1371_src_write (snd_es1371_dac1_rate)
00000001 0.601ms (+0.000ms): snd_es1371_wait_src_ready
(snd_es1371_src_write)
00000001 0.602ms (+0.001ms): snd_es1371_wait_src_ready
(snd_es1371_dac1_rate)
00000001 0.616ms (+0.013ms): smp_apic_timer_interrupt
(snd_ensoniq_playback1_prepare)

or

 latency: 663 us, entries: 41 (41)
    -----------------
    | task: latencytest/11492, uid:0 nice:0 policy:1 rt_prio:99
    -----------------
 => started at: snd_ensoniq_playback1_prepare+0x74/0x180
 => ended at:   snd_ensoniq_playback1_prepare+0x11d/0x180
=======>
00000001 0.000ms (+0.000ms): snd_ensoniq_playback1_prepare
(snd_pcm_do_prepare)
00000001 0.004ms (+0.004ms): snd_es1371_dac1_rate
(snd_ensoniq_playback1_prepare)
00000001 0.005ms (+0.000ms): snd_es1371_wait_src_ready
(snd_es1371_dac1_rate)
00000001 0.006ms (+0.001ms): snd_es1371_src_read (snd_es1371_dac1_rate)
00000001 0.006ms (+0.000ms): snd_es1371_wait_src_ready
(snd_es1371_src_read)
00000001 0.019ms (+0.012ms): snd_es1371_wait_src_ready
(snd_es1371_src_read)
00000001 0.607ms (+0.588ms): snd_es1371_src_write (snd_es1371_dac1_rate)
00000001 0.608ms (+0.000ms): snd_es1371_wait_src_ready
(snd_es1371_src_write)
00000001 0.624ms (+0.016ms): snd_es1371_src_write (snd_es1371_dac1_rate)
00000001 0.624ms (+0.000ms): snd_es1371_wait_src_ready
(snd_es1371_src_write)
00000001 0.626ms (+0.001ms): snd_es1371_wait_src_ready
(snd_es1371_dac1_rate)
00000001 0.639ms (+0.013ms): smp_apic_timer_interrupt
(snd_ensoniq_playback1_prepare)
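Both audio traces stall around snd_es1371_wait_src_ready(), which polls
a sample-rate-converter status register until a busy bit clears. A
minimal userspace model of that polling pattern follows; the register
access is simulated, and the bit value and poll counts are illustrative,
not the real es1371 hardware's:

```c
#include <stdint.h>

#define SRC_BUSY (1u << 23)     /* illustrative busy bit, not the real layout */

unsigned src_reads;             /* counts polls, i.e. time spent spinning */

/* stands in for reading the SRC register over the bus */
static uint32_t fake_src_read(void)
{
    src_reads++;
    return (src_reads < 100) ? SRC_BUSY : 0;    /* busy for the first 99 polls */
}

/* the wait pattern: bounded spin until the busy bit clears.  Each
 * iteration of the real loop is a slow I/O read, so a run of polls can
 * easily add up to the ~0.5 msec gaps seen in the traces above. */
int wait_src_ready(void)
{
    for (unsigned t = 0; t < 1000; t++) {
        if (!(fake_src_read() & SRC_BUSY))
            return 0;           /* ready */
    }
    return -1;                  /* timed out */
}
```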

#2 - Scheduler

preemption latency trace v1.0.2
-------------------------------
 latency: 567 us, entries: 48 (48)
    -----------------
    | task: cpu_burn/9444, uid:0 nice:10 policy:0 rt_prio:0
    -----------------
 => started at: schedule+0x51/0x7b0
 => ended at:   schedule+0x35b/0x7b0
=======>
00000001 0.000ms (+0.000ms): schedule (io_schedule)
00000001 0.001ms (+0.001ms): sched_clock (schedule)
00000002 0.001ms (+0.000ms): deactivate_task (schedule)
00000002 0.002ms (+0.000ms): dequeue_task (deactivate_task)
00000002 0.549ms (+0.546ms): __switch_to (schedule)
00000002 0.550ms (+0.001ms): finish_task_switch (schedule)
00000002 0.550ms (+0.000ms): smp_apic_timer_interrupt (finish_task_switch)
00010002 0.551ms (+0.000ms): profile_tick (smp_apic_timer_interrupt)
00010002 0.551ms (+0.000ms): profile_hook (profile_tick)

[I have a LOT more traces where __switch_to has the big time delay]
but also note...

preemption latency trace v1.0.2
-------------------------------
 latency: 591 us, entries: 62 (62)
    -----------------
    | task: fam/4524, uid:0 nice:0 policy:0 rt_prio:0
    -----------------
 => started at: schedule+0x51/0x7b0
 => ended at:   schedule+0x35b/0x7b0
=======>
00000001 0.000ms (+0.000ms): schedule (io_schedule)
00000001 0.000ms (+0.000ms): sched_clock (schedule)
00000002 0.066ms (+0.066ms): deactivate_task (schedule)
00000002 0.066ms (+0.000ms): dequeue_task (deactivate_task)
00000002 0.475ms (+0.408ms): dequeue_task (schedule)
00000002 0.475ms (+0.000ms): recalc_task_prio (schedule)
00000002 0.475ms (+0.000ms): effective_prio (recalc_task_prio)
00000002 0.475ms (+0.000ms): enqueue_task (schedule)
00000002 0.557ms (+0.081ms): __switch_to (schedule)
00000002 0.558ms (+0.000ms): finish_task_switch (schedule)
00000002 0.558ms (+0.000ms): smp_apic_timer_interrupt (finish_task_switch)
00010002 0.559ms (+0.000ms): profile_tick (smp_apic_timer_interrupt)
where dequeue_task can take a while as well, or this one:

preemption latency trace v1.0.2
-------------------------------
 latency: 591 us, entries: 77 (77)
    -----------------
    | task: ksoftirqd/0/3, uid:0 nice:-10 policy:0 rt_prio:0
    -----------------
 => started at: schedule+0x51/0x7b0
 => ended at:   schedule+0x35b/0x7b0
=======>
00000001 0.000ms (+0.000ms): schedule (io_schedule)
00000001 0.000ms (+0.000ms): sched_clock (schedule)
00000002 0.000ms (+0.000ms): deactivate_task (schedule)
00000002 0.000ms (+0.000ms): dequeue_task (deactivate_task)
00000002 0.480ms (+0.479ms): load_balance_newidle (schedule)
00000002 0.514ms (+0.034ms): find_busiest_group (load_balance_newidle)
00000002 0.554ms (+0.039ms): find_next_bit (find_busiest_group)
00000002 0.555ms (+0.001ms): find_next_bit (find_busiest_group)
00000002 0.555ms (+0.000ms): find_busiest_queue (load_balance_newidle)
00000002 0.556ms (+0.000ms): find_next_bit (find_busiest_queue)
00000002 0.557ms (+0.000ms): double_lock_balance (load_balance_newidle)
00000003 0.557ms (+0.000ms): move_tasks (load_balance_newidle)
00000003 0.559ms (+0.002ms): find_next_bit (move_tasks)
00000003 0.560ms (+0.000ms): find_next_bit (move_tasks)
00000003 0.561ms (+0.000ms): find_next_bit (move_tasks)
04000002 0.563ms (+0.002ms): __switch_to (schedule)
04000002 0.564ms (+0.000ms): finish_task_switch (schedule)

where it appears load balancing takes a long time.
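The load_balance_newidle trace above spends its time walking CPU masks
with find_next_bit and comparing run-queue loads. Here is a toy model
of that scan pattern (a sketch only; names, the mask width, and the
load metric are illustrative, not the scheduler's actual code):

```c
#define NR_CPUS 8

/* stands in for find_next_bit(): next set bit at or after 'from' */
int next_cpu(unsigned long mask, int from)
{
    for (int i = from; i < NR_CPUS; i++)
        if (mask & (1ul << i))
            return i;
    return NR_CPUS;
}

/* stands in for find_busiest_queue(): scan every online CPU's load.
 * The repeated find_next_bit calls in the trace are this walk. */
int busiest_cpu(unsigned long online, const int nr_running[NR_CPUS])
{
    int best = -1, best_load = 0;

    for (int cpu = next_cpu(online, 0); cpu < NR_CPUS;
         cpu = next_cpu(online, cpu + 1)) {
        if (nr_running[cpu] > best_load) {
            best_load = nr_running[cpu];
            best = cpu;
        }
    }
    return best;
}
```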

#3 - kmap / kunmap

preemption latency trace v1.0.2
-------------------------------
 latency: 602 us, entries: 53 (53)
    -----------------
    | task: cp/11501, uid:0 nice:0 policy:0 rt_prio:0
    -----------------
 => started at: kmap_atomic+0x23/0xe0
 => ended at:   kunmap_atomic+0x7b/0xa0
=======>
00000001 0.000ms (+0.000ms): kmap_atomic (file_read_actor)
00000001 0.000ms (+0.000ms): page_address (file_read_actor)
00000001 0.000ms (+0.000ms): __copy_to_user_ll (file_read_actor)
00000001 0.502ms (+0.501ms): smp_apic_timer_interrupt (__copy_to_user_ll)
00010001 0.502ms (+0.000ms): profile_tick (smp_apic_timer_interrupt)
00010001 0.502ms (+0.000ms): profile_hook (profile_tick)
00010002 0.502ms (+0.000ms): notifier_call_chain (profile_hook)
00010001 0.570ms (+0.068ms): profile_hit (smp_apic_timer_interrupt)
00010001 0.571ms (+0.000ms): update_process_times
(smp_apic_timer_interrupt)
00010001 0.571ms (+0.000ms): update_one_process (update_process_times)
00010001 0.571ms (+0.000ms): run_local_timers (update_process_times)

or

preemption latency trace v1.0.2
-------------------------------
 latency: 615 us, entries: 75 (75)
    -----------------
    | task: cat/11844, uid:0 nice:0 policy:0 rt_prio:0
    -----------------
 => started at: kmap_atomic+0x23/0xe0
 => ended at:   kunmap_atomic+0x7b/0xa0
=======>
00000001 0.000ms (+0.000ms): kmap_atomic (file_read_actor)
00000001 0.000ms (+0.000ms): page_address (file_read_actor)
00000001 0.000ms (+0.000ms): __copy_to_user_ll (file_read_actor)
00000001 0.563ms (+0.562ms): smp_apic_timer_interrupt (__copy_to_user_ll)
00010001 0.563ms (+0.000ms): profile_tick (smp_apic_timer_interrupt)
00010001 0.563ms (+0.000ms): profile_hook (profile_tick)
00010002 0.564ms (+0.000ms): notifier_call_chain (profile_hook)
00010001 0.564ms (+0.000ms): profile_hit (smp_apic_timer_interrupt)
00010001 0.564ms (+0.000ms): update_process_times
(smp_apic_timer_interrupt)
00010001 0.564ms (+0.000ms): update_one_process (update_process_times)

#4 - mmap

preemption latency trace v1.0.2
-------------------------------
 latency: 660 us, entries: 48 (48)
    -----------------
    | task: get_ltrace.sh/12120, uid:0 nice:-20 policy:0 rt_prio:0
    -----------------
 => started at: cond_resched_lock+0x7b/0x140
 => ended at:   exit_mmap+0x168/0x210
=======>
00000001 0.000ms (+0.000ms): touch_preempt_timing (cond_resched_lock)
00000001 0.000ms (+0.000ms): __bitmap_weight (unmap_vmas)
00000001 0.000ms (+0.000ms): vm_acct_memory (exit_mmap)
00000001 0.001ms (+0.000ms): clear_page_tables (exit_mmap)
00010001 0.520ms (+0.518ms): do_IRQ (clear_page_tables)
00010002 0.564ms (+0.044ms): ack_edge_ioapic_irq (do_IRQ)
00010002 0.564ms (+0.000ms): generic_redirect_hardirq (do_IRQ)
00010001 0.564ms (+0.000ms): generic_handle_IRQ_event (do_IRQ)
00010001 0.564ms (+0.000ms): timer_interrupt (generic_handle_IRQ_event)
00010002 0.565ms (+0.001ms): mark_offset_tsc (timer_interrupt)
00010002 0.618ms (+0.052ms): do_timer (timer_interrupt)
00010002 0.618ms (+0.000ms): update_wall_time (do_timer)
00010002 0.618ms (+0.000ms): update_wall_time_one_tick (update_wall_time)
00010002 0.619ms (+0.000ms): generic_note_interrupt (do_IRQ)
00010002 0.619ms (+0.000ms): end_edge_ioapic_irq (do_IRQ)
00000002 0.619ms (+0.000ms): do_softirq (do_IRQ)

#5 - network poll

preemption latency trace v1.0.2
-------------------------------
 latency: 753 us, entries: 371 (371)
    -----------------
    | task: ksoftirqd/1/5, uid:0 nice:-10 policy:0 rt_prio:0
    -----------------
 => started at: rtl8139_poll+0x3c/0x160
 => ended at:   rtl8139_poll+0x100/0x160
=======>
00000001 0.000ms (+0.000ms): rtl8139_poll (net_rx_action)
00000001 0.000ms (+0.000ms): rtl8139_rx (rtl8139_poll)
00000001 0.002ms (+0.001ms): alloc_skb (rtl8139_rx)
00000001 0.002ms (+0.000ms): kmem_cache_alloc (alloc_skb)
00000001 0.002ms (+0.000ms): __kmalloc (alloc_skb)
00000001 0.004ms (+0.002ms): eth_type_trans (rtl8139_rx)
00000001 0.005ms (+0.000ms): netif_receive_skb (rtl8139_rx)
00000002 0.008ms (+0.002ms): packet_rcv_spkt (netif_receive_skb)
00000002 0.008ms (+0.000ms): skb_clone (packet_rcv_spkt)
00000002 0.009ms (+0.000ms): kmem_cache_alloc (skb_clone)
00000002 0.078ms (+0.069ms): memcpy (skb_clone)
00010002 0.498ms (+0.419ms): do_IRQ (skb_clone)
00010003 0.498ms (+0.000ms): ack_edge_ioapic_irq (do_IRQ)
00010003 0.498ms (+0.000ms): generic_redirect_hardirq (do_IRQ)
00010002 0.499ms (+0.000ms): generic_handle_IRQ_event (do_IRQ)
00010002 0.499ms (+0.000ms): timer_interrupt (generic_handle_IRQ_event)
00010003 0.568ms (+0.068ms): mark_offset_tsc (timer_interrupt)
00010003 0.582ms (+0.014ms): do_timer (timer_interrupt)
00010003 0.582ms (+0.000ms): update_wall_time (do_timer)

or

preemption latency trace v1.0.2
-------------------------------
 latency: 752 us, entries: 395 (395)
    -----------------
    | task: ksoftirqd/1/5, uid:0 nice:-10 policy:0 rt_prio:0
    -----------------
 => started at: rtl8139_poll+0x3c/0x160
 => ended at:   rtl8139_poll+0x100/0x160
=======>
00000001 0.000ms (+0.000ms): rtl8139_poll (net_rx_action)
00000001 0.000ms (+0.000ms): rtl8139_rx (rtl8139_poll)
00000001 0.002ms (+0.001ms): alloc_skb (rtl8139_rx)
00000001 0.002ms (+0.000ms): kmem_cache_alloc (alloc_skb)
00000001 0.002ms (+0.000ms): __kmalloc (alloc_skb)
00000001 0.005ms (+0.002ms): eth_type_trans (rtl8139_rx)
00000001 0.146ms (+0.140ms): netif_receive_skb (rtl8139_rx)
00000002 0.566ms (+0.420ms): packet_rcv_spkt (netif_receive_skb)
00000002 0.567ms (+0.000ms): skb_clone (packet_rcv_spkt)
00000002 0.567ms (+0.000ms): kmem_cache_alloc (skb_clone)
00000002 0.568ms (+0.000ms): memcpy (skb_clone)
00000002 0.570ms (+0.001ms): strlcpy (packet_rcv_spkt)

Separately I ran a series of tests with:
  preempt_max_latency=500
  hardirq_preemption=0
  softirq_preemption=0
which should be similar to the configuration I used in 2.4 kernels.
There were > 100 latency traces (my script stops at 100) in the same
25 minute test. In addition to the traces listed above, I had the
following problems.

RT Run Flush
============

preemption latency trace v1.0.2
-------------------------------
 latency: 1592 us, entries: 4000 (6306)
    -----------------
    | task: latencytest/6440, uid:0 nice:0 policy:1 rt_prio:99
    -----------------
 => started at: smp_apic_timer_interrupt+0x43/0x130
 => ended at:   smp_apic_timer_interrupt+0xaa/0x130

...
00000101 0.041ms (+0.001ms): add_entropy_words (extract_entropy)
00000101 0.042ms (+0.000ms): SHATransform (extract_entropy)
00000101 0.042ms (+0.000ms): memcpy (SHATransform)
00000101 0.044ms (+0.001ms): add_entropy_words (extract_entropy)
00000101 0.045ms (+0.000ms): add_entropy_words (extract_entropy)
00000101 0.046ms (+0.001ms): credit_entropy_store (extract_entropy)
00000102 0.047ms (+0.001ms): __wake_up (extract_entropy)
00000103 0.047ms (+0.000ms): __wake_up_common (__wake_up)
00000101 0.048ms (+0.000ms): SHATransform (extract_entropy)
00000101 0.048ms (+0.000ms): memcpy (SHATransform)
00000101 0.050ms (+0.001ms): add_entropy_words (extract_entropy)
00000101 0.050ms (+0.000ms): SHATransform (extract_entropy)
00000101 0.050ms (+0.000ms): memcpy (SHATransform)
00000101 0.052ms (+0.001ms): add_entropy_words (extract_entropy)
00000201 0.053ms (+0.001ms): local_bh_enable (rt_run_flush)
00000101 0.053ms (+0.000ms): cond_resched_all (rt_run_flush)
00000101 0.053ms (+0.000ms): cond_resched_softirq (rt_run_flush)
00000201 0.054ms (+0.000ms): local_bh_enable (rt_run_flush)
00000101 0.054ms (+0.000ms): cond_resched_all (rt_run_flush)
00000101 0.054ms (+0.000ms): cond_resched_softirq (rt_run_flush)
... the last 3 lines repeat over 1000 times and fill
the trace buffer completely ...

The above sequence occurred twice during testing.
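The three repeating lines look like rt_run_flush() offering to
reschedule once per hash bucket as it walks the routing cache. A toy
model of that walk (bucket count and names are illustrative, not the
actual kernel code) shows why the per-bucket resched check itself can
fill a trace buffer:

```c
#define RT_HASH_BUCKETS 2048

unsigned resched_checks;        /* counts cond_resched-style calls */

/* stands in for cond_resched_softirq(): unlock, maybe schedule, relock */
static void cond_resched_model(void)
{
    resched_checks++;
}

/* flush every bucket; returns entries freed.  With thousands of
 * buckets, the resched check runs thousands of times - one trace
 * entry per call, as in the repeating tail above. */
int flush_hash(const int bucket_len[RT_HASH_BUCKETS])
{
    int freed = 0;

    for (int i = 0; i < RT_HASH_BUCKETS; i++) {
        freed += bucket_len[i];     /* free the chain under the lock */
        cond_resched_model();       /* offered once per bucket */
    }
    return freed;
}
```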

Short But Long
==============

preemption latency trace v1.0.2
-------------------------------
 latency: 549 us, entries: 4 (4)
    -----------------
    | task: kblockd/1/11, uid:0 nice:-10 policy:0 rt_prio:0
    -----------------
 => started at: worker_thread+0x22d/0x3a0
 => ended at:   worker_thread+0x2a2/0x3a0
=======>
00000001 0.000ms (+0.000ms): worker_thread (kthread)
00000001 0.000ms (+0.000ms): __wake_up (worker_thread)
00000002 0.549ms (+0.549ms): __wake_up_common (__wake_up)
00000001 0.550ms (+0.000ms): sub_preempt_count (worker_thread)

or

preemption latency trace v1.0.2
-------------------------------
 latency: 551 us, entries: 4 (4)
    -----------------
    | task: kblockd/1/11, uid:0 nice:-10 policy:0 rt_prio:0
    -----------------
 => started at: worker_thread+0x22d/0x3a0
 => ended at:   worker_thread+0x2a2/0x3a0
=======>
00000001 0.000ms (+0.000ms): worker_thread (kthread)
00000001 0.000ms (+0.000ms): __wake_up (worker_thread)
00000002 0.000ms (+0.000ms): __wake_up_common (__wake_up)
00000001 0.552ms (+0.551ms): sub_preempt_count (worker_thread)

or this one, which apparently preempts the real time task [why??].
Now that I look, the first set of tests preempted the max priority
real time application six (6) times and the second set of tests
preempted the RT application thirty (30) times.

preemption latency trace v1.0.2
-------------------------------
 latency: 566 us, entries: 13 (13)
    -----------------
    | task: latencytest/7959, uid:0 nice:0 policy:1 rt_prio:99
    -----------------
 => started at: do_IRQ+0x19/0x290
 => ended at:   do_IRQ+0x1cf/0x290
=======>
00010000 0.000ms (+0.000ms): do_IRQ (common_interrupt)
00010000 0.000ms (+0.000ms): do_IRQ (<08049b20>)
00010001 0.000ms (+0.000ms): ack_edge_ioapic_irq (do_IRQ)
00010001 0.000ms (+0.000ms): generic_redirect_hardirq (do_IRQ)
00010000 0.000ms (+0.000ms): generic_handle_IRQ_event (do_IRQ)
00010000 0.001ms (+0.000ms): timer_interrupt (generic_handle_IRQ_event)
00010001 0.549ms (+0.548ms): mark_offset_tsc (timer_interrupt)
00010001 0.564ms (+0.014ms): do_timer (timer_interrupt)
00010001 0.564ms (+0.000ms): update_wall_time (do_timer)
00010001 0.564ms (+0.000ms): update_wall_time_one_tick (update_wall_time)
00010001 0.565ms (+0.001ms): generic_note_interrupt (do_IRQ)
00010001 0.566ms (+0.000ms): end_edge_ioapic_irq (do_IRQ)
00000001 0.566ms (+0.000ms): sub_preempt_count (do_IRQ)

Another Long Time
=================

#1 - kmap / kunmap

Similar start / end locations to those above, but a different cause.

preemption latency trace v1.0.2
-------------------------------
 latency: 696 us, entries: 131 (131)
    -----------------
    | task: sleep/8854, uid:0 nice:-20 policy:0 rt_prio:0
    -----------------
 => started at: kmap_atomic+0x23/0xe0
 => ended at:   kunmap_atomic+0x7b/0xa0
=======>
00000001 0.000ms (+0.000ms): kmap_atomic (do_anonymous_page)
00000001 0.000ms (+0.000ms): page_address (do_anonymous_page)
00010001 0.413ms (+0.413ms): do_IRQ (do_anonymous_page)
00010002 0.452ms (+0.038ms): ack_edge_ioapic_irq (do_IRQ)
00010002 0.452ms (+0.000ms): generic_redirect_hardirq (do_IRQ)
00010001 0.454ms (+0.001ms): generic_handle_IRQ_event (do_IRQ)
00010001 0.488ms (+0.034ms): timer_interrupt (generic_handle_IRQ_event)
00010002 0.557ms (+0.069ms): mark_offset_tsc (timer_interrupt)
00010002 0.573ms (+0.015ms): do_timer (timer_interrupt)
00010002 0.574ms (+0.000ms): update_wall_time (do_timer)
00010002 0.574ms (+0.000ms): update_wall_time_one_tick (update_wall_time)

Perhaps the most disturbing finding is that the max priority RT
application can get preempted for a long time, even though there is:
 - only one real time task
 - two CPUs to do work

  --Mark


* Re: [patch] voluntary-preempt-2.6.9-rc1-bk4-Q5
@ 2004-09-01 15:21 Mark_H_Johnson
  2004-09-02 22:24 ` Ingo Molnar
  0 siblings, 1 reply; 57+ messages in thread
From: Mark_H_Johnson @ 2004-09-01 15:21 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: K.R. Foley, linux-kernel, Felipe Alfaro Solana, Daniel Schmitt,
	Lee Revell

>... Need to increase that data area so
>#define PERCPU_ENOUGH_ROOM 196608
>or something similar (should leave about 50K free for modules).
>
>I will rebuild with this change plus the latest of the others.

This booted fine today.

I reported the results of the audio patch separately. That seems to have
removed the audio latencies I saw previously.

A Whopper!
==========

A latency trace > 20 msec was seen (once). It starts out badly and then
gets stuck in a 1 msec loop.

preemption latency trace v1.0.2
-------------------------------
 latency: 23120 us, entries: 406 (406)
    -----------------
    | task: latencytest/4999, uid:0 nice:0 policy:1 rt_prio:99
    -----------------
 => started at: del_timer+0x4c/0x150
 => ended at:   del_timer+0xf2/0x150
=======>
00000001 0.000ms (+0.000ms): del_timer (del_singleshot_timer_sync)
00000001 0.700ms (+0.700ms): smp_apic_timer_interrupt (.text.lock.timer)
00010001 0.700ms (+0.000ms): profile_tick (smp_apic_timer_interrupt)
00010001 0.701ms (+0.000ms): profile_hook (profile_tick)
00010002 0.701ms (+0.000ms): notifier_call_chain (profile_hook)
00010001 0.702ms (+0.000ms): profile_hit (smp_apic_timer_interrupt)
00010001 0.702ms (+0.000ms): update_process_times
(smp_apic_timer_interrupt)
00010001 0.702ms (+0.000ms): update_one_process (update_process_times)
00010001 0.703ms (+0.000ms): run_local_timers (update_process_times)
00010001 0.703ms (+0.000ms): raise_softirq (update_process_times)
00010001 0.703ms (+0.000ms): scheduler_tick (update_process_times)
00010001 0.703ms (+0.000ms): sched_clock (scheduler_tick)
00010001 0.704ms (+0.001ms): rebalance_tick (scheduler_tick)
00000002 0.704ms (+0.000ms): do_softirq (smp_apic_timer_interrupt)
00000002 0.705ms (+0.000ms): __do_softirq (do_softirq)
00000002 0.705ms (+0.000ms): wake_up_process (do_softirq)
00000002 0.705ms (+0.000ms): try_to_wake_up (wake_up_process)
00000002 0.705ms (+0.000ms): task_rq_lock (try_to_wake_up)
00000003 0.706ms (+0.000ms): activate_task (try_to_wake_up)
00000003 0.706ms (+0.000ms): sched_clock (activate_task)
00000003 0.706ms (+0.000ms): recalc_task_prio (activate_task)
00000003 0.706ms (+0.000ms): effective_prio (recalc_task_prio)
00000003 0.706ms (+0.000ms): enqueue_task (activate_task)
00010001 1.301ms (+0.594ms): do_IRQ (.text.lock.timer)
00010002 1.302ms (+0.000ms): mask_and_ack_level_ioapic_irq (do_IRQ)
00010002 1.302ms (+0.000ms): mask_IO_APIC_irq
(mask_and_ack_level_ioapic_irq)
00010003 1.302ms (+0.000ms): __mask_IO_APIC_irq (mask_IO_APIC_irq)
00010003 1.302ms (+0.000ms): __modify_IO_APIC_irq (__mask_IO_APIC_irq)
00010002 1.315ms (+0.013ms): generic_redirect_hardirq (do_IRQ)
00010001 1.315ms (+0.000ms): generic_handle_IRQ_event (do_IRQ)
00010001 1.316ms (+0.000ms): usb_hcd_irq (generic_handle_IRQ_event)
00010001 1.316ms (+0.000ms): uhci_irq (usb_hcd_irq)
00010001 1.316ms (+0.000ms): usb_hcd_irq (generic_handle_IRQ_event)
00010001 1.317ms (+0.000ms): uhci_irq (usb_hcd_irq)
00010001 1.317ms (+0.000ms): snd_audiopci_interrupt
(generic_handle_IRQ_event)
00010001 1.319ms (+0.001ms): snd_pcm_period_elapsed
(snd_audiopci_interrupt)
00010003 1.319ms (+0.000ms): snd_ensoniq_playback1_pointer
(snd_pcm_period_elapsed)
00010003 1.321ms (+0.001ms): snd_pcm_playback_silence
(snd_pcm_period_elapsed)
00010003 1.321ms (+0.000ms): __wake_up (snd_pcm_period_elapsed)
00010004 1.321ms (+0.000ms): __wake_up_common (__wake_up)
00010004 1.322ms (+0.000ms): default_wake_function (__wake_up_common)
00010004 1.322ms (+0.000ms): try_to_wake_up (__wake_up_common)
00010004 1.322ms (+0.000ms): task_rq_lock (try_to_wake_up)
00010001 1.323ms (+0.000ms): kill_fasync (snd_pcm_period_elapsed)
00010002 1.323ms (+0.000ms): generic_note_interrupt (do_IRQ)
00010002 1.323ms (+0.000ms): end_level_ioapic_irq (do_IRQ)
00010002 1.323ms (+0.000ms): unmask_IO_APIC_irq (do_IRQ)
00010003 1.324ms (+0.000ms): __unmask_IO_APIC_irq (unmask_IO_APIC_irq)
00010003 1.324ms (+0.000ms): __modify_IO_APIC_irq (__unmask_IO_APIC_irq)
00000002 1.338ms (+0.013ms): do_softirq (do_IRQ)
00000002 1.338ms (+0.000ms): __do_softirq (do_softirq)
00000001 1.699ms (+0.361ms): smp_apic_timer_interrupt (.text.lock.timer)
00010001 1.700ms (+0.000ms): profile_tick (smp_apic_timer_interrupt)
00010001 1.700ms (+0.000ms): profile_hook (profile_tick)
00010002 1.700ms (+0.000ms): notifier_call_chain (profile_hook)
00010001 1.700ms (+0.000ms): profile_hit (smp_apic_timer_interrupt)
00010001 1.700ms (+0.000ms): update_process_times
(smp_apic_timer_interrupt)
00010001 1.701ms (+0.000ms): update_one_process (update_process_times)
00010001 1.701ms (+0.000ms): run_local_timers (update_process_times)
00010001 1.701ms (+0.000ms): raise_softirq (update_process_times)
00010001 1.701ms (+0.000ms): scheduler_tick (update_process_times)
00010001 1.701ms (+0.000ms): sched_clock (scheduler_tick)
00010001 1.702ms (+0.000ms): rebalance_tick (scheduler_tick)
00000002 1.702ms (+0.000ms): do_softirq (smp_apic_timer_interrupt)
00000002 1.702ms (+0.000ms): __do_softirq (do_softirq)
00000001 2.699ms (+0.996ms): smp_apic_timer_interrupt (.text.lock.timer)
00010001 2.699ms (+0.000ms): profile_tick (smp_apic_timer_interrupt)
00010001 2.699ms (+0.000ms): profile_hook (profile_tick)
00010002 2.699ms (+0.000ms): notifier_call_chain (profile_hook)
00010001 2.699ms (+0.000ms): profile_hit (smp_apic_timer_interrupt)
00010001 2.700ms (+0.000ms): update_process_times
(smp_apic_timer_interrupt)
00010001 2.700ms (+0.000ms): update_one_process (update_process_times)
00010001 2.700ms (+0.000ms): run_local_timers (update_process_times)
00010001 2.700ms (+0.000ms): raise_softirq (update_process_times)
00010001 2.700ms (+0.000ms): scheduler_tick (update_process_times)
00010001 2.701ms (+0.000ms): sched_clock (scheduler_tick)
00010001 2.701ms (+0.000ms): rebalance_tick (scheduler_tick)
00000002 2.701ms (+0.000ms): do_softirq (smp_apic_timer_interrupt)
00000002 2.701ms (+0.000ms): __do_softirq (do_softirq)
00010001 2.759ms (+0.057ms): do_IRQ (.text.lock.timer)
00010002 2.759ms (+0.000ms): mask_and_ack_level_ioapic_irq (do_IRQ)
00010002 2.759ms (+0.000ms): mask_IO_APIC_irq
(mask_and_ack_level_ioapic_irq)
00010003 2.760ms (+0.000ms): __mask_IO_APIC_irq (mask_IO_APIC_irq)
00010003 2.760ms (+0.000ms): __modify_IO_APIC_irq (__mask_IO_APIC_irq)
00010002 2.773ms (+0.013ms): generic_redirect_hardirq (do_IRQ)
00010001 2.773ms (+0.000ms): generic_handle_IRQ_event (do_IRQ)
00010001 2.773ms (+0.000ms): usb_hcd_irq (generic_handle_IRQ_event)
00010001 2.774ms (+0.000ms): uhci_irq (usb_hcd_irq)
00010001 2.774ms (+0.000ms): usb_hcd_irq (generic_handle_IRQ_event)
00010001 2.774ms (+0.000ms): uhci_irq (usb_hcd_irq)
00010001 2.775ms (+0.000ms): snd_audiopci_interrupt
(generic_handle_IRQ_event)
00010001 2.776ms (+0.001ms): snd_pcm_period_elapsed
(snd_audiopci_interrupt)
00010003 2.777ms (+0.000ms): snd_ensoniq_playback1_pointer
(snd_pcm_period_elapsed)
00010003 2.778ms (+0.001ms): snd_pcm_playback_silence
(snd_pcm_period_elapsed)
00010003 2.779ms (+0.001ms): snd_pcm_format_set_silence
(snd_pcm_playback_silence)
00010003 2.780ms (+0.000ms): snd_pcm_format_set_silence
(snd_pcm_playback_silence)
00010003 2.781ms (+0.000ms): xrun (snd_pcm_period_elapsed)
00010003 2.781ms (+0.000ms): snd_pcm_stop (xrun)
00010003 2.782ms (+0.000ms): snd_pcm_action (snd_pcm_stop)
00010003 2.783ms (+0.000ms): snd_pcm_action_single (snd_pcm_action)
00010003 2.783ms (+0.000ms): snd_pcm_pre_stop (snd_pcm_action_single)
00010003 2.783ms (+0.000ms): snd_pcm_do_stop (snd_pcm_action_single)
00010003 2.784ms (+0.000ms): snd_ensoniq_trigger (snd_pcm_do_stop)
00010003 2.786ms (+0.001ms): snd_pcm_post_stop (snd_pcm_action_single)
00010003 2.786ms (+0.000ms): snd_pcm_trigger_tstamp (snd_pcm_post_stop)
00010003 2.787ms (+0.000ms): do_gettimeofday (snd_pcm_trigger_tstamp)
00010003 2.787ms (+0.000ms): get_offset_tsc (do_gettimeofday)
00010003 2.788ms (+0.000ms): snd_timer_notify (snd_pcm_post_stop)
00010003 2.789ms (+0.001ms): snd_pcm_tick_set (snd_pcm_post_stop)
00010003 2.790ms (+0.000ms): snd_pcm_system_tick_set (snd_pcm_post_stop)
00010003 2.790ms (+0.000ms): del_timer (snd_pcm_post_stop)
00010003 2.791ms (+0.000ms): __wake_up (snd_pcm_action_single)
00010004 2.791ms (+0.000ms): __wake_up_common (__wake_up)
00010004 2.791ms (+0.000ms): default_wake_function (__wake_up_common)
00010004 2.791ms (+0.000ms): try_to_wake_up (__wake_up_common)
00010004 2.791ms (+0.000ms): task_rq_lock (try_to_wake_up)
00010001 2.792ms (+0.000ms): kill_fasync (snd_pcm_period_elapsed)
00010002 2.793ms (+0.000ms): generic_note_interrupt (do_IRQ)
00010002 2.793ms (+0.000ms): end_level_ioapic_irq (do_IRQ)
00010002 2.793ms (+0.000ms): unmask_IO_APIC_irq (do_IRQ)
00010003 2.793ms (+0.000ms): __unmask_IO_APIC_irq (unmask_IO_APIC_irq)
00010003 2.793ms (+0.000ms): __modify_IO_APIC_irq (__unmask_IO_APIC_irq)
00000002 2.807ms (+0.013ms): do_softirq (do_IRQ)
00000002 2.807ms (+0.000ms): __do_softirq (do_softirq)
00000001 3.698ms (+0.890ms): smp_apic_timer_interrupt (.text.lock.timer)
00010001 3.698ms (+0.000ms): profile_tick (smp_apic_timer_interrupt)
00010001 3.698ms (+0.000ms): profile_hook (profile_tick)
00010002 3.699ms (+0.000ms): notifier_call_chain (profile_hook)
00010001 3.699ms (+0.000ms): profile_hit (smp_apic_timer_interrupt)
00010001 3.699ms (+0.000ms): update_process_times
(smp_apic_timer_interrupt)
00010001 3.699ms (+0.000ms): update_one_process (update_process_times)
00010001 3.700ms (+0.000ms): run_local_timers (update_process_times)
00010001 3.700ms (+0.000ms): raise_softirq (update_process_times)
00010001 3.700ms (+0.000ms): scheduler_tick (update_process_times)
00010001 3.700ms (+0.000ms): sched_clock (scheduler_tick)
00010001 3.701ms (+0.000ms): rebalance_tick (scheduler_tick)
00000002 3.701ms (+0.000ms): do_softirq (smp_apic_timer_interrupt)
00000002 3.701ms (+0.000ms): __do_softirq (do_softirq)
00000001 4.697ms (+0.996ms): smp_apic_timer_interrupt (.text.lock.timer)
00010001 4.698ms (+0.000ms): profile_tick (smp_apic_timer_interrupt)
00010001 4.698ms (+0.000ms): profile_hook (profile_tick)
00010002 4.698ms (+0.000ms): notifier_call_chain (profile_hook)
00010001 4.698ms (+0.000ms): profile_hit (smp_apic_timer_interrupt)
00010001 4.698ms (+0.000ms): update_process_times
(smp_apic_timer_interrupt)
00010001 4.699ms (+0.000ms): update_one_process (update_process_times)
00010001 4.699ms (+0.000ms): run_local_timers (update_process_times)
00010001 4.699ms (+0.000ms): raise_softirq (update_process_times)
00010001 4.699ms (+0.000ms): scheduler_tick (update_process_times)
00010001 4.699ms (+0.000ms): sched_clock (scheduler_tick)
00010001 4.700ms (+0.000ms): rebalance_tick (scheduler_tick)
00000002 4.700ms (+0.000ms): do_softirq (smp_apic_timer_interrupt)
00000002 4.700ms (+0.000ms): __do_softirq (do_softirq)
00000001 5.697ms (+0.996ms): smp_apic_timer_interrupt (.text.lock.timer)
00010001 5.697ms (+0.000ms): profile_tick (smp_apic_timer_interrupt)
00010001 5.697ms (+0.000ms): profile_hook (profile_tick)
00010002 5.697ms (+0.000ms): notifier_call_chain (profile_hook)
00010001 5.698ms (+0.000ms): profile_hit (smp_apic_timer_interrupt)
00010001 5.698ms (+0.000ms): update_process_times
(smp_apic_timer_interrupt)
00010001 5.698ms (+0.000ms): update_one_process (update_process_times)
00010001 5.698ms (+0.000ms): run_local_timers (update_process_times)
00010001 5.698ms (+0.000ms): raise_softirq (update_process_times)
00010001 5.698ms (+0.000ms): scheduler_tick (update_process_times)
(this cycle repeats several times, exited as follows)
00000001 21.686ms (+0.996ms): smp_apic_timer_interrupt (.text.lock.timer)
00010001 21.686ms (+0.000ms): profile_tick (smp_apic_timer_interrupt)
00010001 21.686ms (+0.000ms): profile_hook (profile_tick)
00010002 21.686ms (+0.000ms): notifier_call_chain (profile_hook)
00010001 21.687ms (+0.000ms): profile_hit (smp_apic_timer_interrupt)
00010001 21.687ms (+0.000ms): update_process_times (smp_apic_timer_interrupt)
00010001 21.687ms (+0.000ms): update_one_process (update_process_times)
00010001 21.687ms (+0.000ms): run_local_timers (update_process_times)
00010001 21.687ms (+0.000ms): raise_softirq (update_process_times)
00010001 21.687ms (+0.000ms): scheduler_tick (update_process_times)
00010001 21.688ms (+0.000ms): sched_clock (scheduler_tick)
00010001 21.688ms (+0.000ms): rebalance_tick (scheduler_tick)
00000002 21.688ms (+0.000ms): do_softirq (smp_apic_timer_interrupt)
00000002 21.688ms (+0.000ms): __do_softirq (do_softirq)
00000001 22.685ms (+0.996ms): smp_apic_timer_interrupt (.text.lock.timer)
00010001 22.685ms (+0.000ms): profile_tick (smp_apic_timer_interrupt)
00010001 22.685ms (+0.000ms): profile_hook (profile_tick)
00010002 22.686ms (+0.000ms): notifier_call_chain (profile_hook)
00010001 22.686ms (+0.000ms): profile_hit (smp_apic_timer_interrupt)
00010001 22.686ms (+0.000ms): update_process_times (smp_apic_timer_interrupt)
00010001 22.686ms (+0.000ms): update_one_process (update_process_times)
00010001 22.686ms (+0.000ms): run_local_timers (update_process_times)
00010001 22.686ms (+0.000ms): raise_softirq (update_process_times)
00010001 22.687ms (+0.000ms): scheduler_tick (update_process_times)
00010001 22.687ms (+0.000ms): sched_clock (scheduler_tick)
00010001 22.687ms (+0.000ms): rebalance_tick (scheduler_tick)
00000002 22.687ms (+0.000ms): do_softirq (smp_apic_timer_interrupt)
00000002 22.688ms (+0.000ms): __do_softirq (do_softirq)
00000001 23.120ms (+0.432ms): sub_preempt_count (del_timer)
00000001 23.121ms (+0.000ms): update_max_trace (check_preempt_timing)

TSC mcount
==========

From your patch, I added several mcount() calls to mark_offset_tsc.
To summarize the trace results, here is a table of the delta times at
each location. Each row is one of the dozen mcount locations per
latency trace; the columns are the trace file names (lt.xx) in the
tar file. Times are in usec.

     01  03  04  13  16  26  27  31  32  35  37  39
01  000 000 000 069 000 000 000 000 000 081 136 000
02  032 000 000 000 000 000 000 000 000 000 000 000
03  000 000 000 000 000 000 000 000 000 000 000 000
04  001 000 000 070 231 139 138 093 252 062 000 067
05  000 000 000 000 000 000 000 000 000 000 000 000
06  042 003 003 004 003 004 004 053 145 076 003 004
07  004 004 004 004 008 004 005 006 010 011 004 005
08  001 001 002 002 008 002 002 002 001 002 001 002
09  000 000 000 000 000 000 000 000 000 000 000 000
10  000 000 000 000 000 000 000 000 000 000 000 000
11  000 000 000 000 000 000 000 000 000 000 000 000
12  000 000 000 061 000 130 129 129 000 000 000 060

NOTE: Not all traces with mark_offset_tsc are listed. After the first
few items, I only included those with significant (>100 usec) delays.

Network Poll
============

I still have a few traces w/ long durations during network poll. Since
you merged that other change into Q6 (and I assume Q7) I will try that
later today.

I will also try to make a modified version of latencytest that does
the extended trace of the write system call (occasionally) to see what
is going on there as well.

For reference, I will send the tar file with all the traces in another
message (not to linux-kernel).

 --Mark


^ permalink raw reply	[flat|nested] 57+ messages in thread
* Re: [patch] voluntary-preempt-2.6.9-rc1-bk4-Q5
@ 2004-09-01 14:37 Mark_H_Johnson
  2004-09-01 19:31 ` Takashi Iwai
  0 siblings, 1 reply; 57+ messages in thread
From: Mark_H_Johnson @ 2004-09-01 14:37 UTC (permalink / raw)
  To: Takashi Iwai
  Cc: Ingo Molnar, Lee Revell, Mark_H_Johnson, K.R. Foley, linux-kernel,
	Felipe Alfaro Solana, Daniel Schmitt, alsa-devel

>Ok, the second try.

This patch appears to work well. No snd_es1371 traces in over 25 minutes
of testing (I had a couple hundred yesterday in similar tests). The sound
was OK as well.

I am seeing some additional CPU overhead during the disk I/O tests with
today's kernel but I don't think this is due to this patch.

Thanks.
  --Mark


[parent not found: <2yiVZ-IZ-15@gated-at.bofh.it>]
* Re: [patch] voluntary-preempt-2.6.9-rc1-bk4-Q5
@ 2004-08-31 20:10 Mark_H_Johnson
  2004-08-31 20:37 ` Ingo Molnar
  0 siblings, 1 reply; 57+ messages in thread
From: Mark_H_Johnson @ 2004-08-31 20:10 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: K.R. Foley, linux-kernel, Felipe Alfaro Solana, Daniel Schmitt,
	Lee Revell

>since the latency tracer does not trigger, we need a modified tracer to
>find out what's happening during such long delays. I've attached the
>'user-latency-tracer' patch ontop of -Q5, which is a modification of the
>latency tracer.
Grr. I should have checked before I built with this patch. With this in
I now get the
  kernel: Could not allocate 4 bytes percpu data
messages again. The percpu data area needs to be enlarged, e.g.
  #define PERCPU_ENOUGH_ROOM 196608
or something similar (that should leave about 50K free for modules).
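For reference, a sketch of the kind of change meant above. In mainline
2.6 of that era the define lives in include/linux/percpu.h; the path and
the default value (32768 shown below) are my assumptions and may differ
in the -Q5 tree. Demonstrated on a stand-in copy so it can run anywhere:

```shell
# Create a stand-in for the real header (default value is a guess):
printf '#define PERCPU_ENOUGH_ROOM 32768\n' > percpu.h
# Bump the percpu area to 192K:
sed -i 's/^#define PERCPU_ENOUGH_ROOM.*/#define PERCPU_ENOUGH_ROOM 196608/' percpu.h
cat percpu.h
```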

I will rebuild with this change plus the latest of the others.

--Mark H Johnson
  <mailto:Mark_H_Johnson@raytheon.com>


* Re: [patch] voluntary-preempt-2.6.9-rc1-bk4-Q5
@ 2004-08-31 12:46 Mark_H_Johnson
  0 siblings, 0 replies; 57+ messages in thread
From: Mark_H_Johnson @ 2004-08-31 12:46 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: K.R. Foley, linux-kernel, Felipe Alfaro Solana, Daniel Schmitt,
	Lee Revell

>in theory the patch is more or less equivalent to setting
>netdev_max_backlog to a value of 1 - could you try that setting too?
>(with the patch unapplied.)

Ugh. That setting is VERY BAD. Just a quick test without
doing anything complex...
  # echo 1 > /proc/sys/net/core/netdev_max_backlog
  # ping dws7
  PING dws7 (192.52.215.17) 56(84) bytes of data.
[so the DNS lookup worked]
  From dws77... (192.52.215.87) icmp_seq=0 Destination Host Unreachable
  From dws77... (192.52.215.87) icmp_seq=1 Destination Host Unreachable
  From dws77... (192.52.215.87) icmp_seq=2 Destination Host Unreachable
  ...
[NOTE - these are plugged into the same 10/100 Ethernet switch]
  # echo 8 > /proc/sys/net/core/netdev_max_backlog
  # ping dws7
  PING dws7 (192.52.215.17) 56(84) bytes of data.
[so the DNS lookup worked]
  From dws77... (192.52.215.87) icmp_seq=0 ttl=64 time=2210 ms
  From dws77... (192.52.215.87) icmp_seq=1 ttl=64 time=1210 ms
  From dws77... (192.52.215.87) icmp_seq=2 ttl=64 time=210 ms
  From dws77... (192.52.215.87) icmp_seq=2 ttl=64 time=0.355 ms
  From dws77... (192.52.215.87) icmp_seq=2 ttl=64 time=0.397 ms
  ...
I tried again with 2, 3, and 4. Two appears to be "way too small" with
a ping of 1000 ms and nominal values of 0.800 ms. Three does not appear
to be good either with nominal values of 0.500 ms. Four has similar
results to eight (8).
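The sweep above can be scripted; here is a rough sketch that pulls the
worst round-trip time out of ping output. A sample of output (modeled on
the backlog=8 run above) is inlined so the parsing can be checked
offline; in a real sweep you would write each candidate value to
netdev_max_backlog and capture a real ping run instead:

```shell
# In a real sweep, for each candidate value n:
#   echo $n > /proc/sys/net/core/netdev_max_backlog
#   ping -c 20 dws7 > ping.log
# Sample output inlined here instead:
cat > ping.log <<'EOF'
64 bytes from 192.52.215.17: icmp_seq=0 ttl=64 time=2210 ms
64 bytes from 192.52.215.17: icmp_seq=1 ttl=64 time=1210 ms
64 bytes from 192.52.215.17: icmp_seq=2 ttl=64 time=0.355 ms
EOF
# Worst-case round trip, in ms:
grep -o 'time=[0-9.]*' ping.log | cut -d= -f2 | sort -n | tail -1
```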


--Mark H Johnson
  <mailto:Mark_H_Johnson@raytheon.com>


* Re: [patch] voluntary-preempt-2.6.9-rc1-bk4-Q5
@ 2004-08-30 22:04 Mark_H_Johnson
  2004-08-31  6:31 ` Ingo Molnar
  2004-09-01  7:30 ` Ingo Molnar
  0 siblings, 2 replies; 57+ messages in thread
From: Mark_H_Johnson @ 2004-08-30 22:04 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: K.R. Foley, linux-kernel, Felipe Alfaro Solana, Daniel Schmitt,
	Lee Revell

>regarding this particular latency, could you try the attached patch
>ontop of -Q5? It turns the ->poll() loop into separate, individually
>preemptable iterations instead of one batch of processing. In theory
>this should result in latency being lower regardless of the
>netdev_max_backlog value.

First time - stopped during init script - when trying to start a network
service (automount).
Second time - stopped during init script - did not get the "OK" for
bringing up interface eth0.
Alt-SysRq-P still does not show any data [not sure why].
Alt-SysRq-T captured data - for example, shows dhclient in sys_select ->
__pollwait -> do_select -> process_timeout -> add_wait_queue ->
schedule_timeout -> __mod_timer. Very odd, almost every other task (except
initlog) is stuck in one of:
 - generic_handle_IRQ_event
 - sub_preempt_count
 - do_irqd
 - do_hardirq
and all the tasks I can see have "S" status.
Third time - ditto - back to stopping at automount start up.
Alt-SysRq-T captured data; again, every task I could look at was in the "S" state.
In all three cases, another system attempting to "ping" the system under
test failed to get any responses.

In all cases, Ctrl-Alt-Del was good enough to get a clean reboot.

This looks like a bad patch; will go back to the last good kernel for
further testing.

--Mark H Johnson
  <mailto:Mark_H_Johnson@raytheon.com>


* Re: [patch] voluntary-preempt-2.6.9-rc1-bk4-Q5
@ 2004-08-30 19:13 Mark_H_Johnson
  2004-08-30 19:21 ` Ingo Molnar
                   ` (2 more replies)
  0 siblings, 3 replies; 57+ messages in thread
From: Mark_H_Johnson @ 2004-08-30 19:13 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: K.R. Foley, linux-kernel, Felipe Alfaro Solana, Daniel Schmitt,
	Lee Revell

>i've uploaded -Q5 to:
> [snip the rest...]

Thanks.

This appears to be the first 2.6.x kernel I've run that has results
comparable to 2.4.x kernels with low latency patches and kernel preemption.
The few remaining symptoms I see include:

 - a few long (> 1 msec) delays in the real time CPU loop (no system calls)
 - varying time to complete the write system call (for audio) - much
different than 2.4
 - a couple latency traces (> 700 usec) in the network driver

For reference, these tests were performed on the following SMP system:
  Dual 866 Mhz Pentium III
  512 Mbyte memory
  IDE system disk (DMA enabled)
The basic test is Benno's latency test
(http://www.gardena.net/benno/linux/audio) with some slight modifications
to the tests to keep the second CPU busy (non real time CPU burner) and to
add network I/O tests. The 2.4 tests were run with 2.4.20, the 2.6 tests
were run with 2.6.9-rc1-Q5. On 2.6, voluntary_preemption,
kernel_preemption, hardirq_preemption, and softirq_preemption are all 1. I
also set
  /sys/block/hda/queue/max_sectors_kb = 32
  /sys/block/hda/queue/read_ahead_kb = 32
  /proc/sys/net/core/netdev_max_backlog = 8
and the audio driver was set to be non-threaded.
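For reproducibility, that tuning amounts to the following (run as root;
the hda paths and the values are specific to this IDE test box):

```shell
# Limit disk request size and readahead, and cap the network backlog,
# as described above:
echo 32 > /sys/block/hda/queue/max_sectors_kb
echo 32 > /sys/block/hda/queue/read_ahead_kb
echo 8  > /proc/sys/net/core/netdev_max_backlog
```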

BASIC RESULTS
=============
Comparison of results between 2.6.x and 2.4.x; values in milliseconds.
The nominal value for the write operation is 1.45 msec; the CPU loop is
1.16 msec.

          Max CPU Delta     Max Write Delta
Test     2.4.x     2.6.x    2.4.x     2.6.x
X11       0.10      0.16     0.05      0.65
/proc     0.07      0.17     0.05      0.65
net out   0.15      0.19     0.05      0.75
net in    0.17      0.23     0.05      0.95
dsk wrt   0.49      0.18     0.25      1.05
dsk copy  2.48      0.68     2.25      1.25
disk rd   3.03      1.61     2.75      1.35

LONG DELAYS
===========

Note I still see over 110% worst case overhead on a max priority real time
CPU task (no system calls) when doing heavy disk I/O on 2.6. It is much
better than 2.4, but still disturbing. On a dual CPU system like mine, I
would hope the real time task would tend to stay on one CPU while the
other system activity stayed on the other CPU. However, the results do
not indicate that behavior.
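One plausible way to put a number on that overhead, reading from the
table above (my interpretation, not necessarily how the 110% figure was
derived): with a nominal CPU loop of 1.16 msec, the worst 2.6.x delta
(disk read, 1.61 msec) works out to roughly 139% added time, consistent
with "over 110%":

```shell
# worst-case CPU delta / nominal loop time, as a percentage
awk 'BEGIN { printf "%.0f%%\n", 100 * 1.61 / 1.16 }'
```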

VARYING SYSTEM CALL TIMES
=========================

In 2.4, it appears that the duration of the write system call is basically
fixed and dependent on the duration of the audio fragment. In 2.6, this
behavior is now different. If I look at the chart in detail, it appears the
system is queueing up several write operations during the first few seconds
of testing. You can see this by consistently low elapsed times for the
write system call. Then the elapsed time for the write bounces up / down in
a sawtooth pattern over a 1 msec range. Could someone explain the cause of
this new behavior and whether there is a setting to restore the old one? I
am concerned that this queueing adds latency to audio operations (when
trying to synchronize audio with other real time behavior).

LONG NETWORK LATENCIES
======================

In about 25 minutes of heavy testing, I had two latency traces with
/proc/sys/kernel/preempt_max_latency set to 700. They had the same start /
end location with the long delay as follows:
  730 us, entries: 361
  ...
  started at rtl8139_poll+0x3c/0x160
  ended at   rtl8139_poll+0x100/0x160
  00000001 0.000ms (+0.000ms): rtl8139_poll (net_rx_action)
  00000001 0.140ms (+0.140ms): rtl8139_rx (rtl8139_poll)
  00000001 0.556ms (+0.416ms): alloc_skb (rtl8139_rx)
  ... remaining items all < +0.005ms ...

  731 us, entries: 360
  ...
  started at rtl8139_poll+0x3c/0x160
  ended at   rtl8139_poll+0x100/0x160
  00000001 0.000ms (+0.000ms): rtl8139_poll (net_rx_action)
  00000001 0.000ms (+0.000ms): rtl8139_rx (rtl8139_poll)
  00000001 0.002ms (+0.001ms): alloc_skb (rtl8139_rx)
  00000001 0.141ms (+0.139ms): kmem_cache_alloc (alloc_skb)
  00000001 0.211ms (+0.070ms): __kmalloc (alloc_skb)
  00000001 0.496ms (+0.284ms): eth_type_trans (rtl8139_rx)
  00000001 0.565ms (+0.068ms): netif_receive_skb (rtl8139_rx)
  ... remaining items all < +0.005ms ...

Still much better than my previous results (before setting
netdev_max_backlog).

I will be running some additional tests
 - reducing preempt_max_latency
 - running with softirq_preemption and hardirq_preemption=0
to see if these uncover any further problems.
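For reference, those follow-up runs would be set up roughly as below.
The preempt_max_latency path appears earlier in this message; the
softirq/hardirq entries are named after the voluntary-preempt knobs and
their exact /proc paths are my assumption:

```shell
# Trigger traces on shorter latencies, and disable irq-thread preemption:
echo 500 > /proc/sys/kernel/preempt_max_latency
echo 0   > /proc/sys/kernel/softirq_preemption
echo 0   > /proc/sys/kernel/hardirq_preemption
```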

Thanks again for the good work.
--Mark H Johnson
  <mailto:Mark_H_Johnson@raytheon.com>


* Re: [patch] voluntary-preempt-2.6.9-rc1-bk4-Q0
@ 2004-08-28 17:52 Lee Revell
  2004-08-28 19:44 ` [patch] voluntary-preempt-2.6.9-rc1-bk4-Q2 Ingo Molnar
  0 siblings, 1 reply; 57+ messages in thread
From: Lee Revell @ 2004-08-28 17:52 UTC (permalink / raw)
  To: K.R. Foley
  Cc: Felipe Alfaro Solana, Ingo Molnar, linux-kernel, Mark_H_Johnson

On Sat, 2004-08-28 at 12:50, K.R. Foley wrote:
> Felipe Alfaro Solana wrote:
> > On Saturday 28 August 2004 14:03, Ingo Molnar wrote:
> > 
> > 
> >>Similarly, there are 4 independent options for the .config:
> >>CONFIG_PREEMPT, CONFIG_PREEMPT_VOLUNTARY, CONFIG_PREEMPT_SOFTIRQS and
> >>CONFIG_PREEMPT_HARDIRQS. (In theory all of these options should compile
> >>independently, but i've only tested all-enabled so far.)
> > 
> > 
> > I must be missing something, but after applying diff-bk-040828-2.6.8.1.bz2 and 
> > voluntary-preempt-2.6.9-rc1-bk4-Q1 on top of 2.6.8.1, I'm unable to find 
> > neither CONFIG_PREEMPT_VOLUNTARY, CONFIG_PREEMPT_SOFTIRQS, nor 
> > CONFIG_PREEMPT_HARDIRQS.
> > 
> > Any ideas are welcome.
> 
> Looks like all of these config options are missing from Q1 also. I was 
> just looking myself.
> 

Same results here, none of those config options seem to exist.  I also
get this warning a lot:

include/linux/rwsem.h: In function `down_read':
include/linux/rwsem.h:43: warning: implicit declaration of function `cond_resched'

Lee



end of thread, other threads:[~2004-09-02 22:24 UTC | newest]

Thread overview: 57+ messages
2004-08-31 15:17 [patch] voluntary-preempt-2.6.9-rc1-bk4-Q5 Mark_H_Johnson
2004-08-31 17:20 ` Lee Revell
2004-08-31 18:09   ` Lee Revell
2004-08-31 18:53     ` Takashi Iwai
2004-08-31 18:56       ` Ingo Molnar
2004-09-02 16:59         ` Jaroslav Kysela
2004-09-02 17:50           ` Lee Revell
2004-08-31 18:19   ` Takashi Iwai
2004-08-31 18:48     ` Ingo Molnar
2004-08-31 19:02       ` Takashi Iwai
2004-08-31 18:50   ` Ingo Molnar
     [not found] <OFD220F58F.002C5901-ON86256F02.005C2FB1-86256F02.005C2FD5@raytheon.com>
2004-09-01 17:09 ` Ingo Molnar
  -- strict thread matches above, loose matches on Subject: below --
2004-09-01 15:21 Mark_H_Johnson
2004-09-02 22:24 ` Ingo Molnar
2004-09-01 14:37 Mark_H_Johnson
2004-09-01 19:31 ` Takashi Iwai
     [not found] <2yiVZ-IZ-15@gated-at.bofh.it>
     [not found] ` <2ylhi-2hg-3@gated-at.bofh.it>
     [not found]   ` <2ynLU-42D-7@gated-at.bofh.it>
     [not found]     ` <2yqJJ-5ZL-1@gated-at.bofh.it>
     [not found]       ` <2yQkS-6Zh-13@gated-at.bofh.it>
     [not found]         ` <2zaCV-4FE-3@gated-at.bofh.it>
     [not found]           ` <2zaWk-4Yj-9@gated-at.bofh.it>
     [not found]             ` <2zmE8-4Zz-11@gated-at.bofh.it>
     [not found]               ` <2zngP-5wD-9@gated-at.bofh.it>
     [not found]                 ` <2zngP-5wD-7@gated-at.bofh.it>
     [not found]                   ` <2znJS-5Pm-25@gated-at.bofh.it>
2004-08-31 23:06                     ` Andi Kleen
     [not found]                     ` <2znJS-5Pm-27@gated-at.bofh.it>
     [not found]                       ` <2znJS-5Pm-29@gated-at.bofh.it>
     [not found]                         ` <2znJS-5Pm-31@gated-at.bofh.it>
     [not found]                           ` <2znJS-5Pm-33@gated-at.bofh.it>
2004-08-31 23:10                             ` Andi Kleen
2004-09-01  7:05                               ` Ingo Molnar
2004-08-31 20:10 Mark_H_Johnson
2004-08-31 20:37 ` Ingo Molnar
2004-08-31 12:46 Mark_H_Johnson
2004-08-30 22:04 Mark_H_Johnson
2004-08-31  6:31 ` Ingo Molnar
2004-09-01  7:30 ` Ingo Molnar
2004-08-30 19:13 Mark_H_Johnson
2004-08-30 19:21 ` Ingo Molnar
2004-08-31  8:49 ` Ingo Molnar
2004-09-02  6:33 ` Ingo Molnar
2004-08-28 17:52 [patch] voluntary-preempt-2.6.9-rc1-bk4-Q0 Lee Revell
2004-08-28 19:44 ` [patch] voluntary-preempt-2.6.9-rc1-bk4-Q2 Ingo Molnar
2004-08-28 20:10   ` Daniel Schmitt
2004-08-28 20:31     ` [patch] voluntary-preempt-2.6.9-rc1-bk4-Q3 Ingo Molnar
2004-08-28 21:10       ` Lee Revell
2004-08-28 21:13         ` Ingo Molnar
2004-08-28 21:16           ` Lee Revell
2004-08-28 23:51             ` Lee Revell
2004-08-29  2:35               ` Lee Revell
2004-08-29  5:43                 ` [patch] voluntary-preempt-2.6.9-rc1-bk4-Q4 Ingo Molnar
2004-08-30  9:06                   ` [patch] voluntary-preempt-2.6.9-rc1-bk4-Q5 Ingo Molnar
2004-08-30 14:25                     ` Thomas Charbonnel
2004-08-30 18:00                       ` Ingo Molnar
2004-08-31 19:23                         ` Thomas Charbonnel
2004-08-31 19:30                           ` Ingo Molnar
2004-08-31 19:45                             ` Thomas Charbonnel
2004-08-31  6:40                     ` Lee Revell
2004-08-31  6:53                       ` Ingo Molnar
2004-08-31 23:03                         ` Lee Revell
2004-09-01 15:52                           ` Martin Josefsson
2004-09-01 21:15                             ` Lee Revell
2004-09-01 21:30                             ` Lee Revell
2004-08-31  7:06                       ` Ingo Molnar
2004-08-31 19:21                         ` Lee Revell
2004-08-31 19:37                           ` Ingo Molnar
2004-08-31 19:47                             ` Lee Revell
2004-08-31 19:51                               ` Ingo Molnar
2004-08-31 20:09                                 ` Ingo Molnar
2004-08-31 20:10                                   ` Lee Revell
2004-08-31 20:14                                     ` Ingo Molnar
2004-08-31 20:20                                       ` Ingo Molnar
2004-08-31 20:34                                         ` Lee Revell
2004-08-31 20:39                                           ` Ingo Molnar
2004-08-31 20:41                                             ` Lee Revell
2004-08-31 17:40                     ` Peter Zijlstra
2004-09-01  1:43                     ` Lee Revell
2004-09-01  2:30                     ` Lee Revell
2004-09-01  7:27                     ` Lee Revell
